[Dataset record: DOI 10.1103/physrevb.70.075303 · arXiv cond-mat/0406102 · PDF: https://export.arxiv.org/pdf/cond-mat/0406102v1.pdf]
Charge transport in a quantum electromechanical system
D Wahyu Utami
Center for Quantum Computer Technology
Department of Physics
School of Physical Sciences
The University of Queensland
4072QLDAustralia
Hsi-Sheng Goan
Center for Quantum Computer Technology
University of New South Wales
2052SydneyNSWAustralia
G J Milburn
Center for Quantum Computer Technology
Department of Physics
School of Physical Sciences
The University of Queensland
4072QLDAustralia
(Dated: 4 Jun 2004)
PACS numbers: 72.70.+m, 73.23.-b, 73.63.Kv, 62.25.+g, 61.46.+w
We describe a quantum electromechanical system (QEMS) comprising a single quantum dot harmonically bound between two electrodes and facilitating a tunneling current between them. An example of such a system is a fullerene molecule between two metal electrodes [Park et al. Nature, 407, 57 (2000)]. The description is based on a quantum master equation for the density operator of the electronic and vibrational degrees of freedom and thus incorporates the dynamics of both diagonal (population) and off-diagonal (coherence) terms. We derive coupled equations of motion for the electron occupation number of the dot and the vibrational degrees of freedom, including damping of the vibration and thermo-mechanical noise. This dynamical description is related to observable features of the system including the stationary current as a function of bias voltage.
I. INTRODUCTION
A quantum electromechanical system (QEMS) is a submicron electromechanical device fabricated through state-of-the-art nanofabrication 1 . Typically, such devices comprise a mechanical oscillator (a singly or doubly clamped cantilever) with surface wires patterned through shadow mask metal evaporation. The wires can be used to drive the mechanical system by carrying an AC current in an external static magnetic field. Surface wires can also be used as motion transducers through induced EMFs as the substrate oscillates in the external magnetic field. Alternatively the mechanical resonators can form an active part of a single electron transducer, such as one plate of a capacitively coupled single electron transistor 2,3 . These devices have been proposed as sensitive weak force probes with the potential to achieve single spin detection 4,5 . However they are of considerable interest in their own right as nano-fabricated mechanical resonators capable of exhibiting quantum noise features, such as squeezing and entanglement 6 .
In order to observe quantum noise in a QEMS device we must recognize that these devices are open quantum systems and explicitly describe the interactions between the device and a number of thermal reservoirs. This is the primary objective of this paper. There are several factors that determine whether a system operates in the quantum or classical regime. When the system consists only of an oscillator coupled to a bath, the oscillator quantum of energy should be greater than the thermo-mechanical excitation of the system; $\hbar\omega_o \geq k_B T$, where $\omega_o$ is the resonant frequency of the QEMS oscillator and $T$ is the temperature of the thermal mechanical bath in equilibrium with the oscillator. At a temperature of 10 millikelvin, this implies an oscillator frequency of the order of GHz or greater. Recently Huang et al. reported the operation of a GHz mechanical oscillator 7 . A very different approach to achieving a high mechanical frequency was the fullerene molecular system of Park et al. 8 , and it is this system which we take as the prototype for our theoretical description. Previous work on micro-mechanical degrees of freedom coupled to mesoscopic conductors 9,10,11,12 indicates that transport of carriers between source and drain can act as a damping reservoir, even in the absence of any other explicit mechanism for mechanical damping into a thermal reservoir. This is also predicted by the theory we present, for a particular bias condition. As is well known, dissipation can restore semiclassical behavior. Transport-induced damping can also achieve this result.
The model we describe, Fig. 1, consists of a single quantum dot coupled via tunnel junctions to two reservoirs, the source and the drain. We will assume that the Coulomb blockade permits only one quasi-bound single electron state on the dot which participates in the tunneling between the source and the drain. We will ignore spin, as the source and drain are not spin polarized, and there is no external magnetic field. A gate voltage controls the energy of this quasi-bound state with respect to the Fermi energy in the source. The quantum dot can oscillate around an equilibrium position midway between the source and the drain contacts due to weak restoring forces. When an electron tunnels onto the dot an electrostatic force is exerted on the dot, shifting its equilibrium position. In essence this is a quantum dot single electron transistor. In the experiment of Park et al. 8 , the quantum dot was a single fullerene molecule weakly bound by van der Waals interactions between the molecule and the electrodes. The dependence of the conductance on gate voltage was found to exhibit features attributed to transitions between the quantized vibrational levels of the mechanical oscillations of the molecule.
Boese and Schoeller 13 have recently given a theoretical description of the conductance features of this system. A more detailed analysis using similar techniques was given by Aji et al. 15 . Our objective is to extend these models to provide a full master equation description of the irreversible dynamics, including quantum correlation between the mechanical and electronic degrees of freedom. We wish to go beyond a rate equation description so as to be able to include coherent quantum effects which arise, for example, when the mechanical degree of freedom is subject to coherent driving.

FIG. 1: Schematic representation of tunneling between a source and a drain through a quantum dot. The dot is harmonically bound and vibrational motion can be excited as electrons tunnel through the system.
II. THE MODEL
We will assume that the center of mass of the dot is bound in a harmonic potential with resonant frequency $\omega_o$. This vibrational degree of freedom is then described by a displacement operator $\hat{x}$ which can be written in terms of annihilation and creation operators $a, a^\dagger$ as

$$\hat{x} = \sqrt{\frac{\hbar}{2 m \omega_o}}\,(a + a^\dagger). \quad (1)$$
The electronic single quasi-bound state on the dot is described by Fermi annihilation and creation operators $c, c^\dagger$, which satisfy the anticommutation relation $cc^\dagger + c^\dagger c = 1$. The Hamiltonian of the system can then be written as

$$\begin{aligned}
H ={}& \hbar\omega_I(V_g)\, c^\dagger c + U_c \hat{n}^2 & (2)\\
&+ \hbar\omega_o\, a^\dagger a & (3)\\
&+ \hbar\sum_k \omega_{Sk}\, a_k^\dagger a_k + \hbar\sum_k \omega_{Dk}\, b_k^\dagger b_k & (4)\\
&- \chi\,(a^\dagger + a)\,\hat{n} & (5)\\
&+ \sum_k T_{Sk}\,(a_k c^\dagger + c a_k^\dagger) + \sum_k T_{Dk}\,(b_k c^\dagger + c b_k^\dagger) & (6)\\
&+ \sum_p \hbar\omega_p\, d_p^\dagger d_p + g_p\,(a^\dagger d_p + a d_p^\dagger), & (7)
\end{aligned}$$
where $\hat{n} = c^\dagger c$ is the excess electron number operator on the dot. The first term of the Hamiltonian describes the free energy of the island. A particular gate voltage $V_g$, with a corresponding $\hbar\omega_I = 15$ meV for the island, is chosen for the calculation. $U_c$ is the Coulomb charging energy, the energy required to add an electron when one electron already occupies the island. We will assume this energy is large enough that no more than one electron occupies the island at any time. This is a Coulomb blockade effect. The charging energy of the fullerene molecule transistor has been observed by Park et al. to be larger than 270 meV, which is two orders of magnitude larger than the vibrational quantum of energy $\hbar\omega_o$. The free Hamiltonian for the oscillator is term (3). The Park et al. experiment gives the value $\hbar\omega_o = 5$ meV, corresponding to a THz oscillator. The electrostatic energy of electrons in the source and drain reservoirs is written as term (4). Term (5) is the coupling between the oscillator and the charge, while term (6) represents the source-island and drain-island tunnel couplings. The last term, (7), describes the coupling between the oscillator and the thermo-mechanical bath responsible for damping and thermal noise in the mechanical system, in the rotating wave approximation 16 . This is a source of damping additional to that which can arise from the transport process itself (see below). We include it in order to bound the motion under certain bias conditions. A possible physical origin of this source of dissipation will be discussed after the derivation of the master equation.
We have neglected the position dependence of the tunneling rate onto and off the island. This is equivalent to assuming that the distance, d, between the electrodes and the equilibrium position of the uncharged quantum dot, is much larger than the rms position fluctuations in the ground state of the oscillator. There are important situations where this approximation cannot be made, for example in the so called 'charge shuttle' systems 17 .
A primary difficulty in analyzing the quantum dynamics of this open system is the presence of different time scales associated with the oscillator, the tunneling events, and the coupling between the oscillator and electronic degrees of freedom due to the electrostatic potential, term (5). The standard approach would be to move to an interaction picture for the oscillator and the electronic degrees of freedom. However this would make the electrostatic coupling energy time-dependent, and rapidly oscillating. Were we to approximate this with the secular terms stemming from a Dyson expansion of the Hamiltonian, the resulting effective coupling between the oscillator and the electron occupation of the dot would simply shift the free energy of the dot, and no excitation of the mechanical motion could occur.
To avoid this problem we eliminate the coupling term between the oscillator and the charge by performing a canonical transformation with unitary representation $U = e^s$, where

$$s = -\lambda\,(a^\dagger - a)\,\hat{n} \quad (8)$$

with

$$\lambda = \frac{\chi}{\hbar\omega_o}. \quad (9)$$
This unitary transformation gives a displacement of the oscillator conditional on the electronic occupation of the dot. One might call this a displacement picture. This derivation follows the approach of Mahan 18 . The motivation behind it is as follows. The electrostatic interaction, term (5), displaces the equilibrium position of the oscillator so that the average value of the oscillator amplitude in the ground state becomes

$$\langle a \rangle = \lambda. \quad (10)$$
We can shift this back to the origin by a phase-space displacement

$$\bar{a} \equiv e^{s} a e^{-s} = a + \lambda\hat{n}. \quad (11)$$
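As a quick check (ours; the paper states the result without the intermediate steps), Eq. (11) follows from the operator expansion $e^{s} a e^{-s} = a + [s,a] + \frac{1}{2!}[s,[s,a]] + \cdots$:

$$[s, a] = -\lambda\,\big[(a^\dagger - a)\hat{n},\, a\big] = -\lambda\,\hat{n}\,[a^\dagger, a] = \lambda\hat{n}, \qquad [s, \lambda\hat{n}] = 0,$$

so the series terminates after the first commutator (the electronic number operator $\hat{n} = c^\dagger c$ commutes with the oscillator operators).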
Applying $U$ to the Fermi operator $c$ gives

$$\bar{c} = c\, e^{\lambda(a^\dagger - a)}. \quad (12)$$
The Schrödinger equation for the displaced state, $\tilde{\rho} = e^{s}\rho e^{-s}$, then takes the form

$$\frac{d\tilde{\rho}}{dt} = -\frac{i}{\hbar}\,[\tilde{H}, \tilde{\rho}], \quad (13)$$
where the transformed Hamiltonian is

$$\begin{aligned}
\tilde{H} ={}& \hbar\omega_o a^\dagger a + \hbar\omega_I(V_g)\, c^\dagger c + \sum_k \hbar\omega_{Sk}\, a_k^\dagger a_k + \sum_k \hbar\omega_{Dk}\, b_k^\dagger b_k + \Big(U_c + \frac{\chi^2}{\hbar\omega_o}\Big)\hat{n}^2 \\
&+ \sum_k T_{Sk}\Big(a_k c^\dagger e^{\lambda(a^\dagger - a)} + c a_k^\dagger e^{-\lambda(a^\dagger - a)}\Big) + \sum_k T_{Dk}\Big(b_k c^\dagger e^{\lambda(a^\dagger - a)} + c b_k^\dagger e^{-\lambda(a^\dagger - a)}\Big). \quad (14)
\end{aligned}$$
We will now work exclusively in this displacement picture.
To derive a master equation for the dot, we first transform to an interaction picture in the usual way to give the Hamiltonian

$$\begin{aligned}
H_I ={}& \sum_k T_{Sk}\Big(a_k c^\dagger e^{i(\omega_I - \eta - \omega_{Sk})t}\, e^{-\lambda(a^\dagger e^{i\omega_o t} - a e^{-i\omega_o t})} + c a_k^\dagger e^{-i(\omega_I - \eta - \omega_{Sk})t}\, e^{\lambda(a^\dagger e^{i\omega_o t} - a e^{-i\omega_o t})}\Big) \\
&+ \sum_k T_{Dk}\Big(b_k c^\dagger e^{i(\omega_I - \eta - \omega_{Dk})t}\, e^{-\lambda(a^\dagger e^{i\omega_o t} - a e^{-i\omega_o t})} + c b_k^\dagger e^{-i(\omega_I - \eta - \omega_{Dk})t}\, e^{\lambda(a^\dagger e^{i\omega_o t} - a e^{-i\omega_o t})}\Big), \quad (15)
\end{aligned}$$
where $\eta = \chi^2/(\hbar\omega_o) = \chi\lambda$. At this point we might wish to trace out the phonon bath; however, we will postpone this for a closer look at the tunneling Hamiltonian at the individual phonon level. We use the expansion $e^x = 1 + x + x^2/2! + \cdots$, valid for small $x$, for the term $e^{\lambda(a^\dagger e^{i\omega_o t} - a e^{-i\omega_o t})}$. We expect an expansion to second order in $\lambda$ to give an adequate description of transport, in that at least one step in the current vs. bias voltage curve is seen due to phonon mediated tunneling. In the experiment of Park et al., $\lambda$ was less than unity, but not very small. Strong coupling between the electronic and vibrational degrees of freedom (large $\lambda$) will give multi-phonon tunneling events, and corresponding multiple steps in the current vs. bias voltage curves. The Hamiltonian can then be written as
$$\begin{aligned}
H_I ={}& \sum_k T_{Sk}\Big(a_k c^\dagger e^{i(\omega_I-\eta-\omega_{Sk})t} + c a_k^\dagger e^{-i(\omega_I-\eta-\omega_{Sk})t}\Big) \\
&+ \lambda \sum_k T_{Sk}\Big(a_k c^\dagger a\, e^{i(\omega_I-\eta-\omega_{Sk}-\omega_o)t} + c a_k^\dagger a^\dagger e^{-i(\omega_I-\eta-\omega_{Sk}-\omega_o)t} - a_k c^\dagger a^\dagger e^{i(\omega_I-\eta-\omega_{Sk}+\omega_o)t} - c a_k^\dagger a\, e^{-i(\omega_I-\eta-\omega_{Sk}+\omega_o)t}\Big) \\
&+ \frac{\lambda^2}{2} \sum_k T_{Sk}\Big(a_k c^\dagger a a\, e^{i(\omega_I-\eta-\omega_{Sk}-2\omega_o)t} + c a_k^\dagger a^\dagger a^\dagger e^{-i(\omega_I-\eta-\omega_{Sk}-2\omega_o)t} - 2 a_k c^\dagger a^\dagger a\, e^{i(\omega_I-\eta-\omega_{Sk})t} - 2 c a_k^\dagger a^\dagger a\, e^{-i(\omega_I-\eta-\omega_{Sk})t} \\
&\qquad\qquad + a_k c^\dagger a^\dagger a^\dagger e^{i(\omega_I-\eta-\omega_{Sk}+2\omega_o)t} - c a_k^\dagger a a\, e^{-i(\omega_I-\eta-\omega_{Sk}+2\omega_o)t}\Big) \\
&+ \sum_k T_{Dk}\Big(b_k c^\dagger e^{i(\omega_I-\eta-\omega_{Dk})t} + c b_k^\dagger e^{-i(\omega_I-\eta-\omega_{Dk})t}\Big) \\
&+ \lambda \sum_k T_{Dk}\Big({-b_k} c^\dagger a^\dagger e^{i(\omega_I-\eta-\omega_{Dk}+\omega_o)t} - c b_k^\dagger a\, e^{-i(\omega_I-\eta-\omega_{Dk}+\omega_o)t} + b_k c^\dagger a\, e^{i(\omega_I-\eta-\omega_{Dk}-\omega_o)t} + c b_k^\dagger a^\dagger e^{-i(\omega_I-\eta-\omega_{Dk}-\omega_o)t}\Big) \\
&+ \frac{\lambda^2}{2} \sum_k T_{Dk}\Big(b_k c^\dagger a a\, e^{i(\omega_I-\eta-\omega_{Dk}-2\omega_o)t} + c b_k^\dagger a^\dagger a^\dagger e^{-i(\omega_I-\eta-\omega_{Dk}-2\omega_o)t} - 2 b_k c^\dagger a^\dagger a\, e^{i(\omega_I-\eta-\omega_{Dk})t} - 2 c b_k^\dagger a^\dagger a\, e^{-i(\omega_I-\eta-\omega_{Dk})t} \\
&\qquad\qquad + b_k c^\dagger a^\dagger a^\dagger e^{i(\omega_I-\eta-\omega_{Dk}+2\omega_o)t} - c b_k^\dagger a a\, e^{-i(\omega_I-\eta-\omega_{Dk}+2\omega_o)t}\Big). \quad (16)
\end{aligned}$$
The terms of zero order in λ describe bare tunneling through the system and do not cause excitations of the vibrational degree of freedom. The terms linear in λ correspond to the exchange of one vibrational quantum, or phonon. The terms quadratic in λ correspond to tunneling with the exchange of two vibrational quanta. Higher order terms could obviously be included at considerable computational expense. We will proceed to derive the master equation up to quadratic order in λ.
III. MASTER EQUATION
Our objective here is to find an evolution equation of the joint density operator for the electronic and vibrational degrees of freedom. We will use standard methods based on the Born and Markov approximation 16 . In order to indicate where these approximations occur, we will sketch some of the key elements of the derivation in what follows. The Born approximation assumes that the coupling between the leads and the local system is weak and thus second order perturbation theory will suffice to describe this interaction;
$$\dot{\tilde{\rho}} = -\frac{1}{\hbar^2} \int_0^t dt'\, \mathrm{Tr}\big[H_I(t), [H_I(t'), R]\big], \quad (17)$$
where R is the joint density matrix for the vibrational and electronic degrees of freedom of the local system and the reservoirs. At this point we would like to trace out the electronic degrees of freedom for the source and drain. We will assume that the states of the source and drain remain in local thermodynamic equilibrium at temperature T . This is part of the Markov approximation. Its validity requires that any correlation that develops between the electrons in the leads and the local system, as a result of the tunneling interaction, is rapidly damped to zero on time scales relevant for the system dynamics. We need the following moments:
$$\mathrm{Tr}[a_k^\dagger a_k \rho] = f_{Sk}, \qquad \mathrm{Tr}[b_k^\dagger b_k \rho] = f_{Dk}, \qquad \mathrm{Tr}[a_k a_k^\dagger \rho] = 1 - f_{Sk}, \qquad \mathrm{Tr}[b_k b_k^\dagger \rho] = 1 - f_{Dk},$$
where f Sk = f (E Sk ) is the Fermi function describing the average occupation number in the source and similarly f Dk = f (E Dk ), for the drain. The Fermi function has an implicit dependence on the temperature, T , of the electronic system. The next step is to convert the sum over modes to a frequency-space integral:
$$\sum_k f_{Sk}\, |T_{Sk}|^2 \;\to\; \int_0^\infty d\omega\, g(\omega)\, f_S(\omega)\, |T_S(\omega)|^2, \quad (18)$$

where $|T_{Sk}|^2 = T_{Sk}^* T_{Sk}$ and $g(\omega)$ is the density of states. Evaluating the time integral, we use
$$\int_0^\infty d\tau\, e^{\pm i\epsilon\tau} = \pi\delta(\epsilon) \pm i\,\mathrm{P.V.}\Big(\frac{1}{\epsilon}\Big), \quad (19)$$
where τ = t − t ′ and the imaginary term is ignored.
Using these methods, we can combine the terms for the source and drain as the left and right tunneling rates, $\gamma_L$ and $\gamma_R$ respectively:

$$\int_0^\infty d\omega\, g(\omega)\, |T_S(\omega)|^2\, \delta(\omega - \omega_0) = \gamma_L(\omega_0). \quad (20)$$
In the same way, we can define
$$\begin{aligned}
\gamma_{L1} &= \gamma_L(\hbar\omega_I - \eta - \mu_L), & f_{1L} &= f(\hbar\omega_I - \eta - \mu_L),\\
\gamma_{L2} &= \gamma_L(\hbar\omega_I - \eta - \hbar\omega_o - \mu_L), & f_{2L} &= f(\hbar\omega_I - \eta - \hbar\omega_o - \mu_L),\\
\gamma_{L3} &= \gamma_L(\hbar\omega_I - \eta + \hbar\omega_o - \mu_L), & f_{3L} &= f(\hbar\omega_I - \eta + \hbar\omega_o - \mu_L),
\end{aligned}$$
and similarly for $\gamma_{R1}, \gamma_{R2}, \gamma_{R3}, f_{1R}, f_{2R}, f_{3R}$, replacing $\mu_L$ with $\mu_R$; here $f$ denotes the Fermi function, which depends on the bias voltage (through the chemical potential) and also on the phonon energy $\hbar\omega_o$. As the bias voltage is increased from zero, the first Fermi function to become significantly different from zero is $f_{2L}$, followed by $f_{1L}$ and then $f_{3L}$. This stepwise behavior will be important in understanding the dependence of the stationary current on bias voltage. The master equation in the canonically transformed picture, to second order in $\lambda$, may be written as
$$\begin{aligned}
\frac{d\tilde{\rho}}{dt} ={}& \gamma_{L1}\Big[(1-\lambda^2)\big(f_{1L}\,\mathcal{D}[c^\dagger]\tilde{\rho} + (1-f_{1L})\,\mathcal{D}[c]\tilde{\rho}\big) \\
&\quad + \lambda^2\big(f_{1L}\,(-a^\dagger a c^\dagger \tilde{\rho} c + a^\dagger a c c^\dagger \tilde{\rho} - c^\dagger \tilde{\rho} c\, a^\dagger a + \tilde{\rho}\, c c^\dagger a^\dagger a) \\
&\qquad + (1-f_{1L})\,(-a^\dagger a c \tilde{\rho} c^\dagger + a^\dagger a c^\dagger c \tilde{\rho} - c \tilde{\rho} c^\dagger a^\dagger a + \tilde{\rho}\, a^\dagger a c^\dagger c)\big)\Big] \\
&+ \gamma_{L2}\lambda^2\big(f_{2L}\,\mathcal{D}[a c^\dagger]\tilde{\rho} + (1-f_{2L})\,\mathcal{D}[a^\dagger c]\tilde{\rho}\big) \\
&+ \gamma_{L3}\lambda^2\big(f_{3L}\,\mathcal{D}[a^\dagger c^\dagger]\tilde{\rho} + (1-f_{3L})\,\mathcal{D}[a c]\tilde{\rho}\big) \\
&+ \gamma_{R1}\Big[(1-\lambda^2)\big(f_{1R}\,\mathcal{D}[c^\dagger]\tilde{\rho} + (1-f_{1R})\,\mathcal{D}[c]\tilde{\rho}\big) \\
&\quad + \lambda^2\big(f_{1R}\,(-a^\dagger a c^\dagger \tilde{\rho} c + a^\dagger a c c^\dagger \tilde{\rho} - c^\dagger \tilde{\rho} c\, a^\dagger a + \tilde{\rho}\, c c^\dagger a^\dagger a) \\
&\qquad + (1-f_{1R})\,(-a^\dagger a c \tilde{\rho} c^\dagger + a^\dagger a c^\dagger c \tilde{\rho} - c \tilde{\rho} c^\dagger a^\dagger a + \tilde{\rho}\, a^\dagger a c^\dagger c)\big)\Big] \\
&+ \gamma_{R2}\lambda^2\big(f_{2R}\,\mathcal{D}[a c^\dagger]\tilde{\rho} + (1-f_{2R})\,\mathcal{D}[a^\dagger c]\tilde{\rho}\big) \\
&+ \gamma_{R3}\lambda^2\big(f_{3R}\,\mathcal{D}[a^\dagger c^\dagger]\tilde{\rho} + (1-f_{3R})\,\mathcal{D}[a c]\tilde{\rho}\big) \\
&+ \kappa(\bar{n}_p + 1)\,\mathcal{D}[a]\tilde{\rho} + \kappa \bar{n}_p\,\mathcal{D}[a^\dagger]\tilde{\rho} + \kappa\lambda^2(2\bar{n}_p + 1)\,\mathcal{D}[c^\dagger c]\tilde{\rho}, \quad (21)
\end{aligned}$$
where the notation $\mathcal{D}$ is defined for arbitrary operators $X$ and $Y$ as

$$\mathcal{D}[X]Y = \mathcal{J}[X]Y - \mathcal{A}[X]Y = X Y X^\dagger - \frac{1}{2}\big(X^\dagger X Y + Y X^\dagger X\big), \quad (22)$$

and

$$\bar{n}_p(\omega_o) = \frac{1}{e^{\hbar\omega_o / k_B T} - 1}. \quad (23)$$
We have included in this model an explicit damping process for the oscillator's motion at rate $\kappa$ into a thermal oscillator bath with mean excitation $\bar{n}_p$; here $T$ is the effective temperature of the reservoir responsible for this damping process. A possible physical origin for this kind of damping could be as follows. Thermal fluctuations in the metal contacts of the source and drain cause fluctuations in the position of the center of the trapping potential confining the molecule; that is to say, small fluctuating linear forces act on the molecule. For a harmonic trap, this appears to the oscillator as a thermal bath. However we expect such a mechanism to be very weak. This fact, together with the very large frequency of the oscillator, justifies our use of the quantum optical master equation (as opposed to the Brownian motion master equation) to describe this source of dissipation 16 . The thermo-mechanical and electronic temperatures are not necessarily the same, although we will generally assume this to be the case.
Setting $\lambda = 0$ we recover the standard master equation for a single quantum dot coupled to leads 14 . The superoperator $\mathcal{D}[c^\dagger]$ adds one electron to the dot. Terms containing this superoperator describe a conditional Poisson event in which an electron tunnels onto the dot. The electron can enter from the source, with probability per unit time $\gamma_{L1} f_{1L}\langle c c^\dagger\rangle$, or it can enter from the drain, with probability per unit time $\gamma_{R1} f_{1R}\langle c c^\dagger\rangle$. Likewise the term $\mathcal{D}[c]$ describes an electron leaving the dot, again via tunneling into the source (terms proportional to $\gamma_{L1}$) or the drain (terms proportional to $\gamma_{R1}$). When $\lambda \neq 0$ there are additional terms describing phonon mediated tunneling events onto and off the dot. Any term proportional to $\gamma_{Li}$, $i = 1, 2, 3$, describes an electron tunneling out of, or into, the source, while any term proportional to $\gamma_{Ri}$, $i = 1, 2, 3$, describes an electron tunneling out of, or into, the drain.
The average currents through the left junction (source lead-dot) and the right junction (dot-drain lead) are related to each other, and to the average occupation of the dot, by

$$I_L(t) - I_R(t) = e\,\frac{d\langle c^\dagger c\rangle}{dt}. \quad (24)$$
In the steady state, the occupation of the dot is constant and the average currents through the two junctions are equal. Of course, the actual fluctuating time-dependent currents are almost never equal. The external current arises as the external circuit adjusts the chemical potential of the local Fermi reservoir when electrons tunnel onto or off the dot. It is thus clear that the current through the left junction must depend only on the tunneling rates $\gamma_{Li}$, $i = 1, 2, 3$, in the left barrier. This makes it easy to identify the average current through the left (or right) junction by inspection of the equation of motion for $\langle c^\dagger c\rangle$ [Eq. (26) below]: all terms on its right-hand side proportional to $\gamma_{Li}$ correspond to the average current through the left junction, $I_L(t)$, while all terms on the right-hand side proportional to $\gamma_{Ri}$ correspond to the negative of the current through the right junction, $-I_R(t)$.
IV. LOCAL SYSTEM DYNAMICS
We can now compute the current through the quantum dot. The current reflects how the reservoirs of the source and drain respond to the dynamics of the vibrational and electronic degrees of freedom. Of course in an experiment the external current is typically all we have access to. However the master equation enables us to calculate the coupled dynamics of the vibrational and electronic degrees of freedom. Understanding this dynamics is crucial to explaining the observed features in the external current. As electrons tunnel on and off the dot, the oscillator experiences a force due to the electrostatic potential. While the force is conservative, the tunnel events are stochastic (in fact a Poisson process) and thus the excitation of the oscillator is stochastic. Furthermore the vibrational and electronic degrees of freedom become correlated through the dynamics. In this section we wish to investigate these features in some detail.
From the master equation, the rate of change of the average electron number in the dot may be obtained:

$$\begin{aligned}
\frac{d\langle c^\dagger c\rangle_{CT}}{dt} ={}& \mathrm{tr}\Big[c^\dagger c\, \frac{d\tilde{\rho}}{dt}\Big] \quad (25)\\
={}& \Big[\gamma_{L1}(1-\lambda^2)\big(f_{1L} - \langle c^\dagger c\rangle\big) + \gamma_{R1}(1-\lambda^2)\big(f_{1R} - \langle c^\dagger c\rangle\big) \\
&- 2\gamma_{L1}\lambda^2\big(f_{1L}\langle a^\dagger a\rangle - \langle a^\dagger a c^\dagger c\rangle\big) \\
&+ \gamma_{L2}\lambda^2\big(f_{2L}\langle a^\dagger a\rangle - \langle a^\dagger a c^\dagger c\rangle - (1-f_{2L})\langle c^\dagger c\rangle\big) \\
&+ \gamma_{L3}\lambda^2\big(f_{3L}\langle 1 + a^\dagger a\rangle - f_{3L}\langle c^\dagger c\rangle - \langle a^\dagger a c^\dagger c\rangle\big) \\
&- 2\gamma_{R1}\lambda^2\big(f_{1R}\langle a^\dagger a\rangle - \langle a^\dagger a c^\dagger c\rangle\big) \\
&+ \gamma_{R2}\lambda^2\big(f_{2R}\langle a^\dagger a\rangle - \langle a^\dagger a c^\dagger c\rangle - (1-f_{2R})\langle c^\dagger c\rangle\big) \\
&+ \gamma_{R3}\lambda^2\big(f_{3R}\langle 1 + a^\dagger a\rangle - f_{3R}\langle c^\dagger c\rangle - \langle a^\dagger a c^\dagger c\rangle\big)\Big]_{CT}. \quad (26)
\end{aligned}$$
While for the vibrational degrees of freedom, we see that

$$\begin{aligned}
\frac{d\langle a^\dagger a\rangle_{CT}}{dt} ={}& \mathrm{tr}\Big[a^\dagger a\, \frac{d\tilde{\rho}}{dt}\Big] \quad (27)\\
={}& \lambda^2\Big[\gamma_{L2}\big({-f_{2L}}\langle a^\dagger a\rangle + \langle a^\dagger a c^\dagger c\rangle + (1-f_{2L})\langle c^\dagger c\rangle\big) \\
&+ \gamma_{L3}\big(f_{3L}\langle 1 + a^\dagger a\rangle - f_{3L}\langle c^\dagger c\rangle - \langle a^\dagger a c^\dagger c\rangle\big) \\
&+ \gamma_{R2}\big({-f_{2R}}\langle a^\dagger a\rangle + \langle a^\dagger a c^\dagger c\rangle + (1-f_{2R})\langle c^\dagger c\rangle\big) \\
&+ \gamma_{R3}\big(f_{3R}\langle 1 + a^\dagger a\rangle - f_{3R}\langle c^\dagger c\rangle - \langle a^\dagger a c^\dagger c\rangle\big)\Big]_{CT} - \kappa\langle a^\dagger a\rangle_{CT} + \kappa\bar{n}_p. \quad (28)
\end{aligned}$$
Here the subscript CT indicates that the attached quantity is evaluated in the canonically transformed (CT) basis. The average occupation number of electrons in the dot in the original basis is the same as in the CT basis:

$$\langle c^\dagger c\rangle = \mathrm{tr}[c^\dagger c\, \rho] = \mathrm{tr}[c^\dagger c\, \tilde{\rho}] = \langle c^\dagger c\rangle_{CT}.$$
While for the vibrational degrees of freedom, we have

$$\begin{aligned}
\langle a^\dagger a\rangle &= \mathrm{tr}[a^\dagger a\, \rho] = \mathrm{tr}\big[a^\dagger a\, e^{-s} e^{-i\omega_o a^\dagger a t}\, \tilde{\rho}\, e^{i\omega_o a^\dagger a t} e^{s}\big] \\
&= \mathrm{tr}\big[e^{i\omega_o a^\dagger a t}\,(a^\dagger + \lambda\hat{n})(a + \lambda\hat{n})\, e^{-i\omega_o a^\dagger a t}\, \tilde{\rho}\big] \\
&= \langle a^\dagger a\rangle_{CT} + \lambda\big\langle (a^\dagger e^{i\omega_o t} + a e^{-i\omega_o t})\,\hat{n}\big\rangle_{CT} + \lambda^2\langle \hat{n}^2\rangle. \quad (29)
\end{aligned}$$
If the initial displacement $\langle x\rangle$ is zero, the second, time-dependent, term in the previous expression remains zero. We will assume this is the case in what follows. In general we do not obtain a closed set of equations for the mean phonon and electron numbers, due to the presence of the higher-order moment $\langle a^\dagger a c^\dagger c\rangle$ in these equations. This reflects the fact that the electronic and vibrational degrees of freedom are correlated (and possibly entangled) through the dynamics. One might proceed by introducing a semiclassical factorization approximation, replacing $\langle a^\dagger a c^\dagger c\rangle$ by the factorized average $\langle a^\dagger a\rangle\langle c^\dagger c\rangle$; then the evolution equations (26) and (28) form a closed set. However there is a special case for which this is not necessary. If we let $\gamma_{L1} = \gamma_{L2} = \gamma_{L3} = \gamma_L$ and $\gamma_{R1} = \gamma_{R2} = \gamma_{R3} = \gamma_R$, which is the assumption of energy-independent tunnel couplings, the equations do form a closed set:
$$\frac{d\langle c^\dagger c\rangle}{dt} = A_1 \langle c^\dagger c\rangle + B_1 \langle a^\dagger a\rangle_{CT} + C_1, \quad (30)$$

$$\begin{aligned}
A_1 &= -\big[\gamma_L(1 - f_{2L}\lambda^2 + f_{3L}\lambda^2) + \gamma_R(1 - f_{2R}\lambda^2 + f_{3R}\lambda^2)\big],\\
B_1 &= \lambda^2(-2 f_{1L}\gamma_L + f_{2L}\gamma_L + f_{3L}\gamma_L - 2 f_{1R}\gamma_R + f_{2R}\gamma_R + f_{3R}\gamma_R),\\
C_1 &= (1-\lambda^2)\gamma_L f_{1L} + \gamma_L f_{3L}\lambda^2 + (1-\lambda^2)\gamma_R f_{1R} + \gamma_R f_{3R}\lambda^2,
\end{aligned}$$

and

$$\frac{d\langle a^\dagger a\rangle_{CT}}{dt} = A_2 \langle c^\dagger c\rangle + B_2 \langle a^\dagger a\rangle_{CT} + C_2, \quad (31)$$

$$\begin{aligned}
A_2 &= \lambda^2\big(\gamma_L(1 - f_{2L} - f_{3L}) + \gamma_R(1 - f_{2R} - f_{3R})\big),\\
B_2 &= \lambda^2(-\gamma_L f_{2L} + \gamma_L f_{3L} - \gamma_R f_{2R} + \gamma_R f_{3R}) - \kappa,\\
C_2 &= \lambda^2(\gamma_L f_{3L} + \gamma_R f_{3R}) + \kappa\bar{n}_p.
\end{aligned}$$
Consideration of Eq. (31) indicates that it is possible for the oscillator to achieve a steady state even when there is no explicit thermo-mechanical damping (κ = 0). This requires bias conditions such that f 3L = f 3R = 0. It is remarkable that the process of electrical transport between source and drain alone can induce damping of the mechanical motion. This result has been indicated by other authors 9,10,11,12 . These equations were solved numerically and the results, for various values of λ and bias voltage, are shown in Fig. 2. A feature of our approach is that we can directly calculate the dynamics of the local degrees of freedom, for example the mean electron occupation of the dot as well as the mean vibrational occupation number in the oscillator.
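The closed pair (30)-(31) is a linear system of ordinary differential equations and can be integrated directly. The following sketch (our illustration, not the authors' code; the step-function Fermi factors and parameter values are assumptions chosen in the spirit of Fig. 2) integrates the system on the bias step where $f_{1L} = f_{2L} = 1$ and $f_{3L} = 0$, i.e., the transport-induced-damping regime discussed below:

```python
from scipy.integrate import solve_ivp

# rates in units of gamma_L; assumed values: lambda = 0.3, gamma_R = gamma_L,
# kappa = 0.3*gamma_L, thermal phonon occupation n_p ~ 0 at low temperature
lam, gL, gR, kappa, n_p = 0.3, 1.0, 1.0, 0.3, 0.0
f1L, f2L, f3L = 1.0, 1.0, 0.0   # left-lead Fermi factors on this bias step
f1R = f2R = f3R = 0.0           # symmetric positive bias: right factors ~ 0

# coefficients of Eqs. (30) and (31)
A1 = -(gL*(1 - f2L*lam**2 + f3L*lam**2) + gR*(1 - f2R*lam**2 + f3R*lam**2))
B1 = lam**2*(-2*f1L*gL + f2L*gL + f3L*gL - 2*f1R*gR + f2R*gR + f3R*gR)
C1 = (1 - lam**2)*gL*f1L + gL*f3L*lam**2 + (1 - lam**2)*gR*f1R + gR*f3R*lam**2
A2 = lam**2*(gL*(1 - f2L - f3L) + gR*(1 - f2R - f3R))
B2 = lam**2*(-gL*f2L + gL*f3L - gR*f2R + gR*f3R) - kappa
C2 = lam**2*(gL*f3L + gR*f3R) + kappa*n_p

def rhs(t, y):
    ne, nph = y   # <c^dag c> and <a^dag a>_CT
    return [A1*ne + B1*nph + C1, A2*ne + B2*nph + C2]

sol = solve_ivp(rhs, [0.0, 50.0], [0.0, 0.0])
print("long-time <c^dag c>, <a^dag a>_CT:", sol.y[0, -1], sol.y[1, -1])
```

Because $B_2 < 0$ in this regime even for $\kappa = 0$, both occupations relax to stationary values.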
From these equations we can reproduce stationary-current behavior similar to that observed in the experiment. We concentrate here on the stationary current through the left junction (connected to the source); similar results apply for the right junction. We assume that the electronic temperature is 1.5 K, the temperature used in the experiment by Park et al. 8 .
Following the discussion below Eq. (24), we see from Eq. (30) that the average steady state current through the left junction is given by
$$I_{st} = e\gamma_L\big[(-1 + f_{2L}\lambda^2 - f_{3L}\lambda^2)\langle c^\dagger c\rangle_{st} + (-2 f_{1L}\lambda^2 + f_{2L}\lambda^2 + f_{3L}\lambda^2)\langle a^\dagger a\rangle_{CT,st} + (1-\lambda^2) f_{1L} + f_{3L}\lambda^2\big], \quad (32)$$

which is of course equal to the average steady state current through the right junction. The steady state current $I_{st}$ can then be found from the steady state solutions for the phonon number and electron number,
$$\langle c^\dagger c\rangle_{st} = \frac{B_1 C_2 - B_2 C_1}{A_1 B_2 - A_2 B_1}, \quad (33)$$

$$\langle a^\dagger a\rangle_{CT,st} = -\frac{A_2}{B_2}\,\frac{B_1 C_2 - B_2 C_1}{A_1 B_2 - A_2 B_1} - \frac{C_2}{B_2}. \quad (34)$$
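A minimal numerical sketch of the resulting bias dependence (ours, not the authors' code; it uses the parameter values quoted for Fig. 2, sharp Fermi functions at $k_B T = 0.13$ meV, and the symmetric bias convention introduced just below):

```python
import numpy as np

lam, gL, gR, kappa, n_p = 0.3, 1.0, 1.0, 0.3, 0.0   # rates in units of gamma_L
eI, wo, kT = 15.0, 5.0, 0.13   # meV: island energy (hbar*w_I - eta), phonon, k_B*T

def fermi(E):
    # Fermi function; clip the exponent to avoid overflow at low temperature
    return 1.0 / (np.exp(np.clip(E / kT, -500.0, 500.0)) + 1.0)

def stationary_current(Vbias):
    muL, muR = Vbias / 2.0, -Vbias / 2.0             # symmetric bias, in meV
    f1L, f2L, f3L = fermi(eI - muL), fermi(eI - wo - muL), fermi(eI + wo - muL)
    f1R, f2R, f3R = fermi(eI - muR), fermi(eI - wo - muR), fermi(eI + wo - muR)
    A1 = -(gL*(1 - f2L*lam**2 + f3L*lam**2) + gR*(1 - f2R*lam**2 + f3R*lam**2))
    B1 = lam**2*(-2*f1L*gL + f2L*gL + f3L*gL - 2*f1R*gR + f2R*gR + f3R*gR)
    C1 = (1 - lam**2)*gL*f1L + gL*f3L*lam**2 + (1 - lam**2)*gR*f1R + gR*f3R*lam**2
    A2 = lam**2*(gL*(1 - f2L - f3L) + gR*(1 - f2R - f3R))
    B2 = lam**2*(-gL*f2L + gL*f3L - gR*f2R + gR*f3R) - kappa
    C2 = lam**2*(gL*f3L + gR*f3R) + kappa*n_p
    ne = (B1*C2 - B2*C1) / (A1*B2 - A2*B1)           # Eq. (33)
    return gR * ne                                    # I_st/e through the right junction

for V in [10.0, 25.0, 35.0, 45.0]:                    # bias in meV
    print(V, stationary_current(V))
```

The printed values show the current switching on near $V_{bias} = 30$ meV and rising again near 40 meV, the step structure discussed below.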
In Fig. 2, we assume that the bias voltage is applied symmetrically, i.e., $\mu_L = -\mu_R = eV_{bias}/2$. In this case, all the Fermi factors $f_{1R}$, $f_{2R}$, and $f_{3R}$ are effectively zero in the positive bias regime, the regime of Fig. 2. From Eqs. (24) and (30), we see that Eq. (32) also equals the steady state current through the right junction:

$$I_{st} = e\gamma_R \langle c^\dagger c\rangle_{st}.$$
In the case $\gamma_R = \gamma_L$, the steady state $\langle c^\dagger c\rangle_{st}$ shown in Fig. 2(a) and (d) at long times should thus equal, respectively, $I_{st}/(e\gamma_L)$ shown in Fig. 2(c) and (f) at long times. This is indeed the case, although the transient behaviors in these plots differ considerably. We note that the plots shown in Fig. 2(c) and (f) are the current through the left junction, normalized by $e\gamma_L$. The values of $f_{1L}$, $f_{2L}$, and $f_{3L}$ depend on the strength of the applied bias voltage and are important in understanding the stepwise behavior of the stationary current as a function of the bias voltage. We will now concentrate exclusively on the positive bias regime.
When the bias voltage is small, the current is zero. As the bias voltage increases, the first Fermi factor in the left lead to become non-zero is $f_{2L}$, with the other Fermi factors very small or zero. In the case that $f_{2L} = 1$, $f_{1L} = 0$, $f_{3L} = 0$, the steady state current is

$$I^{(1)}_{st} = e\gamma_L\big[\lambda^2\langle a^\dagger a\rangle_{CT,st} - (1-\lambda^2)\langle c^\dagger c\rangle_{st}\big]. \quad (35)$$
For low temperatures this is very small. Only if the phonon temperature is large, so that the stationary mean phonon number is significant, does this first current step become apparent (see Fig. 4). As the bias voltage is increased, both $f_{2L}$ and $f_{1L}$ become non-zero. In the case where they are both unity, the steady state current is

$$I^{(2)}_{st} = e\gamma_L\big[(1-\lambda^2)\langle c c^\dagger\rangle_{st} - \lambda^2\langle a^\dagger a\rangle_{CT,st}\big]. \quad (36)$$
The first term here has the same form as the bare tunneling case, except that the effective tunneling rate is reduced by $(1-\lambda^2)$. This is not too surprising: if the island is moving, on average it reduces the effective tunneling rate across the two barriers. Thus the first current step will be reduced below the value of the bare (no phonon) rate. In the region where the bias voltage is large, all the Fermi factors are unity and

$$I^{(3)}_{st} = e\gamma_L \langle c c^\dagger\rangle_{st}, \quad (37)$$
which is the expected result for the bare tunneling case.
To explicitly evaluate the stationary current we need to solve for the stationary mean electronic and phonon occupation numbers. We have done this numerically and the results are shown in the figures below. However, the large bias case can easily be solved:

$$\langle c^\dagger c\rangle_{st} = \frac{\gamma_L}{\gamma_L + \gamma_R}, \quad (38)$$

$$\langle a^\dagger a\rangle_{st} = \Big(\lambda^2 + \frac{\lambda^2(-\gamma_L + \gamma_R)}{\kappa}\Big)\frac{\gamma_L}{\gamma_L + \gamma_R} + \frac{\gamma_L\lambda^2 + \kappa\bar{n}_p}{\kappa}, \quad (39)$$

$$I_{st} = e\,\frac{\gamma_L\gamma_R}{\gamma_L + \gamma_R}. \quad (40)$$
This is the result for tunneling through a single quasi-bound state between two barriers 14 .
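As a consistency check (ours), Eq. (38) follows from Eq. (30) alone: with $f_{1L} = f_{2L} = f_{3L} = 1$ and all right-lead Fermi factors zero,

$$B_1 = \lambda^2 \gamma_L(-2 + 1 + 1) = 0, \qquad A_1 = -(\gamma_L + \gamma_R), \qquad C_1 = (1-\lambda^2)\gamma_L + \lambda^2\gamma_L = \gamma_L,$$

so the stationary solution of (30) is $\langle c^\dagger c\rangle_{st} = -C_1/A_1 = \gamma_L/(\gamma_L + \gamma_R)$, independent of the phonon occupation.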
The steady state current for larger values of $\lambda$ shows two steps. As one can see from Fig. 2(a), the current vanishes until the first Coulomb blockade energy is overcome. The first step in the stationary current is thus due to bare tunneling through the dot. The second step represents single phonon mediated tunneling through the dot. These results are consistent with the semiclassical theory of Boese and Schoeller 13 , given that our expansion to second order in $\lambda$ can only account for single phonon events. The height of the step depends on $\lambda$, which is the ratio of the coupling strength between the electron and the vibrational level, $\chi$, to the oscillator quantum of energy $\hbar\omega_o$.
Looking at Fig. 2, the average electron number approaches a steady state (e.g., a steady state value of 0.5 at large bias, since we have set $\gamma_L$ equal to $\gamma_R$), while the average phonon number, without external damping, behaves differently in the various regions (Fig. 2(b)). The average phonon number slowly reaches a steady state value within the first step, while at the second step the mean phonon number grows linearly with time (Fig. 2(b)). The steady state at the first step is the previously noted effect of transport-induced damping. These results are as expected since, from Eq. (21), the term $\gamma_{L2}\lambda^2 f_{2L}\mathcal{D}[a c^\dagger]\tilde{\rho}$ corresponds to a jump of one electron onto the island with the simultaneous annihilation of a phonon, while $\gamma_{L3}\lambda^2 f_{3L}\mathcal{D}[a^\dagger c^\dagger]\tilde{\rho}$ corresponds to the jump of an electron onto the island along with the simultaneous creation of a phonon. The dynamics can be understood by relating the behavior of these terms to the rate of change of the average phonon number in Eq. (31). At the first step, when $f_{1L}$ and $f_{2L}$ are both one while $f_{3L}$ is zero, the coefficient $B_2$ is negative, and therefore the mean phonon number can reach a steady state under this transport-induced damping. In this regime, we find that
$$\langle c^\dagger c\rangle_{st} = (1-\lambda^2)/2, \quad (41)$$

$$\langle a^\dagger a\rangle_{st} \approx 1/2, \quad (42)$$
where we have set $\gamma_L = \gamma_R$ and used Eq. (29) in obtaining Eq. (42). The corresponding effective temperature can be found using

$$T_{\mathrm{eff}} = \frac{\hbar\omega_o}{k_B \ln\big[1 + 1/\langle a^\dagger a\rangle_{st}\big]}. \quad (43)$$
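For instance (our numerical illustration, using the values quoted above): with $\hbar\omega_o = 5$ meV and $\langle a^\dagger a\rangle_{st} \approx 1/2$, Eq. (43) gives

$$T_{\mathrm{eff}} = \frac{5\ \mathrm{meV}}{k_B \ln 3} \approx 53\ \mathrm{K},$$

far above the assumed electronic temperature of 1.5 K.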
When all the Fermi factors for the left lead are unity, the rate of growth of the mean phonon number depends on the constant $C_2$ in Eq. (31), and therefore the mean phonon number will grow linearly with time. However, the current will still be steady (Fig. 2(f)). This indicates that the steady state current and the mean electron number in the dot [given by Eqs. (40) and (38) respectively] do not depend on the mean phonon number. This is supported by the fact that the coefficient $B_1$ in Eq. (30) vanishes in this regime. When damping is included, the phonon number reaches a steady value of 0.35 [see Fig. 2(e) and also Eq. (39)]. In Fig. 3 we plot the steady state current versus bias voltage for different values of $\lambda$. In the region of the first phonon excitation level (when the bias voltage is between 30 meV and 40 meV), the steady state current drops by an amount of second order in $\lambda$. Compared to the current at large bias voltage, the size of the drop is
$$\Delta I_{st} = \frac{\gamma_L\lambda^2\big(2\gamma_L\lambda^2 + \kappa(2\bar{n}_p + 1)\big)}{4\gamma_L\lambda^2 - 2\kappa(\lambda^2 - 2)}. \quad (44)$$
We thus see that the effect of the oscillatory motion of the island is twofold. Firstly, the vibrational motion leads to an effective reduction in the tunneling rate by an amount $\lambda^2$ to lowest order in $\lambda$. Secondly, there is a second step at higher bias voltage due to phonon mediated tunneling. This is determined by the dependence of the Fermi factors on the vibrational quantum of energy. One might have expected another step at a smaller bias voltage. However this step is very small unless there is a significant thermally excited mean phonon number present in the steady state. If we increase the phonon temperature such that the thermal energy is larger than the energy quantum of the oscillator, $\hbar\omega_o$ (for this example we choose $T = 2\hbar\omega_o/k_B \approx 116$ K), this step can be seen (Fig. 4). Thus we see three steps as expected, corresponding to the three bias voltages at which the three Fermi factors switch on from zero.
In order to explore the steady state correlation between phonon number and electron number on the dot, we can find the steady state directly by solving the master equation, Eq. (21). In Fig. 5 we plot the correlation function $\langle a^\dagger a c^\dagger c\rangle - \langle a^\dagger a\rangle\langle c^\dagger c\rangle$ as a function of $\lambda$ and the bias voltage. The correlation is seen to be small except when a transition occurs between two conductance states. This is not surprising, as at this point one expects fluctuations in the charge on the dot, and consequently fluctuations of the phonon number, to be large. This interpretation implies that damping of the oscillator should suppress the correlation, as the response of the oscillator to fluctuations in the dot occupation is suppressed. This is seen in Fig. 6.
V. CONCLUSIONS
We have given a quantum description of a QEMS comprising a single quantum dot harmonically bound between two electrodes, based on a quantum master equation for the density operator of the electronic and vibrational degrees of freedom. The description thus incorporates the dynamics of both diagonal (population) and off-diagonal (coherence) terms. We found a special set of parameters for which the equations of motion for the mean phonon number and the electron number form a closed set. From this we have been able to reproduce the central qualitative features of the current vs. bias voltage curve obtained experimentally by Park et al. 8 and also of the semiclassical phenomenological theory of Boese and Schoeller 13 . We also calculate the correlation function between phonon and electron number in the steady state and find that it is only significant at the steps of the steady-state conductance. The results reported in this paper do not probe the full power of the master equation approach, as the model does not couple the diagonal and off-diagonal elements of the density matrix. Such coupling can arise when the vibrational motion of the dot is subject to a conservative driving force, in addition to the stochastic driving that arises when electrons tunnel on and off the dot in a static electric field. The full quantum treatment will enable us to include coherent effects which are likely to arise when a spin-doped quantum dot is used in a static or RF external magnetic field.
FIG. 2: Average electron number, phonon number, and current through the dot against bias voltage with $\hbar\omega_o = 5$ meV, $\hbar\omega_I - \eta = 15$ meV, $k_B T = 0.13$ meV, and $\hbar\gamma_L = \hbar\gamma_R = 2$ µeV, for $\lambda = 0.3$. Figures (a), (b), (c) without damping and (d), (e), (f) with damping $\kappa = 0.3\gamma_L$. We assume $\mu_L = -\mu_R = eV_{bias}/2$.

FIG. 3: Steady state current for different values of $\lambda$ with damping $\kappa = 0.3\gamma_L$; the electronic and phonon temperatures are both 1.5 K.

FIG. 4: Steady state current for different values of $\lambda$ and damping $\kappa = 0.3\gamma_L$; the electronic temperature is 1.5 K, and the phonon temperature is 116 K, chosen to be $2\hbar\omega_o/k_B$ in order to make manifest the step at a smaller bias voltage.

FIG. 5: Difference in the correlation function for different values of $\lambda$ with damping $\kappa = 0.3\gamma_L$.

FIG. 6: Difference in the correlation function for different values of $\kappa$, with $\lambda = 0.3$. The plot starts at the value $\kappa = 0$.
1. M. Roukes, Physics World 14, 25, February (2001).
2. R. G. Knobel and A. N. Cleland, Nature 424, 291 (2003).
3. M. D. LaHaye, O. Buu, B. Camarota, and K. C. Schwab, Science 304, 74 (2004).
4. Z. Zhang, M. L. Roukes, and P. C. Hammel, Journ. App. Phys. 80, 6931 (1996); H. J. Mamin, R. Budakian, B. W. Chui and D. Rugar, Phys. Rev. Lett. 91, 207604 (2003).
5. T. A. Brun and H.-S. Goan, Phys. Rev. A 68, 032301 (2003); G. P. Berman, F. Borgonovi, H.-S. Goan, S. A. Gurvitz and V. I. Tsifrinovich, Phys. Rev. B 67, 094425 (2003).
6. A. D. Armour, M. P. Blencowe, and K. C. Schwab, Phys. Rev. Lett. 88, 148301 (2002).
7. X. M. H. Huang, C. A. Zorman, M. Mehregany, and M. Roukes, Nature 421, 496 (2003).
8. H. Park, J. Park, A. K. Lim, E. H. Anderson, A. P. Alivisatos, and P. L. McEuen, Nature 407, 57 (2000).
9. D. Mozyrsky and I. Martin, Phys. Rev. Lett. 89, 018301 (2002).
10. A. Yu. Smirnov, L. G. Mourokh, and N. J. M. Horing, Phys. Rev. B 67, 115312 (2003).
11. D. Mozyrsky, I. Martin and M. B. Hastings, Phys. Rev. Lett. 92, 018303 (2004).
12. A. D. Armour, M. P. Blencowe, and Y. Zhang, Phys. Rev. B 69, 125313 (2004).
13. D. Boese and H. Schoeller, Europhys. Lett. 54, 668 (2001).
14. H. B. Sun and G. J. Milburn, Phys. Rev. B 59, 10748 (1999); G. J. Milburn, Aust. J. Phys. 53, 477 (2000).
15. V. Aji, J. E. Moore and C. M. Varma, Electronic-vibrational coupling in single-molecule devices, cond-mat/0302222 (2003).
16. C. W. Gardiner and P. Zoller, Quantum Noise, 2nd edition (Springer-Verlag, Berlin, 2000).
17. R. I. Shekhter, Yu. Galperin, L. Y. Gorelik, A. Isacsson and M. Jonson, J. Phys.: Condens. Matter 15, R441 (2003).
18. G. Mahan, Many-Particle Physics (Plenum Press, 1990).
[Dataset record: DOI 10.1093/biomet/asad013 · arXiv 2108.12600 · PDF: https://export.arxiv.org/pdf/2108.12600v3.pdf]
A robust fusion-extraction procedure with summary statistics in the presence of biased sources
Ruoyu Wang
Qihua Wang
Wang Miao
Academy of Mathematics and Systems Science
Chinese Academy of Sciences
100190BeijingChina
School of Mathematical Sciences
University of Chinese Academy of Sciences
100049BeijingChina
Peking University
100871BeijingChina
Keywords: Data fusion; Inverse variance weighting; Mendelian randomization; Meta-analysis; Robust statistics
Information from multiple data sources is increasingly available. However, some data sources may produce biased estimates due to biased sampling, data corruption, or model misspecification. This calls for robust data combination methods with biased sources. In this paper, a robust data fusion-extraction method is proposed. In contrast to existing methods, the proposed method can be applied to the important case where researchers have no knowledge of which data sources are unbiased. The proposed estimator is easy to compute and only employs summary statistics, and hence can be applied to many different fields, e.g., meta-analysis, Mendelian randomization, and distributed systems. The proposed estimator is consistent even if many data sources are biased and is asymptotically equivalent to the oracle estimator that only uses unbiased data. Asymptotic normality of the proposed estimator is also established. In contrast to the existing meta-analysis methods, the theoretical properties are guaranteed even if the number of data sources and the dimension of the parameter diverge as the sample size increases. Furthermore, the proposed method provides a consistent selection for unbiased data sources with probability approaching one. Simulation studies demonstrate the efficiency and robustness of the proposed method empirically. The proposed method is applied to a meta-analysis data set to evaluate the surgical treatment for moderate periodontal disease and to a Mendelian randomization data set to study the risk factors of head and neck cancer.
Introduction
In the big data era, it is common to have different data sources addressing a specific scientific problem of interest. An important question is how to combine these data to draw a final conclusion. In practice, due to privacy concerns or data transmission restrictions, individual-level data from different sources are usually not all available to the researcher.
For some data sources, researchers only have access to certain summary statistics. To combine data information efficiently in this scenario, plenty of methods have been developed in the literature of meta-analysis, including the confidence distribution methods (Singh et al., 2005; Xie et al., 2011; Liu et al., 2015), the generalized method of moments (GMM), empirical likelihood-based methods (Sheng et al., 2020; Qin et al., 2015; Chatterjee et al., 2016; Kundu et al., 2019; Zhang et al., 2019, 2020), and calibration methods (Lin and Chen, 2014; Yang and Ding, 2020). In many cases, estimators provided by meta-analysis methods are as efficient as the pooled estimators that use all the individual-level data (Olkin and Sampson, 1998; Mathew and Nordstrom, 1999; Lin and Zeng, 2010; Xie et al., 2011; Liu et al., 2015). Moreover, meta-analysis methods have been applied to large-scale data sets to reduce the computation and communication complexity (Jordan, 2013; Fan et al., 2014; Wang et al., 2016), even though all individual-level data are available in this scenario.
Ideally, each data source provides valid summary statistics (or consistent local estimates) based on its own data. Unfortunately, the summary statistics (or local estimates) from some data sources may be invalid (or inconsistent) due to biased sampling, data corruption, model misspecification or other problems. Typical examples include Simpson's paradox in meta-analysis (Hanley and Thériault, 2000), the invalid instrument problem in Mendelian randomization (Qi and Chatterjee, 2019; Burgess et al., 2020) and the Byzantine failure problem in distributed estimation (Lamport et al., 1982; Yin et al., 2018; Tu et al., 2021).
Example 1 (Simpson's paradox in meta-analysis). The dataset reported by Hanley and Thériault (2000) consists of five case-control studies that examine the role of high voltage power lines in the etiology of leukemia in children. Hanley and Thériault (2000) point out that different data fusion methods provide opposite conclusions. The reason is that three studies are conducted among the entire population, while two other studies undertake their investigation among the subpopulations living close to the power lines and thus suffer from biased sampling. The two biased studies therefore render the final meta-analysis estimator biased. In this illustrative example, one knows which studies are biased and thus can just remove these studies. However, in practice, we seldom have such knowledge.
Example 2 (Mendelian randomization with invalid instruments). In Mendelian randomization, single nucleotide polymorphisms (SNPs) are used as instrumental variables to evaluate the causal effect of a risk factor on the outcome. A SNP is a valid instrumental variable if (i) it is associated with the risk factor of interest; (ii) there is no confounder of the SNP-outcome association and (iii) it does not affect the outcome directly. See the Mendelian randomization dictionary (Lawlor et al., 2019) for more details. Suppose we have access to summary data representing the estimated effect of the kth SNP on the risk factor ($\hat\beta_k$) and on the outcome ($\hat\gamma_k$) for $k = 1, \ldots, K$. If the kth SNP is a valid instrument, then $\hat\gamma_k/\hat\beta_k$ is a consistent estimator of the true causal effect. As data on several SNPs are available, we can use meta-analysis methods to produce a final estimator. However, in practice, a SNP may be an invalid instrument due to pleiotropy, linkage disequilibrium, and population stratification. In this case, $\hat\gamma_k/\hat\beta_k$ is no longer consistent and traditional meta-analysis methods can lead to biased estimates. In practice, however, it is hard to know which instrument is valid.
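To make the ratio estimator concrete, here is a toy numerical illustration (the numbers are entirely hypothetical, not from any real study); the invalid fourth instrument pulls a naive average away from the true effect, which motivates the robust combination developed in this paper:

```python
import numpy as np

# hypothetical summary statistics for K = 4 SNPs; SNP 4 is an invalid
# instrument (e.g., pleiotropic), so its ratio estimate is biased
beta_hat = np.array([0.10, 0.08, 0.12, 0.09])       # SNP -> risk factor
gamma_hat = np.array([0.020, 0.016, 0.024, 0.045])  # SNP -> outcome

ratio = gamma_hat / beta_hat
print(ratio)          # [0.2, 0.2, 0.2, 0.5]: the last estimate is off
print(ratio.mean())   # 0.275: the naive average is biased away from 0.2
```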
Example 3 (Byzantine failures in distributed systems). To reduce the computational burden with large-scale data, the calculation of estimators is often conducted on distributed systems. Summary statistics are calculated on each local machine and transmitted to the central machine. The central machine produces the final estimator by combining these summary statistics. In practice, some local machines may produce wrong results, and hence are biased sources, due to hardware or software breakdowns, data crashes, or communication failures; this is known as "Byzantine failure". Usually, one does not know which machines have Byzantine failures. Byzantine failures may deteriorate the performance of many divide-and-conquer methods.
As shown in the above three examples, many issues can result in biased estimation from a particular data source. In this paper, we call a data source biased if it produces an estimate that does not converge to the parameter of interest. Most of the aforementioned meta-analysis methods are not robust against the presence of biased sources, with the exceptions of Singh et al. (2005), Shen et al. (2020) and Zhai and Han (2022). Nevertheless, the method proposed in Singh et al. (2005) is limited to the one-dimensional parameter case. Moreover, to apply the methods of Singh et al. (2005) and Shen et al. (2020), there must be at least one known unbiased data source for reference. While this paper was under review, Zhai and Han (2022) proposed a data fusion method based on the empirical likelihood, which can deal with summary statistics from biased data sources. Their method relies on a parametric conditional density model and requires individual-level data from a data source that is known to be unbiased. However, such an unbiased data source is often unavailable in practice. The main challenge for such a problem is that we do not know which data sources are biased, and hence one cannot simply remove the biased sources. Clearly, the use of biased data sources works against defining a consistent estimator for the parameter of interest. This paper proposes a fusion-extraction procedure that defines a consistent estimator and an asymptotically normal estimator, respectively, by combining the summary statistics from all the data sources in the presence of biased sources. In contrast to existing methods, the proposed method is applicable without any knowledge of which sources are unbiased.
The proposed fusion-extraction procedure uses only summary statistics from different data sources and consists of two stages. In the first stage, we fuse the summary statistics from different data sources and obtain an initial estimator by minimizing the weighted Euclidean distance from the estimators of all data sources. In the second stage, with the assistance of the initial estimator and the penalization method, we extract information from the unbiased sources and obtain the final estimator via a convex optimization problem. The optimization problems in both stages can be solved efficiently. Biased data sources do not affect the consistency of our method, as long as the proportion of unbiased sources among all sources is not too small. Moreover, our method can be implemented without knowledge of which data sources are unbiased. This makes our method more practical.
The theoretical properties of our estimator are investigated under some mild conditions. We first show the consistency of the initial estimator produced by the first stage. Then we show that with this initial estimator, the final estimator produced by the second-stage optimization is close in Euclidean norm to the oracle estimator that uses only unbiased data sources. Furthermore, it is shown that the extraction procedure in the second stage can consistently select unbiased sources. Based on these "oracle properties", the asymptotic normality of the proposed estimator is also established. The established theorems are general in the sense that the number of data sources, K, and the dimension of the parameter, d, can diverge as the sample size increases. To our knowledge, no existing literature considers the meta-analysis problem in the presence of unknown biased sources when both K and d diverge.
Our method is robust to biased data sources and computationally simple, and hence can be applied to many different fields, e.g., meta-analysis, Mendelian randomization, and distributed systems. Besides its generality, it has some attractive properties across different fields. In contrast to existing works in meta-analysis with heterogeneous data sources (Singh et al., 2005; Shen et al., 2020; Zhai and Han, 2022), the proposed method does not require knowledge of which data sources are unbiased. In the field of Mendelian randomization with invalid instruments (Han, 2008), our method does not require that at least half of the instruments are valid, while allowing for multiple treatments or a diverging number of treatments. Furthermore, our method can also be applied to distributed systems with Byzantine failures (Lamport et al., 1982; Yin et al., 2018). In contrast to the existing work (Tu et al., 2021), the asymptotic normality of our method is guaranteed without requiring the proportion of biased sources among all sources to converge to zero. Moreover, our estimator has a faster convergence rate compared to that of Tu et al. (2021) if the proportion of biased sources does not converge to zero.
Simulations under different scenarios demonstrate the efficiency and robustness of the proposed method. The proposed method is applied to a meta-analysis data set (Berkey et al., 1998) to evaluate the surgical treatment for moderate periodontal disease, and a Mendelian randomization data set (Gormley et al., 2020) to study the risk factors of head and neck cancer. The real data analysis results show the robustness of the proposed method empirically.
The rest of this paper is organized as follows. In Section 2, we suggest a two-stage method to provide an estimator for the parameter of interest in the presence of biased sources. In Section 3.1, we investigate the theoretical properties of the proposed estimator under certain general conditions on convergence rate. Under further assumptions, we establish the asymptotic normality of the proposed estimator in Section 3.2. Simulation studies were conducted to evaluate the finite sample performance of our method in Section 4, followed by two real data examples in Section 5. Further simulation studies and all proofs are deferred to the supplementary material due to limited space.
2 Estimation in the presence of biased sources
Identification
Suppose $\theta_0$ is a d-dimensional parameter of interest and K data sources can be used to estimate this parameter. Each data source provides an estimator for the parameter of interest. Estimators from some data sources may be inconsistent for $\theta_0$. The data sources that provide inconsistent estimators are called biased data sources, but we do not know which data sources are biased. For $k = 1, \ldots, K$, let $\hat\theta_k$ be the estimate from the kth source, $n_k$ the sample size of the kth source, $n = \sum_{k=1}^K n_k$ and $\hat\pi_k = n_k/n$. Estimates from different sources may be constructed using different methods; let $\theta_k^*$ be their probability limits, respectively, i.e., $\|\hat\theta_k - \theta_k^*\| \to 0$ in probability for $k = 1, \ldots, K$, where $\|\cdot\|$ is the Euclidean norm. Then a data source is biased if $\theta_k^* \neq \theta_0$. We assume that some of the sources are unbiased in the sense that $\theta_k^* = \theta_0$, but we do not know which are unbiased. Let $\mathcal{K}_0 = \{k : \theta_k^* = \theta_0\}$ be the set of indices of unbiased sources and $b_k^* = \theta_k^* - \theta_0$ the bias of source k. Throughout this paper, we assume $\|b_k^*\|$ is bounded away from zero for $k \in \mathcal{K}_0^c = \{1, \ldots, K\} \setminus \mathcal{K}_0$. Since we do not have any knowledge about $\mathcal{K}_0$, we do not know whether $\theta_k^*$ equals the parameter of interest or not. Fortunately, the following proposition shows that $\theta_0$ can be identified as a weighted geometric median of the $\theta_k^*$, $k = 1, \ldots, K$, if the proportion of data from unbiased sources among all data is larger than a certain threshold.
Proposition 1. If
$$\sum_{k\in K_0}\pi_k > \Big\|\sum_{k\in K_0^c}\pi_k\frac{b_k^*}{\|b_k^*\|}\Big\|, \tag{1}$$
then
$$\theta_0 = \arg\min_\theta\sum_{k=1}^K\pi_k\|\theta_k^* - \theta\|. \tag{2}$$
See the supplementary material for the proof of this proposition. The proposition shows that $\theta_0$ can be uniquely determined by $\theta_k^*$ for $k = 1, \ldots, K$ if (1) holds. Note that
$$\Big\|\sum_{k\in K_0^c}\pi_k\frac{b_k^*}{\|b_k^*\|}\Big\| \leq \sum_{k\in K_0^c}\pi_k. \tag{3}$$
Thus a sufficient condition for (1) is
$$\sum_{k\in K_0}\pi_k > \sum_{k\in K_0^c}\pi_k, \quad\text{or equivalently}\quad \sum_{k\in K_0}\pi_k > \frac{1}{2}. \tag{4}$$
Inequality (4) requires that more than half of the data come from unbiased data sources, which is related to the majority rule widely adopted in the invalid-instrument literature (Kang et al., 2016; Bowden et al., 2016; Windmeijer et al., 2019). The previous analysis implies that (1) is true under (4). Next, we illustrate with a toy example that (1) can still hold even though less than half of the data come from unbiased sources. Suppose $d = 3$, $K = 6$, $\theta_0 = (0, 0, 0)^T$, $\pi_k = 1/6$, $b_1^* = b_2^* = (0, 0, 0)^T$, $b_3^* = (1, 0, 0)^T$, $b_4^* = (-2, 0, 0)^T$, $b_5^* = (0, 1, 0)^T$ and $b_6^* = (0, 0, 1)^T$. In this case, $\sum_{k\in K_0}\pi_k = 1/3 < 1/2$; however, $\big\|\sum_{k\in K_0^c}\pi_k b_k^*/\|b_k^*\|\big\| = \sqrt{2}/6 < 1/3$ and hence (1) is satisfied. In Section 4, we provide a further example where only 20% of the data come from unbiased sources and (1) still holds.
Theoretically, if the equality in (3) holds, then (1) is equivalent to (4); otherwise (1) is weaker than (4). The equality in (3) holds only if all the $b_k^*$'s happen to lie in the same direction, which is rarely true because the biases are often irregular in practice.
In Proposition 1, the quantity $\pi_k$ can be viewed as the weight attached to the $k$th data source for $k = 1, \ldots, K$. It is observed in meta-analysis that small studies tend to be of relatively low methodological quality and are more likely to be affected by publication and selection bias (Sterne et al., 2000). Thus, in this paper, we use weights $\{\pi_k\}_{k=1}^K$ that attach small weights to data sources with small sample sizes.
By replacing $\theta_k^*$ with $\hat\theta_k$ in equation (2), we can construct an estimator for $\theta_0$. We then use this estimator as an initial estimator to obtain a more efficient estimator.
2.2 Estimation
According to Proposition 1, we propose the following estimator $\tilde\theta$ that minimizes the weighted distance from the $\hat\theta_k$'s,
$$\tilde\theta = \arg\min_\theta\sum_{k=1}^K\pi_k\|\hat\theta_k - \theta\|. \tag{5}$$
This optimization problem is convex and can be solved efficiently by routine algorithms.
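For illustration, (5) can be solved directly with a generic convex solver; the following is a minimal sketch assuming cvxpy is available (the function and argument names are ours, not the paper's):

```python
# A minimal sketch of the initial estimator (5): the weighted geometric median
# of the source-specific estimates.
import cvxpy as cp
import numpy as np

def initial_estimator(theta_hat, n_k):
    """theta_hat: (K, d) array of source estimates; n_k: (K,) sample sizes."""
    K, d = theta_hat.shape
    pi = n_k / n_k.sum()  # weights pi_k = n_k / n
    theta = cp.Variable(d)
    # Objective of (5): sum_k pi_k * ||theta_hat_k - theta||_2, a convex function.
    objective = sum(pi[k] * cp.norm(theta_hat[k] - theta, 2) for k in range(K))
    cp.Problem(cp.Minimize(objective)).solve()
    return theta.value
```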
We show the consistency of $\tilde\theta$ in Section 3. However, owing to the well-known trade-off between robustness and efficiency (Hampel et al., 2005; Lindsay, 1994), the robust estimator $\tilde\theta$ may not be fully efficient. Also, it may have a large finite-sample bias because $\tilde\theta$ uses summary statistics from biased sources; our simulations confirm this. The large finite-sample bias implies that $\tilde\theta$ may not be $n^{1/2}$-consistent or asymptotically normal. Here we give a concrete example. Suppose $d = 1$, $K \to \infty$, $n_k = n/K$, $\hat\theta_k \sim N(\theta_k^*, 1/n_k)$ for $k = 1, \ldots, K$ and the $\hat\theta_k$'s are independent of each other. Assume $\theta_0 = 0$, $\theta_k^* = \theta_0 + b_k^*$, $b_k^* = 0$ for $k = 1, \ldots, (1/2 + \tau)K$ and $b_k^* = 1$ for $k = (1/2 + \tau)K + 1, \ldots, K$, where $0 < \tau < 1/2$. In the supplementary material, we show
$$P\Big(\|\tilde\theta - \theta_0\| \geq \frac{K^{1/2}h^*}{n^{1/2}}\Big) \to 1, \tag{6}$$
where $h^* = \Phi^{-1}\big((3/8 + \tau/4)/(1/2 + \tau)\big)$ and $\Phi$ is the cumulative distribution function of the standard normal distribution. Because $h^* > 0$ and $K \to \infty$, (6) implies that $\tilde\theta$ is not $n^{1/2}$-consistent.
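A quick Monte Carlo run illustrates this phenomenon; the choices $\tau = 0.1$ and $n_k = 50$ below are illustrative, not from the paper:

```python
# Illustration of the counterexample: the scaled error sqrt(n) * |median - theta_0|
# grows on the order of sqrt(K), in line with the lower bound (6), so the
# median-based estimator is not sqrt(n)-consistent here.
import numpy as np

rng = np.random.default_rng(0)
tau, n_star = 0.1, 50
for K in (100, 400, 1600):
    m0 = int((0.5 + tau) * K)  # number of unbiased sources
    b = np.concatenate([np.zeros(m0), np.ones(K - m0)])
    draws = b + rng.normal(size=(2000, K)) / np.sqrt(n_star)  # theta_hat_k ~ N(b_k*, 1/n_k)
    err = np.abs(np.median(draws, axis=1))
    print(K, np.sqrt(K * n_star) * err.mean())  # grows with sqrt(K)
```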
Besides the aforementioned issue, the covariance structure of $\hat\theta_k$ for $k = 1, 2, \ldots, K$ is not considered in the construction of the estimator $\tilde\theta$, and this may lead to a loss of efficiency.
These facts motivate us to propose an estimator that is not only $n^{1/2}$-asymptotically normal but also more efficient, by using a penalization technique and incorporating the covariance structures of $\hat\theta_k$ for $k = 1, 2, \ldots, K$.
It is well known that the oracle inverse-variance weighted (IVW) estimator
$$\hat\theta_{\mathrm{IVW}} = \arg\min_\theta\sum_{k\in K_0}\frac{\pi_k}{2}(\hat\theta_k - \theta)^T\tilde V_k(\hat\theta_k - \theta) \tag{7}$$
is the most efficient meta-analysis estimator and asymptotically normal if $K_0$ is known and $n_k^{1/2}(\hat\theta_k - \theta_k^*) \to N(0, \Sigma_k)$ for each $k$ in $K_0$ (Lin and Zeng, 2010; Xie et al., 2011; Burgess et al., 2020), where $\tilde V_k$ is a consistent estimator of $\Sigma_k^{-1}$ for $k \in K_0$. See Shen et al. (2020) for further discussion on the optimality of $\hat\theta_{\mathrm{IVW}}$. In general, the $\tilde V_k$'s can be any positive definite matrices if an estimator for $\Sigma_k^{-1}$ is unavailable (Liu et al., 2015), and $\hat\theta_{\mathrm{IVW}}$ is still $n^{1/2}$-consistent and asymptotically normal under certain regularity conditions. However, $\hat\theta_{\mathrm{IVW}}$ is infeasible if $K_0$ is unknown. Next, we develop a penalized inverse-variance weighted estimation method, which obviates the need to know $K_0$ and is asymptotically equivalent to $\hat\theta_{\mathrm{IVW}}$ under mild conditions.
To obtain a feasible estimator, we first replace $K_0$ with $\{1, \ldots, K\}$ and obtain the objective function
$$\sum_{k=1}^K\frac{\pi_k}{2}(\hat\theta_k - \theta)^T\tilde V_k(\hat\theta_k - \theta). \tag{8}$$
Simply minimizing (8) with respect to $\theta$ may produce an inconsistent estimator because some of the $\hat\theta_k$'s may not converge to $\theta_0$. Noticing that $\|\hat\theta_k - \theta_0 - b_k^*\| \to 0$ in probability for $k = 1, \ldots, K$, we add bias parameters $b_k$ to (8) and get
$$\sum_{k=1}^K\frac{\pi_k}{2}(\hat\theta_k - \theta - b_k)^T\tilde V_k(\hat\theta_k - \theta - b_k). \tag{9}$$
One may expect to recover $\theta_0$ and $b_1^*, \ldots, b_K^*$ by minimizing (9) with respect to $\theta, b_1, \ldots, b_K$. Unfortunately, this is not the case because, for any given $\theta$, (9) is minimized as long as $b_k$ takes the value $\hat\theta_k - \theta$ for $k = 1, \ldots, K$. Hence the $\theta, b_1, \ldots, b_K$ that minimize (9) are not necessarily close to $\theta_0, b_1^*, \ldots, b_K^*$. To resolve this problem, we leverage the fact that $b_k^* = 0$ for $k \in K_0$ and impose penalties that force $b_k$ to be zero for $k \in K_0$ while leaving $b_k$ for $k \in K_0^c$ essentially unconstrained. Hence we want to impose a large penalty on $b_k$ for $k \in K_0$ and no or a small penalty on $b_k$ for $k \in K_0^c$. To this end, we make use of the consistent estimator $\tilde\theta$ and define the following estimator
$$(\hat\theta^T, \hat b_1^T, \ldots, \hat b_K^T)^T \in \arg\min_{\theta, b_1, \ldots, b_K}\sum_{k=1}^K\Big\{\frac{\pi_k}{2}(\hat\theta_k - \theta - b_k)^T\tilde V_k(\hat\theta_k - \theta - b_k) + \lambda\tilde w_k\|b_k\|\Big\}, \tag{10}$$
where $\tilde w_k = \|\tilde b_k\|^{-\alpha}$, $\tilde V_k$ is some weighting matrix and $\lambda$ is a tuning parameter, with $\alpha > 0$ and $\tilde b_k = \hat\theta_k - \tilde\theta$ being an initial estimator of $b_k^*$. For $k \in K_0$, $\tilde w_k$ tends to be large because $\tilde b_k \to 0$ in probability, so $b_k$ is likely to be estimated as zero in (10). On the other hand, because $\|\tilde b_k\| \to \|b_k^*\| > 0$ for $k \in K_0^c$, a smaller penalty is imposed on $b_k$ for $k \in K_0^c$ than for $k \in K_0$.
The optimization problem in (10) produces a continuous solution and is computationally attractive due to its convexity (Zou, 2006). We propose $\hat\theta$ as an estimator for $\theta_0$. The form of (10) is akin to the adaptive lasso (Zou, 2006) and the group lasso (Yuan and Lin, 2006). The optimization problem in (10) can be rewritten as an adaptive group lasso problem and solved efficiently by the R package gglasso (https://cran.r-project.org/web/packages/gglasso/index.html). Note that $\tilde\theta$ contributes to $\hat\theta$ through $\tilde w_k$; this may help to select the estimates from unbiased sources and control the bias of $\hat\theta$. We show in Section 3 that $\hat\theta$ performs as well as the oracle estimator $\hat\theta_{\mathrm{IVW}}$.
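For illustration, (10) can also be solved directly as a convex program. The sketch below is our own illustrative code with a generic solver rather than gglasso; it takes the $\tilde V_k$'s, the tuning parameter $\lambda$ and the initial estimator $\tilde\theta$ as inputs:

```python
# A minimal sketch of the penalized IVW estimator (10), assuming cvxpy.
import cvxpy as cp
import numpy as np

def penalized_ivw(theta_hat, n_k, V_tilde, theta_init, lam, alpha=1.0):
    """theta_hat: (K, d) estimates; V_tilde: list of K (d, d) PSD matrices;
    theta_init: the initial estimator tilde-theta from (5)."""
    K, d = theta_hat.shape
    pi = n_k / n_k.sum()
    # Adaptive weights w_k = ||b_tilde_k||^{-alpha}, with b_tilde_k = theta_hat_k - theta_init.
    w = np.linalg.norm(theta_hat - theta_init, axis=1) ** (-alpha)
    theta, b = cp.Variable(d), cp.Variable((K, d))
    objective = sum(
        0.5 * pi[k] * cp.quad_form(theta_hat[k] - theta - b[k], V_tilde[k])
        + lam * w[k] * cp.norm(b[k], 2)
        for k in range(K)
    )
    cp.Problem(cp.Minimize(objective)).solve()
    return theta.value, b.value
```

With a generic solver the fitted $\hat b_k$ are only numerically zero, so a small threshold can be used when reading off which sources are treated as unbiased.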
2.3 Implementation in examples
Equations (5) and (10) provide two general estimating procedures in the presence of biased data sources and can be applied to many specific problems. Only the estimates $\hat\theta_k$ from the different data sources are required to conduct the proposed procedure. Specifically, in Example 1, we can take $\hat\theta_k$ to be the estimate from the $k$th study for $k = 1, \ldots, 5$. In Example 2, we let $\hat\theta_k = \hat\gamma_k/\hat\beta_k$ and use the proposed procedure to deal with the invalid instrument problem.
In Example 3, we use the output of each local machine as $\hat\theta_k$ and apply our method to mitigate the effects of Byzantine failures. Since the proposed fusion-extraction procedures are widely applicable, we investigate their theoretical properties in the next section.
3 Theoretical properties

3.1 Consistency and oracle properties
In this subsection, we provide asymptotic results for the estimators proposed in Section 2.2. In our theoretical development, both the dimension $d$ of the parameter and the number of sources $K$ are allowed to diverge as $n \to \infty$. Let
$$\delta = \sum_{k\in K_0}\pi_k - \Big\|\sum_{k\in K_0^c}\pi_k\frac{b_k^*}{\|b_k^*\|}\Big\|.$$
Then (1) is equivalent to δ > 0 and we have the following theorem.
Theorem 1. If $\delta > 0$ and $\delta^{-1}\max_k\|\hat\theta_k - \theta_k^*\| \to 0$ in probability, then $\|\tilde\theta - \theta_0\| \to 0$ in probability.
Proof of this theorem is relegated to the supplementary material. Theorem 1 establishes the consistency of $\tilde\theta$ under the condition that $\delta$ is not too small and that $\hat\theta_k$ converges uniformly for $k = 1, \ldots, K$. If $K$ is fixed, the condition $\delta^{-1}\max_k\|\hat\theta_k - \theta_k^*\| \to 0$ in probability can be satisfied as long as $\delta$ is positive and bounded away from zero. Having established the theoretical property of the initial estimator $\tilde\theta$, we next investigate the theoretical properties of $\hat\theta$ defined in (10). As pointed out previously, $\hat\theta_{\mathrm{IVW}}$ is an oracle estimator that uses summary data from unbiased sources only, combining them in an efficient way (Lin and Zeng, 2010; Xie et al., 2011). It is of interest to investigate how far the proposed estimator $\hat\theta$ departs from the oracle estimator $\hat\theta_{\mathrm{IVW}}$. To establish the convergence rate of $\|\hat\theta - \hat\theta_{\mathrm{IVW}}\|$, the following conditions are required. For any symmetric matrix $A$, let $\lambda_{\min}(A)$ and $\lambda_{\max}(A)$ be the minimum and maximum eigenvalues of $A$, respectively. We use $\|\cdot\|$ to denote the Euclidean norm when applied to a vector and the spectral norm when applied to a matrix.
Condition 1. $\sum_{k\in K_0}\pi_k$ is bounded away from zero.
Condition 2. There are some deterministic matrices $V_k^*$ ($k \in K_0$) such that $\max_{k\in K_0}\|\tilde V_k - V_k^*\| = o_P(1)$, where $\tilde V_k$ is the weighting matrix appearing in (10). Moreover, the eigenvalues of $V_k^*$ are bounded away from zero and infinity for $k \in K_0$.
Condition 3. $K = O(n^{\nu_1})$, $\delta > 0$ and $\delta^{-1}\max_k\|\hat\theta_k - \theta_k^*\| = O_P(n^{-\nu_2})$ for some $\nu_1 \in [0, 1)$ and $\nu_2 \in (0, 1)$.
Condition 1 assumes that the proportion of data from unbiased data sources is bounded away from zero, which is a reasonable requirement. The weighting matrix $\tilde V_k$ may affect the performance of the resulting estimator. In many cases, the optimal choice of $\tilde V_k$ is shown to be the inverse of $\hat\theta_k$'s asymptotic variance matrix (Lin and Zeng, 2010; Liu et al., 2015).
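For instance, for a least squares source under the homoscedastic linear model, the inverse of the estimated asymptotic variance matrix takes a simple form; the following minimal sketch (our own illustration) computes this choice:

```python
# A sketch of the weighting matrix for a least squares source: an estimate of
# Sigma_k^{-1}, the inverse asymptotic variance of sqrt(n_k)(theta_hat_k - theta_k*).
import numpy as np

def ls_weight_matrix(X, y, theta_hat_k):
    n, d = X.shape
    resid = y - X @ theta_hat_k
    sigma2 = resid @ resid / (n - d)  # residual variance estimate
    return (X.T @ X) / (n * sigma2)   # estimates E[X X^T] / sigma^2 = Sigma_k^{-1}
```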
Condition 2 can easily be satisfied if the inverses of the estimated asymptotic variance matrices are used and $d$, $K$ are not too large (Vershynin, 2018; Wainwright, 2019). There are some difficulties in estimating the asymptotic variance matrix and its inverse when the dimension is high (Wainwright, 2019). In addition, sometimes the estimated asymptotic variance matrix is not available from the summary statistics (Liu et al., 2015). However, Condition 2 only requires $\tilde V_k$ to converge to some nonsingular matrix. In these cases, we can simply take $\tilde V_k$ to be the identity matrix for $k = 1, \ldots, K$, and Condition 2 is always satisfied. Condition 3 assumes that the number of data sources $K$ is not too large.
The convergence rate in Condition 3 can be satisfied by many commonly used estimators, e.g., the maximum likelihood estimator and lasso-type estimators, under certain regularity conditions (Spokoiny, 2012; Battey et al., 2018). Then we are ready to state the theorem.
Theorem 2. Under Conditions 1, 2 and 3, if $\lambda \asymp 1/n$ and $\alpha > \max\{\nu_1\nu_2^{-1}, \nu_2^{-1} - 1\}$, we have
$$\|\hat\theta - \hat\theta_{\mathrm{IVW}}\| = O_P\Big(\frac{K}{n}\Big).$$
Proof of this theorem is in the supplementary material. Theorem 2 establishes the convergence rate of $\|\hat\theta - \hat\theta_{\mathrm{IVW}}\|$, which indicates that the proposed estimator is close to the oracle estimator. If $K = o(n^{1/2})$, then $\|\hat\theta - \hat\theta_{\mathrm{IVW}}\| = o_P(n^{-1/2})$ and our proposal is asymptotically equivalent to the oracle estimator up to an error term of order $o_P(n^{-1/2})$.
The estimator proposed in Shen et al. (2020) possesses a similar asymptotic equivalence property. However, the theoretical results in Shen et al. (2020) require $d$ and $K$ to be fixed, which is not required by Theorem 2. Moreover, implementation of the estimator proposed in Shen et al. (2020) requires at least one known unbiased data source. In contrast, we do not need any information about $K_0$ to calculate $\hat\theta$.
Theorem 2 is generic in the sense that it relies only on some convergence rate conditions and does not impose restrictions on the form of $\hat\theta_k$. In practice, $\hat\theta_k$ may be calculated from complex data, such as survey sampling or time-series data, via some complex procedure, such as deep learning or a lasso-type penalization procedure. In these cases, Theorem 2 still holds as long as Conditions 2 and 3 are satisfied. Moreover, Theorem 2 does not require the $\hat\theta_k$'s to be independent of each other, which ensures the validity of the theorem in meta-analysis with overlapping subjects (Lin and Sullivan, 2009) or one-sample Mendelian randomization (Minelli et al., 2021).
When solving (10), we also get an estimator $\hat b_k$ of the bias. A natural question is whether $\{\hat b_k\}_{k=1}^K$ selects the unbiased sources consistently, that is, whether $\hat K_0 = K_0$ with probability approaching one, where $\hat K_0 = \{k : \hat b_k = 0\}$.
To assure this selection consistency, a stronger version of Condition 2 is required.
Condition 4. There are some deterministic matrices $V_k^*$ ($k = 1, \ldots, K$) such that $\max_k\|\tilde V_k - V_k^*\| = o_P(1)$, where $\tilde V_k$ is the weighting matrix appearing in (10). Moreover, the eigenvalues of $V_k^*$ are bounded away from zero and infinity for $k = 1, \ldots, K$.
This condition requires that Condition 2 holds not only for $k \in K_0$ but also for $k \in K_0^c$, which is still a mild requirement. Then we are ready to establish the selection consistency.
Theorem 3. Under Conditions 1, 3 and 4, if $\lambda \asymp 1/n$ and $\alpha > \max\{\nu_1\nu_2^{-1}, \nu_2^{-1} - 1\}$, we have
$$P(\hat K_0 = K_0) \to 1,$$
provided $\min_{k\in K_0^c}\pi_k > C_\pi/K$ and $K\log n/n \to 0$, where $C_\pi$ is some positive constant.
Proof of this theorem is relegated to the supplementary material.
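In practice, $\hat K_0$ can be read off from the fitted bias vectors of (10); a small sketch (with an illustrative numerical tolerance, since generic solvers return $\hat b_k$ that are only approximately zero):

```python
# Selecting the sources estimated as unbiased from the fitted biases of (10).
import numpy as np

def estimated_unbiased_set(b_hat, tol=1e-6):
    """b_hat: (K, d) fitted bias vectors; returns hat-K_0 = {k : b_hat_k = 0}."""
    return {k for k in range(b_hat.shape[0]) if np.linalg.norm(b_hat[k]) < tol}
```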
3.2 Asymptotic normality
In this subsection, we establish the asymptotic normality of the proposed estimator $\hat\theta$. Under Conditions 1, 2 and 3, if $K = o(n^{1/2})$, then $\|\hat\theta - \hat\theta_{\mathrm{IVW}}\| = o_P(n^{-1/2})$. If $\hat\theta_{\mathrm{IVW}}$ is $n^{1/2}$-asymptotically normal, then $\hat\theta$ is $n^{1/2}$-asymptotically normal and has the same asymptotic variance as $\hat\theta_{\mathrm{IVW}}$. There exist some results on the asymptotic normality of $\hat\theta_{\mathrm{IVW}}$ in the literature. However, these results either focus on the fixed-dimension case (Lin and Zeng, 2010; Zhu et al., 2021) or are only suited to some specific estimators under sparse linear or generalized linear models (Battey et al., 2018). Here, we establish the asymptotic normality of $\hat\theta_{\mathrm{IVW}}$, and hence of $\hat\theta$, in a general setting where $d$ and $K$ can diverge and $\hat\theta_k$ ($k \in K_0$) can be any estimator that admits a uniformly asymptotically linear representation as defined below. Suppose the original data from the $k$th source, $Z_1^{(k)}, \ldots, Z_{n_k}^{(k)}$, are i.i.d. copies of $Z^{(k)}$ and the data from different data sources are independent of each other. Then we are ready to state the condition.
Condition 5 (Uniformly asymptotically linear representation). For $k \in K_0$, there is some function $\Psi_k(\cdot)$ such that $E[\Psi_k(Z^{(k)})] = 0$ and
$$\hat\theta_k - \theta_0 = \frac{1}{n_k}\sum_{i=1}^{n_k}\Psi_k(Z_i^{(k)}) + R_k, \tag{11}$$
where $R_k$ satisfies $\max_k\|R_k\| = o_P(n^{-1/2})$.
Some examples satisfying Condition 5 will be discussed later. With the assistance of the uniformly asymptotically linear representation condition (Condition 5), we can establish the asymptotic normality of $\hat\theta_{\mathrm{IVW}}$ and hence of $\hat\theta$.
Theorem 4. Suppose Conditions 1, 3 and 5 hold. If (i) $\nu_1 < 1/2$; (ii) there are some deterministic matrices $V_k^*$, $k \in K_0$, such that $\max_{k\in K_0}\|\tilde V_k - V_k^*\| = o_P(n^{-1/2+\nu_2})$; (iii) for $k \in K_0$, the eigenvalues of $V_k^*$ and $\mathrm{var}(\Psi_k(Z^{(k)}))$ are bounded away from zero and infinity; (iv) for $k \in K_0$, $u \in \mathbb{R}^d$ with $\|u\| = 1$ and some $\tau > 0$, $E[|u^T\Psi_k(Z^{(k)})|^{1+\tau}]$ are bounded; (v) $\lambda \asymp 1/n$ and $\alpha > \max\{\nu_1\nu_2^{-1}, \nu_2^{-1} - 1\}$, then for any fixed $q$ and $q\times d$ matrix $W_n$ such that the eigenvalues of $W_nW_n^T$ are bounded away from zero and infinity, we have
$$n^{1/2}I_n^{-1/2}W_n(\hat\theta - \theta_0) \to N(0, I_q)$$
in distribution, where $I_q$ is the identity matrix of order $q$,
$$I_n = \sum_{k\in K_0}\pi_kH_{n,k}\,\mathrm{var}\big(\Psi_k(Z^{(k)})\big)H_{n,k}^T \quad\text{with}\quad H_{n,k} = W_nV_0^{*-1}V_k^* \ \text{and}\ V_0^* = \sum_{k\in K_0}\pi_kV_k^*.$$
Proof of this theorem is in the supplementary material. Many estimators have the asymptotically linear representation (11) with $\|R_k\| = o_P(n_k^{-1/2})$; see for instance Bickel et al. (1993), Spokoiny (2013), Zhou et al. (2018) and Chen and Zhou (2020). For these estimators, Condition 5 is satisfied if the remainder terms $R_k$ are uniformly small for $k \in K_0$ in the sense that $\max_{k\in K_0}\|R_k\| = o_P(n^{-1/2})$. If $K$ is fixed, then $\max_{k\in K_0}\|R_k\| = o_P(n^{-1/2})$ as long as the $\pi_k$'s are bounded away from zero. For the case where $K \to \infty$, we show in the supplementary material that Condition 5 holds under some regularity conditions if the $\hat\theta_k$'s are M-estimators, i.e.,
$$\hat\theta_k = \arg\min_\theta\frac{1}{n_k}\sum_{i=1}^{n_k}L_k(Z_i^{(k)}, \theta), \quad k = 1, \ldots, K,$$
where $L_k(\cdot, \cdot)$ is some loss function that may differ from source to source. The result on M-estimators covers many commonly used estimators, e.g., the least squares estimator and the maximum likelihood estimator.
Theorem 4 establishes the asymptotic normality of $\hat\theta$. Unlike the existing works that can deal with biased sources in the meta-analysis literature (Singh et al., 2005; Shen et al., 2020; Zhai and Han, 2022), both the proposed estimators $\tilde\theta$ and $\hat\theta$ can be obtained without any knowledge of $K_0$. Compared to existing estimators in the Mendelian randomization literature that focus on a one-dimensional parameter (Kang et al., 2016; Bowden et al., 2016; Windmeijer et al., 2019; Hartwig et al., 2017; Guo et al., 2018; Ye et al., 2021), the proposed $\hat\theta$ is applicable when a multidimensional parameter is of interest. Moreover, the corresponding theoretical results are more general in the sense that they allow both $d$ and $K$ to diverge as the sample size increases. Thus, besides univariable Mendelian randomization, our method can also be applied to multivariable Mendelian randomization (Burgess and Thompson, 2015; Rees et al., 2017; Sanderson et al., 2019) in the presence of invalid instruments. Recently, Tu et al. (2021) developed a method that can deal with biased sources and also allows $d$ and $K$ to diverge. In contrast to their work, the estimator obtained by our method achieves $n^{1/2}$-asymptotic normality without requiring the proportion of biased sources among all sources to converge to zero. According to the discussion after Proposition 1, the asymptotic normality of $\hat\theta$ is guaranteed even if more than half of the data come from biased sources. Therefore, our method is quite robust against biased sources, and this is confirmed by our simulation results in the next section.
4 Simulation
In this section, we conducted three simulation studies to evaluate the empirical performance of the proposed methods. We consider different combinations of $d$ and $K$, with $d = 3, 18$ and $K = 10, 30$.
4.1 Least squares regression
First, we consider the case where the $\hat\theta_k$'s are obtained via least squares. Let $1_s$ be the $s$-dimensional vector consisting of 1's and $\otimes$ the Kronecker product. In this simulation, the data from the $k$th source are generated from the following data generation process:
$$X_k \sim N_d(0, 3I_d), \qquad Y_k \mid X_k \sim N\big(X_k^T(\theta_0 + b_k^*), 1\big),$$
where $I_d$ is the identity matrix of order $d$, $\theta_0 = 1_{d/3}\otimes(2, 1, -1)^T$, $(b_1^*, \ldots, b_K^*) = 1_{K/10}^T\otimes 1_{d/3}\otimes B$ and
$$B = \begin{pmatrix} 0 & 0 & 5 & -1 & 1 & 1 & -2 & -2 & 5 & -1 \\ 0 & 0 & 0 & 0 & 0 & -1 & 0 & 2 & 5 & -1 \\ 0 & 0 & 0 & 0 & -1 & 1 & 2 & -2 & 5 & 1 \end{pmatrix}. \tag{12}$$
From each data source, an i.i.d. sample of size $n^*$ is generated. We consider three different values of $n^*$, namely $n^* = 100$, $200$ or $500$. In this simulation setting, only 20% of the data come from unbiased sources. However, the biases do not lie in the same direction, and it can be verified by straightforward calculation that $\delta > 0$. Let $\hat\theta_k$ be the least squares estimator from the $k$th data source. In the simulation, we simply take $\tilde V_k$ to be the identity matrix to ensure that all the conditions on $\tilde V_k$ in this paper are satisfied. We compute the naive estimator $\sum_{k=1}^K\hat\theta_k/K$, the oracle estimator $\hat\theta_{\mathrm{IVW}}$, the iFusion estimator proposed by Shen et al. (2020), the initial estimator $\tilde\theta$ and the proposed estimator $\hat\theta$. Note that the iFusion estimator is infeasible unless at least one data source is known to be unbiased; in this section, we always assume that the first data source is known to be unbiased when computing the iFusion estimator. This information is not required by $\tilde\theta$ and $\hat\theta$. The following table presents the norm of the bias vector (NB) and the sum of the component-wise standard errors (SSE) of these estimators calculated from 200 simulated data sets for all four combinations of $d$ and $K$ with $d = 3, 18$ and $K = 10, 30$.
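For concreteness, a sketch of this data generation process for a single source is given below; assembling $(b_1^*, \ldots, b_K^*)$ from $B$ in (12) is left implicit, and the names are illustrative:

```python
# A sketch of the Section 4.1 data-generating process for the kth source.
import numpy as np

def generate_source_estimate(b_star_k, n_star, d, theta0, rng):
    X = rng.normal(0.0, np.sqrt(3.0), size=(n_star, d))    # X ~ N_d(0, 3 I_d)
    y = X @ (theta0 + b_star_k) + rng.normal(size=n_star)  # unit-variance noise
    theta_hat_k, *_ = np.linalg.lstsq(X, y, rcond=None)    # least squares estimate
    return theta_hat_k
```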
[Insert Table 1 about here.]

The naive estimator has a large bias, which renders its small standard error meaningless.
The bias of all the other estimators decreases as $n^*$ increases. The iFusion estimator performs similarly to the oracle estimator when $d$ and $K$ are small. However, if $d = 18$ and $K = 30$, it has a much larger standard error than the oracle estimator and $\hat\theta$. The initial estimator $\tilde\theta$ performs well in terms of standard error. Nevertheless, it has a far larger bias than the oracle estimator and $\hat\theta$, especially when $d$ and $K$ are large; the reason may be that it is not $n^{1/2}$-consistent. The performance of $\hat\theta$ is similar to that of the oracle estimator.
This confirms the asymptotic equivalence between $\hat\theta$ and $\hat\theta_{\mathrm{IVW}}$ established in Section 3.1.
Next, we evaluate the performance of our methods when all the data sources are unbiased. We set $b_k^*$ to be a zero vector for $k = 1, \ldots, K$ while keeping other parameters unchanged. In this scenario, the naive estimator reduces to the oracle estimator. NB and SSE of the estimators calculated from 200 simulated data sets for all the combinations of $d$ and $K$ are summarized in the following table.
[Insert Table 2 about here.]

Table 2 shows that the iFusion estimator has a slightly larger bias and a much larger standard error than the other estimators when $d = 18$. All other estimators have similar performance when there are no biased sources. This implies that there is little loss of efficiency in applying our methods when all the sources are unbiased.
4.2 Logistic regression
In this subsection, we conducted a simulation study under the scenario where the responses are binary and the $\hat\theta_k$'s are obtained via logistic regression. All simulation settings are the same as in Section 4.1 except that $Y_k \mid X_k \sim \mathrm{Bernoulli}\big(t(X_k^T(\theta_0 + b_k^*))\big)$ and $\hat\theta_k$ is the maximum likelihood estimator of the logistic regression model from the $k$th data source, where $t(x) = \exp(x)/(1 + \exp(x))$ is the logistic function, $\theta_0 = 0.1\times 1_{d/3}\otimes(2, 1, -1)^T$, $(b_1^*, \ldots, b_K^*) = 0.5\times 1_{K/10}^T\otimes 1_{d/3}\otimes B$ and $B$ is defined in (12). We add a small ridge penalty when solving for $\hat\theta_k$ to avoid the problem that the maximum likelihood estimator may not be uniquely determined in finite samples (Silvapulle, 1981). NB and SSE of these estimators calculated from 200 simulated data sets are summarized in the following table.
[Insert Table 3 about here.]
The naive estimator has a large bias, and the bias does not decrease as $n^*$ increases. The bias of all the other estimators decreases as $n^*$ increases. The iFusion estimator has a bias similar to that of the oracle estimator, but its standard error is much larger than those of the oracle estimator and $\hat\theta$, especially when $d$ and $K$ are large and $n^*$ is small. The initial estimator $\tilde\theta$ has a much larger bias than the oracle estimator and the proposed estimator $\hat\theta$, consistent with the simulation results under least squares regression. The performance of $\hat\theta$ is similar to that of the oracle estimator.
Next, we evaluate the performance of our methods when all the data sources are unbiased. We set $b_k^*$ to be the zero vector for $k = 1, \ldots, K$ while keeping other parameters unchanged. The naive estimator reduces to the oracle estimator in this scenario. NB and SSE of the estimators calculated from 200 simulated data sets are summarized in the following table.
[Insert Table 4 about here.]

All the estimators have similar performance in Table 4, except that the iFusion estimator has a much larger standard error than the other estimators; again, there is little loss of efficiency in applying our methods when all the sources are unbiased.
4.3 Mendelian randomization with invalid instruments
We consider Mendelian randomization with invalid instruments in this subsection. To closely mimic what we will encounter in practice, we generate data based on a real-world data set, the BMI-SBP data set in the R package mr.raps (version 0.4) of Zhao et al. (2020).
The data set contains estimates $\{\hat\beta_k\}_{k=1}^{160}$ of the effects of 160 different SNPs on Body Mass Index (BMI) and the corresponding standard errors $\{\hat\sigma_{1,k}\}_{k=1}^{160}$ from a study by the Genetic Investigation of ANthropometric Traits consortium (Locke et al., 2015) (sample size: 152,893), and estimates $\{\hat\gamma_k\}_{k=1}^{160}$ of the effects on Systolic Blood Pressure (SBP) and the corresponding standard errors $\{\hat\sigma_{2,k}\}_{k=1}^{160}$ from the UK BioBank (sample size: 317,754). The goal is to estimate the causal effect of BMI on SBP.
In this simulation, we generate data via the following process:
$$\hat\beta_k \sim N(\beta_k, \hat\sigma_{1,k}^2), \quad k = 1, \ldots, 160,$$
$$\hat\gamma_k \sim N(\beta_k\theta_0 + 0.15 + 3\beta_k, \hat\sigma_{2,k}^2), \quad k = 1, \ldots, 100, \quad\text{and}\quad \hat\gamma_k \sim N(\beta_k\theta_0, \hat\sigma_{2,k}^2), \quad k = 101, \ldots, 160,$$
where $\beta_k$ and $\hat\sigma_{1,k}, \hat\sigma_{2,k}$ are taken from the real data set
and $\theta_0 = 1$. Then $\hat\theta_k$ is given by $\hat\gamma_k/\hat\beta_k$. Under this data generation process, 100 out of 160 instruments are invalid instruments. We apply the proposed methods to estimate $\theta_0$ based on the $\hat\theta_k$'s. For comparison, we apply five standard methods in Mendelian randomization, namely the MR-Egger regression (Bowden et al., 2015), the weighted median method (Bowden et al., 2016), the IVW method, the weighted mode method (Hartwig et al., 2017) and the robust adjusted profile score method (RAPS, Zhao et al., 2020). Results of these five methods are calculated with the R package TwoSampleMR (https://github.com/MRCIEU/TwoSampleMR). Bias and standard error (SE) of the estimators based on 200 simulations are summarized in the following table.

[Insert Table 5 about here.]

Table 5 shows that $\hat\theta$ has the smallest bias among all the estimators. The standard error of the proposed $\hat\theta$ is smaller than those of the other estimators except for the weighted median. However, the weighted median estimator has a much larger bias than $\hat\theta$.
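For completeness, a sketch of the generation of the summary statistics above is given below; `beta`, `s1` and `s2` stand for the real-data effect estimates and standard errors described earlier, and the names are illustrative:

```python
# A sketch of the Section 4.3 simulation: the first n_invalid SNPs are invalid
# instruments with direct effect 0.15 + 3 * beta_k.
import numpy as np

def simulate_mr_estimates(beta, s1, s2, theta0=1.0, n_invalid=100, seed=0):
    rng = np.random.default_rng(seed)
    beta_hat = rng.normal(beta, s1)
    pleiotropy = np.where(np.arange(len(beta)) < n_invalid, 0.15 + 3 * beta, 0.0)
    gamma_hat = rng.normal(beta * theta0 + pleiotropy, s2)
    return gamma_hat / beta_hat  # theta_hat_k = gamma_hat_k / beta_hat_k
```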
5 Real Data Analysis
5.1 Effects of surgical procedure for the treatment of moderate periodontal disease
In this subsection, we apply our methods to the data set provided in Berkey et al. (1998). Data used in this subsection are available from the R package mvmeta (https://cran.r-project.org/web/packages/mvmeta/index.html). The data set contains the results of five randomized controlled trials comparing the effect of surgical and non-surgical treatments for moderate periodontal disease. In all these studies, different segments of each patient's mouth were randomly allocated to different treatment procedures. Two outcomes, probing depth (PD) and attachment level (AL), were assessed for each patient. The goal of treatment is to decrease PD and to increase AL around the teeth. The data set provides the estimated benefit of surgical treatment over non-surgical treatment in PD and AL (positive values mean that surgery results in a better outcome). The sample size of each study and the estimated covariance matrix of the estimators are also available. The inverse-variance weighted method using all data sources produces the estimate $(0.307, -0.394)$ for the effect on (PD, AL). Applying our methods, we obtain $\tilde\theta = (0.260, -0.310)$ and $\hat\theta = (0.282, -0.303)$. Next, we assess the robustness of our methods against bias in the published results. To do this, we add a perturbation $t\times(1, -3)$ to the first published result in the data set; after perturbing, the first published result becomes a biased estimator of the parameter of interest. We plot the resulting estimates for different values of $t$ in the following figure.

[Insert Fig 1 about here.]

Figure 1 shows that $\tilde\theta$ and $\hat\theta$ provide quite stable estimates across different values of $t$ compared to the IVW estimator. Based on the reported estimated covariance matrices and the asymptotic normality of $\hat\theta$, we conduct two hypothesis tests of whether these effects are significant. The p-values of the two-sided tests for the effects on PD and AL are $1.660\times 10^{-7}$ and $5.278\times 10^{-15}$, respectively. This suggests that both effects are significant at the 0.05 significance level. The 95% confidence intervals for the effects of the surgical treatment on PD and AL based on $\hat\theta$ are $[0.176, 0.387]$ and $[-0.379, -0.227]$, respectively. In summary, our results suggest that the surgical treatment has a positive effect on PD and a negative effect on AL, and these conclusions are robust against potential bias in the results of the five published trials.
5.2 Effects of smoking and alcohol in head and neck cancer
Head and neck cancer is the sixth most common cancer in the world. Established risk factors of this cancer include smoking and alcohol. However, researchers have only a limited understanding of the causal effects of these risk factors due to unmeasured confounding (Gormley et al., 2020). Thanks to recent developments in genome-wide association studies (GWAS), Mendelian randomization (Katan, 2004) has become a powerful tool to tackle the unmeasured confounding problem (Kang et al., 2016; Bowden et al., 2016; Windmeijer et al., 2019; Hartwig et al., 2017; Guo et al., 2018; Zhao et al., 2020; Ye et al., 2021). In Mendelian randomization analysis, as discussed in Example 2, each SNP is used as an instrumental variable to estimate the causal effect, and the estimator is consistent if the SNP is a valid instrument. The final estimator is obtained by combining the estimators produced by each SNP to improve efficiency. However, if some SNPs are invalid instruments due to pleiotropy, linkage disequilibrium, or population stratification, the final estimator may be biased.

In this subsection, we use the comprehensive smoking index (CSI) and alcoholic drinks per week (DPW) as quantitative measures of smoking and alcohol intake and conduct the analysis using the genetic data provided by Gormley et al. (2020). A copy of the data used in this subsection is available at https://github.com/rcrichmond/smoking alcohol heada. The data set contains the estimated effects of 168 independent SNPs on head and neck cancer, CSI and DPW, together with the corresponding standard errors. Summary-level data for the effect on head and neck cancer are from a GWAS with sample size 12,619 conducted by the Genetic Associations and Mechanisms in Oncology Network (Lesseur et al., 2016). Summary-level data for the effect on CSI are derived by Wootton et al. (2020) from the UK BioBank (sample size 462,690), and the DPW data are obtained from a GWAS with sample size 226,223 in the GWAS & Sequencing Consortium of Alcohol and Nicotine use. See Gormley et al. (2020) for further details of the data. Following Gormley et al. (2020), we conduct univariable Mendelian randomization to analyze the causal effects of CSI and DPW separately. Estimators of the causal effect of CSI on head and neck cancer are constructed based on the 108 SNPs used in Gormley et al. (2020), which produce 108 estimates.

The IVW method that uses all these 108 estimates gives the estimate 1.791. To mitigate the invalid instrument problem, we combine these 108 estimates by the procedures proposed in this paper, which give $\tilde\theta = 1.956$ and $\hat\theta = 1.856$, respectively. The results are close to that produced by the IVW method. This is in conformity with the fact that, among the 108 SNPs, no invalid instrument is identified by Gormley et al. (2020). The analysis result suggests a positive causal effect of CSI on head and neck cancer, with confidence interval $[0.0969, 2.744]$ (based on $\hat\theta$). The causal effect of DPW is then estimated similarly based on the 60 SNPs used in Gormley et al. (2020). The result of the IVW method is 2.111. The results of the proposed methods are $\tilde\theta = 1.622$ and $\hat\theta = 1.598$. When analysing the causal effect of DPW, Gormley et al. (2020) identify an invalid instrument, rs1229984. When rs1229984 is not included, the IVW method based on the remaining 59 SNPs gives the estimate 1.381, which is quite different from the case where rs1229984 is included. The proposed estimators based on the remaining 59 SNPs are $\tilde\theta = 1.619$ and $\hat\theta = 1.590$, which are close to the case where rs1229984 is included. This demonstrates the robustness of the proposed methods against the invalid instrument rs1229984. The analysis result suggests that DPW has a positive causal effect on head and neck cancer, with confidence interval $[0.414, 2.774]$ (based on $\hat\theta$ without rs1229984). We then compare these results with four standard methods for the Mendelian randomization problem with invalid instruments: the MR-Egger regression (Bowden et al., 2015), the weighted median method (Bowden et al., 2016), the weighted mode method (Hartwig et al., 2017) and the RAPS method (Zhao et al., 2020). Results of these four methods are calculated with the R package TwoSampleMR (https://github.com/MRCIEU/TwoSampleMR).
6 Discussion
In this paper, we present a fusion-extraction procedure to combine summary statistics from different data sources in the presence of biased sources. The idea of the proposed method is quite general and is applicable to many estimation problems. However, several questions are left open by this paper. First, the results in this paper do not apply to the conventional random-effects model in meta-analysis; it warrants further investigation to extend our results to random-effects models. Second, we assume in this paper that different data sources share the same true parameter but some data sources fail to provide a consistent estimator. In practice, the true parameters of different data sources might be heterogeneous, and the estimator from a data source may converge to the true parameter of that data source (Claggett et al., 2014). In this case, (2) defines the "least false parameter" that minimizes the weighted average distance to the true parameters of the data sources. It is of interest to investigate the theoretical properties of $\tilde\theta$ and $\hat\theta$ in this case.
Throughout the proofs, we use $B_L$ ($B_U$) to denote the lower (upper) bound of a positive sequence that is bounded away from zero (infinity).
A Details of the counterexample in Section 2.2
In this subsection, we prove (6) in Section 2.2.
Proof. Let $n^* = n/K$ and $Z_k = \hat\theta_k - \theta_0$. Then it is easy to verify that $\tilde\theta = \mathrm{med}(\hat\theta_1, \ldots, \hat\theta_K)$ and hence $\tilde\theta - \theta_0 = \mathrm{med}(Z_1, \ldots, Z_K)$, where $\mathrm{med}(\cdot)$ is the univariate median. Let
$$F_K(z) = \frac{1}{K}\sum_{k=1}^K 1\{Z_k \leq z\}.$$
Recall that $h^* = \Phi^{-1}\big((3/8 + \tau/4)/(1/2 + \tau)\big)$. By the definition of the median, to prove (6) it suffices to show $F_K(h^*/\sqrt{n^*}) < 1/2$ with probability approaching one. Note that $E[1\{Z_k \leq z\}] = \Phi(\sqrt{n^*}(z - b_k^*))$ for any $z$. Letting $z = h^*/\sqrt{n^*}$, by Hoeffding's inequality,
$$\Big|F_K(h^*/\sqrt{n^*}) - \frac{1}{K}\sum_{k=1}^K\Phi(h^* - b_k^*\sqrt{n^*})\Big| \leq K^{-1/4} \tag{13}$$
with probability at least $1 - 2\exp(-2\sqrt{K})$. Clearly, as $K \to \infty$, $1 - 2\exp(-2\sqrt{K}) \to 1$.
Notice that
$$\frac{1}{K}\sum_{k=1}^K\Phi(h^* - b_k^*\sqrt{n^*}) = \frac{1}{K}\sum_{k\in K_0}\Phi(h^*) + o(1) = \frac{1}{K}\sum_{i=1}^{(1/2+\tau)K}\frac{3/8 + \tau/4}{1/2 + \tau} + o(1) \leq \Big(\frac{1}{2} + \tau\Big)\frac{3/8 + \tau/4}{1/2 + \tau} + o(1) = \frac{3}{8} + \frac{\tau}{4} + o(1). \tag{14}$$
Because $3/8 + \tau/4 < 1/2$, combining (13) and (14) we have $F_K(h^*/\sqrt{n^*}) < 1/2$ with probability approaching one, which implies (6).
B Proof of Proposition 1
Proof. Let
$$G(\theta) = \sum_{k=1}^K\pi_k\|\theta_k^* - \theta\| = \sum_{k\in K_0}\pi_k\|\theta_0 - \theta\| + \sum_{k\in K_0^c}\pi_k\|\theta_0 + b_k^* - \theta\|.$$
For any $\theta \neq \theta_0$, the directional derivative of $G(\theta)$ at the point $\theta_0$ in the direction $\theta - \theta_0$ is
$$\|\theta - \theta_0\|\sum_{k\in K_0}\pi_k + \sum_{k\in K_0^c}\pi_k\frac{b_k^{*T}(\theta - \theta_0)}{\|b_k^*\|} = \|\theta - \theta_0\|\Bigg[\sum_{k\in K_0}\pi_k + \Bigg(\sum_{k\in K_0^c}\pi_k\frac{b_k^*}{\|b_k^*\|}\Bigg)^T\frac{\theta - \theta_0}{\|\theta - \theta_0\|}\Bigg] \geq \|\theta - \theta_0\|\Bigg[\sum_{k\in K_0}\pi_k - \Bigg\|\sum_{k\in K_0^c}\pi_k\frac{b_k^*}{\|b_k^*\|}\Bigg\|\Bigg] > 0.$$
Hence Proposition 1 follows from the fact that G(θ) is convex.
C Proof of Theorem 1
Theorem 1 is a straightforward corollary of the following Lemma.
Lemma 1. If $\delta > 0$, then
$$\|\tilde\theta - \theta_0\| \leq 2\delta^{-1}\sum_{k=1}^K\pi_k\|\hat\theta_k - \theta_k^*\|.$$
Proof. By the convexity of $G(\theta)$ and the directional derivative given in the proof of Proposition 1, we have
$$G(\theta) - G(\theta_0) \geq \delta\|\theta - \theta_0\|. \tag{15}$$
Let $\tilde G(\theta) = \sum_{k=1}^K\pi_k\|\hat\theta_k - \theta\|$. Then by the triangle inequality for the Euclidean norm, $|\tilde G(\theta) - G(\theta)| \leq \sum_{k=1}^K\pi_k\|\hat\theta_k - \theta_k^*\|$ for any $\theta$. Thus
$$\tilde G(\theta) - \tilde G(\theta_0) \geq G(\theta) - G(\theta_0) - 2\sum_{k=1}^K\pi_k\|\hat\theta_k - \theta_k^*\|.$$
This together with (15) proves $\tilde G(\theta) - \tilde G(\theta_0) > 0$ for all $\theta$ satisfying $\|\theta - \theta_0\| > 2\delta^{-1}\sum_{k=1}^K\pi_k\|\hat\theta_k - \theta_k^*\|$. Recalling the definition of $\tilde\theta$ in Section 2.2, we have $\tilde G(\tilde\theta) \leq \tilde G(\theta_0)$ and hence
$$\|\tilde\theta - \theta_0\| \leq 2\delta^{-1}\sum_{k=1}^K\pi_k\|\hat\theta_k - \theta_k^*\|.$$
D Proof of Theorem 2
To prove Theorem 2, we first establish two useful lemmas. Here we just state the lemmas and the key ideas; the formal proofs of the two lemmas are relegated to the next section.
A key step of the proof is to construct a "good event" that happens with high probability and on which $\hat\theta$ has some desirable properties.
For any positive number $\epsilon_n$ and constants $C_L > 0$ and $C_U > 1$ such that $C_L < B_L \leq B_U < C_U$, let $\Delta_M = \min\{B_L - C_L, C_U - B_U\}$. We first construct three events as follows,
$$S_1 = \big\{\|\tilde V_k - V_k^*\| \leq \Delta_M,\ k\in K_0\big\},\qquad S_2 = \big\{\|\hat\theta_{\mathrm{IVW}} - \hat\theta_k\| < (2\pi_kC_U)^{-1}\lambda\tilde w_k,\ k\in K_0\big\},$$
$$S_3 = \Big\{\min_{k\in K_0}\lambda\tilde w_k/\pi_k \geq 2\epsilon_n,\ C_U\sum_{k\in K_0^c}\lambda\tilde w_k \leq B_LC_L\epsilon_n\Big\}. \tag{16}$$
On $S_1$, $\tilde V_k$ is close to $V_k^*$ for $k \in K_0$. On $S_2$, for $k \in K_0$, the penalty coefficient of $b_k$ in problem (10) dominates the difference between $\hat\theta_{\mathrm{IVW}}$ and $\hat\theta_k$; hence $b_k$ is likely to be penalized to zero for $k \in K_0$ on $S_2$. On $S_3$, the penalty coefficient of $b_k$ is not too small for $k \in K_0$ and not too large for $k \in K_0^c$. Intuitively, these three events are all good events on which $\hat\theta$ should perform well. Let $S = S_1\cap S_2\cap S_3$. Next, we show that $\hat\theta$ is close to $\hat\theta_{\mathrm{IVW}}$ on the event $S$. The formal result is stated in the following lemma.
Lemma 2. On the event $S$, we have $\|\hat\theta - \hat\theta_{\mathrm{IVW}}\| \leq \epsilon_n$.
Under Conditions 1 and 2 and some conditions on the convergence rate of $\|\hat\theta_k - \theta_k^*\|$, we have $P(S) \to 1$, and hence $P(\|\hat\theta - \hat\theta_{\mathrm{IVW}}\| \leq \epsilon_n) \to 1$ according to Lemma 2. The formal result is summarized in the following lemma. See Appendix E for the proofs of Lemmas 2 and 3.
Lemma 3. Under Conditions 1 and 2, if the tuning parameter $\lambda$ satisfies
$$\lambda^{-1}\delta^{-\alpha}\max_k\{\|\hat\theta_k - \theta_k^*\|^{\alpha+1}\} = o_P(1),$$
then for any sequence $\epsilon_n$ such that $\lambda K/\epsilon_n \to 0$ and $\epsilon_n\lambda^{-1}\delta^{-\alpha}\max_k\{\|\hat\theta_k - \theta_k^*\|^{\alpha}\} = o_P(1)$, we have $P(\|\hat\theta - \hat\theta_{\mathrm{IVW}}\| \leq \epsilon_n) \to 1$.
With the assistance of Lemma 3, we are able to prove Theorem 2.
Proof. Condition 3 and the fact that $\alpha > \max\{\nu_1\nu_2^{-1}, \nu_2^{-1} - 1\}$ together imply
$$\delta^{-(\alpha+1)}\max_k\{\|\hat\theta_k - \theta_k^*\|^{\alpha+1}\} = o_P(n^{-1}),\qquad \delta^{-\alpha}\max_k\{\|\hat\theta_k - \theta_k^*\|^{\alpha}\} = O_P(n^{-\alpha\nu_2}). \tag{17}$$
Because $\lambda \asymp 1/n$, the conditions of Lemma 3 are satisfied with $\epsilon_n = a_nK/n$, where $a_n$ is an arbitrary sequence of positive numbers such that $a_n \to \infty$ and $a_nn^{\nu_1-\alpha\nu_2} \to 0$; note that $\nu_1 - \alpha\nu_2 < 0$. Then we have
$$P(\|\hat\theta - \hat\theta_{\mathrm{IVW}}\| \leq a_nK/n) \to 1 \tag{18}$$
for arbitrary $a_n$ that diverges to infinity at a sufficiently slow rate. This indicates that $\|\hat\theta - \hat\theta_{\mathrm{IVW}}\| = O_P(K/n)$.
To see this, suppose that $n\|\hat\theta - \hat\theta_{\mathrm{IVW}}\|/K$ is not bounded in probability. Then for some $\varepsilon > 0$ there is some $m_1 \geq e$ such that $P(n\|\hat\theta - \hat\theta_{\mathrm{IVW}}\|/K > 1) \geq \varepsilon$ when $n = m_1$. For $s = 2, 3, \ldots$, there is some $m_s > \max\{m_{s-1}, e^s\}$ such that $P(n\|\hat\theta - \hat\theta_{\mathrm{IVW}}\|/K > s) \geq \varepsilon$ when $n = m_s$. Let $a_n = s$ for $m_s \leq n < m_{s+1}$. For this sequence we have $a_n \to \infty$ and $a_n \leq \log n$; hence $a_n$ satisfies $a_nn^{\nu_1-\alpha\nu_2} \to 0$. Moreover, for any positive integer $s$, $P(\|\hat\theta - \hat\theta_{\mathrm{IVW}}\| \leq a_nK/n) \leq 1 - \varepsilon$ when $n = m_s$. Thus $\liminf_n P(\|\hat\theta - \hat\theta_{\mathrm{IVW}}\| \leq a_nK/n) \leq 1 - \varepsilon$, which contradicts (18).
E Proof of Lemmas 2 and 3
To prove the two lemmas, we first analyse the optimization problem (10) in the main text.
We denote $\gamma = (\theta^T, b_1^T, \ldots, b_K^T)^T$ as a grand parameter vector. Let
$$\Gamma_0 = \{\gamma : \gamma = (\theta^T, b_1^T, \ldots, b_K^T)^T,\ b_k = 0\ \text{for}\ k\in K_0\ \text{and}\ b_k\ \text{unrestricted for}\ k\in K_0^c\},$$
and
$$L(\gamma) = \sum_{k=1}^K\frac{\pi_k}{2}(\hat\theta_k - \theta - b_k)^T\tilde V_k(\hat\theta_k - \theta - b_k).$$
Consider the following oracle problem that sets $b_k$ to zero a priori for $k \in K_0$:
$$\min_{\gamma\in\Gamma_0}\Big\{L(\gamma) + \sum_{k\in K_0^c}\lambda\tilde w_k\|b_k\|\Big\}. \tag{19}$$
The following lemma establishes the relationship between the minimum points of this problem and those of problem (10).
Lemma 4. Let $M$ be the set of minimum points of problem (10) and $\bar M$ the set of minimum points of problem (19). If there exists a minimum point $\bar\gamma = (\bar\theta^T, \bar b_1^T, \ldots, \bar b_K^T)^T$ of problem (19) such that $\pi_k\|\tilde V_k(\hat\theta_k - \bar\theta)\| < \lambda\tilde w_k$ for $k \in K_0$, then $M = \bar M$.
Proof. Because $\bar\gamma$ is a minimum point of problem (19), it follows from the Karush-Kuhn-Tucker (KKT) condition that
$$\sum_{k\in K_0}\pi_k\tilde V_k(\hat\theta_k - \bar\theta) + \sum_{k\in K_0^c}\pi_k\tilde V_k(\hat\theta_k - \bar\theta - \bar b_k) = 0,\qquad \pi_k\tilde V_k(\hat\theta_k - \bar\theta - \bar b_k) = \lambda\tilde w_k\bar z_k,\ k\in K_0^c, \tag{20}$$
where $\bar z_k = \bar b_k/\|\bar b_k\|$ if $\bar b_k \neq 0$ and $\|\bar z_k\| \leq 1$ if $\bar b_k = 0$. Because $\pi_k\|\tilde V_k(\hat\theta_k - \bar\theta)\| < \lambda\tilde w_k$ for $k \in K_0$, $\bar\gamma$ also satisfies the KKT condition of problem (10) and hence $\bar\gamma \in M$ by the convexity of problem (10).
On the one hand, for any $\gamma' \in \bar M$, because both $\gamma'$ and $\bar\gamma$ belong to $\bar M$, we have $L(\bar\gamma) + \sum_{k=1}^K\lambda\tilde w_k\|\bar b_k\| = L(\gamma') + \sum_{k=1}^K\lambda\tilde w_k\|b_k'\|$. In addition, according to the above discussion, we have $\bar\gamma \in M$. Then $L(\bar\gamma) + \sum_{k=1}^K\lambda\tilde w_k\|\bar b_k\| = \min_\gamma\{L(\gamma) + \sum_{k=1}^K\lambda\tilde w_k\|b_k\|\}$ and hence $L(\gamma') + \sum_{k=1}^K\lambda\tilde w_k\|b_k'\| = \min_\gamma\{L(\gamma) + \sum_{k=1}^K\lambda\tilde w_k\|b_k\|\}$. This implies $\gamma' \in M$ and proves $\bar M \subset M$.
On the other hand, for any $\gamma' \in M$, because both $\gamma'$ and $\bar\gamma$ belong to $M$, we have $L(\bar\gamma) + \sum_{k=1}^K\lambda\tilde w_k\|\bar b_k\| = L(\gamma') + \sum_{k=1}^K\lambda\tilde w_k\|b_k'\|$, which implies $L(\bar\gamma) - L(\gamma') = \sum_{k=1}^K\lambda\tilde w_k(\|b_k'\| - \|\bar b_k\|)$. Recall that, for $k \in K_0^c$, $\bar z_k = \bar b_k/\|\bar b_k\|$ if $\bar b_k \neq 0$ and $\|\bar z_k\| \leq 1$ if $\bar b_k = 0$. In addition, let $\bar z_k = (\lambda\tilde w_k)^{-1}\pi_k\tilde V_k(\hat\theta_k - \bar\theta)$ for $k \in K_0$. Then it is easy to verify that $\|\bar z_k\| \leq 1$ for $k \in K_0^c$, $\|\bar z_k\| < 1$ for $k \in K_0$, $\bar z_k^T\bar b_k = \|\bar b_k\|$ for $k = 1, \ldots, K$, and $\nabla L(\bar\gamma) = -(0^T, \lambda\tilde w_1\bar z_1^T, \ldots, \lambda\tilde w_K\bar z_K^T)^T$. By the convexity of $L(\gamma)$, we have
$$0 \geq L(\bar\gamma) - L(\gamma') + \nabla L(\bar\gamma)^T(\gamma' - \bar\gamma) = \sum_{k=1}^K\lambda\tilde w_k(\|b_k'\| - \|\bar b_k\|) - \sum_{k=1}^K\lambda\tilde w_k\bar z_k^T(b_k' - \bar b_k) = \sum_{k=1}^K\lambda\tilde w_k(\|b_k'\| - \bar z_k^Tb_k') \geq \sum_{k=1}^K\lambda\tilde w_k(\|b_k'\| - \|\bar z_k\|\|b_k'\|) \geq \sum_{k\in K_0}\lambda\tilde w_k\|b_k'\|(1 - \|\bar z_k\|).$$
Because $\lambda\tilde w_k(1 - \|\bar z_k\|) = \lambda\tilde w_k - \pi_k\|\tilde V_k(\hat\theta_k - \bar\theta)\| > 0$ for $k \in K_0$, we have $b_k' = 0$ for $k \in K_0$. Combining this with the fact that $L(\bar\gamma) + \sum_{k=1}^K\lambda\tilde w_k\|\bar b_k\| = L(\gamma') + \sum_{k=1}^K\lambda\tilde w_k\|b_k'\|$, we have $\gamma' \in \bar M$. This completes the proof of the lemma.
Recall that throughout the proofs we always use $B_L$ ($B_U$) to denote the lower (upper) bound of a sequence that is bounded away from zero (infinity). For any positive number $\epsilon_n$ and constants $C_L > 0$ and $C_U > 1$ such that $C_L < B_L \leq B_U < C_U$, let $\Delta_M = \min\{B_L - C_L, C_U - B_U\}$,
$$S_1 = \{\|\tilde V_k - V_k^*\| \leq \Delta_M,\ k\in K_0\},\qquad S_2 = \{\|\hat\theta_{\mathrm{IVW}} - \hat\theta_k\| < (2\pi_kC_U)^{-1}\lambda\tilde w_k,\ k\in K_0\},$$
$$S_3 = \Big\{\min_{k\in K_0}\lambda\tilde w_k/\pi_k \geq 2\epsilon_n,\ C_U\sum_{k\in K_0^c}\lambda\tilde w_k \leq B_LC_L\epsilon_n\Big\},$$
and $S = S_1\cap S_2\cap S_3$.
Then we are ready to give the proof of Lemma 2.
Restate of Lemma 2. On the event $S$, we have $\|\hat\theta - \hat\theta_{\mathrm{IVW}}\| \leq \epsilon_n$.
Proof. Let $\bar\gamma \in \bar M$ be any minimum point of problem (19). Then by the KKT condition (20), we have
$$\bar\theta = \Big(\sum_{k\in K_0}\pi_k\tilde V_k\Big)^{-1}\Big(\sum_{k\in K_0}\pi_k\tilde V_k\hat\theta_k\Big) + \Big(\sum_{k\in K_0}\pi_k\tilde V_k\Big)^{-1}\Big(\sum_{k\in K_0^c}\lambda\tilde w_k\bar z_k\Big) = \hat\theta_{\mathrm{IVW}} + \Big(\sum_{k\in K_0}\pi_k\tilde V_k\Big)^{-1}\sum_{k\in K_0^c}\lambda\tilde w_k\bar z_k. \tag{21}$$
By Weyl's theorem, we have $\max_{k\in K_0}\{\max\{|\lambda_{\min}(\tilde V_k) - \lambda_{\min}(V_k^*)|, |\lambda_{\max}(\tilde V_k) - \lambda_{\max}(V_k^*)|\}\} \leq \max_{k\in K_0}\|\tilde V_k - V_k^*\| \leq \Delta_M$ on $S_1$. Then by Condition 2, it follows that
$$C_L \leq \lambda_{\min}(\tilde V_k) \leq \lambda_{\max}(\tilde V_k) \leq C_U$$
for $k \in K_0$ on $S_1$. Then by Conditions 1 and 2, we have
$$\pi_k\|\tilde V_k(\hat\theta_k - \bar\theta)\| \leq \pi_k\|\tilde V_k\|\|\hat\theta_k - \hat\theta_{\mathrm{IVW}}\| + \pi_k\Big\|\tilde V_k\Big(\sum_{k\in K_0}\pi_k\tilde V_k\Big)^{-1}\sum_{k\in K_0^c}\lambda\tilde w_k\bar z_k\Big\| \leq \pi_kC_U\|\hat\theta_k - \hat\theta_{\mathrm{IVW}}\| + \pi_kC_UB_L^{-1}C_L^{-1}\sum_{k\in K_0^c}\lambda\tilde w_k < \frac{\lambda\tilde w_k}{2} + \pi_k\epsilon_n < \lambda\tilde w_k$$
on the event $S$. According to Lemma 4, we have $\bar M = M$ on $S$. By equation (21), for any $(\hat\theta^T, \hat b_1^T, \ldots, \hat b_K^T)^T \in M = \bar M$, we have
$$\hat\theta - \hat\theta_{\mathrm{IVW}} = \Big(\sum_{k\in K_0}\pi_k\tilde V_k\Big)^{-1}\sum_{k\in K_0^c}\lambda\tilde w_k\bar z_k.$$
This implies
$$\|\hat\theta - \hat\theta_{\mathrm{IVW}}\| \leq B_L^{-1}C_L^{-1}\sum_{k\in K_0^c}\lambda\tilde w_k \leq C_U^{-1}\epsilon_n$$
on $S$. Note that $C_U > 1$, and this completes the proof of the lemma.
Next, we move on to the proof of Lemma 3.
Restate of Lemma 3. Under Conditions 1 and 2, if the tuning parameter $\lambda$ satisfies
$$\lambda^{-1}\delta^{-\alpha}\max_k\{\|\hat\theta_k - \theta_k^*\|^{\alpha+1}\} = o_P(1),$$
then for any sequence $\epsilon_n$ such that $\lambda K/\epsilon_n \to 0$ and $\epsilon_n\lambda^{-1}\delta^{-\alpha}\max_k\{\|\hat\theta_k - \theta_k^*\|^{\alpha}\} = o_P(1)$, we have $P(\|\hat\theta - \hat\theta_{\mathrm{IVW}}\| \leq \epsilon_n) \to 1$.
Proof. According to Lemma 2, it suffices to prove $P(S) \to 1$ with $C_L = 0.9B_L$ and $C_U = 1.1B_U$. By Condition 2, we have $P(S_1) \to 1$. By the definition of $\hat\theta_{\mathrm{IVW}}$, we have
$$\hat\theta_{\mathrm{IVW}} - \hat\theta_k = \Big(\sum_{j\in K_0}\pi_j\tilde V_j\Big)^{-1}\Big(\sum_{j\in K_0}\pi_j\tilde V_j(\hat\theta_j - \hat\theta_k)\Big).$$
For $k \in K_0$, on the event $S_1$, we have
$$\|\hat\theta_{\mathrm{IVW}} - \hat\theta_k\| \leq \Big(B_L - \max_{j\in K_0}\|\tilde V_j - V_j^*\|\Big)^{-1}\Big(\max_{j\in K_0}\|\tilde V_j(\hat\theta_j - \hat\theta_k)\|\Big) \leq \Big(B_L - \max_{j\in K_0}\|\tilde V_j - V_j^*\|\Big)^{-1}\Big(B_U + \max_{j\in K_0}\|\tilde V_j - V_j^*\|\Big)\max_{j\in K_0}\|\hat\theta_j - \hat\theta_k\| \leq C_L^{-1}C_U\max_{j\in K_0}\|\hat\theta_j - \hat\theta_k\| \tag{22}$$
by Condition 2. Because $\theta_j^* = \theta_0$ for $j \in K_0$, we have
$$\max_{j\in K_0}\|\hat\theta_j - \hat\theta_k\| \leq \max_j\|\hat\theta_j - \theta_0\| + \max_k\|\hat\theta_k - \theta_0\| = 2\max_j\|\hat\theta_j - \theta_j^*\|. \tag{23}$$
Note that by Lemma 1,
$$\|\tilde\theta - \theta_0\| \leq 2\delta^{-1}\max_j\|\hat\theta_j - \theta_j^*\|.$$
This together with the definitions of $\tilde b_k$ and $b_k^*$ proves
$$\|\tilde b_k - b_k^*\| \leq (1 + 2\delta^{-1})\max_j\|\hat\theta_j - \theta_j^*\|. \tag{24}$$
Recalling that $\tilde w_k = 1/\|\tilde b_k\|^{\alpha}$, according to (22), (23) and (24) we have
$$S_1\cap S_2 = S_1\cap\big\{\lambda^{-1}\pi_k\|\tilde b_k\|^{\alpha}\|\hat\theta_{\mathrm{IVW}} - \hat\theta_k\| < (2C_U)^{-1},\ k\in K_0\big\} \supset S_1\cap\big\{2C_L^{-1}C_U\lambda^{-1}(1 + 2\delta^{-1})^{\alpha}(\max_j\|\hat\theta_j - \theta_j^*\|)^{\alpha+1} < (2C_U)^{-1}\big\} =: S_1\cap S_2^*.$$
Because $\lambda^{-1}\delta^{-\alpha}\max_j\{\|\hat\theta_j - \theta_j^*\|^{\alpha+1}\} = o_P(1)$, we have $P(S_2^*) \to 1$ and hence $P(S_1\cap S_2^*) \to 1$. This implies $P(S_1\cap S_2) \to 1$. Notice that $S_3$ can be rewritten as
$$\Big\{2\epsilon_n\lambda^{-1}\max_{k\in K_0}\{\pi_k\|\tilde b_k\|^{\alpha}\} \leq 1,\ \sum_{k\in K_0^c}\lambda\tilde w_k/\epsilon_n \leq C_U^{-1}B_LC_L\Big\}.$$
According to (24), because $\|b_k^*\|$ is bounded away from zero for $k \in K_0^c$,
$$S_3 \supset \Big\{2\epsilon_n\lambda^{-1}(1 + 2\delta^{-1})^{\alpha}\max_k\{\|\hat\theta_k - \theta_k^*\|^{\alpha}\} \leq 1,\ \sum_{k\in K_0^c}\lambda\tilde w_k/\epsilon_n \leq C_U^{-1}B_LC_L\Big\}$$
$$\supset \Big\{2\epsilon_n\lambda^{-1}(1 + 2\delta^{-1})^{\alpha}\max_k\{\|\hat\theta_k - \theta_k^*\|^{\alpha}\} \leq 1,\ \max_{k\in K_0^c}\tilde w_k\,\lambda K/\epsilon_n \leq C_U^{-1}B_LC_L\Big\}$$
$$= \Big\{2\epsilon_n\lambda^{-1}(1 + 2\delta^{-1})^{\alpha}\max_k\{\|\hat\theta_k - \theta_k^*\|^{\alpha}\} \leq 1,\ \lambda^{-1}\epsilon_n\min_{k\in K_0^c}\|\tilde b_k\|^{\alpha} \geq C_UB_L^{-1}C_L^{-1}K\Big\}$$
$$\supset \Big\{2\epsilon_n\lambda^{-1}(1 + 2\delta^{-1})^{\alpha}\max_k\{\|\hat\theta_k - \theta_k^*\|^{\alpha}\} \leq 1,\ \Big(B_L - (1 + 2\delta^{-1})\max_k\|\hat\theta_k - \theta_k^*\|\Big)^{\alpha} \geq C_UB_L^{-1}C_L^{-1}\lambda K/\epsilon_n\Big\} =: S_3^*.$$
Since $\lambda K/\epsilon_n \to 0$, we have $\lambda/\epsilon_n \to 0$. Because $\epsilon_n\lambda^{-1}\delta^{-\alpha}\max_k\{\|\hat\theta_k - \theta_k^*\|^{\alpha}\} = o_P(1)$, we have
$$(1 + 2\delta^{-1})\max_k\|\hat\theta_k - \theta_k^*\| = \Big(\epsilon_n^{-1}\lambda\times\epsilon_n\lambda^{-1}(1 + 2\delta^{-1})^{\alpha}\max_j\{\|\hat\theta_j - \theta_j^*\|^{\alpha}\}\Big)^{\frac{1}{\alpha}} = o_P(1).$$
Hence $P(S_3^*) \to 1$ and consequently $P(S_3) \to 1$. This completes the proof.
F Proof of Theorem 3
We first establish a lemma that is needed in the proof of Theorem 3.
Lemma 5. On the event $S$, we have $\hat b_k = 0$ for $k \in K_0$.
Proof. According to the proof of Lemma 2, we have $\bar M = M$ on $S$. By the definition of $\bar M$, for any $(\hat\theta^T, \hat b_1^T, \ldots, \hat b_K^T)^T \in M = \bar M$, we have $\hat b_k = 0$ for $k \in K_0$.
Restate of Theorem 3. Under Conditions 1, 3 and 4, if $\lambda \asymp 1/n$ and $\alpha > \max\{\nu_1\nu_2^{-1}, \nu_2^{-1} - 1\}$, we have
$$P(\hat K_0 = K_0) \to 1,$$
provided $\min_{k\in K_0^c}\pi_k > C_\pi/K$ and $K\log n/n \to 0$, where $C_\pi$ is some positive constant.
Proof. As before, we let $C_L = 0.9B_L$, $C_U = 1.1B_U$, $\Delta_M = \min\{B_L - C_L, C_U - B_U\}$ and $\epsilon_n = a_nK/n$, where $a_n$ is a sequence of positive numbers such that $a_n \to \infty$ and $a_n/\log n \to 0$. Define
$$S_1' = \{\|\tilde V_k - V_k^*\| \leq \Delta_M,\ k = 1, \ldots, K\},\qquad S_2' = \{\|\hat\theta_{\mathrm{IVW}} - \hat\theta_k\| > (\pi_kC_L)^{-1}\lambda\tilde w_k + \epsilon_n,\ k\in K_0^c\},\qquad S' = S_1'\cap S_2'.$$
Recall the definition of $S$ in (16). According to Lemmas 2 and 5, we have $\|\hat\theta - \hat\theta_{\mathrm{IVW}}\| \leq \epsilon_n$ and $\hat b_k = 0$ for $k \in K_0$ when $S$ holds. It is straightforward to verify that the conditions of Lemma 3 are satisfied under the conditions of this theorem; hence $P(S) \to 1$ according to Lemma 3. To prove the theorem, it then suffices to show that $\hat b_k \neq 0$ for $k \in K_0^c$ on $S'\cap S$ and that $P(S') \to 1$.
First, we prove that $\hat b_k \neq 0$ for $k \in K_0^c$ on the event $S'\cap S$; the arguments in the rest of this paragraph are derived on this event. By Weyl's theorem, $\max_k\{\max\{|\lambda_{\min}(\tilde V_k) - \lambda_{\min}(V_k^*)|, |\lambda_{\max}(\tilde V_k) - \lambda_{\max}(V_k^*)|\}\} \leq \max_k\|\tilde V_k - V_k^*\| \leq \Delta_M$.
Thus, for $k = 1, \ldots, K$,
$$C_L \leq \lambda_{\min}(\tilde V_k) \leq \lambda_{\max}(\tilde V_k) \leq C_U.$$
Because $(\hat\theta^T, \hat b_1^T, \ldots, \hat b_K^T)^T$ is a minimum point of problem (10) in the main text, we have
$$\pi_k\tilde V_k(\hat\theta_k - \hat\theta - \hat b_k) = \lambda\tilde w_k\hat z_k,\quad k\in K_0^c, \tag{25}$$
for some $\|\hat z_k\| \leq 1$ according to the KKT condition. Thus by (25), the definition of eigenvalues and the triangle inequality, we have
$$\pi_kC_L\big|\|\hat\theta_k - \hat\theta\| - \|\hat b_k\|\big| \leq \pi_k\lambda_{\min}(\tilde V_k)\big|\|\hat\theta_k - \hat\theta\| - \|\hat b_k\|\big| \leq \|\pi_k\tilde V_k(\hat\theta_k - \hat\theta - \hat b_k)\| \leq \lambda\tilde w_k$$
for $k \in K_0^c$. Then on $S_2'$,
$$\|\hat b_k\| \geq \|\hat\theta_k - \hat\theta\| - \big|\|\hat\theta_k - \hat\theta\| - \|\hat b_k\|\big| \geq \|\hat\theta_k - \hat\theta\| - (\pi_kC_L)^{-1}\lambda\tilde w_k \geq \|\hat\theta_k - \hat\theta_{\mathrm{IVW}}\| - \|\hat\theta - \hat\theta_{\mathrm{IVW}}\| - (\pi_kC_L)^{-1}\lambda\tilde w_k \geq \|\hat\theta_k - \hat\theta_{\mathrm{IVW}}\| - \epsilon_n - (\pi_kC_L)^{-1}\lambda\tilde w_k > 0.$$
This indicates that $\hat b_k \neq 0$ for $k \in K_0^c$ on the event $S'\cap S$. Next, we prove that $P(S') \to 1$. By Condition 4, we have $P(S_1') \to 1$. Note that
$$\hat\theta_{\mathrm{IVW}} - \hat\theta_k = \Big(\sum_{j\in K_0}\pi_j\tilde V_j\Big)^{-1}\Big(\sum_{j\in K_0}\pi_j\tilde V_j(\hat\theta_j - \hat\theta_k)\Big).$$
On the event $S_1'$, for $k \in K_0^c$, we have
$$\|\hat\theta_{\mathrm{IVW}} - \hat\theta_k\| \geq \Big(B_U + \max_{j\in K_0}\|\tilde V_j - V_j^*\|\Big)^{-1}B_L\Big(\min_{j\in K_0}\|V_j^*(\hat\theta_j - \hat\theta_k)\|\Big) \geq \Big(B_U + \max_{j\in K_0}\|\tilde V_j - V_j^*\|\Big)^{-1}B_L\Big(B_L - \max_{j\in K_0}\|\tilde V_j - V_j^*\|\Big)\min_{j\in K_0}\|\hat\theta_j - \hat\theta_k\| \geq C_U^{-1}B_LC_L\min_{j\in K_0}\|\hat\theta_j - \hat\theta_k\|.$$
Because, for $k \in K_0^c$ and $j \in K_0$,
$$\|\hat\theta_j - \hat\theta_k\| \geq \|b_k^*\| - \|\hat\theta_j - \theta_0\| - \|\hat\theta_k - \theta_k^*\|,$$
we have
$$\min_{j\in K_0, k\in K_0^c}\|\hat\theta_j - \hat\theta_k\| \geq B_L - 2\max_j\|\hat\theta_j - \theta_j^*\|.$$
Thus
$$S_1'\cap S_2' = S_1'\cap\{\|\hat\theta_{\mathrm{IVW}} - \hat\theta_k\| > (\pi_kC_L)^{-1}\lambda\tilde w_k + \epsilon_n,\ k\in K_0^c\} \supset S_1'\cap\Big\{C_U^{-1}B_LC_L\big(B_L - 2\max_j\|\hat\theta_j - \theta_j^*\|\big) > \big(\min_{k\in K_0^c}\pi_kC_L\big)^{-1}\lambda\max_{k\in K_0^c}\tilde w_k + \epsilon_n\Big\} =: S_1'\cap S_2'^*.$$
According to Condition 3,
$$C_U^{-1}B_LC_L\big(B_L - 2\max_j\|\hat\theta_j - \theta_j^*\|\big) = C_U^{-1}C_LB_L^2 + o_P(1).$$
Recall that $\tilde w_k = 1/\|\tilde b_k\|^{\alpha}$.
Then $\max_{k\in K_0^c}\tilde w_k = O_P(1)$ according to (24) and (17). Since $\min_{k\in K_0^c}\pi_k > C_\pi/K$ and $\lambda \asymp 1/n$, we have $(\min_{k\in K_0^c}\pi_kC_L)^{-1}\lambda\max_{k\in K_0^c}\tilde w_k = O_P(K/n)$. Moreover, $\epsilon_n = o(K\log n/n)$ by definition. Hence $(\min_{k\in K_0^c}\pi_kC_L)^{-1}\lambda\max_{k\in K_0^c}\tilde w_k + \epsilon_n = o_P(1)$ because $K\log n/n \to 0$. Thus $P(S_2'^*) \to 1$ and hence $P(S') \to 1$. This completes the proof.
G Proof of Theorem 4
Restate of Theorem 4. Suppose Conditions 1, 3 and 5 hold. If (i) $\nu_1 < 1/2$; (ii) there are some deterministic matrices $V_k^*$, $k \in K_0$, such that $\max_{k\in K_0}\|\tilde V_k - V_k^*\| = o_P(n^{-1/2+\nu_2})$; (iii) for $k \in K_0$, the eigenvalues of $V_k^*$ and $\mathrm{var}(\Psi_k(Z^{(k)}))$ are bounded away from zero and infinity; (iv) for $k \in K_0$, $u \in \mathbb{R}^d$ with $\|u\| = 1$ and some $\tau > 0$, $E[|u^T\Psi_k(Z^{(k)})|^{1+\tau}]$ are bounded; (v) $\lambda \asymp 1/n$ and $\alpha > \max\{\nu_1\nu_2^{-1}, \nu_2^{-1} - 1\}$, then for any fixed $q$ and $q\times d$ matrix $W_n$ such that the eigenvalues of $W_nW_n^T$ are bounded away from zero and infinity, we have
$$\sqrt{n}\,I_n^{-1/2}W_n(\hat\theta - \theta_0) \stackrel{d}{\to} N(0, I_q),$$
where $I_q$ is the identity matrix of order $q$,
$$I_n = \sum_{k\in K_0}\pi_kH_{n,k}\,\mathrm{var}\big(\Psi_k(Z^{(k)})\big)H_{n,k}^T,\qquad H_{n,k} = W_nV_0^{*-1}V_k^* \ \text{and}\ V_0^* = \sum_{k\in K_0}\pi_kV_k^*.$$
Proof. According to Theorem 2, under (i), (ii), (v) and Conditions 1 and 3, we have
$$\|\hat\theta - \hat\theta_{\mathrm{IVW}}\| = o_P\Big(\frac{1}{\sqrt{n}}\Big).$$
Then to establish the asymptotic normality of $\hat\theta$, it suffices to establish the asymptotic normality of $\hat\theta_{\mathrm{IVW}}$. According to (ii), (iii) and Condition 1, we have
$$I_n^{-1/2}W_n(\hat\theta_{\mathrm{IVW}} - \theta_0) = I_n^{-1/2}W_n\Big(\sum_{k\in K_0}\pi_k\tilde V_k\Big)^{-1}\sum_{k\in K_0}\pi_k\tilde V_k(\hat\theta_k - \theta_0) = I_n^{-1/2}W_n\big(V_0^{*-1} + o_P(1)\big)\sum_{k\in K_0}\pi_kV_k^*(\hat\theta_k - \theta_0) + I_n^{-1/2}W_n\big(V_0^{*-1} + o_P(1)\big)\sum_{k\in K_0}\pi_k\big(\tilde V_k - V_k^*\big)(\hat\theta_k - \theta_0),$$
where $V_0^* = \sum_{k\in K_0}\pi_kV_k^*$.
According to (iii) and (iv), we have
$$\|I_n^{-1/2}W_n\| = O(1). \tag{26}$$
Because $\max_k\|\hat\theta_k - \theta_k^*\| = O_P(n^{-\nu_2})$, $\max_{k\in K_0}\|\tilde V_k - V_k^*\| = o_P(n^{-1/2+\nu_2})$ and (26) holds, we have
$$\Big\|I_n^{-1/2}W_n\big(V_0^{*-1} + o_P(1)\big)\sum_{k\in K_0}\pi_k\big(\tilde V_k - V_k^*\big)(\hat\theta_k - \theta_0)\Big\| = O_P\Big(\max_k\|\hat\theta_k - \theta_k^*\|\Big)O_P\Big(\max_{k\in K_0}\|\tilde V_k - V_k^*\|\Big) = o_P\Big(\frac{1}{\sqrt{n}}\Big),$$
and hence
$$I_n^{-1/2}W_n(\hat\theta_{\mathrm{IVW}} - \theta_0) = I_n^{-1/2}W_nV_0^{*-1}\sum_{k\in K_0}\pi_kV_k^*(\hat\theta_k - \theta_0) + o_P\Big(\Big\|I_n^{-1/2}W_nV_0^{*-1}\sum_{k\in K_0}\pi_kV_k^*(\hat\theta_k - \theta_0)\Big\|\Big) + o_P\Big(\frac{1}{\sqrt{n}}\Big).$$
Thus Theorem 4 is proved if we show
$$\sqrt{n}\,I_n^{-1/2}W_nV_0^{*-1}\sum_{k\in K_0}\pi_kV_k^*(\hat\theta_k - \theta_0) \stackrel{d}{\to} N(0, I_q). \tag{27}$$
By Condition 5, we have
$$\sqrt{n}\,I_n^{-1/2}W_nV_0^{*-1}\sum_{k\in K_0}\pi_kV_k^*(\hat\theta_k - \theta_0) = \sqrt{n}\,I_n^{-1/2}W_nV_0^{*-1}\sum_{k\in K_0}\pi_kV_k^*\Big(\frac{1}{n_k}\sum_{i=1}^{n_k}\Psi_k(Z_i^{(k)})\Big) + o_P(1) = \sum_{k\in K_0}\sum_{i=1}^{n_k}\eta_{k,i} + o_P(1),$$
where $\eta_{k,i} = I_n^{-1/2}W_nV_0^{*-1}V_k^*\Psi_k(Z_i^{(k)})/\sqrt{n}$. Because $E[\Psi_k(Z^{(k)})] = 0$ for $k \in K_0$, (iii) and (iv) imply that the Lindeberg-Feller condition (Van der Vaart, 2000) is satisfied, and hence (27) follows from the Lindeberg-Feller central limit theorem. This completes the proof.
H Uniformly asymptotically linear representation of M-estimators
In this section, we establish the uniformly asymptotically linear representation in the case where the $\hat\theta_k$'s are M-estimators, i.e.,
$$\hat\theta_k = \arg\min_\theta\frac{1}{n_k}\sum_{i=1}^{n_k}L_k(Z_i^{(k)}, \theta),$$
for $k = 1, \ldots, K$, where $L_k(\cdot, \cdot)$ is some loss function that may differ from source to source. In this case, the probability limit of $\hat\theta_k$ is the minimum point of $E[L_k(Z^{(k)}; \theta)]$ under some regularity conditions. Hence we use $\theta_k^*$ to denote the minimum point of $E[L_k(Z^{(k)}; \theta)]$. Let $\zeta_k(\theta) = L_k(Z^{(k)}; \theta) - E[L_k(Z^{(k)}; \theta)]$.
To begin with, we introduce a condition commonly used in the literature on models with diverging parameter dimension.
Condition 6. There are some constants $\sigma_1, u_1, b$ independent of $n$ and some positive definite matrices $\Phi_k$ ($k = 1, \ldots, K$), possibly depending on $n$, such that
$$E\Big[\exp\Big(\lambda\frac{\gamma^T\nabla\zeta_k(\theta)}{\|\Phi_k\gamma\|}\Big)\Big] \leq \exp\Big(\frac{\sigma_1^2\lambda^2}{2}\Big)\quad\text{and}\quad E[L_k(Z^{(k)}, \theta)] - E[L_k(Z^{(k)}, \theta_k^*)] \geq b\|\Phi_k(\theta - \theta_k^*)\|^2,$$
for all $\theta$, $|\lambda| \leq u_1$, $\|\gamma\| = 1$ and $k = 1, \ldots, K$.

See Spokoiny (2012, 2013), Zhou et al. (2018) and Chen and Zhou (2020) for further explanations and examples of this condition. The following proposition shows that Condition 6, along with some other conditions, implies Condition 3.

Proposition 2. Under Condition 6, if (i) for $k = 1, \ldots, K$, the eigenvalues of $\Phi_k$ are bounded away from zero; (ii) $d = O(n^{\nu_0})$ and $K = O(n^{\nu_1})$ for some positive constants $\nu_0, \nu_1$ such that $\nu_0 + \nu_1 < 1$; and (iii) $\pi_k \geq C_*/K$ for some positive constant $C_*$, then $\max_k\|\hat\theta_k - \theta_k^*\| = O_P\big(n^{-(1-\nu_0-\nu_1)/2}\big)$.
Proof. For convenience, in this and the following proofs, we let $C$ be a generic positive constant that may differ from place to place. Under Condition 6, according to a deviation bound in Spokoiny (2012), we have
$$P\Big(\|\Phi_k(\hat\theta_k - \theta_k^*)\| \geq 6\sigma_1b^{-1}\sqrt{\frac{3d + t}{n_k}}\Big) \leq e^{-t}$$
for $t \leq \Delta_k$ with $\Delta_k = (3b^{-1}\sigma_1^2u_1n_k^{1/2} - 1)^2 - 3d$. Thus, according to (i), we have
$$P\Big(\|\hat\theta_k - \theta_k^*\| \geq L\sqrt{\frac{3d + t}{n_k}}\Big) \leq e^{-t}$$
for $t \leq \Delta_k$ and $k = 1, \ldots, K$, where $L = 6\sigma_1b^{-1}B_L^{-1}$. By the Bonferroni inequality, it follows that
$$P\Big(\max_k\|\hat\theta_k - \theta_k^*\| \geq L\max_k\sqrt{\frac{3d + t}{n_k}}\Big) \leq Ke^{-t}.$$
By (iii) we have $n_k \geq C_*n/K$. Letting $t_n = \min\{n^{\nu_0}, \min_k\Delta_k\}$, according to (ii) and (iii), we have
$$\max_k\,(3d + t_n)/n_k \leq K(3d + t_n)/(C_*n) \leq Cn^{-(1-\nu_0-\nu_1)}.$$
Thus
$$P\Big(\max_k\|\hat\theta_k - \theta_k^*\| \geq Cn^{-\frac{1-\nu_0-\nu_1}{2}}\Big) \leq K\exp(-t_n) \to 0. \tag{28}$$
This indicates that $\max_k\|\hat\theta_k - \theta_k^*\| = O_P\big(n^{-(1-\nu_0-\nu_1)/2}\big)$.
To establish the uniformly asymptotically linear representation, some further conditions on the Hessian of the expected loss function are required. Let $D_k(\theta) = (\nabla^2E[L_k(Z^{(k)}, \theta)])^{1/2}$ be the square root of the Hessian of the expected loss function and let $D_{k*} = D_k(\theta_k^*)$.
Condition 7. For $k \in K_0$, the eigenvalues of $D_{k*}$ are bounded away from zero and infinity, and there is some constant $M_*$ such that $\|D_k^2(\theta) - D_{k*}^2\| \leq M_*\|\theta - \theta_0\|$ for all $\theta$. Moreover, for some constants $\sigma_2$ and $u_2$,
$$E\Big[\exp\Big(\lambda\frac{\gamma_1^T\nabla^2\zeta_k(\theta)\gamma_2}{\|D_{k*}\gamma_1\|\|D_{k*}\gamma_2\|}\Big)\Big] \leq \exp\Big(\frac{\sigma_2^2\lambda^2}{2}\Big)$$
for all $|\lambda| \leq u_2$, $\|\gamma_1\| = 1$, $\|\gamma_2\| = 1$ and $k \in K_0$.
Under Condition 7 and the conditions of Proposition 2, we establish the uniform asymptotically linear representation (Condition 5).
Proposition 3. Under Condition 7 and the conditions of Proposition 2, if $\nu_0 + \nu_1 < 1/2$, then Condition 5 holds with $\Psi_k(Z^{(k)}) = -D_{k*}^{-2}\nabla L_k(Z^{(k)}, \theta_0)$.
Proof. For $k \in K_0$, according to Condition 7, it is not hard to verify that Condition ED$_2$ in Spokoiny (2013) is satisfied with $g = u_2\sqrt{n_k}$ and $\omega = 1/\sqrt{n_k}$. Because $B_L \leq \min_k\lambda_{\min}(D_{k*}) \leq \max_k\lambda_{\max}(D_{k*}) \leq B_U$ and $\|D_k^2(\theta) - D_{k*}^2\| \leq M_*\|\theta - \theta_0\|$, we have
$$\|D_{k*}^{-1}D_k^2(\theta)D_{k*}^{-1} - I_d\| \leq \|D_{k*}^{-1}\|^2\|D_k^2(\theta) - D_{k*}^2\| \leq M_*B_L^{-3}\|D_{k*}(\theta - \theta_0)\|$$
for $k \in K_0$, where $I_d$ is the $d\times d$ identity matrix.
Thus Condition L in Spokoiny (2013) is satisfied with $\delta(r) = M_*B_L^{-3}r$. For $k \in K_0$, define the event
$$E_{r,t}^{(k)} = \Bigg\{\sup_{\theta\in\Theta^*(r)}\Big\|\frac{1}{n_k}\sum_{i=1}^{n_k}D_{k*}^{-1}\big(\nabla L_k(Z_i^{(k)}, \theta) - \nabla L_k(Z_i^{(k)}, \theta_0)\big) - D_{k*}(\theta - \theta_0)\Big\| \geq \epsilon_{r,t}^{(k)}\Bigg\},$$
where $\Theta^*(r) = \{\theta : \|D_{k*}(\theta - \theta_0)\| \leq r\}$ and $\epsilon_{r,t}^{(k)} = M_*B_U^3r^2 + 6\sigma_2r\sqrt{(4d + 2t)/n_k}$.
According to Proposition 3.1 in Spokoiny (2013), we have
$$P\big(E_{r,t}^{(k)}\big) \leq \exp(-t) \tag{29}$$
for $k \in K_0$ and $t \leq \Delta_k$ with $\Delta_k = -2d + u_2n_k/2$. By (28), there is some $C$ such that $P\big(\max_k\|\hat\theta_k - \theta_k^*\| \geq Cn^{-(1-\nu_0-\nu_1)/2}\big) \to 0$.
Let $r_n = B_UCn^{-(1-\nu_0-\nu_1)/2}$. Then we have
$$\bigcup_{k\in K_0}\big\{\hat\theta_k\notin\Theta^*(r_n)\big\} \subset \Big\{\max_k\|\hat\theta_k - \theta_k^*\| \geq Cn^{-\frac{1-\nu_0-\nu_1}{2}}\Big\}.$$
This implies
$$P\Big(\bigcup_{k\in K_0}\big\{\hat\theta_k\notin\Theta^*(r_n)\big\}\Big) \leq P\Big(\max_k\|\hat\theta_k - \theta_k^*\| \geq Cn^{-\frac{1-\nu_0-\nu_1}{2}}\Big) \to 0. \tag{30}$$
Letting $t_n = \min\{n^{\nu_0}, \min_k\Delta_k\}$, we have
$$P\Big(\bigcup_{k\in K_0}E_{r_n,t_n}^{(k)}\Big) \leq K\exp(-t_n) \to 0 \tag{31}$$
according to (29) and the rate conditions on $K$. By the definition of $\hat\theta_k$, we have $\sum_{i=1}^{n_k}\nabla L_k(Z_i^{(k)}, \hat\theta_k)/n_k = 0$. Combining this with (30) and (31), we have
$$P\Bigg(\max_{k\in K_0}\Big\|\frac{1}{n_k}\sum_{i=1}^{n_k}D_{k*}^{-1}\nabla L_k(Z_i^{(k)}, \theta_0) + D_{k*}(\hat\theta_k - \theta_0)\Big\| \geq \xi_n\Bigg) \to 0, \tag{32}$$
where $\xi_n = M_*B_U^3r_n^2 + 6\sigma_2r_n\sqrt{(4d + 2t_n)/n_k} = o(1/\sqrt{n})$ because $\nu_0 + \nu_1 < 1/2$.
Thus
$$\max_{k\in K_0}\Big\|\frac{1}{n_k}\sum_{i=1}^{n_k}D_{k*}^{-2}\nabla L_k(Z_i^{(k)}, \theta_0) + (\hat\theta_k - \theta_0)\Big\| = o_P\Big(\frac{1}{\sqrt{n}}\Big), \tag{33}$$
and this implies the result of the proposition.
Figure 1: Estimation results under different $t$. (a) IVW estimation using all data sources for the treatment effect on PD: red solid line; estimate produced by $\tilde\theta$: green dashed line; estimate produced by $\hat\theta$: blue dotted line. (b) The same for the treatment effect on AL.
Table 1: NB and SSE with least squares regression in the presence of biased sources (results are multiplied by 10). Columns: $n^*$; NB and SSE for the naive, oracle, iFusion, $\tilde\theta$ and $\hat\theta$ estimators.
Table 2: NB and SSE with least squares regression and no biased sources (results are multiplied by 10)

          oracle       iFusion      θ̃            θ̂
n*        NB    SSE    NB    SSE    NB    SSE    NB    SSE
d = 3, K = 10
Table 3: NB and SSE with logistic regression in the presence of biased sources (results are multiplied by 10)

          naive        oracle       iFusion      θ̃            θ̂
n*        NB    SSE    NB    SSE    NB    SSE    NB    SSE    NB    SSE
d = 3, K = 10
100      4.79   2.32  0.24   2.73  0.24   3.80  0.84   2.48  0.15   2.78
200      5.89   1.89  0.13   1.83  0.18   2.34  0.58   1.88  0.09   1.89
500      7.36   1.62  0.02   1.10  0.05   1.35  0.42   1.23  0.02   1.12
d = 3, K = 30
100      4.77   1.34  0.22   1.62  0.28   3.86  0.99   1.37  0.13   1.61
200      5.88   1.11  0.10   1.08  0.16   2.21  0.73   0.97  0.09   1.08
500      7.37   0.84  0.06   0.65  0.03   1.20  0.49   0.67  0.06   0.65
d = 18, K = 10
100      6.19  16.47  1.97  24.68  1.90  35.18  2.54  16.44  1.48  24.57
200      7.12  11.91  0.82  13.70  0.93  19.59  1.75  11.04  0.80  13.64
500      9.11   8.13  0.28   8.04  0.31  11.34  1.07   7.13  0.27   8.01
d = 18, K = 30
100      6.24   9.63  1.86  14.24  1.90  35.18  2.85   9.25  1.43  13.54
200      7.15   6.96  0.78   8.03  0.93  19.59  2.04   6.16  0.74   7.98
500      9.12   4.62  0.29   4.59  0.31  11.34  1.33   3.91  0.27   4.57
Table 4: NB and SSE with logistic regression and no biased sources (results are multiplied by 10)

          oracle       iFusion      θ̃            θ̂
n*        NB    SSE    NB    SSE    NB    SSE    NB    SSE
Table 5: Bias and SE in Mendelian randomization with invalid instruments (results are multiplied by 10)

        MR-Egger   Weighted Median   IVW     Weighted Mode   RAPS    θ̃       θ̂
Bias    6.48       -2.05             14.44   -1.30           16.95   0.71    -0.26
SE      4.79       1.09              1.73    82.14           2.34    1.79    1.17
Battey, H., J. Fan, H. Liu, J. Lu, and Z. Zhu (2018). Distributed testing and estimation under sparse high dimensional models. The Annals of Statistics 46(3), 1352.
Berkey, C., D. Hoaglin, A. Antczak-Bouckoms, F. Mosteller, and G. Colditz (1998). Meta-analysis of multiple outcomes by regression with random effects. Statistics in Medicine 17(22), 2537-2550.
Bickel, P. J., C. A. J. Klaassen, Y. Ritov, and J. A. Wellner (1993). Efficient and Adaptive Estimation for Semiparametric Models, Volume 4. Johns Hopkins University Press, Baltimore.
Bowden, J., G. Davey Smith, and S. Burgess (2015). Mendelian randomization with invalid instruments: effect estimation and bias detection through Egger regression. International Journal of Epidemiology 44(2), 512-525.
Bowden, J., G. Davey Smith, P. C. Haycock, and S. Burgess (2016). Consistent estimation in Mendelian randomization with some invalid instruments using a weighted median estimator. Genetic Epidemiology 40(4), 304-314.
Burgess, S., C. N. Foley, E. Allara, J. R. Staley, and J. M. Howson (2020). A robust and efficient method for Mendelian randomization with hundreds of genetic variants. Nature Communications 11(1), 1-11.
Burgess, S. and S. G. Thompson (2015). Multivariable Mendelian randomization: the use of pleiotropic genetic variants to estimate causal effects. American Journal of Epidemiology 181(4), 251-260.
Chatterjee, N., Y.-H. Chen, P. Maas, and R. J. Carroll (2016). Constrained maximum likelihood estimation for model calibration using summary-level information from external big data sources. Journal of the American Statistical Association 111(513), 107-117.
Chen, X. and W.-X. Zhou (2020). Robust inference via multiplier bootstrap. The Annals of Statistics 48(3), 1665-1691.
Claggett, B., M. Xie, and L. Tian (2014). Meta-analysis with fixed, unknown, study-specific parameters. Journal of the American Statistical Association 109(508), 1660-1671.
Fan, J., F. Han, and H. Liu (2014). Challenges of big data analysis. National Science Review 1(2), 293-314.
Gormley, M., T. Dudding, E. Sanderson, R. M. Martin, S. Thomas, J. Tyrrell, A. R. Ness, P. Brennan, M. Munafò, M. Pring, et al. (2020). A multivariable Mendelian randomization analysis investigating smoking and alcohol consumption in oral and oropharyngeal cancer. Nature Communications 11(1), 1-10.
Guo, Z., H. Kang, T. Tony Cai, and D. S. Small (2018). Confidence intervals for causal effects with invalid instruments by using two-stage hard thresholding with voting. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 80(4), 793-815.
Hampel, F. R., E. M. Ronchetti, P. J. Rousseeuw, and W. A. Stahel (2005). Robust Statistics: The Approach Based on Influence Functions. John Wiley & Sons, New York.
Han, C. (2008). Detecting invalid instruments using L1-GMM. Economics Letters 101(3), 285-287.
Hanley, J. A. and G. Thériault (2000). Simpson's paradox in meta-analysis. Epidemiology 11(5), 613.
Hartwig, F. P., G. Davey Smith, and J. Bowden (2017). Robust inference in summary data Mendelian randomization via the zero modal pleiotropy assumption. International Journal of Epidemiology 46(6), 1985-1998.
Jordan, M. I. (2013). On statistics, computation and scalability. Bernoulli 19(4), 1378-1390.
Kang, H., A. Zhang, T. T. Cai, and D. S. Small (2016). Instrumental variables estimation with some invalid instruments and its application to Mendelian randomization. Journal of the American Statistical Association 111(513), 132-144.
Katan, M. B. (2004). Commentary: Mendelian randomization, 18 years on. International Journal of Epidemiology 33(1), 10-11.
Kundu, P., R. Tang, and N. Chatterjee (2019). Generalized meta-analysis for multiple regression models across studies with disparate covariate information. Biometrika 106(3), 567-585.
Lamport, L., R. Shostak, and M. Pease (1982). The Byzantine generals problem. ACM Transactions on Programming Languages and Systems 4(3), 382-401.
Lawlor, D. A., K. Wade, M. C. Borges, T. Palmer, F. P. Hartwig, G. Hemani, and J. Bowden (2019). A Mendelian randomization dictionary: Useful definitions and descriptions for undertaking, understanding and interpreting Mendelian randomization studies.
Lesseur, C., B. Diergaarde, A. F. Olshan, V. Wünsch-Filho, A. R. Ness, G. Liu, M. Lacko, J. Eluf-Neto, S. Franceschi, P. Lagiou, et al. (2016). Genome-wide association analyses identify new susceptibility loci for oral cavity and pharyngeal cancer. Nature Genetics 48(12), 1544-1550.
Lin, D.-Y. and P. F. Sullivan (2009). Meta-analysis of genome-wide association studies with overlapping subjects. The American Journal of Human Genetics 85(6), 862-872.
Lin, D.-Y. and D. Zeng (2010). On the relative efficiency of using summary statistics versus individual-level data in meta-analysis. Biometrika 97(2), 321-332.
Lin, H.-W. and Y.-H. Chen (2014). Adjustment for missing confounders in studies based on observational databases: 2-stage calibration combining propensity scores from primary and validation data. American Journal of Epidemiology 180(3), 308-317.
Lindsay, B. G. (1994). Efficiency versus robustness: the case for minimum Hellinger distance and related methods. The Annals of Statistics 22(2), 1081-1114.
Liu, D., R. Y. Liu, and M. Xie (2015). Multivariate meta-analysis of heterogeneous studies using only summary statistics: efficiency and robustness. Journal of the American Statistical Association 110(509), 326-340.
Locke, A. E., B. Kahali, S. I. Berndt, A. E. Justice, T. H. Pers, F. R. Day, C. Powell, S. Vedantam, M. L. Buchkovich, J. Yang, et al. (2015). Genetic studies of body mass index yield new insights for obesity biology. Nature 518(7538), 197-206.
Mathew, T. and K. Nordstrom (1999). On the equivalence of meta-analysis using literature and using individual patient data. Biometrics 55(4), 1221-1223.
Minelli, C., F. Del Greco M, D. A. van der Plaat, J. Bowden, N. A. Sheehan, and J. Thompson (2021). The use of two-sample methods for Mendelian randomization analyses on single large datasets. International Journal of Epidemiology 50(5), 1651-1659.
Olkin, I. and A. Sampson (1998). Comparison of meta-analysis versus analysis of variance of individual patient data. Biometrics 54(1), 317-322.
Qi, G. and N. Chatterjee (2019). Mendelian randomization analysis using mixture models for robust and efficient estimation of causal effects. Nature Communications 10(1), 1-10.
Qin, J., H. Zhang, P. Li, D. Albanes, and K. Yu (2015). Using covariate-specific disease prevalence information to increase the power of case-control studies. Biometrika 102(1), 169-180.
Rees, J. M., A. M. Wood, and S. Burgess (2017). Extending the MR-Egger method for multivariable Mendelian randomization to correct for both measured and unmeasured pleiotropy. Statistics in Medicine 36(29), 4705-4718.
Sanderson, E., G. Davey Smith, F. Windmeijer, and J. Bowden (2019). An examination of multivariable Mendelian randomization in the single-sample and two-sample summary data settings. International Journal of Epidemiology 48(3), 713-727.
Shen, J., R. Y. Liu, and M.-g. Xie (2020). iFusion: Individualized fusion learning. Journal of the American Statistical Association 115(531), 1251-1267.
Sheng, Y., Y. Sun, D. Deng, and C.-Y. Huang (2020). Censored linear regression in the presence or absence of auxiliary survival information. Biometrics 76(3), 734-745.
Silvapulle, M. J. (1981). On the existence of maximum likelihood estimators for the binomial response models. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 43(3), 310-313.
Singh, K., M. Xie, and W. E. Strawderman (2005). Combining information from independent sources through confidence distributions. The Annals of Statistics 33(1), 159-183.
Spokoiny, V. (2012). Parametric estimation. Finite sample theory. The Annals of Statistics 40(6), 2877-2909.
Spokoiny, V. (2013). Bernstein-von Mises theorem for growing parameter dimension. arXiv preprint.
Sterne, J. A., D. Gavaghan, and M. Egger (2000). Publication and related bias in meta-analysis: power of statistical tests and prevalence in the literature. Journal of Clinical Epidemiology 53(11), 1119-1129.
Tu, J., W. Liu, X. Mao, and X. Chen (2021). Variance reduced median-of-means estimator for Byzantine-robust distributed inference. Journal of Machine Learning Research 22(84), 1-67.
Van der Vaart, A. W. (2000). Asymptotic Statistics, Volume 3. Cambridge University Press.
Vershynin, R. (2018). High-dimensional Probability: An Introduction with Applications in Data Science, Volume 47. Cambridge University Press, Cambridge.
Wainwright, M. J. (2019). High-dimensional Statistics: A Non-asymptotic Viewpoint, Volume 48. Cambridge University Press.
Wang, C., M.-H. Chen, E. Schifano, J. Wu, and J. Yan (2016). Statistical methods and computing for big data. Statistics and Its Interface 9(4), 399.
Windmeijer, F., H. Farbmacher, N. Davies, and G. Davey Smith (2019). On the use of the lasso for instrumental variables estimation with some invalid instruments. Journal of the American Statistical Association 114(527), 1339-1350.
Wootton, R. E., R. C. Richmond, B. G. Stuijfzand, R. B. Lawn, H. M. Sallis, G. M. Taylor, G. Hemani, H. J. Jones, S. Zammit, G. D. Smith, et al. (2020). Evidence for causal effects of lifetime smoking on risk for depression and schizophrenia: a Mendelian randomisation study. Psychological Medicine 50(14), 2435-2443.
Xie, M., K. Singh, and W. E. Strawderman (2011). Confidence distributions and a unifying framework for meta-analysis. Journal of the American Statistical Association 106(493), 320-333.
Yang, S. and P. Ding (2020). Combining multiple observational data sources to estimate causal effects. Journal of the American Statistical Association 115(531), 1540-1554.
Ye, T., J. Shao, and H. Kang (2021). Debiased inverse-variance weighted estimator in two-sample summary-data Mendelian randomization. The Annals of Statistics 49(4), 2079-2100.
Yin, D., Y. Chen, R. Kannan, and P. Bartlett (2018). Byzantine-robust distributed learning: Towards optimal statistical rates. In International Conference on Machine Learning, pp. 5650-5659. PMLR.
Yuan, M. and Y. Lin (2006). Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 68(1), 49-67.
Zhai, Y. and P. Han (2022). Data integration with oracle use of external information from heterogeneous populations. Journal of Computational and Graphical Statistics 31(4), 1001-1012.
Zhang, H., L. Deng, M. Schiffman, J. Qin, and K. Yu (2020). Generalized integration model for improved statistical inference by leveraging external summary data. Biometrika 107(3), 689-703.
Zhang, H., J. Qin, S. I. Berndt, D. Albanes, L. Deng, M. H. Gail, and K. Yu (2019). On Mendelian randomization analysis of case-control study. Biometrics 76, 380-391.
Zhao, Q., J. Wang, G. Hemani, J. Bowden, and D. S. Small (2020). Statistical inference in two-sample summary-data Mendelian randomization using robust adjusted profile score. The Annals of Statistics 48(3), 1742-1769.
Zhou, W.-X., K. Bose, J. Fan, and H. Liu (2018). A new perspective on robust M-estimation: Finite sample theory and applications to dependence-adjusted multiple testing. The Annals of Statistics 46(5), 1904.
Zhu, X., F. Li, and H. Wang (2021). Least-square approximation for a distributed system. Journal of Computational and Graphical Statistics 30(4), 1004-1018.
Zou, H. (2006). The adaptive lasso and its oracle properties. Journal of the American Statistical Association 101(476), 1418-1429.
| [
"https://github.com/MRCIEU/"
] |
[
"COADJOINT ORBITS OF LIE ALGEBRAS AND CARTAN CLASS",
"COADJOINT ORBITS OF LIE ALGEBRAS AND CARTAN CLASS"
] | [
"Michel Goze ",
"Elisabeth Remm "
] | [] | [] | We study the coadjoint orbits of a Lie algebra in terms of Cartan class. In fact, the tangent space to a coadjoint orbit O(α) at the point α corresponds to the characteristic space associated to the left invariant form α and its dimension is the even part of the Cartan class of α. We apply this remark to determine Lie algebras such that all the non singular orbits (non reduced to a point) have the same dimension, in particular when this dimension is 2 or 4. We determine also the Lie algebras of dimension 2n or 2n + 1 having an orbit of dimension 2n. | 10.3842/sigma.2019.002 | [
"https://arxiv.org/pdf/1806.07553v2.pdf"
] | 119,725,626 | 1806.07553 | d8874286b0c4e2eb91c5814015f06ef31ae106b3 |
COADJOINT ORBITS OF LIE ALGEBRAS AND CARTAN CLASS
Michel Goze
Elisabeth Remm
arXiv:1806.07553v1 [math.RA] 20 Jun 2018
Keywords: Lie algebras, coadjoint representation, contact forms, Frobeniusian Lie algebras, Cartan class. MSC: 17Bxx, 53D10, 53D05.
Abstract. We study the coadjoint orbits of a Lie algebra in terms of Cartan class. In fact, the tangent space to a coadjoint orbit O(α) at the point α corresponds to the characteristic space associated to the left invariant form α, and its dimension is the even part of the Cartan class of α. We apply this remark to determine the Lie algebras such that all the nonsingular orbits (those not reduced to a point) have the same dimension, in particular when this dimension is 2 or 4. We also determine the Lie algebras of dimension 2n or 2n + 1 having an orbit of dimension 2n.
1. Introduction
Let G be a connected Lie group, g its Lie algebra and g * the dual vector space of g. We identify g with the Lie algebra of left invariant vector fields on G and g * with the vector space of left invariant Pfaffian forms on G. The Lie group G acts on g * by the coadjoint action. If α belongs to g * , its coadjoint orbit O(α) associated with this action is reduced to a point if α is closed for the adjoint cohomology of g. If not, the coadjoint orbit is an even-dimensional manifold provided with a symplectic structure. Such manifolds are interesting because any symplectic homogeneous manifold is of type O(α) for some Lie group G and α ∈ g * . From the Kirillov theory, if G is a connected and simply connected nilpotent Lie group, the set of coadjoint orbits coincides with the set of equivalence classes of unitary representations of this Lie group. In this work, we establish a link between the dimension of the coadjoint orbit of the form α and its Cartan class cl(α) in Elie Cartan's sense. More precisely,

dim O(α) = 2 [cl(α)/2],

where [x] denotes the integer part of x. Recall that the Cartan class of α corresponds to the number of independent Pfaffian forms needed to define α and its differential dα, and it is equal to the codimension of the characteristic space. The formula for dim O(α) results from a natural relation between this characteristic space and the tangent space at the point α to the orbit O(α).
As applications, we describe classes of Lie algebras with additional properties related to their coadjoint orbits. For example, we determine all Lie algebras whose nonsingular orbits are all of dimension 2 or 4, and also the Lie algebras of dimension 2p or 2p + 1 admitting a maximal orbit of dimension 2p, that is, admitting α ∈ g * such that cl(α) ≥ 2p.
2. Dimension of coadjoint orbits and Cartan class
2.1. Cartan class of a Pfaffian form. Let M be an n-dimensional differentiable manifold and α a Pfaffian form on M, that is, a differential form of degree 1. The characteristic space of α at a point x ∈ M is the linear subspace C x (α) of the tangent space T x (M) of M at the point x defined by

C x (α) = A(α(x)) ∩ A(dα(x)),

where A(α(x)) = {X x ∈ T x (M), α(x)(X x ) = 0} is the associated subspace of α(x),

A(dα(x)) = {X x ∈ T x (M), i(X x )dα(x) = 0}

is the associated subspace of dα(x), and i(X x )dα(x)(Y x ) = dα(x)(X x , Y x ).

Definition 1. Let α be a Pfaffian form on the differential manifold M. The Cartan class of α at the point x ∈ M is the codimension of the characteristic space C x (α) in the tangent space T x (M) to M at the point x. We denote it by cl(α)(x).
The function x → cl(α)(x) takes positive integer values and is semi-continuous, that is, if x 1 ∈ M is in a suitable neighborhood of x, then cl(α)(x 1 ) ≥ cl(α)(x).
The characteristic system of α at the point x of M is the subspace C * x (α) of the dual T * x (M) of T x (M) orthogonal to C x (α):

C * x (α) = {ω(x) ∈ T * x (M), ω(x)(X x ) = 0, ∀X x ∈ C x (α)}.

Then cl(α)(x) = dim C * x (α). The class is either odd, cl(α)(x) = 2p + 1, or even, cl(α)(x) = 2p. In the first case, there exists a basis {ω 1 (x) = α(x), ω 2 (x), · · · , ω n (x)} of T * x (M) such that

dα(x) = ω 2 (x) ∧ ω 3 (x) + · · · + ω 2p (x) ∧ ω 2p+1 (x)

and C * x (α) = R{α(x)} + A * (dα(x)). In the second case, there exists a basis {ω 1 (x) = α(x), ω 2 (x), · · · , ω n (x)} of T * x (M) such that

dα(x) = α(x) ∧ ω 2 (x) + · · · + ω 2p−1 (x) ∧ ω 2p (x)

and C * x (α) = A * (dα(x)).
If the function cl(α)(x) is constant, that is, cl(α)(x) = cl(α)(y) for any x, y ∈ M, we say that the Pfaffian form α is of constant class and we denote by cl(α) this constant. The distribution
x → C x (α) is then regular and it is an integrable distribution of dimension n − cl(α), called the characteristic distribution of α. It is equivalent to say that the Pfaffian system
x → C * x (α) is integrable and of dimension cl(α).
If M = G is a connected Lie group, we identify its Lie algebra g with the space of left invariant vector fields and its dual g * with the space of left invariant Pfaffian forms. Then if α ∈ g * , the differential dα is the 2-differential left invariant form belonging to Λ 2 (g * ) and defined by
dα(X, Y ) = −α[X, Y ]
for any X, Y ∈ g. It is obvious that any left invariant form α ∈ g * is of constant class, and we will speak of the Cartan class cl(α) of a linear form α ∈ g * . We have:
• cl(α) = 2p + 1 if and only if α ∧ (dα) p ≠ 0 and (dα) p+1 = 0,
• cl(α) = 2p if and only if (dα) p ≠ 0 and α ∧ (dα) p = 0.
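These two wedge-product criteria reduce to plain linear algebra: if M is the matrix M_ij = α([X_i, X_j]) representing dα (up to an irrelevant sign, since dα(X, Y) = −α[X, Y]), then cl(α) = rank(M) + 1 when α does not vanish on ker M, and cl(α) = rank(M) otherwise; the orbit dimension is then rank(M), the even part of the class (see Corollary 8 below). The following Python sketch implements this test; the helper name cartan_class and the encoding of brackets as coordinate vectors C[i][j] are our conventions, not the paper's.

```python
# Minimal sketch: Cartan class of a linear form from structure constants.
import numpy as np
from scipy.linalg import null_space

def cartan_class(C, alpha, tol=1e-10):
    """C[i][j] is the coordinate vector of [X_i, X_j]; alpha is a covector."""
    n = len(alpha)
    M = np.array([[np.dot(alpha, C[i][j]) for j in range(n)] for i in range(n)])
    r = np.linalg.matrix_rank(M, tol)
    K = null_space(M)                      # basis of A(d alpha)
    odd = np.any(np.abs(alpha @ K) > tol)  # alpha nonzero on the kernel?
    return r + 1 if odd else r

# Heisenberg algebra h_3: [X_1, X_2] = X_3.
n = 3
C = [[np.zeros(n) for _ in range(n)] for _ in range(n)]
C[0][1] = np.array([0.0, 0.0, 1.0]); C[1][0] = -C[0][1]
omega3 = np.array([0.0, 0.0, 1.0])
cl = cartan_class(C, omega3)
print("cl(omega_3) =", cl)                # 3: a contact form on h_3
print("dim O(omega_3) =", 2 * (cl // 2))  # 2
```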
Definition 3. Let g be an n-dimensional Lie algebra. A form α ∈ g * is called a contact form if n = 2p + 1 and cl(α) = 2p + 1, and a Frobeniusian form if n = 2p and cl(α) = 2p.

If α ∈ g * is neither a contact nor a Frobeniusian form, the characteristic space C(α) = C e (α) at the unit e of G is not trivial and the characteristic distribution on G given by C x (α), x ∈ G, has a constant non zero dimension. As it is integrable, the subspace C(α) is a Lie subalgebra of g.
Proposition 4. Let α ∈ g * be a linear form of maximal class, that is ∀β ∈ g * , cl(α) ≥ cl(β).
Then C(α) is an abelian subalgebra of g.
Proof. Assume that α is a linear form of maximal class. If cl(α) = 2p + 1, there exists a basis {ω 1 = α, ω 2 , · · · , ω n } of g * such that dα = ω 2 ∧ ω 3 + · · · + ω 2p ∧ ω 2p+1 and C * (α) is generated by {ω 1 , · · · , ω 2p+1 }. If the subalgebra C(α) were not abelian, there would exist j, 2p + 2 ≤ j ≤ n, such that dω j ∧ ω 1 ∧ · · · ∧ ω 2p+1 ≠ 0. Then

cl(α + tω j ) > cl(α)

for some t ∈ R. But α is of maximal class. Then C(α) is abelian. The proof when cl(α) = 2p is similar.
Recall some properties of the class of a linear form on a Lie algebra. The proofs of these statements are given in [19, 14, 16].

• If g is a finite dimensional nilpotent Lie algebra, then the class of any non zero α ∈ g * is always odd.

• A real or complex finite dimensional nilpotent Lie algebra is never a Frobenius Lie algebra. More generally, an unimodular Lie algebra is not Frobeniusian [10].
• Let g be a real compact Lie algebra. Any non trivial α ∈ g * has an odd Cartan class.
• Let g be a real or complex semi-simple Lie algebra of rank r. Then any α ∈ g * satisfies cl(α) ≤ n − r + 1. In particular, a semi-simple Lie algebra is never a Frobenius algebra. A semi-simple Lie algebra is a contact Lie algebra if its rank is 1, that is, g is isomorphic to sl(2, R) or so(3).
• The Cartan class of any non trivial linear form on a simple non exceptional Lie algebra of rank r satisfies cl(α) ≥ 2r. Moreover, if g is of type A r , there exists a linear form of class 2r which reaches this lower bound.
• Any 2p + 1-dimensional contact real Lie algebra such that any non trivial linear form is a contact form is isomorphic to so(3).
2.2. Cartan class and the index of a Lie algebra. For any α ∈ g * , we consider the stabilizer g α = {X ∈ g, α • adX = 0} and the minimal dimension d of g α when α lies in g * . It is an invariant of g called the index of g. If α satisfies dim g α = d, then g α is an abelian subalgebra of g. In terms of the Cartan class, g α is the associated subspace of dα, that is, g α = A(dα), so the minimality is realized by a form of maximal class and we have d = n − cl(α) + 1 if the Cartan class cl(α) is odd, and d = n − cl(α) if cl(α) is even. In particular:

(1) If g is a 2p-dimensional Frobeniusian Lie algebra, then the maximal class is 2p and d = 0.

(2) If g is a (2p + 1)-dimensional contact Lie algebra, then d = n − n + 1 = 1.
This relation between index and Cartan class is useful to compute sometimes this index. For example we have Proposition 5. Let L n or Q 2p be the naturally graded filiform Lie algebra. Their index satisfy
(1) d(L n ) = n − 2, (2) d(Q 2p ) = 2.
In fact L n is defined in a basis {e 0 , · · · , e n−1 } by [e 0 , e i ] = e i+1 for i = 1, · · · , n − 2, and we have cl(α) ∈ {1, 3} and d = n − 2 for any α ∈ L * n . The second algebra Q 2p is defined in the basis {e 0 , · · · , e 2p−1 } by [e 0 , e i ] = e i+1 for i = 1, · · · , 2p − 3, and [e i , e 2p−1−i ] = (−1) i−1 e 2p−1 for i = 1, · · · , p − 1. In this case we have cl(α) ∈ {1, 3, 2p − 1} for any α ∈ Q * 2p and d = 2. Let us note that a direct computation of these indices is given in [1]; a small randomized check is sketched below.
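As announced, Proposition 5 can be tested numerically: the index equals n minus the maximal rank of M(α)_ij = α([e_i, e_j]) over generic covectors α, since g_α = ker M(α). The sketch below (ours, independent of the computation in [1]) checks Q_6, i.e. Q_{2p} with p = 3.

```python
# Randomized check of d(Q_6) = 2 via the rank of M(alpha)_ij = alpha([e_i, e_j]).
import numpy as np

def bracket_matrix(C, alpha):
    n = len(alpha)
    return np.array([[alpha @ C[i][j] for j in range(n)] for i in range(n)])

n = 6  # Q_6 with basis e_0, ..., e_5
C = [[np.zeros(n) for _ in range(n)] for _ in range(n)]
def setbr(i, j, v):
    C[i][j] = v; C[j][i] = -v
for i in range(1, 4):                       # [e_0, e_i] = e_{i+1}, i = 1, ..., 3
    setbr(0, i, np.eye(n)[i + 1])
for i in range(1, 3):                       # [e_i, e_{5-i}] = (-1)^{i-1} e_5
    setbr(i, 5 - i, (-1) ** (i - 1) * np.eye(n)[5])

rng = np.random.default_rng(0)
max_rank = max(np.linalg.matrix_rank(bracket_matrix(C, rng.standard_normal(n)))
               for _ in range(200))
print("index of Q_6 =", n - max_rank)       # expected: 2
```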
2.3. The coadjoint representation. Let G be a connected Lie group and g its Lie algebra. The adjoint representation of G on g is the homomorphism of groups Ad : G → Aut(g) defined as follows. For every x ∈ G, let A(x) be the automorphism of G given by A(x)(y) = xyx −1 . This map is differentiable and its tangent map at the identity e of G is an automorphism of g. We denote it by Ad(x).

Definition 6. The coadjoint representation of G on the dual g * of g is the homomorphism of groups Ad * : G → Aut(g * ) defined by

⟨Ad * (x)α, X⟩ = ⟨α, Ad(x −1 )X⟩

for any α ∈ g * and X ∈ g.
The coadjoint representation is sometimes called the K-representation. For α ∈ g * , we denote by O(α) its orbit, called the coadjoint orbit, for the coadjoint representation. The following result is classical [22]: any coadjoint orbit is an even-dimensional differentiable manifold endowed with a symplectic form.

Let us compute the tangent space to this manifold O(α) at the point α. Any β ∈ O(α) is written β = α • Ad(x −1 ) for some x ∈ G. The map ρ : G → g * defined by ρ(x) = α • Ad(x) is differentiable and its tangent map at the identity of G is given by ρ T e (X) = i(X)dα for any X ∈ g, with i(X)dα(Y ) = −α[X, Y ]. Then the tangent space to O(α) at the point α corresponds to A * (dα) = {ω ∈ g * , ω(X) = 0, ∀X ∈ A(dα)}.

Proposition 7. Consider a non zero α ∈ g * . The tangent space to O(α) at the point α is isomorphic to the dual space A * (dα) of the associated space A(dα) of dα.

Corollary 8. For any α ∈ g * , dim O(α) = 2 [cl(α)/2]; that is, dim O(α) = cl(α) if cl(α) is even and dim O(α) = cl(α) − 1 if cl(α) is odd.

An immediate application is:

Proposition 9. Any (2p + 1)-dimensional Lie algebra all of whose coadjoint orbits O(α), α ≠ 0, are of maximal dimension 2p is isomorphic to so(3) or sl(2, R), and then of dimension 3.
Proof. From [19, 14], any contact Lie algebra such that any non trivial linear form is a contact form is isomorphic to so(3). Assume now that any non trivial form on g is of Cartan class equal to 2p or 2p + 1. With arguments similar to those developed in [19, 14], we prove that such a Lie algebra is semi-simple. But if g is simple of rank r, we have 2r ≤ cl(α) ≤ 2p + 2 − r. Then r = 1 and g is isomorphic to sl(2, R).

Remark. Assume that dim g = 2p and that all the nonsingular coadjoint orbits are also of dimension 2p. Then for any non trivial α ∈ g * , cl(α) = 2p. If I is a non trivial abelian ideal of g, there exists ω ∈ g * , ω ≠ 0, such that ω(X) = 0 for any X ∈ I. The Cartan class of this form ω is smaller than 2p. Then g is semi-simple. But the behavior of the Cartan class on simple Lie algebras leads to a contradiction.
We deduce also from the previous corollary:
Proposition 10.
(1) If g is isomorphic to the (2p + 1)-dimensional Heisenberg algebra, then any nontrivial coadjoint orbit is of dimension 2p.
(2) If g is isomorphic to the graded filiform algebra L n , then any non trivial coadjoint orbit is of dimension 2.
(3) If g is isomorphic to the graded filiform algebra Q n , then any non trivial coadjoint orbit is of dimension 2 or n − 2.
(4) If g is a (2p + 1)-dimensional 2-step nilpotent Lie algebra with a coadjoint orbit of dimension 2p, then g is isomorphic to the (2p+1)-dimensional Heisenberg Lie algebra. (see [19]).
(5) If g is a complex classical simple Lie algebra of rank r, then the maximal dimension of the coadjoint orbits is equal to n − r if this number is even, and to n − r − 1 otherwise (see [14]).
3. Lie algebras whose coadjoint orbits are of dimension 2 or 0
In this section, we determine all Lie algebras whose coadjoint orbits are of dimension 2 or 0. This problem was initiated in [4, 5]. It is equivalent to saying that the Cartan class of any linear form is smaller than or equal to 3. If g is a Lie algebra having this property, any direct product g 1 = g ⊕ I of g by an abelian ideal I also satisfies this property. We shall therefore describe these Lie algebras up to an abelian direct factor, that is, indecomposable Lie algebras. It is easy to see that for any Lie algebra of dimension 2 or 3, the dimensions of the coadjoint orbits are equal to 2 or 0. We have also seen:
Proposition 11. Let g be a simple Lie algebra of rank 1. Then for any α ≠ 0 ∈ g * , dim O(α) = 2. Conversely, if g is a Lie algebra such that dim O(α) = 2 for any α ≠ 0 ∈ g * , then g is simple of rank 1.

Now we examine the general case. Assume that g is a Lie algebra of dimension greater than or equal to 4 such that for any nonzero α ∈ g * we have cl(α) = 3, 2 or 1.
Assume in a first step that cl(α) = 2 for any non closed α ∈ g * . Let α be a non zero 1-form with cl(α) = 2. Then there exists a basis {α = ω 1 , · · · , ω n } of g * such that dα = dω 1 = ω 1 ∧ ω 2 . This implies d(dω 1 ) = 0 = −ω 1 ∧ dω 2 . Therefore dω 2 = ω 1 ∧ ω with ω ∈ g * . As cl(ω 2 ) ≤ 2, ω 2 ∧ ω = 0. If {X 1 , X 2 , · · · , X n } is the dual basis of {α = ω 1 , · · · , ω n }, then A(dω 1 ) = K{X 3 , · · · , X n } is an abelian subalgebra of g. Suppose [X 1 , X 2 ] = X 1 + aX 2 + U where U, X 1 , X 2 are linearly independent. The dual form of U would be of class 3, so U = 0. If a = 0, after a change of basis we can assume that [X 1 , X 2 ] = X 1 . So dω 2 = ω 1 ∧ ω and ω ∧ ω 2 = 0 imply that dω 2 = 0. This implies that A(dω 1 ) is an abelian ideal of codimension 2. Let β be in A(dω 1 ) * with dβ ≠ 0. If such a form does not exist, then KX 1 ⊕ A(dω 1 ) is an abelian ideal of codimension 1. Otherwise dβ = ω 1 ∧ β 1 + ω 2 ∧ β 2 with β 1 , β 2 ∈ A(dω 1 ) * . As β ∧ dβ = (dβ) 2 = 0, we get β 1 ∧ β 2 = 0, which implies dβ = (aω 1 + bω 2 ) ∧ β. But d(dβ) = 0 implies a dω 1 ∧ β = 0 = a ω 1 ∧ ω 2 ∧ β, thus a = 0 and dβ = bω 2 ∧ β. We deduce that [X 1 , A(dω 1 )] = 0 and K{X 1 } ⊕ A(dω 1 ) is an abelian ideal of codimension 1.
Assume now that there exists ω of class 3. There exists a basis B = {ω 1 , ω 2 , ω 3 = ω, · · · , ω n } of g * such that dω = dω 3 = ω 1 ∧ ω 2 and the subalgebra A(dω 3 ) = K{X 3 , · · · , X n } is abelian. If {X 1 , · · · , X n } is the dual basis of B, we can assume that [X 1 , X 2 ] = X 3 . As A(dω 3 ) is an abelian subalgebra of g, for any α ∈ g * we have dα = ω 1 ∧ α 1 + ω 2 ∧ α 2 with α 1 , α 2 ∈ A(dω 3 ) * . But cl(α) ≤ 3. Therefore ω 1 ∧ α 1 ∧ ω 2 ∧ α 2 = 0, which implies that α 1 ∧ α 2 = 0. We deduce that for any α ∈ A(dω 3 ) * there exists α 1 ∈ A(dω 3 ) * such that dα = (aω 1 + bω 2 ) ∧ α 1 .

Since g is indecomposable, for any X ∈ A(dω 3 ) with X ∉ D(g), there exists X 12 ∈ R{X 1 , X 2 } such that [X 12 , X] ≠ 0. We deduce:

Proposition 12. Let g be an indecomposable Lie algebra such that the dimension of the nonsingular coadjoint orbits is 2. Suppose that there exists ω ∈ g * such that cl(ω) = 3. If n ≥ 7, then g = t ⊕ I n−1 where I n−1 is an abelian ideal of codimension 1.
It remains to study the particular cases of dimension 4, 5 and 6. The previous remarks show that:
• If dim g = 4, then g is isomorphic to one of the following Lie algebras, given by their Maurer-Cartan equations (a randomized check of the first one is sketched after this list):

dω 3 = ω 1 ∧ ω 2 , dω 1 = ω 1 ∧ ω 4 , dω 2 = −ω 2 ∧ ω 4 , dω 4 = 0;

dω 3 = ω 1 ∧ ω 2 , dω 1 = ω 2 ∧ ω 4 , dω 2 = −ω 1 ∧ ω 4 , dω 4 = 0;

t ⊕ I 3 , where I 3 is an abelian ideal of dimension 3.

• If dim g = 5, then g is isomorphic to one of the following Lie algebras:

dω 3 = ω 1 ∧ ω 2 , dω 1 = dω 2 = 0, dω 4 = ω 1 ∧ ω 3 , dω 5 = ω 2 ∧ ω 3 ;

t ⊕ I 4 , where I 4 is an abelian ideal of dimension 4.

• If dim g = 6, then g is isomorphic to one of the following Lie algebras:

dω 3 = ω 1 ∧ ω 2 , dω 1 = dω 2 = dω 4 = 0, dω 5 = ω 2 ∧ ω 4 , dω 6 = ω 1 ∧ ω 4 ;

t ⊕ I 5 , where I 5 is an abelian ideal of dimension 5.
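Here is the randomized check announced in the dimension 4 item (our own sketch): encoding each dω k as an antisymmetric matrix, the matrix of dα for α = Σ k a k ω k is Σ k a k dω k , and for the first 4-dimensional algebra its rank never exceeds 2, so every nonsingular coadjoint orbit has dimension 2.

```python
# Rank of d(alpha) for the first 4-dimensional algebra listed above.
import numpy as np

n = 4
D = [np.zeros((n, n)) for _ in range(n)]
def two_form(k, i, j, c=1.0):      # add c * omega_i ^ omega_j to d(omega_k), 1-based
    D[k-1][i-1][j-1] += c; D[k-1][j-1][i-1] -= c
two_form(3, 1, 2)                  # d omega_3 = omega_1 ^ omega_2
two_form(1, 1, 4)                  # d omega_1 = omega_1 ^ omega_4
two_form(2, 2, 4, -1.0)            # d omega_2 = -omega_2 ^ omega_4

rng = np.random.default_rng(1)
ranks = {int(np.linalg.matrix_rank(sum(a[k] * D[k] for k in range(n))))
         for a in rng.standard_normal((500, n))}
print("ranks of d(alpha):", ranks)  # expected {2}: generic orbits have dimension 2
```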
Remarks.

1. Among the Lie algebras g = t ⊕ I n−1 , where I n−1 is an abelian ideal of dimension n − 1, there exists a family of nilpotent Lie algebras which are the "models" for a given characteristic sequence (see [23]). They are the nilpotent Lie algebras L n,c , c ∈ {(n − 1, 1), (n − 3, 2, 1), · · · , (2, 1, · · · , 1)}, defined by

[U, X 1 ] = X 2 , [U, X 2 ] = X 3 , · · · , [U, X n 1 −1 ] = X n 1 , [U, X n 1 ] = 0,
[U, X n 1 +1 ] = X n 1 +2 , · · · , [U, X n 2 −1 ] = X n 2 , [U, X n 2 ] = 0,
· · ·
[U, X n k−2 +1 ] = X n k−2 +2 , · · · , [U, X n k−1 −1 ] = X n k−1 , [U, X n k−1 ] = 0.

The characteristic sequence c corresponds to c(U), and {X 1 , · · · , X n k−1 } is a Jordan basis of adU. We shall return to this notion in the next section.
2. Let U(g) be the universal enveloping algebra of g and consider the category U(g)−Mod of right U(g)-modules. If g is one of the Lie algebras described in this section (that is, with coadjoint orbits of dimension 0 or 2), then every module in U(g)−Mod satisfies the property that "any injective hull of a simple right U(g)-module is locally Artinian" (see [18]).

4. Lie algebras whose nonsingular coadjoint orbits are of dimension 4
We generalize some results of the previous section, considering here real Lie algebras such that for a fixed p ∈ N, dim O(ω) = 2p or 0 for any ω ∈ g * . We are interested, in this section, in the case p = 2. The Cartan class of any non closed linear form is equal to 5 or 4.
Lemma 13. Let g be a Lie algebra whose Cartan class of any non trivial and non closed linear form is 4 or 5. Then g is solvable.

Proof. If g is a simple Lie algebra of rank r and dimension n, then the Cartan class of any linear form ω ∈ g * satisfies cl(ω) ≤ n − r + 1 and this upper bound is reached. Then n − r + 1 = 4 or 5 and the only possible case is r = 2 and g = so(4). Since this algebra is compact, the Cartan class is odd. We can find a basis of so(4) whose corresponding Maurer-Cartan equations are

dω 1 = −ω 2 ∧ ω 4 − ω 3 ∧ ω 5 , dω 2 = ω 1 ∧ ω 4 − ω 3 ∧ ω 6 , dω 3 = ω 1 ∧ ω 5 + ω 2 ∧ ω 6 ,
dω 4 = −ω 1 ∧ ω 2 − ω 5 ∧ ω 6 , dω 5 = −ω 1 ∧ ω 3 + ω 4 ∧ ω 6 , dω 6 = −ω 2 ∧ ω 3 − ω 4 ∧ ω 5 .

Each of the linear forms of this basis has Cartan class equal to 5, but it is easy to find a linear form, for example ω 1 + ω 6 , of Cartan class equal to 3. Then g is neither simple nor semi-simple. The same argument shows that the Levi part of a non solvable Lie algebra is trivial; hence g is solvable.
As a consequence, g contains a non trivial abelian ideal. From the result of the previous section, the codimension of this ideal I is greater than or equal to 2 and g = t ⊕ I. Assume in a first step that dim t = 3. In this case g/I is abelian, that is, [t, t] ⊂ I. This implies that for any ω ∈ t * we have dω = 0 and we obtain, considering the dimension of [t, t], the following Lie algebras, which are nilpotent because the Cartan class is always odd:

(i) dim[t, t] = 3:

(1) dω 1 = dω 2 = dω 3 = dω 7 = 0, dω 4 = ω 1 ∧ ω 2 + ω 3 ∧ ω 7 , dω 5 = ω 1 ∧ ω 3 − ω 2 ∧ ω 7 , dω 6 = ω 2 ∧ ω 3 + ω 1 ∧ ω 7 ,

which is of dimension 7, sometimes called the Kaplan Lie algebra or the generalized Heisenberg algebra.

(2) dω 1 = dω 2 = dω 3 = dω 7 = dω 8 = 0, dω 4 = ω 1 ∧ ω 2 + ω 3 ∧ ω 7 , dω 5 = ω 1 ∧ ω 3 + ω 2 ∧ ω 8 , dω 6 = ω 2 ∧ ω 3 + ω 1 ∧ ω 7 ,

which is of dimension 8.

(3) dω 1 = dω 2 = dω 3 = dω 7 = dω 8 = dω 9 = 0, dω 4 = ω 1 ∧ ω 2 + ω 3 ∧ ω 7 , dω 5 = ω 1 ∧ ω 3 + ω 2 ∧ ω 8 , dω 6 = ω 2 ∧ ω 3 + ω 1 ∧ ω 9 ,

which is of dimension 9.

(ii) dim[t, t] = 2:

(4) dω 1 = dω 2 = dω 3 = dω 7 = dω 8 = 0, dω 4 = ω 1 ∧ ω 2 + ω 3 ∧ ω 7 , dω 5 = ω 1 ∧ ω 3 + ω 2 ∧ ω 7 , dω 6 = ω 1 ∧ ω 7 + ω 2 ∧ ω 8 ,

which is of dimension 8.

(5) dω 1 = dω 2 = dω 3 = dω 7 = dω 8 = dω 9 = 0, dω 4 = ω 1 ∧ ω 2 + ω 3 ∧ ω 7 , dω 5 = ω 1 ∧ ω 3 + ω 2 ∧ ω 9 , dω 6 = ω 1 ∧ ω 7 + ω 2 ∧ ω 8 ,

which is of dimension 9.

(iii) dim[t, t] = 1:

(6) dω 1 = dω 2 = dω 3 = dω 7 = dω 8 = 0, dω 4 = ω 1 ∧ ω 2 + ω 3 ∧ ω 7 , dω 5 = ω 1 ∧ ω 8 + ω 2 ∧ ω 7 , dω 6 = ω 1 ∧ ω 7 + ω 3 ∧ ω 8 ,

which is of dimension 8.

(iv) dim[t, t] = 0:

(7) dω 1 = dω 2 = dω 3 = dω 7 = dω 8 = 0, dω 4 = ω 1 ∧ ω 7 + ω 2 ∧ ω 8 , dω 5 = ω 1 ∧ ω 8 + ω 3 ∧ ω 7 , dω 6 = ω 2 ∧ ω 7 + ω 1 ∧ ω 8 ,

which is of dimension 8.
Assume now that dim t = 4. In this case g/I is abelian or isomorphic to the solvable Lie algebra whose Maurer-Cartan equations are

dω 2 = dω 4 = 0, dω 1 = ω 1 ∧ ω 2 + ω 3 ∧ ω 4 , dω 3 = ω 3 ∧ ω 2 + aω 1 ∧ ω 4 ,

with a ≠ 0. Let us assume that g/I is not abelian. Let {X 1 , · · · , X n } be a basis of g such that {X 1 , · · · , X 4 } is the basis of t dual to {ω 1 , · · · , ω 4 } and {X 5 , · · · , X n } a basis of I. Since I is maximal, [X 1 , I] and [X 3 , I] are not trivial. There exists a vector of I, for example X 5 , such that [X 1 , X 5 ] ≠ 0. Let us put [X 1 , X 5 ] = Y with Y ∈ I and let ω be its dual form. Then

dω = ω 1 ∧ ω 5 + θ

with ω 1 ∧ ω 5 ∧ θ = 0 and ω 3 ∧ ω 4 ∧ θ = ω 3 ∧ ω 2 ∧ θ = 0, otherwise there would exist a linear form of class greater than 5. This implies that there exists ω 6 , independent of ω 5 , such that

dω = ω 1 ∧ ω 5 + bω 3 ∧ ω 6

with b ≠ 0. Now the Jacobi conditions, which are equivalent to d(dω) = 0, imply that we can take neither ω = ω 5 nor ω = ω 6 ; we therefore put ω = ω 7 . This implies

dω 7 = ω 1 ∧ ω 5 + b 2 ω 3 ∧ ω 6 , dω 5 = ω 2 ∧ ω 5 + b 3 ω 4 ∧ ω 6 , dω 6 = ω 4 ∧ ω 5 + b 4 ω 2 ∧ ω 6 .
We deduce:

Proposition 14. If g = t ⊕ I where I is a maximal abelian ideal of codimension 4, then g is isomorphic to the Lie algebra whose Maurer-Cartan equations are

(8) dω 2 = dω 4 = 0, dω 1 = ω 1 ∧ ω 2 + ω 3 ∧ ω 4 , dω 3 = ω 3 ∧ ω 2 + a 1 ω 1 ∧ ω 4 ,
dω 5 = ω 2 ∧ ω 5 + a 2 ω 4 ∧ ω 6 , dω 6 = ω 4 ∧ ω 5 + a 3 ω 2 ∧ ω 6 , dω 7 = ω 1 ∧ ω 5 + a 4 ω 3 ∧ ω 6 ,

with a 1 a 2 a 3 a 4 ≠ 0.

If dim t ≥ 5, then dim A(ω) = 4 or 0 and the codimension of I is greater than n − 4. Then dim t ≤ 4.
Proposition 15. Let g be a Lie algebra such that the dimension of the coadjoint orbit of any non closed linear form is 4. Then (1) g is isomorphic to one of the algebras (1), (2), (3), (4), (5), (6), (7), (8), or (2) g admits a decomposition g = t ⊕ I where I is a codimension 2 abelian ideal.
It remains to describe the action of t on I in the second case. Assume that g = t ⊕ I and dim t = 2. Let {T 1 , T 2 } be a basis of t. Then ḡ = g/K{T 2 } is a Lie algebra whose non closed linear forms are of class 2 or 3. Such Lie algebras are described in Proposition 12.

Proposition 16. Let g be a Lie algebra whose coadjoint orbits of non closed linear forms have dimension 4, and such that g = t ⊕ I where I is a codimension 2 abelian ideal. Then g is a one dimensional extension of ḡ by a derivation f such that f (T 1 ) = 0, Im(f ) = Im(adT 1 ) and, for any Y ∈ Im(adT 1 ), there exist linearly independent X 1 , X 2 ∈ I such that

f (X 2 ) = [T 1 , X 1 ] = Y.
Examples.

• dim g = 4. Then dim ḡ = 3 and ḡ is isomorphic to one of the two algebras whose Lie brackets are given by

[T 1 , X 1 ] = X 2 ,

or

[T 1 , X 1 ] = X 1 , [T 1 , X 2 ] = aX 2 , a ≠ 0.

In the first case, it is easy to see that we cannot find a derivation of ḡ satisfying Proposition 16. In the second case the matrix of f in the basis {T 1 , X 1 , X 2 } is

( 0 0 0 )
( 0 b c )
( 0 d e )

We have no solution if a ≠ 1. If a = 1, then f satisfies (e − b) 2 + 4cd < 0. In particular for b = e = 0 and c = 1 we obtain:

Proposition 17. Any 4-dimensional Lie algebra whose coadjoint orbits of non closed linear forms are of dimension 4 is isomorphic to the following Lie algebra g 4 (λ), whose Maurer-Cartan equations are

dα 1 = dα 2 = 0, dω 1 = α 1 ∧ ω 1 + α 2 ∧ ω 2 , dω 2 = α 1 ∧ ω 2 + λα 2 ∧ ω 1 , with λ < 0.
• dim g = 5. Let us put ḡ = K{T 1 } ⊕ I. Let h 1 be the restriction of adT 1 to I. It is an endomorphism of I and, since dim I = 3, it admits an eigenvalue λ. Assume that λ ≠ 0 and let X 1 be an associated eigenvector. Then, since f is a derivation commuting with adT 1 ,

[T 1 , f (X 1 )] = f ([T 1 , X 1 ]) = λf (X 1 ).

Then f (X 1 ) is also an eigenvector associated with λ. By hypothesis X 1 and f (X 1 ) are independent, so λ is a root of order 2. Thus h 1 is semi-simple. Let λ 2 be the third eigenvalue. If X 3 is an associated eigenvector, as above f (X 3 ) is also an eigenvector and λ 2 is a root of order 2, except if λ 2 = λ 1 . Then:

Lemma 18. If dim I is odd and the restriction h 1 of adT 1 to I admits a nonzero eigenvalue, then h 1 = λId.

Proof. We have proved this lemma for dim I = 3. The general case follows by induction.
Assume now that h 1 = λId with λ ≠ 0. The derivation f of I is of rank 3, because f and h 1 have the same rank by hypothesis. Since f is an endomorphism of a 3-dimensional space, it admits a nonzero eigenvalue µ. Let Y be an associated eigenvector; then

f (Y ) = µY, h 1 (Y ) = λY.

Thus there exists Y such that f (Y ) and h 1 (Y ) are not linearly independent, which contradicts Proposition 16. We deduce that λ = 0.

As a consequence, all eigenvalues of h 1 are null and h 1 is a nilpotent operator. In particular dim Im(h 1 ) ≤ 2. If this rank is equal to 2, the kernel is of dimension 1. Let X 1 be a generator of this kernel. Then [T 1 , X 1 ] = 0; this implies that f (X 1 ) = 0 because f and h 1 have the same image. We deduce that the subspace of g generated by X 1 is an abelian ideal and g is not indecomposable. Then dim Im(h 1 ) = 1 and g is the 5-dimensional Heisenberg algebra.

Proposition 19. Any 5-dimensional Lie algebra whose coadjoint orbits of non closed linear forms are of dimension 4 is isomorphic to the 5-dimensional Heisenberg algebra, whose Maurer-Cartan equations are
dω 1 = dω 2 = dω 3 = dω 4 = 0, dω 5 = ω 1 ∧ ω 2 + ω 3 ∧ ω 4
Solvable non nilpotent case. Since the Cartan class of every linear form on a solvable Lie algebra is odd if and only if this Lie algebra is nilpotent, if g is solvable and non nilpotent there exists a linear form of Cartan class equal to 4. We assume also that g = t ⊕ I with dim t = 2, satisfying Proposition 16. The determination of these Lie algebras is similar to that of (8), without the hypothesis [X 1 , I] ≠ 0 and [X 3 , I] ≠ 0; in this case X 1 and X 3 are also in I. We deduce immediately:

Proposition 20. Let g = t ⊕ I, with dim t = 2 and I an abelian ideal, be a solvable non nilpotent Lie algebra whose nonsingular coadjoint orbits have dimension 4. Then g is isomorphic to the following Lie algebra, whose Maurer-Cartan equations are
(9) dω 2 = dω 4 = dω 2l+2 = dω 2l+3 = · · · = dω 2l+2+3s = dω 2l+3+3s = 0,
dω 1 = ω 1 ∧ ω 2 + ω 3 ∧ ω 4 ,
dω 3 = ω 3 ∧ ω 2 + a 1 ω 1 ∧ ω 4 ,
dω 5 = ω 2 ∧ ω 5 + a 2 ω 4 ∧ ω 6 ,
dω 6 = ω 2 ∧ ω 6 + a 3 ω 4 ∧ ω 5 ,
· · ·
dω 2l−1 = ω 2 ∧ ω 2l−1 + a 2l−4 ω 4 ∧ ω 2l ,
dω 2l = ω 2 ∧ ω 2l + a 2l−3 ω 4 ∧ ω 2l−1 ,
dω 2l+1 = ω 2 ∧ ω 2l+2 + ω 2 ∧ ω 2l+3 ,
· · ·
dω 2l+1+3s = ω 2 ∧ ω 2l+2+3s + ω 2 ∧ ω 2l+3+3s ,

with a 1 · · · a 2l−3 ≠ 0.
Nilpotent case. Let us describe the nilpotent algebras of type t ⊕ I, where I is a maximal abelian ideal, with dim A(dω) = n − 4 or n for any ω ∈ g * . Let us recall also that the Cartan class of any non trivial linear form is odd, hence here equal to 5. In the previous examples we have seen that, in the 5-dimensional case, we obtain only the Heisenberg algebra. Before studying the general case, we begin with the description of an interesting example. Let us consider the following nilpotent Lie algebra, denoted by h(p, 2), given by

dω 1 = α 1 ∧ β 1 + α 2 ∧ β 2 , dω 2 = α 1 ∧ β 3 + α 2 ∧ β 4 , · · · , dω p = α 1 ∧ β 2p−1 + α 2 ∧ β 2p ,
dα 1 = dα 2 = dβ i = 0, i = 1, · · · , 2p.

This Lie algebra is nilpotent of dimension 3p + 2 and was introduced in [17] in the study of Pfaffian systems of rank greater than 1 and of maximal class.
Proposition 21. For any non closed linear form on h(p, 2), the dimension of the coadjoint orbit is equal to 4.
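Proposition 21 is easy to test numerically for small p. The sketch below (ours) encodes the Maurer-Cartan equations of h(2, 2), of dimension 8 with the basis ordered as (α 1 , α 2 , β 1 , · · · , β 4 , ω 1 , ω 2 ), and checks that a generic linear form has dα of rank 4, hence a 4-dimensional orbit.

```python
# Randomized check of Proposition 21 for h(2, 2), dim 8.
import numpy as np

n = 8
D = [np.zeros((n, n)) for _ in range(n)]
def two_form(k, i, j):             # add theta_i ^ theta_j to d(theta_k), 0-based
    D[k][i][j] += 1.0; D[k][j][i] -= 1.0
two_form(6, 0, 2); two_form(6, 1, 3)   # d omega_1 = alpha_1^beta_1 + alpha_2^beta_2
two_form(7, 0, 4); two_form(7, 1, 5)   # d omega_2 = alpha_1^beta_3 + alpha_2^beta_4

rng = np.random.default_rng(2)
ranks = {int(np.linalg.matrix_rank(sum(a[k] * D[k] for k in range(n))))
         for a in rng.standard_normal((500, n))}
print(ranks)   # expected {4}: every generic (non-closed) form has a 4-dimensional orbit
```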
To study the general case, we shall use the notion of characteristic sequence, which is an invariant up to isomorphism of nilpotent Lie algebras (see for example [23] for a presentation of this notion). For any X ∈ g, let c(X) be the ordered sequence, for the lexicographic order, of the dimensions of the Jordan blocks of the nilpotent operator adX. The characteristic sequence of g is the invariant, up to isomorphism, c(g) = max{c(X), X ∈ g − C 1 (g)}.

In particular, if c(g) = (c 1 , c 2 , · · · , 1), then g is c 1 -step nilpotent. For example, we have c(h(p, 2)) = (c 1 = 2, · · · , c p = 2, 1, · · · , 1). A vector X ∈ g such that c(X) = c(g) is called a characteristic vector of g.
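The sequence c(X) can be computed from the ranks of the powers of adX, since the number of Jordan blocks of size at least k equals rank((adX)^{k−1}) − rank((adX)^k). A minimal sketch (the helper name is ours), illustrated on adX 1 for the 5-dimensional Heisenberg algebra:

```python
# Jordan block sizes of a nilpotent matrix, in decreasing order.
import numpy as np

def jordan_block_sizes(N, tol=1e-9):
    n = N.shape[0]
    ranks = [n]                        # rank(N^0)
    P = np.eye(n)
    while ranks[-1] > 0:
        P = P @ N
        ranks.append(np.linalg.matrix_rank(P, tol))
    # blocks_ge[k] = number of blocks of size >= k + 1
    blocks_ge = [ranks[k] - ranks[k + 1] for k in range(len(ranks) - 1)]
    sizes = []
    for k in range(len(blocks_ge) - 1, -1, -1):
        nxt = blocks_ge[k + 1] if k + 1 < len(blocks_ge) else 0
        sizes += [k + 1] * (blocks_ge[k] - nxt)
    return sorted(sizes, reverse=True)

# ad X_1 on h_5 in the basis {X_1, ..., X_5}: [X_1, X_2] = X_5, all else 0.
N = np.zeros((5, 5)); N[4, 1] = 1.0
print(jordan_block_sizes(N))           # [2, 1, 1, 1]: c(X_1) = (2, 1, 1, 1)
```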
Theorem 22. Let g be a nilpotent Lie algebra such that the dimension of the coadjoint orbits of non closed forms is 4, admitting the decomposition g = t ⊕ I where I is an abelian ideal of codimension 2. Then t admits a basis {T 1 , T 2 } of characteristic vectors of g with the same characteristic sequence and Im(adT 1 ) = Im(adT 2 ).

Proof. Let T be a non null vector of t such that ḡ = g/K{T } is a nilpotent Lie algebra of the type described above. Then ḡ = t 1 ⊕ Ī and if T 1 ∈ t 1 , T 1 ≠ 0, then T 1 is a characteristic vector of ḡ. Then T 1 can be considered as a characteristic vector of g. Let T 2 ∈ t be such that adT 1 and adT 2 have the same image in I. Then T 2 is also a characteristic vector with the same characteristic sequence; if not, c(T 1 ) would not be maximal.

Let us consider a Jordan basis {X_1^1, · · · , X_1^{c_1}, X_2^1, · · · , X_2^{c_2}, · · · , X_k^{c_k}, T 2 , T 1 } of adT 1 corresponding to the characteristic sequence c(g) = (c 1 , · · · , c k , 1, 1). In the dual basis {ω_1^1, · · · , ω_1^{c_1}, ω_2^1, · · · , ω_2^{c_2}, · · · , ω_k^{c_k}, α 2 , α 1 } we have

dω_s^j = α 1 ∧ ω_s^{j−1} + α 2 ∧ β_s^j

for any s = 1, · · · , k and j = 1, · · · , c s , where β_s^j satisfies β_s^j ∧ ω_s^{j−1} = 0. The Jacobi conditions imply that β_s^1, · · · , β_s^{c_s} form the dual basis of a Jordan block. This implies:

Lemma 23. If c(g) = (c 1 , · · · , c k , 1, 1) is the characteristic sequence of adT 1 , then for any c s ∈ c(g) with c s ≠ 1, c s − 1 also belongs to c(g).
Thus, if c(g) is a strictly decreasing sequence, that is, if c s > c s+1 whenever c s ≥ 2, we shall have

c(g) = (c 1 , c 1 − 1, c 1 − 2, · · · , 2, 1, 1) or c(g) = (c 1 , c 1 − 1, c 1 − 2, · · · , 2, 1, 1, 1).

In all the other cases, we shall have c(g) = (c 1 , · · · , c 1 , c 1 − 1, · · · , c 1 − 1, c 1 − 2, · · · , 1).
Let us describe the nilpotent Lie algebras whose characteristic sequence is strictly decreasing; the other cases can be deduced from this one. Assume that c 1 = l ≥ 3. Then g is isomorphic to

(10) [T_1, X_1^i] = X_1^{i+1}, i = 1, · · · , l − 1, [T_2, X_1^i] = 0, i = 1, · · · , l − 1,
[T_1, X_2^i] = X_2^{i+1}, i = 1, · · · , l − 2, [T_2, X_2^i] = X_1^{i+1}, i = 1, · · · , l − 1,
· · ·
[T_1, X_{l−1}^1] = X_{l−1}^2, [T_2, X_{l−1}^i] = X_{l−2}^{i+1}, i = 1, 2,
[T_1, X_l^1] = 0, [T_2, X_l^1] = X_{l−1}^2,

when c(g) = (l, l − 1, · · · , 2, 1, 1, 1), or

(11) [T_1, X_1^i] = X_1^{i+1}, i = 1, · · · , l − 1, [T_2, X_1^l] = X_{l−1}^2, [T_2, X_1^i] = 0, i = 1, · · · , l − 1,
[T_1, X_2^i] = X_2^{i+1}, i = 1, · · · , l − 2, [T_2, X_2^i] = X_1^{i+1}, i = 1, · · · , l − 1,
· · ·
[T_1, X_{l−1}^1] = X_{l−1}^2, [T_2, X_{l−1}^i] = X_{l−2}^{i+1}, i = 1, 2,

when c(g) = (l, l − 1, · · · , 2, 1, 1). In particular we deduce:

Proposition 24. Let g be an n-dimensional nilpotent Lie algebra whose nonsingular coadjoint orbits are of dimension 4. If c(g) = (c 1 , · · · , c k , 1) is its characteristic sequence, then

c 1 ≤ (√(8n − 7) − 1)/2.
In fact, for the Lie algebras (10) and (11) we have n = l(l + 1)/2 + 2 and n = l(l + 1)/2 + 1, respectively. The other nilpotent Lie algebras satisfying this hypothesis on the dimension of the coadjoint orbits have characteristic sequences (c 1 , · · · , c k , 1) with c 1 ≥ c 2 ≥ · · · ≥ c k ≥ 1, the inequalities being here not strict.
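The bound of Proposition 24 is simply this dimension count solved for l; a quick derivation (ours, under the counts just stated):

\[
n = \frac{l(l+1)}{2} + 1 \;\Longleftrightarrow\; l^2 + l - (2n - 2) = 0 \;\Longleftrightarrow\; l = \frac{-1 + \sqrt{8n - 7}}{2},
\]

while the count n = l(l + 1)/2 + 2 for (10) gives the smaller root l = (−1 + √(8n − 15))/2, so in both cases c 1 = l ≤ (√(8n − 7) − 1)/2.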
If c 1 = 2, the Lie algebra is 2-step nilpotent and the characteristic sequence of g is of type c(g) = (2, · · · , 2, 1, · · · , 1) with l twos and l − s + 2 ones, where s ≤ l. In fact, g is an extension by a derivation of ḡ = t 1 ⊕ I which vanishes on t 1 . Let us note that the characteristic sequence of the Kaplan algebra (1) is (2, 2, 2, 1), but this Lie algebra is a particular case which does not correspond to the previous decomposition. In general, the Maurer-Cartan equations are given by

(12) dω_1^2 = α 1 ∧ ω_1^1 + α 2 ∧ β 1 , dω_2^2 = α 1 ∧ ω_2^1 + α 2 ∧ β 2 , · · · , dω_l^2 = α 1 ∧ ω_l^1 + α 2 ∧ β l ,

with dα 1 = dα 2 = dω_i^1 = dβ i = 0, i = 1, · · · , l, and β s+1 ∧ · · · ∧ β l ∧ ω_1^1 ∧ · · · ∧ ω_l^1 ≠ 0, and for any i = 1, · · · , s, β i ∈ R{ω_1^1, · · · , ω_l^1} with ω_i^1 ∧ β j − ω_j^1 ∧ β i = 0 for any i, j ∈ {1, · · · , s}.
5. Lie algebras with coadjoint orbits of maximal dimension

In this section we are interested in n-dimensional Lie algebras admitting orbits of dimension n if n is even, or of dimension n − 1 if n is odd. This is equivalent to considering Frobeniusian Lie algebras in the first case and contact Lie algebras in the second case.

5.1. (2p + 1)-dimensional Lie algebras with a 2p-dimensional coadjoint orbit. Let h 2p+1 be the (2p + 1)-dimensional Heisenberg algebra. There exists a basis {X 1 , · · · , X 2p+1 } such that the Lie brackets of h 2p+1 are [X 1 , X 2 ] = · · · = [X 2p−1 , X 2p ] = X 2p+1 , and [X i , X j ] = 0 for i < j and (i, j) ∉ {(1, 2), · · · , (2p − 1, 2p)}. If {ω 1 , · · · , ω 2p+1 } denotes the dual basis of h * 2p+1 , the Maurer-Cartan equations write
dω_{2p+1} = −ω_1 ∧ ω_2 − ··· − ω_{2p−1} ∧ ω_{2p},  dω_i = 0, i = 1, ··· , 2p.
Then ω_{2p+1} is a contact form on h_{2p+1} and the coadjoint orbit O(ω_{2p+1}) is of maximal dimension 2p. Let us note that, in h_{2p+1}, all the orbits are of dimension 2p or are singular. In the following, we will denote by µ_0 the Lie bracket of h_{2p+1}.
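As a quick numerical illustration (not part of the original text), one can verify this statement using the standard fact that dim O(α) equals the rank of the skew-symmetric matrix B_α with entries α([X_i, X_j]). A minimal Python sketch for p = 2:

import numpy as np

p = 2
n = 2 * p + 1  # dim h_5

# Structure constants C[i, j, k] for [X_i, X_j] = sum_k C[i, j, k] X_k
C = np.zeros((n, n, n))
for k in range(p):
    C[2*k, 2*k+1, n-1] = 1.0    # [X_{2k+1}, X_{2k+2}] = X_{2p+1}
    C[2*k+1, 2*k, n-1] = -1.0

def orbit_dim(alpha):
    # B_alpha[i, j] = alpha([X_i, X_j]); dim O(alpha) = rank(B_alpha)
    B = np.einsum('ijk,k->ij', C, alpha)
    return np.linalg.matrix_rank(B)

alpha = np.zeros(n); alpha[n-1] = 1.0   # alpha = omega_{2p+1}
print(orbit_dim(alpha))                 # 4 = 2p, the maximal dimension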
Definition 25. A formal quadratic deformation g t of h 2p+1 is a (2p + 1)-dimensional Lie algebra whose Lie bracket µ t is given by
µ_t(X, Y) = µ_0(X, Y) + t ϕ_1(X, Y) + t² ϕ_2(X, Y)
where the maps ϕ_i are bilinear on h_{2p+1} with values in h_{2p+1} and satisfy

(13)  δ_{µ_0} ϕ_1 = 0,  ϕ_1 • ϕ_1 + δ_{µ_0} ϕ_2 = 0,  ϕ_2 • ϕ_2 = 0,  ϕ_1 • ϕ_2 + ϕ_2 • ϕ_1 = 0.
In this definition δ µ denotes the coboundary operator of the Chevalley-Eilenberg cohomology of a Lie algebra g whose Lie bracket is µ with values in g, and if ϕ and ψ are bilinear maps, then ϕ • ψ is the trilinear map given by
ϕ • ψ(X, Y, Z) = ϕ(ψ(X, Y ), Z) + ϕ(ψ(Y, Z), X) + ϕ(ψ(Z, X), Y ).
In particular, ϕ • ϕ = 0 is equivalent to the Jacobi identity, and ϕ, in this case, is a Lie bracket. The coboundary operator writes

δ_µ ϕ = µ • ϕ + ϕ • µ.
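The trilinear product • is easy to implement directly on structure constants. The following sketch (an illustration, not code from the paper) encodes a bracket µ as an array µ[i, j, k] and checks that µ_0 • µ_0 = 0 for the Heisenberg bracket, i.e., that the Jacobi identity holds:

import numpy as np

n = 5  # h_5; mu0[i, j] is the coordinate vector of the bracket [X_i, X_j]
mu0 = np.zeros((n, n, n))
for k in range(2):
    mu0[2*k, 2*k+1, n-1] = 1.0
    mu0[2*k+1, 2*k, n-1] = -1.0

def circ(phi, psi):
    # (phi o psi)(X, Y, Z) = phi(psi(X,Y), Z) + phi(psi(Y,Z), X) + phi(psi(Z,X), Y)
    t = np.einsum('xym,mzk->xyzk', psi, phi)
    return t + np.einsum('yzm,mxk->xyzk', psi, phi) + np.einsum('zxm,myk->xyzk', psi, phi)

# mu o mu = 0 is exactly the Jacobi identity for the bracket mu
print(np.allclose(circ(mu0, mu0), 0))   # True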
Theorem 26. [19] Any (2p + 1)-dimensional contact Lie algebra g is isomorphic to a quadratic formal deformation of h 2p+1 .
Then its Lie bracket µ writes µ = µ_0 + t ϕ_1 + t² ϕ_2, and the bilinear maps ϕ_1 and ϕ_2 have the following expressions in the basis {X_1, ··· , X_{2p+1}}:
(14)  ϕ_1(X_{2k−1}, X_{2k}) = Σ_{s=1}^{2p} C^s_{2k−1,2k} X_s, k = 1, ··· , p,
      ϕ_1(X_l, X_r) = Σ_{s=1}^{2p} C^s_{l,r} X_s, 1 ≤ l < r ≤ 2p, (l, r) ≠ (2k − 1, 2k),
and

(15)  ϕ_2(X_l, X_r) = 0, l, r = 1, ··· , 2p,   ϕ_2(X_l, X_{2p+1}) = Σ_{s=1}^{2p} C^s_{l,2p+1} X_s, l = 1, ··· , 2p,

and the values not defined above are equal to 0. Since the center of any deformation of h_{2p+1} is of dimension less than or equal to the dimension of the center of h_{2p+1}, we deduce:
Corollary 27. [14] The center Z(g) of a contact Lie algebra g is of dimension less than or equal to 1.

5.1.1. Case of nilpotent Lie algebras.

If g is a contact nilpotent Lie algebra, its center is of dimension 1. In the given basis, this center is R{X_{2p+1}}. This implies that ϕ_2 = 0.
Proposition 28. Any (2p + 1)-dimensional contact nilpotent Lie algebra is isomorphic to a linear deformation µ t = µ 0 + tϕ 1 of the Heisenberg algebra h 2p+1 .
As a consequence, we have
Corollary 29. Any (2p + 1)-dimensional contact nilpotent Lie algebra is isomorphic to a central extension of a 2p-dimensional symplectic Lie algebra by its symplectic form.
Proof. Let t be the 2p-dimensional vector space generated by {X_1, ··· , X_{2p}}. The restriction to t of the 2-cocycle ϕ_1 takes values in t. Since ϕ_1 • ϕ_1 = 0, it defines on t a structure of a 2p-dimensional Lie algebra. If {ω_1, ··· , ω_{2p+1}} is the dual basis of the given classical basis of h_{2p+1}, then θ = ω_1 ∧ ω_2 + ··· + ω_{2p−1} ∧ ω_{2p} is a 2-form on t. We denote by d_{ϕ_1} the differential operator on the Lie algebra (t, ϕ_1), that is, d_{ϕ_1}ω(X, Y) = −ω(ϕ_1(X, Y)) for all X, Y ∈ t and ω ∈ t*. Since µ_0 is the Heisenberg Lie algebra multiplication, the condition δ_{µ_0} ϕ_1 = 0 is equivalent to d_{ϕ_1}(θ) = 0. It implies that θ is a closed 2-form on t and g is a central extension of t by θ. ♣

We deduce:
Theorem 30. [19] Let g be a (2p + 1)-dimensional k-step nilpotent Lie algebra. Then there exists on g a coadjoint orbit of dimension 2p if and only if g is a central extension of a (2p)-dimensional (k − 1)-step nilpotent symplectic Lie algebra t, the extension being defined by the 2-cocycle given by the symplectic form.
Since the classification of nilpotent Lie algebras is known up to dimension 7, the previous result makes it possible to establish the classification of contact nilpotent Lie algebras of dimension 3, 5 and 7 using the classification in dimensions 2, 4 and 6. For example, the classification of 5-dimensional nilpotent Lie algebras with an orbit of dimension 4 is the following:
• g is 4-step nilpotent:
  n^1_5 : [X_1, X_i] = X_{i+1}, i = 2, 3, 4,  [X_2, X_3] = X_5.
• g is 3-step nilpotent:
  n^3_5 : [X_1, X_i] = X_{i+1}, i = 2, 3,  [X_2, X_5] = X_4.
• g is 2-step nilpotent:
  n^6_5 : [X_1, X_2] = X_3,  [X_4, X_5] = X_3.
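One can check numerically that each algebra in this list carries a linear form whose coadjoint orbit has the maximal dimension 4 (by Corollary 8, such a form has Cartan class 4 or 5), again via dim O(α) = rank(B_α). A hypothetical sketch, sampling forms with 0/1 coefficients:

import numpy as np
from itertools import product

def orbit_dim(C, alpha):
    B = np.einsum('ijk,k->ij', C, alpha)
    return np.linalg.matrix_rank(B)

def bracket_table(pairs, n=5):
    # pairs: {(i, j): k} meaning [X_i, X_j] = X_k (1-indexed, as in the text)
    C = np.zeros((n, n, n))
    for (i, j), k in pairs.items():
        C[i-1, j-1, k-1] = 1.0
        C[j-1, i-1, k-1] = -1.0
    return C

algebras = {
    "n1_5": {(1, 2): 3, (1, 3): 4, (1, 4): 5, (2, 3): 5},
    "n3_5": {(1, 2): 3, (1, 3): 4, (2, 5): 4},
    "n6_5": {(1, 2): 3, (4, 5): 3},
}
for name, pairs in algebras.items():
    C = bracket_table(pairs)
    best = max(orbit_dim(C, np.array(a, dtype=float))
               for a in product([0, 1], repeat=5))
    print(name, "max orbit dimension over sampled forms:", best)  # 4 each time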
We shall now study contact structures with respect to the characteristic sequence of a nilpotent Lie algebra. For any X ∈ g, let c(X) be the ordered sequence, for the lexicographic order, of the dimensions of the Jordan blocks of the nilpotent operator ad X. The characteristic sequence of g is the invariant, up to isomorphism, c(g) = max{c(X), X ∈ g − C^1(g)}.
Then c(g) is a sequence of type (c_1, c_2, ··· , c_k, 1) with c_1 ≥ c_2 ≥ ··· ≥ c_k ≥ 1 and c_1 + c_2 + ··· + c_k + 1 = n = dim g. For example:
(1) c(g) = (1, 1, ··· , 1) if and only if g is abelian;
(2) c(g) = (2, 1, ··· , 1) if and only if g is a direct product of a Heisenberg Lie algebra by an abelian ideal;
(3) if g is 2-step nilpotent, then there exist p and q such that c(g) = (2, 2, ··· , 2, 1, ··· , 1) with 2p + q = n, where p is the occurrence of 2 in the characteristic sequence and q the occurrence of 1;
(4) g is filiform if and only if c(g) = (n − 1, 1).
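The sequence c(X) can be computed from the ranks of the powers of the nilpotent operator ad X, since the number of Jordan blocks of size at least i equals rank((ad X)^{i−1}) − rank((ad X)^i). A small sketch illustrating item (4) on a filiform example:

import numpy as np

def jordan_block_sizes(N, tol=1e-9):
    # Sizes of the Jordan blocks of a nilpotent matrix N, from the ranks
    # of its powers: #(blocks of size >= i) = rank(N^{i-1}) - rank(N^i).
    n = N.shape[0]
    ranks = [n]
    P = np.eye(n)
    while ranks[-1] > 0:
        P = P @ N
        ranks.append(np.linalg.matrix_rank(P, tol=tol))
    counts = [ranks[i] - ranks[i+1] for i in range(len(ranks) - 1)]
    sizes = []
    for i in range(len(counts) - 1, -1, -1):
        extra = counts[i] - (counts[i+1] if i + 1 < len(counts) else 0)
        sizes += [i + 1] * extra
    return sorted(sizes, reverse=True)

# ad X_1 for the filiform algebra [X_1, X_i] = X_{i+1}, i = 2, ..., n-1
n = 6
ad = np.zeros((n, n))
for i in range(1, n - 1):
    ad[i + 1, i] = 1.0    # X_{i+1} (column) is mapped to X_{i+2} (row)
print(jordan_block_sizes(ad))   # [5, 1], i.e., c(g) = (n - 1, 1)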
This invariant was introduced in [2] in order to classify 7-dimensional nilpotent Lie algebras. A link between the notion of breadth of a nilpotent Lie algebra introduced in [21] and the characteristic sequence is developed in [23]. If c(g) = (c_1, c_2, ··· , c_k, 1) is the characteristic sequence of g, then g is c_1-step nilpotent. Assume now that c_1 = c_2 = ··· = c_l and c_{l+1} < c_l. Then the dimension of the center of g is at least l, because in each of the Jordan blocks corresponding to c_1, ··· , c_l the last vector is in C^{c_1}(g), which is contained in the center of g. We deduce:
Proposition 31. Let g be a contact nilpotent Lie algebra. Then its characteristic sequence is of type c(g) = (c_1, c_2, ··· , c_k, 1) with c_2 < c_1.
Examples
(1) If g is 2-step nilpotent, then c(g) = (2, ··· , 2, 1, ··· , 1), and if g is a contact Lie algebra, we have c(g) = (2, 1, ··· , 1). We find again the result given in [19], which states that any 2-step nilpotent (2p + 1)-dimensional contact Lie algebra is isomorphic to the Heisenberg algebra h_{2p+1}.
(2) If g is 3-step nilpotent, then c(g) = (3, 2, ··· , 2, 1, ··· , 1) or c(g) = (3, 1, ··· , 1). In the case of dimension 7, this gives c(g) = (3, 2, 1, 1) and c(g) = (3, 1, 1, 1, 1). For each one of these characteristic sequences there are contact Lie algebras:
(a) The Lie algebra given by
[X_1, X_2] = X_3,  [X_1, X_3] = [X_2, X_5] = [X_6, X_7] = X_4,
is a 7-dimensional contact Lie algebra of characteristic sequence (3, 1, 1, 1, 1).
(b) The Lie algebras given by
[X_1, X_i] = X_{i+1}, i = 2, 3, 5,  [X_2, X_5] = X_7,  [X_2, X_7] = X_4,  [X_5, X_6] = X_4,  [X_5, X_7] = αX_4
with α ≠ 0 are contact Lie algebras of characteristic sequence (3, 2, 1, 1).
(3) If g is 4-step nilpotent, then c(g) = (4, 3, ··· , 1). For example, the 9-dimensional Lie algebra given by
[X_1, X_i] = X_{i+1}, i = 2, 3, 4, 6, 7,  [X_6, X_9] = X_3,  [X_7, X_9] = X_4,  [X_8, X_9] = X_5,  [X_2, X_6] = (1 + α)X_4,  [X_3, X_6] = X_5,  [X_2, X_7] = αX_5
with α ≠ 0 is a contact Lie algebra with c(g) = (4, 3, 1, 1).
Let us note also that in [24] we construct the contact nilpotent filiform Lie algebras, that is, those with characteristic sequence equal to (2p, 1).
5.1.2. The non-nilpotent case.

It is equivalent to consider Lie algebras with a contact form defined by quadratic non-linear deformations of the Heisenberg algebra. We refer to [19] for the description of this class of Lie algebras. An interesting particular case consists in determining all the (2p + 1)-dimensional Lie algebras (p ≠ 0) such that all the coadjoint orbits of non-trivial elements are of dimension 2p.
Lemma 32. [14] Let g be a simple Lie algebra of rank r and dimension n. Then any non-trivial linear form α on g satisfies cl(α) ≤ n − r + 1. Moreover, if g is of classical type, we have cl(α) ≥ 2r.
In particular, a simple Lie algebra admits a contact form only if its rank is equal to 1, and such a Lie algebra is isomorphic to sl(2, R) or so(3). Now, if g is a (2p + 1)-dimensional Lie algebra such that the dimension of the coadjoint orbit of any non-trivial linear form is equal to 2p, then the Cartan class of any non-trivial linear form is 2p or 2p + 1. Such a Lie algebra cannot be solvable. From the previous lemma, the Levi semisimple subalgebra is of rank 1 and the radical cannot be of dimension greater than 1. Then g is simple of rank 1 and we have:

Proposition 33. Any (2p + 1)-dimensional Lie algebra for which the coadjoint orbits of all non-trivial forms are of dimension 2p is simple of rank 1, hence isomorphic to sl(2, R) or so(3).
5.2. (2p)-dimensional Lie algebras with a 2p-dimensional coadjoint orbit.

Such a Lie algebra is Frobeniusian. Since the Cartan class of a linear form on a nilpotent Lie algebra is always odd, such a Lie algebra is not nilpotent. In the contact case, we have seen that any contact Lie algebra is a deformation of the Heisenberg algebra; in other words, any contact Lie algebra can be contracted to the Heisenberg algebra. In the Frobeniusian case, we have a similar but more complicated situation: we have to determine an irreducible family of Frobeniusian Lie algebras with the property that any Frobeniusian Lie algebra can be contracted to a Lie algebra of this family.
In a first step, we recall the notion of contraction of Lie algebras. Let g_0 be an n-dimensional Lie algebra whose Lie bracket is denoted by µ_0. We consider a family {f_t}_{t∈]0,1]} of isomorphisms of K^n, with K = R or C. Any Lie bracket

µ_t = f_t^{−1} ∘ µ_0 ∘ (f_t × f_t)

corresponds to a Lie algebra g_t which is isomorphic to g_0. If the limit lim_{t→0} µ_t exists (this limit is computed in the finite-dimensional vector space of bilinear maps on K^n), it defines a Lie bracket µ of an n-dimensional Lie algebra g called a contraction of g_0.
Remark. Let L^n be the variety of Lie algebra laws on C^n provided with its Zariski topology. The algebraic structure of this variety is defined by the Jacobi polynomial equations on the structure constants. The linear group GL(n, C) acts on L^n by changes of basis. A Lie algebra g is a contraction of g_0 if the Lie bracket of g is in the closure of the orbit of the Lie bracket of g_0 under the group action (for more details see [8, 16]).
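A classical concrete instance of this definition, added here for illustration, is the contraction of sl(2, R) onto the Heisenberg algebra h_3, with f_t = diag(t, t, t²):

import numpy as np

# mu0: bracket of sl(2, R) in the basis (e, f, h):
# [e, f] = h, [h, e] = 2e, [h, f] = -2f
mu0 = np.zeros((3, 3, 3))
mu0[0, 1, 2], mu0[1, 0, 2] = 1.0, -1.0
mu0[2, 0, 0], mu0[0, 2, 0] = 2.0, -2.0
mu0[2, 1, 1], mu0[1, 2, 1] = -2.0, 2.0

def contracted(t):
    # mu_t(X, Y) = f_t^{-1} mu0(f_t X, f_t Y), with f_t = diag(t, t, t^2)
    f = np.diag([t, t, t**2])
    finv = np.diag([1/t, 1/t, 1/t**2])
    return np.einsum('ia,jb,abc,ck->ijk', f, f, mu0, finv)

for t in (1.0, 0.1, 0.001):
    mt = contracted(t)
    print(t, mt[0, 1])   # [X_1, X_2] -> (0, 0, 1): the Heisenberg relation
# As t -> 0 all other brackets vanish (they carry a factor t^2), so the
# limit of mu_t is exactly the bracket of h_3.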
5.2.1. Classification of complex Frobenius Lie algebras up to contraction.

Let g be a 2p-dimensional Frobenius Lie algebra. There exists a basis {X_1, ··· , X_{2p}} of g such that the dual basis {ω_1, ··· , ω_{2p}} is adapted to the Frobeniusian structure, that is,

dω_1 = ω_1 ∧ ω_2 + ω_3 ∧ ω_4 + ··· + ω_{2p−1} ∧ ω_{2p}.

In the following, we define Lie algebras not by their brackets but by their Maurer-Cartan equations. We assume here that K = C.
Theorem 34. Let g_{a_1,··· ,a_{p−1}}, a_i ∈ C, be the Lie algebras defined by

dω_1 = ω_1 ∧ ω_2 + Σ_{k=1}^{p−1} ω_{2k+1} ∧ ω_{2k+2},
dω_2 = 0,
dω_{2k+1} = a_k ω_2 ∧ ω_{2k+1}, 1 ≤ k ≤ p − 1,
dω_{2k+2} = −(1 + a_k) ω_2 ∧ ω_{2k+2}, 1 ≤ k ≤ p − 1.
Then any 2p-dimensional Frobenius Lie algebra can be contracted to an element of the family F = {g_{a_1,··· ,a_{p−1}}}_{a_i∈C}. Moreover, no element of F can be contracted to another element of this family.
Proof. See [15] and [19].

Remark. The notion of principal element of a Frobenius Lie algebra is defined in [12]. In the basis {X_1, ··· , X_{2p}} for which dω_1 = ω_1 ∧ ω_2 + Σ_{k=1}^{p−1} ω_{2k+1} ∧ ω_{2k+2}, the principal element is X_2.
Proposition 35. The parameters {a_1, ··· , a_{p−1}}, which are the invariants of Frobenius Lie algebras up to contraction, are the eigenvalues of the principal element of g_{a_1,··· ,a_{p−1}}.
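This can be checked by dualizing the Maurer-Cartan equations of Theorem 34 into brackets (using the convention dω(X, Y) = −ω([X, Y])) and diagonalizing ad X_2. The sketch below uses hypothetical parameter values; the spectrum recovers the a_k up to the chosen sign convention:

import numpy as np

def frobenius_algebra(a):
    # Brackets dual to the Maurer-Cartan equations of Theorem 34,
    # with the convention d w(X, Y) = -w([X, Y]).
    p = len(a) + 1
    n = 2 * p
    C = np.zeros((n, n, n))
    def setb(i, j, k, v):
        C[i, j, k] = v
        C[j, i, k] = -v
    setb(0, 1, 0, -1.0)                          # [X1, X2] = -X1
    for k in range(1, p):
        setb(2*k, 2*k + 1, 0, -1.0)              # [X_{2k+1}, X_{2k+2}] = -X1
        setb(1, 2*k, 2*k, -a[k-1])               # [X2, X_{2k+1}] = -a_k X_{2k+1}
        setb(1, 2*k + 1, 2*k + 1, 1.0 + a[k-1])  # [X2, X_{2k+2}] = (1+a_k) X_{2k+2}
    return C

a = [0.3, -1.7]                   # hypothetical parameters; p = 3, dim g = 6
C = frobenius_algebra(a)
ad_X2 = C[1].T                    # (ad X2)[k, j] = coefficient of X_k in [X2, X_j]
print(np.round(np.sort(np.linalg.eigvals(ad_X2).real), 3))
# [-0.7, -0.3, 0.0, 1.0, 1.3, 1.7]: besides 0 and 1, the pairs {-a_k, 1+a_k}
# appear, so each a_k can be read off the spectrum of the principal element.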
5.2.2. Classification of real Frobenius Lie algebras up to contraction.

We have seen that, in the complex case, the classification up to contraction of 2p-dimensional Frobenius Lie algebras is in correspondence with the reduced matrix of the principal element. We deduce directly the classification in the real case:
Theorem 36. Let g_{a_1,··· ,a_s,b_1,··· ,b_{2p−s−1}}, a_i, b_j ∈ R, be the 2p-dimensional Lie algebras given by

[X_1, X_2] = [X_{2k−1}, X_{2k}] = X_1, k = 2, ··· , p,
[X_2, X_{4k−1}] = a_k X_{4k−1} + b_k X_{4k+1},    [X_2, X_{4k}] = (−1 − a_k) X_{4k} − b_k X_{4k+2}, k ≤ s,
[X_2, X_{4k+1}] = −b_k X_{4k−1} + a_k X_{4k+1},   [X_2, X_{4k+2}] = b_k X_{4k} + (−1 − a_k) X_{4k+2}, k ≤ s,
[X_2, X_{4s+2k−1}] = −(1/2) X_{4s+2k−1} + b_k X_{4s+2k}, 2 ≤ k ≤ p − 2s,
[X_2, X_{4s+2k}] = −b_k X_{4s+2k−1} − (1/2) X_{4s+2k}, 2 ≤ k ≤ p − 2s.
Then any 2p-dimensional real Frobenius Lie algebra can be contracted to an element of the family F = {g_{a_1,··· ,a_s,b_1,··· ,b_{2p−s−1}}}_{a_i,b_j∈R}. Moreover, no element of F can be contracted to another element of this family.

Corollary 37. Any (2p)-dimensional Lie algebra described in (34) in the complex case and in (36) in the real case has an open coadjoint orbit of dimension 2p. Moreover, any (2p)-dimensional Lie algebra with an open coadjoint orbit of dimension 2p is a deformation of a Lie algebra of these families.

For convenience, we recall the statements and definitions used above.

Proposition 2. If α is a Pfaffian form on M, then
• cl(α)(x) = 2p + 1 if (α ∧ (dα)^p)(x) ≠ 0 and (dα)^{p+1}(x) = 0,
• cl(α)(x) = 2p if ((dα)^p)(x) ≠ 0 and (α ∧ (dα)^p)(x) = 0.

Corollary 8. Consider a non-zero α in g*. Then dim O(α) = 2p if and only if cl(α) = 2p or cl(α) = 2p + 1.

A Lie algebra of dimension n is called a contact Lie algebra if n = 2p + 1 and if there exists a contact linear form, that is, a linear form of Cartan class equal to 2p + 1. It is called a Frobenius Lie algebra if n = 2p and if there exists a Frobeniusian linear form, that is, a linear form of Cartan class equal to 2p.
References

[1] Adimi, Hadjer; Makhlouf, Abdenacer. Index of graded filiform and quasi filiform Lie algebras. arXiv:1212.1650.
[2] Ancochea Bermúdez, Jose Maria; Goze, Michel. Classification des algèbres de Lie nilpotentes complexes de dimension 7. (French) [Classification of complex nilpotent Lie algebras of dimension 7] Arch. Math. (Basel) 52 (1989), no. 2, 175-185.
[3] Awane, Azzouz; Goze, Michel. Pfaffian systems, k-symplectic systems. Kluwer Academic Publishers, Dordrecht, 2000. xiv+240 pp. ISBN: 0-7923-6373-6.
[4] Arnal, D.; Cahen, M.; Ludwig, J. Lie groups whose coadjoint orbits are of dimension smaller or equal to two. Lett. Math. Phys. 33 (1995), no. 2, 183-186.
[5] Beltitȃ, Daniel; Cahen, Benjamin. Contractions of Lie algebras with 2-dimensional generic coadjoint orbits. Linear Algebra Appl. 466 (2015), 41-63.
[6] Beltitȃ, Ingrid; Beltitȃ, Daniel. Coadjoint orbits of stepwise square integrable representations. Proc. Amer. Math. Soc. 144 (2016), no. 3, 1343-1350.
[7] Bouyakoub, Abdelkader; Goze, Michel. Sur les algèbres de Lie munies d'une forme symplectique. Rend. Sem. Fac. Sci. Univ. Cagliari 57 (1987), no. 1, 85-97.
[8] Burde, Dietrich. Degenerations of 7-dimensional nilpotent Lie algebras. Comm. Algebra 33 (2005), no. 4, 1259-1277.
[9] Cartan, Élie. Les systèmes de Pfaff à cinq variables et les équations aux dérivées partielles du second ordre. (French) Ann. Sci. École Norm. Sup. (3) 27 (1910), 109-192.
[10] Diatta, André. Left invariant contact structures on Lie groups. Differential Geom. Appl. 26 (2008), no. 5, 544-552.
[11] Duflo, Michel; Vergne, Michèle. Une propriété de la représentation coadjointe d'une algèbre de Lie. C. R. Acad. Sci. Paris Sér. A-B 268 (1969), 583-585.
[12] Gerstenhaber, Murray; Giaquinto, Anthony. The principal element of a Frobenius Lie algebra. Lett. Math. Phys. 88 (2009), no. 1-3, 333-341.
[13] Godbillon, Claude. Géométrie différentielle et mécanique analytique. Collection Méthodes, Hermann, Paris, 1969.
[14] Goze, Michel. Sur la classe des formes et systèmes invariants à gauche sur un groupe de Lie. C. R. Acad. Sci. Paris Sér. A-B 283 (1976), no. 7, Aiii, A499-A502.
[15] Goze, Michel. Modèles d'algèbres de Lie frobeniusiennes. C. R. Acad. Sci. Paris Sér. I Math. 293 (1981), no. 8, 425-427.
[16] Goze, Michel. Algèbres de Lie de dimension finie. Ramm Algebra Center. http://ramm-algebra-center.monsite-orange.fr
[17] Goze, Michel; Haraguchi, Yuri. Sur les r-systèmes de contact. (French) [On r-contact systems] C. R. Acad. Sci. Paris Sér. I Math. 294 (1982), no. 2, 95-97.
[18] Hatipoglu, Can. Injective hulls of simple modules over nilpotent Lie color algebras. arXiv:1411.1512.
[19] Goze, Michel; Remm, Elisabeth. Contact and Frobeniusian forms on Lie groups. Differential Geom. Appl. 35 (2014), 74-94.
[20] Goze, Michel; Remm, Elisabeth. k-step nilpotent Lie algebras. Georgian Math. J. 22 (2015), no. 2, 219-234.
[21] Khuhirun, B.; Misra, K. C.; Stitzinger, E. On nilpotent Lie algebras of small breadth. J. Algebra 444 (2015), 328-338.
[22] Kirillov, A. A. Elements of the theory of representations. Translated from the Russian by Edwin Hewitt. Grundlehren der Mathematischen Wissenschaften, Band 220. Springer-Verlag, Berlin-New York, 1976. xi+315 pp.
[23] Remm, Elisabeth. Breadth and characteristic sequence of nilpotent Lie algebras. Comm. Algebra 45 (2017), no. 7, 2956-2966.
[24] Remm, Elisabeth. On filiform Lie algebras. Geometric and algebraic studies. Rev. Roumaine Math. Pures Appl. 63 (2018), no. 2.
M.G.: Ramm Algebra Center, 4 rue de Cluny, F-68800 Rammersmatt, France. E.R.: Laboratoire de Mathématiques et Applications, Université de Haute Alsace, Faculté des Sciences et Techniques, 4 rue des Frères Lumière, 68093 Mulhouse cedex, France.
Even Faster Exact Exchange for Solids via Tensor Hypercontraction

Adam Rettig, Joonho Lee, and Martin Head-Gordon

Department of Chemistry, University of California, Berkeley, California 94720, USA
Chemical Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA
Department of Chemistry, Columbia University, New York, New York 10027, USA

arXiv:2304.05505 [physics.comp-ph], 11 Apr 2023

Abstract. Hybrid density functional theory (DFT) remains intractable for large periodic systems due to the demanding computational cost of exact exchange. We apply the tensor hypercontraction (THC) (or interpolative separable density fitting, ISDF) approximation to periodic hybrid DFT calculations with Gaussian-type orbitals. This is done to lower the computational scaling with respect to the number of basis functions (N) and k-points (N_k). Additionally, we propose an algorithm to fit only occupied orbital products via THC (i.e., onto a set of N_ISDF points) to further reduce computation time and memory usage. This algorithm has linear scaling cost with k-points, no explicit dependence of N_ISDF on basis set size, and overall cubic scaling with unit cell size. Significant speedups and reduced memory usage may be obtained for moderately sized systems, with additional gains for large systems. Adequate accuracy can be obtained using THC-oo-K for self-consistent calculations. We perform illustrative hybrid density functional theory calculations on the benzene crystal in the basis set and thermodynamic limits to highlight the utility of this algorithm.
Introduction
Density functional theory (DFT) has dominated the field of quantum chemistry for the last several decades due to its combination of satisfactory accuracy and relatively low O(N^3) to O(N^5) computational cost, where N is the number of basis functions. Within density functional theory, hybrid functionals, which include Hartree-Fock "exact exchange" (shown in Eq. (1)),
K_{µν} = Σ_{λσ} (µλ|σν) P_{λσ}    (1)
have emerged as the most accurate and widely used functionals in nearly all applications for molecules, [1][2][3][4] with only a mild increase in the computational cost compared to local density functionals. Unsurprisingly, hybrid density functional theory accounts for a large portion of quantum chemical studies performed on molecules today.
Hybrid functionals are known to offer large improvements in accuracy for some systems under periodic boundary conditions (PBC) as well. [5][6][7][8][9] Despite this, local (or pure) density functionals account for the majority of PBC studies due to their computational efficiency.
Periodic calculations are often significantly more expensive than typical molecular studies because one needs to reach the thermodynamic limit (TDL). By virtue of translational symmetry, one may reach TDL results by numerically integrating over the first Brillouin zone (i.e., by sampling k-points). This approach leads to lower scaling than equivalent supercell approaches. In local DFT, different k-points can be treated independently, leading to an extra factor in the computational scaling equal to the number of k-points, N_k. Hybrid density functional theory incurs an extra penalty as the different k-points are coupled through the periodic exact exchange, shown in Eq. (2).
K^k_{µν} = Σ_{λσk′} (φ^k_µ φ^{k′}_λ | φ^{k′}_σ φ^k_ν) P^{k′}_{λσ}.    (2)
This leads to an extra computational scaling factor of N²_k, giving an overall computational scaling for periodic exact exchange of O(N²_k N³), compared to the O(N_k N³) of local functionals. k-point calculations yield a massive reduction in computational cost for both local and hybrid functionals compared to equivalent supercell calculations, which would scale as O(N³_k N³) in this notation. Despite the computational speedup of k-point calculations, hybrid functionals remain considerably more expensive than local functionals for periodic systems, with an additional overhead of N_k. Furthermore, hybrid functionals converge more slowly to the TDL than local functionals, necessitating even larger k-point calculations and further adding to the computational cost.

Efficient exact exchange computation for molecular systems has been the subject of considerable study over the last few decades; many of these ideas have also been transferred to periodic systems. For sufficiently large systems, the number of nonzero elements in the two-electron integral tensor increases only quadratically with the system size, motivating sparsity-aware approaches to lower the scaling of exact exchange to O(N) for gapped systems. [10][11][12][13][14] In practice, this asymptotic region is hard to reach, limiting the usefulness of such approaches. Approximate factorization of the two-electron integral tensors into lower rank tensors has proved more successful, leading to the pseudospectral method 15 (and the closely related chain of spheres method 16,17), Cholesky decomposition, 18,19 the resolution of the identity (RI) or density fitting approximation, [20][21][22] and tensor hypercontraction (THC). 23,24 Of these, RI has emerged as the most popular in both exact exchange and correlated wavefunction theory due to its large speedup and its relatively robust, low error of ∼50 µE_H per atom (which also often cancels in energy differences).
Many periodic codes utilize plane waves as a basis set in contrast to the Gaussian-type orbitals (GTO) used in the molecular case, complicating the direct application of these algorithms to periodic calculations. The adaptively compressed exchange (ACE) algorithm, similar to the occ-RI-K algorithm developed for molecular systems, has become the most common way to speed up exact exchange evaluation in planewave-based periodic quantum chemistry codes. 25 More recently, the interpolative separable density fitting (ISDF) [26][27][28] approach to THC has been shown to offer considerable speedup for both gamma point and k-point plane wave calculations. [29][30][31][32] We use GTOs for periodic calculations via the Bloch orbital framework, allowing for a more direct application of molecular-based exact exchange algorithms. Our crystalline GTO basis functions are given by:
φ k µ (r) = 1 √ N k R e ik·Rφ µ (r − R) = 1 √ N k e ik·r u k µ (r)(3)
where R are direct lattice vectors, k is the crystalline momentum, andφ µ (r) is an atomic orbital. u k µ (r) is the cell periodic part of the Bloch orbital, which is periodic along the lattice (i.e., u k µ (r) = u k µ (r + R)), and is given by:
u k µ (r) = 1 √ N k R e ik·(R−r)φ µ (r − R)(4)
Recently, our group showed that the extension of the molecular occ-RI-K algorithm 33 to GTO-based PBC calculations leads to significant speedups for large basis sets, up to two orders of magnitude for the systems studied. 34 ISDF approaches have also been developed for both GTO and numerical atomic orbital (NAO) based Γ-point PBC calculations. 35,36 In this study, we propose a THC algorithm for periodic exact exchange with k-point sampling in a GTO basis utilizing ISDF. Similar to the analogous plane wave implementation, 30 the use of ISDF can reduce the computational scaling of exact exchange with respect to the number of k-points. We will additionally propose a new ISDF approach that fits only products of occupied orbitals, as is done in ACE-ISDF in plane waves, to realize further computational savings. We illustrate the scaling improvements of these algorithms and perform an illustrative hybrid DFT study of the benzene crystal cohesive energy.
Theory
Review of Molecular and Γ-point ISDF
In molecular and Γ-point periodic ISDF, the products of atomic orbitals are approximated by a sum of interpolation vectors {ξ P (r)} weighted by the orbitals evaluated at a set of N ISDF interpolation points {r P }:
φ_µ(r)* φ_ν(r) ≈ Σ_{P=1}^{N_ISDF} φ_µ(r_P)* φ_ν(r_P) ξ^[nn]_P(r)    (5)
In the above, we have labeled the interpolation vectors with the [nn] superscript to denote that these functions are fit to products of two atomic orbitals.
The set of interpolation points {r_P} can be selected in numerous ways; commonly, either a QR decomposition with column pivoting 29 or a centroidal Voronoi tessellation (CVT) k-means algorithm 28 is used. The number of interpolation points is set via a parameter c^[nn]_ISDF:
N^[nn]_ISDF = c^[nn]_ISDF N.    (6)
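A minimal sketch of the CVT (weighted k-means) point selection is given below; the weight function and convergence settings are illustrative assumptions rather than the production algorithm:

import numpy as np

def cvt_points(grid, weights, n_pts, n_iter=50, seed=0):
    # Weighted k-means (CVT) on real-space grid points; weights are typically
    # rho(r) = sum_mu |phi_mu(r)|^2. Returns indices of the grid points
    # closest to the converged centroids (the interpolation points).
    rng = np.random.default_rng(seed)
    prob = weights / weights.sum()
    centers = grid[rng.choice(len(grid), n_pts, replace=False, p=prob)]
    for _ in range(n_iter):
        d = ((grid[:, None, :] - centers[None, :, :])**2).sum(-1)
        label = d.argmin(axis=1)
        for c in range(n_pts):
            m = label == c
            if m.any():
                centers[c] = np.average(grid[m], axis=0, weights=weights[m])
    d = ((grid[:, None, :] - centers[None, :, :])**2).sum(-1)
    return np.unique(d.argmin(axis=0))

# toy demo: 200 grid points in 2D with Gaussian weights
rng = np.random.default_rng(3)
grid = rng.uniform(-1, 1, size=(200, 2))
w = np.exp(-4.0 * (grid**2).sum(1))
print(cvt_points(grid, w, n_pts=10))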
After the interpolation points are chosen, the interpolation vectors ξ^[nn]_P(r) are determined via a least squares fit 24,37 to the desired orbital products:
Σ_Q S^[nn]_{PQ} ξ^[nn]_Q(r) = Σ_{µν} φ_µ(r)* φ_ν(r) φ_ν(r_P)* φ_µ(r_P)    (7)
In the above, S represents the ISDF metric matrix (not to be confused with the AO overlap matrix):
S^[nn]_{PQ} = Σ_{µν} φ_µ(r_P)* φ_µ(r_Q) φ_ν(r_P)* φ_ν(r_Q)    (8)
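For real orbitals, Eqs. (7)-(8) reduce to small dense linear algebra, since both the metric and the right-hand side are elementwise squares of overlap-like matrices. A self-contained sketch with random data (the sizes are arbitrary assumptions; with N_ISDF equal to the number of independent orbital pairs, the fit here is essentially exact):

import numpy as np

def isdf_fit(phi_g, pts):
    # phi_g: (N, Ng) real orbitals tabulated on the grid; pts: point indices.
    # Solves Eq. (7) with the metric of Eq. (8).
    phi_p = phi_g[:, pts]                  # (N, N_ISDF)
    S = (phi_p.T @ phi_p)**2               # Eq. (8)
    T = (phi_p.T @ phi_g)**2               # right-hand side of Eq. (7)
    return np.linalg.lstsq(S, T, rcond=None)[0]   # xi: (N_ISDF, Ng)

N, Ng = 6, 400
Nisdf = N * (N + 1) // 2                   # number of independent pair products
rng = np.random.default_rng(1)
phi = rng.standard_normal((N, Ng))
pts = rng.choice(Ng, Nisdf, replace=False)
xi = isdf_fit(phi, pts)
pair = np.einsum('mg,ng->mng', phi, phi)                  # lhs of Eq. (5)
thc = np.einsum('mp,np,pg->mng', phi[:, pts], phi[:, pts], xi)
print(np.abs(pair - thc).max())            # near machine precision here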
The exchange matrix may then be computed using only the basis functions evaluated at the set of interpolation points and the two-electron integrals between interpolation vectors, M_{PQ}, of which there are only N²_ISDF:
M_{PQ} = ∫∫ dr_1 dr_2 ξ_P(r_1) ξ_Q(r_2) / r_12    (9)
M_{PQ}, given in Eq. (9), may be computed in reciprocal space according to the GPW algorithm. The cost of computing M_{PQ} is O(N_g N²_ISDF) due to the contraction of the two interpolation vectors in reciprocal space. M_{PQ} can be very expensive to compute for large systems, but it needs only be computed once at the start of a calculation. It may then be used each iteration to compute K_{µν} according to Eq. (10).
K_{µν} = Σ_{λσPQ} φ_µ(r_P)* φ_λ(r_P) M_{PQ} φ_σ(r_Q)* φ_ν(r_Q) P_{λσ}    (10)
This is done most efficiently by first contracting Σ_{λσ} φ_λ(r_P) φ_σ(r_Q)* P_{λσ} and taking a Hadamard (element-wise) product with M_{PQ}. This intermediate is then contracted with φ_µ(r_P)* and then φ_ν(r_Q), which is the bottleneck at O(N N²_ISDF) cost. As N ≪ N_g, the per-iteration cost is significantly less than the initial formation of M_{PQ}. Sharma et al. showed that large speedups were possible with this algorithm for periodic Γ-point calculations. 35
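The contraction order just described maps directly onto three matrix multiplications and one Hadamard product. A schematic Python version (real Γ-point case, with the orbitals at the interpolation points stored as phi_pts[P, µ]; shapes are illustrative):

import numpy as np

def thc_exchange(M, phi_pts, P):
    # Eq. (10) in the order described in the text:
    Y = phi_pts @ P @ phi_pts.T     # Y_PQ = sum_{ls} phi_l(r_P) P_ls phi_s(r_Q)
    A = M * Y                       # Hadamard product with M_PQ
    return phi_pts.T @ A @ phi_pts  # K_mn = sum_{PQ} phi_m(r_P) A_PQ phi_n(r_Q)

N, Nisdf = 8, 20
rng = np.random.default_rng(0)
M = rng.standard_normal((Nisdf, Nisdf)); M = 0.5 * (M + M.T)
phi_pts = rng.standard_normal((Nisdf, N))     # phi_pts[P, m] = phi_m(r_P)
Pmat = rng.standard_normal((N, N)); Pmat = Pmat @ Pmat.T   # density-like
K = thc_exchange(M, phi_pts, Pmat)
print(K.shape)   # (8, 8); the bottleneck step scales as O(N * N_ISDF^2)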
Periodic k-point ISDF
We first examine the extension of the Γ-point algorithm presented above to k-point GTO-based calculations. We do this by fitting the ISDF interpolation vectors to only the cell-periodic part of our basis functions, u^k_µ(r): 30
u^k_µ(r)* u^{k′}_ν(r) ≈ Σ_P u^k_µ(r_P)* u^{k′}_ν(r_P) ξ^[nn]_P(r)    (11)
The full product of basis functions may then be approximated by adding in the Bloch phase factors:
φ^k_µ(r)* φ^{k′}_ν(r) ≈ Σ_P u^k_µ(r_P)* u^{k′}_ν(r_P) e^{iq·r} ξ^[nn]_P(r)    (12)
In the above we have defined q = k′ − k. Note that our interpolation vectors do not depend on the k-points.
The interpolation functions are once again determined via a least squares fit, but we now must include a sum over k-points:
Σ_Q S^[nn]_{PQ} ξ^[nn]_Q(r) = Σ_{µνkk′} u^k_µ(r)* u^{k′}_ν(r) u^{k′}_ν(r_P)* u^k_µ(r_P)    (13)
The S matrix will similarly include a sum over k points. The k-point K matrix may then be computed as:
K^k_{µν} = Σ_{λσq} Σ_{PQ} u^k_µ(r_P)* u^{k+q}_λ(r_P) M^q_{PQ} u^{k+q}_σ(r_Q)* u^k_ν(r_Q) P^{k+q}_{λσ}    (14)
We have folded the Bloch phase factors included in our basis functions into our definition of M, so that it now has a q-dependence:
M^q_{PQ} = ∫∫ dr_1 dr_2 e^{iq·r_1} ξ_P(r_1)* e^{−iq·r_2} ξ_Q(r_2) / r_12    (15)
However, M depends only on q, rather than on both k and k′. The number of unique two-electron integrals is, therefore, linear in the number of k-points. M can be computed via
GPW in O(N_k N²_ISDF N_g) time. However, this intermediate may be computed once and stored for reuse in each iteration. Naively, evaluating Eq. (14) would lead to O(N²_k N N²_ISDF) scaling. However, we write k′ as k + q to emphasize that the sum over q done in the contraction of M^q_{PQ} with u^{k+q}_λ(r_P) u^{k+q}_σ(r_Q)* P^{k+q}_{λσ} is a convolution in k-space. Convolutions may be done utilizing FFT in O(N_k log(N_k)) time. Using this trick, we can lower the computational scaling of this algorithm, which we term THC-AO-K, to O(N_k log(N_k) N N²_ISDF) per iteration. The rate-limiting step of this algorithm is therefore computing M, due to the large value of N_g. In this way, significant savings can be realized for large k-point calculations: the exchange may be computed at a cost only cubic in system size and linear in N_k. A summary of the THC-AO-K algorithm is shown in Algorithm 1. While this algorithm is overall cubic scaling, a significant prefactor is associated with computing the M matrix. This prefactor was so significant in molecular cases that the THC algorithm does not become competitive with higher-scaling approaches unless extremely large systems are studied. 27 In this study, we will analyze the timing of this algorithm to see if the added benefit of reduced k-point scaling will lead to a crossover at useful system sizes.
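The k-point convolution trick can be demonstrated on random data: a quadratic-scaling loop over (k, q) agrees with a single FFT-based cross-correlation along the k axis. The shapes and the identification of D and M below are illustrative assumptions:

import numpy as np

Nk, Np = 8, 10
rng = np.random.default_rng(0)
D = rng.standard_normal((Nk, Np, Np)) + 1j * rng.standard_normal((Nk, Np, Np))
M = rng.standard_normal((Nk, Np, Np)) + 1j * rng.standard_normal((Nk, Np, Np))
# D[k] plays the role of sum_i u_i^k(r_P) u_i^k(r_Q)^* and M[q] of M^q_PQ.

# Naive O(Nk^2) evaluation of X[k] = sum_q D[k+q (mod Nk)] * M[q]:
X_ref = np.zeros_like(D)
for k in range(Nk):
    for q in range(Nk):
        X_ref[k] += D[(k + q) % Nk] * M[q]

# The same contraction in O(Nk log Nk) via the correlation theorem:
X_fft = np.fft.ifft(np.fft.fft(D, axis=0)
                    * np.conj(np.fft.fft(np.conj(M), axis=0)), axis=0)
print(np.allclose(X_ref, X_fft))   # True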
Additionally, one must be concerned with memory usage, as storing the M matrix requires O(N²_ISDF N_k) space; this can quickly become impractical for large systems and large basis sets.
Algorithm 1 THC-AO-K algorithm
{r_P} ← CVT k-means
Compute ξ^[nn]_P(r)
ξ^[nn]_P(G) ← FFT(ξ^[nn]_P(r))
M^q_{PQ} ← Σ_G ξ^[nn]_P(G) V^q(G) ξ^[nn]_Q(G)
while SCF unconverged do
    X^k_{PQ} ← Σ_{iq} u^{k+q}_i(r_P) M^q_{PQ} u^{k+q}_i(r_Q)*   (FFT convolve)
    K^k_{µν} ← Σ_{PQ} u^k_µ(r_P)* X^k_{PQ} u^k_ν(r_Q)
end while
THC-oo-K
Further reduction in compute time and storage requirements can be realized by fitting only products of occupied orbitals via ISDF as seen in Eq. (16):
u^k_i(r)* u^{k′}_j(r) ≈ Σ_P u^k_i(r_P)* u^{k′}_j(r_P) ξ^[oo]_P(r)    (16)
Accordingly, we label the interpolation vectors with the [oo] superscript to indicate that these functions are fit to products of two occupied orbitals. This approach is similar to what is done with plane-wave ISDF codes using the ACE procedure, in an algorithm we term THC-oo-K. THC-oo-K is motivated by the fact that the exchange energy depends only on occupied orbitals. Of course, the orbital gradient contributions due to exact exchange depend on the virtual orbitals, but they can be computed by directly differentiating the THC-oo-K energy function. In THC-oo-K, the interpolation vectors fit far fewer quantities, which scale only with the number of occupied orbitals (i.e., independent of basis size!). We therefore define the number of ISDF points similarly to Eq. 6 via:
N^[oo]_ISDF = c^[oo]_ISDF N_occ,    (17)
with the expectation that N^[oo]_ISDF ≪ N^[nn]_ISDF (where nn refers to atomic orbitals). The ISDF exchange energy is then computed as before via:
E_X = −Σ_{ijkq} Σ_{PQ} u^k_i(r_P)* u^{k+q}_j(r_P) M^q_{PQ} u^{k+q}_j(r_Q)* u^k_i(r_Q)    (18)
While the potential memory and computational savings arising from the decrease in N_ISDF seem very promising at first glance, there are several complications that arise from this approach. First, the occupied orbitals change each iteration, necessitating the recalculation of ξ_P(r) and therefore M^q_{PQ} on each iteration. Second, the orbital gradient of the exchange energy is no longer as simple as in the THC-AO-K case seen in Eq. (10), due to the dependence of M on the occupied orbitals. The first point shows that there will be a tradeoff in compute time: the computation of M is much cheaper due to the decrease in N_ISDF, but one must compute M each iteration, compared to only once for THC-AO-K. We will subsequently investigate whether this tradeoff is favorable for THC-oo-K in normal use cases.
The second point can be addressed by deriving the analytical expression for the virtual-occupied (vo) block of the exchange matrix, given by:
K^k_{ai} = Σ_{jk₂} Σ_{PQ} u^k_a(r_P)* u^{k₂}_j(r_P) M^q_{PQ} u^{k₂}_j(r_Q)* u^k_i(r_Q)
+ Σ_{jk₂q} Σ_{PQR} ∫dr W^q_{PQ} V^q_P(r) S^{−1}_{QR} [u^k_a(r_R)* u^k_i(r) u^{k₂}_j(r)* u^{k₂}_j(r_R) + u^k_a(r)* u^k_i(r_R) u^{k₂}_j(r_R)* u^{k₂}_j(r)]
− Σ_{jk₂q} Σ_{PQST} W^q_{PQ} S^{−1}_{QS} M^q_{PT} [u^k_a(r_S)* u^k_i(r_T) u^{k₂}_j(r_S)* u^{k₂}_j(r_T) + u^k_a(r_T)* u^k_i(r_S) u^{k₂}_j(r_T)* u^{k₂}_j(r_S)]    (19)
In the above equation,

W^q_{PQ} = Σ_{ijk} u^k_i(r_P)* u^{k+q}_j(r_P) u^{k+q}_j(r_Q)* u^k_i(r_Q),
where the convolution over k-points may be done using FFT once again. The dependence of M on the occupied orbitals leads to two additional terms in the K matrix, but they are relatively easy to compute, as most pieces are needed during the computation of the first term anyway. The effect of these extra terms is an additional prefactor of ∼3 in the computation time, in our experience. These terms may be contracted in a similar manner to THC-AO-K, leading to an overall scaling of O(N_k N²_ISDF N_g) once again. A summary of the THC-oo-K algorithm is given in Algorithm 2.
In this study, we will investigate the accuracy of both THC-AO-K and THC-oo-K as a function of N ISDF and analyze their relative computation times across a variety of system sizes.
Algorithm 2 THC-oo-K algorithm
{r_P} ← CVT k-means
while SCF unconverged do
    Compute ξ^[oo]_P(r)
    ξ^[oo]_P(G) ← FFT(ξ^[oo]_P(r))
    M^q_{PQ} ← Σ_G ξ^[oo]_P(G) V^q(G) ξ^[oo]_Q(G)
    X^k_{PQ} ← Σ_{iq} u^{k+q}_i(r_P) M^q_{PQ} u^{k+q}_i(r_Q)*   (FFT convolve)
    Compute ∂M/∂θ^k_{ia}
    K^k_{ij} ← Σ_{PQ} u^k_i(r_P)* X^k_{PQ} u^k_j(r_Q)
    K^k_{ia} ← Σ_{PQ} u^k_i(r_P)* X^k_{PQ} u^k_a(r_Q) + ∂M/∂θ^k_{ia} terms
end while

Computational Details

All calculations were performed using the SZV-GTH, DZVP-GTH, TZV2P-GTH, QZV2P-GTH, and unc-def2-QZVP 38 basis sets designed for use in periodic systems with the Goedecker-Teter-Hutter (GTH) pseudopotentials. The GTH-PBE pseudopotential 39,40 was used for all DFT calculations, while the GTH-HF pseudopotential was used for all HF calculations. All calculations were performed using the Gaussian Plane Wave (GPW) algorithm 41,42 for computing 2-electron integrals. A sufficient kinetic energy cutoff for the auxiliary basis was used to fully converge all computations below 10^-6 Hartrees. To treat the divergence of the exact exchange, we use a Madelung correction. 43 Calculations for the benzene system were performed with a counterpoise correction including basis functions for a (2,2,2) supercell. Cohesive energies were computed for varying k-point mesh sizes and extrapolated to the thermodynamic limit via:

E_TDL = (E_{N_{k,2}} N_{k,1}^{−1} − E_{N_{k,1}} N_{k,2}^{−1}) / (N_{k,1}^{−1} − N_{k,2}^{−1})    (20)
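Eq. (20) is a two-point elimination of the leading 1/N_k finite-size error. A small helper (with made-up numbers in the example) makes this explicit:

def tdl_extrapolate(e1, nk1, e2, nk2):
    # Eq. (20): assumes E(N_k) = E_TDL + c/N_k and eliminates the 1/N_k term
    x1, x2 = 1.0 / nk1, 1.0 / nk2
    return (e2 * x1 - e1 * x2) / (x1 - x2)

# example with synthetic data obeying E = -50.0 + 30.0/Nk:
print(tdl_extrapolate(-50.0 + 30.0/8, 8, -50.0 + 30.0/27, 27))   # -50.0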
All calculations were performed in the Q-Chem software package. 34

Results

We first seek to determine an appropriate value for c^[nn]_ISDF; periodic k-point calculations may require more interpolation points than molecular ISDF, as we will see. The interpolation vectors must now represent orbital pairs at multiple k-points, however, so we expect this value to increase slightly as more k-points are used, although it should eventually plateau. 30 Additionally, we seek to determine an appropriate value for c^[oo]_ISDF, so that the efficiency of THC-AO-K and THC-oo-K can be meaningfully compared.
We first investigate the accuracy of the THC energy computed for both algorithms. In Figure 1a we show the absolute error per atom in the THC energy computed for diamond (8 electrons total) with a (2,2,2) k-point mesh, given the converged SCF density, for several different basis sets. We see that THC-AO-K can accurately represent the energy using c^[nn]_ISDF = 25 for all bases other than SZV-GTH. SZV-GTH is the smallest basis, and from Eq. 6 we see that the same c^[nn]_ISDF parameter gives rise to smaller numbers of ISDF points for smaller basis sets. The smallest useful value of c^[nn]_ISDF will therefore depend somewhat on basis set size. Importantly, we see that the THC-AO-K energy is systematically improvable, so one may increase the number of points used if a more accurate energy is desired. We do observe some fluctuations in the error as c^[nn]_ISDF is increased: the error is not strictly monotonically decreasing; this is likely due to the randomness inherent to the CVT algorithm used to select ISDF points.
In Figure 1b, we show the root mean square of the orbital gradient for the same system.
As we are using non-THC converged orbitals, we expect this value to be zero if there were no ISDF error. We see that at c^[nn]_ISDF = 25, the gradient still has a magnitude of ∼10^-3 for some basis sets, suggesting that the THC-AO-K orbitals are slightly different from the converged non-THC orbitals due to the ISDF factorization error. This is also systematically improvable through the c^[nn]_ISDF parameter and will eventually converge to the exact solution once enough points are used. Consistent with the discussion of the energies, we see that typically (though not always, due to the ISDF point selection as already mentioned) a given value of c^[nn]_ISDF leads to larger RMS gradients for the smaller basis sets.

Figure 1: Error in the THC-AO-K HF energy (a), THC-AO-K orbital gradient (b), THC-oo-K HF energy (c), and THC-oo-K orbital gradient (d) for diamond with a (2,2,2) k-point mesh, given the converged density computed without the ISDF quadrature error. The numbers of ISDF interpolation points used for THC-AO-K and THC-oo-K are related to the parameters c^[nn]_ISDF and c^[oo]_ISDF by Eqs. (6) and (17).

We show similar plots for THC-oo-K in figures 1c and 1d. We see that a larger value of c^[oo]_ISDF = 50 is needed to reach an acceptable error for this algorithm. It should, however, be pointed out that the number of ISDF points for this algorithm is determined by scaling N_occ (Eq. 17), and this leads to significantly fewer points than THC-AO-K despite the higher c_ISDF. Accordingly, this means that the number of ISDF points used in THC-oo-K for each of the basis sets tested is the same! THC-oo-K is, therefore, highly effective at compactly representing the HF exchange energy for large basis sets. On the other hand, looking at the gradient in figure 1d, we see that the THC-oo-K orbital gradient is significantly larger than that of THC-AO-K, and the convergence with increasing c^[oo]_ISDF seems to be quite slow. Moreover, there seems to be a degree of basis set dependence, as a larger basis set has many more degrees of freedom; the SZV-GTH basis has a much smaller error in the gradient, while DZVP-GTH, TZV2P-GTH, and QZV2P-GTH all have roughly similar errors. This inaccuracy in the gradient could be due to fluctuations in the THC error over the entire energy surface and could potentially lead to SCF convergence issues. We will next investigate self-consistent energies obtained via THC-K algorithms to address this.

We next investigated the error in the HF energy for the converged density, i.e., we optimize the orbitals using the THC algorithms. In figure 2a, we show the error in the HF self-consistent total energy using THC-AO-K as a function of c^[nn]_ISDF for diamond in the QZV2P-GTH and SZV-GTH bases, as well as AlN (16 electrons total) in the QZV2P-GTH basis, all with a (2,2,2) k-point mesh. We see that our suggested value of c^[nn]_ISDF = 25 holds up pretty well for the QZV2P-GTH systems, although diamond in the SZV-GTH basis does have a noticeably higher error in this case. A similar plot for THC-oo-K can be seen in figure 2b. We see that despite possible concerns arising from significant deviations in the orbital gradient when evaluated with the exact converged density, the THC-oo-K algorithm also yields adequately accurate self-consistent solutions for these systems. Most encouragingly, the c^[oo]_ISDF parameter is evidently transferable across different systems and basis sets.

Figure 2: Absolute error per atom in the unit cell in the self-consistent HF energy using (a) THC-AO-K vs. c^[nn]_ISDF (see Eq. 6) and (b) THC-oo-K vs. c^[oo]_ISDF (see Eq. 17) for both diamond and AlN using varying basis set sizes and a (2,2,2) k-point mesh. The dashed line represents 50 µE_H/atom accuracy.

We next investigate the accuracy as the k-point mesh is increased. Our algorithms rely on the fact that the number of ISDF points required to yield an accurate solution will plateau quickly with the k-mesh size. In Figure 3, we show a plot of the absolute error in the self-consistent HF energy of diamond in the QZV2P-GTH basis computed with THC-oo-K for several different k-point meshes. We see that there is a very steep increase in the number of ISDF points required when going from the Γ point to a (2,2,2) k-point mesh; evidently, it is much more difficult to represent products of occupied orbitals at multiple k-points than at a single k-point. However, increasing the mesh further has very little effect. It therefore appears that the c^[oo]_ISDF = 50 value is adequate for all k-point mesh sizes, although a significantly lower value may likely be sufficient for Γ-point calculations.

Figure 3: Absolute error in the HF energy using THC-oo-K as a function of c^[oo]_ISDF for diamond in the QZV2P-GTH basis with several k-point meshes.

We therefore tentatively recommend starting with the values c^[nn]_ISDF = 25 and c^[oo]_ISDF = 50. One may verify this is adequate on a case-by-case basis by increasing the parameters and ensuring little variation in overall results. We note that this study analyzed absolute energies, which leaves out the effect of error cancellation in observables such as the lattice energy or band gaps. Error cancellation could lead to more leniency in the c_ISDF parameters, necessitating fewer interpolation points. Additionally, we showed errors in the HF energy here, which includes 100% exact exchange. When using a hybrid density functional with a small fraction of exact exchange or short-range exact exchange, the THC errors will be further scaled down. Indeed, this was observed in our molecular study. 27 We therefore believe that our recommended coefficients are conservative.
Computational scaling
We found that both periodic THC-AO-K and THC-oo-K require more interpolation points than their molecular counterparts when using k-point sampling. However, even molecular THC was seen to require too many interpolation points to be computationally competitive except for very large systems. We therefore investigate the K build time for both THC-AO-K and THC-oo-K to see if ISDF-based algorithms are useful for small to medium-sized periodic systems. In the following calculations, we use c^[nn]_ISDF = 25 and c^[oo]_ISDF = 50. We compute the average THC-AO-K iteration time by assuming 10 iterations, as the majority of the computational cost of this algorithm is the M build step, which is performed only once; i.e., we add one-tenth of the M build time into the per-iteration cost. If more than 10 iterations are required (i.e., the initial guess was poor), THC-AO-K will be more favorable than shown here, and vice versa.

In Fig. 4a, we plot the K build time for AlN in the QZV2P-GTH basis as a function of the number of k-points for both THC-AO-K and THC-oo-K. We see two important conclusions from this plot. First, as anticipated, the THC-K algorithms are linear in the number of k-points due to the FFT convolution. By contrast, occ-RI-K exhibits quadratic scaling with the number of k-points. Second, the THC-K algorithms have a large prefactor, which causes occ-RI-K to be the more efficient approach for small k-point meshes. The linear k-scaling makes THC-oo-K extremely effective at reducing compute time for large k-point calculations, and the algorithm becomes more efficient than occ-RI-K between a (4,4,4) and (5,5,5) k-point mesh. Due to the large basis set and therefore large N^[nn]_ISDF, the THC-AO-K algorithm is significantly slower than both other algorithms, and could not be run for k-point meshes larger than (4,4,4) due to memory requirements.

We present a breakdown of the THC-AO-K and THC-oo-K first-cycle computation time (the full THC-AO-K M build time is included) for the AlN system in Fig. 4b. We see that the THC-AO-K time is dominated by the initial single M build, making the per-iteration cost of contracting M to form K according to Eq. (14) negligible. The THC-oo-K M is significantly cheaper to form (note the 10x scale difference), but the computation of the orbital gradient adds an extra factor of 2-3 to the per-iteration compute cost.

Figure 4: (a) Average K build time per cycle and (b) breakdown of the THC initial cycle timings vs. the number of k-points for AlN in the QZV2P-GTH basis.

We additionally investigated the computational scaling of THC-K with basis set size.
We previously saw the occ-RI-K algorithm was largely independent of basis set size, making it very effective for large basis calculations (with small numbers of k-points). We expect THC-oo-K to have this same property, as N_ISDF is independent of the basis size, as shown in Fig. 2b. We therefore expect THC-oo-K to become more efficient than THC-AO-K as the basis set size increases. In figure 5, we show the average K matrix build time as a function of the basis set size for the diamond system with a (7,7,7) k-mesh. THC-AO-K is more efficient than THC-oo-K and occ-RI-K for the smallest two basis sets used, SZV-GTH and DZVP-GTH; the bottleneck of this algorithm is precomputing the M matrix, which is fairly small for smaller basis set sizes. occ-RI-K is slightly better than THC-AO-K for TZV2P-GTH and QZV2P-GTH as it is independent of basis size. THC-oo-K evidently does not become competitive with THC-AO-K until the very large unc-def2-QZVP-GTH basis set is used.
Finally, we note that the THC algorithms can offer significantly reduced memory usage compared to occ-RI-K. For both THC algorithms, we store only ξ_P(r) and M^q_{PQ}, i.e., no quantities that scale with both the grid and the number of k-points. This is in contrast to the occ-RI-K algorithm, which by default in Q-Chem stores the basis functions (linear in N_k) and the Coulomb kernel (quadratic in N_k) on the grid. Using an integral-direct approach to occ-RI-K will eliminate this memory requirement 34 but results in much slower performance, making it less competitive with the THC algorithms. We highlight this in figure 6 by plotting the memory requirements for each algorithm for the diamond system in the QZV2P-GTH basis as a function of the number of k-points. We see that the large number of ISDF points required for THC-AO-K leads to larger memory consumption than occ-RI-K for k-point meshes smaller than (6,6,6); however, the linear scaling in N_k eventually wins out at k-point meshes larger than this. THC-oo-K offers a huge reduction in memory usage over THC-AO-K due to a large reduction in the number of ISDF points. For this diamond QZV2P-GTH example with a (9,9,9) k-point mesh, THC-oo-K requires only 9 GB, compared to 430 GB for occ-RI-K!
We can now define specific use cases for each algorithm studied here. For small k-point meshes, we recommend using the occ-RI-K algorithm as it will be faster than all THC algorithms. For large k-point meshes we recommend using THC-AO-K if using a small basis set and THC-oo-K for large basis sets. Across all system sizes and k-point mesh sizes, if the suggested algorithm requires too much memory, we recommend using THC-oo-K instead.
It is particularly well-suited for implementation on memory-constrained computing devices, such as graphical processing units.
Molecular Crystals
Finally, we perform an illustrative computation of the cohesive energy of the benzene crystal as a rigorous test of the limits of THC-K. Accurate theoretical estimates of this value are available via high-order wavefunction theory calculations performed via fragment-based approaches. [45][46][47] Periodic calculations have thus far been fairly limited, as HF and pure density functionals are known to fail for binding purposes 48 and higher-order wavefunction theories are very expensive for periodic studies of benzene. Local density functionals have been shown to yield reasonable results when dispersion corrections are utilized, 49 and recent studies using periodic MP2 have been performed. 50,51 However, these periodic calculations are generally limited to small k-point meshes or small basis sets, or both, due to computational constraints. Only a few hybrid functionals have been tested for benzene 49,[52][53][54][55] due to the computational difficulty of obtaining exact exchange energies in the thermodynamic limit.
We therefore decided to utilize the THC-oo-K algorithm to benchmark hybrid functionals for cohesive energies, in order to compare with theoretical best estimates.
We investigated the dispersion-corrected pure functional B97M-rV 56,57 and the hybrid functionals PBE0-D3, 58,59 MN15, 60 M06-2X, 61 SCAN0-D3, 62 ωB97X-rV, 63 and ωB97M-rV. 64 We attempted to converge our results to both the complete basis set (CBS) limit and the TDL, including counterpoise corrections (see Sec. 3). We cover hybrid functionals utilizing both the D3 and rVV10 dispersion corrections and empirical functionals parameterized to include dispersion. We note that MN15 and M06-2X are not dispersion-corrected but their performance on molecular data sets of non-covalent interaction was found to be quite accurate. 1 The cohesive energies computed for varying sizes of k-point meshes can be seen in table 1. We performed our calculations using the QZV2P-GTH basis.
Compared to the theoretical best estimate (TBE) of -54.58 kJ/mol, 45 we see a great variance in the performance of DFT. As expected, the dispersion treatment is paramount to obtaining accurate, cohesive energies. The DFT-D3 treatment of dispersion yields quite accurate results, with the global hybrid functionals PBE0-D3 and SCAN0-D3 performing well, giving errors of 3.4 and 2.0 kJ/mol, respectively. MN15 and M06-2X, parameterized to implicitly treat dispersion, yield extremely different results; MN15 is the most accurate of the functionals tested with an error of 1.1 kJ/mol while M06-2X performs substantially worse than all other functionals, giving an error of 13 kJ/mol. Use of the rVV10 dispersion correction yields unsatisfactory results as well; the pure functional B97M-rV overbinds by 9.0 kJ/mol while the range separated hybrids ωB97M-rV and ωB97X-rV, which are among the most accurate functionals for molecular systems, 1 perform only slightly better with errors of 7.2 and 8.0 kJ/mol respectively. The consistent overbinding of these three VV10-based functionals suggests that rVV10 is the main source of their errors.
The trends seen in our data are in line with previous studies on dispersion corrections for benzene and the X23 dataset. DFT-D3 is known to perform quite well, 49 while the failure of rVV10 for molecular crystals has previously been attributed to the lack of screening (i.e., many-body dispersion) effects. [65][66][67] Notably, by comparing B97M-rV to ωB97M-rV, we find that the addition of exact exchange makes a minor improvement of 1 kJ/mol. We see that with a (3,3,3) k-point mesh, pure functionals are essentially converged, whereas the hybrid functionals are still up to 5 kJ/mol off from the TDL. The convergence of the hybrid functionals is demonstrated in figure 7, where the convergence rate is dependent on the fraction of exact exchange in the functional, highlighting the difficulty of periodic hybrid DFT. Nonetheless, the size extrapolation using 1/N_k works well for this system, making reliable hybrid functional calculations accessible. We see that for the benzene crystal, including exact exchange can lead to modest improvements in accuracy, but the treatment of dispersion is much more important. More in-depth analysis is required to evaluate general trends of hybrid functionals for periodic applications.
Still, the discrepancy between molecular 1 and periodic cases suggests that further functional development may be beneficial for hybrids applied to periodic applications.

Conclusions

To summarize, we have presented an extension of the ISDF approximation for exact exchange for use with periodic k-point calculations using GTOs. We additionally presented a new algorithm, THC-oo-K, which fits only the products of occupied orbitals via the ISDF approach, which is all that is needed for the energy. While this means that errors in THC-oo-K orbital gradients (which involve occupied-virtual products) are larger than for the energy, the effect on the self-consistently optimized energy is still small. We have shown that these algorithms reduce the computational scaling of exact exchange to cubic in system size and linear in the number of k-points; the THC-oo-K algorithm has the additional advantage that the THC dimension scales independently of basis set size. Initial investigation showed that these algorithms provide substantial computational savings for large k-point meshes; THC-oo-K additionally provides computational savings over occ-RI-K at even medium-sized k-point meshes, and huge reductions in memory usage in all cases, at only a minor compromise in overall accuracy. We believe this cost reduction will make periodic studies using hybrid functionals more feasible, which we illustrated via a study of the benzene lattice energy, computed with the QZV2P-GTH basis up to a (3,3,3) k-point mesh.
Further reduction in error may be obtained by fitting only one side of the two-electron integral tensor via ISDF, termed robust-fit THC. 35,68 This reduces the error in the energy to quadratic in the THC fit error rather than linear. We performed preliminary studies of this approach for the THC-AO-K algorithm. Still, we found that while the robust fit did reduce the error substantially, it increased the memory requirements to an intractable degree due to the need to store V^q_P(r), the potential due to the interpolation function ξ_P(r), which increases the memory usage of the already expensive THC-AO-K by a factor of N_k; additionally, the number of expensive FFTs for the GPW algorithm is increased by a factor of N_k, significantly increasing the compute time. Nonetheless, a multi-node MPI implementation may increase the available memory enough to make this approach tractable. 35 For the THC-oo-K algorithm, there is no need to store V^q_P(r) each iteration, so it is possible to batch over this quantity and avoid storing the entire array; the increase in the number of FFTs will, however, still lead to a significant increase in compute cost. A robust-fit THC-oo-K algorithm is therefore an interesting direction for future development, but the tradeoff in compute time may turn out to be unfavorable.
All work presented here was used with the GPW algorithm, which only applies to pseudopotential calculations. A big advantage of Gaussian orbitals in PBC calculations compared to the more popular plane waves basis is the ability to model core orbitals. It would be interesting to investigate all-electron versions of these algorithms. This extension should be possible using the projector augmented-wave method (PAW), 69 density fitting, 70,71 or similar approaches, by selecting ISDF points from Becke grids as is done for molecular codes. 27 Finally, we note that this approach offers a general framework for approximating the twoelectron integrals and its usefulness is not limited to exact exchange. The same intermediates may be used to speed up the computation of matrix elements necessary for Møller-Plesset theory, coupled cluster theory, and other correlated wavefunction methods. This has already proved successful for molecular ISDF. 27,72,73 The extension of this algorithm to correlated wavefunction theory would allow for much larger k-point calculations.
scaling for periodic exact exchange of O(N 2 k N 3 ) compared to the O(N k N 3 ) of local functionals. k-point calculations yield a massive reduction in computational cost for both local and hybrid functionals compared to equivalent supercell calculations that would scale as O(N 3 k N 3 ) in this notation. Despite the computational speedup of k-point calculations, hybrid functionals remain considerably more expensive than local functionals for periodic systems with an additional overhead of N k . Furthermore, hybrid functionals converge slower to the TDL than local functionals, necessitating even larger k-point calculations, further adding to the computational cost.
time. However, this intermediate may be computed once, and stored for reuse in each iteration. Naively, evaluating Eq. (14) would lead to O(N 2 k N N 2 ISDF ) scaling. However, we write k as k + q to emphasize that the sum over q done in the contraction of M q P Q with u k+q λ (r P )u k+q σ (r Q ) * P k+q λσ is a convolution in k space. Convolutions may be done utilizing FFT in O(N k log(N k )) time. Using this trick, we can lower the computational scaling of this algorithm, which we term THC-AO-K, to O(N k log(N k )N N 2 ISDF ) per iteration. The ratelimiting step of this algorithm is therefore computing M due to the large value of N g . In this way, significant savings can be realized for large k-point calculations -the exchange may be computed only cubically in system size and linearly in N k . A summary of the THC-AO-K algorithm is shown in Algorithm 1.
were performed using the SZV-GTH, DZVP-GTH, TZV2P-GTH, QZV2P-GTH, unc-def2-QZVP 38 basis sets designed for use in periodic systems with the Goedecker, Teter, Hutter (GTH) pseudopotentials. The GTH-PBE pseudopotential39,40 was used for all DFT calculations, while the GTH-HF pseudopotential was used for all HF calculations.
In Figure 1a we show the absolute error per atom in the THC energy computed for diamond (8 electrons total) with a (2,2,2) k-point mesh, given the converged SCF density, for several different basis sets. We see that THC-AO-K can accurately represent the energy using c^[nn]_ISDF = 25 for all bases other than SZV-GTH. SZV-GTH is the smallest basis and, from Eq. 6, the same c^[nn]_ISDF parameter gives rise to smaller numbers of ISDF points for smaller basis sets; the smallest useful c^[nn]_ISDF value will therefore depend somewhat on basis set size. Importantly, we see that the THC-AO-K energy is systematically improvable, so one may increase the number of points used if a more accurate energy is desired. We do observe some fluctuations in the error as c^[nn]_ISDF is increased. Even at c^[nn]_ISDF = 25, the gradient still has a magnitude of ∼10^−3, and will eventually converge to the exact solution once enough points are used. Consistent with the discussion of the energies, we see that typically (though not always, due to ISDF point selection as already mentioned) a given value of c^[nn]_ISDF performs comparably for the gradient and for the energy.
Figure 1: Error in the THC-AO-K HF energy (a), THC-AO-K orbital gradient (b), THC-oo-K HF energy (c), and THC-oo-K orbital gradient (d) for diamond with a (2,2,2) k-point mesh, given the converged density computed without the ISDF quadrature error. The numbers of ISDF interpolation points used for THC-AO-K and THC-oo-K are related to the parameters c^[nn]_ISDF and c^[oo]_ISDF, respectively.
The THC-oo-K energy errors behave similarly to those of THC-AO-K, and the convergence with increasing c^[oo]_ISDF is likewise systematic. Figure 2a shows the absolute error per atom in the self-consistent HF energy for diamond in the QZV2P-GTH and SZV-GTH bases, as well as AlN (16 electrons total) in the GTH-QZV2P basis, all with a (2,2,2) k-point mesh. We see that our suggested value of c^[nn]_ISDF = 25 holds up well for the QZV2P-GTH systems, although diamond in the SZV-GTH basis does have a noticeably higher error in this case. A similar plot for THC-oo-K can be seen in Figure 2b. We see that, despite possible concerns arising from significant deviations in the orbital gradient when evaluated with the exact converged density, the THC-oo-K algorithm also yields adequately accurate self-consistent solutions for these systems. Most encouragingly, the c^[oo]_ISDF parameter is evidently transferable across different systems and basis sets.
Figure 2: Absolute error per atom in the unit cell in the self-consistent HF energy using (a) THC-AO-K vs. c^[nn]_ISDF (see Eq. 6) and (b) THC-oo-K vs. c^[oo]_ISDF (see Eq. 17), for both diamond and AlN using varying basis set sizes and a (2,2,2) k-point mesh. The dashed line represents 50 µE_H/atom accuracy.
Figure 3: Absolute error in the HF energy using THC-oo-K, as a function of c^[oo]_ISDF, for diamond in the QZV2P-GTH basis with several k-point meshes.

We see that the c^[oo]_ISDF = 50 value is adequate for all k-point mesh sizes, although a significantly lower value may be sufficient for Γ-point calculations.
Figure 4: (a) Average K build time per cycle and (b) breakdown of the THC initial cycle timings vs. the number of k-points for AlN in the QZV2P-GTH basis.

We compute the average THC-AO-K iteration time by assuming 10 iterations, as the majority of the computational cost of this algorithm is the M build step, which is performed only once; i.e., we add one-tenth of the M build time to the per-iteration cost. If more than 10 iterations are required (i.e., the initial guess was poor), THC-AO-K will be more favorable than shown here, and vice versa.

In Fig. 4a, we plot the K build time for AlN in the QZV2P-GTH basis as a function of the number of k-points for both THC-AO-K and THC-oo-K. Two important conclusions follow from this plot. First, as anticipated, the THC-K algorithms are linear in the number of k-points due to the FFT convolution; by contrast, occ-RI-K exhibits quadratic scaling with the number of k-points. Second, the THC-K algorithms have a large prefactor, which makes occ-RI-K the more efficient approach for small k-point meshes. The linear k-scaling makes THC-oo-K extremely effective at reducing compute time for large k-point calculations, and the algorithm becomes more efficient than occ-RI-K between a (4,4,4) and a (5,5,5) k-point mesh. Due to the large basis set and therefore large N^[nn]_ISDF, the THC-AO-K algorithm is significantly slower than both other algorithms, and could not be run for k-point meshes larger than (4,4,4) due to memory requirements.

We present a breakdown of the THC-AO-K and THC-oo-K first-cycle computation time (the full THC-AO-K M build time is included) for the AlN system in Fig. 4b. The THC-AO-K time is dominated by the initial single M build, making the per-iteration cost of contracting M to form K according to Eq. (14) negligible. The THC-oo-K M is significantly cheaper to form (note the 10x scale difference), but the computation of the orbital gradient adds an extra factor of 2-3 to the per-iteration compute cost.
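The reported crossover is consistent with a simple cost model: a linear-in-N_k method with prefactor ratio b/a overtakes a quadratic method once N_k exceeds b/a. The constants below are invented purely to mimic the observed (4,4,4)-(5,5,5) crossover and carry no physical meaning.

```python
# Illustrative crossover between occ-RI-K (quadratic in nk) and THC-oo-K
# (linear in nk with a larger prefactor). All constants are assumptions.
def cost_occ_ri_k(nk, a=1.0):
    return a * nk ** 2

def cost_thc_oo_k(nk, b=90.0):
    return b * nk

for n in range(2, 8):
    nk = n ** 3  # (n,n,n) Monkhorst-Pack mesh
    winner = "THC-oo-K" if cost_thc_oo_k(nk) < cost_occ_ri_k(nk) else "occ-RI-K"
    print(f"({n},{n},{n}) mesh, nk={nk}: {winner} cheaper")
# With b/a = 90, the crossover lies between nk = 64 (4,4,4) and nk = 125 (5,5,5).
```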
Figure 5: Average K build time per cycle vs. the number of basis functions for diamond with a (7,7,7) k-point mesh.
Figure 6: Memory usage for the occ-RI-K, THC-AO-K, and THC-oo-K algorithms as a function of the number of k-points for the diamond system using the QZV2P-GTH basis.
Figure 7: Cohesive energy of the benzene crystal as a function of k-point sampling for several hybrid functionals. The dashed line indicates the reference value.
4 Results and Discussion

4.1 THC Accuracy

We first investigate the accuracy of both THC algorithms to determine acceptable values of c_ISDF. In the molecular case, it was found that c_ISDF = 20-40 was sufficient to reproduce the RI level of accuracy of 50 µE_H per atom for several datasets, including thermochemistry and noncovalent interactions. 27 Such a large c_ISDF was needed to avoid non-variational collapse to spurious solutions (found with smaller c_ISDF). While we did not observe any non-variational collapse in PBC calculations, we found that a similar value is needed for c^[nn]_ISDF. 44
Table 1: Error in the benzene crystal cohesive energy in kJ/mol for several density functionals as a function of the k-point mesh size.

N_k    B97M-rV      HF    PBE0-D3     MN15   M06-2X   SCAN0-D3   ωB97X-rV   ωB97M-rV
1^3     -10.97  -55.87     -37.18   -62.59   -59.70     -32.38    -139.54    -141.48
2^3      -8.29   58.48      -7.44    -8.81     4.09      -2.33     -23.70     -24.74
3^3      -8.98              -4.59    -3.76    10.33       0.54     -12.42     -13.40
TDL      -8.98   74.81      -3.38    -1.63    12.95       1.75      -7.68      -8.62
(1) Mardirossian, N.; Head-Gordon, M. Thirty years of density functional theory in computational chemistry: An overview and extensive assessment of 200 density functionals. Mol. Phys. 2017, 115, 2315-2372.
(2) Goerigk, L.; Hansen, A.; Bauer, C.; Ehrlich, S.; Najibi, A.; Grimme, S. A look at the density functional theory zoo with the advanced GMTKN55 database for general main group thermochemistry, kinetics and noncovalent interactions. Phys. Chem. Chem. Phys. 2017, 19, 32184-32215.
(3) Dohm, S.; Hansen, A.; Steinmetz, M.; Grimme, S.; Checinskii, M. P. Comprehensive thermochemical benchmark set of realistic closed-shell metal organic reactions. J. Chem. Theory Comput. 2018, 14, 2596-2608.
(4) Chan, B.; Gill, P. M. W.; Kimura, M. Assessment of DFT methods for transition metals with the TMC151 compilation of data sets and comparison with accuracies for main-group chemistry. J. Chem. Theory Comput. 2019, 15, 3610-3622.
(5) Muscat, J.; Wander, A.; Harrison, N. On the prediction of band gaps from hybrid functional theory. Chem. Phys. Lett. 2001, 342, 397-401.
(6) Batista, E. R.; Heyd, J.; Hennig, R. G.; Uberuaga, B. P.; Martin, R. L.; Scuseria, G. E.; Umrigar, C.; Wilkins, J. W. Comparison of screened hybrid density functional theory to diffusion Monte Carlo in calculations of total energies of silicon phases and defects. Phys. Rev. B 2006, 74, 121102.
(7) Deák, P.; Aradi, B.; Frauenheim, T.; Janzén, E.; Gali, A. Accurate defect levels obtained from the HSE06 range-separated hybrid functional. Phys. Rev. B 2010, 81, 153203.
(8) Komsa, H.-P.; Pasquarello, A. Assessing the accuracy of hybrid functionals in the determination of defect levels: Application to the As antisite in GaAs. Phys. Rev. B 2011, 84, 075207.
(9) Henderson, T. M.; Paier, J.; Scuseria, G. E. Accurate treatment of solids with the HSE screened hybrid. Phys. Status Solidi B 2011, 248, 767-774.
(10) Schwegler, E.; Challacombe, M. Linear scaling computation of the Hartree-Fock exchange matrix. J. Chem. Phys. 1996, 105, 2726-2734.
(11) Schwegler, E.; Challacombe, M.; Head-Gordon, M. Linear scaling computation of the Fock matrix. II. Rigorous bounds on exchange integrals and incremental Fock build. J. Chem. Phys. 1997, 106, 9708-9717.
(12) Burant, J. C.; Scuseria, G. E.; Frisch, M. J. A linear scaling method for Hartree-Fock exchange calculations of large molecules. J. Chem. Phys. 1996, 105, 8969-8972.
(13) Challacombe, M.; Schwegler, E. Linear scaling computation of the Fock matrix. J. Chem. Phys. 1997, 106, 5526-5536.
(14) Ochsenfeld, C.; White, C. A.; Head-Gordon, M. Linear and sublinear scaling formation of Hartree-Fock-type exchange matrices. J. Chem. Phys. 1998, 109, 1663-1669.
(15) Friesner, R. A. Solution of self-consistent field electronic structure equations by a pseudospectral method. Chem. Phys. Lett. 1985, 116, 39-43.
(16) Neese, F.; Wennmohs, F.; Hansen, A.; Becker, U. Efficient, approximate and parallel Hartree-Fock and hybrid DFT calculations. A 'chain-of-spheres' algorithm for the Hartree-Fock exchange. Chem. Phys. 2009, 356, 98-109.
(17) Izsák, R.; Neese, F. An overlap fitted chain of spheres exchange method. J. Chem. Phys. 2011, 135, 144105.
(18) Beebe, N. H.; Linderberg, J. Simplifications in the generation and transformation of two-electron integrals in molecular calculations. Int. J. Quantum Chem. 1977, 12, 683-705.
(19) Røeggen, I.; Wisløff-Nilssen, E. On the Beebe-Linderberg two-electron integral approximation. Chem. Phys. Lett. 1986, 132, 154-160.
(20) Baerends, E.; Ellis, D.; Ros, P. Self-consistent molecular Hartree-Fock-Slater calculations I. The computational procedure. Chem. Phys. 1973, 2, 41-51.
(21) Whitten, J. L. Coulombic potential energy integrals and approximations. J. Chem. Phys. 1973, 58, 4496-4501.
(22) Jafri, J.; Whitten, J. Electron repulsion integral approximations and error bounds: Molecular applications. J. Chem. Phys. 1974, 61, 2116-2121.
(23) Hohenstein, E. G.; Parrish, R. M.; Martínez, T. J. Tensor hypercontraction density fitting. I. Quartic scaling second- and third-order Møller-Plesset perturbation theory. J. Chem. Phys. 2012, 137, 044103.
(24) Parrish, R. M.; Hohenstein, E. G.; Martínez, T. J.; Sherrill, C. D. Tensor hypercontraction. II. Least-squares renormalization. J. Chem. Phys. 2012, 137, 224106.
(25) Lin, L. Adaptively compressed exchange operator. J. Chem. Theory Comput. 2016, 12, 2242-2249.
(26) Shenvi, N.; Van Aggelen, H.; Yang, Y.; Yang, W. Tensor hypercontracted ppRPA: Reducing the cost of the particle-particle random phase approximation from O(r^6) to O(r^4). J. Chem. Phys. 2014, 141, 024119.
(27) Lee, J.; Lin, L.; Head-Gordon, M. Systematically improvable tensor hypercontraction: Interpolative separable density-fitting for molecules applied to exact exchange, second- and third-order Møller-Plesset perturbation theory. J. Chem. Theory Comput. 2019, 16, 243-263.
(28) Dong, K.; Hu, W.; Lin, L. Interpolative separable density fitting through centroidal Voronoi tessellation with applications to hybrid functional electronic structure calculations. J. Chem. Theory Comput. 2018, 14, 1311-1320.
(29) Hu, W.; Lin, L.; Yang, C. Interpolative separable density fitting decomposition for accelerating hybrid density functional calculations with applications to defects in silicon. J. Chem. Theory Comput. 2017, 13, 5420-5431.
(30) Wu, K.; Qin, X.; Hu, W.; Yang, J. Low-rank approximations accelerated plane-wave hybrid functional calculations with k-point sampling. J. Chem. Theory Comput. 2021, 18, 206-218.
(31) Li, J.; Qin, X.; Wan, L.; Jiao, S.; Hu, W.; Yang, J. Complex-Valued K-means Clustering for Interpolative Separable Density Fitting to Large-Scale Hybrid Functional Ab Initio Molecular Dynamics with Plane-Wave Basis Sets. arXiv preprint arXiv:2208.07731, 2022.
(32) Qin, X.; Hu, W.; Yang, J. Interpolative Separable Density Fitting for Accelerating Two-Electron Integrals: A Theoretical Perspective. J. Chem. Theory Comput. 2023.
(33) Manzer, S. F.; Horn, P. R.; Mardirossian, N.; Head-Gordon, M. Fast, accurate evaluation of exact exchange: The occ-RI-K algorithm. J. Chem. Phys. 2015, 143, 024113.
(34) Lee, J.; Rettig, A.; Feng, X.; Epifanovsky, E.; Head-Gordon, M. Faster Exact Exchange for Solids via occ-RI-K: Application to Combinatorially Optimized Range-Separated Hybrid Functionals for Simple Solids Near the Basis Set Limit. arXiv preprint arXiv:2207.09028, 2022.
(35) Sharma, S.; White, A. F.; Beylkin, G. Fast exchange with Gaussian basis set using robust pseudospectral method. arXiv preprint arXiv:2207.04636, 2022.
(36) Qin, X.; Liu, J.; Hu, W.; Yang, J. Interpolative separable density fitting decomposition for accelerating Hartree-Fock exchange calculations within numerical atomic orbitals. J. Phys. Chem. A 2020, 124, 5664-5674.
(37) Matthews, D. A. Improved grid optimization and fitting in least squares tensor hypercontraction. J. Chem. Theory Comput. 2020, 16, 1382-1385.
(38) Lee, J.; Feng, X.; Cunha, L. A.; Gonthier, J. F.; Epifanovsky, E.; Head-Gordon, M. Approaching the basis set limit in Gaussian-orbital-based periodic calculations with transferability: Performance of pure density functionals for simple semiconductors. J. Chem. Phys. 2021, 155, 164102.
(39) Goedecker, S.; Teter, M.; Hutter, J. Separable dual-space Gaussian pseudopotentials. Phys. Rev. B 1996, 54, 1703.
(40) Krack, M. Pseudopotentials for H to Kr optimized for gradient-corrected exchange-correlation functionals. Theor. Chem. Acc. 2005, 114, 145-152.
(41) Lippert, G.; Hutter, J.; Parrinello, M. A hybrid Gaussian and plane wave density functional scheme. Mol. Phys. 1997, 92, 477-488.
(42) VandeVondele, J.; Krack, M.; Mohamed, F.; Parrinello, M.; Chassaing, T.; Hutter, J. Quickstep: Fast and accurate density functional calculations using a mixed Gaussian and plane waves approach. Comput. Phys. Commun. 2005, 167, 103-128.
(43) Fraser, L. M.; Foulkes, W.; Rajagopal, G.; Needs, R.; Kenny, S.; Williamson, A. Finite-size effects and Coulomb interactions in quantum Monte Carlo calculations for homogeneous systems with periodic boundary conditions. Phys. Rev. B 1996, 53, 1814.
(44) Epifanovsky, E.; Gilbert, A. T.; Feng, X.; Lee, J.; Mao, Y.; Mardirossian, N.; Pokhilko, P.; White, A. F.; Coons, M. P.; Dempwolff, A. L.; et al. Software for the frontiers of quantum chemistry: An overview of developments in the Q-Chem 5 package. J. Chem. Phys. 2021, 155, 084801.
(45) Yang, J.; Hu, W.; Usvyat, D.; Matthews, D.; Schütz, M.; Chan, G. K.-L. Ab initio determination of the crystalline benzene lattice energy to sub-kilojoule/mole accuracy. Science 2014, 345, 640-643.
(46) Ringer, A. L.; Sherrill, C. D. First principles computation of lattice energies of organic solids: The benzene crystal. Chem. Eur. J. 2008, 14, 2542-2547.
(47) Kennedy, M. R.; McDonald, A. R.; DePrince III, A. E.; Marshall, M. S.; Podeszwa, R.; Sherrill, C. D. Communication: Resolving the three-body contribution to the lattice energy of crystalline benzene: Benchmark results from coupled-cluster theory. J. Chem. Phys. 2014, 140, 121104.
(48) Civalleri, B.; Doll, K.; Zicovich-Wilson, C. Ab initio investigation of structure and cohesive energy of crystalline urea. J. Phys. Chem. B 2007, 111, 26-33.
(49) Moellmann, J.; Grimme, S. DFT-D3 study of some molecular crystals. J. Phys. Chem. C 2014, 118, 7615-7621.
(50) Bintrim, S. J.; Berkelbach, T. C.; Ye, H.-Z. Integral-Direct Hartree-Fock and Møller-Plesset Perturbation Theory for Periodic Systems with Density Fitting: Application to the Benzene Crystal. J. Chem. Theory Comput. 2022, 18, 5374-5381.
(51) Neumann, M. A.; Perrin, M.-A. Energy ranking of molecular crystals using density functional theory calculations and an empirical van der Waals correction. J. Phys. Chem. B 2005, 109, 15531-15541.
(52) Civalleri, B.; Zicovich-Wilson, C. M.; Valenzano, L.; Ugliengo, P. B3LYP augmented with an empirical dispersion term (B3LYP-D*) as applied to molecular crystals. CrystEngComm 2008, 10, 405-410.
(53) Loboda, O. A.; Dolgonos, G. A.; Boese, A. D. Towards hybrid density functional calculations of molecular crystals via fragment-based methods. J. Chem. Phys. 2018, 149, 124104.
(54) Price, A. J.; Otero-de-la-Roza, A.; Johnson, E. R. XDM-corrected hybrid DFT with numerical atomic orbitals predicts molecular crystal lattice energies with unprecedented accuracy. Chem. Sci. 2023.
(55) Cutini, M.; Civalleri, B.; Corno, M.; Orlando, R.; Brandenburg, J. G.; Maschio, L.; Ugliengo, P. Assessment of different quantum mechanical methods for the prediction of structure and cohesive energy of molecular crystals. J. Chem. Theory Comput. 2016, 12, 3340-3352.
(56) Mardirossian, N.; Head-Gordon, M. Mapping the genome of meta-generalized gradient approximation density functionals: The search for B97M-V. J. Chem. Phys. 2015, 142, 074111.
(57) Mardirossian, N.; Ruiz Pestana, L.; Womack, J. C.; Skylaris, C.-K.; Head-Gordon, T.; Head-Gordon, M. Use of the rVV10 nonlocal correlation functional in the B97M-V density functional: Defining B97M-rV and related functionals. J. Phys. Chem. Lett. 2017, 8, 35-40.
(58) Adamo, C.; Barone, V. Toward reliable density functional methods without adjustable parameters: The PBE0 model. J. Chem. Phys. 1999, 110, 6158-6170.
(59) Grimme, S.; Antony, J.; Ehrlich, S.; Krieg, H. A consistent and accurate ab initio parametrization of density functional dispersion correction (DFT-D) for the 94 elements H-Pu. J. Chem. Phys. 2010, 132, 154104.
(60) Yu, H. S.; He, X.; Li, S. L.; Truhlar, D. G. MN15: A Kohn-Sham global-hybrid exchange-correlation density functional with broad accuracy for multi-reference and single-reference systems and noncovalent interactions. Chem. Sci. 2016, 7, 5032-5051.
(61) Zhao, Y.; Truhlar, D. G. The M06 suite of density functionals for main group thermochemistry, thermochemical kinetics, noncovalent interactions, excited states, and transition elements: two new functionals and systematic testing of four M06-class functionals and 12 other functionals. Theor. Chem. Acc. 2008, 120, 215-241.
(62) Hui, K.; Chai, J.-D. SCAN-based hybrid and double-hybrid density functionals from models without fitted parameters. J. Chem. Phys. 2016, 144, 044114.
(63) Mardirossian, N.; Head-Gordon, M. ωB97X-V: A 10-parameter, range-separated hybrid, generalized gradient approximation density functional with nonlocal correlation, designed by a survival-of-the-fittest strategy. Phys. Chem. Chem. Phys. 2014, 16, 9904-9924.
(64) Mardirossian, N.; Head-Gordon, M. ωB97M-V: A combinatorially optimized, range-separated hybrid, meta-GGA density functional with VV10 nonlocal correlation. J. Chem. Phys. 2016, 144, 214110.
(65) Hermann, J.; Tkatchenko, A. Electronic exchange and correlation in van der Waals systems: Balancing semilocal and nonlocal energy contributions. J. Chem. Theory Comput. 2018, 14, 1361-1369.
(66) Stöhr, M.; Van Voorhis, T.; Tkatchenko, A. Theory and practice of modeling van der Waals interactions in electronic-structure calculations. Chem. Soc. Rev. 2019, 48, 4118-4154.
(67) Fabiano, E.; Cortona, P. Seeking widely applicable dispersion-corrected GGA functionals: The performances of TCA+D3 and RevTCA+D3 on solid-state systems. Comput. Mater. Sci. 2023, 216, 111826.
(68) Pierce, K.; Rishi, V.; Valeev, E. F. Robust Approximation of Tensor Networks: Application to Grid-Free Tensor Factorization of the Coulomb Interaction. J. Chem. Theory Comput. 2021, 17, 2217-2230.
(69) Blöchl, P. E. Projector augmented-wave method. Phys. Rev. B 1994, 50, 17953.
(70) Sun, Q.; Berkelbach, T. C.; McClain, J. D.; Chan, G. K.-L. Gaussian and plane-wave mixed density fitting for periodic systems. J. Chem. Phys. 2017, 147, 164119.
(71) Ye, H.-Z.; Berkelbach, T. C. Fast periodic Gaussian density fitting by range separation. J. Chem. Phys. 2021, 154, 131104.
(72) Hohenstein, E. G.; Parrish, R. M.; Sherrill, C. D.; Martínez, T. J. Communication: Tensor hypercontraction. III. Least-squares tensor hypercontraction for the determination of correlated wavefunctions. J. Chem. Phys. 2012, 137, 221101.
(73) Lu, J.; Thicke, K. Cubic scaling algorithms for RPA correlation using interpolative separable density fitting. J. Comput. Phys. 2017, 351, 187-202.
| [] |
[
"Spin effects in the magneto-drag between double quantum wells",
"Spin effects in the magneto-drag between double quantum wells"
] | [
"J G S Lok \nMax-Planck-Institut für Festkörperforschung\nHeisenbergstrasse 170569StuttgartGermany\n",
"S Kraus \nMax-Planck-Institut für Festkörperforschung\nHeisenbergstrasse 170569StuttgartGermany\n",
"M Pohlt \nMax-Planck-Institut für Festkörperforschung\nHeisenbergstrasse 170569StuttgartGermany\n",
"W Dietsche \nMax-Planck-Institut für Festkörperforschung\nHeisenbergstrasse 170569StuttgartGermany\n",
"K Von Klitzing \nMax-Planck-Institut für Festkörperforschung\nHeisenbergstrasse 170569StuttgartGermany\n",
"W Wegscheider \nWalter Schottky Institut\nTechnische Universität München\n85748GarchingGermany\n\nInstitut für angewandte und experimentelle Physik\nUniversität Regensburg\n93040RegensburgGermany\n",
"M Bichler \nWalter Schottky Institut\nTechnische Universität München\n85748GarchingGermany\n"
] | [
"Max-Planck-Institut für Festkörperforschung\nHeisenbergstrasse 170569StuttgartGermany",
"Max-Planck-Institut für Festkörperforschung\nHeisenbergstrasse 170569StuttgartGermany",
"Max-Planck-Institut für Festkörperforschung\nHeisenbergstrasse 170569StuttgartGermany",
"Max-Planck-Institut für Festkörperforschung\nHeisenbergstrasse 170569StuttgartGermany",
"Max-Planck-Institut für Festkörperforschung\nHeisenbergstrasse 170569StuttgartGermany",
"Walter Schottky Institut\nTechnische Universität München\n85748GarchingGermany",
"Institut für angewandte und experimentelle Physik\nUniversität Regensburg\n93040RegensburgGermany",
"Walter Schottky Institut\nTechnische Universität München\n85748GarchingGermany"
] | [] | We report on the selectivity to spin in a drag measurement. This selectivity to spin causes deep minima in the magneto-drag at odd fillingfactors for matched electron densities at magnetic fields and temperatures at which the bare spin energy is only one tenth of the temperature. For mismatched densities the selectivity causes a novel 1/B-periodic oscillation, such that negative minima in the drag are observed whenever the majority spins at the Fermi energies of the two-dimensional electron gasses (2DEGs) are anti-parallel, and positive maxima whenever the majority spins at the Fermi energies are parallel. | 10.1103/physrevb.63.041305 | [
"https://export.arxiv.org/pdf/cond-mat/0011017v1.pdf"
] | 117,750,174 | cond-mat/0011017 | fdcf542c6cef82e14749209ab73e267814bc962f |
Spin effects in the magneto-drag between double quantum wells
1 Nov 2000 (2 April 2000)
J G S Lok
Max-Planck-Institut für Festkörperforschung
Heisenbergstrasse 170569StuttgartGermany
S Kraus
Max-Planck-Institut für Festkörperforschung
Heisenbergstrasse 170569StuttgartGermany
M Pohlt
Max-Planck-Institut für Festkörperforschung
Heisenbergstrasse 170569StuttgartGermany
W Dietsche
Max-Planck-Institut für Festkörperforschung
Heisenbergstrasse 170569StuttgartGermany
K Von Klitzing
Max-Planck-Institut für Festkörperforschung
Heisenbergstrasse 170569StuttgartGermany
W Wegscheider
Walter Schottky Institut
Technische Universität München
85748GarchingGermany
Institut für angewandte und experimentelle Physik
Universität Regensburg
93040RegensburgGermany
M Bichler
Walter Schottky Institut
Technische Universität München
85748GarchingGermany
Spin effects in the magneto-drag between double quantum wells
1 Nov 2000 (2 April 2000)
We report on the selectivity to spin in a drag measurement. This selectivity to spin causes deep minima in the magneto-drag at odd filling factors for matched electron densities, at magnetic fields and temperatures at which the bare spin energy is only one tenth of the temperature. For mismatched densities the selectivity causes a novel 1/B-periodic oscillation, such that negative minima in the drag are observed whenever the majority spins at the Fermi energies of the two-dimensional electron gases (2DEGs) are anti-parallel, and positive maxima whenever the majority spins at the Fermi energies are parallel.
The physics of two-dimensional electron gases (2DEGs) has spawned numerous discoveries over the last two decades, with the integer and fractional quantum Hall effects being the most prominent examples. More recently, interaction phenomena between closely spaced 2DEGs in quantizing magnetic fields have found strong interest both experimentally [1][2][3] and theoretically [4,5], because of the peculiar role the electron spin plays in these systems. Particularly interesting is a measurement of the frictional drag between two 2DEGs, as it probes the density-response functions in the limit of low frequency and finite wavevector (see [6] and references therein), a quantity which is not easily accessible otherwise.
Experimental data on drag at zero magnetic field are reasonably well understood. Several puzzling issues, however, exist for the magneto-drag. Firstly, at matched densities in the 2DEGs, the magneto-drag displays a double peak around odd filling factor [7,8] even when spin-splitting is not visible at all in the longitudinal resistances of the individual 2DEGs. These double peaks were ascribed either to enhanced screening when the Fermi energy (E_F) is in the centre of a Landau level [9,7], or to an enhanced spin-splitting [8]. Secondly, at mismatched densities negative magneto-drag has been observed [10], i.e. an acceleration of the electrons opposite to the direction of the net transferred momentum. This negative drag was speculatively ascribed to a hole-like dispersion in the less-than-half-filled Landau levels brought about by disorder [10].
In this Letter we present data taken in a hitherto unexplored temperature-magnetic field regime which clearly demonstrate the decisive role the electron spin plays in the drag. We find that both the above issues have a common origin; they are caused by the fact that the drag is selective to the spin of the electrons, such that electrons with anti-parallel spin in each 2DEG have a negative and those with parallel spin have a positive contribution to the drag. At mismatched densities this selectivity causes a novel 1/B-periodic oscillation in the magneto-drag around zero with frequency h∆n/2e, with ∆n the density difference between the 2DEGs. Our finding that the drag is selective to the spin of the electrons is surprising since established coupling mechanisms via Coulomb or phonon interactions are a priori not sensitive to spin, as spin-orbit interaction is extremely weak for electrons in GaAs.
In a drag experiment a current is driven through one of two electrically isolated layers, the so-called drive layer. Interlayer carrier-carrier scattering through phonons, plasmons or the direct Coulomb interaction transfers part of the momentum of the carriers in the drive layer to those in the drag layer, causing a charge accumulation in the drag layer in the direction of the drift velocity of carriers in the drive layer. The drag (ρ_T) is defined as minus the ratio of the electric field originating from this charge accumulation to the drive current density. ρ_T of layers with the same type of carriers is thus expected to be positive, while that of layers with different types of carriers should be negative.
We have studied transport in several double quantum wells fabricated from three wafers that differ only in the thickness of their barrier. The 20 nm wide quantum wells are separated by Al_0.3Ga_0.7As barriers with widths of 30, 60 or 120 nm. The densities per quantum well are typically 2·10^11 cm^-2 and all mobilities exceed 2·10^6 cm^2 V^-1 s^-1. The presented results are obtained on 30 nm barrier samples; qualitatively identical results are obtained on samples fabricated from the other wafers. Measurements were carried out on Hall bars with a width of 80 µm and a length of 880 µm. Separate contacts to each quantum well are achieved through the selective depletion technique [11] using ex-situ prepared n-doped buried backgates [12] and metallic front gates. Measurements were performed in a ^3He system with the sample mounted at the end of a cold finger. Standard drag tests (changing ground in the drag layer, interchanging drag and drive layer, current linearity, and changing the direction of the applied magnetic field [13]) confirmed that the signal measured is a pure drag signal. Fig. 1 plots ρ_T and ρ_xx measured at temperatures of 0.26 and 1.0 K. With increasing magnetic field ρ_xx shows the usual Shubnikov-de Haas oscillations which, at 0.26 K, start at a magnetic field of 0.07 T. Spin-splitting becomes visible at a magnetic field of 0.51 T, and it is completely developed at 1.2 T. By contrast, at 0.26 K the oscillations in ρ_T show a double peak in magnetic fields as low as 0.11 T (ν=77, see inset). The appearance of a double peak in ρ_T at fields and temperatures where ρ_xx shows no spin-splitting yet has been predicted theoretically [9]. The theory states that ρ_T consists essentially of the product of the density of states (DOS) at E_F in each layer, multiplied by the strength of the interlayer interaction. This strength supposedly decreases strongly at the centre of a Landau level where, due to the large DOS at E_F, screening is very effective. The decrease would then more than compensate for the increase in the product of the DOS of the 2DEGs, thus resulting in a double peak in ρ_T. The theory was consistent with experiments described in a subsequent paper [7]. However, the most critical test for the theory, namely the occurrence of a double peak in ρ_T measured at a fully spin-split Landau level (that doesn't show fractional features), could not be performed due to the moderate mobility of the sample and the accessible temperature range. Our experiment does allow such a test, and fig. 1 shows that ρ_T does not show this predicted double peak for spin-split Landau levels. We further note that at 1 T the longitudinal conductivity in our sample is 50% higher than in the experiment [7] and the theory [9], and screening should thus be even more effective in our samples. The theory is thus not applicable to explain our experimental results, and one is forced to reconsider the possible role of spin. We note furthermore that at 0.11 T and 0.26 K the bare spin energy (gµ_B B) is only one tenth of the thermal energy, so there is significant thermal excitation between the Landau levels with different spin. This rules out enhanced spin-splitting [8,14,15] as the cause of the double peak in ρ_T. In the following we will nonetheless show that it is spin that is causing the double peak, through a mechanism where electrons with parallel spin in each layer have a positive, and those with anti-parallel spin a negative, contribution to ρ_T.
The minima at large odd filling factors then occur because the positive and negative contributions cancel.
In order to prove the above scenario, we have measured the magneto-drag at mismatched densities. Then successive Landau levels in the 2DEGs pass through E_F at different magnetic fields. At certain magnetic fields (depending on the density and density difference of the 2DEGs) Landau levels with anti-parallel spins will be at E_F in the 2DEGs, while at somewhat different magnetic fields Landau levels with parallel spin will be at E_F. Alternatively, we have fixed the magnetic field and used one of the gates in the sample to change the density in one 2DEG, bringing about the same effect. The first measurement is plotted in the lower part of fig. 2 together with ρ_xx of both 2DEGs (top). As is apparent, for mismatched densities ρ_T is no longer always positive. Instead, ρ_T consists of the sum of two 1/B-periodic oscillations: a quick one with the frequency h(n_1 + n_2)/2e, resulting from the overlap of the (in ρ_T, for B > 0.17 T, doubly peaked) Landau levels of the 2DEGs, plus a slower one with the frequency h(n_1 - n_2)/2e, which causes ρ_T to oscillate around zero. The arrows in fig. 2 indicate the magnetic fields at which the filling-factor difference between the 2DEGs (∆ν = ν_1 - ν_2) equals an integer. ∆ν is calculated from the densities of the 2DEGs, which are obtained from the positions of the minima in the Shubnikov-de Haas oscillations in ρ_xx. It is clear that when ∆ν is odd ρ_T is most negative, while when ∆ν is even ρ_T is most positive. The inset of fig. 2 confirms this even/odd behavior. It plots ρ_T at 0.641 T (ν_1 = 13.5, maximum ρ_T in fig. 1) versus ∆ν, which is changed continuously by decreasing the density of one 2DEG with a gate. In such a measurement the DOS in the other 2DEG is kept constant, thus removing the quick oscillation. However, the periodic slow oscillation with alternating sign remains, and its amplitude increases upon decreasing the density in the second 2DEG.
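As a consistency check on the ∆ν labelling, the fields at which ∆ν is an integer follow directly from ν = nh/(eB); the short sketch below evaluates them for the densities quoted above (the predicted field values are our own arithmetic, not taken from the figure).

```python
# Fields where the filling-factor difference dnu = h*(n1 - n2)/(e*B) is integer.
h_over_e = 4.135667e-15           # h/e in V s
n1, n2 = 2.27e15, 2.08e15         # densities in m^-2 (2.27, 2.08 x 10^11 cm^-2)
dn = n1 - n2
for m in range(1, 5):
    B = h_over_e * dn / m         # field at which dnu equals the integer m
    sign = "negative rho_T (odd dnu)" if m % 2 else "positive rho_T (even dnu)"
    print(f"dnu = {m}: B = {B:.3f} T -> {sign}")
```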
The observation of negative ρ_T at odd ∆ν and positive ρ_T at even ∆ν hints at the involvement of spin. If spin-splitting were fully developed, odd ∆ν would correspond to electrons with anti-parallel spin at the E_F's of the 2DEGs. In our experiment, however, negative ρ_T is observed in the regime of incomplete spin-splitting. One might then expect a maximum positive ρ_T at ∆ν = even and a maximum negative ρ_T at ∆ν = even + ∆ν_spin, with ∆ν_spin the filling-factor difference between spin↑ and spin↓ peaks in ρ_xx (which equals 1 only if spin-splitting is complete). A simulation of ρ_T (see below), assuming positive coupling between electrons with parallel spins and negative coupling between electrons with anti-parallel spins, shows however that ρ_T is most positive for ∆ν = even and most negative for ∆ν = odd, irrespective of the magnitude of the spin-splitting. This magnitude only influences the amplitude of the oscillations in ρ_T, but does not alter their phase or periodicity.
Lacking a theory to compare our results with, we present an empirical model, assuming ρ_xx ∝ (DOS↑ + DOS↓)^2 and ρ_T ∝ B^α (DOS↑ - DOS↓)_layer1 × (DOS↑ - DOS↓)_layer2, with DOS↑,↓ the density of states at E_F for spin↑ and spin↓, and B the magnetic field. To account for the unknown change in the coupling between the layers with magnetic field, a factor of B^α (α ≈ -3.5) is used to scale the amplitude of ρ_T(B) to approximately the experimental value. The DOS at E_F is given by the sum of a set of Gaussians with an intrinsic width (due to disorder and temperature) plus a width that increases with √B. The intrinsic width (1.5 K) is extracted from the experiment through a Dingle analysis of the oscillatory part of the low-field Shubnikov-de Haas oscillations. The coefficient in front of the √B term (2.7 K for the lower-density 2DEG and 2.3 K for the other) is determined by fitting the simulated ρ_xx to the measured one. In the simulation the densities are kept constant (i.e. E_F oscillates), and for the results shown in fig. 3 we assume an exchange-enhanced spin gap: ∆_spin = gµ_B B + |(n↑ - n↓)/(n↑ + n↓)| × 2E_c, with E_c the Coulomb energy e^2/(4πε l_B), g the bare g-factor in GaAs (-0.44), µ_B the Bohr magneton, ε the dielectric constant, l_B the magnetic length, and n↑,↓ the number of particles with spin↑ and spin↓. There is some discussion in the literature whether in low fields the relevant length scale for E_c is l_B or (the much smaller) k_F^-1 (see [14] and references therein). In our simulation 0.5 l_B is appropriate, i.e. the factor of 2E_c is used, as it reproduces the experimental ρ_xx traces. With a fixed enhanced g-factor (or even the bare g-factor), however, qualitatively similar results for ρ_T are obtained.
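A minimal numerical sketch of this empirical model is given below. It keeps only the bare Zeeman gap and evaluates the DOS at the zero-field Fermi energy (the paper instead uses an exchange-enhanced gap at fixed density, i.e. an oscillating E_F); all parameter values are illustrative, with the two widths playing the role of the fitted intrinsic and √B broadenings.

```python
import numpy as np

KB = 0.0862     # meV/K
MU_B = 0.0579   # meV/T
G = -0.44       # bare g-factor of GaAs

def dos_at_ef(B, n_cm2, gamma0_K=1.5, c_sqrtB_K=2.5, n_levels=200):
    """Gaussian-broadened, spin-split Landau-level DOS at E_F (arb. units)."""
    e_f = 3.57 * n_cm2 / 1e11        # meV; E_F of a GaAs 2DEG at this density
    hw_c = 1.728 * B                 # meV; cyclotron energy for m* = 0.067 m_e
    width = KB * (gamma0_K + c_sqrtB_K * np.sqrt(B))
    d_spin = abs(G) * MU_B * B       # bare Zeeman gap (no exchange enhancement)
    levels = (np.arange(n_levels) + 0.5) * hw_c
    dos_up = np.exp(-((e_f - (levels + 0.5 * d_spin)) ** 2)
                    / (2 * width ** 2)).sum() / width
    dos_dn = np.exp(-((e_f - (levels - 0.5 * d_spin)) ** 2)
                    / (2 * width ** 2)).sum() / width
    return dos_up, dos_dn

def simulate(B, n1=2.27e11, n2=2.08e11, alpha=-3.5):
    up1, dn1 = np.vectorize(dos_at_ef)(B, n1)
    up2, dn2 = np.vectorize(dos_at_ef)(B, n2)
    rho_xx = (up1 + dn1) ** 2                       # layer 1, arb. units
    rho_T = B ** alpha * (up1 - dn1) * (up2 - dn2)  # oscillates around zero
    return rho_xx, rho_T

B = np.linspace(0.2, 1.2, 600)
rho_xx, rho_T = simulate(B)
```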
The two sets of oscillations in ρ_T are observed in all samples from all three wafers at mismatched densities. The slow oscillation can be recognised as such for T ≲ 1 K, although a few negative spikes remain visible up to 1.4-1.9 K (depending on the density difference). The inverse period of the slow oscillation is accurately given by h/2e × ∆n in the density range studied (∆n ∈ [0, 1.2] × 10^11 cm^-2, n_1 = 2.0 × 10^11 cm^-2), confirming that the appearance of negative ρ_T for odd ∆ν and positive ρ_T for even ∆ν is not restricted to one particular density difference.
The appearance of negative ρ_T when Landau levels with anti-parallel majority spin are at E_F in the 2DEGs is a puzzling result, as it implies that electrons in the drag layer gain momentum in the direction opposite to that of the net momentum lost by electrons in the drive layer. In the single-particle picture, this can only occur if the dispersion relation for electrons has a hole-like character (i.e. ∂²E/∂k_y² < 0 [10]), but we know of no mechanism through which spins can cause that. The explanation for negative ρ_T must then be sought beyond the single-particle picture, possibly in terms of spin waves or coupled states between the layers. We note that our empirical formula describing ρ_T consists of the three possible triplet spin wavefunctions, and one could speculate about an interaction between electrons with opposite momentum in the different layers. Considering the observation of the effect in the 120 nm barrier samples, the coupling mechanism is most likely not the direct Coulomb interaction. In any case, our results at least convincingly demonstrate the importance of the electron spin.
Our empirical model seems to describe ρ_T accurately. There is, however, a limitation to its applicability: in fields above 1.2 T the negative ρ_T vanishes in the 30 nm barrier samples. For the density mismatch in fig. 2 this is easily explained, as in fields above ∼1.2 T there is no longer any overlap between Landau levels with different spin. However, for larger density differences, such that the necessary overlap of Landau levels with different spin exists, we only find positive ρ_T for all temperatures studied (0.25 K < T < 10 K). We note that at our lowest temperature (0.25 K) the field of 1.2 T corresponds to a complete spin-splitting in ρ_xx. Samples from the other wafers have similar spin-splittings, and the negative ρ_T vanishes at comparable fields. It is further worth noting that the upper bound for the magnetic field below which negative ρ_T is observed does not depend on the density or density difference of the 2DEGs (provided an overlap exists between Landau levels with different spin for B > 1.2 T), and thus not on filling factor.
Finally, we comment on the interpretation of negative magneto-drag in ref. [10]. Due to the higher lowest temperature (1.15 K), no spin-splitting in ρ_xx and no slow oscillations in ρ_T were observed there. Nevertheless, the remains of half of a slow period, filled up with the quick oscillation, were visible. It thus seemed that negative ρ_T appeared only when in one 2DEG the Landau level at E_F was more than half filled, while in the other the Landau level at E_F was less than half filled. It was argued that disorder induces a hole-like dispersion in the less-than-half-filled Landau level, leading to negative ρ_T. Our lower temperatures allow probing of the regime where ρ_xx shows spin-splitting. The less-than-half-filled/more-than-half-filled Landau level explanation should hold for spin-split Landau levels as well, thus doubling the frequency of the quick oscillation in ρ_T. Our experiment shows no doubling, disproving such a scenario. Moreover, as fig. 2 shows, negative ρ_T can be observed as well when the (in ρ_xx partly or almost completely spin-split) Landau levels are both less than half filled (0.62 T, 0.73 T) or both more than half filled (1.0 T). Our data are thus inconsistent with the interpretation given in ref. [10], while our empirical model does explain the data of ref. [10].
Summarising, at matched densities the double peak in the magneto-drag, measured at fields and temperatures where the longitudinal resistance shows no spin-splitting at all, is the result of the drag being selective to the spin of the electrons, such that electrons with parallel spin in each layer have a positive contribution to the drag, while those with anti-parallel spin have a negative contribution. This selectivity to spin further causes the occurrence of a negative drag whenever Landau levels with anti-parallel spin are at E_F in the 2DEGs, resulting in a novel 1/B-periodic oscillation in the low-field, low-temperature drag for mismatched electron densities, with the inverse period given by h∆n/2e. Our empirical model assuming ρ_T ∝ (DOS↑ - DOS↓)_layer1 × (DOS↑ - DOS↓)_layer2 quite accurately describes the results at matched as well as mismatched densities. The origin of the negative coupling between electrons with anti-parallel spin, as well as its disappearance when spin-splitting in ρ_xx is complete, remains to be explained.
We acknowledge financial support from BMBF and European Community TMR network no. ERBFMRX-CT98-0180. We are grateful to L.I. Glazman for helpful discussion and to J. Schmid for experimental support.
FIG. 1. ρ_T (bottom) and ρ_xx (top) at 0.26 K and 1.0 K and at matched densities (n_1 = n_2 = 2.13·10^11 cm^-2), showing the absence of a double peak in ρ_T for completely spin-split peaks in ρ_xx. The inset is a blow-up of ρ_T at 0.26 K, showing a double peak in fields above 0.11 T.
FIG. 2. ρ_T (bottom) and ρ_xx (top) for both 2DEGs at mismatched densities (n_1 = 2.27 and n_2 = 2.08·10^11 cm^-2) as a function of magnetic field at T = 0.25 K. Two sets of oscillations can be distinguished in ρ_T: i) a quick one resulting from the overlap of the Landau levels in the 2DEGs, and ii) a slow one which causes (positive) maxima in ρ_T whenever the filling-factor difference between the 2DEGs is even, and (negative) minima whenever this difference is odd. The inset shows ρ_T at a fixed magnetic field of 0.641 T (maximum in ρ_T in Fig. 1) versus filling-factor difference.
FIG. 3. Comparison of simulation and experiment for ρ_xx and ρ_T (details can be found in the text). Top traces show ρ_xx; upper curves are offset vertically (solid line: experiment, dotted line: simulation). Lower traces show the drag; the simulation is offset vertically.
[1] V. Pellegrini et al., Phys. Rev. Lett. 78, 310 (1997); V. Pellegrini et al., Science 281, 799 (1998).
[2] A. Sawada et al., Phys. Rev. Lett. 80, 4534 (1998).
[3] V.S. Khrapai et al., Phys. Rev. Lett. 84, 725 (2000).
[4] L. Zheng, R.J. Radtke and S. Das Sarma, Phys. Rev. Lett. 78, 2453 (1997); S. Das Sarma, S. Sachdev and L. Zheng, ibid. 79, 917 (1997); S. Das Sarma, S. Sachdev and L. Zheng, Phys. Rev. B 58, 4672 (1998).
[5] L. Brey, E. Demler and S. Das Sarma, Phys. Rev. Lett. 83, 168 (1999); B. Paredes et al., Phys. Rev. Lett. 83, 2250 (1999).
[6] A.G. Rojo, J. Phys. Condens. Matter 11, R31 (1999).
[7] H. Rubel et al., Phys. Rev. Lett. 78, 1763 (1997).
[8] N.P.R. Hill et al., Physica B 249-251, 868 (1998).
[9] M.C. Bonsager et al., Phys. Rev. Lett. 77, 1366 (1996); Phys. Rev. B 56, 10314 (1997).
[10] X.G. Feng et al., Phys. Rev. Lett. 81, 3219 (1998).
[11] J.P. Eisenstein, L.N. Pfeiffer and K.W. West, Appl. Phys. Lett. 57, 2324 (1990).
[12] H. Rubel et al., Mater. Sci. Eng. B 51, 207 (1998).
[13] T.J. Gramila et al., Phys. Rev. Lett. 66, 1216 (1991).
[14] D.R. Leadley et al., Phys. Rev. B 58, 13036 (1998).
[15] M.M. Fogler and B.I. Shklovskii, Phys. Rev. B 52, 17366 (1995).
[16] P. Svoboda et al., Phys. Rev. B 45, 8763 (1992).
| [] |
[
"CHEX-MATE: Constraining the origin of the scatter in galaxy cluster radial X-ray surface brightness profiles",
"CHEX-MATE: Constraining the origin of the scatter in galaxy cluster radial X-ray surface brightness profiles"
] | [
"I Bartalucci ",
"S Molendi ",
"E Rasia ",
"G W Pratt ",
"M Arnaud ",
"M Rossetti ",
"F Gastaldello ",
"D Eckert ",
"M Balboni ",
"S Borgani ",
"H Bourdin ",
"M G "
] | [] | [] | We investigate the statistical properties and the origin of the scatter within the spatially resolved surface brightness profiles of the CHEX-MATE sample, formed by 118 galaxy clusters selected via the SZ effect. These objects have been drawn from the Planck SZ catalogue and cover a wide range of masses, M_500 = [2-15] × 10^14 M_⊙, and redshift, z = [0.05, 0.6]. We derived the surface brightness and emission measure profiles and determined the statistical properties of the full sample and of sub-samples according to their morphology, mass, and redshift. We found that there is a critical scale, R ∼ 0.4R_500, within which the profiles of morphologically relaxed and disturbed objects diverge. The medians of these sub-samples differ by a factor of ∼10 at 0.05R_500. There are no significant differences between mass- and redshift-selected sub-samples once proper scaling is applied. We compare CHEX-MATE with a sample of 115 clusters drawn from The Three Hundred suite of cosmological simulations. We found that simulated emission measure profiles are systematically steeper than those of observations. For the first time, the simulations were used to break down the components causing the scatter between the profiles. We investigated the behaviour of the scatter due to object-by-object variation. We found that the high scatter, approximately 110%, at R < 0.4R^{YSZ}_{500} is due to a genuine difference between the distributions of the gas in the cores of the clusters. The intermediate scale is characterised by the minimum value of the scatter, on the order of 0.56, indicating a region where cluster profiles are the closest to the self-similar regime. Larger scales are characterised by increasing scatter due to the complex spatial distribution of the gas. Also for the first time, we verify that the scatter due to projection effects is smaller than the scatter due to genuine object-by-object variation on all the considered scales. Key words. intracluster medium - X-rays: galaxies: clusters | 10.1051/0004-6361/202346189 | [
"https://export.arxiv.org/pdf/2305.03082v1.pdf"
] | 258,547,262 | 2305.03082 | 510eae182dda1db7a25ba3201efd3bb5e2722905 |
CHEX-MATE: Constraining the origin of the scatter in galaxy cluster radial X-ray surface brightness profiles
May 8, 2023
I Bartalucci
S Molendi
E Rasia
G W Pratt
M Arnaud
M Rossetti
F Gastaldello
D Eckert
M Balboni
S Borgani
H Bourdin
M G
CHEX-MATE: Constraining the origin of the scatter in galaxy cluster radial X-ray surface brightness profiles
May 8, 2023 / received - accepted
Astronomy & Astrophysics manuscript no. 46189corr
Campitiello 9, 10 , S. De Grandi 11 , M. De Petris 12 , R.T. Duffy 4 , S. Ettori 9, 13 , A. (Affiliations can be found after the references)
Key words: intracluster medium - X-rays: galaxies: clusters
We investigate the statistical properties and the origin of the scatter within the spatially resolved surface brightness profiles of the CHEX-MATE sample, formed by 118 galaxy clusters selected via the SZ effect. These objects have been drawn from the Planck SZ catalogue and cover a wide range of masses, M_500 = [2-15] × 10^14 M_⊙, and redshift, z = [0.05, 0.6]. We derived the surface brightness and emission measure profiles and determined the statistical properties of the full sample and of sub-samples according to their morphology, mass, and redshift. We found that there is a critical scale, R ∼ 0.4R_500, within which the profiles of morphologically relaxed and disturbed objects diverge. The medians of these sub-samples differ by a factor of ∼10 at 0.05R_500. There are no significant differences between mass- and redshift-selected sub-samples once proper scaling is applied. We compare CHEX-MATE with a sample of 115 clusters drawn from The Three Hundred suite of cosmological simulations. We found that simulated emission measure profiles are systematically steeper than those of observations. For the first time, the simulations were used to break down the components causing the scatter between the profiles. We investigated the behaviour of the scatter due to object-by-object variation. We found that the high scatter, approximately 110%, at R < 0.4R^{YSZ}_{500} is due to a genuine difference between the distributions of the gas in the cores of the clusters. The intermediate scale is characterised by the minimum value of the scatter, on the order of 0.56, indicating a region where cluster profiles are the closest to the self-similar regime. Larger scales are characterised by increasing scatter due to the complex spatial distribution of the gas. Also for the first time, we verify that the scatter due to projection effects is smaller than the scatter due to genuine object-by-object variation on all the considered scales.

Key words. intracluster medium - X-rays: galaxies: clusters
Introduction
Galaxy clusters represent the ultimate manifestation of large-scale structure formation. Dark matter comprises 80% of the total mass in a cluster and is the main actor of the gravitational assembly process (Voit 2005; Allen et al. 2011; Borgani & Kravtsov 2011). It influences the prevalent baryonic component, a hot and rarefied plasma that fills the cluster volume: the intracluster medium (ICM). This plasma's properties are affected by the individual assembly history and ongoing merging activities. The study of its observational properties is thus fundamental to study how galaxy clusters form and evolve. The ideal tool for investigating this component is X-ray observations, as the ICM emits in this band via thermal Bremsstrahlung.
The radial profiles of the X-ray surface brightness (SX) of a galaxy cluster and the derived emission measure (EM) are direct probes of the plasma properties. These two quantities can be easily measured in the X-ray band and have played a crucial role in the characterisation of the ICM distribution since the advent of high spatial resolution X-ray observations (e.g. Vikhlinin et al. 1999). Neumann & Arnaud (1999, 2001) compared SX profiles with expectations from theory to test the self-similar evolution scenario and to investigate the relation between the cluster luminosity and its mass and temperature. Arnaud et al. (2001) tested the self-similarity of the EM profiles of 25 clusters in the [0.3-0.8] redshift range, finding that clusters evolve in a self-similar scenario which deviates from the simplest models because of the individual formation history. The SX and EM profiles have been used to investigate the properties of the outer regions of galaxy clusters, both in observations (e.g. Vikhlinin et al. 1999; Neumann 2005; Ettori & Balestra 2009) and in suites of cosmological simulations (see e.g. Roncarelli et al. 2006). These regions are of particular interest because of the plethora of signatures from accretion phenomena, but they are hard to observe because of their faint signal. More recent works based on large catalogues (see e.g. Rossetti et al. 2017 and Andrade-Santos et al. 2017) have determined the effects of the X-ray versus the Sunyaev-Zel'dovich (SZ; Sunyaev & Zeldovich 1980) selection by studying the concentration of the surface brightness profiles in the central regions of galaxy clusters. Finally, the SX radial profile represents the baseline for any study aiming to derive the thermodynamical properties of the ICM, such as the 3D spatial distribution of the gas (Sereno et al. 2012, 2017, 2018). This information can be combined with the radial profile of the temperature, and together they can be used to derive quantities such as the entropy (see, e.g. Voit et al. 2005), pressure, and mass of the galaxy cluster under the assumption of hydrostatic equilibrium (see, e.g. Pratt et al. 2022).
In this paper, we used the exceptional data quality of the 118 galaxy clusters from the Cluster HEritage project with XMM-Newton - Mass Assembly and Thermodynamics at the Endpoint of structure formation (CHEX-MATE 1 ; PIs: S. Ettori and G.W. Pratt). Specifically, we investigate for the first time the statistical properties of the X-ray surface brightness and emission measure radial profiles of a sample of galaxy clusters observed with unprecedented and homogeneous deep XMM-Newton observations. The sample, being based on the Planck catalogue, is SZ selected and thus predicted to be tightly linked to the mass of the cluster (e.g. Planelles et al. 2017 and Le Brun et al. 2018), and hence it should yield a minimally biased sample of the underlying cluster population.

Our analysis is strengthened by the implementation of the results from a mass-redshift equivalent sample from cosmological and hydrodynamical simulations of The Three Hundred collaboration (Cui et al. 2016). We used a new approach to understand the different components of the scatter, considering the population (i.e. cluster-to-cluster) scatter and the single-object scatter inherent to projection effects.

In Sect. 2, we present the CHEX-MATE sample. In Sects. 3 and 4, we describe the methodology used to prepare the data and the derivation of the radial profiles of the CHEX-MATE and numerical datasets, respectively. In Sect. 5, we discuss the shape of the profiles. In Sect. 6, we present the scatter within the CHEX-MATE sample. In Sect. 7, we investigate the origin of the scatter of the EM profiles, and finally in Sect. 8, we discuss our results and present our conclusions.

We adopted a flat Λ-cold dark matter cosmology with Ω_M(0) = 0.3, Ω_Λ = 0.7, H_0 = 70 km s^-1 Mpc^-1, E(z) = (Ω_M(1+z)^3 + Ω_Λ)^{1/2}, and Ω_M(z) = Ω_M(0)(1+z)^3/E(z)^2 throughout. The same cosmology was used for the numerical simulations, except for h = 0.6777. Uncertainties are given at the 68% confidence level (i.e. 1σ). All the fits were performed via χ² minimisation. We characterised the statistical properties of a sample by computing the median and the 68% dispersion around it. This dispersion was computed by ordering the profiles according to their χ² with respect to the median and by considering the profiles at ±34% around it. We use the natural logarithm throughout the work except where we state otherwise.

1 xmm-heritage.oas.inaf.it
The CHEX-MATE sample
Definition
This work builds on the sample defined for the XMM-Newton heritage programme accepted in AO-17. We briefly report the sample definition and selection criteria here that are detailed in CHEX-MATE Collaboration (2021). The scientific objective of this programme is to investigate the ultimate manifestation of structure formation in mass and time by observing and characterising the radial thermodynamical and dynamic properties of a large, minimally biased and S/N-limited sample of galaxy clusters. This objective is achieved by selecting 118 objects from the Planck PSZ2 catalogue (Planck Collaboration XXVII 2016), applying an SNR threshold of 6.5 in the SZ identification, and folding the XMM-Newton visibility criteria.
The key quantity M^{YSZ}_{500}, defined as the mass enclosed within the radius R^{YSZ}_{500} of the cluster within which its average total matter density is 500 times the critical density of the Universe, is measured by the Planck collaboration using the MMF3 SZ detection algorithm detailed in Planck Collaboration XXVII (2015). This algorithm measures the Y_SZ flux associated to each detected cluster, which is used to derive M^{YSZ}_{500} from the M_500-Y_SZ relation calibrated in Arnaud et al. (2010), assuming self-similar evolution. We note that while the clusters' precise mass determination is one of the milestones of the multi-wavelength coverage of the CHEX-MATE programme, in this paper we consider the radii and mass values directly from the Planck catalogue. The impact of this choice will be discussed in Sect. 5.5.

Fig. 1. Distribution of the CHEX-MATE sample in the mass-redshift plane. The masses in the Planck catalogue were derived iteratively from the M_500-Y_SZ relation calibrated using hydrostatic masses from XMM-Newton; they were not corrected for the hydrostatic equilibrium bias. The magenta and green points represent the Tier 1 and Tier 2 clusters of the CHEX-MATE sample, respectively (CHEX-MATE Collaboration 2021). The triangles and squares identify the morphologically relaxed and disturbed clusters, respectively, which were identified according to the classification scheme in Campitiello et al. (2022). The two red crosses identify the clusters excluded from the analysis of this work.
The CHEX-MATE sample is split in two sub-samples according to the cluster redshift. Tier 1 provides a local sample of 61 objects in the [0.05-0.2] redshift range in the northern sky (i.e. DEC > 0), and their M Y SZ 500 span the [2 − 9] × 10 14 M mass range. These objects represent a local anchor for any evolution study. Tier 2 offers a sample of the massive clusters, M Y SZ 500 > 7.25 × 10 14 M in the [0.2-0.6] redshift range. These objects represent the culmination of cluster evolution in the Universe.
The distribution in the mass and redshift plane of the CHEX-MATE sample and its sub-samples are shown in Fig. 1. The exposure times of these observations were optimised to allow the determination of spatially resolved temperature profiles at least up to R 500 with a precision of 15%.
The clusters PSZ2 G028.63+50.15 and PSZ2 G283.91+73.87 were excluded from the analysis presented in this work since their radial analysis could introduce large systematic errors without increasing the statistical quality of the sample. Indeed, the former system presents a complex morphology (see Schellenberger et al. 2022 for a detailed analysis), and it has a background cluster at z = 0.38 within its extended emission. The latter is only ∼ 30 arcmin from M87, and thus its emission is heavily affected by the extended emission of Virgo. The basic properties of the final sample of 116 objects are listed in Table D.1.
Sub-samples
We defined CHEX-MATE sub-samples based on key quantities: mass, redshift, and morphological status. The analysis of the morphology of the CHEX-MATE clusters sample is described in detail in Campitiello et al. (2022). The authors use a combination of morphological parameters (see Rasia et al. 2013 for the definition of these parameters) to classify the clusters as morphologically relaxed, disturbed, or mixed. Following the criteria described in Sect. 8.2 of Campitiello et al. (2022), the authors identified the 15 most relaxed and 25 most disturbed clusters. We adopted their classification in this paper and refer to the former group as morphologically relaxed clusters and the latter group as disturbed clusters.
We defined the sub-samples of nearby and distant clusters considering the 85 and 31 clusters at z ≤ 0.33 and z > 0.33, respectively, the value 0.33 being the mean redshift of the sample. Similarly, we built the sub-samples of low- and high-mass clusters considering the 40 and 76 clusters with M^{YSZ}_{500} ≤ 5 × 10^14 M_⊙ and M^{YSZ}_{500} > 5 × 10^14 M_⊙, respectively.
Data analysis
3.1. Data preparation
XMM-Newton data
The clusters used in this work were observed using the European Photon Imaging Camera (EPIC; Turner et al. 2001 and Strüder et al. 2001). The instrument comprises three CCD arrays, namely MOS1, MOS2, and pn, that simultaneously observe the target. Datasets were reprocessed using the Extended Science Analysis System (ESAS 2 ; Snowden et al. 2008) embedded in SAS version 16.1. The emchain and epchain tools were used to apply the latest calibration files made available in January 2021 and to produce pn out-of-time datasets. Events for which the keyword PATTERN is greater than four for the MOS1 and MOS2 cameras and greater than 12 for the pn camera were filtered out of the analysis. The CCDs showing an anomalous count rate in the MOS1 and MOS2 cameras were also removed from the analysis. Time intervals affected by flares were removed using the tools mos-filter and pn-filter, by extracting the light curves in the [2.5-8.5] keV band and removing the time intervals where the count rate exceeded 3σ times the mean count rate. Point sources were filtered from the analysis following the scheme detailed in Section 2.2.3 of Ghirardini et al. (2019), which we summarise as follows. Point sources were identified by running the SAS wavelet detection tool ewavdetect on [0.3-2] keV and [2-7] keV images obtained from the combination of the three EPIC cameras, using wavelet scales in the range of 1-32 pixels and an S/N threshold of five, with each bin width being ∼2 arcsec. The PSF and sensitivity of XMM-Newton depend on the off-axis angle. For this reason, the fraction of unresolved point sources forming the Cosmic X-ray Background (CXB; Giacconi et al. 2001) is spatially dependent. We used a threshold in the LogN-LogS distribution of detected sources, below which we deliberately left the point sources in the images, to ensure a constant CXB flux across the detector. Catalogues produced from the two energy band images were then merged. At the end of the procedure, we inspected the identified point sources by eye to check for false detections in CCD gaps. We also identified by eye extended bright sources other than the cluster itself and removed them from the analysis. We identified 13 clusters affected by at least one sub-structure within R^{YSZ}_{200}, which were masked by applying circular masks of ∼3 arcmin radius on average.

2 cosmos.esa.int/web/xmm-newton
Image preparation
We undertook the following procedures to generate the images from which we derived the profiles. Firstly, we extracted the photon count images in the [0.7-1.2] keV band for each camera; this energy band maximises the source-to-background ratio (Ettori et al. 2010). An exposure map for each camera, folding in the vignetting effect, was produced using the ESAS tool eexpmap.
The background affecting the X-ray observations was due to a sky and instrumental component. The former was from the local Galactic emission and the CXB (Kuntz & Snowden 2000), and its extraction is described in detail in Sect. 3.3. The latter was due to the interaction of high energy particles with the detector. We followed the strategy described in Ghirardini et al. (2019) to remove this component by producing background images that accounted for the particle background and the residual soft protons.
The images, exposure maps, and background maps of the three cameras were merged to maximise the statistics. When merging the exposure maps, the pn exposure map was multiplied by a factor accounting for the ratio of the MOS to pn effective areas in the [0.7-1.2] keV band. This factor was computed using XSPEC, assuming a mean temperature and using the hydrogen column absorption value, N_H, reported in Table D.1. Henceforth, we refer to the combined images of the three cameras and the background maps simply as the observation images and the particle background datasets, respectively.
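As a schematic illustration (not the CHEX-MATE pipeline itself), the camera combination can be written as a weighted sum of exposure maps, where the pn-to-MOS conversion factor is assumed to have been precomputed with XSPEC as described above:

```python
import numpy as np

def merge_exposure(exp_mos1, exp_mos2, exp_pn, f_pn):
    """Combine the vignetted exposure maps of the three EPIC cameras.
    f_pn is the effective-area conversion factor between pn and MOS in
    the [0.7-1.2] keV band (assumed computed with XSPEC for the cluster
    temperature and N_H); the merged map is in equivalent MOS units."""
    return np.asarray(exp_mos1) + np.asarray(exp_mos2) + f_pn * np.asarray(exp_pn)
```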
Global quantities
Average temperature
We estimated the average temperature, T avg , of each cluster by applying the definition of the temperature of a singular isothermal sphere with mass M 500 as described in Appendix A of Arnaud et al. (2010):
$T_{\rm avg} = 0.8 \times T_{500} = 0.8 \times \dfrac{\mu m_{\rm p} G M^{Y_{\rm SZ}}_{500}}{2 R^{Y_{\rm SZ}}_{500}}$ ,   (1)
where µ = 0.59 is the mean molecular weight, m p is the proton mass, G is the gravitational constant, and the 0.8 factor represents the average value of the universal temperature profile derived by Ghirardini et al. (2019) with respect to T 500 . These temperatures are reported in Table D.1.
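A minimal sketch of Eq. (1) in code (a helper of ours, not part of the published analysis); masses are in solar units, radii in Mpc, and the constants are standard SI values:

```python
# Physical constants (SI)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_P = 1.6726e-27     # proton mass, kg
KEV = 1.6022e-16     # joules per keV
MSUN = 1.989e30      # solar mass, kg
MPC = 3.0857e22      # megaparsec, m

def t_avg_keV(m500_msun, r500_mpc, mu=0.59):
    """Average temperature from Eq. (1): T_avg = 0.8 * T_500, with
    T_500 the temperature of a singular isothermal sphere of mass M500."""
    kT500 = mu * M_P * G * (m500_msun * MSUN) / (2.0 * r500_mpc * MPC)  # joules
    return 0.8 * kT500 / KEV  # keV

# Example: a 5e14 Msun cluster with R500 ~ 1.2 Mpc gives ~4.4 keV
print(t_avg_keV(5e14, 1.2))
```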
Cluster coordinates
We produced point-source-free emission images by filling the holes from the masking procedure with the local mean emission estimated in a ring around each excluded region, using the tool dmfilth. We then performed the vignetting correction by dividing these images by the exposure map. We used these images to determine the peak by identifying the maximum of the emission after convolving the map with a Gaussian filter of ∼10 arcsec width. The centroid of the cluster was determined by performing a weighted mean of the pixel positions, using the counts as weights, within a circular region centred on the peak and with radius R^{YSZ}_{500}. This was done to avoid artefact contamination near the detector edges. The coordinates obtained are reported in Table D.1.

Fig. 2. Top-left panel: Median of the EM_S profiles centred on the X-ray peak, scaled for self-similar evolution using Eq. 4. The dispersion is shown with the black solid line and the grey envelope. The magenta and green solid lines represent the medians of the low-mass, M^{YSZ}_{500} ≤ 5 × 10^14 M_⊙, and high-mass, M^{YSZ}_{500} > 5 × 10^14 M_⊙, sub-samples, respectively. The red and blue dotted lines represent the medians of the low-, z ≤ 0.33, and high-redshift, z > 0.33, sub-samples, respectively. Bottom-left panel: Ratio of the sub-sample medians with respect to the full CHEX-MATE sample median. The dotted horizontal lines represent the plus and minus 25% levels. Middle panels: Comparison of the statistical properties of the EM profiles, scaled to also account for mass and redshift evolution, divided into mass-selected sub-samples. The top panel shows the same as the top-left panel except that the medians and the dispersion were computed using profiles scaled with Eq. 5. The bottom panel shows the ratio between the medians of the sub-samples with respect to the median of the full sample. The dotted horizontal lines in the lower-middle panel represent the plus and minus 5% levels. Right panels: Comparison of the statistical properties of the scaled EM profiles divided into redshift-selected sub-samples. The top panel shows the same as the top-left panel except that the medians and the dispersion were computed using the profiles scaled with Eq. 5. The bottom panel shows the ratio between the medians of the sub-samples with respect to the median of the full CHEX-MATE sample.
Radial profiles
Surface brightness profiles
Azimuthal mean profiles. The surface brightness radial profiles, S(Θ), were extracted using the following technique. We defined concentric annuli centred on the X-ray peak and on the centroid. The minimum annulus width was set to 4 arcsec and was increased using a logarithmic factor. In each annulus, we computed the sum of the photons from the observation image as well as from the particle background datasets. The particle-background-subtracted profile was divided by the exposure, folding in the vignetting, in the same annulus region. We estimated the sky background component as the average count rate between R_200 = 1.49R^{YSZ}_{500} and 13.5 arcmin and subtracted it from the profile. If R_200 was outside the field of view, we estimated the sky background component using the XMM-Newton-ROSAT background relation described in Appendix B. The sky-background-subtracted profiles were re-binned to have at least nine counts per bin after background subtraction. We corrected the profiles for the PSF using the model developed by Ghizzardi (2001). We refer hereafter to these profiles as the mean SX profiles.

Azimuthal median profiles. We also computed the surface brightness radial profiles considering the median in each annulus, following the procedure detailed in Section 3 of Eckert et al. (2015). This procedure has been introduced to limit the bias caused by the emission of sub-clumps and sub-structures too faint to be identified and masked (e.g. Roncarelli et al. 2013; Zhuravleva et al. 2013). Briefly, we applied the same binning scheme and point source mask to the particle background dataset to perform the background subtraction. Employing the procedure of Cappellari & Copin (2003) and Diehl & Statler (2006), we first produced Voronoi-binned maps ensuring 20 counts per bin on average, in order to apply the Gaussian approximation. We then extracted the surface brightness median profile with the same annular binning as the mean profile, considering in each radial bin the median count rate of the Voronoi cells whose centres lie within the annulus. The sky background was estimated with the same approach used for the mean profile, except that we estimated the median count rate. Finally, the sky-background-subtracted profiles were re-binned using the same 3σ binning as the mean profiles. The four resulting types of surface brightness profiles are shown in Fig. D.1. We were able to measure the profiles beyond R^{YSZ}_{500} for 107 of the 116 (i.e. ∼92%) CHEX-MATE objects.

Notes to Table 1: We used the EM median profiles centred on the X-ray peak. We also report the number of profiles that have been used to compute the relative error in the third column.
We report in Table 1 the median relative errors at fixed radii to illustrate the excellent data quality. From now on, we refer to these profiles as the median SX profiles, and throughout the paper, we use these profiles centred on the X-ray peak unless stated otherwise.
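The annular extraction described above can be sketched as follows; this simplified version operates directly on image pixels (the published median profiles use Voronoi cells instead), and all array names are placeholders:

```python
import numpy as np

def azimuthal_profiles(rate_img, exp_img, centre, rmin_pix=8, step=1.07):
    """Mean and median surface-brightness profiles in logarithmically
    growing annuli centred on `centre` (x, y), from a background-subtracted
    count-rate image. Masked pixels should be set to NaN beforehand."""
    ny, nx = rate_img.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(x - centre[0], y - centre[1])
    edges = [float(rmin_pix)]
    while edges[-1] < r.max():
        edges.append(edges[-1] * step)       # logarithmically growing bins
    radii, mean_p, med_p = [], [], []
    for rin, rout in zip(edges[:-1], edges[1:]):
        sel = (r >= rin) & (r < rout) & (exp_img > 0) & np.isfinite(rate_img)
        if sel.sum() < 5:
            continue                          # skip nearly empty annuli
        radii.append(0.5 * (rin + rout))
        mean_p.append(np.nanmean(rate_img[sel]))
        med_p.append(np.nanmedian(rate_img[sel]))
    return np.array(radii), np.array(mean_p), np.array(med_p)
```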
Emission measure profiles
We computed the EM radial profiles using Equation 1 of Arnaud et al. (2002):
$EM(r) = S(\Theta)\, \dfrac{4\pi (1+z)^4}{\epsilon(T, z)}$ ,   (2)
where Θ = r/d_A(z), with d_A(z) the angular diameter distance, and ε(T, z) is the emissivity integrated in the E_1 = 0.7 keV to E_2 = 1.2 keV band, defined as

$\epsilon(T, z) = \int_{E_1}^{E_2} \Sigma(E)\, e^{-\sigma(E) N_{\rm H}}\, f_T\big((1+z)E\big)\, (1+z)^2\, {\rm d}E$ ,   (3)

where Σ(E) is the detector effective area at energy E, σ(E) is the absorption cross section, N_H is the hydrogen column density along the line of sight, and f_T((1+z)E) is the emissivity at energy (1+z)E for a plasma at temperature T. The ε factor was computed using an absorbed Astrophysical Plasma Emission Code (APEC) model within the XSPEC environment. The absorption was calculated using the phabs model, folding in the hydrogen absorption column reported in Table D.1. The dependency of ε on temperature and abundance in the band we used to extract the profiles is weak (e.g. Lovisari & Ettori 2021). Therefore, for APEC we used the average temperature, kT_avg, of the cluster within R^{YSZ}_{500} and the abundance fixed to 0.25 (Ghizzardi et al. 2021) with respect to the solar abundance table of Anders & Grevesse (1989). Finally, we used the redshift values reported in Table D.1. We obtained EM azimuthal mean and azimuthal median profiles centred on the X-ray peak and on the centroid by converting the respective surface brightness profiles. The EM profiles were first scaled considering only the self-similar evolution scenario, EM_S, as in Arnaud et al. (2002):

$EM_{\rm S}(x, T, z) = E(z)^{-3} \times \left(\dfrac{kT_{\rm avg}}{10\ {\rm keV}}\right)^{-1/2} \times EM(x)$ ,   (4)

where x = r/R^{YSZ}_{500} and T_avg is the average temperature of the cluster, as in Equation 1. The left panel of Fig. 2 shows the median of the EM_S profiles centred on the X-ray peak as well as its 68% dispersion. In the same plot, we also show the medians of the sub-samples introduced in Sect. 2.2.
Their ratio with respect to the CHEX-MATE median, shown in the bottom panel, demonstrates that the employed re-scaling is not optimal, since for all sub-samples there are variations with respect to the median that range between 10% and 50% at all scales. We therefore tested another re-scaling following Pratt et al. (2022) and Ettori et al. (2022), who point out that the mass dependency is not properly represented by the self-similar scenario, with a small correction also with respect to the redshift evolution. The final scaling that we considered is given by the following:
$EM(r, T, z) = E(z)^{-3.17} \times \left(\dfrac{kT_{\rm avg}}{10\ {\rm keV}}\right)^{-1.38} \times EM(r)$ .   (5)
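A minimal sketch of the two scalings (our own helper functions; the exponents are those of Eqs. 4 and 5 and the cosmology is the one adopted in this work):

```python
import numpy as np

def e_z(z, om=0.3, ol=0.7):
    """Dimensionless Hubble parameter E(z) for the adopted flat LCDM."""
    return np.sqrt(om * (1.0 + z) ** 3 + ol)

def scale_em(em, z, kt_avg, alpha_z=-3.17, alpha_t=-1.38):
    """Empirical scaling of Eq. (5); set alpha_z=-3 and alpha_t=-0.5 to
    recover the purely self-similar scaling of Eq. (4). kt_avg in keV."""
    return e_z(z) ** alpha_z * (kt_avg / 10.0) ** alpha_t * np.asarray(em)
```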
The effect of this scaling on the mass and redshift residual dependency is shown in the middle and right panels of Fig. 2, respectively. The medians of the sub-samples show little variation with respect to the whole sample, on the order of a few per cent on average. We show the individual scaled median radial profiles centred on the X-ray peak in Fig. 3 together with the 68% dispersion. The discussion of the difference between the relaxed and disturbed sub-samples is detailed in Section 5.

Fig. 3. Scaled median EM profiles centred on the X-ray peak. The blue and red solid lines indicate morphologically relaxed and disturbed clusters, respectively. The profiles extracted from clusters with mixed morphology are shown with black solid lines. The selection was based on the classification made by Campitiello et al. (2022) in their Section 8.2. The grey-shaded envelope represents the dispersion at the 68% level.
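For completeness, the emissivity factor of Eq. (3) can be written as a simple quadrature; the spectral ingredients below (effective area, absorption cross section, APEC emissivity) are placeholders for what XSPEC provides in the actual analysis:

```python
import numpy as np

def emissivity_factor(energies, eff_area, sigma_abs, f_T, z, n_h):
    """Numerical version of Eq. (3). `energies` is a grid in keV over
    [0.7, 1.2]; `eff_area` and `sigma_abs` are arrays evaluated on that
    grid; `f_T` is a callable returning the plasma emissivity at a
    rest-frame energy. All inputs stand in for the APEC/phabs model
    components obtained from XSPEC."""
    integrand = (eff_area * np.exp(-sigma_abs * n_h)
                 * f_T((1.0 + z) * energies) * (1.0 + z) ** 2)
    return np.trapz(integrand, energies)
```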
Cosmological simulations data
The main scientific goal of this paper is to investigate the origin of the diversity of the EM profiles. The main source of the scatter between the profiles is expected to be due to a genuine different spatial distribution of the ICM related to the individual formation history of the cluster. The other sources that impact the observed scatter are related to how we observe clusters. There are systematic errors associated with X-ray analysis and observing clusters in projection. This latter point is of crucial importance when computing the scatter within a cluster sample. For instance, a system formed by two merging halos of similar mass will appear as a merging system if the projection is perpendicular to the merging axis but will otherwise appear regular if the projection is parallel.
In this work, we employed cosmological simulations from The Three Hundred collaboration (Cui et al. 2018) to evaluate this effect. Specifically, we study the GADGET-X version of The Three Hundred suite. This is composed of re-simulations of the 324 most massive clusters identified at z = 0 within the dark matter-only MULTIDARK simulation (Klypin et al. 2016), and thus it constitutes an ideal sample of massive clusters from which to extract a CHEX-MATE simulated counterpart. The cosmology assumed in the MULTIDARK simulation is that of Planck Collaboration XIII (2016) and is similar to what is assumed in this paper. The adopted baryon physics include metal-dependent radiative gas cooling, star formation, stellar feedback, supermassive black hole growth, and active galactic nuclei (AGN) feedback (Rasia et al. 2015). To cover the observational redshift range, the simulated sample was extracted from six different snapshots corresponding to z = 0.067, 0.141, 0.222, 0.333, 0.456, and 0.592. For each observed object, in addition to the redshift, we matched the cluster mass M_500, imposed to be close to M^{YSZ}_{500}/0.8. With this condition, we followed the indication of the Planck collaboration (see Planck Collaboration XX 2014), which assumed a baseline mass bias of 20% (1 − b = 0.8). We also checked whether the selected simulated clusters have a strikingly inconsistent morphological appearance, such as a double cluster associated to a relaxed system. In such cases, we considered the second-closest mass object. In the final sample, the standard deviation of M_{500,sim}/(M^{YSZ}_{500}/0.8) is equal to 0.037. Due to the distribution of the CHEX-MATE sample in the mass-redshift plane, we allowed a few Tier 2 clusters to be matched to the same simulated clusters taken from different cosmic times. Even with this stratagem, which will not impact the results of this investigation, we observed that a very massive cluster at z = 0.4 remained unmatched. The final simulated sample thus includes 115 objects.

Fig. 4 (caption, partial): [...] respectively. The gap between 6 − 7 × 10^14 M^{YSZ}_{500} is an artefact of the CHEX-MATE sample being divided into two tiers, shown in Fig. 1. The shift between the two distributions is due to the fact that the CHEX-MATE masses are assumed to be a factor of 0.8 lower than the true masses due to the hydrostatic bias. For more details, refer to Section 4.
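The matching procedure can be sketched as follows (a simplified version under the stated 1 − b = 0.8 assumption; input structures and names are illustrative):

```python
import numpy as np

def match_clusters(obs, snapshots, bias=0.8):
    """Pair each observed cluster with the simulated halo whose mass is
    closest to M500_YSZ / bias at the nearest snapshot. `obs` is a list
    of (z_obs, m500_ysz) tuples; `snapshots` maps snapshot redshift to an
    array of simulated M500 values (same mass units as the observations)."""
    snap_z = np.array(sorted(snapshots))
    matches = []
    for z_obs, m_ysz in obs:
        z_snap = snap_z[np.argmin(np.abs(snap_z - z_obs))]  # nearest snapshot
        m_sim = np.asarray(snapshots[float(z_snap)])
        target = m_ysz / bias               # undo the assumed 20% mass bias
        idx = int(np.argmin(np.abs(m_sim - target)))
        matches.append((float(z_snap), idx, m_sim[idx] / target))
    return matches
```

The last element of each tuple tracks the residual mass mismatch, whose standard deviation in the actual sample is 0.037 as quoted above.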
The simulation sample mass distribution is shown in Fig. 4. For each simulated cluster, we generated 40 EM maps centred on the cluster total density peak and integrating the emission along different lines of sight for a distance equal to 6R 500 using the Smac code (Dolag et al. 2005;Ansarifard et al. 2020). Henceforth, we refer to these maps as "sim EM" and they are in units of [Mpc cm −6 ].
X-ray mock images of simulated clusters
We produced mock X-ray observations by applying observational effects to The Three Hundred maps. Firstly, we transformed the EM maps into surface brightness maps by inverting Equation 2. The emissivity factor ε(T, z) was computed using the same procedure as in Section 3.3.2. The absorption was fixed to the average value of the CHEX-MATE sample, N_H = 2 × 10^20 cm^-2, and the average temperature was computed by using the M_500 of the cluster and applying Equation 1. The instrumental effects were accounted for by folding in the pn instrumental response files computed at the aimpoint. We produced the count rate maps by multiplying the surface brightness maps by the median exposure time of the CHEX-MATE programme, 4 × 10^4 s, and by the size of the pixel in arcmin². We added to these maps a spatially non-uniform sky background whose count rate is cr_sky = 5.165 × 10^-3 ct s^-1 arcmin^-2, as measured by pn in the [0.5-2] keV band. We then included the XMM-Newton vignetting as derived from the calibration files, and we simulated the PSF effect by convolving the map with a Gaussian function with a width of ten arcsec. Finally, we drew a Poisson realisation of the expected counts in each pixel and produced a mock X-ray observation. We divided the field of view into square tiles with sides of 2.6 arcmin, within which we introduced 3% variations of the mean sky background count rate to mimic the mean variations of the sky over the field of view of XMM-Newton. We multiplied these maps by 1.07 and 0.93 to create over- and underestimated background maps, respectively, which account for the systematic error related to the background estimation. We randomly chose the over- or underestimated map and subtracted it from the mock X-ray observation. After the subtraction, we corrected for the vignetting by using a function obtained through a fit to the calibration values, to which we randomly added a 1 ± 0.05 factor to mimic our imprecision in the calibration of the response as a function of the off-axis angle.
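A condensed sketch of this mock pipeline is given below; vignetting and the 2.6-arcmin background tiles are omitted for brevity, and the pixel solid angle and PSF width are illustrative numbers rather than the values used for the actual maps:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mock_xray(em_map, emiss, z, t_exp=4.0e4, pix_arcmin2=6.9e-4,
              cr_sky=5.165e-3, psf_sigma_pix=2.0, bkg_sys=0.07, rng=None):
    """Schematic mock observation: EM -> surface brightness (inverse of
    Eq. 2) -> expected counts plus sky background -> PSF blur -> Poisson
    draw -> subtraction of a +/-7% mis-estimated background model."""
    rng = np.random.default_rng() if rng is None else rng
    sx = em_map * emiss / (4.0 * np.pi * (1.0 + z) ** 4)
    expected = (sx + cr_sky) * t_exp * pix_arcmin2
    expected = gaussian_filter(expected, psf_sigma_pix)   # PSF approximation
    observed = rng.poisson(expected).astype(float)
    bkg_model = cr_sky * t_exp * pix_arcmin2
    bkg_model *= 1.0 + bkg_sys * rng.choice([-1.0, 1.0])  # over/under-estimate
    return observed - bkg_model
```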
The typical effects introduced by the procedures described above are shown in Fig. 5. The EM map produced using the simulation data is shown in the left panel where there is a large sub-structure in the west sector and a small one in the south-west sector within R 500 . The right panel shows the mock X-ray image where the degradation effects are evident. The spatial features within the central regions were lost due to the PSF. Despite the resolution loss, the ellipsoidal spatial distribution of the ICM is clearly visible, and the presence of features such as the small substructure in the south-west are still visible. The emission outside R 500 is dominated by the background, and the small filament emission in the south-west was too faint to remain visible. The large sub-structure is still evident, but the bridge connecting it to the main halo has become muddled into the background.
Simulation emission measure profiles
We extracted the EM profiles from the The Three Hundred maps by computing the median EM of all the pixels within concentric annuli, the bin width being 2 arcsec. These annuli are centred on the map centre (i.e. the peak of the halo total density). We obtained the EM profiles by applying this process to our sample of 115 simulated clusters and for each of the 40 projections, and we scaled them according to Equation 5. From hereon, we refer to these profiles as the "Sim" profiles. Similarly, we extracted the X-ray mock profiles, henceforth the "Simx" profiles, from the synthetic X-ray maps. These are shown along one randomly selected projection with a grey solid line in Fig. 6. We show the emission measure profile projected along only one line of sight because the results along the other projections are similar.
The comparison of the sample medians of the Sim and Simx profiles is shown in Fig. 6. The two sample medians are in excellent agreement up to ∼0.7R_500. Beyond that radius, the Simx median is flatter. This is an effect of the PSF, which redistributes on larger scales the contribution of sub-halos and local inhomogeneities. However, the fact that the medians are similar after the application of the X-ray effects is likely due to the combination of the good statistics of the CHEX-MATE programme and the procedure used to derive each cluster EM profile, which considers the medians of all pixels. The former ensures that the extraction of profiles is not affected by large statistical scatter, at least up to R_500, and the latter tends to hamper effects related to the presence of sub-structures.

Fig. 6. Top panel: Comparison between the medians of the Sim and Simx EM profiles extracted from random projections, shown with black solid and dotted lines, respectively. The grey solid lines represent the Sim profiles. Bottom panel: Ratio of the Sim median over the Simx median. The black solid and dotted lines correspond to identity and the ±2% levels, respectively.
There are key differences between the analysis of the Simx and CHEX-MATE profiles despite our underlying strategy of applying the same procedures. For example, the centre used in the simulations introduces a third option with respect to the X-ray centroid and peak. Furthermore, in simulated clusters, we computed the azimuthal median on pixels instead of on the Voronoi cells. We expect the centre offsets to affect the profiles at small scales, R < 0.1R_500, as shown in the left panel of Fig. 7. Finally, the X-ray analysis masks the emission associated to sub-halos, while this is not possible in simulations, as the development of an automated procedure to detect extended sources in the large number of images of our simulations, 4600 = 115 × 40, was beyond the scope of this paper. The impact of this difference on the scatter is discussed in Appendix A.
The profile shape
In this section, we study the shape of the emission measure profiles by checking the impact of the centre definition (as in Sect. 3.3.1) and of the radial profile procedure (as described in Sect. 3.3). Subsequently, we compare the sample median profiles of the relaxed and disturbed sub-samples and compare the CHEX-MATE median profile with the literature and the The Three Hundred simulations.
The impact of the profile centre
The impact of the choice of the centre for the profile extraction is crucial for any study that builds on the shape of profiles, such as the determination of the hydrostatic mass profile (see Pratt et al. 2019 for a recent review). The heterogeneity of morphology and the exquisite data quality of the CHEX-MATE sample offer a unique opportunity to assess how the choice of the centre affects the overall shape of the profile.
We show in the top part of the left panel of Fig. 7 the ratio between the medians of the EM azimuthal median profiles centred on the peak and those centred on the centroid. The colours of the lines refer to the entire sample and to the morphologically relaxed and disturbed sub-samples, respectively. The bottom panel shows a similar ratio where the azimuthal mean profiles are considered. From the figure, we noticed that the results obtained using mean or median profiles are similar, with the exception of the outskirts of the disturbed systems, which will be discussed below. On average, the relaxed sub-sample shows little deviation from unity at all radial scales, as would be expected since for these systems the X-ray peak likely coincides with the centroid. The disturbed systems show larger deviations: in the core, the profiles centred on the X-ray peak are steeper, and differences of about 5% appear in the [0.15-0.5]R^{YSZ}_{500} region, where the centroid profiles have greater emission. These variations are not reflected in the entire CHEX-MATE sample, despite the fact that it includes approximately 87% of the disturbed and morphologically mixed systems. Indeed, in this case, all deviations are within 2%, implying that the choice of referring to the X-ray peak does not influence the shape of the sample median profile.

Fig. 7. Ratio between the medians of the EM profiles obtained using the X-ray peak or the centroid as centre and using the azimuthal average or median. Left panels: Ratio between the medians of the profiles centred on the peak and on the centroid. The top and bottom panels show the ratio computed using the azimuthal median and azimuthal mean EM profiles, respectively. The black solid lines represent the median of the ratio considering the whole sample. The blue and red solid lines show the ratio considering only the morphologically relaxed and disturbed clusters, respectively. The grey solid lines indicate the identity line, and the dotted lines represent the plus and minus 5% levels. Right panels: Same as the left panels, except that we show the ratio between the medians of the azimuthal median and mean EM profiles. The top and bottom panels show the ratio computed using the profiles extracted with the X-ray peak and the centroid as centre, respectively. The legend of the solid and dotted lines is the same as in the left panels, except that we show only the minus 5% level.
Mean versus median
We proceeded by testing the radial profile procedure (Sect. 3.3.1) next, comparing the azimuthal median and the azimuthal mean profiles (Fig. 7) and centring both on either the X-ray peak (top panel) or, for completeness, on the centroid (bottom panel). As expected from the previous results, there are little differences between the two panels. Overall, we noticed that the azimuthal mean profiles are greater than the azimuthal medians, implying that greater density fluctuations are present at all scales and that they play a larger role in the outskirts where a larger number of undetected clumps might be present. The differences between the two profiles are always within 5% for the relaxed systems. The deviations are more important for the disturbed objects, especially when centred on the X-ray peak. This last remark implies that the regions outside ∼ 0.4R Y SZ 500 of the CHEX-MATE disturbed objects not only have greater density fluctuations but are also spherically asymmetric in their gas distribution; otherwise, the same mean-median deviations would be detected when considering the centroid as centre. The global effect on the CHEX-MATE sample is that the median profiles are about 7% lower than the mean profiles at R > 0.3 − 0.4R Y SZ 500 . We noticed that similar results were obtained by Eckert et al. (2015) (cfr. Fig.6). This test confirmed that our choice of using the azimuthal medians for each cluster profile is more robust for our goal of describing the overall CHEX-MATE radial profiles.
The median CHEX-MATE profiles and the comparison between relaxed and disturbed systems
In Fig. 2, we show the behaviour of the CHEX-MATE EM median as well as the medians of the mass and redshift sub-samples.
In the left panel of Fig. 8, we compare the medians of the relaxed and disturbed sub-samples, whose individual profiles are shown in Fig. 3. The former is approximately two times greater than the median of the whole sample at R ∼ 0.1R^{YSZ}_{500} and is not within the dispersion. The morphologically disturbed clusters are on average within the dispersion, being 70% smaller than the whole sample median at R < 0.2R^{YSZ}_{500}. The morphologically relaxed profiles become steeper than the disturbed profiles at R > 0.4R^{YSZ}_{500}. A similar behaviour has been observed in several works when comparing cool-core systems with non-cool-core systems.
Combining these results with those of Fig. 2, we concluded that for CHEX-MATE, Eq. 5 provides a reasonable mass normalisation and captures the evolution of the cluster population well. The large sample dispersion seen in the cluster cores is linked to the variety of morphologies present in the sample. The medians of the relaxed and disturbed sub-samples indeed differ by more than a factor of ten at R < 0.1R^{YSZ}_{500}. At around R^{YSZ}_{500}, we also noticed some different behaviours in our sub-samples: the most massive objects are approximately 25% larger than the least massive ones, and the morphologically disturbed clusters are 50% larger than the relaxed ones (see also Sayers et al. 2022).
Comparison with other samples
In this section, the statistical properties of the CHEX-MATE profiles are compared to SZ- and X-ray-selected samples at z ≲ 0.3 with similar mass ranges in order to investigate the impact of different selection effects. The SZ-selected sample is the XMM-Newton Cluster Outskirts Project (X-COP; Eckert et al. 2017) sample, which contains 12 SZ-selected clusters in the [0.05-0.1] redshift range and has a total mass range similar to CHEX-MATE but with a greater median mass (∼6 × 10^14 M_⊙). The individual EM profiles for X-COP were computed using the same procedure as described in this work. The profiles were scaled by applying Equation 5, with T_avg given by Equation 1, using the masses presented in Table 1 of Ettori et al. (2019). We also compare the CHEX-MATE profile properties to the X-ray-selected Representative XMM-Newton Cluster Structure Survey (REXCESS; Böhringer et al. 2007) sample, which is composed of 31 X-ray-selected clusters in the [0.05-0.3] redshift range, with a mass range spanning [1-8] × 10^14 M_⊙ and a median mass of 2.7 × 10^14 M_⊙. The REXCESS EM profiles were obtained from the surface brightness profiles presented in Appendix A of Croston et al. (2008). These profiles were computed using the azimuthal average in each annulus. For this reason, we compare the REXCESS profiles with the mean CHEX-MATE profiles. The REXCESS profiles were scaled using Equation 5 with T_avg from Pratt et al. (2009).
The median and its dispersion for each of these samples were computed using the procedure described above, and their comparison with CHEX-MATE is shown in the right panel of Fig. 8. Both sample medians present an overall good agreement that is within 10% at R > 0.2R Y SZ 500 . The X-COP median is 25% more peaked in the central regions at R < 0.2R Y SZ 500 with respect to both CHEX-MATE and REXCESS. Nevertheless, the X-COP median is well within the dispersion of the CHEX-MATE sample and variations of such order are expected in the core where the EM values are comprised in the wide range ∼ [6, 30] × 10 −5 cm −6 Mpc.
Comparison with simulations
The 115 Simx EM profiles extracted from random projections for each cluster are shown together with their median value in Fig. 9. The median of the CHEX-MATE sample is also shown. Overall, the CHEX-MATE median is flatter than the medians of Simx in the [0.06-1]R Y SZ 500 radial range, and specifically it is ∼ 50% smaller in the centre and ∼ 50% larger in the outskirts. Part of the difference in the external regions might be caused by the re-scaling of the observational sample. Indeed, each CHEX-MATE profile has been scaled using the R Y SZ 500 derived from M Y SZ 500 , which is expected to be biased low by 20%. Factoring in this aspect, a more proper re-scaling should be done with respect to R Y SZ 500 /(0.8 1/3 ). The agreement between the CHEX-MATE data and the simulations increases at R > 0.5R Y SZ 500 , with relative variations of about 40% in the [0.2-1] R Y SZ 500 radial range. These considerations do not have any repercussion on the central regions, which remain larger in the simulated profiles, confirming the results found in Campitiello et al. (2022) and Darragh-Ford et al. (2023).
Measuring the slopes
We measured the slopes of the CHEX-MATE EM profiles adopting the technique described in Section 3.1 of Ghirardini et al. (2019). Briefly, we considered four radial bins in the [0.2-1]×R Y SZ 500 radial range and with widths equal to 0.2R Y SZ 500 . We excluded the innermost bin [0.-0.2]R Y SZ 500 because of the very high dispersion of the profiles within this region. We measured the slope, α, and normalisation, A, of each profile by performing the fit within each radial bin using the following expression:
$Q(x) = A\, x^{\alpha}\, e^{\pm\sigma_{\rm int}}$ ,   (6)
where x = R/R Y SZ 500 and σ int is the intrinsic scatter. The error on each parameter was estimated via a Monte Carlo procedure, producing 100 realisations of each profile. The left panel of Fig. 10 shows the power law computed using the median of the α and A within each radial bin.
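A sketch of the binned fit with Monte Carlo error propagation is shown below (a plain log-log least-squares stand-in; the intrinsic scatter term of Eq. 6 is not modelled here, so this only illustrates the slope measurement per radial bin):

```python
import numpy as np

def fit_powerlaw_mc(x, q, q_err, n_mc=100, rng=None):
    """Fit Q(x) = A * x**alpha in one radial bin and propagate the
    measurement errors with a Monte Carlo over profile realisations."""
    rng = np.random.default_rng() if rng is None else rng
    alphas, norms = [], []
    for _ in range(n_mc):
        q_real = rng.normal(q, q_err)          # one realisation of the profile
        good = q_real > 0                      # keep points valid in log space
        slope, intercept = np.polyfit(np.log(x[good]), np.log(q_real[good]), 1)
        alphas.append(slope)
        norms.append(np.exp(intercept))
    return (np.median(alphas), np.std(alphas),
            np.median(norms), np.std(norms))
```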
The fit of the [0.2-0.4]R Y SZ 500 bin revealed that there is a striking difference between the morphologically relaxed and disturbed objects. This result is notable because the considered region is far from the cooling region at ∼ 0.1 − 0.15R Y SZ 500 . The median power law index of the morphologically relaxed object profiles is α rel = 2.57 ± 0.15 and is not consistent with the morphologically disturbed one, which is α dis = 1.37±0.2 at more than the 3σ level. That is, the shape of the most disturbed and relaxed objects differ at least up to 0.4R Y SZ 500 . However, the fitted power law is within the dispersion of the full sample, whose median index is α = 2.02 ± 0.36. The median values in the [0.4 − 0.6]R Y SZ 500 region are α rel = 3±0.22 and α dis = 2.3±0.2 for the morphologically relaxed and disturbed objects, respectively. The indexes are consistent at the 2σ level, implying that the profiles are still affected by the morphology in the centre. The overall scenario changes at R > 0.6R Y SZ 500 . The power law index of the morphologically relaxed and disturbed objects are consistent with the median obtained from fitting the whole sample. Ettori & Balestra (2009) found that the average slope of a sample of 11 clusters at 0.4R 200 (∼ 0.6R Y SZ 500 ) and 0.7R 200 (∼ R Y SZ 500 ) is 3.15 ± 0.46 and 3.86 ± 0.7, respectively. These values are consistent within 1σ with our measurements.
We show in the right panel of Fig. 10 the comparison between the median α computed from the CHEX-MATE sample in each radial bin with the same quantity obtained using REXCESS and X-COP. There is an excellent agreement in all the considered radial bins. Interestingly, there is also a good agreement in the shape between a sample selected in X-ray (REXCESS) and SZ. One could expect to see more differences in the central parts, as X-ray selection should favour peaked clusters.
The comparison with the median α obtained using the Simx profiles is also shown in the right panel of Fig. 10. The Simx median α is systematically greater than the median of the observed sample. As discussed in Sect. 5.5, the bias introduced by using M^{YSZ}_{500} might play a role when comparing CHEX-MATE to Simx and partly contributes to this systematic difference. However, we stress the fact that the slopes are consistent within 1σ in the four radial bins. Campitiello et al. (2022) find similar results when comparing the concentration of surface brightness profiles within fixed apertures of simulated and CHEX-MATE clusters. This quantity measures how concentrated the cluster core is with respect to the outer regions (i.e. more concentrated clusters show a steeper profile). The concentration of the simulated clusters is systematically higher, by approximately 20-30%.
The EM radial profile scatter
Computation of the scatter
Departures from self-similarity are linked to individual formation history as well as non-gravitational processes such as AGN feedback (outflows, jets, cavities, shocks) and feeding (cooling, multi-phase condensation; e.g. Gaspari et al. 2020). The additional terms used to obtain the EM profiles can partly account for these effects. The scatter of these profiles offers the opportunity to quantify such departures, and the CHEX-MATE sample is ideal to achieve this goal since the selection function is simple and well understood.
We computed the intrinsic scatter of the CHEX-MATE radial profiles by applying the following procedure. First, we interpolated each scaled profile on a common grid formed by ten logarithmically spaced radial bins in the [0.05-1.1]R^{YSZ}_{500} radial range. We used a model for which the observed distribution of the points, S_obs, in each radial bin is the realisation of an underlying normal distribution, S_true, with log-normal intrinsic scatter, σ_int:

$\ln S_{\rm true} \sim {\rm Normal}(\ln\mu,\ \sigma_{\rm int})$ ,   (7)

with µ the mean value of the distribution. We set broad priors on the parameters we are interested in:

$\ln\mu \sim {\rm Normal}(\ln\langle EM(r)\rangle,\ \sigma = 10)$ ,   (8)

$\sigma_{\rm int} \sim {\rm Half\mbox{-}Cauchy}(\beta = 1.0)$ ,   (9)

where ⟨EM(r)⟩ is the mean value of the interpolated EM profiles at the radius r. We assumed a Half-Cauchy distribution for the scatter, as this quantity is defined as positive. Since σ(ln X) = σ(X)/X, the intrinsic scatter in linear scale becomes:

$\sigma_{\rm lin} = \sigma_{\rm int}\,\mu$ ,   (10)

and the total scatter, σ_tot, is the quadratic sum of σ_lin and the statistical scatter σ_stat:

$\sigma_{\rm tot} = \sqrt{\sigma_{\rm lin}^2 + \sigma_{\rm stat}^2}$ .   (11)

The observed data were then assumed to be drawn from a normal realisation of the mean value and total scatter:

$S_{\rm obs} \sim {\rm Normal}(\mu,\ \sigma_{\rm tot})$ .   (12)
We determined the intrinsic scatter σ_int and its 1σ error by applying the No U-Turn Sampler (NUTS) as implemented in the Python package PyMC3 (Salvatier et al. 2016) and using 1,000 output samples. Our sample contains nine objects for which we were not able to measure the profile above R^{YSZ}_{500}. Six of these objects are less massive than M^{YSZ}_{500} ≈ 4 × 10^14 M_⊙ and are classified as "mixed morphology" objects. We investigated the impact of excluding these profiles from the computation of the scatter by comparing the scatter computed within 0.9R^{YSZ}_{500} using the full sample with the scatter computed excluding the nine objects. We noticed that this exclusion reduces the scatter by approximately 15% at R^{YSZ}_{500}, starting from ∼0.4R^{YSZ}_{500}. We argue that this reduction of the scatter is linked to the fact that the nine clusters, being morphologically mixed, contribute positively to the total scatter. For this reason, we corrected for this effect by defining a correction factor, c_f, that quantifies the difference in the scatter due to the exclusion of these profiles. We computed the ratio between the scatter including and excluding the nine profiles in the [0.06-0.9]R^{YSZ}_{500} radial range, where we extracted the profiles for the whole CHEX-MATE sample. We fitted this ratio via the mpcurvefit routine using a second-degree polynomial of the form c_f(r) = ar² + br + c and obtained the coefficients [a, b, c] = [0.410, −0.117, 0.993]. We multiplied the scatter of the whole sample by c_f in the [0.06-1]R^{YSZ}_{500} radial range. From hereon, we refer to this scatter as the "corrected intrinsic scatter".
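A compact PyMC3 sketch of the hierarchical model of Eqs. (7)-(12) for a single radial bin is shown below; variable names are ours and the sampler settings are illustrative, not those of the published analysis:

```python
import numpy as np
import pymc3 as pm

def intrinsic_scatter(s_obs, s_stat_err, draws=1000):
    """Log-normal intrinsic scatter sigma_int with a Half-Cauchy prior,
    sampled with NUTS (the PyMC3 default). `s_obs` are the interpolated
    EM values of all clusters at one radius; `s_stat_err` their
    statistical errors."""
    with pm.Model():
        ln_mu = pm.Normal("ln_mu", mu=np.log(np.mean(s_obs)), sigma=10.0)  # Eq. (8)
        sigma_int = pm.HalfCauchy("sigma_int", beta=1.0)                   # Eq. (9)
        mu = pm.Deterministic("mu", pm.math.exp(ln_mu))
        sigma_lin = sigma_int * mu                                         # Eq. (10)
        sigma_tot = pm.math.sqrt(sigma_lin ** 2 + s_stat_err ** 2)         # Eq. (11)
        pm.Normal("s_obs", mu=mu, sigma=sigma_tot, observed=s_obs)         # Eq. (12)
        trace = pm.sample(draws=draws, tune=1000, progressbar=False)
    return trace["sigma_int"]
```

Running this per radial bin yields the posterior of σ_int, whose median and 68% interval give the scatter profile and its error.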
The CHEX-MATE scatter
The corrected intrinsic scatter of the EM profiles is reported in the top-left panel of Fig. 11. The scatter computed using the profiles centred either on the centroid or on the peak gave consistent results.
The intrinsic scatter of the scaled EM profiles depends substantially on the scale considered. In the central regions, the large observed scatter of ∼0.8 at R ∼ 0.1R_500^YSZ reflects the complexity of the cluster cores in the presence of non-gravitational phenomena, such as cooling and AGN feedback. On top of that, merging events are known to redistribute gas properties between the core and the outskirts, which flattens the gas density profiles in cluster cores. The scatter reaches a minimum value of ∼0.2 in the [0.3-0.7]R_500^YSZ radial range, where it remains almost constant. This result confirms the behaviour observed in the left panel of Fig. 8, where the dispersion of the profiles shown is minimal in this radial range and the scaled profiles converge to very similar values. The scatter increases at R > 0.7R_500^YSZ from 0.2 to 0.35. The scatters of the morphologically disturbed and relaxed clusters considered separately are shown in the top-left panel of Fig. 11. The scatter of the morphologically disturbed clusters is higher at R < 0.3R_500^YSZ but consistent with that of the relaxed ones. This is expected, as the scatter originates from the combination of non-gravitational processes in the core and merging phenomena. This reinforces the scenario in which the differences between the EM profiles of relaxed and disturbed objects disappear in cluster outskirts, as already shown with the study of the shapes in Sect. 5. The dependency of the scatter on cluster mass was investigated by comparing the scatter between high- and low-mass objects, as shown in the top-right panel of Fig. 11. No significant differences could be seen. We investigated the evolution of the scatter by comparing the most massive clusters, M_500^YSZ > 5 × 10^14 M⊙, in the low- and high-redshift samples. This is shown in the bottom-left panel of the figure, and as for the mass sub-samples, we found no significant differences except in the very inner core at R < 0.1R_500^YSZ, where the local objects indicate a larger variation.
Comparison with other samples
We computed the scatters of the profiles of the REXCESS and X-COP samples following the same procedure we used for the CHEX-MATE sample. These are shown in the bottom-right panel of Fig. 11. The width of the envelope corresponds to the 1σ uncertainty. Overall, the CHEX-MATE, REXCESS, and X-COP scatters are consistent in the [0.07-0.6]R_500^YSZ radial range. This excellent agreement is due to the fact that the samples are representative of the wide plethora of EM profile shapes in the cores of clusters. There is a slight disagreement at larger scales, with the CHEX-MATE scatter being lower at more than 1σ, which could be due to the re-scaling. This is one important issue that will be investigated in forthcoming papers, also drawing on multi-wavelength data.

Fig. 12: EM maps of two of the simulated clusters used in this work projected along the lines of sight X, Y, and Z. The top row shows a cluster whose morphology appears roundish in all three projections considered. On the bottom we show, on the contrary, a cluster whose morphology is particularly complex and appears different in each of the three projections. We refer to the cluster in the top row as regular and to the latter as irregular. The white circle indicates R_500.
Investigating the origin of the scatter
Simulation scatters
In this section, we turn our attention to The Three Hundred dataset. The cosmological simulations allowed us to break down the sample scatter, or total scatter, into two components: the genuine cluster-to-cluster scatter, which would be the sample scatter measured between the true 3D profiles of the objects, and the projection scatter. The latter measures the differences that various observers across the Universe would detect when looking at the same object from distinct points of view.
In this work, we scaled the CHEX-MATE EM profiles using the results of Pratt et al. (2022) and Ettori et al. (2022), which were derived using an empirical, ad hoc adaptation of the self-similar scaling predictions. However, the same scaling is less suitable for the simulations, which agree better with the self-similar evolution of Eq. 4, since this expression minimises their scatter. For this reason, all the scatters presented from this point on were derived from EM profiles scaled assuming only self-similar evolution, both for The Three Hundred and the CHEX-MATE samples.
The projection scatter term
The evaluation of this term requires knowledge of the 3D spatial distribution of the ICM. A perfectly spherically symmetric object would appear identical from all perspectives, and its projection scatter would be equal to zero. On the other hand, an object whose ICM spatial distribution presents a complex morphology will produce a large projection scatter. This can be visualised by looking at the three EM maps obtained for three orthogonal lines of sight for two objects of The Three Hundred collaboration in Fig. 12. In detail, the cluster shown in the top row is roundish and does not show evident traces of merging activity within a radius of R = R_500 (white circle). The cluster in the bottom row, however, exhibits a complex morphology due to ongoing merging activity and the presence of sub-structures, which cause it to appear different in the three projections.
This complexity is reflected in the projection scatters shown in Fig. 13, which were computed considering the 40 lines of sight for the two objects, not only the three shown in the images. The scatters are similar within R < 0.2R_500. At R > 0.4R_500, the scatter of the irregular cluster diverges, while that of the regular object remains almost constant. In particular, in the case of the irregular object, ∼0.4R_500 corresponds to the position of the large sub-structure visible in the bottom row of the left panel of Fig. 12. Interestingly, the Simx projection scatter increases rapidly at R ∼ 0.8R_500 also for the regular cluster, while the Sim one remains mostly constant. This difference in behaviour is due to the deliberate 7% over- and underestimated background correction explained in Section 4.2. The over- and underestimation of the background yields profiles that are steeper or flatter than the correct ones, respectively, and hence increases the scatter between the profiles. This effect is particularly important at R ∼ R_500 because there the cluster signal reaches the background level.
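This effect can be reproduced with a toy numerical example; the profile shape and the background level below are our own illustrative choices, not the actual Simx pipeline.

```python
import numpy as np

r = np.logspace(-1, 0.05, 30)            # radius in units of R500
S_true = (1.0 + (r / 0.2)**2) ** -1.5    # toy beta-model-like cluster profile
B = S_true[-1]                           # flat background: signal reaches it at ~R500

S_obs = S_true + B                       # what the detector records
S_over = S_obs - 1.07 * B                # 7% overestimated background -> steeper profile
S_under = S_obs - 0.93 * B               # 7% underestimated background -> flatter profile

# the fractional bias grows as the signal approaches the background level
print(S_over[-1] / S_true[-1], S_under[-1] / S_true[-1])   # ~0.93 and ~1.07
```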
We calculated the projection scatter between the 40 projections for each of the 115 objects from The Three Hundred sample; these profiles are shown in Fig. 14. On average, the scatter starts from a value of 0.15 at R ∼ 0.1R_500 and then reaches the value of 0.3 at R_500, with a rapid increase from 0.2 to 0.3 at R ∼ 0.9R_500. This rapid increase is due to the complex spatial ICM distribution at large radii. There are approximately five outliers that exhibit a larger scatter than the envelope and a complex behaviour. These clusters are characterised by the presence of sub-structures that happen to lie behind or in front of the main halo along some lines of sight. Along such lines of sight, the sub-structure emission is not visible as a distinct feature, as it blends with the emission from the core of the cluster. On the contrary, if the sub-halo is at a random position with respect to the main halo, it will appear as a sub-structure at a different position depending on the projection. In this case, the cluster morphology is complex. For these reasons, the resulting profiles of these clusters can show remarkable differences depending on the line of sight.
The total scatter term
The total scatter term measures the differences between the cluster EM profiles within a sample. We recall that each of our simulated objects is seen along 40 lines of sight. With this possibility in hand, we created 40 realisations of the same sample of 115 objects and computed the scatter for each realisation. The 40 total scatters of the Simx profiles are shown in Fig. 14.
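The two terms can be computed as sketched below. This is a simplified version that uses the plain dispersion of ln EM in place of the full hierarchical model described above, and the array layout is an assumption of ours.

```python
import numpy as np

# EM: array of shape (n_clusters, n_los, n_bins) holding the scaled EM
# profiles of every cluster along its 40 lines of sight on a common grid.

def projection_scatter(EM):
    # dispersion across lines of sight: one curve per cluster
    return np.log(EM).std(axis=1)        # shape (n_clusters, n_bins)

def total_scatter_realisations(EM):
    # realisation k: every cluster observed along its k-th line of sight,
    # giving one sample-wide scatter curve per realisation
    return np.log(EM).std(axis=0)        # shape (n_los, n_bins)
```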
The high average value of 0.9 of the total scatter at R < 0.3R_500 captures the wide range of profile shapes within the inner core. The scatter reaches its minimum value of ∼0.4 at R ∼ 0.5R_500^YSZ and rapidly increases afterwards, due to the presence of sub-structures in the outskirts and the phenomena related to merging activity, as well as the background subtraction discussed in Section 7.2.
Comparison between the total and projection scatters
A direct comparison of the projection and total scatter terms in the simulated sample allowed us to investigate the origin of the scatter as predicted by numerical models. The two scatters are shown in Fig. 14. The total scatter is almost eight times greater than the projection term at R ∼ 0.1R_500 and rapidly decreases to only two times greater at R ∼ 0.3R_500. This indicates that genuine differences between clusters dominate over the variations due to projection along different lines of sight at such scales.
The total scatter is only 20% greater than the projection term in the [0.4-0.8]R_500^YSZ radial range. This is the scale where the differences between clusters are smallest. At R > 0.8R_500^YSZ, both scatters increase, implying that merging phenomena and sub-structures impact the distribution of the gas. Furthermore, we argue that the deliberate background over- and under-subtraction discussed in Section 7.2 contributes to increasing both scatter terms by enlarging the distribution of the profiles where the signal of the cluster reaches the background level. The total scatters obtained using the Sim profiles are similar to the ones obtained using Simx up to 0.9R_500 but remain below ∼0.45 at R_500.
Simulation versus observations
We can break down the contributions to the scatter in the CHEX-MATE sample by using the numerical simulation scatter terms as a test bed. The Simx total and projection scatter medians and their dispersions are shown in Fig. 15. The CHEX-MATE scatter dispersion is also shown in the same figure. We recall that it is computed using the EM profiles scaled according to the self-similar model using Equation 2 and is greater in the [0.2-0.8]R_500^YSZ radial range than the one shown in Fig. 11, due to the residual dependency on mass and redshift discussed in Section 3.3.2. However, the scatters reach the value of approximately 0.4 at R_500^YSZ, indicating that the differences between the profiles there are dominated by clumpy patches in the ICM distribution due to sub-halos and filamentary structures.
The CHEX-MATE and Simx total scatters are in excellent agreement at R < 0.1R_500^YSZ and marginally consistent within 2σ in the [0.1-0.3]R_500^YSZ radial range. Generally speaking, they exhibit the same behaviour, rapidly declining from the maximum value of the scatter, 1.2, to 0.4. The projection scatter, on the other hand, is at its minimum value of 0.1 and is almost constant up to 0.3R_500^YSZ. This result implies that the observed scatter between the scaled EM profiles within 0.5R_500^YSZ is dominated by genuine differences between objects and not by the projection along one line of sight, as explained in Sect. 7.4. In other words, we are not limited by the projection on the plane of the sky when studying galaxy cluster population properties at such scales.
The total and CHEX-MATE scatters reach their minimum value, approximately 0.4, in the [0.3-0.5]R_500^YSZ radial range and remain almost constant within these radii. This minimum value quantifies the narrow distribution of the profiles shown in Fig. 3 and Fig. 6. Furthermore, the slopes of morphologically relaxed and disturbed objects become consistent at such radii, as shown in the right panel of Fig. 10. This suggests that the differences between EM profiles are minimal at such intermediate scales, regardless of morphological status, mass, or redshift. As in the inner regions, the projection term increases only mildly here and provides a small contribution.
The Three Hundred scatters rapidly increase at R > 0.5R_500^YSZ, with the total scatter reaching a value of approximately 0.7 at R_500. The projection scatter reaches the CHEX-MATE scatter at R_500^YSZ. We argue that this effect is due to a combination of not masking the sub-structures when extracting the Simx profiles and the deliberately incorrect background subtraction discussed in Sect. 7.2. Indeed, the use of azimuthal median profiles reduces this effect, as we discuss in detail in Appendix A, where we show that they are efficient at removing part of these spatial features. The fact that the scatter terms increase in a similar manner despite the use of median profiles reinforces the idea that this behaviour is likely related to the analysis techniques rather than to genuine differences between the profiles or to projection effects.
Discussion and conclusion
We have studied the properties of the SX and EM radial profiles of the CHEX-MATE sample, which comprises 116 SZ-selected clusters observed for the first time with deep and homogeneous XMM-Newton observations. Our main findings are as follows:
- The choice of the centre, between the X-ray peak and the centroid, for the extraction of the SX profiles yields consistent results in the [0.05-1]R_500^YSZ radial range. Significant differences can be seen only within ∼0.05R_500^YSZ.
- The use of azimuthal average and median techniques to extract the profiles impacts the overall profile normalisation by a factor of 5% on average. The shape is mostly affected at R > 0.8R_500^YSZ, with azimuthally averaged profiles being flatter at this scale.
- The EM profiles exhibit a dependency on mass and a mild dependency on redshift, which are not accounted for by the scaling computed according to the self-similar scenario, as also found by Pratt et al. (2022) and Ettori et al. (2022).
- Morphologically disturbed and relaxed cluster EM profiles have different normalisations and shapes within ∼0.4R_500^YSZ. The differences at larger radii are on average within 10% and are consistent within the dispersion of the full sample.
- The shape and normalisation of the EM profiles present a continuous distribution within the [0.2-0.4]R_500^YSZ radial range. The extreme cases of morphologically relaxed and disturbed objects are characterised by power law indexes α = 2.51 ± 0.13 and α = 1.38 ± 0.2, respectively, which are inconsistent at the 3σ level. The picture changes at R > 0.4R_500^YSZ, where the slopes of these extremes become marginally consistent at 1σ in the [0.4-0.6]R_500^YSZ radial bin. The slopes in the last bin are in excellent agreement.
- The scatter of the CHEX-MATE sample depends on the scale. The scatter maximum is ∼1.1 within 0.3R_500^YSZ, reflecting the wide range of profile shapes within the cluster cores, which range from the flat emission of disturbed objects to the peaked emission of the relaxed clusters. The scatter decreases towards its minimum value, 0.2, at 0.4R_500^YSZ and increases rapidly to 0.4 at R_500^YSZ. This result is coherent with the overall picture of a characteristic scale, R ∼ 0.4R_500^YSZ, at which the differences between profiles in terms of shape and normalisation are minimal. The increase of the scatter at R_500^YSZ is expected, as this is the scale at which merging-related phenomena and the patchy distribution of the ICM become important.
- The scatters of the morphologically relaxed and disturbed clusters are different within 0.4R_500^YSZ, the former being smaller. Above this radius, they are in excellent agreement between themselves and with the entire sample as well, implying that the properties of the EM profiles in the outer parts are not affected by the properties in the core. There are no differences in the scatter of the sub-samples formed by high- and low-mass objects, and we found no evolution of the scatter for high-mass objects.
The overall emerging picture is that there is a characteristic scale, R ∼ 0.4R_500^YSZ, where the differences between profiles in terms of shape and normalisation are minimal. The exceptional data quality has allowed us to provide to the scientific community the scatter of the SX and EM radial profiles of a representative cluster sample with an unprecedented precision of approximately 5%.
The results from observations were compared to a sample drawn from the numerical simulation suite The Three Hundred formed by 115 galaxy clusters selected to reflect the CHEX-MATE mass and redshift distribution. For each cluster, we computed the EM along 40 randomly distributed lines of sight, which allowed for the investigation of projection effects for the first time. Our main findings can be summarised as follows:
- The properties derived using the Sim or the Simx profiles are similar within R_500, confirming the fidelity of the mock X-ray images, which were calibrated to match the average statistical quality of the CHEX-MATE observations.
- The simulation EM profiles appear systematically steeper than those from observations. The hydrostatic bias might play a key role in explaining this difference. Scaling the CHEX-MATE profiles by R_500^YSZ/0.8^(1/3), which assumes a 20% bias, alleviates these differences, and the ratio between the CHEX-MATE and simulation medians becomes closer to one, with the exception of the centre, where simulations typically have a greater gas density.
- The total scatter of the simulation sample follows the same behaviour as that of the observations up to 0.6R_500^YSZ and then increases more rapidly to an average value of approximately 0.7, whereas the observed one reaches the value of 0.4 at R_500^YSZ. The comparison with the projection scatter at such scales hints at a contribution from projection effects on the order of 0.3.
- The projection scatter allowed us to study the spherical symmetry of clusters. This term increases slightly from approximately 0.1 at 0.1R_500^YSZ up to approximately 0.3 at ∼R_500^YSZ, exhibiting a rapid gradient at R_500^YSZ. This term is smaller than the total one over the entire [0.1-0.9]R_500^YSZ radial range considered, and its dispersion is on the order of 10%. This implies that the difference we observe between objects is due to a genuine difference in the gas spatial distribution.
- The background subtraction process becomes crucial at R_500 for determining the profile shape at R_500^YSZ. Its deliberate over- or underestimation significantly contributes to increasing both the total and projection scatters at such large scales. Furthermore, the rapid increase of both scatters can also be explained by the fact that sub-structures are not masked in the simulated images.
The large statistics offered by the simulation dataset allowed us for the first time to investigate the origin of the scatter, break it down into its components, namely the projection and total terms, and study them as a function of R_500^YSZ. The overall picture emerging is that there are three regimes in the scatter:
- [0.1-0.4]R_500^YSZ: The differences between profiles are genuinely due to a different distribution of the gas and are also influenced by feedback processes and their implementation (see, e.g., Gaspari et al. 2014), which translates into a plethora of profile shapes and normalisations.
- [0.4-0.6]R_500^YSZ: In this range, the scatter is sensitive to the scaling applied, suggesting that this is the scale where clusters are closest to the self-similar scenario.
- [0.6-1]R_500^YSZ: The CHEX-MATE scatter and the total scatter increase at such scales and are greater by a factor of approximately two than the projection term, showing that the profile differences are genuine and not due to projection effects. The emission of sub-structures and filamentary structures and the correct determination of the background play a crucial role in determining the shape of the profiles at such scales.
We were able to investigate the origin of the scatter by combining the statistical power of the CHEX-MATE sample, that is, the large number of objects observed with sufficient exposure time to measure surface brightness profiles above R_500^YSZ together with the sample's homogeneity, with the uniqueness of the simulation sample. The latter allowed us to discriminate between the scatter due to genuine differences between profiles and that related to projection. The CHEX-MATE sample allowed us to measure the scatter up to R_500^YSZ with sufficient precision to clearly discriminate the contribution from the projection term at all scales.
Acknowledgements. The authors thank the referee for his/her comments. We acknowledge financial contribution from the contracts ASI-INAF Athena 2019-27-HH.0, "Attività di Studio per la comunità scientifica di Astrofisica delle Alte Energie e Fisica Astroparticellare" (Accordo Attuativo ASI-INAF n. 2017-14-H.0), and from the European Union's Horizon 2020 Programme under the AHEAD2020 project (grant agreement n. 871158). This research was supported by the International Space Science Institute (ISSI) in Bern, through ISSI International Team project #565 (Multi-Wavelength Studies of the Culmination of Structure Formation in the Universe). The results reported in this article are based on data obtained with XMM-Newton, an ESA science mission with instruments and contributions directly funded by ESA Member States and NASA. GWP acknowledges financial support from CNES, the French space agency.

Appendix A: The impact of sub-structures in simulations

The presence of sub-structures within the region of extraction of the radial profiles modifies the shape of the surface brightness and emission measure profiles. This translates into an increase of the scatter between them. In this work, we are interested in the distribution of the gas within the cluster halo, filtering out the contribution of sub-structures whose emission is detectable within or near R_500^YSZ. This filtering is achieved by masking the sub-structures in observations. The same procedure is difficult to apply to simulations. Generally speaking, automatic detection algorithms in X-ray analyses are calibrated to detect point source emission only, as the detection of extended sources would cause the algorithm to also detect the cluster emission itself. For this reason, the identification of extended emission associated with sub-structures is done via eye inspection, but this approach cannot be taken with large datasets comprised of thousands of maps, such as the one we used in this work. The fact that we do not mask sub-structures in the simulated maps constitutes one of the main differences between the X-ray analysis and The Three Hundred analysis. However, we could qualitatively investigate the impact of sub-structures on the scatter by comparing the results obtained following the procedures of Section 7 using the azimuthal average and median profiles, shown in Fig. A.1.
The bottom panel shows that the scatters are nearly identical within 0.2R_500 and differ on average at around the 20% level in the [0.2-0.6]R_500 radial range. The scatter of the mean profiles increases rapidly above that radius. The same behaviour is observed for the scatter of the median profiles, even if the increase is less rapid, as shown by the ratio in the bottom panel at R > 0.6R_500.
The azimuthal median in a given annulus does not completely remove the emission from extended sub-structures, which can only be achieved by masking them. However, we argue that the scales at which the sub-structures become important, R > 0.6R_500, correspond to annuli whose size is typically larger than the size of a sub-halo. For this reason, the azimuthal median is only marginally affected.
Indeed, sub-structure masking is a key difference between observations and simulations, and it does affect the computation of the total scatter. However, we suggest that using the median profiles is an effective way to reduce the impact of sub-structures at the scales at which they are important. For this reason, the rapid increase of the total scatter at ∼R_500 is more likely due to genuine differences between the profiles and to the background subtraction effect discussed in Section 7.2.
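As an illustration, the following is a minimal sketch of the two estimators applied to a background-subtracted image; point-source masking is assumed to have been applied already, and the function and variable names are ours.

```python
import numpy as np

def annular_mean_median(img, cx, cy, r_edges):
    """Azimuthal mean and median brightness in annuli centred on (cx, cy)."""
    y, x = np.indices(img.shape)
    r = np.hypot(x - cx, y - cy)
    mean_prof, median_prof = [], []
    for r_in, r_out in zip(r_edges[:-1], r_edges[1:]):
        pix = img[(r >= r_in) & (r < r_out)]
        mean_prof.append(pix.mean())        # sensitive to bright sub-halos
        median_prof.append(np.median(pix))  # robust when the annulus is larger
                                            # than the sub-halo
    return np.asarray(mean_prof), np.asarray(median_prof)
```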
Appendix B: ROSAT-XMM-Newton background relation
The determination of the sky background level was performed in a region free from cluster emission. In this work, we used the annular region between R_200^YX and 13.5 arcminutes to measure the photon count rate associated with the sky background. We considered that we had sufficient statistics for the background estimation if the width of this region was at least 1.5 arcmin (i.e. R_200^YSZ < 12 arcmin). The R_200^YSZ of nearby clusters at z ≲ 0.2 is generally larger than 12 arcmin, and it was not possible to define a sky background region for them unless offset observations were available. For this reason, we predicted the sky background for these objects using the ROSAT All-Sky Survey diffuse background maps obtained with the Position Sensitive Proportional Counters (PSPC). We determined the ROSAT photon count rate, ROSAT_cr, for each CHEX-MATE object in the R5 band, [0.73-1.56] keV, within an annular region centred on the X-ray peak, with minimum and maximum radii of R_200^YSZ and 1.5 degrees, respectively, using the sxrbg tool (Sabol & Snowden 2019). We then calibrated the relation between ROSAT_cr and the XMM-Newton sky background count rate, XMM_cr, for clusters whose R_200^YX was less than 12 arcmin, by performing a linear regression using the linmix package (Kelly 2007):
XMM_cr = α + β × ROSAT_cr.    (B.1)
The results of the linear regression for the mean and median profiles are shown in the left and right panels of Fig. B.1, respectively. The best-fit values of the linear relation y = α + β x and the intrinsic scatter are reported in Table B.1. We used these relations to estimate the XMM-Newton sky background for the objects in the CHEX-MATE sample whose R_200^YX is greater than 12 arcmin.
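A sketch of this calibration using the linmix package follows; the arrays are placeholders standing in for the measured count rates of the calibration clusters.

```python
import numpy as np
import linmix

# placeholder calibration data: sky background count rates (ct/s) and their
# 1-sigma errors for the clusters with R200 < 12 arcmin
rosat_cr = np.array([1.0e-4, 1.5e-4, 2.2e-4])
rosat_err = 0.1 * rosat_cr
xmm_cr = np.array([3.1e-4, 4.4e-4, 6.3e-4])
xmm_err = 0.1 * xmm_cr

# Bayesian linear regression with errors on both variables (Kelly 2007)
lm = linmix.LinMix(rosat_cr, xmm_cr, xsig=rosat_err, ysig=xmm_err, K=2)
lm.run_mcmc(silent=True)

alpha = np.median(lm.chain["alpha"])              # intercept of Eq. (B.1)
beta = np.median(lm.chain["beta"])                # slope of Eq. (B.1)
scatter = np.median(np.sqrt(lm.chain["sigsqr"]))  # intrinsic scatter

# predicted XMM-Newton sky background for a cluster filling the field of view
rosat_cr_large = 2.5e-4                           # hypothetical new measurement
xmm_pred = alpha + beta * rosat_cr_large
```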
Fig. 1: Distribution of the clusters published in the PSZ2 Planck catalogue (Planck Collaboration XXVII 2016) in the mass-redshift plane.

Fig. 2: Comparison of the statistical properties of the CHEX-MATE EM profiles divided into redshift and mass selected sub-samples.

Fig. 3: Scaled emission measure median profiles centred on the X-ray peak.

Fig. 4: Mass distribution of the CHEX-MATE and The Three Hundred samples. These are shown with black empty and green polygons, respectively.

Fig. 5: Example of the creation of the X-ray mock images. Left panel: EM map of a simulated cluster of our sample. The white circle encompasses R_500. Right panel: Mock X-ray background-subtracted image in the [0.5-2] keV band of the same object shown in the left panel after we applied the procedures simulating typical X-ray observation effects. These are described in detail in Section 4.

Fig. 6: Comparison between the Sim and Simx profiles.

Fig. 7: Comparison with the results of Arnaud et al. (2010), Pratt et al. (2010), Maughan et al. (2012), and Eckert et al. (2012).

Fig. 8: Comparison of the statistical properties of the CHEX-MATE EM profiles with morphologically selected sub-samples and the X-COP and REXCESS samples. Top-left panel: Median of the EM peak median profiles of the CHEX-MATE sample. Its dispersion is shown with the black solid line and the grey envelope. The blue and red solid lines represent the median of the profiles derived from the morphologically relaxed and disturbed objects, respectively. Bottom-left panel: Ratio of the median of the morphologically relaxed and disturbed EM profiles over the median of the full CHEX-MATE sample. The same colour coding as above is used. The dotted lines represent the 0.8 and 1.2 values and are shown for reference. Top-right panel: Same as the left panel, except that the median of the azimuthal mean profiles is shown. The median is indicated with the dotted black line. The medians of the X-COP (Ghirardini et al. 2019) and REXCESS (Croston et al. 2008) EM profiles are shown with green and orange solid lines, respectively. Bottom-right panel: Ratio between the medians of the X-COP and REXCESS samples and the CHEX-MATE median. The REXCESS profiles were extracted performing an azimuthal average. For this reason, we show the ratio between the median of the REXCESS profiles and the median of the CHEX-MATE azimuthal mean profiles.

Fig. 9: Top panel: Simx EM profiles and their median. The profiles are shown with grey solid lines and their median with a black dashed line. The Simx profiles were extracted from a randomly chosen line of sight for each cluster. The red solid line identifies the median of the CHEX-MATE sample. The black solid line is the median of the CHEX-MATE sample assuming a 20% bias on the mass (i.e. each profile has been scaled by R_500^YSZ/0.8^(1/3); see Sect. 4 for details). Bottom panel: Ratio of the median of the CHEX-MATE sample, with and without correction for hydrostatic bias, to the median of the Simx simulations. The CHEX-MATE median with the correction is shown with the red line and the median without it with the black line.

Fig. 10: Results of the fit of the CHEX-MATE EM profiles using broken power laws. Left panel: Fit of the median CHEX-MATE EM profiles, shown with grey lines, with a power law in four radial bins. The bins are [0.2-0.4], [0.4-0.6], [0.6-0.8], and [0.8-1] in units of R_500^YSZ. For each radial bin, the black solid line represents the best fit of the power law shown in Eq. 6. The magenta envelope was obtained considering the dispersion of the fitted parameters (A and α in Eq. 6). The blue and red solid lines represent the best fit of the profiles of the morphologically relaxed and disturbed clusters, respectively. Right panel: Median values of the power law indexes, α, obtained from the fit of the CHEX-MATE, X-COP, REXCESS, and Simx samples in the four radial bins shown in the left panel. For each value we report its dispersion.

Fig. 11: Scatter of the EM profiles of the CHEX-MATE, X-COP, and REXCESS samples. Top-left panel: Comparison between the scatter of the CHEX-MATE sample and the morphologically selected sub-samples. Black dotted lines identify the ±1σ scatter between the EM profiles of the CHEX-MATE sample. The scatters between the profiles of morphologically relaxed and disturbed clusters are shown with blue and red envelopes, respectively. The width of the envelopes corresponds to the 1σ uncertainty. Top-right panel: Comparison between the scatter of the CHEX-MATE sample and the mass selected sub-samples. Green and magenta envelopes represent the scatter between the EM profiles of the low- and high-mass sub-samples, respectively. Bottom-left panel: Investigation of the evolution of the scatter. Blue and red envelopes represent the scatter between the profiles of the high-mass clusters, M_500^YSZ > 5 × 10^14 M⊙, of the low- and high-redshift samples, respectively. Bottom-right panel: Comparison between the scatter of CHEX-MATE and those of the X-COP (Ghirardini et al. 2019) and REXCESS (Croston et al. 2008) samples. The X-COP and REXCESS scatters are shown with green and orange envelopes, respectively. We recall that the scaling of the X-COP and REXCESS profiles was performed using temperatures obtained differently than those for CHEX-MATE; see Section 5.4 for details.

Fig. 13: Projection scatter of the regular and irregular clusters in Fig. 12. The scatters are shown with black and red solid lines, respectively. The black and red envelopes represent the dispersion. The black and red dotted lines refer to the projection scatter computed using the Simx profiles.

Fig. 14: Comparison between the total and projection scatters of the Simx profiles. The scatters are shown with solid green and magenta lines, respectively.

Fig. 15: Comparison between the scatter of the CHEX-MATE sample and the total and projection terms of the simulations. The CHEX-MATE scatter is shown using dashed black lines. The medians of the total and projection terms are represented with green and magenta solid lines, respectively. Their 68% dispersions are represented using envelopes coloured accordingly.

Fig. A.1: Comparison between the total scatters computed using the azimuthal mean and median profiles. Top: Medians of the total scatters from The Three Hundred sample computed using the azimuthal mean and median EM profiles. These are shown with green and grey lines, respectively. The 68% dispersion is shown with the coloured envelopes. Bottom: Ratio between the median of the total scatters computed using the azimuthally averaged profiles and the median computed using the azimuthal median profiles. The dashed-dotted lines indicate the identity line and the ±20% levels.

Fig. B.1: Calibration of the sky background count rate between XMM-Newton and ROSAT-PSPC. Left panel: Relation between the sky background count rate as measured using the XMM-Newton mean SX profiles and ROSAT-PSPC in the R5, [0.56-1.21] keV, energy band. The black points represent the clusters for which R_200^YSZ is less than 12 arcmin and that have been used to fit the relation. The grey points are the clusters filling the field of view, their R_200^YSZ being greater than 12 arcmin. The solid line represents the cross-correlation obtained via the linear regression. The dashed lines represent the intrinsic scatter of the relation. Right panel: Same as the left panel, except that the XMM-Newton count rate is measured using the median SX profiles.

Fig. D.1: Surface brightness radial profiles of the CHEX-MATE sample. Left column: Azimuthally averaged surface brightness profiles of the CHEX-MATE sample centred on the X-ray peak and on the centroid in the top and bottom panels, respectively. Blue and red solid lines represent morphologically relaxed and disturbed clusters, respectively. The black solid vertical line identifies R_500^YSZ. Right column: Same as the left column, but for profiles extracted computing the azimuthal median.
Table 1: Average of the relative errors of the CHEX-MATE EM profiles.

Radius [R_500^YSZ] | Average relative error [%] | Number of profiles used
0.2 | 1.7 | 116
0.5 | 2.1 | 116
0.7 | 3.0 | 116
1.0 | 6.0 | 107
Table B.1: Results of the linear minimisation of the ROSAT-R5 versus XMM-Newton sky background count rate relation shown in Equation B.1.

Parameter | Mean | Median
α [10^-5 ct/s] | 3.974 | 2.346
β | 2.730 | 2.630
σ [10^-5 ct/s] | 2.375 | 2.902

Notes: The σ term represents the intrinsic scatter.
Appendix C: Power law fit

We report in Table C.1 the results of the fit of the median EM profiles centred on the X-ray peak using the power law shown in Eq. 6 and described in Sect. 5. The fit was performed using the mean value of each bin as the pivot for the radius, that is, 0.3, 0.5, 0.7, and 0.9 for the [0.2-0.4], [0.4-0.6], [0.6-0.8], and [0.8-1.0]R_500 bins, respectively.

Table C.1: Results of the power law fit shown in Equation 6.

Radial bin [R_500] | α_CHX | α_Simx | α_CHX,MR | α_CHX,MD | A_CHX | A_Simx | A_CHX,MR | A_CHX,MD
0.2-0.4 | 2.01 ± 0.36 | 2.37 ± 0.36 | 2.51 ± 0.13 | 1.38 ± 0.20 | 1.88 ± 0.41 | 2.04 ± 0.27 | 2.13 ± 0.38 | 1.75 ± 0.31
0.4-0.6 | 2.58 ± 0.33 | 2.98 ± 0.38 | 2.93 ± 0.22 | 2.25 ± 0.18 | 0.59 ± 0.11 | 0.50 ± 0.11 | 0.55 ± 0.05 | 0.66 ± 0.14
0.6-0.8 | 3.03 ± 0.27 | 3.54 ± 0.56 | 3.17 ± 0.32 | 3.00 ± 0.20 | 0.22 ± 0.04 | 0.18 ± 0.05 | 0.19 ± 0.04 | 0.27 ± 0.04
0.8-1.0 | 3.27 ± 0.36 | 3.55 ± 0.99 | 3.44 ± 0.42 | 3.49 ± 0.27 | 0.10 ± 0.02 | 0.07 ± 0.02 | 0.07 ± 0.02 | 0.12 ± 0.02

Notes: The subscripts MR and MD stand for morphologically relaxed and disturbed, respectively. The normalisations A are in units of 10^-6 cm^-6 Mpc.
Appendix D: Surface brightness profiles

We show in Fig. D.1 the surface brightness profiles of the CHEX-MATE sample that we extracted as described in Section 3.3.1. The dotted line shown in the top-left panel indicates R_500^YSZ and highlights the data quality of the sample, as most of the profiles extend beyond that radius.

Table D.1: Observational, morphological, and global properties of the CHEX-MATE sample.

Planck name | z | X-peak RA [J2000] | X-peak Dec [J2000] | Centroid RA [J2000] | Centroid Dec [J2000] | N_H [10^20 cm^-2] | R_500^YSZ [arcmin] | M_500^YSZ [10^14 M⊙] | Morph.
PSZ2 G075.71+13.51 | 0.056 | 19:21:12.415 | 43:56:50.623 | 19:21:9.426 | 43:57:57.118 | 1.91 | 22.020 | 8.740 | M
PSZ2 G068.22+15.18 | 0.057 | 18:57:37.625 | 38:0:31.119 | 18:57:42.458 | 38:0:12.868 | 3.31 | 13.548 | 2.142 | M
PSZ2 G040.03+74.95 | 0.061 | 13:59:15.106 | 27:58:34.080 | 13:59:14.381 | 27:59:6.593 | 1.48 | 12.980 | 2.342 | M
PSZ2 G033.81+77.18 | 0.062 | 13:48:52.939 | 26:35:26.919 | 13:48:52.646 | 26:35:44.943 | 1.69 | 15.848 | 4.463 | R
PSZ2 G057.78+52.32 | 0.065 | 15:44:59.000 | 36:6:40.699 | 15:44:57.836 | 36:7:34.614 | 5.67 | 12.147 | 2.316 | M
PSZ2 G105.55+77.21 | 0.072 | 13:11:4.980 | 39:13:26.314 | 13:11:8.588 | 39:13:55.277 | 3.43 | 10.901 | 2.196 | M
PSZ2 G042.81+56.61 | 0.072 | 15:22:29.473 | 27:42:27.922 | 15:22:25.737 | 27:43:22.702 | 5.36 | 13.499 | 4.219 | M
PSZ2 G031.93+78.71 | 0.072 | 13:41:48.706 | 26:22:21.527 | 13:41:50.464 | 26:22:46.747 | 5.14 | 11.643 | 2.718 | M
PSZ2 G287.46+81.12 | 0.073 | 12:41:17.552 | 18:34:26.322 | 12:41:17.618 | 18:33:48.907 | 2.87 | 11.329 | 2.563 | M
PSZ2 G040.58+77.12 | 0.075 | 13:49:23.507 | 28:6:23.519 | 13:49:23.701 | 28:5:58.161 | 1.06 | 11.082 | 2.569 | M
PSZ2 G057.92+27.64 | 0.076 | 17:44:14.333 | 32:59:29.115 | 17:44:14.092 | 32:59:27.068 | 5.60 | 11.084 | 2.659 | M
PSZ2 G006.49+50.56 | 0.077 | 15:10:56.232 | 5:44:40.654 | 15:10:56.030 | 5:44:40.937 | 1.55 | 15.168 | 7.045 | R
PSZ2 G048.10+57.16 | 0.078 | 15:21:8.350 | 30:38:7.594 | 15:21:15.377 | 30:38:16.582 | 4.34 | 11.898 | 3.540 | D
PSZ2 G172.74+65.30 | 0.079 | 11:11:39.825 | 40:50:23.218 | 11:11:39.716 | 40:50:21.227 | 2.10 | 10.226 | 2.388 | M
PSZ2 G057.61+34.93 | 0.080 | 17:9:47.051 | 34:27:14.010 | 17:9:47.446 | 34:27:23.362 | 1.43 | 11.727 | 3.705 | M
PSZ2 G243.64+67.74 | 0.083 | 11:32:51.251 | 14:27:15.521 | 11:32:50.980 | 14:28:24.900 | 1.69 | 11.226 | 3.624 | M
PSZ2 G080.16+57.65 | 0.088 | 15:1:7.943 | 47:16:38.232 | 15:0:59.165 | 47:16:51.937 | 2.91 | 9.474 | 2.513 | D
PSZ2 G044.20+48.66 | 0.089 | 15:58:20.468 | 27:13:48.074 | 15:58:21.094 | 27:13:43.075 | 3.79 | 14.133 | 8.773 | M
PSZ2 G114.79-33.71 | 0.094 | 0:20:36.995 | 28:39:36.116 | 0:20:36.917 | 28:39:42.461 | 2.47 | 10.198 | 3.788 | M
PSZ2 G056.77+36.32 | 0.095 | 17:2:42.568 | 34:3:36.225 | 17:2:41.971 | 34:3:32.091 | 2.04 | 10.535 | 4.338 | M
PSZ2 G049.32+44.37 | 0.097 | 16:20:30.894 | 29:53:30.425 | 16:20:32.043 | 29:53:35.492 | 1.38 | 9.866 | 3.763 | M
PSZ2 G080.37+14.64 | 0.098 | 19:26:9.543 | 48:33:0.121 | 19:26:10.342 | 48:32:54.284 | 1.29 | 9.205 | 3.127 | M
PSZ2 G099.48+55.60 | 0.105 | 14:28:38.408 | 56:51:36.620 | 14:28:34.413 | 56:53:9.043 | 1.76 | 8.271 | 2.749 | D
PSZ2 G080.41-33.24 | 0.107 | 22:26:5.889 | 17:21:51.623 | 22:26:4.592 | 17:22:24.699 | 5.57 | 9.028 | 3.774 | M
PSZ2 G113.29-29.69 | 0.107 | 0:11:45.583 | 32:24:53.523 | 0:11:45.554 | 32:24:52.973 | 1.59 | 8.857 | 3.573 | M
PSZ2 G053.53+59.52 | 0.113 | 15:10:12.594 | 33:30:29.924 | 15:10:11.521 | 33:30:12.360 | 2.06 | 9.580 | 5.209 | M
PSZ2 G046.88+56.48 | 0.115 | 15:24:8.181 | 29:53:6.523 | 15:24:9.401 | 29:53:31.977 | 4.02 | 9.402 | 5.104 | D
PSZ2 G204.10+16.51 | 0.122 | 7:35:47.144 | 15:6:48.825 | 7:35:47.232 | 15:6:58.589 | 1.16 | 7.978 | 3.706 | M
PSZ2 G192.18+56.12 | 0.124 | 10:16:22.500 | 33:38:19.329 | 10:16:24.056 | 33:38:18.491 | 3.28 | 7.801 | 3.620 | M
PSZ2 G098.44+56.59 | 0.132 | 14:27:24.855 | 55:44:55.015 | 14:27:22.753 | 55:44:59.931 | 3.92 | 6.800 | 2.826 | M
PSZ2 G273.59+63.27 | 0.134 | 12:0:24.093 | 3:20:40.725 | 12:0:23.716 | 3:20:31.335 | 0.82 | 8.353 | 5.465 | M
PSZ2 G217.09+40.15 | 0.136 | 9:24:5.764 | 14:10:25.122 | 9:24:6.079 | 14:10:19.625 | 1.34 | 7.369 | 3.890 | M
PSZ2 G218.59+71.31 | 0.137 | 11:29:52.386 | 23:48:39.122 | 11:29:51.064 | 23:48:49.086 | 7.13 | 7.219 | 3.759 | D
PSZ2 G179.09+60.12 | 0.137 | 10:40:44.656 | 39:57:10.577 | 10:40:44.381 | 39:57:12.733 | 1.03 | 7.265 | 3.839 | M
PSZ2 G226.18+76.79 | 0.143 | 11:55:17.827 | 23:24:16.924 | 11:55:18.153 | 23:24:15.915 | 7.24 | 8.129 | 5.974 | M
PSZ2 G077.90-26.63 | 0.147 | 22:0:53.218 | 20:58:19.022 | 22:0:53.101 | 20:58:23.993 | 2.24 | 7.456 | 4.989 | M
PSZ2 G021.10+33.24 | 0.151 | 16:32:46.985 | 5:34:30.355 | 16:32:46.967 | 5:34:32.936 | 1.19 | 8.427 | 7.788 | R
PSZ2 G028.89+60.13 | 0.153 | 15:0:19.607 | 21:22:9.023 | 15:0:19.783 | 21:22:11.193 | 1.57 | 6.940 | 4.473 | R
PSZ2 G313.88-17.11 | 0.153 | 16:1:48.622 | -75:45:16.177 | 16:1:48.193 | -75:45:8.926 | 5.79 | 8.374 | 7.858 | R
PSZ2 G071.63+29.78 | 0.156 | 17:47:14.949 | 45:13:16.921 | 17:47:13.660 | 45:12:37.672 | 2.11 | 6.624 | 4.131 | D
PSZ2 G062.46-21.35 | 0.162 | 21:4:53.306 | 14:1:29.123 | 21:4:52.969 | 14:1:29.734 | 5.86 | 6.431 | 4.106 | R
PSZ2 G094.69+26.36 | 0.162 | 18:32:32.424 | 64:50:0.613 | 18:32:30.227 | 64:49:38.915 | 1.72 | 5.818 | 3.081 | M
PSZ2 G066.68+68.44 | 0.163 | 14:21:40.389 | 37:17:30.114 | 14:21:39.902 | 37:17:31.735 | 1.52 | 6.218 | 3.802 | R
PSZ2 G050.40+31.17 | 0.164 | 17:20:8.584 | 27:40:13.923 | 17:20:9.163 | 27:40:11.458 | 4.91 | 6.403 | 4.219 | M
PSZ2 G049.22+30.87 | 0.164 | 17:20:9.866 | 26:37:31.037 | 17:20:9.690 | 26:37:21.341 | 1.24 | 7.146 | 5.904 | R
PSZ2 G263.68-22.55 | 0.164 | 6:45:29.064 | -54:13:39.415 | 6:45:30.011 | -54:13:28.896 | 4.91 | 7.893 | 7.955 | M
PSZ2 G285.63+72.75 | 0.165 | 12:30:47.325 | 10:33:7.622 | 12:30:46.802 | 10:33:28.002 | 5.00 | 7.002 | 5.606 | M
PSZ2 G238.69+63.26 | 0.169 | 11:12:54.419 | 13:26:4.625 | 11:12:54.181 | 13:26:33.234 | 7.26 | 6.211 | 4.167 | M
PSZ2 G149.39-36.84 | 0.170 | 2:21:34.396 | 21:21:56.224 | 2:21:34.675 | 21:22:9.957 | 8.46 | 6.714 | 5.346 | M
PSZ2 G187.53+21.92 | 0.171 | 7:32:20.308 | 31:37:57.919 | 7:32:20.464 | 31:37:55.784 | 1.35 | 6.604 | 5.165 | M
PSZ2 G000.13+78.04 | 0.171 | 13:34:8.218 | 20:14:27.123 | 13:34:9.672 | 20:14:28.105 | 3.37 | 6.586 | 5.122 | M
PSZ2 G067.17+67.46 | 0.171 | 14:26:2.032 | 37:49:33.518 | 14:26:0.963 | 37:49:39.424 | 1.89 | 7.350 | 7.143 | M
PSZ2 G218.81+35.51 | 0.175 | 9:9:12.697 | 10:58:30.523 | 9:9:12.554 | 10:58:33.929 | 2.62 | 6.501 | 5.241 | D
PSZ2 G067.52+34.75 | 0.175 | 17:17:18.999 | 42:26:59.275 | 17:17:18.625 | 42:26:54.180 | 3.01 | 6.167 | 4.494 | M
PSZ2 G041.45+29.10 | 0.178 | 17:17:44.795 | 19:40:35.724 | 17:17:47.051 | 19:40:37.372 | 3.24 | 6.477 | 5.411 | M
PSZ2 G085.98+26.69 | 0.179 | 18:19:57.539 | 57:9:39.886 | 18:19:54.155 | 57:10:9.398 | 1.51 | 5.910 | 4.172 | D
PSZ2 G111.75+70.37 | 0.183 | 13:13:6.819 | 46:17:25.912 | 13:13:5.636 | 46:16:31.997 | 2.91 | 5.876 | 4.342 | D
PSZ2 G313.33+61.13 | 0.183 | 13:11:29.410 | -1:20:29.175 | 13:11:29.520 | -1:20:25.528 | 1.02 | 7.421 | 8.771 | R
PSZ2 G083.86+85.09 | 0.183 | 13:5:50.907 | 30:53:43.423 | 13:5:51.368 | 30:53:56.263 | 7.36 | 6.043 | 4.735 | M
PSZ2 G217.40+10.88 | 0.189 | 7:38:18.558 | 1:2:15.325 | 7:38:18.362 | 1:2:16.088 | 1.15 | 6.123 | 5.340 | R
PSZ2 G224.00+69.33 | 0.190 | 11:23:58.030 | 21:28:57.124 | 11:23:58.286 | 21:29:0.392 | 3.08 | 5.994 | 5.106 | M
PSZ2 G124.20-36.48 | 0.197 | 0:55:50.350 | 26:24:35.217 | 0:55:53.140 | 26:24:31.763 | 1.49 | 6.541 | 7.253 | D
PSZ2 G195.75-24.32 | 0.203 | 4:54:9.827 | 2:55:29.225 | 4:54:10.087 | 2:55:53.424 | 1.33 | 6.535 | 7.800 | M
PSZ2 G159.91-73.50 | 0.206 | 1:31:53.353 | -13:36:42.371 | 1:31:53.063 | -13:36:47.658 | 1.69 | 6.632 | 8.464 | M
PSZ2 G346.61+35.06 | 0.223 | 15:15:2.849 | -15:23:10.574 | 15:15:1.989 | -15:22:39.963 | 5.53 | 6.197 | 8.409 | D
PSZ2 G055.59+31.85 | 0.224 | 17:22:27.224 | 32:7:56.518 | 17:22:26.675 | 32:7:51.890 | 1.03 | 5.992 | 7.724 | M
PSZ2 G092.71+73.46 | 0.228 | 13:35:17.944 | 41:0:1.126 | 13:35:18.896 | 41:0:9.805 | 2.21 | 5.977 | 8.003 | M
PSZ2 G072.62+41.46 | 0.228 | 16:40:20.160 | 46:42:31.218 | 16:40:20.457 | 46:42:26.700 | 3.32 | 6.727 | 11.426 | M
PSZ2 G073.97-27.82 | 0.233 | 21:53:36.775 | 17:41:42.122 | 21:53:36.848 | 17:41:48.852 | 2.20 | 6.218 | 9.516 | M
PSZ2 G340.94+35.07 | 0.236 | 14:59:29.073 | -18:10:44.774 | 14:59:29.213 | -18:10:44.609 | 3.04 | 5.760 | 7.795 | M
PSZ2 G208.80-30.67 | 0.248 | 4:54:6.672 | -10:13:12.866 | 4:54:8.829 | -10:14:19.927 | 1.71 | 5.400 | 7.255 | D
PSZ2 G340.36+60.58 | 0.253 | 14:1:2.040 | 2:52:41.925 | 14:1:1.901 | 2:52:39.166 | 3.91 | 5.743 | 9.199 | R
PSZ2 G266.83+25.08 | 0.254 | 10:23:50.072 | -27:15:20.572 | 10:23:49.951 | -27:15:23.133 | 2.99 | 5.282 | 7.258 | R
PSZ2 G229.74+77.96 | 0.269 | 12:1:14.612 | 23:6:28.724 | 12:1:16.318 | 23:6:30.125 | 1.92 | 5.084 | 7.441 | D
PSZ2 G087.03-57.37 | 0.278 | 23:37:37.705 | 0:16:2.925 | 23:37:39.382 | 0:16:13.669 | 6.63 | 4.926 | 7.329 | M
PSZ2 G107.10+65.32 | 0.280 | 13:32:38.954 | 50:33:30.306 | 13:32:44.154 | 50:32:56.523 | 1.13 | 4.999 | 7.800 | D
PSZ2 G186.37+37.26 | 0.282 | 8:42:57.144 | 36:21:57.014 | 8:42:57.517 | 36:21:51.864 | 1.35 | 5.573 | 10.998 | M
PSZ2 G259.98-63.43 | 0.284 | 2:32:18.459 | -44:20:47.910 | 2:32:17.649 | -44:20:57.756 | 5.90 | 4.872 | 7.451 | M
PSZ2 G106.87-83.23 | 0.292 | 0:43:24.811 | -20:37:24.747 | 0:43:24.574 | -20:37:22.358 | 6.83 | 4.812 | 7.732 | M
PSZ2 G262.27-35.38 | 0.295 | 5:16:37.024 | -54:30:56.483 | 5:16:39.379 | -54:31:1.180 | 3.54 | 4.978 | 8.759 | D
PSZ2 G266.04-21.25 | 0.296 | 6:58:30.077 | -55:56:37.797 | 6:58:29.777 | -55:56:43.551 | 4.49 | 5.580 | 12.470 | M
PSZ2 G008.94-81.22 | 0.307 | 0:14:19.107 | -30:23:28.159 | 0:14:17.229 | -30:23:7.230 | 3.04 | 4.870 | 8.989 | D
PSZ2 G278.58+39.16 | 0.308 | 11:31:54.653 | -19:55:44.575 | 11:31:56.147 | -19:55:44.025 | 4.02 | 4.730 | 8.290 | M
PSZ2 G008.31-64.74 | 0.312 | 22:58:48.237 | -34:48:1.673 | 22:58:48.340 | -34:48:16.957 | 1.88 | 4.506 | 7.421 | D
PSZ2 G325.70+17.34 | 0.315 | 14:47:33.340 | -40:20:36.772 | 14:47:32.611 | -40:20:33.188 | 3.53 | 4.495 | 7.570 | M
PSZ2 G349.46-59.95 | 0.347 | 22:48:44.575 | -44:31:47.748 | 22:48:44.883 | -44:31:42.214 | 3.99 | 4.768 | 11.359 | M
PSZ2 G207.88+81.31 | 0.353 | 12:12:18.311 | 27:32:54.422 | 12:12:18.904 | 27:33:5.442 | 3.30 | 4.090 | 7.440 | M
PSZ2 G143.26+65.24 | 0.363 | 11:59:14.275 | 49:47:41.128 | 11:59:15.703 | 49:47:51.346 | 1.33 | 3.966 | 7.257 | D
PSZ2 G271.18-30.95 | 0.370 | 5:49:19.297 | -62:5:14.978 | 5:49:18.796 | -62:5:11.902 | 6.20 | 3.932 | 7.373 | R
PSZ2 G113.91-37.01 | 0.371 | 0:19:41.833 | 25:18:4.618 | 0:19:39.458 | 25:17:28.226 | 4.69 | 3.958 | 7.582 | D
PSZ2 G172.98-53.55 | 0.373 | 2:39:53.403 | -1:34:43.974 | 2:39:54.152 | -1:34:48.158 | 7.51 | 3.906 | 7.367 | M
PSZ2 G216.62+47.00 | 0.383 | 9:49:51.753 | 17:7:10.527 | 9:49:51.757 | 17:7:18.162 | 5.01 | 4.012 | 8.469 | M
PSZ2 G046.10+27.18 | 0.389 | 17:31:38.965 | 22:51:47.874 | 17:31:40.056 | 22:51:52.331 | 4.60 | 3.861 | 7.840 | D
PSZ2 G286.98+32.90 | 0.390 | 11:50:49.017 | -28:4:36.574 | 11:50:49.731 | -28:4:51.557 | 2.71 | 4.646 | 13.742 | M
PSZ2 G057.25-45.34 | 0.397 | 22:11:45.747 | -3:49:45.974 | 22:11:45.754 | -3:49:35.485 | 3.19 | 4.070 | 9.624 | M
PSZ2 G206.45+13.89 | 0.410 | 7:29:50.887 | 11:56:28.523 | 7:29:51.061 | 11:56:26.337 | 6.87 | 3.648 | 7.459 | M
PSZ2 G243.15-73.84 | 0.410 | 1:59:1.932 | -34:12:54.069 | 1:59:1.523 | -34:13:18.610 | 3.37 | 3.747 | 8.086 | M
PSZ2 G083.29-31.03 | 0.412 | 22:28:33.296 | 20:37:12.319 | 22:28:33.252 | 20:37:16.322 | 2.10 | 3.664 | 7.642 | M
PSZ2 G241.11-28.68 | 0.420 | 5:42:56.882 | -36:0:0.975 | 5:42:56.210 | -35:59:54.191 | 3.89 | 3.566 | 7.361 | M
PSZ2 G262.73-40.92 | 0.421 | 4:38:17.598 | -54:19:24.413 | 4:38:17.538 | -54:19:18.041 | 2.74 | 3.576 | 7.461 | M
PSZ2 G239.27-26.01 | 0.430 | 5:53:26.252 | -33:42:36.972 | 5:53:24.362 | -33:42:35.104 | 1.84 | 3.714 | 8.772 | M
PSZ2 G225.93-19.99 | 0.435 | 6:0:8.022 | -20:8:5.678 | 6:0:10.214 | -20:7:38.748 | 2.93 | 3.819 | 9.789 | D
PSZ2 G277.76-51.74 | 0.438 | 2:54:16.472 | -58:56:59.907 | 2:54:23.102 | -58:57:28.650 | 4.90 | 3.646 | 8.650 | D
PSZ2 G284.41+52.45 | 0.441 | 12:6:12.169 | -8:48:3.836 | 12:6:12.264 | -8:48:8.038 | 2.57 | 3.854 | 10.400 | M
PSZ2 G205.93-39.46 | 0.443 | 4:17:34.752 | -11:54:33.373 | 4:17:34.246 | -11:54:21.899 | 0.99 | 3.980 | 11.542 | M
PSZ2 G056.93-55.08 | 0.447 | 22:43:21.951 | -9:35:43.371 | 22:43:22.815 | -9:35:52.671 | 1.76 | 3.704 | 9.491 | D
PSZ2 G324.04+48.79 | 0.452 | 13:47:30.663 | -11:45:7.275 | 13:47:30.809 | -11:45:13.630 | 4.41 | 3.811 | 10.578 | R
PSZ2 G210.64+17.09 | 0.480 | 7:48:46.441 | 9:40:5.820 | 7:48:47.331 | 9:40:13.545 | 1.71 | 3.289 | 7.790 | M
PSZ2 G044.77-51.30 | 0.503 | 22:14:57.349 | -14:0:11.673 | 22:14:57.004 | -14:0:10.935 | 8.12 | 3.255 | 8.359 | M
PSZ2 G201.50-27.31 | 0.538 | 4:54:10.952 | -3:0:53.478 | 4:54:11.056 | -3:0:49.066 | 1.21 | 3.094 | 8.304 | M
PSZ2 G004.45-19.55 | 0.540 | 19:17:5.068 | -33:31:20.804 | 19:17:5.347 | -33:31:22.095 | 1.835 | 3.291 | 10.090 | M
PSZ2 G228.16+75.20 | 0.545 | 11:49:35.358 | 22:24:9.825 | 11:49:35.731 | 22:24:0.321 | 7.24 | 3.237 | 9.790 | M
PSZ2 G111.61-45.71 | 0.546 | 0:18:33.528 | 16:26:11.320 | 0:18:33.396 | 16:26:8.511 | 1.56 | 3.085 | 8.499 | M
PSZ2 G155.27-68.42 | 0.567 | 1:37:24.847 | -8:27:20.166 | 1:37:24.693 | -8:27:33.384 | 3.83 | 2.986 | 8.365 | M
PSZ2 G066.41+27.03 | 0.575 | 17:56:51.042 | 40:8:4.525 | 17:56:49.973 | 40:8:8.150 | 4.77 | 2.875 | 7.695 | D
PSZ2 G339.63-69.34 | 0.596 | 23:44:43.777 | -42:43:11.551 | 23:44:44.150 | -42:43:14.971 | 4.26 | 2.846 | 8.051 | R

Notes: In the last column, R, D, and M stand for morphologically relaxed, disturbed, and mixed objects, respectively, following the morphological classification of Campitiello et al. (2022).
References

Allen, S. W., Evrard, A. E., & Mantz, A. B. 2011, ARA&A, 49, 409
Anders, E. & Grevesse, N. 1989, Geochim. Cosmochim. Acta, 53, 197
Andrade-Santos, F., Jones, C., Forman, W. R., et al. 2017, ApJ, 843, 76
Ansarifard, S., Rasia, E., Biffi, V., et al. 2020, A&A, 634, A113
Arnaud, M., Aghanim, N., & Neumann, D. M. 2002, A&A, 389, 1
Arnaud, M., Neumann, D. M., Aghanim, N., et al. 2001, A&A, 365, L80
Arnaud, M., Pratt, G. W., Piffaretti, R., et al. 2010, A&A, 517, A92
Böhringer, H., Schuecker, P., Pratt, G. W., et al. 2007, A&A, 469, 363
Borgani, S. & Kravtsov, A. 2011, Advanced Science Letters, 4, 204
Campitiello, G., Giacintucci, S., Lovisari, L., et al. 2022, ApJ, 925, 91
Cappellari, M. & Copin, Y. 2003, MNRAS, 342, 345
CHEX-MATE Collaboration. 2021, A&A, 650, A104
Croston, J. H., Pratt, G. W., Böhringer, H., et al. 2008, A&A, 487, 431
Cui, W., Knebe, A., Yepes, G., et al. 2018, MNRAS, 480, 2898
Cui, W., Power, C., Biffi, V., et al. 2016, MNRAS, 456, 2566
Darragh-Ford, E., Mantz, A. B., Rasia, E., et al. 2023, arXiv e-prints, arXiv:2302.10931
Diehl, S. & Statler, T. S. 2006, MNRAS, 368, 497
Dolag, K., Hansen, F. K., Roncarelli, M., & Moscardini, L. 2005, MNRAS, 363, 29
Eckert, D., Ettori, S., Pointecouteau, E., et al. 2017, Astronomische Nachrichten, 338, 293
Eckert, D., Roncarelli, M., Ettori, S., et al. 2015, MNRAS, 447, 2198
Eckert, D., Vazza, F., Ettori, S., et al. 2012, A&A, 541, A57
Ettori, S. & Balestra, I. 2009, A&A, 496, 343
Ettori, S., Donnarumma, A., Pointecouteau, E., et al. 2013, Space Sci. Rev., 177, 119
Ettori, S., Gastaldello, F., Leccardi, A., et al. 2010, A&A, 524, A68
Ettori, S., Ghirardini, V., Eckert, D., et al. 2019, A&A, 621, A39
Ettori, S., Lovisari, L., & Eckert, D. 2022, arXiv e-prints, arXiv:2211.03082
Gaspari, M., Brighenti, F., Temi, P., & Ettori, S. 2014, ApJ, 783, L10
Gaspari, M., Tombesi, F., & Cappi, M. 2020, Nature Astronomy, 4, 10
Ghirardini, V., Eckert, D., Ettori, S., et al. 2019, A&A, 621, A41
Ghizzardi, S. 2001, XMM-SOC-CAL-TN-0022
Ghizzardi, S., Molendi, S., van der Burg, R., et al. 2021, A&A, 646, A92
Giacconi, R., Rosati, P., Tozzi, P., et al. 2001, ApJ, 551, 624
Kelly, B. C. 2007, ApJ, 665, 1489
Klypin, A., Yepes, G., Gottlöber, S., Prada, F., & Heß, S. 2016, MNRAS, 457, 4340
Kuntz, K. D. & Snowden, S. L. 2000, ApJ, 543, 195
Le Brun, A. M. C., Arnaud, M., Pratt, G. W., & Teyssier, R. 2018, MNRAS, 473, L69
Lovisari, L. & Ettori, S. 2021, Universe, 7, 254
Maughan, B. J., Giles, P. A., Randall, S. W., Jones, C., & Forman, W. R. 2012, MNRAS, 421, 1583
Neumann, D. M. 2005, A&A, 439, 465
Neumann, D. M. & Arnaud, M. 1999, A&A, 348, 711
Neumann, D. M. & Arnaud, M. 2001, A&A, 373, L33
Planck Collaboration XIII. 2016, A&A, 594, A13
Planck Collaboration XX. 2014, A&A, 571, A20
Planck Collaboration XXVII. 2015, arXiv e-prints [arXiv:1502.01598]
Planck Collaboration XXVII. 2016, A&A, 594, A27
Planelles, S., Fabjan, D., Borgani, S., et al. 2017, MNRAS, 467, 3827
Pratt, G. W., Arnaud, M., Biviano, A., et al. 2019, Space Sci. Rev., 215, 25
Pratt, G. W., Arnaud, M., Maughan, B. J., & Melin, J. B. 2022, A&A, 665, A24
Pratt, G. W., Arnaud, M., Piffaretti, R., et al. 2010, A&A, 511, A85
Pratt, G. W., Croston, J. H., Arnaud, M., & Böhringer, H. 2009, A&A, 498, 361
Rasia, E., Borgani, S., Murante, G., et al. 2015, ApJ, 813, L17
Rasia, E., Meneghetti, M., & Ettori, S. 2013, The Astronomical Review, 8, 40
Roncarelli, M., Ettori, S., Borgani, S., et al. 2013, MNRAS, 432, 3030
Roncarelli, M., Ettori, S., Dolag, K., et al. 2006, MNRAS, 373, 1339
Rossetti, M., Gastaldello, F., Eckert, D., et al. 2017, MNRAS, 468, 1917
Sabol, E. J. & Snowden, S. L. 2019, sxrbg: ROSAT X-Ray Background Tool
Salvatier, J., Wiecki, T., & Fonnesbeck, C. 2016, PeerJ Computer Science, 2, e55
Sayers, J., Mantz, A. B., Rasia, E., et al. 2022, arXiv e-prints, arXiv:2206.00091
Schellenberger, G., Giacintucci, S., Lovisari, L., et al. 2022, ApJ, 925, 91
Sereno, M., Ettori, S., & Baldi, A. 2012, MNRAS, 419, 2646
Sereno, M., Ettori, S., Meneghetti, M., et al. 2017, MNRAS, 467, 3801
Sereno, M., Umetsu, K., Ettori, S., et al. 2018, ApJ, 860, L4
Snowden, S. L., Mushotzky, R. F., Kuntz, K. D., & Davis, D. S. 2008, A&A, 478, 615
Strüder, L., Briel, U., Dennerl, K., et al. 2001, A&A, 365, L18
Sunyaev, R. A. & Zeldovich, I. B. 1980, ARA&A, 18, 537
Turner, M. J. L., Abbey, A., Arnaud, M., et al. 2001, A&A, 365, L27
Vikhlinin, A., Forman, W., & Jones, C. 1999, ApJ, 525, 47
Voit, G. M. 2005, Reviews of Modern Physics, 77, 207
Voit, G. M., Kay, S. T., & Bryan, G. L. 2005, MNRAS, 364, 909
Zhuravleva, I., Churazov, E., Kravtsov, A., et al. 2013, MNRAS, 428, 3274
| [] |
[
"Representation Learning for Person or Entity-Centric Knowledge Graphs: An Application in Healthcare",
"Representation Learning for Person or Entity-Centric Knowledge Graphs: An Application in Healthcare"
] | [
"Christos Theodoropoulos [email protected] \nKU Leuven\nOude Markt 133000LeuvenBelgium\n\nIBM Research Europe\nDublinIreland\n",
"Natasha Mulligan [email protected] \nIBM Research Europe\nDublinIreland\n",
"Thaddeus Stappenbeck \nLerner Research Institute\nCleveland Clinic\nClevelandOhioUnited States\n",
"Joao Bettencourt-Silva \nIBM Research Europe\nDublinIreland\n"
] | [
"KU Leuven\nOude Markt 133000LeuvenBelgium",
"IBM Research Europe\nDublinIreland",
"IBM Research Europe\nDublinIreland",
"Lerner Research Institute\nCleveland Clinic\nClevelandOhioUnited States",
"IBM Research Europe\nDublinIreland"
] | [] | Knowledge graphs (KGs) are a popular way to organise information based on ontologies or schemas and have been used across a variety of scenarios from search to recommendation. Despite advances in KGs, representing knowledge remains a non-trivial task across industries and it is especially challenging in the biomedical and healthcare domains due to complex interdependent relations between entities, heterogeneity, lack of standardization, and sparseness of data. KGs are used to discover diagnoses or prioritize genes relevant to disease, but they often rely on schemas that are not centred around a node or entity of interest, such as a person. Entity-centric KGs are relatively unexplored but hold promise in representing important facets connected to a central node and unlocking downstream tasks beyond graph traversal and reasoning, such as generating graph embeddings and training graph neural networks for a wide range of predictive tasks. This paper presents an end-to-end representation learning framework to extract entity-centric KGs from structured and unstructured data. We introduce a star-shaped ontology to represent the multiple facets of a person and use it to guide KG creation. Compact representations of the graphs are created leveraging graph neural networks and experiments are conducted using different levels of heterogeneity or explicitness. A readmission prediction task is used to evaluate the results of the proposed framework, showing a stable system, robust to missing data, that outperforms a range of baseline machine learning classifiers. We highlight that this approach has several potential applications across domains and is open-sourced. Lastly, we discuss lessons learned, challenges, and next steps for the adoption of the framework in practice. | 10.48550/arxiv.2305.05640 | [
"https://export.arxiv.org/pdf/2305.05640v2.pdf"
] | 258,564,429 | 2305.05640 | 0152a75cfe2cee2b7510118635c75369dd9be690 |
Representation Learning for Person or Entity-Centric Knowledge Graphs: An Application in Healthcare
Christos Theodoropoulos [email protected]
KU Leuven
Oude Markt 133000LeuvenBelgium
IBM Research Europe
DublinIreland
Natasha Mulligan [email protected]
IBM Research Europe
DublinIreland
Thaddeus Stappenbeck
Lerner Research Institute
Cleveland Clinic
ClevelandOhioUnited States
Joao Bettencourt-Silva
IBM Research Europe
DublinIreland
Representation Learning for Person or Entity-Centric Knowledge Graphs: An Application in Healthcare
Person Representation, Entity-Centric Knowledge Graphs, Person-Centric Ontology, Representation Learning, Graph Neural Networks
Knowledge graphs (KGs) are a popular way to organise information based on ontologies or schemas and have been used across a variety of scenarios from search to recommendation. Despite advances in KGs, representing knowledge remains a non-trivial task across industries and it is especially challenging in the biomedical and healthcare domains due to complex interdependent relations between entities, heterogeneity, lack of standardization, and sparseness of data. KGs are used to discover diagnoses or prioritize genes relevant to disease, but they often rely on schemas that are not centred around a node or entity of interest, such as a person. Entity-centric KGs are relatively unexplored but hold promise in representing important facets connected to a central node and unlocking downstream tasks beyond graph traversal and reasoning, such as generating graph embeddings and training graph neural networks for a wide range of predictive tasks. This paper presents an end-to-end representation learning framework to extract entity-centric KGs from structured and unstructured data. We introduce a star-shaped ontology to represent the multiple facets of a person and use it to guide KG creation. Compact representations of the graphs are created leveraging graph neural networks and experiments are conducted using different levels of heterogeneity or explicitness. A readmission prediction task is used to evaluate the results of the proposed framework, showing a stable system, robust to missing data, that outperforms a range of baseline machine learning classifiers. We highlight that this approach has several potential applications across domains and is open-sourced. Lastly, we discuss lessons learned, challenges, and next steps for the adoption of the framework in practice.
Introduction
Knowledge graphs (KGs) have been widely used to organize information in a structured and flexible way, enabling a variety of downstream tasks and applications [30]. KGs consist of nodes (entities) and edges (relations) between them that represent the information in a particular domain or set of domains. The ability of KGs to support complex reasoning and inference has been explored in a variety of tasks including search [56,26,35], recommendation [78,74,27], and knowledge discovery [53,39].
KGs are becoming increasingly used across a wide range of biomedical and healthcare applications. Knowledge graphs typically rely on information retrieved from biomedical literature, and early approaches, such as BioGRID [15], first depended on manual curation of knowledge bases to map protein-to-protein interactions using genes and proteins as entity types. Semi-automatic and machine-learning approaches have been introduced to assist, for example, in finding associations or relations between important concepts such as diseases and symptoms [60]. Recently, healthcare and clinical applications have otherwise focused on building KGs from electronic health records (EHRs) where nodes represent diseases, drugs, or patients, and edges represent their relations [54]. Most approaches are, however, limited by the challenging nature of healthcare data, including heterogeneity, sparseness, and inconsistent or lacking standardization [5]. Recent calls for future work on KGs have expressed the need to develop new models and algorithms that are able to take these challenges (e.g. missing data) into account [54]. Furthermore, there is a need to be able to accurately represent information from multiple data sources about individual patients. This is not only required by physicians to support routine hospital activities but also for clinical research in, for example, developing novel predictive models or discovering new important features associated with poor patient outcomes. A holistic representation of individual patients should therefore capture not only their clinical attributes such as diagnoses, procedures, and medication but also other variables that may also be predictors such as demographics, behavioral and social aspects (e.g. smoking habits or unemployment). Incorporating new types of information in models and analyses will allow the creation of better tools for evaluating the effectiveness of therapies and directing them to the most relevant patients.
This paper presents an end-to-end representation learning framework to extract information and organize it into entity-centric KGs from both structured and unstructured data. A star-shaped ontology is designed and used for the purpose of representing the multiple facets of a person and guiding the first stages of a person knowledge graph (PKG) creation. Graphs are then extracted, and compact representations are generated leveraging graph neural networks (GNNs). We evaluate our approach using a real-world hospital intensive care unit (ICU) dataset and a hospital readmission prediction task. To the best of our knowledge, the novelty of the proposed approach can be summarised as:
- the first end-to-end framework for PKG extraction in Resource Description Framework (RDF) and PyTorch Geometric applicable format, using structured EHRs as well as unstructured clinical notes.
- the first use of a star-shaped Health & Social Person-centric Ontology (HSPO) [31] to model a comprehensive view of the patient, focused on multiple facets (e.g. clinical, demographic, behavioral, and social).
- a representation learning approach that embeds personal knowledge graphs (PKGs) using GNNs and tackles the task of ICU readmission prediction using a real-world ICU dataset.
- the implementation proposed in this paper is open-sourced 4, adaptable, and generalizable so that it can be used to undertake other downstream tasks.

The paper is structured in the following way: Section 2 discusses related work and Section 3 includes a description of the HSPO ontology. Section 4 describes the data preprocessing pipeline for PKG extraction using the ontology and the processed dataset. Section 5 evaluates different graph approaches in tackling downstream tasks anchored by an ICU hospital readmission prediction task. Finally, Section 6 discusses the impact and adoption of the study and highlights the applicability of the framework in different downstream tasks and domains.
Related Work
Public KGs are perhaps the most pervasive type of knowledge graphs today. These are often based on publicly available documents, encyclopedias or domainspecific databases and their schemas describe the features typically found within them. In recent years, especially in the health and biomedical domains, different types of KGs have been proposed from literature or EHRs yet they are not usually centred around the individual. Recent works include the PubMed KG [77], enabling connections among bio-entities, authors, articles, affiliations, and funding, the clinical trials KG [16], representing medical entities found in clinical trials with downstream tasks including drug-repurposing and similarity searches, and the PrimeKG [14], a multimodal KG for precision medicine analyses centred around diseases. Indeed disease-centric KGs have been previously proposed and despite some efforts in overlaying individual patient information [52], these graphs are not centred around the person or patient.
The idea behind entity-centric knowledge graphs and particularly personcentric graphs is relatively new and unexplored. One of the first efforts to define personal knowledge graphs (PKGs) was that of Balog and Kenter [3] where the graph has a particular "spiderweb" or star-shaped layout and every node has to have a direct or indirect connection to the user (i.e. person). The main advantage of having such a star-shaped representation lies in the fact the graph itself becomes personalized, enabling downstream applications across health and wellbeing, personal assistants, or conversational systems [3]. Similarly, a review paper [62] discussed the idea of a Person Health KG as a way to represent aggregated multi-modal data including all the relevant health-related personal data of a patient in the form of a structured graph but several challenges remained and implementations are lacking especially in representation learning. Subsequent works have proposed personal research knowledge graphs (PRKGs) [13] to represent information about the research activities of a researcher, and personal attribute knowledge bases [43] where a pre-trained language model with a noise-robust loss function aims to predict personal attributes from conversations without the need for labeled utterances. A knowledge model for capturing dietary preferences and personal context [66] has been proposed for personalized dietary recommendations where an ontology was developed for capturing lifestyle behaviors related to food consumption. A platform that leverages the Linked Open Data [7] stack (RDF, URIs, and SPARQL) to build RDF representations of Personal Health Libraries (PHLs) for patients is introduced in [1] and is aimed at empowering care providers in making more informed decisions. A method that leverages personal information to build a knowledge graph to improve suicidal ideation detection on social media is introduced by [12]. In the latter, the extracted KG includes several user nodes as the social interaction between the users is modeled. UniSKGRep [67] is a unified representation learning framework of knowledge graphs and social networks. Personal information of famous athletes and scientists is extracted to create two different universal knowledge graphs. The framework is used for node classification [76,63] and link prediction [39,48,45]. A graph-based approach that leverages the interconnected structure of personal web information, and incorporates efficient techniques to update the representations as new data are added is proposed by [64]. The approach captures personal activity-based information and supports the task of email recipient prediction [61].
Despite the above efforts and idiosyncrasies, person-centric knowledge graphs have not been extensively used for predictive or classification tasks, especially those involving graph embeddings and GNNs. Similarly, ontologies that support the creation of entity-centric, star-shaped PKGs are not well established and there is no published research on representation learning of person-centric knowledge graphs using GNNs. To the best of our knowledge, this paper is the first to propose a framework for learning effective representations of an entity or person (i.e., graph classification setting [79,11,75]) using PKGs and GNNs that can be applied to different downstream predictive tasks.
HSPO Ontology
The Health and Social Person-centric ontology (HSPO) has been designed to describe a holistic view of an individual spanning across multiple domains or facets. The HSPO defines a schema for a star-shaped Personal Knowledge Graph [3] with a person as a central node of the graph and corresponding characteristics or features (e.g. computable phenotype) linked to the central node.
This view is unique in that it is designed to be progressively expanded with additional domains and facets of interest to the individual: a first version covers clinical, demographic, and social factors, while future versions may be extended to include behavioral, biological, or gene information. Representing a holistic view of individuals with new information generated beyond the traditional healthcare setting is expected to unlock new insights that can transform the way in which patients are treated or services delivered.
Previous ontologies have been built to harmonize disease definitions globally (MONDO [70]), to provide descriptions of experimental variables such as compounds, anatomy, diseases, and traits (EFO [47]), or to describe evidence of interventions in populations ([49]). Other ontologies have focused on specific contexts or diseases, such as SOHO, describing a set of social determinants affecting individuals' health [38], or an ontology that describes behavior change interventions [51]. Further to this, not all ontologies provide mappings for their entities into standard biomedical terminologies and vocabularies. The HSPO aims to address these challenges by being the first to create a person-centric view linking multiple facets together, leveraging existing ontological efforts, and providing mappings to known terms of biomedical vocabularies when appropriate.
The HSPO ontology has been built incrementally with the main objective of providing an accurate and clear representation of a person and their characteristics across multiple domains of interest. Domains of interest were identified and prioritized together with domain experts and a well-established methodology [55] was followed to guide development. The HSPO is built not only to ensure that questions may be asked from the generated KGs but also that derived graphs may be used to train neural networks for downstream tasks.
Person-Centric Knowledge Graph Extraction
Generating Person-Centric Knowledge Graphs requires a data preprocessing pipeline to prepare the dataset before graphs can be extracted. This section describes the use case rationale and data, the data preprocessing pipeline, and the steps taken to extract knowledge graphs.
Use Case Rationale and Dataset
In this study, we use EHRs on ICU admissions provided by the MIMIC-III dataset [34,33,24], a well-established, publicly available, and de-identified dataset that contains hospital records from a large hospital's intensive care units. MIMIC-III therefore includes data and a structure similar to most hospitals, containing both tabular data and clinical notes in the form of free text. More precisely, the data covers the demographic (e.g. marital status, ethnicity) and clinical (e.g. diagnoses, procedures, medication) views of the patient, as well as some aspects of the social view embedded in text notes. Detailed results of lab tests and metrics of monitoring medical devices are also provided.
MIMIC-III is not only an appropriate dataset because of its structure and the types of data that it contains but also because of its population characteristics, including the reasons for hospital admission. More than 35% of the patients in this dataset were admitted with cardiovascular conditions [33] which are broadly relevant across healthcare systems globally. Indeed, following their first hospital discharge, nearly 1 in 4 heart failure patients are known to be readmitted within 30 days, and approximately half are readmitted within 6 months [36]. These potentially avoidable subsequent hospitalizations are on the increase and reducing 30-day readmissions has now been a longstanding target for governments worldwide to improve the quality of care (e.g. outcomes) and reduce costs [40].
Therefore, both the experiments carried out and knowledge graphs generated in this paper describe a use case on patients admitted with cardiovascular conditions and a downstream prediction task to identify potential 30-day readmissions. This task can be further generalised to other conditions as readmissions are a widely used metric of success in healthcare, and other outcome metrics (e.g. specific events, mortality) are also possible using the approach proposed in this paper and accompanying open-source code repository.
Preprocessing Pipeline
Data Selection and Completion
The goal of the data preprocessing is to prepare the dataset for the PKG extraction. Due to the nature of MIMIC-III, each PKG represents the state of a patient during a single hospital admission. In the first step of the data preprocessing pipeline (Fig. 2), we select and aggregate the relevant data. In order to construct an efficient and applicable representation for a range of downstream tasks, the data selection step is necessary to include a concise subset of the EHR data. We exclude the detailed lab test results (e.g. full blood count test values), as we assume that the diagnoses, medication, and procedures data are sufficiently expressive to represent the clinical view of the patient. The inclusion of fine-grained information poses additional challenges in the encoding of the PKG and the representation learning process. Following this strategy, we select the demographic information: gender, age, marital status, religion, ethnicity, and the essential clinical data: diagnoses, medication, and procedures. We create a different record for each admission with the corresponding information using the JSON format. The diagnoses and procedures in the MIMIC-III dataset are recorded using the ICD-9 [57,58] (The International Classification of Diseases, Ninth Revision) coding schema with a hierarchical structure (format xXXX.YYY), where the first three or four digits of the code represent the general category, and the subsequent digits after the dot separator represent more specific subcategories.
We group the diagnoses and procedures using the corresponding general family category (xXXX ) to reduce the number of different textual descriptions while defining substantially each diagnosis and procedure. For example, the diagnoses acute myocardial infarction of anterolateral wall (ICD-9 code: 410.0) and acute myocardial infarction of inferolateral wall (ICD-9 code: 410.2) are grouped under the general family diagnosis acute myocardial infarction (ICD-9 code: 410).
This grouping is important; otherwise, the encoding of the graph and training of a Graph Neural Network to solve a downstream task would be very challenging, as some diagnoses and procedures are rare and underrepresented in the limited dataset of the study. In detail, more than 3,900 diagnoses and 1,000 procedures have a frequency of less than 10 in the dataset, while after grouping the numbers drop to less than 300 and 280 respectively.
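As a concrete illustration, the following minimal sketch implements this family-level grouping, assuming the dot-free code strings stored in MIMIC-III; the helper name is ours and not part of the released repository.

```python
# Minimal sketch of the ICD-9 family grouping described above (assumed
# helper, not the repository's API). MIMIC-III stores ICD-9 codes without
# the dot separator, so the family category is the leading three
# characters (four for external-cause E-codes).
def group_icd9(code: str) -> str:
    code = code.strip().upper()
    prefix_len = 4 if code.startswith("E") else 3
    return code[:prefix_len]

# Both myocardial infarction subcategories collapse to the family '410'.
assert group_icd9("4100") == group_icd9("4102") == "410"
```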
Data Sampling

As the diagnoses and procedures are given using ICD-9 coding, we add the textual descriptions of the ICD-9 codes to the dataset. We sample the data records that are appropriate for the selected downstream task of the paper: 30-day ICU readmission prediction. The day span is a hyperparameter of the approach. The ICU admission records of patients that passed away during their admission to the hospital, or within a span of fewer than 30 days after their discharge, are excluded.
Clinical Notes Integration
The clinical notes of the dataset contain information related to the clinical, demographic, and social aspects of the patients. We use the MetaMap [2] annotator to annotate the clinical notes with UMLS entities (UMLS Metathesaurus [8,50]) and sample the codes that are related to certain social aspects such as employment status, household composition, and housing conditions. In this study, we focus on these three social aspects because social problems are known to be non-clinical predictors of poor outcomes; although they are often poorly recorded, previous works have identified these three social problems as retrievable from MIMIC-III [46]. The extracted UMLS annotations of the clinical notes are integrated into the processed dataset.
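A minimal sketch of this sampling step is given below; the facet-to-concept mapping uses placeholder identifiers because the curated UMLS CUI list is not reproduced here.

```python
# Sketch: keep only MetaMap concepts whose CUI belongs to a curated set
# covering the three social facets. The CUI values are placeholders and
# must be replaced by the actual curated list.
SOCIAL_FACETS = {
    "employment": {"<CUI-employed>", "<CUI-unemployed>"},
    "housing_conditions": {"<CUI-homelessness>"},
    "household_composition": {"<CUI-lives-alone>"},
}

def extract_social_context(annotations):
    """annotations: iterable of (cui, preferred_name) pairs from MetaMap."""
    found = {}
    for cui, name in annotations:
        for facet, cuis in SOCIAL_FACETS.items():
            if cui in cuis:
                found.setdefault(facet, []).append(name)
    return found
```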
Summary of Processed Dataset
After the data preprocessing steps, the total number of admission records is 51,296. From these, in 47,863 (93.3%) of the cases, the patients are not readmitted to the hospital, while 3,433 (6.7%) of the patients returned to the hospital within 30 days. In the downstream task for this paper, we focus on patients diagnosed with heart failure or cardiac dysrhythmia. Hence, the final dataset consists of 1,428 (9.2%) readmission cases and 14,113 (90.8%) non-readmission cases.
Hence, the dataset is highly imbalanced as the readmission cases are underrepresented. In addition, EHR data is often incomplete [72,32,4,68] and MIMIC-III, as it consists of real-world EHRs, is no exception. Information may be missing for multiple reasons, such as the unwillingness of the patient to share information, and technical and coding errors, among several others [5]. More precisely, we observe that for some fields, such as religion, marital status, and medication, there is a significant percentage of missing information. Tab. 1 shows the number of missing records per field for all admissions. We highlight that the social information (employment, housing conditions, and household composition) extracted from the unstructured data is scarce. This is an indication that the clinical notes focus predominantly on the clinical view of the patients, without paying attention to aspects that can be connected to the social determinants of health, as previously reported [6,46]. The MIMIC-III dataset is protected under HIPAA regulations. Thus, the detailed data distribution per field (e.g. race, diseases, medication, etc.) cannot be shared publicly. The implementation for the extraction of the distributions is publicly available in the official repository accompanying this paper and can be used when access to the dataset is officially granted 5 to the user.

Person-Centric Knowledge Graph Extraction

The HSPO ontology provides the knowledge schema used to create the PKGs. A PKG is extracted for every admission record of the processed data. The ontology represents the different classes (e.g. Religion, Disease, etc.), instances/individuals of the classes (e.g. Acute Kidney Failure), relations between the classes (e.g. hasDisease, hasSocialContext, etc.), and data properties (e.g. the instances of the class Age have a data property age in years that is an integer number). We use the rdflib Python library for the implementation. The extracted knowledge graphs follow the Resource Description Framework (RDF) [19] format (Fig. 3), which is the primary foundation for the Semantic Web. Thus, the SPARQL Query Language [18,59] can be used to query, search, and perform operations on the graphs.
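The sketch below illustrates this step with rdflib, building one star-shaped PKG and querying it with SPARQL; the namespace URI and node identifiers are assumptions for illustration and may differ from the released ontology and code.

```python
# Minimal sketch of PKG construction and querying (assumed namespace URI
# and identifiers; relation names follow those cited in the text).
from rdflib import Graph, Namespace, RDF, Literal

HSPO = Namespace("http://research.ibm.com/ontologies/hspo/")  # assumed base URI

g = Graph()
patient = HSPO["patient/admission_12345"]  # illustrative central node
g.add((patient, RDF.type, HSPO.Person))
g.add((patient, HSPO.hasDisease, HSPO["disease/acute_kidney_failure"]))
g.add((patient, HSPO.hasMaritalStatus, Literal("married")))

# SPARQL over the star-shaped graph: list the diseases of the person.
query = """
PREFIX hspo: <http://research.ibm.com/ontologies/hspo/>
SELECT ?d WHERE { ?p a hspo:Person ; hspo:hasDisease ?d . }
"""
for row in g.query(query):
    print(row.d)
```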
Evaluation
The evaluation section reflects on the validity, reliability, and effectiveness of the person-centric graphs in downstream tasks. We evaluate the patient representation learning using person-centric graphs and a GNN on a specific ICU readmission prediction task. We provide insights into the applicability, benefits, and challenges of the proposed solution.
Data Transformation
The extracted graphs in RDF format cannot be used directly to train GNNs. Hence, a transformation step is implemented to convert the graphs into the format (.dt files) of PyTorch Geometric [21], as we adopt this framework to build the models. The transformed graphs consist of the initial representations (initialized embeddings) of the nodes and the adjacency matrices for each relation type. They follow a triplet-based format, where each triplet has the form [entity 1, relation type, entity 2] (e.g. [patient, hasDisease, acute renal failure]).
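The following sketch shows how such triplets could be packed into a PyTorch Geometric Data object; the helper, the index dictionaries, and the homogeneous layout with an edge_type attribute are our simplification rather than the repository's exact transformation.

```python
# Sketch: triplets -> PyTorch Geometric Data (illustrative simplification).
import torch
from torch_geometric.data import Data

def triplets_to_data(triplets, node_index, relation_index, num_features):
    """triplets: list of (head, relation, tail) string tuples."""
    x = torch.zeros(len(node_index), num_features)  # filled later (e.g. BOW)
    edge_index, edge_type = [], []
    for head, rel, tail in triplets:
        edge_index.append([node_index[head], node_index[tail]])
        edge_type.append(relation_index[rel])
    return Data(x=x,
                edge_index=torch.tensor(edge_index).t().contiguous(),
                edge_type=torch.tensor(edge_type))

triplets = [("patient", "hasDisease", "acute renal failure")]
data = triplets_to_data(triplets,
                        node_index={"patient": 0, "acute renal failure": 1},
                        relation_index={"hasDisease": 0},
                        num_features=3723)
```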
Training GNNs using person-centric graphs as input is a relatively unexplored field and defining a priori the most useful graph structure is not a trivial task. Hence, we experiment with 4 different graph versions (Fig. 4) to find the most suitable structure for the downstream task given the available data. The strategy to define the different graph structures is described as follows: starting from a detailed version, we progressively simplify the graph by reducing the heterogeneity with relation grouping. We highlight that finding the optimal level of heterogeneity is an open and challenging research question as it depends on the available data, the downstream task, and the model architecture.
More precisely, the first version of the graph structure is aligned with the schema provided by the HSPO ontology and includes 8 relation types (hasDisease, hasIntervention, hasSocialContext, hasRaceOrEthnicity, followsReligion, hasGender, hasMaritalStatus, and hasAge). The detailed demographic relations are grouped under the hasDemographics relation in the second version. The third version is the most simplified, containing only the has relation type.
Lastly, we present the fourth version of the graph with the inclusion of group nodes to explore the effectiveness of graph expressivity on learnt representation. The design of the fourth version is based on the assumption that the summarization of the corresponding detailed information (e.g. the disease information is summarized in the representation of the Disease node) and the learning of a grouped representation during training can be beneficial for the performance of the models. To explore this assumption in the experimental setup, we introduce a node type group node and add 7 group nodes (Diseases, Social Context, Demographics, Age, Interventions, Procedures, and Medication). For the inclusion of the group nodes in the graph, we introduce the general relation type has. Fig. 4 presents the four directed graph structures. We also include the undirected corresponding versions in the experiments.
The nodes of each graph are initialized using the bag-of-words (BOW) approach. Hence, the initial representations are sparse. We create the vocabulary using the textual descriptions of the end nodes (diagnosis names, medication, etc.) and the descriptions of the group nodes (patient, social context, diseases, interventions, medication, procedures, demographics, and age). The final vocabulary consists of 3,723 words. Alternatively to the introduction of sparsity with the BOW approach, a language model, such as BioBERT [41], PubMedBERT [25], or CharacterBERT [20,69], can be used for the node initialization. We provide this capability in our open-source framework.
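An illustrative sketch of the BOW initialization with scikit-learn is shown below; the exact vectorization in the released framework may differ.

```python
# Sketch: sparse bag-of-words features over the node descriptions.
import torch
from sklearn.feature_extraction.text import CountVectorizer

descriptions = ["patient", "acute kidney failure",
                "coronary artery bypass", "demographics"]
vectorizer = CountVectorizer(binary=True)
bow = vectorizer.fit_transform(descriptions)        # (num_nodes, vocab_size)
x = torch.tensor(bow.toarray(), dtype=torch.float)  # initial node features
```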
Graph Neural Networks and Baseline Models
We experiment with two different GNN architectures and each of them has two variations. The difference between the variations lies in the final layer of the model which is convolutional or linear (Fig. 5). The first model (PKGSage) is based on Sage Graph Convolution Network [28] and the second (PKGA) utilizes the Graph Attention Network (GAT) architecture [71,10]. Given a graph structure G with N nodes, a sequence of transformations T is applied, and the final prediction p of the model is extracted as follows:
$$X_{k,i} = \sigma\big(T_i(X_{k,i-1}, G)\big), \qquad k \in \{1, \ldots, N\},\; i \in \{1, 2, 3\}, \tag{1}$$

$$p = \sigma(X_{n,3}), \tag{2}$$

where $T_i$ is the transformation (Sage convolution, GAT convolution, or linear) of the $i$-th layer, $X_{k,i}$ is the representation of node $k$ after the transformation $T_i$, $X_{n,3}$ is the final output of the last layer for the patient node, and $\sigma$ is the activation function. ReLU is used as the activation function for the first two layers and Sigmoid for the last layer. In principle, the graphs are multi-relational and this can lead to rapid growth in the number of model parameters. To address this issue, we apply basis decomposition [65] for weight regularization.
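As a sketch of how Eqs. (1)-(2) translate into code, the PKGSage1 variation (two Sage convolutions followed by a linear layer) could look as follows; the hidden size is illustrative and the per-relation basis decomposition is omitted for brevity.

```python
# Minimal sketch of the PKGSage1 variation (illustrative hidden size;
# basis decomposition over relations omitted).
import torch
import torch.nn.functional as F
from torch_geometric.nn import SAGEConv

class PKGSage1(torch.nn.Module):
    def __init__(self, in_dim=3723, hidden_dim=64):
        super().__init__()
        self.conv1 = SAGEConv(in_dim, hidden_dim)      # T_1
        self.conv2 = SAGEConv(hidden_dim, hidden_dim)  # T_2
        self.out = torch.nn.Linear(hidden_dim, 1)      # T_3 (linear variation)

    def forward(self, x, edge_index, patient_idx):
        x = F.relu(self.conv1(x, edge_index))
        x = F.relu(self.conv2(x, edge_index))
        # The prediction is read off the central patient node (Eq. 2).
        return torch.sigmoid(self.out(x[patient_idx]))
```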
We incorporate a set of baseline classifiers to compare the graph-based approach with traditional machine-learning algorithms. Particularly, k-nearest neighbors (KNN) (k is set to 5 based on the average performance in the validation set), linear and non-linear Support Vector Machines (L-SVM and RBF-SVM respectively) [17], Decision Tree (DT) [9], AdaBoost [22,29], Gaussian Naive Bayes (NB), and Gaussian Process (GP) [73] with radial basis function (RBF) kernel are included in the study. We apply one-hot encoding to transform the textual descriptions into numerical features for the diagnoses, medication, and procedures. The remaining features (gender, religion, marital status, age group, race, employment, housing conditions, and household composition) are categorical, so we encode each category using mapping to an integer number. For a fair comparison, feature engineering or feature selection techniques are not implemented, since the graph-based models include all the available information extracted from the EHRs.
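A sketch of this baseline feature encoding with scikit-learn is shown below; the record layout is illustrative.

```python
# Sketch: one-hot encoding for the clinical code lists, integer codes for
# the categorical fields, then concatenation into one feature matrix.
import numpy as np
from sklearn.preprocessing import MultiLabelBinarizer, OrdinalEncoder

records = [{"diagnoses": ["410", "584"], "gender": "F", "religion": "unknown"},
           {"diagnoses": ["428"], "gender": "M", "religion": "catholic"}]

diag = MultiLabelBinarizer().fit_transform([r["diagnoses"] for r in records])
cats = OrdinalEncoder().fit_transform(
    [[r["gender"], r["religion"]] for r in records])

X = np.hstack([diag, cats])  # feature matrix fed to KNN, SVMs, DT, etc.
```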
Experimental Setup
We create 10 different balanced dataset splits for experimentation to overcome the imbalance problem. In detail, the 14,113 non-readmission cases are randomly divided into 10 folds. Combined with the 1,428 readmission cases, these folds constitute the final 10 balanced dataset splits. For each balanced split, 5-fold cross-validation is applied. We highlight that we use the same splits across the different experimental settings to have a fair comparison. The Adam optimizer [37] is used and the learning rate is set to 0.001. The models are trained for 100 epochs and the best model weights are stored based on the performance in the validation set (15% of the training set). The number of bases, for the basis decomposition method [65], is 3, and the batch size is set to 32.
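The balanced-split construction can be sketched as follows; the seed and the identifier ranges are illustrative.

```python
# Sketch: split the majority class into 10 folds and pair each fold with
# all minority (readmission) cases to obtain roughly balanced datasets.
import numpy as np

rng = np.random.default_rng(0)  # illustrative seed

def balanced_splits(non_readmit_ids, readmit_ids, n_splits=10):
    shuffled = rng.permutation(non_readmit_ids)
    for fold in np.array_split(shuffled, n_splits):
        yield np.concatenate([fold, readmit_ids])

# 5-fold cross-validation is then run inside each of the 10 balanced splits.
splits = list(balanced_splits(np.arange(14113), np.arange(14113, 15541)))
```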
Results
In this subsection, we present the average results, across the different runs and folds of 5-fold cross-validation, of the models using different graph versions (Tab. 2) and compare the performance of the best models with various machine learning classifiers (Tab. 3). Starting with the intra-model comparison, the PKGSage model performs best using the first directed graph version with 62.16% accuracy and the third undirected graph version with 68.06% F1-score. The PKGA model achieves 61.69% accuracy and 67.49% F1-score using the third undirected graph version. Overall, the end-to-end convolution strategy is advantageous as the second variation of the models (PKGSage 2 , PKGA 2 ) performs better than the first variation with the linear final layer in most cases. Nonetheless, the best performance is achieved by the PKGSage 1 model. The inter-model comparison reveals that both models (PKGSage, PKGA) achieve similar results.
The graph's structure is essential since the performance varies according to the graph version. More precisely, the results are comparable with less notable differences using the undirected/directed first version of the graph, and the undirected second and third versions. However, the performance degradation is significant when the directed second and third versions are utilized. In both versions, we observe that all links point to the central Patient node (Fig. 4), and the representations of the remaining nodes are not updated during training, imposing a challenge on the trainability of the models. Notably, the same structure is present in the directed first version where no performance drop is noticed. Another possible reason is the level of heterogeneity of the graph. The first version has 8 relation types, while the second and third versions contain 4 and 1 relation types respectively. Given these, we conclude that the direction of the links and graph heterogeneity are crucial for the final performance of the models. Leveraging the fourth graph version is not advantageous since the models achieve worse performance. This indicates that introducing the group nodes and additional expressibility can be an obstacle to the trainability and performance of the models. We observe saturation and stability problems [23] during training when the fourth version is used.
The best-performing models of the study also significantly outperform the baseline models. We notice an improvement of 3.62% in F1-Score. Based on the accuracy metric, only the SVM classifiers (linear and non-linear) achieve comparable results. The results illustrate the potential of the PKG-based models in downstream predictive tasks and particularly in ICU readmission prediction.
Ablation Study
Following the observation that the data is incomplete (Tab. 1), we conduct an ablation study to probe the robustness of our approach in handling missing information. The following hypotheses are drawn:

- The pure clinical view of the patient (medication, diseases, and procedures) is very important in predicting ICU readmission. (H1)
- The exclusion of additional information results in lower performance. (H2)
The exclusion of the disease information results in the most significant performance decline in the accuracy for both model versions and in the F1-score for the PKGSage 1 UnV 3 model (Tab. 4). Excluding the medication information leads to 1.13% drop in F1-score for the PKGSage 1 DV 1 model. The results of the ablation study only partially support the H1 hypothesis. The robustness of the models in handling missing information is profound as the performance deterioration is limited. In the worst case, accuracy and F1-score drop by 2.57% and 1.13% respectively. A similar pattern is revealed when we remove two facets of the data as the removal of clinical information is reflected in lower performance. More precisely, the exclusion of the medication and disease information leads to 2.37% and 1.87% drop in F1-score for the PKGSage 1 UnV 3 and PKGSage 1 DV 1 models correspondingly. PKGSage 1 UnV 3 is less accurate by 2.93% when medication and procedures nodes are absent, and PKGSage 1 DV 1 achieves 58.53% accuracy (3.63% reduction) when procedure and disease data are excluded. Overall, the performance of the models is robust even when two out of three clinical facets (medication, diseases, and procedures) are unavailable. We highlight that stability and robustness are key properties, especially in healthcare data where missing information is inevitable due to privacy issues, system or human errors.
Discussion
The proposed end-to-end framework for person or entity-centric KGs has impacts on both technology and industry, and we highlight the benefits and challenges of adopting KG technologies in practice. We focus on the healthcare domain yet present a solution that is, by design, generalisable across domains and not restricted by the final predictive task. The proposed approach was evaluated using complex real-world data, which is imbalanced, heterogeneous, and sparse, and the ontology efforts and approach were reviewed by domain experts from multiple organisations. Indeed, planned next steps include a new use case in inflammatory bowel disease (IBD) which can be studied across institutions. IBD is a compelling next use case not only because a significant amount of the information needed by clinicians is buried in unstructured text notes but also because multiple facets, including diet, behaviors, race and ethnicity, or environmental factors are all thought to contribute to disease progression and outcomes [44,42]. The proposed star-shaped KG approach and ontology will not only allow data from disparate sources and types to be meaningfully combined, as already demonstrated, but also allow pertinent new research questions to be addressed. This paves the way towards a comprehensive and holistic 360° view of the person to be created from multiple data sources and could be further generalised to patient cohorts or groups of individuals. This approach is also scalable as multiple PKGs can be created, one for each patient, unlike traditional approaches relying on very large general knowledge KGs or others which have reported scalability issues [65]. The topology of traditional KGs used to query and infer knowledge might not be easily applied to learn graph representations using GNNs, primarily due to their large size. Our experiments show that reducing the heterogeneity with relation grouping has an effect on F1-score, provided that the PKG structure is generally fit for learning patient representations using GNNs. We also observe that the proposed framework is able to generalise even with a limited amount of data.
Furthermore, the ontology design process can be used as guidance for the creation of other entity-centric ontologies, and the HSPO is continuing to be expanded with new domains and facets. The open-sourced implementation for the PKG creation can be reused, with necessary adjustments, to extract entity-centric knowledge graphs in RDF or PyTorch Geometric applicable format. Finally, a wide range of predictive tasks using neural networks, or inferences using the KGs produced, can be addressed, even if these will be constrained by the availability and expressiveness of the data and provided annotations.
We highlight the applicability of the framework through a readmission prediction task using the PKGs and GNN architectures. Following the proposed paradigm, other classification tasks, such as mortality prediction and clinical trial selection, can be undertaken. The efficient patient representation may also be leveraged for clustering and similarity grouping applications.
Conclusion and Future Work
This paper proposes a new end-to-end representation learning framework to extract entity-centric, ontology-driven KGs from structured and unstructured data. Throughout this paper, we describe how KG technologies can be used in combination with other technologies and techniques (learning entity representations using GNNs, and predicting outcomes with neural networks on the learnt representations) to drive practical industry applications, with an example in the healthcare industry. The open-sourced framework and approach are scalable and show stability and robustness when applied to complex real-world healthcare data. We plan to extend and apply the PKG and the proposed framework to new use cases which will further drive adoption. In healthcare, we envisage that this work can unlock new ways of studying complex disease patterns and drive a better understanding of disease across population groups.

Supplemental Material Availability: Source code is available in the official GitHub repository: https://github.com/IBM/hspo-ontology
Fig. 1. Main classes defined in the HSPO.

Fig. 2. Data preprocessing pipeline: The EHRs of MIMIC-III are preprocessed and a unified JSON file is extracted, consisting of records for each admission with the essential information about the clinical, demographic, and social view of the patient.

Fig. 3. An example Person-Centric Knowledge Graph: The graph represents the demographics, clinical, intervention, and social aspects of the patient.

Fig. 4. The 4 directed graph structures of the study are presented. The colors of the nodes represent the different types of information and are consistent across the versions.

Fig. 5. The input of the model is a graph with N nodes and an edge index per relation type. The dimension of the initial representation of the nodes is 3,723. The PKGSage models leverage the Sage Convolution as the main transformation step while the PKGA models include the GAT Convolution module. The final output of the models is the readmission probability of the patient.
Table 1. Missing information in the processed dataset.

Information              Records with missing information
Gender                   0
Religion                 17,794 (34.69%)
Marital Status           9,627 (18.77%)
Race/Ethnicity           4,733 (9.23%)
Diseases/Diagnoses       10 (0.02%)
Medication               8,032 (15.66%)
Procedures               6,024 (11.74%)
Employment               25,530 (49.77%)
Housing conditions       49,739 (96.96%)
Household composition    42,958 (83.75%)
Table 2. Results: Performance of the models with different graph versions.

G. 1  D. 2        Model     Accuracy      F1-Score
V1    Undirected  PKGSage1  61.72 ± 0.98  67.86 ± 1.31
V1    Undirected  PKGSage2  61.27 ± 0.92  67.44 ± 1.34
V1    Undirected  PKGA1     59.73 ± 1.08  66.32 ± 1.12
V1    Undirected  PKGA2     61.39 ± 0.89  66.58 ± 1.56
V1    Directed    PKGSage1  62.16 ± 0.89  67.93 ± 1.35
V1    Directed    PKGSage2  61.72 ± 0.73  67.2 ± 1.68
V1    Directed    PKGA1     60.5 ± 1.26   66.52 ± 1.51
V1    Directed    PKGA2     61.38 ± 0.87  67 ± 1.65
V2    Undirected  PKGSage1  60.43 ± 0.63  67.85 ± 1.22
V2    Undirected  PKGSage2  60.93 ± 0.81  67.06 ± 1.45
V2    Undirected  PKGA1     58.95 ± 0.78  66.21 ± 1.36
V2    Undirected  PKGA2     60.24 ± 0.99  67.37 ± 1.31
V2    Directed    PKGSage1  51.26 ± 0.81  65.1 ± 1.58
V2    Directed    PKGSage2  51.61 ± 0.76  61.75 ± 1.26
V2    Directed    PKGA1     51 ± 0.81     64.86 ± 1.27
V2    Directed    PKGA2     51.3 ± 0.91   63.84 ± 1.43
V3    Undirected  PKGSage1  61.69 ± 0.91  68.06 ± 1.14
V3    Undirected  PKGSage2  61.48 ± 0.93  67.5 ± 1.21
V3    Undirected  PKGA1     61.69 ± 0.71  66.32 ± 1.56
V3    Undirected  PKGA2     61.46 ± 0.82  67.49 ± 1.76
V3    Directed    PKGSage1  50.2 ± 0.51   64.21 ± 1.74
V3    Directed    PKGSage2  49.93 ± 0.57  61.43 ± 1.83
V3    Directed    PKGA1     50.2 ± 0.63   61.54 ± 1.91
V3    Directed    PKGA2     49.98 ± 0.61  61.44 ± 1.87
V4    Undirected  PKGSage1  49.5 ± 0.55   59.77 ± 2.54
V4    Undirected  PKGSage2  54.5 ± 1.4    59.51 ± 2.36
V4    Undirected  PKGA1     52.08 ± 1.68  53.1 ± 1.88
V4    Undirected  PKGA2     57.9 ± 0.91   58.67 ± 0.9
V4    Directed    PKGSage1  49.45 ± 0.58  58.41 ± 2.75
V4    Directed    PKGSage2  54.79 ± 1.79  59.87 ± 2.67
V4    Directed    PKGA1     51.01 ± 1.29  62.92 ± 1.54
V4    Directed    PKGA2     57.32 ± 1.54  60.16 ± 2.28

1 G.: Graph version
2 D.: Direction
Table 3. Results: Comparison with baseline models.

Metric     DT            AdaBoost      NB            GP
Accuracy   55.34 ± 0.55  60.03 ± 0.65  53.92 ± 1.44  56.82 ± 0.57
F1-Score   57.5 ± 0.65   59.01 ± 1.44  39.93 ± 3.56  52.53 ± 1.62

Metric     KNN           L-SVM         RBF-SVM       PKGSage1
Accuracy   57.31 ± 0.54  61.58 ± 0.62  62.11 ± 0.65  62.16 ± 0.89
F1-Score   50.9 ± 1.73   61.45 ± 1.49  64.44 ± 1.49  68.06 ± 1.14
Table 4. Ablation Study.

Excluded Information             PKGSage1 undirected V3          PKGSage1 directed V1
                                 Accuracy        F1-Score        Accuracy        F1-Score
-                                61.69           68.06           62.16           67.93
Social aspect                    60.68 (↓ 1.01)  67.63 (↓ 0.43)  60.5 (↓ 1.66)   67.64 (↓ 0.29)
Medication                       59.7 (↓ 1.99)   66.98 (↓ 1.08)  60.03 (↓ 2.13)  66.8 (↓ 1.13)
Procedures                       60.42 (↓ 1.27)  67.43 (↓ 0.63)  59.97 (↓ 2.19)  67.56 (↓ 0.37)
Diseases                         59.69 (↓ 2)     66.87 (↓ 1.19)  59.59 (↓ 2.57)  67.48 (↓ 0.45)
Demographics                     60.43 (↓ 1.26)  67.43 (↓ 0.63)  60.16 (↓ 2)     67.54 (↓ 0.39)
Social aspect and Diseases       59.87 (↓ 1.82)  66.62 (↓ 1.44)  59.6 (↓ 2.56)   67.18 (↓ 0.75)
Social aspect and Demographics   60.64 (↓ 1.05)  66.84 (↓ 1.22)  60.14 (↓ 2.02)  66.88 (↓ 1.05)
Social aspect and Medication     60.13 (↓ 1.56)  66.84 (↓ 1.22)  60.01 (↓ 2.15)  66.67 (↓ 1.26)
Social aspect and Procedures     60.13 (↓ 1.56)  67.99 (↓ 0.07)  59.87 (↓ 2.29)  67.34 (↓ 0.59)
Medication and Procedures        58.76 (↓ 2.93)  66.59 (↓ 1.47)  59.47 (↓ 2.69)  66.86 (↓ 1.07)
Medication and Diseases          59.49 (↓ 2.2)   65.69 (↓ 2.37)  59.72 (↓ 2.44)  66.06 (↓ 1.87)
Medication and Demographics      59.84 (↓ 1.85)  66.8 (↓ 1.26)   59.9 (↓ 2.26)   66.56 (↓ 1.37)
Procedures and Diseases          59.32 (↓ 2.37)  66.62 (↓ 1.44)  58.53 (↓ 3.63)  66.38 (↓ 1.55)
Procedures and Demographics      60.63 (↓ 1.06)  67.69 (↓ 0.37)  60.01 (↓ 2.15)  67.67 (↓ 0.26)
Diseases and Demographics        59.53 (↓ 2.16)  66.85 (↓ 1.21)  59.56 (↓ 2.6)   66.52 (↓ 1.73)
4 https://github.com/IBM/hspo-ontology
5 https://physionet.org/content/mimiciii/1.4/
Acknowledgements

We would like to acknowledge the teams from Cleveland Clinic (Dr. Thaddeus Stappenbeck, Dr. Tesfaye Yadete) and Morehouse School of Medicine (Prof. Julia Liu, Dr. Kingsley Njoku) and colleagues from IBM Research (Vanessa Lopez, Marco Sbodio, Viba Anand, Elieen Koski) for their support and insights.
References

1. Ammar, N., Bailey, J.E., Davis, R.L., Shaban-Nejad, A., et al.: Using a personal health library-enabled mhealth recommender system for self-management of diabetes among underserved populations: Use case for knowledge graphs and linked data. JMIR Formative Research 5(3), e24738 (2021). https://doi.org/10.2196/24738
2. Aronson, A.R.: Effective mapping of biomedical text to the UMLS Metathesaurus: the MetaMap program. In: Proceedings of the AMIA Symposium, p. 17. American Medical Informatics Association (2001)
3. Balog, K., Kenter, T.: Personal knowledge graphs: A research agenda. In: Proceedings of the 2019 ACM SIGIR International Conference on Theory of Information Retrieval, pp. 217-220 (2019). https://doi.org/10.1145/3341981.3344241
4. Beaulieu-Jones, B.K., Lavage, D.R., Snyder, J.W., Moore, J.H., Pendergrass, S.A., Bauer, C.R.: Characterizing and managing missing structured data in electronic health records: data analysis. JMIR Medical Informatics 6(1), e8960 (2018). https://doi.org/10.2196/medinform.8960
5. Bettencourt-Silva, J., De La Iglesia, B., Donell, S., Rayward-Smith, V.: On creating a patient-centric database from multiple hospital information systems. Methods of Information in Medicine 51(03), 210-220 (2012). https://doi.org/10.3414/ME10-01-0069
6. Bettencourt-Silva, J.H., Mulligan, N., Sbodio, M., Segrave-Daly, J., Williams, R., Lopez, V., Alzate, C.: Discovering new social determinants of health concepts from unstructured data: framework and evaluation. In: Digital Personalized Health and Medicine, pp. 173-177. IOS Press (2020). https://doi.org/10.3233/SHTI200145
7. Bizer, C., Heath, T., Berners-Lee, T.: Linked data: The story so far. In: Semantic Services, Interoperability and Web Applications: Emerging Concepts, pp. 205-227. IGI Global (2011). https://doi.org/10.4018/978-1-60960-593-3.ch008
8. Bodenreider, O.: The Unified Medical Language System (UMLS): integrating biomedical terminology. Nucleic Acids Research 32(suppl 1), D267-D270 (2004)
9. Breiman, L.: Classification and Regression Trees. Routledge (2017). https://doi.org/10.1201/9781315139470
10. Brody, S., Alon, U., Yahav, E.: How attentive are graph attention networks? arXiv preprint arXiv:2105.14491 (2021). https://doi.org/10.48550/arXiv.2105.14491
11. Cai, H., Zheng, V.W., Chang, K.C.C.: A comprehensive survey of graph embedding: Problems, techniques, and applications. IEEE Transactions on Knowledge and Data Engineering 30(9), 1616-1637 (2018). https://doi.org/10.1109/TKDE.2018.2807452
12. Cao, L., Zhang, H., Feng, L.: Building and using personal knowledge graph to improve suicidal ideation detection on social media. IEEE Transactions on Multimedia 24, 87-102 (2020). https://doi.org/10.1109/TMM.2020.3046867
13. Chakraborty, P., Dutta, S., Sanyal, D.K.: Personal research knowledge graphs. In: Companion Proceedings of the Web Conference 2022 (WWW '22), pp. 763-768. Association for Computing Machinery, New York, NY, USA (2022). https://doi.org/10.1145/3487553.3524654
14. Chandak, P., Huang, K., Zitnik, M.: Building a knowledge graph to enable precision medicine. Scientific Data 10(1), 67 (2023). https://doi.org/10.1038/s41597-023-01960-3
15. Chatr-Aryamontri, A., Breitkreutz, B.J., Heinicke, S., Boucher, L., Winter, A., Stark, C., Nixon, J., Ramage, L., Kolas, N., O'Donnell, L., et al.: The BioGRID interaction database: 2013 update. Nucleic Acids Research 41(D1), D816-D823 (2012). https://doi.org/10.1093/nar/gks1158
16. Chen, Z., Peng, B., Ioannidis, V.N., Li, M., Karypis, G., Ning, X.: A knowledge graph of clinical trials (CTKG). Scientific Reports 12(1), 4724 (2022). https://doi.org/10.1038/s41598-022-08454-z
17. Cortes, C., Vapnik, V.: Support-vector networks. Machine Learning 20, 273-297 (1995). https://doi.org/10.1007/BF00994018
18. Cyganiak, R.: A relational algebra for SPARQL. Digital Media Systems Laboratory, HP Laboratories Bristol, HPL-2005-170 (2005)
19. Decker, S., Melnik, S., Van Harmelen, F., Fensel, D., Klein, M., Broekstra, J., Erdmann, M., Horrocks, I.: The Semantic Web: The roles of XML and RDF. IEEE Internet Computing 4(5), 63-73 (2000). https://doi.org/10.1109/4236.877487
20. El Boukkouri, H., Ferret, O., Lavergne, T., Noji, H., Zweigenbaum, P., Tsujii, J.: CharacterBERT: Reconciling ELMo and BERT for word-level open-vocabulary representations from characters. In: Proceedings of the 28th International Conference on Computational Linguistics, pp. 6903-6915. International Committee on Computational Linguistics, Barcelona, Spain (Online) (2020). https://doi.org/10.18653/v1/2020.coling-main.609
21. Fey, M., Lenssen, J.E.: Fast graph representation learning with PyTorch Geometric. In: ICLR Workshop on Representation Learning on Graphs and Manifolds (2019)
22. Freund, Y., Schapire, R.E.: A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences 55(1), 119-139 (1997). https://doi.org/10.1006/jcss.1997.1504
23. Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. In: Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, PMLR 9, pp. 249-256 (2010)
24. Goldberger, A.L., Amaral, L.A., Glass, L., Hausdorff, J.M., Ivanov, P.C., Mark, R.G., Mietus, J.E., Moody, G.B., Peng, C.K., Stanley, H.E.: PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals. Circulation 101(23), e215-e220 (2000). https://doi.org/10.1161/01.CIR.101.23.e215
25. Gu, Y., Tinn, R., Cheng, H., Lucas, M., Usuyama, N., Liu, X., Naumann, T., Gao, J., Poon, H.: Domain-specific language model pretraining for biomedical natural language processing. ACM Transactions on Computing for Healthcare (HEALTH) 3(1), 1-23 (2021). https://doi.org/10.1145/3458754
Semantic search. R Guha, R Mccool, E Miller, Proceedings of the 12th international conference on World Wide Web. the 12th international conference on World Wide WebGuha, R., McCool, R., Miller, E.: Semantic search. In: Proceedings of the 12th international conference on World Wide Web. pp. 700-709 (2003).
. 10.1145/775152.775250https://doi.org/https://doi.org/10.1145/775152.775250
A survey on knowledge graph-based recommender systems. Q Guo, F Zhuang, C Qin, H Zhu, X Xie, H Xiong, Q He, IEEE Transactions on Knowledge and Data Engineering. 348Guo, Q., Zhuang, F., Qin, C., Zhu, H., Xie, X., Xiong, H., He, Q.: A survey on knowledge graph-based recommender systems. IEEE Trans- actions on Knowledge and Data Engineering 34(8), 3549-3568 (2020).
. 10.1109/TKDE.2020.3028705https://doi.org/10.1109/TKDE.2020.3028705
Inductive representation learning on large graphs. W Hamilton, Z Ying, J Leskovec, I Guyon, U V Luxburg, S Bengio, H Wallach, R Fergus, S Vishwanathan, T Hastie, S Rosset, J Zhu, H Zou, Advances in Neural Information Processing Systems. Garnett, R.Curran Associates, Inc30Multi-class adaboostHamilton, W., Ying, Z., Leskovec, J.: Inductive representation learn- ing on large graphs. In: Guyon, I., Luxburg, U.V., Bengio, S., Wal- lach, H., Fergus, R., Vishwanathan, S., Garnett, R. (eds.) Advances in Neural Information Processing Systems. vol. 30, p. 1025-1035. Curran Associates, Inc. (2017), https://proceedings.neurips.cc/paper/2017/file/ 5dd9db5e033da9c6fb5ba83c7a7eb\ea9-Paper.pdf 29. Hastie, T., Rosset, S., Zhu, J., Zou, H.: Multi-class ad- aboost. Statistics and its Interface 2(3), 349-360 (2009).
. 10.4310/SII.2009.v2.n3.a8https://doi.org/https://dx.doi.org/10.4310/SII.2009.v2.n3.a8
. A Hogan, E Blomqvist, M Cochez, C Amato, G D Melo, C Gutierrez, S Kirrane, J E L Gayo, R Navigli, S Neumaier, Knowledge graphs. ACM Computing Surveys (CSUR). 544Hogan, A., Blomqvist, E., Cochez, M., d'Amato, C., Melo, G.d., Gutier- rez, C., Kirrane, S., Gayo, J.E.L., Navigli, R., Neumaier, S., et al.: Knowledge graphs. ACM Computing Surveys (CSUR) 54(4), 1-37 (2021).
. 10.1145/3447772https://doi.org/https://doi.org/10.1145/3447772
. Hspo Team, Health and Social Person-centric OntologyHSPO Team: Health and Social Person-centric Ontology (Sep 2022), https:// github.com/IBM/hspo-ontology
Strategies for handling missing clinical data for automated surgical site infection detection from the electronic health record. Z Hu, G B Melton, E G Arsoniadis, Y Wang, M R Kwaan, G J Simon, 10.1016/j.jbi.2017.03.009Journal of biomedical informatics. 68Hu, Z., Melton, G.B., Arsoniadis, E.G., Wang, Y., Kwaan, M.R., Simon, G.J.: Strategies for handling missing clinical data for automated surgical site infection detection from the electronic health record. Journal of biomedical informatics 68, 112-120 (2017). https://doi.org/10.1016/j.jbi.2017.03.009
Mimic-iii clinical database (version 1.4). Phy-sioNet. A Johnson, T Pollard, R Mark, 10.13026/C2XW2610Johnson, A., Pollard, T., Mark, R.: Mimic-iii clinical database (version 1.4). Phy- sioNet 10, C2XW26 (2016). https://doi.org/https://doi.org/10.13026/C2XW26
Mimiciii, a freely accessible critical care database. A E Johnson, T J Pollard, L Shen, L W H Lehman, M Feng, M Ghassemi, B Moody, P Szolovits, L Anthony Celi, R G Mark, G Kasneci, F M Suchanek, G Ifrim, M Ramanath, G Weikum, 10.1038/sdata.2016.352008 IEEE 24th International Conference on Data Engineering. NagaIEEE3: Searching and ranking knowledgeJohnson, A.E., Pollard, T.J., Shen, L., Lehman, L.w.H., Feng, M., Ghas- semi, M., Moody, B., Szolovits, P., Anthony Celi, L., Mark, R.G.: Mimic- iii, a freely accessible critical care database. Scientific data 3(1), 1-9 (2016). https://doi.org/10.1038/sdata.2016.35 35. Kasneci, G., Suchanek, F.M., Ifrim, G., Ramanath, M., Weikum, G.: Naga: Searching and ranking knowledge. In: 2008 IEEE 24th Inter- national Conference on Data Engineering. pp. 953-962. IEEE (2008).
. 10.1109/ICDE.2008.4497504https://doi.org/10.1109/ICDE.2008.4497504
Trends in 30-and 90-day readmission rates for heart failure. M S Khan, J Sreenivasan, N Lateef, M S Abougergi, S J Greene, T Ahmad, S D Anker, G C Fonarow, J Butler, 10.1161/CIRCHEARTFAILURE.121.008335Circulation: Heart Failure. 1448335Khan, M.S., Sreenivasan, J., Lateef, N., Abougergi, M.S., Greene, S.J., Ahmad, T., Anker, S.D., Fonarow, G.C., Butler, J.: Trends in 30-and 90-day readmis- sion rates for heart failure. Circulation: Heart Failure 14(4), e008335 (2021). https://doi.org/10.1161/CIRCHEARTFAILURE.121.008335
D P Kingma, J Ba, arXiv:1412.6980Adam: A method for stochastic optimization. arXiv preprintKingma, D.P., Ba, J.: Adam: A method for stochas- tic optimization. arXiv preprint arXiv:1412.6980 (2014).
. 10.48550/arXiv.1412.6980https://doi.org/https://doi.org/10.48550/arXiv.1412.6980
An ontology for the social determinants of health domain. N M Kollapally, Y Chen, J Xu, J Geller, 2022 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). IEEEKollapally, N.M., Chen, Y., Xu, J., Geller, J.: An ontology for the so- cial determinants of health domain. In: 2022 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). pp. 2403-2410. IEEE (2022).
. 10.1109/BIBM55620.2022.9995544https://doi.org/10.1109/BIBM55620.2022.9995544
Link prediction techniques, applications, and performance: A survey. A Kumar, S S Singh, K Singh, B Biswas, Physica A: Statistical Mechanics and its Applications. 553124289Kumar, A., Singh, S.S., Singh, K., Biswas, B.: Link prediction techniques, applications, and performance: A survey. Physica A: Statistical Mechanics and its Applications 553, 124289 (2020).
. 10.1016/j.physa.2020.124289https://doi.org/https://doi.org/10.1016/j.physa.2020.124289
Trends in 30-day readmissions following hospitalisation for heart failure by sex, socioeconomic status and ethnicity. C Lawson, H Crothers, S Remsing, I Squire, F Zaccardi, M Davies, L Bernhardt, K Reeves, R Lilford, K Khunti, 10.1016/j.eclinm.2021.101008EClinicalMedicine. 38101008Lawson, C., Crothers, H., Remsing, S., Squire, I., Zaccardi, F., Davies, M., Bernhardt, L., Reeves, K., Lilford, R., Khunti, K.: Trends in 30- day readmissions following hospitalisation for heart failure by sex, so- cioeconomic status and ethnicity. EClinicalMedicine 38, 101008 (2021). https://doi.org/10.1016/j.eclinm.2021.101008
Biobert: a pre-trained biomedical language representation model for biomedical text mining. J Lee, W Yoon, S Kim, D Kim, S Kim, C H So, J Kang, Bioinformatics. 364Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: Biobert: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36(4), 1234-1240 (2020).
. 10.1093/bioinformatics/btz682https://doi.org/https://doi.org/10.1093/bioinformatics/btz682
The Current State of Care for Black and Hispanic Inflammatory Bowel Disease Patients. J J Liu, B P Abraham, P Adamson, E L Barnes, K A Brister, O M Damas, S C Glover, K Hooks, A Ingram, G G Kaplan, Loftus, V Edward, J Mcgovern, D P B Narain-Blackwell, M Odufalu, F D Quezada, S Reeves, V Shen, B Stappenbeck, T S Ward, L , 10.1093/ibd/izac124Inflammatory Bowel Diseases. 292Liu, J.J., Abraham, B.P., Adamson, P., Barnes, E.L., Brister, K.A., Damas, O.M., Glover, S.C., Hooks, K., Ingram, A., Kaplan, G.G., Loftus, Edward V, J., McGov- ern, D.P.B., Narain-Blackwell, M., Odufalu, F.D., Quezada, S., Reeves, V., Shen, B., Stappenbeck, T.S., Ward, L.: The Current State of Care for Black and Hispanic Inflammatory Bowel Disease Patients. Inflammatory Bowel Diseases 29(2), 297- 307 (07 2022). https://doi.org/10.1093/ibd/izac124, https://doi.org/10.1093/ ibd/izac124
Personal attribute prediction from conversations. Y Liu, H Chen, W Shen, 10.1145/3487553.3524248Companion Proceedings of the Web Conference 2022. New York, NY, USAAssociation for Computing Machinery22Liu, Y., Chen, H., Shen, W.: Personal attribute prediction from con- versations. In: Companion Proceedings of the Web Conference 2022. p. 223-227. WWW '22, Association for Computing Machinery, New York, NY, USA (2022). https://doi.org/10.1145/3487553.3524248, https://doi.org/10. 1145/3487553.3524248
Multi-omics of the gut microbial ecosystem in inflammatory bowel diseases. J Lloyd-Price, C Arze, A N Ananthakrishnan, M Schirmer, J Avila-Pacheco, T W Poon, E Andrews, N J Ajami, K S Bonham, C J Brislawn, 10.1038/s41586-019-1237-9Nature. 5697758Lloyd-Price, J., Arze, C., Ananthakrishnan, A.N., Schirmer, M., Avila-Pacheco, J., Poon, T.W., Andrews, E., Ajami, N.J., Bonham, K.S., Brislawn, C.J., et al.: Multi-omics of the gut microbial ecosystem in inflammatory bowel diseases. Nature 569(7758), 655-662 (2019). https://doi.org/https://doi.org/10.1038/s41586-019- 1237-9
Link prediction in complex networks: A survey. Physica A: statistical mechanics and its applications. L Lü, T Zhou, 10.1016/j.physa.2010.11.027390Lü, L., Zhou, T.: Link prediction in complex networks: A survey. Phys- ica A: statistical mechanics and its applications 390(6), 1150-1170 (2011). https://doi.org/https://doi.org/10.1016/j.physa.2010.11.027
Annotating social determinants of health using active learning, and characterizing determinants using neural event extraction. K Lybarger, M Ostendorf, M Yetisgen, 10.1016/j.jbi.2020.103631Journal of Biomedical Informatics. 113103631Lybarger, K., Ostendorf, M., Yetisgen, M.: Annotating social determinants of health using active learning, and characterizing determinants using neu- ral event extraction. Journal of Biomedical Informatics 113, 103631 (2021). https://doi.org/https://doi.org/10.1016/j.jbi.2020.103631
Modeling sample variables with an experimental factor ontology. J Malone, E Holloway, T Adamusiak, M Kapushesky, J Zheng, N Kolesnikov, A Zhukova, A Brazma, H Parkinson, 10.1093/bioinformatics/btq099Bioinformatics. 268Malone, J., Holloway, E., Adamusiak, T., Kapushesky, M., Zheng, J., Kolesnikov, N., Zhukova, A., Brazma, A., Parkinson, H.: Modeling sample variables with an experimental factor ontology. Bioinformatics 26(8), 1112-1118 (2010). https://doi.org/10.1093/bioinformatics/btq099
A survey of link prediction in complex networks. V Martínez, F Berzal, J C Cubero, ACM computing surveys (CSUR). 494Martínez, V., Berzal, F., Cubero, J.C.: A survey of link prediction in complex networks. ACM computing surveys (CSUR) 49(4), 1-33 (2016).
. 10.1145/3012704https://doi.org/https://doi.org/10.1145/3012704
Systematic reviews as an interface to the web of (trial) data: using pico as an ontology for knowledge synthesis in evidencebased healthcare research. C Mavergames, S Oliver, L Becker, SePublica. 994Mavergames, C., Oliver, S., Becker, L.: Systematic reviews as an interface to the web of (trial) data: using pico as an ontology for knowledge synthesis in evidence- based healthcare research. SePublica 994, 22-6 (2013)
An upper-level ontology for the biomedical domain. A T Mccray, Comparative and Functional genomics. 41McCray, A.T.: An upper-level ontology for the biomedical do- main. Comparative and Functional genomics 4(1), 80-84 (2003).
. 10.1002/cfg.255https://doi.org/https://doi.org/10.1002/cfg.255
The human behaviour-change project: harnessing the power of artificial intelligence and machine learning for evidence synthesis and interpretation. S Michie, J Thomas, M Johnston, P M Aonghusa, J Shawe-Taylor, M P Kelly, L A Deleris, A N Finnerty, M M Marques, E Norris, 10.1186/s13012-017-0641-5Implementation Science. 121Michie, S., Thomas, J., Johnston, M., Aonghusa, P.M., Shawe-Taylor, J., Kelly, M.P., Deleris, L.A., Finnerty, A.N., Marques, M.M., Norris, E., et al.: The human behaviour-change project: harnessing the power of artificial intelligence and ma- chine learning for evidence synthesis and interpretation. Implementation Science 12(1), 1-12 (2017). https://doi.org/https://doi.org/10.1186/s13012-017-0641-5
Integrating biomedical research and electronic health records to create knowledge-based biologically meaningful machine-readable embeddings. C A Nelson, A J Butte, S E Baranzini, 10.1038/s41467-019-11069-0Nature communications. 1013045Nelson, C.A., Butte, A.J., Baranzini, S.E.: Integrating biomedical research and electronic health records to create knowledge-based biologically meaning- ful machine-readable embeddings. Nature communications 10(1), 3045 (2019). https://doi.org/10.1038/s41467-019-11069-0
A survey of current link discovery frameworks. M Nentwig, M Hartung, A C Ngonga Ngomo, E Rahm, 10.3233/SW-150210Semantic Web. 83Nentwig, M., Hartung, M., Ngonga Ngomo, A.C., Rahm, E.: A survey of current link discovery frameworks. Semantic Web 8(3), 419-436 (2017). https://doi.org/10.3233/SW-150210
Constructing knowledge graphs and their biomedical applications. D N Nicholson, C S Greene, 10.1016/j.csbj.2020.05.017Computational and Structural Biotechnology Journal. 18Nicholson, D.N., Greene, C.S.: Constructing knowledge graphs and their biomedical applications. Computational and Structural Biotechnology Journal 18, 1414-1428 (2020). https://doi.org/10.1016/j.csbj.2020.05.017, https://www. sciencedirect.com/science/article/pii/S2001037020302804
Ontology development 101: A guide to creating your first ontology. N F Noy, D L Mcguinness, Noy, N.F., McGuinness, D.L., et al.: Ontology development 101: A guide to creating your first ontology (2001), https://corais.org/sites/default/files/ ontology_development101_aguide_to_creating\_your_first_ontology.pdf
Industry-scale knowledge graphs: Lessons and challenges: Five diverse technology companies show how it's done. N Noy, Y Gao, A Jain, A Narayanan, A Patterson, J Taylor, Queue. 172Noy, N., Gao, Y., Jain, A., Narayanan, A., Patterson, A., Taylor, J.: Industry-scale knowledge graphs: Lessons and challenges: Five diverse technology companies show how it's done. Queue 17(2), 48-75 (2019).
. 10.1145/3329781.3332266https://doi.org/https://doi.org/10.1145/3329781.3332266
W H Organization, ( For Health Statistics, N C Us), The International Classification of Diseases, 9th Revision, Clinical Modification: Procedures: tabular list and alphabetic index. 3Organization, W.H., for Health Statistics (US), N.C.: The International Classi- fication of Diseases, 9th Revision, Clinical Modification: Procedures: tabular list and alphabetic index, vol. 3. Commission on Professional and Hospital Activities. (1980)
ICD-9-CM: International Classification of Diseases, 9th Revision: Clinical Modification. PMIC, Practice Management Information Corporation. W H Organization, Organization, W.H., et al.: ICD-9-CM: International Classification of Diseases, 9th Revision: Clinical Modification. PMIC, Practice Management Information Corpo- ration (1998)
Semantics and complexity of sparql. J Pérez, M Arenas, C Gutierrez, 10.1145/1567274.1567278International semantic web conference. SpringerPérez, J., Arenas, M., Gutierrez, C.: Semantics and complexity of sparql. In: International semantic web conference. pp. 30-43. Springer (2006). https://doi.org/https://doi.org/10.1145/1567274.1567278
Accelerating the discovery of semantic associations from medical literature: Mining relations between diseases and symptoms. A Purpura, F Bonin, J Bettencourt-Silva, Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track. the 2022 Conference on Empirical Methods in Natural Language Processing: Industry TrackAbu Dhabi, UAEAssociation for Computational LinguisticsPurpura, A., Bonin, F., Bettencourt-silva, J.: Accelerating the discovery of se- mantic associations from medical literature: Mining relations between diseases and symptoms. In: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track. pp. 77-89. Association for Compu- tational Linguistics, Abu Dhabi, UAE (Dec 2022), https://aclanthology.org/ 2022.emnlp-industry.6
Activity modeling in email. A Qadir, M Gamon, P Pantel, A Hassan, 10.18653/v1/N16-1171Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesQadir, A., Gamon, M., Pantel, P., Hassan, A.: Activity modeling in email. In: Pro- ceedings of the 2016 Conference of the North American Chapter of the Associa- tion for Computational Linguistics: Human Language Technologies. pp. 1452-1462 (2016). https://doi.org/10.18653/v1/N16-1171
Personal health knowledge graphs for patients. N Rastogi, M J Zaki, 10.48550/arXiv.2004.00071arXiv:2004.00071arXiv preprintRastogi, N., Zaki, M.J.: Personal health knowledge graphs for patients. arXiv preprint arXiv:2004.00071 (2020). https://doi.org/10.48550/arXiv.2004.00071
Dropedge: Towards deep graph convolutional networks on node classification. Y Rong, W Huang, T Xu, J Huang, International Conference on Learning Representations. Rong, Y., Huang, W., Xu, T., Huang, J.: Dropedge: Towards deep graph convo- lutional networks on node classification. In: International Conference on Learning Representations
Toward activity discovery in the personal web. T Safavi, A Fourney, R Sim, M Juraszek, S Williams, N Friend, D Koutra, P N Bennett, 10.1145/3336191.3371828Proceedings of the 13th International Conference on Web Search and Data Mining. the 13th International Conference on Web Search and Data MiningSafavi, T., Fourney, A., Sim, R., Juraszek, M., Williams, S., Friend, N., Koutra, D., Bennett, P.N.: Toward activity discovery in the personal web. In: Proceedings of the 13th International Conference on Web Search and Data Mining. pp. 492-500 (2020). https://doi.org/https://doi.org/10.1145/3336191.3371828
Modeling relational data with graph convolutional networks. M Schlichtkrull, T N Kipf, P Bloem, Van Den, R Berg, I Titov, M Welling, 10.1007/978-3-319-93417-4_38The Semantic Web: 15th International Conference. Heraklion, Crete, GreeceSpringerProceedings 15Schlichtkrull, M., Kipf, T.N., Bloem, P., Van Den Berg, R., Titov, I., Welling, M.: Modeling relational data with graph convolutional networks. In: The Semantic Web: 15th International Conference, ESWC 2018, Heraklion, Crete, Greece, June 3-7, 2018, Proceedings 15. pp. 593-607. Springer (2018). https://doi.org/10.1007/978-3-319-93417-4 38
Personal health knowledge graph for clinically relevant diet recommendations. O W Seneviratne, J Harris, C H Chen, D L Mcguinness, 10.48550/arXiv.2110.10131ArXiv. Seneviratne, O.W., Harris, J., Chen, C.H., McGuinness, D.L.: Personal health knowledge graph for clinically relevant diet recommendations. ArXiv (2021). https://doi.org/10.48550/arXiv.2110.10131
Uniskgrep: A unified representation learning framework of social network and knowledge graph. Y Shen, X Jiang, Z Li, Y Wang, C Xu, H Shen, X Cheng, Neural Networks. 158Shen, Y., Jiang, X., Li, Z., Wang, Y., Xu, C., Shen, H., Cheng, X.: Uniskgrep: A unified representation learning framework of social net- work and knowledge graph. Neural Networks 158, 142-153 (2023).
. 10.1016/j.neunet.2022.11.010https://doi.org/https://doi.org/10.1016/j.neunet.2022.11.010
Challenges associated with missing data in electronic health records: a case study of a risk prediction model for diabetes using data from slovenian primary care. G Stiglic, P Kocbek, N Fijacko, A Sheikh, M Pajnkihar, Health informatics journal. 253Stiglic, G., Kocbek, P., Fijacko, N., Sheikh, A., Pajnkihar, M.: Chal- lenges associated with missing data in electronic health records: a case study of a risk prediction model for diabetes using data from slove- nian primary care. Health informatics journal 25(3), 951-959 (2019).
. 10.1177/1460458217733288https://doi.org/https://doi.org/10.1177/1460458217733288
Imposing relation structure in language-model embeddings using contrastive learning. C Theodoropoulos, J Henderson, A C Coman, M F Moens, 10.18653/v1/2021.conll-1.27Proceedings of the 25th Conference on Computational Natural Language Learning. the 25th Conference on Computational Natural Language LearningAssociation for Computational LinguisticsTheodoropoulos, C., Henderson, J., Coman, A.C., Moens, M.F.: Imposing rela- tion structure in language-model embeddings using contrastive learning. In: Pro- ceedings of the 25th Conference on Computational Natural Language Learn- ing. pp. 337-348. Association for Computational Linguistics, Online (Nov 2021). https://doi.org/10.18653/v1/2021.conll-1.27, https://aclanthology.org/ 2021.conll-1.27
. N A Vasilevsky, N A Matentzoglu, S Toro, J E F Iv, H Hegde, D R Unni, G F Alyea, J S Amberger, L Babb, J P Balhoff, T I Bingaman, G A Burns, O J Buske, T J Callahan, L C Carmody, P C Cordo, L E Chan, G S Chang, S L Christiaens, L C Daugherty, M Dumontier, L E Failla, M J Flowers, H Garrett, J Goldstein, J L Gration, D Groza, T Hanauer, M Harris, N L Hilton, J A Himmelstein, D S Hoyt, C T Kane, M S Köhler, S Lagorce, D Lai, A Larralde, M Lock, A Santiago, I L Maglott, D R Malheiro, A J Meldal, B H M Munoz-Torres, M C Nelson, T H Nicholas, F W Ochoa, D Olson, D P Oprea, T I Osumi-Sutherland, D Parkinson, H Pendlington, Z M Rath, A Rehm, H L Remennik, L Riggs, E R Roncaglia, P Ross, J E Shadbolt, M F Shefchek, K A Similuk, M N Sioutos, N Smedley, D Sparks, R Stefancsik, R Stephan, R Storm, A L Stupp, D Stupp, G S Sundaramurthi, J C Tammen, I Tay, D Thaxton, C L Valasek, E Valls-Margarit, J Wagner, A H Welter, D Whetzel, P L Whiteman, L L Wood, V Xu, C H Zankl, A Zhang, X A Chute, C G Robinson, P N Mungall, C J Hamosh, A Haendel, M A , 10.1101/2022.04.13.22273750Mondo: Unifying diseases for the world, by the world. medRxiv (2022Vasilevsky, N.A., Matentzoglu, N.A., Toro, S., IV, J.E.F., Hegde, H., Unni, D.R., Alyea, G.F., Amberger, J.S., Babb, L., Balhoff, J.P., Bingaman, T.I., Burns, G.A., Buske, O.J., Callahan, T.J., Carmody, L.C., Cordo, P.C., Chan, L.E., Chang, G.S., Christiaens, S.L., Daugherty, L.C., Dumontier, M., Failla, L.E., Flowers, M.J., H. Alpha Garrett, J., Goldstein, J.L., Gration, D., Groza, T., Hanauer, M., Harris, N.L., Hilton, J.A., Himmelstein, D.S., Hoyt, C.T., Kane, M.S., Köhler, S., Lagorce, D., Lai, A., Larralde, M., Lock, A., Santiago, I.L., Maglott, D.R., Malheiro, A.J., Meldal, B.H.M., Munoz-Torres, M.C., Nelson, T.H., Nicholas, F.W., Ochoa, D., Olson, D.P., Oprea, T.I., Osumi-Sutherland, D., Parkinson, H., Pendlington, Z.M., Rath, A., Rehm, H.L., Remennik, L., Riggs, E.R., Roncaglia, P., Ross, J.E., Shadbolt, M.F., Shefchek, K.A., Similuk, M.N., Sioutos, N., Smed- ley, D., Sparks, R., Stefancsik, R., Stephan, R., Storm, A.L., Stupp, D., Stupp, G.S., Sundaramurthi, J.C., Tammen, I., Tay, D., Thaxton, C.L., Valasek, E., Valls- Margarit, J., Wagner, A.H., Welter, D., Whetzel, P.L., Whiteman, L.L., Wood, V., Xu, C.H., Zankl, A., Zhang, X.A., Chute, C.G., Robinson, P.N., Mungall, C.J., Hamosh, A., Haendel, M.A.: Mondo: Unifying diseases for the world, by the world. medRxiv (2022). https://doi.org/10.1101/2022.04.13.22273750, https: //www.medrxiv.org/content/early/2022/05/03/2022.04.13.22273750
P Veličković, G Cucurull, A Casanova, A Romero, P Lio, Y Bengio, 10.48550/arXiv.1710.10903arXiv:1710.10903Graph attention networks. arXiv preprintVeličković, P., Cucurull, G., Casanova, A., Romero, A., Lio, P., Ben- gio, Y.: Graph attention networks. arXiv preprint arXiv:1710.10903 (2017). https://doi.org/https://doi.org/10.48550/arXiv.1710.10903
Strategies for handling missing data in electronic health record derived data. B J Wells, K M Chagin, A S Nowacki, M W Kattan, 10.13063/2327-9214.1035Egems. 13Wells, B.J., Chagin, K.M., Nowacki, A.S., Kattan, M.W.: Strategies for han- dling missing data in electronic health record derived data. Egems 1(3) (2013). https://doi.org/10.13063/2327-9214.1035
Gaussian processes for machine learning. C K Williams, C E Rasmussen, MIT press2Cambridge, MAWilliams, C.K., Rasmussen, C.E.: Gaussian processes for ma- chine learning, vol. 2. MIT press Cambridge, MA (2006).
. 10.7551/mitpress/3206.001.0001https://doi.org/https://doi.org/10.7551/mitpress/3206.001.0001
Graph neural networks in recommender systems: a survey. S Wu, F Sun, W Zhang, X Xie, B Cui, ACM Computing Surveys. 555Wu, S., Sun, F., Zhang, W., Xie, X., Cui, B.: Graph neural networks in rec- ommender systems: a survey. ACM Computing Surveys 55(5), 1-37 (2022).
. 10.1145/3535101https://doi.org/https://doi.org/10.1145/3535101
Z Wu, S Pan, F Chen, G Long, C Zhang, S Y Philip, 10.1109/TNNLS.2020.2978386A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems. 32Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive sur- vey on graph neural networks. IEEE transactions on neural networks and learning systems 32(1), 4-24 (2020). https://doi.org/10.1109/TNNLS.2020.2978386
Graph neural networks in node classification: survey and evaluation. Machine Vision and Applications. S Xiao, S Wang, Y Dai, W Guo, 10.1007/s00138-021-01251-033Xiao, S., Wang, S., Dai, Y., Guo, W.: Graph neural networks in node classifica- tion: survey and evaluation. Machine Vision and Applications 33, 1-19 (2022). https://doi.org/https://doi.org/10.1007/s00138-021-01251-0
Building a pubmed knowledge graph. J Xu, S Kim, M Song, M Jeong, D Kim, J Kang, J F Rousseau, X Li, W Xu, V I Torvik, 10.1038/s41597-020-0543-2Scientific data. 71205Xu, J., Kim, S., Song, M., Jeong, M., Kim, D., Kang, J., Rousseau, J.F., Li, X., Xu, W., Torvik, V.I., et al.: Building a pubmed knowledge graph. Scientific data 7(1), 205 (2020). https://doi.org/10.1038/s41597-020-0543-2
Collaborative knowledge base embedding for recommender systems. F Zhang, N J Yuan, D Lian, X Xie, W Y Ma, 10.1145/2939672.2939673Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. the 22nd ACM SIGKDD international conference on knowledge discovery and data miningZhang, F., Yuan, N.J., Lian, D., Xie, X., Ma, W.Y.: Collaborative knowledge base embedding for recommender systems. In: Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. pp. 353-362 (2016). https://doi.org/https://doi.org/10.1145/2939672.2939673
An end-to-end deep learning architecture for graph classification. M Zhang, Z Cui, M Neumann, Y Chen, Proceedings of the AAAI conference on artificial intelligence. the AAAI conference on artificial intelligence32Zhang, M., Cui, Z., Neumann, M., Chen, Y.: An end-to-end deep learning architecture for graph classification. In: Proceedings of the AAAI conference on artificial intelligence. vol. 32 (2018).
. 10.1609/aaai.v32i1.11782https://doi.org/https://doi.org/10.1609/aaai.v32i1.11782
| [
"https://github.com/IBM/hspo-ontology",
"https://github.com/IBM/hspo-ontology"
] |
[
"COUNTING SHI REGIONS WITH A FIXED SEPARATING WALL",
"COUNTING SHI REGIONS WITH A FIXED SEPARATING WALL"
] | [
"Susanna Fishel ",
"ANDEleni Tzanaki ",
"Monica Vazirani "
] | [] | [] | Athanasiadis introduced separating walls for a region in the extended Shi arrangement and used them to generalize the Narayana numbers. In this paper, we fix a hyperplane in the extended Shi arrangement for type A and calculate the number of dominant regions which have the fixed hyperplane as a separating wall; that is, regions where the hyperplane supports a facet of the region and separates the region from the origin. | 10.1007/s00026-013-0201-x | [
"https://arxiv.org/pdf/1202.6648v1.pdf"
] | 13,238,912 | 1202.6648 | a8b1f687bee9de50b676313e00f91e1336bb11bf |
COUNTING SHI REGIONS WITH A FIXED SEPARATING WALL

Susanna Fishel, Eleni Tzanaki, and Monica Vazirani

arXiv:1202.6648v1 [math.CO], 29 Feb 2012
Athanasiadis introduced separating walls for a region in the extended Shi arrangement and used them to generalize the Narayana numbers. In this paper, we fix a hyperplane in the extended Shi arrangement for type A and calculate the number of dominant regions which have the fixed hyperplane as a separating wall; that is, regions where the hyperplane supports a facet of the region and separates the region from the origin.
1. Introduction
A hyperplane arrangement dissects its ambient vector space into regions. The regions have walls, that is, hyperplanes which support facets of the region, and the walls may or may not separate the region from the origin. The regions in the extended Shi arrangement are enumerated by well-known sequences: all regions by the extended parking function numbers, the dominant regions by the extended Catalan numbers, and dominant regions with a given number of certain separating walls by the Narayana numbers. In this paper we study the extended Shi arrangement by fixing a hyperplane in it and calculating the number of regions for which that hyperplane is a separating wall. For example, suppose we are considering the $m$th extended Shi arrangement in dimension $n-1$, with highest root $\theta$. Let $H_{\theta,m}$ be the $m$th translate of the hyperplane through the origin with $\theta$ as normal. Then we show there are $m^{n-2}$ regions which abut $H_{\theta,m}$ and are separated from the origin by it.
At the heart of this paper is a well-known bijection from certain integer partitions to dominant alcoves (and regions). One particularly nice aspect of our work is that we are able to use the bijection to enumerate regions. We characterize the partitions associated to the regions in question by certain interesting features and easily count those partitions, whereas it is not clear how to count the regions directly.
We give two very different descriptions of this bijection, one combinatorial and one geometric. The first description of the bijection comes from group theory, from studying the Scopes equivalence on blocks of the symmetric group. The second description is the standard one for combinatorics and used when studying the affine symmetric group. As a reference, it is good to have both forms of this oft-used map in one place. We can then prove several results in two ways, using the different descriptions. Although Theorem 3.1 is essentially the same as Theorem 3.5, the proofs are very different and Propositions 3.2 and 3.3 are of independent interest.
We rely on work from several sources. Shi (1986) introduced what is now called the Shi arrangement while studying the affine Weyl group of type A, and Stanley (1998) extended it. We also use his study of alcoves in Shi (1987a). Richards (1996), on decomposition numbers for Hecke algebras, has been very useful. The Catalan numbers have been extended and generalized; see Athanasiadis (2005) for the history. Fuss-Catalan numbers is another name for the extended Catalan numbers. The Catalan numbers can be written as a sum of Narayana numbers. Athanasiadis (2005) generalized the Narayana numbers. He showed they enumerated several types of objects; one of them was the number of dominant Shi regions with a fixed number of separating walls. This led us to investigate separating walls. All of our work is for type A, although Shi arrangements, Catalan numbers, and Narayana numbers exist for other types.
In Section 2, we introduce notation, define the Shi arrangement, certain partitions, and the bijection between them which we use to count regions. In Section 3, we characterize the partitions assigned to the regions which have $H_{\theta,m}$ as a separating wall. In order to enumerate the regions which have other separating walls, we must use a generating function, which we introduce in Section 4. The generating function records the number of hyperplanes of a certain type in the arrangement which separate the region from the origin. It turns out that in the base case, where the hyperplane is $H_{\theta,m}$, these numbers can be easily read from the region's associated $n$-core, and we obtain Corollary 4.4:
$$\sum_{R\,:\,H_{\theta,m}\text{ is a separating wall for }R} p^{c(R)}\, q^{r(R)} \;=\; p^m q^m \left[\,p^{m-1} + p^{m-2}q + \cdots + q^{m-1}\,\right]^{n-2},$$
where $c(R)$ and $r(R)$ are defined in Section 4. Finally, in Section 5, we give a recursion for the generating functions from Section 4, which enables us to count the regions which have the other hyperplanes $H_{\alpha,m}$ as separating walls.
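As a quick consistency check (a sketch of our own, not part of the paper: the helper name `rhs` and the use of sympy are assumptions), specializing $p = q = 1$ in the right-hand side of Corollary 4.4 should recover the count $m^{n-2}$ quoted in the opening paragraph.

```python
# Sketch: the bracket [p^(m-1) + p^(m-2) q + ... + q^(m-1)] has m terms, so at
# p = q = 1 the right-hand side collapses to m^(n-2).
import sympy as sp

p, q = sp.symbols("p q")

def rhs(n, m):
    """Right-hand side of Corollary 4.4 (our transcription)."""
    bracket = sum(p**(m - 1 - i) * q**i for i in range(m))
    return p**m * q**m * bracket**(n - 2)

for n in range(3, 6):
    for m in range(1, 5):
        assert rhs(n, m).subs({p: 1, q: 1}) == m**(n - 2)
print("specialization p = q = 1 agrees with m^(n-2)")
```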
2. Preliminaries
Here we introduce notation and review some constructions.
2.1. Root system notation. Let $\{\varepsilon_1,\dots,\varepsilon_n\}$ be the standard basis of $\mathbb{R}^n$ and $\langle\,\mid\,\rangle$ be the bilinear form for which this is an orthonormal basis. Let $\alpha_i = \varepsilon_i - \varepsilon_{i+1}$. Then $\Pi = \{\alpha_1,\dots,\alpha_{n-1}\}$ is a basis of
$$V = \{(a_1,\dots,a_n)\in\mathbb{R}^n \mid \textstyle\sum_{i=1}^n a_i = 0\}.$$
For $i\le j$, we write $\alpha_{ij}$ for $\alpha_i + \cdots + \alpha_j$. In this notation, we have that $\alpha_i = \alpha_{ii}$, the highest root $\theta$ is $\alpha_{1,n-1}$, and $\alpha_{ij} = \varepsilon_i - \varepsilon_{j+1}$. The elements of $\Delta = \{\varepsilon_i - \varepsilon_j \mid i\ne j\}$ are called roots, and we write that a root $\alpha$ is positive, written $\alpha > 0$, if $\alpha\in\Delta^+ = \{\varepsilon_i - \varepsilon_j \mid i < j\}$. We let $\Delta^- = -\Delta^+$ and write $\alpha < 0$ if $\alpha\in\Delta^-$. Then $\Pi$ is the set of simple roots. As usual, we let $Q = \sum_{i=1}^{n-1}\mathbb{Z}\alpha_i$ be identified with the root lattice of type $A_{n-1}$ and $Q^+ = \sum_{i=1}^{n-1}\mathbb{Z}_{\ge 0}\,\alpha_i$.
2.2. Extended Shi arrangements.
A hyperplane arrangement is a set of hyperplanes, possibly affine, in V . We are interested in certain sets of hyperplanes of the following form. For each α ∈ ∆ + , we define the reflecting hyperplane
$$H_{\alpha,0} = \{v\in V \mid \langle v\mid\alpha\rangle = 0\}$$
and its kth translate, for k ∈ Z,
$$H_{\alpha,k} = \{v\in V \mid \langle v\mid\alpha\rangle = k\}.$$
Note $H_{-\alpha,-k} = H_{\alpha,k}$, so we usually take $k\in\mathbb{Z}_{\ge 0}$. Then the extended Shi arrangement, here called the $m$-Shi arrangement, is the collection of hyperplanes
$$\mathcal{H}_m = \{H_{\alpha,k} \mid \alpha\in\Delta^+,\ -m < k\le m\}.$$
This arrangement is defined for crystallographic root systems of all finite types. Regions of the $m$-Shi arrangement are the connected components of the hyperplane arrangement complement $V\setminus\bigcup_{H\in\mathcal{H}_m}H$.
We denote the closed half-spaces $\{v\in V\mid\langle v\mid\alpha\rangle\ge k\}$ and $\{v\in V\mid\langle v\mid\alpha\rangle\le k\}$ by $H_{\alpha,k}^+$ and $H_{\alpha,k}^-$ respectively. The dominant or fundamental chamber of $V$ is $\bigcap_{i=1}^{n-1}H_{\alpha_i,0}^+$. This paper primarily concerns regions and alcoves in the dominant chamber.
A dominant region of the $m$-Shi arrangement is a region that is contained in the dominant chamber. We denote the collection of dominant regions in the $m$-Shi arrangement by $S_{n,m}$.
Each connected component of
$$V\setminus\bigcup_{\alpha\in\Delta^+}\,\bigcup_{k\in\mathbb{Z}}H_{\alpha,k}$$
is called an alcove, and the fundamental alcove is $A_0$, the interior of $H_{\theta,1}^-\cap\bigcap_{i=1}^{n-1}H_{\alpha_i,0}^+$. A dominant alcove is one contained in the dominant chamber. Denote the set of dominant alcoves by $A_n$.
A wall of a region is a hyperplane in H m which supports a facet of that region or alcove. Two open regions are separated by a hyperplane H if they lie in different closed half-spaces relative to H. Please see Athanasiadis (2005) or Humphreys (1990) for details. We study dominant regions with a fixed separating wall. A separating wall for a region R is a wall of R which separates R from A 0 .
2.3. The affine symmetric group.
Definition 2.1. The affine symmetric group, denoted $\widetilde{S}_n$, is defined as
$$\widetilde{S}_n = \big\langle\, s_1,\dots,s_{n-1},s_0 \;\big|\; s_i^2 = 1,\ \ s_is_j = s_js_i \text{ if } i\not\equiv j\pm 1 \bmod n,\ \ s_is_js_i = s_js_is_j \text{ if } i\equiv j\pm 1 \bmod n \,\big\rangle$$
for $n > 2$, but $\widetilde{S}_2 = \langle s_1, s_0 \mid s_i^2 = 1\rangle$.
The affine symmetric group $\widetilde{S}_n$ contains the symmetric group $S_n$ as a subgroup: $S_n$ is the subgroup generated by the $s_i$, $0 < i < n$. We identify $S_n$ with the set of permutations of $\{1,\dots,n\}$ by identifying $s_i$ with the simple transposition $(i, i+1)$.
The affine symmetric group $\widetilde{S}_n$ acts freely and transitively on the set of alcoves. We thus identify each alcove $A$ with the unique $w\in\widetilde{S}_n$ such that $A = w^{-1}A_0$. Each simple generator $s_i$, $i > 0$, acts by reflection with respect to the simple root $\alpha_i$. In other words, it acts by reflection over the hyperplane $H_{\alpha_i,0}$. The element $s_0$ acts as reflection with respect to the affine hyperplane $H_{\theta,1}$.
More specifically, the action on $V$ is given by $s_i(a_1,\dots,a_i,a_{i+1},\dots,a_n) = (a_1,\dots,a_{i+1},a_i,\dots,a_n)$ for $i\ne 0$, and $s_0(a_1,\dots,a_n) = (a_n+1, a_2,\dots,a_{n-1}, a_1-1)$.
Note $S_n$ preserves $\langle\,\mid\,\rangle$, but $\widetilde{S}_n$ does not.
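The coordinate description of this action is easy to experiment with; the following minimal Python sketch (the helper name `s` and the tuple encoding of points of $V$ are our own) implements the generators.

```python
# Sketch of the affine symmetric group action on V: s_i (i != 0) swaps the
# i-th and (i+1)-st coordinates; s_0 swaps the first and last and shifts them.
def s(i, a):
    """Apply the generator s_i to the point a = (a_1, ..., a_n) of V."""
    a = list(a)
    if i == 0:
        a[0], a[-1] = a[-1] + 1, a[0] - 1
    else:
        a[i - 1], a[i] = a[i], a[i - 1]
    return tuple(a)

print(s(0, (1, 0, -1)))   # (0, 0, 0): s_0 reflects over H_{theta,1}
print(s(1, (1, 0, -1)))   # (0, 1, -1): s_1 swaps the first two coordinates
```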
2.4. Shi coordinates and Shi tableaux. Every alcove $A$ can be written as $w^{-1}A_0$ for a unique $w\in\widetilde{S}_n$ and, additionally, for each $\alpha\in\Delta^+$ there is a unique integer $k_\alpha$ such that $k_\alpha < \langle\alpha\mid x\rangle < k_\alpha + 1$ for all $x\in A$. Shi characterized the integers $k_\alpha$ which can arise in this way, and the next lemma gives the conditions for type $A$.
Lemma 2.2 (Shi (1987a)). Let $\{k_{\alpha_{ij}}\}_{1\le i\le j\le n-1}$ be a set of $\binom{n}{2}$ integers. There exists a $w\in\widetilde{S}_n$ such that $k_{\alpha_{ij}} < \langle\alpha_{ij}\mid x\rangle < k_{\alpha_{ij}} + 1$ for all $x\in w^{-1}A_0$ if and only if
$$k_{\alpha_{it}} + k_{\alpha_{t+1,j}} \;\le\; k_{\alpha_{ij}} \;\le\; k_{\alpha_{it}} + k_{\alpha_{t+1,j}} + 1$$
for all $t$ such that $i\le t < j$.
From now on, except in the discussion of Proposition 4.3, we write $k_{ij}$ for $k_{\alpha_{ij}}$. These $\{k_{ij}\}_{1\le i\le j\le n-1}$ are the Shi coordinates of the alcove. We arrange the coordinates for an alcove $A$ in the Young's diagram (see Section 2.5) of a staircase partition $(n-1, n-2, \dots, 1)$ by putting $k_{ij}$ in the box in row $i$, column $n-j$. See Krattenthaler et al. (2002) for a similar arrangement of sets indexed by positive roots. For a dominant alcove, the entries are nonnegative and non-increasing along rows and columns.
We can also assign coordinates to regions in the Shi arrangement. In each region of the $m$-Shi hyperplane arrangement, there is exactly one "representative," or $m$-minimal, alcove closest to the fundamental alcove $A_0$. See Shi (1987b) for $m = 1$ and Athanasiadis (2005) for $m\ge 1$. Let $A$ be an alcove with Shi coordinates $\{k_{ij}\}_{1\le i\le j\le n-1}$ and suppose it is the $m$-minimal alcove for the region $R$. We define coordinates $\{e_{ij}\}_{1\le i\le j\le n-1}$ for $R$ by $e_{ij} = \min(k_{ij}, m)$.
Again, we arrange the coordinates for a region $R$ in the Young's diagram (see Section 2.5) of a staircase partition $(n-1, n-2, \dots, 1)$ by putting $e_{ij}$ in the box in row $i$, column $n-j$. For dominant regions, the entries are nonnegative and non-increasing along rows and columns.

Example 2.4. The dominant chamber for the 2-Shi arrangement for $n = 3$ is illustrated in Figure 1. The yellow region has coordinates $e_{12} = 2$, $e_{11} = 1$, and $e_{22} = 2$. Its 2-minimal alcove has coordinates $k_{12} = 3$, $k_{11} = 1$, and $k_{22} = 2$.
Denote the Shi tableau for the alcove A by T A and for the region R by T R . Both Richards (1996) and Athanasiadis (2005) characterized the Shi tableaux for dominant m-Shi regions.
[Figure 1: the dominant chamber of the 2-Shi arrangement for $n = 3$; the hyperplanes shown are $H_{\alpha_1,0}$, $H_{\alpha_2,0}$, $H_{\alpha_1,1}$, $H_{\alpha_2,1}$, $H_{\alpha_1,2}$, $H_{\alpha_2,2}$, and $H_{\theta,1}, H_{\theta,2}, H_{\theta,3}, H_{\theta,4}$.]

They showed that a collection of integers $\{e_{ij}\}_{1\le i\le j\le n-1}$ with $0\le e_{ij}\le m$ is the Shi tableau of a region in $S_{n,m}$ if and only if
$$(2.1)\qquad \min(e_{it} + e_{t+1,j},\, m)\;\le\; e_{ij}\;\le\;\min(e_{it} + e_{t+1,j} + 1,\, m)$$
for all $t$ such that $i\le t < j$. Athanasiadis (2005) defined a co-filtered chain of ideals to be a decreasing chain of ideals $\Delta^+ = I_0\supseteq I_1\supseteq\cdots\supseteq I_m$ in $\Delta^+$ such that
$$(2.2)\qquad (I_i + I_j)\cap\Delta^+\subseteq I_{i+j},$$
and
$$(2.3)\qquad (J_i + J_j)\cap\Delta^+\subseteq J_{i+j},$$
where $I_k = I_m$ for $k > m$ and $J_i = \Delta^+\setminus I_i$. He gave a bijection between co-filtered chains of ideals and $m$-minimal alcoves for $R\in S_{n,m}$. Given such a chain, let $e_{uv} = k$ if $\alpha_{uv}\in I_k$, $\alpha_{uv}\notin I_{k+1}$, and $k < m$, and let $e_{uv} = m$ if $\alpha_{uv}\in I_m$. Then conditions (2.2) and (2.3) translate into (2.1). Lemma 3.9 from Athanasiadis (2005) is crucial to our work here. He characterizes the co-filtered chains of ideals for which $H_{\alpha,m}$ is a separating wall. We translate that into our set-up in Lemma 2.6, using entries from the Shi tableau.
Lemma 2.6 (Athanasiadis (2005)). A region $R\in S_{n,m}$ has $H_{\alpha_{uv},m}$ as a separating wall if and only if $e_{uv} = m$ and, for all $t$ such that $u\le t < v$, $e_{ut} + e_{t+1,v} = m - 1$.
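Lemma 2.6 is a finite check on the entries of a Shi tableau. Here is a minimal sketch (our own encoding: the tableau is stored as a dictionary keyed by the pairs $(u, v)$ indexing the positive roots $\alpha_{uv}$):

```python
# Sketch of the test in Lemma 2.6 on the entries e_{ij} of a region's tableau.
def is_separating_wall(e, m, u, v):
    """Does the region with Shi tableau e have H_{alpha_{uv}, m} as a
    separating wall?  Requires e_{uv} = m and e_{ut} + e_{t+1,v} = m - 1
    for every t with u <= t < v."""
    return e[(u, v)] == m and all(
        e[(u, t)] + e[(t + 1, v)] == m - 1 for t in range(u, v))

# n = 3, m = 2: a dominant region with e_{12} = 2, e_{11} = 1, e_{22} = 0
e = {(1, 2): 2, (1, 1): 1, (2, 2): 0}
print(is_separating_wall(e, 2, 1, 2))   # True, since e_{11} + e_{22} = m - 1
```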
2.5. Partitions.
A partition is a non-increasing sequence $\lambda = (\lambda_1,\lambda_2,\dots,\lambda_n)$ of nonnegative integers, called the parts of $\lambda$. We identify a partition $\lambda = (\lambda_1,\lambda_2,\dots,\lambda_n)$ with its Young diagram, that is, the array of boxes with coordinates $\{(i,j) : 1\le j\le\lambda_i\}$. The conjugate of $\lambda$ is the partition $\lambda'$ whose diagram is obtained by reflecting $\lambda$'s diagram about the diagonal. The length of a partition $\lambda$, $\ell(\lambda)$, is the number of positive parts of $\lambda$.
2.5.1. Core partitions. The $(k,l)$-hook of any partition $\lambda$ consists of the $(k,l)$-box of $\lambda$, all the boxes to the right of it in row $k$, together with all the boxes below it in column $l$. The hook length $h^\lambda_{kl}$ of the box $(k,l)$ is the number of boxes in the $(k,l)$-hook. Let $n$ be a positive integer. An $n$-core is a partition $\lambda$ such that $n\nmid h^\lambda_{(k,l)}$ for all $(k,l)\in\lambda$. We let $C_n$ denote the set of partitions which are $n$-cores.
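The $n$-core condition is straightforward to test from the definition; the following small sketch (helper names are ours) computes all hook lengths of a nonempty partition and checks divisibility.

```python
# Sketch: hook length of box (k, l) = arm + leg + 1, computed via the conjugate.
def hook_lengths(la):
    """Hook lengths of the nonempty partition la, keyed by 1-indexed boxes."""
    conj = [sum(1 for part in la if part > c) for c in range(la[0])]
    return {(k + 1, l + 1): (la[k] - (l + 1)) + (conj[l] - (k + 1)) + 1
            for k in range(len(la)) for l in range(la[k])}

def is_core(la, n):
    return all(h % n != 0 for h in hook_lengths(la).values())

la = (5, 2, 1, 1, 1)
print(is_core(la, 4), hook_lengths(la)[(1, 1)])   # True 9 (cf. Example 2.9)
```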
2.5.2. $\widetilde{S}_n$ action on cores. There is a well-known action of $\widetilde{S}_n$ on $n$-cores which we will briefly describe here; please see Misra and Miwa (1990), Lascoux (2001), Lapointe and Morse (2005), Berg et al. (2009), or Fishel and Vazirani (2010) for more details and history. The Young diagram of a partition $\lambda$ is made up of boxes. We say the box in row $i$ and column $j$ has residue $r$ if $j - i\equiv r \bmod n$. A box not in the Young diagram of $\lambda$ is called addable if we obtain a partition when we add it to $\lambda$. In other words, the box $(i, j+1)$ is addable if $\lambda_i = j$ and either $i = 1$ or $\lambda_{i-1} > \lambda_i$. A box in the Young diagram of $\lambda$ is called removable if we obtain a partition when we remove it from $\lambda$. It is well-known (see for example Fishel and Vazirani (2010) or Lapointe and Morse (2005)) that the following action of $s_i\in\widetilde{S}_n$ on $n$-cores is well-defined.
Definition 2.7. $\widetilde{S}_n$ action on $n$-core partitions:
(1) If λ has an addable box with residue r, then s r (λ) is the n-core partition created by adding all addable boxes of residue r to λ.
(2) If $\lambda$ has a removable box with residue $r$, then $s_r(\lambda)$ is the $n$-core partition created by removing all removable boxes of residue $r$ from $\lambda$.
(3) If $\lambda$ has neither removable nor addable boxes of residue $r$, then $s_r(\lambda)$ is $\lambda$.
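A sketch of this action in Python follows (our own encoding of partitions as tuples; it also uses the standard fact that an $n$-core never has addable and removable boxes of the same residue at once, so the three cases of Definition 2.7 do not conflict).

```python
def apply_s(r, la, n):
    """Apply s_r to the n-core la (Definition 2.7): add every addable box of
    residue r if any exists, otherwise remove every removable box of residue
    r; if neither exists, return la unchanged."""
    la = [p for p in la if p > 0]
    ell = len(la)
    res = lambda i, j: (j - i) % n   # residue of the box in row i, column j
    # row i can receive the box (i, la_i + 1); a new row ell + 1 may also start
    addable = [i for i in range(1, ell + 2)
               if (i == 1 or la[i - 2] > (la[i - 1] if i <= ell else 0))
               and res(i, (la[i - 1] if i <= ell else 0) + 1) == r]
    if addable:
        out = la + [0]
        for i in addable:
            out[i - 1] += 1
        return tuple(p for p in out if p > 0)
    removable = [i for i in range(1, ell + 1)
                 if (i == ell or la[i] < la[i - 1]) and res(i, la[i - 1]) == r]
    for i in removable:
        la[i - 1] -= 1
    return tuple(p for p in la if p > 0)

# n = 3: s_2 s_1 s_0 applied to the empty core builds the 3-core (3, 1)
print(apply_s(2, apply_s(1, apply_s(0, (), 3), 3), 3))   # (3, 1)
```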
2.6. Abacus diagrams. In Section 3, we use a bijection, called Ψ, to describe certain regions. We will need abacus diagrams to define Ψ. We associate to each partition λ its abacus diagram. When λ is an n-core, its abacus has a particularly nice form.
The $\beta$-numbers for a partition $\lambda = (\lambda_1,\dots,\lambda_r)$ are the hook lengths from the boxes in its first column: $\beta_k = h^\lambda_{(k,1)}$. Each partition is determined by its $\beta$-numbers, and $\beta_1 > \beta_2 > \cdots > \beta_{\ell(\lambda)} > 0$.
An $n$-abacus diagram, or abacus diagram when $n$ is clear, is a diagram with integer entries arranged in $n$ columns labeled $0, 1, \dots, n-1$. The columns are called runners. The horizontal cross-sections or rows will be called levels, and runner $r$ contains the integer entry $qn + r$ on level $q$, where $-\infty < q < \infty$. We draw the abacus so that each runner is vertical, oriented with $-\infty$ at the top and $\infty$ at the bottom, and we always put runner 0 in the leftmost position, increasing to runner $n-1$ in the rightmost position. Entries in the abacus diagram may be circled; such circled elements are called beads. The level of a bead labeled by $qn + r$ is $q$ and its runner is $r$. Entries which are not circled will be called gaps. Two abacus diagrams are equivalent if one can be obtained by adding a constant to each entry of the other.
See Example 2.9 below. Given a partition $\lambda$, its abacus is any abacus diagram equivalent to the one with beads at the entries $\beta_k = h^\lambda_{(k,1)}$ and at all entries $j\in\mathbb{Z}_{<0}$.
Given the original n-abacus for the partition λ with beads at {β k } 1≤k≤ℓ(λ) , let b i be one more than the largest level number of a bead on runner i; that is, the level of the first gap. Then (b 0 , . . . , b n−1 ) is the vector of level numbers for λ.
The balance number of an abacus is the sum over all runners of the largest level of a bead in that runner. An abacus is balanced if its balance number is zero. There is a unique n-abacus which represents a given n-core λ for each balance number. In particular, there is a unique n-abacus for λ with balance number 0.
Remark 2.8. It is well-known that λ is an n-core if and only if all its n-abacus diagrams are flush, that is to say whenever there is a bead at entry j there is also a bead at j − n. Additionally, if (b 0 , . . . , b n−1 ) is the vector of level numbers for λ,
then $b_0 = 0$, $\sum_{i=0}^{n-1} b_i = \ell(\lambda)$, and since there are no gaps, $(b_0,\dots,b_{n-1})$ describes $\lambda$ completely.
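Because the abacus of an $n$-core is flush, the level numbers can be read off runner by runner from the $\beta$-numbers: the first gap on a runner sits one level above its highest bead. A short sketch (ours), which reproduces the vector of level numbers of Example 2.9 below:

```python
# Sketch: level numbers (b_0, ..., b_{n-1}) of an n-core from its beta-numbers.
def level_numbers(la, n):
    ell = len(la)
    betas = [la[k] + ell - (k + 1) for k in range(ell)]   # beta_k = h_{(k,1)}
    b = []
    for r in range(n):
        levels = [beta // n for beta in betas if beta % n == r]
        b.append(max(levels) + 1 if levels else 0)        # first gap on runner r
    return tuple(b)

print(level_numbers((5, 2, 1, 1, 1), 4))   # (0, 3, 1, 1)
```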
Example 2.9. Both abacus diagrams in Figure 2 represent the 4-core $\lambda = (5, 2, 1, 1, 1)$. The levels are indicated to the left of the abacus and below each runner is the largest level number of a bead in that runner. The boxes of the Young diagram of $\lambda$ have been filled with their hook lengths. The diagram on the left is balanced. The diagram on the right is the original diagram, where the beads are placed at the $\beta$-numbers and negative integers. The vector of level numbers for $\lambda$ is $(0, 3, 1, 1)$.

2.7. Bijections. We describe here two bijections, $\Psi$ and $\Phi$, from the set of $n$-cores to dominant alcoves. We neither use nor prove the fact that $\Psi = \Phi$.
2.7.1. Combinatorial description. $\Psi$ is a slightly modified version of the bijection given in Richards (1996). Given an $n$-core $\lambda$, let $(b_0 = 0, b_1, \dots, b_{n-1})$ be the level numbers for its abacus. Now let $\tilde p_i = b_{i-1}n + i - 1$, which is the entry of the first gap on runner $i-1$, for $i$ from 1 to $n$, and then let $p_1 = 0 < p_2 < \cdots < p_n$ be the $\{\tilde p_i\}$ written in ascending order. Finally we define $\Psi(\lambda)$ to be the alcove whose Shi coordinates are given by
$$k_{ij} = \left\lfloor\frac{p_{j+1} - p_i}{n}\right\rfloor\quad\text{for } 1\le i\le j\le n-1.$$
Example 2.10. We continue Example 2.9. We have $n = 4$, $\lambda = (5, 2, 1, 1, 1)$, and $(b_0, b_1, b_2, b_3) = (0, 3, 1, 1)$. Then $\tilde p_1 = 0$, $\tilde p_2 = 13$, $\tilde p_3 = 6$, and $\tilde p_4 = 7$, and $p_1 = 0$, $p_2 = 6$, $p_3 = 7$, and $p_4 = 13$. Thus $\Psi(\lambda)$ is the alcove with the following Shi tableau.
$$\begin{array}{ccc} k_{13} = 3 & k_{12} = 1 & k_{11} = 1\\ k_{23} = 1 & k_{22} = 0 & \\ k_{33} = 1 & & \end{array}$$
Proposition 2.11. The map Ψ from n-cores to dominant alcoves is a bijection.
Proof. We first show that we indeed produce an alcove by the process above. By Lemma 2.2, it is enough to show that $k_{it} + k_{t+1,j}\le k_{ij}\le k_{it} + k_{t+1,j} + 1$ for all $t$ such that $i\le t < j$. Now $k_{ij} = \lfloor\frac{p_{j+1}-p_i}{n}\rfloor$ implies that
$$(2.4)\qquad k_{ij} = \frac{p_{j+1} - p_i}{n} - B_{ij}\quad\text{where } 0\le B_{ij} < 1.$$
Let $t$ be such that $i\le t < j$. Using (2.4), we have
$$k_{it} + k_{t+1,j} = \frac{p_{t+1} - p_i}{n} + \frac{p_{j+1} - p_{t+1}}{n} - B_{it} - B_{t+1,j}.$$
Now let $A = B_{it} + B_{t+1,j}$, so that $0\le A < 2$. We have
$$k_{it} + k_{t+1,j} + A = \frac{p_{j+1} - p_i}{n}.$$
Thus
$$\lfloor k_{it} + k_{t+1,j} + A\rfloor = \left\lfloor\frac{p_{j+1} - p_i}{n}\right\rfloor = k_{ij}$$
or, since $k_{it}$ and $k_{t+1,j}$ are integers,
$$(2.5)\qquad k_{it} + k_{t+1,j} + \lfloor A\rfloor = k_{ij}.$$
Combining (2.5) with the fact that $\lfloor A\rfloor$ is equal to 0 or 1 shows that the conditions in Lemma 2.2 are satisfied and we have the Shi coordinates of an alcove. Since each $k_{ij}\ge 0$, it is an alcove in the dominant chamber. Now we reverse the process described above to show that $\Psi$ is a bijection. Let $\{k_{ij}\}_{1\le i\le j\le n-1}$ be the Shi coordinates of a dominant alcove. Write $p_i = nq_i + r_i$ for the intermediate values $\{p_i\}$, which we first calculate. Then $p_1 = q_1 = r_1 = 0$ and $q_i = k_{1,i-1}$ for $i\ge 2$. We must now determine $r_2,\dots,r_n$, a permutation of $1,\dots,n-1$. However, since
$$(2.6)\qquad k_{ij} = \begin{cases} q_{j+1} - q_i & \text{if } r_{j+1} > r_i,\\ q_{j+1} - q_i - 1 & \text{if } r_{j+1} < r_i,\end{cases}$$
we can determine the inversion table for this permutation, using $k_{ij}$ for $2\le i\le j\le n-1$ and $q_1,\dots,q_n$. Indeed,
$$(2.7)\qquad \operatorname{Inv}(r_{j+1}) = |\{r_i \mid 1\le i < j+1 \text{ and } r_i > r_{j+1}\}| = |\{(k_{1j}, k_{1,i-1}, k_{ij}) \mid k_{1j} = k_{1,i-1} + k_{ij} + 1\}|.$$
Therefore, we can compute $r_2,\dots,r_n$ and therefore $p_1, p_2,\dots,p_n$. We can now sort the $\{p_i\}$ according to their residue mod $n$, giving us $\tilde p_1,\dots,\tilde p_n$; from this, $(b_0,\dots,b_{n-1})$. Note that $(b_0,\dots,b_{n-1})$ is a permutation of $q_1,\dots,q_n$.
Example 2.12. We continue Examples 2.9 and 2.10 here. Suppose we are given that $n = 4$ and the alcove coordinates $k_{13} = 3$, $k_{12} = 1$, $k_{11} = 1$, $k_{23} = 1$, $k_{22} = 0$, and $k_{33} = 1$. That is,
$$T_R = \begin{array}{ccc} k_{13} & k_{12} & k_{11}\\ k_{23} & k_{22} & \\ k_{33} & & \end{array} = \begin{array}{ccc} 3 & 1 & 1\\ 1 & 0 & \\ 1 & & \end{array}$$
We demonstrate Ψ −1 and calculate (b 0 , b 1 , b 2 , b 3 ) and thereby the 4-core λ. We have q 1 = 0, q 2 = 1, q 3 = 1, and q 4 = 3, and r 1 = 0, from k 13 , k 12 , and k 11 . We must determine r 2 , r 3 , r 4 , a permutation of 1, 2, 3.
Using (2.7), we know Inv(r 4 ) = 2, since k 13 = k 11 +k 23 +1 and k 13 = k 12 +k 33 +1.
$\operatorname{Inv}(r_3) = 0$, since $k_{12}\ne k_{11} + k_{22} + 1$. $\operatorname{Inv}(r_2) = 0$, always. Therefore we have $r_3 = 3$, $r_2 = 2$, and $r_4 = 1$, which means $b_1 = q_4 = 3$, $b_2 = q_2 = 1$, and $b_3 = q_3 = 1$.
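The inversion-table decoding used in this example mechanizes easily. A sketch of $\Psi^{-1}$ (our own encoding; working right to left, $r$ at each position must be exceeded by exactly $\operatorname{Inv}$ of the values still unassigned):

```python
# Sketch of the inverse map: Shi coordinates -> level numbers (b_0,...,b_{n-1}).
def psi_inverse(k, n):
    q = [0] + [k[(1, i)] for i in range(1, n)]        # q_i = k_{1,i-1}, q_1 = 0
    inv = {}
    for j in range(1, n):                             # Inv(r_{j+1}) via (2.7)
        inv[j + 1] = sum(1 for i in range(2, j + 1)
                         if k[(1, j)] == k[(1, i - 1)] + k[(i, j)] + 1)
    remaining, r = list(range(1, n)), {1: 0}
    for pos in range(n, 1, -1):                       # decode the inversion table
        val = sorted(remaining)[len(remaining) - 1 - inv[pos]]
        r[pos] = val
        remaining.remove(val)
    b = [0] * n
    for i in range(1, n + 1):
        p_i = n * q[i - 1] + r[i]                     # p_i = n q_i + r_i
        b[p_i % n] = p_i // n                         # first gap on runner r_i
    return tuple(b)

k = {(1, 3): 3, (1, 2): 1, (1, 1): 1, (2, 3): 1, (2, 2): 0, (3, 3): 1}
print(psi_inverse(k, 4))   # (0, 3, 1, 1), the level numbers of Example 2.9
```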
Remark 2.13. The column (or row) sums of the Shi tableau of an alcove give us a partition whose conjugate is $(n-1)$-bounded, as in the bijections of Lapointe and Morse (2005) or Björner and Brenti (1996).

2.7.2. Geometric description. The bijection $\Phi$ associates an $n$-core to an alcove through the $\widetilde{S}_n$ action described in Sections 2.5.2 and 2.3. The map $\Phi : w\emptyset\mapsto w^{-1}A_0$, for $w\in\widetilde{S}_n$ a minimal length coset representative for $\widetilde{S}_n/S_n$, is a bijection. In Fishel and Vazirani (2010), it is shown that the $m$-minimal alcoves of Shi regions in $S_{n,m}$ correspond, under $\Phi$, to $n$-cores which are also $(nm+1)$-cores.
3. Separating wall $H_{\theta,m}$
Separating walls were defined in Section 2.2 as a wall of a region which separates the region from $A_0$. Equivalently for alcoves, $H_{\alpha,k}$ is a separating wall for the alcove $w^{-1}A_0$ if there is a simple reflection $s_i$, where $0\le i < n$, such that $w^{-1}A_0\subseteq H_{\alpha,k}^+$ and $(s_iw)^{-1}A_0\subseteq H_{\alpha,k}^-$.
We want to count the regions which have H α,m as a separating wall, for any α ∈ ∆ + . We do this by induction and the base case will be α = θ. Our main result in this section characterizes the regions which have H θ,m as a separating wall by describing the n-core partitions associated to them under the bijections Ψ and Φ described in Section 2.7.
Theorem 3.1. Let $\Psi : C_n\to A_n$ be the bijection described in Section 2.7.1, let $R\in S_{n,m}$ have $m$-minimal alcove $A$, and let $\lambda$ be the $n$-core such that $\Psi(\lambda) = A$. Then $H_{\theta,m}$ is a separating wall for the region $R$ if and only if $h^\lambda_{11} = n(m-1)+1$.
Proof. Let $b(\lambda) = (b_0, b_1,\dots,b_{n-1})$ be the vector of level numbers for the $n$-core $\lambda$, so $b_0 = 0$. We first note that $h^\lambda_{11} = \beta_1 = n(m-1)+1$ if and only if $b_1 = m$ and $b_i < m$ for $1 < i\le n-1$.
Now suppose that $H_{\theta,m}$ is a separating wall for the region $R$. Let $\{e_{ij}\}$ be the coordinates of $R$ and let $\{k_{ij}\}$ be the coordinates of $A$. By Lemma 2.6, we know that $e_{1,n-1} = m$ and $e_{1t} + e_{t+1,n-1} = m - 1$ for all $t$ such that $1\le t < n-1$. Therefore for all $e_{ij}$ except $e_{1,n-1}$, we have $e_{ij}\le m-1$, so that $e_{ij} = k_{ij}$. Since $k_{1t} + k_{t+1,n-1}\le k_{1,n-1}\le k_{1t} + k_{t+1,n-1} + 1$, we have that $k_{1,n-1}\le m$, so indeed the Shi coordinates of $R$ are the same as the coordinates of $A$.
Consider the proof of Proposition 2.11, where we describe $\Psi^{-1}$, but in this situation. We see that $\{q_i\}_{2\le i\le n}$, a rearrangement of $(b_1,\dots,b_{n-1})$, is made up of $m$ and $n-2$ nonnegative integers strictly less than $m$. So we need only show that $b_1 = m$, in view of our first remark of the proof. Combining (2.7) with the facts that if $H_{\theta,m}$ is a separating wall for a region then $e_{ij} = k_{ij}$ and, by Lemma 2.6, $k_{1,n-1} = k_{1,i-1} + k_{i,n-1} + 1$ for all $i$ such that $2\le i\le n-1$, we have $\operatorname{Inv}(r_n) = n-2$. This implies that $r_n = 1$, so that $b_1 = q_n = k_{1,n-1} = m$.
Conversely, suppose that $h^\lambda_{11} = n(m-1)+1$, so that $b_1 = m$ and $b_i\le m-1$ for $1 < i\le n-1$. Then $\tilde p_2 = nm+1$ and, for $i\ge 3$, $\tilde p_i = nb_{i-1} + i - 1\le n(m-1) + i - 1\le n(m-1) + n - 1 = nm-1$. Therefore $p_1 = 0$, $p_n = nm+1$, and $p_i\le nm-1$ for $1 < i < n$, so that $q_1 = 0$, $q_n = m$, $r_n = 1$, and $q_i\le m-1$; thus $k_{1,n-1} = m$ and $k_{1,i}\le m-1$ for $i < n-1$. By specializing (2.6) to $j = n-1$, we have
$$(3.1)\qquad k_{i,n-1} = \begin{cases} q_n - q_i & \text{if } r_n > r_i,\\ q_n - q_i - 1 & \text{if } r_n < r_i.\end{cases}$$
Then, since $r_n = 1 < r_i$ for $2\le i\le n-1$, (3.1) gives $k_{i,n-1} = q_n - q_i - 1$, so that $k_{1,i-1} + k_{i,n-1} = q_i + (q_n - q_i - 1) = m - 1$.
Since $k_{ij}\le m$ for $1\le i\le j\le n-1$, $k_{ij} = e_{ij}$ and the conditions in Lemma 2.6 that $H_{\theta,m}$ be a separating wall are fulfilled.
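On the running example the criterion is immediate: for the 4-core $\lambda = (5,2,1,1,1)$ of Examples 2.9, 2.10, and 2.12 we have $h^\lambda_{11} = 9 = 4(3-1)+1$, so for $m = 3$ the hyperplane $H_{\theta,3}$ is a separating wall of the corresponding region (consistent with Lemma 2.6, since $k_{13} = 3$ and $k_{11} + k_{23} = k_{12} + k_{33} = 2$). A tiny sketch (ours):

```python
# Sketch of the Theorem 3.1 criterion: h_{11} = lambda_1 + ell(lambda) - 1.
def first_hook(la):
    return la[0] + len(la) - 1

la, n = (5, 2, 1, 1, 1), 4
h = first_hook(la)                     # 9
if h % n == 1:                         # h = n(m - 1) + 1 for some m >= 1
    print("separating wall H_(theta,m) for m =", (h - 1) // n + 1)   # m = 3
```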
We can also look at the regions which have H θ,m as a separating wall in terms of the geometry directly. Theorem 3.5 is an alternate version of Theorem 3.1.
Proposition 3.2. Let $\lambda$ be an $n$-core and $w\in\widetilde{S}_n/S_n$ be of minimal length such that $\lambda = w\emptyset$. Let $k = \lambda_1 + \frac{n-1}{2}$. Let $\gamma = \alpha_{1,n-1} + \alpha_{2,n-1} + \cdots + \alpha_{n-1,n-1}$.
(1) Then the affine hyperplane $H_{\gamma,k}$ passes through the corresponding alcove $w^{-1}A_0$. More precisely, $\langle w^{-1}(\frac{1}{n}\rho)\mid\gamma\rangle = k$.
(2) Then the affine hyperplane $H_{\gamma,\lambda_1}$ passes through the corresponding alcove $w^{-1}A_0$. More precisely, $\langle w^{-1}(\Lambda_r)\mid\gamma\rangle = \lambda_1$, where $r\equiv\lambda_1\bmod n$.
Proof. First, recall that $\rho = \frac{1}{2}\sum_{\alpha\in\Delta^+}\alpha = (\frac{n-1}{2}, \frac{n-1}{2}-1,\dots,\frac{1-n}{2})$. Hence $\frac{1}{n}\rho\in A_0$ and so $w^{-1}(\frac{1}{n}\rho)\in w^{-1}A_0$. Let $\eta = \sum_i\varepsilon_i$. Recall $V = \eta^\perp$, as for all $(a_1,\dots,a_n)\in V$ we have $\sum_i a_i = 0$. Observe that for all $v\in V$, $\langle v\mid\gamma\rangle = \langle v\mid\eta - n\varepsilon_n\rangle = \langle v\mid -n\varepsilon_n\rangle$. So it suffices to show $\langle w^{-1}(\frac{1}{n}\rho)\mid\varepsilon_n\rangle = -\frac{k}{n}$. Recall we may write $w = t_\beta u$ where $\beta\in Q$ and $u\in S_n$, and where $t_\beta$ is translation by $\beta$. Please see Humphreys (1990) for details. Then $w^{-1} = u^{-1}t_{-\beta} = t_{u^{-1}(-\beta)}u^{-1}$ satisfies $u^{-1}(-\beta)\in Q^+$.
Write $\lambda_1 = nq - (n-r)$ with $0\le n-r < n$. Then $1\le r\le n$, $q = \lceil\frac{\lambda_1}{n}\rceil$, and $r\equiv\lambda_1\bmod n$. Let $a_i$ be the level of the first gap in runner $i$ of the balanced abacus diagram for $\lambda$ and write $\vec n(\lambda) = (a_1, a_2,\dots,a_n)$. It is worth noting that $\vec n(\lambda) = w(0,\dots,0)$. By (Berg et al., 2009, Prop 3.2.13), the largest entry of $\vec n(\lambda)$ is $a_r = q$ and the rightmost occurrence of $q$ occurs in the $r$th position. Hence the smallest entry of $-\vec n(\lambda)$ is $-q$ and its rightmost occurrence is also in position $r$. Since $u^{-1}\in S_n$ is of minimal length such that $u^{-1}(-\vec n(\lambda))\in Q^+$, we have that $u^{-1}(\varepsilon_r) = \varepsilon_n$. Now we compute
$$\begin{aligned}
\langle w^{-1}(\tfrac{1}{n}\rho)\mid\varepsilon_n\rangle &= \langle t_{u^{-1}(-\vec n(\lambda))}u^{-1}(\tfrac{1}{n}\rho)\mid\varepsilon_n\rangle = \langle u^{-1}(-\vec n(\lambda))\mid\varepsilon_n\rangle + \langle u^{-1}(\tfrac{1}{n}\rho)\mid\varepsilon_n\rangle\\
&= \langle u^{-1}(-\vec n(\lambda))\mid\varepsilon_n\rangle + \tfrac{1}{n}\langle\rho\mid u(\varepsilon_n)\rangle = -q + \tfrac{1}{n}\langle\rho\mid\varepsilon_r\rangle = -q + \tfrac{1}{n}\left(\tfrac{n-1}{2} - (r-1)\right)\\
&= -\tfrac{1}{n}\left(nq - (n-r) + \tfrac{n-1}{2}\right) = -\tfrac{1}{n}\left(\lambda_1 + \tfrac{n-1}{2}\right) = -\tfrac{k}{n}.
\end{aligned}$$
For the second statement, note the fundamental weight Λ j ∈ V has coordinates given by
Λ j = 1 n ((n − j)(ε 1 + · · · + ε j ) − j(ε j+1 + · · · + ε n )).
So Λ j ∈ H αi,0 for i = j, Λ j ∈ H αj ,1 , and the {Λ j | 1 ≤ j ≤ n} ∪ {0} are precisely the vertices of A 0 . For the notational consistency of this statement and others below, we will adopt the convention that Λ 0 = 0 (which is consistent with considering 0 ∈ H θ,0 = H α0,1 ). Hence we have that w −1 (Λ j ) ∈ w −1 A 0 .
As above we compute
− 1 n w −1 (Λ r ) | γ = w −1 (Λ r ) | ε n = −q + Λ r | ε r = −q + 1 n (n − r) = − 1 n (nq − (n − r)) = − 1 n (λ 1 ).
Proposition 3.3. Let λ be an n-core and w ∈ S n /S n be of minimal length such that λ = w∅. Let K = ℓ(λ) + n−1 2 . Let Γ = α 1,n−1 + α 1,n−2 + · · · + α 1,1 . (1) Then the affine hyperplane H Γ,K passes through the corresponding alcove w −1 A 0 . More precisely, w −1 ( 1 n ρ) | Γ = K.
(2) Then the affine hyperplane H Γ,ℓ(λ) passes through the corresponding al-
cove w −1 A 0 . More precisely, w −1 (Λ s−1 ) | Γ = ℓ(λ), where 1 − s ≡ ℓ(λ) mod n.
Proof. First note v | Γ = v | nε 1 for all v ∈ V , so it suffices to compute w −1 ( 1 n ρ) | nε 1 . Next, note Γ = nε 1 .
Write ℓ(λ) = nM + (1 − s) with 1 ≤ s ≤ n, so −M = − ℓ(λ) n . By Berg et al. (2009), the smallest entry of n(λ) = (a 1 , a 2 , . . . , a n ) is a s = −M and the leftmost occurrence of −M occurs in the s th position. Hence the largest entry of − n(λ) is M and its leftmost occurrence is also in position s. Then for u as above, it is clear u(ε 1 ) = ε s . So, by a similar computation as above,
w −1 ( 1 n ρ) | Γ = n w −1 ( 1 n ρ) | ε 1 = n u −1 (− n(λ)) | ε 1 + n u −1 ( 1 n ρ) | ε 1 = nM + ρ | ε s = nM + n − 1 2 − (s − 1) = ℓ(λ) + n − 1 2 = K. Likewise, w −1 (Λ s−1 ) | ε 1 = M + Λ s−1 | ε s = M + 1 n (−(s − 1)) = 1 n (nM + (1 − s)) = 1 n ℓ(λ).
Taking subscripts mod n we have w −1 (Λ λ1 ) | γ = λ 1 and w −1 (Λ −ℓ(λ) ) | Γ = ℓ(λ).
Corollary 3.4. n w −1 ( 1 n ρ) | θ = λ 1 + ℓ(λ) + n − 1. Note that when λ = ∅, the above quantity is h λ 11 + n where h λ 11 is the hooklength of the first box. (One could also set h ∅ 11 = −1.) Theorem 3.5. Let Φ : C n → A n be the bijection described in Section 2.7.2, let R ∈ S n,m have m-minimal alcove A, and let λ be the n-core such that Φ(λ) = A. Then H θ,m is a separating wall for the region R if and only if h λ 11 = n(m − 1) + 1. Proof. Let r, s, q, M , and w be as in Propositions 3.2 and 3.3. Suppose that H θ,m is a separating wall for R and let i be such that w −1 A 0 ⊆ H θ,m + and w −1 s i A 0 ⊆ H θ,m − . Recall Λ j ∈ H αi,0 for all j = i, and Λ i ∈ H αi,1 . Hence w −1 (Λ j ) ∈ H θ,m but w −1 (Λ i ) ∈ H θ,m+1 . In fact, this configuration of vertices characterizes separating walls. Note
(3.2) Λ j | ε s − ε r = 1 if s ≤ j < r −1 if s > j ≥ r 0 else.
By Propositions 3.2 and 3.3, w −1 (Λ j ) | θ = M + q + Λ j | ε s − ε r . Because H θ,m is a separating wall, this yields M + q + Λ j | ε s − ε r = m + δ i,j . We must consider two cases. First, M + q = m and Λ j | ε s − ε r = δ i,j . In other words, by (3.2) s ≤ j < r implies j = i. More precisely, r − s = 1, s = i, and ε s − ε r = α i . In the second case, M + q − 1 = m and Λ j | ε s − ε r = δ i,j − 1. In other words, s > j ≥ r for all 1 ≤ j < n (and recall Λ 0 | ε s − ε r = 0 | ε s − ε r = 0). More precisely, r − s = 1 − n and ε s − ε r = −θ.
Putting this all together for λ = ∅,
h λ 11 = ℓ(λ) + λ 1 − 1 = (nM + 1 − s) + (nq − (n − r)) − 1 = n(M + q) − n + (r − s) = nm − n + 1 if ε s − ε r = α i n(m + 1) − n + (1 − n) if ε s − ε r = −θ = n(m − 1) + 1.
Conversely, if h λ 11 = n(m − 1) + 1, then by the computation above n(M + q) − n + (r − s) = nm − n + 1, which forces n(M + q − 1 − m + 1) = 1 + s − r. Note 2−n ≤ 1+s−r ≤ n. If 1+s−r < n, divisibility forces 0 = 1+s−r = M +q −m. In other words, ε s − ε r = α i for i = s, and we compute as above that w −1 (Λ j ) | θ = M + q + δ i,j showing H θ,m is a separating wall. If instead 1 + s − r = n, this forces M + q − m = 1 and ε s − ε r = θ. Hence w −1 (Λ j ) | θ = M + q − 1 = m for all j < n, but w −1 (0) | θ = M + q = m + 1, so that H θ,m is again a separating wall for w −1 A 0 .
As a side note, similar calculations show that h λ 11 = n(m − 1) − 1 if and only if either M + q = m and r − s = −1, or M + q = m − 1 and r − s = n − 1. In both cases H θ,m will not be a separating wall for w −1 A 0 , but will be a separating wall for w −1 s i A 0 where i = s − 1. One vertex of w −1 A 0 lies in H θ,m−1 and the rest in H θ,m .
Generating functions
We use h n αk to denote the set of regions in S n,m which have H α,k as a separating wall. See Figure 3. In the language of Athanasiadis (2005), these are the regions whose corresponding co-filtered chain of ideals have α as an indecomposable element of rank k. In this section, we present a generating function for regions in h n αk . In Section 5, we discuss a recursion for regions. The recursion is found by adding all possible first columns to Shi tableaux for regions in S n−1,m to create all Shi tableaux for regions in S n,m . The generating function keeps track of the possible first columns and rows. We use two statistics r() and c() on regions in the extended Shi arrangement. Let R ∈ S n,m and define r(R) = |{(j, k) : R and A 0 are separated by H α1j ,k and 1 ≤ k ≤ m}| and c(R) = |{(i, k) : R and A 0 are separated by H αin−1,k and 1 ≤ k ≤ m}|. r(R) counts the number of translates of H α1j ,0 which separate R from A 0 , for 1 ≤ j ≤ n − 1. Similarly for c(R) and translates of H αi,n−1,0 .
H α 1 ,0 H α 2 ,0 H α 1 ,1 H α 2 ,1 H α 12 ,1 H α 1 ,2 H α 2 ,2 H α 12 ,2
The generating function is
F n αij m (p, q) = R∈h n α ij m p c(R) q r(R) .
Example 4.1. F 3 α12 (p, q) = p 4 q 2 + p 4 q 3 + p 4 q 4 .
We let [k] p,q = k−1 j=0 p j q k−1−j and [k] q = [k] 1,q . We will also need to truncate polynomials and the notation we use for that is
j=n j=0 a j q j ≤q N = j=N j=0 a j q j .
The statistics are related to the n-core partition assigned by Ψ to the m-minimal alcove for the region. The second part of the claim follows since c(R) = k 1,n−1 + k 2,n−1 + . . . + k n−1,n−1 and k i,n−1 = (m − 1) − k 1,i−1 for R ∈ h n θm and 2 ≤ i ≤ n − 1. We can also relate the statistics r() and c() to the n-core partition corresponding under Φ to the m-minimal alcove of the region R.
For now, let k w,α be the Shi coordinate of w −1 A 0 . Note k w,α < w −1 ( 1 n ρ) | α < k w,α + 1, so k w,α = w −1 ( 1 n ρ) | α .
Proposition 4.3. Let λ be an n-core and w ∈ S n /S n be of minimal length such that λ = w∅.
(1) Then i k w,αi,n−1 = λ 1 .
(2) Then j k w,α1,j = ℓ(λ).
Proof. Consider
w −1 ( 1 n ρ) | α i,n = u −1 (−β) | α i,n + u −1 ( 1 n ρ) | α i,n = u −1 (−β) | α i,n + 1 n ρ | ε u(i) − ε u(n) . = u −1 (−β) | α i,n + 1 n ρ | ε u(i) − ε r . Note u −1 (−β) | α i,n ∈ Z and 1 n ρ | ε u(i) − ε r = 0 if u(i) < r, but 1 n ρ | ε u(i) − ε r = −1 if u(i) > r. Hence i u −1 ( 1 n ρ) | α i,n = −(n − r).
We then compute
i k w,αi,n = r − n + i u −1 (−β) | α i,n = r − n + u −1 (−β) | γ = r − n + qn = λ 1
by the computations in the proof of Proposition 3.2. Likewise, u −1 ( 1 n ρ) | α 1,j = 1 n ρ | ε s − ε u(j) = −1 if u(j) < s and zero otherwise. As above,
j k w,α1,j = −(s − 1) + j u −1 (−β) | α 1,j = 1 − s + u −1 (−β) | Γ = 1 − s + M n = ℓ(λ)
by the computations in the proof of Proposition 3.3.
We thus obtain another corollary to Theorem 3.1. Proof. Corollary 4.4 follows from Theorem 3.1 or 3.5, Proposition 4.2, and the abacus representation of n-cores which have the prescribed hook length.
F n θ,m (p, q) = R∈h n θm p c(R) q r(R) = λ is an n−core h λ 11 =n(m−1)+1 p m+ n−1 i=2 bi q m+ n−1 i=2 (m−1−bi) = (b2,...,bn−1) 0≤bi≤m−1 p m q m n−1 i=2 p bi q m−1−bi = p m q m (p m−1 + p m−2 q + · · · + pq m−2 + q m−1 ) n−2 = p m q m [m] n−2 p,q .
In particular, by evaluating at p = q = 1, we have the following corollary to Corollary 4.4. There are direct explanations for Corollary 4.5, but we need Theorem 3.1, Theorem 3.5, and Corollary 4.4 to develop our recursions, where we need to know more than the number of regions which have H θ,m as a separating wall. We use the number of hyperplanes which separate each region from the origin.
Arbitrary separating wall
The next few lemmas provide an inductive method for determining whether or not R ∈ S n,m is an element of h n α2,n−1m . Given a Shi tableau T R = {e ij } 1≤i≤j≤n−1 , where R ∈ S n,m , letT R be the tableau with entries {e ij } 1≤i≤j≤n−2 . That is,T R is T R with the first column removed. The next lemma tells us thatT R is always the Shi tableau for a region in one less dimension.
Lemma 5.1. If T R is the tableau of a region R ∈ S n,m and 1 ≤ u ≤ v ≤ n − 1, thenT R = TR for someR ∈ S n−1,m .
Proof. This follows from Lemma 2.5.
Lemma 5.2. Let T R be the Shi tableau for the region R ∈ S n,m and letR be defined by TR =T R , whereR ∈ S n−1,m by Lemma 5.1. Then R ∈ h n αi,n−2m if and only if R ∈ h n−1 αi,n−2m .
Proof. This follows from Lemma 2.6.
In terms of generating functions, Lemma 5.2 states:
F n αi,n−2m (p, q) = R∈h n α i,n−2 m p c(R) q r(R) (5.1) = R1∈h n−1 α i,n−2 m R∈S n,m R=R1 p c(R) q r(R)
If R 1 ∈ h n−1 αi,n−n and R ∈ S n,m are such thatR = R 1 , then, since e i,n−2 = m in the Shi tableau for R 1 , r(R) = r(R 1 ) + m and c(R) = c(R 1 ) + k, for some k. We need to establish the possible values for k.
We will use Proposition 3.5 from Richards (1996) to do this. His "pyramids" correspond to our Shi tableaux for regions, with his e and w being our n and m + 1. He does not mention hyperplanes, but with the conversion u a v = m − e u+1,v his conditions in Proposition 3.4 become our conditions in Lemma 2.5.
In our language, his Proposition 3.5 becomes Lemma 5.3 (Richards (1996)). Let µ 1 , µ 2 , . . . , µ n be non-negative integers with
µ 1 ≥ µ 2 ≥ . . . ≥ µ n = 0 and µ i ≤ (n − i)m.
Then there is a unique region R ∈ S n,m with Shi tableau T R = {e ij } 1≤i≤j≤n−1 such that
µ j = µ j (R) = n−j i=1 e i,n−j for 1 ≤ j ≤ n − 1
We include his proof for completeness.
Proof. By Lemma 2.5, we have e ij ≥ e i+1,j and e ij ≥ e i,j−1 for 1 ≤ i < j ≤ n − 1, which, combined with 0 ≤ e ij ≤ m, means that the column sums µ j = j i=1 e ij form a partition such that 0 ≤ µ j ≤ m(n − j).
We use induction on n to show that given such a partition µ, there is at most one region whose Shi tableau has column sums µ. It is clearly true for n = 2. Let n > 2 and suppose we had two regions R 1 with coordinates {e ij } 1≤i≤j≤n−1 and R 2 with coordinates {f ij } 1≤i≤j≤n−1 such that
µ j = j i=1 e ij = j i=1 f ij .
By induction e ij = f ij for 1 ≤ i ≤ j ≤ n − 2. Let u be the least index such that e u,n−1 = f u,n−1 and assume e u,n−1 < f u,n−1 . Then since
n−1 i=1 e i,n−1 = n−1 i=1 f i,n−1 , we have that e v,n−1 > f v,n−1 for some v such that u < v ≤ n − 1.
Then since f u,n−1 ≤ f u,v−1 + f v,n−1 + 1 by Lemma 2.5 and f u,v−1 = e u,v−1 by induction, we have e u,n−1 < f u,n−1 ≤ e u,v−1 + f v,n−1 + 1 ≤ e u,v−1 + e v,n−1 .
This contradicts Lemma 2.5 applied to R 1 .
However, there are 1 mn+1 (m+1)n n dominant Shi regions by Shi (1997) for m = 1 and Athanasiadis (2004) for m > 1 and it is well-known that there are also 1 mn+1 (m+1)n n partitions µ such that 0 ≤ µ i ≤ m(n − i), so we are done.
Example 5.4. Consider R 1 , R 2 , and R 3 in S 3,2 with tableaux 2 2 1 2 2 2 2 2 1 2 2 1 2 2 1 2 2 0 respectively. ThenR 1 =R 2 =R 2 = R, where R is the region in S 2,2 with tableau 2 1 2 Let α = α ij , where 1 ≤ i ≤ j ≤ n − 2 in the following. Suppose R 1 is a region, where R 1 ∈ S n−1,m and T R1 = {e ij } 1≤i≤j≤n−2 , and k is an integer such that n−2 i=1 e i,n−2 ≤ k ≤ (n − 1)m. Then Lemma 5.3 means there is a region R ∈ S n,m such thatR = R 1 and the first column sum of R's Shi tableau is k. Additionally, by Lemma 5.2, we have R 1 ∈ h n−1 αm if and only if R ∈ h n αm . On the other hand, given R ∈ S n,m with Shi tableau T R = {e ij } 1≤i≤j≤n−1 , let k be the first column sum of T R . Then by Lemma 5.1 and the fact that e i,n−1 ≥ e i,n−2 for 1 ≤ i ≤ n − 2, the pair (R, k) is such thatR ∈ S n−1,m and the first column sum of TR is not more than k. Again, by Lemma 5.2, we haveR ∈ h n−1 αm if and only if R ∈ h n αm . We continue (5.1), keeping in mind that c(R 1 ) is the first column sum for T R1 . For ease of reading, write α for α i,n−2 in the following calculation. The result of the above calculation is that (5.2) F n αm (p, q) = q m [(n − 2)m + 1] p F n−1 αm (p, q) ≤p (n−1)m when α = α i,n−2 . The next proposition will provide a method for determining whether or not H α1n−j ,m is a separating wall for R. Given a Shi tableau T = {e ij } 1≤i≤j≤n−1 for a region in S n,m , let T ′ be its conjugate given by T ′ = {e ′ ij } 1≤i≤j≤n−1 , where e ′ ij = e n−j,n−i . By Lemma 2.5, T ′ will also be Shi tableau of a region in S n,m . Additionally, by Lemma 2.6, we have the following proposition.
Proposition 5.5. Suppose the regions R and R ′ are related by
(T R ) ′ = T R ′ .
Then R ∈ h n αij m if and only if R ′ ∈ h n αn−j,n−im .
In terms of generating functions, this becomes the following:
(5.3) F n αij m (p, q) = F n αn−j,n−im (q, p).
We will now combine Theorem 3.1, Proposition 5.2, and Proposition 5.5 to produce an expression for the generating function for regions with a given separating wall.
Given a polynomial f (p, q) in two variables, let φ k,m (f (p, q)) be the polynomial (q m [m(k − 2) + 1] p f (p, q)) ≤p (k−1)m .
We define the polynomial ρ(f ) by ρ(f )(p, q) = f (q, p).
Then (5.2) is
F n αij m (p, q) = φ n,m (F n−1 αij m (p, q)) for j = n − 2 and (5.3) is F n αij m (p, q) = ρ(F n αn−j,n−im (p, q)).
Finally, the full recursion is
Theorem 5.6. The idea behind the theorem is that, given a root α uv in dimension n − 1, we remove columns using Lemma 5.3 until we are in dimension (v + 1) − 1, then we conjugate, then remove columns again until our root is α 1,v−u+1 and we are in dimension (v − u + 2) − 1.
Example 5.7. We would like to know how many elements there are in h 7 α242 ; that is, how many dominant regions in the 2-Shi arrangement for n = 7 have H α24,2 as a separating wall. In order to make this readable, we omit the m subscript, since it is always 2 in this calculation. After expanding this polynomial and evaluating at p = q = 1, we see there are 781 regions in the dimension 7 2-Shi arrangement which have H α24,2 as a separating wall.
Future work
It would be interesting to expand this problem by considering a given set of more than one separating walls. That is, given a set H ∆ ′ = {H α,m , α ∈ ∆ ′ ⊆ ∆} of hyperplanes in the Shi arrangement, find the number of regions having all the hyperplanes in H ∆ ′ as separating walls.
We would again be able to define a similar generating function, use the functions φ k,m and ρ corresponding to truncation and conjugation of the Shi tableaux, but we should be able to compute the generating function for a suitably chosen base case.
Example 2. 3 .
3For n = 5, the coordinates are arranged k 14 k 13 k 12 k 11 k 24 k 23 k 22 k 34 k 33 k 44 e 14 e 13 e 12 e 11 e 24 e 23 e 22 e 34 e 33 e 44
Figure 1 .
1S 3,2 consists of 12 regions Lemma 2.5. Let T = {e ij } 1≤i≤j≤n−1 be a collection of integers such that 0 ≤ e ij ≤ m. Then T is the Shi tableau for a region R ∈ S n,m if and only if (2.1) e ij = e it + e t+1,j or e it + e t+1,j + 1 if m − 1 ≥ e it + e t+1,j for t = i, . . . , j − 1 m otherwise Proof. Athanasiadis (2005) defined co-filtered chains of ideals as decreasing chains of ideals in the root poset
Figure 2 .
2The abacus represents the 4-core λ.
Figure 3 .
3There are three regions in h 3 α 1 2
Proposition 4 . 2 .
42Let λ be an n-core with vector of level numbers (b 0 , . . . , b n−1 ) and suppose Ψ(λ) = R and R ∈ h n θm . Then r(R) = m + m − 1 − b i ) = λ 1 . Proof. Let λ, (b 0 , . . . , b n−1 ), and R be as in the statement of the claim. Let {e ij } be the region coordinates for R and {k ij } be the coordinates of R's m-minimal alcove, and let {p i } and {p i } be as in the definition of Ψ. Then r(R) = e 1,n−1 + e 1,n−2 + . . . + e 11 = k 1,n−1 + k 1,n−2 + . . . + k 11 = ⌊ p n n ⌋ + . . .
F
n θ,m (p, q) = p m q m [m] n−2 p,q .
Corollary 4 . 5 .
45There are m n−2 regions in S n,m which have H θ,m = H α1n−1,m as a separating wall.
m [(n − 2)m + 1] p F n−1 αm (p, q) ≤p (n−1)m .
F
n αuv m (p, q) = φ n,m (φ n−1,m (. . . φ v+2,m (ρ(φ v+1,m (. . . (φ v−u+3,m (p m q m [m] v−u p,q ) . . .).
F
7 α242 (p, q) = q 2 [11] p F 6 α24 (p, q) ≤p 12 = q 2 [11] p q 2 [9] p F 5 α24 (p, q) ≤p 10 ≤p 12 = q 2 [11] p q 2 [9] p F 5 α13 (q, p) ≤p 10 ≤p 12 = q 2 [11] p p 2 [9] p q 2 [7] q F 4 α13 (q, p) ≤q 8 ≤q 10 ≤q 12 = q 2 [11] p p 2 [9] p q 2 [7] q p 2 q 2 [2] 2p,q ≤q 8 ≤p 10 ≤p 12
AcknowledgementWe thank Matthew Fayers for telling us ofRichards (1996)and explaining its relationship to Fishel and Vazirani (2010). We thank Alessandro Conflitti for simplifying the proof of Proposition 2.11. We thank the referee for comments which helped us improve the exposition.
Generalized Catalan numbers, Weyl groups and arrangements of hyperplanes. C A Athanasiadis, 0024-6093Bull. London Math. Soc. 363C. A. Athanasiadis. Generalized Catalan numbers, Weyl groups and arrangements of hyperplanes. Bull. London Math. Soc., 36(3):294-302, 2004. ISSN 0024-6093.
On a refinement of the generalized Catalan numbers for Weyl groups. C A Athanasiadis, 0002- 9947Trans. Amer. Math. Soc. 3571C. A. Athanasiadis. On a refinement of the generalized Catalan numbers for Weyl groups. Trans. Amer. Math. Soc., 357(1):179-196 (electronic), 2005. ISSN 0002- 9947.
A bijection on core partitions and a parabolic quotient of the affine symmetric group. C Berg, B Jones, M Vazirani, 0097-3165J. Combin. Theory Ser. A. 1168C. Berg, B. Jones, and M. Vazirani. A bijection on core partitions and a parabolic quotient of the affine symmetric group. J. Combin. Theory Ser. A, 116(8):1344- 1360, 2009. ISSN 0097-3165.
Research Paper 18, approx. 35 pp. (electronic), 1996. The Foata Festschrift. S. Fishel and M. Vazirani. A bijection between dominant Shi regions and core partitions. A Björner, F Brenti, 10.1016/j.ejc.2010.05.014European J. Combin. 32J. Combin.A. Björner and F. Brenti. Affine permutations of type A. Electron. J. Combin., 3 (2):Research Paper 18, approx. 35 pp. (electronic), 1996. The Foata Festschrift. S. Fishel and M. Vazirani. A bijection between dominant Shi re- gions and core partitions. European J. Combin., 31(8):2087-2101, 2010. ISSN 0195-6698. doi: 10.1016/j.ejc.2010.05.014. URL http://dx.doi.org/10.1016/j.ejc.2010.05.014.
Reflection groups and Coxeter groups. J E Humphreys, ISBN 0-521-37510-XCambridge Studies in Advanced Mathematics. 29Cambridge University PressJ. E. Humphreys. Reflection groups and Coxeter groups, volume 29 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 1990. ISBN 0-521-37510-X.
Enumeration of ad-nilpotent b-ideals for simple Lie algebras. C Krattenthaler, L Orsina, P Papi, Special issue in memory of Rodica Simion. 28C. Krattenthaler, L. Orsina, and P. Papi. Enumeration of ad-nilpotent b-ideals for simple Lie algebras. Adv. in Appl. Math., 28(3-4):478-522, 2002. Special issue in memory of Rodica Simion.
Tableaux on k + 1-cores, reduced words for affine permutations, and k-Schur expansions. L Lapointe, J Morse, 0097-3165J. Combin. Theory Ser. A. 1121L. Lapointe and J. Morse. Tableaux on k + 1-cores, reduced words for affine permu- tations, and k-Schur expansions. J. Combin. Theory Ser. A, 112(1):44-81, 2005. ISSN 0097-3165.
Ordering the affine symmetric group. A Lascoux, Algebraic combinatorics and applications (Gößweinstein, 1999). BerlinSpringerA. Lascoux. Ordering the affine symmetric group. In Algebraic combinatorics and applications (Gößweinstein, 1999), pages 219-231. Springer, Berlin, 2001.
Crystal base for the basic representation of U q (sl(n)). K Misra, T Miwa, K. Misra and T. Miwa. Crystal base for the basic representation of U q (sl(n)).
. 0010-3616Comm. Math. Phys. 1341Comm. Math. Phys., 134(1):79-88, 1990. ISSN 0010-3616.
Some decomposition numbers for Hecke algebras of general linear groups. M J Richards, 10.1017/S0305004100074296Math. Proc. Cambridge Philos. Soc. 1193M. J. Richards. Some decomposition numbers for Hecke algebras of general linear groups. Math. Proc. Cambridge Philos. Soc., 119(3):383-402, 1996. ISSN 0305- 0041. doi: 10.1017/S0305004100074296.
The Kazhdan-Lusztig cells in certain affine Weyl groups. J Y Shi, 3-540-16439- 1Lecture Notes in Mathematics. 1179Springer-VerlagJ. Y. Shi. The Kazhdan-Lusztig cells in certain affine Weyl groups, volume 1179 of Lecture Notes in Mathematics. Springer-Verlag, Berlin, 1986. ISBN 3-540-16439- 1.
Alcoves corresponding to an affine Weyl group. J Y Shi, 0024-6107J. London Math. Soc. 352J. Y. Shi. Alcoves corresponding to an affine Weyl group. J. London Math. Soc. (2), 35(1):42-55, 1987a. ISSN 0024-6107.
Sign types corresponding to an affine Weyl group. J Y Shi, 0024-6107J. London Math. Soc. 352J. Y. Shi. Sign types corresponding to an affine Weyl group. J. London Math. Soc. (2), 35(1):56-74, 1987b. ISSN 0024-6107.
The number of ⊕-sign types. J.-Y Shi, 10.1093/qmath/48.1.93Quart. J. Math. Oxford Ser. 482J.-Y. Shi. The number of ⊕-sign types. Quart. J. Math. Oxford Ser. (2), 48 (189):93-105, 1997. ISSN 0033-5606. doi: 10.1093/qmath/48.1.93. URL http://dx.doi.org/10.1093/qmath/48.1.93.
Hyperplane arrangements, parking functions and tree inversions. R P Stanley, Mathematical essays in honor of Gian-Carlo Rota. Cambridge, MA; Boston, MA161Progr. Math.R. P. Stanley. Hyperplane arrangements, parking functions and tree inversions. In Mathematical essays in honor of Gian-Carlo Rota (Cambridge, MA, 1996), volume 161 of Progr. Math., pages 359-375. Birkhäuser Boston, Boston, MA, 1998.
| [] |
[
"Squeezed primordial bispectrum from general vacuum state",
"Squeezed primordial bispectrum from general vacuum state"
] | [
"Jinn-Ouk Gong \nAsia Pacific Center for Theoretical Physics\n790-784PohangKorea\n\nDepartment of Physics\n790-784PostechPohangKorea\n",
"Misao Sasaki \nYukawa Institute for Theoretical Physics\nKyoto University\n606-8502KyotoJapan\n"
] | [
"Asia Pacific Center for Theoretical Physics\n790-784PohangKorea",
"Department of Physics\n790-784PostechPohangKorea",
"Yukawa Institute for Theoretical Physics\nKyoto University\n606-8502KyotoJapan"
] | [] | We study the general relation between the power spectrum and the squeezed limit of the bispectrum of the comoving curvature perturbation produced during single-field slow-roll inflation when the initial state is a general vacuum. Assuming the scale invariance of the power spectrum, we derive a formula for the squeezed limit of the bispectrum, represented by the parameter f NL , which is not slow-roll suppressed and is found to contain a single free parameter for a given amplitude of the power spectrum. Then we derive the conditions for achieving a scale-invariant f NL , and discuss a few examples. | 10.1088/0264-9381/30/9/095005 | [
"https://arxiv.org/pdf/1302.1271v2.pdf"
] | 119,276,954 | 1302.1271 | c1b06ad7d1069640e014078ff3226907cf8d6d5a |
Squeezed primordial bispectrum from general vacuum state
30 Mar 2013
Jinn-Ouk Gong
Asia Pacific Center for Theoretical Physics
790-784PohangKorea
Department of Physics
790-784PostechPohangKorea
Misao Sasaki
Yukawa Institute for Theoretical Physics
Kyoto University
606-8502KyotoJapan
Squeezed primordial bispectrum from general vacuum state
30 Mar 2013(Dated: April 2, 2013)arXiv:1302.1271v2 [astro-ph.CO]
We study the general relation between the power spectrum and the squeezed limit of the bispectrum of the comoving curvature perturbation produced during single-field slow-roll inflation when the initial state is a general vacuum. Assuming the scale invariance of the power spectrum, we derive a formula for the squeezed limit of the bispectrum, represented by the parameter f NL , which is not slow-roll suppressed and is found to contain a single free parameter for a given amplitude of the power spectrum. Then we derive the conditions for achieving a scale-invariant f NL , and discuss a few examples.
and used in observation. Among them, a particularly useful one is the squeezed configuration, where one of three momenta is much smaller than the others, e.g. k 1 ≈ k 2 ≫ k 3 . A dominant source of non-Gaussianity for this configuration is the so-called local one, where the curvature perturbation is locally expanded as [5,6] R(x) = R g (x) + 3 5
f NL R 2 g (x) + · · · ,(1)
where the subscript g denotes the dominant Gaussian component. The coefficient f NL determines the size of non-Gaussianity in the bispectrum.
An important prediction of single-field slow-roll inflation is that, in the squeezed limit, f NL is proportional to the spectral index n R − 1 of the power spectrum [6,7], and is thus too small to be observed. This relation holds irrespective of the detail of models and is usually called the consistency relation. Thus, barring the possibility of features that correlate the power spectrum and f NL [8], it has been widely claimed that any detection of the local non-Gaussianity would rule out all single field inflation models. However, it is based on two assumptions. First, the curvature perturbation is frozen outside the horizon and does not evolve. That is, only one growing mode is relevant on super-horizon scales. Indeed, it is possible to make use of the constancy of R to extract only a few relevant terms in the cubic order Lagrangian to simplify considerably the calculation of the squeezed bispectrum, and to confirm the consistency relation [9]. If we abandon this assumption, the usual consistency relation does not hold any longer [10].
Another assumption is that deep inside the horizon interactions are negligible and the state approaches the standard Fock vacuum in the Minkowski space, so-called the Bunch-Davies (BD)
vacuum. If this assumption does not hold, the corresponding bispectrum may be enhanced in the folded limit [11], in particular in the squeezed limit [12,13]. Thus, the usual consistency relation may not hold. See also [14], where the violation of the tree level consistency relation is discussed together with the infrared divergence in the power spectrum from one-loop contributions for non-BD initial states. However, in the previous studies the relation between the power spectrum and bispectrum was unclear and f NL was not easily readable [12], or case studies on specific models were carried out [13]. It is then of interest to make a closer and more explicit study on the general relation between the power spectrum and the squeezed limit of the bispectrum.
In this article, we compute the squeezed limit of the bispectrum when the initial state is not the BD vacuum and study the relation between the squeezed limit of the primordial bispectrum, described by the non-linear parameter f NL and the power spectrum. We find indeed f NL can be significantly large, but its momentum dependence is in general non-trivial. We then discuss the condition for f NL to be momentum-independent, thus exactly mimics the local form (1).
Before proceeding to our analysis, let us make a couple of comments. First, we note that the squeezed limit does not necessarily mean the exact limit of a squeezed triangle in the momentum space. It includes the case when the wavenumber of the squeezed edge of a triangle is smaller than that of the observationally smallest possible wavenumeber, i.e. that corresponds to the current Hubble parameter. In the context of (1), it needs to be valid only over the region covering our current Hubble horizon size. Second, in our analysis we focus only on the squeezed limit of the bispectrum and its relation to the power spectrum. However, if a large f NL that mimics the local form of the non-Gaussianity is generated, we may also have the bispectrum with a non-negligible amplitude in some other shapes of the triangle [12,13]. This may be an interesting issue to be studied, but it is out of the scope of this work.
II. BISPECTRUM IN SINGLE-FIELD SLOW-ROLL INFLATION
For general single-field inflation, the equation of motion of the comoving curvature perturabation R is given by [15]
z 2 R ′ k ′ + c 2 s k 2 z 2 R k = 0 ,(2)
where a prime denotes a derivative with respect to the conformal time dτ = dt/a, ǫ ≡ −Ḣ/H 2 and z 2 ≡ 2m Pl a 2 ǫ/c 2 s and c s is the speed of sound. From (2), we can see that irrespective of the detail of the matter sector, a constant solution of R k always exists on super-sound-horizon scales, c s k ≪ aH, and it dominates at late times for slow-roll inflation for which z −1 ∼ a −1 ∼ τ .
Here we focus on the case of slow-roll inflation. Keeping the constancy of R k on large scales, in the squeezed limit k 1 ≈ k 2 and k 3 → 0, the bispectrum at τ =τ is given by [9]
B R (k 1 , k 2 , k 3 ;τ ) = η(τ ) c 2 s (τ ) + F (k 1 ,τ ) P R (k 1 ) P R (k 1 )P R (k 3 ) ,(3)F (k,τ ) = iR 2 k (τ ) τ −∞ dτ 2ǫ c 4 s ǫ − 3 + 3c 2 s a 2 R ′ * k 2 + 2ǫ c 2 s ǫ − 2s + 1 − c 2 s a 2 k 2 (R * k ) 2 + 2ǫ c 2 s d dτ η c 2 s a 2 R ′ * k R * k + c.c. ,(4)
where η ≡ǫ/(Hǫ) and s ≡ċ s /(Hc s ).
Being interested in large non-Gaussianity, among the terms inside the square brackets of (4) we may focus on those not suppressed by the slow-roll parameters, that is,
F 0 = 2 1 − c 2 s c 2 s ℜ iR 2 k (τ ) τ −∞ dτ −3z 2 R ′ * k 2 + c 2 s k 2 z 2 (R * k ) 2 ,(5)
where for simplicity we have assumed the time variation of c 2 s is negligible, s = 0. Now, we find it is more convenient to write the integrand of (5) in terms of R ′ k . Multiplying (2) by R k , we have
c 2 s k 2 z 2 R 2 k = − z 2 R ′ k R k ′ + zR ′ k 2 .(6)
Hence (5) becomes
F 0 = 2 1 − c 2 s c 2 s ℜ iR 2 k (τ ) −z 2 R * k R * ′ k (τ ) − 2 τ −∞ dτ zR ′ * k 2 ≡ 2 1 − c 2 s c 2 s (ℜ[I 1 ] + ℜ[I 2 ]) .(7)
As we can write R k in terms of R ′ k as (2), we do not have to work with R k but only need to solve for R ′ k . Setting
f ≡ zR ′ k ,(8)
and taking a derivative of (2), we obtain
f ′′ + c 2 s k 2 − z(z −1 ) ′′ f = 0 . (9)
An interesting property of this equation is that in the slow-roll case, z −1 ∼ a −1 ∼ τ , so the potential term z(z −1 ) ′′ vanishes at leading order [16]. Specifically we have
z(z −1 ) ′′ = a 2 H 2 ǫ + η 2 + O(η 2 , ǫη) .(10)
This means that the WKB solution f ∝ e ±icskτ remains valid even on super-sound-horizon scales at leading order in the slow-roll expansion. The general leading order solution during slow-roll
inflation is thus
f = c s k 2 C k e −icskτ + D k e icskτ ,(11)
where C k and D k are constant and we have extracted the factor c s k/2 for convenience.
Carrying out the standard quantization procedure, we find that for f to be properly normalized, the constants C k and D k satisfy
|C k | 2 − |D k | 2 = 1 .(12)
Setting D k = 0 corresponds to the usual choice of the BD vacuum. But here we do not assume so and let D k be generally non-zero. From (6) and (11), the power spectrum can be easily computed to be
P R = k 3 2π 2 |R k | 2 csk=aH = k 3 2π 2 1 2(c s k) 3 z ′ z 2 2 csk=aH |C k + D k | 2 .(13)
Thus, a scale-invariant spectrum requires |C k + D k | 2 ∼ k 0 .
Now we return to (7). For slow-roll inflation, R ′ k rapidly decays outside the sound horizon. However, since z grows like a, neither I 1 nor I 2 may not be negligible outside the sound horizon.
Rewrite them in terms of f , we easily find the expression for ℜ[I 1 ] as
ℜ[I 1 ] = ℜ i z 2 (c s k) 6 f ′ + z ′ z f 2 f ′ f * = k −3 c 5 s z ′ z 2 2 |C k + D k | 2 ,(14)
where we have used (12). The second term is
ℜ[I 2 ] = ℜ −i 2 z 2 (c s k) 4 f ′ + z ′ z f 2 τ −∞ dτ (f * ) 2 ≈ − k −4 z 2 c 5 s ℜ f ′ + z ′ z f 2 C * k 2 e 2icskτ + D * k 2 e −2icskτ − 4ic s kτ ∞ C * k D * k .(15)
Here, upon integrating the last term of the integrand, there is no time dependence and thus literally integrating from −∞ it diverges. However in reality it should be understood as the boundary τ ∞ with |τ | ≪ |τ ∞ | at which the initial condition is specified. This means depending on our choice of c s kτ ∞ , the contribution of this term may become very large, in fact can be made arbitrarily large.
Hence we cannot neglect it even in the limit k → 0. In this limit,
ℜ[I 2 ] = − k −3 c 5 s z ′ z 2 2 ℜ (C k + D k ) 2 C * k 2 − D * k 2 − 4ic s kτ ∞ C * k D * k .(16)
If we only consider the contribution from the terms C * and hence cancels out. For the other terms in (4), the calculation goes more or less the same, and we find slow-roll suppressed contributions are given in the form,
ℜ[I 1 ] + ℜ[I 2 ] = ǫP R (k) 1 − 2c s kτ ∞ ℜ (iC k D * k ) |C k + D k | −2 .
Thus, from (3) in the squeezed limit the addtional contribution to the non-linear parameter f NL when D k = 0 is given by
3 5 f NL = F 0 4P R (k) = 1 − c 2 s c 2 s − ǫ c 2 s c s kτ ∞ ℜ (iC k D * k ) |C k + D k | 2 .(17)
Note that the only assumption we have made is slow-roll inflation where z −1 ∼ a −1 ∼ τ , and thus all the above arguments are completely valid for general vacuum state under the constancy of the curvature perturbation R.
III. LOCAL, SCALE-INDEPENDENT f NL
From (17), we see that f NL will be k-dependent in general due to that of C k D * k , in addition to that from non-linear evolution on large scales [17]. With the normalization (12), we may parametrize C k and D k as
C k = e iα k cosh χ k ,(18)D k = e iβ k sinh χ k ,(19)
From the power spectrum (13), by setting A ≡ |C k + D k | 2 which should be almost k-independent, we can solve for χ k as
sinh(2χ k ) = A 2 + A A 2 − sin 2 ϕ k − cos ϕ k − 1 (1 + cos ϕ k ) A + A 2 − sin 2 ϕ k ,(20)
where ϕ k = α k − β k .
Meanwhile, for f NL (17) we have, extracting the only (possibly) scale dependent part,
B ≡ −c s kτ ∞ ℜ (iC k D * k ) = 1 2 c s kτ ∞ sin ϕ k sinh(2χ k ) .(21)
With a suitable cutoff τ ∞ , we may choose ϕ k and χ k to make (21) have a particular k-dependence.
Further, given the amplitude of the power spectrum A, sin (2χ k ) is written in terms of ϕ k as (20) so f NL contains a single free parameter ϕ k other than the cutoff. To proceed further, let us for illustration consider two different choices of τ ∞ , and see when f NL becomes scale-invariant. These choices are depicted in Figure 1.
Note that the conditions we derive below are phenomenological ones to be satisfied if f NL is to remain almost scale-invariant. One may well try to construct more concrete and realistic models which can be approximated to the cases below, but the construction of such models is beyond the scope of the present paper.
(A) τ ∞ = −1/(c s k ∞ ):
This corresponds to fixing τ ∞ common to all modes. This will be the case when there is a phase transition at τ = τ ∞ [18]. In this case, −c s kτ ∞ = k/k ∞ ≫ 1 and we can think of three simple possibilities that give k-independent f NL :
1. ϕ k ≪ 1: In this case we find
B ≈ − 1 2 k k ∞ ϕ k sinh(2χ k ) ≈ − A 2 − 1 4A k k ∞ ϕ k .(22)
Thus, by choosing ϕ k = γ c k ∞ /k, with γ c being constant, we can make f NL scale-invariant.
2. 2χ k ≪ 1: Likewise, we find Thus 2χ k ≈ γ c k ∞ /k with a k-independent ϕ k works as well. Note that in this case, ϕ k is constant but its value is not constrained, and A ≈ 1 so that the state is very close to the BD vacuum.
B ≈ − 1 2 k k ∞ 2χ k sin ϕ k .(23)
3. ϕ k ≪ 1 and 2χ k ≪ 1: We have
B ≈ − 1 2 k k ∞ ϕ k 2χ k .(24)
Thus, choosing ϕ = p(k ∞ /k) n and 2χ k = q(k ∞ /k) 1−n with p, q and 0 < n < 1 being constant
gives B = −pq/2, so that f NL is k-independent. (B) τ ∞ = −γ p /(c s k) with γ p ≫ 1:
In this case, the cutoff τ ∞ depends on k in such a way that −c s kτ ∞ = γ p is constant. This is the case when the cutoff corresponds to a fixed, very short physical distance. Hence this cutoff may be relevant when we consider possible trans-Planckian effects [19]. Again, let us consider three simple possibilities:
1. ϕ k ≪ 1: We obtain
B ≈ − γ p 2 ϕ k sinh(2χ k ) ≈ − γ p A 2 − 1 4A ϕ k .(25)
Thus we require ϕ k to have no k-dependence in order to have a scale-invariant f NL .
2. 2χ k ≪ 1: In this case B ≈ −(γ p /2)2χ k sin ϕ k . Thus it is k-independent if both ϕ k and χ k are constant, for an arbitrary value of ϕ k .
3. ϕ k ≪ 1 and 2χ k ≪ 1: This gives
B ≈ − γ p 2 ϕ k 2χ k .(26)
This is a limiting case of the second case above, and the simplest example is when both ϕ k and 2χ k are k-independent.
We note that in all the cases considered above, f NL can be large, say f NL 10, if c 2 s = 1 and the constant γ c or γ p is large.
IV. CONCLUSION
In this article, focusing on single-field slow-roll inflation, we have studied in detail the squeezed limit of the bispectrum when the initial state is a general vacuum. In this case, the standard consistency relation between the spectral index of the power spectrum of the curvature perturbation and the amplitude of the squeezed limit of the bispectrum does not hold. In particular, the squeezed limit of the bispectrum may not be slow-roll suppressed.
Under the assumption that the comoving curvature perturbation is conserved on super-soundhorizon scales, we have derived the general relation between the squeezed limit of the primordial bispectrum, described in terms of the non-linear parameter f NL and the power spectrum. We find f NL is indeed not slow-roll suppressed. But it depends explicitly on the momentum in general, hence may not be in the local form. We then have discussed the condition for f NL to be momentumindependent. We have considered two typical ways to fix the initial state. One is to fix the state at a given time, common to all modes. The other is to fix the state for each mode at a given physical momentum. The former and the latter may be relevant when there was a phase transition, and when discussing trans-Planckian effects, respectively. We have spelled out the conditions for both cases and presented simple examples in which a large, scale-invariant f NL is realized.
Naturally it is of great interest to see if these simple examples can be actually realized in any specific models of inflation. Researches in this direction are left for future study.
the result is precisely −ℜ[I 1 ]
FIG. 1 :
1Schematic plot of the two choices of the cutoff τ ∞ . On the left panel it is fixed as a constant τ ∞ = −1/(c s k ∞ ) and is the same for every mode. Meanwhile on the right panel it varies for different k in such a way that it corresponds to a fixed length scale, c s k|τ ∞ | = γ p .
AcknowledgementsJG is grateful to the Yukawa Institute for Theoretical Physics at Kyoto University for hospitality while this work was under progress. We are grateful to Xingang Chen andTakahiro Tanaka
. A H Guth, Phys. Rev. D. 23347A. H. Guth, Phys. Rev. D 23, 347 (1981);
. K Sato, Mon. Not. Roy. Astron. Soc. 195467K. Sato, Mon. Not. Roy. Astron. Soc. 195, 467 (1981);
. A D Linde, Phys. Lett. B. 108389A. D. Linde, Phys. Lett. B 108, 389 (1982);
. A Albrecht, P J Steinhardt, Phys. Rev. Lett. 481220A. Albrecht and P. J. Steinhardt, Phys. Rev. Lett. 48, 1220 (1982).
. C L Bennett, D Larson, J L Weiland, N Jarosik, G Hinshaw, N Odegard, K M Smith, R S Hill, arXiv:1212.5225astro-ph.COC. L. Bennett, D. Larson, J. L. Weiland, N. Jarosik, G. Hinshaw, N. Odegard, K. M. Smith and R. S. Hill et al., arXiv:1212.5225 [astro-ph.CO].
Focus section on non-linear and non-Gaussian cosmological perturbations. Class. Quant. Grav. 27For a recent collection of reviews, see e.gFor a recent collection of reviews, see e.g. Class. Quant. Grav. 27, "Focus section on non-linear and non-Gaussian cosmological perturbations" (2010);
Testing the Gaussianity and Statistical Isotropy of the Universe. Adv Astron, Adv. Astron. 2010, "Testing the Gaussianity and Statistical Isotropy of the Universe" (2010).
. E Komatsu, D N Spergel, astro-ph/0005036Phys. Rev. D. 6363002E. Komatsu and D. N. Spergel, Phys. Rev. D 63, 063002 (2001) [astro-ph/0005036].
. J M Maldacena, astro-ph/0210603JHEP. 030513J. M. Maldacena, JHEP 0305, 013 (2003) [astro-ph/0210603].
. P Creminelli, M Zaldarriaga, astro-ph/0407059JCAP. 04106P. Creminelli and M. Zaldarriaga, JCAP 0410, 006 (2004) [astro-ph/0407059].
. A Achucarro, J. -O Gong, G A Palma, S P Patil, arXiv:1211.5619astro-ph.COA. Achucarro, J. -O. Gong, G. A. Palma and S. P. Patil, arXiv:1211.5619 [astro-ph.CO].
. J Ganc, E Komatsu, arXiv:1006.5457JCAP. 10129astro-ph.COJ. Ganc and E. Komatsu, JCAP 1012, 009 (2010) [arXiv:1006.5457 [astro-ph.CO]];
. S Renaux-Petel, arXiv:1008.0260JCAP. 101020astro-ph.COS. Renaux-Petel, JCAP 1010, 020 (2010) [arXiv:1008.0260 [astro-ph.CO]].
. M H Namjoo, H Firouzjahi, M Sasaki, arXiv:1210.3692Europhys. Lett. 10139001astro-ph.COM. H. Namjoo, H. Firouzjahi, M. Sasaki and , Europhys. Lett. 101, 39001 (2013) [arXiv:1210.3692 [astro-ph.CO]];
. X Chen, H Firouzjahi, M H Namjoo, M Sasaki, arXiv:1301.5699hep-thX. Chen, H. Firouzjahi, M. H. Namjoo and M. Sasaki, arXiv:1301.5699 [hep-th].
. X Chen, M Huang, S Kachru, G Shiu, hep-th/0605045JCAP. 07012X. Chen, M. -x. Huang, S. Kachru and G. Shiu, JCAP 0701, 002 (2007) [hep-th/0605045];
. R Holman, A J Tolley, arXiv:0710.1302JCAP. 08051hep-thR. Holman and A. J. Tolley, JCAP 0805, 001 (2008) [arXiv:0710.1302 [hep-th]].
. I Agullo, L Parker, arXiv:1010.5766Phys. Rev. D. 8363526astro-ph.COI. Agullo and L. Parker, Phys. Rev. D 83, 063526 (2011) [arXiv:1010.5766 [astro-ph.CO]];
. J Ganc, arXiv:1104.0244Phys. Rev. D. 8463514astro-ph.COJ. Ganc, Phys. Rev. D 84, 063514 (2011) [arXiv:1104.0244 [astro-ph.CO]];
. D Chialva, arXiv:1108.4203JCAP. 121037astro-ph.COD. Chialva, JCAP 1210, 037 (2012) [arXiv:1108.4203 [astro-ph.CO]];
. N Agarwal, R Holman, A J Tolley, J Lin, arXiv:1212.1172arXiv:1110.4688JCAP. 12025hep-th. See also S. Kundu. astro-ph.CON. Agarwal, R. Holman, A. J. Tolley and J. Lin, arXiv:1212.1172 [hep-th]. See also S. Kundu, JCAP 1202, 005 (2012) [arXiv:1110.4688 [astro-ph.CO]].
. F Arroja, A E Romano, M Sasaki, arXiv:1106.5384Phys. Rev. D. 84123503astroph.COF. Arroja, A. E. Romano and M. Sasaki, Phys. Rev. D 84, 123503 (2011) [arXiv:1106.5384 [astro- ph.CO]];
. F Arroja, M Sasaki, arXiv:1204.6489JCAP. 120812astro-ph.COF. Arroja and M. Sasaki, JCAP 1208, 012 (2012) [arXiv:1204.6489 [astro-ph.CO]].
. T Tanaka, Y Urakawa, arXiv:1103.1251JCAP. 110514astro-ph.COT. Tanaka and Y. Urakawa, JCAP 1105, 014 (2011) [arXiv:1103.1251 [astro-ph.CO]];
. T Tanaka, Y Urakawa, arXiv:1209.1914hep-thT. Tanaka and Y. Urakawa, arXiv:1209.1914 [hep-th].
. J Garriga, V F Mukhanov, hep-th/9904176Phys. Lett. B. 458219J. Garriga and V. F. Mukhanov, Phys. Lett. B 458, 219 (1999) [hep-th/9904176].
. M Sasaki, Prog. Theor. Phys. 761036M. Sasaki, Prog. Theor. Phys. 76, 1036 (1986).
. C T Byrnes, J. -O Gong, arXiv:1210.1851Phys. Lett. B. 718718astro-ph.COC. T. Byrnes and J. -O. Gong, Phys. Lett. B 718, 718 (2013) [arXiv:1210.1851 [astro-ph.CO]].
. A Vilenkin, L H Ford, Phys. Rev. D. 261231A. Vilenkin and L. H. Ford, Phys. Rev. D 26, 1231 (1982).
. . G Shiu, J. Phys. Conf. Ser. 18and references thereinSee e.g. G. Shiu, J. Phys. Conf. Ser. 18, 188 (2005) and references therein.
| [] |
[
"On a Problem of Mahler Concerning the Approximation of Exponentials and Logarithms",
"On a Problem of Mahler Concerning the Approximation of Exponentials and Logarithms"
] | [
"Michel Waldschmidt "
] | [] | [
"Publ. Math. Debrecen"
] | We first propose two conjectural estimates on Diophantine approximation of logarithms of algebraic numbers. Next we discuss the state of the art and we give further partial results on this topic.( * ) The main result in [11] involves a further parameter E which yields a sharper estimate when |λ|/D is small compared with h 1 . | 10.5486/pmd.2000.2324 | [
"https://arxiv.org/pdf/math/0001155v1.pdf"
] | 16,841,913 | math/0001155 | 3df16f5f65c22335217810083ffba6b18878bb97 |
On a Problem of Mahler Concerning the Approximation of Exponentials and Logarithms
2000
Michel Waldschmidt
On a Problem of Mahler Concerning the Approximation of Exponentials and Logarithms
Publ. Math. Debrecen
562000Bon Anniversaire Kàlman: tu as 60 ans, et on se connaît depuis 30 ans!
We first propose two conjectural estimates on Diophantine approximation of logarithms of algebraic numbers. Next we discuss the state of the art and we give further partial results on this topic.( * ) The main result in [11] involves a further parameter E which yields a sharper estimate when |λ|/D is small compared with h 1 .
§1. Two Conjectures on Diophantine Approximation of Logarithms of Algebraic Numbers
In 1953 K. Mahler [7] proved that for any sufficiently large positive integers a and b, the estimates log a ≥ a −40 log log a and e b ≥ b −40b (1) hold; here, · denotes the distance to the nearest integer: for x ∈ R,
x = min n∈Z |x − n|.
In the same paper [7], he remarks:
"The exponent 40 log log a tends to infinity very slowly; the theorem is thus not excessively weak, the more so since one can easily show that | log a − b| < 1 a for an infinite increasing sequence of positive integers a and suitable integers b." Mahler's estimates (1) have been refined by Mahler himself [8], M. Mignotte [10] and F. Wielonsky [19]: the exponent 40 can be replaced by 19.183.
Here we propose two generalizations of Mahler's problem. One common feature to our two conjectures is that we replace rational integers by algebraic numbers. However if, for simplicity, we restrict them to the special case of rational integers, then they deal with simultaneous approximation of logarithms of positive integers by rational integers. In higher dimension, there are two points of view: one takes either a hyperplane, or else a line. Our first conjecture is concerned with lower bounds for |b 0 + b 1 log a 1 + · · · + b m log a m |, which amounts to ask for lower bounds for |e b 0 a b 1 1 · · · a b m m − 1|. We are back to the situation considered by Mahler in the special case m = 1 and b m = −1. Our second conjecture asks for lower bounds for max 1≤i≤m |b i −log a i |, or equivalently for max 1≤i≤m |e b i − a i |. Mahler's problem again corresponds to the case m = 1. In both cases a 1 , . . . , a m , b 0 , . . . , b m are positive rational integers.
Dealing more generally with algebraic numbers, we need to introduce a notion of height. Here we use Weil's absolute logarithmic height h(α) (see [5] Chap. IV, § 1, as well as [18]), which is related to Mahler where f ∈ Z[X] is the minimal polynomial of α and d its degree. Another equivalent definition for h(α) is given below ( § 3.3). Before stating our two main conjectures, let us give a special case, which turns out to be the "intersection" of Conjectures 1 and 2 below: it is an extension of Mahler's problem where the rational integers a and b are replaced by algebraic numbers α and β.
h ≥ h(α), h ≥ h(β), h ≥ 1 D |λ| and h ≥ 1 D · Then |λ − β| ≥ exp −c 0 D 2 h .
One may state this conjecture without introducing the letter λ: then the conclusion is a lower bound for |e β − α|, and the assumption h ≥ |λ|/D is replaced by h ≥ |β|/D. It makes no difference, but for later purposes we find it more convenient to use logarithms.
The best known result in this direction is the following [11], which includes previous estimates of many authors; among them are K. Mahler, N.I. Fel'dman, P.L. Cijsouw, E. Reyssat, A.I. Galochkin and G. Diaz (for references, see [15], [4], Chap. 2 § 4.4, [11] and [19]). For convenience we state a simpler version ( * )
• Let α and β be algebraic numbers and let λ ∈ C satisfy α = e λ . Define D = [Q(α, β) : Q]. Let h 1 and h 2 be positive real numbers satisfying,
h 1 ≥ h(α), h 1 ≥ 1 D |λ|, h 1 ≥ 1 D and h 2 ≥ h(β), h 2 ≥ log(Dh 1 ), h 2 ≥ log D, h 2 ≥ 1.
Then
|λ − β| ≥ exp −2 · 10 6 D 3 h 1 h 2 (log D + 1) .(2)
To compare with Conjecture 0, we notice that from (2) we derive, under the assumptions of Conjecture 0,
|λ − β| ≥ exp −cD 3 h(h + log D + 1)(log D + 1)
with an absolute constant c. This shows how far we are from Conjecture 0. In spite of this weakness of the present state of the theory, we suggest two extensions of Conjecture 0 involving several logarithms of algebraic numbers. The common hypotheses for our two conjectures below are the following. We denote by λ 1 , . . . , λ m complex numbers such that the numbers α i = e λ i (1 ≤ i ≤ m) are algebraic. Further, let β 0 , . . . , β m be algebraic numbers. Let D denote the degree of the number field Q(α 1 , . . . , α m , β 0 , . . . , β m ). Furthermore, let h be a positive number which satisfies
h ≥ max 1≤i≤m h(α i ), h ≥ max 0≤j≤m h(β j ), h ≥ 1 D max 1≤i≤m |λ i | and h ≥ 1 D · Conjecture 1.
-Assume that the number
Λ = β 0 + β 1 λ 1 + · · · + β m λ m is non zero. Then |Λ| ≥ exp −c 1 mD 2 h ,
where c 1 is a positive absolute constant.
Conjecture 2. -Assume λ 1 , . . . , λ m are linearly independent over Q. Then m i=1 |λ i − β i | ≥ exp −c 2 mD 1+(1/m) h ,
with a positive absolute constant c 2 .
Remark 1. Thanks to A.O. Gel'fond, A. Baker and others, a number of results have already been given in the direction of Conjecture 1. The best known estimates to date are those in [12], [16], [1] and [9]. Further, in the special case m = 2, β 0 = 0, sharper numerical values for the constants are known [6]. However Conjecture 1 is much stronger than all known lower bounds:
-in terms of h: best known estimates involve h m+1 in place of h;
-in terms of D: so far, we have essentially D m+2 in place of D 2 ;
-in terms of m: the sharpest (conditional) estimates, due to E.M. Matveev [9], display c m (with an absolute constant c > 1) in place of m.
On the other hand for concrete applications like those considered by K. Győry, a key point is often not to know sharp estimates in terms of the dependence in the different parameters, but to have non trivial lower bounds with small numerical values for the constants. From this point of view a result like [6], which deals only with the special case m = 2, β 0 = 0, plays an important role in many situations, in spite of the fact that the dependence in the height of the coefficients β 1 , β 2 is not as sharp as other more general estimates from Gel'fond-Baker's method.
Remark 2. In case D = 1, β 0 = 0, sharper estimates than Conjecture 1 are suggested by Lang-Waldschmidt in [5], Introduction to Chapters X and XI. Clearly, our Conjectures 1 and 2 above are not the final word on this topic. Remark 4. In the special case where λ 1 , . . . , λ m are fixed and β 0 , . . . , β m are restricted to be rational numbers, Khinchine's Transference Principle (see [2], Chap. V) enables one to relate the two estimates provided by Conjecture 1 and Conjecture 2. It would be interesting to extend and generalize this transference principle so that one could relate the two conjectures in more general situations.
Remark 5. The following estimate has been obtained by N.I. Feld'man in 1960 (see [3], Th. 7.7 Chap. 7 §5); it is the sharpest know result in direction of Conjecture 2 when λ 1 , . . . , λ m are fixed:
• Under the assumptions of Conjecture 2,
m i=1 |λ i − β i | ≥ exp −cD 2+(1/m) (h + log D + 1)(log D + 1) −1
with a positive constant c depending only on λ 1 , . . . , λ m .
Theorem 8.1 in [14] enables one to remove the assumption that λ 1 , . . . , λ m are fixed, but then yields the following weaker lower bound:
• Under the assumptions of Conjecture 2,
m i=1 |λ i − β i | ≥ exp −cD 2+(1/m) h(h + log D + 1)(log h + log D + 1) 1/m ,
with a positive constant c depending only on m.
As a matter of fact, as in (2), Theorem 8.1 of [14] enables one to separate the contribution of the heights of α's and β's.
• Under the assumptions of Conjecture 2, let h 1 and h 2 satisfy
h 1 ≥ max 1≤i≤m h(α i ), h 1 ≥ 1 D max 1≤i≤m |λ i |, h 1 ≥ 1 D and h 2 ≥ max 0≤j≤m h(β j ), h 2 ≥ log log(3Dh 1 ), h 2 ≥ log D. Then m i=1 |λ i − β i | ≥ exp −cD 2+(1/m) h 1 h 2 (log h 1 + log h 2 + 2 log D + 1) 1/m ,(3)
with a positive constant c depending only on m.
Again, Theorem 8.1 of [14] is more precise (it involves the famous parameter E).
In case m = 1 the estimate (3) gives a lower bound with
D 3 h 1 h 2 (log h 1 + log h 2 + 2 log D + 1),
while (2) replaces the factor (log h 1 + log h 2 + 2 log D + 1) by log D + 1. The explanation of this difference is that the proof in [11] involves the so-called Fel'dman's polynomials, while the proof in [14] does not.
Remark 6.
A discussion of relations between Conjecture 2 and algebraic independence is given in [18], starting from [14].
Remark 7. One might propose more general conjectures involving simultaneous linear forms in logarithms. Such extensions of our conjectures are also suggested by the general transference principles in [2]. In this direction a partial result is given in [13].
Remark 8. We deal here with complex algebraic numbers, which means that we consider only Archimedean absolute values. The ultrametric situation would be also worth of interest and deserves to be investigated.
§2. Simultaneous Approximation of Logarithms of Algebraic Numbers
Our goal is to give partial results in the direction of Conjecture 2. Hence we work with several algebraic numbers β (and as many logarithms of algebraic numbers λ), but we put them into a matrix B. Our estimates will be sharper when the rank of B is small.
We need a definition:
Definition. A m × n matrix L = (λ ij ) 1≤i≤m 1≤j≤n
satisfies the linear independence condition if, for any non zero tuple t = (t 1 , . . . , t m ) in Z m and any non zero tuple s = (s 1 , . . . , s n ) in Z n , we have
m i=1 n j=1 t i s j λ ij = 0.
This assumption is much stronger than what is actually needed in the proof, but it is one of the simplest ways of giving a sufficient condition for our main results to hold.
θ = r(m + n) mn ·
There exists a positive constant c 1 with the following property. Let B be a m × n matrix of rank ≤ r with coefficients β ij in a number field K. For 1 ≤ i ≤ m and 1 ≤ j ≤ n, let λ ij be a complex number such that the number α ij = e λ ij belongs to K × and such that the m × n matrix L = (λ ij ) 1≤i≤m 1≤j≤n satisfies the linear independence condition. Define D = [K : Q]. Let h 1 and h 2 be positive real numbers satisfying the following conditions:
h 1 ≥ h(α ij ), h 1 ≥ 1 D |λ ij |, h 1 ≥ 1 D and h 2 ≥ h(β ij ), h 2 ≥ log(Dh 1 ), h 2 ≥ log D, h 2 ≥ 1 for 1 ≤ i ≤ m and 1 ≤ j ≤ n. Then m i=1 n j=1 λ ij − β ij ≥ e −c 1 Φ 1 where Φ 1 = Dh 1 (Dh 2 ) θ if Dh 1 ≥ (Dh 2 ) 1−θ , (Dh 1 ) 1/(1−θ) if Dh 1 < (Dh 2 ) 1−θ .(4)
Remark 1. One could also state the conclusion with the same lower bound for
m i=1 n j=1 e β ij − α ij .
Remark 2. Theorem 1 is a variant of Theorem 10.1 in [14]. The main differences are the following.
In [14], the numbers λ ij are fixed (which means that the final estimate is not explicited in terms of h 1 ).
The second difference is that in [14] the parameter r is the rank of the matrix L. Lemma 1 below shows that our hypothesis, dealing with the rank of the matrix B, is less restrictive.
The third difference is that in [14], the linear independence condition is much weaker than here; but the cost is that the estimate is slightly weaker in the complex case, where D 1+θ h θ 2 is replaced by D 1+θ h 1+θ 2 (log D) −1−θ . However it is pointed out p. 424 of [14] that the conclusion can be reached with D 1+θ h θ 2 (log D) −θ in the special case where all λ ij are real number. It would be interesting to get the sharper estimate without this extra condition.
Fourthly, the negative power of log D which occurs in [14] could be included also in our estimate by introducing a parameter E (see remark 5 below).
Finally our estimate is sharper than Theorem 10.1 of [14] in case
Dh 1 < (Dh 2 ) 1−θ .
Remark 3. In the special case n = 1, we have r = 1, θ = 1 + (1/m) and the lower bound (4) is slightly weaker than (3): according to (3), in the estimate
D 2+(1/m) h 1 h 1+(1/m) 2 ,
given by (4), one factor h 1/m 2 can be replaced by
log(eD 2 h 1 h 2 ) 1/m .
Similarly for n = 1 (by symmetry). Hence Theorem 1 is already known when min{m, n} = 1. (4) is not the sharpest result one can prove. Firstly the linear independence condition on the matrix L can be weakened. Secondly the same method enables one to split the dependence of the different α ij (see Theorem 14.20 of [18]). Thirdly a further parameter E can be introduced (see [11], [17] and [18], Chap. 14 for instance -our statement here corresponds to E = e).
Remark 4. One should stress that
Remark 5. In case Dh 1 < (Dh 2 ) 1−θ , the number Φ 1 does not depend on h 2 : in fact one does not use the assumption that the numbers β ij are algebraic! Only the rank r of the matrix comes into the picture. This follows from the next result.
Theorem 2. -Let m, n and r be positive rational integers with mn > r(m + n). Define
κ = mn mn − r(m + n) ·
There exists a positive constant c 2 with the following property. Let L = (λ ij ) 1≤i≤m 1≤j≤n be a matrix, whose entries are logarithms of algebraic numbers, which satisfies the linear independence condition. Let K be a number field containing the algebraic numbers
α ij = e λ ij (1 ≤ i ≤ m, 1 ≤ j ≤ n). Define D = [K: Q]. Let h be a positive real number satisfying h ≥ h(α ij ), h ≥ 1 D |λ ij | and h ≥ 1 D for 1 ≤ i ≤ m and 1 ≤ j ≤ n. Then for any m × n matrix M = (x ij ) 1≤i≤m 1≤j≤n of rank ≤ r with complex coefficients we have m i=1 n j=1 λ ij − x ij ≥ e −c 2 Φ 2 where Φ 2 = (Dh) κ .
Since κ(1 − θ) = 1, Theorem 2 yields the special case of Theorem 1 where $Dh_1 < (Dh_2)^{1-\theta}$ (cf. Remark 5 above).

§3. Proofs
Before proving the theorems, we first deduce (2) from Theorem 4 in [11] and (3) from Theorem 8.1 in [14].
The following piece of notation will be convenient: for n and S positive integers,
$$\mathbb{Z}^n[S] = [-S, S]^n \cap \mathbb{Z}^n = \Big\{ s = (s_1, \dots, s_n) \in \mathbb{Z}^n \ ;\ \max_{1\le j\le n} |s_j| \le S \Big\}.$$
This is a finite set with $(2S+1)^n$ elements.
Proof of (2)
We use Theorem 4 of [11] with E = e, $\log A = eh_1$, and we use the estimates $h(\beta) + \log\max\{1, eh_1\} + \log D + 1 \le 4h_2$ and $4e \cdot 105\,500 < 2 \cdot 10^6$.
Proof of (3)
We use Theorem 8.1 of [14] with E = e, $\log A = eh_1$, $B' = 3D^2h_1h_2$ and $\log B = 2h_2$. We may assume without loss of generality that $h_2$ is sufficiently large with respect to m. The assumption $B \ge D\log B'$ of [14] is satisfied: indeed the conditions $h_2 \ge \log\log(3Dh_1)$ and $h_2 \ge \log D$ imply $h_2 \ge \log\log(3D^2h_1h_2)$.
We need to check that
$$s_1\beta_1 + \cdots + s_m\beta_m \ne 0 \quad \text{for } s \in \mathbb{Z}^m[S]\setminus\{0\}, \text{ with } S = (c_1D\log B')^{1/m}.$$
Assume on the contrary $s_1\beta_1 + \cdots + s_m\beta_m = 0$. Then
$$|s_1\lambda_1 + \cdots + s_m\lambda_m| \le mS\max_{1\le i\le m}|\lambda_i - \beta_i|.$$
Since $\lambda_1, \dots, \lambda_m$ are linearly independent, we may use Liouville's inequality (see for instance [18], Chap. 3) to derive
$$|s_1\lambda_1 + \cdots + s_m\lambda_m| \ge 2^{-D}e^{-mDSh_1}.$$
In this case one deduces a stronger lower bound than (3), with $cD^{2+(1/m)}h_2$ replaced by $c'D^{1+(1/m)}$.
Auxiliary results
The proof of the theorems will require a few preliminary lemmas.
Lemma 1. - Let $B = (\beta_{ij})_{1\le i\le m,\,1\le j\le n}$ be a matrix whose entries are algebraic numbers in a field of degree D and let $L = (\lambda_{ij})_{1\le i\le m,\,1\le j\le n}$ be a matrix of the same size with complex coefficients. Assume rank(B) > rank(L). Let $B \ge 2$ satisfy
$$\log B \ge \max_{1\le i\le m,\ 1\le j\le n} h(\beta_{ij}).$$
Then
$$\max_{1\le i\le m,\ 1\le j\le n} |\lambda_{ij} - \beta_{ij}| \ge n^{-nD}B^{-n(n+1)D}.$$
Proof. Without loss of generality we may assume that B is a square regular $n \times n$ matrix. By assumption det(L) = 0. In case n = 1 we write B = (β), L = (λ), where β ≠ 0 and λ = 0. Liouville's inequality ([18], Chap. 3) yields $|\lambda - \beta| = |\beta| \ge B^{-D}$.
Suppose $n \ge 2$. We may assume
$$\max_{1\le i,j\le n} |\lambda_{ij} - \beta_{ij}| \le \frac{D\log B}{(n-1)B^D},$$
otherwise the conclusion is plain. Since $|\beta_{ij}| \le B^D$ and $B^{D/(n-1)} \ge 1 + \frac{D}{n-1}\log B$, we deduce
$$\max_{1\le i,j\le n} \max\{|\lambda_{ij}|, |\beta_{ij}|\} \le B^{nD/(n-1)}.$$
The polynomial $\det(X_{ij})$ is homogeneous of degree n and length n!; therefore (see Lemma 13.10 of [18])
$$|\Delta| = |\Delta - \det(L)| \le n \cdot n! \Big(\max_{1\le i,j\le n}\max\{|\lambda_{ij}|, |\beta_{ij}|\}\Big)^{n-1} \max_{1\le i,j\le n}|\lambda_{ij} - \beta_{ij}|.$$
On the other hand the determinant ∆ of B is a nonzero algebraic number of degree ≤ D. We use Liouville's inequality again, now considering $\det(X_{ij})$ as a polynomial of degree 1 in each of the $n^2$ variables: $|\Delta| \ge (n!)^{-(D-1)}B^{-n^2D}$.
Finally we conclude the proof of Lemma 1 by means of the estimate $n \cdot n! \le n^n$.
Lemma 1 shows that the assumption rank(B) ≤ r of Theorem 1 is weaker than the condition rank(L) = r of Theorem 10.1 in [14]. For the proof of Theorem 1 there is no loss of generality in assuming rank(B) = r and rank(L) ≥ r.
In the next auxiliary result we use the notion of absolute logarithmic height on a projective space $P^N(K)$, where K is a number field ([18], Chap. 3): for $(\gamma_0 : \cdots : \gamma_N) \in P^N(K)$,
$$h(\gamma_0 : \cdots : \gamma_N) = \frac{1}{D}\sum_{v\in M_K} D_v \log\max\{|\gamma_0|_v, \dots, |\gamma_N|_v\},$$
where D = [K : Q], $M_K$ is the set of normalized absolute values of K, and for $v \in M_K$, $D_v$ is the local degree. The normalization of the absolute values is done in such a way that for N = 1 we have h(α) = h(1 : α).

Here is a simple property of this height. Let N and M be positive integers and $\vartheta_1, \dots, \vartheta_N, \theta_1, \dots, \theta_M$ algebraic numbers. Then
$$h(1 : \vartheta_1 : \cdots : \vartheta_N : \theta_1 : \cdots : \theta_M) \le h(1 : \vartheta_1 : \cdots : \vartheta_N) + h(1 : \theta_1 : \cdots : \theta_M).$$
One deduces that for algebraic numbers $\vartheta_0, \dots, \vartheta_N$, not all of which are zero, we have
$$h(\vartheta_0 : \cdots : \vartheta_N) \le \sum_{i=0}^{N} h(\vartheta_i). \qquad (5)$$
Let K be a number field and B be an $m \times n$ matrix of rank r whose entries are in K. There exist two matrices B′ and B″, of size $m \times r$ and $r \times n$ respectively, such that B = B′B″. We show how to control the heights of the entries of B′ and B″ in terms of the heights of the entries of B (notice that the proof of Theorem 10.1 in [14] avoids such an estimate). We write
$$B = (\beta_{ij})_{1\le i\le m,\,1\le j\le n}, \quad B' = (\beta'_{i\varrho})_{1\le i\le m,\,1\le \varrho\le r}, \quad B'' = (\beta''_{\varrho j})_{1\le \varrho\le r,\,1\le j\le n}$$
and we denote by $\beta'_1, \dots, \beta'_m$ the m rows of B′ and by $\beta''_1, \dots, \beta''_n$ the n columns of B″. Then
$$\beta_{ij} = \beta'_i \cdot \beta''_j \quad (1 \le i \le m,\ 1 \le j \le n),$$
where the dot · denotes the scalar product in $K^r$.
Lemma 2. - Let $(\beta_{ij})_{1\le i\le m,\,1\le j\le n}$ be an $m \times n$ matrix of rank r with entries in a number field K. Define
$$B = \exp\max_{1\le i\le m,\ 1\le j\le n} h(\beta_{ij}).$$
Then there exist elements
$$\beta'_i = (\beta'_{i1}, \dots, \beta'_{ir}) \quad (1 \le i \le m) \quad \text{and} \quad \beta''_j = (\beta''_{1j}, \dots, \beta''_{rj}) \quad (1 \le j \le n)$$
in $K^r$ such that
$$\beta_{ij} = \sum_{\varrho=1}^{r} \beta'_{i\varrho}\beta''_{\varrho j} \quad (1 \le i \le m,\ 1 \le j \le n)$$
and such that, for $1 \le \varrho \le r$, we have
$$h(1 : \beta'_{1\varrho} : \cdots : \beta'_{m\varrho}) \le m\log B$$
and
$$h(1 : \beta''_{\varrho 1} : \cdots : \beta''_{\varrho n}) \le rn\log B + \log(r!). \qquad (6)$$
Proof. We may assume without loss of generality that the matrix $(\beta_{i\varrho})_{1\le i,\varrho\le r}$ has rank r. Let ∆ be its determinant. We first take $\beta'_{i\varrho} = \beta_{i\varrho}$ ($1 \le i \le m$, $1 \le \varrho \le r$), so that, by (5),
$$h(1 : \beta'_{1\varrho} : \cdots : \beta'_{m\varrho}) \le m\log B \quad (1 \le \varrho \le r).$$
Next, using Kronecker's symbol, we set $\beta''_{\varrho j} = \delta_{\varrho j}$ for $1 \le \varrho, j \le r$.
Finally we define $\beta''_{\varrho j}$ for $1 \le \varrho \le r$, $r < j \le n$ as the unique solution of the system
$$\beta_{ij} = \sum_{\varrho=1}^{r}\beta'_{i\varrho}\beta''_{\varrho j} \quad (1 \le i \le m,\ r < j \le n).$$
Then for $1 \le \varrho \le r$ we have
$$(1 : \beta''_{\varrho,r+1} : \cdots : \beta''_{\varrho n}) = (\Delta : \Delta_{\varrho,r+1} : \cdots : \Delta_{\varrho n}), \qquad (7)$$
where, for $1 \le \varrho \le r$ and $r < j \le n$, $\Delta_{\varrho j}$ is (up to sign) the determinant of the $r \times r$ matrix deduced from the $r \times (r+1)$ matrix
$$\begin{pmatrix} \beta_{11} & \cdots & \beta_{1r} & \beta_{1j}\\ \vdots & & \vdots & \vdots\\ \beta_{r1} & \cdots & \beta_{rr} & \beta_{rj} \end{pmatrix}$$
by deleting the ϱ-th column. From (7) one deduces (6). This completes the proof of Lemma 2.
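The construction in this proof is effective; the following numpy sketch (our illustration, in floating point rather than exact arithmetic over K, so the height bounds are not tracked) builds B′ and B″ exactly as above: B′ consists of the first r columns of B, B″ starts with the identity block, and the remaining entries are the Cramer ratios of display (7).

```python
import numpy as np

def decompose(B, r):
    """Factor B (m x n, rank r) as B' @ B'' following Lemma 2.
    Assumes the leading r x r minor of B is invertible."""
    n = B.shape[1]
    Bp = B[:, :r].copy()                 # beta'_{i,rho} = beta_{i,rho}
    Bpp = np.zeros((r, n))
    Bpp[:, :r] = np.eye(r)               # beta''_{rho,j} = delta_{rho,j}
    # Remaining columns solve B[:r, :r] @ Bpp[:, j] = B[:r, j]; these
    # are the ratios Delta_{rho,j} / Delta of display (7).
    Bpp[:, r:] = np.linalg.solve(B[:r, :r], B[:r, r:])
    return Bp, Bpp

Bp0 = np.array([[1., 0.], [0., 1.], [1., 1.], [2., -1.]])
Bpp0 = np.array([[1., 2., 3., 0., 1.], [0., 1., 1., 2., 2.]])
B = Bp0 @ Bpp0                           # a 4 x 5 matrix of rank 2
Bp, Bpp = decompose(B, r=2)
print(np.allclose(B, Bp @ Bpp))          # True
```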
We need another auxiliary result:
Lemma 3. - Let $L = (\lambda_{ij})_{1\le i\le m,\,1\le j\le n}$ be an $m \times n$ matrix of complex numbers which satisfies the linear independence condition. Define $\alpha_{ij} = e^{\lambda_{ij}}$ for $i = 1, \dots, m$ and $j = 1, \dots, n$.

1) Consider the set
$$E = \Big\{(t, s) \in \mathbb{Z}^m \times \mathbb{Z}^n \ ;\ \prod_{i=1}^m\prod_{j=1}^n \alpha_{ij}^{t_is_j} = 1\Big\}.$$
For each $s \in \mathbb{Z}^n \setminus \{0\}$, $\{t \in \mathbb{Z}^m \ ;\ (t, s) \in E\}$ is a subgroup of $\mathbb{Z}^m$ of rank ≤ 1, and similarly, for each $t \in \mathbb{Z}^m \setminus \{0\}$, $\{s \in \mathbb{Z}^n \ ;\ (t, s) \in E\}$ is a subgroup of $\mathbb{Z}^n$ of rank ≤ 1.

2) Fix $t \in \mathbb{Z}^m \setminus \{0\}$. For each positive integer S, the set
$$\Big\{\prod_{i=1}^m\prod_{j=1}^n \alpha_{ij}^{t_is_j} \ ;\ s \in \mathbb{Z}^n[S]\Big\} \subset \mathbb{C}^\times$$
has at least $(2S+1)^{n-1}$ elements.
Proof. For the proof of 1), fix $s \in \mathbb{Z}^n \setminus \{0\}$ and assume t′ and t″ in $\mathbb{Z}^m$ are such that $(t', s) \in E$ and $(t'', s) \in E$. Taking logarithms we find two rational integers k′ and k″ such that
$$\sum_{i=1}^m\sum_{j=1}^n t'_is_j\lambda_{ij} = 2k'\pi\sqrt{-1} \quad \text{and} \quad \sum_{i=1}^m\sum_{j=1}^n t''_is_j\lambda_{ij} = 2k''\pi\sqrt{-1}.$$
Eliminating $2\pi\sqrt{-1}$ one gets
$$\sum_{i=1}^m\sum_{j=1}^n (k't''_i - k''t'_i)s_j\lambda_{ij} = 0.$$
Using the linear independence condition on the matrix L one deduces that t′ and t″ are linearly dependent over $\mathbb{Z}$, which proves the first part of 1). The second part of 1) follows by symmetry. Now fix $t \in \mathbb{Z}^m \setminus \{0\}$ and define a mapping ψ from the finite set $\mathbb{Z}^n[S]$ to $\mathbb{C}^\times$ by
$$\psi(s) = \prod_{i=1}^m\prod_{j=1}^n \alpha_{ij}^{t_is_j}.$$
If s′ and s″ in $\mathbb{Z}^n[S]$ satisfy ψ(s′) = ψ(s″), then $(t, s' - s'') \in E$. From the first part of the lemma we deduce that, for each $s_0 \in \mathbb{Z}^n[S]$, the set of differences $s - s_0$, for s ranging over the set of elements in $\mathbb{Z}^n[S]$ for which $\psi(s) = \psi(s_0)$, does not contain two linearly independent elements. Hence the set
$$\{s \in \mathbb{Z}^n[S] \ ;\ \psi(s) = \psi(s_0)\}$$
has at most 2S + 1 elements. Since $\mathbb{Z}^n[S]$ has $(2S+1)^n$ elements, the conclusion of part 2) of Lemma 3 follows by a simple counting argument (Lemma 7.8 of [18]).
Proof of Theorem 1
As pointed out earlier, Theorem 1 in the case $Dh_1 < (Dh_2)^{1-\theta}$ is a consequence of Theorem 2, which will be proved in §3.5. In this section we assume $Dh_1 \ge (Dh_2)^{1-\theta}$ and we prove Theorem 1 with $\Phi_1 = Dh_1(Dh_2)^{\theta}$.
The proof of Theorem 1 is similar to the proof of Theorem 10.1 in [14]. Our main tool is Theorem 2.1 of [17]. We do not repeat this statement here, but we check the hypotheses. For this purpose we need to introduce some notation. We set
$$d_0 = r, \quad d_1 = m, \quad d_2 = 0, \quad d = r + m,$$
and we consider the algebraic group $G = G_0 \times G_1$ with $G_0 = \mathbb{G}_a^r$ and $G_1 = \mathbb{G}_m^m$. There is no loss of generality in assuming that the matrix B has rank r (since the conclusion is weaker when r is larger). Hence we may use Lemma 2 and introduce the matrix
$$M = \begin{pmatrix} I_r & B''\\ B' & L \end{pmatrix} = \begin{pmatrix} & & \beta''_{11} & \cdots & \beta''_{1n}\\ & I_r & \vdots & & \vdots\\ & & \beta''_{r1} & \cdots & \beta''_{rn}\\ \beta'_{11} \ \cdots \ \beta'_{1r} & & \lambda_{11} & \cdots & \lambda_{1n}\\ \vdots \qquad \vdots & & \vdots & & \vdots\\ \beta'_{m1} \ \cdots \ \beta'_{mr} & & \lambda_{m1} & \cdots & \lambda_{mn} \end{pmatrix}$$
Define $\ell_0 = r$ and let $w_1, \dots, w_{\ell_0}$ denote the first r columns of M, viewed as elements in $K^{r+m}$:
$$w_k = (\delta_{1k}, \dots, \delta_{rk}, \beta'_{1k}, \dots, \beta'_{mk}) \quad (1 \le k \le r)$$
(with Kronecker's diagonal symbol δ). The K-vector space they span, namely $W = Kw_1 + \cdots + Kw_r \subset K^d$, has dimension r. Denote by $\eta_1, \dots, \eta_n$ the last n columns of M, viewed as elements in $\mathbb{C}^{r+m}$:
$$\eta_j = (\beta''_{1j}, \dots, \beta''_{rj}, \lambda_{1j}, \dots, \lambda_{mj}) \quad (1 \le j \le n).$$
Hence for $1 \le j \le n$ the point
$$\gamma_j = \exp_G \eta_j = (\beta''_{1j}, \dots, \beta''_{rj}, \alpha_{1j}, \dots, \alpha_{mj})$$
lies in $G(K) = K^r \times (K^\times)^m$. For $s = (s_1, \dots, s_n) \in \mathbb{Z}^n$, define an element $\eta_s$ in $\mathbb{C}^d$ by $\eta_s = s_1\eta_1 + \cdots + s_n\eta_n$, put $\gamma_s = \exp_G \eta_s$, and denote by $\gamma^{(1)}_s$ the projection of $\gamma_s$ on $G_1(K)$. Next put $w'_k = w_k$ ($1 \le k \le r$) and, for $1 \le j \le n$,
$$\eta'_j = (\beta''_{1j}, \dots, \beta''_{rj}, \beta_{1j}, \dots, \beta_{mj}) \in K^{r+m},$$
so that $w'_1, \dots, w'_r, \eta'_1, \dots, \eta'_n$ are the column vectors of the matrix
$$M' = \begin{pmatrix} I_r & B''\\ B' & B \end{pmatrix}.$$
Further, for $s \in \mathbb{Z}^n$, set $\eta'_s = s_1\eta'_1 + \cdots + s_n\eta'_n$. Consider the vector subspaces
$$W' = \mathbb{C}w'_1 + \cdots + \mathbb{C}w'_r \quad \text{and} \quad V' = \mathbb{C}\eta'_1 + \cdots + \mathbb{C}\eta'_n$$
of $\mathbb{C}^d$. Since
$$M' = \begin{pmatrix} I_r\\ B' \end{pmatrix}\cdot\begin{pmatrix} I_r & B'' \end{pmatrix},$$
the matrix M′ has rank r, and it follows that V′ and W′ + V′ have dimension r. We set $r_1 = r_2 = 0$ and $r_3 = r$.
Theorem 2.1 of [17] is completely explicit; hence it would not be difficult to derive an explicit value for the constant $c_1$ in Theorem 1 in terms of m and n only, but we shall only show it exists. We denote by $c_0$ a sufficiently large constant which depends only on m and n. Without loss of generality we may assume that both $Dh_1$ and $h_2$ are sufficiently large compared with $c_0$. We set
$$S = \big[(c_0^3Dh_2)^{r/n}\big] \quad \text{and} \quad M = (2S+1)^n,$$
where the bracket denotes the integral part. Define $\Sigma = \{\gamma_s \ ;\ s \in \mathbb{Z}^n[S]\} \subset G(K)$. We shall order the elements of $\mathbb{Z}^n[S]$: $\mathbb{Z}^n[S] = \{s^{(1)}, \dots, s^{(M)}\}$. Put $B_1 = B_2 = e^{c_0h_2}$. The inequalities
$$h\Big(1 : \sum_{j=1}^n s^{(1)}_j\beta''_{hj} : \cdots : \sum_{j=1}^n s^{(M)}_j\beta''_{hj}\Big) \le \log B_1 \quad (1 \le h \le r)$$
and $h(1 : \beta'_{1k} : \cdots : \beta'_{mk}) \le \log B_2$ ($1 \le k \le r$) follow from Lemma 2 thanks to the conditions $h_2 \ge 1$ and $h_2 \ge \log D$.
Next we set
$$A_1 = \cdots = A_m = \exp\{c_0Sh_1\}, \quad E = e.$$
Thanks to the definition of $h_1$, we have, for $1 \le i \le m$,
$$\frac{e}{D} \le \log A_i, \quad h\Big(\prod_{j=1}^n \alpha_{ij}^{s_j}\Big) \le \log A_i \quad \text{and} \quad \frac{e}{D}\Big|\sum_{j=1}^n s_j\lambda_{ij}\Big| \le \log A_i.$$
Then define
$$T = (c_0^2Dh_2)^{r/m}, \quad V = c_0^{3+4\theta}\Phi_1, \quad U = V/c_0, \quad T_0 = S_0 = \Big[\frac{U}{c_0Dh_2}\Big],$$
$$T_1 = \cdots = T_m = T, \quad S_1 = \cdots = S_n = S.$$
The inequalities
$$DT_0\log B_1 \le U, \quad DS_0\log B_2 \le U \quad \text{and} \quad \sum_{i=1}^m DT_i\log A_i \le U$$
are easy to check. The integers $T_0, \dots, T_m$ and $S_0, \dots, S_n$ are all ≥ 1, thanks to the assumption $Dh_1 \ge (Dh_2)^{1-\theta}$. We have $U > c_0D(\log D + 1)$ and
$$\binom{T_0 + r}{r}(T+1)^m > 4V^r.$$
It will be useful to notice that we also have
$$S_0^r(2S+1)^n > c_0T_0^rT^m. \qquad (8)$$
Finally the inequality $B_2 \ge T_0 + mT + dS_0$ is satisfied thanks to the conditions $h_2 \ge \log(Dh_1)$ and $h_2 \ge \log D$.
Assume now $|\lambda_{ij} - \beta_{ij}| \le e^{-V}$ for $1 \le i \le m$ and $1 \le j \le n$. Then all hypotheses of Theorem 2.1 of [17] are satisfied. Hence we obtain an algebraic subgroup $G^* = G_0^* \times G_1^*$ of G, distinct from G, such that
$$S_0^{\ell_0^*}M^*H(G^*; T) \le \frac{(r+m)!}{r!}T_0^rT^m \qquad (9)$$
where
$$\ell_0^* = \dim_K W^*, \quad W^* = \frac{W + T_{G^*}(K)}{T_{G^*}(K)}, \quad M^* = \mathrm{Card}(\Sigma^*), \quad \Sigma^* = \frac{\Sigma + G^*(K)}{G^*(K)}.$$
Define $d_0^* = \dim(G_0/G_0^*)$ and $d^* = \dim(G/G^*)$. Since $H(G^*; T) \ge T_0^{r-d_0^*}$, we deduce from (8) and (9)
$$S_0^{\ell_0^*}M^* < S_0^{d_0^*}(2S+1)^n. \qquad (10)$$
We claim $\ell_0^* \ge d_0^*$. Indeed, consider the commutative diagram
$$\begin{array}{ccc} \mathbb{C}^d & \xrightarrow{\ \pi_0\ } & \mathbb{C}^r\\ g\downarrow & & \downarrow g_0\\ \mathbb{C}^{d^*} & \xrightarrow{\ \pi_0^*\ } & \mathbb{C}^{d_0^*} \end{array}$$
where $\pi_0 : \mathbb{C}^d \to \mathbb{C}^r$ and $\pi_0^* : \mathbb{C}^{d^*} \to \mathbb{C}^{d_0^*}$ denote the projections with kernels $\{0\} \times \mathbb{C}^m$ and $\{0\} \times T_{G_1^*}(K)$ respectively, and $g : \mathbb{C}^d \to \mathbb{C}^{d^*}$ and $g_0 : \mathbb{C}^r \to \mathbb{C}^{d_0^*}$ denote the projections $T_G(K) \to T_G(K)/T_{G^*}(K) \simeq T_{G/G^*}(K)$ and $T_{G_0}(K) \to T_{G_0}(K)/T_{G_0^*}(K) \simeq T_{G_0/G_0^*}(K)$ respectively. We have $W^* = g(W)$ and $\pi_0(W) = \mathbb{C}^r$. Since $g_0$ is surjective we deduce $\pi_0^*(W^*) = \mathbb{C}^{d_0^*}$, hence
$$\ell_0^* = \dim W^* \ge \dim \pi_0^*(W^*) = d_0^*.$$
Combining the inequality $\ell_0^* \ge d_0^*$ with (10) we deduce $M^* < (2S+1)^n$.
Therefore $\dim G_1^* > 0$. Let $\Sigma_1$ denote the projection of Σ on $G_1$:
$$\Sigma_1 = \Big\{\Big(\prod_{j=1}^n \alpha_{1j}^{s_j}, \dots, \prod_{j=1}^n \alpha_{mj}^{s_j}\Big) \ ;\ s \in \mathbb{Z}^n[S]\Big\} = \big\{\gamma^{(1)}_{s^{(1)}}, \dots, \gamma^{(1)}_{s^{(M)}}\big\}.$$
For each $s' \ne s''$ in $\mathbb{Z}^n[S]$ such that $\gamma^{(1)}_{s'}/\gamma^{(1)}_{s''} \in G_1^*(K)$, and for each hyperplane of $T_{G_1}(K)$ containing $T_{G_1^*}(K)$ of equation $t_1z_1 + \cdots + t_mz_m = 0$, we get a relation
$$\sum_{i=1}^m\sum_{j=1}^n t_is_j\lambda_{ij} \in 2\pi\sqrt{-1}\,\mathbb{Z}$$
with $s = s' - s''$. Using the linear independence condition on the matrix L, we deduce from Lemma 3, part 1), that $G_1^*$ has codimension 1 in $G_1$; hence
$$H(G^*; T) \ge \frac{(r+m-1)!}{r!}T_0^{r-d_0^*}T^{m-1}. \qquad (11)$$
Next from part 2) of Lemma 3 we deduce that the set
$$\Sigma_1^* = \frac{\Sigma_1 + G_1^*(K)}{G_1^*(K)}$$
has at least $(2S+1)^{n-1}$ elements. Hence
$$M^* = \mathrm{Card}(\Sigma^*) \ge \mathrm{Card}(\Sigma_1^*) \ge (2S+1)^{n-1}. \qquad (12)$$
If mn ≥ m + n, the estimates (9), (11) and (12) are not compatible. This contradiction concludes the proof of Theorem 1 in the case max{m, n} > 1 and $Dh_1 \ge (Dh_2)^{1-\theta}$. Finally, as we have seen in Remark 3 of §2, Theorem 1 is already known in case either m = 1 or n = 1.
Proof of Theorem 2
We start with the easy case where all entries $x_{ij}$ of M are zero: in this special case Liouville's inequality gives
$$\max_{1\le i\le m,\ 1\le j\le n} |\lambda_{ij}| \ge 2^{-D}e^{-Dh}.$$
Next we remark that we may, without loss of generality, replace the number r by the actual rank of the matrix M.
Thanks to the hypothesis mn > r(m + n), there exist positive real numbers $\gamma_u$, $\gamma_t$ and $\gamma_s$ satisfying
$$\gamma_u > \gamma_t + \gamma_s \quad \text{and} \quad r\gamma_u < m\gamma_t < n\gamma_s.$$
For instance
$$\gamma_u = 1, \quad \gamma_t = \frac{r}{m} + \frac{1}{2m^2n}, \quad \gamma_s = \frac{r}{n} + \frac{1}{mn^2}$$
is an admissible choice. Next let $c_0$ be a sufficiently large integer; how large it should be can be explicitly written in terms of m, n, r, $\gamma_u$, $\gamma_t$ and $\gamma_s$.
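For completeness, here is a direct verification (ours) that this choice is admissible. One has
$$m\gamma_t = r + \frac{1}{2mn} \quad \text{and} \quad n\gamma_s = r + \frac{1}{mn},$$
so $r\gamma_u = r < m\gamma_t < n\gamma_s$. Moreover, since $mn > r(m+n)$ gives $mn \ge r(m+n) + 1$, hence $\frac{r}{m} + \frac{r}{n} \le 1 - \frac{1}{mn}$, we get
$$\gamma_t + \gamma_s \le 1 - \frac{1}{mn} + \frac{1}{mn}\Big(\frac{1}{2m} + \frac{1}{n}\Big) < 1 = \gamma_u,$$
because the hypothesis $mn > r(m+n)$ forces $m, n \ge 2$ and excludes $m = n = 2$, so that $\frac{1}{2m} + \frac{1}{n} < 1$.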
We shall apply Theorem 2.1 of [17] with
$$d_0 = \ell_0 = 0, \quad d = d_1 = m, \quad d_2 = 0, \quad G = \mathbb{G}_m^m, \quad r_3 = r, \quad r_1 = r_2 = 0,$$
$$\eta_j = (\lambda_{ij})_{1\le i\le m}, \quad \eta'_j = (x_{ij})_{1\le i\le m} \quad (1 \le j \le n).$$
Since $d_0 = \ell_0 = 0$ we set $T_0 = S_0 = 0$. Therefore the parameters $B_1$ and $B_2$ will play no role, but for completeness we set $B_1 = B_2 = mn(Dh)^{mn}$. We also define
$$E = e, \quad U = c_0^{\gamma_u}(Dh)^{\kappa}, \quad V = (12m+9)U, \quad T_1 = \cdots = T_m = T, \quad S_1 = \cdots = S_n = S,$$
where $T = c_0^{\gamma_t}(Dh)^{r\kappa/m}$ and $S = c_0^{\gamma_s}(Dh)^{r\kappa/n}$.
Define $A_1 = \cdots = A_m$ by
$$\log A_i = \frac{1}{em}c_0^{\gamma_u - \gamma_t - \gamma_s}Sh \quad (1 \le i \le m).$$
The condition $\gamma_t + \gamma_s < \gamma_u$ enables us to check
$$h(\alpha_{ij}) \le \log A_i \quad \text{and} \quad \sum_{j=1}^n |s_j|\,|\lambda_{ij}| \le \frac{D}{E}\log A_i$$
for $1 \le i \le m$ and for any $s \in \mathbb{Z}^n[S]$. Moreover, from the very definition of κ we deduce $\log A_i \le U$. Define
$$\Sigma = \big\{(\alpha_{11}^{s_1}\cdots\alpha_{1n}^{s_n}, \dots, \alpha_{m1}^{s_1}\cdots\alpha_{mn}^{s_n}) \in (K^\times)^m \ ;\ s \in \mathbb{Z}^n[S]\big\}.$$
From the condition $m\gamma_t > r\gamma_u$ one deduces $(2T+1)^m > 2V^r$.
Assume that the conclusion of Theorem 2 does not hold for $c_2 = c_0^{\gamma_u+1}$. Then the hypotheses of Theorem 2.1 of [17] are satisfied, and we deduce that there exists a connected algebraic subgroup $G^*$ of G, distinct from G, which is incompletely defined by polynomials of multidegrees ≤ T, where T stands for the m-tuple (T, ..., T), such that
$$M^*H(G^*; T) \le m!T^m,$$
where
$$M^* = \mathrm{Card}\,\frac{\Sigma + G^*(K)}{G^*(K)}.$$
Since $m\gamma_t < n\gamma_s$, we have $m!T^m < (2S+1)^n$, and since $H(G^*; T) \ge 1$, we deduce $M^* < (2S+1)^n$. Hence $\Sigma[2] \cap G^*(K) \ne \{e\}$: there exist $s \in \mathbb{Z}^n[2S] \setminus \{0\}$ and $t \in \mathbb{Z}^m[T]$ which produce the relations used below. Let us check, by contradiction, that $G^*$ has codimension 1. We already know $G^* \ne G$. If the codimension of $G^*$ were ≥ 2, we would have two linearly independent elements t′ and t″ in $\mathbb{Z}^m[T]$ such that the two numbers
$$a' = \frac{1}{2\pi\sqrt{-1}}\sum_{i=1}^m\sum_{j=1}^n t'_is_j\lambda_{ij} \quad \text{and} \quad a'' = \frac{1}{2\pi\sqrt{-1}}\sum_{i=1}^m\sum_{j=1}^n t''_is_j\lambda_{ij}$$
are in $\mathbb{Z}$. Notice that $\max\{|a'|, |a''|\} \le mnTSDh$. We eliminate $2\pi\sqrt{-1}$: set $t = a''t' - a't''$, so that
$$\sum_{i=1}^m\sum_{j=1}^n t_is_j\lambda_{ij} = 0 \quad \text{and} \quad 0 < |t| \le 2mnT^2SDh < (2mnTSDh)^2 < U^2.$$
This is not compatible with our hypothesis that the matrix L satisfies the linear independence condition.
Hence $G^*$ has codimension 1 in G. Therefore $H(G^*; T) \ge T^{m-1}$ and consequently $M^* \le m!T$. On the other hand a similar argument shows that any s′, s″ in $\mathbb{Z}^n[2S]$ with $\sum_{i=1}^m\sum_{j=1}^n t_is'_j\lambda_{ij} \in 2\pi\sqrt{-1}\,\mathbb{Z}$ and $\sum_{i=1}^m\sum_{j=1}^n t_is''_j\lambda_{ij} \in 2\pi\sqrt{-1}\,\mathbb{Z}$ are linearly dependent over $\mathbb{Z}$. From Lemma 7.8 of [18] we deduce $M^* \ge S^{n-1}$. Therefore $S^{n-1} \le m!T$. This is not compatible with the hypotheses mn > r(m + n) and r ≥ 1. This final contradiction completes the proof of Theorem 2.
(We have replaced Mahler's notation f and a by a and b respectively for coherence with what follows.) In view of this remark we shall dub Mahler's problem the following open question: (?) Does there exist an absolute constant c > 0 such that, for any positive integers a and b, $|e^b - a| \ge a^{-c}$? (http://www.math.jussieu.fr/~miw/articles/Debrecen.html)
[1] Baker, Alan; Wüstholz, Gisbert - Logarithmic forms and group varieties. J. reine Angew. Math. 442 (1993), 19-62.
[2] Cassels, J.W.S. - An Introduction to Diophantine Approximation. Cambridge Tracts in Mathematics and Mathematical Physics, No. 45, Cambridge University Press, New York, 1957. Reprint of the 1957 edition: Hafner Publishing Co., New York, 1972.
[3] Fel'dman, Naum I. - Hilbert's seventh problem. (Russian) Moskov. Gos. Univ., Moscow, 1982.
[4] Fel'dman, Naum I.; Nesterenko, Yuri V. - Number theory. IV. Transcendental Numbers. Encyclopaedia of Mathematical Sciences, 44. Springer-Verlag, Berlin, 1998.
[5] Lang, Serge - Elliptic curves: Diophantine analysis. Grundlehren der Mathematischen Wissenschaften, 231. Springer-Verlag, Berlin-New York, 1978.
[6] Laurent, Michel; Mignotte, Maurice; Nesterenko, Yuri - Formes linéaires en deux logarithmes et déterminants d'interpolation. J. Number Theory 55 (1995), no. 2, 285-321.
[7] Mahler, Kurt - On the approximation of logarithms of algebraic numbers. Philos. Trans. Roy. Soc. London. Ser. A. 245 (1953), 371-398.
[8] Mahler, Kurt - Applications of some formulae by Hermite to the approximation of exponentials and logarithms. Math. Ann. 168 (1967), 200-227.
[9] Matveev, Eugène M. - Explicit lower estimates for rational homogeneous linear forms in logarithms of algebraic numbers. Izv. Akad. Nauk SSSR. Ser. Mat. 62, No 4 (1998), 81-136. Engl. transl.: Izvestiya Mathematics 62, No 4 (1998), 723-772.
[10] Mignotte, Maurice - Approximations rationnelles de π et quelques autres nombres. Journées Arithmétiques (Grenoble, 1973), 121-132. Bull. Soc. Math. France, Mém. 37, Soc. Math. France, Paris, 1974.
[11] Nesterenko, Yuri V.; Waldschmidt, Michel - On the approximation of the values of exponential function and logarithm by algebraic numbers. (Russian) Diophantine approximations, Proceedings of papers dedicated to the memory of Prof. N. I. Fel'dman, ed. Yu. V. Nesterenko, Centre for applied research under Mech.-Math. Faculty of MSU, Moscow (1996), 23-42.
[12] Philippon, Patrice; Waldschmidt, Michel - Lower bounds for linear forms in logarithms. New advances in transcendence theory (Durham, 1986), 280-312, Cambridge Univ. Press, Cambridge-New York, 1988.
[13] Philippon, Patrice; Waldschmidt, Michel - Formes linéaires de logarithmes simultanées sur les groupes algébriques commutatifs. Séminaire de Théorie des Nombres, Paris 1986-87, 313-347, Progr. Math., 75, Birkhäuser Boston, Boston, MA, 1988.
[14] Roy, Damien; Waldschmidt, Michel - Simultaneous approximation and algebraic independence. The Ramanujan Journal, 1 Fasc. 4 (1997), 379-430.
[15] Waldschmidt, Michel - Simultaneous approximation of numbers connected with the exponential function. J. Austral. Math. Soc. 25 (1978), 466-478.
[16] Waldschmidt, Michel - Minorations de combinaisons linéaires de logarithmes de nombres algébriques. Canad. J. Math. 45 (1993), no. 1, 176-224.
[17] Waldschmidt, Michel - Approximation diophantienne dans les groupes algébriques commutatifs - (I) : Une version effective du théorème du sous-groupe algébrique. J. reine angew. Math. 493 (1997), 61-113.
[18] Waldschmidt, Michel - Diophantine Approximation on Linear Algebraic Groups. Transcendence Properties of the Exponential Function in Several Variables. Springer Verlag, to appear. http://www.math.jussieu.fr/~miw/articles/DALAG.html
[19] Wielonsky, Franck - Hermite-Padé approximants to exponential functions and an inequality of Mahler. J. Number Theory 74 (1999), no. 2, 230-249.
Michel WALDSCHMIDT Institut de Mathématiques de Jussieu Théorie des Nombres Case 247 175 rue du Chevaleret F-75013 PARIS e-mail: [email protected] URL. Michel WALDSCHMIDT Institut de Mathématiques de Jussieu Théorie des Nombres Case 247 175 rue du Chevaleret F-75013 PARIS e-mail: [email protected] URL: http://www.math.jussieu.fr/∼miw/
| [] |
[
"ID-Reveal: Identity-aware DeepFake Video Detection",
"ID-Reveal: Identity-aware DeepFake Video Detection"
] | [
"Davide Cozzolino \nUniversity Federico II of Naples\n\n",
"Andreas Rössler \nTechnical University of Munich\n\n",
"Justus Thies \nTechnical University of Munich\n\n\nMax Planck Institute for Intelligent Systems\nTübingen\n",
"Matthias Nießner \nTechnical University of Munich\n\n",
"Luisa Verdoliva \nUniversity Federico II of Naples\n\n"
] | [
"University Federico II of Naples\n",
"Technical University of Munich\n",
"Technical University of Munich\n",
"Max Planck Institute for Intelligent Systems\nTübingen",
"Technical University of Munich\n",
"University Federico II of Naples\n"
] | [] | A major challenge in DeepFake forgery detection is that state-of-the-art algorithms are mostly trained to detect a specific fake method. As a result, these approaches show poor generalization across different types of facial manipulations, e.g., from face swapping to facial reenactment. To this end, we introduce ID-Reveal, a new approach that learns temporal facial features, specific of how a person moves while talking, by means of metric learning coupled with an adversarial training strategy. The advantage is that we do not need any training data of fakes, but only train on real videos. Moreover, we utilize high-level semantic features, which enables robustness to widespread and disruptive forms of post-processing. We perform a thorough experimental analysis on several publicly available benchmarks. Compared to state of the art, our method improves generalization and is more robust to low-quality videos, that are usually spread over social networks. In particular, we obtain an average improvement of more than 15% in terms of accuracy for facial reenactment on high compressed videos. | 10.1109/iccv48922.2021.01483 | [
"https://arxiv.org/pdf/2012.02512v3.pdf"
] | 227,305,462 | 2012.02512 | 83e866bef9d27f76d136ecd60b08252dea6ce7d9 |
ID-Reveal: Identity-aware DeepFake Video Detection
Davide Cozzolino
University Federico II of Naples
Andreas Rössler
Technical University of Munich
Justus Thies
Technical University of Munich
Max Planck Institute for Intelligent Systems
Tübingen
Matthias Nießner
Technical University of Munich
Luisa Verdoliva
University Federico II of Naples
ID-Reveal: Identity-aware DeepFake Video Detection
A major challenge in DeepFake forgery detection is that state-of-the-art algorithms are mostly trained to detect a specific fake method. As a result, these approaches show poor generalization across different types of facial manipulations, e.g., from face swapping to facial reenactment. To this end, we introduce ID-Reveal, a new approach that learns temporal facial features, specific of how a person moves while talking, by means of metric learning coupled with an adversarial training strategy. The advantage is that we do not need any training data of fakes, but only train on real videos. Moreover, we utilize high-level semantic features, which enables robustness to widespread and disruptive forms of post-processing. We perform a thorough experimental analysis on several publicly available benchmarks. Compared to state of the art, our method improves generalization and is more robust to low-quality videos, that are usually spread over social networks. In particular, we obtain an average improvement of more than 15% in terms of accuracy for facial reenactment on high compressed videos.
Introduction
Recent advancements in synthetic media generation allow us to automatically manipulate images and videos with a high level of realism. To counteract the misuse of these image synthesis and manipulation methods, the digital media forensics field has received a lot of attention [43,42]. For instance, during the past two years, there has been intense research on DeepFake detection, which has been strongly stimulated by the introduction of large datasets of videos with manipulated faces [36,34,15,32,26,29,19].
However, despite excellent detection performance, the major challenge is how to generalize to previously unseen methods. For instance, a detector trained on face swapping will drastically drop in performance when tested on a facial reenactment method. This unfortunately limits practicality as we see new types of forgeries appear almost on a daily basis. As a result, supervised detection, which requires extensive training data of a specific forgery method, cannot immediately detect a newly-seen forgery type.

Figure 1: ID-Reveal is an identity-aware DeepFake video detection method. Based on reference videos of a person, we estimate a temporal embedding which is used as a distance metric to detect fake videos.
This mismatch and generalization issue has been addressed in the literature using different strategies, ranging from applying domain adaptation [12,5] or active learning [17] to strongly increasing augmentation during training [45,15] or by means of ensemble procedures [15,7]. A different line of research relies only on pristine videos at training time and detects possible anomalies with respect to forged ones [25,11,13]. This can help to increase the generalization ability with respect to new unknown manipulations, but it does not solve the problem of videos characterized by a different digital history. This is quite common whenever a video is spread over social networks and posted multiple times by different users. In fact, most platforms often reduce the quality and/or the resolution of the video.
Note also that current literature has mostly focused on face swapping, a manipulation that replaces the facial identity of a subject with another one; however, a very effective modification is facial reenactment [41], where only the expression or the lip movements of a person are modified (Fig. 2). Recently, the MIT Center for Advanced Virtuality created a DeepFake video of president Richard Nixon (https://moondisaster.org). The synthetic video shows Nixon giving a speech he never intended to deliver, by modifying only the lip movements and the speech of the old pristine video. The final result is impressive and shows the importance of developing forgery detection approaches that can generalize to different types of facial manipulations.
To better highlight this problem, we carried out an experiment considering the winning solution of the recent DeepFake Detection Challenge organized by Facebook on the Kaggle platform. Participants had the possibility to train their models using a huge dataset of videos (around 100k fake videos and 20k pristine ones with hundreds of different identities). In Fig. 3, we show the results of our experiment. The model was first tested on a dataset of real and DeepFake videos including similar face-swapping manipulations, then we considered unseen face-swapping manipulations, and finally videos manipulated using facial reenactment. One can clearly observe the significant drop in performance in this last situation. Furthermore, the test on low-quality compressed videos shows an additional loss, and the final accuracy is no better than a random guess.
It is also worth noting that current approaches are often used as black-box models, and it is very difficult to predict the result because in a realistic scenario it is impossible to have a clue about the type of manipulation that occurred. The lack of reliability of current supervised deep learning methods pushed us to take a completely different perspective: instead of answering a binary question (real or fake?), we ask whether the face under test preserves all the biometric traits of the involved subject.
Figure 3: Accuracy results (binary classification task) of the winner of the DeepFake Detection Challenge [37] trained on the DFDC dataset [15] and tested on different datasets: the preview DFDC [16] (seen face swapping) and FaceForensics++ [36], both on face swapping and facial reenactment. Results are presented on high quality (HQ) and low quality (LQ) videos.

Following this direction, our proposed method turns out to be able to generalize to different manipulation methods and also shows robustness w.r.t. low-quality data. It can reveal the identity of a subject by highlighting inconsistencies of facial features such as temporally consistent motion. The underlying CNN architecture comprises three main components: a facial feature extractor, a temporal network to detect biometric anomalies (temporal ID network) and a generative adversarial network that tries to predict person-specific motion based on the expressions of a different subject. The networks are trained only on real videos containing many different subjects [9]. During test time, in addition to the test video, we assume access to a set of pristine videos of the target person. Based on these pristine examples, we compute a distance metric to the test video using the embedding of the temporal ID network (Fig. 1). Overall, our main contributions are the following:
• We propose an example-based forgery detection approach that detects videos of facial manipulations based on the identity of the subject, especially the person-specific face motion.
• An extensive evaluation that demonstrates the generalization to different types of manipulations even on lowquality videos, with a significant average improvement of more than 15% w.r.t. state of the art.
Related Work
Digital media forensics, especially in the context of DeepFakes, is a very active research field. The majority of the approaches rely on the availability of large-scale datasets of both pristine and fake videos for supervised learning. A few approaches detect manipulations as anomalies w.r.t. features learned only on pristine videos. Some of these approaches verify if the behavior of a person in a video is consistent with a given set of example videos of this person. Our approach ID-Reveal is such an example-based forgery detection approach. In the following, we discuss the most related detection approaches.
Learned features Afchar et al. [1] presented one of the first approaches for DeepFake video detection based on supervised learning. It focuses on mesoscopic features to analyze the video frames by using a network with a low number of layers. Rössler et al. [36] investigated the performance of several CNN architectures for DeepFake video detection and showed that very deep networks are more effective for this task, especially on low-quality videos. To train the networks, the authors also published a large-scale dataset. The best performing architecture, XceptionNet [8], was applied frame-by-frame and has been further improved by follow-up works. In [14] an attention mechanism is included, which can also be used to localize the manipulated regions, while Kumar et al. [28] apply a triplet loss to improve performance on highly compressed videos.
Orthogonally, by exploiting artifacts that arise along the temporal direction it is possible to further boost performance. To this end, Guera et al. [20] propose using a convolutional Long Short Term Memory (LSTM) network. Masi et al. [33] propose to extract features by means of a two-branch network that are then fed into the LSTM: one branch takes the original information, while the other one works on the residual image. Differently, in [51] a 3D CNN structure is proposed together with an attention mechanism at different abstraction levels of the feature maps.
Most of these methods achieve very good performance when the training set comprises the same type of facial manipulations, but performance dramatically degrades on unseen tampering methods. Indeed, generalization represents the Achilles' heel of media forensics. Augmentation can be of benefit to generalize to different manipulations, as shown in [45]. In particular, augmentation has been extensively used by the best performing approaches during the DeepFake detection challenge [15]. Beyond the classic augmentation operations, some of them were particularly useful, e.g., cut-off based strategies on specific parts of the face. In addition to augmentation, ensembling different CNNs has also been used to improve performance during this challenge [7,17]. Another possible way to face generalization is to learn only on pristine videos and interpret a manipulation as an anomaly. This can improve the detection results on various types of face manipulations, even if the network never saw such forgeries during training. In [11] the authors extract the camera fingerprint information gathered from multiple frames and use it for detection. Other approaches focus on specific operations used in current DeepFake techniques. For example, the aim of [29] is to detect the blending operation that characterizes the face boundaries in most current synthetic face generation approaches.
A different perspective to improve generalization is presented in [12,5], where few-shot learning strategies are applied. Thus, these methods rely on the knowledge of a few labeled examples of a new approach and guide the training process such that new embeddings can be properly separated from previous seen manipulation methods and pristine samples in a short retraining process.
Features based on physiological signals Other approaches look at specific artifacts of the generated videos that are related to physiological signals. In [30] a method is proposed that detects eye blinking, which is characterized by a specific frequency and duration in videos of real people. Similarly, one can also use inconsistencies in head pose [50] or face warping artifacts [31] as identifiers for tampered content. Recent works also use the heartbeat [18,35] and other biological signals [10] to find inconsistencies both in space and along the temporal direction.
Identity-based features The idea of identity-based approaches is to characterize each individual by extracting some specific biometric traits that can hardly be reproduced by a generator [4,3,2]. The work by Agarwal et al. [4] is the first approach that exploits the distinct patterns of facial and head movements of an individual to detect fake videos. In [3] inconsistencies between the mouth shape dynamics and a spoken phoneme are exploited. Another related work is proposed in [2] to detect face-swap manipulations. The technique uses both static biometrics based on facial identity and temporal ones based on facial expressions and head movements. The method includes standard techniques from face recognition and a learned behavioral embedding using a CNN powered by a metric-learning objective function. In contrast, our proposed method extracts facial features based on a 3D morphable model and focuses on temporal behavior through an adversarial learning strategy. This helps to improve the detection of facial reenactment manipulations while still consistently being able to spot face swapping ones.
Proposed Method
ID-Reveal is an approach for DeepFake detection that uses prior biometric characteristics of a depicted identity to detect facial manipulations in video content of the person. Any manipulated video content based on facial replacement results in a disconnect between visual identity as well as biometrical characteristics. While facial reenactment preserves the visual identity, biometrical characteristics such as the motion are still wrong. Using pristine video material of a target identity, we can extract these biometrical features and compare them to the characteristics computed on a test video that is potentially manipulated. In order to be able to generalize to a variety of manipulation methods, we avoid training on a specific manipulation method; instead, we solely train on non-tampered videos. Additionally, this allows us to leverage a much larger training corpus in comparison to the facial manipulation datasets [36,15].

Figure 4: ID-Reveal is based on two neural networks, the Temporal ID Network as well as the 3DMM Generative Network, which interact with each other in an adversarial fashion. Using a three-dimensional morphable model (3DMM), we process videos of different identities and train the Temporal ID Network to embed the extracted features such that they can be separated in the resulting embedding space based on their containing identity. In order to incentivize this network to focus on temporal aspects rather than visual cues, we jointly train the 3DMM Generative Network to transform extracted features to fool its discriminative counterpart.

Our proposed method consists of three major components (see Fig. 4). Given a video as input, we extract a compact representation of each frame using a 3D morphable model (3DMM) [6]. These extracted features are input to the Temporal ID Network which computes an embedded vector. During test time, a metric in the embedding space is used to compare the test video to the previously recorded biometrics of a specific person. However, in order to ensure that the Temporal ID Network is also based on behavioral instead of only visual information, we utilize a second network, called the 3DMM Generative Network, which is trained jointly in an adversarial fashion (using the Temporal ID Network as discriminator). In the following, we will detail the specific components and the training procedure.
Feature Extraction Our employed networks are based on per-frame extracted facial features. Specifically, we utilize a low-dimensional representation of a face based on a 3D morphable model [6]. The morphable model represents a 3D face by a linear combination of principal components for shape, expression, and appearance. These components are computed via a principal component analysis of aligned 3D scans of human faces. A new face can be represented by this morphable model by providing the corresponding coefficients for shape, expression, and appearance. To retrieve these parameters from video frames, one can use optimization-based analysis-by-synthesis approaches [41] or learned regression. In our method, we rely on the regression framework of Guo et al. [21] which predicts a vector of 62 coefficients for each frame. Note that the 62 parameters contain 40 coefficients for the shape, 10 for the expression, and an additional 12 parameters for the rigid pose of the face (represented as a 3 × 4 matrix). In the following, we denote the extracted 3DMM features of video i of the individual c at frame t by $x_{c,i}(t) \in \mathbb{R}^{62}$.
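For concreteness, a per-frame feature can be unpacked as in the sketch below (ours; the split follows the counts just given, but the exact coefficient ordering produced by the regressor of [21] is an assumption of this illustration).

```python
import numpy as np

# One 62-d 3DMM feature per frame: 12 rigid-pose entries (a flattened
# 3x4 matrix), 40 shape and 10 expression coefficients.
def split_3dmm(x):
    assert x.shape == (62,)
    pose = x[:12].reshape(3, 4)   # rigid head pose (assumed ordering)
    shape = x[12:52]              # identity geometry
    expr = x[52:]                 # facial expression
    return pose, shape, expr

video = np.zeros((96, 62))        # 96 frames of features x_{c,i}(t)
pose, shape, expr = split_3dmm(video[0])
```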
Temporal ID Network The Temporal ID Network $N_T$ processes the temporal sequence of 3DMM features through convolution layers that work along the temporal direction in order to extract the embedded vector $y_{c,i}(t) = N_T[x_{c,i}(t)]$.
To evaluate the distance between embedded vectors, we adopt the squared Euclidean distance, computing the following similarity:
$$S_{c,i,k,j}(t) = -\frac{1}{\tau}\min_{t'} \big\| y_{c,i}(t) - y_{k,j}(t') \big\|^2 \qquad (1)$$
As a metric learning loss, similar to the Distance-Based Logistic Loss [44], we adopt a log-loss on a suitably defined probability [13]. Specifically, for each embedded vector $y_{c,i}(t)$, we build the probability through softmax processing as:
$$p_{c,i}(t) = \frac{\sum_{j \ne i} e^{S_{c,i,c,j}(t)}}{\sum_{j \ne i} e^{S_{c,i,c,j}(t)} + \sum_{k \ne c}\sum_{j} e^{S_{c,i,k,j}(t)}}. \qquad (2)$$
Thus, we are considering all the similarities with respect to the pivot vector $y_{c,i}(t)$ in our probability definition $p_{c,i}(t)$. Note that to obtain a high probability value it is only necessary that at least one similarity with the same individual is much larger than the similarities with other individuals. Indeed, the loss proposed here is less restrictive than those in the current literature, where the aim is to achieve a high similarity for all coherent pairs [22,24,46]. The adopted metric learning loss is then obtained from the probabilities through the log-loss function:
$$\mathcal{L}_{rec} = \sum_{c,i,t} -\log\big(p_{c,i}(t)\big). \qquad (3)$$
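The following PyTorch sketch (ours, not the authors' released code) renders equations (1)-(3) directly: temporal-minimum squared distances are turned into similarities with temperature τ, then aggregated with the softmax of (2) and the log-loss of (3).

```python
import torch

def similarity(y_a, y_b, tau=0.08):
    # Eq. (1): S = -(1/tau) * min_{t'} || y_a(t) - y_b(t') ||^2
    d2 = torch.cdist(y_a, y_b) ** 2           # (T, T') squared distances
    return -d2.min(dim=1).values / tau        # one similarity per frame t

def rec_loss(embeddings, tau=0.08):
    # embeddings[c][i]: (T, 128) tensor for video i of identity c.
    # Assumes at least two videos per identity and two identities.
    loss = 0.0
    for c, vids in enumerate(embeddings):
        for i, y in enumerate(vids):
            same = [similarity(y, v, tau) for j, v in enumerate(vids) if j != i]
            other = [similarity(y, v, tau)
                     for k, vk in enumerate(embeddings) if k != c for v in vk]
            num = torch.stack(same).exp().sum(dim=0)         # same identity
            den = num + torch.stack(other).exp().sum(dim=0)  # + other ids
            loss = loss - torch.log(num / den).sum()         # Eqs. (2)-(3)
    return loss
```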
In order to tune hyper-parameters during training, we also measure the accuracy of correctly identifying a subject. It is computed by counting the number of times where at least one similarity with the same individual is larger than all the similarities with other individuals. The Temporal ID Network is first trained alone using the previously described loss, and afterward it is fine-tuned together with the 3DMM Generative Network, which we describe in the following paragraph.
3DMM Generative Network The 3DMM Generative Network $N_G$ is trained to generate 3DMM features similar to those that we may extract from a manipulated video. Specifically, the generative network has the goal to output features that are coherent with the identity of an individual, but with the expressions of another subject. The generative network $N_G$ works frame-by-frame and generates a 3DMM feature vector by combining two input feature vectors: if $x_c$ and $x_k$ are the 3DMM feature vectors of the individuals c and k respectively, then $N_G[x_k, x_c]$ is a 3DMM feature vector carrying the identity of c together with the expression of k. For each identity c, we compute an averaged 3DMM feature vector $\bar{x}_c$. Based on this averaged input feature $\bar{x}_c$ and a frame feature $x_i(t)$ of a video of person i (which serves as expression conditioning), we generate synthetic 3DMM features using the generator $N_G$:
$$x^*_{c,i}(t) = N_G\big[x_i(t), \bar{x}_c\big]. \qquad (4)$$
The 3DMM Generative Network is trained based on the following loss:
$$\mathcal{L}_{N_G} = \mathcal{L}_{adv} + \lambda_{cycle}\mathcal{L}_{cycle} \qquad (5)$$
where $\mathcal{L}_{cycle}$ is a cycle-consistency loss used to preserve the expression. Specifically, the 3DMM Generative Network is applied twice: first to transfer a 3DMM feature vector of individual i to identity c, and then to transfer the generated feature vector back to identity i; the result should be the original 3DMM feature vector. The loss $\mathcal{L}_{cycle}$ is defined as:
$$\mathcal{L}_{cycle} = \sum_{c,i,t} \big\| x_i(t) - N_G\big[x^*_{c,i}(t), \bar{x}_i\big] \big\|^2. \qquad (6)$$
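A minimal sketch of (4) and (6) follows (ours; following the appendix, $N_G$ is modeled as a network over the concatenated 124-d input whose output is added as a residual, and adding it to the expression-carrying input is our assumption).

```python
import torch

def generate(G, x_t, x_id):
    # Eq. (4): x* = N_G[x_t, x_id]; G maps the concatenated 124-d input
    # to a 62-d residual (see appendix), added to the expression input.
    return x_t + G(torch.cat([x_t, x_id], dim=-1))

def cycle_loss(G, x_i, xbar_c, xbar_i):
    # Eq. (6): transfer identity i -> c, then back c -> i; the round
    # trip should reproduce the original feature vector.
    x_star = generate(G, x_i, xbar_c)      # expression of i, identity of c
    x_back = generate(G, x_star, xbar_i)   # back to identity i
    return ((x_i - x_back) ** 2).sum(dim=-1).mean()
```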
The adversarial loss L adv is based on the Temporal ID Network, i.e., it tries to fool the Temporal ID Network by generating features that are coherent for a specific identity. Since the generator works frame-by-frame, it can deceive the Temporal ID Network by only altering the appearance of the individual and not the temporal patterns. The adversarial loss L adv is computed as:
$$\mathcal{L}_{adv} = \sum_{c,i,t} -\log\big(p^*_{c,i}(t)\big), \qquad (7)$$
where the probabilities $p^*_{c,i}(t)$ are computed using Eq. (2), but considering the similarities evaluated between generated features and real ones:
$$S^*_{c,i,k,j}(t) = -\frac{1}{\tau}\min_{t'} \big\| N_T\big[x^*_{c,i}\big](t) - y_{k,j}(t') \big\|^2. \qquad (8)$$
Indeed, the generator aims to increase the similarity between the generated features for a given individual and the real features of that individual. During training, the Temporal ID Network is trained to hinder the generator, through a loss obtained as:
$$\mathcal{L}_{N_T} = \mathcal{L}_{rec} + \lambda_{inv}\mathcal{L}_{inv}, \qquad (9)$$
where the loss $\mathcal{L}_{inv}$, contrary to $\mathcal{L}_{adv}$, is used to minimize the probabilities $p^*_{c,i}(t)$. Therefore, it is defined as:
$$\mathcal{L}_{inv} = \sum_{c,i,t} -\log\big(1 - p^*_{c,i}(t)\big). \qquad (10)$$
Overall, the final objective of the adversarial game is to increase the ability of the Temporal ID Network to distinguish real identities from fake ones.
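Schematically, the adversarial game alternates the two updates below (a sketch with hypothetical helper functions adv_loss, cycle_term, rec_term and inv_loss standing for equations (7), (6), (3) and (10); opt_G and opt_T are assumed optimizer handles; batching details are omitted).

```python
def adversarial_step(batch, N_G, N_T, opt_G, opt_T,
                     lam_cycle=1.0, lam_inv=0.001):
    # Generator update, Eq. (5): fool the Temporal ID Network while
    # preserving expressions through the cycle term.
    loss_G = adv_loss(batch, N_G, N_T) + lam_cycle * cycle_term(batch, N_G)
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()

    # Temporal ID Network update, Eq. (9): keep real identities
    # separable and push generated features away.
    loss_T = rec_term(batch, N_T) + lam_inv * inv_loss(batch, N_G, N_T)
    opt_T.zero_grad(); loss_T.backward(); opt_T.step()
```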
Identification Given a test sequence depicting a single identity as well as a reference set of pristine sequences of the same person, we apply the following procedure: we first embed both the test and the reference videos using the Temporal ID Network pipeline. We then compute the minimum pairwise Euclidean distance between each reference video and our test sequence. Finally, we compare this distance to a fixed threshold $\tau_{id}$ to decide whether the behavioral properties of our testing sequence coincide with its identity, thus evaluating the authenticity of our test video. The source code and the trained network of our proposal are publicly available.
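The decision rule itself is a one-liner; a sketch (ours) with the threshold $\tau_{id} = \sqrt{1.1}$ used in the experiments:

```python
import torch

def is_fake(test_emb, ref_embs, tau_id=1.1 ** 0.5):
    # test_emb: (T, 128); ref_embs: list of (T_k, 128) pristine references.
    # Minimum pairwise Euclidean distance between test and any reference.
    d_min = min(torch.cdist(test_emb, ref).min().item() for ref in ref_embs)
    return d_min > tau_id    # above threshold -> flagged as fake
```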
Results
To analyze the performance of our proposed method, we conducted a series of experiments. Specifically, we discuss our design choices w.r.t. our employed loss functions and the adversarial training strategy based on an ablation study applied on a set of different manipulation types and different video qualities. In comparison to state-of-the-art DeepFake video detection methods, we show that our approach surpasses these in terms of generalizability and robustness.
Experimental Setup
Our approach is trained using the VoxCeleb2 development dataset [9] consisting of multiple video clips of several identities. Specifically, we use 5120 subjects for the training set and 512 subjects for the validation set. During training, each batch contains 64 sequences of 96 frames. The 64 sequences are formed by M = 8 sequences for each individual, with a total of N = 8 different individuals extracted at random from the training set. Training is performed using the ADAM optimizer [27], with a learning rate of $10^{-4}$ and $10^{-5}$ for the Temporal ID Network and the 3DMM Generative Network, respectively. The parameters $\lambda_{cycle}$, $\lambda_{inv}$ and τ for our loss formulation are set to 1.0, 0.001 and 0.08, respectively. We first train the Temporal ID Network for 300 epochs (with an epoch size of 2500 iterations) and choose the best performing model based on the validation accuracy. Using this trained network, we enable our 3DMM Generative Network and continue training for a fixed 100 epochs. For details on our architectures, we refer to the supplemental document. For all experiments, we use a fixed threshold of $\tau_{id} = \sqrt{1.1}$ to determine whether the behavioral properties of a test video coincide with those of our reference videos. This threshold is set experimentally based on a one-time evaluation on 4 real and 4 fake videos from the original DFD [34], using the averaged squared Euclidean distance of real and manipulated videos.

Table 1: Accuracy and AUC for variants of our approach. We compare three different losses: the multi-similarity loss (MSL), the triplet loss and our proposed loss (Eq. 3). In addition, we present the results obtained with and without the adversarial learning strategy on high quality (HQ) and low quality (LQ) videos manipulated using Facial Reenactment (FR) and Face Swapping (FS).
Ablation Study
In this section, we show the efficacy of the proposed loss and of the adversarial training strategy. For performance evaluation of our approach, we need to know the involved identity (the source identity for face-swapping manipulations and the target identity for facial reenactment ones). Based on this knowledge, we can set up the pristine reference videos used to compute the final distance metric. To this end, we chose a controlled dataset that includes several videos of the same identity, i.e., the recently created dataset of the Google AI lab, called DeepFake Dataset (DFD) [34]. The videos contain 28 paid actors in 16 different contexts; furthermore, for each subject pristine videos are provided (varying from 9 to 16). In total, there are 363 real and 3068 DeepFake videos. Since the dataset only contains face-swapping manipulations, we generated 320 additional videos that include 160 Face2Face [41] and 160 Neural Textures [40] videos. Some examples are shown in Fig. 5.
Performance is evaluated at video level using a leave-one-out strategy for the reference dataset. In detail, for each video under test, the reference dataset only contains pristine videos with a different context from the one under test. The evaluation is done both on high quality (HQ) compressed videos (constant rate quantization parameter equal to 23) using H.264 and on low quality (LQ) compressed videos (quantization parameter equal to 40). This scenario helps us to consider a realistic situation, where videos are uploaded to the web, but also to simulate an attacker who further compresses the video to hide manipulation traces.
We compare the proposed loss to the triplet loss [24] and the multi-similarity loss (MSL) [46]. For these two losses, we adopt the cosine distance instead of the Euclidean one, as proposed by the authors. Moreover, hyper-parameters are chosen to maximize the accuracy of correctly identifying a subject in the validation set. Results for facial reenactment (FR) and face swapping (FS) in terms of Area Under Curve (AUC) and accuracy, both for HQ and LQ videos, are shown in Tab. 1. One can observe that our proposed loss gives a consistent improvement over the multi-similarity loss (5.5% on average) and the triplet loss (2.8% on average) in terms of AUC. In addition, coupled with the adversarial training strategy, performance improves further for the most challenging scenario of FR videos, with an additional gain of around 3% in AUC and of 6% (on average) in terms of accuracy.
Comparisons to State of the Art
We compare our approach to several state-of-the-art DeepFake video detection methods. All the techniques are compared using the accuracy at video level. Hence, if a method works frame-by-frame, we average the probabilities obtained from 32 frames uniformly extracted from the video, as it is also done in [7,1].
State of the art approaches The methods used for our comparison are frame-based methods: MesoNet [1], Xception [8], FFD (Facial Forgery Detection) [14], Efficient-B7 [39]; ensemble methods: ISPL (Image and Sound Processing Lab) [7], Seferbekov [37]; temporal-based methods: Eff.B1 + LSTM, ResNet + LSTM [20]; and an identity-based method: A&B (Appearance and Behavior) [2]. A detailed description of these approaches can be found in the supplemental document. In order to ensure a fair comparison, all supervised approaches (frame-based, ensemble and temporal-based methods) are trained on the same dataset of real and fake videos, while the identity-based ones (A&B and our proposal) are trained instead on VoxCeleb2 [9].
Generalization and robustness analysis To analyze the ability to generalize to different manipulation methods, training and test data come from different datasets. Note that we will focus especially on generalizing from face swapping to facial reenactment.
In a first experiment we test all the methods on the DFD Google dataset that contains both face swapping and facial reenactment manipulations, as described in Section 4.2. In this case all supervised approaches are trained on DFDC [15] with around 100k fake and 20k real videos. This is the largest DeepFake dataset publicly available and includes five different types of manipulations. Experiments on high quality (HQ) videos, with a compression factor of 23, and on low quality (LQ) videos, where the factor is 40, are presented in terms of accuracy and AUC in Tab. 2. Most methods suffer from a huge performance drop when going from face swapping to facial reenactment, with an accuracy that often borders 50%, equivalent to coin tossing. The likely reason is that the DFDC training set includes mostly face-swapping videos, and methods with insufficient generalization ability are unable to deal with different manipulations. This does not hold for ID-Reveal and A&B, which are trained only on real data and, hence, have an almost identical performance on both types of forgeries. For facial reenactment videos, this represents a huge improvement with respect to all competitors. One can also observe a sharp performance degradation of most methods in the presence of strong compression (LQ videos). This is especially apparent with face swapping, where some methods are very reliable on HQ videos but become almost useless on LQ videos. On the contrary, ID-Reveal suffers only a very small loss of accuracy on LQ videos, and outperforms all competitors, including A&B, by a large margin.
In another experiment, we use FaceForensics++ [36] (HQ) for training the supervised methods, while the identity-based methods are always trained on the VoxCeleb2 dataset [9]. For testing, we use the preview DFDC Facebook dataset [16] and CelebDF [32]. The preview DFDC dataset [16] is composed only of face-swapping manipulations of 68 individuals. For each subject there are 3 to 39 pristine videos, with 3 videos for each context. We consider 44 individuals which have at least 12 videos (4 contexts), obtaining a total of 920 real videos and 2925 fake videos. CelebDF [32] contains 890 real videos and 5639 face-swapping manipulated videos. The videos relate to 59 individuals, except for 300 real videos that do not have any information about the individual; hence, they cannot be included in our analysis. Results in terms of accuracy and AUC at video level are shown in Tab. 3. One can observe that also in this scenario our method achieves very good results for all the datasets, with an average improvement with respect to the best supervised approach of about 16% on LQ videos. Even the improvement with respect to the identity-based approach A&B [2] is significant: around 14% on HQ videos and 13% on LQ ones. Again, the performance of supervised approaches worsens in the unseen conditions of low-quality videos, while our method preserves its good performance.
To gain better insight into both generalization and robustness, we highlight the very different behavior of supervised methods when the fake videos used in training change. Specifically, for HQ videos, if the manipulation (in this case, neural textures and face2face) is included in both training and test, then performance is very high for all the methods, but it drops sharply if those manipulations are excluded from training (see Fig. 6). The situation is even worse for LQ videos. Identity-based methods maintain their performance, since they do not depend at all on which manipulation is included in training.
Conclusion
We have introduced ID-Reveal, an identity-aware detection approach leveraging a set of reference videos of a target person and trained in an adversarial fashion. A key aspect of our method is the usage of a low-dimensional 3DMM representation to analyze the motion of a person. While this compressed representation of faces contains less information than the original 2D images, the robustness it gains is a very important feature that makes our method generalize across different forgery methods. Specifically, the 3DMM representation is not affected by different environments or lighting situations, and is robust to disruptive forms of post-processing, e.g., compression. We conducted a comprehensive analysis of our method and, in comparison to the state of the art, we are able to improve detection quality by a significant margin, especially on low-quality content. At the same time, our method improves generalization capabilities by adopting a training strategy that solely focuses on non-manipulated content.

Figure 6: Binary detection accuracy of our approach compared to state-of-the-art methods. Results are obtained on the facial-reenactment DFD dataset, both on HQ and LQ videos. We consider two different training scenarios for all the approaches that need forged videos in training: manipulation in training (blue bars), where the training set includes the same type of manipulations present in the test set (neural textures and face2face for facial reenactment), and manipulation out of training (orange bars), where we adopt DFDC, which includes only face swapping.
Appendix
In this appendix, we report the details of the architectures used for the Temporal ID Network and the 3DMM Generative Network (Sec. A). Moreover, we briefly describe the state-of-the-art DeepFake detection methods we compare to (Sec. B). In Sec. C and Sec. D, we present additional results to prove the generalization capability of our method. In Sec. E, we include scatter plots that show the separability of videos of different subjects in the embedding space. Finally, we analyze a real case on the web (Sec. F).
A. Architectures
Temporal ID Network We leverage a convolutional neural network architecture that works along the temporal direction and is composed of eleven layers (see Fig. 7 (a)). We use Group Normalization [48] and LeakyReLU nonlinearity for all layers except the last one. Moreover, we adopt à-trous convolutions (also called dilated convolutions) instead of classic convolutions in order to increase the receptive field without increasing the number of trainable parameters. The first layer increases the number of channels from 62 to 512, while the successive ones are inspired by the ResNet architecture [23] and include a residual block as shown in Fig. 7 (c). The parameters K and D of the residual blocks are the filter size and the dilation factor of the à-trous convolution, respectively. The last layer reduces the channels from 512 to 128. The receptive field of the whole network is equal to 51 frames, which is around 2 seconds. 3DMM Generative Network As described in the main paper, the 3DMM Generative Network is fed by two 3DMM feature vectors. The two feature vectors are concatenated, which results in a single input vector of 124 channels. The network is formed by five layers: a layer to increase the channels from 124 to 512, three residual blocks, and a last layer to decrease the channels from 512 to 62. The output is summed to the input 3DMM feature vector to obtain the generated 3DMM feature vector (see Fig. 7 (b)). All the convolutions have a filter size equal to one in order to work frame-by-frame.
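For concreteness, a minimal PyTorch sketch of one residual block of the Temporal ID Network is given below; the 512-channel width, Group Normalization, LeakyReLU, and dilated 1-D convolutions follow the description above, while the padding scheme, number of normalization groups, negative slope, and the use of two convolutions per block are assumptions.

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    # One residual block of the Temporal ID Network: dilated (a-trous)
    # 1-D convolutions over the temporal axis with Group Normalization
    # and LeakyReLU, plus a skip connection. K is the filter size and
    # D the dilation factor; padding keeps the temporal length fixed.
    def __init__(self, channels=512, K=3, D=1, groups=32):
        super().__init__()
        pad = (K - 1) // 2 * D
        self.conv1 = nn.Conv1d(channels, channels, K, padding=pad, dilation=D)
        self.norm1 = nn.GroupNorm(groups, channels)
        self.conv2 = nn.Conv1d(channels, channels, K, padding=pad, dilation=D)
        self.norm2 = nn.GroupNorm(groups, channels)
        self.act = nn.LeakyReLU(0.2)

    def forward(self, x):  # x: (batch, channels, frames)
        h = self.act(self.norm1(self.conv1(x)))
        h = self.norm2(self.conv2(h))
        return self.act(h + x)  # residual connection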
B. Comparison methods
In the main paper, we compare our approach with several state-of-the-art DeepFake detection methods, which are described in the following:
Frame-based methods
(i) MesoNet [1]: one of the first CNN methods proposed for DeepFake detection, which uses dilated convolutions with inception modules. (ii) Xception [8]: a relatively deep neural network that achieves very good performance compared to other CNNs for video DeepFake detection [36]. (iii) FFD (Facial Forgery Detection) [14]: a variant of Xception, including an attention-based layer, in order to focus on high-frequency details. (iv) Efficient-B7 [39]: proposed by Tan et al. and pre-trained on ImageNet using the strategy described in [49], where the network is trained with injected noise (such as dropout, stochastic depth, and data augmentation) on both labeled and unlabeled images.
Ensemble methods
(v) ISPL (Image and Sound Processing Lab) [7]: employs an ensemble of four variants of EfficientNet-B4. The networks are trained using different strategies, such as a self-attention mechanism and a triplet siamese strategy. Data augmentation is performed by applying several operations, such as downscaling, noise addition, and JPEG compression. (vi) Seferbekov [37]: the algorithm proposed by the winner of the Kaggle competition (Deepfake Detection Challenge) organized by Facebook [15]. It uses an ensemble of seven EfficientNet-B7 networks that work frame-by-frame. The networks are pre-trained using the strategy described in [49]. The training leverages data augmentation that, beyond some standard operations, includes a cut-out that drops specific parts of the face.
Temporal-based methods
(vii) ResNet + LSTM: a method based on Long Short-Term Memory (LSTM) [20]. In detail, a ResNet50 is used to extract frame-level features from 20 frames uniformly sampled from the video. These features are provided to an LSTM that classifies the whole video. (viii) Eff.B1 + LSTM: a variant of the approach described above, where the ResNet architecture is replaced by EfficientNet-B1.
Identity-based methods
(ix) A&B (Appearance and Behavior) [2]: an identity-based approach that includes a face recognition network and a network that is based on head movements. The behavior recognition system encodes the information about the identity through a network that works on a sequence of attributes related to movement [47].
Note that all the techniques are compared at video level. Hence, if a method works frame-by-frame, we average the probabilities obtained from 32 frames uniformly sampled from the video. To validate this choice, we compare averaging with the maximum strategy. Results are reported in Tab. 4, using the same experimental setting as Tab. 2 of the main paper. The results confirm the advantage of the averaging operation over taking the maximum value: the increase in terms of AUC is around 0.04, while the accuracy increases, on average, by about 3%.
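As a reference, the video-level aggregation used for the frame-based detectors can be written as the short sketch below; the function name is ours, and the per-frame probabilities are assumed to come from any of the detectors above.

import numpy as np

def video_level_score(frame_probs, strategy="average"):
    # Aggregate per-frame fake probabilities (here, from 32 uniformly
    # sampled frames) into a single video-level score, either by
    # averaging or by taking the maximum.
    p = np.asarray(frame_probs, dtype=float)
    return p.mean() if strategy == "average" else p.max()

# Example with dummy per-frame scores:
probs = np.random.rand(32)
print(video_level_score(probs, "average"), video_level_score(probs, "max"))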
C. Additional results
To show the ability of our method to be agnostic to the type of manipulation, we test our proposal on additional datasets that are not included in the main paper. In Tab. 5 we report the analysis on the FaceForensics++ (FF++) dataset [36]. Results are split into facial reenactment (FR) and face swapping (FS) manipulations. It is important to underline that this dataset does not provide information about multiple videos of the same subject; therefore, for identity-based approaches, the first 6 seconds of each pristine video are used as the reference set, while the last 6 seconds are used to evaluate the performance (we only consider videos of at least 14 seconds duration, thus obtaining 360 videos for each manipulation method). On the FF++ dataset, our method consistently obtains better performance, in both the high-quality and low-quality cases.
As a further analysis, we test our method on a recent facial reenactment method, FOMM (First-Order Motion Model) [38]. Using the official code of FOMM, we created 160 fake videos using the pristine videos of DFD; some examples are shown in Fig. 8. On these videos, our approach achieves an accuracy of 85.6% and an AUC of 0.94, which further underlines the generalization of our method with respect to a new type of manipulation.
D. Robustness to different contexts
We carried out additional experiments to verify that our method does not require the reference videos to be similar to the manipulated ones in terms of environment, lighting, or distance from the subject. To this end, we show results in Fig. 9 obtained for the DFD FR and DFD FS datasets, where information about the video context (kitchen, podium, outside, talking, meeting, etc.) is available. Even when the reference videos and the videos under test differ, our method shows robust performance. Results seem only affected by the variety of poses and expressions present in the reference videos (the last reference video in the table contains the most variety in motion, thus yielding better results).
E. Visualization of the embedded vectors
In this section, we include scatter plots that show a 2D orthogonal projection of the extracted temporal patterns. In particular, in Fig. 10 we show scatter plots of embedded vectors extracted from 4-second-long video snippets relative to two actors of the DFD dataset, obtained using Linear Discriminant Analysis (LDA) and selecting the 2D orthogonal projection that maximizes the separation between the real videos of the two actors and between real and fake videos. We can observe that, in the embedding space, the real videos relative to different actors are perfectly separated. Moreover, the manipulated videos relative to an actor are also well separated from the real videos of the same actor.
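The projection can be reproduced with a standard LDA implementation; the sketch below uses scikit-learn, with placeholder embeddings and labels standing in for the 128-D snippet embeddings and the real/fake classes of the two actors.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Placeholder 128-D embeddings of 4-second snippets with four classes:
# 0: real actor A, 1: real actor B, 2: fake actor A, 3: fake actor B.
X = np.random.randn(400, 128)
y = np.repeat([0, 1, 2, 3], 100)

lda = LinearDiscriminantAnalysis(n_components=2)
X_2d = lda.fit_transform(X, y)  # 2D orthogonal projection for the scatter plot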
F. A real case on the web
We applied ID-Reveal to videos of Nicolas Cage downloaded from YouTube. We tested on three real videos, four DeepFake videos, one imitator (a comedian impersonating Nicolas Cage), and a DeepFake applied to the imitator. We evaluate the distributions of distance metrics, computed as the minimum pairwise squared Euclidean distance in the embedding space between the 4-second-long video snippets extracted from the pristine reference video and those from the video under test. In Fig. 11, we report these distributions using a violin plot.
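The distance metric itself is straightforward to compute; a minimal sketch follows, where the embedding arrays are assumed to be produced by the Temporal ID Network.

import numpy as np

def min_pairwise_sq_distance(ref_emb, test_emb):
    # Minimum pairwise squared Euclidean distance between embeddings of
    # 4-second snippets from the reference video (shape (n, d)) and from
    # the video under test (shape (m, d)).
    diff = ref_emb[:, None, :] - test_emb[None, :, :]
    return np.sum(diff ** 2, axis=-1).min()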
We can observe that the lowest distances are those of the real videos (green). For the DeepFakes (red), all distances are higher and, thus, the videos can be detected as fakes. An interesting case is the video of the imitator (purple), which presents a much lower distance, since he is imitating Nicolas Cage. A DeepFake driven by the imitator strongly reduces the distance (pink), but is still detected by our method.
Figure 2: Automatic face manipulations can be split in two main categories: facial reenactment and face swapping. The first one alters the facial expression while preserving the identity. The second one modifies the identity of a person while preserving the facial expression.

… the generated feature vector with appearance of the individual c and expressions of individual k. During training, we use batches that contain N × M videos of N different individuals, each with M videos. In our experiments, we chose M = N = 8.

Figure 5: Aligned example images of the DFD FS (Face Swapping) as well as the newly created DFD FR (Facial Reenactment) datasets. From left to right: source videos, target sequences, DeepFakes, and manipulations created using Neural Textures [40].

Figure 7: Architecture of our proposed Temporal ID Network and 3DMM Generative Network.

Figure 8: Aligned examples of created FOMM videos. From top to bottom: source videos, target sequences, and manipulations created using the First-Order Motion Model [38].

Figure 10: Scatter plots of embedded vectors extracted from 4-second-long video snippets relative to couples of actors. We included both face swapping (FS) and facial reenactment (FR).

Figure 11: Distributions of squared Euclidean distances of 9 videos downloaded from YouTube with respect to a real reference video of Nicolas Cage. From left to right: 3 real videos, 4 DeepFakes, a video from an imitator, and a DeepFake driven by the imitator.
Table 2: Video-level detection accuracy and AUC of our approach compared to state-of-the-art methods. Results are obtained on the DFD dataset on HQ videos and LQ ones, split in facial reenactment (FR) and face swapping (FS) manipulations. Training for supervised methods is carried out on DFDC, while for identity-based methods on VoxCeleb2.
Table 3: Video-level detection accuracy and AUC of our approach compared to state-of-the-art methods. Results are obtained on DFDCp and CelebDF on HQ videos and LQ ones. Training for supervised methods is carried out on FF++, while for identity-based methods on VoxCeleb2.
Table 4: Video-level detection accuracy and AUC of frame-based methods. We compare two strategies: averaging the score over 32 frames in a video and taking the maximum score. Results are obtained on the DFD dataset on HQ videos and LQ ones, split in facial reenactment (FR) and face swapping (FS) manipulations.
Table 5: Video-level detection accuracy and AUC of our approach compared to state-of-the-art methods. Results are obtained on the FF++ dataset on HQ videos and LQ ones, split in facial reenactment (FR) and face swapping (FS) manipulations. Training for supervised methods is carried out on DFDC, while for identity-based methods on VoxCeleb2.

Acc(%) / AUC       | High Quality (HQ)          | Low Quality (LQ)
                   | FF++ FR     | FF++ FS      | FF++ FR     | FF++ FS
MesoNet            | 55.4 / 0.58 | 57.1 / 0.61  | 55.4 / 0.57 | 57.3 / 0.62
Xception           | 55.6 / 0.58 | 79.0 / 0.89  | 51.9 / 0.57 | 69.2 / 0.79
Efficient-B7       | 54.9 / 0.59 | 85.4 / 0.93  | 50.6 / 0.54 | 65.6 / 0.80
FFD                | 54.4 / 0.56 | 69.2 / 0.75  | 53.5 / 0.56 | 63.3 / 0.70
ISPL               | 56.6 / 0.59 | 74.2 / 0.83  | 53.3 / 0.55 | 68.8 / 0.76
Seferbekov         | 58.3 / 0.62 | 89.9 / 0.97  | 53.0 / 0.55 | 79.4 / 0.87
ResNet + LSTM      | 55.0 / 0.58 | 59.0 / 0.63  | 56.2 / 0.58 | 61.9 / 0.66
Eff.B1 + LSTM      | 57.2 / 0.62 | 81.8 / 0.90  | 54.1 / 0.58 | 69.0 / 0.78
A&B                | 72.2 / 0.78 | 89.0 / 0.97  | 51.5 / 0.53 | 51.9 / 0.65
ID-Reveal (Ours)   | 78.3 / 0.87 | 93.6 / 0.99  | 74.8 / 0.83 | 81.9 / 0.97
Figure 9: Average performance in terms of AUC evaluated on 28 actors of the DFD FR and DFD FS datasets when test videos are in different contexts with respect to reference videos. Test videos: kitchen, podium-speech, outside laughing talking. Reference videos: angry talking, talking against wall, outside happy hugging, outside surprised, serious meeting.

AUC on DFD (FR / FS); rows are the test videos and columns the reference videos, each in the order listed in the caption:

0.845 / 0.973 | 0.781 / 0.956 | 0.831 / 0.930 | 0.786 / 0.894 | 0.995 / 0.999
0.874 / 0.924 | 0.729 / 0.975 | 0.811 / 0.913 | 0.793 / 0.955 | 0.996 / 0.997
0.868 / 0.897 | 0.743 / 0.908 | 0.886 / 0.812 | 0.863 / 0.889 | 0.838 / 1.000
https://github.com/grip-unina/id-reveal
https://www.kaggle.com/c/deepfake-detection-challenge
Acknowledgment

We gratefully acknowledge the support of this research by a TUM-IAS Hans Fischer Senior Fellowship, a TUM-IAS Rudolf Mößbauer Fellowship, and a Google Faculty Research Award. In addition, this material is based on research sponsored by the Defense Advanced Research Projects Agency (DARPA) and the Air Force Research Laboratory (AFRL) under agreement number FA8750-20-2-1004. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA and AFRL or the U.S. Government. This work is also supported by the PREMIER project, funded by the Italian Ministry of Education, University, and Research within the PRIN 2017 program.
References

[1] Darius Afchar, Vincent Nozick, Junichi Yamagishi, and Isao Echizen. MesoNet: a compact facial video forgery detection network. In IEEE International Workshop on Information Forensics and Security (WIFS), pages 1-7, 2018.
[2] Shruti Agarwal, Hany Farid, Tarek El-Gaaly, and Ser-Nam Lim. Detecting deep-fake videos from appearance and behavior. In IEEE International Workshop on Information Forensics and Security (WIFS), pages 1-6, 2020.
[3] Shruti Agarwal, Hany Farid, Ohad Fried, and Maneesh Agrawala. Detecting deep-fake videos from phoneme-viseme mismatches. In IEEE CVPR Workshops, 2020.
[4] Shruti Agarwal, Hany Farid, Yuming Gu, Mingming He, Koki Nagano, and Hao Li. Protecting world leaders against deep fakes. In IEEE CVPR Workshops, June 2019.
[5] Shivangi Aneja and Matthias Nießner. Generalized zero and few-shot transfer for facial forgery detection. arXiv preprint arXiv:2006.11863, 2020.
[6] Volker Blanz and Thomas Vetter. A morphable model for the synthesis of 3D faces. In ACM Transactions on Graphics (Proc. of SIGGRAPH), pages 187-194, 1999.
[7] Nicolò Bonettini, Edoardo Daniele Cannas, Sara Mandelli, Luca Bondi, Paolo Bestagini, and Stefano Tubaro. Video face manipulation detection through ensemble of CNNs. In IEEE International Conference on Pattern Recognition (ICPR), 2020. https://github.com/polimi-ispl/icpr2020dfdc.
[8] François Chollet. Xception: Deep learning with depthwise separable convolutions. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1251-1258, 2017.
[9] Joon Son Chung, Arsha Nagrani, and Andrew Zisserman. VoxCeleb2: Deep speaker recognition. In Interspeech, 2018.
[10] Umur Aybars Ciftci, Ilke Demir, and Lijun Yin. FakeCatcher: Detection of synthetic portrait videos using biological signals. IEEE Transactions on Pattern Analysis and Machine Intelligence, in press, 2020.
[11] Davide Cozzolino, Giovanni Poggi, and Luisa Verdoliva. Extracting camera-based fingerprints for video forensics. In IEEE CVPR Workshops, pages 130-137, 2019.
[12] Davide Cozzolino, Justus Thies, Andreas Rössler, Christian Riess, Matthias Nießner, and Luisa Verdoliva. ForensicTransfer: Weakly-supervised domain adaptation for forgery detection. arXiv preprint arXiv:1812.02510, 2018.
[13] Davide Cozzolino and Luisa Verdoliva. Noiseprint: A CNN-based camera model fingerprint. IEEE Transactions on Information Forensics and Security, 15:144-159, 2020.
[14] Hao Dang, Feng Liu, Joel Stehouwer, Xiaoming Liu, and Anil K. Jain. On the detection of digital face manipulation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5781-5790, 2020. http://cvlab.cse.msu.edu/project-ffd.html.
[15] Brian Dolhansky, Joanna Bitton, Ben Pflaum, Jikuo Lu, Russ Howes, Menglin Wang, and Cristian Canton Ferrer. The DeepFake detection challenge dataset. arXiv preprint arXiv:2006.07397, 2020.
[16] Brian Dolhansky, Russ Howes, Ben Pflaum, Nicole Baram, and Cristian Canton Ferrer. The DeepFake detection challenge (DFDC) preview dataset. arXiv preprint arXiv:1910.08854, 2019.
[17] Mengnan Du, Shiva K. Pentyala, Yuening Li, and Xia Hu. Towards generalizable deepfake detection with locality-aware autoencoder. In ACM International Conference on Information and Knowledge Management, pages 325-334, 2020.
[18] Steven Fernandes, Sunny Raj, Eddy Ortiz, Iustina Vintila, Margaret Salter, Gordana Urosevic, and Sumit Jha. Predicting heart rate variations of deepfake videos using Neural ODE. In ICCV Workshops, 2019.
[19] Gereon Fox, Wentao Liu, Hyeongwoo Kim, Hans-Peter Seidel, Mohamed Elgharib, and Christian Theobalt. VideoForensicsHQ: Detecting high-quality manipulated face videos. In IEEE International Conference on Multimedia and Expo (ICME), pages 1-6, 2021.
[20] David Güera and Edward J. Delp. Deepfake video detection using recurrent neural networks. In IEEE International Conference on Advanced Video and Signal Based Surveillance, 2018.
[21] Jianzhu Guo, Xiangyu Zhu, Yang Yang, Fan Yang, Zhen Lei, and Stan Z. Li. Towards fast, accurate and stable 3D dense face alignment. In European Conference on Computer Vision (ECCV), 2020.
[22] Raia Hadsell, Sumit Chopra, and Yann LeCun. Dimensionality reduction by learning an invariant mapping. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 2, pages 1735-1742, 2006.
[23] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770-778, 2016.
[24] Elad Hoffer and Nir Ailon. Deep metric learning using triplet network. In International Workshop on Similarity-Based Pattern Recognition, pages 84-92. Springer, 2015.
[25] Minyoung Huh, Andrew Liu, Andrew Owens, and Alexei A. Efros. Fighting fake news: Image splice detection via learned self-consistency. In European Conference on Computer Vision (ECCV), pages 101-117, 2018.
[26] Liming Jiang, Ren Li, Wayne Wu, Chen Qian, and Chen Change Loy. DeeperForensics-1.0: A large-scale dataset for real-world face forgery detection. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2886-2895, 2020.
[27] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[28] Akash Kumar, Arnav Bhavsar, and Rajesh Verma. Detecting deepfakes with metric learning. In International Workshop on Biometrics and Forensics (IWBF), pages 1-6, 2020.
[29] Lingzhi Li, Jianmin Bao, Ting Zhang, Hao Yang, Dong Chen, Fang Wen, and Baining Guo. Face X-ray for more general face forgery detection. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5001-5010, 2020.
[30] Yuezun Li, Ming-Ching Chang, and Siwei Lyu. In Ictu Oculi: Exposing AI created fake videos by detecting eye blinking. In IEEE International Workshop on Information Forensics and Security (WIFS), pages 1-7, 2018.
[31] Yuezun Li and Siwei Lyu. Exposing deepfake videos by detecting face warping artifacts. In IEEE CVPR Workshops, 2019.
[32] Yuezun Li, Pu Sun, Honggang Qi, and Siwei Lyu. Celeb-DF: A large-scale challenging dataset for DeepFake forensics. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
[33] Iacopo Masi, Aditya Killekar, Royston Marian Mascarenhas, Shenoy Pratik Gurudatt, and Wael AbdAlmageed. Two-branch recurrent network for isolating deepfakes in videos. In European Conference on Computer Vision (ECCV), 2020.
[34] N. Dufour, A. Gully, P. Karlsson, A. V. Vorbyov, T. Leung, J. Childs, and C. Bregler. DeepFakes detection dataset, 2019. https://ai.googleblog.com/2019/09/contributing-data-to-deepfake-detection.html.
[35] Hua Qi, Qing Guo, Felix Juefei-Xu, Xiaofei Xie, Lei Ma, Wei Feng, Yang Liu, and Jianjun Zhao. DeepRhythm: Exposing DeepFakes with attentional visual heartbeat rhythms. In ACM International Conference on Multimedia, pages 4318-4327, 2020.
[36] Andreas Rössler, Davide Cozzolino, Luisa Verdoliva, Christian Riess, Justus Thies, and Matthias Nießner. FaceForensics++: Learning to detect manipulated facial images. In IEEE International Conference on Computer Vision (ICCV), pages 1-11, 2019.
[37] Selim Seferbekov. DeepFake Detection (DFDC) Team Sefer. https://github.com/selimsef/dfdc_deepfake_challenge.
[38] Aliaksandr Siarohin, Stéphane Lathuilière, Sergey Tulyakov, Elisa Ricci, and Nicu Sebe. First order motion model for image animation. In Advances in Neural Information Processing Systems (NeurIPS), volume 32, 2019.
[39] Mingxing Tan and Quoc Le. EfficientNet: Rethinking model scaling for convolutional neural networks. In International Conference on Machine Learning, pages 6105-6114, 2019.
[40] Justus Thies, Michael Zollhöfer, and Matthias Nießner. Deferred neural rendering: Image synthesis using neural textures. ACM Transactions on Graphics (Proc. of SIGGRAPH), 2019.
[41] Justus Thies, Michael Zollhöfer, Marc Stamminger, Christian Theobalt, and Matthias Nießner. Face2Face: Real-time face capture and reenactment of RGB videos. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2387-2395, 2016.
[42] R. Tolosana, R. Vera-Rodriguez, J. Fierrez, A. Morales, and J. Ortega-Garcia. Deepfakes and beyond: A survey of face manipulation and fake detection. Information Fusion, pages 131-148, 2020.
[43] Luisa Verdoliva. Media forensics and deepfakes: an overview. IEEE Journal of Selected Topics in Signal Processing, 14(5):910-932, 2020.
[44] Nam N. Vo and James Hays. Localizing and orienting street views using overhead imagery. In European Conference on Computer Vision (ECCV), pages 494-509. Springer, 2016.
[45] Sheng-Yu Wang, Oliver Wang, Richard Zhang, Andrew Owens, and Alexei A. Efros. CNN-generated images are surprisingly easy to spot... for now. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
[46] Xun Wang, Xintong Han, Weilin Huang, Dengke Dong, and Matthew R. Scott. Multi-similarity loss with general pair weighting for deep metric learning. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5022-5030, 2019.
[47] Olivia Wiles, A. Sophia Koepke, and Andrew Zisserman. Self-supervised learning of a facial attribute embedding from video. In British Machine Vision Conference, 2018.
[48] Yuxin Wu and Kaiming He. Group normalization. In European Conference on Computer Vision (ECCV), pages 3-19, 2018.
[49] Qizhe Xie, Minh-Thang Luong, Eduard Hovy, and Quoc V. Le. Self-training with noisy student improves ImageNet classification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 10687-10698, 2020.
[50] Xin Yang, Yuezun Li, and Siwei Lyu. Exposing deep fakes using inconsistent head poses. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 8261-8265, 2019.
[51] Bojia Zi, Minghao Chang, Jingjing Chen, Xingjun Ma, and Yu-Gang Jiang. WildDeepfake: A challenging real-world dataset for deepfake detection. In ACM International Conference on Multimedia, pages 2382-2390, 2020.
| [
"https://github.com/grip-unina/id-reveal",
"https://github.com/selimsef/dfdc_"
] |
[
"DEEPHIVE: A MULTI-AGENT REINFORCEMENT LEARNING APPROACH FOR AUTOMATED DISCOVERY OF SWARM-BASED OPTIMIZATION POLICIES A PREPRINT",
"DEEPHIVE: A MULTI-AGENT REINFORCEMENT LEARNING APPROACH FOR AUTOMATED DISCOVERY OF SWARM-BASED OPTIMIZATION POLICIES A PREPRINT"
] | [
"Eloghosa A Ikponmwoba \nDepartment of Mechanical Engineering\nDepartment of Mechanical Engineering\nLouisiana State University Baton Rouge\nLouisiana State University Baton Rouge\n70803, 70803LA, LA\n",
"Opeoluwa Owoyele [email protected] \nDepartment of Mechanical Engineering\nDepartment of Mechanical Engineering\nLouisiana State University Baton Rouge\nLouisiana State University Baton Rouge\n70803, 70803LA, LA\n"
] | [
"Department of Mechanical Engineering\nDepartment of Mechanical Engineering\nLouisiana State University Baton Rouge\nLouisiana State University Baton Rouge\n70803, 70803LA, LA",
"Department of Mechanical Engineering\nDepartment of Mechanical Engineering\nLouisiana State University Baton Rouge\nLouisiana State University Baton Rouge\n70803, 70803LA, LA"
] | [] | We present an approach for designing swarm-based optimizers for the global optimization of expensive black-box functions. In the proposed approach, the problem of finding efficient optimizers is framed as a reinforcement learning problem, where the goal is to find optimization policies that require a few function evaluations to converge to the global optimum. The state of each agent within the swarm is defined as its current position and function value within a design space and the agents learn to take favorable actions that maximize reward, which is based on the final value of the objective function. The proposed approach is tested on various benchmark optimization functions and compared to the performance of other global optimization strategies. Furthermore, the effect of changing the number of agents, as well as the generalization capabilities of the trained agents are investigated. The results show superior performance compared to the other optimizers, desired scaling when the number of agents is varied, and acceptable performance even when applied to unseen functions. On a broader scale, the results show promise for the rapid development of domain-specific optimizers | 10.48550/arxiv.2304.04751 | [
"https://export.arxiv.org/pdf/2304.04751v1.pdf"
] | 258,060,130 | 2304.04751 | b389a2ee39d23e1d0358274c49c39eb042c86604 |
DEEPHIVE: A MULTI-AGENT REINFORCEMENT LEARNING APPROACH FOR AUTOMATED DISCOVERY OF SWARM-BASED OPTIMIZATION POLICIES

A PREPRINT

Eloghosa A Ikponmwoba
Department of Mechanical Engineering
Louisiana State University
Baton Rouge, LA 70803

Opeoluwa Owoyele [email protected]
Department of Mechanical Engineering
Louisiana State University
Baton Rouge, LA 70803

April 12, 2023

Keywords: Global optimization · Blackbox Optimization · Reinforcement learning · Swarm-based optimizers
We present an approach for designing swarm-based optimizers for the global optimization of expensive black-box functions. In the proposed approach, the problem of finding efficient optimizers is framed as a reinforcement learning problem, where the goal is to find optimization policies that require a few function evaluations to converge to the global optimum. The state of each agent within the swarm is defined as its current position and function value within a design space, and the agents learn to take favorable actions that maximize reward, which is based on the final value of the objective function. The proposed approach is tested on various benchmark optimization functions and compared to the performance of other global optimization strategies. Furthermore, the effect of changing the number of agents, as well as the generalization capabilities of the trained agents, are investigated. The results show superior performance compared to the other optimizers, desired scaling when the number of agents is varied, and acceptable performance even when applied to unseen functions. On a broader scale, the results show promise for the rapid development of domain-specific optimizers.
1 Introduction
Design optimization, which involves the selection of design variables to achieve desired objectives, is important to many areas of engineering. Manufacturing processes must be optimized to reduce waste and power consumption while simultaneously maximizing desirable outcomes such as quality control and manufacturing speed. The discovery of new materials with superior mechanical, electrical, or thermal properties can also be framed as an optimization problem, where the goal is to find the processing and compositional parameters that maximize the desired properties. Other examples include the optimization of lithium-ion batteries to reduce the risk of thermal runaway, the minimization of drag on airfoils or wind turbine blades, and the optimization of combustion processes in an engine to reduce greenhouse gas emissions, just to name a few. In some of these examples, the process or performance of the system may be affected by a large number of variables interacting in complicated ways Liao [2010]. In some cases, the problem of optimization is further complicated by the presence of several local optima, an objective function that is expensive or time-consuming to evaluate, or the existence of constraints.
One way to classify existing algorithms for design optimization is by their utilization of derivatives for finding optimal designs. On the one hand, gradient-based optimizers use local information about the function's derivatives to search for optimal values of the design variables Dababneh et al. [2018]. Although they possess fast convergence rates and can perform well for smooth problems with many design variables, one of the major disadvantages of gradient-based approaches is that global optimization is difficult to ensure when dealing with non-smooth problems with several local optima Dababneh et al. [2018]. In addition to this limitation, gradient-based optimizers do not apply to optimization problems where the functional form is not readily available, often referred to as black-box optimization (BBO). In such cases, we can observe the outputs of the function based on some given input parameters, but the form of the function is unavailable. On the other hand, derivative-free optimization methods, which, as the name implies, do not utilize gradients, have been developed. Among these are evolutionary algorithms, loosely modeled after biological evolution, and swarm-based techniques, inspired by social behavior in biological swarms. Derivative-free optimizers possess advantages such as robustness to the occurrence of local optima, suitability for BBO problems, and ease of implementation (since analytical and numerical gradients are not required) Houssein et al. [2021]. In particular, swarm-based optimizers operate based on the principle of swarm intelligence (SI), which refers to the collective intelligent behavior of decentralized and self-organized systems Ab Wahab et al. [2015]. Examples of swarm-based optimizers are Ant Colony Optimization (ACO) Dorigo et al. [2006], Artificial Bee Colony (ABC) Karaboga [2010], Cuckoo Search Algorithm (CSA) Yang and Deb [2010], Glowworm Swarm Optimization (GSO) Krishnanand and Ghose [2009], and Particle Swarm Optimization (PSO) Eberhart and Kennedy [1995], Hu et al. [2003], Poli et al. [2007]. These algorithms typically operate by gradually updating each particle's position within the design space based on a rule that governs how it interacts with other swarm members. As opposed to each swarm member updating its position based on only its own knowledge of the design space, the swarm members work together to find the global optimum, thus achieving better performance than attainable with a single member working alone.
Despite their success in optimizing non-convex multimodal functions, swarm-based optimizers typically suffer from slow convergence and require several function evaluations to find the global optima Owoyele and Pal [2021]. Also, in many engineering applications, evaluating the objective function can be expensive or time-consuming. For instance, searching a compositional parameter space for a new material requires following a complex set of steps, including experimental setup, sample preparation, and material testing. In many industries, computational fluid dynamics (and other types of simulation approaches) may be used to perform function evaluations, and depending on the numerical complexity, a single simulation may require several hours or days to complete on several processors. Therefore, for such applications with expensive function evaluations, design optimizers must be designed to work with limited swarm sizes; otherwise, the optimization process becomes impractical due to excessive computing runtimes. For problems where a small swarm size is necessarily imposed due to the runtime-related considerations described above, swarm-based optimizers often suffer from premature convergence to local optima Owoyele and Pal [2021]. Another downside is that the performance of swarm-based optimizers for various problems is often sensitive to the choice of model parameters or constants (e.g., maximum velocity, communication topologies, inertia weights, etc.). These constants must be selected by the user a priori and do not necessarily generalize adequately across problems Eberhart [1998], Trelea [2003]. Thus, parameters need to be carefully tuned to maintain acceptable performance for different optimization problems Houssein et al. [2021].
In this study, we introduce DeepHive, a new approach to developing swarm-based design optimizers, using a machine learning strategy known as reinforcement learning. Reinforcement learning is a branch of artificial intelligence in which an agent learns an optimal policy for sequential decision-making problems by interacting with the environment Sutton et al. [1998]. Some previous studies have explored the possibility of enhancing design optimization using reinforcement learning Xu and Pi [2020]. Li and Malik [2016] developed a framework for learning how to perform gradient-based optimization using reinforcement learning. Xu and Pi [2020] presented a variant of PSO based on reinforcement learning methods with superior performance over various state-of-the-art variants of PSO. In their approach, the particles act as independent agents, choosing the optimal communication topology using a Q-learning approach during each iteration. Samma et al. [2016] developed a reinforcement learning-based memetic particle swarm optimizer (RLMPSO), where each particle is subjected to five possible operations: exploitation, convergence, high-jump, low-jump, and local fine-tuning. These operations are executed based on the RL-generated actions. Firstly, our paper differs from the preceding studies in that the proposed approach discovers an optimization policy within a continuous action space. Secondly, as opposed to some previous studies that learn parameters within the framework of existing swarm-based approaches, DeepHive learns to optimize with minimal assumptions about the functional form of the particle update policy. Lastly, DeepHive displays acceptable generalization characteristics on unseen functions, as will be shown in section 4. The remainder of the paper is laid out as follows: section 2 gives a quick overview of reinforcement learning and its relationship to BBO, including a description of the proximal policy optimization (PPO) algorithm employed in this study. The proposed optimization technique is discussed in detail in section 3. A brief explanation of the benchmark objective functions tested, and the results obtained using DeepHive, is presented and analyzed in section 4. The paper ends with some concluding remarks in section 6.
2 Proposed approach

2.1 Reinforcement Learning

Reinforcement learning (RL) involves an interaction between an agent and its environment, wherein the agent receives a reward (or punishment) based on the quality of its actions. The agent takes an action that alters the state of the environment, and a reward is given based on the quality or fitness of the action performed. Each agent tries to maximize its reward over several episodes of interaction with the environment and, by so doing, learns to select the best action sequence. Typically, the reinforcement learning problem is expressed as a Markov decision process (MDP). In a continuous setting, the state and action spaces of a finite-horizon MDP are defined by the tuple (S, A, p_0, p, c, γ), where S represents the states, A is the set of actions, p_0 : S → R_+ is the probability density function over initial states, p : S × A × S → R_+ is the conditional probability density over successor states given the current state and action, c : S → R is the function that maps states to rewards, and γ ∈ (0, 1] is the discount factor Li and Malik [2016]. Specifically, we employ the PPO algorithm, which belongs to a class of RL methods called policy gradient methods Schulman et al. [2017], to train the policy (denoted π_θ) in this work. Policy gradient methods operate by directly optimizing the policy, which is modeled as a function parameterized by θ, i.e., π_θ(a|s). The policy is then optimized using a gradient ascent algorithm, which seeks to maximize reward. In the policy gradient approach, the goal is to maximize the surrogate objective defined as
L^{CPI}(\theta) = \hat{E}\left[\frac{\pi_\theta(a|s)}{\pi_{\theta_{old}}(a|s)}\,\hat{A}\right] = \hat{E}\left[r(\theta)\,\hat{A}\right] \qquad (1)
where r(θ) = π_θ(a|s)/π_{θ_old}(a|s) is the ratio of the current policy to an old (or baseline) policy and is called the probability ratio. The superscript "CPI" stands for "conservative policy iteration", Ê denotes the expectation or expected value, θ represents the weights or policy network parameters, and Â is the advantage, which measures how good an action is in a given state. During training, excessive policy updates can sometimes occur, leading to the selection of a poor policy from which the agent is unable to recover. To reduce the occurrence of such updates, trust region optimization approaches, such as PPO, have been developed. PPO prevents excessive policy updates by applying a clipping function that limits the size of the updates as follows:
L^{CLIP}(\theta) = \hat{E}\left[\min\left(r(\theta)\hat{A},\ \mathrm{clip}(r(\theta),\, 1-\epsilon,\, 1+\epsilon)\,\hat{A}\right)\right] \qquad (2)
The first term inside the min in Eq. 2 is the unclipped objective r(θ)Â of Eq. 1. The second term, clip(r(θ), 1 − ε, 1 + ε)Â, is the modification that clips the probability ratio to the interval [1 − ε, 1 + ε]. The term ε is a hyperparameter that controls the maximum update size, chosen to be 0.2 in the original PPO paper Schulman et al. [2017].
When the actor and critic functions share the same neural network parameters, the critic's value-estimation error is added as a term to the objective function. Also, an entropy term is added to encourage exploration; hence, the final objective is given as:
c(\theta) = \hat{E}\left[L^{CLIP}(\theta) - c_1 L^{VF}(\theta) + c_2 S[\pi_\theta](s)\right] \qquad (3)
where c_1 and c_2 are coefficients, S denotes the entropy bonus term for exploration, and L^{VF}(θ) is a squared-error loss, i.e., the squared difference between the predicted state value and the cumulative reward. In previous studies, PPO has been applied to the optimization of a range of reinforcement learning problems, displaying good performance Huang and Wang [2020], Schulman et al. [2017], Yu et al. [2021].
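A minimal PyTorch sketch of this objective, written as a loss to be minimized by gradient descent, is given below; the coefficient values are assumptions, and the advantage and return estimates are assumed to be precomputed.

import torch

def ppo_loss(log_probs, old_log_probs, advantages, values, returns,
             entropy, eps=0.2, c1=0.5, c2=0.01):
    # Negative of Eq. (3): clipped surrogate of Eq. (2) plus the
    # value-function error and the entropy bonus.
    ratio = torch.exp(log_probs - old_log_probs)          # r(theta)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantages
    policy_loss = -torch.min(unclipped, clipped).mean()   # -L^CLIP
    value_loss = (values - returns).pow(2).mean()         # L^VF
    return policy_loss + c1 * value_loss - c2 * entropy.mean()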
2.2 RL-based Optimization Method
Swarm-based optimizers can be thought of as a policy for updating the position of agents or particles within a design space. In general, swarm-based optimizers update the position of each swarm member based on the particles' present and historical positions and the corresponding objective function values. In this study, we seek to develop an approach for discovering such policies using deep reinforcement learning. First, we distinguish between two distinct optimization endeavors carried out in this study. The first involves the training of the reinforcement learning agents, where they learn to find the peaks or lowest points of surfaces through repeated attempts. During policy generation, the agents attempt to optimize a benchmark optimization function repeatedly as though they were playing a game, progressively getting better at finding the optimum. We refer to this hereafter as the policy generation phase, which is carried out using the PPO algorithm as described in section 2.1. The policy generation phase yields a policy, π_θ, that can be used to perform global optimization of practical problems. The policy, π_θ, is computed using a deep neural network that takes information about the state of the swarm and outputs an action. The second phase involves global optimization or policy deployment,
where the policy discovered in the first phase is applied to the optimization of an objective function. Here, agents are randomly initialized over the entire design space at the initial design iteration. Afterward, the locations of the agents are progressively adjusted using the generated policy until they cooperatively find the design optimum. The update vector used to adjust the position of the particles is computed based on the policy, π_θ. The update function π_θ in this study, as in swarm-based optimizers, is a function of the current and prior states of the agents within the swarm, parameterized by θ, the weights of a neural network. The definitions of the elements of the MDP are as follows. The state is made up of the agents' present and previous locations within the design space, as well as the objective function values at those locations. The action is a vector that specifies the distance between the agent's current location and its new location based on the application of the policy, π_θ. The reward is a numeric value assigned based on how good the current policy is at performing global optimization. These two elements, namely policy generation and deployment, are discussed in more detail below.
Figure 1: Illustration of policy generation and deployment phases of DeepHive.
2.2.1 Policy Generation
As with every reinforcement learning problem, the environment is the agent's world. An agent cannot influence the rules of the environment by its actions. Nevertheless, it can interact with the environment by performing actions within it and receiving feedback in the form of a reward. In this work, the environment is customized from the OpenAI Gym library Brockman et al. [2016]. The problem is framed as an episodic task, where each episode consists of 25 updates to the state of the swarm (i.e., design iterations), after which the optimization attempt is terminated, regardless of whether the swarm converges or not. At the beginning of each episode, the agents take random positions within the domain of interest. These positions and the objective function values are scaled to fall within a [0, 1] interval, which enables the resulting policy to apply to problems where the orders of magnitude of the domain bounds and objective function are different. However, for BBO, the minimum and maximum values of the objective function are not known a priori. Therefore, at a given iteration, the range chosen for normalization is based on the currently known global best and global worst objective function values. We denote the position vector of the i th agent in a swarm of N agents as x_i = [x_{i,1}, x_{i,2}, ..., x_{i,D}], where D is the number of dimensions (or independent variables) of the objective function, f. The elements of x_i, denoted x_{i,j}, represent the position of agent i in the j th dimension. Finally, x̄_i is a vector that denotes the historical best position of agent i, i.e., the location of the maximum objective value that agent i has reached since the swarm was initialized. The observation vector for agent i in the j th dimension, where n denotes the index of a randomly selected neighbor agent (see Algorithm 1), is given by:
\zeta_{i,j} = \left[(x_{i,j} - \bar{x}_{i,j}),\ (f(x_i) - f(\bar{x}_i)),\ (x_{i,j} - x_{n,j}),\ (f(x_i) - f(x_n))\right] \qquad (4)
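A direct transcription of this observation is sketched below; the array names are ours, and the sign convention follows our reconstruction of Eq. (4).

import numpy as np

def observation(x, x_best, f, f_best, i, j, n):
    # Observation of Eq. (4) for agent i in dimension j: differences with
    # respect to the agent's historical best (x_best, f_best) and to a
    # randomly selected neighbor n, computed on normalized quantities.
    return np.array([x[i, j] - x_best[i, j],
                     f[i] - f_best[i],
                     x[i, j] - x[n, j],
                     f[i] - f[n]])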
At every training epoch, the observation, ζ, is fed to the actor neural network, which outputs a mean action. The agents' actions are then sampled from a distribution defined by this mean action and a set standard deviation. Policy generation proceeds in two stages. During the first stage, the environment is explored more aggressively, so that the agents are exposed to a wide range of states within the environment. During this stage, the position of agent i in the j th dimension, x_{i,j}, is updated using a normal distribution with the mean obtained from the PPO policy and a fixed standard deviation of 0.2:
\Delta x_{i,j} = \mathcal{N}\left(\pi_\theta(\zeta_{i,j}),\ 0.2\right) \qquad (5)
Eq. 5 is used to update the position vector during the first 2500 iterations. Afterward, to update the agents' positions at a given iteration, we compute the action of an agent by sampling from a normal distribution with the mean given by the output of the policy network and a standard deviation that is a function of the particle's distance from the globally best particle:
\Delta x_{i,j} = \mathcal{N}\left(\pi_\theta(\zeta_{i,j}),\ \psi(x_{i,j} - x_{g,j})\right) \qquad (6)
where g is the index of the globally best agent and ψ : R → R is a linear function given by:
\psi(x_{i,j} - x_{g,j}) = 0.002 + 0.18\,(x_{i,j} - x_{g,j}) \qquad (7)
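Eqs. 5-7 together define the stochastic position update; a minimal sketch is given below, in which we take the absolute distance in Eq. 7, assuming this is intended so that the standard deviation remains positive.

import numpy as np

def position_step(x, i, j, g, policy_mean, exploring):
    # Sample the step of Eq. (5) during the exploration stage, and of
    # Eqs. (6)-(7) afterwards, where g indexes the globally best agent;
    # the updated coordinate x_{i,j} + delta_x_{i,j} is returned.
    if exploring:
        sigma = 0.2
    else:
        sigma = 0.002 + 0.18 * abs(x[i, j] - x[g, j])
    return x[i, j] + np.random.normal(policy_mean, sigma)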
In Eq. 7, the standard deviation for the update of agent i is a linear function of its distance from the agent that has the best location globally. Thus, the stochastic component of a particle's movement is large when it is far away from the global best, to encourage exploration, but its update becomes more deterministic as the design optimum is approached, to promote better exploitation. At each time step, the RL agents take actions that are steps within the domain of the objective function. The reward function is designed such that actions that subsequently lead to poor objective values are penalized, while those that lead to adequate and fast convergence are rewarded. The reward function used in this paper is:
R = 10(f(x^{(k)}) − f(x^{(k−1)}))           if f(x^{(k)}) < κ
    10[1 + (f(x^{(k)}) − f(x^{(k−1)}))]     otherwise          (8)
Eq. 8 assumes a maximization problem, where the superscript, k, denotes the iteration number. κ is the reward transition number and should be chosen to be close to the best-known optimum (0.9 in this paper). In other words, when the objective function value of an agent is far from the globally best objective function value, the reward is proportional to the difference in objective function value between the current and previous locations. The reward is positive if the agent moves to a better location, and negative if it moves to a worse location. Once an agent crosses the reward transition number, it gets an additional reward of 10 at each training epoch. This serves two purposes. First, it encourages the RL agents to quickly move to regions close to the design optimum and remain there, since more rewards are reaped in such regions. Secondly, rewards become more difficult to accumulate close to the global optimum, since there is limited room for improvement in the current objective function value. Thus, the additional reward in regions close to the global best helps establish a baseline reward, such that particles in this region are adequately rewarded. By learning to maximize reward, the agents learn to take actions that find regions close to the global optimum on the surface. In this study, all the agents used for policy generation share the same policy for choosing their actions within the design space. Furthermore, each dimension within the design space is updated separately. Therefore, the number of dimensions and the number of agents can be readily varied during model deployment to solve a global optimization problem. Policy generation is performed using the cosine mixture function (section 3.1) and 7 agents. The policy is modeled with a neural network consisting of 2 hidden layers, each containing 64 neurons and activated using the hyperbolic tangent function. The training was performed for 250,000 iterations (which is equivalent to 10,000 episodes).
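A minimal sketch of the two-phase action sampling (Eqs. 5-7) and the reward (Eq. 8) follows; the function and variable names are illustrative, and abs() is assumed in the exploration function so that the standard deviation stays nonnegative:

    import numpy as np

    def exploration_std(xi_j, xg_j):
        """Distance-scaled standard deviation, Eq. 7 (abs() assumed)."""
        return 0.002 + 0.18 * abs(xi_j - xg_j)

    def sample_step(mean_action, xi_j, xg_j, warmup, rng):
        """Delta x for one agent and one dimension: fixed std of 0.2 during
        the exploratory warm-up phase (Eq. 5), distance-scaled std
        afterwards (Eq. 6)."""
        std = 0.2 if warmup else exploration_std(xi_j, xg_j)
        return rng.normal(mean_action, std)

    def reward(f_curr, f_prev, kappa=0.9):
        """Eq. 8: improvement-proportional reward, plus a baseline of 10
        once the agent's objective value crosses the transition value kappa."""
        delta = 10.0 * (f_curr - f_prev)
        return delta if f_curr < kappa else 10.0 + delta

    rng = np.random.default_rng(0)
    print(sample_step(0.01, 0.4, 0.9, warmup=False, rng=rng))
    print(reward(0.95, 0.90), reward(0.50, 0.60))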
Policy Deployment
The reinforcement learning approach for policy generation is summarized in algorithm 1. Once the training process described in section 2.2.1 is complete, the best policy (π * θ ) is saved, and afterward, can be deployed for global optimization of functions. The steps involved in the policy deployment stage are summarized in Algorithm 1. As in the policy generation phase, the positions of the particles are progressively updated. In contrast to policy generation, the policy is frozen during this process. Furthermore, the best particle at each design iteration is kept frozen. This provides an anchor for the entire swarm and prevents all the particles from drifting off from promising regions of the design space.
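A minimal Python sketch of this deployment loop is given below (assuming NumPy and a frozen policy callable that maps an observation to a mean action; all names are illustrative); Algorithm 1 afterwards states the procedure formally:

    import numpy as np

    def deploy(policy, f, lo, hi, n_agents=10, dim=2, iters=25, seed=0):
        """Deployment sketch: the trained policy is frozen, and the globally
        best agent is held in place each iteration to anchor the swarm."""
        rng = np.random.default_rng(seed)
        x = rng.uniform(lo, hi, size=(n_agents, dim))
        xbest = x.copy()
        fbest = np.array([f(p) for p in x])
        for _ in range(iters):
            fx = np.array([f(p) for p in x])  # values at iteration start
            g = int(np.argmax(fx))            # index of globally best agent
            for i in range(n_agents):
                if i == g:                    # best agent stays frozen
                    continue
                n = rng.choice([a for a in range(n_agents) if a != i])
                for j in range(dim):
                    obs = np.array([x[i, j] - xbest[i, j], fx[i] - fbest[i],
                                    x[i, j] - x[n, j], fx[i] - fx[n]])  # Eq. 4
                    std = 0.002 + 0.18 * abs(x[i, j] - x[g, j])         # Eq. 7
                    x[i, j] += rng.normal(policy(obs), std)             # Eq. 6
            np.clip(x, lo, hi, out=x)
            fx = np.array([f(p) for p in x])
            better = fx > fbest
            xbest[better], fbest[better] = x[better], fx[better]
        return xbest[np.argmax(fbest)], fbest.max()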
Algorithm 1 Reinforcement Learning-based Optimization

Require: number of dimensions, D; number of agents, N; maximum number of iterations, K
Require: objective function, f; exploration function, ψ
Define: agent index, i; dimension index, j; iteration number, k; index of globally best agent, g
Initialize x^{(0)}_1, x^{(0)}_2, . . . , x^{(0)}_N            ▷ randomly initialize the position vectors of agents 1 to N, where x_i = [x_{i,1}, x_{i,2}, . . . , x_{i,D}]
x̄_i = x_i for i = 1, 2, . . . , N                             ▷ initialize all agents' best positions
for k = 1, 2, . . . , K do                                     ▷ loop through the iteration number
    g = argmax{f(x^{(k)}_1), f(x^{(k)}_2), . . . , f(x^{(k)}_N)}    ▷ get the index of the globally best agent
    for i = 1, 2, . . . , N, i ≠ g do                          ▷ loop through the agents, excluding the best agent
        n = rand(1, 2, . . . , N) ≠ i                          ▷ get a randomly selected neighbor
        for j = 1, 2, . . . , D do                             ▷ loop through the dimension number
            ζ_{i,j} = [(x^{(k)}_{i,j} − x̄_{i,j}), (f(x^{(k)}_i) − f(x̄_i)), (x^{(k)}_{i,j} − x^{(k)}_{n,j}), (f(x^{(k)}_i) − f(x^{(k)}_n))]
            Δx^{(k)}_{i,j} ← N(π_θ(ζ_{i,j}), ψ(x^{(k)}_{i,j} − x^{(k)}_{g,j}))    ▷ evaluate the change in the agent's position
            x^{(k+1)}_{i,j} ← x^{(k)}_{i,j} + Δx^{(k)}_{i,j}   ▷ update the position of the agent
        end for
        if f(x^{(k+1)}_i) > f(x̄_i) then
            x̄_i = x^{(k+1)}_i                                  ▷ update agent i's best position
        end if
    end for
end for

Performance of DeepHive on the cosine mixture function

In this section, we compare the performance of DeepHive with PSO Eberhart and Kennedy [1995], with differential evolution (DE) Storn and Price [1997] from the Python SciPy library Virtanen et al. [2020], and with the GENetic Optimization Using Derivatives (GENOUD) algorithm Mebane Jr and Sekhon [1997]. PSO is a swarm-based optimizer, while DE and GENOUD are both based on genetic algorithms. First, we evaluate the performance of DeepHive using scenarios identical to training. In other words, we deploy the policy by testing it with the same objective function, number of particles, and number of dimensions; these results therefore depict an ideal performance. The training was performed using 10 particles on the 2-dimensional (2D; i.e., D = 2) cosine mixture function Ali et al. [2005], following the procedure described in section 2.2.1. The cosine mixture function is shown in Fig. 2. It is a multi-modal function consisting of 25 local maxima, one of which is the global maximum. The objective function value, f, is defined as a function of the design parameters, x, as
f(x) = 0.1 Σ_{j=1}^{D} cos(5πx_j) − Σ_{j=1}^{D} x_j^2    (9)

subject to −1 ≤ x_j ≤ 1 for j = 1, 2, . . . , D. The global maximum is f_max(x*) = 0.1D, located at x* = 0. Figure 3 compares the performance of DeepHive with PSO, DE, and GENOUD. In the figure, the best objective function value is plotted against the number of function evaluations. In Fig. 3 (and all the remaining plots in the results section), the plots show the mean performance across 25 trials. In other words, the process of initialization and optimization described in Algorithm 1 was repeated 25 times and the performance was averaged, to provide a statistically valid projection of real-world performance. From the figure, it can be seen that, on average, DeepHive requires a lower number of function evaluations to reach the vicinity of the optimum design. PSO and DE, in particular, fail to reach the optimum, achieving maximum fitness values of only 0.1764 ± 0.0542 and 0.1823 ± 0.048, respectively, even after 100 iterations. While GENOUD converged to the global optimum, the maximum function value obtained using DeepHive rises faster than GENOUD's. This is particularly important for expensive function evaluations in applications where a high degree of accuracy in locating the design optimum is less critical than function evaluation efficiency. For instance, setting a threshold of 0.19, we see that it takes DeepHive 130 function evaluations to reach this function value, while GENOUD takes 350 function evaluations, on average. Table 1 shows numeric values of the averaged maximum fitness obtained by all the optimizers, with the standard deviations included (mean ± standard deviation). For all entries, the standard deviation is rounded to four decimal places. We see that DeepHive consistently outperforms all the other optimizers, except after 500 function evaluations, where the local search component of GENOUD leads to minor improvements in the objective function compared to DeepHive. Due to the absence of a dedicated local search component, the maximum objective function value for DeepHive peaks at 0.1999, a difference of 0.0001 from the global optimum.

Effect of Number of Particles

Next, we discuss the effect of the number of agents used. As mentioned in section 2.2.1, all agents used during the policy generation phase share the same policy for updating their positions. Therefore, the number of agents can be scaled up or down after training, depending on the application and the cost of the function evaluations. Here, we compare the use of 5, 7, and 10 agents; as in section 3.1, Fig. 4 shows the maximum objective value as a function of the number of function evaluations. Figure 4 depicts a direct comparison of the performance of 5, 7, and 10 agents, demonstrating that the number of agents involved in the search is directly related to the speed of convergence. Here, the lines represent the mean performance over 25 trials, while the shaded regions represent the standard deviation. As desired, DeepHive's performance shows positive scaling with the number of particles. First, we note that DeepHive successfully finds the design optimum with 5 agents, despite training being conducted with 7 agents. Secondly, the performance improves as the number of agents increases. Thus, the search takes advantage of the additional information provided by 10 agents, even though it uses a policy generated by training with 7 agents. In this case, 10, 7, and 5 agents reached a function value of 0.19 after 100, 130, and 400 function evaluations, respectively.
Effect of Dimensionality

Since the updates are applied to each dimension within the design space separately, DeepHive can be applied to problems with a different number of dimensions than it was trained with. Next, we investigate the capability of the trained policy to generalize to higher dimensions. We do this by testing the policy generated using the 2D cosine mixture function on 3D and 4D versions of the same function. This comparison is shown in Fig. 5, where 10 agents have been used for policy deployment in all cases. As expected, the number of iterations needed to reach the region around the global maximum increases as the dimensionality increases. This is in part due to the number of local maxima, which scales exponentially with the dimensionality of the problem. Specifically, the number of local maxima for the cosine mixture problem is 5^D, where D is the number of dimensions. As a result, the 4D cosine mixture problem has 625 local maxima, as opposed to 125 for the 3D case and 25 for the 2D case. Another reason for the decreased performance of the optimizer relates to the exponential growth of the design space as the number of dimensions increases. Nonetheless, we see that DeepHive can reach the vicinity of the design optimum using 10 agents, albeit taking longer for the higher dimensions (as expected). The maximum achievable function values for the 2D, 3D, and 4D cases are 0.2, 0.3, and 0.4, respectively. For the 4D case, DeepHive reaches a function value of 0.37 after 1500 function evaluations, increasing to 0.3906 after 5000 iterations. For the 3D case, it reaches 0.29 after 800 iterations, increasing to 0.2978 after 4500 iterations.
Generalization to other functions
We developed a multimodal global optimization test function based on the cosine function to evaluate DeepHive's generalization abilities. This function has 15 local optimal solutions and 1 global optimal solution, as shown in Fig. 6a. We also experimented with three additional global optimization test functions. It should be noted that all the functions tested in this subsection are unseen by the agents, since training (or policy generation) was carried out using the cosine mixture function only. Contour plots of these test functions are shown in Figs. 6b-d, while their mathematical descriptions are presented below.
1. Function I: Multi-modal cosine function with 15 local maxima and 1 global maximum
f(x) = Σ_{j=1}^{D} cos(x_j − 2) + Σ_{j=1}^{D} cos(2x_j − 4) + Σ_{j=1}^{D} cos(4x_j − 8)    (10)
subject to −10 ≤ x_j ≤ 10 for j = 1, 2, . . . , D. The global maximum is f_max(x*) = 6, located at x* = (1, 3).
2. Function II gbhat [2021]: Non-convex function with two local maxima (including one global maximum)
f(x) = (1 − x_1/2 + x_1^5 + x_2^3) · e^{−x_1^2 − x_2^2}    (11)
subject to −10 ≤ x_j ≤ 10 for j = 1, 2, . . . , D. The global maximum is f_max(x*) = 1.058, located at x* = (−0.225, 0).

3. Matyas function Hedar [2007]:
f(x) = 0.26(x_1^2 + x_2^2) − 0.48 x_1 x_2    (12)
subject to −10 ≤ x_j ≤ 10 for j = 1, 2, . . . , D. The global maximum is f_max(x*) = 0, located at x* = (0, 0).

4. Six-hump camel function Molga and Smutnicki [2005]:
f(x) = (4 − 2.1x_1^2 + x_1^4/3) x_1^2 + x_1 x_2 + (−4 + 4x_2^2) x_2^2    (13)
subject to −3 ≤ x_j ≤ 3 for j = 1, 2, . . . , D. The global minimum is f_min(x*) = −1.0316, located at x* = (0.0898, −0.7126) or (−0.0898, 0.7126).
A comparison of DeepHive with the other optimizers is shown in Figs. 7a-d, where the function value is plotted against the number of function evaluations. Figure 7a depicts the results for Function I. At 250 function evaluations, DeepHive outperformed the other optimizers, reaching a fitness value of 5.9151 ± 0.1773, while PSO, DE, and GENOUD had fitness values of 5.6999 ± 0.7141, 5.7596 ± 0.6498, and 5.84 ± 0.5426, respectively. After 800 function evaluations, only DeepHive and GENOUD appear to reach the global maximum's vicinity consistently, with the other optimizers (PSO and DE) sometimes getting stuck in local maxima. The result for Function II, which has a non-convex surface with one global maximum and a local maximum just beside it, is shown in Fig. 7b. In this test, DeepHive also performed better than PSO and DE but was narrowly outperformed by GENOUD. At 300 function evaluations, DeepHive had a fitness value of 1.0503 ± 0.0213, with PSO, DE, and GENOUD at 1.0207 ± 0.0832, 1.0116 ± 0.0908, and 1.0571 ± 0.0, respectively. At 1000 function evaluations, only DeepHive and GENOUD had reached the vicinity of the global maximum, with fitness values of 1.0566 ± 0.0008 and 1.0571 ± 0.0, respectively. The result for the Matyas function, a smooth function with the global maximum (f_max = 0) at the center of the search space, is presented in Fig. 7c. After 50 function evaluations, DeepHive reached an appreciable fitness value of −0.0282 ± 0.0315, while PSO and DE had fitness values of −0.224 ± 0.3717 and −0.1121 ± 0.1801, respectively. GENOUD performed better, as it converged to the global maximum within 50 function evaluations. As with the cosine mixture function (section 3.1), DeepHive rapidly reaches the vicinity of the global maximum, before GENOUD becomes the highest-performing optimizer due to its more intensive local search. For example, after 30 function evaluations, DeepHive attains a maximum function value of −0.0738 ± 0.0759, while GENOUD only reaches −0.2333 ± 0.2752. In many practical engineering applications, refinements close to the global optimum may not be as critical as function evaluation efficiency, since the uncertainty in measurements and simulations may be much higher than the refinements performed by GENOUD. In contrast to the Matyas function, the six-hump camel function is a minimization problem that features steep contours around its bounds and contains two global minima. For this function (Fig. 7d), after 500 function evaluations, the maximum fitness of DeepHive is 1.0286 ± 0.0026, compared with 0.9823 ± 0.1798, 1.0281 ± 0.0161, and 1.0316 ± 0.0 for PSO, DE, and GENOUD, respectively. DeepHive converged at 1.0302 ± 0.0009, while PSO, DE, and GENOUD converged at 0.9823 ± 0.1798, 1.0314 ± 0.0009, and 1.0316 ± 0.0, respectively. The Appendix contains tables comparing the DeepHive algorithm's performance to that of the other optimization algorithms for these benchmark functions.
Conclusion
In this paper, we presented a reinforcement learning-based approach for generating optimization policies for global optimization. In this approach, multiple agents work cooperatively to perform global optimization of a function. The proposed approach was compared with a particle swarm optimizer (PSO), differential evolution (DE), and GENetic Optimization Using Derivatives (GENOUD). The effect of changing the number of agents and dimensions was investigated, as well as the potential for generalization to other functions. The results showed good scaling performance with the number of agents, and that global optimization was still achievable even in higher dimensions. Most importantly, the proposed algorithm was able to locate the vicinity of the global optimum, even for functions it was not trained with, demonstrating generalization within the limited set of test functions used in this paper. Overall, DeepHive bears promise for use in domain-specific optimizers, where agents can be trained using a function generated by a low-fidelity, cost-effective model, then deployed on the full-order, expensive function. Future work will explore the effects of using multiple functions for training, the incorporation of local search components, and further validation studies on practical engineering problems.
Figure 2: Contour plot of the cosine mixture function. The white star in the figure represents the location of the global maximum.

Figure 3: Cosine mixture function contour (a); RL-optimizer compared with other optimizers (b); performance of RL-optimizer for various numbers of agents (c); generalization test to higher dimensions (d).

Figure 4: The performance of DeepHive for various numbers of agents. The horizontal black line represents the maximum achievable objective value.

Figure 5: The performance of DeepHive on higher-dimensional versions of the cosine mixture function.

Figure 6: Contour plots of the test functions used for the generalization study: (a) multi-modal cosine function (Function I); (b) Function II; (c) Matyas function; (d) six-hump camel function.

Figure 7: Comparison of DeepHive with PSO, DE, and GENOUD on the test functions of Figs. 6a-d.
Table 1: Detailed comparison of DeepHive with other optimizers for the cosine mixture function

Iteration Number, k | DE             | PSO            | GENOUD         | DeepHive
0                   | -0.4975±0.3836 | -0.7312±0.4345 | -0.7525±0.4417 | -0.4045±0.4534
100                 | 0.1803±0.0489  | 0.1757±0.054   | 0.1345±0.0729  | 0.1833±0.0303
200                 | 0.1822±0.048   | 0.1764±0.0542  | 0.1645±0.0631  | 0.1925±0.0147
300                 | 0.1823±0.048   | 0.1764±0.0542  | 0.1878±0.04    | 0.1958±0.0069
400                 | 0.1823±0.048   | 0.1764±0.0542  | 0.1941±0.029   | 0.1975±0.0034
500                 | 0.1823±0.048   | 0.1764±0.0542  | 0.1941±0.029   | 0.1985±0.0013
600                 | 0.1823±0.048   | 0.1764±0.0542  | 0.2±0.0        | 0.1988±0.001
700                 | 0.1823±0.048   | 0.1764±0.0542  | 0.2±0.0        | 0.199±0.0008
800                 | 0.1823±0.048   | 0.1764±0.0542  | 0.2±0.0        | 0.199±0.0008
900                 | 0.1823±0.048   | 0.1764±0.0542  | 0.2±0.0        | 0.1992±0.0008
1000                | 0.1823±0.048   | 0.1764±0.0542  | 0.2±0.0        | 0.1999±0.0002
References

T. Warren Liao. Two hybrid differential evolution algorithms for engineering design optimization. Applied Soft Computing, 10(4):1188-1199, 2010.
Odeh Dababneh, Timoleon Kipouros, and James F. Whidborne. Application of an efficient gradient-based optimization strategy for aircraft wing structures. Aerospace, 5(1):3, 2018.
Essam H. Houssein, Ahmed G. Gad, Kashif Hussain, and Ponnuthurai Nagaratnam Suganthan. Major advances in particle swarm optimization: theory, analysis, and application. Swarm and Evolutionary Computation, 63:100868, 2021.
Mohd Nadhir Ab Wahab, Samia Nefti-Meziani, and Adham Atyabi. A comprehensive review of swarm optimization algorithms. PLoS ONE, 10(5):e0122827, 2015.
Marco Dorigo, Mauro Birattari, and Thomas Stutzle. Ant colony optimization. IEEE Computational Intelligence Magazine, 1(4):28-39, 2006.
Dervis Karaboga. Artificial bee colony algorithm. Scholarpedia, 5(3):6915, 2010.
Xin-She Yang and Suash Deb. Engineering optimisation by cuckoo search. International Journal of Mathematical Modelling and Numerical Optimisation, 1(4):330-343, 2010.
K. N. Krishnanand and Debasish Ghose. Glowworm swarm optimisation: a new method for optimising multi-modal functions. International Journal of Computational Intelligence Studies, 1(1):93-119, 2009.
Russell Eberhart and James Kennedy. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, volume 4, pages 1942-1948. Citeseer, 1995.
Xiaohui Hu, Russell C. Eberhart, and Yuhui Shi. Engineering optimization with particle swarm. In Proceedings of the 2003 IEEE Swarm Intelligence Symposium (SIS'03, Cat. No. 03EX706), pages 53-57. IEEE, 2003.
Riccardo Poli, James Kennedy, and Tim Blackwell. Particle swarm optimization. Swarm Intelligence, 1(1):33-57, 2007.
Opeoluwa Owoyele and Pinaki Pal. A novel machine learning-based optimization algorithm (ActivO) for accelerating simulation-driven engine design. Applied Energy, 285:116455, 2021.
Yuhui Shi and Russell C. Eberhart. Parameter selection in particle swarm optimization. In International Conference on Evolutionary Programming, pages 591-600. Springer, 1998.
Ioan Cristian Trelea. The particle swarm optimization algorithm: convergence analysis and parameter selection. Information Processing Letters, 85(6):317-325, 2003.
Richard S. Sutton and Andrew G. Barto. Introduction to Reinforcement Learning. 1998.
Yue Xu and Dechang Pi. A reinforcement learning-based communication topology in particle swarm optimization. Neural Computing and Applications, 32(14):10007-10032, 2020.
Ke Li and Jitendra Malik. Learning to optimize. arXiv preprint arXiv:1606.01885, 2016.
Hussein Samma, Chee Peng Lim, and Junita Mohamad Saleh. A new reinforcement learning-based memetic particle swarm optimizer. Applied Soft Computing, 43:276-297, 2016.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
Bin Huang and Jianhui Wang. Deep-reinforcement-learning-based capacity scheduling for PV-battery storage system. IEEE Transactions on Smart Grid, 12(3):2272-2283, 2020.
Chao Yu, Akash Velu, Eugene Vinitsky, Yu Wang, Alexandre Bayen, and Yi Wu. The surprising effectiveness of PPO in cooperative, multi-agent games. arXiv preprint arXiv:2103.01955, 2021.
Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym. arXiv preprint arXiv:1606.01540, 2016.
Rainer Storn and Kenneth Price. Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization, 11(4):341-359, 1997.
Pauli Virtanen, Ralf Gommers, Travis E. Oliphant, Matt Haberland, Tyler Reddy, David Cournapeau, Evgeni Burovski, Pearu Peterson, Warren Weckesser, Jonathan Bright, et al. SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods, 17(3):261-272, 2020.
Walter R. Mebane Jr and Jasjeet S. Sekhon. Genetic optimization using derivatives (GENOUD). Computer program available upon request, 1997.
M. Montaz Ali, Charoenchai Khompatraporn, and Zelda B. Zabinsky. A numerical evaluation of several stochastic algorithms on selected continuous global optimization test problems. Journal of Global Optimization, 31(4):635-672, 2005.
gbhat. Particle swarm optimization on non-convex function. 2021.
Abdel-Rahman Hedar. Global optimization test problems. Available online at http://www-optima.amp.i.kyoto-u.ac.jp/member/student/hedar/Hedar_files/TestGO.htm, 2007.
Marcin Molga and Czesław Smutnicki. Test functions for optimization needs. 2005.
| [] |
[
"Construction numbers: How to build a graph?",
"Construction numbers: How to build a graph?"
] | [
"Paul C Kainen [email protected] "
] | [] | [] | Counting the number of linear extensions of a partial order was considered by Stanley about 50 years ago. For the partial order on the vertices and edges of a graph determined by inclusion, we call such linear extensions construction sequences for the graph as each edge follows both of its endpoints. The number of such sequences for paths, cycles, stars, double-stars, and complete graphs is found. For paths, we agree with Stanley (the Tangent numbers) and get formulas for the other classes. Structure and applications are also studied. | 10.48550/arxiv.2302.13186 | [
"https://export.arxiv.org/pdf/2302.13186v2.pdf"
] | 257,219,360 | 2302.13186 | 562d6755323a7a3fe681cc63622b523ea7a774bc |
Construction numbers: How to build a graph?
May 2023
Paul C Kainen [email protected]
Construction numbers: How to build a graph?
29 May 2023. Keywords: roller-coaster problem, up-down permutations, minimum edge delay
Counting the number of linear extensions of a partial order was considered by Stanley about 50 years ago. For the partial order on the vertices and edges of a graph determined by inclusion, we call such linear extensions construction sequences for the graph as each edge follows both of its endpoints. The number of such sequences for paths, cycles, stars, double-stars, and complete graphs is found. For paths, we agree with Stanley (the Tangent numbers) and get formulas for the other classes. Structure and applications are also studied.
Introduction
The elements of a graph G = (V, E) are the set V ∪ E of vertices and edges. A linear order on V ∪ E is a construction sequence (or c-sequence) for G if each edge appears after both of its endpoints. For instance, for the path P_3 with vertices 1, 2, 3 and edges 12, 23, one has construction sequences (1,2,3,23,12) and (1,2,12,3,23), while (1,3,12,2,23) is not a c-sequence. There are a total of 16 c-sequences for P_3.
Let C(G) be the set of all c-sequences for G = (V, E). The construction number of G is c(G) := #C(G), the number of distinct construction sequences. So c(P 3 ) = 16.
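Such small counts are easy to verify by brute force. Here is a minimal Python sketch (the function name and the representation of edges as vertex pairs are illustrative):

    from itertools import permutations

    def construction_number(vertices, edges):
        """Count permutations of V union E in which every edge
        appears after both of its endpoints."""
        elements = list(vertices) + list(edges)
        count = 0
        for seq in permutations(elements):
            pos = {s: i for i, s in enumerate(seq)}
            if all(pos[(u, w)] > max(pos[u], pos[w]) for (u, w) in edges):
                count += 1
        return count

    # P3 with vertices 1, 2, 3 and edges 12, 23:
    print(construction_number([1, 2, 3], [(1, 2), (2, 3)]))  # 16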
Construction numbers (c-numbers) grow rapidly with the number of elements; finding these numbers for various graph families leads to interesting integer sequences [7]. Here, we determine such integer sequences for various parameterized families of graphs (paths, stars, and complete graphs). Yet these numbers are not yet known for hypercubes and complete bipartite graphs. While construction turns out to behave nicely with disjoint union (coproduct), nothing is known about the construction numbers of various graph-products as a function of the c-numbers of the factors.
How do the density and topological connectivity of a graph affect construction number? For example, for trees with a fixed number of nodes, paths have the lowest and stars have the highest construction number [8], but corresponding problems are open for maximal planar and maximal outerplanar graphs. Almost nothing is known about construction numbers for simplexes and more complex objects.
We give a few simple properties of the construction sequence which allow vertexand edge-based recursions. In particular, the first two terms are vertices and the last k terms are edges, where k is at least the minimum degree of the graph. A c-sequence can be enriched by replacing the i-th term by the subgraph determined by the elements in the first i terms. Such a graphical c-sequence can be constrained by connectivity and other properties. For instance, a connected nontrivial graph has a construction sequence where each subgraph has at most two connected components.
Furthermore, the family of c-sequences of fixed length n (arising from all graphs) is of a finite set of types. This is essentially the grammar of construction as a language where "vertex" and "edge" are the words. From the numerical evidence, it appears likely that the set of these construction types is equinumerous with a special family of rooted, decorated trees. As we sketch below, for n = 1, . . . , 6, there are 1, 1, 2, 5, 19, 87 c-types, and these same numbers appear in [10, A124348].
An "opportunity cost" is introduced which charges each edge for the number of steps in the c-sequence by which it follows the placements of both its endpoints but one might also consider the delay after only the later endpoint. Cost could further involve some power of vertex degree. We consider min-and max-cost c-sequences and a greedy algorithm that could allow graphical structure to be built up according to various rubrics. Section 2 has basic definitions, Section 3 studies construction sequences for graphs with basepoint or basepoint-pairs, and Section 4 considers variability and construction type. In Section 5, we find c(G) for various families. Section 6 defines costfunctions, min-and max-cost c-sequences, and relative constructability of graphs. Section 7 describes earlier appearances of construction numbers (notably [11], [13]) and shows how to extend c-sequences for hypergraphs, simplicial complexes, CWcomplexes, posets, and categories. The last section has problems and applications.
Basic definitions and lemmas
Let P p be the path with p vertices, K 1,p−1 the star with a single degree-p−1 hub having p−1 neighbors, each an endpoint; C p and K p are the cycle graph and complete graph with p vertices. See, e.g., Harary [6] for any undefined graph terminology.
For any set S, #S is the cardinality, always finite in this paper. For G = (V, E) a graph, we write p = |G| = #V for the order of G and q = ‖G‖ = #E for the size. We use n for the number of connected components and m for cycle rank. Recall that m = q − p + n, so m is the cardinality of a cycle basis. The maximum and minimum vertex-degrees of G are ∆ and δ. We write [k] for {1, . . . , k}, k a positive integer. Let N_{>0} be the set of positive integers.
If S is a finite set with ℓ elements, Perm(S) denotes the set of all bijections x from [ℓ] to S, where x : i → x i for i ∈ [ℓ]. A permutation on S is an element of Perm(S). For s ∈ S and x = (x 1 , . . . , x ℓ ) ∈ Perm(S), we have
x^{−1}(s) := j ∈ [ℓ] ⇐⇒ x_j = s.    (1)
If G = (V, E) is a graph, we put S G := V ∪ E, the set of elements of G and let
ℓ := ℓ(G) := p + q.    (2)
A permutation x in Perm(S G ) is a construction sequence (c-sequence) for the graph G = (V, E) if each edge follows both its endpoints; i.e., if
∀e ∈ E, e = uw ⇒ x^{−1}(e) > max(x^{−1}(u), x^{−1}(w)).    (3)
Let C(G) be the set of all c-sequences for G. Then for G = (V, E) non-edgeless
Perm(V) × Perm(E) ⊊ C(G) ⊊ Perm(V ∪ E) = Perm(S_G).    (4)
Any c-sequence in the subset Perm(V ) × Perm(E) is called an easy c-sequence. The other construction sequences are called non-easy. Let c(G) := #C(G) be the construction number of G. For each graph G,
p!q! ≤ c(G) ≤ (p + q)!    (5)
We call these the vertex-edge bounds on the construction number, and by the containments in (4), the inequalities are strict if ‖G‖ ≥ 1.
Lemma 1.
If G is connected, then the last δ(G) entries in any construction sequence
x ∈ C(G) are edges. Moreover, for each v ∈ V (G), x −1 (v) ≤ ℓ − deg(v, G).
The first statement follows from the second, and the second from the definitions. If G ′ ⊆ G and x ∈ C(G), then we define x ′ := x | G ′ , the restriction of x to G ′ , by
x′ = (x^{−1}|_{S_{G′}})^{−1} ∘ ζ,    (6)
where ζ is the order-preserving bijection from [ℓ ′ ] := [ℓ(G ′ )] to the subset of [ℓ] determined by the positions of the elements of G ′ in the sequence x.
Lemma 2. Let G ′ ⊆ G. If x ∈ C(G), then x ′ := x| G ′ ∈ C(G ′ ).
If y is a permutation on S G such that y ′ := y| G ′ ∈ C(G ′ ), then
y ∈ C(G) ⇐⇒ ∀e = vw ∈ E(G) \ E(G ′ ), y −1 (e) > max(y −1 (v), y −1 (w)).
The number of ways to extend a c-sequence for G−v to a c-sequence for G depends on the particular construction sequence. For example, take P 2 = ({1, 2}, {a}), where a is the edge 12. Then C(P 2 ) = {x ′ , y ′ }, where x ′ := (1, 2, a) ≡ 12a and y ′ ≡ 21a (in short form). Consider P 3 = ({1, 2, 3}, {a, b}), where b = 23. As P 2 ⊂ P 3 , each csequence for P 3 extends a c-sequence of P 2 . One finds that x ′ has exactly 7 extensions (only one of which is non-easy) 312ab, 312ba, 132ab, 132ba, 123ab, 123ba, 12a3b
to an element in C(P 3 ), while y ′ has exactly 9 extensions (two of which are non-easy) 321ab, 321ba, 32b1a, 231ab, 231ba, 23b1a, 213ab, 213ba, 21a3b.
These are the 16 elements of C(P 3 ).
Given two element-wise disjoint finite sequences s_1 and s_2 of lengths n and m, we define a shuffle of the two sequences to be a sequence of length n + m which contains both s_1 and s_2 as subsequences. The number of shuffle sequences of s_1 and s_2 is binom(n+m, n), giving the construction number of a disjoint union in terms of its parts.

Lemma 3. If x_1 and x_2 are c-sequences for disjoint graphs G_1 and G_2, resp., then each shuffle of x_1 and x_2 is a c-sequence for G_1 ∪ G_2, and conversely; hence,

c(G_1 ∪ G_2) = c(G_1) c(G_2) · binom(ℓ_1 + ℓ_2, ℓ_1),    (7)
where ℓ 1 and ℓ 2 are the lengths of the sequences x 1 and x 2 , resp.
The previous lemma extends to any finite pairwise-disjoint union of graphs. If G_1, . . . , G_n have ℓ_1, . . . , ℓ_n elements, then for ℓ := ℓ_1 + · · · + ℓ_n, replacing the binomial by a multinomial coefficient, the pairwise-disjoint union (coproduct) G := ⊔_{i=1}^{n} G_i satisfies

c(G) = ∏_{i=1}^{n} c(G_i) · binom(ℓ; ℓ_1, . . . , ℓ_n),    (8)

where binom(ℓ; ℓ_1, . . . , ℓ_n) = ℓ!/(ℓ_1! · · · ℓ_n!).
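As a quick numeric check of Eqs. (7)-(8): two disjoint copies of P_2 (so ℓ_1 = ℓ_2 = 3) give c = 2 · 2 · binom(6, 3) = 80, matching brute force (reusing construction_number from the earlier sketch):

    from math import comb

    print(2 * 2 * comb(6, 3))                                   # 80, by Eq. (7)
    print(construction_number([1, 2, 3, 4], [(1, 2), (3, 4)]))  # 80, brute force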
Construction numbers are equal for isomorphic graphs. More exactly, we have
Lemma 4. If φ : G → G′ is an isomorphism and x ∈ C(G), then φ(x) := x̃ ∈ C(G′), where x̃_j = φ(x_j) for 1 ≤ j ≤ ℓ(G).
Construction sequences for basepointed graphs
Let H be any subgraph of G and let y be any construction sequence of H. Then the set of all construction sequences of G which restrict to y on H is nonempty, and these sets partition C(G) as y ranges over C(H). A graph with basepoint is an ordered pair (G, v), where v ∈ V(G). A graph with basepoint-pair is an ordered triple (G, v, w), where (v, w) is an ordered pair of distinct vertices in V(G). We call v a basepoint and (v, w) a basepoint-pair. Let
C(v, G) := {x ∈ C(G) : v = x_1} and C(v, w, G) := {x ∈ C(G) : v = x_1, w = x_2}
be the sets of construction sequences for the corresponding graphs beginning with the basepoint or basepoint-pair. The based construction number
c(v, G) := #C(v, G) is the number of c-sequences starting with v. The pair-based construction number c(v, w, G) := #C(v, w, G) is defined analogously. By definition, C(v, w, G) is empty if v = w.
As each c-sequence starts with a pair of vertices, we have the following expressions for the construction number of a graph as the sum of its based construction numbers,
c(G) = Σ_{v∈V} c(v, G) = Σ_{(v,w)∈V×V} c(v, w, G).    (9)
The based construction numbers behave nicely w.r.t. isomorphism.
Lemma 5. Let G = (V, E), G ′ = (V ′ , E ′ ) be graphs. If v ∈ V , v ′ ∈ V ′ , and if φ : G → G ′ is an isomorphism such that φ(v) = v ′ , then c(v, G) = c(v ′ , G ′ ). Proof. Let x ∈ C(G). Then x ′ := (φ(x 1 ), . . . , φ(x ℓ )) is a c-sequence for G ′ .
Similarly, pair-based construction sequences behave well w.r.t. isomorphism.
Lemma 6. Let G = (V, E), G ′ = (V ′ , E ′ ) be graphs. If v, w ∈ V , v ′ , w ′ ∈ V ′ , and if φ : G → G ′ is an isomorphism such that φ(v) = v ′ and φ(w) = w ′ , then c(v, w, G) = c(v ′ , w ′ , G ′ ).
A graph G is vertex-transitive if for every pair (v, w) of vertices, there is an automorphism of G carrying v to w. The based construction number of a vertextransitive graph is independent of basepoint. Hence, by (9) and Lemma 5, we have
Proposition 1. If G = (V, E) is vertex transitive, then c(G) = p · c(v, G).
For i = 1, . . . , n, let G i be a family of graphs with ℓ i elements and chosen basepoints v i ∈ V G i . We now define a coproduct for graphs with basepoint which is the disjoint union of the G i with basepoints v i all identified to a single vertex v, also called the "wedge-product"
(G, v) := ⋁_{i=1}^{n} (G_i, v_i) := (⊔_{i∈[n]} G_i) / (v_1 ≡ v_2 ≡ · · · ≡ v_n ≡ v),

the resulting graph having basepoint v. Then as in (8), we have

c(v, G) = ∏_{i=1}^{n} c(v_i, G_i) · binom(ℓ − 1; ℓ_1 − 1, . . . , ℓ_n − 1),    (10)

where ℓ := ℓ(G) = 1 + Σ_{i=1}^{n} (ℓ_i − 1). We can also give an edge-based recursion. Let G = (V, E) be nontrivial connected, and for e ∈ E write C(G, e) := {x ∈ C(G) : x_ℓ = e}; put c(G, e) := #C(G, e).
Lemma 7. Let G = (V, E) be nontrivial connected. Then

c(G) = Σ_{e∈E} c(G − e).    (11)
Call a graph edge transitive if there is an automorphism carrying any edge to any other edge. Then one has

Proposition 2. If G is edge transitive, then c(G, e) is independent of the edge e.
Variations and structure in c-sequences
One can classify the type of a construction sequence by keeping track, when an edge is added, which two of the existing vertices are being joined or noting that a vertex is being added. We define τ (n) to be the number of types of c-sequence with n elements that can arise for any possible graph.
For example, putting e_{ij} = v_i v_j, the set of construction types of length 4 is

{(v_1, v_2, e_{1,2}, v_3)} ∪ {(v_1, v_2, v_3, e_{i,j}) : 1 ≤ i < j ≤ 3} ∪ {(v_1, v_2, v_3, v_4)},
so τ (4) = 5. That τ (1) = τ (2) = 1 and τ (3) = 2 are easily checked. But τ (5) = 19 and τ (6) = 87. We conjecture that τ (n) is given by A124348 in [10].
Indeed, let T(1, 2, 3) denote the K_3 graph determined by vertices v_1, v_2, v_3. For n = 5, there are 6 natural subfamilies of possible construction types:

v_1, v_2, e_{12}, v_3, v_4;
v_1, v_2, e_{12}, v_3, e_{j3}, for 1 ≤ j ≤ 2;
v_1, v_2, v_3, e_{ij}, e_{rs}, for e_{ij} ∈ E(T(1, 2, 3)), e_{rs} ∈ E(T(1, 2, 3) − ij);
v_1, v_2, v_3, e_{ij}, v_4, for e_{ij} ∈ E(T(1, 2, 3));
v_1, v_2, v_3, v_4, e_{i,j}, for 1 ≤ i < j ≤ 4;
v_1, v_2, v_3, v_4, v_5.

These contribute, respectively, 1 + 2 + 6 + 3 + 6 + 1 = 19 construction types.
There are a total of 6 distinct rooted unlabeled trees which satisfy the condition of "thinning limbs" given in [10, A124348] and listing them in decreasing lexicographic order according to the sequence of heights from root to leaves, they have the following number of decorations where vertices are numbered so that numbering increases along any path starting at the root: 1, 4, 4, 3, 6, 1, making 19 in all.
Thus, the putative one-to-one correspondence is non-obvious.
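The values of τ reported above can be checked by a direct enumeration of types: at each step one either adds the next vertex or joins a not-yet-joined pair of vertices already present. A sketch (using the fact that the first two elements of any c-sequence are vertices):

    def tau(n):
        """Number of construction types of length n."""
        def count(steps_left, v, used):
            if steps_left == 0:
                return 1
            total = count(steps_left - 1, v + 1, used)  # add vertex v+1
            for a in range(1, v + 1):                   # or join a new pair
                for b in range(a + 1, v + 1):
                    if (a, b) not in used:
                        total += count(steps_left - 1, v, used | {(a, b)})
            return total
        return 1 if n < 2 else count(n - 2, 2, frozenset())

    print([tau(n) for n in range(1, 7)])  # [1, 1, 2, 5, 19, 87]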
Let G = (V, E) be a graph and let x ∈ C(G). The graphical sequence of x is the sequence G_i(x) := G_i := G({x_1, . . . , x_i}), 1 ≤ i ≤ ℓ, of subgraphs of G such that

x^{(i)} := (x_1, . . . , x_i) ∈ C(G_i).    (12)
We call the G i (x) the partial subgraphs of G w.r.t. x ∈ C(G). Adding a vertex increments n by 1 and leaves m unchanged; that is,
x_i ∈ V(G) ⇐⇒ [n(G_i) = 1 + n(G_{i−1})] ∧ [m(G_i) = m(G_{i−1})].    (13)
Adding a new edge either leaves n unchanged (but increases the cycle rank of a connected component by 1) or else it decrements n by 1 but leaves m unchanged. In the first instance, the edge joins two vertices in the same component, while in the second case, the edge joins vertices in formerly distinct components; in CNF,
x_i ∈ E(G) ⇐⇒ Λ_ι ∨ Λ_ε, where    (14)

Λ_ι = [n(G_i) = n(G_{i−1})] ∧ [m(G_i) = 1 + m(G_{i−1})]   (x_i is "internal");    (15)

Λ_ε = [n(G_i) = −1 + n(G_{i−1})] ∧ [m(G_i) = m(G_{i−1})]   (x_i is "external").    (16)
A sequence (a_1, . . . , a_r) is monotonically increasing if a_1 ≤ · · · ≤ a_r. For any x ∈ C(G), the sequence (m(G_1(x)), . . . , m(G_ℓ(x))) is monotonically increasing. A sequence (a_1, . . . , a_r) is strongly left (sl) unimodal if there is a (unique) index k ∈ [r] such that (i) a_1 < a_2 < · · · < a_k, and (ii) a_k ≥ a_{k+1} ≥ · · · ≥ a_r. Recall that a sequence is easy if all its vertices are consecutive; see (4). One has

Proposition 3. If G has order p and x ∈ C(G), then the following are equivalent:
(1) (n(G_1(x)), . . . , n(G_ℓ(x))) is sl unimodal (ℓ := ℓ(G));
(2) n(G_p(x)) = p;
(3) x is easy.
Let n_max(G) := min_{x∈C(G)} max_{1≤i≤ℓ(G)} n(G_i(x)). A construction sequence x for G is called connected if no partial subgraph of G w.r.t. x has more than 2 connected components. Then we have

Proposition 4. For any nontrivial graph G, the following are equivalent:
(1) n_max(G) = 2;
(2) G is connected;
(3) G has a connected c-sequence.
Construction numbers for some graph families
In this section, construction numbers are determined for stars and double-stars, for paths and cycles, and for complete graphs. We begin with an easy example.

Theorem 1. For n ≥ 0, c(K_{1,n}) = 2^n (n!)^2.

Proof. For n = 0, 1, the result holds. Suppose n ≥ 2 and let x = (x_1, . . . , x_{2n+1}) be a construction sequence for K_{1,n}. There are n edges e_i = v_0 v_i, where v_0 is the central node, and one of the edges, say e_i, must be the last term in x. This leaves 2n coordinates in x′ := (x_1, . . . , x_{2n}), and one of them is v_i. The remaining 2n − 1 coordinates are a construction sequence for the (n−1)-star K_{1,n−1}. Hence, c(K_{1,n}) = n(2n) c(K_{1,n} − v_i) = 2n^2 · 2^{n−1} ((n−1)!)^2 = 2^n (n!)^2 by induction.
The numbers 2, 16, 288, 9216, 460800 generated by the above formula count the number of c-sequences for K 1,n for n ∈ {1, 2, 3, 4, 5}. These numbers are the absolute value of the sequence A055546 in the OEIS (reference [10]) and describe the number of ways to seat n cats and n dogs in a roller coaster with n rows, where each row has two seats which must be occupied by a cat and a dog.
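A quick check of the formula against brute-force counting (reusing construction_number from the earlier sketch; the hub is labeled 0):

    from math import factorial

    def c_star(n):
        """c(K_{1,n}) = 2^n (n!)^2."""
        return 2**n * factorial(n)**2

    print([c_star(n) for n in range(1, 6)])
    # [2, 16, 288, 9216, 460800]
    print(construction_number([0, 1, 2, 3], [(0, 1), (0, 2), (0, 3)]))  # 288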
Interestingly, the value of c(K 1,n ) is asymptotically almost exactly the geometric mean of the vertex-edge bounds (5). For we have
c(K_{1,n}) / ((n+1)! n!) = 2^n (n!)^2 / ((n+1)! n!) = 2^n / (n+1)    (17)
and
(2n+1)! / c(K_{1,n}) = (2n+1)! / (2^n (n!)^2) ∼ 2^{1+n} √n / √π = c · 2^n √n, for c = 2/√π ≈ 1.128....    (18)
A double star is a tree of diameter at most 3. For a, b ≥ 0, let D a,b denote the double star formed by the union of K 1,a+1 and K 1,b+1 with one common edge. So D a,b consists of two adjacent vertices, say v and w, where v is adjacent to a and w to b additional vertices, and all vertices other than v and w have degree 1.
Using the recursion (11) as well as equation (7), one gets the following.
Theorem 2. Let a, b ≥ 0 be integers. Then

c(D_{a,b}) = f(a, b) + β_1(a, b) c(D_{a−1,b}) + β_2(a, b) c(D_{a,b−1}),

where f(a, b) = c(K_{1,a} ⊔ K_{1,b}) = binom(2a+2b+2, 2a+1) · 2^a (a!)^2 · 2^b (b!)^2, β_1 = a(2a + 2b + 2), and β_2 = b(2a + 2b + 2).
We don't yet have a closed form but can calculate the values. For example, D 2,2 , the 6 vertex tree with two adjacent cubic nodes, has 402432 c-sequences. But c(K 1,5 ) = 460800. In fact, stars maximize the construction number over all trees of a fixed order, and if T is any p-vertex tree, then c(P p ) ≤ c(T ) ≤ c(K 1,p−1 ) [8]. Hence, the extremal trees have extremal construction numbers.
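The recursion is easy to implement; the following sketch reproduces the value just quoted for D_{2,2} (terms with a = 0 or b = 0 drop out, since β_1 or β_2 vanishes):

    from math import comb, factorial
    from functools import lru_cache

    def f(a, b):
        """c(K_{1,a} disjoint K_{1,b}), as in Theorem 2."""
        return (comb(2*a + 2*b + 2, 2*a + 1)
                * 2**a * factorial(a)**2 * 2**b * factorial(b)**2)

    @lru_cache(maxsize=None)
    def c_double_star(a, b):
        total = f(a, b)
        if a > 0:
            total += a * (2*a + 2*b + 2) * c_double_star(a - 1, b)
        if b > 0:
            total += b * (2*a + 2*b + 2) * c_double_star(a, b - 1)
        return total

    print(c_double_star(1, 1))  # 272 = c(P_4), since D_{1,1} is the 4-vertex path
    print(c_double_star(2, 2))  # 402432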
For the path, we show below that c(P_p) is the p-th tangent number T_p. By [17] and [4, 24.15.4], with B_{2p} the 2p-th Bernoulli number [10, (A027641/A027642)],

c(P_p) = T_p = (1/p) · 2^{2p−1} (2^{2p} − 1) |B_{2p}|.    (19)
An asymptotic analysis shows c(P p ) is exponentially small compared to c(K 1,p−1 ).
c(P_p) / c(K_{1,p−1}) ∼ 4e^{−2} π^{−1/2} p^{1/2} (8/π^2)^p.    (20)
By Proposition 2, counting c-sequences for C p and P p are equivalent problems.
Lemma 8. If p ≥ 3, then c(C p ) = p · c(P p ).
Using Lemma 8 and equation (19), we have for p ≥ 3,
c(C_p) = 2^{2p−1} (2^{2p} − 1) |B_{2p}|.    (21)
In fact, the formula holds for p ≥ 1, where cycles are CW-complexes: C 1 has one vertex and a loop, and C 2 has two vertices and two parallel edges. Before determining c(P p ), we give a Catalan-like recursion for these numbers.
Lemma 9. c(P_n) = Σ_{k=1}^{n−1} c(P_k) c(P_{n−k}) binom(2n−2, 2k−1).

Proof. Any construction sequence x for P_n has last entry an edge e, whose removal creates subpaths with k and n − k vertices, resp., for some k, 1 ≤ k ≤ n − 1. Now x contains construction sequences for both subpaths, which suffices by Lemma 3.
Trivially, c(P 1 ) = 1. By the recursion, we get 1, 2, 16, 272, 7936, 353792 for c(P n ), n = 1, . . . , 6, which is in the OEIS as A000182 [10], the sequence of tangent numbers, T n . Its exponential generating function is tan(x) corresponding to the odd-index terms in the sequence of Euler numbers [10, A000111]; see, e.g., Kobayashi [9].
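The recursion of Lemma 9 translates directly into code and reproduces these values:

    from math import comb
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def c_path(n):
        """c(P_n) via the recursion of Lemma 9."""
        if n == 1:
            return 1
        return sum(c_path(k) * c_path(n - k) * comb(2*n - 2, 2*k - 1)
                   for k in range(1, n))

    print([c_path(n) for n in range(1, 7)])
    # [1, 2, 16, 272, 7936, 353792] -- the tangent numbers T_n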
Here are two different proofs that c(P n ) = T n .
Proof 1. Let U(n) be the set of up-down permutations of [n], where consecutive differences switch sign, and the first is positive. It is well-known [18] that #U(2n − 1) = T_n.
Proposition 5 (D. Ullman). There is a bijection from C(P n ) to U(2n − 1).
Proof.
A permutation π of the consecutively labeled elements of a path is a construction sequence if and only if π^{−1} is an up-down sequence. Indeed, for j = 1, . . . , n − 1, π^{−1}(2j) is the position in π occupied by the j-th edge, while π^{−1}(2j − 1), π^{−1}(2j + 1) correspond to the positions of the two vertices flanking edge 2j, and these are smaller iff π is a construction sequence.
Label the elements of P_5 from left to right (1,2,3,4,5,6,7,8,9), where odd numbers correspond to vertices and even numbers to edges. Then P_5 has the easy c-sequence π = (1, 3, 5, 7, 9, 2, 4, 6, 8) and π^{−1} = (1, 6, 2, 7, 3, 8, 4, 9, 5) is up-down.
Proof 2.
By [10, A000182], T n = J 2n−1 , where for r ≥ 1, J r denotes the number of permutations of {0, 1, . . . , r + 1} which begin with '1', end with '0', and have consecutive differences which alternate in sign. Then J 2k = 0 for k ≥ 1 as the sequences counted by J must begin with an up and end with a down and hence have an odd number of terms. These tremolo sequences are in one-to-one correspondence with "Joyce trees" and were introduced by Street [14]. They satisfy the following recursion.
Proposition 6 (R. Street). For r ≥ 3, J_r = Σ_{m=0}^{r−1} binom(r−1, m) J_m J_{r−1−m}.

Now we show c(P_n) = J_{2n−1}. Indeed, J_1 = c(P_1) and J_3 = c(P_2). Replace J_{2r−1} by c(P_r) and J_{2r} by zero and re-index; Street's recursion becomes Lemma 9, so c(P_n) and J_{2n−1} both satisfy the same recursion and initial conditions. But J_{2n−1} = T_n.
If v is one of the endpoints of P n , we can calculate c(v, P n ) for the first few values, getting (with some care for the last term) 1, 1, 5, 61 for n = 1, 2, 3, 4. In fact,
c(v, P_n) = S_n,    (22)
where S n is the n-th secant number ([10, A000364]), counting the "zig" permutations.
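A brute-force check of the first few based construction numbers for an endpoint v of P_n, in the spirit of the construction_number sketch above:

    from itertools import permutations

    def based_count(v, vertices, edges):
        """c(v, G): count c-sequences that begin with the basepoint v."""
        elements = list(vertices) + list(edges)
        count = 0
        for seq in permutations(elements):
            if seq[0] != v:
                continue
            pos = {s: i for i, s in enumerate(seq)}
            if all(pos[(u, w)] > max(pos[u], pos[w]) for (u, w) in edges):
                count += 1
        return count

    for n in range(1, 5):
        print(based_count(1, range(1, n + 1),
                          [(k, k + 1) for k in range(1, n)]), end=" ")
    # 1 1 5 61 -- the secant numbers S_n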
Complete graphs
This section is omitted until the Monthly problem [7] has appeared and its solutions are collected. The solution is joint with R. Stong, and J. Tilley.
Costs for construction sequences
Let G = (V, E) be a graph, let x ∈ C(G), and let e = uw ∈ E. The cost of edge e with respect to construction sequence x is defined to be
ν(e, x) := 2x^{−1}(e) − x^{−1}(u) − x^{−1}(w).    (23)
The cost of a construction sequence x ∈ C(G) is the sum of the costs of its edges
ν(x) := Σ_{e∈E(G)} ν(e, x).    (24)
We say x is min-cost if ν(x) = min{ν(y) : y ∈ C(G)}. Then ν(G) := ν_min(G) := min{ν(y) : y ∈ C(G)} is the minimum cost to construct G; similarly for max-cost. Let C′(G) be the set of min-cost c-sequences for G and let c′(G) be its cardinality. Call two c-sequences x, y for G cost equivalent if they have the same cost, and edge-rearrangement equivalent if one is obtained from the other by permuting elements within sets of consecutive edges. Min-cost implies equivalence to a "greedy" sequence (Theorem 3), and max-cost implies easy (Theorem 4).
Let G(G) be the family of all greedy c-sequences for graph G (defined below).
Theorem 3. Let x ∈ C(G) where G is connected and nontrivial. If x is min-cost, then there exists y ∈ G(G) such that x and y are edge-rearrangement equivalent.
Before giving the proof, we define our algorithm, which translates any of the p! members of Perm(V) into members of G(G). Let G = (V, E) be a graph, x ∈ C(G), e = uw ∈ E, and i ∈ [ℓ(G)]; we say that edge e is available for c-sequence x ∈ C(G) at time i if x^{−1}(e) ≥ i and max(x^{−1}(u), x^{−1}(w)) < i. Availability of e at time i for x is equivalent to the assertion that x^{(i−1)} := (x_1, . . . , x_{i−1}) contains u and w but not e. Let A := A(G, x, i) be the set of all edges available for x at time i.
The available set of edges can actually be taken in any order. Cost is independent of the ordering of the available edges. Indeed, it suffices to check this for the interchange of two adjacent edges. One of the edges moves left and so has its cost reduced by 2 while the edge that moves to the right has its cost increased by 2; hence, the cost for the sequence remains constant. Thus, cost-equivalence is implied by edge-rearrangement equivalence.
Any construction sequence x for G = (V, E) induces a total order x|V on the set V of vertices. Conversely, given a vertex permutation ω, we now define a greedy algorithm G that produces a construction sequence x which induces ω and has minimum cost among all ω-inducing c-sequences for G.
Let G = (V, E) be a non-edgeless graph and let ω = (v_1, v_2, . . . , v_p); we define

y := G(ω)

to be the c-sequence obtained by taking first v_1, . . . , v_a, where a ≥ 2 is the first index for which V_a := {v_1, . . . , v_a} induces a non-edgeless subgraph G(V_a) of G, with edge-set E(G(V_a)) = A(G, y, a+1). The elements y_{a+1}, . . . , y_{a+b} are the b ≥ 1 edges in G(V_a), in the lexicographic order induced by ω. Now add the next vertices from ω, beginning with y_{a+b+1} = v_{a+1}, until again one or more edges become available, and so on. Call y a greedy c-sequence for G; let G(G) be the set of such greedy c-sequences.
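A sketch of the greedy algorithm and of the cost ν of Eqs. (23)-(24), with positions counted from 1; on the n-path with its natural vertex order it returns the greedy sequence of Lemma 10 below, with cost 4n − 5:

    def greedy_sequence(omega, edges):
        """Greedy c-sequence G(omega): after each new vertex, emit all
        newly available edges, in a fixed (lexicographic) order."""
        seq, placed, remaining = [], set(), set(edges)
        for v in omega:
            seq.append(v)
            placed.add(v)
            avail = sorted(e for e in remaining
                           if e[0] in placed and e[1] in placed)
            seq.extend(avail)
            remaining -= set(avail)
        return seq

    def cost(seq, edges):
        """nu(x) = sum over edges e = uw of 2x^{-1}(e) - x^{-1}(u) - x^{-1}(w)."""
        pos = {s: i + 1 for i, s in enumerate(seq)}
        return sum(2 * pos[(u, w)] - pos[u] - pos[w] for (u, w) in edges)

    n = 6
    path_edges = [(k, k + 1) for k in range(1, n)]
    y = greedy_sequence(range(1, n + 1), path_edges)
    print(cost(y, path_edges))  # 19 = 4n - 5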
Proof of Theorem 3
To minimize the cost of any c-sequence y which extends x (i−1) , all the edges in A should be placed before any additional vertices. Indeed, if vertex v / ∈ V i−1 precedes edge e in A, reversing the positions of v and e reduces the cost for any edge with v as an endpoint while decreasing the cost for e as well.
Theorem 4. Let x ∈ C(G). If x is max-cost, then x is an easy sequence.
Proof. If x is not easy, some edge immediately precedes a vertex. Reversing them increases cost.
A good question, for a specific graph, is how to choose the vertex ordering ω on which to run the greedy algorithm. For the path, the obvious two choices work.
Lemma 10. Let n ≥ 2. Then ν(P n ) = 4n − 5 and c ′ (P n ) = 2.
Proof. If P_n is the n-path with V = [n] and natural linear order, the greedy algorithm gives the c-sequence 1 2 1′ 3 2′ 4 3′ 5 4′ · · · n (n−1)′, where we write k′ for the edge [k, k+1]. The first edge costs 3, while each subsequent edge costs 4. The unique nonidentity automorphism of P_n produces the other member of C′(P_n) by Lemma 4.
Thus, the minimum construction cost for a path grows linearly with its order. Also, for the path, if x is min-cost, then x is connected. But ν max is quadratic.
Lemma 11. For n ≥ 2, let x ∈ C(P_n) be any easy sequence which begins with v_1, . . . , v_n in the order they appear along the path. Then ν(x) = binom(2n−1, 2).

Proof. Suppose WLOG that the edges occur in reverse order, right-to-left. Then ν(x) = 3 + 7 + · · · + (4(n−1) − 1) = binom(2n−1, 2).
The cost of an easy sequence for P_n, using a random order of vertices, can be higher or lower than binom(2n−1, 2). We ask: what is ν_max(P_n)?
Lemma 12. Let n ≥ 2. Then ν(C n ) = 6n − 4 and c ′ (C n ) = 2n.
Proof. The elements x ∈ C′(C_n), for C_n = (v_1, e_{12}, v_2, . . . , e_{n−1,n}, v_n, e_{n,1}), begin as in P_n, but the last edge costs 2 + (2n − 1) = 2n + 1, so ν(x) = 4n − 5 + 2n + 1 = 6n − 4. Any of the n edges of the cycle could be last, and either the clockwise or the counterclockwise orientation could occur, so c′(C_n) = 2n. A different cost model could be formed by attributing cost to the vertices, rather than the edges. For v ∈ V(G), let E_v be the set of edges incident with v and put
κ(v, x) := Σ_{e∈E_v} (x^{−1}(e) − x^{−1}(v)) / deg(v, G), for x ∈ C(G).
Then κ(x) := Σ_{v∈V} κ(v, x) and κ(G) := min_{x∈C(G)} κ(x) are an alternative measure. It would also be possible to explore using maximum, rather than summation, to get the cost of a graph from that of its edges (or vertices), as with L^1 vs. L^∞ norms.
Continuous and probabilistic models are studied in [8].
Choice of cost function may be influenced by application. One might wish to concentrate resources, filling in a portion of the graph with all possible edges between the adjacent vertex-pairs so as to maximize the number of partial subgraphs that are induced subgraphs, hence reaching full density as early as possible. In contrast, the graph cost function might be designed to cause the induced subgraphs to become spanning as early as possible -e.g., by means of a spanning tree when the graph to be constructed is connected. A further goal, consistent with the spanning tree model, is to cause density to increase as slowly as possible which might be appropriate if high-volume vertex throughput requires a learning period. (Perhaps a suitable regularizing function for the cost would be quadratic in the vertex-degrees.)
However, merely having a sharply curtailed repertoire of construction sequences makes it easier to find nice c-sequences. Perhaps c ′ (G) < c ′ (H) implies c(G) < c(H).
Currently, we don't know how much variation occurs in c(G) among families of graphs with a fixed number of vertices and edges. Let G(p, q) be the set of all isomorphism classes of graphs of order p and size q and suppose G ∈ F ⊆ G(p, q). Define relative constructability of G in F as c(G) divided by the average over F ,
ξ(G, F) := c(G) / α(F), where α(F) := (#F)^{−1} Σ_{H∈F} c(H).    (25)
We ask whether ξ(G, F) is largest when G has minimum diameter in F and smallest when G has maximum diameter in F. This is true for trees, as we mentioned above.
By designing cost suitably, a greedy algorithm could give c-sequences achieving various goals. These goals could include constraining vertex degree or minimizing the number of connected components. If the graph being constructed is, say, a complete or nearly complete graph, it should be possible to cause the sequence of partial subgraphs to exhibit various possible behaviors: the Erdős-Rényi random graph process, a uniformly distributed family of regular disjoint subgraphs, or a locally concentrated graph.

If graph construction is prioritized in order to maximize the cycle rank, one should get the locally dense case. If construction prioritizes the number of connected components and is also guided by a cost that is quadratic in the vertex degree, then we believe that the outcome will be a uniform distribution. Uniformly random selection of c-sequences from all permutations of the elements should give the Erdős-Rényi random graph process.
Earlier appearances and extensions
Stanley studied the number of linear extensions of a partial order [11, p 8], using them to define a polynomial [12, p 130]. In [13, p 7] he showed the number of linear extensions of the partial order determined by a path (Stanley called such a poset a "fence") is an Euler number, implying our results (19) and (21) above. Now take the Hasse diagram of any partial order; define a construction sequence for the diagram to be a total order on the elements of the poset such that each element is preceded in the linear order by all elements which precede it in the partial order. Simplicial and CW-complexes can be partially ordered by "is a face of", and the linearization of posets includes graphs and hypergraphs as 2-layer Hasse diagrams.
Construction sequences make sense for hypergraphs, multigraphs, and indeed for any CW-complex. In the latter case, one counts sequences of cells, where each cell must follow the cells to which it is attached. For simplicial complexes, one might start with simplexes, cubes, and hyperoctahedra (the standard Platonic polytopes), and the sporadic instances in 3 and 4 dimensions.
Construction sequences for categories, topoi, and limits and colimits of diagrams could be considered, even beyond the finite realm [11, p 77].
Other notions of graph construction are known. Whitney [19] defined an ear decomposition: starting from a cycle, one attaches nontrivial paths ("ears") so that the endpoints of each path are its only intersection with the union of the previous elements of the decomposition. Existence of an ear decomposition characterizes the 2-connected graphs. For minimum (or relatively low) cost c-sequences of a 2-connected graph, one might try using the ear decomposition as a guide, since min-cost c-sequences are known for its parts.
For 3-connected graphs, Tutte has shown that all can be constructed starting from a wheel graph (the cone on a cycle) by splitting vertices and adding edges [6]. More generally, shellability and collapsability of complexes and matroids might also be admitted. But it isn't clear how to exploit these structures for c-sequences.
A different approach to graph construction was motivated by the goal of describing the self-assembly of macromolecules performed by virus capsids in the host cell. An assembly tree [16] is a gadget that builds up a graph from subgraphs induced by various subsets of the vertices. The number of assembly trees of a graph is its assembly number, but these seem to be quite unrelated to construction numbers. For example, for n-stars, [16] gives n!, while for paths and cycles, Vince and Bóna find a Catalan-type value.
Discussion
Aside from their combinatorial relevance for the structure of incidence algebras [11] or for enumeration and integer sequences [7, 8], construction numbers of graphs might have a deeper theoretical aspect. A natural idea is to think of construction sequences as the outcome of a constrained stochastic process, where a graph evolves through the addition of new vertices and edges subject to the condition that an edge cannot appear before its endpoints. Any given graph thus could be "enriched" by knowledge of its history, either the linear order on its elements or their actual time of appearance. The existence of such histories might enable some new methods of proof, e.g., for the graph reconstruction problem of Ulam and Harary.
Practical applications could include operations research, where directed hyperedges describe complex tasks such as "build an airbase" which depend on various supporting infrastructures. If a link should occur at some moment, one would like the necessary endpoints to happen just-in-time.
Graphs occur in many scientific contexts. It would be interesting to study the actual construction sequences for the complex biochemical networks found in the organic kingdom. How close are they to being economical?
Brightwell & Winkler [2] showed that counting the linear extensions of a poset is #P-complete and contrast this with randomized polynomial-time algorithms which estimate this number. Their conjecture that #P-completeness holds even for height-2 posets was proved by Dittmer & Pak [5]. Applications of linear extensions of posets to equidistributed classes of permutations were given by Björner & Wachs [1], and Burrow [3] has studied using traversals of posets representing taxonomies and concept lattices to construct algorithms for information databases.
Returning to the metaphor of construction sequences as an abstract language, one might think of edges as verbs and vertices as nouns, but verbs are specific to word-pairs. In the future, perhaps some sort of directed hypergraph will become useful in the interpretation of spoken and written human language, where meaning is the hyperedge which requires the presence of all its vertices. Indeed, the examples of poetry and humor show that meaning can emerge (that is, the strength, or probability, or even complex amplitude of the hyperedge is increasing) before all vertices (i.e., words) have occurred. Completion or dissonance is the result when the expected or unexpected word arrives.
Computer calculation of construction numbers to get sample values can aid in finding correct formulas (via the OEIS [10]) for inductive proofs, but such computation is difficult due to the large number of permutations. This might be flipped into an asset by utilizing the theoretical calculations here as a "teacher" for neural network or machine learning methods (cf. Talvitie et al. [15]). More ambitiously, a mathematically oriented artificial intelligence could be challenged to discover the formulas above, along with some of the others we would like to have.
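For instance, the following brute-force Python sketch (feasible only for very small graphs) counts $c(G)$ directly and reproduces $c(K_{1,2}) = 16$ from Theorem 1:

```python
# Brute force: count permutations of V(G) and E(G) in which every edge
# appears after both of its endpoints. Usable only for very small graphs,
# since all (p + q)! permutations are examined.
from itertools import permutations

def construction_number(vertices, edges):
    elements = list(vertices) + [frozenset(e) for e in edges]
    count = 0
    for perm in permutations(elements):
        seen, ok = set(), True
        for item in perm:
            if isinstance(item, frozenset):
                if not item <= seen:   # edge before an endpoint: invalid
                    ok = False
                    break
            else:
                seen.add(item)
        if ok:
            count += 1
    return count

# Star K_{1,2} (= path P_3): agrees with 2^n (n!)^2 for n = 2.
print(construction_number(['c', 'l1', 'l2'], [('c', 'l1'), ('c', 'l2')]))  # 16
```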
Lemma 4. Let $\varphi : G \to H$ be an isomorphism of graphs. Then there is an induced bijection $\tilde\varphi : C(G) \to C(H)$.
Proposition 2. Let $G = (V, E)$ be edge transitive. Then $c(G) = q \cdot c(G, e)$.
For any graph $H$, let $n(H)$ denote the number of connected components and let $m(H)$ denote the cycle rank, which satisfies $m(H) = q(H) - p(H) + n(H)$.
Theorem 1. For $n \ge 0$, $c(K_{1,n}) = 2^n (n!)^2$.
Acknowledgement. We thank Stan Wagon for helpful conversations and for pointing out [16]. We also acknowledge Richard Hammack for noting we were implicitly using equation (11).
[1] A. Björner & M. L. Wachs, Permutation Statistics and Linear Extensions of Posets, J. Comb. Theory (A) 58 (1991) 85-114.
[2] G. Brightwell & P. Winkler, Counting Linear Extensions, Order 8 (1991) 225-242.
[3] A. Burrow, Algorithm Design Using Traversals of the Covering Relation, 114-127 in Conceptual Structures: Leveraging Semantic Technologies, S. Rudolph, F. Dau & S. O. Kuznetsov (Eds.), Springer-Verlag, Berlin Heidelberg, 2009.
[4] Digital Library of Mathematical Functions, http://dlmf.nist.gov
[5] S. Dittmer & I. Pak, Counting Linear Extensions of Restricted Posets, Electronic J. of Comb. 27(4) (2020) P4.48.
[6] F. Harary, Graph Theory, Addison-Wesley, Reading, MA, 1969.
[7] P. C. Kainen, R. Stong, & J. Tilley, Problem ?, Amer. Math. Monthly, to appear.
[8] P. C. Kainen & R. Stong, Random construction of graphs, in prep.
[9] M. Kobayashi, A new refinement of Euler numbers on counting alternating permutations, arXiv:1908.00701 [math.CO], 2019.
[10] N. J. A. Sloane, Online Encyclopedia of Integer Sequences, https://oeis.org
[11] R. P. Stanley, Ordered structures and partitions, photocopy, 1971, https://math.mit.edu/~rstan/pubs/pubfiles/9.pdf
[12] R. P. Stanley, Enumerative Combinatorics, Vol. 1, Cambridge Univ. Press, 1997.
[13] R. P. Stanley, Two poset polytopes, Discrete Comput. Geom. 1 (1986) 9-23.
[14] R. Street, Trees, permutations and the tangent function, Reflections (Math. Assoc. of NSW) 27 (2002) 19-23 (27 July 2001).
[15] T. Talvitie, K. Kangas, T. Niinimaki, & M. Koivisto, Counting Linear Extensions in Practice: MCMC versus Exponential Monte Carlo, The Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18) (2018) 1431-1438.
[16] A. Vince & M. Bóna, The Number of Ways to Assemble a Graph, Electronic J. of Comb. 19(4) (2012), #P54.
[17] E. W. Weisstein, Tangent Number, MathWorld, https://mathworld.wolfram.com/TangentNumber.html
[18] E. W. Weisstein, Alternating Permutation, MathWorld, https://mathworld.wolfram.com/AlternatingPermutation.html
[19] H. Whitney, Non-separable and planar graphs, Trans. AMS 34 (1932) 339-362. | []
[
"GNOT: A General Neural Operator Transformer for Operator Learning",
"GNOT: A General Neural Operator Transformer for Operator Learning"
] | [
"Zhongkai Hao ",
"Zhengyi Wang ",
"Hang Su ",
"Chengyang Ying ",
"Yinpeng Dong ",
"Songming Liu ",
"Ze Cheng ",
"Jian Song ",
"Jun Zhu "
] | [] | [] | Learning partial differential equations' (PDEs) solution operators is an essential problem in machine learning. However, there are several challenges for learning operators in practical applications like the irregular mesh, multiple input functions, and complexity of the PDEs' solution.To address these challenges, we propose a general neural operator transformer (GNOT), a scalable and effective transformer-based framework for learning operators. By designing a novel heterogeneous normalized attention layer, our model is highly flexible to handle multiple input functions and irregular meshes. Besides, we introduce a geometric gating mechanism which could be viewed as a soft domain decomposition to solve the multi-scale problems. The large model capacity of the transformer architecture grants our model the possibility to scale to large datasets and practical problems. We conduct extensive experiments on multiple challenging datasets from different domains and achieve a remarkable improvement compared with alternative methods. Our code and data are publicly available at https://github.com/thu-ml/GNOT. | 10.48550/arxiv.2302.14376 | [
"https://export.arxiv.org/pdf/2302.14376v2.pdf"
] | 257,232,579 | 2302.14376 | ce5daa9f55781872fba7b83033375def9b0fd1cd |
GNOT: A General Neural Operator Transformer for Operator Learning
Zhongkai Hao
Zhengyi Wang
Hang Su
Chengyang Ying
Yinpeng Dong
Songming Liu
Ze Cheng
Jian Song
Jun Zhu
GNOT: A General Neural Operator Transformer for Operator Learning
Learning partial differential equations' (PDEs) solution operators is an essential problem in machine learning. However, there are several challenges for learning operators in practical applications like the irregular mesh, multiple input functions, and complexity of the PDEs' solution. To address these challenges, we propose a general neural operator transformer (GNOT), a scalable and effective transformer-based framework for learning operators. By designing a novel heterogeneous normalized attention layer, our model is highly flexible to handle multiple input functions and irregular meshes. Besides, we introduce a geometric gating mechanism which could be viewed as a soft domain decomposition to solve the multi-scale problems. The large model capacity of the transformer architecture grants our model the possibility to scale to large datasets and practical problems. We conduct extensive experiments on multiple challenging datasets from different domains and achieve a remarkable improvement compared with alternative methods. Our code and data are publicly available at https://github.com/thu-ml/GNOT.
Introduction
Partial Differential Equations (PDEs) are ubiquitously used in characterizing systems in many domains like physics, chemistry, and biology (Zachmanoglou & Thoe, 1986). These PDEs are usually solved by numerical methods like the finite element method (FEM). FEM discretizes PDEs using a mesh with a large number of nodes, and it is often computationally expensive for high dimensional problems. In many important tasks in science and engineering like structural optimization, we usually need to simulate the system under different settings and parameters in a massive and repeating manner. Thus, FEM can be extremely inefficient since a single simulation using numerical methods could take from seconds to days. Recently, machine learning methods (Lu et al., 2019; Li et al., 2020; 2022b) have been proposed to accelerate solving PDEs by learning an operator mapping from the input functions to the solutions of PDEs. By leveraging the expressivity of neural networks, such neural operators could be pre-trained on a dataset and then generalize to unseen inputs. The operators predict the solutions using a single forward computation, thereby greatly accelerating the process of solving PDEs. Much work has been done on investigating different neural architectures for learning operators (Hao et al., 2022). For instance, DeepONet (Lu et al., 2019) uses a branch network and a trunk network to process input functions and query coordinates. FNO (Li et al., 2020) learns the operator in the spectral space. Transformer models (Cao, 2021; Li et al., 2022b), based on the attention mechanism, are proposed since they have a larger model capacity.
This progress notwithstanding, operator learning for practical real-world problems is still highly challenging and the performance can be unsatisfactory. As shown in Fig. 1, there are several major challenges in current methods: irregular mesh, multiple inputs, and multi-scale problems. First, the geometric shape or the mesh of practical problems are usually highly irregular. For example, the shape of the airfoil shown in Fig. 1 is complex. However, many methods like FNO (Li et al., 2020) using Fast Fourier Transform (FFT) and U-Net (Ronneberger et al., 2015) using convolutions are limited to uniform regular grids, making it challenging to handle irregular grids. Second, the problem can rely on multiple numbers and types of input functions like boundary shape, global parameter vector or source functions. The challenge is that the model is expected to be flexible to handle different types of inputs. Third, real physical systems can be multi-scale which means that the whole domain could be divided into physically distinct subdomains (Weinan, 2011). In Fig. 1, the velocity field is much more complex near the airfoil compared with the far field. It is more difficult to learn these multi-scale functions.
Existing works attempt to develop architectures to handle these challenges. For example, Geo-FNO (Li et al., 2022a) extends FNO to irregular meshes by learning a mapping from an irregular mesh to a uniform mesh. Transformer models (Li et al., 2022b) are naturally applicable to irregular meshes. But both of them are not applicable to handle problems with multiple inputs due to the lack of a general encoder framework. Moreover, MIONet (Jin et al., 2022) uses tensor product to handle multiple input functions but it performs unsatisfactorily on multi-scale problems. To the best of our knowledge, there is no attempt that could handle these challenges simultaneously, thus limiting the practical applications of neural operators. To fill the gap, it is imperative to design a more powerful and flexible architecture for learning operators under such sophisticated scenarios.
In this paper, we propose General Neural Operator Transformer (GNOT), a scalable and flexible transformer framework for learning operators. We introduce several key components to resolve the challenges as mentioned above. First, we propose a Heterogeneous Normalized (linear) Attention (HNA) block, which provides a general encoding interface for different input functions and additional prior information. By using an aggregation of normalized multi-head cross attention, we are able to handle arbitrary input functions while keeping a linear complexity with respect to the sequence length. Second, we propose a soft gating mechanism based on mixture-of-experts (MoE) (Fedus et al., 2021). Inspired by the domain decomposition methods that are widely used to handle multi-scale problems (Jagtap & Karniadakis, 2021;Hu et al., 2022), we propose to use the geometric coordinates of input points for the gating network and we found that this could be viewed as a soft domain decomposition. Finally, we conduct extensive experiments on several benchmark datasets and complex practical problems.
These problems are from multiple domains including fluids, elastic mechanics, electromagnetism, and thermology. The experimental results show that our model achieves a remarkable improvement compared with competing baselines. We reduce the prediction error by about 50% compared with baselines on several practical datasets like Elasticity, Inductor2d, and Heatsink.
Related Work
We briefly summarize some related work on neural operators and efficient transformers.
Neural Operators
Operator learning with neural networks has attracted much attention recently. DeepONet (Lu et al., 2019) proposes a branch network and a trunk network for processing input functions and query points respectively. This architecture has been proven to approximate any nonlinear operator with a sufficiently large network. Wang et al. (2021; 2022) introduce improved architectures and training methods for DeepONets. MIONet (Jin et al., 2022) extends DeepONets to solve problems with multiple input functions. Fourier neural operator (FNO) (Li et al., 2020) is another important method with remarkable performance. FNO learns the operator in the spectral domain using the Fast Fourier Transform (FFT), which achieves a good cost-accuracy trade-off. However, it is limited to uniform grids. Several works (Li et al., 2022a; Liu et al., 2023) extend FNO to irregular grids by mapping them to a regular grid or partitioning them into subdomains. Grady II et al. (2022) combine the technique of domain decomposition (Jagtap & Karniadakis, 2021) with FNO for learning multi-scale problems. Some works also propose variants of FNO from other aspects (Gupta et al., 2021; Wen et al., 2022; Tran et al., 2021). However, these works are not scalable to handle problems with multiple types of input functions.
Another line of work proposes to use the attention mechanism for learning operators. Galerkin Transformer (Cao, 2021) proposes linear attention for efficiently learning operators. It theoretically shows that the attention mechanism could be viewed as an integral transform with a learnable kernel while FNO uses a fixed kernel. The advantage of the attention mechanism is the large model capacity and flexibility. Attention could handle arbitrary length of inputs (Prasthofer et al., 2022) and preserve the permutation equivariance (Lee). HT-Net proposes a hierarchical transformer for learning multi-scale problems.
OFormer (Li et al., 2022b) proposes an encoder-decoder architecture using galerkin-type linear attention. Transformer architecture is a flexible framework for learning operators on irregular meshes. However, its architecture still performs unsatisfactorily and has a large room to be improved when learning challenging operators with multiple inputs and scales.
Efficient Transformers
The complexity of the original attention operation is quadratic with respect to the sequence length. For operator learning problems, the sequence length could range from thousands to millions, so an efficient attention operation is necessary. Here we introduce some existing works in CV and NLP designing transformers with efficient attention. Many works (Tay et al., 2020) have sought to accelerate the computation of attention. First, sparse and localized attention (Child et al., 2019; Liu et al., 2021; Beltagy et al., 2020; Huang et al., 2019) avoids pairwise computation by restricting window sizes, which is widely used in computer vision and natural language processing. Kitaev et al. (2020) adopt a hash-based method for acceleration. Another class of methods attempts to approximate or remove the softmax function in attention. Peng et al. (2021); Choromanski et al. (2020) use the product of random features to approximate the softmax function. Katharopoulos et al. (2020) propose to replace softmax with other decomposable similarity measures. Cao (2021) proposes to directly remove the softmax function. We could adjust the order of computation for this class of methods so that the total complexity is linear with respect to the sequence length. Besides reducing the complexity of computing attention, mixtures of experts (MoE) (Jacobs et al., 1991) are adopted in transformer architectures (Lepikhin et al., 2020; Fedus et al., 2021) to reduce computational cost while keeping a large model capacity.
Proposed Method
We now present our method in detail.
Problem Formulation
We consider PDEs in the domain $\Omega \subset \mathbb{R}^d$ and the function space $\mathcal{H}$ over $\Omega$, including boundary shapes and source functions. Our goal is to learn an operator $G$ from the input function space $\mathcal{A}$ to the solution space $\mathcal{H}$, i.e., $G : \mathcal{A} \to \mathcal{H}$.
Here the input function space $\mathcal{A}$ could contain multiple different types, like boundary shapes, source functions distributed over $\Omega$, and vector parameters of the systems. More formally, $\mathcal{A}$ could be represented as $\mathcal{A} = \mathcal{H} \times \cdots \times \mathcal{H} \times \mathbb{R}^p$. For any $a = (a_1(\cdot), \ldots, a_m(\cdot), \theta) \in \mathcal{A}$, $a_j(\cdot) \in \mathcal{H}$ represents boundary shapes and source functions, $\theta \in \mathbb{R}^p$ represents parameters of the system, and $G(a) = u \in \mathcal{H}$ is the solution function over $\Omega$.

For learning a neural operator, we train our model with a dataset $\mathcal{D} = \{(a_k, u_k)\}_{1 \le k \le D}$, where $u_k = G(a_k)$. In practice, since it is difficult to represent a function directly, we discretize the input functions and the solution function on irregular meshes over the domain $\Omega$ using some mesh generation algorithm (Owen, 1998). For an input function $a_k$, we discretize it on the mesh $\{x_i^j \in \Omega\}_{1 \le i \le N_j,\, 1 \le j \le m}$, and the discretized $a_k^j$ is $\{(x_i^j, a_k^{i,j})\}_{1 \le i \le N_j}$, where $a_k^{i,j} = a_k^j(x_i^j)$. In this way, we use $A_k = \{(x_i^j, a_k^{i,j})\}_{1 \le i \le N_j,\, 1 \le j \le m} \cup \theta_k$ to represent the input functions $a_k$. For the solution function $u_k$, we discretize it on the mesh $\{y_i \in \Omega\}_{1 \le i \le N'}$, and the discretized $u_k$ is $\{(y_i, u_k^i)\}_{1 \le i \le N'}$, where $u_k^i = u_k(y_i)$.
For modeling this operator $G$, we use a parameterized neural network $\tilde G_w$, which receives the input $A_k$ ($k = 1, \ldots, D$) and outputs $\tilde G_w(A_k) = \{\tilde u_k^i\}_{1 \le i \le N'}$ to approximate $u_k$. Our goal is to minimize the mean squared error (MSE) loss between the prediction and data as
$$\min_{w \in W} \frac{1}{D} \sum_{k=1}^{D} \frac{1}{N'} \big\| \tilde G_w(A_k) - \{u_k^i\}_{1 \le i \le N'} \big\|_2^2, \qquad (1)$$
where w is a set of the network parameters and W is the parameter space.
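For concreteness, a minimal PyTorch sketch of the empirical objective in Eq. (1) follows; the `model` interface and tensor shapes are illustrative assumptions, not the released code:

```python
# Minimal sketch of the empirical objective in Eq. (1); `model`, shapes and
# the input-function container are illustrative assumptions only.
import torch

def operator_mse(model, query_points, input_functions, u_true):
    # query_points: (B, N', d); u_true: (B, N', out_dim)
    u_pred = model(query_points, input_functions)   # (B, N', out_dim)
    return ((u_pred - u_true) ** 2).mean()

# A training step is then the usual
#   loss = operator_mse(model, y_q, funcs, u); loss.backward(); opt.step()
```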
Overview of Model Architecture
Here we present an overview of our model, the General Neural Operator Transformer (GNOT). Transformers are a popular architecture for learning operators due to their ability to handle irregular meshes and their strong expressivity. Transformers embed the input mesh points into queries Q, keys K, and values V using MLPs and compute their attention. However, the attention computation still has many limitations due to several challenges.
First, as the problem might have multiple different types of input functions in practical cases, the model needs to be flexible and efficient enough to take arbitrary numbers of input functions defined on different meshes with different numerical scales. To achieve this goal, we first design a general input encoding protocol and embed different input functions and other available prior information using MLPs, as shown in Fig. 2. Then we use a novel attention block comprising a cross-attention layer followed by a self-attention layer to process these embeddings. We invent a Heterogeneous Normalized linear cross-Attention (HNA) layer which is able to take an arbitrary number of embeddings as input. The details of the HNA layer are stated in Sec 3.4.
Second, as practical problems might be multi-scale, it is difficult or inefficient to learn the whole solution using a single model. To handle this issue, we introduce a novel geometric gating mechanism that is inspired by the widely used domain-decomposition methods (Jagtap & Karniadakis, 2021). In particular, the domain-decomposition methods divide the whole domain into subdomains that are learned with subnetworks respectively. We use multiple FFNs in the attention block and compute a weighted average of these FFNs using a gating network, as shown in Fig. 2. The details of geometric gating are shown in Sec 3.5.

Figure 2. Overview of the model architecture. First, we encode input query points and input functions with different MLPs. Then we update features of query points using a heterogeneous normalized cross-attention layer and a normalized self-attention layer. We use a gate network using geometric coordinates of query points to compute a weighted average of multiple expert FFNs. We output the features after processing them using N layers of the attention block.
General Input Encoding
Now we introduce how our model is flexible enough to handle different types of input functions and how it preprocesses these input features. The model takes the positions of query points, denoted by $\{x_i^q\}_{1 \le i \le N_q}$, and input functions as input. We could use a multilayer perceptron (MLP) to map the query points to the query embedding $X \in \mathbb{R}^{N_q \times n_e}$. In practice, we might encounter several different formats and shapes of input functions. Here we present the encoding protocol to process them to get the feature embedding $Y \in \mathbb{R}^{N \times n_e}$, where $N$ could be an arbitrary length and $n_e$ is the dimension of the embedding. We call $Y$ the conditional embedding as it encodes the information of the input functions and extra information. We use simple multilayer perceptrons $f_w$ to map the following inputs to the embedding. Note that we use one individual MLP for each input function, so they do not share parameters.
• Parameter vector $\theta \in \mathbb{R}^p$: We could directly encode the parameter vector using the MLP, i.e., $Y = f_w(\theta)$ and $Y \in \mathbb{R}^{1 \times n_e}$.
• Boundary shape $\{x_i\}_{1 \le i \le N}$: If the solution relies on the shape of the boundary, we propose to extract all boundary points as an input function and embed the positions of these points with an MLP. Specifically, $Y = (f_w(x_i))_{1 \le i \le N} \in \mathbb{R}^{N \times n_e}$.
• Domain-distributed functions $\{(x_i, a_i)\}_{1 \le i \le N}$: If the input function is distributed over a domain or a mesh, we need to encode both the positions of the nodes and the function values, i.e., $Y = (f_w(x_i, a_i))_{1 \le i \le N} \in \mathbb{R}^{N \times n_e}$.
Besides these types of input functions, we could also encode some additional prior information, like domain knowledge for specific problems, using such a framework in a flexible manner, which might improve model performance. For example, we could encode extra features of mesh points $\{(x_i, z_i)\}_{1 \le i \le N}$ and edge information of the mesh $\{(x_i^{src}, x_i^{dst}, e_i)\}_{1 \le i \le N}$. The extra features could be the subdomain indicator of mesh points, and the edges show the topological structure of these mesh points. This extra information is usually generated when collecting the data by solving FEMs. We use MLPs to encode them into $Y = (f_w(x_i, z_i))_{1 \le i \le N}$ and $Y = (f_w(x_i^{src}, x_i^{dst}, e_i))_{1 \le i \le N}$.
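A minimal PyTorch sketch of this encoding protocol follows (dimensions and module names are our own assumptions): one separate MLP per input source, each producing an $(N, n_e)$ conditional embedding.

```python
# Sketch of the encoding protocol: one MLP per input source, each mapping
# its raw features to an (N, n_e) conditional embedding Y. Dimensions and
# module names below are illustrative assumptions.
import torch
import torch.nn as nn

def mlp(in_dim, n_e, hidden=128):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.GELU(),
                         nn.Linear(hidden, n_e))

d, n_e, p = 2, 96, 5                  # space dim, embedding dim, param dim
enc_theta = mlp(p, n_e)               # global parameter vector theta
enc_boundary = mlp(d, n_e)            # boundary points x_i
enc_field = mlp(d + 1, n_e)           # domain-distributed pairs (x_i, a_i)

Y = [enc_theta(torch.randn(1, p)),
     enc_boundary(torch.randn(40, d)),
     enc_field(torch.randn(200, d + 1))]
print([tuple(y.shape) for y in Y])    # [(1, 96), (40, 96), (200, 96)]
```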
Heterogeneous Normalized Attention Block
Here we introduce the Heterogeneous Normalized Attention block. We calculate the heterogeneous normalized cross-attention between the features of query points $X$ and the conditional embeddings $\{Y_l\}_{1 \le l \le L}$. Then we apply a normalized self-attention layer to $X$. Specifically, "heterogeneous" means that we use different MLPs to compute keys and values from different input features, which ensures model capacity. Besides, we normalize the different attention outputs and use "mean" as the aggregation function to average them. The normalization operation ensures numerical stability and also aids the training process. Suppose we have three sequences called queries $\{q_i\}_{1 \le i \le N}$, keys $\{k_i\}_{1 \le i \le M}$ and values $\{v_i\}_{1 \le i \le M}$. The attention is computed as follows,
$$z_t = \sum_i \frac{\exp(q_t \cdot k_i / \tau)}{\sum_j \exp(q_t \cdot k_j / \tau)} \, v_i, \qquad (2)$$
where $\tau$ is a hyperparameter. For self-attention models, $q, k, v$ are obtained by applying a linear transformation to the input sequence $X = (x_i)_{1 \le i \le N}$, i.e., $q_i = W_q x_i$, $k_i = W_k x_i$, $v_i = W_v x_i$.
For cross-attention models, $q$ comes from the query sequence $X$ while keys and values come from another sequence $Y = (y_i)_{1 \le i \le M}$, i.e., $q_i = W_q x_i$, $k_i = W_k y_i$, $v_i = W_v y_i$.
However, the computational cost of the attention is $O(N^2 n_e)$ for self-attention and $O(NM n_e)$ for cross-attention, where $n_e$ is the dimension of the embedding.
For problems of learning operators, data usually consists of thousands to even millions of points. The computational cost is unaffordable using vanilla attention with quadratic complexity. Here we propose a novel attention layer with a linear computational cost that could handle long sequences. We first normalize these sequences respectively,
$$\tilde q_i = \mathrm{Softmax}(q_i) = \left( \frac{e^{q_{ij}}}{\sum_j e^{q_{ij}}} \right)_{j=1,\ldots,n_e}, \qquad (3)$$
$$\tilde k_i = \mathrm{Softmax}(k_i) = \left( \frac{e^{k_{ij}}}{\sum_j e^{k_{ij}}} \right)_{j=1,\ldots,n_e}. \qquad (4)$$
Then we compute the attention output without softmax using the following equation,
$$z_t = \sum_i \frac{\tilde q_t \cdot \tilde k_i}{\sum_j \tilde q_t \cdot \tilde k_j} \, v_i. \qquad (5)$$
We denote $\alpha_t = \big( \sum_j \tilde q_t \cdot \tilde k_j \big)^{-1}$, and the efficient attention could be represented by
$$z_t = \sum_i \alpha_t \, (\tilde q_t \cdot \tilde k_i) \, v_i = \alpha_t \, \tilde q_t \cdot \Big( \sum_i \tilde k_i \otimes v_i \Big). \qquad (6)$$
We could compute $\sum_i \tilde k_i \otimes v_i$ first with a cost of $O(M n_e^2)$ and then compute its multiplication with $\tilde q_t$ with a cost of $O(N n_e^2)$. The total cost is $O((M+N) n_e^2)$, which is linear with respect to the sequence length.
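The reordering in Eq. (6) is easy to express in code; the following PyTorch sketch (our own minimal single-head, unbatched variant, not the released implementation) computes Eqs. (3)-(6) with the linear-cost ordering:

```python
# Normalized linear attention, Eqs. (3)-(6): softmax-normalize q and k
# separately, then contract k with v first so that the total cost is
# O((M + N) n_e^2) instead of quadratic in the sequence lengths.
import torch

def normalized_linear_attention(q, k, v):
    # q: (N, n_e); k, v: (M, n_e)
    q = torch.softmax(q, dim=-1)                       # Eq. (3)
    k = torch.softmax(k, dim=-1)                       # Eq. (4)
    kv = k.transpose(0, 1) @ v                         # sum_i k_i (x) v_i
    alpha = 1.0 / (q @ k.sum(0, keepdim=True).transpose(0, 1))  # Eq. (6)
    return alpha * (q @ kv)                            # Eq. (5)/(6)

z = normalized_linear_attention(torch.randn(10, 8),
                                torch.randn(7, 8), torch.randn(7, 8))
print(z.shape)  # torch.Size([10, 8])
```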
In our model, we usually have multiple conditional embeddings, and we need to fuse their information with the query points. To this end, we design a cross-attention using the normalized linear attention that is able to handle arbitrary numbers of conditional embeddings. Specifically, suppose we have $L$ conditional embeddings $\{Y_l \in \mathbb{R}^{N_l \times n_e}\}_{1 \le l \le L}$ encoding the input functions and extra information. We first compute the queries $Q = (q_i) = X W_q$, keys $K_l = (k_i^l) = Y_l W_k^l$ and values $V_l = (v_i^l) = Y_l W_v^l$, and then normalize every $q_i$ and $k_i^l$ to $\tilde q_i$ and $\tilde k_i^l$. Then we compute the cross-attention as follows,
$$z_t = \tilde q_t + \frac{1}{L} \sum_{l=1}^{L} \sum_{i_l=1}^{N_l} \alpha_t^l \, (\tilde q_t \cdot \tilde k_{i_l}) \, v_{i_l} \qquad (7)$$
$$= \tilde q_t + \frac{1}{L} \sum_{l=1}^{L} \alpha_t^l \, \tilde q_t \cdot \Big( \sum_{i_l=1}^{N_l} \tilde k_{i_l} \otimes v_{i_l} \Big), \qquad (8)$$
where $\alpha_t^l = \big( \sum_{j=1}^{N_l} \tilde q_t \cdot \tilde k_j \big)^{-1}$ is the normalization coefficient.
We see that the cross-attention aggregates all the information from the input functions and extra information. We also add an identity mapping as a skip connection to ensure that the query information is not lost. The computational complexity of Eq. (8) is $O\big( (N + \sum_l N_l) \, n_e^2 \big)$, also linear in the sequence length. After applying such a cross-attention layer, we impose the self-attention layer on the query features, i.e.,
$$z'_t = \sum_i \alpha_t \, (\tilde q_t \cdot \tilde k_i) \, v_i, \qquad (9)$$
where all of $q$, $k$ and $v$ are computed from the cross-attention output $z_t$ as
$$q_t = W_q z_t, \quad k_t = W_k z_t, \quad v_t = W_v z_t. \qquad (10)$$
We use the cascade of a cross-attention layer and a self-attention layer as the basic block of our model. We tile multiple layers and multiple heads similar to other transformer models. The embeddings $z_t$ and $z'_t$ are divided into $H$ heads as $z_t = \mathrm{Concat}(z_t^i)_{i=1}^{H}$ and $z'_t = \mathrm{Concat}(z_t'^i)_{i=1}^{H}$.
Each head $z_t^i$ can be updated using Eq. (7) and Eq. (9).
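Building on that, a minimal single-head sketch of the heterogeneous normalized cross-attention of Eqs. (7)-(8) might look as follows (class and variable names are ours):

```python
# Heterogeneous normalized cross-attention, Eqs. (7)-(8): each conditional
# embedding Y_l gets its own key/value projections ("heterogeneous"); the
# normalized per-source outputs are averaged, with a skip connection on q.
import torch
import torch.nn as nn

class HNACrossAttention(nn.Module):
    def __init__(self, n_e, n_sources):
        super().__init__()
        self.w_q = nn.Linear(n_e, n_e, bias=False)
        self.w_k = nn.ModuleList(nn.Linear(n_e, n_e, bias=False)
                                 for _ in range(n_sources))
        self.w_v = nn.ModuleList(nn.Linear(n_e, n_e, bias=False)
                                 for _ in range(n_sources))

    def forward(self, x, ys):        # x: (N, n_e); ys: list of (N_l, n_e)
        q = torch.softmax(self.w_q(x), dim=-1)
        out = q.clone()              # identity / skip term in Eq. (7)
        for w_k, w_v, y in zip(self.w_k, self.w_v, ys):
            k = torch.softmax(w_k(y), dim=-1)
            kv = k.transpose(0, 1) @ w_v(y)            # sum_i k_i (x) v_i
            alpha = 1.0 / (q @ k.sum(0, keepdim=True).transpose(0, 1))
            out = out + alpha * (q @ kv) / len(ys)     # mean over sources
        return out

attn = HNACrossAttention(n_e=8, n_sources=2)
out = attn(torch.randn(10, 8), [torch.randn(5, 8), torch.randn(3, 8)])
print(out.shape)  # torch.Size([10, 8])
```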
Geometric Gating Mechanism
To handle multi-scale problems, we introduce our geometric gating mechanism based on mixture-of-experts (MoE), which is a common technique in transformers for improving model efficiency and capacity. We adapt it to serve as a domain-decomposition technique for dealing with multi-scale problems. Specifically, we design a geometric gating network that takes the coordinates of the query points as input and outputs unnormalized scores $G_i(x)$ for averaging the expert networks. In each layer of our model, we use $K$ subnetworks for the MLP, denoted by $E_i(\cdot)$. When we have multiple expert networks, the update of $z_t$ and $z'_t$ in the feedforward layer after Eq. (8) and Eq. (9) is replaced by
$$z_t \leftarrow z_t + \sum_{i=1}^{K} p_i(x_t) \cdot E_i(z_t). \qquad (11)$$
The weights for averaging the expert networks are computed as
$$p_i(x_t) = \frac{\exp(G_i(x_t))}{\sum_{i=1}^{K} \exp(G_i(x_t))}, \qquad (12)$$
where the gating network $G(\cdot) : \mathbb{R}^d \to \mathbb{R}^K$ takes the geometric coordinates of the query points $x_t$ as input. The normalized outputs $p_i(x_t)$ are the weights for averaging these experts.
The geometric gating mechanism could be viewed as a soft domain decomposition. There are several design choices for the gating network. First, we could use a simple MLP to represent the gating network and learn its parameters end to end. Second, available prior information could be embedded into the gating network. For example, we could divide the domain into several subdomains and fix the gating network by hand, as is widely done in other domain-decomposition methods like XPINNs when we have enough prior information about the problem. By introducing the gating module, our model could be naturally extended to handle large-scale and multi-scale problems.
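As an illustration, here is a minimal PyTorch sketch of the geometric-gating FFN of Eqs. (11)-(12), using an end-to-end learned MLP gate (our naming, not the released code):

```python
# Geometric-gating mixture of expert FFNs, Eqs. (11)-(12): the gate G(.)
# sees only the spatial coordinates x of the query points, so each expert
# specializes in a soft subdomain of Omega.
import torch
import torch.nn as nn

class GeoGatedFFN(nn.Module):
    def __init__(self, d, n_e, n_experts, hidden=128):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(d, hidden), nn.GELU(),
                                  nn.Linear(hidden, n_experts))
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(n_e, hidden), nn.GELU(),
                          nn.Linear(hidden, n_e))
            for _ in range(n_experts))

    def forward(self, z, x):          # z: (N, n_e) features; x: (N, d) coords
        p = torch.softmax(self.gate(x), dim=-1)               # Eq. (12)
        expert_out = torch.stack([E(z) for E in self.experts], dim=-1)
        return z + (expert_out * p.unsqueeze(1)).sum(dim=-1)  # Eq. (11)

ffn = GeoGatedFFN(d=2, n_e=8, n_experts=3)
print(ffn(torch.randn(10, 8), torch.randn(10, 2)).shape)  # torch.Size([10, 8])
```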
Experiments
In this section, we conduct extensive experiments to demonstrate the effectiveness of our method on multiple challenging datasets.
Experimental Setup and Evaluation Protocol
Datasets. To conduct comprehensive experiments to show the scalability and superiority of our method, we choose several datasets from multiple domains including fluids, elastic mechanics, electromagnetism, heat conduction and so on. We briefly introduce these datasets here. Due to limited space, detailed descriptions are listed in Appendix A.
We list the challenges of these datasets in Table 1, where "A", "B", and "C" indicate that the problem has an irregular mesh, has multiple input functions, and is multi-scale, respectively.
• Darcy2d (Li et al., 2020): A second order, linear, elliptic PDE defined on a unit square. The input function is the diffusion coefficient defined on the square. The goal is to predict the solution u from coefficients a.
• NS2d (Li et al., 2020): A two-dimensional time-dependent Navier-Stokes equation of a viscous, incompressible fluid in vorticity form on the unit torus. The goal is to predict the last few frames from the first few frames of the vorticity u.
• NACA (Li et al., 2022a): A transonic flow over an airfoil governed by the Euler equation. The input function is the shape of the airfoil. The goal is to predict the solution field from the input mesh describing the airfoil shape.
• Elasticity (Li et al., 2022a): A solid body system satisfying elastokinetics. The geometric shape is a unit square with an irregular cavity. The goal is to predict the solution field from the input mesh.
• NS2d-c: A two-dimensional steady-state fluid problem governed by the Navier-Stokes equations. The geometric shape is a rectangle with multiple cavities, which is highly complex. The goal is to predict the velocity fields in the x and y directions, u, v, and the pressure field p from the input mesh.
• Inductor2d: A two-dimensional inductor system satisfying the Maxwell equations. The input functions include the boundary shape and several global parameter vectors. The geometric shape of this problem is highly irregular and the problem is multi-scale, so it is highly challenging. The goal is to predict the magnetic potential A_z from these input functions.
• Heat: A multi-scale heat conduction problem. The input functions include multiple boundary shapes segmenting the domain and a domain-distributed function deciding the boundary condition. The physical properties of different subdomains vary greatly. The goal is to predict the temperature field T from input functions.
• Heatsink: A 3d multi-physics example characterizing heat convection and conduction of a heatsink. The heat convection is accomplished by the airflow in the pipe. This problem is a coupling of laminar flow and heat conduction. We need to predict the velocity field and the temperature field from the input functions.
Baselines. We compare our method with several strong baselines listed below.
• MIONet (Jin et al., 2022): It extends DeepONet (Lu et al., 2019) to multiple input functions by using tensor products and multiple branch networks.
• FNO(-interp) (Li et al., 2020): FNO is an effective operator learning model that learns the mapping in spectral space. However, it is limited to regular meshes. We use basic interpolation to obtain a uniform grid so that FNO can be applied. However, it still has difficulty dealing with multiple input functions.
• Galerkin Transformer (Cao, 2021): Galerkin Transformer proposed an efficient linear transformer for learning operators. It introduces problem-dependent decoders like spectral regressors for regular grids.
• Geo-FNO (Li et al., 2022a): It extends FNO to irregular meshes by learning a mapping from the irregular grid to a uniform grid. The mapping could be learned end-to-end or pre-computed.
• OFormer (Li et al., 2022b): It uses the Galerkin type cross attention to compute features of query points. We slightly modify it by concatenating the different input functions to handle multiple input cases.
Table 1. Our main results of operator learning on several datasets from multiple areas. Types like u, v are the physical quantities to predict, and types like "part" denote the size of the dataset. "-" means that the method is not able to handle this dataset. Lower scores mean better performance and the best results are bolded.

Dataset (challenges)   Type   MIONet    FNO(-interp)   GK-Transformer   Geo-FNO   OFormer   GNOT
Inductor2d (A, C)      Az     3.10e-2   -              2.56e-1          -         2.23e-2   1.21e-2
                       Bx     3.49e-2   -              3.06e-2          -         2.83e-2   1.92e-2
                       By     6.73e-2   -              4.45e-2          -         4.28e-2   3.62e-2
Heat (A, B, C)         part   1.74e-1   -              -                -         -         4.13e-2
                       full   1.45e-1   -              -                -         -         2.56e-2
Heatsink (A, B, C)     T      4.67e-1   -              -                -         -         2.53e-1
                       u      3.52e-1   -              -                -         -         1.42e-1
                       v      3.23e-1   -              -                -         -         1.81e-1
                       w      3.71e-1   -              -                -         -         1.88e-1

Evaluation Protocol and Hyperparameters. We use the mean $l_2$ relative error as the evaluation metric. Suppose $u_i, u'_i \in \mathbb{R}^n$ are the ground truth and predicted solutions for the $i$-th sample, and $D$ is the dataset size. The mean $l_2$ relative error is computed as follows,
$$\varepsilon = \frac{1}{D} \sum_{i=1}^{D} \frac{\| u'_i - u_i \|_2}{\| u_i \|_2}. \qquad (13)$$
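For reference, a small NumPy sketch of this metric (assuming predictions and targets stacked into (D, n) arrays):

```python
# Mean l2 relative error of Eq. (13), averaged over the D test samples.
import numpy as np

def mean_l2_relative_error(u_pred, u_true):
    # u_pred, u_true: arrays of shape (D, n)
    num = np.linalg.norm(u_pred - u_true, axis=1)
    den = np.linalg.norm(u_true, axis=1)
    return float(np.mean(num / den))
```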
For the hyperparameters of the baselines and our methods, we choose the network width from {64, 96, 128, 256} and the number of layers from 2 ∼ 6. We train all models with the AdamW (Loshchilov & Hutter, 2017) optimizer with the one-cycle learning rate strategy (Smith & Topin, 2019) or the exponential decay strategy. We train all models for 500 epochs with batch size from {4, 8, 16, 32}. We run our experiments on 1 ∼ 8 2080 Ti GPUs.
Main Results for Operator Learning
The main experimental results for all datasets and methods are shown in Table 1. More details and hyperparameters could be found in Appendix B. Based on these results, we have the following observations.
First, we find that our method performs significantly better on nearly all tasks compared with baselines. On datasets with irregular mesh and multiple scales like NACA, NS2d-c, and Inductor2d, our model achieves a remarkable improvement compared with all baselines. On some tasks, we reduce the prediction error by about 40% ∼ 50%. It demonstrates the scalability of our model. Our GNOT is also capable of learning operators on datasets with multiple inputs like Heat and Heatsink. The excellent performance on these datasets shows that our model is a general yet effective framework that could be used as a surrogate model for learning operators. This is because our heterogeneous normalized attention is highly effective at extracting the complex relationships between input features. However, GK-Transformer performs slightly better on the Darcy2d dataset, which is a simple dataset with a uniform grid.
Second, we find that our model is more scalable when the amount of data increases, showing the potential to handle large datasets. On NS2d dataset, our model reduces the error over 3 times from 13.7% to 4.42%. On the Heat dataset, we have reduced the error from 4.13% to 2.58%. Compared with other models like FNO(-interp), GK-Transformer on NS2d dataset, and MIONet on Heat dataset, our model has a larger capacity and is able to extract more information when more data is accessible. While OFormer also shows a good performance on the NS2d dataset, the performance still falls behind our model.
Third, we find that for all models the performance on multi-scale problems like Heatsink is worse than on other datasets. This indicates that multi-scale problems are more challenging and difficult. We found that there are several failure cases, i.e. predicting the velocity distributions u, v, w for the Heatsink dataset, where the prediction error is very high (more than 10%). We suggest that incorporating physical priors might help improve performance.
Scaling Experiments
One of the most important advantages of transformers is that their performance gains consistently with the growth of the amount of data and model parameters. Here we conduct a scaling experiment to show how the prediction error varies when the amount of data increases. We use the NS2d-c dataset and predict the pressure field p. We choose MIONet as the baseline, and the results are shown in Fig. 3.
The right figure shows how the prediction error varies with the number of layers in GNOT. Roughly we see that the error decreases with the growth of the number of layers for both Elasticity and NS2d-c datasets. The performance gain becomes small when the number of layers is more than 4 on Elasticity dataset. An efficient choice is to choose 4 layers since more layers mean more computational cost.
Ablation Experiments
We finally conduct an ablation study to show the influence of different components and hyperparameters of our model.
Necessity of different attention layers.
Our attention block consists of a cross-attention layer followed by a self-attention layer. To study the necessity and the order of self-attention layers, we conduct experiments on the NACA, Elasticity, and NS2d-c datasets. The results are shown in Table 2. Note that "cross+self" denotes a cross-attention layer followed by a self-attention layer, and the rest can be read in the same manner. We find that the "cross+self" attention block is the best on all datasets, and it is significantly better than "cross+cross". On the one hand, this shows that the self-attention layer is necessary for the model. On the other hand, it is a better choice to put the self-attention layer after the cross-attention layer. We conjecture that the self-attention layer after the cross-attention layer utilizes the information in both query points and input functions more effectively.

Table 2. Experimental results for the necessity and order of different attention blocks.

                NACA      Elasticity   NS2d-c (p)
cross + cross   3.52e-2   3.31e-2      1.50e-2
self + cross    9.53e-3   1.25e-2      9.89e-2
cross + self    7.57e-3   8.65e-3      7.41e-3
Influences of the number of experts and attention heads.
We use multiple attention heads and a soft mixture-of-experts containing multiple MLPs in the model. Here we study the influence of the number of experts and attention heads. We conduct this experiment on Heat, which is a multi-scale dataset containing multiple subdomains. The results are shown in Table 3. The left two columns show the results of using different numbers of experts with 1 attention head. We see that using 3 experts is the best. The problem of Heat contains three different subdomains with distinct properties, so it is a natural choice to use three experts, which makes the problem easier to learn. We also find that using too many experts (≥ 8) deteriorates the performance. The right two columns are the results of using different numbers of attention heads with 1 expert. We find that the number of attention heads has little impact on the performance. Roughly, we see that using more attention heads leads to slightly better performance.
Conclusion
In this paper, we propose an operator learning model called General Neural Operator Transformer (GNOT). To solve the challenges of practical operator learning problems, we devise two new components, i.e. the heterogeneous normalized attention and the geometric gating mechanism.
Then we conducted comprehensive experiments on multiple datasets in science and engineering. The excellent performance compared with baselines verified the effectiveness of our method. It is an attempt to use a general model architecture to handle these problems and it paves a possible direction for large-scale neural surrogate models in science and engineering.
A. Details and visualization of datasets
Here we introduce more details about the datasets. For all these datasets, we generate the data with COMSOL Multiphysics 6.0. The code and datasets are publicly available at https://github.com/thu-ml/GNOT.

NS2d-c. It obeys a 2d steady-state Navier-Stokes equation defined on a rectangle minus four circular regions, i.e. $\Omega = [0, 8]^2 \setminus \bigcup_{i=1}^{4} R_i$, where $R_i$ is a circle. The governing equations are
$$(u \cdot \nabla) u = \frac{1}{Re} \nabla^2 u - \nabla p, \qquad (14)$$
$$\nabla \cdot u = 0. \qquad (15)$$
The velocity vanishes on the boundary $\partial\Omega$, i.e. $u = 0$. On the outlet, the pressure is set to 0. On the inlet, the input velocity is $u_x = y(8 - y)/16$. The visualization of the mesh is shown in Figure 4, and the velocity and pressure fields are shown in Figure 5. We create 1100 samples with different positions of the circles, where we use 1000 for training and 100 for testing.

Inductor2d. A 2d inductor satisfying the following steady-state Maxwell's equations,
$$\nabla \times H = J, \qquad (16)$$
$$B = \nabla \times A, \qquad (17)$$
$$J = \sigma E + \sigma v \times B + J_e, \qquad (18)$$
$$B = \mu_0 \mu_r H. \qquad (19)$$
The boundary condition is
$$n \times A = 0. \qquad (20)$$
On the coils, the current density is
$$J_e = \frac{N I_{coil}}{A} \, e_{coil}. \qquad (21)$$
We create 1100 inductor2d models with different geometric parameters, coil currents $I_{coil}$, and material parameters $\mu_r$. Our goal is to predict the magnetic potential $A_z$ from these input functions. We use 1000 for training and 100 for testing. We plot the geometry of this problem in Figure 6. The solutions are shown in Figure 7.
Heat. An example satisfying the 2d steady-state heat equation,
$$\rho C_p u \cdot \nabla T - k \nabla^2 T = Q. \qquad (22)$$
The geometry is a rectangle $\Omega = [0, 9]^2$, but it is divided into three parts using two splines. On the left and right boundaries, it satisfies the periodic boundary condition. The input functions of this dataset include the boundary temperature on the top boundary and the parameters of the splines. We generate a small dataset with 1100 samples and a full dataset with 5500 samples. The mesh and the temperature field are visualized in Figure 8.
Heatsink. A 3d steady-state multi-physics example with a coupling of heat and fluids. This example is complicated and we omit the technical details here; they could be found in the mph source files. The fluids satisfy the Navier-Stokes equation and the heat equation. The flow field and temperature field are coupled by heat convection and heat conduction. The input functions include some geometric parameters and the velocity distribution at the inlet. The goal is to predict the velocity field for the fluids and the temperature field for the whole domain. We generate 1100 samples for training and testing. The geometry of this problem is shown in Figure 9. The solution fields T, u, v, w are shown in Figure 10.
B. Hyperparameters and details for models.
MIONet. We use MLPs with 4 layers and width 256 as the branch network and trunk network. When the problem has multiple input functions, MIONet uses multiple branch networks and one trunk network. If there is only one branch, it degenerates to DeepONet. Since the discretized input functions contain different numbers of points for different samples, we pad the inputs to the maximum number of points in the whole dataset. We train MIONet with the AdamW optimizer until convergence. The batch size is chosen roughly to be 4× the average sequence length. We use the AdamW optimizer with the one-cycle learning rate decay strategy. Except for NS2d and Burgers1d, we use the pointwise decoder for GK-Transformer since the spectral regressor is limited to uniform grids. Other parameters of OFormer are kept similar to its original paper. We list the details of these hyperparameters in Table 4.
C. Other Supplementary Results
We provide a runtime comparison for training our GNOT as well as the baselines in Table 5. We see that a drawback of all transformer-based methods is that training them is slower than FNO.
D. Broader Impact
Learning neural operators has a wide range of real-world applications in many subjects including physics, quantum mechanics, heat engineering, fluid dynamics, and the aerospace industry. Our GNOT is a general and powerful model for learning neural operators and thus might accelerate the development of those fields. One of the potential negative impacts is that methods using neural networks like transformers lack theoretical guarantees and interpretability. If these unexplainable models are deployed in risk-sensitive areas, accident investigation becomes more difficult. A possible way to solve the problem is to develop more explainable and robust methods with better theoretical guarantees or corner-case protection when these models are deployed in risk-sensitive areas.
Figure 1. A pre-trained neural operator using transformers is much more efficient for the numerical simulation of physical systems. However, there are several challenges in training neural operators including irregular mesh, multiple inputs, and multiple scales.
Figure 3. Results of scaling experiments for different dataset sizes (left) and different numbers of layers (right).
Figure 4. Visualization of the mesh of the NS2d-c dataset.

Figure 5. Visualization of the velocity fields u, v and pressure field p of the NS2d-c dataset.

Figure 6. Visualization of the mesh of the inductor2d dataset.

Figure 7. Visualization of Bx, By and Az of the inductor2d dataset.

Figure 8. Left: mesh of the Heat2d dataset. Right: visualization of the temperature field T.

Figure 9. Visualization of the mesh of the Heatsink dataset.

Figure 10. Visualization of T, u, v, w of the Heatsink dataset.
Table 3. Results for ablation experiments on the influence of the number of experts N_experts (left two columns) and the number of attention heads N_heads (right two columns).

N_experts   error     N_heads   error
1           0.04212   1         0.04131
3           0.03695   4         0.04180
8           0.04732   8         0.04068
16          0.04628   16        0.03952
FNO(-interp) and Geo-FNO. We use 4 FNO layers with modes from {12, 16, 32} and width from {16, 32, 64}. The batch size is chosen from {8, 20, 32, 48, 64}. For datasets with uniform grids like Darcy2d and NS2d, we use vanilla FNO models. For datasets with irregular grids, we interpolate the dataset on a resolution from {80 × 80, 120 × 120, 160 × 160}. Geo-FNO degenerates to the vanilla FNO model on the Darcy2d and NS2d datasets, so Geo-FNO performs the same as FNO there. Other hyperparameters of Geo-FNO like width, modes, and batch size are kept the same as FNO(-interp).

GK-Transformer, OFormer, and GNOT. For all transformer models, we choose the number of heads from {1, 4, 8, 16}. The number of layers is chosen from {2, 3, 4, 5, 6}. The dimensionality of the embedding and the hidden size of the FFNs are chosen from {64, 96, 128, 256}. The batch size is chosen from {4, 8, 16, 20}.

Table 4. Details of hyperparameters used for main experiments.

Hyperparameter type             Darcy2d, NS2d, Elasticity, NACA   Inductor2d, Heat, NS2d-c, NS2d-full   Heatsink
Activation function             GELU                              GELU                                  GELU
Number of attention layers      3∼4                               4                                     4
Hidden size of attention        96                                256                                   192
Layers of MLP                   3                                 4                                     4
Hidden size of MLP              192                               256                                   192
Hidden size of input embedding  96                                128, 256                              96, 192
Learning rate schedule          Onecycle                          Onecycle                              Onecycle
N_experts                       {1, 4}                            {3, 4}                                4
N_heads                         {4, 8}                            8                                     8

Table 5. Runtime comparison for different methods.

Time per epoch (s)   MIONet   FNO(-interp)   GK-Transformer   Geo-FNO   OFormer   GNOT
Darcy2d              18.6     13.7           27.7             13.9      29.1      29.4
NS2d                 -        18.2           23.1             17.9      22.5      23.7
Elasticity           6.7      3.1            5.8              2.9       6.0       6.3
NACA                 31.2     28.6           43.7             23.4      45.2      46.5
Heatsink             -        -              -                -         -         68.4
1 Dept. of Comp. Sci. & Techn., Institute for AI, BNRist Center, Tsinghua-Bosch Joint ML Center, Tsinghua University; 2 Dept. of EE, Tsinghua University; 3 RealAI; 4 Bosch China Investment Ltd. Correspondence to: Jun Zhu <[email protected]>.
Acknowledgment
Beltagy, I., Peters, M. E., and Cohan, A. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150, 2020.
Cao, S. Choose a transformer: Fourier or Galerkin. Advances in Neural Information Processing Systems, 34:24924-24940, 2021.
Child, R., Gray, S., Radford, A., and Sutskever, I. Generating long sequences with sparse transformers. arXiv preprint arXiv:1904.10509, 2019.
Choromanski, K., Likhosherstov, V., Dohan, D., Song, X., Gane, A., Sarlos, T., Hawkins, P., Davis, J., Mohiuddin, A., Kaiser, L., et al. Rethinking attention with performers. arXiv preprint arXiv:2009.14794, 2020.
Fedus, W., Zoph, B., and Shazeer, N. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. 2021.
Grady II, T. J., Khan, R., Louboutin, M., Yin, Z., Witte, P. A., Chandra, R., Hewett, R. J., and Herrmann, F. J. Towards large-scale learned solvers for parametric PDEs with model-parallel Fourier neural operators. arXiv preprint arXiv:2204.01205, 2022.
Gupta, G., Xiao, X., and Bogdan, P. Multiwavelet-based operator learning for differential equations. Advances in Neural Information Processing Systems, 34:24048-24062, 2021.
Hao, Z., Liu, S., Zhang, Y., Ying, C., Feng, Y., Su, H., and Zhu, J. Physics-informed machine learning: A survey on problems, methods and applications. arXiv preprint arXiv:2211.08064, 2022.
Hu, Z., Jagtap, A. D., Karniadakis, G. E., and Kawaguchi, K. Augmented physics-informed neural networks (APINNs): A gating network-based soft domain decomposition methodology. arXiv preprint arXiv:2211.08939, 2022.
Huang, Z., Wang, X., Huang, L., Huang, C., Wei, Y., and Liu, W. CCNet: Criss-cross attention for semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 603-612, 2019.
Jacobs, R. A., Jordan, M. I., Nowlan, S. J., and Hinton, G. E. Adaptive mixtures of local experts. Neural Computation, 3(1):79-87, 1991.
Jagtap, A. D. and Karniadakis, G. E. Extended physics-informed neural networks (XPINNs): A generalized space-time domain decomposition based deep learning framework for nonlinear partial differential equations. In AAAI Spring Symposium: MLPS, 2021.
Jin, P., Meng, S., and Lu, L. MIONet: Learning multiple-input operators via tensor product. arXiv preprint arXiv:2202.06137, 2022.
Kaplan, J., McCandlish, S., Henighan, T., Brown, T. B., Chess, B., Child, R., Gray, S., Radford, A., Wu, J., and Amodei, D. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361, 2020.
Katharopoulos, A., Vyas, A., Pappas, N., and Fleuret, F. Transformers are RNNs: Fast autoregressive transformers with linear attention. In International Conference on Machine Learning, pp. 5156-5165. PMLR, 2020.
Kitaev, N., Kaiser, Ł., and Levskaya, A. Reformer: The efficient transformer. arXiv preprint arXiv:2001.04451, 2020.
Lee, S. Mesh-independent operator learning for partial differential equations. In ICML 2022 2nd AI for Science Workshop.
Lepikhin, D., Lee, H., Xu, Y., Chen, D., Firat, O., Huang, Y., Krikun, M., Shazeer, N., and Chen, Z. GShard: Scaling giant models with conditional computation and automatic sharding. arXiv preprint arXiv:2006.16668, 2020.
Li, Z., Kovachki, N., Azizzadenesheli, K., Liu, B., Bhattacharya, K., Stuart, A., and Anandkumar, A. Fourier neural operator for parametric partial differential equations. arXiv preprint arXiv:2010.08895, 2020.
Li, Z., Huang, D. Z., Liu, B., and Anandkumar, A. Fourier neural operator with learned deformations for PDEs on general geometries. arXiv preprint arXiv:2207.05209, 2022a.
Li, Z., Meidani, K., and Farimani, A. B. Transformer for partial differential equations' operator learning. arXiv preprint arXiv:2205.13671, 2022b.
Liu, S., Hao, Z., Ying, C., Su, H., Cheng, Z., and Zhu, J. NUNO: A general framework for learning parametric PDEs with non-uniform data. arXiv preprint arXiv:2305.18694, 2023.
Liu, X., Xu, B., and Zhang, L. HT-Net: Hierarchical transformer based operator learning model for multiscale PDEs. arXiv preprint arXiv:2210.10890, 2022.
Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., and Guo, B. Swin Transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 10012-10022, 2021.
Loshchilov, I. and Hutter, F. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.
Lu, L., Jin, P., and Karniadakis, G. E. DeepONet: Learning nonlinear operators for identifying differential equations based on the universal approximation theorem of operators. arXiv preprint arXiv:1910.03193, 2019.
Owen, S. J. A survey of unstructured mesh generation technology. In IMR, pp. 239-267, 1998.
Peng, H., Pappas, N., Yogatama, D., Schwartz, R., Smith, N. A., and Kong, L. Random feature attention. arXiv preprint arXiv:2103.02143, 2021.
Prasthofer, M., De Ryck, T., and Mishra, S. Variable-input deep operator networks. arXiv preprint arXiv:2205.11404, 2022.
Ronneberger, O., Fischer, P., and Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234-241. Springer, 2015.
Smith, L. N. and Topin, N. Super-convergence: Very fast training of neural networks using large learning rates. In Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications, volume 11006, pp. 369-386. SPIE, 2019.
Tay, Y., Dehghani, M., Bahri, D., and Metzler, D. Efficient transformers: A survey. ACM Computing Surveys (CSUR), 2020.
Tran, A., Mathews, A., Xie, L., and Ong, C. S. Factorized Fourier neural operators. arXiv preprint arXiv:2111.13802, 2021.
Wang, S., Wang, H., and Perdikaris, P. Learning the solution operator of parametric partial differential equations with physics-informed DeepONets. Science Advances, 7(40):eabi8605, 2021.
Wang, S., Wang, H., and Perdikaris, P. Improved architectures and training algorithms for deep operator networks. Journal of Scientific Computing, 92(2):1-42, 2022.
Weinan, E. Principles of Multiscale Modeling. Cambridge University Press, 2011.
Wen, G., Li, Z., Azizzadenesheli, K., Anandkumar, A., and Benson, S. M. U-FNO, an enhanced Fourier neural operator-based deep-learning model for multiphase flow. Advances in Water Resources, 163:104180, 2022.
Zachmanoglou, E. C. and Thoe, D. W. Introduction to Partial Differential Equations with Applications. Courier Corporation, 1986.
| [
"https://github.com/thu-ml/GNOT.",
"https://github.com/thu-ml/GNOT."
] |
[
"Calculating the Hawking Temperatures of Conventional Black Holes in the f(R) Gravity Models with the RVB Method",
"Calculating the Hawking Temperatures of Conventional Black Holes in the f(R) Gravity Models with the RVB Method"
] | [
"Wen-Xiang Chen \nDepartment of Astronomy\nSchool of Physics and Materials Science\nGuangzhou University\n510006GuangzhouChina\n",
"Jun-Xian Li \nDepartment of Astronomy\nSchool of Physics and Materials Science\nGuangzhou University\n510006GuangzhouChina\n",
"Jing-Yi Zhang \nDepartment of Astronomy\nSchool of Physics and Materials Science\nGuangzhou University\n510006GuangzhouChina\n"
] | [
"Department of Astronomy\nSchool of Physics and Materials Science\nGuangzhou University\n510006GuangzhouChina",
"Department of Astronomy\nSchool of Physics and Materials Science\nGuangzhou University\n510006GuangzhouChina",
"Department of Astronomy\nSchool of Physics and Materials Science\nGuangzhou University\n510006GuangzhouChina"
] | [] | This paper applies the RVB method to calculate the Hawking temperatures of black holes in f(R) gravity. In calculating the Hawking temperature, we find a difference in the integration constant between the RVB method and the conventional method. | 10.1007/s10773-023-05335-7 | [
"https://export.arxiv.org/pdf/2210.09062v2.pdf"
] | 252,917,639 | 2210.09062 | 3e185b33c6e081feaa37cd77e9bb3f229362f6a2 |
Calculating the Hawking Temperatures of Conventional Black Holes in the f(R) Gravity Models with the RVB Method
2 Dec 2022
Wen-Xiang Chen
Department of Astronomy
School of Physics and Materials Science
Guangzhou University
510006GuangzhouChina
Jun-Xian Li
Department of Astronomy
School of Physics and Materials Science
Guangzhou University
510006GuangzhouChina
Jing-Yi Zhang
Department of Astronomy
School of Physics and Materials Science
Guangzhou University
510006GuangzhouChina
Calculating the Hawking Temperatures of Conventional Black Holes in the f(R) Gravity Models with the RVB Method
2 Dec 2022. Keywords: RVB method, f(R) gravity, pure geometric model, Hawking temperature
This paper applies the RVB method to calculate the Hawking temperatures of black holes in f(R) gravity. In calculating the Hawking temperature, we find a difference in the integration constant between the RVB method and the conventional method.
INTRODUCTION
In the classical view, a black hole is an extreme object from which nothing can escape. Hawking and Bekenstein discovered that black holes can have temperature and entropy, so that a black hole system can be considered a thermodynamic system. The topological properties of black holes can be characterized by the topologically invariant Euler characteristic [1-9]. Some important features of black holes can be studied more easily by calculating the Euler characteristic; for example, the black hole entropy has been discussed in this way [9,10], while Padmanabhan et al. emphasized the importance of the topological properties of the horizon temperature [11,12]. Recently, Robson, Villari, and Biancalana [12-14] showed that the Hawking temperature of a black hole is closely related to its topology. Moreover, a topological method related to the Euler characteristic was proposed to obtain the Hawking temperature, and it has been successfully applied to four-dimensional Schwarzschild black holes, anti-de Sitter black holes, and other Schwarzschild-like or charged black holes. Based on the work of Robson, Villari, and Biancalana [12-14], Liu et al. accurately calculated the Hawking temperature of a charged rotating BTZ black hole using this topological method [15]. Xian et al. [16] studied the Hawking temperature of the global monopole spacetime based on the topological (RVB) method. Previous work has thus linked the Hawking temperature to the topological properties of black holes.
The previous literature applied the RVB method to various black holes in general relativity [12-16]. For many special black holes, however, a complicated coordinate system makes the temperature difficult to calculate with the conventional method; here we find that the Hawking temperature of black holes in f(R) gravity can be obtained easily with the RVB method. In this paper, by comparing the RVB method with the conventional one, the Hawking temperatures of four types of black holes under different f(R) gravity models are investigated. We find an integration constant in the temperature calculation after using the RVB method; the constant is either zero or a parameter term, and without it the result would not match the Hawking temperature obtained with the conventional method.
* Electronic address: [email protected]
The arrangement of this paper is as follows. In the second part, the main formulas studied in this paper are introduced, namely new expressions for the Hawking temperature of a two-dimensional black hole system. In the third part, by studying the properties of known topological invariants and the RVB method, the Hawking temperature of Schwarzschild-like black holes under f(R) gravity is studied. In the fourth part, the RVB method is used to calculate the Hawking temperature of RN black hole systems under f(R) gravity. In the fifth part, the formula is used to study the Hawking temperature of BTZ black holes under f(R) gravity. In the sixth part, the RVB method is used to calculate the Hawking temperature of the Kerr-Sen black hole. The seventh part contains the conclusion and discussion. In the f(R) theory, the metric of a general static spherically symmetric black hole is [17-26]
ds^2 = -g(r)\,dt^2 + \frac{dr^2}{n(r)} + r^2 d\Omega^2 ,   (1)

where g(r) and n(r) are general functions of the coordinate r. The horizon r_+ is located at n(r_+) = 0, and for a non-degenerate horizon n'(r_+) ≠ 0. The surface gravity at the Killing horizon is

\kappa_K = \frac{\sqrt{g'(r_+)\,n'(r_+)}}{2} .   (2)

Then the black hole temperature is

T = \frac{\kappa_K}{2\pi} = \frac{\sqrt{g'(r_+)\,n'(r_+)}}{4\pi} .   (3)

Black hole systems are various: for stationary or rotating black holes with simple metrics, the temperature can be deduced easily, but for many special black holes a complicated coordinate system makes the temperature difficult to calculate. The RVB method therefore relates the Hawking temperature of a black hole to the Euler characteristic χ, which is very useful for calculating the Hawking temperature in any coordinate system. Note: since we live between the cosmological and event horizons, the inner-horizon temperature cannot be observed; below we discuss only the observable Hawking temperature.
The Euler characteristic [1-5] can be written as

\chi = \int_{r_0}\Pi - \int_{r_H}\Pi - \int_{r_0}\Pi = -\int_{r_H}\Pi .   (4)

In other words, in the calculation of the Euler characteristic the outer boundary always cancels out, so the integral is related only to the Killing horizons. According to Refs. [12-16,19], the Hawking temperature of a two-dimensional black hole can be obtained from the topological formula

T_H = \frac{\hbar c}{4\pi\chi k_B}\sum_{j\le\chi}\int^{r_{H_j}}\sqrt{|g|}\,R\,dr ,   (5)

where \hbar is the reduced Planck constant, c is the speed of light, k_B is the Boltzmann constant, g is the determinant of the metric, R is the Ricci scalar, and r_{H_j} is the location of the j-th Killing horizon. In this paper we use the natural unit system \hbar = c = k_B = 1. The Euler characteristic χ, which depends on the spatial coordinate r, counts the Killing horizons of the Euclidean geometry, and the sum in Eq. (5) runs over the horizons. Through a transformation, |g| = 1 in this paper.
For many special black holes, a complicated coordinate system makes the temperature difficult to calculate directly. By the Gauss-Bonnet theorem, χ is also equal to the integral of Π over the boundary of V_n. Bounded manifolds require an important correction to the value of χ, which becomes [12-14]

\chi = \int_{\partial V}\Pi - \int_{\partial M}\Pi .   (6)

The submanifold V_n of M_{2n-1} is crucial because its boundaries are defined as the fixed points (zeros) of the unit vector field defined on M_n. Since |g| = 1, the Hawking temperature can be rewritten, using Eqs. (5) and (6), as

T_H = -\frac{1}{2}\left(\frac{1}{4\pi}\int^{r_{c(-)}} R\,dr - \frac{1}{4\pi}\int^{r_+} R\,dr\right) ,   (7)
where r_{c(-)} is the radius of the additional horizon (the cosmological or inner horizon) and r_+ is the radius of the event horizon. In the Euclidean coordinates, the period τ is already fixed on the event horizon, and therefore the cosmological horizon has a conical singularity which has to be removed by introducing a boundary. The Einstein-Hilbert density \sqrt{-g}\,R is a total derivative, and here \sqrt{-g} = 1, so the Ricci scalar of the metric (1) must be a total derivative; the only expression one can write is R \sim -g''. To be precise,

R \sim -g''(r) .   (8)

Therefore

\int dr\,R = -g'(r) \;\Rightarrow\; \int dr\,R\,\Big|_{r\to r_+} \sim -g'(r_+) + C .   (9)

This leads to the relation with the Hawking temperature. The more general case works as well:

ds^2 = -g(r)\,dt^2 + \frac{dr^2}{n(r)} \;\Rightarrow\; \sqrt{-g}\,R \sim -\left(\sqrt{\frac{n}{g}}\,g'\right)' .   (10)
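As a concrete check of this recipe (our illustration, not part of the original paper), the following minimal sympy sketch verifies that, with the sign convention of Eq. (16) below (R taken as d²g/dr²), the antiderivative of R evaluated at the horizon reproduces the standard temperature T = g'(r_+)/(4π) for the Schwarzschild lapse g = 1 - 2M/r:

```python
# Minimal sympy sketch (ours): for the 2D (tau, r) sector with R = g''(r),
# (1/4pi) * antiderivative(R) at r_+ reproduces T = g'(r_+)/(4 pi).
import sympy as sp

r, M = sp.symbols('r M', positive=True)
g = 1 - 2*M/r                    # Schwarzschild lapse; horizon at r_+ = 2M
R = sp.diff(g, r, 2)             # Ricci scalar of the (tau, r) sector
F = sp.integrate(R, r)           # antiderivative of R, up to a constant C
T_rvb = (F / (4*sp.pi)).subs(r, 2*M)
T_std = (sp.diff(g, r) / (4*sp.pi)).subs(r, 2*M)
assert sp.simplify(T_rvb - T_std) == 0
print(T_rvb)                     # 1/(8*pi*M): the usual Schwarzschild result
```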
HAWKING TEMPERATURE OF SCHWARZSCHILD-LIKE BLACK HOLES UNDER F(R) GRAVITY OBTAINED BY RVB METHOD
The f(R) static black hole solution and its thermodynamics (when the constant Ricci curvature is not equal to 0) are briefly reviewed. The general form of the action is

I = \frac{1}{2}\int d^4x\,\sqrt{-g}\,f(R) + S_{\rm mat} .   (11)
3.1. In the case of constant Ricci curvature (initial conditions)
3.1.1. Schwarzschild-de Sitter-f(R) black holes
A spherically symmetric solution with constant curvature R_0 is first considered as a simple but important example. By comparison with [27-36], the Schwarzschild solution (R_0 = 0) or the Schwarzschild-de Sitter solution is

ds^2 = -g(r)\,dt^2 + \frac{dr^2}{g(r)} + r^2 d\Omega^2 ,   (12)

where

g(r) = 1 - \frac{2M}{r} - \frac{R_0 r^2}{12} .   (13)

It is not difficult to obtain this Schwarzschild-type spherically symmetric solution with constant curvature R_0 in the f(R) model. The surface gravities on the event horizon and the cosmological horizon are

\kappa_h = \frac{R_0}{48}\,r_+^{-1}(r_c - r_+)(r_+ - r_-) , \qquad \kappa_c = \frac{R_0}{48}\,r_c^{-1}(r_c - r_+)(r_c - r_-) ,   (14)

where R_0 is the constant curvature, r_c is the radius of the cosmological horizon, r_+ is the radius of the event horizon, and r_- is the radius of the inner horizon. It can be seen that the Schwarzschild-de Sitter-f(R) black hole has two horizons, both of which emit Hawking radiation. In the Euclidean coordinate system, the two-dimensional line element is [10,17]
ds^2 = g(r)\,d\tau^2 + \frac{dr^2}{g(r)} .   (15)

Thus the Ricci scalar is

R = \frac{d^2}{dr^2}g(r) = -\frac{4M}{r^3} - \frac{R_0}{6} .   (16)

Since |g| = 1, the Hawking temperature is obtained from Eqs. (5) and (6):

T_H = -\frac{1}{2}\left(\frac{1}{4\pi}\int^{r_c} R\,dr - \frac{1}{4\pi}\int^{r_+} R\,dr\right) .   (17)

There are two Killing horizons, so the Euler characteristic is 2, and we get

T_H = \kappa_h/(2\pi) + \kappa_c/(2\pi) + C .   (18)

Here r_c is the radius of the cosmological horizon, r_+ is the radius of the event horizon (both are Killing horizon radii), and C is the integration constant. In the Euclidean coordinates, the period τ is already fixed on the event horizon, and therefore the cosmological horizon has a conical singularity which one has to remove by introducing a boundary. Comparing Eq. (18) with the Hawking temperature under the conventional method,

T_H = \kappa_h/(2\pi) + \kappa_c/(2\pi) ,   (19)

we see that C = 0.
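As an illustrative numerical cross-check (ours; the values of M and R_0 below are arbitrary), the two horizons of Eq. (13) can be located as the positive roots of g(r) = 0 and the surface gravity κ = |g'(r_h)|/2 evaluated at each Killing horizon directly:

```python
# Numerical cross-check (ours) for the Schwarzschild-de Sitter lapse (13):
# find the horizons as positive roots of g(r) = 0 and evaluate the surface
# gravity kappa = |g'(r_h)|/2 at each of them. M, R0 are illustrative.
import numpy as np

M, R0 = 1.0, 0.02
# g(r) = 1 - 2M/r - R0 r^2/12 = 0  <=>  -(R0/12) r^3 + r - 2M = 0
roots = np.roots([-R0/12.0, 0.0, 1.0, -2.0*M])
r_event, r_cosmo = sorted(x.real for x in roots
                          if abs(x.imag) < 1e-10 and x.real > 0)

def dg(r):                              # g'(r)
    return 2.0*M/r**2 - R0*r/6.0

for name, rh in [("event", r_event), ("cosmological", r_cosmo)]:
    print(name, rh, abs(dg(rh))/2.0)    # Killing horizon and its kappa
```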
3.1.2. Black hole in the form of f(R) theory: f(R) = R - qR^{β+1}(αβ+α+ε)/(β+1) + qεR^{β+1} ln(a_0^β R^β c)

One of the forms of f(R) gravity is [17-26]

f(R) = R - \frac{qR^{\beta+1}(\alpha\beta+\alpha+\epsilon)}{\beta+1} + q\epsilon R^{\beta+1}\ln\!\left(a_0^{\beta}R^{\beta}c\right) ,   (20)

where 0 \le \epsilon \le \frac{e}{4}\left(1+\frac{4}{e\alpha}\right), q = 4a_0^{\beta}/[c(\beta+1)], \alpha \ge 0, \beta \ge 0, a_0 = l_p^2, and a and c are constants. Since R ≠ 0 is required, this f(R) theory has no Schwarzschild solution. Its metric form is

ds^2 = -g(r)\,dt^2 + h(r)\,dr^2 + r^2 d\theta^2 + r^2\sin^2\theta\,d\varphi^2 ,   (21)

with

g(r) = h(r)^{-1} = 1 - \frac{2m}{r} + \beta_1 r ,   (22)

where m is related to the mass of the black hole and β_1 is a model parameter.

Since the definition of the Killing horizon does not involve the angular degrees of freedom, we reduce them. In the Euclidean coordinate system, the two-dimensional metric is [10,17], and the Euler characteristic is 1:

ds^2 = g(r)\,d\tau^2 + \frac{dr^2}{g(r)} .   (23)

Thus the Ricci scalar is

R = \frac{d^2}{dr^2}g(r) = -\frac{4m}{r^3} .   (24)

Since |g| = 1, the Hawking temperature reads

T_H = \frac{2m}{r_+^2}\Big/(4\pi) + C ,   (25)

where C is the integration constant and r_+ is the radius of the event horizon, which is also the only Killing horizon radius:

r_+ = \frac{\sqrt{8\beta_1 m + 1} - 1}{2\beta_1} .   (26)

The Hawking temperature under the conventional method is

T_H = \left(\frac{2m}{r_+^2} + \beta_1\right)\Big/(4\pi) ,   (27)

so C = β_1/(4π).
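The origin of the constant C = β_1/(4π) can be traced symbolically: the linear term β_1 r in Eq. (22) is annihilated by the two derivatives in R = g'' and is therefore invisible to the RVB antiderivative. A small sympy sketch (our illustration, using the paper's notation):

```python
# Sympy sketch (ours): R = g'' drops the linear beta1*r term of g, so the
# RVB antiderivative misses exactly the constant beta1 present in g'.
import sympy as sp

r, m, b1 = sp.symbols('r m beta1', positive=True)
g = 1 - 2*m/r + b1*r                     # Eq. (22)
R = sp.diff(g, r, 2)                     # = -4*m/r**3, Eq. (24)
T_rvb = sp.integrate(R, r) / (4*sp.pi)   # Eq. (25) without the constant
T_std = sp.diff(g, r) / (4*sp.pi)        # conventional g'(r)/(4 pi), Eq. (27)
print(sp.simplify(T_std - T_rvb))        # beta1/(4*pi): the constant C
```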
3.2. In the case of non-constant Ricci curvature (initial conditions)

3.2.1. Black hole in the form of f(R) theory: f(R) = R + 2α√R

Its metric is [30]

ds^2 = -g(r)\,dt^2 + g(r)^{-1}dr^2 + r^2 d\theta^2 + r^2\sin^2\theta\,d\varphi^2 ,   (28)

where

g(r) = \frac{1}{2} + \frac{1}{3\alpha r} .   (29)
The Hawking temperature is obtained with the RVB method. The Euler characteristic is 1, and the Killing horizon is the event horizon. In the Euclidean coordinate system, the two-dimensional line element is [10,17]

ds^2 = g(r)\,d\tau^2 + \frac{dr^2}{g(r)} .   (30)

Thus the Ricci scalar is

R = \frac{2}{3\alpha r^3} .   (31)

The Hawking temperature reads

T_H = -\frac{1}{12\pi\alpha r_+^2} + C ,   (32)

where C is the integration constant. At the event horizon, g(r_+) = 0, giving

r_+ = -\frac{2}{3\alpha} .   (33)

The Hawking temperature under the conventional method is

T_H = -\frac{1}{12\pi\alpha r_+^2} .   (34)

By comparison, C = 0.
3.2.2. Black hole in the form of f(R) theory: f(R) = R + 2α√(R - 4Λ) - 2Λ

The second f(R) model considered with non-constant curvature is given by [17-28]

f(R) = R + 2\alpha\sqrt{R - 4\Lambda} - 2\Lambda ,   (35)

where α < 0 is an integration constant and Λ can be regarded as the cosmological constant. The solution of this model is

ds^2 = -g(r)\,dt^2 + g(r)^{-1}dr^2 + r^2 d\theta^2 + r^2\sin^2\theta\,d\varphi^2 .   (36)

In the Euclidean coordinate system, the two-dimensional line element is [10,17]

ds^2 = g(r)\,d\tau^2 + \frac{dr^2}{g(r)} ,   (37)

where

g(r) = \frac{1}{2} + \frac{1}{3\alpha r} - \frac{\Lambda}{3}r^2 .   (38)

There is only one positive root of g(r) = 0. The Ricci scalar is

R = \frac{2}{3\alpha r^3} - \frac{2\Lambda}{3} .   (39)

Using the RVB method, we see that

T_H = -\frac{1}{12\pi\alpha r_+^2} - \frac{\Lambda r_+}{6\pi} + C ,   (40)

where C is the integration constant. The Hawking temperature under the conventional method is

T_H = -\frac{1}{12\pi\alpha r_+^2} - \frac{\Lambda r_+}{6\pi} .   (41)

By comparison, C = 0. In the other situation, when the cosmological constant is positive, the Euler characteristic is 2, the metric is isomorphic to the example in 3.1.1, and the conclusion agrees with 3.1.1.
3.2.3. Black holes in the form of f(R) theory: f(R) = R - μ⁴/R and f(R) = R - λ exp(-ξR)

The solution of these models is [30-36]

ds^2 = -g(r)\,dt^2 + g(r)^{-1}dr^2 + r^2 d\theta^2 + r^2\sin^2\theta\,d\varphi^2 ,   (42)

and, when the cosmological constant is negative (d = 4),

g(r) = k_c - \frac{2\Lambda}{(d-1)(d-2)}r^2 - \frac{M}{r^{d-3}} ,   (43)

where one should set \Lambda = \pm\frac{\mu^2}{2d}\sqrt{d^2-4} or \lambda = \frac{2d\Lambda e^{\xi R}}{d+2\xi R}. Here k_c is a constant that can take the values 1, -1, 0; Λ is taken on the negative branch, and d is the dimension. In the Euclidean coordinate system, the two-dimensional line element is [10,17]

ds^2 = g(r)\,d\tau^2 + \frac{dr^2}{g(r)} .   (44)

Thus the Ricci scalar is

R = \frac{d^2}{dr^2}g(r) = -(d-3)(d-2)\frac{M}{r^{d-1}} - \frac{4\Lambda}{(d-1)(d-2)} ,   (45)

and the RVB method gives

T_H = -\frac{\Lambda r_+}{(d-1)(d-2)\pi} + \frac{(d-3)M}{4\pi r_+^{d-2}} + C ,   (46)

where C is the integration constant. Setting g(r) = 0 gives

r_+ = -\frac{2^{1/3}(2k-3dk+d^2k)}{\left(216M\Lambda^2 - 324dM\Lambda^2 + 108d^2M\Lambda^2 + \sqrt{-864(2k-3dk+d^2k)^3\Lambda^3 + (216M\Lambda^2 - 324dM\Lambda^2 + 108d^2M\Lambda^2)^2}\right)^{1/3}} - \frac{\left(216M\Lambda^2 - 324dM\Lambda^2 + 108d^2M\Lambda^2 + \sqrt{-864(2k-3dk+d^2k)^3\Lambda^3 + (216M\Lambda^2 - 324dM\Lambda^2 + 108d^2M\Lambda^2)^2}\right)^{1/3}}{6\cdot 2^{1/3}\,\Lambda} .   (47)

The Hawking temperature under the conventional method is

T_H = -\frac{\Lambda r_+}{(d-1)(d-2)\pi} + \frac{(d-3)M}{4\pi r_+^{d-2}} .   (48)

By comparison, C = 0. In the other situation, when the cosmological constant is positive, the Euler characteristic is 2, the metric is isomorphic to the example in 3.1.1, and the conclusion agrees with 3.1.1.
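In practice the cumbersome closed form of Eq. (47) can be bypassed: for given parameters, the event horizon is simply the largest positive root of a polynomial. A hedged numerical sketch for d = 4 with an illustrative negative Λ (all parameter values below are ours):

```python
# Numerical alternative (ours) to the closed-form root of Eq. (47), d = 4:
# g(r) = k - 2*Lam*r^2/((d-1)(d-2)) - M/r^(d-3) = 0 becomes a cubic in r.
import numpy as np

d, k, Lam, M = 4, 1.0, -0.3, 1.0           # illustrative AdS-like parameters
roots = np.roots([-Lam/3.0, 0.0, k, -M])   # -Lam/3 r^3 + k r - M = 0
r_plus = max(x.real for x in roots if abs(x.imag) < 1e-10 and x.real > 0)

# Hawking temperature from Eq. (48)
T = -Lam*r_plus/((d-1)*(d-2)*np.pi) + (d-3)*M/(4*np.pi*r_plus**(d-2))
print(r_plus, T)
```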
3.2.4. Black hole in the form of f(R) theory: df(R)/dR = 1 + αr

The line element is [36]

ds^2 = -g(r)\,dt^2 + g(r)^{-1}dr^2 + r^2 d\theta^2 + r^2\sin^2\theta\,d\varphi^2 ,   (49)

where

g(r) = C_2 r^2 + \frac{1}{2} + \frac{1}{3\alpha r} + \frac{C_1}{r}\left(3\alpha r - 2 - 6\alpha^2 r^2 + 6\alpha^3 r^3\ln\!\left(1+\frac{1}{\alpha r}\right)\right) ,   (50)

and C_1 and C_2 are constants. Using the RVB method, the Killing horizon is the event horizon. In the Euclidean coordinate system, the two-dimensional metric is [10,17]

ds^2 = g(r)\,d\tau^2 + \frac{dr^2}{g(r)} .   (51)

So the Ricci scalar is

R = 2C_2 + \frac{2}{3\alpha r^3} - \frac{4C_1}{r^3} - \frac{12\alpha^2 C_1}{\left(\frac{1}{\alpha r}+1\right)r} - \frac{6\alpha C_1}{\left(\frac{1}{\alpha r}+1\right)^2 r^2} + 12\alpha^3 C_1\ln\!\left(\frac{1}{\alpha r}+1\right) .   (52)

The Hawking temperature is

T_H = \left(2C_2 r_+ - \frac{1}{3\alpha r_+^2} + \frac{2C_1}{r_+^2} + C_1\left(12\alpha^3 r_+\ln\!\left(\tfrac{1}{\alpha r_+}+1\right) - \frac{6\alpha^2}{\tfrac{1}{\alpha r_+}+1}\right)\right)\Big/(4\pi) + C ,   (53)

where C is the integration constant. The Hawking temperature under the conventional method carries, in addition, the constant term of g' that two derivatives annihilate:

T_H = \left(2C_2 r_+ - \frac{1}{3\alpha r_+^2} + \frac{2C_1}{r_+^2} + C_1\left(12\alpha^3 r_+\ln\!\left(\tfrac{1}{\alpha r_+}+1\right) - \frac{6\alpha^2}{\tfrac{1}{\alpha r_+}+1} - 6\alpha^2\right)\right)\Big/(4\pi) .   (54)

By comparison, C = -6α²C_1/(4π). In the other situation, when g(r) = 0 has two more positive roots, the metric is isomorphic to the example in 3.1.1, and the conclusion agrees with 3.1.1.
3.2.5. Black hole in the form of f(R) theory: f(R) = R + Λ + (R+Λ)/(R/R_0 + 2/α) ln((R+Λ)/R_c)

The line element is [37]

ds^2 = -g(r)\,dt^2 + g(r)^{-1}dr^2 + r^2 d\theta^2 + r^2\sin^2\theta\,d\varphi^2 ,   (55)

where

g(r) = 1 - \frac{2M}{r} + \beta r - \frac{\Lambda r^2}{3} ,   (56)

with β > 0. We use the RVB method to find the Hawking temperature. The Euler characteristic is 1, and the event horizon is the only Killing horizon. In the Euclidean coordinate system, the two-dimensional metric is [10,17]

ds^2 = g(r)\,d\tau^2 + \frac{dr^2}{g(r)} .   (57)

So the Ricci scalar is

R = -\frac{4M}{r^3} - \frac{2\Lambda}{3} .   (58)

When g(r) = 0 has only one positive root, the Euler characteristic is 1 and

T_H = \left(\frac{2M}{r_+^2} - \frac{2\Lambda r_+}{3}\right)\Big/(4\pi) + C ,   (59)

where C is the integration constant. Setting g(r) = 0 gives

r_+ = \frac{\beta}{\Lambda} + \frac{2^{1/3}(-9\beta^2 - 9\Lambda)}{3\Lambda\left(-54\beta^3 - 81\beta\Lambda + 162M\Lambda^2 + \sqrt{4(-9\beta^2-9\Lambda)^3 + (-54\beta^3 - 81\beta\Lambda + 162M\Lambda^2)^2}\right)^{1/3}} - \frac{\left(-54\beta^3 - 81\beta\Lambda + 162M\Lambda^2 + \sqrt{4(-9\beta^2-9\Lambda)^3 + (-54\beta^3 - 81\beta\Lambda + 162M\Lambda^2)^2}\right)^{1/3}}{3\cdot 2^{1/3}\,\Lambda} .   (60)

The Hawking temperature under the conventional method is

T_H = \left(\frac{2M}{r_+^2} - \frac{2\Lambda r_+}{3} + \beta\right)\Big/(4\pi) .   (61)

By comparison, C = β/(4π). When the cosmological constant is positive, the metric is isomorphic to the example in 3.1.1, the conclusion agrees with 3.1.1, and the Euler characteristic is 2.
HAWKING TEMPERATURE OF RN BLACK HOLES UNDER F(R) GRAVITY BY RVB METHOD
In this section, we briefly review the main features of four-dimensional charged black holes with constant Ricci scalar curvature [13,31,37-39] in the f(R) gravitational background. The action is given by

S = \int_M d^4x\,\sqrt{-g}\,\left[f(R) - F_{\mu\nu}F^{\mu\nu}\right] .   (62)

4.1.1. The RN black hole solution with constant Ricci curvature

By comparison with the Reissner-Nordstrom black hole in de Sitter space-time, we consider the spherically symmetric solution of f(R) gravity with constant curvature R_0 (or initial Ricci curvature), such as the RN black hole solution (R_0 = 0), with g(r) = 1 - 2M/r - R_0r²/12 + Q²/r². Its metric is [33]

ds^2 = -\left(1 - \frac{2M}{r} - \frac{R_0 r^2}{12} + \frac{Q^2}{r^2}\right)dt^2 + \left(1 - \frac{2M}{r} - \frac{R_0 r^2}{12} + \frac{Q^2}{r^2}\right)^{-1}dr^2 + r^2 d\theta^2 + r^2\sin^2\theta\,d\varphi^2 ,   (63)
where R_0 (> 0) plays the role of the cosmological constant. Setting g(r) = 0,

1 - \frac{2M}{r} - \frac{R_0 r^2}{12} + \frac{Q^2}{r^2} = 0 ,   (64)

four roots can be obtained: r_1 is a negative root with no physical meaning; r_i is the smallest positive root, corresponding to the inner horizon of the black hole; r_e is the next positive root, corresponding to the outer (event) horizon of the black hole; and r_c is the largest positive root. The three surface gravities are

\kappa_1 = \frac{R_0}{48}\,r_i^{-2}(r_i - r_1)(r_e - r_i)(r_c - r_i) , \quad \kappa_2 = \frac{R_0}{48}\,r_e^{-2}(r_e - r_1)(r_e - r_i)(r_c - r_e) , \quad \kappa_3 = \frac{R_0}{48}\,r_c^{-2}(r_c - r_1)(r_c - r_i)(r_c - r_e) .   (65)

We use the RVB method to find the Hawking temperature. The Euler characteristic is 2. In the Euclidean coordinate system, the two-dimensional metric is [10,17]

ds^2 = g(r)\,d\tau^2 + \frac{dr^2}{g(r)} ,   (66)

so the Ricci scalar is

R = \frac{d^2}{dr^2}g(r) = -\frac{4M}{r^3} - \frac{R_0}{6} + \frac{6Q^2}{r^4} .   (67)

Using Eq. (7), we get

T_H = -\frac{1}{2}\left(\frac{1}{4\pi}\int^{r_c} R\,dr - \frac{1}{4\pi}\int^{r_e} R\,dr\right) ,   (68)

and

T_H = \kappa_2/(2\pi) + \kappa_3/(2\pi) + C .   (69)

Here r_c is the radius of the cosmological horizon and r_e is the radius of the event horizon, both Killing horizon radii, and C is an integration constant. Since we live between the cosmological and event horizons, the inner-horizon temperature cannot be observed; below we discuss only the observable Hawking temperature. In the Euclidean signature, the period τ is already fixed on the event horizon, and therefore the cosmological horizon has a conical singularity which one has to remove by introducing a boundary. The Hawking temperature under the conventional method is

T_H = \kappa_2/(2\pi) + \kappa_3/(2\pi) ,   (70)

so C = 0 by comparison.
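As a numerical illustration (ours; the parameter values are chosen only so that four real roots exist), the horizon structure r_1 < 0 < r_i < r_e < r_c of Eq. (64) and the observable temperature of Eq. (70) can be evaluated directly:

```python
# Numerical illustration (ours) of the four-root structure of Eq. (64):
# multiplying g(r) = 0 by r^2 gives a quartic with roots r_1 < 0 < r_i < r_e < r_c.
import numpy as np

M, Q, R0 = 1.0, 0.5, 0.02               # illustrative parameters
roots = np.roots([-R0/12.0, 0.0, 1.0, -2.0*M, Q**2])
r_1, r_i, r_e, r_c = sorted(x.real for x in roots if abs(x.imag) < 1e-10)

def dg(r):                              # g'(r)
    return 2.0*M/r**2 - R0*r/6.0 - 2.0*Q**2/r**3

kappa_2, kappa_3 = abs(dg(r_e))/2.0, abs(dg(r_c))/2.0
print(kappa_2/(2*np.pi) + kappa_3/(2*np.pi))   # Eq. (70), with C = 0
```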
4.1.2. Black hole in the form of f(R) theory: f(R) = R - αR^n

The example we focus on is [16,30]

f(R) = R - \alpha R^n .   (71)

Its metric form is

ds^2 = -g(r)\,dt^2 + \frac{dr^2}{g(r)} + r^2 d\Omega^2 ,   (72)

where

g(r) = 1 - \frac{2m}{r} + \frac{q^2}{br^2} .   (73)

Here b = f'(R_0), and the two parameters m and q are proportional to the black hole mass and charge, respectively [14]:

M = mb , \qquad Q = \frac{q}{\sqrt{b}} .   (74)

In the Euclidean coordinate system, the two-dimensional line element is [10,17]

ds^2 = g(r)\,d\tau^2 + \frac{dr^2}{g(r)} ,   (75)

so the Ricci scalar is

R = \frac{d^2}{dr^2}g(r) = -\frac{4m}{r^3} + \frac{6q^2}{br^4} .   (76)

We get

T_H = \left(\frac{m}{r_+^2} - \frac{q^2}{br_+^3} + \frac{m}{r_-^2} - \frac{q^2}{br_-^3}\right)\Big/(4\pi) + C ,   (77)

where r_- is the radius of the inner horizon, r_+ is the radius of the event horizon (both Killing horizon radii), and C is the integration constant, with

r_+ = m + \sqrt{m^2 - Q^2} , \qquad r_- = m - \sqrt{m^2 - Q^2} .   (78)

The Hawking temperature under the conventional method is

T_H = \left(\frac{m}{r_+^2} - \frac{q^2}{br_+^3} + \frac{m}{r_-^2} - \frac{q^2}{br_-^3}\right)\Big/(4\pi) .   (79)

By comparison, C = 0.
4.1.3. Black hole in the form of f(R) theory: f(R) = R - λ exp(-ξR) + κR²

We define \lambda = \frac{Re^{\xi R}}{2+\xi R} and \kappa = -\frac{1+\xi R}{R(2+\xi R)}, where ξ is a free parameter; to simplify, in the following d = 4. Its metric form is [35]

ds^2 = -g(r)\,dt^2 + \frac{dr^2}{g(r)} + r^2 d\Omega^2 ,   (80)

where

g(r) = k - \frac{2\Lambda}{(d-1)(d-2)}r^2 - \frac{M}{r^{d-3}} + \frac{Q^2}{r^{d-2}} .   (81)

Here k is a constant that can take the values 1, -1, 0. In the Euclidean coordinate system, the two-dimensional metric is [10,17]

ds^2 = g(r)\,d\tau^2 + \frac{dr^2}{g(r)} ,   (82)

so the Ricci scalar is

R = \frac{d^2}{dr^2}g(r) = -(d-3)(d-2)\frac{M}{r^{d-1}} - \frac{4\Lambda}{(d-1)(d-2)} + (d-1)(d-2)\frac{Q^2}{r^d} .   (83)

Some parameters, such as λ and κ, should be fixed. From the metric form of this static spherically symmetric black hole it can easily be seen that the result is consistent with the Hawking temperature computed with the conventional method (the integration constant obtained is 0). When the cosmological constant is negative, the Euler characteristic is 1, and we get

T_H = -\frac{\Lambda r_+}{(d-1)(d-2)\pi} + \frac{(d-3)M}{4\pi r_+^{d-2}} - \frac{(d-2)Q^2}{4\pi r_+^{d-1}} + C ,   (84)

where C is the integration constant and r_+ is the radius of the event horizon. The Hawking temperature under the conventional method is

T_H = -\frac{\Lambda r_+}{(d-1)(d-2)\pi} + \frac{(d-3)M}{4\pi r_+^{d-2}} - \frac{(d-2)Q^2}{4\pi r_+^{d-1}} ,   (85)

so C = 0 by comparison.
When the cosmological constant is positive, the structure is consistent with 4.1.1, and the conclusion is the same.

4.2. The RN black hole solution with non-constant Ricci curvature

4.2.1. Black hole in the form of f(R) theory: f(R) = 2a√(R - α)

One of the forms of the f(R) model is [30]

f(R) = 2a\sqrt{R - \alpha} ,   (86)

where α is a parameter of the model that is related to an effective cosmological constant, and α > 0 is a parameter in units of [distance]^{-1}. The model has a static spherically symmetric black hole solution of the form

ds^2 = -g(r)\,dt^2 + \frac{dr^2}{g(r)} + r^2 d\Omega^2 ,   (87)

where

g(r) = \frac{1}{2}\left(1 - \frac{\alpha r^2}{6} + \frac{2Q}{r^2}\right) ,   (88)

and Q is the integration constant. The event horizon of the black hole is located at: (a) when α > 0 and Q > 0, r_+ = \sqrt{(3+\sqrt{9+12\alpha Q})/\alpha}; (b) when α > 0, Q < 0 and αQ > -3/4, r_+ = \sqrt{(3-\sqrt{9+12\alpha Q})/\alpha}; (c) when α < 0 and Q < 0, r_+ = \sqrt{(3-\sqrt{9+12\alpha Q})/\alpha}; (d) when α > 0 and Q = 0, r_+ = \sqrt{6/\alpha}.

When α > 0 and Q > 0, or α < 0 and Q < 0, or α > 0 and Q = 0, g(r) = 0 has only one positive root with physical meaning. The Euler characteristic is 1, and |g| = 1. In the Euclidean coordinate system, the two-dimensional metric is [10,17]

ds^2 = g(r)\,d\tau^2 + \frac{dr^2}{g(r)} .   (89)

The Ricci scalar is

R = \frac{d^2}{dr^2}g(r) = \frac{6Q}{r^4} - \frac{\alpha}{6} .   (90)

Using the RVB method, we get

T_H = \left(-\frac{2Q}{r_+^3} - \frac{r_+\alpha}{6}\right)\Big/(4\pi) + C ,   (91)

where C is the integration constant. The Hawking temperature under the conventional method is

T_H = \left(-\frac{2Q}{r_+^3} - \frac{r_+\alpha}{6}\right)\Big/(4\pi) ,   (92)

so C = 0 by comparison. For other values of α and Q, the equation g(r) = 0 has two effective positive roots, one the event horizon and the other the cosmological horizon, both Killing horizons; the structure is consistent with 4.1.1. When α > 0 and Q < 0, the Euler characteristic is 2, and the RVB method gives

T_H = -\frac{1}{2}\left(\frac{1}{4\pi}\int^{r_c} R\,dr - \frac{1}{4\pi}\int^{r_e} R\,dr\right) ,   (93)

T_H = \kappa_2/(2\pi) + \kappa_3/(2\pi) + C .   (94)

Here r_c is the radius of the cosmological horizon, r_e the radius of the event horizon, both Killing horizon radii, and C an integration constant. In the Euclidean signature, the period τ is already fixed on the event horizon, so the cosmological horizon has a conical singularity which must be removed by introducing a boundary. The Hawking temperature under the conventional method is

T_H = \kappa_2/(2\pi) + \kappa_3/(2\pi) ,   (95)

so C = 0 by comparison. This conclusion is consistent with 4.1.1, where

\kappa_2 = \frac{R_0}{48}\,r_e^{-2}(r_e - r_1)(r_e - r_i)(r_c - r_e) , \qquad \kappa_3 = \frac{R_0}{48}\,r_c^{-2}(r_c - r_1)(r_c - r_i)(r_c - r_e) ,   (96)

and R_0 is the initial Ricci curvature.
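The case analysis (a)-(d) above follows from r_+^2 = (3 ± √(9 + 12αQ))/α. A small helper (our sketch, not from the paper) enumerates the positive roots of g(r) = 0 for each sign pattern of α and Q:

```python
# Helper sketch (ours): positive roots of g(r) = (1/2)(1 - alpha r^2/6 + 2Q/r^2),
# i.e. r^2 = (3 +/- sqrt(9 + 12*alpha*Q))/alpha, for the cases (a)-(d).
import math

def horizons(alpha, Q):
    """Sorted positive roots; with alpha > 0, Q < 0, alpha*Q > -3/4 the
    smaller root is the event horizon and the larger the cosmological one."""
    disc = 9.0 + 12.0*alpha*Q
    if disc < 0:
        return []                        # no real horizon
    cands = [(3.0 + s*math.sqrt(disc))/alpha for s in (+1.0, -1.0)]
    return sorted(math.sqrt(c) for c in cands if c > 0)

for alpha, Q in [(0.5, 1.0), (0.5, -0.4), (-0.5, -1.0), (0.5, 0.0)]:
    print(alpha, Q, horizons(alpha, Q))  # cases (a), (b), (c), (d)
```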
4.2.2. Black hole in the form of f(R) theory: f(R) = R - λ exp(-ξR) + κR^n + η ln(R)

Its metric form is [35]

ds^2 = -g(r)\,dt^2 + \frac{dr^2}{g(r)} + r^2 d\Omega^2 ,   (97)

where

g(r) = k_1 - \frac{2\Lambda}{(d-1)(d-2)}r^2 - \frac{M}{r^{d-3}} + \frac{Q^2}{r^{d-2}} .   (98)

We take d = 4 and

\lambda = \frac{R + \kappa R^n - (R + n\kappa R^n)\ln R}{(1+\xi R\ln R)e^{-\xi R}} , \qquad \eta = -\frac{(1+\xi R)R + (n+\xi R)\kappa R^n}{1+\xi R\ln R} ,   (99)

where ξ is a free parameter and k_1 is a constant that can take the values 1, -1, 0. When the cosmological constant is positive, the structure is consistent with 4.1.1, and the conclusion is the same: the equation g(r) = 0 has two effective positive roots, one on the event horizon and the other on the cosmological horizon, both Killing horizons. We use the RVB method to find the Hawking temperature:

T_H = -\frac{1}{2}\left(\frac{1}{4\pi}\int^{r_c} R\,dr - \frac{1}{4\pi}\int^{r_e} R\,dr\right) ,   (100)

T_H = \kappa_2/(2\pi) + \kappa_3/(2\pi) + C .   (101)

In the Euclidean signature, the period τ is already fixed on the event horizon, so the cosmological horizon has a conical singularity which must be removed by introducing a boundary. The Hawking temperature under the conventional method is

T_H = \kappa_2/(2\pi) + \kappa_3/(2\pi) ,   (102)

so C = 0 by comparison. This conclusion is consistent with 4.1.1, where

\kappa_2 = \frac{R_0}{48}\,r_e^{-2}(r_e - r_1)(r_e - r_i)(r_c - r_e) , \qquad \kappa_3 = \frac{R_0}{48}\,r_c^{-2}(r_c - r_1)(r_c - r_i)(r_c - r_e) ,   (103)

and R_0 is the initial Ricci curvature. When the cosmological constant is negative, the two-dimensional metric is [10,17]

ds^2 = g(r)\,d\tau^2 + \frac{dr^2}{g(r)} ,   (104)

so the Ricci scalar is

R = \frac{d^2}{dr^2}g(r) = -(d-3)(d-2)\frac{M}{r^{d-1}} - \frac{4\Lambda}{(d-1)(d-2)} + (d-1)(d-2)\frac{Q^2}{r^d} .   (105)

When g(r) = 0 has only one positive root with physical meaning, the Euler characteristic is 1 and the event horizon is the only Killing horizon. Using the RVB method, we get

T_H = -\frac{\Lambda r_+}{(d-1)(d-2)\pi} + \frac{(d-3)M}{4\pi r_+^{d-2}} - \frac{(d-2)Q^2}{4\pi r_+^{d-1}} + C ,   (106)

where C is the integration constant and r_+ is the radius of the event horizon. The Hawking temperature under the conventional method is

T_H = -\frac{\Lambda r_+}{(d-1)(d-2)\pi} + \frac{(d-3)M}{4\pi r_+^{d-2}} - \frac{(d-2)Q^2}{4\pi r_+^{d-1}} ,   (107)

so C = 0 by comparison.
HAWKING TEMPERATURE OF THE BTZ BLACK HOLE UNDER f(R) GRAVITY OBTAINED BY RVB METHOD
In the following two models, the cosmological constant is negative.

5.1. Black hole in the form of f(R) theory: f(R) = -4η²M ln(-6Λ - R) + ξR + R_0

We consider the following solution [27,28]:

ds^2 = -g(r)\,dt^2 + \frac{dr^2}{g(r)} + r^2 d\Omega^2 ,   (108)

where

g(r) = -\Lambda r^2 - M(2\eta r + \xi) .   (109)

The curvature scalar is

R = -2\Lambda - \frac{4M\eta}{r} .   (110)

In the Euclidean coordinate system, the two-dimensional metric is [10,17]

ds^2 = g(r)\,d\tau^2 + \frac{dr^2}{g(r)} .   (111)

Thus the (two-dimensional) Ricci scalar is

R = -2\Lambda .   (112)

We use the RVB method to find the Hawking temperature:

T_H = -\frac{\Lambda r_+}{2\pi} + C ,   (113)

where C is the integration constant. Setting g(r) = 0, we get

r_+ = -\frac{\sqrt{M(M\eta^2 - \Lambda\xi)} + M\eta}{\Lambda} ,   (114)

where r_+ is the event horizon radius and the unique Killing horizon radius. The Hawking temperature under the conventional method is

T = \frac{-\Lambda r_+ - M\eta}{2\pi} .   (115)

By comparison, C = -Mη/(2π).

5.2. Black hole in the form of f(R) theory: f(R) = -2ηM ln(6Λ + R) + R_0

In the case Φ(r) = 0, the charged (2+1)-dimensional solution under pure f(R) gravity is [27,28]
ds^2 = -g(r)\,dt^2 + \frac{dr^2}{g(r)} + r^2 d\Omega^2 ,   (116)

where

g(r) = -\Lambda r^2 - Mr - \frac{2Q^2}{3\eta r} .   (117)

The two-dimensional line element is [10,17]

ds^2 = g(r)\,d\tau^2 + \frac{dr^2}{g(r)} ;   (118)

at this point |g| = 1, and the Ricci scalar is

R = -\frac{4Q^2}{3\eta r^3} - 2\Lambda .   (119)

Using the RVB method, we get

T_H = -\frac{\Lambda r_+}{2\pi} + \frac{Q^2}{6\pi\eta r_+^2} + C ,   (120)

where C is the integration constant. Setting g(r) = 0, we get

r_+ = \frac{1}{3}\left(-\frac{M}{\Lambda} - \frac{M^2\eta}{\Lambda\left(M^3\eta^3 + 9Q^2\eta^2\Lambda^2 + 3\sqrt{2M^3Q^2\eta^5\Lambda^2 + 9Q^4\eta^4\Lambda^4}\right)^{1/3}} - \frac{\left(M^3\eta^3 + 9Q^2\eta^2\Lambda^2 + 3\sqrt{2M^3Q^2\eta^5\Lambda^2 + 9Q^4\eta^4\Lambda^4}\right)^{1/3}}{\eta\Lambda}\right) ,   (121)

where r_+ is the event horizon radius and the unique Killing horizon radius. The Hawking temperature under the conventional method is

T = -\frac{\Lambda r_+}{2\pi} - \frac{M}{4\pi} + \frac{Q^2}{6\pi\eta r_+^2} .   (122)

By comparison, C = -M/(4π).
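The value C = -M/(4π) again traces back to a term of g(r) that two derivatives annihilate, here the linear -Mr term in Eq. (117). A sympy sketch of this bookkeeping (ours, using the paper's symbols):

```python
# Sympy sketch (ours): the RVB antiderivative of R = g'' cannot contain the
# linear -M*r term of g, so it differs from g' by the constant -M.
import sympy as sp

r, Lam, M, Q, eta = sp.symbols('r Lambda M Q eta', positive=True)
g = -Lam*r**2 - M*r - 2*Q**2/(3*eta*r)
R = sp.diff(g, r, 2)                     # = -2*Lambda - 4*Q**2/(3*eta*r**3)
T_rvb = sp.integrate(R, r) / (4*sp.pi)   # Eq. (120) without the constant
T_std = sp.diff(g, r) / (4*sp.pi)        # conventional g'(r)/(4 pi), Eq. (122)
print(sp.simplify(T_std - T_rvb))        # -M/(4*pi): the integration constant
```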
HAWKING TEMPERATURE OF SPHERICALLY SYMMETRIC KERR-SEN BLACK HOLES OBTAINED BY THE RVB METHOD
There is a special solution (the Kerr-Sen black hole) in a certain modified gravity, and the space-time line element of the spherically symmetric Kerr-Sen black hole can be expressed as [29]

ds^2 = -g(r)\,dt^2 + \frac{dr^2}{g(r)} + r^2 d\Omega^2 ,   (123)

where

g(r) = \frac{r^2 - 2M'r + a^2}{r_+^2 + a^2} .   (124)

The two-dimensional line element is [10,17]

ds^2 = g(r)\,d\tau^2 + \frac{dr^2}{g(r)} ,   (125)

and the Ricci scalar is

R = \frac{2}{r_+^2 + a^2} .   (126)

The two horizons are

r_\pm = M' \pm \sqrt{M'^2 - a^2} .   (127)

The effective mass is

M' = M - b_1 = M - \frac{Q^2}{2M} .   (128)

Here r_\mp are the radii of the inner and outer horizons of the black hole, and b_1 is the parameter related to the coordinate extension. We find the Hawking temperature in this case [12-14]:

T_H = -\frac{1}{2}\left(\frac{1}{4\pi}\int^{r_-} R\,dr - \frac{1}{4\pi}\int^{r_+} R\,dr\right) ,   (129)

where r_- is the radius of the inner horizon and r_+ is the radius of the event horizon, giving

T_H = \frac{\sqrt{M'^2 - a^2}}{4\pi M'\left(M' + \sqrt{M'^2 - a^2}\right)} + C ,   (130)

where C is the integration constant. For comparison with the conventional method, the Hawking temperature T_H, an important thermodynamic quantity of the black hole, can be calculated at the poles as

T_H = \frac{1}{2\pi}\lim_{r\to r_+}\sqrt{g^{rr}}\,\partial_r\sqrt{-g_{tt}}\,\Big|_{\theta=0} ,   (131)

which gives

T = \frac{\sqrt{M'^2 - a^2}}{4\pi M'\left(M' + \sqrt{M'^2 - a^2}\right)} .   (132)

By comparison, C = 0.
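Since the Ricci scalar in Eq. (126) is constant, the integral in Eq. (129) is elementary, and the agreement between Eqs. (130) and (132) with C = 0 can be verified symbolically. The following sketch (ours; M_eff stands for the effective mass M') does so:

```python
# Sympy sketch (ours): with constant R = 2/(r_+^2 + a^2), Eq. (129) gives
# T_H = R (r_+ - r_-)/(8 pi), which equals the pole formula Eq. (132).
import sympy as sp

Meff, a = sp.symbols('M_eff a', positive=True)
s = sp.sqrt(Meff**2 - a**2)
r_p, r_m = Meff + s, Meff - s             # outer and inner horizons, Eq. (127)
R = 2/(r_p**2 + a**2)                     # constant Ricci scalar, Eq. (126)
T_rvb = -sp.Rational(1, 2)*(R*r_m - R*r_p)/(4*sp.pi)
T_std = s/(4*sp.pi*Meff*(Meff + s))       # conventional result, Eq. (132)
print(sp.simplify(T_rvb - T_std))         # 0, hence C = 0 for Kerr-Sen
```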
CONCLUSION AND DISCUSSION
In this work, we studied the Hawking temperatures of four basic types of black holes under different f(R) gravity models using the Euler characteristic in a topological formulation. It was found that the temperatures calculated with the RVB method and with the conventional method differ by an integration constant. In the calculation of the Hawking temperature, the integration constant can be determined from the Hawking temperature obtained by the standard definition, yielding the exact temperature. Therefore, the topological method is also applicable to the calculation of black hole temperatures in conventional f(R) gravity. The integration constants corresponding to different f(R) models are found to differ: different forms of f(R) correspond to different gravitational theories, and thus their integration constants depend on the gravitational theory.
Acknowledgements: This work is partially supported by the National Natural Science Foundation of China (No. 11873025).
[1] G. W. Gibbons and R. E. Kallosh, Phys. Rev. D 51, 2839 (1995).
[2] G. W. Gibbons and S. W. Hawking, Commun. Math. Phys. 66, 291 (1979).
[3] T. Eguchi, P. B. Gilkey, and A. J. Hanson, Phys. Rep. 66, 213 (1980).
[4] S. Liberati and G. Pollifrone, Phys. Rev. D 56, 6458 (1997).
[5] M. Bañados, C. Teitelboim, and J. Zanelli, Phys. Rev. Lett. 72, 957 (1994).
[6] M. Eune, W. Kim, and S. H. Yi, JHEP 03, 020 (2013).
[7] G. Gibbons, R. Kallosh, and B. Kol, Phys. Rev. Lett. 77, 4992 (1996).
[8] D. Kubiznak and R. B. Mann, JHEP 07, 033 (2012).
[9] A. Bohr and B. R. Mottelson, Nuclear Structure, Vol. 1 (W. A. Benjamin Inc., New York, 1969).
[10] R. K. Bhaduri, Models of the Nucleon (Addison-Wesley, 1988).
[11] S. Das, P. Majumdar, and R. K. Bhaduri, Class. Quant. Grav. 19, 2355-2368 (2002).
[12] C. W. Robson, L. D. M. Villari, and F. Biancalana, On the topological nature of the Hawking temperature of black holes, Phys. Rev. D 99, 044042 (2019).
[13] C. W. Robson, L. D. M. Villari, and F. Biancalana, Global Hawking temperature of Schwarzschild-de Sitter spacetime: a topological approach, arXiv:1902.02547 [gr-qc].
[14] C. W. Robson, L. D. M. Villari, and F. Biancalana, The Hawking temperature of anti-de Sitter black holes: topology and phase transitions, arXiv:1903.04627 [gr-qc].
[15] Y. P. Zhang, S. W. Wei, and Y. X. Liu, Topological approach to derive the global Hawking temperature of (massive) BTZ black hole, Phys. Lett. B 810, 135788 (2020).
[16] J. Xian and J. Zhang, Deriving the Hawking temperature of (massive) global monopole spacetime via a topological formula, Entropy 24, 634 (2022).
[17] F. Caravelli and L. Modesto, Holographic effective actions from black holes, Phys. Lett. B 702, 307-311 (2011).
[18] Z. Amirabi, M. Halilsoy, and S. Habib Mazharimousavi, Generation of spherically symmetric metrics in f(R) gravity, Eur. Phys. J. C 76, 338 (2016).
[19] H. Tan et al., The global monopole spacetime and its topological charge, Chin. Phys. B 27, 030401 (2018).
[20] T. Multamaki and I. Vilja, Spherically symmetric solutions of modified field equations in f(R) theories of gravity, Phys. Rev. D 74, 064022 (2006).
[21] S. M. Carroll, V. Duvvuri, M. Trodden, and M. S. Turner, Is cosmic speed-up due to new gravitational physics?, Phys. Rev. D 70, 043528 (2004).
[22] S. Capozziello, V. F. Cardone, S. Carloni, and A. Troisi, Curvature quintessence matched with observational data, Int. J. Mod. Phys. D 12, 1969-1982 (2003).
[23] B. Li and J. D. Barrow, The cosmology of f(R) gravity in metric variational approach, Phys. Rev. D 75, 084010 (2007).
[24] L. Amendola, R. Gannouji, D. Polarski, and S. Tsujikawa, Conditions for the cosmological viability of f(R) dark energy models, Phys. Rev. D 75, 083504 (2007).
[25] V. Miranda, S. E. Jorás, I. Waga, and M. Quartin, Viable singularity-free f(R) gravity without a cosmological constant, Phys. Rev. Lett. 102, 221101 (2009).
[26] L. Sebastiani and S. Zerbini, Static spherically symmetric solutions in F(R) gravity, Eur. Phys. J. C 71, 1591 (2011).
[27] Y. Younesizadeh et al., What happens for the BTZ black hole solution in dilaton f(R)-gravity?, Int. J. Mod. Phys. D 30, 2150028 (2021).
[28] Y.-P. Hu, F. Pan, and X.-M. Wu, The effects of massive graviton on the equilibrium between the black hole and radiation gas in an isolated box, Phys. Lett. B 772, 553-558 (2017).
[29] A. M. Ghezelbash and H. M. Siahaan, Hidden and generalized conformal symmetry of Kerr-Sen spacetimes, Class. Quantum Grav. 30, 135005 (2013).
[30] S. H. Hendi, B. Eslam Panah, and S. M. Mousavi, Some exact solutions of f(R) gravity with charged (a)dS black hole interpretation, Gen. Relativ. Gravit. 44, 835-853 (2012).
[31] C. W. Robson and M. Ornigotti, Hidden depths in a black hole: surface area information encoded in the (r, t) sector, arXiv:1911.11723 (2019).
[32] W. Liu and Z. Zhao, Entropy of a nonthermal equilibrium Schwarzschild-de Sitter black hole (1995).
[33] W. Liu and Z. Zhao, Entropy of a nonthermal equilibrium Reissner-Nordstrom-de Sitter black hole, J. Beijing Normal Univ. (Nat. Sci.) 36(5), 626-630 (2000).
[34] A. Övgün and İ. Sakallı, Deriving Hawking radiation via Gauss-Bonnet theorem: an alternative way, Annals Phys. 413, 168071 (2020).
[35] G. G. L. Nashed, Rotating charged black hole spacetimes in quadratic f(R) gravitational theories, Int. J. Mod. Phys. D 27, 1850074 (2018).
[36] C. Zhu and R.-J. Yang, Horizon thermodynamics in D-dimensional f(R) black hole, Entropy 22, 1246 (2020).
[37] B. Majeed and M. Jamil, Dynamics and center of mass energy of colliding particles around the black hole in f(R) gravity, Int. J. Mod. Phys. D 26, 1741017 (2017).
[38] C. Singha, Thermodynamics of multi-horizon spacetimes, Gen. Relativ. Gravit. 54, 1-17 (2022).
[39] S. Nojiri and S. D. Odintsov, Unifying inflation with ΛCDM epoch in modified f(R) gravity consistent with Solar System tests, Phys. Lett. B 657, 238-245 (2007).
| [] |
[
"Rethinking Counterfactual Explanations as Local and Regional Counterfactual Policies",
"Rethinking Counterfactual Explanations as Local and Regional Counterfactual Policies"
] | [
"Salim I Amoukou \nLaMME ENSIIE\nLaMME University\nUniversity Paris Saclay Quantmetry Paris\n\n",
"Paris Saclay \nLaMME ENSIIE\nLaMME University\nUniversity Paris Saclay Quantmetry Paris\n\n",
"Stellantis Paris \nLaMME ENSIIE\nLaMME University\nUniversity Paris Saclay Quantmetry Paris\n\n",
"Nicolas J-B Brunel \nLaMME ENSIIE\nLaMME University\nUniversity Paris Saclay Quantmetry Paris\n\n"
] | [
"LaMME ENSIIE\nLaMME University\nUniversity Paris Saclay Quantmetry Paris\n",
"LaMME ENSIIE\nLaMME University\nUniversity Paris Saclay Quantmetry Paris\n",
"LaMME ENSIIE\nLaMME University\nUniversity Paris Saclay Quantmetry Paris\n",
"LaMME ENSIIE\nLaMME University\nUniversity Paris Saclay Quantmetry Paris\n"
] | [] | Counterfactual Explanations (CE) face several unresolved challenges, such as ensuring stability, synthesizing multiple CEs, and providing plausibility and sparsity guarantees. From a more practical point of view, recent studies [Pawelczyk et al., 2022] show that the prescribed counterfactual recourses are often not implemented exactly by individuals and demonstrate that most state-of-the-art CE algorithms are very likely to fail in this noisy environment. To address these issues, we propose a probabilistic framework that gives a sparse local counterfactual rule for each observation, providing rules that give a range of values capable of changing decisions with high probability. These rules serve as a summary of diverse counterfactual explanations and yield robust recourses. We further aggregate these local rules into a regional counterfactual rule, identifying shared recourses for subgroups of the data. Our local and regional rules are derived from the Random Forest algorithm, which offers statistical guarantees and fidelity to the data distribution by selecting recourses in high-density regions. Moreover, our rules are sparse, as we first select the smallest set of variables having a high probability of changing the decision. We have conducted experiments to validate the effectiveness of our counterfactual rules in comparison to standard CE and recent similar attempts. Our methods are available as a Python package. | 10.48550/arxiv.2209.14568 | [
"https://export.arxiv.org/pdf/2209.14568v2.pdf"
] | 252,596,140 | 2209.14568 | 7b66b3b29711a2aba537c7fd703ec3d98495450a |
Rethinking Counterfactual Explanations as Local and Regional Counterfactual Policies
Salim I Amoukou
LaMME ENSIIE
LaMME University
University Paris Saclay Quantmetry Paris
Paris Saclay
LaMME ENSIIE
LaMME University
University Paris Saclay Quantmetry Paris
Stellantis Paris
LaMME ENSIIE
LaMME University
University Paris Saclay Quantmetry Paris
Nicolas J-B Brunel
LaMME ENSIIE
LaMME University
University Paris Saclay Quantmetry Paris
Rethinking Counterfactual Explanations as Local and Regional Counterfactual Policies
Counterfactual Explanations (CE) face several unresolved challenges, such as ensuring stability, synthesizing multiple CEs, and providing plausibility and sparsity guarantees. From a more practical point of view, recent studies [Pawelczyk et al., 2022] show that the prescribed counterfactual recourses are often not implemented exactly by individuals and demonstrate that most state-of-the-art CE algorithms are very likely to fail in this noisy environment. To address these issues, we propose a probabilistic framework that gives a sparse local counterfactual rule for each observation, providing rules that give a range of values capable of changing decisions with high probability. These rules serve as a summary of diverse counterfactual explanations and yield robust recourses. We further aggregate these local rules into a regional counterfactual rule, identifying shared recourses for subgroups of the data. Our local and regional rules are derived from the Random Forest algorithm, which offers statistical guarantees and fidelity to the data distribution by selecting recourses in high-density regions. Moreover, our rules are sparse, as we first select the smallest set of variables having a high probability of changing the decision. We have conducted experiments to validate the effectiveness of our counterfactual rules in comparison to standard CE and recent similar attempts. Our methods are available as a Python package.
Introduction
In recent years, many explanation methods have been developed for explaining machine learning models, with a strong focus on local analysis, i.e., generating explanations for individual predictions; see [Molnar, 2022] for a survey. Among this plethora of methods, Counterfactual Explanations [Wachter et al., 2017] have emerged as one of the most prominent and active techniques. In contrast to popular local attribution methods such as SHAP [Lundberg et al., 2020] and LIME [Ribeiro et al., 2016], which assign importance scores to each feature, Counterfactual Explanations (CE) describe the smallest modification to the feature values that changes the prediction to a desired target. While CE can be intuitive and user-friendly, providing recourse in certain situations (e.g., loan applications), they have practical limitations. Most CE methods depend on gradient-based algorithms or heuristic approaches [Karimi et al., 2020b], which can fail to identify the most natural explanations and lack guarantees. Most algorithms either do not ensure sparse counterfactuals (changes to the smallest number of features) or fail to generate in-distribution samples (refer to [Verma et al., 2020, Chou et al., 2022] for a survey on counterfactual methods). Several studies [Parmentier and Vidal, 2021, Poyiadzi et al., 2019, Looveren and Klaise, 2019] attempt to address the plausibility/sparsity issues by incorporating ad-hoc constraints.

In another direction, numerous papers [Mothilal et al., 2020, Karimi et al., 2020a, Russell, 2019] encourage the generation of diverse counterfactuals in order to find actionable recourse [Ustun et al., 2019]. Actionability is a vital desideratum, as some features may be non-actionable, and generating many counterfactuals increases the chance of getting actionable recourse. However, the diversity of CE compromises the intelligibility of the explanation, and the synthesis of various CE or local explanations, in general, remains an unsolved challenge [Lakkaraju et al., 2022]. Recently, Pawelczyk et al. [2022] highlighted a new problem of CE: noisy responses to prescribed recourses. In real-world scenarios, some individuals may not be able to implement exactly the prescribed recourses, and they show that most CE methods fail in this noisy environment. Consequently, we propose to reverse the usual way of explaining with counterfactuals by computing Counterfactual Rules. We introduce a new line of counterfactuals, constructing interpretable policies for changing a decision with high probability while ensuring the stability of the derived recourse. These policies are sparse and faithful to the data distribution, and their computation comes with statistical guarantees. Our proposal is to find a general policy or rule that permits changing the decision while fixing some features, instead of generating many counterfactual samples. One of the main challenges is identifying the minimal set of features that provides the directions for changing the decision to the desired output with high probability. Additionally, we show that this method can be extended to create a common counterfactual policy for subgroups of the data, which aids model debugging and bias detection.
In another direction, numerous papers [Mothilal et al., 2020, Karimi et al., 2020a, Russell, 2019 encourage the generation of diverse counterfactuals in order to find actionable recourse [Ustun et al., 2019]. Actionability is a vital desideratum, as some features may be non-actionable, and generating many counterfactuals increases the chance of getting actionable recourse. However, the diversity of CE compromises the intelligibility of the explanation, and the synthesis of various CE or local explanations, in general, remains an unsolved challenge [Lakkaraju et al., 2022]. Recently, Pawelczyk et al. [2022] highlights a new problem of CE called: noisy responses to prescribed recourses. In real-world scenarios, some individuals may not be able to implement exactly the prescribed recourses, and they show that most CE methods fail in this noisy environment. Consequently, we propose to reverse the usual way of explaining with counterfactual by computing Counterfactual rules. We introduce a new line of counterfactuals, constructing interpretable policies for changing a decision with a high probability while ensuring the stability of the derived recourse. These policies are sparse, faithful to the data distribution and their computation comes with statistical guarantees. Our proposal is to find a general policy or rule that permits changing the decision while fixing some features instead of generating many counterfactual samples. One of the main challenges is identifying the minimal set of features that provide the directions for changing the decision to the desired output with high probability. Additionally, we show that this method can be extended to create a common counterfactual policy for subgroups of the data, which aids model debugging and bias detection.
Motivation and Related works
Most Counterfactual Explanation methods follow the seminal work of Wachter et al. [2017], where counterfactual samples are generated by cost optimization. This procedure does not directly account for the plausibility of the counterfactual examples; see Table 1 of [Verma et al., 2020] for a classification of CE methods. Indeed, a major shortcoming is that the action suggested for obtaining the counterfactual is not designed to be feasible or representative of the underlying data distribution. Several recent studies have suggested incorporating ad-hoc plausibility constraints into the optimization process. For instance, the Local Outlier Factor [Kanamori et al., 2020], Isolation Forest [Parmentier and Vidal, 2021], and density-weighted metrics [Poyiadzi et al., 2019] have been employed to generate realistic samples. Alternatively, Looveren and Klaise [2019] propose the use of an autoencoder that penalizes out-of-distribution candidates. Instead of relying on ad-hoc constraints, we propose CE that give plausible explanations by design. Our approach leverages the Random Forest (RF) algorithm, which helps identify high-density regions and ensures counterfactual explanations reside within these areas. To ensure sparsity, we begin by identifying, for each observation, the smallest subset $S$ of variables $X_S$ and associated value ranges that have the highest probability of changing the prediction. We compute this probability with a consistent estimator of the conditional distribution $Y \mid X_S$ obtained from a RF. As a consequence, the sparsity of the counterfactuals is not encouraged indirectly by adding a penalty term ($\ell_0$ or $\ell_1$) as in existing works [Mothilal et al., 2020]. Our method draws inspiration from the concept of the Same Decision Probability (SDP) [Chen et al., 2012], which is used to identify the smallest feature subset that guarantees prediction stability with high probability; this minimal subset is called the Sufficient Explanation. In [Amoukou and Brunel, 2021], it has been shown that the SDP and the Sufficient Explanations can be estimated and computed efficiently for identifying important local variables in any classification/regression model using a RF. For counterfactuals, we are interested in the dual set: we want the minimal subset of features that allows, with high probability, changing the decision when the other features remain fixed.
Another limitation of current CE is the multiplicity of the explanations produced. While some papers [Mothilal et al., 2020, Karimi et al., 2020a, Russell, 2019] promote the generation of diverse counterfactual samples to ensure actionable recourse, such diverse explanations should be summarized to be intelligible [Lakkaraju et al., 2022], and the compilation of local explanations is often a very difficult problem. To address this issue, instead of generating counterfactual samples, we construct a rule called a Local Counterfactual Rule (L-CR) from which counterfactual samples can be derived. In contrast to traditional CE that identify the nearest instances with a desired output, we first determine, for each observation (or group of similar observations), the most effective rule that changes the prediction to the intended target. The L-CR can be seen as a summary of the diverse counterfactual samples possible for a given instance. For example, if $x_0$ = {Age=20, Salary=35k, HoursWeek=25h, Sex=M, ...} with Loan=False, fixing the variables Age and Sex and changing Salary and HoursWeek changes the decision. Therefore, instead of giving multiple combinations of Salary and HoursWeek (e.g., 35k and 40h, or 40k and 55h, ...) that result in many samples, the counterfactual rule gives a range of values: $C_0$ = [IF HoursWeek ∈ [35h, 50h], Salary ∈ [40k, 50k], and the remaining features are fixed, THEN Loan=True with high probability]. One can also have several observations with the same predictions and almost the same counterfactual rules. For example, consider a second observation $x_1$ = {Age=25, Salary=45k, HoursWeek=25h, Sex=M, ...} with Loan=False, and $x_0$, $x_1$ included in the following hyperrectangle (or rule) $R$ = [IF Salary ∈ [20k, 45k], Age ∈ [20, 30], THEN Loan=False], which may contain other observations. The Local CR of $x_1$ is $C_1$ = [IF HoursWeek ∈ [40h, 45h], Salary ∈ [48k, 50k], and the remaining features are fixed, THEN Loan=True with high probability]. We observe that $x_0$, $x_1$ have nearly identical counterfactual rules $C_0$, $C_1$; the Regional Counterfactual Rules summarize such information into a single rule that applies to multiple observations simultaneously. The Regional Counterfactual Rule (R-CR) of the rule $R$ could be $C_R$ = [IF HoursWeek ∈ [35h, 45h], Salary ∈ [40k, 50k], and the remaining conditions of $R$ are fixed, THEN Loan=True with high probability]. It shows that for all observations in the hyperrectangle $R$, we can apply the same counterfactual rule to change their predictions. These global rules give a global picture of the model and can reveal patterns that may be used, among other applications, for fairness detection. The main difference between a Local and a Regional CR is that the Local CR explains a single instance by fixing the remaining feature values (not used in the CR), while a Regional CR keeps the remaining variables in given intervals (those of $R$ not used in the Regional CR). Moreover, by giving ranges of values that guarantee a high probability of changing the decision, we partly answer the problem of noisy responses to prescribed recourses [Pawelczyk et al., 2022]: the generated CE remain robust as long as the perturbations stay within the specified ranges.
While the Local Counterfactual Rule is a novel concept, the Regional Counterfactual Rule shares similarities with some recent works. Indeed, Rawal and Lakkaraju [2020] proposed Actionable Recourse Summaries (AReS), a framework that constructs global counterfactual recourses to give a global insight into the model and detect unfair behavior. Despite the similarities with the Regional Counterfactual Rule, there are notable differences. Our methods can handle regression problems and work directly with continuous features. AReS requires discretizing continuous features, leading to a trade-off between speed and performance, as observed by [Ley et al., 2022]: too few bins yield unrealistic recourses, while too many bins result in excessive computation time. AReS also employs a greedy heuristic search to find global recourses, which may result in unstable and sub-optimal recourses. Our approaches overcome these limitations by leveraging the informative partitions obtained from a Random Forest, removing the need for an extensive search space and focusing on high-density regions for plausibility. Additionally, we prioritize changes to the smallest number of features possible, utilizing a consistent estimator of the conditional distribution.
Another global CE framework has been introduced in [Kanamori et al., 2022] to ensure transparency. The Counterfactual Explanation Tree (CET) partitions the input space with a decision tree and assigns a suitable action for changing the decision of each subspace, providing a unique recourse for multiple instances. In comparison, our approach offers greater flexibility: it provides, for each subspace, a range of possible values that guarantees a change with a given probability. We also propose a method to derive classic counterfactual samples from the counterfactual rules. We make no assumptions about the cost of changing a feature or its actionability; if such information is available, it can be incorporated as additional post-processing.
Minimal Counterfactual Rules
Consider a dataset $D_n = \{(X_i, Y_i)\}_{i=1}^{n}$ consisting of i.i.d. observations of $(X, Y) \sim P_X P_{Y|X}$, where $X \in \mathcal{X}$ (typically $\mathcal{X} \subseteq \mathbb{R}^p$) and $Y \in \mathcal{Y}$.
The output $Y$ can be either discrete or continuous. We denote $[p] = \{1, \ldots, p\}$; for a given subset $S \subseteq [p]$, $X_S = (X_i)_{i \in S}$ represents a subgroup of features, and we write $x = (x_S, x_{\bar{S}})$.
For a given observation $(x, y)$, we consider a target set $\mathcal{Y}^\star \subset \mathcal{Y}$ such that $y \notin \mathcal{Y}^\star$. In a classification problem, $\mathcal{Y}^\star = \{y^\star\}$ is a singleton with $y^\star \in \mathcal{Y}$ and $y^\star \neq y$. Unlike conventional approaches, our definition of CE also accommodates regression problems by considering $\mathcal{Y}^\star = [a, b] \subset \mathbb{R}$; the definitions and computations remain the same for both classification and regression. The classic CE problem, defined here only for classification, considers a predictor $f: \mathcal{X} \to \mathcal{Y}$ trained on the dataset $D_n$ and searches for a function $a: \mathcal{X} \to \mathcal{X}$ such that for all observations $x \in \mathcal{X}$ with $f(x) \neq y^\star$, we have $f(a(x)) = y^\star$. The function is defined point-wise by solving an optimization program. Most often $a(\cdot)$ is not a single-output function, as $a(x)$ may in fact be a collection of (random) values $\{x^\star_1, \ldots, x^\star_k\}$, which represent the counterfactual samples. A more recent perspective, proposed by Kanamori et al. [2022], defines $a$ as a decision tree, where for each leaf $L$ a common action is predicted for all instances $x \in L$ to change their predictions.
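For concreteness, a standard way of writing this point-wise cost-based search (following Wachter et al. [2017], with $d$ a distance on $\mathcal{X}$ and $\lambda > 0$ a trade-off parameter) is:
$$a(x) \in \underset{x' \in \mathcal{X}}{\arg\min} \; \lambda \, \big(f(x') - y^\star\big)^2 + d(x, x').$$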
Our approach diverges slightly from the traditional model-based definition, as we directly consider the observation $(X, Y)$ rather than the model. Our method can be seen as a mapping between distributions. For example, in a binary classification setting, our CE can be seen as a map $T$ between the distribution of $X \mid Y = 0$ and $X \mid Y = 1$ such that each observation of class $Y = 0$ is linked to the most similar observation of class $Y = 1$. This concept has been studied under the name of transport-based counterfactuals by [Black et al., 2020, De Lara et al., 2021]. De Lara et al. [2021] show that it coincides with the causal counterfactual under appropriate assumptions. Our methods can be thought of as a strategy to find the counterfactual map $T$ for any data-generating process $(X, Y)$ directly, or for any learnt model $(X, f(X))$. In the following discussion, we consider $f(X)$ as either the output of a learned model or a sample from $P_{Y|X}$.
Furthermore, our approach is hybrid, as we do not suggest a single action for each observation or subspace of $\mathcal{X}$, but provide sets of possible perturbations. A Local Counterfactual Rule (L-CR) for target $\mathcal{Y}^\star$ and observation $x$ (with $f(x) \notin \mathcal{Y}^\star$) is a rectangle $C_S(x, \mathcal{Y}^\star) = \prod_{i \in S} [a_i, b_i]$, $a_i, b_i \in \mathbb{R}$, such that for all perturbations of $x = (x_S, x_{\bar{S}})$ obtained as $x^\star = (z_S, x_{\bar{S}})$ with $z_S \in C_S(x, \mathcal{Y}^\star)$ and $x^\star$ an in-distribution sample, $f(x^\star)$ is in $\mathcal{Y}^\star$ with high probability. Similarly, a Regional Counterfactual Rule (R-CR) $C_S(R, \mathcal{Y}^\star)$ is defined for target $\mathcal{Y}^\star$ and a rectangle $R = \prod_{i=1}^{d} [a_i, b_i]$, $a_i, b_i \in \mathbb{R}$, which represents a subspace of $\mathcal{X}$ of similar observations, if for all observations $x = (x_S, x_{\bar{S}}) \in R$, the perturbations obtained as $x^\star = (z_S, x_{\bar{S}})$ with $z_S \in C_S(R, \mathcal{Y}^\star)$ and $x^\star$ an in-distribution sample are such that $f(x^\star)$ is in $\mathcal{Y}^\star$ with high probability.
Our approach constructs such rectangles in a sequential manner. Firstly, we identify the best directions S ⊆ [p] that offer the highest probability of changing the decision. Next, we determine the optimal intervals [a i , b i ] for i ∈ S that change the decision to the desired target. Additionally, we propose a method to derive traditional Counterfactual Explanations (CE) (i.e., actions that alter the decision) using our Counterfactual Rules. A central tool in this approach is the Counterfactual Decision Probability presented below.
Definition 3.1 (Counterfactual Decision Probability, CDP). The Counterfactual Decision Probability of the subset $S \subseteq [p]$, w.r.t. $x = (x_S, x_{\bar{S}})$ and the desired target $\mathcal{Y}^\star$ (s.t. $f(x) \notin \mathcal{Y}^\star$), is
$$CDP_S(x, \mathcal{Y}^\star) = P\big(f(X) \in \mathcal{Y}^\star \mid X_{\bar{S}} = x_{\bar{S}}\big).$$
The CDP of the subset $S$ is the probability that the decision changes to the desired target $\mathcal{Y}^\star$ when sampling the features $X_S$ given $X_{\bar{S}} = x_{\bar{S}}$. It is related to the Same Decision Probability $SDP_S(x, \mathcal{Y}') = P(f(X) \in \mathcal{Y}' \mid X_S = x_S)$ used in [Amoukou and Brunel, 2021] for solving the dual problem of selecting the most important local variables for obtaining and maintaining the decision $f(x) \in \mathcal{Y}'$, for a subset $\mathcal{Y}' \subset \mathcal{Y}$; the minimal such set $S$ is called the Minimal Sufficient Explanation. Indeed, we have $CDP_S(x, \mathcal{Y}^\star) = SDP_{\bar{S}}(x, \mathcal{Y}^\star)$.
The computation of these probabilities is challenging and discussed in section 4. Next, we define the minimal subset of features S that allows changing the decision to the target set with a given high probability π.
Definition 3.2 (Minimal Divergent Explanation). Given an instance $x$ and a desired target $\mathcal{Y}^\star$, $S$ is a Divergent Explanation for probability $\pi > 0$ if $CDP_S(x, \mathcal{Y}^\star) \geq \pi$, and no proper subset $Z$ of $S$ satisfies $CDP_Z(x, \mathcal{Y}^\star) \geq \pi$.
Hence, a Minimal Divergent Explanation is a Divergent Explanation with minimal size.
The set satisfying these properties is not unique, and we can have several Minimal Divergent Explanations. Note that the probability π represents the minimum level required for a set to be chosen for generating counterfactuals, and its value should be as high as possible and depends on the use case. With these concepts established, we can now define our main criterion for constructing a Local Counterfactual Rule (L-CR).
Definition 3.3 (Local Counterfactual Rule). Given an instance $x$, a desired target $\mathcal{Y}^\star \not\ni f(x)$, and a Minimal Divergent Explanation $S$, the rectangle $C_S(x, \mathcal{Y}^\star) = \prod_{i \in S} [a_i, b_i]$, $a_i, b_i \in \mathbb{R}$, is a Local Counterfactual Rule with probability $\pi_C$ if
$$C_S(x, \mathcal{Y}^\star) = \arg\max_{C} P_X\big(X_S \in C \mid X_{\bar{S}} = x_{\bar{S}}\big) \;\; \text{such that} \;\; CRP_S(x, \mathcal{Y}^\star) = P\big(f(X) \in \mathcal{Y}^\star \mid X_S \in C_S(x, \mathcal{Y}^\star), X_{\bar{S}} = x_{\bar{S}}\big) \geq \pi_C.$$
$P_X(X_S \in C_S(x, \mathcal{Y}^\star) \mid X_{\bar{S}} = x_{\bar{S}})$ represents the plausibility of the rule; by maximizing it, we ensure that the rule lies in a high-density region. $CRP_S$ is the Counterfactual Rule Probability.
The higher the probability $\pi_C$, the more relevant the rule $C_S(x, \mathcal{Y}^\star)$ is for changing the decision to the desired target.
In practice, we often observe that the Local CRs $C_S(\cdot, \mathcal{Y}^\star)$ for neighboring observations $x$ and $x'$ are quite similar, as the Minimal Divergent Explanations tend to be alike and the corresponding rectangles frequently overlap. This observation motivates a generalization of these Local CRs to hyperrectangles $R = \prod_{i=1}^{d} [a_i, b_i]$, $a_i, b_i \in \mathbb{R}$, which group together similar observations. We denote $\mathrm{supp}(R) = \{i : [a_i, b_i] \neq \mathbb{R}\}$ as the support of the rectangle and extend the Local CRs to Regional Counterfactual Rules (R-CR). To achieve this, we denote $R_{\bar{S}} = \prod_{i \in \bar{S}} [a_i, b_i]$ as the rectangle with the intervals of $R$ in $\mathrm{supp}(R) \cap \bar{S}$, and define the corresponding Counterfactual Decision Probability (CDP) for rule $R$ and subset $S$ as $CDP_S(R, \mathcal{Y}^\star) = P(f(X) \in \mathcal{Y}^\star \mid X_{\bar{S}} \in R_{\bar{S}})$. Consequently, we can compute the Minimal Divergent Explanation for rule $R$ using the corresponding CDP for rules, following Definition 3.2. The Regional Counterfactual Rules (R-CR) correspond to Definition 3.3 with the associated CDP for rules.
Estimation of the CDP and CRP
To compute the probabilities $CDP_S$ and $CRP_S$ for any $S$, we use a dedicated Random Forest (RF) that learns the model or the data-generating process to explain. Indeed, the conditional probabilities $CDP_S$ and $CRP_S$ can be easily computed from a RF by combining the Projected Forest algorithm [Bénard et al., 2021a] and the Quantile Regression Forest [Meinshausen and Ridgeway, 2006]. As a result, we can estimate the probabilities $CDP_S(x, \mathcal{Y}^\star)$ consistently. This method was previously used by [Amoukou and Brunel, 2021] for calculating the Same Decision Probability $SDP_S$.
Projected Forest and $CDP_S$
The estimator of the $SDP_S$ is based on the Random Forest algorithm [Breiman et al., 1984]. Assuming we have trained a RF $m(\cdot)$ using the dataset $D_n$, the model consists of a collection of $k$ randomized trees (for a detailed description of decision trees, see [Loh, 2011]). For each instance $x$, the predicted value of the $l$-th tree is denoted $m_l(x; \Theta_l)$, where $\Theta_l$ represents the resampling mechanism of the $l$-th tree and the subsequent random splitting directions. The predictions of the individual trees are then averaged to produce the prediction of the forest: $m(x; \Theta_1, \ldots, \Theta_k) = \frac{1}{k} \sum_{l=1}^{k} m_l(x; \Theta_l)$.
The RF can also be interpreted as an adaptive nearest-neighbor predictor. For every instance $x$, the observations in $D_n$ are weighted by $w_{n,i}(x)$, $i = 1, \ldots, n$. As a result, the prediction of the RF can be reformulated as $m(x; \Theta_1, \ldots, \Theta_k) = \sum_{i=1}^{n} w_{n,i}(x) Y_i$. This emphasizes the central role played by the weights in the RF algorithm; see [Meinshausen and Ridgeway, 2006, Amoukou and Brunel, 2021] for a detailed description of the weights. Consequently, it naturally gives estimators for other quantities, e.g., the cumulative hazard function [Ishwaran et al., 2008], treatment effects [Wager and Athey, 2017], and conditional densities [Du et al., 2021]. For instance, Meinshausen and Ridgeway [2006] showed that the same weights can be used to estimate the conditional distribution function with the estimator $\hat{F}(y \mid X = x) = \sum_{i=1}^{n} w_{n,i}(x) \mathbb{1}_{Y_i \leq y}$.
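As an illustration, this weight-based view can be sketched with scikit-learn, whose `apply` method returns leaf indices; the helper names below (rf_weights, conditional_cdf) are ours, not the authors' package, and the example is a minimal sketch rather than a definitive implementation:

import numpy as np
from sklearn.ensemble import RandomForestRegressor

def rf_weights(forest, X_train, x):
    # w_{n,i}(x): average over trees of 1{X_i shares x's leaf} / leaf size
    train_leaves = forest.apply(X_train)              # shape (n, k)
    x_leaves = forest.apply(x.reshape(1, -1))[0]      # shape (k,)
    w = np.zeros(len(X_train))
    for t in range(train_leaves.shape[1]):
        in_leaf = train_leaves[:, t] == x_leaves[t]
        if in_leaf.any():
            w[in_leaf] += 1.0 / in_leaf.sum()
    return w / train_leaves.shape[1]

def conditional_cdf(forest, X_train, y_train, x, y):
    # Quantile-regression-forest estimate of F(y | X = x)
    return np.sum(rf_weights(forest, X_train, x) * (y_train <= y))

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = X[:, 0] + 0.1 * rng.normal(size=500)
rf = RandomForestRegressor(n_estimators=20, max_depth=10, random_state=0).fit(X, y)
print(conditional_cdf(rf, X, y, X[0], y[0]))   # a value in [0, 1]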
In another direction, Bénard et al. [2021a] introduced the Projected Forest algorithm [Bénard et al., 2021c,a], which aims to estimate $E[Y \mid X_S]$ by modifying the RF's prediction algorithm. Bénard et al. [2021b] suggest simply ignoring the splits based on the variables not contained in $S$ in the tree predictions. More formally, it consists of projecting the partition of each tree of the forest onto the subspace spanned by the variables in $S$. The authors also introduced an algorithmic trick that computes the output of the Projected Forest efficiently without modifying the initial tree structures. It consists of dropping the observations down the initial trees while ignoring the splits that use a variable not in $S$: when a split involving a variable $i \notin S$ is encountered, the observations are sent to both the left and right children nodes. Therefore, each instance falls in multiple terminal leaves of the tree. To compute the prediction of $x_S$, we follow the same procedure and gather the set of terminal leaves where $x_S$ falls. Next, we collect the training observations that belong to this collection of terminal leaves, i.e., we keep only the observations that fall in the intersection of the leaves where $x_S$ falls. Finally, we average their outputs $Y_i$ to obtain the estimate of $E[Y \mid X_S = x_S]$. The authors show that this algorithm converges asymptotically to the true projected conditional expectation $E[Y \mid X_S = x_S]$ under suitable assumptions. As with the RF, the Projected Forest (PRF) assigns a weight to each observation; the associated PRF estimator is $m^{(S)}(x_S) = \sum_{i=1}^{n} w_{n,i}(x_S) Y_i$. Therefore, just as the weights of the original forest were used to estimate the CDF, Amoukou and Brunel [2021] used the weights of the Projected Forest algorithm to estimate the SDP as $\widehat{SDP}_S(x, \mathcal{Y}^\star) = \sum_{i=1}^{n} w_{n,i}(x_S) \mathbb{1}_{Y_i \in \mathcal{Y}^\star}$. The idea is essentially to replace $Y_i$ by $\mathbb{1}_{Y_i \in \mathcal{Y}^\star}$ in the Projected Forest equation defined above. Amoukou and Brunel [2021] also show that this estimator converges to the true $SDP_S$ under suitable assumptions and works very well in practice, especially on tabular data, where tree-based models are known to perform well [Grinsztajn et al., 2022]. Similarly, we can estimate the CDP with statistical guarantees [Amoukou and Brunel, 2021] using the estimator $\widehat{CDP}_S(x, \mathcal{Y}^\star) = \sum_{i=1}^{n} w_{n,i}(x_{\bar{S}}) \mathbb{1}_{Y_i \in \mathcal{Y}^\star}$. Remark: we only give the estimator of the $CDP_S$ of an instance $x$; the estimator for the $CDP_S$ of a rule $R$ is discussed in the next section, as it is closely related to the estimator of the $CRP_S$.
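One reading of this multi-descent trick can be sketched on scikit-learn trees as below; projected_leaves and cdp_estimate are our illustrative reconstructions (here neighbors are taken as the training points whose own leaf is compatible with the conditioning coordinates), not the authors' implementation:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def projected_leaves(tree, x, cond_idx):
    # Leaves reached when splits on features outside cond_idx are ignored:
    # the descent follows both children at such splits.
    t = tree.tree_
    stack, leaves = [0], []
    while stack:
        node = stack.pop()
        if t.children_left[node] == -1:           # terminal leaf
            leaves.append(node)
        elif t.feature[node] in cond_idx:         # conditioned split: usual rule
            if x[t.feature[node]] <= t.threshold[node]:
                stack.append(t.children_left[node])
            else:
                stack.append(t.children_right[node])
        else:                                     # ignored split: both children
            stack.extend([t.children_left[node], t.children_right[node]])
    return leaves

def cdp_estimate(forest, X_train, y_train, x, S_bar, target):
    # CDP_S(x, target) ~ sum_i w_{n,i}(x_{bar S}) 1{Y_i in target}
    w = np.zeros(len(X_train))
    for est in forest.estimators_:
        keep = np.isin(est.apply(X_train), projected_leaves(est, x, set(S_bar)))
        if keep.any():
            w[keep] += 1.0 / keep.sum()
    w /= len(forest.estimators_)
    return float(np.sum(w * np.isin(y_train, list(target))))

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 3))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = RandomForestClassifier(n_estimators=20, max_depth=8, random_state=0).fit(X, y)
print(cdp_estimate(clf, X, y, X[0], S_bar=[2], target={1}))  # here S = {0, 1}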
Regional RF and CRP S
Here, we focus on estimating $CRP_S(x, \mathcal{Y}^\star) = P(f(X) \in \mathcal{Y}^\star \mid X_S \in C_S(x, \mathcal{Y}^\star), X_{\bar{S}} = x_{\bar{S}})$ and $CRP_S(R, \mathcal{Y}^\star) = P(f(X) \in \mathcal{Y}^\star \mid X_S \in C_S(R, \mathcal{Y}^\star), X_{\bar{S}} \in R_{\bar{S}})$.
For ease of reading, we drop the dependence of the rectangles $C_S$ on $\mathcal{Y}^\star$. From the previous section, we already know that the RF-based estimators will take the form $\widehat{CRP}_S(x, \mathcal{Y}^\star) = \sum_{i=1}^{n} w_{n,i}(x) \mathbb{1}_{Y_i \in \mathcal{Y}^\star}$, so we only need to determine the appropriate weighting. The main challenge lies in the fact that we have conditions based on a region, e.g., $X_S \in C_S(x)$ or $X_{\bar{S}} \in R_{\bar{S}}$ (regional-based), instead of the usual condition of type $X_{\bar{S}} = x_{\bar{S}}$ (fixed value-based). We therefore introduce a natural extension of the RF algorithm to handle predictions when the conditions are both regional-based and fixed value-based; the case with only regional-based conditions follows naturally.
Regional RF to estimate $CRP_S(x, \mathcal{Y}^\star) = P(f(X) \in \mathcal{Y}^\star \mid X_S \in C_S(x), X_{\bar{S}} = x_{\bar{S}})$.
The algorithm is based on a slight modification of the RF and works as follows. We drop the observations down the trees. If a split uses a variable $i \in \bar{S}$ (fixed value-based condition), we apply the classic RF rule: if $x_i \leq t$, the observations go to the left child, otherwise to the right child. However, if a split uses a variable $i \in S$ (regional-based condition), we use the rectangle $C_S(x) = \prod_{i=1}^{|S|} [a_i, b_i]$.
The observations are sent to the left child if $b_i \leq t$, to the right child if $a_i > t$, and if $t \in [a_i, b_i]$, the observations are sent to both the left and right children. Consequently, we use the weights $w^{R}_{n,i}(x)$ of the Regional RF algorithm to estimate the $CRP_S$; the estimator is $\widehat{CRP}_S(x, \mathcal{Y}^\star) = \sum_{i=1}^{n} w^{R}_{n,i}(x) \mathbb{1}_{Y_i \in \mathcal{Y}^\star}$.
Additionally, the number of observations in the selected leaves is used as an estimate of $P(X_S \in C_S(x) \mid X_{\bar{S}} = x_{\bar{S}})$. A more comprehensive description and discussion of the algorithm are provided in the appendix.
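A minimal sketch of the Regional RF split rule on a single scikit-learn tree is given below, assuming C_S is a dict mapping each conditioned feature index in $S$ to its interval $(a_i, b_i)$ (names are ours); the resulting compatible leaves can then be plugged into the same weighting scheme as above:

def regional_leaves(tree, x, C_S):
    # Leaves compatible with X_S in C_S (regional-based) and
    # X_{bar S} = x_{bar S} (fixed value-based).
    t = tree.tree_
    stack, leaves = [0], []
    while stack:
        node = stack.pop()
        if t.children_left[node] == -1:           # terminal leaf
            leaves.append(node)
            continue
        f, thr = t.feature[node], t.threshold[node]
        if f in C_S:                              # regional-based condition
            a, b = C_S[f]
            if b <= thr:
                stack.append(t.children_left[node])
            elif a > thr:
                stack.append(t.children_right[node])
            else:                                 # threshold inside [a, b]: both
                stack.extend([t.children_left[node], t.children_right[node]])
        else:                                     # fixed value-based condition
            stack.append(t.children_left[node] if x[f] <= thr
                         else t.children_right[node])
    return leaves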
To estimate the CDP of a rule, $CDP_S(R, \mathcal{Y}^\star) = P(f(X) \in \mathcal{Y}^\star \mid X_{\bar{S}} \in R_{\bar{S}})$, we simply apply the Projected Forest algorithm to the Regional RF: when a split involving a variable outside of $\bar{S}$ is met, the observations are sent to both the left and right children nodes; otherwise, we use the Regional RF split rule, i.e., if the interval of $R_{\bar{S}}$ is below $t$, the observations go to the left child, if it is above $t$, they go to the right child, and if $t$ lies inside the interval, they go to both children. The estimator of $CRP_S(R, \mathcal{Y}^\star) = P(f(X) \in \mathcal{Y}^\star \mid X_S \in C_S(R, \mathcal{Y}^\star), X_{\bar{S}} \in R_{\bar{S}})$ for a rule $R$ is also derived from the Regional RF; indeed, it is a special case of the Regional RF algorithm where all conditions are regional-based.
Learning the Counterfactual Rules
The computation of the Local and Regional CR is performed using the estimators introduced in the previous section. First, we determine the Minimal Divergent Explanation, akin to the Minimal Sufficient Explanation [Amoukou and Brunel, 2021], by exploring the subsets obtained from the $K = 10$ most frequently selected variables in the Random Forest estimator. $K$ is a hyperparameter to choose according to the use case and the available computational power; any importance measure can also be used. An alternative strategy to exhaustively searching through the $2^K$ possible subsets is to sample a sufficient number of subsets, typically a few thousand, from the decision paths of the trees in the forest; by construction, these subsets are likely to contain influential variables. A similar strategy was used in [Basu et al., 2018, Bénard et al., 2021a].
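The subset search itself can be sketched as a scan by increasing size over the top-$K$ variables, assuming a cdp callable such as the estimator sketched earlier (names are ours):

from itertools import combinations

def minimal_divergent_explanations(top_vars, cdp, pi=0.9):
    # Return the smallest subsets S of top_vars with CDP_S >= pi;
    # scanning sizes in increasing order makes the hits minimal.
    for k in range(1, len(top_vars) + 1):
        found = [set(S) for S in combinations(top_vars, k) if cdp(set(S)) >= pi]
        if found:
            return found
    return []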
Given an instance $x$ or a rectangle $R$, a target set $\mathcal{Y}^\star$, and the corresponding Minimal Divergent Explanation $S$, our objective is to find the maximal rule $C_S(x) = \prod_{i \in S} [a_i, b_i]$ (resp. $C_S(R)$) such that, given $X_{\bar{S}} = x_{\bar{S}}$ (resp. $X_{\bar{S}} \in R_{\bar{S}}$) and $X_S \in C_S(x)$ (resp. $X_S \in C_S(R)$), the probability that $f(X) \in \mathcal{Y}^\star$ is high. Formally, we want $P(f(X) \in \mathcal{Y}^\star \mid X_S \in C_S(x), X_{\bar{S}} = x_{\bar{S}})$ or $P(f(X) \in \mathcal{Y}^\star \mid X_S \in C_S(R), X_{\bar{S}} \in R_{\bar{S}})$ to be above $\pi_C$.
The rectangles $C_S(x) = \prod_{i \in S} [a_i, b_i]$ defining the CR are derived from the RF. In fact, these rectangles naturally arise from the partition learned by the RF. AReS [Rawal and Lakkaraju, 2020], on the other hand, relies on binned variables to generate candidate rules, testing all possible rules to select the optimal one. By leveraging the partition learned by the RF, we overcome both the computational burden and the challenge of choosing the number of bins. Moreover, by focusing only on the non-empty leaves containing training observations, we significantly reduce the search space. This approach allows us to identify high-density regions of the input space and thus to generate plausible counterfactual explanations. To illustrate the idea, we use two-dimensional data $(X_0, X_1)$ with label $Y$ represented as green/blue stars in Figure 1(a).

Figure 1: (a) Partition of the RF; (b) partition of the PRF when we condition on $X_0$, i.e., ignoring the splits on $X_1$; (c) the optimal CR of $x$ when we condition on $X_0 = x_0$ is the green region.

We fit a Random Forest to classify this dataset and show its partition in Figure 1(a). The explainee $x$ is the blue triangle. Examining the different cells/leaves of the RF, we deduce that the Minimal Divergent Explanation of $x$ is $S = \{X_1\}$. In Figure 1(b), we show the leaves of the Projected Forest when we do not condition on $S = \{X_1\}$, thus projecting the RF's partition only onto the subspace $X_0$. This consists of ignoring all the splits in the other directions (here the $X_1$-axis); $x$ then falls in the projected leaf 2 (see Figure 1(b)) and its CDP is $CDP_{X_1}(x, \text{green}) = \frac{10_{\text{green}}}{10_{\text{green}} + 7_{\text{blue}}} = 0.58$. To find the optimal rectangle $C_S(x) = [a_i, b_i]$ in the direction of $X_1$ such that the decision changes, we can utilize the leaves of the RF. Looking at the leaves of the RF (Figure 1(a)) for observations belonging to the projected RF leaf 2 (Figure 1(b)) where $x$ falls, we observe in Figure 1(c) that the optimal rectangle for changing the decision, given $X_0 = x_0$ or being in the projected RF leaf 2, is the union of the $X_1$-intervals of leaves 3 and 4 of the RF (see the green region in Figure 1(c)).
Given an instance $x$ and its Minimal Divergent Explanation $S$, the first step is to collect the observations that belong to the leaf of the Projected Forest given $\bar{S}$ where $x$ falls. These observations are those with positive weights in the computation of $\widehat{CDP}_S(x, \mathcal{Y}^\star) = \sum_{i=1}^{n} w_{n,i}(x_{\bar{S}}) \mathbb{1}_{Y_i \in \mathcal{Y}^\star}$, i.e., $\{X_i : w_{n,i}(x_{\bar{S}}) > 0\}$. Then, we use the partition of the original forest to find the possible leaves of $C_S(x)$ in the directions $S$. The possible leaves are among the RF leaves of the collected observations $\{X_i : w_{n,i}(x_{\bar{S}}) > 0\}$. Let $L(X_i)$ denote the leaf of observation $X_i$ with $w_{n,i}(x_{\bar{S}}) > 0$. A possible leaf is a leaf $L(X_i)$ such that $P(f(X) \in \mathcal{Y}^\star \mid X_S \in L(X_i)_S, X_{\bar{S}} = x_{\bar{S}}) \geq \pi_C$. Finally, we merge all the possible neighboring leaves to get the largest rectangle; this maximal rectangle is the counterfactual rule. It is important to note that the union of possible leaves is not necessarily a connected space, which may result in multiple disconnected counterfactual rules.
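Merging neighboring possible leaves along a direction of $S$ amounts to a union of overlapping intervals; a disconnected union yields several rules, as in this small sketch (interval values are hypothetical):

def merge_intervals(intervals):
    # Union of 1-D intervals (a, b); overlapping or touching ones are merged,
    # disconnected ones produce several counterfactual rules.
    intervals = sorted(intervals)
    merged = [list(intervals[0])]
    for a, b in intervals[1:]:
        if a <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], b)
        else:
            merged.append([a, b])
    return merged

print(merge_intervals([(2132, 2800), (2600, 3546), (5000, 6000)]))
# [[2132, 3546], [5000, 6000]] -> two disconnected rules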
We apply the same approach to find the Regional CR. Given a rule $R$ and its Minimal Divergent Explanation $S$, we use the projection given $X_{\bar{S}} \in R_{\bar{S}}$ to identify compatible observations and their leaves. We then combine the possible ones that satisfy $CRP_S(R, \mathcal{Y}^\star) \geq \pi_C$ to obtain the Regional CR. For instance, if we consider leaf 5 of the original forest as a rule (i.e., if $X \in$ leaf 5, then predict blue), its Minimal Divergent Explanation is also $S = \{X_1\}$. The Regional CR would be the green region in Figure 1(c). Indeed, satisfying the $X_0$ condition of leaf 5 and the $X_1$ condition of leaves 3 and 4 would cause the decision to change to green.

Sampling CE using the CR

Our approaches cannot be directly compared with traditional CE methods, as the latter return counterfactual samples, whereas we provide rules (ranges of vector values) that permit changing the decision with high probability. In some applications, users might prefer recourses to CR. Hence, we adapt the CR to generate counterfactual samples using a generative model. For example, given an instance $x = (x_S, x_{\bar{S}})$, a target set $\mathcal{Y}^\star$, and its counterfactual rule $C_S(x, \mathcal{Y}^\star)$, we want to find a sample
$x^\star = (z_S, x_{\bar{S}})$ with $z_S \in C_S(x, \mathcal{Y}^\star)$ such that $x^\star$ is a realistic sample and $f(x^\star) \in \mathcal{Y}^\star$.
Instead of using a complex conditional generative model as in [Xu et al., 2019, Patki et al., 2016], which can be difficult to calibrate, we use an energy-based generative approach [Grathwohl et al., 2020, Lecun et al., 2006]. The core idea is to find $z_S \in C_S(x, \mathcal{Y}^\star)$ such that $x^\star$ maximizes a given energy score, ensuring that $x^\star$ lies in a high-density region. We use the negative outlier score of an Isolation Forest [Liu et al., 2008] and Simulated Annealing [Guilmeau et al., 2021] to maximize the negative outlier score using the information of the counterfactual rule $C_S(x, \mathcal{Y}^\star)$. In fact, the range of values given by the CR $C_S(x, \mathcal{Y}^\star)$ drastically reduces the search space for $z_S$. We use the marginal law of $X_S$ given $X_S \in C_S(x, \mathcal{Y}^\star)$ as the proposal distribution, i.e., we draw a candidate $z_S$ by independently sampling each variable from its marginal law, $z_S \sim \prod_{i \in S} P(X_i \mid X_S \in C_S(x, \mathcal{Y}^\star))$, until we find an observation $x^\star = (z_S, x_{\bar{S}})$ with high energy. In practice, we use the training set $D_n$ to find the possible values: we define $P_i$ as the list of values of variable $X_i$ found in $D_n$, and $P_S = \{z_S = (z_1, \ldots, z_{|S|}) : z_S \in C_S(x, \mathcal{Y}^\star), z_i \in P_i\}$ as the possible values of $X_S$. Then, we sample $z_S$ from the set $P_S$ and use Simulated Annealing to find an $x^\star = (z_S, x_{\bar{S}})$ that maximizes the negative outlier score. The algorithm works in the same way for sampling CE with the Regional CR. A more detailed version of the algorithm is provided in the appendix (Listing 1).
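The proposal step can be sketched as below: each coordinate of $z_S$ is drawn from the training values of $X_i$ that fall inside the rule. This is a simplified version of the generate_candidate helper used in Listing 1 (appendix C); the function name propose_z_S is ours:

import numpy as np

def propose_z_S(x, S, x_train, C_S, rng=np.random.default_rng(0)):
    # Draw z_S coordinate-wise from P_i restricted to C_S and plug it into x:
    # x_star = (z_S, x_{bar S}).
    x_star = x.copy()
    for i, (a, b) in zip(S, C_S):
        vals = x_train[:, i]
        pool = vals[(vals >= a) & (vals <= b)]   # observed values inside the rule
        x_star[i] = rng.choice(pool)
    return x_star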
Experiments
To demonstrate the performance of our framework, we conduct two experiments on real-world datasets. In the first experiment, we showcase the utility of the Local Counterfactual Rules for explaining a regression model. In the second experiment, we compare our approaches with two baseline methods in the context of classification problems: (1) CET [Kanamori et al., 2022], which partitions the input space using a decision tree and associates a vector perturbation with each leaf, and (2) AReS [Rawal and Lakkaraju, 2020], which performs an exhaustive search for global counterfactual rules. We used the implementation of Kanamori et al. [2022] that adapts AReS to return counterfactual samples instead of rules. We compare the methods only on classification problems, as none of the prior works handle regression. In all experiments, we split our dataset into train (75%) and test (25%) sets, and we learn a model $f$, a LightGBM (50 estimators, 8 leaves), on the train set, which serves as the explainee. We learn $f$'s predictions on the train set with an approximating RF (20 estimators, max depth 10) that is used to generate the CR with $\pi = 0.9$. The parameters used for AReS and CET are max rules=8, bins=10 and max iterations=1000, max leaf=8, bins=10, respectively. The other parameters of each method are provided in the Appendix.
We evaluate the methods on unseen observations using three metrics. The first metric, Accuracy, measures the fraction of instances for which the prescribed action changes the prediction to the desired outcome. The second metric, Plausibility, measures the fraction of generated counterfactual samples that are inliers (as predicted by an Isolation Forest). The third metric, Sparsity, measures the average number of features that are changed. For the global counterfactual methods (AReS, Regional CR), which are not guaranteed to cover all instances, we additionally compute the Coverage, the fraction of unseen observations for which they propose a recourse.
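As a sketch, the three instance-level metrics can be computed as below (array and function names are ours; iso is assumed to be an Isolation Forest fitted on the training data, whose predict returns 1 for inliers):

import numpy as np

def evaluate(predict, iso, X, X_cf, target):
    acc = np.mean(np.isin(predict(X_cf), list(target)))  # decisions flipped
    psb = np.mean(iso.predict(X_cf) == 1)                # inliers among the CEs
    sps = np.mean(np.sum(X != X_cf, axis=1))             # features changed
    return acc, psb, sps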
Local counterfactual rules for regression. We apply our approach to the California House Price dataset (n=20640, p=8) [Kelley Pace and Barry, 1997], which contains information about each district, such as income, population, and location; the goal is to predict the median house value of each district. To demonstrate the effectiveness of our Local CR method, we focus on a subset of the test set consisting of 1566 houses with prices lower than 100k. Our objective is to find recourses that would increase their price, such that the price falls within the target range $\mathcal{Y}^\star = [200k, 250k]$. For each instance $x$, we compute the Minimal Divergent Explanation $S$ and the Local CR $C_S(x, [200k, 250k])$, and generate counterfactual samples using the Simulated Annealing technique described earlier.
We succeed in changing the decision for all observations, achieving Accuracy = 100%. Moreover, the majority of the counterfactual samples passed the outlier test, with a Plausibility score of 0.92. Additionally, our Local CR method achieves a high degree of sparsity, with Sparsity = 4.45.
For instance, the Local CR for the observation $x$ = [longitude=-118.2, latitude=33.8, housing median age=26, total rooms=703, total bedrooms=202, population=757, households=212, median income=2.52] is $C_S(x, [200k, 250k])$ = [total rooms ∈ [2132, 3546], total bedrooms ∈ [214, 491]] with probability 0.97. This means that if total rooms and total bedrooms satisfy the conditions in $C_S(x, [200k, 250k])$, and the remaining features of $x$ are fixed, then the probability that the price falls within the target set $\mathcal{Y}^\star = [200k, 250k]$ is 0.97.
Comparisons of Local and Regional CR with baselines (AReS, CET). We evaluate our framework on three real-world datasets: Diabetes (n=768, p=8) [Kaggle, 2016], which aims to predict whether a patient has diabetes or not; Breast Cancer Wisconsin (BCW, n=569, p=32) [Dua and Graff, 2017], which aims to predict whether a tumor is benign or malignant; and Compas (n=6172, p=12) [Larson et al., 2016], which is used to predict criminal recidivism. Our evaluation reveals that AReS and CET are highly sensitive to the number of bins and the maximal number of rules or actions, as previously noted by [Ley et al., 2022]; poor parameterization can result in completely useless explanations. Furthermore, these methods require separate models for each target class, while our framework only requires a single RF with good precision. Table 1 shows that the Local and Regional CR methods achieve a high level of accuracy in changing decisions on all datasets, surpassing AReS and CET by a significant margin on BCW and Diabetes. Furthermore, the baselines struggle to simultaneously change both the positive and negative classes (e.g., CET has Acc=1 in the positive class and 0.21 in the negative class on BCW), or when they have a good Acc, the CE are not plausible. For instance, CET has Acc=0.98 and Psb=0 on Compas, meaning that all its counterfactual samples are outliers. Regarding the coverage of the global CE, CET covers all the instances as it partitions the input space, but AReS has a smaller Coverage = {0.43, 0.44, 0.81} compared to the Regional CR, which has {1, 0.7, 1} for BCW, Diabetes, and Compas, respectively.
Noisy responses robustness of Local CR: To assess the robustness of our approach against noisy responses, we conducted an experiment inspired by Pawelczyk et al. [2022]. We normalized the datasets so that $X \in [0, 1]^p$ and added small Gaussian noise $\epsilon$ to the prescribed recourses, with $\epsilon \sim \mathcal{N}(0, \sigma^2)$, where $\sigma^2$ took values 0.01, 0.025, and 0.05. We computed the Stability, i.e., the fraction of unseen instances for which the action and the perturbed action lead to the same output, for the Compas and Diabetes datasets. We used the simulated annealing approach with the Local CR of section 6 to generate the actions. The Stability metrics for the different noise levels were 0.98, 0.98, 0.98 for Compas and 0.96, 0.97, 0.96 for Diabetes.
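This protocol can be sketched as follows (our reconstruction; predict, the recourses X_cf, and the $[0,1]$ normalization are assumed given):

import numpy as np

def stability(predict, X_cf, sigma, rng=np.random.default_rng(0)):
    # Fraction of recourses whose outcome survives Gaussian perturbation.
    noisy = np.clip(X_cf + rng.normal(0.0, sigma, X_cf.shape), 0.0, 1.0)
    return np.mean(predict(X_cf) == predict(noisy))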
In summary, our CR approach is easier to train, and provides more accurate and plausible rules than the baseline methods. Furthermore, our resulting CE is robust against noisy responses.
Conclusion
We propose a novel approach that formulates CE as Counterfactual Rules. These rules are simple policies that can change the decision of an individual or a sub-population with a high probability. Our method is designed to learn robust, plausible, and sparse adversarial regions that indicate where observations should be moved to satisfy a desired outcome. The use of Random Forests is central to our approach, as they provide consistent estimates of the probabilities of interest and naturally give rise to the counterfactual rules we seek. This also allows us to handle regression problems and continuous features, making our method applicable to a wide range of datasets where tree-based models perform well, such as tabular data.

A The Regional RF algorithm

In this section, we give a simple application of the Regional RF algorithm to better understand how it works. Recall that the Regional RF is a generalization of the RF algorithm that gives predictions even when we condition on a region, e.g., to estimate $E(f(X) \mid X_S \in C_S(x), X_{\bar{S}} = x_{\bar{S}})$ with $C_S(x) = \prod_{i=1}^{|S|} [a_i, b_i]$, $a_i, b_i \in \mathbb{R}$, a hyperrectangle.
The algorithm works as follows: we drop the observations down the initial trees. If a split uses a variable $i \in \bar{S}$ (fixed value-based condition), we use the classic rule: if $x_i \leq t$, the observations go to the left child, otherwise to the right child. However, if a split uses a variable $i \in S$ (regional-based condition), we use the hyperrectangle $C_S(x) = \prod_{i=1}^{|S|} [a_i, b_i]$: the observations are sent to the left child if $b_i \leq t$, to the right child if $a_i > t$, and if $t \in [a_i, b_i]$, the observations are sent to both the left and right children.

To illustrate how it works, we use two-dimensional variables $X \in \mathbb{R}^2$ and a simple decision tree $f$ represented in Figure 2, and we want to compute, for $x = [1.5, 1.9]$, $E(f(X) \mid X_1 \in [2, 3.5], X_0 = 1.5)$.

Figure 2: Representation of a simple decision tree (right) and its associated partition (left). The gray part of the partition corresponds to the region $[2, 3.5] \times [1, 2]$.

We assume that $P(X_1 \in [2, 3.5] \mid X_0 = 1.5) > 0$ and denote $T_1$ the set of values of the splits based on variable $X_1$ in the decision tree. One way of estimating this conditional mean is Monte Carlo sampling. There are two cases:

• If $\forall t \in T_1$, $t \leq 2$ or $t > 3.5$, then all the observations sampled as $X_i \sim P_{X \mid X_1 \in [2, 3.5], X_0 = 1.5}$ follow the same path and fall in the same leaf. The Monte Carlo estimator of $E(f(X) \mid X_1 \in [2, 3.5], X_0 = 1.5)$ is then equal to the output of the Regional RF algorithm.
– For instance, a special case of the above is: if $\forall t \in T_1$, $t \leq 2$, and we sample using $P_{X \mid X_1 \in [2, 3.5], X_0 = 1.5}$, then all the observations go to the right child whenever they encounter a node using $X_1$, and they fall in the same leaf.
• If $\exists t \in T_1$ with $t \in [2, 3.5]$, then the observations sampled as $X_i \sim P_{X \mid X_1 \in [2, 3.5], X_0 = 1.5}$ can fall in multiple terminal leaves, depending on whether their coordinate $x_1$ is lower than $t$. Following our example, if we generate samples using $P_{X \mid X_1 \in [2, 3.5], X_0 = 1.5}$, the observations will fall in the gray region of Figure 2, and thus can fall in leaf 4 or leaf 5. Therefore, the true estimate is:
$$E(f(X) \mid X_1 \in [2, 3.5], X_0 = 1.5) = p(X_1 \leq 2.9 \mid X_0 = 1.5) \, E[f(X) \mid X \in L_4] + p(X_1 > 2.9 \mid X_0 = 1.5) \, E[f(X) \mid X \in L_5]. \quad (1)$$
Concerning the last case ($t \in [2, 3.5]$), we would need to estimate the probabilities $p(X_1 \leq 2.9 \mid X_0 = 1.5)$ and $p(X_1 > 2.9 \mid X_0 = 1.5)$ to compute $E(f(X) \mid X_1 \in [2, 3.5], X_0 = 1.5)$, but these probabilities are difficult to estimate in practice. However, we argue that we can ignore these splits and thus avoid fragmenting the query region using the leaves of the tree. Indeed, since we are no longer interested in a point estimate but in a regional (population) mean, we do not need to go down to the level of the leaves. We propose to ignore the splits of the leaves that divide the query region. For instance, leaves 4 and 5 split the region $[2, 3.5]$ into two cells; by ignoring these splits, we estimate the mean of the gray region by taking the average output of leaves 4 and 5 instead of computing the mean weighted by the probabilities as in eq. (1). Roughly, this consists of following the classic rules of a decision tree (when the region is entirely above or below a split) and ignoring the splits that fall inside the query region, i.e., we average the outputs of all the leaves compatible with the condition $X_1 \in [2, 3.5], X_0 = 1.5$. We believe this leads to a better approximation for two reasons. First, the case where $t$ lies inside the query region and thus divides it does not happen often. Second, the leaves of the trees are very small in practice, so taking the mean of the observations that fall in the union of leaves belonging to the query region is more reasonable than computing the weighted mean, which requires estimating the probabilities $p(X_1 \leq 2.9 \mid X_0 = 1.5)$ and $p(X_1 > 2.9 \mid X_0 = 1.5)$.
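As a toy numeric check (with invented values for illustration): if $E[f(X) \mid X \in L_4] = 0.2$, $E[f(X) \mid X \in L_5] = 0.8$, and $p(X_1 \leq 2.9 \mid X_0 = 1.5) = 0.6$, eq. (1) gives $0.6 \times 0.2 + 0.4 \times 0.8 = 0.44$, while the unweighted average over the two compatible leaves gives $0.5$; the two coincide when the query region splits the probability mass evenly between the leaves.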
B Additional experiments
In Table 2, we compare the Accuracy (Acc), Plausibility (Psb), and Sparsity (Sps) of the different methods on additional real-world datasets: FICO [FICO, 2018] and NHANESI [CDC, 1999-2022].
We observe that the L-CR and R-CR outperform the baseline methods by a large margin on Accuracy and Plausibility. The baseline methods still struggle to change the positive and negative classes at the same time. In addition, AReS and CET give better sparsity, but their counterfactual samples are less plausible than the ones generated by the CR.
C Simulated annealing to generate counterfactual samples using the Counterfactual Rules

Only fragments of Listing 1 survive the extraction: the docstring and the first three statements are recovered verbatim, while the annealing loop body below is our reconstruction of a standard simulated-annealing scheme. The helper generate_candidate belongs to the original package; its definition is not recovered here.

import numpy as np

def simulated_annealing(outlier_score, x, S, x_train, C_S, batch=100,
                        max_iter=1000, temp=10, max_iter_convergence=100):
    """
    Generate sample X s.t. X_S \\in C_S using simulated annealing and outlier score.
    Args:
        outlier_score (lambda function): outlier_score(X) returns an outlier
            score. If the values are negative, then the observation is an outlier.
        x (numpy.ndarray): 1-D array, the observation to explain
        S (list): contains the indices of the variables on which to condition
        x_train (numpy.ndarray): 2-D array representing the training samples
        C_S (numpy.ndarray): 3-D array (# variables x 2 x 1) representing the
            hyper-rectangle on which to condition
        batch (int): number of samples by iteration
        max_iter (int): number of iterations of the algorithm
        temp (double): the temperature of the simulated annealing algorithm
        max_iter_convergence (double): minimum number of iterations to stop the
            algorithm if it finds an in-distribution sample
    """
    best = generate_candidate(x, S, x_train, C_S, n_samples=1)
    best_eval = outlier_score(best)[0]
    curr, curr_eval = best, best_eval
    # -- reconstructed annealing loop (sketch) --
    for i in range(max_iter):
        candidates = generate_candidate(curr, S, x_train, C_S, n_samples=batch)
        scores = outlier_score(candidates)
        j = np.argmax(scores)
        if scores[j] > best_eval:
            best, best_eval = candidates[j:j + 1], scores[j]
        diff = scores[j] - curr_eval
        t = temp / float(i + 1)
        # accept improving moves, or worse ones with Metropolis probability
        if diff > 0 or np.random.rand() < np.exp(diff / t):
            curr, curr_eval = candidates[j:j + 1], scores[j]
        if best_eval > 0 and i > max_iter_convergence:
            break  # an in-distribution sample was found
    return best

Listing 1: The simulated annealing algorithm to generate samples that satisfy the conditions of the CR.
D Detailed parameters
In this section, we give the parameters of each method. For all methods and datasets, we first ran a greedy search over a set of parameters. For AReS, we use the following set of parameters:
• max rule = {4, 6, 8}, max rule length = {4, 8}, max change num = {2, 4, 6},
• minimal support = 0.05, discretization bins = {10, 20},
• λ_acc = λ_cov = λ_cst = 1.
For CET, we search in the following set of parameters:
• max iterations = {500, 1000},
• max leaf size = {4, 6, 8, −1},
• λ = 0.01, γ = 1.
Finally, for the Counterfactual Rules, we used the following parameters:
• nb estimators = {20, 50}, max depth = {8, 10, 12},
• π = 0.9, π_C = 0.9.
We obtained the same optimal parameters for all datasets:
• AReS: max rule = 4, max rule length = 4, max change num = 4, minimal support = 0.05, discretization bins = 10, λ_acc = λ_cov = λ_cst = 1
• CET: max iterations = 1000, max leaf size = −1, λ = 0.01, γ = 1
• CR: nb estimators = 20, max depth = 10, π = 0.9, π_C = 0.9
Table 1: Results of the Accuracy (Acc), Plausibility (Psb), and Sparsity (Sps) of the different methods. We compute each metric according to the positive (Pos) and negative (Neg) class.

                  COMPAS                           BCW                              Diabetes
          Acc        Psb        Sps        Acc        Psb        Sps        Acc        Psb        Sps
          Pos  Neg   Pos  Neg   Pos Neg    Pos  Neg   Pos  Neg   Pos Neg    Pos  Neg   Pos  Neg   Pos Neg
L-CR      1    0.9   0.87 0.73  2   4      1    1     0.96 1     9   7      0.97 1     0.99 0.8   3   4
R-CR      0.9  0.98  0.74 0.93  2   3      0.89 0.9   0.94 0.93  9   9      0.99 0.99  0.9  0.87  3   4
AReS      0.98 1     0.8  0.61  1   1      0.63 0.34  0.83 0.80  4   3      0.73 0.60  0.77 0.86  1   1
CET       0.85 0.98  0.7  0     2   2      1    0.21  0.6  0.80  8   2      0.84 1     0.60 0.20  6   6
Table 2: Results of the Accuracy (Acc), Plausibility (Psb), and Sparsity (Sps) of the different methods on FICO and NHANESI. We compute each metric according to the positive (Pos) and negative (Neg) class.

                  FICO                             NHANESI
          Acc        Psb        Sps        Acc        Psb        Sps
          Pos  Neg   Pos  Neg   Pos Neg    Pos  Neg   Pos  Neg   Pos Neg
L-CR      0.98 0.94  0.98 0.99  5   5      0.99 0.98  0.98 0.97  5   6
R-CR      0.90 0.94  0.98 0.99  9   8.43   0.86 0.95  0.96 0.99  7   7
AReS      0.34 0.01  0.85 0.86  2   1      0.06 1     0.87 0.92  1   1
CET       0.76 0     0.76 0.60  2   2      0    0.40  0.82 0.56  0   5
References

Salim I. Amoukou and Nicolas J.B. Brunel. Consistent sufficient explanations and minimal local rules for explaining regression and classification models. arXiv preprint arXiv:2111.04658, 2021.

Sumanta Basu, Karl Kumbier, James B. Brown, and Bin Yu. Iterative random forests to discover predictive and stable high-order interactions. Proceedings of the National Academy of Sciences, 115(8):1943-1948, 2018.

Clément Bénard, Gérard Biau, Sébastien Da Veiga, and Erwan Scornet. SHAFF: Fast and consistent Shapley effect estimates via random forests. arXiv preprint arXiv:2105.11724, 2021a.

Clément Bénard, Gérard Biau, Sébastien Da Veiga, and Erwan Scornet. Interpretable random forests via rule extraction. In International Conference on Artificial Intelligence and Statistics, pages 937-945. PMLR, 2021b.

Clément Bénard, Sébastien Da Veiga, and Erwan Scornet. MDA for random forests: inconsistency, and a practical solution via the Sobol-MDA. arXiv preprint arXiv:2102.13347, 2021c.

Emily Black, Samuel Yeom, and Matt Fredrikson. FlipTest: fairness testing via optimal transport. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pages 111-121, 2020.

Leo Breiman, Jerome Friedman, Richard Olshen, and Charles Stone. Classification and regression trees. Wadsworth Int. Group, 37(15):237-251, 1984.

CDC. National Health and Nutrition Examination Survey, 1999-2022. URL https://wwwn.cdc.gov/Nchs/Nhanes/Default.aspx.

S. Chen, Arthur Choi, and Adnan Darwiche. The same-decision probability: A new tool for decision making. 2012.

Yu-Liang Chou, Catarina Moreira, Peter Bruza, Chun Ouyang, and Joaquim Jorge. Counterfactuals and causability in explainable artificial intelligence: Theory, algorithms, and applications. Information Fusion, 81:59-83, 2022. doi: 10.1016/j.inffus.2021.11.003.

Lucas De Lara, Alberto González-Sanz, Nicholas Asher, and Jean-Michel Loubes. Transport-based counterfactual models. arXiv preprint arXiv:2108.13025, 2021.

Qiming Du, Gérard Biau, François Petit, and Raphaël Porcher. Wasserstein random forests and applications in heterogeneous treatment effects. In International Conference on Artificial Intelligence and Statistics, pages 1729-1737. PMLR, 2021.

Dheeru Dua and Casey Graff. UCI machine learning repository, 2017. URL http://archive.ics.uci.edu/ml.

FICO. FICO explainable machine learning challenge, 2018. URL https://community.fico.com/s/explainable-machine-learning-challenge.

Will Grathwohl, Kuan-Chieh Wang, Joern-Henrik Jacobsen, David Duvenaud, Mohammad Norouzi, and Kevin Swersky. Your classifier is secretly an energy based model and you should treat it like one. In International Conference on Learning Representations, 2020.

Leo Grinsztajn, Edouard Oyallon, and Gael Varoquaux. Why do tree-based models still outperform deep learning on typical tabular data? In Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2022.

Thomas Guilmeau, Emilie Chouzenoux, and Víctor Elvira. Simulated annealing: a review and a new scheme. pages 101-105, 2021. doi: 10.1109/SSP49050.2021.9513782.

Hemant Ishwaran, Udaya B. Kogalur, Eugene H. Blackstone, and Michael S. Lauer. Random survival forests. The Annals of Applied Statistics, 2(3):841-860, 2008.

Kaggle. Pima Indians diabetes database, 2016. URL https://www.kaggle.com/datasets/uciml/pima-indians-diabetes-database.

Kentaro Kanamori, Takuya Takagi, Ken Kobayashi, and Hiroki Arimura. DACE: Distribution-aware counterfactual explanation by mixed-integer linear optimization. In IJCAI, 2020.

Kentaro Kanamori, Takuya Takagi, Ken Kobayashi, and Yuichi Ike. Counterfactual explanation trees: Transparent and consistent actionable recourse with decision trees. In Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:1846-1870, 2022.

Amir-Hossein Karimi, Gilles Barthe, Borja Balle, and Isabel Valera. Model-agnostic counterfactual explanations for consequential decisions. arXiv, abs/1905.11190, 2020a.

Amir-Hossein Karimi, Gilles Barthe, Bernhard Schölkopf, and Isabel Valera. A survey of algorithmic recourse: definitions, formulations, solutions, and prospects. CoRR, abs/2010.04050, 2020b. URL https://arxiv.org/abs/2010.04050.

R. Kelley Pace and Ronald Barry. Sparse spatial autoregressions. Statistics & Probability Letters, 33(3):291-297, 1997.

Himabindu Lakkaraju, Dylan Slack, Yuxin Chen, Chenhao Tan, and Sameer Singh. Rethinking explainability as a dialogue: A practitioner's perspective. CoRR, abs/2202.01875, 2022. URL https://arxiv.org/abs/2202.01875.

Jeff Larson, Surya Mattu, Lauren Kirchner, and Julia Angwin. How we analyzed the COMPAS recidivism algorithm, 2016. URL https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm.

Yann Lecun, Sumit Chopra, and Raia Hadsell. A tutorial on energy-based learning. 2006.

Dan Ley, Saumitra Mishra, and Daniele Magazzeni. Global counterfactual explanations: Investigations, implementations and improvements, 2022. URL https://arxiv.org/abs/2204.06917.

Fei Tony Liu, Kai Ming Ting, and Zhi-Hua Zhou. Isolation forest. In 2008 Eighth IEEE International Conference on Data Mining, pages 413-422. IEEE, 2008.

Wei-Yin Loh. Classification and regression trees. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 1, 2011.

Arnaud Van Looveren and Janis Klaise. Interpretable counterfactual explanations guided by prototypes. CoRR, abs/1907.02584, 2019. URL http://arxiv.org/abs/1907.02584.

Scott M. Lundberg, Gabriel Erion, Hugh Chen, Alex DeGrave, Jordan M. Prutkin, Bala Nair, Ronit Katz, Jonathan Himmelfarb, Nisha Bansal, and Su-In Lee. From local explanations to global understanding with explainable AI for trees. Nature Machine Intelligence, 2(1), 2020.

Nicolai Meinshausen and Greg Ridgeway. Quantile regression forests. Journal of Machine Learning Research, 7(6), 2006.

Christoph Molnar. Interpretable Machine Learning. 2nd edition, 2022. URL https://christophm.github.io/interpretable-ml-book.

Ramaravind K. Mothilal, Amit Sharma, and Chenhao Tan. Explaining machine learning classifiers through diverse counterfactual explanations. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT* '20), pages 607-617, 2020. doi: 10.1145/3351095.3372850.

Axel Parmentier and Thibaut Vidal. Optimal counterfactual explanations in tree ensembles. CoRR, abs/2106.06631, 2021. URL https://arxiv.org/abs/2106.06631.

N. Patki, R. Wedge, and K. Veeramachaneni. The synthetic data vault. In 2016 IEEE International Conference on Data Science and Advanced Analytics (DSAA), pages 399-410, 2016. doi: 10.1109/DSAA.2016.49.

Martin Pawelczyk, Teresa Datta, Johannes van-den Heuvel, Gjergji Kasneci, and Himabindu Lakkaraju. Algorithmic recourse in the face of noisy human responses, 2022. URL https://arxiv.org/abs/2203.06768.

Rafael Poyiadzi, Kacper Sokol, Raúl Santos-Rodriguez, Tijl De Bie, and Peter A. Flach. FACE: feasible and actionable counterfactual explanations. CoRR, abs/1909.09369, 2019. URL http://arxiv.org/abs/1909.09369.

Kaivalya Rawal and Himabindu Lakkaraju. Beyond individualized recourse: Interpretable and interactive summaries of actionable recourses. Advances in Neural Information Processing Systems, 33:12187-12198, 2020.

Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. "Why should I trust you?" Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1135-1144, 2016.

Chris Russell. Efficient search for diverse coherent explanations. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* '19), pages 20-28, 2019. doi: 10.1145/3287560.3287569.

Berk Ustun, Alexander Spangher, and Yang Liu. Actionable recourse in linear classification. In Proceedings of the Conference on Fairness, Accountability, and Transparency, 2019.

Sahil Verma, John P. Dickerson, and Keegan Hines. Counterfactual explanations for machine learning: A review. CoRR, abs/2010.10596, 2020. URL https://arxiv.org/abs/2010.10596.
Counterfactual explanations without opening the black box: Automated decisions and the gdpr. Sandra Wachter, Brent Daniel Mittelstadt, Chris Russell, CybersecuritySandra Wachter, Brent Daniel Mittelstadt, and Chris Russell. Counterfactual explanations without opening the black box: Automated decisions and the gdpr. Cybersecurity, 2017.
Estimation and inference of heterogeneous treatment effects using random forests. Stefan Wager, Susan Athey, Stefan Wager and Susan Athey. Estimation and inference of heterogeneous treatment effects using random forests, 2017.
Modeling tabular data using conditional gan. Lei Xu, Maria Skoularidou, Alfredo Cuesta-Infante, Kalyan Veeramachaneni, NeurIPS. Lei Xu, Maria Skoularidou, Alfredo Cuesta-Infante, and Kalyan Veeramachaneni. Modeling tabular data using conditional gan. In NeurIPS, 2019.
| [] |
[
"Learning an Efficient Terrain Representation for Haptic Localization of a Legged Robot",
"Learning an Efficient Terrain Representation for Haptic Localization of a Legged Robot"
] | [
"Damian Sójka ",
"Michał R Nowicki ",
"Piotr Skrzypczyński "
] | [] | [] | Although haptic sensing has recently been used for legged robot localization in extreme environments where a camera or LiDAR might fail, the problem of efficiently representing the haptic signatures in a learned prior map is still open. This paper introduces an approach to terrain representation for haptic localization inspired by recent trends in machine learning. It combines this approach with the proven Monte Carlo algorithm to obtain an accurate, computation-efficient, and practical method for localizing legged robots under adversarial environmental conditions. We apply the triplet loss concept to learn highly descriptive embeddings in a transformer-based neural network. As the training haptic data are not labeled, the positive and negative examples are discriminated by their geometric locations discovered while training. We demonstrate experimentally that the proposed approach outperforms by a large margin the previous solutions to haptic localization of legged robots concerning the accuracy, inference time, and the amount of data stored in the map. As far as we know, this is the first approach that completely removes the need to use a dense terrain map for accurate haptic localization, thus paving the way to practical applications. | 10.48550/arxiv.2209.15135 | [
"https://export.arxiv.org/pdf/2209.15135v1.pdf"
] | 252,668,756 | 2209.15135 | 44f3cf22d9e58ba69109133539a2896e8db8006c |
Learning an Efficient Terrain Representation for Haptic Localization of a Legged Robot
Damian Sójka
Michał R Nowicki
Piotr Skrzypczyński
Learning an Efficient Terrain Representation for Haptic Localization of a Legged Robot
Although haptic sensing has recently been used for legged robot localization in extreme environments where a camera or LiDAR might fail, the problem of efficiently representing the haptic signatures in a learned prior map is still open. This paper introduces an approach to terrain representation for haptic localization inspired by recent trends in machine learning. It combines this approach with the proven Monte Carlo algorithm to obtain an accurate, computation-efficient, and practical method for localizing legged robots under adversarial environmental conditions. We apply the triplet loss concept to learn highly descriptive embeddings in a transformer-based neural network. As the training haptic data are not labeled, the positive and negative examples are discriminated by their geometric locations discovered while training. We demonstrate experimentally that the proposed approach outperforms by a large margin the previous solutions to haptic localization of legged robots concerning the accuracy, inference time, and the amount of data stored in the map. As far as we know, this is the first approach that completely removes the need to use a dense terrain map for accurate haptic localization, thus paving the way to practical applications.
I. INTRODUCTION
Recent years have brought legged locomotion from labs to real-world applications, focusing on inspection or search-and-rescue tasks in harsh environments like industrial facilities, disaster sites, or mines [1]. So far, few works have demonstrated the possibility of localizing a walking robot without visual or LiDAR-based SLAM, employing haptic sensing based on signals from IMUs, force/torque (F/T) sensors in the feet, and joint encoders [2], [3], [4]. Whereas these papers demonstrated the possibility of solving the pose tracking problem employing the Monte Carlo Localization (MCL) algorithm with particle filtering, the representation of the terrain and foot/terrain interactions extracted from haptic information remained an open problem. This representation is essential for haptic localization, as interactions between the robot's feet and the terrain are the only source of exteroceptive information in this problem formulation. Hence, the haptic information representation must be descriptive enough to distinguish between steps taken at different locations, even if these footholds are located on a similar surface. Moreover, this representation needs to be compact to allow quick retrieval of the data from the terrain map and efficient comparison of the locations. The practical aspect of the representation problem is how the terrain map is obtained. A dense 2.5D elevation map used in [3] has to be surveyed using an external LiDAR sensor. In contrast, a map of terrain types encoded as classes on a grid map [4] needs tedious manual labeling. Both approaches confine the operation of a walking robot to small-scale pre-surveyed environments, making the haptic localization concept rather impractical in real-world applications.
In contrast, this research uses the building blocks of machine learning methods already proven in computer vision [5] and place recognition [6] problems to create a sparse map of highly descriptive signatures in the locations touched by the robot's feet. This concept leverages the possibility of training a neural network with triplet loss to extract, from the collected haptic signals, the features that differentiate the neighboring footholds while suppressing those irrelevant for localization (Fig. 1). Interestingly, this approach addresses both of the mentioned challenges, creating embeddings (latent vectors of the signatures) that are simultaneously highly descriptive and extremely compact. Our approach uses a new neural network architecture based on parameter-efficient transformer layers to build embeddings for a sparse terrain map, which ensures short inference times to achieve real-time operation of the haptic MCL. The contribution of our work can be summarized as follows: • The first adaptation of the triplet loss training paradigm for learning local terrain representations from haptic information that lacks explicit class labels for the positive and negative examples. • An efficient transformer-based neural network architecture for computing the embeddings. • A novel variant of the MCL method for legged robots that employs a sparse map of haptic embeddings and 3D positions of steps. It allows the robot to self-localize using only haptic information without any map that needs to be created or annotated manually.
II. RELATED WORK
Walking robots commonly use haptic information from their legs for terrain classification and gait adaptation. The existing approaches to terrain classification [7], [8], [9], [10] demonstrated that haptic information, like IMU or force/torque signals, can be successfully used to determine the class of the terrain the robot is walking on. These methods achieve similarly high accuracies [11], but the supervised approaches to terrain classification are data-hungry and offer poor generalization to unseen classes. Moreover, the choice of these classes needs to be known upfront and is based on human perception. To overcome these challenges, Bednarek et al. [12] proposed an efficient transformer-based model resulting in fast inference and a decreased need for training samples, while improving the robustness of the solution to noisy or previously unseen data. Another attempt to decrease the required number of samples can be seen in [13], which uses a semi-supervised approach to the training of a Recurrent Neural Network (RNN) based on Gated Recurrent Units.
Gait adaptation is commonly mentioned as one of the applications of terrain classification. Lee et al. [14] proved that end-to-end learning can generate walking policies that adapt well to the changing environment. In their approach, driven by simulation, the terrain information is represented internally, without explicit classes. Following this example, Gangapurwala et al. [15] employ reinforcement learning policies to prioritize stability over aggressive locomotion while achieving the desired whole-body motion tracking and recovery control in legged locomotion. Despite progress with end-to-end approaches, we also see works that benefit from combining trained and classical model-based approaches. One example is the work of Ma et al. [16], who show that model-based predictive control can predict future interactions to increase the robustness of training policies. Our haptic localization system takes inspiration from both of the presented domains. It follows the general processing scheme introduced by Buchanan et al. [3], who proposed an MCL algorithm for walking robots based on the measured terrain height and a dense 2.5D map of this terrain built with an accurate external 3D LiDAR. Their approach uses the relative height of the leg touching the ground to compute the updated particle positions in MCL, thus reducing the accumulation of the localization drift. Moreover, their follow-up work [4] introduced a haptic localization system that additionally utilizes terrain classification information to further improve the localization accuracy, as long as the dense 2.5D map has prior class labels for each cell of the map. While this work demonstrated that geometric information is complementary to tactile sensing, it also revealed the limitations of terrain classification employing a discrete number of terrain classes that might not have strict borders when applied in a real-world scenario. In this context, Łysakowski et al. [17] showed that localization could be performed with compressed tactile information using Improved AutoEncoders, thus avoiding explicit terrain classification. Moreover, their approach demonstrated the possibility of working with a sparse map of latent signal representations, making it possible for the robot itself to learn the terrain map without tedious manual labeling. But the terrain representation from [17] merely compresses the haptic signals, without selecting valuable features.
In this work, we propose a new approach to haptic localization (HL-ST) using a sparse geometric map and a latent representation of the haptic information that benefits from a training scheme with triplet loss, which has not yet been used in training on haptic signals for localization problems. This stands in contrast to [17], where the training process is fully unsupervised, thus giving no control over the learned representation. Similarly to [12], we also employ a transformer-based architecture to achieve a parameter-efficient network, but we train it to generate embeddings rather than class labels. Moreover, inspired by works in gait adaptation [14], the critical ingredient of our localization solution (the latent representation/embedding) is trained to benefit from a large number of collected samples.
III. PROPOSED METHOD
Our problem statement is driven by an application of a walking robot performing repetitive tasks over a known route. We would like to quickly explore the desired path with the robot and then operate solely based on the legged odometry and haptic signals, even in challenging environments. In contrast to [3], [4], we assume no prior dense map, only requiring accurate localization for the first walk along the given route. An overview of our approach is presented in Fig. 2. Each localization event is triggered by a foot placement on the ground that captures 160 consecutive samples from the F/T sensors mounted at that foot. In the initial phase, each step event has an associated localization estimate (e.g., from SLAM), and the whole sequence is used to gather a database of signals. This database is used to train our transformer-based network on triplets of samples to find the latent representation that best suits the localization purposes. The trained network processes the entire database to create a sparse map of embeddings at the measured locations.
During the localization phase, raw measurements from step events are fed to the trained neural network to obtain embeddings that can be compared to those already stored in the sparse terrain map. These comparisons are used to update our localization estimates represented by particles in the MCL framework.
A. Learning Terrain Representation
The critical component, and our main contribution, is the network that determines the embeddings encoding relevant features of the raw haptic input. The network, called Signal Transformer, is based on the original transformer architecture from [18] and is presented in Fig. 3. The 6-dimensional sensor input (from 3-axis force and 3-axis torque sensors) for 160 consecutive measurements is converted into a 16-dimensional feature space with a fully-connected layer. We apply layer normalization and augment the sequence with learnable positional encoding. Augmented data are passed to the encoder of the reference transformer architecture from [18], with h = 2 attention heads, model dimensionality d_m = 16, and inner feed-forward layer size d_ff = 8. Average pooling flattens the output of the encoder. The final latent representation is generated by applying batch normalization and feeding the normalized data to a dense feed-forward layer with the ReLU activation function. The final layer has a number of neurons equal to the length of the embedding, which by default is set to 256, as in [17]. Implementation of the proposed network is publicly available 1 .
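For illustration, the following is a minimal PyTorch sketch consistent with this description. The number of encoder layers and the initialization of the positional encoding are our assumptions (they are not stated above); the authors' repository linked in the footnote remains the reference implementation.

```python
import torch
import torch.nn as nn

class SignalTransformerSketch(nn.Module):
    """Minimal sketch of the described architecture; hyperparameters from the text."""
    def __init__(self, seq_len=160, in_dim=6, d_model=16, n_heads=2, d_ff=8, emb_dim=256):
        super().__init__()
        self.input_proj = nn.Linear(in_dim, d_model)          # 6 -> 16 feature space
        self.input_norm = nn.LayerNorm(d_model)
        self.pos_enc = nn.Parameter(torch.zeros(1, seq_len, d_model))  # learnable positional encoding
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           dim_feedforward=d_ff, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=1)     # layer count assumed
        self.bn = nn.BatchNorm1d(d_model)
        self.head = nn.Linear(d_model, emb_dim)

    def forward(self, x):                      # x: (batch, 160, 6) force/torque sequence
        h = self.input_norm(self.input_proj(x)) + self.pos_enc
        h = self.encoder(h)                    # (batch, 160, 16)
        h = h.mean(dim=1)                      # average pooling over the time dimension
        return torch.relu(self.head(self.bn(h)))  # embedding of length emb_dim
```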
Fig. 4: The Signal Transformer network is trained using a triplet of data samples to achieve the desired similarity of embeddings for the anchor and positive sample while increasing the difference for the anchor and negative sample.
1 https://github.com/dmn-sjk/signal_transformer
The network is trained with the triplet loss presented in Fig. 4. For efficiency, an online triplet mining technique is used to reduce the number of inactive triplets and to improve the convergence and speed of training. We form mini-batches randomly, with every example considered an anchor during the training. For each training sample, we need the associated location where this sample was captured. Positive examples for a specific anchor are those sampled closer to the anchor than a defined constant distance threshold d_thr. Consequently, the negative examples are those sampled further away than d_thr:
$$d(s_{b_a}, s_{b_i}) > d_{thr} \rightarrow b_i \in N_a, \qquad d(s_{b_a}, s_{b_i}) \leq d_{thr} \rightarrow b_i \in P_a, \qquad (1)$$
where $d(s_{b_a}, s_{b_i})$ is the Euclidean distance between the step position $s_{b_a}$ of an anchor and the step position $s_{b_i}$ of the i-th data sample $b_i$. $P_a$ and $N_a$ denote the sets of positives and negatives with respect to the a-th anchor. The distance threshold $d_{thr}$ is a hyperparameter that can be adjusted. We used $d_{thr}$ = 25 cm since it provided the best localization accuracy. During training, positive and negative examples depend strictly on the spatial dependencies between step positions, without any terrain class labels. Inspired by [19], we use the Batch All triplet loss variation, but without special mini-batch sampling, considering the lack of class annotations, and calculate it as:
$$\mathcal{L} = \sum_{a=1}^{B} \sum_{p=1}^{|P_a|} \sum_{n=1}^{|N_a|} \left[ d(f(b_a), f(b_p)) - d(f(b_a), f(b_n)) + m \right]_{+}, \qquad (2)$$
where B is the size of the mini-batch, m indicates the margin, and |·| denotes the cardinality of sets. The distance function d(·) implements the Euclidean distance. The average of the Batch All triplet loss is calculated considering only those triplets that have a non-zero loss, as in [19]. The training process was performed using the AdamW optimizer from [20]. The learning rate was exponentially decreased, with an initial value of 5 × 10^{-4}. The initial value of the weight decay was equal to 2 × 10^{-4} and was reduced with cosine decay. The mini-batch size was set to 128. The training lasted for 200 epochs.
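A compact sketch of Eqs. (1)-(2) with distance-based online mining is given below; it is our illustration, and the margin value m is a placeholder, not a value reported above.

```python
import torch

def batch_all_triplet_loss(emb, step_pos, d_thr=0.25, m=1.0):
    """Sketch of the Batch All triplet loss with distance-based positives/negatives.
    emb: (B, E) embeddings f(b); step_pos: (B, 3) foothold positions; m is illustrative."""
    pd = torch.cdist(emb, emb)             # embedding distances d(f(b_i), f(b_j))
    gd = torch.cdist(step_pos, step_pos)   # geometric distances between step positions
    is_pos = gd <= d_thr                   # Eq. (1): positives closer than d_thr
    is_pos.fill_diagonal_(False)           # an anchor is not its own positive
    is_neg = gd > d_thr
    # Eq. (2): hinge over all (anchor a, positive p, negative n) triplets
    loss = pd.unsqueeze(2) - pd.unsqueeze(1) + m           # indexed [a, p, n]
    valid = is_pos.unsqueeze(2) & is_neg.unsqueeze(1)
    loss = torch.relu(loss[valid])
    active = loss > 0                      # average only over non-zero triplets
    return loss[active].mean() if active.any() else loss.sum()
```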
B. Sparse Haptic Map Generation
The proposed network is used to build a sparse haptic map of embedding vectors, visualized in Fig. 5. During the initial run, raw measurements taken at the contact of each foot with the ground are recorded along with the reference robot's position. The network is then trained with the triplet loss using these data. Once training is completed, data from each step are passed through the network to obtain the reduced latent representation (embedding), which is added to the map at the exact location of this step's foothold. The resulting map is sparse and unevenly distributed. The map generation phase requires an independent source of 6 DoF robot pose estimates to properly train the network (generation of positive and negative examples) and to place the inferred embeddings in the map accurately. In practical applications, an onboard LiDAR-based localization subsystem of the robot [21] can be utilized as this independent source.
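The map-building step can be sketched as follows; the KD-tree for 2D nearest-neighbour retrieval is our assumption about a suitable data structure, not a detail stated above.

```python
import numpy as np
import torch
from scipy.spatial import cKDTree

def build_sparse_map(model, steps):
    """Sketch: build the sparse haptic map from the mapping run.
    steps: iterable of (signal, foothold), where signal has shape (160, 6) and
    foothold = (x, y, z) comes from the independent pose source (e.g., LiDAR-SLAM [21])."""
    model.eval()
    xy, embeddings, elevations = [], [], []
    with torch.no_grad():
        for signal, (x, y, z) in steps:
            s = torch.as_tensor(signal, dtype=torch.float32).unsqueeze(0)
            embeddings.append(model(s).squeeze(0).numpy())  # embedding w
            xy.append((x, y))                               # 2D foothold (s_xf, s_yf)
            elevations.append(z)                            # elevation e
    tree = cKDTree(np.asarray(xy))   # fast 2D nearest-neighbour lookup at query time
    return tree, np.asarray(embeddings), np.asarray(elevations)
```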
C. Sequential Monte Carlo Localization
The proposed neural network generates highly descriptive embeddings -a latent representation of tactile sensing. To verify its impact on the localization, we use the sequential MCL algorithm proposed in [4] that was also used in the follow-up works on unsupervised terrain localization [17]. We present the general idea of this method while noting that this part of the processing is not a contribution of this paper.
Given a history of measurements $z_0, \ldots, z_k = z_{0:k}$, the MCL algorithm estimates the most likely pose $x^*_k \in SE(3)$ at time k:
$$p(x_k | z_{0:k}) = \sum_i w^i_{k-1}\, p(z_k | x_k)\, p(x_k | x^i_{k-1}) \qquad (3)$$
where $w^i$ is the importance weight of the i-th particle, $p(z_k | x_k)$ is the measurement likelihood function, and $p(x_k | x^i_{k-1})$ is the motion model for the i-th particle state. The system gets the measurements whenever the robot's foot touches the ground while using a statically stable gait. Once measurements are taken, the probability of the particles is updated, the particles are resampled, and a new pose estimate is available.
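One update of Eq. (3) can be sketched as below (our illustration; the propagation of particles through the motion model is omitted for brevity):

```python
import numpy as np

def mcl_step(particles, weights, likelihood_fn):
    """Sketch of one sequential MCL update, triggered by a foot contact.
    particles: array of pose hypotheses; likelihood_fn(x) returns p(z_k | x_k)."""
    weights = weights * np.array([likelihood_fn(x) for x in particles])
    weights = weights / weights.sum()                 # normalize importance weights
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))  # resampled set
```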
D. Haptic Measurement Model
For each taken step, the proposed neural network generates an embedding v based on the signals from a single foot f placed on the ground at the location $d_f$ in the base coordinates:
$$d^i_f = (d^i_{xf}, d^i_{yf}, d^i_{zf}) = x^i_k\, d_f. \qquad (4)$$
The position of the foot $d^i_f$ is used to retrieve the haptic map entry $m = S(s^i_{xf}, s^i_{yf})$ located in the map at $(s^i_{xf}, s^i_{yf})$, which is the nearest neighbor in 2D. For the sake of speed, the search is based on the 2D Euclidean distance, with a map matching error $d_{2D} = \left\| \left( d^i_{xf} - s^i_{xf},\; d^i_{yf} - s^i_{yf} \right) \right\|_2$. The map entry m contains both the embedding of the haptic signal w and the elevation of the original measurement e.
Although we search for the map entries in 2D, we compare 3D positions for the purpose of the MCL measurement model, taking advantage of the elevation values stored in the sparse map. We compare the latent representations using the $L_2$ norm:
$$d_l = f_{L_2}(v, w) = \sqrt{\sum_{j=1}^{n} (v_j - w_j)^2}, \qquad (5)$$
where $v_j$, $w_j$ denote the j-th components of the embeddings and $d_l$ is the latent representation distance. To include the elevation information, $d_e$ is defined as the difference between the z-axis component of the estimated foot position $d^i_{zf}$ and the elevation e saved in the closest map entry:
$$d_e = d^i_{zf} - e. \qquad (6)$$
We use univariate Gaussian distributions centered at the cell matched in the sparse latent map:
$$p(z_k | x^i_k) = \begin{cases} \dfrac{1}{\sigma_l\, \sigma_{2D}\, \sigma_e\, 2\pi} & d_{2D} > d_t \\ \mathcal{N}(d_l, \sigma_l)\, \mathcal{N}(d_{2D}, \sigma_{2D})\, \mathcal{N}(d_e, \sigma_e) & d_{2D} \leq d_t, \end{cases} \qquad (7)$$
where $d_t$ = 25 cm is the Euclidean threshold below which a step is considered to have a proper match in the sparse map. The impact of the haptic, 2D geometric, and elevation components is weighted by the experimentally determined sigma values $\sigma_l$ = 0.4, $\sigma_{2D}$ = 0.4, and $\sigma_e$ = 0.01, respectively. We denote our haptic localization method as HL-ST when all sources of information are read from a sparse map, HL-T when elevation is not considered, and HL-GT when elevation comes from a dense geometric map and is used in a separate measurement model as in [4].
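Eq. (7) translates directly into a per-particle likelihood; a sketch with the sigma values given above (reading N(d, σ) as a zero-mean Gaussian evaluated at distance d):

```python
import numpy as np

def measurement_likelihood(d_l, d_2d, d_e, d_t=0.25,
                           sigma_l=0.4, sigma_2d=0.4, sigma_e=0.01):
    """Sketch of Eq. (7): likelihood p(z_k | x_k^i) for one particle."""
    def gauss(d, s):  # zero-mean univariate Gaussian evaluated at distance d
        return np.exp(-0.5 * (d / s) ** 2) / (s * np.sqrt(2.0 * np.pi))
    if d_2d > d_t:    # no sufficiently close map entry: flat (uninformative) value
        return 1.0 / (sigma_l * sigma_2d * sigma_e * 2.0 * np.pi)
    return gauss(d_l, sigma_l) * gauss(d_2d, sigma_2d) * gauss(d_e, sigma_e)
```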
IV. EXPERIMENTAL RESULTS
A. Dataset and Ground Truth
In the presented evaluation, we use the PUTany dataset proposed in [4], which was already applied to evaluate other haptic localization systems. The dataset consists of three trials (walks) of an ANYmal B300 quadruped robot equipped with F/T sensors in the feet, conducted over a 3.5 × 7 m area containing uneven terrain with eight different terrain types ranging from ceramic to sand. The total distance traveled by the robot equals 715 m, with 6658 steps and a duration of 4054 s. We used a different sequence gathered on the same route to train the neural network proposed in this work. The ground truth for the robot motion is captured with a millimeter-accuracy motion capture system (OptiTrack), providing 6 DoF robot poses at 100 Hz.
B. Accuracy measures for haptic localization
The robot generates a trajectory of 6 DoF poses that can be compared to the ground truth trajectory to determine the accuracy of the localization method. We use the Absolute Pose Error (APE) metric, computed for a single relative pose T between the estimated pose P and the ground truth pose G at the time stamp t [22]:
$$T = P_t^{-1} G_t, \qquad (8)$$
where $P_t, G_t \in SE(3)$ are poses either available at time stamp t or interpolated for the selected time stamp. In our evaluation, we follow the error metrics reported in the previous articles concerning haptic localization [3], [4], [17], using the 3D translational part of the error T and calling it $t_{3D}$. The accuracy of our method is compared to the previously published results, taking these results directly from the respective papers [4], [17].
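For concreteness, a sketch of Eq. (8) together with the translational errors used below; taking the 2D error as the horizontal part of the translation is our reading of the text:

```python
import numpy as np

def ape_translation(P_t, G_t):
    """Sketch of Eq. (8) for 4x4 homogeneous poses P_t, G_t in SE(3)."""
    T = np.linalg.inv(P_t) @ G_t
    t_3d = np.linalg.norm(T[:3, 3])   # 3D translational APE, t_3D
    t_2d = np.linalg.norm(T[:2, 3])   # 2D APE, elevation component ignored
    return t_3d, t_2d
```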
As already observed in [17], the earlier haptic terrain recognition methods do not constrain the localization of particles in MCL in the vertical direction (parallel to the gravity vector), because the latent information is stored in a 2D array and lacks an elevation component. Therefore, in our comparisons, we also include the 2D translation error $t_{2D}$ that ignores the error in the elevation. The $t_{2D}$ results for the state-of-the-art methods were computed based on the publicly available source code of these systems.
C. Haptic localization with latent map (without geometry)
Using a dense and accurate terrain map of the environment for robot localization is impractical, as creating such a map usually requires deploying a survey-grade LiDAR in the target environment. Therefore, we first consider a scenario in which only haptic signals and accurate localization are available for the robot's training run, with the following runs relying solely on haptics for localization. Under these conditions, neither of the state-of-the-art methods for haptic localization (HL) can use elevation information. As the other methods cannot use an elevation map, we also constrain our HL-T system to ignore the elevation data stored in the sparse map, using only haptic data. We compare our work to HL-C [4], utilizing terrain classification, and HL-U [17], which uses an unsupervised haptic latent representation.
All these methods are evaluated using the $t_{2D}$ error, as the elevation component is unconstrained, following the odometry drift. The obtained results are presented in Tab. I.
Tab. I compares the per-trial $t_{2D}$ error of TSIF [23], HL-C [4], HL-U [17], and HL-T.
The worst results are produced by TSIF, the legged odometry estimator, which is both a baseline solution and a component of the MCL method in the remaining systems. Among the compared solutions, the proposed HL-T outperforms previous approaches by a large margin, reducing the APE values by almost 50%. We believe this stems from the fact that our method is not constrained to a limited number of discrete classes like HL-C, while, unlike HL-U, it can learn an internal representation of the haptic signals that promotes discriminative features. Importantly, the error did not exceed 10 cm despite the lack of other sensing modalities, which may be sufficient to let an autonomous robot continue its operation despite a vision-based sensor failure or a sudden change in the environmental conditions.
D. Haptic localization with dense geometric and sparse latent map
Let us consider scenarios in which an accurate 2.5D map of the environment is available for localization purposes. For such scenarios, we have HL-G [3], utilizing the dense height map of the terrain for pose correction, HL-GC [4], utilizing both the geometry and terrain classification, and HL-GU [17], which uses the geometry and an unsupervised haptic latent representation. In these experiments, our HL-GT method is configured to use a dense elevation map and a sparse latent map. The results of the legged odometry estimator TSIF [23] were omitted, as it has already been proven that HL-G, HL-GC, and HL-GU outperform it in these trials. The results for both types of errors ($t_{2D}$, $t_{3D}$) are presented in Tab. II.
Tab. II compares the per-trial $t_{3D}$ and $t_{2D}$ errors of HL-G [3], HL-GC [4], HL-GU [17], and HL-GT. The proposed HL-GT provides the best results under both the $t_{3D}$ and $t_{2D}$ error metrics.
The obtained results for all considered methods show that the $t_{3D}$ and $t_{2D}$ errors almost match each other, proving that there is no significant drift in the elevation direction due to the availability of the dense elevation map. The proposed HL-GT outperforms the other solutions, which suggests that the latent representation trained with triplet loss is better suited for distinguishing between terrain locations than terrain classification (HL-GC) or unsupervised terrain representation/signal compression (HL-GU). Moreover, the haptic signal information encoding in HL-GT is complementary to the dense elevation map, as the method improves the performance of the bare geometric approach (HL-G).
E. Haptic localization with sparse geometric map
One of the advantages of the proposed solution is the ability to use elevation information even if only localization ground truth was available for training. This improvement significantly impacts the solution's practicality, as no survey-grade LiDAR is required to take advantage of the elevation data. Therefore, we decided to compare three solutions: HL-T, which uses solely haptic signals for localization; HL-GT, which uses a dense geometric map and haptic signals; and HL-ST, which uses a sparse geometric and latent map. The results are presented in Tab. III. The results show that the HL-T approach provides the best results in 2D. Still, its 3D error is unbounded, following the general drift of the legged odometry, making it impractical for any autonomous operation. On the other hand, HL-GT provides the most accurate 3D localization due to the dense geometric map. The proposed HL-ST is a good trade-off between these approaches, as its 2D and 3D errors are comparable with HL-T and HL-GT, while only using the haptics and localization from the first trial. We believe HL-ST is, therefore, a unique solution that may support legged robot autonomy in challenging, real-world applications.
Tab. III compares the per-trial $t_{3D}$ and $t_{2D}$ errors of HL-T, HL-GT, and HL-ST.
F. Inference time evaluation
Autonomous operation requires real-time processing with short inference times. We compared the average inference times of the proposed Signal Transformer network with the classification neural network from HL-C [4] and the unsupervised network from HL-U [17]. Inference times were measured on over 10,000 samples on an NVIDIA GeForce GTX 1050 Mobile GPU, matching a similarly capable GPU that can fit in a walking robot. The inference time of the Signal Transformer, being a part of the HL-T/HL-ST solutions, is one order of magnitude smaller than the inference times of the neural networks used in HL-C and HL-U. The observed gains stem from the reduced number of parameters of our network (45,992 parameters) compared to over 1 million parameters for the networks used in HL-C and HL-U. The transformer-based architecture proved to be more compact and suitable for on-board deployment in a robot than previously used solutions.
G. The size of the latent representation
The transformer-based architectures are known to be efficient, needing only a fraction of the resources (parameters and inference time) of other known architectures to achieve comparable results. Moreover, learning with triplet loss can train a representation with the desired characteristics. Therefore, we wanted to verify the embedding size required to achieve good localization results using the more challenging HL-T approach. The obtained APEs, depending on the chosen size of the embeddings, are presented in Fig. 6.
Fig. 6: The achieved 2D localization error $t_{2D}$ as a function of the trained embedding size for the proposed Signal Transformer architecture. We see no major difference in APE, even with a small embedding size.
Decreasing the embedding size from the original size of 256 did not affect the localization accuracy, suggesting that an embedding vector with a length as low as 2 contains enough information to distinguish between embeddings from multiple positions in a given environment. This result is of practical importance, as for a possible map containing 10 000 steps, the original size of 75 MB for embeddings with length 256 can be reduced to 1.4 MB using embeddings of size 2, which means a substantial reduction in the amount of stored data and a possibility to operate over a larger area. Shorter embeddings also result in the reduction of map matching times.
V. CONCLUSIONS
This work investigated how to employ triplet loss to train a Signal Transformer network that computes descriptive embeddings from haptic signals. The experiments indicate that the novel localization method employing these embeddings outperforms state-of-the-art haptic localization solutions (HL-C and HL-U) when only haptics are used. At the same time, the HL-GT variant achieves the lowest localization error (3D APE, compared to HL-GC and HL-GU) when a dense 2.5D map is used, due to an efficient representation of the haptic data. In contrast to previous works, we can build and use a sparse geometric map (HL-ST), resulting in a practical solution requiring only reference poses for the first robot run, while achieving results on par with solutions utilizing dense 2.5D maps. Moreover, we show that our network can process data 10 times faster than previous approaches and that we can reduce the embedding size from 256 to 2 without a significant impact on the localization error. As a result, we achieve a haptic localization method that is much more practical than state-of-the-art solutions.
Institute of Robotics and Machine Intelligence, Poznan University of Technology, Poznan, Poland [email protected] *M. R. Nowicki is supported by the Foundation for Polish Science (FNP).
Fig. 1: Haptic localization requires a distinctive representation of the foot/terrain interaction to distinguish between locations. We propose to train a transformer-based neural network with triplet loss to minimize the difference between embeddings for steps close to each other while maximizing this difference for steps further away.
Fig. 2: Overview of the training and processing pipelines for our Haptic Localization utilizing the Signal Transformer.
Fig. 3: The proposed Signal Transformer network that processes a time sequence of force/torque signals to generate a location-descriptive embedding.
Fig. 5: Sparse haptic map visualization. Dark blue points indicate footholds recorded during the mapping run. Each 2D foothold holds an embedding and a terrain elevation value. Color patches distinguish terrain classes in the PUTany dataset, but class labels are not used in our new approach.
TABLE I: Comparison of the 2D Absolute Pose Error (APE, in [m]) for localization with haptic sensing only. The new HL-T method achieved the lowest error on all sequences.
TABLE II: Comparison of the 3D and 2D Absolute Pose Error (APE, in [m]) for localization solutions utilizing both a prior dense geometric map and haptic terrain recognition.
TABLE III: Comparison of the 3D and 2D Absolute Pose Error (APE, in [m]) for localization without geometry (HL-T), with a dense geometric map (HL-GT), and with a sparse geometric map (HL-ST). HL-ST performs similarly to HL-GT in 2D and 3D without a tedious mapping phase.
Table IV contains the obtained results.

HL method            HL-C           HL-U           HL-T
Inference time [ms]  21.20 ± 2.94   30.72 ± 4.84   2.2 ± 0.28
TABLE IV: Inference time comparison between models used to process the haptic signal for localization purposes.
R. Zimroz, M. Hutter, M. Mistry, P. Stefaniak, K. Walas, and J. Wodecki, "Why should inspection robots be used in deep underground mines?" in Proceedings of the 27th International Symposium on Mine Planning and Equipment Selection - MPES 2018, E. Widzyk-Capehart, A. Hekmat, and R. Singhal, Eds. Cham: Springer International Publishing, 2019, pp. 497-507.
S. Chitta, P. Vernaza, R. Geykhman, and D. Lee, "Proprioceptive localization for a quadrupedal robot on known terrain," in IEEE International Conference on Robotics and Automation (ICRA), April 2007, pp. 4582-4587.
R. Buchanan, M. Camurri, and M. Fallon, "Haptic Sequential Monte Carlo Localization for Quadrupedal Locomotion in Vision-Denied Scenarios," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 2020.
R. Buchanan, J. Bednarek, M. Camurri, M. R. Nowicki, K. Walas, and M. Fallon, "Navigating by touch: haptic Monte Carlo localization via geometric sensing and terrain classification," Autonomous Robots, vol. 45, no. 6, pp. 843-857, 2021.
K. Ho, J. Keuper, F. Pfreundt, and M. Keuper, "Learning embeddings for image clustering: An empirical study of triplet loss approaches," in 25th International Conference on Pattern Recognition (ICPR), 2021, pp. 87-94.
J. Yu, C. Zhu, J. Zhang, Q. Huang, and D. Tao, "Spatial pyramid-enhanced NetVLAD with weighted triplet loss for place recognition," IEEE Transactions on Neural Networks and Learning Systems, vol. 31, no. 2, pp. 661-674, 2020.
M. H. Hoepflinger, C. D. Remy, M. Hutter, and R. Siegwart, "Haptic Terrain Classification on Natural Terrains for Legged Robots," in International Conference on Climbing and Walking Robots (CLAWAR), 2010, pp. 785-792.
X. A. Wu, T. M. Huh, R. Mukherjee, and M. Cutkosky, "Integrated Ground Reaction Force Sensing and Terrain Classification for Small Legged Robots," IEEE Robotics and Automation Letters, vol. 1, no. 2, pp. 1125-1132, 2016.
J. Bednarek, M. Bednarek, P. Kicki, and K. Walas, "Robotic Touch: Classification of Materials for Manipulation and Walking," in IEEE International Conference on Soft Robotics (RoboSoft), 2019, pp. 527-533.
H. Kolvenbach, C. Bärtschi, L. Wellhausen, R. Grandia, and M. Hutter, "Haptic Inspection of Planetary Soils With Legged Robots," IEEE Robotics and Automation Letters, vol. 4, no. 2, pp. 1626-1632, 2019.
J. Bednarek, M. Bednarek, L. Wellhausen, M. Hutter, and K. Walas, "What am I touching? Learning to classify terrain via haptic sensing," in IEEE International Conference on Robotics and Automation (ICRA), May 2019, pp. 7187-7193.
M. Bednarek, M. Łysakowski, J. Bednarek, M. R. Nowicki, and K. Walas, "Fast haptic terrain classification for legged robots using transformer," in 2021 European Conference on Mobile Robots (ECMR), 2021.
A. Ahmadi, T. Nygaard, N. Kottege, D. Howard, and N. Hudson, "Semi-Supervised Gated Recurrent Neural Networks for Robotic Terrain Classification," IEEE Robotics and Automation Letters, vol. 6, no. 2, pp. 1848-1855, 2021.
J. Lee, J. Hwangbo, L. Wellhausen, V. Koltun, and M. Hutter, "Learning quadrupedal locomotion over challenging terrain," Science Robotics, vol. 5, no. 47, 2020.
S. Gangapurwala, M. Geisert, R. Orsolino, M. Fallon, and I. Havoutis, "RLOC: Terrain-aware legged locomotion using reinforcement learning and optimal control," IEEE Transactions on Robotics, pp. 1-20, 2022.
Y. Ma, F. Farshidian, T. Miki, J. Lee, and M. Hutter, "Combining learning-based locomotion policy with model-based manipulation for legged mobile manipulators," IEEE Robotics and Automation Letters, vol. 7, no. 2, pp. 2377-2384, 2022.
M. Łysakowski, M. R. Nowicki, R. Buchanan, M. Camurri, M. Fallon, and K. Walas, "Unsupervised Learning of Terrain Representations for Haptic Monte Carlo Localization," in 2022 International Conference on Robotics and Automation (ICRA), 2022, pp. 4642-4648.
A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, "Attention is All you Need," in Advances in Neural Information Processing Systems (NIPS), vol. 30, 2017.
A. Hermans, L. Beyer, and B. Leibe, "In Defense of the Triplet Loss for Person Re-Identification," 2017. [Online]. Available: https://arxiv.org/abs/1703.07737
I. Loshchilov and F. Hutter, "Decoupled weight decay regularization," in 7th International Conference on Learning Representations (ICLR), New Orleans, LA, USA, May 6-9, 2019.
M. Ramezani, G. Tinchev, E. Iuganov, and M. Fallon, "Online LiDAR-SLAM for legged robots with robust registration and deep-learned loop closure," in IEEE International Conference on Robotics and Automation (ICRA), 2020, pp. 4158-4164.
M. Grupp, "evo: Python package for the evaluation of odometry and SLAM," https://github.com/MichaelGrupp/evo, 2017.
M. Bloesch, M. Burri, H. Sommer, R. Siegwart, and M. Hutter, "The Two-State Implicit Filter Recursive Estimation for Mobile Robots," IEEE Robotics and Automation Letters, vol. 3, no. 1, pp. 573-580, Jan 2018. | [
"https://github.com/dmn-sjk/signal_transformer",
"https://github.com/MichaelGrupp/evo,"
] |
[
"Numerical Modelling of the Brain Poromechanics by High-Order Discontinuous Galerkin Methods *",
"Numerical Modelling of the Brain Poromechanics by High-Order Discontinuous Galerkin Methods *"
] | [
"Mattia Corti \nMOX-Dipartimento di Matematica\nPolitecnico di Milano\nPiazza Leonardo da Vinci 3220133MilanItaly\n",
"Paola F Antonietti \nMOX-Dipartimento di Matematica\nPolitecnico di Milano\nPiazza Leonardo da Vinci 3220133MilanItaly\n",
"Luca Dede' \nMOX-Dipartimento di Matematica\nPolitecnico di Milano\nPiazza Leonardo da Vinci 3220133MilanItaly\n",
"Alfio Maria Quarteroni \nMOX-Dipartimento di Matematica\nPolitecnico di Milano\nPiazza Leonardo da Vinci 3220133MilanItaly\n\nInstitute of Mathematics\nÉcole Polytechnique Fédérale de Lausanne\nStation 8, Av. PiccardCH-1015LausanneSwitzerland\n"
] | [
"MOX-Dipartimento di Matematica\nPolitecnico di Milano\nPiazza Leonardo da Vinci 3220133MilanItaly",
"MOX-Dipartimento di Matematica\nPolitecnico di Milano\nPiazza Leonardo da Vinci 3220133MilanItaly",
"MOX-Dipartimento di Matematica\nPolitecnico di Milano\nPiazza Leonardo da Vinci 3220133MilanItaly",
"MOX-Dipartimento di Matematica\nPolitecnico di Milano\nPiazza Leonardo da Vinci 3220133MilanItaly",
"Institute of Mathematics\nÉcole Polytechnique Fédérale de Lausanne\nStation 8, Av. PiccardCH-1015LausanneSwitzerland"
] | [] | We introduce and analyze a discontinuous Galerkin method for the numerical modelling of the equations of Multiple-Network Poroelastic Theory (MPET) in the dynamic formulation. The MPET model can comprehensively describe functional changes in the brain considering multiple scales of fluids. Concerning the spatial discretization, we employ a high-order discontinuous Galerkin method on polygonal and polyhedral grids and we derive stability and a priori error estimates. The temporal discretization is based on a coupling between a Newmark β-method for the momentum equation and a θ-method for the pressure equations. After the presentation of some verification numerical tests, we perform a convergence analysis using an agglomerated mesh of a geometry of a brain slice. Finally we present a simulation in a three dimensional patient-specific brain reconstructed from magnetic resonance images. The model presented in this paper can be regarded as a preliminary attempt to model the perfusion in the brain. | 10.1142/s0218202523500367 | [
"https://export.arxiv.org/pdf/2210.02272v2.pdf"
] | 252,715,512 | 2210.02272 | 31665e7698c99aafa4709bfe8164a071026a0ee1 |
Numerical Modelling of the Brain Poromechanics by High-Order Discontinuous Galerkin Methods
March 20, 2023
Mattia Corti, Paola F. Antonietti, Luca Dede', Alfio Maria Quarteroni (Professor Emeritus)
MOX-Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milan, Italy
Institute of Mathematics, École Polytechnique Fédérale de Lausanne, Station 8, Av. Piccard, CH-1015 Lausanne, Switzerland
We introduce and analyze a discontinuous Galerkin method for the numerical modelling of the equations of Multiple-Network Poroelastic Theory (MPET) in the dynamic formulation. The MPET model can comprehensively describe functional changes in the brain considering multiple scales of fluids. Concerning the spatial discretization, we employ a high-order discontinuous Galerkin method on polygonal and polyhedral grids and we derive stability and a priori error estimates. The temporal discretization is based on a coupling between a Newmark β-method for the momentum equation and a θ-method for the pressure equations. After the presentation of some verification numerical tests, we perform a convergence analysis using an agglomerated mesh of a geometry of a brain slice. Finally we present a simulation in a three dimensional patient-specific brain reconstructed from magnetic resonance images. The model presented in this paper can be regarded as a preliminary attempt to model the perfusion in the brain.
Introduction
Poroelasticity models the interaction between fluid flow and elastic deformations in porous media. The precursor Biot's equations [1] are able to correctly model such physical problems; however, complete and detailed modelling sometimes requires splitting the fluid component into multiple distinct network fields [2]. Although the multiple-network poroelastic (MPET) model was initially applied to soil mechanics [3], the separation of fluid networks has more recently been proposed in the context of biological flows. Indeed, to model blood perfusion, it is essential to separate the vascular network into its fundamental components (arteries, capillaries, and veins). This is relevant in the modelling of both the heart [4,5] and the brain [6,7].
In the context of neurophysiology, where blood constantly perfuses the brain and provides oxygen to neurons, the multiple network porous media models have been used to study circulatory diseases, such as ischaemic stroke [8,9]. The cerebrospinal fluid (CSF) that surrounds the brain parenchyma is related to disorders of the central nervous system (CNS), such as hydrocephalus [10,11], and plays a role in CNS clearance, particularly important in Alzheimer's disease, which is strongly linked to the accumulation of misfolded proteins, such as amyloid beta (Aβ) [12,13,14,15].
Although the MPET equations find application in different physical contexts, to the best of our knowledge, a complete analysis of the numerical discretization in the dynamic case is still missing. Concerning the discretization of the quasi-static MPET equations, some works proposed an analysis using both the Mixed Finite Element Method [16,17] and the Hybrid High-Order (HHO) method [18]. The quasi-static version neglects the second-order time derivative of the displacement in the momentum balance equation. The physical meaning of neglecting this term is that inertial forces have a small impact on the evolution of the fields. However, this term ought to be considered in the application to brain physiology, because of the strong impact of systolic pressure variations on the vascular and tissue deformation [6]. In the context of applications, the fully-dynamic system has been applied to model the effects of aqueductal stenosis [10]. From a numerical perspective, the discretization of second-order time-dependent problems is challenging. In this work, the time discretization scheme applies a Newmark-β method [19] for the momentum equation. Due to the system structure, the continuity equation for each pressure field requires a temporal discretization method for first-order ODEs; we choose a θ-method in this work.
In terms of accuracy, to guarantee low numerical dispersion and dissipation errors, high-order discretization methods are required, cf. for example [20]. In this work, for the space discretization, we propose a high-order Discontinuous Galerkin formulation on polygonal/polyhedral grids (PolyDG). The PolyDG methods are naturally oriented to high-order approximations. Another strength of the proposed formulation is its flexibility in mesh generation, due to its applicability to polygonal/polyhedral meshes. Indeed, the geometrical complexity of the brain is one of the challenges that need to be considered. The possibility of refining the mesh only in some regions, handling hanging nodes, and possibly using non-tetrahedral elements is easy to accommodate in our approach. For all these reasons, intense research has been undertaken on this topic [21,22,23,24,25], in particular concerning porous media and elasticity in the context of geophysical applications [26,27,28]. Moreover, PolyDG methods exhibit low numerical dispersion and dissipation errors, as recently shown in [29] for the elastodynamics equations.
The paper is organized as follows. Section 2 introduces the mathematical model of MPET, also proposing some changes to adapt it to brain physiology. In Section 3, we introduce the PolyDG space discretization of the problem. In Section 4, we prove stability of the semi-discretized MPET system in a suitable (mesh-dependent) norm. Section 5 is devoted to the proof of a priori error estimates for the semi-discretized MPET problem. In Section 6, we introduce a temporal discretization by means of Newmark-β and θ-methods. In Section 7, we show some numerical results considering convergence tests with analytical solutions; moreover, we present some realistic simulations in physiological conditions. Finally, in Section 8, we draw some conclusions.
The mathematical model
In this section, we present the multiple-network poroelasticity system of equations. We consider a given set of labels J, whose cardinality $|J| \in \mathbb{N}$ corresponds to the number of fluid networks. The problem depends on time $t \in (0, T]$ and space $x \in \Omega \subset \mathbb{R}^d$ (d = 2, 3). The unknowns of our problem are the displacement u = u(x, t) and the network pressures $p_j = p_j(x, t)$ for $j \in J$. The problem reads as follows: Find u = u(x, t) and $p_j = p_j(x, t)$ such that:
$$\begin{cases} \rho \dfrac{\partial^2 u}{\partial t^2} - \nabla \cdot \sigma_E(u) + \displaystyle\sum_{k \in J} \alpha_k \nabla p_k = f, & \text{in } \Omega \times (0, T],\\ c_j \dfrac{\partial p_j}{\partial t} + \nabla \cdot \left( \alpha_j \dfrac{\partial u}{\partial t} - \dfrac{K_j}{\mu_j} \nabla p_j \right) + \displaystyle\sum_{k \in J} \beta_{jk}(p_j - p_k) + \beta^e_j p_j = g_j, & \text{in } \Omega \times (0, T] \quad \forall j \in J. \end{cases} \qquad (1)$$
In Equation (1), we denote the tissue density by $\rho$, the elastic stress tensor by $\sigma_E$, and the volume force by f. Moreover, for the j-th fluid network, we prescribe a Biot-Willis coefficient $\alpha_j$, a storage coefficient $c_j$, a fluid viscosity $\mu_j$, a permeability tensor $K_j$, an external coupling coefficient $\beta^e_j$, and a body force $g_j$. Finally, we have a coupling transfer coefficient $\beta_{jk}$ for each couple of fluid networks $(j, k) \in J \times J$.
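To make the coupling structure explicit, the inter-network transfer term of Equation (1) can be evaluated pointwise as in the following sketch (our illustration, not part of the paper's code):

```python
import numpy as np

def network_exchange(p, beta):
    """Pointwise transfer term sum_k beta_jk (p_j - p_k) of Eq. (1).
    p: pressures of the |J| networks, shape (J,); beta: coupling matrix, shape (J, J)."""
    return (beta * (p[:, None] - p[None, :])).sum(axis=1)  # one value per network j
```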
Assumption 1 (Coefficients' regularity). In this work, we assume the following regularity of the coefficients and the forcing terms:
• $\rho \in L^\infty(\Omega)$.
• $f \in L^2((0, T], L^2(\Omega, \mathbb{R}^d))$.
• $\alpha_j \in L^\infty(\Omega)$ and $K_j \in L^\infty(\Omega, \mathbb{R}^{d \times d})$ for any $j \in J$.
• $c_j > 0$, $\mu_j > 0$, and $\beta^e_j \in L^\infty(\Omega)$ for any $j \in J$.
• $g_j \in L^2((0, T], L^2(\Omega))$ for any $j \in J$.
• $\beta_{jk} \in L^\infty(\Omega)$ for each couple of fluid networks $(j, k) \in J \times J$.
More detailed information about the derivation of this problem can be found in [6]. For the purpose of brain poromechanics modelling, we introduce two main modifications to the model:
• In the derivation we use a static form of the Darcy flow, as in [3]:
\[
w_j = -\frac{K_j}{\mu_j}\nabla p_j. \tag{2}
\]
This is considered a good approximation for medium-speed phenomena; indeed, in brain fluid dynamics the fluid velocities do not reach large values.
• We add to the equation a reaction term β_j^e p_j for each fluid network j. Indeed, since we aim at simulating brain perfusion, we use a diffuse discharge in the venular compartment of the form
\[
\beta_V^e\,(p_V - \bar{p}_{\mathrm{Veins}}), \tag{3}
\]
where \(\bar{p}_{\mathrm{Veins}}\) is the pressure of the large veins; the corresponding constant contribution is included in g_V in the abstract formulation. This mimics what is proposed in the context of heart perfusion [30].
It is important to notice that in each infinitesimal volume element both the solid component and the fluid networks coexist, as represented in Figure 1.
We assume small deformations, i.e. a linear elastic constitutive relation for the tissue [31]:
\[
\sigma_E(u) = \mathbb{C}_E[\varepsilon(u)] = 2\mu\,\varepsilon(u) + \lambda(\nabla\cdot u)I, \tag{4}
\]
where µ ∈ L^∞(Ω) and λ ∈ L^∞(Ω) are the Lamé parameters, I is the second-order identity tensor, and \(\varepsilon(u) = \tfrac12(\nabla u + \nabla^T u)\) is the symmetric part of the displacement gradient. Moreover, having defined S as the space of second-order symmetric tensors, \(\mathbb{C}_E : S \to S\) is the fourth-order stiffness tensor. This assumption allows us to neglect the distinction between the actual configuration Ω_t and the reference one \(\hat{\Omega}\); for this reason, we consider \(\Omega = \hat{\Omega} \simeq \Omega_t\), as in Equation (1).
We supplement Equation (1) with suitable boundary and initial conditions. Concerning the initial conditions, due to the second-order time derivative we need to impose both a displacement u_0 and a velocity v_0; moreover, we need an initial pressure p_{j0} for each fluid network j ∈ J. The strong formulation reads:
\[
\begin{cases}
\rho\dfrac{\partial^2 u}{\partial t^2} - \nabla\cdot\sigma_E(u) + \sum_{k\in J}\alpha_k\nabla p_k = f, & \text{in } \Omega\times(0,T],\\
c_j\dfrac{\partial p_j}{\partial t} + \nabla\cdot\Big(\alpha_j\dfrac{\partial u}{\partial t} - \dfrac{K_j}{\mu_j}\nabla p_j\Big) + \sum_{k\in J}\beta_{jk}(p_j - p_k) + \beta_j^e p_j = g_j, & \text{in } \Omega\times(0,T]\ \forall j\in J,\\
\sigma_E(u)\,n - \sum_{k\in J}\alpha_k p_k n = h_u, & \text{on } \Gamma_N\times(0,T],\\
\dfrac{K_j}{\mu_j}\nabla p_j\cdot n = h_j, & \text{on } \Gamma^j_N\times(0,T]\ \forall j\in J,\\
u = u_D, & \text{on } \Gamma_D\times(0,T],\\
p_j = p^D_j, & \text{on } \Gamma^j_D\times(0,T]\ \forall j\in J,\\
u(0) = u_0,\quad \dfrac{\partial u}{\partial t}(0) = v_0, & \text{in } \Omega,\\
p_j(0) = p_{j0}, & \text{in } \Omega\ \forall j\in J.
\end{cases}\tag{5}
\]
Weak formulation
In order to introduce a numerical approximation of Equation (5), we first turn to its variational formulation. Let us consider a subset Γ_D ⊂ ∂Ω with positive measure |Γ_D| > 0; then we define the Sobolev space \(V = H^1_{\Gamma_D}(\Omega;\mathbb{R}^d)\) as
\[
H^1_{\Gamma_D}(\Omega;\mathbb{R}^d) := \{v\in H^1(\Omega;\mathbb{R}^d) : v|_{\Gamma_D} = 0\}. \tag{6}
\]
Analogously, for a subset Γ^j_D ⊂ ∂Ω with positive measure |Γ^j_D| > 0, j ∈ J, we define the Sobolev space \(Q_j = H^1_{\Gamma^j_D}(\Omega)\) as
\[
H^1_{\Gamma^j_D}(\Omega) := \{q_j\in H^1(\Omega) : q_j|_{\Gamma^j_D} = 0\}. \tag{7}
\]
Moreover, we employ the standard definition of the scalar product in L²(Ω), denoted by (·,·)_Ω; the induced norm is denoted by ||·||_Ω. For vector-valued and tensor-valued functions the definition extends componentwise [32]. Finally, given k ∈ ℕ and a Hilbert space H, we use the notation C^k([0,T]; H) to denote the space of functions u = u(x,t) that are k-times continuously differentiable with respect to time and such that u(·,t) ∈ H for each t ∈ [0,T]; see e.g. [32].
The problem can also be rewritten in an abstract form using the following definitions:
• a : V × V → ℝ is a bilinear form such that
\[
a(u,v) = 2\mu\,(\varepsilon(u), \varepsilon(v))_\Omega + \lambda\,(\nabla\cdot u, \nabla\cdot v)_\Omega \quad \forall u, v\in V; \tag{8}
\]
• b_j : Q_j × V → ℝ is a bilinear form such that
\[
b_j(q_j, v) = \alpha_j\,(q_j, \nabla\cdot v)_\Omega \quad \forall q_j\in Q_j,\ \forall v\in V; \tag{9}
\]
• F : V → ℝ is a linear functional such that
\[
F(v) = (f, v)_\Omega + (h_u, v)_{\Gamma_N} \quad \forall v\in V; \tag{10}
\]
• s_j : Q_j × Q_j → ℝ is a bilinear form such that
\[
s_j(p_j, q_j) = \Big(\frac{K_j}{\mu_j}\nabla p_j, \nabla q_j\Big)_\Omega \quad \forall p_j, q_j\in Q_j; \tag{11}
\]
• C_j : ×_{k∈J} Q_k × Q_j → ℝ is a bilinear form such that
\[
C_j((p_k)_{k\in J}, q_j) = \sum_{k\in J}(\beta_{jk}(p_j - p_k), q_j)_\Omega + (\beta_j^e p_j, q_j)_\Omega; \tag{12}
\]
• G_j : Q_j → ℝ is a linear functional such that
\[
G_j(q_j) = (g_j, q_j)_\Omega + (h_j, q_j)_{\Gamma^j_N} \quad \forall q_j\in Q_j. \tag{13}
\]
The weak formulation of problem (5) then reads: find u(t) ∈ V and p_j(t) ∈ Q_j, j ∈ J, such that ∀t > 0:
\[
\begin{cases}
\Big(\rho\dfrac{\partial^2 u(t)}{\partial t^2}, v\Big)_\Omega + a(u(t), v) - \sum_{k\in J} b_k(p_k(t), v) = F(v) & \forall v\in V,\\[1mm]
\Big(c_j\dfrac{\partial p_j}{\partial t}, q_j\Big)_\Omega + b_j\Big(q_j, \dfrac{\partial u}{\partial t}\Big) + s_j(p_j, q_j) + C_j((p_k)_{k\in J}, q_j) = G_j(q_j) & \forall q_j\in Q_j,\ j\in J,\\
u(0) = u_0,\quad \dfrac{\partial u}{\partial t}(0) = v_0,\quad p_j(0) = p_{j0} & \text{in } \Omega,\ j\in J,\\
u(t) = u_D(t) \text{ on } \Gamma_D,\quad p_j(t) = p^D_j(t) \text{ on } \Gamma^j_D, & j\in J.
\end{cases}\tag{14}
\]
The complete derivation of this formulation is reported in Appendix A.
3 PolyDG semi-discrete formulation
Let us introduce a polytopic mesh partition T_h of the domain Ω made of polygonal/polyhedral elements K such that
\[
\forall K_i, K_j\in\mathcal{T}_h:\ |K_i\cap K_j| = 0 \ \text{ if } i\neq j, \qquad \bigcup_{j}\overline{K}_j = \overline{\Omega},
\]
where, for each element K ∈ T_h, we denote by |K| the measure of the element and by h_K its diameter, and we set h = max_{K∈T_h} h_K. We then define the interfaces as the intersections of the (d−1)-dimensional facets of two neighbouring elements. We distinguish two cases:
• case d = 3, in which an interface is a generic polygon; we further assume that each interface can be decomposed into triangles, and we denote the set of these triangles by F_h;
• case d = 2, in which the interfaces are line segments; we denote the set of such segments by F_h.
It is now useful to subdivide the set F_h into the union of interior faces F^I_h and exterior faces F^B_h lying on the boundary of the domain ∂Ω: F_h = F^I_h ∪ F^B_h. Moreover, the boundary-face set can be split according to the type of boundary condition imposed on the tissue displacement: F^B_h = F^D_h ∪ F^N_h, where F^D_h and F^N_h are the boundary faces contained in Γ_D and Γ_N, respectively. Implicit in this decomposition is the assumption that T_h is aligned with Γ_D and Γ_N, i.e. any F ∈ F^B_h is contained in either Γ_D or Γ_N. The same splitting can be done according to the type of boundary condition imposed on the generic j-th fluid network: F^B_h = F^{D_j}_h ∪ F^{N_j}_h, where F^{D_j}_h and F^{N_j}_h are the boundary faces contained in Γ^j_D and Γ^j_N, respectively; again, T_h is assumed to be aligned with Γ^j_D and Γ^j_N. Let us define P_s(K) as the space of polynomials of degree at most s over a mesh element K. Then we can introduce the following discontinuous finite element spaces:
\[
Q^{DG}_h = \{q\in L^2(\Omega) : q|_K\in\mathcal{P}_q(K)\ \forall K\in\mathcal{T}_h\}, \qquad
V^{DG}_h = \{w\in L^2(\Omega;\mathbb{R}^d) : w|_K\in[\mathcal{P}_p(K)]^d\ \forall K\in\mathcal{T}_h\},
\]
where p ≥ 1 and q ≥ 1 are polynomial orders, which can be different in principle.
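As an aside for readers implementing the method: in the FEniCS software used for the simplicial tests of Section 7 (which does not support polygonal elements), the pair of discrete spaces can be instantiated as in the following minimal sketch; the mesh and the orders p and q are illustrative choices, not values prescribed in this paper.

```python
# Minimal sketch (legacy FEniCS 2019): the discrete spaces V_h^DG and Q_h^DG on a
# simplicial mesh. The mesh and the orders p, q below are illustrative only.
from dolfin import UnitCubeMesh, VectorFunctionSpace, FunctionSpace

mesh = UnitCubeMesh(8, 8, 8)   # placeholder for the actual (e.g. brain) mesh
p, q = 2, 1                    # p >= 1 for the displacement, q >= 1 for the pressures

V_DG = VectorFunctionSpace(mesh, "DG", p)  # elementwise [P_p]^d, fully discontinuous
Q_DG = FunctionSpace(mesh, "DG", q)        # elementwise P_q, fully discontinuous
```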
Finally, we introduce some assumptions on T h .
Definition 1 (Polytopic regular mesh). Let T_h be a mesh; we say that it is polytopic regular if
\[
\forall K\in\mathcal{T}_h\ \exists\{S^F_K\}_{F\subset\partial K} \ \text{ such that } \ \forall F\subset\partial K:\quad F = \partial K\cap\partial S^F_K \quad\text{and}\quad h_K \lesssim d\,|S^F_K|\,|F|^{-1},
\]
where \(\{S^F_K\}_{F\subset\partial K}\) is a set of non-overlapping d-dimensional simplices contained in K and h_K is the diameter of the element K. We remark that the union of the simplices \(\{S^F_K\}_{F\subset\partial K}\) does not have to cover, in general, the whole element K, that is, \(\bigcup_{F\subset\partial K} S^F_K \subseteq \overline{K}\).
Assumption 2. The mesh sequence {T_h}_h satisfies the following properties:
1. {T_h}_{h>0} is uniformly polytopic regular;
2. for each T_h ∈ {T_h}_h there exists a shape-regular, simplicial covering \(\widehat{\mathcal{T}}_h\) of T_h such that for each pair K ∈ T_h and \(\widehat{K}\in\widehat{\mathcal{T}}_h\) with \(\widehat{K}\subset K\) it holds: (a) \(h_K \lesssim h_{\widehat{K}}\); (b) \(\max_{K\in\mathcal{T}_h}\mathrm{card}\{K'\in\mathcal{T}_h : K'\cap\widehat{K}\neq\emptyset,\ \widehat{K}\in\widehat{\mathcal{T}}_h,\ \widehat{K}\subseteq K\} \lesssim 1\);
3. a local bounded-variation property holds for the local mesh sizes:
\[
\forall F\in\mathcal{F}_h,\ F\subset\partial K_1\cap\partial K_2,\ K_1, K_2\in\mathcal{T}_h \quad\Longrightarrow\quad h_{K_1} \lesssim h_{K_2} \lesssim h_{K_1},
\]
where the hidden constants are independent of both the discretization parameters and the number of faces of K_1 and K_2.
We next introduce the so-called trace operators [33]. Let F ∈ F^I_h be a face shared by the elements K^±, and let n^± be the unit normal vector on F pointing exterior to K^±, respectively. Then, for sufficiently regular scalar-valued functions q, vector-valued functions v and tensor-valued functions τ, we define:
• the average operator {{·}} on F ∈ F^I_h:
\[
\{\{q\}\} = \tfrac12(q^+ + q^-), \qquad \{\{v\}\} = \tfrac12(v^+ + v^-), \qquad \{\{\tau\}\} = \tfrac12(\tau^+ + \tau^-), \tag{15}
\]
• the jump operator [[·]] on F ∈ F^I_h:
\[
[[q]] = q^+ n^+ + q^- n^-, \qquad [[v]] = v^+\cdot n^+ + v^-\cdot n^-, \qquad [[\tau]] = \tau^+ n^+ + \tau^- n^-, \tag{16}
\]
• the jump operator [[[·]]] on F ∈ F^I_h for a vector-valued function:
\[
[[[v]]] = \tfrac12(v^+\otimes n^+ + n^+\otimes v^+) + \tfrac12(v^-\otimes n^- + n^-\otimes v^-), \tag{17}
\]
where the result is a tensor in \(\mathbb{R}^{d\times d}_{\mathrm{sym}}\). In these relations the superscripts ± on the functions denote the traces of the functions on F taken from the interior of K^±.
In the same way, we can define analogous operators on a face F ∈ F^B_h associated with the cell K ∈ T_h, with n the outward unit normal on ∂Ω:
• the average operator {{·}} on F ∈ F^B_h:
\[
\{\{q\}\} = q, \qquad \{\{v\}\} = v, \qquad \{\{\tau\}\} = \tau, \tag{18}
\]
• the standard jump operator [[·]] on F ∈ F^B_h not belonging to a Dirichlet boundary:
\[
[[q]] = q\,n, \qquad [[v]] = v\cdot n, \qquad [[\tau]] = \tau\,n, \tag{19}
\]
• the jump operator [[·]] on F ∈ F^B_h belonging to a Dirichlet boundary, with Dirichlet data g, g and γ:
\[
[[q]] = (q - g)\,n, \qquad [[v]] = (v - g)\cdot n, \qquad [[\tau]] = (\tau - \gamma)\,n, \tag{20}
\]
• the jump operator [[[·]]] on F ∈ F^B_h for a vector-valued function not belonging to a Dirichlet boundary:
\[
[[[v]]] = \tfrac12(v\otimes n + n\otimes v), \tag{21}
\]
• the jump operator [[[·]]] on F ∈ F^B_h for a vector-valued function belonging to a Dirichlet boundary, with Dirichlet datum g:
\[
[[[v]]] = \tfrac12\big((v - g)\otimes n + n\otimes(v - g)\big). \tag{22}
\]
We recall the following identity, which will be useful in the derivation of the method:
\[
[[q\,v]] = [[v]]\{\{q\}\} + \{\{v\}\}\cdot[[q]] \qquad \forall F\in\mathcal{F}^I_h. \tag{23}
\]
Finally, we also recall the following identities [34,35], valid for \(\tau\in L^2(\Omega;\mathbb{R}^{d\times d}_{\mathrm{sym}})\), \(v\in H^1(\Omega;\mathbb{R}^d)\), and \(q\in H^1(\Omega)\):
\[
\sum_{K\in\mathcal{T}_h}\int_{\partial K} q\,v\cdot n_K = \sum_{F\in\mathcal{F}_h}\int_F \{\{v\}\}\cdot[[q]] + \sum_{F\in\mathcal{F}^I_h}\int_F \{\{q\}\}\,[[v]], \tag{24}
\]
\[
\sum_{K\in\mathcal{T}_h}\int_{\partial K} v\cdot(\tau\,n_K) = \sum_{K\in\mathcal{T}_h}\int_{\partial K} \tau:(v\otimes n_K) = \sum_{F\in\mathcal{F}_h}\int_F \{\{\tau\}\}:[[[v]]] + \sum_{F\in\mathcal{F}^I_h}\int_F \{\{v\}\}\cdot[[\tau]], \tag{25}
\]
where n_K is the outward unit normal vector to the cell K.
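As an illustration of how these operators enter an implementation, the scalar average and jump above correspond to the built-in UFL operators avg and jump. The following sketch assembles the interior-face part of an SIP-type pressure form of the kind used in (29) below, under simplifying assumptions (isotropic K_j = k_j I, a constant penalty, Dirichlet boundary faces omitted); it is a hedged sketch, not the paper's full formulation.

```python
# Sketch: interior-face terms of an SIP bilinear form in UFL (legacy FEniCS).
# Assumptions: K_j = k_j*I, constant penalty, boundary-face terms omitted.
from dolfin import (UnitSquareMesh, FunctionSpace, TrialFunction, TestFunction,
                    FacetNormal, Constant, avg, jump, grad, dot, dx, dS, assemble)

mesh = UnitSquareMesh(16, 16)
Q = FunctionSpace(mesh, "DG", 1)
p_, q_ = TrialFunction(Q), TestFunction(Q)
n = FacetNormal(mesh)
k_over_mu = Constant(1.0)   # placeholder for K_j / mu_j
zeta = Constant(10.0)       # placeholder penalty zeta_j, "large enough"

a_P = (k_over_mu * dot(grad(p_), grad(q_)) * dx
       - k_over_mu * dot(avg(grad(p_)), jump(q_, n)) * dS   # consistency term
       - k_over_mu * dot(avg(grad(q_)), jump(p_, n)) * dS   # symmetry term
       + zeta * dot(jump(p_, n), jump(q_, n)) * dS)         # penalty term
A = assemble(a_P)  # sparse matrix of the interior-face SIP operator
```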
Semi-discrete formulation
To construct the semi-discrete formulation, we define the penalization functions η : F_h → ℝ⁺ and ζ_j : F_h → ℝ⁺ for each j ∈ J, which are face-wise defined as:
\[
\eta = \eta_0\,\bar{C}^E_K
\begin{cases}
\dfrac{p^2}{\{h\}_H}, & \text{on } F\in\mathcal{F}^I_h,\\[1mm]
\dfrac{p^2}{h}, & \text{on } F\in\mathcal{F}^D_h,
\end{cases}
\qquad
\zeta_j = z_j\,\dfrac{\bar{k}^j_K}{\sqrt{\mu_j}}
\begin{cases}
\dfrac{q^2}{\{h\}_H}, & \text{on } F\in\mathcal{F}^I_h,\\[1mm]
\dfrac{q^2}{h}, & \text{on } F\in\mathcal{F}^B_h,
\end{cases}\tag{26}
\]
where {·}_H denotes the harmonic average operator on K^±, \(\bar{C}^E_K = \|\sqrt{\mathbb{C}_E|_K}\|_2^2\) and \(\bar{k}^j_K = \|\sqrt{K_j|_K}\|_2^2\) for any K ∈ T_h¹, and η_0 and z_j are parameters at our disposal (to be chosen large enough). The parameters z_j need to be chosen appropriately, in particular for small values of \(\bar{k}^j_K\), which are typical in applications.
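To fix ideas, on an interior face the evaluation of (26) only requires the diameters of the two neighbouring elements through their harmonic average; a small plain-Python sketch (all inputs being illustrative placeholders) is:

```python
# Sketch: face-wise evaluation of the penalty eta in (26). All inputs
# (eta0, C_E_bar, p, element diameters) are illustrative placeholders.
def harmonic_average(a: float, b: float) -> float:
    return 2.0 * a * b / (a + b)

def eta_interior_face(eta0: float, C_E_bar: float, p: int,
                      hK_plus: float, hK_minus: float) -> float:
    # eta = eta0 * C_E_bar * p^2 / {h}_H on interior faces
    return eta0 * C_E_bar * p**2 / harmonic_average(hK_plus, hK_minus)

def eta_dirichlet_face(eta0: float, C_E_bar: float, p: int, hK: float) -> float:
    # eta = eta0 * C_E_bar * p^2 / h on Dirichlet boundary faces
    return eta0 * C_E_bar * p**2 / hK
```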
Moreover, we need to define the following bilinear forms:
• \(A_E : V^{DG}_h\times V^{DG}_h\to\mathbb{R}\) is a bilinear form such that
\[
A_E(u, v) = \int_\Omega \sigma_E(u):\nabla_h v + \sum_{F\in\mathcal{F}^I_h\cup\mathcal{F}^D_h}\int_F \big(\eta\,[[[u]]]:[[[v]]] - \{\{\sigma_E(u)\}\}:[[[v]]] - [[[u]]]:\{\{\sigma_E(v)\}\}\big)\,d\sigma, \tag{27}
\]
for all \(u, v\in V^{DG}_h\);
• \(B_j : Q^{DG}_h\times V^{DG}_h\to\mathbb{R}\) is a bilinear form for any j ∈ J such that
\[
B_j(p_j, v) = \int_\Omega \alpha_j\,p_j\,(\nabla_h\cdot v) - \sum_{F\in\mathcal{F}^I_h\cup\mathcal{F}^{D_j}_h}\int_F \alpha_j\{\{p_j I\}\}:[[[v]]]\,d\sigma \qquad \forall p_j\in Q^{DG}_h,\ \forall v\in V^{DG}_h; \tag{28}
\]
• \(A_{P_j} : Q^{DG}_h\times Q^{DG}_h\to\mathbb{R}\) is a bilinear form such that
\[
\begin{aligned}
A_{P_j}(p_j, q_j) ={}& \int_\Omega \frac{K_j}{\mu_j}\nabla_h p_j\cdot\nabla_h q_j - \sum_{F\in\mathcal{F}^I_h\cup\mathcal{F}^{D_j}_h}\int_F \frac{1}{\mu_j}\{\{K_j\nabla_h p_j\}\}\cdot[[q_j]]\\
&- \sum_{F\in\mathcal{F}^I_h\cup\mathcal{F}^{D_j}_h}\int_F \frac{1}{\mu_j}\{\{K_j\nabla_h q_j\}\}\cdot[[p_j]] + \sum_{F\in\mathcal{F}^I_h\cup\mathcal{F}^{D_j}_h}\int_F \zeta_j\,[[p_j]]\cdot[[q_j]] \qquad \forall p_j, q_j\in Q^{DG}_h. \tag{29}
\end{aligned}
\]
By exploiting the definitions of the bilinear forms, we obtain the following semi-discrete PolyDG formulation.
Find \(u_h(t)\in V^{DG}_h\) and \(p_{jh}(t)\in Q^{DG}_h\), j ∈ J, such that ∀t > 0:
\[
\begin{cases}
(\rho\ddot{u}_h(t), v_h)_\Omega + A_E(u_h(t), v_h) - \sum_{k\in J} B_k(p_{kh}(t), v_h) = F(v_h) & \forall v_h\in V^{DG}_h,\\
(c_j\dot{p}_{jh}(t), q_{jh})_\Omega + B_j(q_{jh}, \dot{u}_h(t)) + A_{P_j}(p_{jh}(t), q_{jh}) + C_j((p_{kh})_{k\in J}, q_{jh}) = G_j(q_{jh}) & \forall q_{jh}\in Q^{DG}_h,\\
u_h(0) = u_{0h},\quad \dot{u}_h(0) = v_{0h},\quad p_{jh}(0) = p_{j0h} & \text{in } \Omega,\\
u_h(t) = u^D_h(t) \text{ on } \Gamma_D,\quad p_{jh}(t) = p^D_{jh}(t) \text{ on } \Gamma^j_D. &
\end{cases}\tag{30}
\]
¹ In this context, ‖·‖₂ is the operator norm induced by the L²-norm on the space of symmetric second-order tensors.
The complete derivation of this formulation is reported in Appendix B. Summing the weak formulations, we arrive at the following equivalent equation, which we will use in the analysis:
\[
\begin{aligned}
(\rho\ddot{u}_h(t), v_h)_\Omega &+ A_E(u_h(t), v_h) + \sum_{k\in J}\Big[-B_k(p_{kh}(t), v_h) + (c_k\dot{p}_{kh}(t), q_{kh})_\Omega + A_{P_k}(p_{kh}(t), q_{kh})\\
&+ B_k(q_{kh}, \dot{u}_h(t)) + C_k((p_{jh})_{j\in J}, q_{kh})\Big] = F(v_h) + \sum_{k\in J} G_k(q_{kh}) \qquad \forall v_h\in V^{DG}_h,\ \forall q_{kh}\in Q^{DG}_h. \tag{31}
\end{aligned}
\]
4 Stability analysis of the semi-discrete formulation
To carry out a complete stability analysis of problem (31), we introduce the following broken Sobolev spaces for an integer r ≥ 1:
\[
H^r(\mathcal{T}_h) = \{v_h\in L^2(\Omega) : v_h|_K\in H^r(K)\ \forall K\in\mathcal{T}_h\}, \qquad
H^r(\mathcal{T}_h;\mathbb{R}^d) = \{v_h\in L^2(\Omega;\mathbb{R}^d) : v_h|_K\in H^r(K;\mathbb{R}^d)\ \forall K\in\mathcal{T}_h\}.
\]
Moreover, we introduce the shorthand notation \(\|\cdot\| := \|\cdot\|_{L^2(\Omega)}\) for the L²-norm and, for the L²-norm on a set of faces F, \(\|\cdot\|_{\mathcal{F}} = \big(\sum_{F\in\mathcal{F}}\|\cdot\|^2_{L^2(F)}\big)^{1/2}\). These norms can be used to define the following DG-norms:
\[
\|p\|_{DG,P_j} = \Big\|\sqrt{\tfrac{K_j}{\mu_j}}\,\nabla_h p\Big\| + \big\|\sqrt{\zeta_j}\,[[p]]\big\|_{L^2(\mathcal{F}^I_h\cup\mathcal{F}^{D_j}_h)} \qquad \forall p\in H^1(\mathcal{T}_h), \tag{32}
\]
\[
\|v\|_{DG,E} = \big\|\sqrt{\mathbb{C}_E}\,[\varepsilon_h(v)]\big\| + \big\|\sqrt{\eta}\,[[[v]]]\big\|_{L^2(\mathcal{F}^I_h\cup\mathcal{F}^D_h)} \qquad \forall v\in H^1(\mathcal{T}_h;\mathbb{R}^d). \tag{33}
\]
For the analysis, we need to prove some continuity and coercivity properties of the bilinear forms.
Proposition 1. Let Assumption 2 be satisfied; then the bilinear forms A_E(·,·) and A_{P_j}(·,·) are continuous:
\[
|A_E(v_h, w_h)| \lesssim \|v_h\|_{DG,E}\,\|w_h\|_{DG,E} \qquad \forall v_h, w_h\in V^{DG}_h, \tag{34}
\]
\[
|A_{P_j}(p_{jh}, q_{jh})| \lesssim \|p_{jh}\|_{DG,P_j}\,\|q_{jh}\|_{DG,P_j} \qquad \forall p_{jh}, q_{jh}\in Q^{DG}_h,\ \forall j\in J, \tag{35}
\]
and coercive:
\[
A_E(v_h, v_h) \gtrsim \|v_h\|^2_{DG,E} \qquad \forall v_h\in V^{DG}_h, \tag{36}
\]
\[
A_{P_j}(p_{jh}, p_{jh}) \gtrsim \|p_{jh}\|^2_{DG,P_j} \qquad \forall p_{jh}\in Q^{DG}_h,\ \forall j\in J, \tag{37}
\]
provided that the penalty parameters η and ζ_j, j ∈ J, are chosen large enough.
The proof of these properties can be found in [29].
Proposition 2. Let Assumption 2 be satisfied. The bilinear form B_j is also continuous:
\[
|B_j(q_{jh}, v_h)| \lesssim \|v_h\|_{DG,E}\,\|q_{jh}\| \qquad \forall v_h\in V^{DG}_h,\ \forall q_{jh}\in Q^{DG}_h. \tag{38}
\]
The proof of this property can be found in [36].
Proposition 3. Let Assumption 2 be satisfied; then
\[
\sum_{j\in J} C_j((p_{kh})_{k\in J}, q_{jh}) \lesssim \sum_{k\in J}\sum_{j\in J} \|p_{kh}\|\,\|q_{jh}\| \qquad \forall p_{kh}, q_{jh}\in Q^{DG}_h, \tag{39}
\]
\[
\sum_{j\in J} C_j((p_{kh})_{k\in J}, p_{jh}) \gtrsim \sum_{j\in J} \big\|\sqrt{\beta^e_j}\,p_{jh}\big\|^2 \qquad \forall p_{jh}\in Q^{DG}_h. \tag{40}
\]
Proof. To simplify the computations, let us first introduce the quantity
\[
B = \max\Big\{\max_{j,k\in J}\|\beta_{jk}\|_{L^\infty(\Omega)},\ \max_{j\in J}\|\beta^e_j\|_{L^\infty(\Omega)}\Big\}. \tag{41}
\]
The continuity follows from the triangle and Hölder inequalities, together with (41):
\[
\begin{aligned}
\sum_{j\in J} C_j((p_{kh})_{k\in J}, q_{jh}) &\le \sum_{k\in J}\sum_{j\in J}|(\beta_{jk}p_{jh}, q_{jh})_\Omega| + \sum_{k\in J}\sum_{j\in J}|(\beta_{jk}p_{kh}, q_{jh})_\Omega| + \sum_{j\in J}|(\beta^e_j p_{jh}, q_{jh})_\Omega|\\
&\le \sum_{k\in J}\sum_{j\in J}\big(2B\,\|p_{jh}\|\,\|q_{jh}\| + B\,\|p_{kh}\|\,\|q_{jh}\|\big) \lesssim \sum_{k\in J}\sum_{j\in J}\|p_{kh}\|\,\|q_{jh}\|.
\end{aligned}
\]
In the last step, we observe that the double sum also controls the case j = k.
To prove the coercivity, we introduce \(\bar{\beta}_j = \sum_{k\in J}\beta_{kj} + \beta^e_j = \sum_{k\in J}\beta_{jk} + \beta^e_j > 0\). Then we proceed as follows:
\[
\begin{aligned}
\sum_{j\in J} C_j((p_{kh})_{k\in J}, p_{jh}) &= \sum_{j\in J}\sum_{k\in J}(\beta_{jk}(p_{jh} - p_{kh}), p_{jh})_\Omega + \sum_{j\in J}(\beta^e_j p_{jh}, p_{jh})_\Omega\\
&= \sum_{j\in J}\sum_{k\in J}\|\sqrt{\beta_{jk}}\,p_{jh}\|^2 + \sum_{j\in J}\|\sqrt{\beta^e_j}\,p_{jh}\|^2 - \sum_{j\in J}\sum_{k\in J}(\beta_{jk}p_{kh}, p_{jh})_\Omega\\
&\ge \sum_{j\in J}\sum_{k\in J}\|\sqrt{\beta_{jk}}\,p_{jh}\|^2 + \sum_{j\in J}\|\sqrt{\beta^e_j}\,p_{jh}\|^2 - \sum_{j\in J}\sum_{k\in J}\|\sqrt{\beta_{jk}}\,p_{jh}\|\,\|\sqrt{\beta_{kj}}\,p_{kh}\| && \text{(Hölder)}\\
&\ge \sum_{j\in J}\sum_{k\in J}\|\sqrt{\beta_{jk}}\,p_{jh}\|^2 + \sum_{j\in J}\|\sqrt{\beta^e_j}\,p_{jh}\|^2 - \tfrac12\sum_{k\in J}\sum_{j\in J}\|\sqrt{\beta_{jk}}\,p_{jh}\|^2 - \tfrac12\sum_{k\in J}\sum_{j\in J}\|\sqrt{\beta_{kj}}\,p_{kh}\|^2 && \text{(Young)}\\
&\ge \sum_{j\in J}\|\sqrt{\beta^e_j}\,p_{jh}\|^2,
\end{aligned}
\]
and the thesis follows.
Stability estimate
For the sake of simplicity, we assume homogeneous boundary conditions, both on Neumann and Dirichlet boundaries, i.e. u D = 0, h u = 0, h j = 0 and p D j = 0 for any j ∈ J.
Definition 2. Let us define the following energy norm:
\[
\|(u_h, (p_{kh})_{k\in J})(t)\|^2_\varepsilon = \|\sqrt{\rho}\,\dot{u}_h(t)\|^2 + \|u_h(t)\|^2_{DG,E} + \sum_{k\in J}\Big[\|\sqrt{c_k}\,p_{kh}(t)\|^2 + \int_0^t\Big(\|p_{kh}(s)\|^2_{DG,P_k} + \|\sqrt{\beta^e_k}\,p_{kh}(s)\|^2\Big)ds\Big]. \tag{42}
\]
Theorem 1 (Stability estimate). Let Assumptions 1 and 2 be satisfied and let \((u_h, (p_{kh})_{k\in J})\) be the solution of Equation (31) for any t ∈ (0, T]. Let the stability parameters be chosen large enough for any k ∈ J. Then it holds:
\[
\|(u_h, (p_{kh})_{k\in J})(t)\|_\varepsilon \lesssim \vartheta_0 + \int_0^t\Big(\frac{1}{\sqrt{\rho}}\|f(s)\| + \sum_{k\in J}\frac{1}{\sqrt{c_k}}\|g_k(s)\|\Big)ds, \tag{43}
\]
where we use the definition
\[
\vartheta_0^2 := \|\sqrt{\rho}\,\dot{u}^0_h\|^2 + \|u^0_h\|^2_{DG,E} + \sum_{k\in J}\|\sqrt{c_k}\,p^0_{kh}\|^2. \tag{44}
\]
Proof. We start from Equation (31) and choose \(v_h = \dot{u}_h\) and \(q_{kh} = p_{kh}\). Then we find:
\[
(\rho\ddot{u}_h, \dot{u}_h)_\Omega + A_E(u_h, \dot{u}_h) + \sum_{k\in J}\Big[-B_k(p_{kh}, \dot{u}_h) + (c_k\dot{p}_{kh}, p_{kh})_\Omega + A_{P_k}(p_{kh}, p_{kh}) + B_k(p_{kh}, \dot{u}_h) + C_k((p_{jh})_{j\in J}, p_{kh})\Big] = F(\dot{u}_h) + \sum_{k\in J} G_k(p_{kh}).
\]
This choice allows us to cancel the bilinear form B_k, because for any k ∈ J it appears in the equation with opposite signs. Then, we obtain:
\[
(\rho\ddot{u}_h, \dot{u}_h)_\Omega + A_E(u_h, \dot{u}_h) + \sum_{k\in J}\Big[(c_k\dot{p}_{kh}, p_{kh})_\Omega + A_{P_k}(p_{kh}, p_{kh}) + C_k((p_{jh})_{j\in J}, p_{kh})\Big] = F(\dot{u}_h) + \sum_{k\in J} G_k(p_{kh}).
\]
Now, we recall the integration-by-parts formula
\[
\int_0^t (\dot{v}(s), w(s))_*\,ds = (v(t), w(t))_* - (v(0), w(0))_* - \int_0^t (v(s), \dot{w}(s))_*\,ds, \tag{45}
\]
which holds for any v and w regular enough and for any scalar product (·,·)_*. Its application gives rise to the following identities:
\[
\begin{aligned}
\int_0^t (\rho\ddot{u}_h(s), \dot{u}_h(s))_\Omega\,ds &= (\rho\dot{u}_h(t), \dot{u}_h(t))_\Omega - (\rho\dot{u}^0_h, \dot{u}^0_h)_\Omega - \int_0^t (\rho\ddot{u}_h(s), \dot{u}_h(s))_\Omega\,ds,\\
\int_0^t A_E(\dot{u}_h(s), u_h(s))\,ds &= A_E(u_h(t), u_h(t)) - A_E(u^0_h, u^0_h) - \int_0^t A_E(u_h(s), \dot{u}_h(s))\,ds,\\
\int_0^t (c_k\dot{p}_{kh}(s), p_{kh}(s))_\Omega\,ds &= (c_k p_{kh}(t), p_{kh}(t))_\Omega - (c_k p^0_{kh}, p^0_{kh})_\Omega - \int_0^t (c_k\dot{p}_{kh}(s), p_{kh}(s))_\Omega\,ds.
\end{aligned}
\]
Then, integrating the equation over (0, t), we obtain:
\[
\begin{aligned}
\|\sqrt{\rho}\,\dot{u}_h(t)\|^2 &- \|\sqrt{\rho}\,\dot{u}^0_h\|^2 + A_E(u_h(t), u_h(t)) - A_E(u^0_h, u^0_h) + \sum_{k\in J}\Big[\|\sqrt{c_k}\,p_{kh}(t)\|^2 - \|\sqrt{c_k}\,p^0_{kh}\|^2\Big]\\
&+ 2\sum_{k\in J}\int_0^t A_{P_k}(p_{kh}(s), p_{kh}(s))\,ds + 2\sum_{k\in J}\int_0^t C_k((p_{jh}(s))_{j\in J}, p_{kh}(s))\,ds = 2\int_0^t F(\dot{u}_h(s))\,ds + 2\sum_{k\in J}\int_0^t G_k(p_{kh}(s))\,ds.
\end{aligned}
\]
Now we can use the coercivity estimates stated in Propositions 1 and 3:
\[
\begin{aligned}
\|(u_h, (p_{kh})_{k\in J})(t)\|^2_\varepsilon &\le \|\sqrt{\rho}\,\dot{u}_h(t)\|^2 + \|u_h(t)\|^2_{DG,E} + \sum_{k\in J}\Big[\|\sqrt{c_k}\,p_{kh}(t)\|^2 + 2\int_0^t\Big(\|p_{kh}(s)\|^2_{DG,P_k} + \|\sqrt{\beta^e_k}\,p_{kh}(s)\|^2\Big)ds\Big]\\
&\lesssim \|\sqrt{\rho}\,\dot{u}^0_h\|^2 + \|u^0_h\|^2_{DG,E} + \sum_{k\in J}\|\sqrt{c_k}\,p^0_{kh}\|^2 + 2\int_0^t F(\dot{u}_h(s))\,ds + 2\sum_{k\in J}\int_0^t G_k(p_{kh}(s))\,ds\\
&= \vartheta_0^2 + 2\int_0^t F(\dot{u}_h(s))\,ds + 2\sum_{k\in J}\int_0^t G_k(p_{kh}(s))\,ds.
\end{aligned}
\]
Then, using the definition (44) and the continuity of the linear functionals, we obtain:
\[
\begin{aligned}
\|(u_h, (p_{kh})_{k\in J})(t)\|^2_\varepsilon &\lesssim \vartheta_0^2 + 2\int_0^t\|f(s)\|\,\|\dot{u}_h(s)\|\,ds + 2\sum_{k\in J}\int_0^t\|g_k(s)\|\,\|p_{kh}(s)\|\,ds\\
&\lesssim \vartheta_0^2 + \int_0^t\frac{2}{\sqrt{\rho}}\|f(s)\|\,\|\sqrt{\rho}\,\dot{u}_h(s)\|\,ds + \sum_{k\in J}\int_0^t\frac{2}{\sqrt{c_k}}\|g_k(s)\|\,\|\sqrt{c_k}\,p_{kh}(s)\|\,ds\\
&\lesssim \vartheta_0^2 + \int_0^t\Big(\frac{2}{\sqrt{\rho}}\|f(s)\| + \sum_{k\in J}\frac{2}{\sqrt{c_k}}\|g_k(s)\|\Big)\|(u_h, (p_{kh})_{k\in J})(s)\|_\varepsilon\,ds.
\end{aligned}
\]
Using the Grönwall lemma [20], we reach the thesis:
\[
\|(u_h, (p_{kh})_{k\in J})(t)\|_\varepsilon \lesssim \vartheta_0 + \int_0^t\Big(\frac{1}{\sqrt{\rho}}\|f(s)\| + \sum_{k\in J}\frac{1}{\sqrt{c_k}}\|g_k(s)\|\Big)ds.
\]
5 Error analysis
In this section, we derive an a priori error estimate for the solution of the PolyDG semi-discrete problem (31). For the sake of simplicity we neglect the dependence of the inequality constants on the model parameters, using the notation x ≲ y to mean that there exists C > 0 such that x ≤ Cy, where C may depend on the model parameters but is independent of the discretization parameters.
First of all, we need to introduce the following definitions:
\[
|||p|||_{DG,P_j} = \|p\|_{DG,P_j} + \Big\|\zeta_j^{-1/2}\,\frac{1}{\mu_j}\{\{K_j\nabla_h p\}\}\Big\|_{L^2(\mathcal{F}^I_h\cup\mathcal{F}^{D_j}_h)} \qquad \forall p\in H^2(\mathcal{T}_h), \tag{46}
\]
\[
|||v|||_{DG,E} = \|v\|_{DG,E} + \Big\|\eta^{-1/2}\{\{\mathbb{C}_E[\varepsilon_h(v)]\}\}\Big\|_{L^2(\mathcal{F}^I_h\cup\mathcal{F}^D_h)} \qquad \forall v\in H^2(\mathcal{T}_h;\mathbb{R}^d). \tag{47}
\]
We introduce the interpolants \(u_I\in V^{DG}_h\) and \(p_{kI}\in Q^{DG}_h\) of the solutions of the continuous formulation (14). For a polytopic mesh T_h satisfying Assumption 2, we can define a Stein extension operator \(\mathscr{E} : H^m(K)\to H^m(\mathbb{R}^d)\) for any K ∈ T_h and m ∈ ℕ₀ such that
\[
\mathscr{E}v|_K = v, \qquad \|\mathscr{E}v\|_{H^m(\mathbb{R}^d)} \lesssim \|v\|_{H^m(K)} \qquad \forall v\in H^m(K).
\]
Proposition 4. Let Assumption 2 be fulfilled. If d ≥ 2, then the following estimates hold:
\[
\forall v\in H^n(\mathcal{T}_h;\mathbb{R}^d)\ \exists v_I\in V^{DG}_h:\quad |||v - v_I|||^2_{DG,E} \lesssim \sum_{K\in\mathcal{T}_h}\frac{h_K^{2\min\{p+1,n\}-2}}{p^{2n-3}}\|\mathscr{E}v\|^2_{H^n(K;\mathbb{R}^d)}, \tag{48}
\]
\[
\forall p_j\in H^n(\mathcal{T}_h)\ \exists p_{jI}\in Q^{DG}_h:\quad |||p_j - p_{jI}|||^2_{DG,P_j} \lesssim \sum_{K\in\mathcal{T}_h}\frac{h_K^{2\min\{q+1,n\}-2}}{q^{2n-3}}\|\mathscr{E}p_j\|^2_{H^n(K)}. \tag{49}
\]
Error estimates
First of all, let \((u_h, (p_{kh})_{k\in J})\) be the solution of (30) and \((u, (p_k)_{k\in J})\) the solution of (14). To extend the bilinear forms of (30) to the space of continuous solutions we need further regularity requirements: we assume element-wise H²-regularity of the displacement and of the pressures, together with the continuity of the normal stress and of the fluid flow across the interfaces F ∈ F^I_h, for all times t ∈ (0, T]. In this context, we need the following additional boundedness results for the forms of the formulation. Proposition 5. Let Assumption 2 be satisfied. Then:
\[
|A_E(v, w_h)| \lesssim |||v|||_{DG,E}\,\|w_h\|_{DG,E} \qquad \forall v\in H^2(\mathcal{T}_h;\mathbb{R}^d),\ \forall w_h\in V^{DG}_h, \tag{50}
\]
\[
|A_{P_j}(p_j, q_{jh})| \lesssim |||p_j|||_{DG,P_j}\,\|q_{jh}\|_{DG,P_j} \qquad \forall p_j\in H^2(\mathcal{T}_h),\ \forall q_{jh}\in Q^{DG}_h, \tag{51}
\]
\[
|B_k(q_{kh}, v)| \lesssim |||v|||_{DG,E}\,\|q_{kh}\| \qquad \forall v\in H^2(\mathcal{T}_h;\mathbb{R}^d),\ \forall q_{kh}\in Q^{DG}_h, \tag{52}
\]
\[
|B_k(q_k, v_h)| \lesssim \|v_h\|_{DG,E}\,|||q_k|||_{DG,P_k} \qquad \forall v_h\in V^{DG}_h,\ \forall q_k\in H^2(\mathcal{T}_h). \tag{53}
\]
The proof of these relations can be found in [28,27,37].
Theorem 2. Let Assumptions 1 and 2 be fulfilled, let \((u, (p_j)_{j\in J})\) be the solution of (14) for any t ∈ (0, T], and let it satisfy the following additional regularity requirements:
\[
u\in C^1((0,T]; H^m(\Omega;\mathbb{R}^d)), \qquad p_j\in C^1((0,T]; H^n(\Omega)) \quad \forall j\in J, \tag{54}
\]
for m, n ≥ 2. Let \((u_h, (p_{jh})_{j\in J})\) be the solution of (31) for any t ∈ (0, T]. Then, the following estimate holds:
\[
\begin{aligned}
|||(e_u, (e_{p_j})_{j\in J})(t)|||^2_\varepsilon \lesssim{}& \sum_{K\in\mathcal{T}_h}\frac{h_K^{2\min\{p+1,m\}-2}}{p^{2m-3}}\Big(\|\mathscr{E}u(t)\|^2_{H^m(K;\mathbb{R}^d)} + \int_0^t\|\mathscr{E}\dot{u}(s)\|^2_{H^m(K;\mathbb{R}^d)}ds + \int_0^t\|\mathscr{E}\ddot{u}(s)\|^2_{H^m(K;\mathbb{R}^d)}ds\Big)\\
&+ \sum_{K\in\mathcal{T}_h}\frac{h_K^{2\min\{q+1,n\}-2}}{q^{2n-3}}\sum_{j\in J}\Big(\|\mathscr{E}p_j(t)\|^2_{H^n(K)} + \int_0^t\|\mathscr{E}p_j(s)\|^2_{H^n(K)}ds + \int_0^t\|\mathscr{E}\dot{p}_j(s)\|^2_{H^n(K)}ds\Big), \tag{55}
\end{aligned}
\]
where \(e_u = u - u_h\) and \(e_{p_j} = p_j - p_{jh}\) for any j ∈ J.
Proof. Subtracting the semi-discrete problem (30) from the continuous one tested with discrete functions, we obtain:
\[
\begin{aligned}
(\rho(\ddot{u}-\ddot{u}_h), v_h)_\Omega &+ A_E(u - u_h, v_h) + \sum_{k\in J}\Big[-B_k(p_k - p_{kh}, v_h) + (c_k(\dot{p}_k - \dot{p}_{kh}), q_{kh})_\Omega\\
&+ A_{P_k}(p_k - p_{kh}, q_{kh}) + B_k(q_{kh}, \dot{u} - \dot{u}_h) + C_k((p_j - p_{jh})_{j\in J}, q_{kh})\Big] = 0.
\end{aligned}
\]
We define the errors for the displacement \(e^h_u = u_I - u_h\) and \(e^I_u = u - u_I\) and, analogously for the pressures, \(e^h_{p_k} = p_{kI} - p_{kh}\) and \(e^I_{p_k} = p_k - p_{kI}\). Choosing \(v_h = \dot{e}^h_u\) and \(q_{kh} = e^h_{p_k}\), we can rewrite the equation above as follows:
\[
\begin{aligned}
(\rho\ddot{e}^h_u, \dot{e}^h_u)_\Omega &+ A_E(e^h_u, \dot{e}^h_u) + \sum_{k\in J}\Big[-B_k(e^h_{p_k}, \dot{e}^h_u) + (c_k\dot{e}^h_{p_k}, e^h_{p_k})_\Omega + A_{P_k}(e^h_{p_k}, e^h_{p_k}) + B_k(e^h_{p_k}, \dot{e}^h_u) + C_k\big((e^h_{p_j})_{j\in J}, e^h_{p_k}\big)\Big]\\
={}& (\rho\ddot{e}^I_u, \dot{e}^h_u)_\Omega + A_E(e^I_u, \dot{e}^h_u) + \sum_{k\in J}\Big[-B_k(e^I_{p_k}, \dot{e}^h_u) + (c_k\dot{e}^I_{p_k}, e^h_{p_k})_\Omega + A_{P_k}(e^I_{p_k}, e^h_{p_k}) + B_k(e^h_{p_k}, \dot{e}^I_u) + C_k\big((e^I_{p_j})_{j\in J}, e^h_{p_k}\big)\Big].
\end{aligned}
\]
Due to the symmetry of the scalar product and of A_E we can rewrite the problem as:
\[
\begin{aligned}
\frac{\rho}{2}\frac{d}{dt}(\dot{e}^h_u, \dot{e}^h_u)_\Omega &+ \frac12\frac{d}{dt}A_E(e^h_u, e^h_u) + \sum_{k\in J}\Big[\frac{c_k}{2}\frac{d}{dt}(e^h_{p_k}, e^h_{p_k})_\Omega + A_{P_k}(e^h_{p_k}, e^h_{p_k}) + C_k\big((e^h_{p_j})_{j\in J}, e^h_{p_k}\big)\Big]\\
={}& (\rho\ddot{e}^I_u, \dot{e}^h_u)_\Omega + \frac{d}{dt}A_E(e^I_u, e^h_u) - A_E(\dot{e}^I_u, e^h_u) + \sum_{k\in J}\Big[-\frac{d}{dt}B_k(e^I_{p_k}, e^h_u) + B_k(\dot{e}^I_{p_k}, e^h_u) + (c_k\dot{e}^I_{p_k}, e^h_{p_k})_\Omega\\
&+ B_k(e^h_{p_k}, \dot{e}^I_u) + A_{P_k}(e^I_{p_k}, e^h_{p_k}) + C_k\big((e^I_{p_j})_{j\in J}, e^h_{p_k}\big)\Big].
\end{aligned}
\]
Now we integrate between 0 and t. We remark that \(e^h_u(0) = 0\), \(\dot{e}^h_u(0) = 0\) and \(e^h_{p_k}(0) = 0\) for each k ∈ J. Then, proceeding analogously to the proof of Theorem 1, we obtain:
\[
\begin{aligned}
|||(e^h_u, (e^h_{p_k})_{k\in J})(t)|||^2_\varepsilon \lesssim{}& A_E(e^I_u(t), e^h_u(t)) - \sum_{k\in J}B_k(e^I_{p_k}(t), e^h_u(t)) + \int_0^t(\rho\ddot{e}^I_u(s), \dot{e}^h_u(s))_\Omega - \int_0^t A_E(\dot{e}^I_u(s), e^h_u(s))\\
&+ \sum_{k\in J}\Big[\int_0^t B_k(\dot{e}^I_{p_k}(s), e^h_u(s)) + \int_0^t(c_k\dot{e}^I_{p_k}(s), e^h_{p_k}(s))_\Omega + \int_0^t B_k(e^h_{p_k}(s), \dot{e}^I_u(s))\\
&+ \int_0^t A_{P_k}(e^I_{p_k}(s), e^h_{p_k}(s)) + \int_0^t C_k\big((e^I_{p_j}(s))_{j\in J}, e^h_{p_k}(s)\big)\Big].
\end{aligned}
\]
Then, exploiting the continuity relations in Proposition 5, we get:
\[
\begin{aligned}
|||(e^h_u, (e^h_{p_k})_{k\in J})(t)|||^2_\varepsilon \lesssim{}& |||e^I_u(t)|||_{DG,E}\,\|e^h_u(t)\|_{DG,E} + \sum_{k\in J}|||e^I_{p_k}(t)|||_{DG,P_k}\,\|e^h_u(t)\|_{DG,E} + \int_0^t\|\sqrt{\rho}\,\ddot{e}^I_u(s)\|\,\|\sqrt{\rho}\,\dot{e}^h_u(s)\|\\
&+ \int_0^t|||\dot{e}^I_u(s)|||_{DG,E}\,\|e^h_u(s)\|_{DG,E} + \sum_{k\in J}\Big[\int_0^t|||\dot{e}^I_{p_k}(s)|||_{DG,P_k}\,\|e^h_u(s)\|_{DG,E} + \int_0^t\|\sqrt{c_k}\,\dot{e}^I_{p_k}(s)\|\,\|\sqrt{c_k}\,e^h_{p_k}(s)\|\\
&+ \int_0^t\|e^h_{p_k}(s)\|_{DG,P_k}\,|||\dot{e}^I_u(s)|||_{DG,E} + \int_0^t\|e^h_{p_k}(s)\|_{DG,P_k}\,|||e^I_{p_k}(s)|||_{DG,P_k} + \sum_{j\in J}\int_0^t B\,\|c_j^{-1/2}e^I_{p_j}(s)\|\,\|\sqrt{c_k}\,e^h_{p_k}(s)\|\Big].
\end{aligned}
\]
Then, using the definition of the energy norm and both the Hölder and Young inequalities, we obtain:
\[
\begin{aligned}
|||(e^h_u, (e^h_{p_k})_{k\in J})(t)|||^2_\varepsilon \lesssim{}& |||e^I_u(t)|||^2_{DG,E} + \sum_{k\in J}|||e^I_{p_k}(t)|||^2_{DG,P_k} + \int_0^t\Big(|||\dot{e}^I_u(s)|||^2_{DG,E} + \sum_{k\in J}|||e^I_{p_k}(s)|||^2_{DG,P_k}\Big)\\
&+ \int_0^t|||(e^h_u, (e^h_{p_k})_{k\in J})(s)|||_\varepsilon\Big(\|\sqrt{\rho}\,\ddot{e}^I_u(s)\| + |||\dot{e}^I_u(s)|||_{DG,E} + \sum_{k\in J}\big(|||\dot{e}^I_{p_k}(s)|||_{DG,P_k} + \|\sqrt{c_k}\,\dot{e}^I_{p_k}(s)\| + \|c_k^{-1/2}e^I_{p_k}(s)\|\big)\Big).
\end{aligned}
\]
Then, by application of the Grönwall lemma [20], we obtain:
\[
\begin{aligned}
|||(e^h_u, (e^h_{p_k})_{k\in J})(t)|||^2_\varepsilon \lesssim{}& |||e^I_u(t)|||^2_{DG,E} + \sum_{k\in J}|||e^I_{p_k}(t)|||^2_{DG,P_k} + \int_0^t\Big(|||\dot{e}^I_u(s)|||^2_{DG,E} + \sum_{k\in J}|||e^I_{p_k}(s)|||^2_{DG,P_k}\Big)\\
&+ \int_0^t\Big(\|\sqrt{\rho}\,\ddot{e}^I_u(s)\|^2 + |||\dot{e}^I_u(s)|||^2_{DG,E} + \sum_{k\in J}\big(|||\dot{e}^I_{p_k}(s)|||^2_{DG,P_k} + \|\sqrt{c_k}\,\dot{e}^I_{p_k}(s)\|^2 + \|c_k^{-1/2}e^I_{p_k}(s)\|^2\big)\Big).
\end{aligned}
\]
Then, by using the relations of Proposition 4, we find:
\[
\begin{aligned}
|||(e^h_u, (e^h_{p_j})_{j\in J})(t)|||^2_\varepsilon \lesssim{}& \sum_{K\in\mathcal{T}_h}\frac{h_K^{2\min\{p+1,m\}-2}}{p^{2m-3}}\Big(\|\mathscr{E}u(t)\|^2_{H^m(K;\mathbb{R}^d)} + \int_0^t\|\mathscr{E}\dot{u}(s)\|^2_{H^m(K;\mathbb{R}^d)}ds + \int_0^t\|\mathscr{E}\ddot{u}(s)\|^2_{H^m(K;\mathbb{R}^d)}ds\Big)\\
&+ \sum_{K\in\mathcal{T}_h}\frac{h_K^{2\min\{q+1,n\}-2}}{q^{2n-3}}\sum_{j\in J}\Big(\|\mathscr{E}p_j(t)\|^2_{H^n(K)} + \int_0^t\|\mathscr{E}p_j(s)\|^2_{H^n(K)}ds + \int_0^t\|\mathscr{E}\dot{p}_j(s)\|^2_{H^n(K)}ds\Big).
\end{aligned}
\]
Then, we use the triangle inequality to estimate the discretization error:
\[
|||(e_u, (e_{p_j})_{j\in J})(t)|||^2_\varepsilon \lesssim |||(e^h_u, (e^h_{p_j})_{j\in J})(t)|||^2_\varepsilon + |||(e^I_u, (e^I_{p_j})_{j\in J})(t)|||^2_\varepsilon.
\]
Finally, by applying the result in Equation (5.1) and the interpolation error, the thesis follows.
6 Time discretization
Let us fix bases \((\varphi_n)_{n=1}^{N_u}\subset V^{DG}_h\) and \((\psi_n)_{n=1}^{N_p}\subset Q^{DG}_h\) for the discrete spaces, where \(N_u = \dim(V^{DG}_h)\) and \(N_p = \dim(Q^{DG}_h)\), so that
\[
u_h(t) = \sum_{n=1}^{N_u} U_n(t)\,\varphi_n, \qquad p_{kh}(t) = \sum_{n=1}^{N_p} P_{kn}(t)\,\psi_n \quad \forall k\in J. \tag{56}
\]
We collect the expansion coefficients of u_h and p_{jh}, j ∈ J, in the vectors \(U\in\mathbb{R}^{N_u}\) and \(P_k\in\mathbb{R}^{N_p}\) for any k ∈ J. By using the same bases, we define the following matrices:
\[
\begin{aligned}
&[M_u]_{ij} = (\rho\varphi_j, \varphi_i)_\Omega && \text{(elasticity mass matrix)},\\
&[K_u]_{ij} = A_E(\varphi_j, \varphi_i) && \text{(elasticity stiffness matrix)},\\
&[M_k]_{ij} = (c_k\psi_j, \psi_i)_\Omega && \text{($k$-th pressure mass matrix)},\\
&[K_k]_{ij} = A_{P_k}(\psi_j, \psi_i) && \text{($k$-th pressure stiffness matrix)},\\
&[B_k]_{ij} = B_k(\psi_j, \varphi_i) && \text{(pressure--displacement coupling matrix)},\\
&[C_{kl}]_{ij} = (\beta_{kl}\psi_j, \psi_i)_\Omega && \text{(pressure--pressure coupling matrix)},\\
&[C^e_k]_{ij} = (\beta^e_k\psi_j, \psi_i)_\Omega && \text{(pressure external-coupling matrix)}.
\end{aligned}
\]
Moreover, we define the forcing terms \([F]_j = F(\varphi_j)\) and \([G_k]_j = G_k(\psi_j)\).
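In a FEniCS-based implementation these matrices are assembled once from the corresponding forms; the following minimal sketch shows the assembly of the two mass matrices, with placeholder coefficient values and simplified spaces.

```python
# Sketch: assembly of M_u and of one pressure mass matrix M_k (legacy FEniCS).
# rho and c_k are placeholder constants, not the values used in the paper.
from dolfin import (UnitSquareMesh, VectorFunctionSpace, FunctionSpace,
                    TrialFunction, TestFunction, Constant, inner, dx, assemble)

mesh = UnitSquareMesh(16, 16)
V = VectorFunctionSpace(mesh, "DG", 2)
Q = FunctionSpace(mesh, "DG", 1)
u_, v_ = TrialFunction(V), TestFunction(V)
p_, q_ = TrialFunction(Q), TestFunction(Q)
rho, c_k = Constant(1.0e3), Constant(1.0e-4)

M_u = assemble(rho * inner(u_, v_) * dx)  # [M_u]_{ij} = (rho phi_j, phi_i)_Omega
M_k = assemble(c_k * p_ * q_ * dx)        # [M_k]_{ij} = (c_k psi_j, psi_i)_Omega
```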
By exploiting all these definitions, we rewrite problem (30) in algebraic form (note that, since \([B_k]_{ij} = B_k(\psi_j, \varphi_i)\), the coupling matrix enters the continuity equations through its transpose):
\[
\begin{cases}
M_u\ddot{U}(t) + K_u U(t) - \sum_{k\in J} B_k P_k(t) = F(t), & t\in(0,T),\\
M_k\dot{P}_k(t) + B_k^T\dot{U}(t) + K_k P_k(t) + \sum_{j\in J} C_{kj}(P_k(t) - P_j(t)) + C^e_k P_k(t) = G_k(t), & t\in(0,T)\ \forall k\in J,\\
U(0) = U_0,\quad \dot{U}(0) = V_0,\quad P_k(0) = P_{k0} & \forall k\in J.
\end{cases}\tag{57}
\]
Then, after introducing the stacked vector \(P = [P_{j_1}; P_{j_2}; \dots; P_{j_n}]\) with \(j_1, j_2, \dots, j_n\in J\), we construct the following block matrices:
\[
\bar{B} = \begin{bmatrix} B_{j_1} & B_{j_2} & \cdots & B_{j_n} \end{bmatrix}, \qquad
M_p = \begin{bmatrix} M_{j_1} & 0 & \cdots & 0\\ 0 & M_{j_2} & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & M_{j_n} \end{bmatrix}, \qquad
\bar{G} = \begin{bmatrix} G_{j_1}\\ G_{j_2}\\ \vdots\\ G_{j_n} \end{bmatrix},
\]
\[
K_p = \begin{bmatrix}
K_{j_1} + \sum_{i\in J} C_{j_1 i} + C^e_{j_1} & -C_{j_1 j_2} & \cdots & -C_{j_1 j_n}\\
-C_{j_2 j_1} & K_{j_2} + \sum_{i\in J} C_{j_2 i} + C^e_{j_2} & \cdots & -C_{j_2 j_n}\\
\vdots & \vdots & \ddots & \vdots\\
-C_{j_n j_1} & -C_{j_n j_2} & \cdots & K_{j_n} + \sum_{i\in J} C_{j_n i} + C^e_{j_n}
\end{bmatrix}.
\]
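The block structure of M_p and K_p maps directly onto sparse block assembly; a hedged sketch with SciPy for |J| = 2, using identity placeholders for the per-network blocks, is:

```python
# Sketch: building M_p and K_p from per-network blocks via scipy.sparse.
# All blocks are identity placeholders standing in for assembled FE matrices.
import scipy.sparse as sp

Np = 100                                 # placeholder pressure-space dimension
I = sp.identity(Np, format="csr")
M = {1: I, 2: I}                         # M_j: pressure mass matrices
K = {1: I, 2: I}                         # K_j: pressure stiffness matrices
C = {(1, 2): 0.5 * I, (2, 1): 0.5 * I}   # C_jk: pressure-pressure couplings
Ce = {1: 0.1 * I, 2: 0.1 * I}            # C^e_j: external couplings

M_p = sp.block_diag([M[1], M[2]], format="csr")
K_p = sp.bmat([[K[1] + C[(1, 2)] + Ce[1], -C[(1, 2)]],
               [-C[(2, 1)], K[2] + C[(2, 1)] + Ce[2]]], format="csr")
```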
Then, we write Equation (57) in the compact form
\[
\begin{cases}
M_u\ddot{U}(t) + K_u U(t) - \bar{B}\,P(t) = F(t), & t\in(0,T),\\
M_p\dot{P}(t) + \bar{B}^T\dot{U}(t) + K_p P(t) = \bar{G}(t), & t\in(0,T),\\
U(0) = U_0,\quad \dot{U}(0) = V_0,\quad P(0) = P_0.
\end{cases}\tag{58}
\]
Let us now construct a temporal discretization of the interval (0,T) by introducing a partition into N intervals, \(0 = t_0 < t_1 < \dots < t_N = T\); we assume a constant timestep \(\Delta t = t_{n+1} - t_n\). We discretize the momentum equation by means of the Newmark β-method, introducing a velocity vector Z_n and an acceleration vector A_n. Then, we have the following equations to be solved at each timestep:
\[
\begin{cases}
\Big(\dfrac{1}{\beta\Delta t^2}M_u + K_u\Big)U_{n+1} - \bar{B}P_{n+1} = F_{n+1} + \dfrac{1}{\beta\Delta t^2}M_u U_n + \dfrac{1}{\beta\Delta t}M_u Z_n + \dfrac{1-2\beta}{2\beta}M_u A_n,\\[1mm]
A_{n+1} = \dfrac{1}{\beta\Delta t^2}(U_{n+1} - U_n) - \dfrac{1}{\beta\Delta t}Z_n + \dfrac{2\beta-1}{2\beta}A_n,\\[1mm]
Z_{n+1} = Z_n + \Delta t\big(\gamma A_{n+1} + (1-\gamma)A_n\big).
\end{cases}\tag{59}
\]
We couple the scheme above with a θ-method for the pressure equations. To obtain the formulation, consider first the definition of the velocity \(Z = \dot{U}\) at the time-continuous level, which yields
\[
M_p\dot{P}(t) + \bar{B}^T Z(t) + K_p P(t) = \bar{G}(t), \qquad t\in(0,T). \tag{60}
\]
Using this form of the equation, the discretized equation naturally follows:
\[
\begin{aligned}
M_p P_{n+1} &= M_p P_n + \Delta t\,\theta\big(\bar{G}_{n+1} - \bar{B}^T Z_{n+1} - K_p P_{n+1}\big) + \Delta t(1-\theta)\big(\bar{G}_n - \bar{B}^T Z_n - K_p P_n\big)\\
&= M_p P_n + \Delta t\,\theta\Big(\bar{G}_{n+1} - \frac{\gamma}{\beta\Delta t}\bar{B}^T(U_{n+1} - U_n) - \Big(1 - \frac{\gamma}{\beta}\Big)\bar{B}^T Z_n - \Big(1 - \frac{\gamma}{2\beta}\Big)\Delta t\,\bar{B}^T A_n - K_p P_{n+1}\Big)\\
&\quad + \Delta t(1-\theta)\big(\bar{G}_n - \bar{B}^T Z_n - K_p P_n\big). \tag{61}
\end{aligned}
\]
The final algebraic discretized formulation reads as follows:
\[
\begin{cases}
\Big(\dfrac{1}{\beta\Delta t^2}M_u + K_u\Big)U_{n+1} - \bar{B}P_{n+1} = F_{n+1} + \dfrac{1}{\beta\Delta t^2}M_u U_n + \dfrac{1}{\beta\Delta t}M_u Z_n + \dfrac{1-2\beta}{2\beta}M_u A_n,\\[1mm]
\Big(\dfrac{1}{\Delta t}M_p + \theta K_p\Big)P_{n+1} + \dfrac{\theta\gamma}{\beta\Delta t}\bar{B}^T U_{n+1} = \theta\bar{G}_{n+1} + (1-\theta)\bar{G}_n + \Big(\dfrac{1}{\Delta t}M_p - (1-\theta)K_p\Big)P_n\\[1mm]
\qquad\qquad + \dfrac{\theta\gamma}{\beta\Delta t}\bar{B}^T U_n + \Big(\dfrac{\theta\gamma}{\beta} - 1\Big)\bar{B}^T Z_n - \theta\Big(1 - \dfrac{\gamma}{2\beta}\Big)\Delta t\,\bar{B}^T A_n,\\[1mm]
A_{n+1} = \dfrac{1}{\beta\Delta t^2}(U_{n+1} - U_n) - \dfrac{1}{\beta\Delta t}Z_n + \dfrac{2\beta-1}{2\beta}A_n,\\[1mm]
Z_{n+1} = Z_n + \Delta t\big(\gamma A_{n+1} + (1-\gamma)A_n\big).
\end{cases}\tag{62}
\]
In order to rewrite Equation (62) in matrix form, we introduce the following matrices and vectors:
\[
A_1 = \begin{bmatrix}
\dfrac{M_u}{\beta\Delta t^2} + K_u & -\bar{B} & 0 & 0\\[1mm]
\dfrac{\theta\gamma}{\beta\Delta t}\bar{B}^T & \dfrac{M_p}{\Delta t} + \theta K_p & 0 & 0\\[1mm]
0 & 0 & I & -\Delta t\,\gamma I\\[1mm]
-\dfrac{I}{\beta\Delta t^2} & 0 & 0 & I
\end{bmatrix}, \qquad
X_n = \begin{bmatrix} U_n\\ P_n\\ Z_n\\ A_n \end{bmatrix},
\]
\[
A_2 = \begin{bmatrix}
\dfrac{M_u}{\beta\Delta t^2} & 0 & \dfrac{M_u}{\beta\Delta t} & \dfrac{1-2\beta}{2\beta}M_u\\[1mm]
\dfrac{\theta\gamma}{\beta\Delta t}\bar{B}^T & \dfrac{M_p}{\Delta t} - (1-\theta)K_p & \Big(\dfrac{\theta\gamma}{\beta} - 1\Big)\bar{B}^T & -\theta\Big(1 - \dfrac{\gamma}{2\beta}\Big)\Delta t\,\bar{B}^T\\[1mm]
0 & 0 & I & \Delta t(1-\gamma)I\\[1mm]
-\dfrac{I}{\beta\Delta t^2} & 0 & -\dfrac{I}{\beta\Delta t} & \dfrac{2\beta-1}{2\beta}I
\end{bmatrix}, \qquad
S_{n+1} = \begin{bmatrix} F_{n+1}\\ \theta\bar{G}_{n+1} + (1-\theta)\bar{G}_n\\ 0\\ 0 \end{bmatrix}.
\]
Finally, the algebraic formulation reads as follows:
\[
A_1 X_{n+1} = A_2 X_n + S_{n+1}, \qquad n \ge 0. \tag{63}
\]
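In code, (63) amounts to one sparse linear solve per timestep; since ∆t is constant, A_1 can be factorized once. The following sketch (SciPy) assumes A_1 and A_2 are already assembled as sparse matrices and S(t) is a placeholder load function.

```python
# Sketch of the time loop for A1 X_{n+1} = A2 X_n + S_{n+1}. A1, A2 are assumed
# assembled (sparse); S is a user-supplied load function; X0 stacks (U, P, Z, A).
import scipy.sparse.linalg as spla

def newmark_theta_loop(A1, A2, S, X0, dt, T):
    solve = spla.factorized(A1.tocsc())   # LU factorization, reused at every step
    X, t, history = X0.copy(), 0.0, [X0.copy()]
    while t < T - 1e-12:
        t += dt
        X = solve(A2 @ X + S(t))          # one linear solve per timestep
        history.append(X.copy())
    return history
```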
7 Numerical results
In this section, we aim at validating the accuracy of the method in practice. All simulations are carried out with the Newmark parameters β = 0.25 and γ = 0.5; moreover, we choose θ = 0.5.
[Table 1, listing the physical parameter values used in the convergence tests, appeared here; Table 2 reports, for P₃–P₄ and P₃–P₃ elements, the errors ‖e_u‖_{DG,E} and Σ_{k∈J}‖√c_k e_{p_k}‖_{L²} together with the corresponding rates of convergence (roc).]
Test case 1: convergence analysis in a 3D case
For the numerical test in this section, we use the FEniCS finite element software [38] (version 2019) on a cubic domain with a structured tetrahedral mesh. Concerning the temporal discretization, we use a timestep ∆t = 10⁻⁵ and a final time T = 5 × 10⁻³. We consider the following manufactured exact solution for a case with four pressure fields:
\[
u(x,y,z,t) = \sin(\pi t)\begin{bmatrix} -\cos(\pi x)\cos(\pi y)\\ \sin(\pi x)\sin(\pi y)\\ z \end{bmatrix},
\]
\[
p_1(x,y,z,t) = p_3(x,y,z,t) = \pi\sin(\pi t)\big(\cos(\pi y)\sin(\pi x) + \cos(\pi x)\sin(\pi y)\big)z,
\]
\[
p_2(x,y,z,t) = p_4(x,y,z,t) = \pi\sin(\pi t)\big(\cos(\pi y)\sin(\pi x) - \cos(\pi x)\sin(\pi y)\big)z.
\]
A fundamental assumption in this section is the isotropic permeability of the pressure fields, K_j = k_j I for j = 1, ..., 4. The values of the physical parameters used in this simulation are reported in Table 1.
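For reproducibility, the forcing terms associated with a manufactured solution can be derived symbolically instead of by hand. The sketch below (SymPy) computes a source term of the form g₁ = c₁∂_t p₁ + α₁∇·u̇ − (k₁/µ₁)∆p₁ for the exact solution above; the exchange terms β_jk are omitted and all coefficient symbols are placeholders, not the values of Table 1.

```python
# Sketch (SymPy): symbolic derivation of the continuity source g_1 for the
# manufactured solution, assuming K_1 = k_1*I, constant coefficients, and
# omitting the exchange terms beta_jk. All symbols are placeholders.
import sympy as sym

x, y, z, t = sym.symbols("x y z t")
c1, a1, k1, mu1 = sym.symbols("c_1 alpha_1 k_1 mu_1", positive=True)

u = sym.Matrix([-sym.cos(sym.pi * x) * sym.cos(sym.pi * y),
                sym.sin(sym.pi * x) * sym.sin(sym.pi * y),
                z]) * sym.sin(sym.pi * t)
p1 = sym.pi * sym.sin(sym.pi * t) * (sym.cos(sym.pi * y) * sym.sin(sym.pi * x)
                                     + sym.cos(sym.pi * x) * sym.sin(sym.pi * y)) * z

div_u_dot = sum(sym.diff(sym.diff(ui, t), xi) for ui, xi in zip(u, (x, y, z)))
lap_p1 = sum(sym.diff(p1, xi, 2) for xi in (x, y, z))

g1 = sym.simplify(c1 * sym.diff(p1, t) + a1 * div_u_dot - k1 / mu1 * lap_p1)
print(g1)
```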
In Table 2 we report the computed errors in both the DG and L² norms, together with the computed rates of convergence (roc), as functions of the mesh size h. The results reported in Table 2, left (right), have been obtained with polynomials of degree q = 1, 2, 3 for the pressure fields and with polynomials of degree q (respectively q+1) for the displacement. We observe that the theoretical rates of convergence are attained both for P_q–P_{q+1} elements and for P_q–P_q ones. Indeed, the rate of convergence of the displacement in the DG-norm equals the degree of approximation of the displacement in all cases. At the same time, the L²-norm rates of convergence for the pressures are equal to q+1. This is coherent with our energy stability estimate in the first case, while we observe superconvergence of the pressure L²-norm with P_q–P_q elements. The rates of convergence can also be observed in Figure 3.
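The rates of convergence (roc) in Table 2 follow the standard formula roc = log(e_i/e_{i+1})/log(h_i/h_{i+1}) for successive mesh sizes; a one-function helper (with illustrative, not reported, values) is:

```python
# Sketch: rates of convergence from successive (h, error) pairs.
import math

def rates_of_convergence(hs, errs):
    return [math.log(errs[i] / errs[i + 1]) / math.log(hs[i] / hs[i + 1])
            for i in range(len(hs) - 1)]

# Illustrative values for a second-order method; prints approximately [2.0, 2.0].
print(rates_of_convergence([0.2, 0.1, 0.05], [4.0e-2, 1.0e-2, 2.5e-3]))
```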
A convergence analysis with respect to the order of the discretization is also performed; the results are reported in Figure 4.
Test case 2: convergence analysis on 2D polygonal grids
In this test we consider a two-dimensional setting on a polygonal agglomerated grid. Starting from structural Magnetic Resonance Images (MRI) of a brain from the OASIS-3 database [39], we segment the brain by means of Freesurfer [40,41]. After that, we construct a mesh of a slice of the brain along the frontal plane by means of VMTK [42].
The resulting triangular mesh is composed of 14 372 triangles. However, the generality of the PolyDG method allows us to use mesh elements of any shape; for this reason, we agglomerate the mesh by using ParMETIS [43] and we obtain a polygonal mesh of 51 elements, as shown in Figure 5. This mesh is then used to perform a convergence analysis, by varying the polynomial order. To test the convergence we consider the following exact solution for a case with two pressure fields (|J| = 2):
\[
u(x,y,t) = \sin(\pi t)\begin{bmatrix} -\cos(\pi x)\cos(\pi y)\\ \sin(\pi x)\sin(\pi y) \end{bmatrix}, \qquad
\begin{aligned}
p_1(x,y,t) &= 10^4\,\pi\sin(\pi t)\big(\cos(\pi y)\sin(\pi x) + \cos(\pi x)\sin(\pi y)\big),\\
p_2(x,y,t) &= 10^4\,\pi\sin(\pi t)\big(\cos(\pi y)\sin(\pi x) - \cos(\pi x)\sin(\pi y)\big).
\end{aligned}
\]
Concerning the time discretization, we use a timestep ∆t = 10⁻⁷ and a final time T = 10⁻⁵. The forcing terms are then constructed to fulfil the continuous problem. The simulation is performed considering isotropic permeability K_j = k_j I for j = 1, 2, and the values of the physical parameters used in this simulation are reported in Table 3. These values are chosen to be comparable in magnitude to the parameters used in patient-specific simulations in the literature [44,45]. In Figure 5, we report the computed solution using PolyDG of order 6 in both displacement and pressures at the final time. We can notice that the exact solution is smoothly approximated using the polygonal mesh: even though the mesh contains only a few elements, we are able to capture the solution.
We report in Figure 6 the convergence results for this test case. We observe spectral convergence when increasing the polynomial order q. Finally, we observe that beyond q = 5 the pressure errors reach a plateau; this is due to the chosen temporal discretization timestep, and it is coherent with the theory of space-time discretization errors.
Test case 3: simulation on a brain geometry
Finally, we perform a three-dimensional simulation starting from a structural MRI from the OASIS-3 database [39], which we segment by employing Freesurfer [40,41] and 3DSlicer [46,47]. The mesh is then constructed using the SVMTK library [48]. The resulting tetrahedral mesh is composed of 81 194 elements. The problem is solved by means of a code implemented in the FEniCS finite element software [38] (version 2019).
In this case we refer to the mathematical modelling of [6], which proposed the simulation of four different fluid networks: arterial blood (A), capillary blood (C), venous blood (V) and cerebrospinal/extracellular fluid (E). In this context, the boundary conditions are constructed by dividing the boundary of the domain into the ventricular boundary Γ_Vent and the skull Γ_Skull, as visible in Figure 7. The discretization is based on a DG method in space with polynomials of order 2 for each solution field. Moreover, we apply a temporal discretization with ∆t = 10⁻³ s, considering a heartbeat of duration 1 s; accordingly, we apply time-periodic boundary conditions to the problem. Concerning the elastic movement, we consider a fixed skull, while the ventricular boundary can deform under the stress of the CSF inside the ventricles:
\[
u = 0 \ \text{ on } \Gamma_{\mathrm{Skull}}, \qquad
\sigma_E(u)\,n - \sum_{j\in J}\alpha_j p_j n = -p^{\mathrm{Vent}}_E\,n \ \text{ on } \Gamma_{\mathrm{Vent}}. \tag{66}
\]
Concerning the arterial blood, we impose a sinusoidal pressure on the skull, to mimic the pressure variations due to the heartbeat, with a mean value of 70 mmHg. At the same time, we do not allow inflow/outflow of blood through the ventricular boundary:
\[
p_A = 70 + 10\sin(2\pi t)\ \text{mmHg on } \Gamma_{\mathrm{Skull}}, \qquad \nabla p_A\cdot n = 0 \ \text{ on } \Gamma_{\mathrm{Vent}}. \tag{67}
\]
Concerning the capillary blood, we do not allow any inflow/outflow of blood through the boundary:
\[
\nabla p_C\cdot n = 0 \ \text{ on } \partial\Omega. \tag{68}
\]
For the venous blood, we impose the pressure value at the boundary:
\[
p_V = 6\ \text{mmHg on } \partial\Omega. \tag{69}
\]
Finally, we assume that the CSF can flow from the parenchyma to the ventricles and we impose a pulsatility around a baseline of 5 mmHg:
\[
p_E = 5 + (2 + 0.012)\sin(2\pi t)\ \text{mmHg on } \Gamma_{\mathrm{Vent}}, \qquad p_E = 5 + 2\sin(2\pi t)\ \text{mmHg on } \Gamma_{\mathrm{Skull}}. \tag{70}
\]
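In a FEniCS implementation, the time-dependent data (67) and (70) can be realized as Expressions whose parameter t is updated at every timestep; a hedged sketch (values in mmHg as above, unit conversion and boundary markers omitted) is:

```python
# Sketch (legacy FEniCS): time-dependent boundary data for p_A and p_E.
# Values are in mmHg as in (67) and (70); conversion to SI units is omitted.
from dolfin import Expression

p_A_skull = Expression("70.0 + 10.0*sin(2.0*pi*t)", t=0.0, degree=2)         # (67)
p_E_vent = Expression("5.0 + (2.0 + 0.012)*sin(2.0*pi*t)", t=0.0, degree=2)  # (70)
p_E_skull = Expression("5.0 + 2.0*sin(2.0*pi*t)", t=0.0, degree=2)           # (70)

# Inside the time loop, before each solve:
#   p_A_skull.t = t_n; p_E_vent.t = t_n; p_E_skull.t = t_n
```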
We add the discharge term to the venous pressure equation with a parameter β^e_V = 10⁻⁶ m²/(N·s), and we consider an external veins pressure \(\bar{p}_{\mathrm{Veins}} = 6\) mmHg. To solve the resulting algebraic problem, we apply a monolithic strategy, using an iterative GMRES method with a SOR preconditioner.
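The monolithic GMRES+SOR solve mentioned above is available through the linear algebra backend of FEniCS; a hedged configuration sketch (tolerances and iteration caps are illustrative choices) is:

```python
# Sketch (legacy FEniCS): GMRES with SOR preconditioning for the monolithic
# system A1 X = b. Tolerances and iteration caps are illustrative choices.
from dolfin import KrylovSolver

solver = KrylovSolver("gmres", "sor")
solver.parameters["relative_tolerance"] = 1e-8
solver.parameters["maximum_iterations"] = 1000
# solver.solve(A1, X, b)   # one call per timestep
```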
In Figure 8 we report the numerical solution computed at time t = 0.25 s with the parameters from [16]. As we can observe, we obtain the maximum values of the displacement on the ventricular boundary. The pressure values obtained are coherent with the imposed boundary conditions, and the largest gradients are related to the arterial compartment. Concerning venous and capillary pressures, we do not report the maps of values inside the brain: indeed, the computed values are nearly spatially constant, at 6 mmHg and 38 mmHg, respectively. This is coherent with what is found in similar studies [7].
8 Conclusions
In this work, we have introduced a polyhedral discontinuous Galerkin method for the solution of the fully-dynamic multiple-network poroelastic model. We derived stability and error estimates for arbitrary-order approximations. Moreover, we proposed a temporal discretization based on the coupling of the Newmark β-method for the second-order equation with the θ-method for the first-order equations.
Numerical convergence tests were presented both in two and three dimensions. In particular, we presented a test on a slice of a brain with an agglomerated polygonal mesh. These tests confirmed the theoretical results of our analysis and the possibility of using this formulation to solve the problem on coarse polygonal meshes. Finally, we performed a numerical simulation on a real 3D brain geometry.
Declaration of competing interests
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this article.
B Derivation of PolyDG semi-discrete formulation
In this section, we derive the PolyDG formulation of the Equation (14). First, we rewrite the momentum equation in a Discontinuous Galerkin (DG) framework. We proceed as usual to obtain:
\[
\begin{aligned}
\int_\Omega \big(\nabla\cdot\sigma_E(u_h)\big)\cdot v_h &= \sum_{K\in\mathcal{T}_h}\int_K \big(\nabla\cdot\sigma_E(u_h)\big)\cdot v_h = -\sum_{K\in\mathcal{T}_h}\int_K \sigma_E(u_h):\nabla v_h + \sum_{K\in\mathcal{T}_h}\int_{\partial K}(\sigma_E(u_h)\,n)\cdot v_h\,d\sigma\\
&= -\int_\Omega \sigma_E(u_h):\nabla_h v_h + \sum_{F\in\mathcal{F}^I_h\cup\mathcal{F}^D_h}\int_F \{\{\sigma_E(u_h)\}\}:[[[v_h]]]\,d\sigma + \int_{\Gamma_N}(\sigma_E(u_h)\,n)\cdot v_h\,d\sigma \quad \forall v_h\in V^{DG}_h,
\end{aligned}
\]
where we used identity (25) and dropped the jump of the stress on the interior faces, consistently with the regularity of the exact solution; the symmetrization and penalty terms of (27) are then added consistently, since the jumps of the exact solution vanish. Moreover, we have to treat the pressure component of the momentum equation for each fluid network j ∈ J:
\[
\begin{aligned}
\int_\Omega \alpha_j\nabla p_{jh}\cdot v_h &= \int_\Omega \alpha_j\,\nabla\cdot(p_{jh}I)\cdot v_h = -\sum_{K\in\mathcal{T}_h}\int_K \alpha_j\,p_{jh}I:\nabla v_h + \sum_{K\in\mathcal{T}_h}\int_{\partial K}\alpha_j(p_{jh}I\,n)\cdot v_h\,d\sigma\\
&= -\int_\Omega \alpha_j\,p_{jh}I:\nabla_h v_h + \sum_{F\in\mathcal{F}^I_h\cup\mathcal{F}^{D_j}_h}\int_F \alpha_j\{\{p_{jh}I\}\}:[[[v_h]]]\,d\sigma + \int_{\Gamma_N}\alpha_j(p_{jh}I\,n)\cdot v_h\,d\sigma\\
&= -\int_\Omega \alpha_j\,p_{jh}(\nabla_h\cdot v_h) + \sum_{F\in\mathcal{F}^I_h\cup\mathcal{F}^{D_j}_h}\int_F \alpha_j\{\{p_{jh}I\}\}:[[[v_h]]]\,d\sigma + \int_{\Gamma_N}\alpha_j(p_{jh}I\,n)\cdot v_h\,d\sigma \quad \forall v_h\in V^{DG}_h.
\end{aligned}
\]
By exploiting the definitions of the bilinear forms, we arrive at the PolyDG semi-discrete formulation of the momentum equation for any t ∈ (0, T]:
\[
(\rho\ddot{u}_h(t), v_h)_\Omega + A_E(u_h(t), v_h) - \sum_{k\in J} B_k(p_{kh}(t), v_h) = F(v_h) \qquad \forall v_h\in V^{DG}_h. \tag{78}
\]
Following similar arguments, we can rewrite the continuity equations for the fluid networks, starting from the Darcy flux component:
\[
\begin{aligned}
-\int_\Omega \nabla\cdot\Big(\frac{K_j}{\mu_j}\nabla p_{jh}\Big)q_{jh} &= -\sum_{K\in\mathcal{T}_h}\int_K \nabla\cdot\Big(\frac{K_j}{\mu_j}\nabla p_{jh}\Big)q_{jh} = \sum_{K\in\mathcal{T}_h}\int_K \frac{K_j}{\mu_j}\nabla p_{jh}\cdot\nabla q_{jh} - \sum_{K\in\mathcal{T}_h}\int_{\partial K}\Big(\frac{K_j}{\mu_j}\nabla p_{jh}\cdot n\Big)q_{jh}\\
&= \int_\Omega \frac{K_j}{\mu_j}\nabla_h p_{jh}\cdot\nabla_h q_{jh} - \int_{\Gamma^j_N} h_j\,q_{jh} - \sum_{F\in\mathcal{F}^I_h\cup\mathcal{F}^{D_j}_h}\int_F \frac{1}{\mu_j}\{\{K_j\nabla_h p_{jh}\}\}\cdot[[q_{jh}]]\\
&\quad - \sum_{F\in\mathcal{F}^I_h\cup\mathcal{F}^{D_j}_h}\int_F \frac{1}{\mu_j}\{\{K_j\nabla_h q_{jh}\}\}\cdot[[p_{jh}]] + \sum_{F\in\mathcal{F}^I_h\cup\mathcal{F}^{D_j}_h}\int_F \zeta_j\,[[p_{jh}]]\cdot[[q_{jh}]] \qquad \forall q_{jh}\in Q^{DG}_h,
\end{aligned}
\]
where the last two terms (symmetrization and penalty) have been added consistently, since the jumps of the exact solution vanish.
The continuity equation then reads, for any t ∈ (0, T]:
\[
(c_j\dot{p}_{jh}(t), q_{jh})_\Omega + B_j(q_{jh}, \dot{u}_h(t)) + A_{P_j}(p_{jh}(t), q_{jh}) + C_j((p_{kh})_{k\in J}, q_{jh}) = G_j(q_{jh}) \qquad \forall q_{jh}\in Q^{DG}_h. \tag{79}
\]
To conclude, the PolyDG semi-discrete formulation of the MPET problem reads: find \(u_h(t)\in V^{DG}_h\) and \(p_{jh}(t)\in Q^{DG}_h\), j ∈ J, such that ∀t > 0:
\[
\begin{cases}
(\rho\ddot{u}_h(t), v_h)_\Omega + A_E(u_h(t), v_h) - \sum_{k\in J} B_k(p_{kh}(t), v_h) = F(v_h) & \forall v_h\in V^{DG}_h,\\
(c_j\dot{p}_{jh}(t), q_{jh})_\Omega + B_j(q_{jh}, \dot{u}_h(t)) + A_{P_j}(p_{jh}(t), q_{jh}) + C_j((p_{kh})_{k\in J}, q_{jh}) = G_j(q_{jh}) & \forall q_{jh}\in Q^{DG}_h,\\
u_h(0) = u_{0h},\quad \dot{u}_h(0) = v_{0h},\quad p_{jh}(0) = p_{j0h} & \text{in } \Omega,\\
u_h(t) = u^D_h(t) \text{ on } \Gamma_D,\quad p_{jh}(t) = p^D_{jh}(t) \text{ on } \Gamma^j_D. &
\end{cases}\tag{80}
\]
Figure 1: Example of an infinitesimal volume element in which we consider the coexistence of both the solid part (brown) and multiple fluid networks: CSF (light blue), arterial blood (red) and venous blood (blue).
Figure 2: A domain Ω with associated boundary conditions for both the displacement u of the tissue and a generic fluid pressure p_j for j ∈ J.
Figure 3: Test case 1: computed errors and convergence rates.
Figure 4: Test case 1: computed errors against the order of DG approximation.
Figure 5: Test case 2: computed solution (PolyDG of order 6) at the final time.
Figure 6: Test case 2: computed errors against the order of PolyDG approximation on the brain section.
Figure 7: Test case 3: brain 3D mesh. An external view of the mesh (left), an internal view with the ventricles in red (middle) and a visualization of the ventricular boundary in red with the skull in transparency (right).
Figure 8: Test case 3: solution of the MPET dynamic system in the patient-specific geometry at t = 0.25 s. From left to right: p_A, p_E and |u|.
Table 2: Error estimates and convergence rates for the 3D test case.
Table 3: Physical parameter values used in the 2D brain simulation.
A Derivation of the weak formulation

Considering the momentum equation, we introduce a test function v ∈ V and write the weak formulation:
\[
\Big(\rho\frac{\partial^2 u}{\partial t^2}, v\Big)_\Omega - (\nabla\cdot\sigma_E(u), v)_\Omega + \sum_{k\in J}(\alpha_k\nabla p_k, v)_\Omega = (f, v)_\Omega \qquad \forall v\in V. \tag{71}
\]
Each component can be treated separately (integrating by parts and using the symmetry of σ_E) to reach:
\[
-(\nabla\cdot\sigma_E(u), v)_\Omega = (\sigma_E(u), \varepsilon(v))_\Omega - \int_{\partial\Omega}(\sigma_E(u)\,n)\cdot v, \qquad
(\alpha_k\nabla p_k, v)_\Omega = -(\alpha_k p_k, \nabla\cdot v)_\Omega + \int_{\partial\Omega}\alpha_k p_k\,v\cdot n.
\]
Finally, exploiting the Neumann boundary condition on the boundary Γ_N for the momentum equation (and v = 0 on Γ_D), this identity leads us to the final weak formulation of the momentum equation:
\[
\Big(\rho\frac{\partial^2 u}{\partial t^2}, v\Big)_\Omega + (\sigma_E(u), \varepsilon(v))_\Omega - \sum_{k\in J}(\alpha_k p_k, \nabla\cdot v)_\Omega = (f, v)_\Omega + (h_u, v)_{\Gamma_N} \qquad \forall v\in V. \tag{72}
\]
Substituting the definitions of the bilinear forms into equation (72) we obtain:
\[
\Big(\rho\frac{\partial^2 u}{\partial t^2}, v\Big)_\Omega + a(u, v) - \sum_{k\in J} b_k(p_k, v) = F(v) \qquad \forall v\in V. \tag{73}
\]
The other conservation equations in Equation (5) can be derived following the same procedure. For j ∈ J, we multiply by a test function q_j ∈ Q_j and we integrate over the domain Ω:
\[
\Big(c_j\frac{\partial p_j}{\partial t}, q_j\Big)_\Omega + \Big(\nabla\cdot\Big(\alpha_j\frac{\partial u}{\partial t} - \frac{K_j}{\mu_j}\nabla p_j\Big), q_j\Big)_\Omega + \sum_{k\in J}(\beta_{jk}(p_j - p_k), q_j)_\Omega + (\beta^e_j p_j, q_j)_\Omega = (g_j, q_j)_\Omega. \tag{74}
\]
We treat the Darcy component separately, integrating by parts and using the Neumann condition on Γ^j_N (and q_j = 0 on Γ^j_D):
\[
-\Big(\nabla\cdot\Big(\frac{K_j}{\mu_j}\nabla p_j\Big), q_j\Big)_\Omega = \Big(\frac{K_j}{\mu_j}\nabla p_j, \nabla q_j\Big)_\Omega - (h_j, q_j)_{\Gamma^j_N}.
\]
We sum up the terms and obtain the following equation:
\[
\Big(c_j\frac{\partial p_j}{\partial t}, q_j\Big)_\Omega + \Big(\alpha_j\nabla\cdot\frac{\partial u}{\partial t}, q_j\Big)_\Omega + \Big(\frac{K_j}{\mu_j}\nabla p_j, \nabla q_j\Big)_\Omega + \sum_{k\in J}(\beta_{jk}(p_j - p_k), q_j)_\Omega + (\beta^e_j p_j, q_j)_\Omega = (g_j, q_j)_\Omega + (h_j, q_j)_{\Gamma^j_N}. \tag{75}
\]
Introducing now the definitions of the bilinear forms, we can rewrite problem (75) in an abstract form:
\[
\Big(c_j\frac{\partial p_j}{\partial t}, q_j\Big)_\Omega + b_j\Big(q_j, \frac{\partial u}{\partial t}\Big) + s_j(p_j, q_j) + C_j((p_k)_{k\in J}, q_j) = G_j(q_j) \qquad \forall q_j\in Q_j,\ j\in J. \tag{76}
\]
Finally, the weak formulation of problem (5) reads: find u(t) ∈ V and p_j(t) ∈ Q_j, j ∈ J, such that ∀t > 0:
\[
\begin{cases}
\Big(\rho\dfrac{\partial^2 u(t)}{\partial t^2}, v\Big)_\Omega + a(u(t), v) - \sum_{k\in J} b_k(p_k(t), v) = F(v) & \forall v\in V,\\[1mm]
\Big(c_j\dfrac{\partial p_j}{\partial t}, q_j\Big)_\Omega + b_j\Big(q_j, \dfrac{\partial u}{\partial t}\Big) + s_j(p_j, q_j) + C_j((p_k)_{k\in J}, q_j) = G_j(q_j) & \forall q_j\in Q_j,\ j\in J,\\
u(0) = u_0,\quad \dfrac{\partial u}{\partial t}(0) = v_0,\quad p_j(0) = p_{j0} & \text{in } \Omega,\ j\in J,\\
u(t) = u_D(t) \text{ on } \Gamma_D,\quad p_j(t) = p^D_j(t) \text{ on } \Gamma^j_D, & j\in J.
\end{cases}\tag{77}
\]
References

[1] M. A. Biot, "General theory of three dimensional consolidation," Journal of Applied Physics, vol. 12, no. 2, pp. 155-164, 1941.
[2] O. C. Zienkiewicz, "Basic formulation of static and dynamic behaviours of soil and other porous media," Applied Mathematics and Mechanics, vol. 3, no. 4, pp. 457-468, 1982.
[3] O. C. Zienkiewicz and T. Shiomi, "Dynamic behaviour of saturated porous media; The generalized Biot formulation and its numerical solution," International Journal for Numerical and Analytical Methods in Geomechanics, vol. 8, no. 1, pp. 71-96, 1984.
[4] N. Barnafi, S. Di Gregorio, L. Dede', P. Zunino, C. Vergara, and A. Quarteroni, "A multiscale poromechanics model integrating myocardial perfusion and the epicardial coronary vessels," SIAM Journal on Applied Mathematics, vol. 82, no. 4, pp. 1167-1193, 2022.
[5] N. Barnafi, P. Zunino, L. Dedè, and A. Quarteroni, "Mathematical analysis and numerical approximation of a general linearized poro-hyperelastic model," Computers & Mathematics with Applications, vol. 91, pp. 202-228, 2021.
[6] B. Tully and Y. Ventikos, "Cerebral water transport using multiple-network poroelastic theory: Application to normal pressure hydrocephalus," Journal of Fluid Mechanics, vol. 667, pp. 188-215, 2011.
[7] J. C. Vardakis, D. Chou, L. Guo, and Y. Ventikos, "Exploring neurodegenerative disorders using a novel integrated model of cerebral transport: Initial results," Proceedings of the Institution of Mechanical Engineers, Part H: Journal of Engineering in Medicine, vol. 234, no. 11, pp. 1223-1234, 2020.
[8] T. I. Józsa, R. M. Padmos, N. Samuels, W. K. El-Bouri, A. G. Hoekstra, and S. J. Payne, "A porous circulation model of the human brain for in silico clinical trials in ischaemic stroke," Interface Focus, vol. 11, p. 20190127, 2021.
[9] T. I. Józsa, R. M. Padmos, W. K. El-Bouri, A. G. Hoekstra, and S. J. Payne, "On the sensitivity analysis of porous finite element models for cerebral perfusion estimation," Annals of Biomedical Engineering, vol. 49, no. 12, pp. 3647-3665, 2021.
[10] D. Chou, J. C. Vardakis, L. Guo, B. J. Tully, and Y. Ventikos, "A fully dynamic multi-compartmental poroelastic system: Application to aqueductal stenosis," Journal of Biomechanics, vol. 49, no. 11, pp. 2306-2312, 2016.
[11] J. C. Vardakis, D. Chou, B. J. Tully, C. C. Hung, T. H. Lee, P.-H. Tsui, and Y. Ventikos, "Investigating cerebral oedema using poroelasticity," Medical Engineering & Physics, vol. 38, no. 1, pp. 48-57, 2016.
[12] L. Guo, J. C. Vardakis, D. Chou, and Y. Ventikos, "A multiple-network poroelastic model for biological systems and application to subject-specific modelling of cerebral fluid transport," International Journal of Engineering Science, vol. 147, p. 103204, 2020.
[13] T. B. Thompson, P. Chaggar, E. Kuhl, A. Goriely, and the Alzheimer's Disease Neuroimaging Initiative, "Protein-protein interactions in neurodegenerative diseases: A conspiracy theory," PLOS Computational Biology, vol. 16, no. 10, p. e1008267, 2020.
[14] J. Weickenmeier, M. Jucker, A. Goriely, and E. Kuhl, "A physics-based model explains the prion-like features of neurodegeneration in Alzheimer's disease, Parkinson's disease, and amyotrophic lateral sclerosis," Journal of the Mechanics and Physics of Solids, vol. 124, pp. 264-281, 2019.
[15] G. S. Brennan, T. B. Thompson, H. Oliveri, M. E. Rognes, and A. Goriely, "The role of clearance in neurodegenerative diseases," 2022.
[16] J. J. Lee, E. Piersanti, K.-A. Mardal, and M. E. Rognes, "A mixed finite element method for nearly incompressible multiple-network poroelasticity," SIAM Journal on Scientific Computing, vol. 41, no. 2, pp. A722-A747, 2019.
[17] E. Piersanti, J. Lee, T. Thompson, K.-A. Mardal, and M. Rognes, "Parameter robust preconditioning by congruence for multiple-network poroelasticity," SIAM Journal on Scientific Computing, vol. 43, pp. B984-B1007, 2021.
[18] L. Botti, M. Botti, and D. A. Di Pietro, "A Hybrid High-Order method for multiple-network poroelasticity," in Polyhedral Methods in Geosciences, SEMA SIMAI Springer Series, pp. 227-258, Springer International Publishing, 2021.
[19] N. M. Newmark, "A method of computation for structural dynamics," Journal of the Engineering Mechanics Division, vol. 85, no. EM3, pp. 67-94, 1959.
[20] A. Quarteroni, Numerical Models for Differential Problems. Springer, 3rd ed., 2017.
[21] P. F. Antonietti, A. Cangiani, J. Collis, Z. Dong, E. H. Georgoulis, S. Giani, and P. Houston, "Review of discontinuous Galerkin finite element methods for partial differential equations on complicated domains," in Building Bridges: Connections and Challenges in Modern Approaches to Numerical Partial Differential Equations, pp. 281-310, Springer, 2016.
[22] A. Cangiani, Z. Dong, E. H. Georgoulis, and P. Houston, hp-Version Discontinuous Galerkin Methods on Polygonal and Polyhedral Meshes. Springer, 2017.
[23] A. Cangiani, E. H. Georgoulis, and P. Houston, "hp-version discontinuous Galerkin methods on polygonal and polyhedral meshes," Mathematical Models and Methods in Applied Sciences, vol. 24, no. 10, pp. 2009-2041, 2014.
[24] A. Cangiani, Z. Dong, and E. Georgoulis, "hp-version discontinuous Galerkin methods on essentially arbitrarily-shaped elements," Mathematics of Computation, vol. 91, no. 333, pp. 1-35, 2022.
[25] F. Bassi, L. Botti, A. Colombo, D. A. Di Pietro, and P. Tesini, "On the flexibility of agglomeration based physical space discontinuous Galerkin discretizations," Journal of Computational Physics, vol. 231, no. 1, pp. 45-65, 2012.
[26] P. F. Antonietti, C. Facciolà, P. Houston, I. Mazzieri, G. Pennesi, and M. Verani, "High-order discontinuous Galerkin methods on polyhedral grids for geophysical applications: Seismic wave propagation and fractured reservoir simulations," in Polyhedral Methods in Geosciences, pp. 159-225, Springer International Publishing, 2021.
[27] P. F. Antonietti, F. Bonaldi, and I. Mazzieri, "A high-order discontinuous Galerkin approach to the elasto-acoustic problem," Computer Methods in Applied Mechanics and Engineering, vol. 358, p. 112634, 2020.
[28] P. F. Antonietti, M. Botti, I. Mazzieri, and S. Nati Poltri, "A high-order discontinuous Galerkin method for the poro-elasto-acoustic problem on polygonal and polyhedral grids," SIAM Journal on Scientific Computing, vol. 44, pp. B1-B28, 2022.
[29] P. F. Antonietti and I. Mazzieri, "High-order discontinuous Galerkin methods for the elastodynamics equation on polygonal and polyhedral meshes," Computer Methods in Applied Mechanics and Engineering, vol. 342, pp. 414-437, 2018.
[30] S. Di Gregorio, M. Fedele, G. Pontone, A. F. Corno, P. Zunino, C. Vergara, and A. Quarteroni, "A computational model applied to myocardial perfusion in the human heart: From large coronaries to microvasculature," Journal of Computational Physics, vol. 424, p. 109836, 2021.
[31] O. Coussy, Poromechanics. John Wiley & Sons, 2004.
[32] S. Salsa, Partial Differential Equations in Action: From Modeling to Theory. Springer, 3rd ed., 2016.
[33] D. N. Arnold, F. Brezzi, B. Cockburn, and L. Donatella Marini, "Unified analysis of discontinuous Galerkin methods for elliptic problems," SIAM Journal on Numerical Analysis, vol. 39, no. 5, pp. 1749-1779, 2002.
[34] D. N. Arnold, "An interior penalty finite element method with discontinuous elements," SIAM Journal on Numerical Analysis, vol. 19, no. 4, pp. 742-760, 1982.
[35] P. F. Antonietti, B. Ayuso de Dios, I. Mazzieri, and A. Quarteroni, "Stability analysis of discontinuous Galerkin approximations to the elastodynamics problem," Journal of Scientific Computing, vol. 68, pp. 143-170, 2016.
[36] P. F. Antonietti, L. Mascotto, M. Verani, and S. Zonca, "Stability analysis of polytopic discontinuous Galerkin approximations of the Stokes problem with applications to fluid-structure interaction problems," Journal of Scientific Computing, vol. 90, no. 1, p. 23, 2021.
[37] P. F. Antonietti, S. Bonetti, and M. Botti, "Discontinuous Galerkin approximation of the fully-coupled thermo-poroelastic problem," SIAM Journal on Scientific Computing, to appear, 2023.
[38] M. S. Alnaes, J. Blechta, J. Hake, A. Johansson, B. Kehlet, A. Logg, C. Richardson, J. Ring, M. E. Rognes, and G. N. Wells, "The FEniCS project version 1.5," Archive of Numerical Software, vol. 3, 2015.
[39] P. J. LaMontagne, T. L. Benzinger, J. C. Morris, S. Keefe, R. Hornbeck, C. Xiong, E. Grant, J. Hassenstab, K. Moulder, A. G. Vlassenko, M. E. Raichle, C. Cruchaga, and D. Marcus, "OASIS-3: Longitudinal neuroimaging, clinical, and cognitive dataset for normal aging and Alzheimer disease," medRxiv, 2019.
[40] B. Fischl, "FreeSurfer," NeuroImage, vol. 62, pp. 774-781, 2012.
[41] "FreeSurfer." https://surfer.nmr.mgh.harvard.edu/, 2022.
[42] L. Antiga, M. Piccinelli, L. Botti, B. Ene-Iordache, A. Remuzzi, and D. A. Steinman, "An image-based modeling framework for patient-specific computational hemodynamics," Medical & Biological Engineering & Computing, vol. 46, pp. 1097-1112, 2008.
[43] G. Karypis, K. Schloegel, and V. Kumar, "ParMETIS: Parallel graph partitioning and sparse matrix ordering library." https://github.com/KarypisLab/ParMETIS, 2022.
[44] L. Guo, J. C. Vardakis, T. Lassila, M. Mitolo, N. Ravikumar, D. Chou, M. Lange, A. Sarrami-Foroushani, B. J. Tully, Z. A. Taylor, S. Varma, A. Venneri, A. F. Frangi, and Y. Ventikos, "Subject-specific multi-poroelastic model for exploring the risk factors associated with the early stages of Alzheimer's disease," Interface Focus, vol. 8, no. 1, 2018.
Fluid-structure interaction for highly complex, statistically defined, biological media: Homogenisation and a 3D multi-compartmental poroelastic model for brain biomechanics. J C Vardakis, L Guo, T W Peach, T Lassila, M Mitolo, D Chou, Z A Taylor, S Varma, A Venneri, A F Frangi, Y Ventikos, Journal of Fluids and Structures. 91102641J. C. Vardakis, L. Guo, T. W. Peach, T. Lassila, M. Mitolo, D. Chou, Z. A. Taylor, S. Varma, A. Venneri, A. F. Frangi, and Y. Ventikos, "Fluid-structure interaction for highly complex, statistically defined, biological media: Homogenisation and a 3D multi-compartmental poroelastic model for brain biomechanics," Journal of Fluids and Structures, vol. 91, p. 102641, 2019.
3D Slicer as an image computing platform for the quantitative imaging network. A Fedorov, R Beichel, J Kalpathy-Cramer, J Finet, J.-C Fillion-Robin, S Pujol, C Bauer, D Jennings, F Fennessy, M Sonka, J Buatti, S Aylward, J Miller, S Pieper, R Kikinis, Magnetic Resonance Imaging. 309A. Fedorov, R. Beichel, J. Kalpathy-Cramer, J. Finet, J.-C. Fillion-Robin, S. Pujol, C. Bauer, D. Jennings, F. Fennessy, M. Sonka, J. Buatti, S. Aylward, J. Miller, S. Pieper, and R. Kikinis, "3D Slicer as an im- age computing platform for the quantitative imaging network," Magnetic Resonance Imaging, vol. 30, no. 9, pp. 1323-1341, 2012.
. "3d Slicer, 2022"3D Slicer." https://www.slicer.org/, 2022.
Mathematical Modeling of the Human Brain -From Magnetic Resonance Images to Finite Element Simulation. K.-A Mardal, M E Rognes, T B Thompson, L Magnus Valnes, SpringerK.-A. Mardal, M. E. Rognes, T. B. Thompson, and L. Magnus Valnes, Mathematical Modeling of the Human Brain -From Magnetic Resonance Images to Finite Element Simulation . Springer, 2021.
| [
"https://github.com/KarypisLab/ParMETIS,"
] |
[
"On the weak solutions for the MHD systems with controllable total energy and cross helicity",
"On the weak solutions for the MHD systems with controllable total energy and cross helicity"
] | [
"Changxing Miao \nInstitute of Applied Physics and Computational Mathematics\nP.O. Box 8009100088BeijingP. R. China\n",
"Weikui Ye \nInstitute of Applied Physics and Computational Mathematics\nP.O. Box 8009100088BeijingP. R. China\n"
] | [
"Institute of Applied Physics and Computational Mathematics\nP.O. Box 8009100088BeijingP. R. China",
"Institute of Applied Physics and Computational Mathematics\nP.O. Box 8009100088BeijingP. R. China"
] | [] | In this paper, we prove the non-uniqueness of three-dimensional magneto-hydrodynamic (MHD) system in C([0, T ]; L 2 (T 3 )) for any initial data in Hβ(T 3 ) (β > 0), by exhibiting that the total energy and the cross helicity can be controlled in a given positive time interval. Our results extend the nonuniqueness results of the ideal MHD system to the viscous and resistive MHD system. Different from the ideal MHD system, the dissipative effect in the viscous and resistive MHD system prevents the nonlinear term from balancing the stress error (Rq,Mq) as doing in[4]. We introduce the box flows and construct the perturbation consisting in seven different kinds of flows in convex integral scheme, which ensures that the iteration works and yields the non-uniqueness. | null | [
"https://export.arxiv.org/pdf/2208.08311v2.pdf"
] | 251,622,452 | 2208.08311 | b552db33d899152d3f12976dcd1ef693354837b9 |
On the weak solutions for the MHD systems with controllable total energy and cross helicity
Changxing Miao
Institute of Applied Physics and Computational Mathematics
P.O. Box 8009100088BeijingP. R. China
Weikui Ye
Institute of Applied Physics and Computational Mathematics
P.O. Box 8009100088BeijingP. R. China
On the weak solutions for the MHD systems with controllable total energy and cross helicity
convex integral iterationthe MHD systemweak solutionscross helicitynon-uniqueness Mathematics Subject Classification: 35A0235D3035Q3076D0576W05
In this paper, we prove the non-uniqueness of three-dimensional magneto-hydrodynamic (MHD) system in C([0, T ]; L 2 (T 3 )) for any initial data in Hβ(T 3 ) (β > 0), by exhibiting that the total energy and the cross helicity can be controlled in a given positive time interval. Our results extend the nonuniqueness results of the ideal MHD system to the viscous and resistive MHD system. Different from the ideal MHD system, the dissipative effect in the viscous and resistive MHD system prevents the nonlinear term from balancing the stress error (Rq,Mq) as doing in[4]. We introduce the box flows and construct the perturbation consisting in seven different kinds of flows in convex integral scheme, which ensures that the iteration works and yields the non-uniqueness.
Introduction
In this paper, we consider the Cauchy problem of the following 3D MHD equations: (1.1) where v(t, x) is the fluid velocity, b(t, x) is the magnetic fields and p(t, x) is the scalar pressure. ν 1 and ν 2 are the viscous and resistive coefficients, respectively. We call (1.1) the viscous and resistive MHD system when ν 1 , ν 2 > 0. For a given initial data (v in , b in ) ∈ Hβ(T 3 ) 1 withβ > 0, we construct a weak solution of (1.1) with controllable total energy and cross helicity, which implies the non-uniqueness of weak solutions in C([0, T ]; L 2 (T 3 )).
∂ t v − ν 1 ∆v + div(v ⊗ v) + ∇p = div(b ⊗ b), ∂ t b − ν 2 ∆b + div(v ⊗ b) = div(b ⊗ v), div v = 0, div b = 0, (v, b)| t=0 = (v in , b in ),
To begin with, let us introduce the definition of weak solutions of (1.1). Definition 1.1. Let (v in , b in ) ∈ L 2 (T 3 ). We say that (v, b) ∈ C([0, T ]; L 2 (T 3 )) is a weak solution to (1.1), if div v = div b = 0 in the weak sense, and for all divergence-free test functions φ ∈ C ∞ 0 ([0, T ) × T 3 ),
T 0 T 3 (∂ t − ν 1 ∆)φv + ∇φ : (v ⊗ v − b ⊗ b) dx dt = − T 3 v in φ(0, x) dx, (1.2) T 0 T 3 (∂ t − ν 2 ∆)φb + ∇φ : (v ⊗ b − b ⊗ v) dx dt = − T 3 b in φ(0, x) dx.
(1.3)
When ν 1 = ν 2 = 0, (1.1) becomes the ideal MHD system:
∂ t v + div(v ⊗ v) + ∇p = div(b ⊗ b), ∂ t b + div(v ⊗ b) = div(b ⊗ v), div v = 0, div b = 0, (v, b)| t=0 = (v in , b in ).
( 1.4) For the smooth solutions to (1.4), they possess a number of physical invariants :
The total energy: e(t) = T 3 (|v(t, x)| 2 + |b(t, x)| 2 ) dx; The cross helicity:
h v,b (t) = T 3 (v(t, x) · b(t, x)) dx; The magnetic helicity: h b,b (t) = T 3 (A(t, x) · b(t,
x)) dx, where A is a periodic vector field with zero mean satisfying curl A = b.
In 1949, Lars Onsager conjectured that the Hölder exponent threshold for the energy conservation of weak solutions of the Euler equations is 1/3. Since then, many mathematicians are devoted to proving Onsager conjecture on the Euler equations and there have been a flood of papers with this problem [7, 11-15, 19, 22, 23, 26]. In recent years, Onsager-type conjectures on the ideal MHD equations which possess several physical invariants have caught researchers' interest and some progress has been made on related issues. For instance, in [2,9,20], the magnetic helicity conservation for the 3D ideal MHD was proved in the critical space L 3 t,x . Later, Faraco-Lindberg-Székelyhidi [18] showed that the L 3 t,x integrability condition for the magnetic helicity conservation is sharp. The cross helicity and total energy are conservative when the weak solutions (v, b) ∈ L 3 t B α 3,∞ with α > 1 3 , but whether they are conservative for α ≤ 1 3 or not is still an open problem. Throughout current literatures, whether the solution satisfies the physical invariants and Runa [13] considered non-uniqueness problem by constructing wild initial data which is L 2 (T 3 )-dense, while Rosa and Haffter [15] also showed that any smooth initial data gives rise to uncountably many solutions.
For the incompressible Navier-Stokes equations (1.1) with b ≡ 0, there have many results on the nonuniqueness problems. Buckmaster-Vicol in [8] made the first important break-through by making use of a L 2
x -based intermittent convex integration scheme. Subsequently, Buckmaster, Colombo and Vicol [5] showed that the wild solutions can be generated by H 3 initial data. Recently, another non-uniqueness result based on Serrin condition for the Navier-Stokes equations was proved by Cheskidov-Luo [10], which shows the sharpness of the Ladyzhenskaya-Prodi-Serrin criteria 2 p + d q ≤ 1 at the endpoint (p, q) = (2, ∞). In [1], Albritton, Brué and Colombo proved the non-uniqueness of the Leray-Hopf solutions with a special force by skillfully constructing a "background" solution which is unstable for the Navier-Stokes dynamics in similarity variables. For the 3D hyper-viscous NSE, Luo-Titi [25] also proved the non-uniqueness results, whenever the exponent of viscosity is less than the Lions exponent 5/4.
For the viscous and resistive MHD system, the existence of Leray-Hopf solutions to the MHD equations was proved by Wu [28]. In [24], Li, Zeng and Zhang proved the non-uniqueness of weak solutions in H t,x , where sufficiently small. However, the uniqueness of Leray-Hopf solutions is unsolved, even in C t L 2
x ∩L 2
tḢ 1 x
is still open. In [8], the non-uniqueness result for the Navier-Stokes equations also imply the non-uniqueness of the viscous and resistive MHD system with trivial magnetic field in C t L 2 x . One natural problem is whether the viscous and resistive MHD system with non-trivial magnetic fields in C t L 2
x is unique or not. In this paper, we solve this problem by showing the non-uniqueness of (1.1) with ν 1 , ν 2 > 0. Now we are in position to state the main result. [4] proved the non-uniqueness for the weak solutions in C t L 2
x . However, for the viscous and resistive MHD system (1.1), the uniqueness for solutions in C t L 2
x is still unsloved. Theorem 1.2 solves this problem and extends the non-uniqueness results of the ideal MHD system to the viscous and resistive MHD system. Compared with the ideal MHD system, the dissipative effect prevents the nonlinear term from balancing the stress error (R q ,M q ) as doing in [4]. This leads to the major difficulty in convex integral iteration in
C t L 2
x . A nature choice is using 3D box type flows instead of the Mikado flows in convex integral iteration. However, these 3D box type flows do not have enough freedom on the oscillation directions in the velocity and magnetic flows, which will give rise to additional errors in the oscillation terms. Inspired by [4,8,10], we construct "temporal flows" and "Inverse traveling wave flows" to eliminate these extra errors, which help us construct a weak solution by combining with the principal flows. Moreover, we construct the so-called "Initial flows" and "Helicity flows" to achieve
(v(0, x), b(0, x)) = (v in , b in ), and T 3 (|v| 2 + |b| 2 ) dx = e(t), and T 3 v · b dx = h(t), t ∈ [1, T ],
which yields the non-uniqueness the weak solution.
We now present a main theorem, which immediately implies Theorem 1.2 by showing that the total energy and the cross helicity can be controlled in a given positive time interval:
Theorem 1.4 (Main theorem). Let T,β > 0 and (v in , b in ) ∈ Hβ(T 3 )
. For fixed δ 2 > 0, assume that there exists two smooth functions e(t), h(t) satisfying
δ 2 2 ≤ e(t) − T 3 (|v in | 2 + |b in | 2 ) dx ≤ 3δ 2 4 , t ∈ [ 1 2 , T ] (1.5) and δ 2 200 ≤ h(t) − T 3 v in · b in dx ≤ δ 2 50 , t ∈ [ 1 2 , T ]. (1.6)
Then there exists a weak solution (v, b) ∈ C([0, T ]; L 2 (T 3 )) to the viscous and resistive MHD system with initial data (v in , b in ). Moreover, we have
T 3 (|v| 2 + |b| 2 ) dx = e(t) and T 3 v · b dx = h(t), t ∈ [1, T ]
where h(t) := h v,b (t) denotes the cross helicity.
Remark 1.5. For a given (v in , b in ) ∈ Hβ, one can choose infinitely many functions e(t), h(t) satisfying (1.5) and (1.6), which implies the non-uniqueness of weak solutions. Moreover, we will prove that (v, b) ∈ C([0, T ]; H (T 3 )) with 0 < β in Section 2.2.
For the ideal MHD system (1.4), one can obtain a similar result after a simple modification to the proof of Theorem 1.4.
Theorem 1.6. Let T,β > 0 and (v in , b in ) ∈ Hβ(T 3 )
. Then there exist infinitely many smooth functions
e(t), h(t) associated with a weak solution (v, b) ∈ C([0, T ]; L 2 (T 3 )) to the ideal MHD system (1.4) with initial data (v in , b in ). Moreover, we have T 3 (|v| 2 + |b| 2 ) dx = e(t) and T 3 v · b dx = h(t), t ∈ [1, T ].
Remark 1.7. Theorem 1.6 shows that all initial data in Hβ (∀β > 0) may generate non unique weak solutions by choosing different total energy e(t) or cross helicity h(t). For weak solutions with non-conservative magnetic helicity, one can see [4,18,24] for more details.
As a matter of fact, authors in [4] constructed solutions in C t H x → C t L 2 x which breaks the conservative law of magnetic helicity. In view of Taylor's conjecture, these weak solutions cannot be the weak ideal limits of Leray-Hopf weak solutions. Mathematically, Taylor's conjecture is stated as follows: Theorem 1.8 (Taylor's conjecture [16,27]). Suppose that (v, b) ∈ L ∞ ([0, T ]; L 2 (T 3 )) is a weak ideal limit of sequence of Leray-Holf weak solutions of the viscous and resistive MHD system, then the magnetic helicity is conservative.
Fortunately, combining Theorem 1.4 with Theorem 1.6, we can prove that the weak solutions constructed in [4] can be a vanishing viscosity and resistivity limit of the weak solutions to (1.1), which is similar to Theorem 1.3 in [8].
Corollary 1.9. Suppose that (v, b) ∈ C([0, T ]; H (T 3 )
) is a weak solution of (1.4). Then, there exist 0 < and a sequence of weak solutions (v νn , b νn ) ∈ C([0, T ]; H (T 3 )) to the viscous and resistive MHD system such that,
(v νn , b νn ) → (v, b) strongly in C t L 2 x , as ν n → 0,
where ν n = (ν 1,n , ν 2,n ).
Outline of the convex integration scheme
In this paper, it suffices to prove Theorem 1.4 for (1.1) with ν 1 , ν 2 > 0. Without loss of generality, we set ν 1 = ν 2 = 1 .
Parameters and the iterative process
Setβ < 1. If (v in , b in ) is sufficiently smooth, we still have (v in , b in ) ∈ H 1 ⊂ Hβ. We choose b = 2 16 β −1/2 , β =β b 4 , α
to be a small constant depending on b, β,β such that 0 < α ≤ min{ 1 b 6 , β b 3 }, and a ∈ N + to be a large number depending on b, β,β, α and the initial data . We define
λ q := a (b q ) , δ q := λ 3β 2 λ −2β q , q ∈ N + . (2.1)
For q = 1, 2, δ q is a large number which could bound the L 2 norm of initial data by choosing a sufficiently large. For q ≥ 3, δ q is small and tends to zero as q → ∞.
Firstly, we choose two smooth functions e :
[1/2, T ] → [0, ∞), h : [1/2, T ] → (−∞, ∞) such that δ 2 2 ≤ e(t) − T 3 (|v in | 2 + |b in | 2 ) dx ≤ 3δ 2 4 , (2.2) δ 2 200 ≤ h(t) − T 3 v in · b in dx ≤ δ 2 150 . (2.3)
Secondly, adopting strategy of convex integration scheme, we consider a modification of (1.1) with stress tensor error (R q ,M q ). Assume that ψ := 1 ψ( x ) stands for a sequence of standard mollifiers, where ψ is a non-negative radial bump function. Let (v q , b q , p q ,R q ,M q ) solve
∂ t v q − ∆v q + div(v q ⊗ v q ) + ∇p q = div(b q ⊗ b q ) + divR q , ∂ t b q − ∆b q + div(v q ⊗ b q ) = div(b q ⊗ v q ) + divM q , ∇ · v q = 0, ∇ · b q = 0, (v q , b q )| t=0 = (v in * ψ q−1 , b in * ψ q−1 ). (2.4) where q−1 := λ −6 q−1 , v ⊗ b := (v j b i ) 3 i,j=1
, and vector div M denotes the divergence of a 2-tensor M = (M ij ) 3 i,j=1 with components:
(div M ) i := ∂ j M ji . In particular, div(v ⊗ b) = (v · ∇)b if div v = 0.
The magnetic stressM q is required to be a anti-symmetric matrix. And the Reynolds stressR q is a symmetric, trace-free 3 × 3 matrix.
M q = −M T q ,R q =R T q , TrR q = 3 i=1 (R q ) ii = 0. (2.5)
The estimates we propagate inductively are:
(v q , b q ) L 2 ≤ C 0 q l=1 δ 1/2 l , (2.6) (v q , b q ) H 3 ≤ λ 5 q , (2.7) (R q ,M q ) L 1 ≤ δ q+1 λ −40α q , (2.8) (v q (0, x), b q (0, x)) = (v in * ψ q−1 , b in * ψ q−1 ), (2.9) t ∈ [1 − τ q−1 , T ] =⇒ 1 3 δ q+1 ≤ e(t) − T 3 |v q | 2 + |b q | 2 dx ≤ δ q+1 , (2.10) t ∈ [1 − τ q−1 , T ] =⇒ δq+1 300 ≤ h(t) − T 3 v q · b q dx ≤ δq+1 100 ,(2.11)
where q := λ −6 q , τ q := 3 q and C 0 := 600. By the definition of δ q , one can easily deduce that ∞ i=1 δ i converges to a finite number. Moreover, we restrict the error of the cross helicity to be much smaller than the energy error in the iterative procedure, which is used to reduce the impact on the energy error, see Section 4.4.
Proposition 2.1. Let (v in , b in ) ∈ Hβ(T 3 ) with 0 <β < 1. Assume that (v q , b q , p q ,M q ,R q ) solves (2.4)
and satisfies (2.6)-(2.9), and e(t), h(t) are any smooth functions satisfying (2.10)-(2.11), then there exists a solution (v q+1 , b q+1 , p q+1 ,R q+1 ,M q+1 ), satisfying (2.4), (2.6)-(2.11) with q replaced by q + 1, and such that
(v q+1 − v q , b q+1 − b q ) L 2 ≤ C 0 δ 1/2 q+1 .
(2.12)
Notations: Throughout this paper, we set that
v⊗b := v ⊗ b − 1 3 Tr(v ⊗ b)Id, P H := Id − ∇ div ∆ , P >0 f (x) = f (x) − T d f (z) dz.
It is easy to check that v⊗b is a trace-free matrix. Next, we prove that Proposition 2.1 implies Theorem 1.4. To start the iteration, we define (v 1 , b 1 , p 1 ,
R 1 ,M 1 ) by v 1 (x, t) := e t∆ v in * ψ 0 , b 1 (x, t) := e t∆ b in (x) * ψ 0 , p 1 (x, t) := |v 1 | 2 − |b 1 | 2 , R 1 (x, t) := v 1⊗ v 1 − b 1⊗ b 1 ,M 1 (x, t) := v 1 ⊗ b 1 − b 1 ⊗ v 1 .
It is easy to verify that (v 1 , b 1 , p 1 ,R 1 ,M 1 ) solves (2.4). In addition, letting a, b be sufficiently large, we can guarantee that Then, making use of Proposition 2.1 inductively, we obtain a L 2 convergent sequence of functions
(R 1 ,M 1 ) L 1 ≤ (v in , b in ) 2 L 2 ≤ δ 2 λ −40α 1 , (e t∆ v in , e t∆ b in ) L 2 ≤ (v in , b in ) L 2 ≤ δ 1/2 2 < δ 1/2 1 , (v in * ψ 0 , b in * ψ 0 ) H 3 ≤ δ 1/2 2 3 0 ≤ λ 5 1 .(v q , b q ) → (v, b) which solves (1.1), with v 2 L 2 + b 2 L 2 = e(t) and T 3 v · b dx = h(t) for all t ∈ [1, T ]. A standard argument shows that (v, b) ∈ C([0, T ]; L 2 (T 3 )
), see [6] for more details. Moreover, from (2.7) and
(2.12), there exists 0 < β such that {(v q , b q )} is also a Cauchy sequence in C t H x by interpolation. Thus, we obtain (v, b) ∈ C t H x .
The remainder of the paper is devoted to the proof of Proposition 2.1.
The proof sketch of Proposition 2.1
Starting from a solution (v q , b q , p q ,R q ,M q ) satisfying the estimates as in Proposition 2.1, the broad scheme of the iteration is as follows.
1. We defined (v q , b q , p q ,R q ,M q ) by mollification, and it is standard in convex integration schemes.
2. We define a family of exact solutions (v l , b l ) l≥0 to MHD by exactly solving the MHD system with
initial data (v l , b l )| t=t l = (v q (t l ), b q (t l ))
, where t l = lτ q defines an evenly spaced paritition of [0, T ].
3. These solutions are glued together by a partition of unity, leading to the tuple (v q ,b q ,p q ,R q ,M q ). The stress error terms are zero when t ∈ J l , l ≥ 0, see [5,6].
4. We define (v q+1 , b q+1 ) = (v q + w q+1 ,b q + d q+1 )
by constructing a perturbation (w q+1 , d q+1 ).
5. Finally, we prove that the inductive estimates (2.6)-(2.11) hold with q replaced by q + 1.
Step 4 is moderately involved and we breaks it into the following sub-steps :
1. For times t ≥ 1, we use the 'squiggling' cutoffs η l from [6,21] that allow energy to be added at such times, even outside the support of (R q ,M q ), while cancelling a large part of (R q ,M q ).
2. For times t < 1, we instead employ the straight cutoffs introduced in [5,19]. This ensures that
(v q ,b q )| t=0 = (v in * ψ q−1 * ψ q , b in * ψ q−1 * ψ q ).
3. Then we construct the seven parts of the perturbation by using the "box flows".
• Principal flows: (w
(p) q+1 , d (p) q+1
) plays a role in cancelling the Reynolds and magnetic stresses (R q ,M q ), while this would lead to some extra errors.
• Temporal flows: (w
(t) q+1 , d (t) q+1
) is used to cancel the extra errors which stem from div(φkk), where φk is a traveling-wave.
• Inverse traveling wave flows: (w
(v) q+1 , d (v) q+1
) is used to cancel the extra errors produced by div(φkk), where φk does not depend on t.
• Heat conduction flows : (w ∆ q+1 , d ∆ q+1 ) is used to cancel the extra errors producing by the inverse traveling wave flows (w
(v) q+1 , d (v) q+1 ). • Corrector flows: (w (c) q+1 , d (c) q+1 ) is introduced to correct principal perturbation (w (p) q+1 , d (p)
q+1 ) to enforce the incompressibility condition.
• Initial flows: (w
(s) q+1 , d (s) q+1 ) can ensure that (v q+1 , b q+1 )| t=0 = (v in * ψ q , b in * ψ q ).
It should be noted that the above five types of flows are zero when t = 0.
• Helicity flows: (w
(h) q+1 , d (h)
q+1 ) makes the cross helicity satisfy (2.11) at q + 1 level.
It is noteworthy that the first five flows are enough to produce a weak solution (v, b) for (1.1), initial flows and helicity flows are used to control the helicity and show the non-uniqueness.
Preliminary preparation of iteration
In this section, we provide some preliminary preparation from (v q , b q ) to (v q ,b q ), and it is essentially a modification as in [6,8,21]. For the sake of completeness, we briefly review relevant results in the process of constructing (v q ,b q ) and readers can refer to [6,8,21] for the more details. In Section 4, we will construct
(v q ,b q ) → (v q+1 , b q+1 )
to complete the iteration, which is the key ingredient of this paper.
Mollification
Let q := λ −6 q , and we define the functions v q , b q andR q ,M q as follows:
v q := v q * ψ q ,R q :=R q * ψ q − (v q⊗ v q ) * ψ q + v q⊗ v q + (b q⊗ b q ) * ψ q − b q⊗ b q , (3.1) b q := b q * ψ q ,M q :=M q * ψ q − (v q ⊗ b q ) * ψ q + v q ⊗ b q + (b q ⊗ v q ) * ψ q − b q ⊗ v q . (3.2)
Then, (v q , b q , p q ,R q ,M q ) satisfies the following equations
∂ t v q − ∆v q + div(v q ⊗ v q ) + ∇p q = div(b q ⊗ b q ) + divR q , ∂ t b q − ∆b q + div(v q ⊗ b q ) = div(b q ⊗ v q ) + divM q , ∇ · v q = 0, ∇ · b q = 0, v q | t=0 = v in * ψ q−1 * ψ q , b q | t=0 = b in * ψ q−1 * ψ q ,(3.3)
where p q := p q * ψ q − |v q | 2 + |v q | 2 + |b q | 2 − |b q | 2 , and we have used the identity div(f I 3×3 ) = ∇f for a scalar field f . A simple computation shows the following mollification estimates:
Proposition 3.1 (Estimates for mollified functions [6]). For any N ≥ 0, we have 2 v q − v q L 2 + b q − b q L 2 δ q+1 3α q , (3.4) v q H N +3 + b q H N +3 λ 5 q −N q , (3.5) R q W N,1 + M q W N,1 δ q+1 3α−N q , (3.6) T 3 (|v q | 2 − v q 2 ) dx + T 3 (|b q | 2 − b q 2 ) dx δ q+1 3α q , (3.7) T 3 v q · b q − v q · b q dx δ q+1 3α q . (3.8)
Classical exact flows
We define τ q and the sequence of initial times t l (l ∈ N) by
τ q := 3 q (v q , b q ) −1 H 5 2 +α , t l := lτ q ,(3.9)
and (v l , b l ) denotes the unique strong solution to the following MHD system on [t l , t l+2 ]:
∂ t v l − ∆v l + div(v l ⊗ v l ) + ∇p l = div(b l ⊗ b l ), ∂ t b l − ∆b l + div(v l ⊗ b l ) = div(b l ⊗ v l ), div v l = 0, div b l = 0, v l | t=t l = v q (·, t l ), b l | t=t l = b q (·, t l ).
(3.10)
Proposition 3.2 (Estimates for classical solutions to MHD [28]). Let (v 0 , b 0 ) ∈ H N0 with N 0 ≥ 3, and div v 0 = div b 0 = 0. Then there exists a unique local solution (v, b) to (1.1) with ν 1 = ν 2 = 1 satisfying (v(·, t), b(·, t)) H N (v 0 , b 0 ) H N , N ∈ [ 5 2 , N 0 ],
where the local lifespan T = c v0 H 5/2+α + b0 H 5/2+α for some universal c > 0.
According to Proposition 3.2, the solvability of the Cauchy problem (3.10) on [t l , t l+2 ] can be stated as:
Corollary 3.3. System (3.10) possesses a unique local solution (v l , b l ) in [t l , t l+2 ] such that (v l (·, t), b l (·, t)) L 2 (v q , b q ) L 2 , (3.11) (v l (·, t), b l (·, t)) H N +3 (v q , b q ) H N +3 , ∀N ≥ 0, (3.12) (v l − v q , b l − b q ) H N τ q δ q+1 −N −5/2+α q , ∀N ≥ 0, (3.13) where (v l − v q , b l − b q ) has zero mean.
Proof. (3.11)-(3.12) could be obtianed by Proposition 3.2. We want to prove (3.13).
Let (v, b) := (v l − v q , b l − b q ), we have ∂ t v − ∆v + div(v l ⊗ v + v ⊗ v q ) + ∇(p l − p q ) = div(b l ⊗ b + b ⊗ b q ) + divR q , ∂ t b − ∆b + div(v l ⊗ b + v ⊗ b q ) = div(b l ⊗ v + b ⊗ v q ) + divM q , v| t=t l = 0, b| t=t l = 0.
(3.14)
When N = 0, using the calsscial estimations in Besov space [3] on [t l , t l+2 ], we deduce that
(v, b) L ∞ t B 0 2,1 ∩L 2 t B 1 2,1 ∩L 1 t B 2 2,1 div(v l ⊗ v + v ⊗ v q ) L 1 t B 0 2,1 + div(b l ⊗ b + b ⊗ b q ) L 1 t B 0 2,1 + div(v l ⊗ b + v ⊗ b q ) L 1 t B 0 2,1 + div(b l ⊗ v + b ⊗ v q ) L 1 t B 0 2,1 + (R q ,M q ) L 1 t B 1 2,1 (v, b) L ∞ t B 0 2,1 (v q , v l , b q , b l ) L 1 t B 5/2 2,1 + (v, b) L 2 t B 1 2,1 (v q , v l , b q , b l ) L 2 t B 3/2 2,1 + (R q ,M q ) L 1 t B 1 2,1 τ q λ 5 q (v, b) L ∞ t B 0 2,1 + τ 1/2 q λ 5 q (v, b) L 2 t B 1 2,1 + τ q (R q ,M q ) L ∞ t B 1 2,1 τ q δ q+1 −5/2+α q , (3.15) where we use the fact that (R q ,M q ) B 1 2,1 (R q ,M q ) B 5/2 1,1 (R q ,M q ) W 5/2+α,1 δ q+1 −5/2+α q .
When N ≥ 1, similarly we deduce that
(v, b) L ∞ t B N 2,1 ∩L 2 t B N +1 2,1 ∩L 1 t B N +2 2,1 div(v l ⊗ v + v ⊗ v q ) L 1 t B N 2,1 + div(b l ⊗ b + b ⊗ b q ) L 1 t B N 2,1 div(v l ⊗ b + v ⊗ b q ) L 1 t B N 2,1 + div(b l ⊗ v + b ⊗ v q ) L 1 t B N 2,1 + (R q ,M q ) L 1 t B N +1 2,1 (v, b) L ∞ t B 0 2,1 (v q , v l , b q , b l ) L 1 t B N +5/2 2,1 + (v, b) L 2 t B N +1 2,1 (v q , v l , b q , b l ) L 2 t B 3/2 2,1 + (R q ,M q ) L 1 t B N +1 2,1 τ q −N q λ 5 q (v, b) L ∞ t B 0 2,1 + τ 1/2 q λ 5 q (v, b) L 2 t B N +1 2,1 + τ q (R q ,M q ) L ∞ t B N +1 2,1 τ q δ q+1 −N −5/2+α q . (3.16) Using the fact that (v l −v q , b l −b q ) L ∞ t H N ≤ (v l −v q , b l −b q ) L ∞ J l := t l − τq 3 , t l + τq 3 . (3.18)
And N q denotes the smallest number so that
[0, T ] ⊆ J 0 ∪ I 0 ∪ J 1 ∪ I 1 ∪ · · · ∪ J Nq ∪ I Nq ,
i.e.
N q := sup l ≥ 0 : (J l ∪ I l ) ∩ [0, T ] = ∅ ≤ T τ q . For N ≥ 0, let {χ l } Nq l=1 be a partition of unity such that Nq l=1 χ l (t) = 1, t ∈ [− τq 3 , T + τq 3 ], where supp χ l = I l−1 ∪ J l ∪ I l , χ l | J l = 1, ∂ N t χ l C 0 t τ −N q , (N q − 1 ≥ l ≥ 2), (3.19) supp χ 1 = J 0 ∪ I 0 ∪ J 1 ∪ I 1 , χ 1 | [0,t1]∪J1 = 1, ∂ N t χ 1 C 0 t τ −N q , (3.20) supp χ Nq = I Nq−1 ∪ J Nq ∪ I Nq , χ Nq | [t Nq −1 ,t Nq ]∪J Nq = 1, ∂ N t χ Nq C 0 t τ −N q . (3.21)
In particular, for |l − j| ≥ 2, supp χ l ∩ supp χ j = ∅. Then we define the glued velocity, magnetic fields and
pressure (v q ,b q ,p q ) byv q (x, t) := Nq−1 l=0 χ l+1 (t)v l (x, t), (3.22) b q (x, t) := Nq−1 l=0 χ l+1 (t)b l (x, t) (3.23) p q (x, t) := Nq−1 l=0 χ l+1 (t)p l (x, t). (3.24)
One can deduce that
(v q (0, x),b q (0, x)) = (v q (0, x), b q (0, x)) = (v in * ψ q−1 * ψ q , b in * ψ q−1 * ψ q ).
Furthermore, (v q ,b q ) solves the following system for t ∈ [0, T ]:
∂ tvq − ∆v q + div(v q ⊗v q ) + ∇p q = div(b q ⊗b q ) + divR q , ∂ tbq − ∆b q + div(v q ⊗b q ) = div(b q ⊗v q ) + divM q , divv q = 0, divb q = 0, v q | t=0 = v q (·, 0),b q | t=0 = b q (·, 0). (3.25)
Here (R q ,M q ,p q ) is defined as follows:
R q := Nq l=0 ∂ t χ l R(v l − v l+1 ) − χ l (1 − χ l )(v l − v l+1 )⊗ (v l − v l+1 ) + χ l (1 − χ l )(b l − b l+1 )⊗ (b l − b l+1 ), (3.26) M q := Nq l=0 ∂ t χ l R a (b l − b l+1 ) − χ l (1 − χ l )(v l − v l+1 ) ⊗ (b l − b l+1 ) + χ l (1 − χ l )(b l − b l+1 ) ⊗ (v l − v l+1 ), (3.27) p q :=p q − Nq l=0 χ l (1 − χ l ) |v l − v l+1 | 2 − T 3 |v l − v l+1 | 2 dx − |b l − b l+1 | 2 + T 3 |b l − b l+1 | 2 dx , (3.28)
where we have used the inverse divergence operator R and R a in Section A.2. It is easy to check that
v l − v l+1 , b l − b l+1
andp q have zero mean,M q is anti-symmetric,R q is symmetric and trace-free.
Combining Corollary 3.3 with the estimates of solutions for the heat equation in periodic Besov spaces [3], we deduce the following two propositions after some computations:
Proposition 3.4 (Estimates for (v q − v q ,b q − b q )). For all t ∈ [0, T ] and N ≥ 0, we have (v q ,b q ) H 3+N λ 5 q −N q , (3.29) (v q − v q ,b q − b q ) L 2 δ 1/2 q+1 α q , (3.30) (v q − v q ,b q − b q ) H N τ q δ q+1 −N −5/2+α q . (3.31) Proposition 3.5 (Estimates for (R(v l − v l+1 ), R a (b l − b l+1 )). For all t ∈ [0, T ] and N ≥ 0, we have (R(v l − v l+1 ), R a (b l − b l+1 )) W N,1 τ q δ q+1 −N +α q .
(3.32) Proposition 3.6 (Estimates for (R q ,M q )). For all t ∈ [0, T ] and N ≥ 0, we have
(R q ,M q ) W N,1 δ q+1 −N +α q , (3.33) (∂ tRq , ∂ tMq ) W N,1 δ q+1 τ −1 q −N +α q . (3.34)
Proposition 3.7 (Gaps between energy and helicity). For all t ∈ [0, T ], we have
T 3 (|v q | 2 + |b q | 2 − |v q | 2 − |b q | 2 ) dx δ q+1 2α q , (3.35) T 3 (v q ·b q − v q · b q ) dx δ q+1 2α q . (3.36)
Cutoffs
Space-time cutoffs
To control the energy without impacting the initial data, we construct the following space-time cutoffs η l in this section. We define the index
N 0 q := 1 τ q − 2 ∈ N + ,
and denote by {η l } l≥1 the cutoffs such that:
η l (x, t) := η l (t) 1 ≤ l < N 0 q , η l (x, t) N 0 q ≤ l ≤ N q .
(3.37)
We defineη l as in [5,19] as follows:
Letη 1 ∈ C ∞ c (J 1 ∪ I 1 ∪ J 2 ; [0, 1]) satisfy suppη 1 = I 1 + − τ q 6 , τ q 6 = 7τ q 6 , 11τ q 6 ,
be identically 1 on I 1 , and possess the following estimates for N ≥ 0:
∂ N tη1 C 0 t τ −N q .
We setη l (t) :=η 1 (t − t l−1 ) for 1 ≤ l < N 0 q . Next, we give the definition ofη l (t, x) as in [6,21]. Let [21].
∈ (0, 1/3), 0 1. For N 0 q ≤ l ≤ N q , letting φ be a bump function such that supp φ ⊂ [−1, 1] and φ = 1 in [− 1 2 , 1 2 ], we define I l := I l + − (1 − )τ q 3 , (1 − )τ q 3 = lτ q + τ q 3 , lτ q + (3 − )τ q 3 , I l := x, t + 2 τ q 3 sin(2πx 1 ) : x ∈ T 3 , t ∈ I l ⊂ T 3 × R, η l (x, t) := 1 3 0 1 0 τ q I l φ |x − y| 0 φ |t − s| 0 τ q dy ds Figure 1: The support of a singleη l . For each time t ∈ [t l , t l+1 ], the integral 1 0η l dx 1 > c η ≈ 1/4. Furthermore, {suppη l } l≥N 0 q are pairwise disjoint sets. Figure from
From the above discussion, we can obtain the following lemma: 6,21]). For all l = 1, . . . , N q , The functions {η l } l≥1 satisfy
Lemma 3.8 ( [1. η l ∈ C ∞ c (T 3 × (J l ∪ I l ∪ J l+1 ); [0, 1]) with: ∂ n t η l L ∞ t C m x n,m τ −n q , n, m ≥ 0. (3.38) 2. η l (·, t) ≡ 1 for t ∈ I l . 3. supp η l ∩ supp η j = ∅ if l = j. 4. For all t ∈ [t N 0 q , T ], we have c η ≤ Nq l=0 T 3 η 2 l (x, t) dx ≤ 1
for a fixed positive constant c η ≈ 1 4 independent of q.
5. For all 1 ≤ l < N 0 q , η l only depends on t, and supp η l ⊂ [
7τq 6 + (l − 1)τ q , 11τq 6 + (l − 1)τ q ] .
Helicity gap
First, for t ∈ [0, T ] we set new helicity gap such that
h q (t) := 1 3 h(t) − T 3v q ·b q dx − δ q+2 200 . (3.39)
We deduce by Proposition 3.1 and Proposition 3.
7 that h q (t) is strictly positive in [1 − τ q−1 , T ] and satisfies 1 400 δ q+1 ≤ 3h q (t) ≤ 1 90 δ q+1 , ∀t ∈ [1 − τ q−1 , T ]. (3.40) Next, we define a function η −1 := η −1 (t) ∈ C ∞ c ([0, t N 0 q +1 ); [0, 1]) such that η −1 ≡ 1, 0 ≤ t ≤ t N 0 q ,(3.41)
and satisfies
sup t |∂ N t η −1 (t)| τ −N q , N ≥ 0. (3.42) Note that t N 0 q +1 = τ q ( τ −1 q − 1) ≤ 1 − τ q , then t ≥ 1 − τ q implies η −1 (t) = 0.
The following figure shows the supports relationship between η l and η −1 visually: Figure 2: At time t N 0 q , we switch from using straight cutoffsη l to the squiggling cutoffsη l . At time
t N 0 q +1 = τ q ( τ q −1 − 1) ≤ 1 − τ q ,
we start to control the helicity. Figure from [21].
Then, we modify the energy gap h q (t) by setting
h b,q (t) := δq+1 400 ℵ(t) + hq(t)(1−ℵ(t)) η−1(t)+ Nq l=1 T 3 η 2 l (x,t) dx , (3.43)
where ℵ is a smooth cut-off function that is equal to 1 for t < 1 − τ q−1 and 0 for t > 1 − τ q and satisfies
∂ N t ℵ τ −N q−1 τ −N q .
Using Lemma 3.8, one can easily deduce that δq+1 2400 ≤ h b,q (t) ≤ 1 50 δ q+1 for each t ∈ [0, T ], which implies that h b,q is not much different from the original helicity gap in (2.11).
Perturbation
In this section, we construct the perturbation (w q+1 , d q+1 ) to iterate (v q ,b q ) → (v q+1 , b q+1 ), which is the key ingredient of this paper. (3.25), we obtain the following MHD system with new Reynolds and magnetic stresses (R q+1 ,M q+1 ):
Stresses associated with the MHD system
Let (v q+1 , b q+1 ) = (v q + w q+1 ,b q + d q+1 ). From ∂ t v q+1 − ∆v q+1 + div(v q+1 ⊗ v q+1 − b q+1 ⊗ b q+1 ) = divR q+1 − ∇p q+1 , ∂ t b q+1 − ∆b q+1 + div(v q+1 ⊗ b q+1 − b q+1 ⊗ v q+1 ) = divM q+1 , (v q+1 , b q+1 )| t=0 = (v in * ψ q , b in * ψ q ), where p q+1 :=p q (x, t) − 1 3 Tr[w q+1 ⊗v q +v q ⊗ w q+1 − d q+1 ⊗b q −b q ⊗ d q+1 ] − P v , R q+1 =R[∂ t w q+1 − ∆w q+1 ] + [w q+1⊗vq +v q⊗ w q+1 − d q+1⊗bq −b q⊗ d q+1 ] + R[div(w q+1 ⊗ w q+1 − d q+1 ⊗ d q+1 ) + divR q − ∇P v ] :=R lin q+1 + R osc q+1 , M q+1 =R a [∂ t d q+1 − ∆d q+1 ] + [w q+1 ⊗b q +v q ⊗ d q+1 −b q ⊗ w q+1 − d q+1 ⊗v q ] + R a P H [ div(w q+1 ⊗ d q+1 − d q+1 ⊗ w q+1 ) + divM q ] :=M lin q+1 + M osc q+1 .
Here we have used the fact that P H div A = div A for any anti-symmetric matrix A. The definition of P v can be seen in (5.25).
Before introducing the perturbation (w q+1 , d q+1 ), we provide two useful tools: "geometric lemmas" and "box flows".
Two geometric lemmas
R u ∈ B v (Id) R = k∈Λv a v,k (R)k ⊗k.
To eliminate the helicity errors, we need another set Λ s which does not interact with Λ v , Λ b , see Appendix A.2 for more details. For simplicity, we let Λ :
= Λ b ∪ Λ v ∪ Λ s .
Box flows
In this section, let Φ : R → R be a smooth cutoff function supported on the interval [0, 1 8 ]. Assume that
φ := d 2 dx 2 Φ.
We define the stretching transformation as
r − 1 2 φ(N Λ r −1 x),
where r −1 is a positive integer number and N Λ is a large number such that N Λ k, N Λk , N Λk ∈ Z 3 . We periodize it as φ r (x) on [0, 1].
Next, for lager positive integer numbers r −1 ,r −1 ,r −1 , µ and σ, we set φ k (x) := φ r (σk · x), φk(x, t) := φr(σk · x + σµt), φk(x) := φr(σk · x).
Then we define a set of functions {φ k,k,k } k∈Λ :
T 3 × R → R by φ k,k,k := (φ k φkφk)(x − x k ), k ∈ Λ,
where x k ∈ R 3 are shifts which guarantee that
supp φ k,k,k ∩ supp φ k ,k ,k = ∅, if k , k ∈ Λ, k = k . (4.1)
There exist such shifts x k by the fact that r,r,r 1. (4.1) makes sense since φ k,k,k supports in some small 3D boxes. Readers can refer to [10,14,24] for this technique. In the rest of this paper, we still denote . One can easily verify that φ k,k,k has zero mean and we can deduce the following proposition:
φ k (x − x k ), φk(x − x k ) and φk(x − x k ) by φ k (x),Proposition 4.3. For p ∈ [1, ∞], we have φ k,k,k L p λ 14 16 (1− 2 p ) q+1 λ 5 16 ( 1 2 − 1 p ) q+1 , supp φ k,k,k ≈ λ − 33 16 q+1 .
Moreover, we have φ k,k,k L 2 = 1 after normalization.
Finally, let Ψ ∈ C ∞ (T) and ψ = d 2 dx 2 Ψ. Set ψ k := ψ(λ q+1 N Λ kx) and normalize it such that ψ k L 2 = 1. One can deduce that
ψ k = λ −2 q+1 N −2 Λ ∆[Ψ(λ q+1 N Λ kx)] := λ −2 q+1 N −2 Λ ∆Ψ k , k ∈ Λ. (4.2)
It is easy to verify that ψ k φ k,k,k is also a C ∞ 0 (T 3 ) function supported in 3D boxes, and we call ψ k φ k,k,kk or ψ k φ k,k,kk "box flows" throughout this paper.
Construction of perturbation
Now, we introduce the following seven types of perturbation:
(1) To cancel the errors of the cross helicity, we construct a so-called "helicity flow" (w
(h) q+1 , d (h) q+1 ). We are in position to define w (h) q+1 = d (h) q+1 := l;k∈Λs η l h 1/2 b,q ψ k P σ (φ k φkφkk),(4.3)
where h b,q (t) comes from (3.43).
(2) We aim to construct the principal corrector (w
(p) q+1 , d (p)
q+1 ) via geometric lemmas. Note that the geometric lemmas are valid for anti-symmetric matrices perturbed near 0 matrix and symmetric matrices perturbed near Id matrix, we need the following smooth function χ introduced in [4,25] such that the stresses are pointwise small. More precisely, let χ : [0, ∞) → R + be a smooth function satisfying
χ(z) = 1, 0 ≤ z ≤ 2, z, z ≥ 4. (4.4)
To cancel the magnetic stressM q , we construct the principal correctors w (pb)
q+1 and d (p) q+1 . Letting χ b := χ M b δq+1 α/2 q and M b := −M q , ρ b,q := χ b δ q+1 α/3 q , we set that w (pb) q+1 := l;k∈Λ b η l ρ 1/2 b,q a b,k M b ρ b,q ψ k φ k,k,kk := l;k∈Λ b a b,l,k ψ k φ k,k,kk , (4.5) d (p) q+1 := l;k∈Λ b η l ρ 1/2 b,q a b,k M b ρ b,q ψ k φ k,k,kk := l;k∈Λ b a b,l,k ψ k φ k,k,kk . (4.6)
Next, we want to construct the principal correctors w (pu) q+1 to cancel the Reynolds stressR q . Let
ρ q (t) := 1 3 e(t) − T 3 (|v q | 2 + |b q | 2 ) dx − E(t) − δ q+2 2 ,(4.7)
where E(t) :=
T 3 (|w (pb) q+1 | 2 + |d (p) q+1 | 2 + |w (h) q+1 | 2 + |d (h) q+1 | 2 ) dx.
One can easily deduce that E(t) ≤ δ q+1 /10. Indeed, from the definition of φ k,k,k and ψ k , we deduce by Lemma A.1 that
|E(t)| ≤ (w (h) q+1 , d (h) q+1 , w (pb) q+1 , d (p) q+1 ) 2 L 2 ( η l h 1/2 b,q 2 L 2 + η l ρ 1/2 b,q 2 L 2 ) φ k,k,k 2 L 2 < δq+1 10 .
(4.8)
Hence we deduce by Proposition 3.1 and Proposition 3.7 that ρ q (t) is strictly positive in [1 − τ q−1 , T ] and satisfy 1 10 δ q+1 ≤ 3ρ q (t) ≤ 4δ q+1 (4.9)
Then, we modify the energy gap ρ q (t) by setting
ρ q,0 (t) := δ q+1 ℵ(t) + ρq(t)(1−ℵ(t)) η−1(t)+ Nq l=1 T 3 η 2 l χv(x,t) dx ,(4.10)
where
χ v := χ Rv α/4 q δq+1 and R v :=R q − l;k∈Λv a 2 b,l,k (k ⊗k −k ⊗k). We firstly show that ρ q,0 (t) is well-defined. When t ∈ I i ∪ J i with i ≤ N 0 q , we have η −1 (t) = 1, ρ q,0 (t) is well-defined. When t ∈ I i with i ≥ N 0 q , we have T 3 η 2 l χ v (x, t) dx = T 3 χ v dx = T 3 χ Rv α/4 q δq+1 dx ≥ Rv α/4 q δq+1 ≤2 1 dx ≥ 1 2 , (4.11)
where we use that When t ∈ J i with i ≥ N 0 q , we conclude that One could easily deduce that χ b = χ v = 1. Therefore,
T 3 η 2 l χ v (x, t) dx = T 3 η 2 l (x, t) dx > c η ≈ 1 4 .
Combining the definition of η −1 with the above estimates shows that Therefore, we prove that ρ q,0 is well-defined. Recalling the definitions of η −1 (t), ℵ(t) and the fact that
η −1 (t) + Nq l=1 T 3 η 2 l χ v,0 (x, t) dx ≥ 1, 0 ≤ t ≤ t N 0 q , 1 4 , t N 0 q ≤ t ≤ T.1 − τ q−1 ≤ t N 0 q < t N 0 q +1 ≤ 1 − τ q ≤ t N 0 q +2 ≤ 1, we obtain that δq+1 90 ≤ ρ q,0 (t) ≤ 10δ q+1 , ∀t ∈ [0, T ].
Now, setting ρ v,q := ρ q,0 χ v , we construct the principal corrector w a v,l,k ψ k φ k,k,kk . (4.14)
Combining with (4.5) and (4.6), we show the principal correctors w
(p) q+1 and d (p) q+1 such that w (p) q+1 := l;k∈Λv a v,l,k ψ k φ k,k,kk + l;k∈Λ b a b,l,k ψ k φ k,k,kk = w (pu) q+1 + w (pb) q+1 , d (p) q+1 := l;k∈Λ b a b,l,k ψ k φ k,k,kk .Proposition 4.4. For N ≥ 0, we have (a v,l,k , a b,l,k ) L 2 δ 1/2 q+1 α 10 q , (4.16) (a v,l,k , a b,l,k ) L ∞ −3+ α 10 q , (4.17) (a v,l,k , a b,l,k ) H N δ 1/2 q+1 −5N + α 10 q , (4.18) ∂ t (a v,l,k , a b,l,k ) H N τ −1 q δ 1/2 q+1 −5N + α 10 q . (4.19)
We emphasize that (a v,l,k , a b,l,k ) oscillates at a frequency −5 q , which have relatively small contribution comparing to the "box flows". Indeed, the "box flows" oscillate at a much higher frequency λ (4) We construct the inverse traveling wave flows w
(v) q+1 , d (v) q+1 such that w (v) q+1 : = (µσ) −1 P H P >0 l;k∈Λ b a 2 b,l,k div (∂ −1 t P >0 φ 2 k )φ 2 k φ 2 kk ⊗k , (4.22) d (v) q+1 : = (µσ) −1 P H P >0 l;k∈Λ b a 2 b,l,k div (∂ −1 t P >0 φ 2 k )φ 2 k φ 2 kk ⊗k , (4.23) where ∂ −1 t P >0 φ 2 k := σ(k(x−x k )+µt) 0 (φ 2 r (z) − 1) dz. Since φ 2 r (z) − 1 has zero mean, we have |∂ −1 t P >0 (φ 2 k )| ≤ 2.(5)
In order to match the inverse traveling wave flows w
(v) q+1 , d (v)
q+1 , we need to construct the following heat conduction flows w ∆ q+1 , d ∆ q+1 :
w ∆ q+1 :=(µσ) −1 P H P >0 l;k∈Λ b a 2 b,l,k t 0 e (t−τ )∆ ∆ div (∂ −1 t P >0 φ 2 k )φ 2 k φ 2 kk ⊗k dτ + P H P >0 l;k∈Λ b a 2 b,l,k t 0 e (t−τ )∆ div φ 2 k φ 2 kk ⊗k dτ, (4.24) d ∆ q+1 :=(µσ) −1 P H P >0 l;k∈Λ b a 2 b,l,k t 0 e (t−τ )∆ ∆ div (∂ −1 t P >0 φ 2 k )φ 2 k φ 2 kk ⊗k dτ + P H P >0 l;k∈Λ b a 2 b,l,k t 0 e (t−τ )∆ div φ 2 k φ 2 kk ⊗k dτ. (4.25) (6) Since w (t) q+1 , w ∆ q+1 , w (v) q+1 and d (t) q+1 , d ∆ q+1 , d (v)
q+1 are divergence-free, it suffices to define two small correctors w
(c) q+1 , d (c) q+1 such that div(w (h) q+1 + w (p) q+1 + w (c) q+1 ) = 0, div(d (h) q+1 + d (p) q+1 + d (c) q+1 ) = 0.
Firstly, noting that ψ kk is zero mean and divergence-free, we have by (4.2)
ψ kk = − curl curl ∆ ∆ Ψ kk λ 2 q+1 N 2 Λ := curl λ q+1 Fk.
Hence,
w (p) q+1 + w (h) q+1 = l;k∈Λ a v,c ψ kk = l;k∈Λ a v,c λ q+1 curl Fk, where a v,c = (a v,l,k 1 k∈Λv + a b,l,k 1 k∈Λ b + η l h 1/2 b,q 1 k∈Λs )φ k,k,k .
Next, we set This equality leads to
w (p) q+1 + w (h) q+1 + w (c) q+1 = curl av,c λq+1 Fk by curl(f W ) = ∇f × W + f curl W.
Similarly, letting
a b,c = (a b,l,k 1 k∈Λ b + η l h 1/2 b,q 1 k∈Λs )φ k,k,k , we define d (c) q+1 := l;k∈Λ 1 λq+1 ∇a b,c × Fk. (4.27)
One deduces that
w (c) q+1 + w (p) q+1 + w (h) q+1 and d (c) q+1 + d (p) q+1 + d (h) q+1
are divergence-free and have zero mean.
(7) Finally, we define the following "initial flows":
w (s) q+1 := e t∆ (v in * ψ q − v in * ψ q−1 * ψ q ), (4.28) d (s) q+1 := e t∆ (b in * ψ q − b in * ψ q−1 * ψ q ). (4.29)
From the definition of η l in (3.37), one obtains that
(w (h) q+1 + w (p) q+1 + w (t) q+1 + w (v) q+1 + w ∆ q+1 + w (c) q+1 )(0, x) = 0, (d (h) q+1 + d (p) q+1 + d (t) q+1 + d (v) q+1 + d ∆ q+1 + d (c) q+1 )(0, x) = 0.
Hence, (4.28) and (4.29) guarantee that v q+1 (0, x) = v in * ψ q and b q+1 (0, x) = b in * ψ q . This fact implies that (2.9) holds with q replaced by q + 1. Moreover, one can easily deduce that w To sum up, we construct
w q+1 := w (h) q+1 + w (p) q+1 + w (t) q+1 + w ∆ q+1 + w (v) q+1 + w (c) q+1 + w (s) q+1 , (4.30) d q+1 := d (h) q+1 + d (p) q+1 + d (t) q+1 + d ∆ q+1 + w (v) q+1 + d (c) q+1 + d (s) q+1 , (4.31) which help us finish the iteration from (v q ,b q ) to (v q+1 , b q+1 ).
Now, we show that (2.6) and (2.7) hold at q + 1 level.
Proposition 4.5 (Estimates for w q+1 and d q+1 ).
(w
(h) q+1 , d (h) q+1 ) L 2 + 1 λ 5 q+1 (w (h) q+1 , d (h) q+1 ) H 3 ≤ 1 10 δ 1/2 q+1 , (4.32) (w (p) q+1 , d (p) q+1 ) L 2 + 1 λ 5 q+1 (w (p) q+1 , d (p) q+1 ) H 3 ≤ 100δ 1/2 q+1 , (4.33) (w (t) q+1 , d (t) q+1 ) L 2 + 1 λ 5 q+1 (w (t) q+1 , d (t) q+1 ) H 3 ≤ λ −50α q+1 δ q+2 , (4.34) (w (v) q+1 , d (v) q+1 ) L 2 + 1 λ 5 q+1 (w (v) q+1 , d (v) q+1 ) H 3 ≤ λ −50α q+1 δ q+2 , (4.35) (w (c) q+1 , d (c) q+1 ) L 2 + 1 λ 5 q+1 (w (c) q+1 , d (c) q+1 ) H 3 ≤ λ −50α q+1 δ q+2 , (4.36) (w ∆ q+1 , d ∆ q+1 ) L 2 + 1 λ 5 q+1 (w ∆ q+1 , d ∆ q+1 ) H 3 ≤ λ −50α q+1 δ q+2 , (4.37) (w (s) q+1 , d (s) q+1 ) L 2 + 1 λ 5 q+1 (w (s) q+1 , d (s) q+1 ) H 3 ≤ λ −50α q+1 δ q+2 , (4.38) (w q+1 , d q+1 ) L 2 + 1 λ 5 q+1 (w q+1 , d q+1 ) H 3 ≤ 400δ 1/2 q+1 . (4.39)
Proof. Firstly, from the definition of φ k,k,k and ψ k , we deduce by Proposition 4.3 , Proposition 4.4 and Secondly, we aim to prove (4.37). For simplicity, we denote w ∆ q+1 by
Lemma A.1 that (w (h) q+1 , d (h) q+1 ) L 2 ≤ η l h 1/2 b,q ψ k φ k,k,k L 2 η l h 1/2 b,q L 2 φ k,k,k L 2 ≤ δ 1/2 q+1 20 ,(4.w ∆ q+1 :=(µσ) −1 P H P >0 l;k∈Λ b a 2 b,l,k t 0 e (t−τ )∆ ∆g(x, t) dτ + P H P >0 l;k∈Λ b a 2 b,l,k t 0 e (t−τ )∆ div h(σk · x, σk · x)k ⊗k dτ, (4.42) where h(σk · x, σk · x) = φ 2 k φ 2 k = φ 2 r (σk · x)φ 2 r (σk · x) and g(t, x) := div (∂ −1 t P >0 φ 2 k )φ 2 k φ 2 kk ⊗k = (∂ −1 t P >0 φ 2 k )φ 2 k div φ 2 kk ⊗k .
Using the estimates of solution for the heat equation in [3], we deduce that for 0 < α ≤ min{ 1
b 6 , β b 3 }, t 0 e (t−τ )∆ ∆g dτ L ∞ t L 2 ≤ g L ∞ t H α λ α q+1 σr −1/2r−3/2 . (4.43) Similarly, noting that div h(σk · x, σk · x)k ⊗k = σ(∂ 2 h)(σk · x, σk · x)k, we deduce that t 0 e (t−τ )∆ div h(σk · x, σk · x)k ⊗k dτ L ∞ t L 2 σ (∂2h)(σk·x,σk·x)k ∆ L ∞ t H α = σ −1 ∂2h ∆ (σk · x, σk · x) L ∞ t H α , (4.44)
where we have used the fact that
(∂ 2 h)(σk · x, σk · x) = σ −2 ∆[ ∂2h ∆ (σk · x, σk · x)], (k,k,k) ∈ Λ.
Since h(·, ·) ∈ C ∞ (T 2 ) for fixed q and k ⊥k, we have for α > 0,
∂2h ∆ (σk · x, σk · x) L 2 (T 3 ) = ∂2h ∆ (·, ·) L 2 (T 2 ) h W α,1 (T 2 ) and ∂2h ∆ (σk · x, σk · x) Ḣ1 (T 3 ) σ h L 2 (T 2 ) σ h W 1+α,1 (T 2 ) .
Plugging the above two estimates into (4.44) yields that
t 0 e (t−τ )∆ div h(σk · x, σk · x)k ⊗k dτ L ∞ t L 2 σ −1 ∂2h ∆ (σk · x, σk · x) 1−α L 2 (T 3 ) ∂2h ∆ (σk · x, σk · x) α H 1 (T 3 ) σ −1 h 1−α W α,1 h α W 1+α,1 σ −1+α λ 2α q+1 .
(4.45)
Combining (4.43) with (4.45), we have
w ∆ q+1 L ∞ t L 2 −6 q λ 4α q+1 (µ −1 r −1/2r−3/2 + σ −1 ) −6 q (λ − 17 16 + 7 16 + 15 32 +2α q+1 + λ − 1 128 +2α q+1 ) λ −50α q+1 δ q+2 ,
where σ = λ . A similar calculation also yields that
d ∆ q+1 L 2 −6 q λ 2α q+1 (µ −1 r −1/2r−3/2 + σ −1 ) λ −50α q+1 δ q+2 and (w ∆ q+1 , d ∆ q+1 ) H 3 λ 5−50α q+1 δ q+2 .
This completes the proof of (4.37).
Finally, we turn to prove (4.38). Recalling that
β =β b 4 , b = 2 16 β −1/2 and α ≤ min 1 b 6 , β b 3 , we have β q−1 < λ (−1−βb+β)β q−1 < λ −2βb 3 −50αb 2 q−1 .
For any (v in , b in ) ∈ (Hβ, Hβ),β > β, we have
(w (s) q+1 , d (s) q+1 ) L 2 ≤ (v in * ψ q − v in * ψ q−1 * ψ q , b in * ψ q − b in * ψ q−1 * ψ q ) L 2 β q−1 (v in , b in ) Hβ λ −50α q+1 δ q+2 , and (w (s) q+1 , d (s) q+1 ) Ḣ3 (v in , b in ) L 2 −3 q ≤ λ 5−50α q+1 δ q+2 . (4.46)
Hence, we derive (4.38). Collecting the estimates (4.32)-(4.38), one obtains (4.39).
Remark 4.6. Proposition 4.5 tells us that w
(c) q+1 , w (s) q+1 , w (t) q+1 , w ∆ q+1 , w (v) q+1 , d (c) q+1 , d (s) q+1 , d (t) q+1 , d ∆ q+1 and d (v) q+1 are small such that (w (c) q+1 , w (s) q+1 , w (t) q+1 , w ∆ q+1 , w (v) q+1 , d (c) q+1 , d (s) q+1 , d (t) q+1 , d ∆ q+1 , d (v) q+1 ) L 2 ≤ λ −50α q+1 δ q+2 . (4.47)
So one can omit these terms in estimating the linear errors or the oscillation errors.
Estimates of the stresses associated with the MHD system
Proposition 5.1 (Estimate for R lin q+1 and M lin q+1 ).
R lin q+1 − R[(∂ t w (t) q+1 + ∂ t w (v) q+1 ) + (∂ t w ∆ q+1 − ∆w ∆ q+1 − ∆w (v) q+1 )] L 1 λ −50α q+1 δ q+2 , (5.1) M lin q+1 − R a [(∂ t d (t) q+1 + ∂ t d (v) q+1 ) + (∂ t d ∆ q+1 − ∆d ∆ q+1 − ∆d (v) q+1 )] L 1 λ −50α q+1 δ q+2 . (5.2)
Proof. We deduce by Proposition 4.4-4.5 and Lemma A.1 that
w q+1 ⊗v q +v q ⊗ w q+1 L 1 ≤ v q ⊗ (w (c) q+1 + w (v) q+1 + w ∆ q+1 + w (t) q+1 + w (s) q+1 ) + (w (c) q+1 + w (v) q+1 + w ∆ q+1 + w (t) q+1 + w (s) q+1 ) ⊗v q L 1 + (w (p) q+1 + w (h) q+1 ) ⊗v q +v q ⊗ (w (p) q+1 + w (h) q+1 ) L 1 ≤λ −50α q+1 δ q+2 + v q L 2 a v,l,k + a b,l,k + h 1/2 b,q L 2 φ k,k,k L 1 λ −50α q+1 δ q+2 + δ 1/2 q+1 λ − 33 32 +α q+1 λ −50α q+1 δ q+2 ,(5.3)
and
d q+1 ⊗b q +b q ⊗ d q+1 L 1 λ −50α q+1 δ q+2 . (5.4) Since ∂ t w (s) q+1 − ∆w (s)
q+1 =0, we obtain by Proposition 4.3-4.5 and Lemma A.2 that
R[(∂ t w q+1 − ∂ t w (t) q+1 − ∂ t w (v) q+1 − ∂ t w ∆ q+1 ) − (∆w q+1 − ∆w (v) q+1 − ∆w ∆ q+1 )] L 1 ≤ R∂ t (w (p) q+1 + w (c) q+1 + w (h) q+1 ) L 1 + R∆(w (p) q+1 + w (c) q+1 + w (h) q+1 + w (t) q+1 ) L 1 −15 q µσr 1/2 λ −1+5α q+1 δ 1/2 q+1 + ( −15 q λ 1+5α q+1 rr 1/2 δ 1/2 q+1 + −15 q µ −1 r −1 σλ 5α q+1 δ q+1 ) −15 q λ −1− 5 32 + 17 16 + 1 128 +5α q+1 δ 1/2 q+1 + −15 q λ 1− 7 8 − 5 32 +5α q+1 δ q+1 + −15 q λ − 17 16 + 7 8 + 1 128 +5α q+1 δ q+1 λ −50α q+1 δ q+2 . (5.5)
Collecting (5.3)-(5.5) yields (5.1). In the same way, we can prove (5.2). Thus, we complete the proof of Proposition 5.1.
Proposition 5.2 (Estimate for M osc q+1 ). M osc q+1 + R a [(∂ t d (t) q+1 + ∂ t d (v) q+1 ) + (∂ t d ∆ q+1 − ∆d ∆ q+1 − ∆d (v) q+1 )] L 1 λ −50α q+1 δ q+2 .
Proof. Firstly, a direct computation shows that
div[M osc q+1 + R a [(∂ t d (t) q+1 + ∂ t d (v) q+1 ) + (∂ t d ∆ q+1 − ∆d ∆ q+1 − ∆d (v) q+1 )]] =P H div[w (pb) q+1 ⊗ d (p) q+1 − d (p) q+1 ⊗ w (pb) q+1 + M low,0 +M q ] + (∂ t d (t) q+1 + ∂ t d (v) q+1 ) + (∂ t d ∆ q+1 − ∆d ∆ q+1 − ∆d (v) q+1 ) =P H div[w (pb) q+1 ⊗ d (p) q+1 − d (p) q+1 ⊗ w (pb) q+1 +M q ] + div M low,0 + (∂ t d (t) q+1 + ∂ t d (v) q+1 ) + (∂ t d ∆ q+1 − ∆d ∆ q+1 − ∆d (v) q+1 ), (5.6)
where M low,0 is anti-symmetric such that
M low,0 :=(w q+1 − w (p) q+1 − w (h) q+1 ) ⊗ (d q+1 − d (p) q+1 − d (h) q+1 ) + (w (p) q+1 + w (h) q+1 ) ⊗ (d q+1 − d (p) q+1 − d (h) q+1 ) + (w q+1 − w (p) q+1 − w (h) q+1 ) ⊗ (d (p) q+1 + d (h) q+1 ) − (d q+1 − d (p) q+1 − d (h) q+1 ) ⊗ (w q+1 − w (p) q+1 − w (h) q+1 ) − (d q+1 − d (p) q+1 − d (h) q+1 ) ⊗ (w (p) q+1 + w (h) q+1 ) − (d (p) q+1 + d (h) q+1 ) ⊗ (w q+1 − w (p) q+1 − w (h) q+1 ). (5.7)
With aid of (4.1), we have d
(p) q+1 ⊗ w (pu) q+1 = w (pu) q+1 ⊗ d (p) q+1 = w (h) q+1 ⊗ d (p) q+1 = d (h) q+1 ⊗ w (p) q+1 = 0.
We can easily deduce by Proposition 4.5 that
M low,0 L 1 λ −50α q+1 δ q+2 . (5.8)
Secondly, recalling the definitions of d
(v)
q+1 , d ∆ q+1 in (4.23) and (4.25), we obtain that
∂ t d ∆ q+1 − ∆d ∆ q+1 − ∆d (v) q+1 = (µσ) −1 P H P >0 l;k∈Λ b (∂ t − ∆)(a 2 b,l,k ) t 0 e (t−τ )∆ ∆ div (∂ −1 t P >0 φ 2 k )φ 2 k φ 2 kk ⊗k dτ +P H P >0 l;k∈Λ b (∂ t − ∆)(a 2 b,l,k ) t 0 e (t−τ )∆ div φ 2 k φ 2 kk ⊗k dτ −(µσ) −1 P H P >0 l;k∈Λ b 3 i=1 ∂ xi (a 2 b,l,k )∂ xi t 0 e (t−τ )∆ ∆ div (∂ −1 t P >0 φ 2 k )φ 2 k φ 2 kk ⊗k dτ −P H P >0 l;k∈Λ b 3 i=1 ∂ xi (a 2 b,l,k )∂ xi t 0 e (t−τ )∆ div φ 2 k φ 2 kk ⊗k dτ + P H P >0 l;k∈Λ b a 2 b,l,k div φ 2 k φ 2 kk ⊗k = div M low,1 + div M high,1 . (5.9)
Using the Leibniz rule
∂ xi f · ∂ xi g = ∂ xi (f · g) − ∂ 2 xi f · g,
one can easily deduce that
M low,1 L 1 λ −50α q+1 δ q+2 . (5.10)
Thirdly, using the definitions of η l , Λ v , Λ b and φ k,k,k , we have l;k∈Λv
a v,l,k ψ k φ k,k,k · l;k∈Λ b a b,l,k ψ k φ k,k,k = 0, (5.11) and l;k∈Λ b a b,l,k ψ k φ k,k,k · l;k∈Λ b a b,l,k ψ k φ k,k,k = l;k∈Λ b a 2 b,l,k ψ 2 k φ 2 k,k,k . (5.12)
By div = div P >0 = P >0 div, we show that
P H div w (pb) q+1 ⊗ d (p) q+1 − d (p) q+1 ⊗ w (pb) q+1 +M q + (∂ t d (t) q+1 + ∂ t d (v) q+1 + div M high,1 ) =P H div l;k∈Λ b a 2 b,l,k ψ 2 k (φ 2 k φ 2 k φ 2 k )(k ⊗k −k ⊗k) +M q + (∂ t d (t) q+1 + ∂ t d (v) q+1 + div M high,1 ) =P H div l;k∈Λ b a 2 b,l,k (k ⊗k −k ⊗k) +M q + P H div l;k∈Λ b a 2 b,l,k P >0 (ψ 2 k )(φ 2 k φ 2 k φ 2 k )(k ⊗k −k ⊗k) + P H div l;k∈Λ b a 2 b,l,k P >0 (φ 2 k φ 2 k φ 2 k )(k ⊗k −k ⊗k) + (∂ t d (t) q+1 + ∂ t d (v) q+1 + div M high,1 ) = 0 + P H P >0 l;k∈Λ b P >0 (ψ 2 k ) div a 2 b,l,k (φ 2 k φ 2 k φ 2 k )(k ⊗k −k ⊗k) + P H P >0 l;k∈Λ b P >0 (φ 2 k φ 2 k φ 2 k ) div[a 2 b,l,k (k ⊗k −k ⊗k)] + P H P >0 l;k∈Λ b a 2 b,l,k div[(φ 2 k φ 2 k φ 2 k )(k ⊗k)] + ∂ t d (t) q+1 + P H P >0 l;k∈Λ b −a 2 b,l,k div[(φ 2 k φ 2 k φ 2 k )(k ⊗k)] + ∂ t d (v) q+1 + div M high,1 := div M low,2 + div M low,3 + div M low,4 ,(5.13)
where we have used Lemma 4.1 in the third equality. Combining (5.6), (5.9) and (5.13) shows
div[M osc q+1 + R a [(∂ t d (t) q+1 + ∂ t d (v) q+1 ) + (∂ t d ∆ q+1 − ∆d ∆ q+1 − ∆d (v) q+1 )]]
= div(M low,0 + M low,1 + M low,2 + M low,3 + M low,4 ). (5.14)
We begin to estimate M low,2 . Using Lemma A.2 and Proposition 4.4, we deduce that
M low,2 L 1 −15 q r −1 λ −1+σ q+1 λ −50α q+1 δ q+2 . (5.15)
Next, we consider M low,3 . Recalling that
∂ t d (t) q+1 = − 1 µ P H P >0 l;k∈Λ b ∂ t (a 2 b,l,k )(φ 2 k φ 2 k φ 2 k )k + a 2 b,l,k ∂ t φ 2 k · φ 2 k φ 2 kk , by ∂ t [φ 2 k (kx + µt)] · φ 2 k φ 2 kk = µ div[φ 2 k (kx + µt)k]φ 2 k φ 2 kk = µ div[(φ 2 k φ 2 k φ 2 k )(k ⊗k)],
we have
M low,3 = R a P H P >0 l;k∈Λ b a 2 b,l,k div[(φ 2 k φ 2 k φ 2 k )(k ⊗k)] + R a ∂ t d (t) q+1 = −R a P H P >0 l;k∈Λ b µ −1 ∂ t (a 2 b,l,k )(φ 2 k φ 2 k φ 2 k )k. (5.16)
So we can easily deduce that
M low,3 L 1 µ −1 ∂ t (a 2 b,l,k ) L ∞ φ 2 k φ 2 k φ 2 k L 1+α λ −50α q+1 δ q+2 .(5.M low,4 =R a P H P >0 l;k∈Λ b −a 2 b,l,k div[(P >0 (φ 2 k ) + 1) · φ 2 k φ 2 k (k ⊗k)] + ∂ t d (v) q+1 + M high,1 =R a P H P >0 l;k∈Λ b −a 2 b,l,k div[P >0 (φ 2 k )φ 2 k φ 2 k (k ⊗k)] + ∂ t d (v) q+1 =(µσ) −1 R a P H P >0 l;k∈Λ b ∂ t (a 2 b,l,k ) div[(∂ −1 t P >0 φ 2 k )φ 2 k φ 2 k (k ⊗k)] =(µσ) −1 R a P H P >0 l;k∈Λ b (∂ t a 2 b,l,k )(∂ −1 t P >0 φ 2 k )φ 2 k div[φ 2 k (k ⊗k)]. (5.18)
One deduces by Proposition 4.4 that
M low,4 L 1 ≤ (µσ) −1 ∂ t (a 2 b,l,k ) L ∞ (∂ −1 t P >0 φ 2 k )φ 2 k div(φ 2 k )k ⊗k L 1+α −15 q λ 2α q+1 µ −1r−1 λ −50α q+1 δ q+2 .(5.M osc q+1 + R a [(∂ t d (t) q+1 + ∂ t d (v) q+1 ) + (∂ t d ∆ q+1 − ∆d ∆ q+1 − ∆d (v) q+1 )] L 1 λ −50α q+1 δ q+2 .
This completes the proof of Propositon 5.2.
Proposition 5.3 (Estimate for R osc q+1 ).
R osc q+1 + R[(∂ t w (t) q+1 + ∂ t w (v) q+1 ) + (∂ t w ∆ q+1 − ∆w ∆ q+1 − ∆w (v) q+1 )] L 1 λ −50α q+1 δ q+2 ,
where P v is defined in (5.25).
Proof. Firstly, since w
(h) q+1 = d (h) q+1 and w (pu) q+1 ⊗ w (pb) q+1 = w (h) q+1 ⊗ w (p) q+1 = d (h) q+1 ⊗ d (p) q+1 = 0, we have w q+1 ⊗ w q+1 − d q+1 ⊗ d q+1 = w (p) q+1 ⊗ w (p) q+1 − d (p) q+1 ⊗ d (p) q+1 + R low,0 , (5.20) where R low,0 :=(w q+1 − w (p) q+1 − w (h) q+1 ) ⊗ (w q+1 − w (p) q+1 − w (h) q+1 ) + (w (p) q+1 + w (h) q+1 ) ⊗ (w q+1 − w (p) q+1 − w (h) q+1 ) + (w q+1 − w (p) q+1 − w (h) q+1 ) ⊗ (w (p) q+1 + w (h) q+1 ) − (d q+1 − d (p) q+1 − d (h) q+1 ) ⊗ (d q+1 − d (p) q+1 − d (h) q+1 ) − (d (p) q+1 + d (h) q+1 ) ⊗ (d q+1 − d (p) q+1 − d (h) q+1 ) − (d q+1 − d (p) q+1 − d (h) q+1 ) ⊗ (d (p) q+1 + d (h) q+1 ). (5.21)
We deduce by Proposition 4.5 that
R div R low,0 L 1 λ −50α q+1 δ q+2 . (5.22)
Secondly, straightforward calculations show
∂ t w ∆ q+1 − ∆w ∆ q+1 − ∆w (v) q+1 = (µσ) −1 P H P >0 l;k∈Λ b (∂ t − ∆)(a 2 b,l,k ) t 0 e (t−τ )∆ ∆ div (∂ −1 t P >0 φ 2 k )φ 2 k φ 2 kk ⊗k dτ + P H P >0 l;k∈Λ b (∂ t − ∆)(a 2 b,l,k ) t 0 e (t−τ )∆ div φ 2 k φ 2 kk ⊗k dτ − (µσ) −1 P H P >0 l;k∈Λ b 3 i=1 ∂ xi (a 2 b,l,k )∂ xi t 0 e (t−τ )∆ ∆ div (∂ −1 t P >0 φ 2 k )φ 2 k φ 2 kk ⊗k dτ − P H P >0 l;k∈Λ b 3 i=1 ∂ xi (a 2 b,l,k )∂ xi t 0 e (t−τ )∆ div φ 2 k φ 2 kk ⊗k dτ + P H P >0 l;k∈Λ b a 2 b,l,k div φ 2 k φ 2 kk ⊗k = div R low,1 + div R high,1 . (5.23)
It is easy to verify that
R low,1 L 1 λ −50α q+1 δ q+2 . (5.24)
Thirdly, we deduce that div(w
(p) q+1 ⊗ w (p) q+1 − d (p) q+1 ⊗ d (p) q+1 +R q ) + (∂ t w (t) q+1 + ∂ t w (t) q+1 + div R high,1 ) = div l;k∈Λv a 2 v,l,k ψ 2 k (φ 2 k φ 2 k φ 2 k )(k ⊗k) +R q + (∂ t w (t) q+1 + ∂ t w (t) q+1 + div R high,1 ) + div l;k∈Λ b a 2 b,l,k ψ 2 k (φ 2 k φ 2 k φ 2 k )(k ⊗k) − l;k∈Λ b a 2 b,l,k ψ 2 k (φ 2 k φ 2 k φ 2 k )(k ⊗k) = div l;k∈Λv a 2 v,l,k P >0 (ψ 2 k )(φ 2 k φ 2 k φ 2 k )(k ⊗k) + a 2 v,l,k P >0 (φ 2 k φ 2 k φ 2 k )(k ⊗k) + ∇ l η 2 l χ v ρ v,q + div l;k∈Λ b a 2 b,l,k P >0 (ψ 2 k )(φ 2 k φ 2 k φ 2 k )(k ⊗k) + a 2 b,l,k P >0 (φ 2 k φ 2 k φ 2 k )(k ⊗k) + ∂ t w (t) q+1 − div l;k∈Λ b a 2 b,l,k P >0 (ψ 2 k )(φ 2 k φ 2 k φ 2 k )(k ⊗k) + a 2 b,l,k P >0 (φ 2 k φ 2 k φ 2 k )(k ⊗k) + ∂ t w (v) q+1 + div R high,1 = div l;k∈Λv a 2 v,l,k P >0 (ψ 2 k )(φ 2 k φ 2 k φ 2 k )(k ⊗k) + l;k∈Λ b a 2 b,l,k P >0 (ψ 2 k )(φ 2 k φ 2 k φ 2 k )(k ⊗k −k ⊗k) + div l;k∈Λv a 2 v,l,k P >0 (φ 2 k φ 2 k φ 2 k )(k ⊗k) + l;k∈Λ b a 2 b,l,k P >0 (φ k φkφk)(k ⊗k) + R∂ t w (t) q+1 + div l;k∈Λ b −a 2 b,l,k P >0 (φ 2 k φ 2 k φ 2 k )(k ⊗k) + R∂ t w (v) q+1 + R high,1 + ∇ l η 2 l χ v ρ v,q = P >0 l;k∈Λv P >0 (ψ 2 k ) div[a 2 v,l,k (φ 2 k φ 2 k φ 2 k )(k ⊗k)] + l;k∈Λ b P >0 (ψ 2 k ) div[a 2 b,l,k (φ k φkφk)(k ⊗k −k ⊗k)] + P >0 l;k∈Λv P >0 (φ 2 k φ 2 k φ 2 k ) div[a 2 v,l,k (k ⊗k)] + l;k∈Λ b P >0 (φ 2 k φ 2 k φ 2 k ) div[a 2 b,l,k (k ⊗k −k ⊗k)] + P >0 l;k∈Λv a 2 v,l,k div[(φ 2 k φ 2 k φ 2 k )(k ⊗k)] + l;k∈Λ b a 2 b,l,k div[(φ 2 k φ 2 k φ 2 k )(k ⊗k)] + ∂ t w (t) q+1 + P >0 l;k∈Λ b −a 2 b,l,k div[(φ 2 k φ 2 k φ 2 k )(k ⊗k)] + ∂ t w (v) q+1 + div R high,1 + ∇ l η 2 l χ v ρ v,q := div R low,2 + div R low,3 + div R low,4 + ∇ l η 2 l χ v ρ v,q := div R low,2 + P H div R low,3 + P H div R low,4 + ∇( div div ∆ R low,3 + div div ∆ R low,4 + l η 2 l χ v ρ v,q ) := div R low,2 + P H div R low,3 + P H div R low,4 + ∇P v ,(5.25)
where we use Lemma 4.2 with R_v = R̊_q − Σ_{k∈Λ_v} a_{b,l,k}(k̄⊗k̄ − k̃⊗k̃) in the second equality. Thus, by the definition of R^{osc}_{q+1}, we deduce
div R^{osc}_{q+1} + R[(∂_t w^{(t)}_{q+1} + ∂_t w^{(v)}_{q+1}) + (∂_t w^Δ_{q+1} − ∆w^Δ_{q+1} − ∆w^{(v)}_{q+1})]
 = div R_{low,0} + R_{low,1} + R div(R_{low,2}) + RP_H div(R_{low,3} + R_{low,4}).  (5.26)
We need to estimate R div R_{low,2}, RP_H div R_{low,3}, and RP_H div R_{low,4}, respectively.
To begin with, we estimate R div R low,2 . Thanks to Lemma A.2 and Proposition 4.4, one obtains that
‖R div R_{low,2}‖_{L¹} ≲ ℓ_q^{-15} σ r̄^{-1} λ_{q+1}^{-1+α} + ℓ_q^{-15} σ^{-1} λ_{q+1}^{α} ≲ λ_{q+1}^{-50α} δ_{q+2}.  (5.27)
Next, we estimate RP_H div R_{low,3}. Recalling that
∂_t w^{(t)}_{q+1} = − P_H P_{>0} Σ_{l;k∈Λ_v} µ^{-1} ∂_t(a²_{v,l,k})(φ²_k φ²_{k̄} φ²_{k̃}) k̄ − P_H P_{>0} Σ_{l;k∈Λ_b} µ^{-1} ∂_t(a²_{b,l,k})(φ²_k φ²_{k̄} φ²_{k̃}) k̄ − P_H P_{>0} Σ_{l;k∈Λ_v} a²_{v,l,k} div[(φ²_k φ²_{k̄} φ²_{k̃}) k̄⊗k̄] − P_H P_{>0} Σ_{l;k∈Λ_b} a²_{b,l,k} div[(φ²_k φ²_{k̄} φ²_{k̃}) k̄⊗k̄].  (5.28)
Owing to div = div P_{>0} = P_{>0} div, we have
RP_H div R_{low,3} = RP_H P_{>0} Σ_{l;k∈Λ_v} a²_{v,l,k} div[(φ²_k φ²_{k̄} φ²_{k̃}) k̄⊗k̄] + RP_H P_{>0} Σ_{l;k∈Λ_b} a²_{b,l,k} div[(φ²_k φ²_{k̄} φ²_{k̃}) k̄⊗k̄] + R ∂_t w^{(t)}_{q+1}
 = − RP_H P_{>0} Σ_{l;k∈Λ_v} µ^{-1} ∂_t(a²_{v,l,k})(φ²_k φ²_{k̄} φ²_{k̃}) k̄ − RP_H P_{>0} Σ_{l;k∈Λ_b} µ^{-1} ∂_t(a²_{b,l,k})(φ²_k φ²_{k̄} φ²_{k̃}) k̄.  (5.29)
Hence, one can easily deduce that
‖RP_H div R_{low,3}‖_{L¹} ≲ µ^{-1} ‖∂_t(a²_{v,l,k}, a²_{b,l,k})‖_{L^∞} ‖φ²_k φ²_{k̄} φ²_{k̃}‖_{L^{1+α}} ≲ λ_{q+1}^{-5α} δ_{q+2}.  (5.30)

Finally, we turn to estimate RP_H div R_{low,4}. Using the definition of R_{high,1} in (5.23) and w^{(v)}_{q+1} in (4.22), we rewrite RP_H div R_{low,4} as

RP_H div R_{low,4} = RP_H P_{>0} Σ_{l;k∈Λ_b} ( −a²_{b,l,k} div[(P_{>0}φ²_k + 1)φ²_{k̄} φ²_{k̃} (k̄⊗k̄)] ) + ∂_t w^{(v)}_{q+1} + div R_{high,1}
 = RP_H P_{>0} Σ_{l;k∈Λ_b} ( −a²_{b,l,k} div[P_{>0}(φ²_k)φ²_{k̄} φ²_{k̃} (k̄⊗k̄)] ) + ∂_t w^{(v)}_{q+1}
 = (µσ)^{-1} RP_H P_{>0} Σ_{l;k∈Λ_b} ∂_t(a²_{b,l,k}) div[(∂_t^{-1}P_{>0}φ²_k)φ²_{k̄} φ²_{k̃} (k̄⊗k̄)]
 = (µσ)^{-1} RP_H P_{>0} Σ_{l;k∈Λ_b} ∂_t(a²_{b,l,k})(∂_t^{-1}P_{>0}φ²_k)φ²_{k̄} div[φ²_{k̃} (k̄⊗k̄)].  (5.31)
One deduces by Proposition 4.4 that
‖RP_H div R_{low,4}‖_{L¹} ≤ (µσ)^{-1} ‖∂_t(a²_{b,l,k})‖_{L^∞} ‖(∂_t^{-1}P_{>0}φ²_k)φ²_{k̄} div[φ²_{k̃}(k̄⊗k̄)]‖_{L^{1+α}} ≲ ℓ_q^{-15} λ_{q+1}^{2α} µ^{-1} r̄^{-1} ≲ λ_{q+1}^{-50α} δ_{q+2}.  (5.32)

Plugging (5.22), (5.24), (5.27), (5.30) and (5.32) into (5.26), we obtain

‖R^{osc}_{q+1} + R[(∂_t w^{(t)}_{q+1} + ∂_t w^{(v)}_{q+1}) + (∂_t w^Δ_{q+1} − ∆w^Δ_{q+1} − ∆w^{(v)}_{q+1})]‖_{L¹} ≲ λ_{q+1}^{-50α} δ_{q+2}.
This completes the proof of Proposition 5.3.
Collecting Proposition 5.1-Proposition 5.3, we obtain that
‖M_{q+1}‖_{L¹} ≤ ‖M^{osc}_{q+1} + R^a[(∂_t d^{(t)}_{q+1} + ∂_t d^{(v)}_{q+1}) + (∂_t d^Δ_{q+1} − ∆d^Δ_{q+1} − ∆d^{(v)}_{q+1})]‖_{L¹} + ‖M^{lin}_{q+1} − R^a[(∂_t d^{(t)}_{q+1} + ∂_t d^{(v)}_{q+1}) + (∂_t d^Δ_{q+1} − ∆d^Δ_{q+1} − ∆d^{(v)}_{q+1})]‖_{L¹} ≤ λ_{q+1}^{-40α} δ_{q+2},

and

‖R_{q+1}‖_{L¹} ≤ ‖R^{osc}_{q+1} + R[(∂_t w^{(t)}_{q+1} + ∂_t w^{(v)}_{q+1}) + (∂_t w^Δ_{q+1} − ∆w^Δ_{q+1} − ∆w^{(v)}_{q+1})]‖_{L¹} + ‖R^{lin}_{q+1} − R[(∂_t w^{(t)}_{q+1} + ∂_t w^{(v)}_{q+1}) + (∂_t w^Δ_{q+1} − ∆w^Δ_{q+1} − ∆w^{(v)}_{q+1})]‖_{L¹} ≤ λ_{q+1}^{-40α} δ_{q+2}.
This fact shows that (2.8) holds by replacing q with q + 1.
Energy iteration
In this section, we show that (2.10) and (2.11) hold with q replaced by q + 1.
Proposition 6.1 (Energy estimate). For all t ∈ [1 − τ_q, T], we have

| e(t) − ∫_{T³} (|v_{q+1}|² + |b_{q+1}|²) dx − δ_{q+2}/2 | ≤ δ_{q+2}/100.
Proof. For t ∈ [1 − τ q , T ], the total energy error can be rewritten as follows:
e(t) − ∫_{T³} (|v_{q+1}|² + |b_{q+1}|²) dx − δ_{q+2}/2
 = [ e(t) − ∫_{T³} (|v_q|² + |b_q|²) dx − δ_{q+2}/2 ] − ∫_{T³} (|w_{q+1}|² + |d_{q+1}|²) dx − 2 ∫_{T³} (w_{q+1}·v̄_q + d_{q+1}·b̄_q) dx
 = [ e(t) − ∫_{T³} (|v_q|² + |b_q|²) dx − δ_{q+2}/2 ] + e_{low,0} − ∫_{T³} [ (w^{(p)}_{q+1} + w^{(h)}_{q+1})·(w^{(p)}_{q+1} + w^{(h)}_{q+1}) + (d^{(p)}_{q+1} + d^{(h)}_{q+1})·(d^{(p)}_{q+1} + d^{(h)}_{q+1}) ] dx,  (6.1)

where

e_{low,0} := − ∫_{T³} [ (w^{(p)}_{q+1} + w^{(h)}_{q+1})·(w^{(v)}_{q+1} + w^Δ_{q+1} + w^{(t)}_{q+1} + w^{(c)}_{q+1} + w^{(s)}_{q+1})  (6.2)
 + (w^{(v)}_{q+1} + w^Δ_{q+1} + w^{(t)}_{q+1} + w^{(c)}_{q+1} + w^{(s)}_{q+1})·w_{q+1} + (d^{(p)}_{q+1} + d^{(h)}_{q+1})·(d^{(v)}_{q+1} + d^Δ_{q+1} + d^{(t)}_{q+1} + d^{(c)}_{q+1} + d^{(s)}_{q+1})  (6.3)
 + (d^{(v)}_{q+1} + d^Δ_{q+1} + d^{(t)}_{q+1} + d^{(c)}_{q+1} + d^{(s)}_{q+1})·d_{q+1} ] dx − 2 ∫_{T³} (w_{q+1}·v̄_q + d_{q+1}·b̄_q) dx.  (6.4)
For the last term on the right-hand side of equality (6.1), using w^{(pu)}_{q+1}·w^{(pb)}_{q+1} = w^{(p)}_{q+1}·w^{(h)}_{q+1} = d^{(p)}_{q+1}·d^{(h)}_{q+1} = 0 and Tr R̊_q = 0, we have

∫_{T³} [ (w^{(p)}_{q+1} + w^{(h)}_{q+1})·(w^{(p)}_{q+1} + w^{(h)}_{q+1}) + (d^{(p)}_{q+1} + d^{(h)}_{q+1})·(d^{(p)}_{q+1} + d^{(h)}_{q+1}) ] dx
 = ∫_{T³} Tr(w^{(pu)}_{q+1} ⊗ w^{(pu)}_{q+1}) + (|w^{(pb)}_{q+1}|² + |d^{(p)}_{q+1}|² + |d^{(h)}_{q+1}|² + |w^{(h)}_{q+1}|²) dx
 = ∫_{T³} Tr( Σ_l η²_l ρ_{v,q} Σ_{k∈Λ_v} a²_k ψ²_k φ²_{k,k̄,k̃} k̄⊗k̄ ) dx + ∫_{T³} (|w^{(pb)}_{q+1}|² + |d^{(p)}_{q+1}|² + |d^{(h)}_{q+1}|² + |w^{(h)}_{q+1}|²) dx
 = ∫_{T³} Tr( Σ_l η²_l (ρ_{v,q} Id − R̊_q) ) dx + ∫_{T³} (|w^{(pb)}_{q+1}|² + |d^{(p)}_{q+1}|² + |d^{(h)}_{q+1}|² + |w^{(h)}_{q+1}|²) dx
   + ∫_{T³} Tr( Σ_{l;k∈Λ_v} η²_l ρ_{v,q} a²_k [P_{>0}(ψ²_k)φ²_{k,k̄,k̃} + P_{>0}φ²_{k,k̄,k̃}] k̄⊗k̄ ) dx
 = 3 ∫_{T³} Σ_l η²_l ρ_{v,q} dx + ∫_{T³} (|w^{(pb)}_{q+1}|² + |d^{(p)}_{q+1}|² + |d^{(h)}_{q+1}|² + |w^{(h)}_{q+1}|²) dx
   + ∫_{T³} Tr( Σ_{l;k∈Λ_v} a²_{v,l,k} [P_{>0}(ψ²_k)φ²_{k,k̄,k̃} + P_{>0}φ²_{k,k̄,k̃}] k̄⊗k̄ ) dx.  (6.5)

Since η_{-1}(t) = ℵ(t) = 0 for t ∈ [1 − τ_q, T], we rewrite ρ_{v,q} as

ρ_{v,q}(t) = χ_v [δ_{q+1} ℵ(t) + ρ_q(t)(1 − ℵ(t))] / ( η_{-1}(t) + Σ_{l=1}^{N_q} ∫_{T³} η²_l χ_v(x, t) dx ) = χ_v ρ_q(t) / ( Σ_{l=1}^{N_q} ∫_{T³} η²_l χ_v(x, t) dx ).  (6.6)
Recalling the definition of ρ q (t) in (4.7), we deduce that
3 ∫_{T³} Σ_l η²_l ρ_{v,q} dx + ∫_{T³} (|w^{(pb)}_{q+1}|² + |d^{(p)}_{q+1}|² + |d^{(h)}_{q+1}|² + |w^{(h)}_{q+1}|²) dx = 3ρ_q(t) + E(t) = e(t) − ∫_{T³} (|v_q|² + |b_q|²) dx − δ_{q+2}/2.  (6.7)
Identity (6.7) helps us rewrite (6.5) as

∫_{T³} [ (w^{(p)}_{q+1} + w^{(h)}_{q+1})·(w^{(p)}_{q+1} + w^{(h)}_{q+1}) + (d^{(p)}_{q+1} + d^{(h)}_{q+1})·(d^{(p)}_{q+1} + d^{(h)}_{q+1}) ] dx
 = e(t) − ∫_{T³} (|v_q|² + |b_q|²) dx − δ_{q+2}/2 + Tr ∫_{T³} Σ_{l;k∈Λ_v} a²_{v,l,k} [P_{>0}(ψ²_k)(φ²_{k,k̄,k̃}) + P_{>0}(φ²_{k,k̄,k̃})] k̄⊗k̄ dx.  (6.8)
Next, using ∫_{T³} f P_{≥c} g dx = ∫_{T³} |∇|^L f |∇|^{-L} P_{≥c} g dx ≤ c^{-L} ‖g‖_{L²} ‖f‖_{H^L} with L sufficiently large, one bounds the trace term in (6.8) by a small multiple of δ_{q+2} (6.9), and

| ∫_{T³} (w_{q+1}·v̄_q + d_{q+1}·b̄_q) dx | ≲ λ_{q+1}^{-5α} δ_{q+2} ≤ δ_{q+2}/10000.  (6.10)
From Remark 4.6, we deduce that
| ∫_{T³} [ (w^{(p)}_{q+1} + w^{(h)}_{q+1})·(w^Δ_{q+1} + w^{(t)}_{q+1} + w^{(c)}_{q+1} + w^{(s)}_{q+1}) + (w^Δ_{q+1} + w^{(t)}_{q+1} + w^{(c)}_{q+1} + w^{(s)}_{q+1})·w_{q+1} + (d^{(p)}_{q+1} + d^{(h)}_{q+1})·(d^Δ_{q+1} + d^{(t)}_{q+1} + d^{(c)}_{q+1} + d^{(s)}_{q+1}) + (d^Δ_{q+1} + d^{(t)}_{q+1} + d^{(c)}_{q+1} + d^{(s)}_{q+1})·d_{q+1} ] dx | ≲ λ_{q+1}^{-50α} δ_{q+2} ≤ δ_{q+2}/10000.  (6.11)
Combining the above inequality with (6.10), we show that |e_{low}| ≤ δ_{q+2}/2000.  (6.12)
Finally, putting (6.8), (6.9) and (6.12) into (6.1), we have
e(t) − δ_{q+2}/2 − ∫_{T³} (|v_{q+1}|² + |b_{q+1}|²) dx = [ e(t) − ∫_{T³} (|v_q|² + |b_q|²) dx − δ_{q+2}/2 ] − ∫_{T³} [ (w^{(p)}_{q+1} + w^{(h)}_{q+1})·(w^{(p)}_{q+1} + w^{(h)}_{q+1}) + (d^{(p)}_{q+1} + d^{(h)}_{q+1})·(d^{(p)}_{q+1} + d^{(h)}_{q+1}) ] dx + e_{low}.

This completes the proof of Proposition 6.1.

Proof. Similar to the proof of Proposition 6.1, it suffices to control the helicity error, which we rewrite as h_q(t) + h_{low,1} (6.17). Using standard integration by parts and Remark 4.6, one deduces that |h_{low,0}| + |h_{low,1}| ≲ λ_{q+1}^{-50α} δ_{q+2}.  (6.18)
Since h_q(t) = h(t) − ∫_{T³} v̄_q·b̄_q dx − δ_{q+2}/200, plugging (6.17) and (6.18) into (6.15) yields that
| h(t) − ∫_{T³} v_{q+1}·b_{q+1} dx − δ_{q+2}/200 | ≤ |h_{low,0}| + |h_{low,1}| ≲ λ_{q+1}^{-50α} δ_{q+2} ≤ δ_{q+2}/1000.
Therefore, we complete the proof of Proposition 6.2.
In conclusion, collecting all the results in Section 3-Section 6, we prove Proposition 2.1.
We thank the anonymous referee and the associated editor for their invaluable comments which helped to improve the paper. This work was supported by the National Key Research and Development Program of China (No. 2020YFA0712900) and NSFC Grant 11831004.
A Appendix
A.1 Inverse divergence operator
Recall the following pseudodifferential operators of order −1, as in [4]:

(Rv)_{kl} = ∂_k ∆^{-1} v_l + ∂_l ∆^{-1} v_k − ½ (δ_{kl} + ∂_k ∂_l ∆^{-1}) div ∆^{-1} v,  (A.1)
(R^a u)_{ij} = ε_{ijk} (−∆)^{-1} (curl u)_k,  (A.2)

where ε_{ijk} is the Levi-Civita tensor, i, j, k ∈ {1, 2, 3}. One can easily verify that R is a matrix-valued right inverse of the divergence operator, such that div Rv = v for mean-free vector fields v. In addition, Rv is traceless and symmetric. R^a is also a matrix-valued right inverse of the divergence operator, such that div R^a u = u for divergence-free vector fields u, and R^a u is anti-symmetric.
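For concreteness, a one-line check of the first claim (a sketch, using that ∆^{-1} commutes with partial derivatives and that div ∆^{-1} = ∆^{-1} div on mean-free fields):

(div Rv)_k = ∂_l (Rv)_{kl} = ∂_k ∆^{-1}(div v) + ∆∆^{-1} v_k − ½ ∂_k ∆^{-1}(div v) − ½ ∂_k ∆^{-1}(div v) = v_k.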
A.2 Some notations in geometric lemmas
Using the same idea as in [4], we give the definitions of Λ_b, Λ_v and Λ_s by Λ_b = e_1, e_2, e_3, . . . For fixed k ∈ Λ_b ∪ Λ_v ∪ Λ_s, one can obtain a unique orthogonal triple (k, k̄, k̃); in each of the three cases k ∈ Λ_b, k ∈ Λ_v, and k ∈ Λ_s we have k ⊥ k̄ ⊥ k̃.

Lemma A.1. Let p ∈ {1, 2}, and let f be a T³-periodic function such that there exists a constant C_f such that
‖D^j f‖_{L^p} ≤ C_f λ^j,  j ∈ [1, L + 4].
Moreover, assume that g is a (T/κ) 3 -periodic function. Then, we have
‖f g‖_{L^p} ≲ C_f ‖g‖_{L^p},
where the implicit constant is universal.
One can obtain the following lemma after some modifications of Lemma B.1 in [8].
Lemma A.2. Fix κ > λ ≥ 1 and p ∈ (1, 2]. Assume that there exists an integer L with κ^{L−2} > λ^L. Let f be a T³-periodic function so that there exists a constant C_f such that

‖D^j f‖_{L^{p_1}} ≲ λ^j ‖f‖_{L^{p_1}},  j ∈ [0, L].
Assume further that ∫_{T³} f(x) P_{≥κ} g(x) dx = 0 and g is a (T/κ)³-periodic function. Then, we have
‖|∇|^{-1}(f P_{≥κ} g)‖_{L^p} ≲ ‖f‖_{L^{p_1}} ‖g‖_{L^{p_2}} / κ,  1/p = 1/p_1 + 1/p_2,
where the implicit constant is universal.
Theorem 1.2. A weak solution (v, b) of the viscous and resistive MHD system in C([0, T ]; L 2 (T 3 )) is nonunique if (v, b) has at least one interval of regularity. Moreover, there exist non-Leray-Hopf weak solutions (v, b) in C([0, T ]; L 2 (T 3 )).
Remark 1.3. For the ideal MHD system (1.4) with non-trivial magnetic fields, Beekie-Buckmaster-Vicol in
For arbitrary smooth functions e : [1/2, T] → [0, ∞), h : [1/2, T] → (−∞, ∞) satisfying the estimates (2.2)-(2.3), it is easy to verify that (2.10)-(2.11) hold for q = 1.
Lemma 4.1 (First Geometric Lemma [4]). Let B_b(0) be a ball of radius b centered at 0 in the space of 3 × 3 skew-symmetric matrices. There exists a set Λ_b ⊂ S² ∩ Q³ that consists of vectors k with associated orthogonal basis (k, k̄, k̃), b > 0 and smooth positive functions a_{b,k} : B_b(0) → R, such that for M ∈ B_b
Lemma 4.2 (Second Geometric Lemma [4]). Let B_v(Id) be a ball of radius v centered at Id in the space of 3 × 3 symmetric matrices. There exists a set Λ_v ⊂ S² ∩ Q³ that consists of vectors k with associated orthogonal basis (k, k̄, k̃), v > 0 and smooth positive functions a_{v,k} : B_v(Id) → R, such that for
φ_{k̄}(x, t) and φ_{k̃}(x), respectively. Now, setting σ = λ_{q+1}^{1/128}, µ = λ_{q+1}^{17/16} and r = r̄ = λ
Σ_l ρ_{v,q}^{1/2} a_{v,k}( Id − R̊_v/ρ_{v,q} ) ψ_k φ_{k,k̄,k̃} k̄ := Σ_{l;k∈Λ_v}
taking the parameter b sufficiently large in λ_{q+1}. (3) Because the supports of the "box flows" are much smaller than the supports of the Mikado flows, we choose the "box flows" instead of Mikado flows. However, this choice gives rise to extra errors in the oscillation terms, since they are not divergence-free. To overcome this difficulty, we introduce temporal correctors, which help us eliminate the extra errors. Now, let us define the temporal flows w^{(t)}_{q+1} and d^{(t)}_{q+1}, which are divergence-free and have zero mean.
Here and throughout the paper, we denote T³ = [0, 1]³ and abbreviate (f, g) ∈ X × X by (f, g) ∈ X.
Throughout this paper, we use the notation x ≲ y to denote x ≤ Cy, for a universal constant C that may be different from line to line, and x ≪ y to denote that x is much less than y.
(3.15) and (3.16) imply (3.13).

3.3 Gluing flows

Define the intervals I_l, J_l (l ≥ 0) by I_l := [ t_l + τ_q/3, t_l + 2τ_q/3 ],  (3.17)
[1] D. Albritton, E. Brué, and M. Colombo. Non-uniqueness of Leray solutions of the forced Navier-Stokes equations. Ann. of Math., 196(1):415-455, 2022.
[2] H. Aluie. Hydrodynamic and magnetohydrodynamic turbulence: Invariants, cascades and locality. Ph.D. thesis, Johns Hopkins University, 2009.
[3] H. Bahouri, J. Y. Chemin, and R. Danchin. Fourier analysis and nonlinear partial differential equations, volume 343 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer, Heidelberg, 2011.
[4] R. Beekie, T. Buckmaster, and V. Vicol. Weak solutions of ideal MHD which do not conserve magnetic helicity. Ann. of PDE, 6(1):1-40, 2020.
[5] T. Buckmaster, M. Colombo, and V. Vicol. Wild solutions of the Navier-Stokes equations whose singular sets in time have Hausdorff dimension strictly less than 1. J. Eur. Math. Soc., 24(9):3333-3378, 2021.
[6] T. Buckmaster, C. De Lellis, L. Székelyhidi, and V. Vicol. Onsager's conjecture for admissible weak solutions. Commun. Pure Appl. Math., 72(2):229-274, 2019.
[7] T. Buckmaster, N. Masmoudi, M. Novack, and V. Vicol. Non-conservative H^{1/2−} weak solutions of the incompressible 3D Euler equations. arXiv:2101.09278, 2021.
[8] T. Buckmaster and V. Vicol. Nonuniqueness of weak solutions to the Navier-Stokes equation. Ann. of Math., 189(2):101-144, 2019.
[9] R. E. Caflisch, I. Klapper, and G. Steele. Remarks on singularities, dimension and energy dissipation for ideal hydrodynamics and MHD. Comm. Math. Phys., 184(2):443-455, 1997.
[10] A. Cheskidov and X. Luo. Sharp nonuniqueness for the Navier-Stokes equations. Invent. Math., 229(3):987-1054, 2022.
[11] P. Constantin, Weinan E, and E. S. Titi. Onsager's conjecture on the energy conservation for solutions of Euler's equation. Comm. Math. Phys., 165(1):207-209, 1994.
[12] S. Daneri. Cauchy problem for dissipative Hölder solutions to the incompressible Euler equations. Comm. Math. Phys., 329(2):745-786, 2014.
[13] S. Daneri, E. Runa, and L. Székelyhidi Jr. Non-uniqueness for the Euler equations up to Onsager's critical exponent. Ann. of PDE, 7(1), Paper No. 8, 44 pp., 2021.
[14] S. Daneri and L. Székelyhidi Jr. Non-uniqueness and h-principle for Hölder-continuous weak solutions of the Euler equations. Arch. Ration. Mech. Anal., 224(2):471-514, 2017.
[15] L. De Rosa and S. Haffter. Dimension of the singular set of wild Hölder solutions of the incompressible Euler equations. Nonlinearity, 35(10):5150-5192, 2021.
[16] D. Faraco and S. Lindberg. Proof of Taylor's conjecture on magnetic helicity conservation. Comm. Math. Phys., 273(2):707-738, 2021.
[17] D. Faraco, S. Lindberg, and L. Székelyhidi Jr. Bounded solutions of ideal MHD with compact support in space-time. Arch. Ration. Mech. Anal., 239(1):51-93, 2021.
[18] D. Faraco, S. Lindberg, and L. Székelyhidi Jr. Magnetic helicity, weak solutions and relaxation of ideal MHD. arXiv:2109.09106, 2021.
[19] P. Isett. A proof of Onsager's conjecture. Ann. of Math., 188(3):871-963, 2018.
[20] E. Kang and J. Lee. Remarks on the magnetic helicity and energy conservation for ideal magnetohydrodynamics. Nonlinearity, 20(11):2681-2689, 2007.
[21] C. Khor, C. Miao, and W. Ye. Infinitely many non-conservative solutions for the three-dimensional Euler equations with arbitrary initial data in C^{1/3−}. arXiv:2204.03344, 2022.
[22] C. De Lellis and L. Székelyhidi Jr. The Euler equations as a differential inclusion. Ann. of Math., 170(3):1417-1436, 2009.
[23] C. De Lellis and L. Székelyhidi Jr. On admissibility criteria for weak solutions of the Euler equations. Arch. Ration. Mech. Anal., 195(1):225-260, 2010.
[24] Y. Li, Z. Zeng, and D. Zhang. Non-uniqueness of weak solutions to 3D generalized magnetohydrodynamic equations. J. Math. Pures Appl., 165:232-285, 2022.
[25] T. Luo and E. S. Titi. Non-uniqueness of weak solutions to hyperviscous Navier-Stokes equations: on sharpness of J.-L. Lions exponent. Calc. Var. Partial Differential Equations, 59(3):1-15, 2020.
[26] M. Novack and V. Vicol. An intermittent Onsager theorem. arXiv:2203.13115, 2022.
[27] J. B. Taylor. Relaxation and magnetic reconnection in plasmas. Reviews of Modern Physics, 58(3):741, 1986.
[28] J. Wu. Generalized MHD equations. J. Diff. Equ., 195(2):284-312, 2003.
| [] |
[
"A Latent Space Model for HLA Compatibility Networks in Kidney Transplantation",
"A Latent Space Model for HLA Compatibility Networks in Kidney Transplantation"
] | [
"Zhipeng Huang [email protected] \nDepartment of Computer and Data Sciences Case\nDepartment of Computer and Data Sciences Case Western Reserve University Cleveland\nWestern Reserve University Cleveland\n44106, 44106OH, OHUSA, USA\n",
"Kevin S Xu \nDepartment of Computer and Data Sciences Case\nDepartment of Computer and Data Sciences Case Western Reserve University Cleveland\nWestern Reserve University Cleveland\n44106, 44106OH, OHUSA, USA\n"
] | [
"Department of Computer and Data Sciences Case\nDepartment of Computer and Data Sciences Case Western Reserve University Cleveland\nWestern Reserve University Cleveland\n44106, 44106OH, OHUSA, USA",
"Department of Computer and Data Sciences Case\nDepartment of Computer and Data Sciences Case Western Reserve University Cleveland\nWestern Reserve University Cleveland\n44106, 44106OH, OHUSA, USA"
] | [] | Kidney transplantation is the preferred treatment for people suffering from end-stage renal disease. Successful kidney transplants still fail over time, known as graft failure; however, the time to graft failure, or graft survival time, can vary significantly between different recipients. A significant biological factor affecting graft survival times is the compatibility between the human leukocyte antigens (HLAs) of the donor and recipient. We propose to model HLA compatibility using a network, where the nodes denote different HLAs of the donor and recipient, and edge weights denote compatibilities of the HLAs, which can be positive or negative. The network is indirectly observed, as the edge weights are estimated from transplant outcomes rather than directly observed. We propose a latent space model for such indirectly-observed weighted and signed networks. We demonstrate that our latent space model can not only result in more accurate estimates of HLA compatibilities, but can also be incorporated into survival analysis models to improve accuracy for the downstream task of predicting graft survival times. | 10.1109/bibm55620.2022.9995514 | [
"https://export.arxiv.org/pdf/2211.02234v1.pdf"
] | 253,370,284 | 2211.02234 | 47fb78095ef57b572362671008e7391ea6604f72 |
A Latent Space Model for HLA Compatibility Networks in Kidney Transplantation
Zhipeng Huang [email protected]
Department of Computer and Data Sciences
Case Western Reserve University
Cleveland, OH 44106, USA
Kevin S Xu
Department of Computer and Data Sciences
Case Western Reserve University
Cleveland, OH 44106, USA
A Latent Space Model for HLA Compatibility Networks in Kidney Transplantation
Index Terms—estimated network, indirectly-observed network, biological network, weighted network, signed network, survival analysis, human leukocyte antigens, graft survival
Kidney transplantation is the preferred treatment for people suffering from end-stage renal disease. Successful kidney transplants still fail over time, known as graft failure; however, the time to graft failure, or graft survival time, can vary significantly between different recipients. A significant biological factor affecting graft survival times is the compatibility between the human leukocyte antigens (HLAs) of the donor and recipient. We propose to model HLA compatibility using a network, where the nodes denote different HLAs of the donor and recipient, and edge weights denote compatibilities of the HLAs, which can be positive or negative. The network is indirectly observed, as the edge weights are estimated from transplant outcomes rather than directly observed. We propose a latent space model for such indirectly-observed weighted and signed networks. We demonstrate that our latent space model can not only result in more accurate estimates of HLA compatibilities, but can also be incorporated into survival analysis models to improve accuracy for the downstream task of predicting graft survival times.
I. INTRODUCTION
Kidney transplantation is by far the preferred treatment for people suffering from end-stage renal disease (ESRD), an advanced state of chronic kidney disease. Despite the advantages of kidney transplants, most patients with ESRD are treated with dialysis, primarily because there exists an insufficient number of compatible donors for patients. Kidney transplants (and other organ transplants in general) inevitably fail over time, referred to as graft failure. These patients then require a replacement with another kidney transplant, or they must return to the waiting list while being treated with dialysis.
The time to graft failure, or graft survival time, is determined by a variety of factors. A significant biological factor affecting clinical survival times of transplanted organs is the compatibility between the human leukocyte antigens (HLAs) of the organ donor and recipient. Mismatches between donor and recipient HLAs may cause the recipient's immune response to launch an attack against the transplanted organ, resulting in worse outcomes, including shorter graft survival times. However, it is extremely rare to identify donors that have a perfect HLA match with recipients (less than 10% of transplants), so most transplants (more than 90%) involve different degrees of mismatched HLAs. Interestingly, HLA mismatches appear not to be equally harmful. Prior work indicates that some mismatches may still lead to good post-transplant outcomes [1,2], suggesting differing levels of compatibility between donor and recipient HLAs. (This research was partially conducted while the authors were at the University of Toledo.)
Recently, Nemati et al. [3] proposed an approach to encode donor and recipient HLAs into feature representations that accounts for biological mechanisms behind HLA compatibility. By adding these features to a Cox proportional hazards (CoxPH) model, they were able to improve the prediction accuracy for the graft survival time for kidney transplants. Their CoxPH model also provides estimates of the compatibilities between donor and recipient HLAs through the coefficients of the model. However, most donor-recipient HLA pairs are infrequently observed, with many HLA pairs occurring in less than 1% of all transplants. These estimated compatibilities are thus extremely noisy, and in some cases, have standard errors as large as or larger than the estimates themselves! In this paper, we propose to represent HLA compatibility as a network, where the edge weight between a donor HLA and a recipient HLA denotes the compatibility. We propose a latent space model for the HLA compatibility network, which is a weighted, signed, and bipartite network with extremely noisy edge weights. The latent space model allows us to use compatibilities of other HLA pairs in the network to improve the estimated compatibility of a given HLA pair.
Our main contributions are as follows:
• We introduce the notion of an HLA compatibility network between donor and recipient HLAs, a noisy signed and weighted network estimated from outcomes of kidney transplants involving those HLAs.
• We propose a latent space model for the HLA compatibility network to capture the underlying structure between HLAs and better predict their compatibilities.
• We demonstrate that applying our latent space model to the HLA compatibility network results in more accurate predictions of the compatibilities.
• We find that the predicted compatibilities from our latent space model can further improve accuracy for the downstream task (see Figure 1) of predicting outcomes of kidney transplants, namely the graft survival times.
II. BACKGROUND
A. Human Leukocyte Antigens (HLAs)
The HLA system is a system of proteins expressed on a transplant's cells that determine the immunogenicity of a kidney transplant from a donor [2,4]. The HLA system includes three primary HLA loci: HLA-A, HLA-B, and HLA-DR. For each HLA locus, there are many diverse HLA proteins. The general way to express HLA antigens is two-digit, or low-resolution, typing, where each HLA is identified by "HLA" followed by a hyphen, a letter indicating the locus, and a 1-digit or 2-digit number indicating the HLA protein, e.g., HLA-A1 [5]. In this paper, we drop the "HLA" prefix and refer to HLA-A1 as simply A1.
Each person has two sets of HLAs, one inherited from the father and one from the mother. Therefore, each donor and recipient has 6 HLAs at these 3 loci, and anywhere from 0 to 6 of the HLAs can be mismatched. The compatibility between the mismatched HLAs of the kidney donor and recipient has shown to be a significant factor affecting transplanted kidneys' survival times [2,4,6]. A kidney transplant recipient with higher HLA compatibility with the donor has a higher possibility of a good transplant outcome.
B. Survival Analysis
Survival analysis focuses on analyzing time-to-event data, where the objective is typically to model the expected duration until an event of interest occurs for a given subject [7]. For many subjects, however, the exact time of the event is unknown due to censoring, which may occur for many reasons, including the event not yet occurring, or the subject dropping out from the study. We briefly introduce some survival analysis terminology and models relevant to this paper.
Let T denote the time that the event of interest (graft failure) occurs. The hazard function is defined as
h(t) = lim_{∆t→0} Pr(t ≤ T < t + ∆t | T ≥ t) / ∆t
and denotes the rate of the event of interest occurring at time t given that it did not occur prior to time t. A common assumption in survival analysis is the proportional hazards assumption, which assumes that covariates are multiplicatively related to the hazard function. In the Cox proportional hazards (CoxPH) model, the hazard function takes the form
h(t|x i ) = h 0 (t) exp(ω 0 + ω 1 x i1 + . . . + ω d x id ),
where h 0 (t) is a baseline hazard function, x ij denotes the jth covariate for subject i, and ω j denotes the coefficient for the jth covariate. Note that the hazard ratio for any two subjects x 1 , x 2 is independent of the baseline hazard function h 0 (t).
Assume that the jth covariate is binary. If we consider two groups of subjects who differ only in the jth covariate, then the hazard ratio is given by e ωj , and the log of the hazard ratio is thus ω j .
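As a concrete illustration of the CoxPH setup and the hazard-ratio interpretation, the following minimal sketch fits a CoxPH model with the lifelines library. The data frame and its column names (age, hla_mismatch, time, event) are hypothetical stand-ins, not the SRTR data.

```python
# Minimal CoxPH example with lifelines; all data and column names are synthetic.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.normal(50, 10, n),
    "hla_mismatch": rng.integers(0, 2, n),  # binary covariate x_j
    "time": rng.exponential(10, n),         # observed follow-up time
    "event": rng.integers(0, 2, n),         # 1 = graft failure, 0 = censored
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")

# For a binary covariate, the fitted coefficient omega_j is the log hazard
# ratio between the two groups, and exp(omega_j) is the hazard ratio.
omega = cph.params_["hla_mismatch"]
print("log hazard ratio:", omega, " hazard ratio:", np.exp(omega))
```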
C. Latent Space Models
Network models are used to represent relations among interacting units. Hoff et al. [8] proposed a class of latent space models for networks, where the probability of an edge existing between two entities depends on their positions in an unobserved Euclidean space, or latent space. Let A denote the adjacency matrix of a network, with a_ij = 1 for node pairs (i, j) with an edge and a_ij = 0 otherwise. The model assumes conditional independence between node pairs given the latent positions, and the log odds of an edge being formed between nodes (i, j) is given by α + βx_ij − ||z_i − z_j||, where x_ij denotes observed covariates between nodes (i, j), z_i, z_j ∈ R^d denote the latent positions in a d-dimensional latent space for nodes i and j, and α and β are the linear parameters. Within this parameterization, two nodes have a higher probability of forming an edge if they have closer latent positions.
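To make the parameterization concrete, here is a small sketch of the edge probability in the latent distance model of Hoff et al. (the numerical values are arbitrary placeholders):

```python
# Edge probability under the latent distance model:
# log-odds(a_ij = 1) = alpha + beta * x_ij - ||z_i - z_j||.
import numpy as np

def edge_probability(alpha, beta, x_ij, z_i, z_j):
    log_odds = alpha + beta * x_ij - np.linalg.norm(z_i - z_j)
    return 1.0 / (1.0 + np.exp(-log_odds))  # logistic link

# Nodes with nearby latent positions are more likely to be connected.
z_i, z_j = np.array([0.1, 0.2]), np.array([0.2, 0.1])
print(edge_probability(alpha=1.0, beta=0.5, x_ij=0.0, z_i=z_i, z_j=z_j))
```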
D. Related Work
1) Extensions of Latent Space Models: The latent space model provides a visual and interpretable spatial representation for relational data. The model of Hoff et al. [8] was extended by researchers to more complex network data structures, including bipartite networks [9], discrete-time dynamic networks [10], and multimodal networks [11]. Node-specific random effects were added to the latent space model by [12] to capture degree heterogeneity. In this work, we propose an HLA latent space model for a signed, weighted, and bipartite indirectly-observed network to capture the relations within HLA compatibility networks.
2) Predicting Kidney Transplant Outcomes: A variety of approaches have been proposed for predicting outcomes for kidney transplants, as well as other organ transplants. Due to the high rate of censored subjects in this type of data, most approaches use some form of survival prediction, including an ensemble model that combines CoxPH models with random survival forests [13] and a deep learning-based approach [14].
The most relevant prior work is that of Nemati et al. [3], who propose a variety of feature representations for HLA for predicting graft survival time. They experimented with these different feature representations using CoxPH models, gradient boosted trees, and random survival forests and found that including HLA information could result in small improvements of up to 0.007 in the prediction accuracy as measured by Harrell's concordance index (C-index) [15].
III. HLA COMPATIBILITY NETWORKS
A. Data Description
This study used data from the Scientific Registry of Transplant Recipients (SRTR). The SRTR data system includes data on all donor, wait-listed candidates, and transplant recipients in the U.S., submitted by the members of the Organ Procurement and Transplantation Network (OPTN). The Health Resources and Services Administration (HRSA), U.S. Department of Health and Human Services provides oversight to the activities of the OPTN and SRTR contractors.
We use the same inclusion criteria and data preprocessing as in Nemati et al. [3], which we briefly summarize in the following. We consider only transplants performed between the years 2000 and 2016 with deceased donors, recipients aged 18 years or older, and only candidates who are receiving their first transplant. We use death-censored graft failure as the clinical endpoint (prediction target), so that patients who died with a functioning graft are treated as censored since they did not exhibit the event of interest (graft failure). For censored instances, the censoring date is defined to be the last follow-up date. We consider a total of 106,372 kidney transplants, of which 74.6% are censored.
1) HLA Representation: We encode HLA information using the HLA types and pairs of the donor and recipient for each transplant. These HLA type and pair features are constructed according to the approach of Nemati et al. [3]. HLA types for the donor and recipient are represented by a one-hot-like encoding, resulting in binary variables such as DON A1, DON A2, . . . , REC A1, REC A2, . . . , where the value for a donor (resp. recipient) HLA type variable is one if the donor (resp. recipient) possesses that HLA type. An HLA type that is a split of a broad type has ones for variables for both the split and broad. For example, DON A23 = 1 for a transplant if the donor possesses HLA type A23. Since A23 is a split of the broad A9, DON A9 = 1 for this transplant also.
Donor-recipient HLA pairs are also represented by a one-hot-like encoding, resulting in binary variables such as DON A1 REC A1, DON A1 REC A2, . . . , where the value for such an HLA pair variable is one if the donor and recipient possess the specified HLA types, and the HLA pair is active. (Some HLA pairs are inactive due to asymmetry in the roles of donor and recipient HLAs; we refer interested readers to Nemati et al. [3] for details.) Broads and splits are handled in the same manner as for HLA types. For example, DON A23 REC A1 = 1 for a transplant if the donor possesses A23, the recipient possess A1, and the HLA pair (A23, A1) is active. If this is the case, then DON A9 REC A1 = 1 also because A23 is a split of the broad A9.
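The following sketch illustrates the one-hot-like type encoding with broad/split handling. The split-to-broad map shown is only a tiny illustrative fragment, and the underscore-separated feature names are our own convention rather than the paper's exact variable names.

```python
# Sketch of the one-hot-like HLA type encoding with broad/split handling.
# SPLIT_TO_BROAD here is a hypothetical fragment of the real serology table.
SPLIT_TO_BROAD = {"A23": "A9", "A24": "A9"}  # e.g., A23 and A24 are splits of A9

def encode_types(hlas, prefix):
    """Return {feature_name: 1} for the given HLA types (and their broads)."""
    features = {}
    for hla in hlas:
        features[f"{prefix}_{hla}"] = 1
        broad = SPLIT_TO_BROAD.get(hla)
        if broad is not None:
            features[f"{prefix}_{broad}"] = 1  # split activates its broad too
    return features

# A donor carrying A23 activates both DON_A23 and the broad DON_A9.
print(encode_types(["A23", "A1"], prefix="DON"))
# {'DON_A23': 1, 'DON_A9': 1, 'DON_A1': 1}
```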
B. HLA Compatibility Network Construction
While it is possible to construct a network of donors and recipients directly (i.e. with donors and recipients as nodes and an edge denoting a transplant from a donor to a recipient), this directly-observed network is of very little interest scientifically. Each donor node has maximum degree of 2 because that is the maximum number of kidneys they can donate. Each recipient node also has very small degree denoting the number of transplants they have received. The network consists of many small components that are not connected.
We instead consider an indirectly-observed HLA compatibility network, where nodes denote HLA types. We consider a separate set of donor nodes and recipient nodes since the effects of an HLA type may differ when it appears on the donor compared to recipient side. To avoid confusion between a donor and recipient node, we use uppercase letters for the HLA type of a donor node (e.g. A1) and lowercase letters for the HLA types of a recipient node (e.g. a1).
We propose the following definition of HLA compatibility using hazard ratios, which are commonly used in survival analysis as described in Section II-B.
Definition III.1. We define the compatibility between a donor HLA d i and a recipient HLA r j as sum of the negative log of the hazard ratios for donor d i , for recipient r j , and for the donor-recipient pair (d i , r j ).
Let δ i and γ j denote the negative logs of the hazard ratios for donor HLA type d i and recipient HLA type r j , respectively. Let η ij denote the negative log of the hazard ratio for the donor-recipient HLA pair (d i , r j ). Then, the compatibility of donor HLA d i and recipient HLA r j from Definition III.1 can be written as
Compatibility(d_i, r_j) = δ_i + γ_j + η_ij.  (1)
1) Estimating HLA Compatibilities:
The compatibilities of the donor and recipient HLA types are unknown, so we must estimate them from the transplant data. To estimate the compatibilities, we first remove all HLA types and pairs that were not observed in at least 100 transplants¹. We fit an ℓ2-penalized CoxPH model to the data. We use 2-fold cross-validation to tune the ℓ2 penalty parameter to maximize the partial log-likelihood on the validation folds. The result is a set of coefficients for each of the covariates, including basic covariates such as age and race, as well as the HLA types and the HLA pairs. The negated coefficients for the donor HLA type d_i, the recipient HLA type r_j, and the donor-recipient HLA pair (d_i, r_j) can be substituted into (1) to obtain the estimated compatibility of (d_i, r_j). Since positive coefficients in the CoxPH model are associated with higher probability of graft failure, we negate the estimated coefficients so that positive compatibilities are associated with lower probability of graft failure, i.e. better transplant outcomes.
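A minimal sketch of this estimation step with lifelines is below. The DON_/REC_ column-name convention and the data frame are assumptions for illustration; lifelines' penalizer with l1_ratio=0 gives a pure ℓ2 penalty.

```python
# Sketch: fit an l2-penalized CoxPH model and turn the negated coefficients
# into HLA compatibilities as in Eq. (1). `df` is assumed to hold one row per
# transplant with binary HLA type/pair columns (e.g., DON_A1, REC_A1,
# DON_A1_REC_A1) plus `time` and `event` columns.
from lifelines import CoxPHFitter

def fit_compatibility_coefficients(df, penalizer=0.1):
    cph = CoxPHFitter(penalizer=penalizer, l1_ratio=0.0)  # pure l2 penalty
    cph.fit(df, duration_col="time", event_col="event")
    # Negate: positive compatibility should mean lower hazard of graft failure.
    return -cph.params_

def compatibility(coef, donor, recipient):
    delta = coef.get(f"DON_{donor}", 0.0)                 # donor term
    gamma = coef.get(f"REC_{recipient}", 0.0)             # recipient term
    eta = coef.get(f"DON_{donor}_REC_{recipient}", 0.0)   # pair (edge) term
    return delta + gamma + eta                            # Eq. (1)
```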
2) HLA Compatibility Network: The HLA compatibilities can be represented as a network with both node and edge weights. There are two types of nodes in the network: donor nodes and recipient nodes. Each donor node has an unknown true weight δ i , and each recipient node has an unknown true weight γ j . Each edge connects a donor node d i to a recipient node r j and has true weight η ij . In the HLA compatibility network, the true node and edge weights are terms from (1) and are unobserved. We observe only a noisy version of the weights in the form of the estimated CoxPH coefficients. We model the observed node and edge weights as independent realizations of Gaussian random variables in the following manner:
• Observed donor node weight: y_{d_i} ∼ N(δ_i, σ²_{δ_i}).
• Observed recipient node weight: y_{r_j} ∼ N(γ_j, σ²_{γ_j}).
• Observed edge weight: w_ij ∼ N(η_ij, σ²_ij).
We can thus view the observed node and edge weights as estimates of the true node and edge weights, respectively. The estimated CoxPH coefficients also have estimated standard errors, which we can use as estimates for σ_{δ_i}, σ_{γ_j}, σ_ij.
There are 3 separate HLA compatibility networks, one for each of the 3 loci: HLA-A, HLA-B, and HLA-DR. The HLA-A compatibility network estimated using the observed node and edge weights is shown in Figure 2. Notice that node and edge weights can be both positive and negative. All donor-recipient pairs that have been observed in at least 100 transplants contain an edge in the network. The variation in edge weights can be quite large, ranging from roughly −0.3 to 0.3 in A and DR and about −0.4 to 0.4 in B.

IV. HLA LATENT SPACE MODEL

We utilize a latent space model to learn the hidden features underlying the HLA compatibility network. Within the latent space model, donor and recipient node positions are embedded in the latent space, where a donor and recipient with a higher edge weight in the observed HLA network are placed closer together. Thus, the HLA compatibilities can be induced through distances between nodes in the latent space.
A. Model Description
Let z_{d_i} and z_{r_j} denote the positions of the ith donor node and the jth recipient node, respectively, in the latent space. Let N_d and N_r denote the number of donor nodes and recipient nodes, respectively. Unlike the latent distance model of Hoff et al. [8], which models a binary relation between two nodes using a logistic regression, we model the affinity (true edge weight) between a donor and recipient node using the linear relation η_ij = α − β ‖z_{d_i} − z_{r_j}‖₂. Moreover, we add donor and recipient node effect terms δ_i, γ_j to the model as in [11,12] to capture the true node weights. Then, the compatibility between donor node d_i and recipient node r_j is given by

µ_ij = η_ij + δ_i + γ_j = α − β ‖z_{d_i} − z_{r_j}‖₂ + δ_i + γ_j,  (2)
where α and β are scalar parameters. The observed data consists of the observed node and edge weights {y di , y rj , w ij }, modeled as described in Section III-B2.
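As a small sketch of the compatibility surface in Eq. (2) (assuming ‖·‖₂ is the usual Euclidean norm):

```python
# Compatibility matrix of the HLA latent space model, Eq. (2):
# mu_ij = alpha - beta * ||z_di - z_rj||_2 + delta_i + gamma_j.
import numpy as np

def mu_matrix(alpha, beta, Z_d, Z_r, delta, gamma):
    # Pairwise Euclidean distances between all donor/recipient positions:
    # Z_d has shape (N_d, d), Z_r has shape (N_r, d).
    dist = np.linalg.norm(Z_d[:, None, :] - Z_r[None, :, :], axis=-1)
    return alpha - beta * dist + delta[:, None] + gamma[None, :]
```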
B. Estimation Procedure
The proposed model has the set of unknown parameters θ = (Z_d, Z_r, α, β, γ, δ). z_{d_i} ∈ Z_d and z_{r_j} ∈ Z_r represent the latent positions of donor and recipient nodes, respectively, in a d-dimensional Euclidean space. Therefore, Z_d and Z_r are N_d × d and N_r × d matrices, respectively. Each node is associated with a donor node effect δ_i ∈ δ or a recipient node effect γ_j ∈ γ. α and β are the intercept and slope terms of the linear relationship. We constrain the slope β to be positive to keep the node pairs with higher edge weights closer together in the latent space.
Beginning with the likelihood of the latent distance model derived by Hoff et al. [8], we can write the likelihood of our proposed HLA latent space model as
p(W, y_d, y_r | θ) = ∏_{i=1}^{n₁} ∏_{j=1}^{n₂} p(w_ij | z_d, z_r, α, β) ∏_{i=1}^{n₁} p(y_{d_i} | δ_i) ∏_{j=1}^{n₂} p(y_{r_j} | γ_j),
where θ denotes the set of all unknown parameters. The probability distributions for the observations are given by
p(w_ij | z_d, z_r, α, β) = N(w_ij | η_ij, σ²_ij)
p(y_{d_i} | δ_i) = N(y_{d_i} | δ_i, σ²_{δ_i})
p(y_{r_j} | γ_j) = N(y_{r_j} | γ_j, σ²_{γ_j}),
where σ 2 ij denotes the variance of the observed edge weights, and σ 2 δi , σ 2 γj denote the variances of the observed node weights. We use plug-in estimators for these variance parameters using the estimated standard errors for the CoxPH coefficients.
We optimize the log-likelihood of the model parameters using the L-BFGS-B [16] optimizer implemented in SciPy.
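A minimal sketch of this optimization is shown below. The parameter packing, the variance arguments, and the bound used to enforce β > 0 are our assumptions about a reasonable implementation, not the authors' exact code; constants of the Gaussian log-likelihood are dropped since they do not affect the optimizer.

```python
# Sketch of the maximum-likelihood fit with scipy's L-BFGS-B optimizer.
# W (N_d x N_r), y_d, y_r: observed edge/node weights. S2_w, s2_d, s2_r:
# plug-in variances from the CoxPH standard errors.
import numpy as np
from scipy.optimize import minimize

def negative_log_likelihood(theta, W, y_d, y_r, S2_w, s2_d, s2_r, dim):
    N_d, N_r = W.shape
    i = 0
    Z_d = theta[i:i + N_d * dim].reshape(N_d, dim); i += N_d * dim
    Z_r = theta[i:i + N_r * dim].reshape(N_r, dim); i += N_r * dim
    alpha, beta = theta[i], theta[i + 1]; i += 2
    delta = theta[i:i + N_d]; i += N_d
    gamma = theta[i:i + N_r]
    dist = np.linalg.norm(Z_d[:, None, :] - Z_r[None, :, :], axis=-1)
    eta = alpha - beta * dist
    # Sum of Gaussian negative log-likelihoods (additive constants dropped).
    nll = np.sum((W - eta) ** 2 / (2 * S2_w))
    nll += np.sum((y_d - delta) ** 2 / (2 * s2_d))
    nll += np.sum((y_r - gamma) ** 2 / (2 * s2_r))
    return nll

def fit(W, y_d, y_r, S2_w, s2_d, s2_r, dim=2, seed=0):
    N_d, N_r = W.shape
    rng = np.random.default_rng(seed)
    n_z = (N_d + N_r) * dim
    theta0 = rng.normal(scale=0.1, size=n_z + 2 + N_d + N_r)
    # Constrain the slope beta > 0; all other parameters are unbounded.
    bounds = [(None, None)] * n_z + [(None, None), (1e-6, None)]
    bounds += [(None, None)] * (N_d + N_r)
    return minimize(negative_log_likelihood, theta0, method="L-BFGS-B",
                    args=(W, y_d, y_r, S2_w, s2_d, s2_r, dim), bounds=bounds)
```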
Parameter Initialization: We employ a multidimensional scaling (MDS) algorithm [17] as an initialization for the latent node positions. MDS attempts to find a set of positions where each point represents one of the entities, and the distances between points depend on the dissimilarities between each pair of entities. As the HLA compatibility network is a weighted bipartite network, we do not have edge weights between node pairs within the same group: donor-donor or recipient-recipient pairs. Instead, we use the correlation coefficients between nodes based on their edge weights with the other type of node. For example, we compute the correlation coefficient between the edge weights of two donor nodes with all recipient nodes. We then define the dissimilarity matrix with entries given by d_ij = 1 − logistic(w_ij), where w_ij represents the weight or the correlation coefficient between node pair (i, j), depending on whether they are nodes of different types or the same type, respectively. All other parameters are initialized randomly.
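A compact sketch of this initialization using scikit-learn's MDS with precomputed dissimilarities (the block layout of the dissimilarity matrix is our reading of the text):

```python
# Sketch of the MDS initialization for the latent positions. Between-type
# dissimilarities come from the edge weights; within-type dissimilarities
# from correlation coefficients of the edge-weight profiles.
import numpy as np
from sklearn.manifold import MDS

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def mds_init(W, dim=2, seed=0):
    N_d, _ = W.shape
    corr_dd = np.corrcoef(W)      # donor-donor: correlate rows of W
    corr_rr = np.corrcoef(W.T)    # recipient-recipient: correlate columns
    S = np.block([[corr_dd, W], [W.T, corr_rr]])  # similarity-like matrix
    D = 1.0 - logistic(S)         # dissimilarity d_ij = 1 - logistic(w_ij)
    np.fill_diagonal(D, 0.0)
    mds = MDS(n_components=dim, dissimilarity="precomputed", random_state=seed)
    Z = mds.fit_transform(D)
    return Z[:N_d], Z[N_d:]       # donor and recipient latent positions
```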
V. SIMULATION EXPERIMENTS
To make a pilot evaluation of our proposed model, we fit our model to simulated networks. We simulate the HLA bipartite networks with number of nodes N d = 20, N r = 20, parameters α = 1, β = 1, and latent dimension d = 2. The latent positions are sampled independently from a 2-D normal distribution: z d , z r ∼ N (0, 0.5I). Donor and recipient effects are sampled independently from a 1-D Normal distribution: δ d , γ r ∼ N (0, 0.5). We compute the true weights using (2). The noise of the edge weights is controlled by a scalar σ 2 w . Similar to the latent space model of Huang et al. [18], β is not identifiable because it enters multiplicatively into (2). The latent positions Z are also not identifiable in a latent space model, as noted by Hoff et al. [8], and can only be identified up to a rotation. Thus, we set β = 1 in all simulations and use a Procrustes transform to rescale and rotate the latent positions to best match the true latent positions.
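For reference, generating one synthetic network with the stated parameters could look like the following sketch (here the node-effect observations reuse σ_w as the noise scale, which is an assumption; the text only specifies the edge-weight noise):

```python
# Simulate one synthetic bipartite network per the stated setup.
import numpy as np

rng = np.random.default_rng(0)
N_d = N_r = 20
alpha, beta, dim = 1.0, 1.0, 2
sigma_w = 0.15                                   # 1.5 for the high-noise case

Z_d = rng.normal(0.0, np.sqrt(0.5), (N_d, dim))  # latent positions ~ N(0, 0.5 I)
Z_r = rng.normal(0.0, np.sqrt(0.5), (N_r, dim))
delta = rng.normal(0.0, np.sqrt(0.5), N_d)       # donor node effects
gamma = rng.normal(0.0, np.sqrt(0.5), N_r)       # recipient node effects

dist = np.linalg.norm(Z_d[:, None, :] - Z_r[None, :, :], axis=-1)
eta = alpha - beta * dist                        # true edge weights
W = eta + rng.normal(0.0, sigma_w, (N_d, N_r))   # noisy observed edge weights
```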
Low noise simulated network: We first fit the HLA latent space model to the low noise simulated network by setting σ w = 0.15. After fitting the model, we compute the root mean squared error (RMSE) and R 2 values between the actual parameters and estimated parameters, which are shown in Table I. Both RMSE and R 2 indicate an accurate prediction for all the parameters and the edge weights.
High noise simulated network: Since the real kidney transplant data has extremely noisy edge weights, we conduct another experiment by setting a high variance to the simulation where σ w = 1.5. As we increase the noise, the RMSE increases as expected. The R 2 for nodal effect δ and γ still indicate reasonable estimates. The R 2 for w, z d , and z r is about 0.5, indicating moderately accurate estimates, which we consider acceptable for a high noise network.
VI. REAL DATA EXPERIMENTS
A. Model-based Exploratory Analysis
In the HLA latent space model, we expect negative relationships between HLA pair coefficients (edge weights) and distances in Euclidean space. We fix the latent dimension to be d = 2 in three HLA networks (A, B, DR) to visualize the latent positions of the nodes. After fitting the HLA latent space model, the 2-D latent space plots we obtained for all three HLA loci are shown in Figure 3a-c.
From examining the latent positions, we find that pairs of nodes with higher edge weights tend to appear closer together in the latent space and vice versa. For example, in the HLA-A plot, donor A74 and recipient a29 have a high weight of 0.349 and tend to be placed close together in the 2-D plot. Conversely, donor A28 and recipient a33 have a low weight of −0.257 and are placed on opposite sides of the latent space. We draw cyan dashed lines indicating the top 3 highest weight pairs and magenta dashed lines indicating the top 3 lowest weight pairs in all three 2-D HLA latent space plots. These weights and standard errors are also shown in Figure 3d. By comparing the latent space plots and the edge weights, we discover that A and B have a relatively good fit, while DR is slightly worse, with some cyan lines longer than magenta.
B. HLA Compatibility Prediction
Next, we evaluate the ability of our latent space model to predict HLA compatibilities. We split the transplant data 50/50 into training/test folds, and we fit the ℓ2-penalized CoxPH model two times, once each on the training and test folds. We obtain two sets of HLA compatibility data: HLA_1 denotes the first (training) fold, and HLA_2 denotes the second (test) fold. We then fit the proposed HLA latent space model on the HLA_1 data for varying latent dimension d and evaluate prediction accuracy on the HLA_2 data. 1) Evaluation Metrics: We consider 3 measures of prediction accuracy (see the sketch following this list):
• Root mean squared error (RMSE) between the predicted and observed compatibilities on HLA_2.
• Mean log-probability on HLA_2: We use a Gaussian distribution with mean given by the observed compatibilities on HLA_2 and standard deviation given by their standard errors to compute the log probability. This is equivalent to a negated weighted RMSE where higher weight is given to HLA pair coefficients with smaller standard errors.
• Sign prediction accuracy: We threshold the predicted and observed compatibilities at 0 and compute the binary classification accuracy.
We also consider the prediction accuracy for the downstream task of graft survival time prediction, which we describe in Section VI-C.
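A minimal sketch of the three compatibility metrics (w_true and se_true stand for the held-out HLA_2 coefficients and their standard errors; the function names are ours):

```python
# Compatibility-prediction metrics: RMSE, Gaussian mean log-probability,
# and sign prediction accuracy.
import numpy as np
from scipy.stats import norm

def rmse(w_pred, w_true):
    return np.sqrt(np.mean((w_pred - w_true) ** 2))

def mean_log_probability(w_pred, w_true, se_true):
    # Gaussian centered at the observed coefficient with its standard error.
    return np.mean(norm.logpdf(w_pred, loc=w_true, scale=se_true))

def sign_accuracy(w_pred, w_true):
    return np.mean(np.sign(w_pred) == np.sign(w_true))
```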
2) Comparison Baselines: We compare our proposed latent space model against 3 other methods:
• Cox proportional hazards (CoxPH): Directly uses the estimated CoxPH coefficients without any further processing.
• Non-negative matrix tri-factorization (NMTF) [19]: A technique for learning low-dimensional feature representations of relational data that decomposes the given matrix into three smaller matrices rather than two matrices as in standard non-negative matrix factorization. As we have negative values in our HLA network, we first apply a logistic function to transform all compatibilities to (0, 1). We then apply NMTF. Finally, we use a logit function to transform the values back to the original domain with both positive and negative values.
• Principal component analysis (PCA): A classical linear dimensionality reduction technique that projects the given matrix into a lower dimensional space. We reconstruct the compatibilities from the first d principal components.
For all of the methods, we choose the number of dimensions d that returns the best mean log-probability on HLA_2.
3) Results: From Table II, note that the predicted weights using other models to refine the estimated compatibilities are more accurate for each of the 4 metrics and 3 loci compared to directly using the CoxPH coefficients. Among the refinement methods, the LSM and NMTF have similar prediction accuracy, and both significantly outperform PCA. While the NMTF is competitive with our LSM in prediction accuracy, the LSM can also provide useful interpretations through the latent space, as shown in Section VI-A. Finally, notice the difficulty of the HLA compatibility prediction problem: the sign prediction accuracies on all 3 loci are quite low, from 51% to 58%, despite the improvement from using the refinement.
C. Graft Survival Time Prediction
One difficulty of evaluating the HLA compatibility predictions from Section VI-B is that the CoxPH coefficients from both data splits have very high standard errors. As a result, the prediction target (the HLA compatibility on the test set) is extremely noisy. Recall that the initial HLA compatibility estimates are obtained from fitting a CoxPH model tuned to maximize prediction accuracy for the graft survival times. It is unclear whether improved prediction accuracy of the HLA compatibilities on the test set also lead to improved prediction accuracy of graft survival.
We thus propose an end-to-end evaluation of our HLA compatibility estimates by incorporating them into the graft survival prediction, as shown in Figure 1. After fitting the latent space model, we replace the CoxPH coefficients for the HLA types with the negated estimates of the donor and recipient effects, given by −δ_i and −γ_j, respectively. We also replace the CoxPH coefficients for the HLA pairs with the negated estimates of the donor-recipient pair weights −η_ij.
The question we are seeking to answer here is as follows: Does replacing the estimated CoxPH coefficients with the estimated HLA compatibilities from our latent space model improve accuracy on the downstream task-graft survival prediction? To evaluate this, we use the same 50/50 training/test splits from Section VI-B and compare the C-indices on the test splits between the trained CoxPH model and the updated CoxPH coefficients using our latent space model.
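A small sketch of the C-index evaluation with lifelines (the design-matrix and column conventions are assumptions; concordance_index expects a score that is higher for longer survival, hence the negation of the risk score):

```python
# Harrell's C-index on a held-out split with lifelines. The risk score is
# the CoxPH linear predictor with the (possibly LSM-refined) coefficients.
import numpy as np
from lifelines.utils import concordance_index

def c_index(X_test, coefs, times, events):
    risk = X_test @ coefs            # linear predictor omega^T x
    # Higher risk should correspond to shorter survival, so negate it.
    return concordance_index(times, -risk, events)
```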
The answer to this question is yes, as shown by the C-indices in Table II. Notice that our proposed latent space model improves the C-index by about 0.011 compared to directly using the CoxPH coefficients. The NMTF and PCA provide smaller improvements of 0.009 and 0.007, respectively. We note that an improvement of 0.011 in the C-index for graft survival prediction in kidney transplantation is a large improvement! For comparison, using the same dataset and inclusion criteria, Nemati et al. [3] evaluated C-indices using 6 different HLA representations across 3 different survival prediction algorithms and found a maximum improvement of 0.007. Similarly, using the same dataset but slightly different inclusion criteria, Luck et al. [14] achieved a maximum improvement of 0.005. The appreciable improvement in C-index demonstrates the utility of our latent space modeling approach not only for interpreting HLA compatibilities, but also for graft survival prediction.
VII. CONCLUSION
We proposed a model for HLA compatibility in kidney transplantation using an indirectly-observed weighted and signed network. The weights were estimated from a CoxPH model fit to data on over 100,000 transplants, yet they are very noisy, with standard errors sometimes on the same order of magnitude as the mean weight estimates themselves.
Our main contribution was to develop a latent space model for the HLA compatibility network. The latent space model provided both an interpretable visualization of the HLA compatibilities and used the network structure to improve the estimated compatibilities. We found that the latent space model not only resulted in more accurate estimated compatibilities, but also improved graft survival prediction accuracy when the estimates were substituted back into the CoxPH model as coefficients.
Limitations and Future Work: We chose to use a linear model for the HLA compatibilities for simplicity. Both compatibility estimation and survival prediction accuracy could also potentially be improved by incorporating non-linearities into the model. Furthermore, if we consider improving survival prediction accuracy as our objective, then our proposed approach can be viewed as a two-stage process: first estimate weights by maximizing the CoxPH partial log-likelihood, and then refine the weight estimates by maximizing the latent space model log-likelihood. The improvement in survival prediction accuracy from this two-stage process suggests that accuracy could potentially be further improved by jointly maximizing both objective functions. Indeed, this is an interesting avenue for future work that we are currently exploring.
Fig. 2. Heat maps illustrating node and edge weights in the HLA compatibility networks for HLA-A. Rows and columns are ordered by their positions from fitting a 1-D latent space model to the network. Entries along the diagonal should be more positive (red), as these nodes are closer in the latent space, and entries should turn more negative (blue) further away from the diagonal.
TABLE I
Error (RMSE) between estimated and actual parameters from fitting the HLA latent space model to 15 simulated networks. Mean ± standard error is shown for each parameter.

Parameter   Low noise       High noise
w           0.124 ± 0.001   1.264 ± 0.014
z_d         0.014 ± 0.002   0.111 ± 0.007
z_r         0.024 ± 0.010   0.101 ± 0.006
δ           0.105 ± 0.018   0.173 ± 0.015
γ           0.097 ± 0.015   0.170 ± 0.012
α           0.047 ± 0.068   0.779 ± 0.107
TABLE II
Evaluation metrics for HLA compatibility and graft survival time prediction. Our proposed LSM performs competitively on HLA compatibility prediction on all 3 loci and achieves the best C-index for graft survival time prediction.

Metric           Model          HLA-A   HLA-B   HLA-DR
RMSE             CoxPH          0.159   0.191   0.168
                 CoxPH + LSM    0.110   0.139   0.129
                 CoxPH + NMTF   0.112   0.140   0.122
                 CoxPH + PCA    0.124   0.149   0.138
Mean log-prob.   CoxPH          0.609   0.374   0.541
                 CoxPH + LSM    0.911   0.657   0.800
                 CoxPH + NMTF   0.936   0.659   0.885
                 CoxPH + PCA    0.887   0.606   0.776
Sign prediction  CoxPH          0.535   0.515   0.510
                 CoxPH + LSM    0.542   0.576   0.552
                 CoxPH + NMTF   0.542   0.522   0.573
                 CoxPH + PCA    0.535   0.528   0.564
C-index          CoxPH          0.614 (single value across all loci)
                 CoxPH + LSM    0.625
                 CoxPH + NMTF   0.623
                 CoxPH + PCA    0.621
The estimated hazard ratios for these rarely occurring HLA types and pairs have extremely high standard errors, and in some cases, including them creates instabilities in estimating the CoxPH coefficients.
ACKNOWLEDGMENT

The authors thank Robert Warton, Dulat Bekbolsynov, and Stanislaw Stepkowski for their assistance with the HLA data. The research reported in this publication was supported by the National Library of Medicine of the National Institutes of Health under Award Number R01LM013311 as part of the NSF/NLM Generalizable Data Science Methods for Biomedical Research Program. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The data reported here have been supplied by the Hennepin Healthcare Research Institute (HHRI) as the contractor for the Scientific Registry of Transplant Recipients (SRTR). The interpretation and reporting of these data are the responsibility of the author(s) and in no way should be seen as an official policy of or interpretation by the SRTR or the U.S. Government. Notably, the principles of the Helsinki Declaration were followed.
REFERENCES
[1] I. I. N. Doxiadis, J. M. A. Smits, G. M. Th. Schreuder, G. G. Persijn, H. C. van Houwelingen, J. J. van Rood, and F. H. J. Claas, "Association between specific HLA combinations and probability of kidney allograft loss: the taboo concept," The Lancet, vol. 348, no. 9031, pp. 850-853, 1996.
[2] F. H. J. Claas, M. D. Witvliet, R. J. Duquesnoy, G. G. Persijn, and I. I. N. Doxiadis, "The acceptable mismatch program as a fast tool for highly sensitized patients awaiting a cadaveric kidney transplantation: short waiting time and excellent graft outcome," Transplantation, vol. 78, no. 2, pp. 190-193, 2004.
[3] M. Nemati, H. Zhang, M. Sloma, D. Bekbolsynov, H. Wang, S. Stepkowski, and K. S. Xu, "Predicting kidney transplant survival using multiple feature representations for HLAs," in Proc. 19th Int. Conf. Artif. Intell. Med., 2021, pp. 51-60.
[4] A. Konvalinka and K. Tinckam, "Utility of HLA antibody testing in kidney transplantation," J. Am. Soc. Nephrol., vol. 26, no. 7, pp. 1489-1502, 2015.
[5] U. Shankarkumar, "The human leukocyte antigen (HLA) system," Int. J. Hum. Genet., vol. 4, no. 2, pp. 91-103, 2004.
[6] V. Kosmoliaptsis, D. Mallon, Y. Chen, E. M. Bolton, J. A. Bradley, and C. J. Taylor, "Alloantibody responses after renal transplant failure can be better predicted by donor-recipient HLA amino acid sequence and physicochemical disparities than conventional HLA matching," Am. J. Transplant., vol. 16, no. 7, pp. 2139-2147, 2016.
[7] P. Wang, Y. Li, and C. K. Reddy, "Machine learning for survival analysis: A survey," ACM Comput. Surv., vol. 51, no. 6, pp. 1-36, 2019.
[8] P. D. Hoff, A. E. Raftery, and M. S. Handcock, "Latent space approaches to social network analysis," J. Am. Stat. Assoc., vol. 97, no. 460, pp. 1090-1098, 2002.
[9] N. Friel, R. Rastelli, J. Wyse, and A. E. Raftery, "Interlocking directorates in Irish companies using a latent space model for bipartite networks," Proc. Natl. Acad. Sci., vol. 113, no. 24, pp. 6629-6634, 2016.
[10] D. K. Sewell and Y. Chen, "Latent space models for dynamic networks," J. Am. Stat. Assoc., vol. 110, no. 512, pp. 1646-1657, 2015.
[11] S. S. Wang, S. Paul, and P. De Boeck, "Joint latent space model for social networks with multivariate attributes," arXiv preprint arXiv:1910.12128, 2019.
[12] P. D. Hoff, "Bilinear mixed-effects models for dyadic data," J. Am. Stat. Assoc., vol. 100, no. 469, pp. 286-295, 2005.
[13] E. Mark, D. Goldsman, B. Gurbaxani, P. Keskinocak, and J. Sokol, "Using machine learning and an ensemble of methods to predict kidney transplant survival," PLoS ONE, vol. 14, no. 1, p. e0209068, 2019.
[14] M. Luck, T. Sylvain, H. Cardinal, A. Lodi, and Y. Bengio, "Deep learning for patient-specific kidney graft survival analysis," arXiv preprint arXiv:1705.10245, 2017.
[15] F. E. Harrell, R. M. Califf, D. B. Pryor, K. L. Lee, and R. A. Rosati, "Evaluating the yield of medical tests," JAMA, vol. 247, no. 18, pp. 2543-2546, 1982.
[16] R. H. Byrd, P. Lu, J. Nocedal, and C. Zhu, "A limited memory algorithm for bound constrained optimization," SIAM J. Sci. Comput., vol. 16, no. 5, pp. 1190-1208, 1995.
[17] M. A. A. Cox and T. F. Cox, "Multidimensional scaling," in Handbook of Data Visualization. Springer, 2008, pp. 315-347.
[18] Z. Huang, H. Soliman, S. Paul, and K. S. Xu, "A mutually exciting latent space Hawkes process model for continuous-time networks," in Proc. 38th Conf. Uncertain. Artif. Intell., 2022, pp. 863-873.
[19] T. Li, Y. Zhang, and V. Sindhwani, "A non-negative matrix tri-factorization approach to sentiment classification with lexical prior knowledge," in Proc. Jt. Conf. 47th Annu. Meet. ACL Int. Jt. Conf. Nat. Lang. Process. AFNLP, 2009, pp. 244-252.
| [] |
[
"Contrastive Counterfactual Visual Explanations With Overdetermination",
"Contrastive Counterfactual Visual Explanations With Overdetermination"
] | [
"Adam White ",
"Kwun Ho Ngan ",
"James Phelan ",
"Kevin Ryan ",
"\nCity Data Science Institute -City\nCity Data Science Institute -City\nUniversity of London\nUK\n",
"\nCity Data Science Institute -City\nUniversity of London\nUK\n",
"\nSAMAN SADEGHI AFGEH, City Data Science Institute -City\nUniversity of London\nUK\n",
"\nCity Data Science Institute -City\nUniversity of London\nUK\n",
"\nCONSTANTINO CARLOS REYES-ALDASORO, City Data Science Institute -City\nUniversity of London\nUK\n",
"\nARTUR D'AVILA GARCEZ, City Data Science Institute -City\nUniversity of London\nUK\n",
"\nUniversity of London\nUK\n"
] | [
"City Data Science Institute -City\nCity Data Science Institute -City\nUniversity of London\nUK",
"City Data Science Institute -City\nUniversity of London\nUK",
"SAMAN SADEGHI AFGEH, City Data Science Institute -City\nUniversity of London\nUK",
"City Data Science Institute -City\nUniversity of London\nUK",
"CONSTANTINO CARLOS REYES-ALDASORO, City Data Science Institute -City\nUniversity of London\nUK",
"ARTUR D'AVILA GARCEZ, City Data Science Institute -City\nUniversity of London\nUK",
"University of London\nUK"
] | [] | A novel explainable AI method called CLEAR Image is introduced in this paper. CLEAR Image is based on the view that a satisfactory explanation should be contrastive, counterfactual and measurable. CLEAR Image explains an image's classification probability by contrasting the image with a corresponding image generated automatically via adversarial learning. This enables both salient segmentation and perturbations that faithfully determine each segment's importance. CLEAR Image uses regression to determine a causal equation describing a classifier's local input-output behaviour. Counterfactuals are also identified, that are supported by the causal equation. Finally, CLEAR Image measures the fidelity of its explanation against the classifier. CLEAR Image was successfully applied to a medical imaging case study where it outperformed methods such as Grad-CAM and LIME by an average of 27% using a novel pointing game metric. CLEAR Image excels in identifying cases of 'causal overdetermination' where there are multiple patches in an image, any one of which is sufficient by itself to cause the classification probability to be close to one.Data-driven AI for Computer Vision can achieve high levels of predictive accuracy, yet the rationale behind these predictions is often opaque. This paper proposes a novel explainable AI (XAI) method called CLEAR Image that seeks to reveal the causal structure implicitly modelled by an AI system, where the causes are an image's segments and the effect is the AI system's classification probability. The explanations are for single predictions and describe the local input-output behaviour of the classifier. CLEAR Image is based on the philosopher James Woodward's seminal analysis of causal explanation[32], which develops Judea Pearl's manipulationist account of causation[18]. Together they constitute the dominant accounts of explanation in the philosophy of science. We argue that a successful explanation for an AI system should be contrastive, counterfactual and measurable.According to Woodward, to explain an event E is "to provide information about the factors on which it depends and exhibit how it depends on those factors". This requires a causal equation to describe the causal structure responsible for generating the event. The causal equation must support a set of counterfactuals; a counterfactual specifies a possible world where, contrary to the facts, a desired outcome occurs. The counterfactuals serve to illustrate the causal structure, and to answer a set of 'what-if-things-had-been-different' questions. In XAI, counterfactuals usually state minimal changes needed to achieve the desired outcome. | 10.1007/s10994-023-06333-w | [
"https://arxiv.org/pdf/2106.14556v3.pdf"
] | 235,658,630 | 2106.14556 | 57e004990264d8877817ec1a648345097fa43cd4 |
Contrastive Counterfactual Visual Explanations With Overdetermination
Adam White
Kwun Ho Ngan
James Phelan
Kevin Ryan
Saman Sadeghi Afgeh
Constantino Carlos Reyes-Aldasoro
Artur d'Avila Garcez
City Data Science Institute, City, University of London, UK
Contrastive Counterfactual Visual Explanations With Overdetermination
CCS Concepts: • Computing methodologies → Computer vision; Interest point and salient region detections; Image segmentation; Causal reasoning and diagnostics; Artificial intelligence.
Additional Key Words and Phrases: Explainable AI, Counterfactuals
A novel explainable AI method called CLEAR Image is introduced in this paper. CLEAR Image is based on the view that a satisfactory explanation should be contrastive, counterfactual and measurable. CLEAR Image explains an image's classification probability by contrasting the image with a corresponding image generated automatically via adversarial learning. This enables both salient segmentation and perturbations that faithfully determine each segment's importance. CLEAR Image uses regression to determine a causal equation describing a classifier's local input-output behaviour. Counterfactuals are also identified that are supported by the causal equation. Finally, CLEAR Image measures the fidelity of its explanation against the classifier. CLEAR Image was successfully applied to a medical imaging case study where it outperformed methods such as Grad-CAM and LIME by an average of 27% using a novel pointing game metric. CLEAR Image excels in identifying cases of 'causal overdetermination', where there are multiple patches in an image, any one of which is sufficient by itself to cause the classification probability to be close to one.

INTRODUCTION

Data-driven AI for Computer Vision can achieve high levels of predictive accuracy, yet the rationale behind these predictions is often opaque. This paper proposes a novel explainable AI (XAI) method called CLEAR Image that seeks to reveal the causal structure implicitly modelled by an AI system, where the causes are an image's segments and the effect is the AI system's classification probability. The explanations are for single predictions and describe the local input-output behaviour of the classifier. CLEAR Image is based on the philosopher James Woodward's seminal analysis of causal explanation [32], which develops Judea Pearl's manipulationist account of causation [18]. Together they constitute the dominant accounts of explanation in the philosophy of science. We argue that a successful explanation for an AI system should be contrastive, counterfactual and measurable.

According to Woodward, to explain an event E is "to provide information about the factors on which it depends and exhibit how it depends on those factors". This requires a causal equation to describe the causal structure responsible for generating the event. The causal equation must support a set of counterfactuals; a counterfactual specifies a possible world where, contrary to the facts, a desired outcome occurs. The counterfactuals serve to illustrate the causal structure, and to answer a set of 'what-if-things-had-been-different' questions. In XAI, counterfactuals usually state minimal changes needed to achieve the desired outcome.
A contrastive explanation seeks to answer the question 'Why E rather than F?' F comes from a contrast class of events that were alternatives to E, but which did not happen [26]. An explanation identifies the causes that led to E occurring rather than F, even though the relevant contrast class to which F belongs is often not explicitly conveyed.
For Woodward, all causal claims are counterfactual and contrastive: 'to causally explain an outcome is always to explain why it, rather than some alternative, occurred'. Woodward's theory of explanation stands in opposition to the multiple XAI methods that claim to provide counterfactual explanations [28], but which only provide statements of single or multiple counterfactuals. As this paper will illustrate, counterfactuals will only provide incomplete explanations without a supporting causal equation.
CLEAR Image excels at identifying cases of 'causal overdetermination'. The causal overdetermination of an event occurs when two or more sufficient causes of that event occur. A standard example from the philosophy literature is of soldiers in a firing squad simultaneously shooting a prisoner, with each shot being sufficient to kill the prisoner. The death of the prisoner is causally overdetermined. This causal structure may well be ubiquitous in learning systems.
For example, there may be multiple patches in a medical image, any one of which is sufficient by itself to cause a classification probability close to one. To the best of our knowledge, CLEAR Image is the first XAI method capable of identifying causal overdetermination.
CLEAR Image explains an image's classification probability by contrasting the image with a corresponding GAN generated image. Previously, XAI use of GANs has just focused on their difference masks, which are created by subtracting the original image from its corresponding GAN generated image. However, as we will illustrate, a difference mask should only be a starting point for segmentation and explanation. This is because the segments identified from a difference mask can vary significantly in their relevance to a classification; furthermore, other segments critical to the classification can often be absent from the mask. Therefore, CLEAR Image uses a novel segmentation method that combines information from the difference mask, the original image and the classifier's behaviour. After completing its segmentation, CLEAR Image identifies counterfactuals and then follows a process of perturbation, whereby segments of the original image are changed, and the change in outcome is observed to produce a regression equation. The regression equation is used to determine the contribution each segment makes to the classification probability. As we will show, the explanations provided by leading XAI methods such as LIME and Grad-CAM often cannot be trusted. CLEAR Image, therefore, measures the fidelity of its explanation against the classifier, where fidelity refers to how closely an XAI method is able to mimic a classifier's behaviour.
CLEAR Image was evaluated in two case studies, both involving overdetermination. The first uses a multifaceted synthetic dataset, and the second uses chest X-rays. CLEAR Image outperformed XAI methods such as LIME and Grad-CAM by an average of 31% on the synthetic dataset and 27% on the X-ray dataset based on a pointing game metric defined in this paper for the case of multiple targets. Our code will be made available on GitHub.
The contribution of this paper is four-fold. We introduce an XAI method that:
• generates contrastive, counterfactual and measurable explanations, outperforming established XAI methods in challenging image domains;
• uses a GAN-generated contrast image in determining a causal equation, segment importance scores and counterfactuals;
• offers novel segmentation and pointing game algorithms for the evaluation of image explanations.
• is capable of identifying causal overdetermination, i.e. the multiple sufficient causes for an image classification.
CLEAR Image is a substantial development of an earlier XAI method, CLEAR (Counterfactual Local Explanations viA Regression), which only applies to tabular data [30]. New functionality includes: (i) the novel segmentation algorithm; (ii) generating perturbed images by infilling from the corresponding GAN image; (iii) a novel pointing game suitable for images with multiple targets; (iv) identification of sufficient causes and overdetermination; (v) measurement of fidelity errors for counterfactuals involving categorical features.
The remainder of the paper is organised as follows: Section 2 defines the relevant notation and background. Section 3 discusses the immediate related work. Section 4 introduces the CLEAR Image method and algorithms. Section 5 details the experimental setup and discusses the results. Section 6 concludes the paper and indicates directions for future work.
BACKGROUND
Key Notation
This paper adopts the following notation: let the instance x be an image, and m be a machine learning system that maps x to class label l with probability y. Let x be partitioned into segments (regions) {s1, . . . , sn}. Let any variable with a prime subscript ′ be the variable from the GAN-generated image, e.g. x′ is the GAN generated image derived from x, and m maps x′ to class l with probability y′.
Explanation by Perturbation
Methods such as Occlusion [35], Extremal Perturbation [8], FIDO [3], LIME [19] and Kernel SHAP [15] use perturbation to evaluate which segments of an image are most responsible for x's classification probability y. The underlying idea is that the contribution that a segment s_i makes to y can be determined by substituting it with an uninformative segment s′_i, where s′_i may be either grey, black or blurred [35, 8, 19] or in-painted without regard to any contrast class [3]. There are three key problems with using perturbed images to explain a classification:
(1) A satisfactory explanation must be contrastive; it must answer 'Why E rather than F?' None of the above methods does this. Their contrasts are instead images of uninformative segments.
(2) The substitution may fail to identify the contribution that s_i makes to y. For example, replacing s_i with black pixels can take the entire image beyond the classifier's training distribution. By contrast, blurring or uninformative in-painting might result in s′_i being too similar to s_i, resulting in the contribution of s_i being underestimated.
(3) A segmentation needs to be relevant to its explanatory question. Current XAI perturbation approaches produce radically different segmentations. FIDO and Extremal Perturbation identify 'optimal' segments that, when substituted by an uninformative segment, maximally affect the classification probability; by contrast, LIME uses a texture/intensity/colour algorithm (e.g. Quickshift [27]).
CLEAR Image uses GAN generated images to address each of these problems: (i) its foil is a GAN generated image x′ belonging to a contrast class selected by the user; (ii) inpainting with segments derived from x′ enables better estimation of each segment's contribution to the difference between probabilities y and y′; (iii) the differences between x and x′ are used to guide the segmentation.
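As an illustration of point (ii), a perturbed image can be obtained by copying the pixels of the chosen segments across from the contrast image. The following is a minimal sketch, assuming images are NumPy arrays and the segmentation is given as an integer label map; the helper name is hypothetical rather than CLEAR Image's actual implementation:

import numpy as np

def perturb(x, x_contrast, segment_map, segments_to_swap):
    # Replace the chosen segments of x with the corresponding pixels of
    # the GAN contrast image x'. segment_map assigns each pixel a segment id.
    out = x.copy()
    mask = np.isin(segment_map, list(segments_to_swap))
    out[mask] = x_contrast[mask]
    return out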
RELATED WORK
The XAI methods most relevant to this paper can be broadly grouped into four types:
(i) Counterfactual methods: Wachter et al. [29] first proposed using counterfactuals as explanations of single machine learning predictions. Many XAI methods have attempted to generate 'optimal' counterfactuals; for example, [12] review sixty counterfactual methods. The algorithms differ in their constraints and the attributes referenced in their loss functions [28]. Desiderata often include that a counterfactual is: (1) actionable -e.g. does not recommend that a person reduces their age, (2) near to the original observation -common measures include Manhattan distance, L1 norm and L2 norm, (3) sparse -only changing the values of a small number of features, (4) plausible -e.g. the counterfactual must correspond to a high density part of the training data, (5) efficient to compute. Karimi et al. [13] argue that these methods are likely to identify counterfactuals that are either suboptimal or infeasible in terms of their actionability. This is because they do not take account of the causal structure that determines the consequences of the person's actions. The underlying problem is that unless all of the person's features are causally independent of each other, then when the person acts to change the value of one feature, other downstream dependent features may also change. However, this criticism does not apply to CLEAR Image. CLEAR Image's purpose is to explain the local input-output behaviour of a classifier, and the role of its counterfactuals is (i) to illustrate the classifier's causal structure (at the level of how much each segment can cause the classification probability to change) and (ii) to answer contrastive questions. Hence if the explanatory question is "why is this image classified as showing a zebra and not a horse?", CLEAR Image might highlight the stripes on the animal as being a cause of the classification. Whilst this might be a satisfactory explanation of the classification, it is, of course, not actionable. In this paper, we provide a different criticism of counterfactual methods: that they fail to provide satisfactory explanations because they do not provide a causal equation describing the local behaviour of the classifier they are meant to explain. Without this, they cannot identify:
the relative importance of different features, how the features are taken to interact with each other, or the functional forms that the classifier is, in effect, applying to each feature. They will also fail to identify cases of overdetermination.
(ii) Gradient-based methods: These provide saliency maps by backpropagating an error signal from a neural network's output to either the input image or an intermediate layer. Simonyan, Vedaldi, and Zisserman [24] use the derivative of a class score for the image to assign an importance score to each pixel. Kumar, Wong, and Taylor's CLass-Enhanced Attention Response (CLEAR) [14] uses backpropagation to visualise the most dominant classes; this should not be confused with our method. A second approach modifies the backpropagation algorithm to produce sharper saliency maps, e.g. by suppressing the negative flow of gradients. Prominent examples of this approach [25, 33] have been found to be invariant to network re-parameterisation or the class predicted [1, 16]. A third approach [22, 4] uses the product of gradients and activations starting from a late layer. In Grad-CAM [22], the product is clamped to only highlight positive influences on class scores.
(iii) Perturbation based methods: LIME and Kernel SHAP generate a dataset of perturbed images, which feeds into a regression model, which then calculates segment importance scores (LIME) or Shapley Values (Kernel SHAP). These bear some similarity to CLEAR Image but key differences include: they do not use a GAN generated image, do not identify counterfactuals and do not report fidelity. Extremal Perturbation uses gradient descent to determine an optimal perturbed version of an image that, for a fixed area, has the maximal effect on a network's output whilst guaranteeing that the selected segments are smooth. FIDO uses variational Bernoulli drop to find a minimal set of segments that would change an image's class. In contrast to LIME, Kernel SHAP and Extremal Perturbation, FIDO uses a GAN to in-paint segments with 'plausible alternative values'; however, these values are not generated to belong to a chosen contrast class. Furthermore, segment importance scores are not produced.
(iv) GAN difference methods: Generative adversarial networks (GANs) [9] have been widely applied to synthetic image generation. Image translation through direct mapping of the original image to its target class has gained popularity, for example CycleGAN [36] and StarGAN [5]. StarGAN-V2 [6] introduced a style vector for conditional image translation and produced high quality images over a diverse set of target conditions. These models, however, do not keep the translation minimal and make modifications even for intra-domain translation. Fixed-point GAN [23] introduced an identity loss penalising any deviation of the image during intra-domain translation. This aims to enhance visual similarity with the original image. DeScarGAN [31] adopts this loss function in its own GAN architecture and has outperformed Fixed-point GAN in their case study for chest X-ray pathology identification and localisation.
CLEAR Image builds on the strengths of the above XAI methods but also addresses key shortcomings. As already outlined, it uses a 'GAN-augmented' segmentation algorithm rather than just a difference mask. Next, methods such as LIME and Kernel SHAP assume that a classification probability is a simple linear addition of its causes. This is incorrect for cases of causal overdetermination, and CLEAR Image, therefore, uses a sigmoid function (see section 4.2). Finally, our experiments confirm that prominent XAI methods often fail to identify the most relevant regions of an image;
CLEAR Image, therefore, measures the fidelity of its explanations.
THE CLEAR IMAGE METHOD
CLEAR Image is a model-agnostic XAI method that explains the classification of an image made by any classifier (see Figure 1). It requires both an image x and a contrast image x′ generated by a GAN. CLEAR Image segments x into {s1, . . . , sn} ∈ S and then applies the same segmentation to x′, creating {s′1, . . . , s′n} ∈ S′. It then determines the contributions that different subsets of S make to y by substituting them with the corresponding segments of S′. CLEAR Image is GAN agnostic, allowing the user to choose the GAN architecture most suitable to their project. A set of 'image-counterfactuals' {c1, . . . , ck} is also identified. Figures 1 to 5 provide a running example of the CLEAR Image pipeline, using the same X-ray taken from the CheXpert dataset.

Fig. 1. The CLEAR Image pipeline. The GAN produces a contrast image. CLEAR Image explains the classification probability by comparing the input image with its contrast image. It produces a regression equation that measures segment scores, reports fidelity and identifies cases of overdetermination. In this example, class l is 'pleural effusion' and its contrast class l′ is 'healthy'. Using our DenseNet model, the X-ray shown in this figure had a probability of belonging to l equal to 1, and its contrast image had a probability of belonging to l equal to 0.
GAN-Based Image Generation
To generate contrastive images, StarGAN-V2 [6] and DeScarGAN [31] have been deployed as the network architectures for our two case studies, the first using CheXpert, the second using a synthetic dataset. The use of these established GAN networks demonstrates how the generated contrastive images can aid in the overall CLEAR Image pipeline.
Default training hyperparameters were applied unless otherwise stated. Details of model training and hyperparameters can be found in Appendix B. The source image was used as input for the Style Encoder instead of a specific reference image for StarGAN-V2. This ensures the generated style mimics that of the input source images. StarGAN-V2 is also not locally constrained (i.e. the network will modify all pixels in an image related to the targeted class, which will include irrelevant spurious regions of the image). A post-generation lung space segmentation step using a pre-trained U-Net model [20] was therefore implemented. The original diseased lung space was replaced with the generated image, with a Gaussian Blur process to fuse the edge effect (see Figure 2). This confines the feature identification space used by CLEAR Image to the lung space. It is an advantage of the CLEAR Image pipeline that it is possible to use pre-processing to focus the explanation on the relevant parts of x. As we will show, XAI methods that do not take a contrast image as part of their input can sometimes identify parts of x, known to be irrelevant, as being responsible for the classification.

Definition 4.1. An image-counterfactual from l to l′ is an image c resulting from a change in the values of one or more segments of x to their corresponding values in x′ such that class(m(x)) = l, class(m(c)) = l′ and l ≠ l′. The change is minimal in that if any of the changed segments had remained at its original value, then class(m(c)) = class(m(x)).

CLEAR Image uses a regression equation to quantify the contribution that the individual segments make to y. It then measures the fidelity of its regression by comparing the classification probability resulting from each image-counterfactual with an estimate obtained from the regression equation.

Definition 4.2. Counterfactual-regression fidelity error. Let g(c_j) denote the application of the CLEAR Image regression equation given image-counterfactual c_j. Counterfactual-regression fidelity error = |g(c_j) − m(c_j)|.
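The replace-and-blend post-processing described at the start of this subsection (Figure 2) can be sketched as follows, assuming NumPy image arrays and a binary lung mask from the U-Net; the kernel size and helper name are illustrative assumptions rather than values from our pipeline:

import cv2
import numpy as np

def blend_lung_region(original, gan_healthy, lung_mask, blur_ksize=15):
    # Feather the binary mask with a Gaussian blur so the pasted healthy
    # lung region fuses smoothly into the original diseased X-ray.
    soft = cv2.GaussianBlur(lung_mask.astype(np.float32), (blur_ksize, blur_ksize), 0)
    if original.ndim == 3:
        soft = soft[..., None]
    blended = soft * gan_healthy.astype(np.float32) + (1.0 - soft) * original.astype(np.float32)
    return blended.astype(original.dtype)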
Generating Contrastive Counterfactual Explanations
The following steps generate an explanation of prediction y for image x:
(1) GAN-Augmented segmentation algorithm. This algorithm is based on our findings (in Section 5.4) that the segments (S_high) determined by analysing high intensity differences between an image x and its corresponding GAN generated image x′ will often miss regions of x that are important to explaining x's classification. It is therefore necessary to supplement the segments S_high with a second set of segments, S_low, confined to those regions of x corresponding to low intensity differences between x and x′. S_low is created based on similar textures/intensities/colours solely within x.
Pseudocode for our algorithm is shown in Algorithm 1. First, high and low thresholds (t_high and t_low) are determined by comparing the differences between x and x′ using multi-Otsu; alternatively the thresholds can be user-specified. t_high is then used to generate a set of segments, S_high. The supplementary segments, S_low, are determined by applying the low threshold, t_low, to the low intensity regions and then applying a sequence of connected component labelling, erosion and Felzenszwalb [7]. The combined set of segments, S_high and S_low, is checked to see if any individual segment is an image-counterfactual. If none is found, an iterative process is applied to gradually increase the minimum segment size parameter. The final set of segments (S, S′) is subsequently created using the combined set (S_high, S_low), as shown in Figure 3.

(2) Determine x's image-counterfactuals. A dataset of perturbed images is created by selectively replacing segments of x with the corresponding segments of x′ (see Figure 4). A separate image is created for every combination in which either 1, 2, 3, or 4 segments are replaced. Each perturbed image is then passed through m to determine its classification probability. All image-counterfactuals involving changes in up to four segments are then identified.
(The maximum number of perturbed segments in a counterfactual is a user parameter; the decision to set it to 4 in our experiments was made as we found counterfactuals involving 5+ segments to have little additional explanatory value.)
(3) Perform a stepwise logistic regression. A tabular dataset is created by using a {0,1} representation of the segments in each perturbed image from step 2. Consider a perturbed image x_p. This will be composed of a combination of segments s_i from the original image x and segments s′_i from the GAN contrast image x′. In order to represent x_p in tabular form, each segment of x_p that is from x is represented as a 1 and each segment of x_p that is from x′ is represented as a 0. For example, if x_p consisted solely of {s′1, s2, s3, s4}, and had a classification probability from m equal to 0.75 of being 'pleural effusion', then this would be represented in tabular form as {0, 1, 1, 1, 0.75}. The table of representation vectors can then be used to generate a weighted logistic regression in which those perturbed images that are image-counterfactuals are given a high weighting and act as soft constraints (a code sketch of this encoding follows the step list below).
(4) Calculate segment importance scores. These are the regression coefficients for each segment from step 3.

(5) Identify cases of causal overdetermination (see below).

(6) Measure the fidelity of the regression by calculating fidelity errors (see Figure 5) and goodness of fit statistics.

(7) Iterate to the best explanation. Because CLEAR Image produces fidelity statistics, its parameters can be changed to achieve a better trade-off between interpretability and fidelity. For example, increasing the number of segments in the regression equation and including interaction terms might each increase the fidelity of an explanation but reduce its interpretability.

Fig. 5. Extracts from a CLEAR Image report. The report identifies that substituting both segments 4 and 11 with the corresponding segments from its contrast image flips the classification probability to 'healthy'. According to the logistic regression equation these substitutions would change the probability of the X-ray being classified as 'pleural effusion' to 0.44. However, when these segments are actually substituted and passed through the classifier, the probability changed to 0.43, hence the fidelity error is 0.01. CLEAR Image also identifies that substituting segments 3 and 11 also creates an image-counterfactual. Note that unlike methods such as GradCAM, CLEAR Image is able to identify segments that have a negative impact on a classification probability.
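The tabular encoding and weighted regression of step (3) can be sketched in Python as follows; the 0.5 class cut-off and the counterfactual weight of 10 are illustrative assumptions, not CLEAR Image's tuned settings:

import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_surrogate(rows, probs, counterfactual_flags):
    # rows: (n_images, n_segments) binary matrix, 1 = segment from x,
    #       0 = segment infilled from x'.
    # probs: classifier probability m(x_p) for each perturbed image.
    # counterfactual_flags: True where the image flipped the class; these
    #       rows get a high sample weight and so act as soft constraints.
    y = (np.asarray(probs) >= 0.5).astype(int)      # assumes both classes occur
    w = np.where(counterfactual_flags, 10.0, 1.0)
    reg = LogisticRegression().fit(rows, y, sample_weight=w)
    return reg.intercept_[0], reg.coef_[0]          # intercept, segment scores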
Algorithm 1: GAN-Augmented Segmentation
input : x - input image, x′ - contrast image, m - AI classifier
t_high, t_low ← Determine_Thresholds(x, x′)
d_high, d_low ← Create_Difference_masks(x, x′, t_high, t_low)
S_high ← Create_high_intensity_segments(d_high, min_segment_size)
// Connected components, erosion and Felzenszwalb are now used to create the low intensity segments
S_low ← Create_low_intensity_segments(d_low, min_segment_size)
// If there are no counterfactuals then increase the size of the segments
while |C| = 0 and min_segment_size < max_segment_size do
    min_segment_size += segment_size_step
    S_low ← Create_low_intensity_segments(d_low, x, min_segment_size)
    D ← Create_Perturbed_Data(x)
    C ← Find_Counterfactuals(D, S_high, S_low)
S, S′ ← Add_Segments(S_high, S_low, x, x′)
return S, S′
For CLEAR Image an explanation is a tuple <I, C, R, O, E>, where I are segment importance scores, C are image-counterfactuals, R is a regression equation, O are the causes resulting in overdetermination, and E are fidelity errors.
The regression equation is a causal equation with each independent variable (each referring to whether a particular segment is from x or x′) being a direct cause of the classification probability. Figure 5 shows an extract from a CLEAR Image report. Pseudocode summarising how CLEAR Image generates an explanation is provided in Algorithm 2. The causal overdetermination of an effect occurs when multiple sufficient causes of that effect occur. By default, CLEAR Image only reports sufficient causes which each consist of a single segment belonging to x. Substituting a sufficient cause for its corresponding member in x′ guarantees the effect. In the philosophy of science, it is generally taken that for an effect to be classified as overdetermined, it should be narrowly defined, such that all the sufficient causes have the same, or very nearly the same, impact [17]. Hence for the case studies, the effect is defined as P(x ∈ l) > 0.99, though the user may choose a different probability threshold. A sufficient cause changes a GAN generated healthy image to a diseased image. This is in the opposite direction to CLEAR Image's counterfactuals, whose perturbed segments flip the classification to 'healthy'. Sufficient causes can be read off from CLEAR Image's regression equation. Using the example in Figure 6 with the logistic formula, a classification probability of y > 0.99 requires w·x > 4.6. The GAN healthy image corresponds to all the binary segment variables being equal to 0. Hence, w·x is equal to the intercept value of -4.9, giving a probability of (1 + e^4.9)^-1 ≈ 0.01. If a segment s′_i is now replaced by s_i, the corresponding binary variable changes to 1. Hence if segment 9 is infilled, then Seg09 = 1 and w·x = 6.8 (i.e. 11.7 − 4.9). Similarly, infilling just segment 11 will make w·x > 4.6. Either substitution is sufficient to guarantee w·x > 4.6, irrespective of any other changes that could be made to the values of the other segment variables. Hence segments 9 and 11 are each a sufficient cause leading to overdetermination.

Fig. 6. Overdetermination. The report identifies segments 9 and 11 as each sufficient to have caused the original X-ray to be classified as 'pleural effusion' with a probability greater than 0.99. Hence this is a case of causal overdetermination. The corresponding GAN-generated image x′ has a classification probability ≈ 0 for pleural effusion. If a perturbed image was created by substituting all the segments of the original image with the corresponding segments of x′ except for segment 9, then it would still have a classification probability for pleural effusion greater than 0.99. The same would apply if only segment 11 was substituted.
Let us suppose there is overdetermination, with segments 1 and 2 each being a sufficient cause for to be in a given class (e.g. 'pleural effusion') with more than 0.99 probability. Hence, the regression equation should set to a value greater than 0.99 not only when 1 = 2 = 1, but also when either 1 = 1 or 2 = 1. This is clearly impossible with the above linear form (and the constraint that ≤ 1). Mutatis mutanda, the same argument applies for Kernel SHAP.
EXPERIMENTAL INVESTIGATION
There are two case studies, the first using a synthetic dataset, the second analysing pleural effusion X-rays taken from the CheXpert dataset [11]. Transfer learning was used to train both a VGG-16 with batch normalisation and a DenseNet-121 classifier for each dataset. CLEAR Image was evaluated against Grad-CAM, Extremal Perturbations and LIME. The evaluation consisted of both a qualitative comparison of saliency maps and a comparison of pointing game and intersection over union (IoU) scores. CLEAR Image's fidelity errors were also analysed (none of the other XAI methods measures fidelity).
Datasets
The synthetic dataset's images share some key characteristics found in medical imaging including: (i) different combinations of features leading to the same classification (ii) irrelevant features. All images (healthy and diseased) contain a set of concentric circles, a large and a small ellipse. An image is 'diseased' if either: (1) the small ellipse is thin-lined, and the large ellipse contains a square or (2) there is a triangle, and the large ellipse contains a square. The dataset is an adaptation of [31].
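The ground-truth labelling rule can be written directly as a boolean function; a minimal sketch:

def is_diseased(small_ellipse_thin, large_ellipse_has_square, has_triangle):
    # An image is 'diseased' if the large ellipse contains a square AND
    # either the small ellipse is thin-lined or a triangle is present.
    return large_ellipse_has_square and (small_ellipse_thin or has_triangle)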
CheXpert is a dataset of chest X-rays with automated pathological label extraction through radiology reports, consisting of 224,316 radiographs of 65,240 patients in total. Images were extracted just for the classes 'pleural effusion'
and 'no finding'. Mis-classified images and images significantly obstructed by supporting devices were manually filtered.
A random frontal X-ray image per patient was collected. In total, a dataset of 2,440 images was used in this work for model training, validation and testing. Appendix A.2 details the data preparation process. A hospital doctor provided the ground truth annotation to the X-ray images with pleural effusion for our case study.
Evaluation Metrics
This paper uses two complementary metrics to evaluate XAI methods. Both require annotated images identifying 'target'
regions that should be critical to their classification. A pointing game produces the first metric, which measures how successfully a saliency map 'hits' an image's targets. Previously pointing games have been designed for cases where (i)
images have single targets (ii) the saliency maps have a maximum intensity point [8,34]. By contrast, this paper's case studies have multiple targets, and the pixels within each CLEAR Image segment have the same value. We, therefore, formulated a novel pointing game. The pointing game partitions a 'diseased' image into 49 square segments, P = {p1 . . . p49}, and identifies which squares contain each of the targets. The corresponding saliency map is also partitioned, and each square is allocated a score equal to the average intensity of that square's pixels, Q = {q1 . . . q49}. The pointing game then starts with the q of highest intensity and determines if the corresponding p contains a relevant feature. A successful match is a 'hit' and an unsuccessful match is a 'miss'. This process continues until every target has at least one hit. The score for an image is the number of hits over the number of hits plus misses. Pseudocode is provided in Algorithm 3.

The second metric is IoU. It is assumed that each pixel in a saliency map is classified as 'salient' if it is above 70 percent of the maximum intensity in that map. IoU then measures the overlap between the 'salient' pixels A and the pixels T belonging to the image's targets:
IoU = |A ∩ T| / |A ∪ T|.
The chosen threshold was empirically identified to maintain a relatively high IoU score by balancing a high intersection with T against a small union of pixel regions with a large enough A (see Appendix A.1 for details).
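A minimal sketch of the IoU metric with the 70 percent intensity threshold, assuming NumPy arrays:

import numpy as np

def iou_score(saliency, target_mask, frac=0.70):
    # A = 'salient' pixels above 70% of the map's maximum intensity;
    # T = annotated target pixels.
    A = saliency >= frac * saliency.max()
    T = target_mask.astype(bool)
    return np.logical_and(A, T).sum() / np.logical_or(A, T).sum()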
Both metrics are useful but have counterexamples. For example, IoU would give too high a score to a saliency map that strongly overlapped with a large image target but completely missed several smaller targets that were also important to a classification. However, applied together, the two metrics provide a good indication of an XAI method's performance.
Experimental Runs
CLEAR Image was run using logistic regression with the Akaike information criterion; full testing and parameter values can be found in Appendix B.3. The test datasets consisted of 95 annotated X-rays and 100 synthetic images. The average running time for CLEAR Image was 20 seconds per image for the synthetic dataset and 38 seconds per image for the CheXpert dataset, running on a Windows i7-8700 RTX 2070 PC. Default parameter values were used for the other XAI methods, except for the following beneficial changes: Extremal Perturbations was run with 'fade to black' perturbation type, and using areas {0.025, 0.05, 0.1, 0.2} with the masks summed and a Gaussian filter applied. LIME was run using Quickshift segmentation with kernel sizes 4 and 20 for the CheXpert and synthetic datasets respectively.

Algorithm 3: Pointing Game
input : x - input image, F - annotated features, M - XAI saliency map
P ← Square_Idx_of_Each_Feature(x, F)
Q ← Average_Intensity_Each_Square(M)
Q′ ← Square_Idx_Sort_Highest_Intensity(Q)
hits ← 0; misses ← 0; hit_f ← False for all f ∈ F
foreach q′ ∈ Q′ do // starting with highest intensity
    foreach f ∈ F do
        if q′ ∈ P_f then // square idx match
            hits ← hits + 1; hit_f ← True
        else
            misses ← misses + 1
    if ∀f (hit_f = True) then // exit once all features hit
        break
return <hits, misses>
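A minimal Python sketch of the pointing game, following the prose description in Section 5.2; the 7x7 grid (49 squares) comes from the text, while the per-square bookkeeping is a simplification of Algorithm 3:

import numpy as np

def pointing_game(saliency, feature_squares, grid=7):
    # saliency: 2-D map; feature_squares: dict mapping each annotated
    # feature to the set of (row, col) grid squares containing it.
    h, w = saliency.shape
    scores = {}
    for r in range(grid):
        for c in range(grid):
            patch = saliency[r * h // grid:(r + 1) * h // grid,
                             c * w // grid:(c + 1) * w // grid]
            scores[(r, c)] = patch.mean()
    hits, misses, hit_features = 0, 0, set()
    for sq in sorted(scores, key=scores.get, reverse=True):  # highest first
        matched = [f for f, sqs in feature_squares.items() if sq in sqs]
        if matched:
            hits += 1
            hit_features.update(matched)
        else:
            misses += 1
        if hit_features == set(feature_squares):  # all features hit
            break
    return hits / (hits + misses)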
Experimental Results
CLEAR Image outperforms the other XAI methods on both datasets (Figure 7a). Furthermore, its fidelity errors are low, indicating that the regression coefficients are accurate for the counterfactually important segments (Figure 7b). Figure 7c illustrates some of the benefits of using the 'Best Configuration', which uses GAN-augmented segmentation and infills using x′. This is compared with (i) segmenting with Felzenszwalb and infilling with x′, (ii) segmenting with GAN-augmented but infilling with black patches, and (iii) segmenting with Felzenszwalb and infilling with black patches. Figure 8 illustrates how CLEAR Image's use of GAN-augmented segmentation leads to a better explanation than just using a difference mask. CLEAR Image's performance was similar for VGG-16 and DenseNet; therefore, only the DenseNet results are presented unless otherwise stated.
CLEAR Image's regression equation was able to capture the relatively complex causal structure that generated the synthetic dataset. Figure 9 shows an example. A square (SQ) is a necessary but insufficient cause for being diseased. An image is labelled as diseased if there is also either a triangle (TR) or the small ellipse is thin-lined (TE). When SQ, TR and TE are all present in a single image, there is a type of overdetermination in which TR and TE are each a sufficient cause relative to the 'image with SQ already present'. As before, a diseased image corresponds to the binary segment variables equalling one, and a classification probability of being diseased > 0.99 requires w·x > 4.6. This can only be achieved by Seg05 (corresponding to SQ) plus at least one of Seg02 or Seg07 (TE, TR) being set to 1 (i.e. being present). Figure 10 compares the saliency maps for synthetic data.
For the CheXpert dataset, Figure 11 illustrates how CLEAR Image allows for a greater appreciation of the pathology compared to 'broad-brush' methods such as Grad-CAM (please see Appendix A.1 for further saliency maps). Nevertheless, the IoU scores highlight that the segmentation can be further improved. For CheXpert's counterfactuals, only 5% of images did not have a counterfactual with four or fewer s′ segments. Most images required several segments to be infilled before their classification flipped to 'healthy': 17% required one segment, 30% two segments, 24% three segments and 24% four segments. 17% of the X-rays were found to be causally overdetermined.
CONCLUSION AND FUTURE WORK
A key reason for CLEAR Image's outperformance of other XAI methods is its novel use of GANs. It recognises that a difference mask is only the starting point for an explanation. Instead, it uses a GAN image both for infilling and as an input into its own segmentation algorithm.
As AI systems for image data are increasingly adopted in society, understanding their implicit causal structures has become paramount. Yet the explanations provided by XAI methods cannot always be trusted, as the differences in Figure 11's saliency maps show. It is therefore critical that an XAI method measures its own fidelity. By 'knowing when it does not know', it can alert the user when its explanations are unfaithful.
A limitation of CLEAR Image is that it first requires training a GAN, which can be a challenging process. Another possible limitation could be the understandability of CLEAR Image to non-technical users. However, its reports can be suitably tailored, e.g. only showing saliency maps, lists of counterfactuals and cases of overdetermination.
We have shown that CLEAR Image can illuminate cases of causal overdetermination. Many other types of causal structures may also be ubiquitous in AI. For example, causal pre-emption and causal clustering are well documented within the philosophy of science [2,21]. The relevance of these to XAI will be a future area of work. A user study should also be carried out. However, these are time/resource consuming and need to be devised carefully by experts within specific application domains to produce sound, reliable results. Instead, we focus on objective measures and evaluations of XAI research which in our view must precede any user study. Future work will also focus on improving segmentation, e.g. by introducing domain-specific constraint parameters for GANs, to minimise the modifications of specified attributes (e.g. changes in the heart when generating lung X-rays). Additional qualitative results for the CheXpert dataset are presented in this section (Figures 12 and 13) where the most important segments (regions) identified by each XAI method is matched against the annotated ground truth. These are the pixels of saliency maps that are above 70 percent of the maximum intensity (i.e. the segments used to calculate the IoU scores). This threshold was determined empirically to yield high IoU score across all the XAI methods evaluated (see Figure 14). Figure 12 shows the additional results for the DenseNet model while Figure 13 presents the results for the VGG16 model. These results have demonstrated higher precision using CLEAR Image in identifying significant segment matching against the annotated ground truth in comparison to other explanation methods. These two figures provide a qualitative comparison to supplement the results presented in Figure 7 where CLEAR Image outperforms other XAI methods.
A.2 Data Pre-Processing
CheXpert has a total of 14 pathological classes including 'No Finding', and these are labelled through an automated rule-based labeller from text radiology reports. For each observation, the Stanford team has classified each radiograph as either negative (0), uncertain (-1) or positive (1). Other metadata includes gender, age, X-ray image projection and presence of supporting devices.
In this study, this dataset (v1.0) was applied for the model development of a binary classification task to demonstrate the capability of CLEAR Image as an XAI framework. An initial filtering process of the metadata was applied for the two classes used in the study -(1) Diseased with Pleural Effusion and (2) Healthy (this was assumed to be X-ray images with no findings and no positive observations in any of the pathological conditions). To minimise potential complications with other pathological conditions, X-ray images with only positive in pleural effusion were used with the other pathological categories either as negative/blank.
A review of the filtered images also identified that the dataset was curated with some images having significant artefacts that can hamper model training performance. Figure 15 presents some of these images in both diseased and healthy categories. Many of these consisted of artefacts from image capturing and processing (e.g. image distortion, orientation, low resolutions or miscalibration). Some images were also significantly obstructed by limbs or support devices. Some healthy images were also wrongly labelled according to a hospital doctor, who assisted in our project. A secondary manual filtering was conducted to remove any identified images with artefacts.
The 2440 selected images were split approximately 80/10/10 for the training/validation/testing. The images were also resized to 256 x 256 as the input into the classification model and generative adversarial network (GAN) as described in Section 5. Figure 16 presents some typical images in the final dataset for both diseased and healthy categories.
DeScarGAN and Parameters
The DeScarGAN architecture was adopted for the synthetic dataset in Section 5.1. 80% of the dataset (4000 images) was used for GAN training and 20% of the dataset (1000 images) was used for validation. A total of 2,500 epochs was run and the best epoch was selected on visual quality. Additional 100 images were generated as an out-of-sample test dataset. Adam optimizer was used with 1 = 0.5, 2 =0.999. An initial learning rate of 10 −4 was used and stepped down to a final learning rate of 10 −6 . Default hyperparameters for loss functions were used to mimic a similar investigation from the original author as shown below in Table 1:
Loss
B.2 StarGAN-V2 and Parameters
StarGAN-V2 [6] has been adopted in this work as a state-of-art GAN network for image translation. The GAN provided the necessary contrastive images for the CheXpert dataset. Default hyperparameters were maintained while notable loss weights are highlighted in Table 2. Adam optimizer was used with 1 = 0, 2 =0.99. A total of 50,000 epochs were run for the CheXpert dataset. The style encoding was referenced to the input image for the translation to the targeted class. This aided in maintaining the general features of the images compared to the original. As StarGAN-V2 [6] did not constrain its generation to a localised region (e.g. lungs), post-processing of segmentation and blending was implemented for the CheXpert dataset. Segmentation of the lung region was based on a pre-trained model with a U-Net architecture.
The segmentation mask was subsequently used to guide the replacement of pixels within the lung region from the GAN generated healthy image onto the original diseased image. Gaussian Blur was applied to minimise the edge effect during the blending process. This post-processing step aided in restricting the feature identification space within the lungs and reducing the computational cost for locating the counterfactuals. An evaluation of similarity to real healthy images was performed using the Fréchet inception distance (FID) [10] benchmarking against the set of healthy images in the model training dataset. Four image sets were compared: (1) real healthy images in the validation set, set of images with pleural effusion processed as described in Figure 2 with replacement of lung segments using (2) corresponding GAN-generated healthy images, (3) Gaussian blurred version of the original images and (4) constant value of zero (i.e. black). This FID score indicated how close each of the four compared image sets to the benchmark images in the training set. A low score indicated similarity between the two datasets.
Table 2. Notable loss term weight values for StarGAN-V2.
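Given precomputed Inception activation matrices for two image sets, the FID used above can be sketched as follows (a standard formulation of the metric, not code from our pipeline):

import numpy as np
from scipy import linalg

def fid(act_a, act_b):
    # FID = ||mu_a - mu_b||^2 + Tr(C_a + C_b - 2 (C_a C_b)^(1/2)),
    # where mu and C are the mean and covariance of the activations.
    mu_a, mu_b = act_a.mean(axis=0), act_b.mean(axis=0)
    c_a = np.cov(act_a, rowvar=False)
    c_b = np.cov(act_b, rowvar=False)
    covmean = linalg.sqrtm(c_a @ c_b)
    if np.iscomplexobj(covmean):       # discard tiny imaginary parts
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(c_a + c_b - 2.0 * covmean))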
As observed in Figure 17, the processed images with replacement using GAN generated healthy lung segments appear more similar to actual healthy images than those using blurred or black segments. As such, GAN generated processed images as described in Figure 2 were selected as the choice of synthetic healthy images for this work.

B.3 CLEAR Image Parameters

For the CLEAR Image configuration experiments the parameter 'image_infill' had values ['GAN', 'black'] and the parameter 'image_segment_type' had values ['Augmented_GAN', 'Felzenszwalb']. The same parameter values were used for the synthetic case study except: case_study = 'Synthetic', image_segment_type = 'Thresholding'.
Fig. 2. The process of generating a contrast image. An original diseased image is firstly used to generate a healthy contrast image with a trained GAN model. In this example, StarGAN v2 is used as the architecture. The generated healthy lung airspace is then segmented using a U-Net segmentation model and blended onto the original diseased image to produce the final image by applying Gaussian Blur to minimise any edging effect around the segments.
Fig. 3. The GAN-Augmented segmentation algorithm. There are three stages. First, segments are identified from the high intensity differences between the original image x and its contrast image x′ (a). Second, additional segments are identified from the regions of x corresponding to low intensity differences between x and x′ (b). Third, the segments from the two steps are combined (c).
Fig. 4. Determining image-counterfactuals. In this example segments 4 and 11 are evaluated both separately and in combination. Substituting s11 with its corresponding contrast segment s′11 creates a perturbed image (b) with the same classification probability as the original image (a). The same applies with segment 4 (c). However substituting both segments 4 and 11 results in a perturbed image (d) which has a classification probability of 0.43. Given a decision boundary at probability of 0.5, (d) would be classified as a 'healthy' X-ray and would therefore be an image-counterfactual.
Algorithm 2: CLEAR Image
input : x - input image, x′ - contrast image, m - AI classifier
S, S′ ← GAN_Augmented_Segmentation(x, x′, m)
D ← Create_Perturbed_Data(x, x′, m)
C ← Find_Counterfactuals(x, x′, m)
R ← Find_Regression_Equation(D, C)
I ← Extract_Segment_Scores(R)
O ← Find_Overdetermination(R)
E ← Calculate_Fidelity(R, C)
return explanation = <I, C, R, O, E>
formulated a novel pointing game. The pointing game partitions a 'diseased' image into 49 square segments, P = { 1 . . . 49 } and identifies which squares contain each of the targets. The corresponding saliency map is also partitioned, and each square is allocated a score equal to the average intensity of that square's pixels Q = { 1 . . . 49 }. The pointing game then starts with the of highest intensity and determines if the corresponding contains a relevant feature. A successful match is a 'hit' and an unsuccessful match is a 'miss'. This process continues until every target has at least one hit. The score for an image is the number of hits over the number of hits plus misses. Pseudocode is provided in Algorithm 3.
CLEAR Image was run using logistic regression with the Akaike information criterion; full testing and parameter values can be found in Appendix B.3. The test datasets consisted of 95 annotated X-rays and 100 synthetic images. The average running time for CLEAR Image was 20 seconds per image for the synthetic dataset and 38 seconds per image for the CheXpert dataset, running on a Windows i7-8700 RTX 2070 PC.

Algorithm 3: Pointing Game
input: x - input image, F - annotated features, S - XAI saliency map
feature_squares ← Square_Idx_of_Each_Feature(x, F)
Q ← Average_Intensity_Each_Square(S)
Q′ ← Square_Idx_Sort_Highest_Intensity(Q)
for each square in Q′:
    record a hit if the square contains a feature, otherwise a miss
    if every feature has at least one hit then
        break  // exit once all features hit
return <hits, misses>
Fig. 7. Evaluation metrics. Figure (a) compares the performances of different XAI methods with the DenseNet models. Figure (b) shows the fidelity errors for the DenseNet models. Figure (c) compares the performances of different configurations of CLEAR Image.
Fig. 10. Comparison of XAI methods on synthetic data. The pointing game scores are shown in green and the IoU scores in purple.
as Figure 11's saliency maps show. It is therefore critical that an XAI method measures its own fidelity. By 'knowing when it does not know', it can alert the user when its explanations are unfaithful.
Fig. 11. Comparison of XAI methods on X-ray. The pointing game scores are shown in green and the IoU scores in purple. The significance of a patch is indicated by the intensity of red against the blue outlined annotated ground truth.

A SUPPLEMENTAL RESULTS FOR CHEXPERT DATASET AND ASSOCIATED DATA PRE-PROCESSING

A.1 Supplementary qualitative results
Fig. 12. Representative comparative examples of the identified important segments of a DenseNet-based image classification model (Val Acc: 98.8%) for pleural effusion using (i) CLEAR Image, (ii) Grad-CAM, (iii) Extremal Perturbation and (iv) LIME.

Fig. 13. Representative comparative examples of the identified important segments of a VGG16-based image classification model (Val Acc: 97.5%) for pleural effusion using (i) CLEAR Image, (ii) Grad-CAM, (iii) Extremal Perturbation and (iv) LIME.
Fig. 14. Comparison of IoU score against four XAI methods, (1) CLEAR Image, (2) Grad-CAM, (3) Extremal and (4) LIME, to determine the threshold of intensity at 10% intervals. CLEAR Image outperforms the other XAI methods for each of the 4 intensity thresholds.
Fig. 15. Representative examples of poorly curated images including image distortion, mis-orientation, obstruction by limbs and support devices as well as significant spine deformation.

Fig. 16. Representative examples of final images for (a) diseased with identifiable regions of pathology and (b) healthy images with clear air space. All images have minimal obstructions from support devices.
Fig. 17. Comparison of Fréchet inception distance (FID) against the training healthy image dataset with (1) a set of real healthy images in the validation set, and sets of images with pleural effusion processed as described in Figure 2 with replacement of lung segments using (2) corresponding GAN-generated healthy images, (3) a Gaussian-blurred version of the original images and (4) a constant value of zero (i.e. black).

For the CLEAR Image configuration experiments the parameter 'image_infill' had values ['GAN', 'black'] and the parameter 'image_segment_type' had values ['Augmented_GAN', 'Felzenszwalb'].
Default parameter values were used for the other XAI methods, except for the following beneficial changes: Extremal Perturbations was run with 'fade to black' perturbation type, and using areas {0.025, 0.05, 0.1, 0.2} with the masks summed and a Gaussian filter applied. LIME was run using
Table 1. Default Loss Function Hyperparameters used in DeScarGAN.
Table 2. Default Loss Function Hyperparameters used in StarGAN v2 [6].
REFERENCES

[1] Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, and Been Kim. 2018. Sanity checks for saliency maps. arXiv preprint arXiv:1810.03292.
[2] Michael Baumgartner. 2009. Inferring causal complexity. Sociological Methods & Research, 38, 1, 71-101.
[3] Chun-Hao Chang, Elliot Creager, Anna Goldenberg, and David Duvenaud. 2018. Explaining image classifiers by counterfactual generation. arXiv preprint arXiv:1807.08024.
[4] Aditya Chattopadhay, Anirban Sarkar, Prantik Howlader, and Vineeth N Balasubramanian. 2018. Grad-CAM++: generalized gradient-based visual explanations for deep convolutional networks. In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, 839-847.
[5] Yunjey Choi, Minje Choi, Munyoung Kim, Jung-Woo Ha, Sunghun Kim, and Jaegul Choo. 2018. StarGAN: unified generative adversarial networks for multi-domain image-to-image translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 8789-8797.
[6] Yunjey Choi, Youngjung Uh, Jaejun Yoo, and Jung-Woo Ha. 2020. StarGAN v2: diverse image synthesis for multiple domains. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 8185-8194. doi: 10.1109/CVPR42600.2020.00821.
[7] Pedro F Felzenszwalb and Daniel P Huttenlocher. 2004. Efficient graph-based image segmentation. International Journal of Computer Vision, 59, 2, 167-181.
[8] Ruth Fong, Mandela Patrick, and Andrea Vedaldi. 2019. Understanding deep networks via extremal perturbations and smooth masks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2950-2958.
[9] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in Neural Information Processing Systems, 2672-2680.
[10] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. 2017. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Advances in Neural Information Processing Systems. Vol. 30. Curran Associates, Inc. https://proceedings.neurips.cc/paper/2017/file/8a1d694707eb0fefe65871369074926d-Paper.pdf.
[11] Jeremy Irvin, Pranav Rajpurkar, Michael Ko, Yifan Yu, Silviana Ciurea-Ilcus, Chris Chute, Henrik Marklund, Behzad Haghgoo, Robyn Ball, Katie Shpanskaya, Jayne Seekins, David A. Mong, Safwan S. Halabi, Jesse K. Sandberg, Ricky Jones, David B. Larson, Curtis P. Langlotz, Bhavik N. Patel, Matthew P. Lungren, and Andrew Y. Ng. 2019. CheXpert: a large chest radiograph dataset with uncertainty labels and expert comparison. arXiv: 1901.07031 [cs.CV].
[12] Amir-Hossein Karimi, Gilles Barthe, Bernhard Schölkopf, and Isabel Valera. 2020. A survey of algorithmic recourse: definitions, formulations, solutions, and prospects. arXiv preprint arXiv:2010.04050.
[13] Amir-Hossein Karimi, Bernhard Schölkopf, and Isabel Valera. 2021. Algorithmic recourse: from counterfactual explanations to interventions. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 353-362.
[14] Devinder Kumar, Alexander Wong, and Graham W Taylor. 2017. Explaining the unexplained: a class-enhanced attentive response (CLEAR) approach to understanding deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 36-44.
[15] Scott M Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions. In Advances in Neural Information Processing Systems, 4765-4774.
[16] Weili Nie, Yang Zhang, and Ankit Patel. 2018. A theoretical explanation for perplexing behaviors of backpropagation-based visualizations. In International Conference on Machine Learning. PMLR, 3809-3818.
[17] Laurie Ann Paul. 2009. Counterfactual theories. In The Oxford Handbook of Causation.
[18] Judea Pearl. 2000. Causality: Models, Reasoning and Inference (1st edition). Cambridge University Press, New York, NY, USA.
[19] Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. "Why should I trust you?" Explaining the predictions of any classifier. In Proc. ACM SIGKDD 2016 (KDD '16). ACM, San Francisco, California, USA, 1135-1144. isbn: 978-1-4503-4232-2. doi: 10.1145/2939672.2939778.
[20] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. 2015. U-Net: convolutional networks for biomedical image segmentation. arXiv: 1505.04597 [cs.CV].
[21] Jonathan Schaffer. 2004. Trumping preemption. The Journal of Philosophy, 97, 4, 165-181.
[22] Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. 2017. Grad-CAM: visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, 618-626.
[23] Md Mahfuzur Rahman Siddiquee, Zongwei Zhou, Nima Tajbakhsh, Ruibin Feng, Michael B Gotway, Yoshua Bengio, and Jianming Liang. 2019. Learning fixed points in generative adversarial networks: from image-to-image translation to disease detection and localization. In Proceedings of the IEEE International Conference on Computer Vision, 191-200.
[24] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. 2014. Deep inside convolutional networks: visualising image classification models and saliency maps.
[25] Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. 2014. Striving for simplicity: the all convolutional net. arXiv preprint arXiv:1412.6806.
[26] Bas C Van Fraassen et al. 1980. The Scientific Image. Oxford University Press.
[27] Andrea Vedaldi and Stefano Soatto. 2008. Quick shift and kernel methods for mode seeking. In European Conference on Computer Vision. Springer, 705-718.
[28] Sahil Verma, John Dickerson, and Keegan Hines. 2020. Counterfactual explanations for machine learning: a review. arXiv preprint arXiv:2010.10596.
[29] Sandra Wachter, Brent Mittelstadt, and Chris Russell. 2017. Counterfactual explanations without opening the black box: automated decisions and the GDPR. Harv. JL & Tech., 31, 841.
[30] Adam White and Artur d'Avila Garcez. 2020. Measurable counterfactual local explanations for any classifier. In ECAI 2020. IOS Press, 2529-2535.
[31] Julia Wolleb, Robin Sandkühler, and Philippe C Cattin. 2020. DeScarGAN: disease-specific anomaly detection with weak supervision. In International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 14-24.
[32] J. Woodward. 2003. Making Things Happen: A Theory of Causal Explanation. Oxford University Press, Oxford, England. isbn: 9780195189537.
[33] Matthew D Zeiler and Rob Fergus. 2014. Visualizing and understanding convolutional networks. In European Conference on Computer Vision. Springer, 818-833.
[34] Jianming Zhang, Sarah Adel Bargal, Zhe Lin, Jonathan Brandt, Xiaohui Shen, and Stan Sclaroff. 2018. Top-down neural attention by excitation backprop. International Journal of Computer Vision, 126, 10, 1084-1102.
[35] Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. 2016. Learning deep features for discriminative localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2921-2929.
[36] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. 2017. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, 2223-2232.
| [] |
[
"HCG 16 Revisited: Clues About Galaxy Evolution in Groups"
] | [] | [
"Observatório Nacional, Rua Gal. José Cristino, RJ"
] | [] | We present new spectroscopic observations of 5 galaxies, members of the unusually active compact group HCG 16, observed using the Palomar 5m telescope. The high signal to noise ratios (S/N ∼ 70) of the spectra allow us to study the variation of the emission line characteristics and the stellar populations in the nucleus and the circumnuclear regions of the galaxies. The emission line characteristics of these galaxies are complex, varying between Seyfert 2 and LINERs or between LINERs and starbursts. All of the galaxies show traces of intermediate age stellar populations, supporting our previous result that post-starburst galaxies are common in compact groups. The galaxies HCG16-4 and HCG16-5 show double nuclei and therefore could be two cases of recent merger. Our observations support a scenario where HCG 16 was formed by the successive merger of metal poor, low mass galaxies. The galaxies HCG16-1 and HCG16-2, which are more evolved, form the old core of the group. Galaxies HCG16-4 and HCG16-5 are two more recent additions still in a merging phase. Galaxy HCG16-5 is a starburst galaxy which is just beginning to fall into the core. If HCG 16 is representative of compact groups in their early stage, the whole set of observations implies that the formation of compact groups is the result of hierarchical galaxy formation. HCG 16 could be one example of this process operating in the local universe. | 10.1086/300816 | [
"https://arxiv.org/pdf/astro-ph/9901006v1.pdf"
] | 17,771,281 | astro-ph/9901006 | 4f6cec9537b80d67cb8bf718b7be1e98ea21c479 |
HCG 16 Revisited: Clues About Galaxy Evolution in Groups

Received ; accepted

3 Jan 1999

Observatório Nacional, Rua Gal. José Cristino, RJ

Subject headings: galaxies: Compact groups - galaxies: Evolution - galaxies: Interactions - galaxies: AGNs - galaxies: Starbursts

We present new spectroscopic observations of 5 galaxies, members of the unusually active compact group HCG 16, observed using the Palomar 5m telescope. The high signal to noise ratios (S/N ∼ 70) of the spectra allow us to study the variation of the emission line characteristics and the stellar populations in the nucleus and the circumnuclear regions of the galaxies. The emission line characteristics of these galaxies are complex, varying between Seyfert 2 and LINERs or between LINERs and starbursts. All of the galaxies show traces of intermediate age stellar populations, supporting our previous result that post-starburst galaxies are common in compact groups. The galaxies HCG16-4 and HCG16-5 show double nuclei and therefore could be two cases of recent merger. Our observations support a scenario where HCG 16 was formed by the successive merger of metal poor, low mass galaxies. The galaxies HCG16-1 and HCG16-2, which are more evolved, form the old core of the group. Galaxies HCG16-4 and HCG16-5 are two more recent additions still in a merging phase. Galaxy HCG16-5 is a starburst galaxy which is just beginning to fall into the core. If HCG 16 is representative of compact groups in their early stage, the whole set of observations implies that the formation of compact groups is the result of hierarchical galaxy formation. HCG 16 could be one example of this process operating in the local universe.
Introduction
To study the dynamical structure of compact groups of galaxies, we obtained new spectroscopic data on 17 of Hickson's compact groups (HCGs), extending the observations to galaxies which are in the immediate vicinity of the original group members (within 0.35 Mpc of the nominal center on average, for H0 = 75 km/s/Mpc; Ribeiro et al. 1998). The analysis based on this survey (Ribeiro et al. 1998; Zepf et al. 1997) helped to resolve some of the ambiguities presented by the HCGs. In particular, it revealed that compact groups may be different dynamical stages of evolution of larger structures, where replenishment by galaxies from the halo is always operating. Several other papers have addressed this particular scenario from either the observational or theoretical point of view (e.g. Barton et al. 1996; Ebeling, Voges, & Böhringer 1994; Rood & Struble 1994; Diaferio, Geller, & Ramella 1994, 1995; Governato, Tozzi, & Cavaliere 1996).
Consistent with the dynamical analysis, the classification of the activity types and the study of the stellar populations of the galaxies in these groups suggest that their evolution followed similar paths and that they were largely influenced by their environment (Ribeiro et al. 1998;Mendes de Oliveira et al. 1998). Most of the groups have a core (basically corresponding to the Hickson definition of the group) and halo structure (see Ribeiro et al. 1998 for a definition of the halo population). The core is dominated by AGNs, dwarf AGNs and galaxies whose spectra do not show any emission, whereas starbursts populate the halo.
The AGNs are located in the most luminous, early-type galaxies and are preferentially concentrated towards the central parts of the groups. The starbursts in the halo, on the other hand, appear to be located preferentially in late-type spiral galaxies (Coziol et al. 1998a, 1998b). This last result for the core of the groups was recently confirmed by Coziol et al. (1998c) from a study of a new sample of 58 compact groups in the southern hemisphere (Iovino & Tassi 1998). In this study, we also show that no Seyfert 1s have been found in our sample of compact groups.
In terms of star formation and populations, the galaxies in the core of the groups (the "non-starburst" galaxies) seem more evolved than those in the outer regions: the galaxies are more massive and more metal rich than the starbursts and they show little or no star formation. Most of these galaxies have, however, stellar metallicities which are unusually high compared to those of normal galaxies with similar morphologies (Coziol et al. 1998b).
They also show unusually narrow equivalent widths of metal absorption lines and relatively strong Balmer absorption lines, which are consistent with the presence of a small (less than 30%) population of intermediate age stars (Rose 1985). These observations suggest that most of the non-starburst galaxies in the groups are in a relatively evolved "post-starburst" phase (Coziol et al. 1998b).
HCG 16 is a group composed of 7 galaxies with a mean velocity V= 3959 ± 66 km s −1 and a dispersion σ = 86 ± 55 km s −1 (Ribeiro et al. 1998). Although we are keeping Hickson's nomenclature for this group, it is important to note that we are not following specifically Hickson's definition of a group, since this is not a crucial point for our analysis.
Besides, there is evidence that HCG 16 is part of a larger and sparser structure (Garcia 1993). Specific studies of HCG 16 cover a broad domain of the electromagnetic spectrum, allowing a thorough examination of its physical properties: radio and infrared observations (Menon 1995; Allam et al. 1996); CO observations estimating the mass of molecular gas in some of HCG 16's members (Boselli et al. 1996); and rotation curves exhibiting abnormal shapes (Rubin, Hunter, & Ford 1991). Hunsberger et al. (1996) also detected some dwarf galaxy candidates around HCG16-a, which is interpreted as a sign of strong interaction.
From the spectral characteristics, Ribeiro et al. (1996) identified one Seyfert 2 galaxy, two LINERs and three starburst galaxies. Considering the significant amount of information gathered for HCG 16, this group represents a unique opportunity to obtain new clues on the process of formation of the compact groups. In this paper we focus on the study of the activity of five galaxies belonging to the group: the four galaxies originally defined as Hickson group number 16 and a fifth one added from Ribeiro et al. (1998). These authors re-defined this structure with seven galaxies (including the original four from Hickson), but we gathered high quality data for only five of them.
Observations and data reduction
Spectroscopic observations were performed at the Palomar 200-inch telescope using the Double Spectrograph on UT 1996 October 16. Typical exposure times were 600 to 900 seconds depending on the magnitude of the galaxy. Two gratings were used: one for the red side (316 l/mm, resolution of 4.6Å), and one for the blue side (300 l/mm, resolution of 4.9Å). The wavelength coverage was 3800Å to 5500Å in the blue and 6000Å to 8500Å in the red. For calibration, He-Ne arc lines were observed before and after each exposure throughout the night. During the night, the seeing varied around 1.5 arcsecs. It is important to stress that in this paper we present only a qualitative discussion of the relative rates of star formation, since the data were taken under non-photometric conditions, hampering a proper flux calibration.
The reduction of the spectra was done in IRAF using standard methods. An overscan was subtracted along the dispersion axis, which took care of the bias correction. All the spectra were trimmed and divided by a normalized flat field. Wavelength calibration, done through a polynomial fit to the He-Ne arc lines, gave residuals of ∼0.1Å.
The relatively high signal to noise ratios of the spectra (S/N ∼ 70 on average) allow us to study the variation of the emission line characteristics and stellar populations as a function of their position in the galaxies. To do so, the reduction to one dimension was done in the case of the red spectra using up to 7 apertures of ∼ 3 arc seconds in width. Due to the lower S/N level obtained, only 3 apertures were used in the blue part of the spectrum.
To compare the line ratios and absorption features in the red with those measured in the blue, the reduction was also redone in the red using only 3 apertures.
In the case of the spectra reduced with 3 apertures, the spectrum of the galaxy NGC 6702 was used as a template to correct for contamination by the stellar populations (Ho 1996; Coziol et al. 1998a). Before subtraction, the spectrum of the template was normalized to fit the level of the continuum in the galaxies and, in one case, HCG 16-5, the Balmer absorption lines were artificially enlarged to fit the broad absorption lines observed in this galaxy.

Results

Distribution of the light and ionized gas in the spectra

Table 1 gives the basic characteristics of the 5 galaxies studied in this paper. The numbers in column 1 follow the nomenclature used in Ribeiro et al. (1996). The radial velocities in column 2 and the absolute magnitudes in column 3 were taken from Coziol et al. (1998b). The morphological types listed in column 4 were taken from Mendes de Oliveira & Hickson (1994). The different types of activity in column 5 correspond to our new classification as presented in Section 4 and Figure 3. The complexity of the AGNs is obvious from the multiple characteristics of their spectra. The next 3 columns correspond to the extension of the projected light on the spectra, as deduced from the red part of the spectrum. The total galaxy is measured from the extension until the signal reaches the sky level. The ionized region corresponds to the projected length where emission can be seen. The nucleus corresponds to the extension of light at half maximum intensity (FWHM). With the exception of HCG16-1, all the galaxies have a nucleus which is well resolved. The last column gives for each galaxy the equivalent of 1 arc second in parsecs.

Figure 1 shows, on the left, the extension of the ionized gas, as traced by Hα and the two [N II] lines, and, on the right, the light profile along the slit. In the galaxies HCG16-1, HCG16-2 and HCG16-3, 90% of the light is concentrated in a window ∼ 9 arcsecs wide, which corresponds to ∼ 2 kpc at the distance of the galaxies. The remaining 10% of the light extends over a region not exceeding 8 kpc. These galaxies look compact compared to normal spiral galaxies.
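The quantities in Table 1 and Figure 1 can be extracted from the collapsed light profile along the slit with a few lines of code. The sketch below assumes profile is a sky-subtracted 1-D array of counts per spatial pixel and scale the pixel scale in arcsec; it is an illustration of the measurement, not the original IRAF procedure.

import numpy as np

def profile_metrics(profile, scale):
    # FWHM of the nucleus: span of pixels above half the peak intensity.
    above = np.where(profile >= profile.max() / 2.0)[0]
    fwhm = (above[-1] - above[0] + 1) * scale
    # Smallest window centered on the peak holding 90% of the total flux.
    total, center, width = profile.sum(), int(np.argmax(profile)), 1
    while profile[max(0, center - width):center + width + 1].sum() < 0.9 * total:
        width += 1
    return fwhm, (2 * width + 1) * scale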
In galaxies HCG16-4 and HCG16-5 the light is slightly more extended (∼ 3 and 6 kpc, respectively), but this is because these two galaxies probably have a double nucleus. The second nucleus in HCG16-4 corresponds to the second peak 5 arcsecs west of the primary nucleus, while in HCG16-5 the second nucleus corresponds to the small peak 7 arcsecs east of the primary nucleus. It is very unlikely that these structures could be produced by dust, because we are using the red part of the spectra, where extinction effects are minimized. In the next section, we will also show that the second nucleus in HCG16-5 presents slightly different spectral characteristics compared to the primary nucleus, which is inconsistent with the idea that this is the same galaxy. HCG16-4 and HCG16-5 are probably the products of recent mergers of galaxies. Other studies present strong evidence of central double nuclei (Amram et al. 1992; Hibbard 1995).
In all the galaxies, the ionized gas is more intense and mostly concentrated in the nucleus. H II regions outside the nucleus are clearly visible only in HCG16-1 and HCG16-3.
It looks like the activity (star formation or AGN) is always concentrated in the center of the galaxies. In HCG16-5, the second nucleus seems less active (we see less ionized gas) than the primary nucleus, while in HCG16-4, the two nuclei appear equally active.
Variation of the activity type with the radius
In Ribeiro et al. (1996) we already determined the activity types of these galaxies.
Having in hand spectra with high S/N, we now repeat our analysis of the activity for the five most luminous galaxies, but this time separating each spectrum into various apertures covering different regions, in order to see how the activity varies with radius.
In Figure 2, we present the results of our classification of the activity type using the standard diagnostic diagram (Baldwin, Phillips & Terlevich 1981;Veilleux & Osterbrock 1987). The line ratios correspond to the values obtained after subtraction of the template galaxy NGC 6702. Because of the relatively lower S/N of the blue as compared to the red part of the spectra, we limit our study to only three apertures. In Figure 2, the first apertures, identified by filled symbols, cover the nucleus. The two other apertures cover regions to the east and to the west of the nucleus. The width of these apertures can be found in column 3 of Table 3. Note that these apertures are covering mostly the central part of the galaxies.
Our new classification is similar to the one given in Ribeiro et al. (1996). In particular, the galaxies keep their original classification as an AGN or a starburst. We note, however, some interesting variations. The most obvious of these concerns HCG16-1, which was classified as a luminous Seyfert 2 and now appears as a LINER nucleus with outer regions in a starburst phase. Another difference with our previous classification is related to the discovery of the second nucleus in HCG16-5, although we do not find any evidence of a difference in the excitation state of the two nuclei, considering the large error bars (see Figure 2). We see very little variation in the other three galaxies. The level of excitation for HCG16-3 is higher, suggesting that the gas in this galaxy is slightly less metal rich than in HCG16-4 (McCall, Rybsky, & Shields 1985; Evans & Dopita 1985).
To study the variation of the activity in greater detail, we have divided the spectra in the red into 7 equal apertures of ∼ 3 arc seconds in width. In Table 2, the different apertures are identified by a number which increases from east to west. The apertures centered on the nuclei are identified with a small n and the circumnuclear regions with a small ci. In column 3, the corresponding radius in kpc is also given. The parameters that were measured are: the FWHM of the Hα emission line (column 4) and the ratio [N II]λ6548/Hα (column 5), which allow us to distinguish between starbursts and AGNs (Baldwin, Phillips, & Terlevich 1981; Veilleux & Osterbrock 1987; Ho, Fillipenko, & Sargent 1993; Véron, Gonçalvez, & Véron-Cetty 1997); the equivalent width of Hα (column 6), which in a starburst is a good indicator of the strength of star formation (Kennicutt 1983; Kennicutt & Kent 1983; Copetti, Pastoriza, & Dottori 1986; Salzer, MacAlpine, & Boroson 1989; Kennicutt, Keel, & Blaha 1989; Coziol 1996); and the ratio [S II]λ6716 + λ6731/Hα (column 7), which we use as a tracer of the level of excitation (Ho, Fillipenko, & Sargent 1993; Kennicutt, Keel, & Blaha 1989; Lehnert & Heckman 1996; Coziol et al. 1999). All the lines were measured using the standard routines in SPLOT, fitting the continuum by eye. A Gaussian profile was usually assumed, though in some cases a Lorentzian was used.
The uncertainties were determined by comparing values obtained by measuring the same lines in two different spectra of the same object.
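As an illustration of these measurements (normally done interactively in SPLOT), the sketch below fits a Gaussian plus a flat local continuum to an emission line and converts the result into a FWHM in km/s and an equivalent width in Angstroms; the window size and initial guesses are assumptions.

import numpy as np
from scipy.optimize import curve_fit

C_KMS = 299792.458

def gauss(x, amp, mu, sigma, cont):
    return cont + amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def measure_line(wave, flux, line_center, half_window=30.0):
    win = np.abs(wave - line_center) < half_window
    cont0 = np.median(flux[win])
    p0 = [flux[win].max() - cont0, line_center, 3.0, cont0]
    (amp, mu, sigma, cont), _ = curve_fit(gauss, wave[win], flux[win], p0=p0)
    fwhm_kms = 2.3548 * abs(sigma) / mu * C_KMS   # 2.3548 = 2 sqrt(2 ln 2)
    ew = amp * abs(sigma) * np.sqrt(2.0 * np.pi) / cont  # line flux / continuum
    return fwhm_kms, ew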
In Figure 3, we present the diagrams of the ratio [N II]λ6548/Hα as a function of the EW of Hα. The corresponding regions are identified by their number in Table 2. In these diagrams, AGNs usually have a higher [N II]/Hα ratio than starbursts, but a smaller EW (Coziol et al. 1998b). We now examine each galaxy separately.
In HCG16-1, the star formation in the outer regions, as noted in Figure 2, appears quite clearly. As compared to HCG16-4, which is the strongest starburst we have in the group, the relatively lower EW of these H II regions suggests milder star formation. Since the EW of Hα is a measure of current to past star formation, the relatively lower EW therefore suggests an older phase of star formation (Kennicutt, Keel, & Blaha 1989; Salzer, MacAlpine, & Boroson 1989; Coziol 1996). The star formation is constant on the east side of the galaxy (apertures 1 and 2) but decreases to the west (from apertures 6 to 7). The nucleus and circumnuclear regions do not show any variation, the conditions of excitation of the gas staying constant out to a radius of ∼ 1.2 kpc.
In HCG16-2, no star formation is observed. We see a slight variation in the circumnuclear regions, within a 1 kpc radius of the nucleus, and a more significant variation in the outer regions. If we assume that the source of the gas excitation is limited to the nucleus, the variation of [N II]/Hα and EW in the outer regions can be explained by a simultaneous decrease of the gas excitation (the Hα flux goes down) and a change towards older stellar populations (the EW of Hα decreases). This suggests that HCG16-2 is an AGN located in a galaxy dominated by intermediate and older age stellar populations. In starburst galaxies, the ratio [N II]/Hα is also sensitive to the abundance of nitrogen (Evans & Dopita 1985; Coziol et al. 1999). The increase of [N II]/Hα in the outer regions, therefore, could also suggest an increase of the abundance of nitrogen (Stauffer 1982; Storchi-Bergmann 1991; Storchi-Bergmann & Wilson 1996; Ohyama, Taniguchi & Terlevich 1997; Coziol et al. 1999). It may suggest a previous burst of star formation in the recent past of this AGN (Glass & Moorwood 1985; Smith et al. 1998).
HCG16-3 is a starburst galaxy at the periphery of the four other luminous members of HCG 16 and the only one in our sample which is not an original member of the Hickson group.
Comparison with HCG16-4 indicates that the star formation is at a lower level. Again, no variation is observed within ∼ 1.2 kpc of the nucleus, while the [N II]/Hα ratio increases and the EW decreases in the outer regions. However, the variation of these two parameters is less severe than in the case of HCG16-2. Because HCG16-3 is classified as a starburst, we assume that the source of gas ionization is not limited only to the nucleus but follows the star formation. The variation observed would then mean that the star formation in the outer regions (apertures 2 and 6) is at a more advanced stage of evolution than in the nucleus.
The same behavior as in HCG16-3 is observed in HCG16-4. The star formation in this galaxy, however, is at a more intense level. This is probably because HCG16-4 is in a merger phase since this galaxy has a double nucleus. Contrary to HCG16-3, we see also some spectral variations in the nucleus, consistent with a double nucleus: apertures 3 and 2 correspond to the second nucleus while apertures 4 and 5 correspond to the primary nucleus. Again the outer regions seem to be in a more advanced stage of evolution than in the nucleus.
The variations observed in HCG16-5 are much more complex than in the other galaxies.
The presence of a second nucleus makes the interpretation even more difficult. In Figure 3, the second nucleus corresponds to apertures 6 and 7. It can be seen that the two nuclei have the same behavior. The variation of the parameters outside the nuclei is similar to what we observed in the two starbursts HCG16-3 and HCG16-4, but the range of variation is more similar to that observed in HCG16-2. Although HCG16-5 was classified as a LINER, its nature seems ambiguous, showing a mixture of starburst and AGN characteristics. It is important to note the difference with respect to HCG16-1, which is a central AGN encircled by star forming regions. In HCG16-5, on the other hand, the AGN in the nucleus seems to be mixed with intense star formation (Maoz et al. 1998; Larkin et al. 1998). Out of the nucleus, there is no star formation, and the AGNs may be responsible for ionizing the gas (Haniff, Ward, & Wilson 1991; Falcke, Wilson, & Simpson 1998; Contini 1997).
Variation of the excitation with the radius
Comparing the ratio [N II]λ6548/Hα with the ratio [S II]λ6716 + λ6731/Hα, it is possible to distinguish between the different sources of excitation of the gas (Kennicutt, Keel, & Blaha 1989; Ho, Fillipenko, & Sargent 1993; Lehnert & Heckman 1996). Shocks from supernova remnants in a starburst, for example, produce a [S II]/Hα ratio higher than 0.6, much higher than the mean value of ∼ 0.25 usually observed in normal H II regions or in starbursts (Greenawalt & Walterbos 1997; Coziol et al. 1997). In AGNs, however, the effects of shocks are more difficult to distinguish, because both of these lines are highly excited (Baldwin, Phillips & Terlevich 1981; Veilleux & Osterbrock 1987; Ho, Fillipenko, & Sargent 1993; Villar-Martín, Tadhunter, & Clark 1997; Coziol et al. 1999). We will assume here that a typical AGN has [N II]/Hα > 1 and [S II]/Hα > 0.6.
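These working thresholds translate into a simple decision rule; the sketch below encodes only the rules stated in the text and is not a general-purpose diagnostic.

def classify_region(nii_ha, sii_ha):
    # [N II]λ6548/Hα > 1 together with [S II]/Hα > 0.6: AGN-like
    if nii_ha > 1.0 and sii_ha > 0.6:
        return "AGN-like"
    # [S II]/Hα above ~0.6 alone points to shocks from supernova remnants
    if sii_ha > 0.6:
        return "shock-like"
    # values near the ~0.25 mean of normal H II regions
    return "HII/starburst-like"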
In Figure 4, we now examine the behavior of these ratios as a function of the radius for each of the galaxies. In HCG16-1, although we now classify the nucleus as a LINER, the values of the two ratios are still consistent with those of a typical AGN. The [N II]/Hα ratio for the outer starbursts are at the lower limit of the value for AGNs, but the [S II]/Hα ratio is normal for gas ionized by hot stars. On the other hand, the outer region corresponding to aperture 7 has a very unusually high ratio, which suggests that this region could be the location of shocks (Ho, Fillipenko, & Sargent 1993;Lehnert & Heckman 1996;Contini 1997).
In HCG16-2, both ratios are high, consistent with its AGN nature. We note also that in the outer regions the [S II]/Hα ratio decreases or stays almost constant while the [N II]/Hα ratio increases. This suggests a variation of [N II]/Hα due to an abundance effect.
This behavior is consistent with our interpretation of Figure 3, and suggests that this AGN probably had a starburst in its outer region (like in HCG16-1, for example) in the recent past.
The values observed in the starburst HCG16-3 are consistent with excitation produced by massive stars. The outer regions however show values that could be interpreted as the products of shocks. The same behavior is observed in HCG16-4, although at a much lower level. This is consistent with the idea that HCG16-4 is much more active than HCG16-3.
In this galaxy the burst populations in the outer regions, though more evolved than in the nucleus, are nevertheless younger than in the outer regions of HCG16-3.
Again, the analysis of HCG16-5 is the most complex. The values for the primary nucleus are at the lower limit for AGN and starburst and are consistent with shocks. The secondary nucleus has values consistent with shocks and AGN. All the outer regions show values unusually high, suggesting the presence of shocks or domination by an AGN. This observation supports our previous interpretation that HCG16-5 is a mixture of two AGNs with starbursts in their nucleus.
Variation of the stellar populations with the radius
In this section we complete our analysis for our 5 galaxies by studying the characteristics of their stellar populations, as deduced from the absorption features. For this study, we measured the absorption features in three apertures. The results are presented in Table 3. The three apertures are the same as those used for the activity classification. The corresponding widths in kpc are given in column 3. The absorption features were measured by drawing a pseudo continuum by eye using a region ∼ 100Å wide on each side of the line.
Columns 4 to 10 give the EW of the most prominent absorption features in the spectra.
Column 11 gives the ratio of the line-center intensity of the Ca II H + Hǫ blend to the line-center intensity of Ca II K, and column 12 gives the Mg2 index. The uncertainties were determined the same way as for the emission line features.
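For reference, a Lick-style Mg2 measurement compares the mean flux in the index band with a pseudo-continuum interpolated between two sidebands and expresses the depression in magnitudes. The band limits below are the standard Lick definitions, quoted here as assumptions rather than the exact windows used in this paper.

import numpy as np

def mg2_index(wave, flux,
              blue=(4895.125, 4957.625),
              band=(5154.125, 5196.625),
              red=(5301.125, 5366.125)):
    def mean_flux(lo, hi):
        m = (wave >= lo) & (wave <= hi)
        return flux[m].mean(), 0.5 * (lo + hi)
    fb, wb = mean_flux(*blue)
    fr, wr = mean_flux(*red)
    fi, wi = mean_flux(*band)
    cont = np.interp(wi, [wb, wr], [fb, fr])  # pseudo-continuum at band center
    return -2.5 * np.log10(fi / cont)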
In Figure 5, we show the diagram of the EW of Hδ as a function of the (Ca II H + Hǫ)/Ca II K index (Rose 1985). This diagram is useful for identifying post-starburst galaxies (Rose 1985; Poggianti & Barbaro 1996; Zabludoff et al. 1996; Caldwell et al. 1996; Barbaro & Poggianti 1997; Caldwell & Rose 1997). Galaxies with intermediate age populations have high EW of Hδ and high values of the (Ca II H + Hǫ)/Ca II K ratio. From this diagram, it can be seen that the five galaxies in HCG 16 show the presence of intermediate age stellar populations. In Figure 5, we compare the five galaxies in HCG 16 with the sample of HCG galaxies previously studied by Coziol et al. (1998b). It can be seen that the five galaxies in HCG 16 have characteristics which indicate younger post-starburst phases than in most of the galaxies in Coziol et al. (1998b). This observation is consistent with our scenario for the formation of the groups, which suggests that HCG 16 is an example of a young group.
In Figure 5, it is interesting to compare the position of the two starburst galaxies HCG16-3 and HCG16-4. The position of HCG16-3 suggests that it contains more intermediate age stars than HCG16-4. But at the same time we deduce from Figure 3 that HCG16-4 has a younger burst than HCG16-3. How can we understand this apparent contradiction? One possibility is to assume that the EW(Hδ) in HCG16-4 is contaminated by emission, explaining the low EW observed for this galaxy. For the (Ca II H + Hǫ)/Ca II K indices we note also that these values are comparable with those produced by very massive stars (Rose 1985). Another alternative, however, would be to suppose that the stellar populations are from another generation suggesting multiple bursts of star formation in HCG16-4 (Coziol 1996;Smith et al. 1998;Taniguchi & Shioya 1998).
In Figure 5, the position of HCG16-2 is consistent with no star formation in its nucleus.
It could have been higher in the outer regions in the recent past, which is consistent with our interpretation of Figures 3 and 5 for this galaxy. We also note the very interesting position of HCG16-5, which shows a strong post-starburst phase in the two nuclei and in the outer regions. This observation supports our previous interpretation of these two LINERs as a mixture of AGNs with starbursts in their nuclei.
Finally, we examine the stellar metallicities of our galaxies, as deduced from the Mg2 index (Burstein et al. 1984; Brodie & Huchra 1990; Worthey, Faber, & González 1992; Bender, Burstein, & Faber 1993). In Figure 6, the stellar metallicity is shown as a function of the ratio EW(Ca II H + Hǫ)/EW(Ca II K), which increases as the stellar population gets younger (Rose 1985; Dressler & Shectman 1987). For our study, we assume that a high value of the Mg2 index indicates a high stellar metallicity. In Figure 6, the range of Mg2 generally observed in late type spirals is indicated by two dotted lines. The upper limit for the early-type galaxies is marked by a dashed line. Figure 6 suggests that the stellar populations are generally more metal rich in the nuclei than in the circumnuclear regions. The two AGNs, HCG16-1 and HCG16-2, are more metal rich, and, therefore, more evolved. HCG16-3 and HCG16-4 have, on the other hand, typical values for starburst galaxies (Coziol et al. 1998). In terms of stellar population and metallicity HCG16-5 is more similar to HCG16-3 and HCG16-4, which suggests a similar level of evolution.
Discussion
Our observations are consistent with the existence of a close relation between AGNs and starbursts. In our sample the most obvious case is HCG16-1, which has a LINER nucleus and star formation in its outer regions. A similar situation was probably present in HCG16-2 in the recent past. HCG16-5, on the other hand, shows a very complicated case where we cannot clearly distinguish between star formation and an AGN. The question then is: what is the exact relation between these two phenomena?
One possibility would be to assume that AGN and starburst are, in fact, the same phenomenon (Terlevich et al. 1991): the AGN characteristics are produced by the evolution of a massive starburst in the center of the galaxies. HCG16-5 could be a good example of this. However, nothing in our observations of this galaxy allows us to identify the mechanism producing the LINER with star formation alone. In fact, the similarity of HCG16-5 to HCG16-2 suggests that what we see is more a mixture of the two phenomena,
where an AGN coexists in the nucleus with a starburst (Maoz et al. 1998;Larkin et al. 1998;Gonzalez-Delgado et al. 1997;Serlemitsos, Ptak, & Yaqoob 1997).
Perhaps the two phenomena are different, but still related via evolution. In one of their recent papers, Gonzalez-Delgado et al. (1997) proposed a continuous sequence where a starburst is related to a Seyfert 2, which, at the end, transforms into a Seyfert 1. Following our observations, it is interesting to see that in terms of stellar populations, HCG16-1 and HCG16-2 are the most evolved galaxies of the group. In Coziol et al. (1998b) we also noted that this is usually the case for the luminous AGN and low-luminosity AGN galaxies in the groups. The AGNs in the samples of Gonzalez-Delgado et al. (1998) and in Hunt et al. (1997) all look like evolved galaxies. However, as we noted in the introduction, we have not found any Seyfert 1 in the 60 compact groups we have investigated (Coziol et al. 1998). Following the scenario of Gonzalez-Delgado et al. (1998), this would simply mean that the groups are not evolved enough. This is difficult to believe, as it would suggest that we observe all these galaxies at a very special moment of their existence. In Coziol et al. (1998b) the observations suggest that the end product of the evolution of the starburst-Seyfert 2 connection in the groups is a low-luminosity AGN or a galaxy without emission lines.
Maybe there are no Seyfert 1s in the groups because the conditions for the formation of these luminous AGNs are not satisfied there. On this matter, it is interesting to find two mergers in HCG 16: HCG16-4 and HCG16-5. But galaxy HCG16-4 is a strong starburst while HCG16-5 is, at most, a LINER or a Seyfert 2. Could it be then that the masses of these two mergers were not sufficient to produce a Seyfert 1? Maybe the mass of the merging galaxies and/or the details of how the merging took place are the important parameters (Moles, Sulentic, & Márquez 1998; Taniguchi 1998; Taniguchi & Shioya 1998).
An evolutionary scenario for the starburst-AGN connection is probably not the only possible alternative. It could also be that the presence of a massive black hole (MBH) in the nucleus of an AGN influences the evolution of the star formation (Perry 1992; Taniguchi 1998). One can imagine, for instance, that a MBH is competing with the starburst for the available gas. Once the interstellar gas has become significantly concentrated within the central region of the galaxy, it could accumulate in an extended accretion disk to fuel the MBH. Assuming 10% efficiency, accretion representing only 7 M⊙ yr^-1 will easily yield 10^13 L⊙, while astration rates of 10-100 M⊙ yr^-1 are necessary to produce 10^11 - 10^12 L⊙ (Norman & Scoville 1988). Obviously the gas that goes into the nucleus to feed the MBH will not be available to form stars, hence the star formation phase will have a shorter lifetime. Other phenomena also related to AGNs, like jets, ejection of gas, or even just a very extended ionized region, could stimulate or inhibit star formation in the circumnuclear regions (Dey et al. 1997; Falcke 1998; Quillen & Bower 1998). Obviously, the more active the AGN, the greater its influence should be. Therefore, the fact that most of the AGNs in the compact groups are of the shallower types (Seyfert 2, LINER and low-luminosity AGN) suggests that these phenomena probably were not so important in the groups.
Another interesting aspect of our observations concerns the origin of the compact groups. In Coziol et al. (1998b) and , we suggest that the cores of the groups are old collapsing structures embedded in more extended systems, where they are replenished by galaxies (Governato, Tozzi, & Cavaliere 1996). We have also proposed an evolutionary scenario for the formation of the galaxies in the group. Following this scenario, HCG 16 would be an example of a group at an early stage of its evolution. Our present observations support this scenario and give us further insights on how the groups could have formed.
The original core of HCG 16 is formed of the galaxies HCG16-1, HCG16-2, HCG16-4 and HCG16-5. Our observations now suggest that HCG16-1 and HCG16-2 form the evolved core of HCG 16, while HCG16-4 and HCG16-5 are more recent additions.
The fact that we see traces of mergers in these two last galaxies suggests that HCG16-4 and HCG16-5 originally were not two massive galaxies but four smaller-mass, metal-poor galaxies.
The remnant star formation activity in HCG16-1 and HCG16-2 could also indicate that they too were formed by mergers, but a much longer time ago. This scenario may resolve the paradox of why galaxies in the cores of the HCGs have not already merged to form one big galaxy (Zepf & Whitmore 1991; Zepf 1993). If HCG 16 is typical of what happened in the other groups, then originally the number of galaxies was higher and their masses lower, and hence the dynamics of the groups were much different. HCG16-3 looks, on this matter, like a more recent addition, and suggests that the process of formation of the group is still going on today.
We would like to thank Roy Gal and Steve Zepf for very useful suggestions.
Fig. 1.- Extension of the ionized gas centered at Hα and light profiles along the slit. The direction of the east is indicated. At the left, the extension in kpc of the region of the spectra with 90% of the light is indicated. This same region is marked in the light profile by a dashed line at the 10% level of intensity. The FWHM and total extension of the galaxies are given in Table 1. The profiles of HCG16-4 and HCG16-5 show secondary peaks corresponding to secondary nuclei.
Fig. 2.- Standard diagnostic diagram of line ratios as measured in three different apertures. The values for the nuclei are identified by filled symbols. The horizontal dotted line separates Seyfert 2 (and HII galaxies) from LINERs (SBNGs). The continuous curve is the empirical separation between AGN and starburst as given by Veilleux & Osterbrock (1987).
Fig. 3.- [N II]/Hα ratios as a function of the EW of Hα, as measured using 7 equal apertures of ∼ 3 arc seconds. The nuclei are identified by crosses and the circumnuclear regions by small filled dots. The numbers correspond to the different apertures as given in Table 2. We do not display any error bars because they are quite small, comparable to the size of the symbols.
Fig. 4.- [N II]/Hα ratios as a function of the [S II]/Hα ratio, as measured using 7 equal apertures of ∼ 3 arc seconds. The numbers correspond to the different apertures as given in Table 2.
Fig. 5.- The EW of the Hδ line as a function of the (Ca II H + Hǫ)/Ca II K index. The horizontal dashed line separates post-starbursts from normal late-type spirals (see Coziol et al. 1998b).
Fig. 6.- The Mg2 index as a function of the ratio of the EWs of the Ca II H + Hǫ and Ca II K lines. The range of Mg2 values for late-type spirals is indicated by the two horizontal dotted lines. The horizontal dashed line is the upper limit for normal early-type galaxies.
Table 1. Characteristics of the galaxies in the group

HCG #   cz          M_B      T     Activity Type    Extension of light in the spectra           1 arcsec
        (km s^-1)                                   Total galaxy   Ionized regions   Nucleus    (parsec)
                                                    (arcsec)       (arcsec)          (arcsec)
16 01   4073        -20.79   2     LNR+SBNG         36             32                1.8        263
16 02   3864        -20.21   2     Seyfert 2+LNR    31             21                3.7        250
16 03   4001        -20.29   ···   SBNG             34             28                3.7        259
16 04   3859        -19.95   10    SBNG             45             40                6.0        249
16 05   3934        -19.94   ···   LNR+Seyfert 2    40             15                8.3        254

Table 2. Variation of the emission characteristics as a function of the radius
Table 2 - Continued

HCG #   Ap. #   radius   FWHM        [NII]/Hα      EW          [SII]/Hα
                (kpc)    (km s^-1)                 (Å)
        5n      0        ···         0.61          46 ± 2      0.38 ± 0.01
        6ci     -0.76    ···         0.61          46 ± 1      0.38 ± 0.01
        7       -1.53    ···         1.07 ± 0.07   3.4 ± 0.4   1.0 ± 0.1

Note. - Apertures spanning the nucleus and the circumnuclear regions are indicated by n and ci, respectively.
Note. - Radii are positive to the east and negative to the west.
Table 3. Variation of the absorption features with the aperture

HCG #   Ap. #   Width (kpc)   CaII K   CaII H   Hδ   G-band   Hβ   I(CaII H + Hε)/I(CaII K)   Mg2

(Only the header is recoverable here: columns 4-10 give the EWs of the most prominent absorption features, column 11 the ratio of the Ca II H + Hε to Ca II K line-center intensities, and column 12 the Mg2 index.)
Allam, S., Assendorf, R., Longo, G., Braun, M., Richter, G. 1996, A&A, 117, 39
Amram, P., Marcelin, M., Boulesteix, J., & le Coarer, E. 1992, A&A, 266, 106
Baldwin, J. A., Phillips, M. M., Terlevich, R. 1981, PASP, 93, 5
Barbaro, G., Poggianti, B. M. 1997, A&A, 324, 490
Barton, E., Geller, M. J., Ramella, M., Marzke, R. O., & da Costa, L. N. 1996, AJ, 112, 871
Bender, R., Burstein, D., Faber, S. M. 1993, ApJ, 411, 137
Boselli, A., Mendes de Oliveira, C., Balkowski, C., Cayatte, V., & Casoli, F. 1996, A&A, 314, 738
Brodie, J. P., Huchra, J. P. 1990, ApJ, 362, 503
Burstein, D., Faber, S. M., Gaskell, C. M., Krumm, N. 1984, ApJ, 287, 586
Caldwell, N., Rose, J. A. 1997, AJ, 113, 492
Caldwell, N., Rose, J. A., Franx, M., Leonardi, A. J. 1996, AJ, 111, 78
Contini, M. 1997, A&A, 323, 71
Copetti, M. V. F., Pastoriza, M. G., Dottori, H. A. 1986, A&A, 156, 111
Coziol, R. 1996, A&A, 309, 345
Coziol, R., Carlos Reyes, R. E., Considère, S., Davoust, E., Contini, T. 1999, A&A, submitted
Coziol, R., Contini, T., Davoust, E., Considère, S. 1997, ApJ, 481, L67
Coziol, R., Iovino, A., de Carvalho, R. R., Bernasconi, C. 1998c, in preparation
Coziol, R., Ribeiro, A. L. B., de Carvalho, R. R., Capelato, H. V. 1998a, ApJ, 493, 563
Coziol, R., de Carvalho, R. R., Capelato, H. V., Ribeiro, A. L. B. 1998b, ApJ, 506, 545
de Carvalho, R. R., Ribeiro, A. L. B., Capelato, H. V., Zepf, S. E. 1997, ApJS, 110, 1
Dey, A., van Breugel, W., Vacca, W., Antonucci, R. 1997, ApJ, 490, 698
Diaferio, A., Geller, M. J., & Ramella, M. 1994, AJ, 107, 868
Diaferio, A., Geller, M. J., & Ramella, M. 1995, AJ, 109, 2293
Dressler, A., Shectman, S. A. 1987, ApJ, 94, 899
Ebeling, H., Voges, W., & Böhringer, H. 1994, ApJ, 436, 44
Evans, I. N., Dopita, M. A. 1985, ApJS, 58, 125
Falcke, H. 1998, in Reviews in Modern Astronomy, Vol. 11, ed. R. E. Schielicke (Astronomische Gesellschaft), ("highlight" talk), astro-ph/9802238
Falcke, H., Wilson, A. S., Simpson, C. 1998, ApJ, 502, 199
Garcia, A. M. 1993, A&AS, 100, 47
Glass, I. S., Moorwood, A. F. M. 1985, MNRAS, 214, 429
Gonzalez Delgado, R. M., Pérez, E., Tadhunter, C., Vilchez, M. J., Rodríguez-Espinosa, J. M. 1997, ApJS, 108, 155
Gonzalez-Delgado, R. M., Heckman, T., Leitherer, C., Meurer, G., Krolik, J., Wilson, A., Kinney, A., Koratkar, A. 1998, ApJ, 505, 174
Greenawalt, B., Walterbos, R. A. M. 1997, ApJ, 483, 666
Haniff, C. A., Ward, M. J., Wilson, A. S. 1991, ApJ, 368, 167
Hibbard, J. E. 1995, Ph.D. Thesis, Columbia University
Ho, L. C., Filippenko, A. V., Sargent, W. L. W. 1993, ApJ, 417, 63
Ho, L. C. 1996, PASP, 108, 637
Hunt, L. K., Malkan, M. A., Salvati, M., Mandolesi, N., Palazzi, E., Wade, R. 1997, ApJS, 108, 229
Hunsberger, S. D., Charlton, J. C., & Zaritsky, D. 1996, ApJ, 462, 50
Iovino, A., Tassi, E. 1998, AJ, submitted
Kennicutt, R. C., Jr. 1983, ApJ, 272, 54
Kennicutt, R. C., Jr., Kent, S. M. 1983, AJ, 88, 1094
Kennicutt, R. C., Keel, W. C., Blaha, C. A. 1989, AJ, 97, 1022
Lake, G., Katz, N., Moore, B. 1998, ApJ, 495, 152
Larkin, J. E., Armus, L., Knop, R. A., Soifer, B. T., Matthews, K. 1998, ApJS, 114, 59
Lehnert, M. D., Heckman, T. M. 1996, ApJ, 462, 651
Leonardi, A. J., Rose, J. A. 1996, AJ, 111, 182
Maoz, D., Koratkar, A., Shields, J. C., Ho, L. C., Filippenko, A. V., Sternberg, A. 1998, AJ, 116, 55
McCall, M. L., Rybski, P. M., Shields, G. A. 1985, ApJS, 57, 1
Mendes de Oliveira, C., Hickson, P. 1994, ApJ, 427, 684
Mendes de Oliveira, C., Plana, H., Amram, P., Bolte, M., Boulesteix, J. 1998, ApJ (astro-ph/9805129)
Menon, T. K. 1995, MNRAS, 274, 845
Moles, M., Sulentic, J. W., Márquez, I. 1998, ApJ (astro-ph/9707194)
Moore, B., Lake, G., Katz, N. 1998, ApJ, 495, 139
Norman, C. A., Scoville, N. Z. 1988, ApJ, 332, 124
Ohyama, Y., Taniguchi, Y., Terlevich, R. 1997, ApJ, 480, L9
Perry, J. J. 1992, in Relationships Between Active Galactic Nuclei and Starburst Galaxies, ed. A. V. Filippenko, ASP Conf. Series, Vol. 31, 169
Poggianti, B. M., Barbaro, G. 1996, A&A, 314, 379
Quillen, A. C., Bower, G. 1998, ApJ
Ribeiro, A. L. B., de Carvalho, R. R., Capelato, H. V., Zepf, S. E. 1998, ApJ, 497, 72
Ribeiro, A. L. B., de Carvalho, R. R., Coziol, R., Capelato, H. V., Zepf, S. E. 1996, ApJ, 463, L5
Rood, H. J., & Struble, M. F. 1994, PASP, 106, 413
Rose, J. A. 1985, AJ, 90, 1927
Rubin, V., Hunter, D., Ford, W. K., Jr. 1991, ApJS, 76, 153
Salzer, J. J., MacAlpine, G. M., Boroson, T. A. 1989, ApJS, 70, 447
Serlemitsos, P., Ptak, A., Yaqoob, T. 1997, in The Physics of LINERs in View of Recent Observations, eds. M. Eracleous, A. Koratkar, C. Leitherer, & L. Ho, 70
Smith, D. A., Herter, T., Haynes, M. P., Neff, S. G. 1998, ApJ, in press (astro-ph/9808331)
Stauffer, J. P. 1982, ApJS, 50, 517
Storchi-Bergmann, T. S. 1991, MNRAS, 249, 404
Storchi-Bergmann, T. S., Wilson, A. S. 1996, in The Interplay Between Massive Star Formation, the ISM and Galaxy Evolution, eds. D. Kunth, B. Guiderdoni, N.
Taniguchi, Y. 1998, ApJ, 487, 17
Taniguchi, Y., Shioya, Y. 1998, ApJ, 501, 167
Terlevich, R., Tenorio-Tagle, G., Franco, J., Melnick, J. 1991, MNRAS, 255, 713
Véron, P., Gonçalves, A. C., Véron-Cetty, M.-P. 1997, A&A, 319, 52
Veilleux, S., Osterbrock, D. E. 1987, ApJS, 63, 295
Villar-Martín, M., Tadhunter, C., Clark, N. 1997, A&A, 323, 21
Worthey, G., Faber, S. M., González, J. J. 1992, ApJ, 398, 69
Zabludoff, A. I., Zaritsky, D., Lin, H., Tucker, D., Hashimoto, Y., Shectman, S. A., Oemler, A., Kirshner, R. P. 1996, ApJ, 466, 104
Zepf, S. E. 1993, ApJ, 407, 448
Zepf, S. E., Whitmore, B. C. 1991, ApJ, 383, 542
Zepf, S. E., de Carvalho, R. R., Ribeiro, A. L. B. 1997, ApJ, 488, L11
| [] |
[
"Recursion Relations, Generating Functions, and Unitarity Sums in N = 4 SYM Theory",
"Recursion Relations, Generating Functions, and Unitarity Sums in N = 4 SYM Theory"
] | [
"Henriette Elvang [email protected] \nCenter for Theoretical Physics\n\n",
"Daniel Z Freedman \nCenter for Theoretical Physics\n\n\nDepartment of Mathematics\nMassachusetts Institute of Technology\n77 Massachusetts Avenue Cambridge02139MAUSA\n",
"Michael Kiermaier [email protected] \nCenter for Theoretical Physics\n\n\nInstitute for the Physics and Mathematics\nUniverse University of Tokyo Kashiwa\n277-8582ChibaJapan\n"
] | [
"Center for Theoretical Physics\n",
"Center for Theoretical Physics\n",
"Department of Mathematics\nMassachusetts Institute of Technology\n77 Massachusetts Avenue Cambridge02139MAUSA",
"Center for Theoretical Physics\n",
"Institute for the Physics and Mathematics\nUniverse University of Tokyo Kashiwa\n277-8582ChibaJapan"
] | [] | We prove that the MHV vertex expansion is valid for any NMHV tree amplitude of N = 4 SYM. The proof uses induction to show that there always exists a complex deformation of three external momenta such that the amplitude falls off at least as fast as 1/z for large z. This validates the generating function for n-point NMHV tree amplitudes. We also develop generating functions for anti-MHV and anti-NMHV amplitudes. As an application, we use these generating functions to evaluate several examples of intermediate state sums on unitarity cuts of 1-, 2-, 3-and 4-loop amplitudes. In a separate analysis, we extend the recent results of arXiv:0808.0504 to prove that there exists a valid 2-line shift for any n-point tree amplitude of N = 4 SYM. This implies that there is a BCFW recursion relation for any tree amplitude of the theory. | 10.1088/1126-6708/2009/04/009 | [
"https://arxiv.org/pdf/0808.1720v2.pdf"
] | 14,124,358 | 0808.1720 | 0e41d8acda96ae8fecb92acd8718aa69450d5e26 |
Recursion Relations, Generating Functions, and Unitarity Sums in N = 4 SYM Theory
20 Dec 2008
Henriette Elvang [email protected]
Center for Theoretical Physics
Daniel Z Freedman
Center for Theoretical Physics
Department of Mathematics
Massachusetts Institute of Technology
77 Massachusetts Avenue Cambridge02139MAUSA
Michael Kiermaier [email protected]
Center for Theoretical Physics
Institute for the Physics and Mathematics
Universe University of Tokyo Kashiwa
277-8582ChibaJapan
Recursion Relations, Generating Functions, and Unitarity Sums in N = 4 SYM Theory
20 Dec 2008
We prove that the MHV vertex expansion is valid for any NMHV tree amplitude of N = 4 SYM. The proof uses induction to show that there always exists a complex deformation of three external momenta such that the amplitude falls off at least as fast as 1/z for large z. This validates the generating function for n-point NMHV tree amplitudes. We also develop generating functions for anti-MHV and anti-NMHV amplitudes. As an application, we use these generating functions to evaluate several examples of intermediate state sums on unitarity cuts of 1-, 2-, 3-and 4-loop amplitudes. In a separate analysis, we extend the recent results of arXiv:0808.0504 to prove that there exists a valid 2-line shift for any n-point tree amplitude of N = 4 SYM. This implies that there is a BCFW recursion relation for any tree amplitude of the theory.
Introduction
Recursion relations for tree amplitudes based on the original constructions of CSW [1] and BCFW [2,3] have had many applications to QCD, N = 4 SYM theory, general relativity, and N = 8 supergravity. In this paper we are concerned with recursion relations for n-point tree amplitudes $A_n(1, 2, \ldots, n)$ in which the external particles can be any set of gluons, gluinos, and scalars of N = 4 SYM theory.
Recursion relations follow from the analyticity and pole factorization of tree amplitudes in a complex variable z associated with a deformation or shift of the external momenta. A valid recursion relation requires that the shifted amplitude vanishes as z → ∞. This has been proven for external gluons by several interesting techniques [1,3,5], but there is only partial information for amplitudes involving other types of particles [6]. Of particular relevance to us is a very recent result of Cheung [7] who shows that there always exists at least one valid 2-line shift for any amplitude of the N = 4 theory in which one particle is a negative helicity gluon and the other n − 1 particles are arbitrary. We use SUSY Ward identities to extend the result to include amplitudes with n particles of any type. Thus for N = 4 SYM amplitudes there always exists a valid 2-line shift which leads to a recursion relation of the BCFW type. It is simplest for MHV amplitudes but provides a correct representation of all amplitudes.
For NMHV amplitudes the MHV vertex expansion of CSW is usually preferred, and this is our main focus. The MHV vertex expansion is associated with a 3-line shift [8], and it is again required to show that amplitudes vanish at large z under such a shift. To prove this, we use the BCFW representation to study n-point NMHV amplitudes in which one particle is a negative helicity gluon but other particles are arbitrary. Using induction on n we show that there is always at least one 3-line shift for which these amplitudes vanish in the large z limit. The restriction that one particle is a negative helicity gluon can then be removed using SUSY Ward identities. Thus there is a valid (and unique, as we argue) MHV vertex expansion for any NMHV amplitude of the N = 4 theory.
Next we turn our attention to the generating functions which have been devised to determine the dependence of amplitudes on external states of the theory. The original and simplest case is the MHV generating function of Nair [9]. A very useful extension to diagrams of the CSW expansion for NMHV amplitudes was proposed by Georgiou, Glover, and Khoze [10]. MHV and NMHV generating functions were further studied in a recent paper [11] involving two of the present authors. A 1:1 correspondence was established between the particles of N = 4 SYM theory and differential operators involving the Grassmann variables of the generating function. MHV amplitudes are obtained by applying products of these differential operators of total order 8 to the generating function, and an NMHV amplitude is obtained by applying a product of operators of total order 12. We review these constructions and emphasize that the NMHV generating function has the property that every 12th order differential operator projects out the correct MHV vertex expansion of the corresponding amplitude, specifically the expansion which is validated by the study of the large z behavior of 3-line shifts described above. In this sense the NMHV generating function is universal in N = 4 SYM theory. Its form does not contain any reference to a shift, but every amplitude is produced in the expansion which was established using a valid 3-line shift.
In [11] it was shown in examples at the MHV level how the generating function formalism automates and simplifies the sum over intermediate helicity states required to compute the unitarity cuts of loop diagrams. In this paper we show how Grassmann integration further simplifies and extends the MHV level helicity sums. We then apply the universal generating function to examples of helicity sums involving MHV and NMHV amplitudes.
Even in the computation of MHV amplitudes at low loop order, N 2 MHV and N 3 MHV tree amplitudes are sometimes required to complete the sums over intermediate states. We derive generating functions for all N k MHV amplitudes in [12] (see also [13]). Note though that when the amplitudes have k + 4 external lines these are equivalent to anti-MHV or anti-NMHV amplitudes. In this paper we discuss a general procedure to convert the conjugate of any N k MHV generating function into an anti-N k MHV generating function which can be used to compute spin sums. We study the n-point anti-MHV and anti-NMHV cases in detail and apply them in several examples of helicity sums. These include 3-loop and 4-loop cases. We use conventions and notation given in Appendix A of [11].
N = 4 SUSY Ward identities
The bosons and fermions of N = 4 SYM theory can be described by the following annihilation operators, which are listed in order of descending helicity:
$$B^+(i)\,,\quad F^+_a(i)\,,\quad B_{ab}(i) = \tfrac{1}{2}\,\epsilon_{abcd}\,B^{cd}(i)\,,\quad F^-_a(i)\,,\quad B^-(i)\,. \qquad (2.1)$$
The argument i is shorthand for the 4-momentum p µ i carried by the particle. Particles of opposite helicity transform in conjugate representations of the SU (4) global symmetry group (with indices a, b, . . . ), and scalars satisfy the indicated SU (4) self-duality condition. In this paper it is convenient to "dualize" the lower indices of positive helicity annihilators and introduce a notation in which all particles carry upper SU (4) indices, namely:
$$A(i) = B^+(i)\,,\qquad A^a(i) = F^+_a(i)\,,\qquad A^{ab}(i) = B^{ab}(i)\,, \qquad (2.2)$$
$$A^{abc}(i) = \epsilon^{abcd}\,F^-_d(i)\,,\qquad A^{abcd}(i) = \epsilon^{abcd}\,B^-(i)\,.$$
Note that the helicity (hence bose-fermi statistics) of any particle is then determined by the SU (4) tensor rank r of the operator A a1...ar (i) .
Chiral supercharges $Q_a \equiv \epsilon^\alpha Q_{a\,\alpha}$ and $\tilde Q_a \equiv \tilde\epsilon_{\dot\alpha}\,\tilde Q^{\dot\alpha}_a$ are defined to include contraction with the anti-commuting parameters $\epsilon^\alpha$, $\tilde\epsilon_{\dot\alpha}$ of SUSY transformations. The commutators of the operators $Q_a$ and $\tilde Q_a$ with the various annihilators are given by:
$$[Q_a, A(i)] = 0\,,\qquad [Q_a, A^b(i)] = \langle\epsilon\, i\rangle\, \delta^b_a\, A(i)\,,\qquad [Q_a, A^{bc}(i)] = \langle\epsilon\, i\rangle\; 2!\; \delta^{[b}_a\, A^{c]}(i)\,,$$
$$[Q_a, A^{bcd}(i)] = \langle\epsilon\, i\rangle\; 3!\; \delta^{[b}_a\, A^{cd]}(i)\,,\qquad [Q_a, A^{bcde}(i)] = \langle\epsilon\, i\rangle\; 4!\; \delta^{[b}_a\, A^{cde]}(i)\,.$$
It is frequently useful to suppress indices and simply use O(i) for any annihilation operator from the set in (2.2). A generic n-point amplitude may then be denoted by $A_n = \langle O(1)\, O(2) \cdots O(n)\rangle$.
The simplest Ward identity relates the amplitude with one negative helicity gluon and n − 1 positive helicity gluons to an amplitude with one fermion pair; it follows from $0 = \langle [Q_1,\, A^1(1)\, A^{1234}(2)\, A(3) \cdots A(n)]\rangle$. There are exactly two terms in the Ward identity. One can choose $|\epsilon\rangle \sim |2\rangle$ and learn that the first amplitude, involving one negative helicity and n − 1 positive helicity gluons, vanishes. The second fermion pair amplitude must then also vanish. We have chosen one specific example for clarity, but the argument applies to all amplitudes with $\sum_{i=1}^n r_i = 4$.
For amplitudes with higher total η-count one uses the Ward identity
$$0 = \langle [\tilde Q_1,\, O(1)\, O(2) \cdots O(n)]\rangle\,. \qquad (2.7)$$
This is a non-trivial identity if the index 1 appears m − 1 times, and the indices 2, 3, 4 each appear m times, among the O(i). The commutator then contains n − m + 1 terms, each again with an amplitude with $\sum_{i=1}^n r_i = 4m$. To summarize, all amplitudes related by any one SUSY Ward identity must have the same total number of upper SU(4) indices. It is then easy to see that the case m = 2 corresponds to MHV amplitudes, m = 3 to NMHV, while general N^kMHV amplitudes must carry a total of 4(k + 2) upper indices.
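The index bookkeeping above is mechanical and easy to check by machine. The following short Python sketch is our own illustration (the helper name classify is hypothetical, not from the paper): it totals the η-count of a list of external states, verifies that each SU(4) index value appears the same number m of times, and returns the corresponding N^(m-2)MHV label.

# Minimal bookkeeping sketch (ours, not from the paper): classify an amplitude
# by the total eta-count of its external states.
def classify(states):
    """states: list of tuples of SU(4) indices, e.g. (1,2,3,4) for B^-."""
    total = sum(len(s) for s in states)
    # SU(4) invariance: each index value 1..4 must appear the same number m of times
    counts = [sum(s.count(a) for s in states) for a in (1, 2, 3, 4)]
    if len(set(counts)) != 1 or total % 4 != 0:
        return "not SU(4) invariant (amplitude vanishes)"
    m = total // 4
    if m < 2:
        return "vanishing"
    return {2: "MHV", 3: "NMHV"}.get(m, f"N^{m-2}MHV")

# Example: <A^{1234}(1) A^{1234}(2) A^{123}(3) A^{4}(4) A(5) A(6)>  ->  NMHV
print(classify([(1, 2, 3, 4), (1, 2, 3, 4), (1, 2, 3), (4,), (), ()]))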
In [11] a 1:1 correspondence between annihilation operators in (2.2) and differential operators involving the Grassmann variables η ia of generating functions was introduced. We will need this correspondence in Sec. 4 below, so we restate it here:
$$A(i) \leftrightarrow 1\,,\qquad A^a(i) \leftrightarrow \frac{\partial}{\partial\eta_{ia}}\,,\qquad A^{ab}(i) \leftrightarrow \frac{\partial^2}{\partial\eta_{ia}\,\partial\eta_{ib}}\,, \qquad (2.8)$$
$$A^{abc}(i) \leftrightarrow \frac{\partial^3}{\partial\eta_{ia}\,\partial\eta_{ib}\,\partial\eta_{ic}}\,,\qquad A^{abcd}(i) \leftrightarrow \frac{\partial^4}{\partial\eta_{ia}\,\partial\eta_{ib}\,\partial\eta_{ic}\,\partial\eta_{id}}\,.$$
Thus a particle state whose upper SU(4) rank is r corresponds to a differential operator of order r. In accord with [11] we will refer to the rank r as the η-count of the particle state. We showed in [11] that an MHV amplitude containing a given set of external particles can be obtained by applying a product of the corresponding differential operators of total order 8 to the MHV generating function, and that NMHV amplitudes are obtained by applying products of total order 12 to the NMHV generating function. The classification of amplitudes based on the total η-count of the particles they contain is a consequence of SU(4) invariance.
Valid 3-line shifts for NMHV amplitudes
The major goal of this section is to prove that there is at least one 3-line shift for any NMHV amplitude A n (m 1 , . . . , m 2 , . . . , m 3 , . . .) under which the amplitude vanishes at the rate 1/z or faster as z → ∞. We show that this is true when the shifted lines m 1 , m 2 , m 3 share at least one common SU (4) index, and that such a shift is always available. The first step in the proof is to show that there is a valid 3-line shift for any NMHV amplitude A n (1 − , . . . , m 2 , . . . , m 3 , . . . , n), with particle 1 a negative helicity gluon, m 2 and m 3 sharing a common SU (4) index, and the other states arbitrary. This requires an intricate inductive argument which we outline here and explain in further detail in Appendix A. We then generalize the result to arbitrary NMHV amplitudes using a rather short argument based on SUSY Ward identities. This result implies that there is a valid MHV vertex expansion for any NMHV amplitude in N = 4 SYM theory.
Valid shifts for $A_n(1^-, \ldots, m_2, \ldots, m_3, \ldots, n)$
We must start with a correct representation of the amplitude $A_n(1^-, \ldots, m_2, \ldots, m_3, \ldots, n)$ which we can use to study the limit z → ∞ under the 3-line shift [8]
$$|\tilde 1] = |1] + z\,\langle m_2\, m_3\rangle\, |X]\,,\qquad |\tilde m_2] = |m_2] + z\,\langle m_3\, 1\rangle\, |X]\,,\qquad |\tilde m_3] = |m_3] + z\,\langle 1\, m_2\rangle\, |X]\,, \qquad (3.1)$$
where |X] is an arbitrary reference spinor. Angle bracket spinors $|1\rangle$, $|m_2\rangle$, $|m_3\rangle$ are not shifted. It is assumed that the states m_2 and m_3 share at least one common SU(4) index. We must show that the large z limit of the amplitude deformed by this shift vanishes for all |X]. The amplitude then contains no pole at ∞ and Cauchy's theorem can be applied to derive a recursion relation containing a sum of diagrams, each of which is a product of two MHV subdiagrams connected by one internal line. This recursion relation agrees with the MHV vertex expansion of [1].
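As a quick numeric illustration (our own, with our own sign convention for the angle bracket), one can check that the shift (3.1) preserves overall momentum conservation for every z: the z-dependent part of $\sum_i |i\rangle[\tilde i|$ is proportional to $\langle m_2 m_3\rangle|1\rangle + \langle m_3 1\rangle|m_2\rangle + \langle 1 m_2\rangle|m_3\rangle$, which vanishes by the Schouten identity.

# Numeric sanity check (ours): the Schouten combination behind the shift vanishes.
import numpy as np

def ab(l1, l2):                    # angle bracket <12>, our sign convention
    return l1[0]*l2[1] - l1[1]*l2[0]

rng = np.random.default_rng(0)
m1, m2, m3 = (rng.normal(size=2) + 1j*rng.normal(size=2) for _ in range(3))
residual = ab(m2, m3)*m1 + ab(m3, m1)*m2 + ab(m1, m2)*m3
print(np.max(np.abs(residual)))    # ~1e-16: shifted momenta still sum to zero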
The representation we need was recently established by Cheung [7] who showed that every amplitude $A_n(1^-, \ldots, x, \ldots, n)$, with particle 1 a negative helicity gluon and others arbitrary, vanishes in the large z limit of the 2-line shift
$$|\tilde 1] = |1] + z\,|x]\,,\qquad |\tilde 1\rangle = |1\rangle\,,\qquad |\tilde x] = |x]\,,\qquad |\tilde x\rangle = |x\rangle - z\,|1\rangle\,. \qquad (3.2)$$
This leads to a recursion relation containing a sum of diagrams which are each products of a Left subdiagram, whose $n_L$ lines include the shifted line $\tilde 1$, and a Right subdiagram, whose $n_R$ lines include $\tilde x$. Clearly, $n_L + n_R = n + 2$. See Fig. 1. As explained in Appendix A, only the following two types of diagrams contribute to the recursion relation:
Type A: MHV × MHV diagrams with n L ≥ 3 and n R ≥ 4.
Type B: NMHV × anti-MHV diagrams with n L = n − 1 and n R = 3.
Our strategy is to consider the effect of the shift (3.1) as a secondary shift on each diagram of the recursion relation above. The action of the shift depends on how m 2 and m 3 are placed on the left and right subdiagrams. In Appendix A, we show that every Type A diagram vanishes as z → ∞, and that Type B diagrams can be controlled by induction on n. Thus the full amplitude A n (1 − , . . . , m 2 , . . . , m 3 , . . . , n), with lines m 2 and m 3 sharing a common SU(4) index, is shown to fall off at least as fast as 1/z under the [1, m 2 , m 3 |-shift. The full argument is complex and requires detailed examination of special cases for n = 6, 7. Interested readers are referred to Appendix A.
We used a 2-line shift simply to have a correct representation of the amplitude to work with in the proof of the large z falloff. That shift plays no further role. In the following we use a more general designation in which line 1 is relabeled m 1 .
Valid shifts for $A_n(m_1, \ldots, m_2, \ldots, m_3, \ldots)$
We now wish to show that any NMHV amplitude vanishes at least as fast as 1/z under the 3-line shift
$$|\tilde m_1] = |m_1] + z\,\langle m_2\, m_3\rangle\, |X]\,,\qquad |\tilde m_2] = |m_2] + z\,\langle m_3\, m_1\rangle\, |X]\,,\qquad |\tilde m_3] = |m_3] + z\,\langle m_1\, m_2\rangle\, |X]\,, \qquad (3.3)$$
provided that the 3 lines m 1 , m 2 , m 3 have at least one common SU (4) index which we denote by a.
In the previous section we showed that the shift is valid if r 1 = 4, where, as usual, r 1 denotes the η-count of line m 1 .
We work with SUSY Ward identities and proceed by (finite, downward) induction on $r_1$. We assume that 1/z falloff holds for all amplitudes with $r_1 = \bar r$, for some $1 \le \bar r \le 4$. We now want to show that it also holds for amplitudes with $r_1 = \bar r - 1$. Since $r_1 < 4$, there is at least one SU(4) index not carried by the annihilation operator $O(m_1)$. We denote this index by b and use $O^b(m_1)$ to denote the operator of rank $\bar r$ containing the original indices of $O(m_1)$ plus b. This operator satisfies $[Q_b, O^b(m_1)] = \langle\epsilon\, m_1\rangle\, O(m_1)$ (no sum on b). The Ward identity we need (with $|\epsilon\rangle$ chosen such that $\langle\epsilon\, m_1\rangle \neq 0$) is
$$0 = \langle [Q_b,\, O^b(m_1) \cdots O(m_2) \cdots O(m_3) \cdots]\rangle = \langle\epsilon\, m_1\rangle\, \langle O(m_1) \cdots O(m_2) \cdots O(m_3) \cdots\rangle + \langle O^b(m_1)\, [Q_b,\, \cdots O(m_2) \cdots O(m_3) \cdots]\rangle\,. \qquad (3.4)$$
The first term in the final equality contains the NMHV amplitude we are interested in (which is an amplitude with $r_1 = \bar r - 1$). The index b appears 3 times among the operators O(i) in the commutator in the second term, so there are 3 potentially non-vanishing terms in that commutator. Each term contains an unshifted angle bracket $\langle\epsilon\, i\rangle$ times an NMHV amplitude with $r_1 = \bar r$ and lines $m_1, m_2, m_3$ sharing the common index a, thus it is an amplitude which vanishes as 1/z or faster. We conclude
$$A_n = \langle O(m_1) \cdots O(m_2) \cdots O(m_3) \cdots\rangle \;\to\; \frac{1}{z} \quad \text{under the 3-line shift (3.3) if lines } m_1, m_2, m_3 \text{ share at least one SU(4) index}\,. \qquad (3.5)$$
We have thus established valid [m 1 , m 2 , m 3 |-shifts if the common index criterion is satisfied. One may ask if this is a necessary condition. There are examples of shifts not satisfying our criterion but which still produce 1/z falloff. In [11] the falloff of the 6-gluon amplitude A 6 (1 − , 2 − , 3 − , 4 + , 5 + , 6 + ) under 3-line shifts was studied numerically. The results in (6.49) of [11] show that some shifts of three lines which do not share a common index do nonetheless give 1/z falloff while others are O(1) at large z.
Note that the 6-gluon amplitude above has a unique shift satisfying our criterion, while any 6-point NMHV amplitude in which the 12 indices appear on 4 or more lines has several such shifts. The case $A_6 = \langle A^{1234}(1)\, A^{1234}(2)\, A^{123}(3)\, A^{4}(4)\, A(5)\, A(6)\rangle$ is an example.
Generating functions
A generating function for MHV tree amplitudes in N = 4 SYM theory was invented by Nair [9]. The construction was extended to the NMHV level by Georgiou, Glover, and Khoze [10]. Generating functions are a very convenient way to encode how an amplitude depends on the helicity and global symmetry charges of the external states. The generating function for an n-point amplitude depends on 4n real Grassmann variables $\eta_{ia}$, and the spinors $|i\rangle$, $|i]$ and momenta $p_i$ of the external lines. A 1:1 correspondence between states of the theory and Grassmann derivatives was defined in [11] and given above in (2.8). Any desired amplitude is obtained by applying the product of differential operators associated with its external particles to the generating function. It was also shown in [11] that amplitudes obtained from the generating function obey SUSY Ward identities.
The discussion below is in part a review, but we emphasize the shift-independent universal property of the NMHV generating function. We follow [11], and more information can be found in that reference.
MHV generating function
The MHV generating function is (lines are identified periodically, i ≡ i + n)
$$F_n = \frac{\delta^{(8)}\big(\sum_{i=1}^n |i\rangle\, \eta_{ia}\big)}{\prod_{i=1}^n \langle i, i+1\rangle}\,. \qquad (4.1)$$
The 8-dimensional δ-function can be expressed as the product of its arguments, i.e.
$$\delta^{(8)}\Big(\sum_{i=1}^n |i\rangle\, \eta_{ia}\Big) = \frac{1}{2^4} \prod_{a=1}^4 \sum_{i,j=1}^n \langle i\, j\rangle\, \eta_{ia}\, \eta_{ja}\,. \qquad (4.2)$$
In Sec. 2 we saw that MHV amplitudes $\langle O(1) O(2) \cdots O(n)\rangle$ contain products of operators with a total of 8 SU(4) indices (with each index value appearing exactly twice among the O(i)). The associated product of differential operators $D_i$ from (2.8) has total order 8 and the amplitude may be expressed as:
$$\langle O(1)\, O(2) \cdots O(n)\rangle = D_1 D_2 \cdots D_n\, F_n \qquad (4.3)$$
$$= \frac{\text{spin factor}}{\langle 12\rangle \langle 23\rangle \cdots \langle n-1, n\rangle\, \langle n1\rangle}\,. \qquad (4.4)$$
The numerator is the spin factor, which is the product of 4 angle brackets from the differentiation of $\delta^{(8)}$. It is easy [11] to compute spin factors. Here is an example of a 5-point function:
$$\langle A^{1234}(1)\, A^{1}(2)\, A^{23}(3)\, A^{4}(4)\, A(5)\rangle = \frac{\langle 12\rangle \langle 13\rangle^2 \langle 14\rangle}{\langle 12\rangle \langle 23\rangle \langle 34\rangle \langle 45\rangle \langle 51\rangle}\,. \qquad (4.5)$$
Like brackets are not cancelled because we want to illustrate how this example conforms to the general structure above.
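The rule just used can be automated. The sketch below is our own illustration (function names hypothetical): for each SU(4) index value a, the two lines carrying a contribute the bracket $\langle ij\rangle$, and the spin factor is the product over a, up to an overall sign from the ordering of the Grassmann derivatives.

# Sketch (ours): read off the MHV spin factor of delta^(8) directly.
import numpy as np

def angle(l1, l2):                       # <12>, our sign convention
    return l1[0]*l2[1] - l1[1]*l2[0]

def mhv_spin_factor(index_lines, lam):
    """index_lines: dict a -> (i, j), the two lines carrying SU(4) index a;
    lam: dict line -> 2-component angle spinor |i>."""
    out = 1.0 + 0j
    for i, j in index_lines.values():
        out *= angle(lam[i], lam[j])
    return out

# Example (4.5): indices 1,2,3,4 sit on line pairs (1,2), (1,3), (1,3), (1,4),
# giving the numerator <12><13>^2<14>.
rng = np.random.default_rng(0)
lam = {i: rng.normal(size=2) + 1j*rng.normal(size=2) for i in range(1, 6)}
print(mhv_spin_factor({1: (1, 2), 2: (1, 3), 3: (1, 3), 4: (1, 4)}, lam))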
The MHV vertex expansion of an NMHV amplitude
The NMHV generating function is closely tied to the MHV vertex expansion of [1]. The diagrams of such an expansion contain products of two MHV subamplitudes with at least one shifted line in each factor. For n-gluon NMHV amplitudes it was shown in [8] that this expansion agrees with the recursion relation obtained from the 3-line shift (3.3). For a general NMHV amplitude the recursion relation from any valid shift also leads to an expansion containing diagrams with two shifted MHV subamplitudes. In N = 4 SYM theory this expansion has the following important property, which we demonstrate below: the recursion relation obtained from any valid 3-line shift contains no reference to the shift used to derive it. Therefore, all $[m_1, m_2, m_3|$-shifts in which the shifted lines contain at least one common SU(4) index yield the same recursion relation! The MHV vertex expansion is thus unique for every amplitude.
A typical MHV vertex diagram is illustrated in Fig. 2. In our conventions all particle lines are regarded as outgoing. Therefore, if the particle on the internal line carries a particular set of SU (4) indices of rank r I in the left subamplitude, it must carry the complementary set of indices of rank 4 − r I in the right amplitude. Since each subamplitude must be SU (4) invariant, there is a unique state of the theory which can propagate across the diagram. Any common index a of the shifted lines m 1 , m 2 , m 3 , must also appear on the internal particle in the subdiagram that contains only one shifted line.
Let us assume, as indicated in Fig. 2, that the left subamplitude contains the external lines s + 1, . . . , t, including a shifted linem i , and that the right subamplitude contains lines t + 1, . . . , s and the remaining shifted linesm j ,m k (here i, j, k denotes a cyclic permutation of 1, 2, 3). In each subamplitude one uses the CSW prescription for the angle spinor of the internal line:
$$|P_I\rangle \equiv P_I\, |X] = \sum_{i=s+1}^{t} |i\rangle\, [i\, X]\,,\qquad |-P_I\rangle = -|P_I\rangle\,. \qquad (4.6)$$
The contribution of the diagram to the expansion is simply the product of the MHV subamplitudes times the propagator of the internal line. It is given by:
$$\frac{(\text{spin factor})_L}{\langle -P_I, s+1\rangle \cdots \langle t-1, t\rangle\, \langle t, -P_I\rangle}\;\; \frac{1}{P_I^2}\;\; \frac{(\text{spin factor})_R}{\langle P_I, t+1\rangle \cdots \langle s-1, s\rangle\, \langle s, P_I\rangle}\,. \qquad (4.7)$$
The numerator factors are products of 4 angle brackets which are the spin factors for the left and right subamplitudes. They depend on the spinors |i and | ± P I in each subamplitude and can be calculated easily from the MHV generating function described in Sec. 4.1. The denominators contain the same cyclic products of i, i + 1 well known from the Parke-Taylor formula [14], and the standard propagator factor P 2 I = (p s+1 + . . . + p t ) 2 . The main point is that there is simply no trace of the initial shift in the entire formula (4.7) because i. only angle brackets are involved, and they are unshifted, and ii. the propagator factor is unshifted.
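A minimal numeric sketch (ours) of the ingredients in (4.6) and (4.7): the CSW angle spinor $|P_I\rangle = \sum_i |i\rangle [i\,X]$ of the internal line and the propagator factor $P_I^2 = \sum_{i<j}\langle ij\rangle [ji]$, in sign conventions of our own choosing.

# Sketch (ours) of the CSW prescription for an internal line.
import numpy as np
from itertools import combinations

def angle(a, b):  return a[0]*b[1] - a[1]*b[0]     # <ab>, our convention
def square(a, b): return a[0]*b[1] - a[1]*b[0]     # [ab], same convention

def csw_angle_spinor(lams, lamts, Xt):
    """lams/lamts: lists of angle/square spinors of the lines in I; Xt: |X]."""
    return sum(l * square(lt, Xt) for l, lt in zip(lams, lamts))

def P_squared(lams, lamts):
    """P_I^2 up to a convention-dependent overall sign."""
    return sum(angle(lams[i], lams[j]) * square(lamts[j], lamts[i])
               for i, j in combinations(range(len(lams)), 2))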
To complete the discussion we suppose that there is another valid shift on lines $[m_1', m_2', m_3'|$ which have a common index we will call b. Consider any diagram that appears in the expansion arising from the original $[m_1, m_2, m_3|$-shift. If each subdiagram happens to contain (at least) one of the $m_i'$ lines, then the same diagram with the same contribution to the amplitude occurs in the expansion obtained from the $m_i'$ shift. A diagram from the $m_i'$ expansion in which all 3 $m_i'$ lines are located in one of the two subamplitudes cannot occur because the index b would appear 3 times in that subamplitude. This is impossible because that subamplitude is MHV and contains each SU(4) index only twice. This completes the argument that any valid 3-line shift yields the same MHV vertex expansion in which the contribution of each diagram is independent of the chosen shift. The MHV vertex expansion of any NMHV amplitude is unique.
The contribution of each diagram to the expansion depends on the reference spinor |X]. Since the physical amplitude contains no such arbitrary object, the sum of all diagrams must be independent of |X]. This important fact is guaranteed by the derivation of the recursion relation provided that the amplitude vanishes as z → ∞ for all |X]. This is what we proved in section 3.
The universal NMHV generating function
To obtain the generating function for the (typical) MHV vertex diagram in Fig. 2 we start with the product of MHV generating functions for each subdiagram times the internal propagator. We rewrite this product as
$$\frac{1}{\prod_{i=1}^n \langle i, i+1\rangle}\; W_I\; \delta^{(8)}(L)\, \delta^{(8)}(R) \qquad (4.8)$$
with
$$W_I = \frac{\langle s, s+1\rangle\, \langle t, t+1\rangle}{\langle s\, P_I\rangle\, \langle s+1, P_I\rangle\;\; P_I^2\;\; \langle t\, P_I\rangle\, \langle t+1, P_I\rangle} \qquad (4.9)$$
$$L = |-P_I\rangle\, \eta_{Ia} + \sum_{i=s+1}^{t} |i\rangle\, \eta_{ia} \qquad (4.10)$$
$$R = |P_I\rangle\, \eta_{Ia} + \sum_{j=t+1}^{s} |j\rangle\, \eta_{ja}\,. \qquad (4.11)$$
The Grassmann variable η Ia is used for the internal line. We have separated the denominator factors in (4.7) into a Parke-Taylor cyclic product over the full set of external lines times a factor W I involving the left-right split, as used in [10].
The contribution of (4.8) to the diagram for a given process is then obtained by applying the appropriate product of Grassmann derivatives from (2.8). This product includes derivatives for external lines and the derivatives $D_{I_L} D_{I_R}$ for the internal lines. It follows from the discussion above that the operators $D_{I_L}$ and $D_{I_R}$ are of order $r_I$ and $4 - r_I$ respectively, and that their product is simply
$$D_{I_L} D_{I_R} = \prod_{a=1}^4 \frac{\partial}{\partial\eta_{Ia}}\,. \qquad (4.12)$$
We apply this 4th order derivative to (4.8), convert the derivative to a Grassmann integral as in [11], and integrate using the formula [10]
$$\int \prod_{a=1}^4 d\eta_{Ia}\;\; \delta^{(8)}(L)\, \delta^{(8)}(R) = \delta^{(8)}\Big(\sum_{i=1}^n |i\rangle\, \eta_{ia}\Big) \prod_{b=1}^4 \sum_{j=s+1}^{t} \langle P_I\, j\rangle\, \eta_{jb}\,. \qquad (4.13)$$
Thus we obtain the generating function
$$F_{I,n} = \frac{\delta^{(8)}\big(\sum_{i=1}^n |i\rangle\, \eta_{ia}\big)}{\prod_{i=1}^n \langle i, i+1\rangle}\; W_I \prod_{b=1}^4 \sum_{i=s+1}^{t} \langle P_I\, i\rangle\, \eta_{ib} = \frac{\delta^{(8)}\big(\sum_{i=1}^n |i\rangle\, \eta_{ia}\big)}{\prod_{i=1}^n \langle i, i+1\rangle}\; W_I \prod_{b=1}^4 \sum_{j=t+1}^{s} \langle P_I\, j\rangle\, \eta_{jb}\,. \qquad (4.14)$$
The two expressions are equal because $\delta^{(8)}$ for the external lines is present. Using (4.2) one can see that (4.14) contains a sum of terms, each containing a product of 12 $\eta_{ia}$. To obtain the contribution of the diagram to a particular NMHV process we simply apply the appropriate product of differential operators of total order 12. This gives the value of the diagram in the original form (4.7).
In Sec. 4.2 we argued that the MHV vertex expansion of any particular amplitude is unique and contains exactly the diagrams which come from the recursion relation associated with any valid 3-line shift $[m_1, m_2, m_3|$ which satisfies the common index criterion. A diagram is identified by specifying the channel in which a pole occurs. A 6-point amplitude $A_6(1, 2, 3, 4, 5, 6)$ can contain 2-particle poles in the channels (12), (23), (34), (45), (56), or (61), and there can be 3-particle poles in the channels (123), (234), (345). However, different 6-point NMHV amplitudes contain different subsets of the 9 possible diagrams. For example, the 6-gluon amplitudes with helicity configurations $A_6(---+++)$ and $A_6(-+-+-+)$ each have one valid common index shift of the 3 negative helicity lines. In the first case, there are 6 diagrams, since diagrams with poles in the (45), (56) and (123) channels do not occur in the recursion relation, but all 9 possible diagrams contribute to the recursion relation for the second case. The amplitude $\langle A^{1}(1)\, A^{12}(2)\, A^{23}(3)\, A^{234}(4)\, A^{134}(5)\, A^{4}(6)\rangle$ for a process with 4 gluinos and 2 scalars is a more curious example; its MHV vertex expansion contains only one diagram with pole in the (45) channel.
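The counting of channels can be reproduced with a short enumeration (our own toy script, not from the paper): the possible channels are the cyclically consecutive subsets of {1, ..., n} of length 2 to n - 2, identified with their complements, giving n(n-3)/2 channels, i.e. 9 for n = 6.

# Toy enumeration (ours) of the n(n-3)/2 distinct pole channels.
n = 6
channels = set()
for s in range(n):
    for length in range(2, n - 1):
        ch = tuple(sorted((s + k) % n + 1 for k in range(length)))
        comp = tuple(sorted(set(range(1, n + 1)) - set(ch)))
        channels.add(min(ch, comp))        # identify a channel with its complement
print(len(channels), sorted(channels))     # 9 channels for n = 6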
We would like to define a universal generating function which contains the amplitudes for all npoint NMHV amplitudes, such that any particular amplitude is obtained by applying the appropriate 12th order differential operator. It is natural to define the generating function as
$$F_n = \sum_I F_{I,n} \qquad (4.15)$$
in which we sum the generating functions (4.14) for all n(n - 3)/2 possible diagrams that can appear in the MHV vertex expansion of n-point amplitudes, for example all 9 diagrams listed above for 6-point amplitudes. If a particular diagram $\bar I$ does not appear in the MHV vertex expansion of a given amplitude, then the spin factor obtained by applying the appropriate Grassmann differential operator to the generating function $F_{\bar I, n}$ must vanish, leaving only the actual diagrams which contribute to the expansion.
To convince the reader that this is true, we first make an observation which follows from the way in which each $F_{I,n}$ is constructed starting from (4.8). We observe that the result of the application of a Grassmann derivative $D^{(12)}$ of order 12 in the external $\eta_{ia}$ to any $F_{I,n}$ is the same as the result of applying the operator $D^{(16)} = D^{(12)} D_{I_L} D_{I_R}$ to the product in (4.8). If non-vanishing, this result is simply the product of the spin factors for the left and right subdiagrams, so the contribution of the diagram I to the amplitude corresponding to $D^{(12)}$ is correctly obtained.
We now show that $D^{(12)} F_{\bar I, n}$ vanishes when a diagram $\bar I$ does not contribute to the corresponding amplitude. We first note that the amplitudes governed by $F_n$ are all NMHV. Thus they all have overall η-count 12, and SU(4) invariance requires that each index value a = 1, 2, 3, 4 must appear exactly 3 times among the external lines. Denote the lines which carry the index a in the amplitude under study by $q_{1a}, q_{2a}, q_{3a}$. Consider a diagram I and suppose that for every value of a its two subamplitudes each contain at least one line from the set $q_{ka}$, k = 1, 2, 3. Then the diagram I appears in the MHV vertex expansion of the amplitude, and the diagram contributes correctly to $D^{(12)} F_n$. The other possibility is that there is a diagram $\bar I$ such that for some index value b the 3 lines $q_{1b}, q_{2b}, q_{3b}$ appear in only one subamplitude, say the left subamplitude. Then the right subamplitude cannot be SU(4) invariant. Its spin factor vanishes and the diagram does not contribute.
Spin state sums for loop amplitudes
Consider the L-loop amplitude shown in figure 3. The evaluation of the (L + 1)-line unitarity cut involves a sum over all intermediate states that run in the loops. The generating functions allow us to do such sums very efficiently, for any arrangements of external states as long as the left and right subamplitudes, denoted I and J, are either MHV or NMHV tree amplitudes.
We begin by a general analysis of cut amplitudes of the type in figure 3. Assume that the full amplitude is N^kMHV. Then the total η-count is $\sum_{\text{ext } i} r_i = 4(k+2)$. Let the η-count of the l-th loop state on the subamplitude I be $w_l$; then that same line will have η-count $4 - w_l$ on the subamplitude J. The total η-counts on the subamplitudes I and J are then, respectively,
$$r_I = \sum_{\text{ext } i\in I} r_i + \sum_{l=1}^{L+1} w_l\,,\qquad r_J = \sum_{\text{ext } j\in J} r_j + \sum_{l=1}^{L+1} (4 - w_l)\,, \qquad (5.1)$$
so that
$$r_I + r_J = 4(k + L + 3)\,. \qquad (5.2)$$
The possible products of subamplitude types are therefore:

         L = 1       L = 2                    L = 3                                   L = 4
MHV  ->  MHV x MHV   MHV x NMHV, NMHV x MHV   MHV x N^2MHV, NMHV x NMHV, N^2MHV x MHV   MHV x N^3MHV, NMHV x N^2MHV, N^2MHV x NMHV, N^3MHV x MHV
NMHV ->  MHV x NMHV, NMHV x MHV   MHV x N^2MHV, NMHV x NMHV, N^2MHV x MHV   etc.

[Figure 3: N^kMHV loop amplitude evaluated by a unitarity cut of (L+1) lines. The sum over intermediate states involves all subamplitudes I and J with η-counts $r_I$ and $r_J$ such that $r_I + r_J = 4(k + L + 3)$. (For L = 1 we assume that I and J each have more than one external leg, so that 3-point anti-MHV does not occur in the spin sum.)]

We outline the general strategy before presenting the detailed examples. Let $F_I$ and $F_J$ be generating functions for the subamplitudes I and J of the cut amplitude. To evaluate the cut, we must act on the product $F_I F_J$ with the differential operators of all the external states $D^{(4k+8)}_{\text{ext}}$ and of all the internal states $D_1 D_2 \cdots D_{L+1}$. The fourth order differential operators of the internal lines distribute themselves in all possible ways between $F_I$ and $F_J$ and thus automatically carry out the spin sum. In [11] it was shown how to evaluate the 1-loop MHV state sums when the external lines were all gluons. This was done by first acting with the derivative operators of the external lines, and then evaluating the derivatives for the loop states. We generalize the approach here to allow any set of external states of the N = 4 theory. This is done by postponing the evaluation of the external state derivatives, and instead carrying out the internal line Grassmann derivatives by converting them to Grassmann integrations. The result is a generating function $F_{\text{cut}}$ for the cut amplitude. It is defined as
$$F_{\text{cut}} = D_1 D_2 \cdots D_{L+1}\; F_I\, F_J\,. \qquad (5.3)$$
The value of a particular cut amplitude is found by applying the external state differential operators $D^{(4k+8)}_{\text{ext}}$ to $F_{\text{cut}}$.
In the following we derive generating functions for unitarity cuts of 1-, 2-and 3-loop MHV and NMHV amplitudes. Spin sums involving N 2 MHV and N 3 MHV subamplitudes for L = 3, 4 are carried out using anti-MHV and anti-NMHV generating functions in section 6.
1-loop intermediate state sums
5.1.1 1-loop MHV × MHV
Consider the intermediate state sum in a 2-line cut 1-loop amplitude. Let the external states be any N = 4 states such that the full loop amplitude is MHV. By the analysis above, the subamplitudes I and J of the cut loop amplitude must then also be MHV.
We first calculate the intermediate spin sum and then include the appropriate prefactors. The state dependence of an MHV subamplitude is encoded in the δ (8) -factor of the MHV generating function. We will refer to the sum over spin factors as the "spin sum factor" of the cut amplitude. For the present case, the spin sum factor is
$$D_1 D_2\; \delta^{(8)}(I)\, \delta^{(8)}(J) \qquad (5.4)$$
with $D_i = \prod_{a=1}^4 \partial/\partial\eta_{ia}$ being the 4th order derivatives associated with the internal line $l_i$ and
$$I = |l_1\rangle\, \eta_{1a} + |l_2\rangle\, \eta_{2a} + \sum_{\text{ext } i\in I} |i\rangle\, \eta_{ia}\,,\qquad J = -|l_1\rangle\, \eta_{1a} - |l_2\rangle\, \eta_{2a} + \sum_{\text{ext } j\in J} |j\rangle\, \eta_{ja}\,. \qquad (5.5)$$
We proceed by converting the $D_1 D_2$ Grassmann differentiations to integrations. Performing first the integration over $\eta_2$ we find [10]
$$D_1 D_2\; \delta^{(8)}(I)\, \delta^{(8)}(J) = \int d^4\eta_1\, d^4\eta_2\;\; \delta^{(8)}(I)\, \delta^{(8)}(J) = \delta^{(8)}(I+J) \int d^4\eta_1 \prod_{a=1}^4 \Big( \sum_{\text{ext } j\in J} \langle l_2\, j\rangle\, \eta_{ja} - \langle l_2\, l_1\rangle\, \eta_{1a} \Big)\,. \qquad (5.6)$$
The delta-function $\delta^{(8)}(I+J)$ involves only the sum over external states and therefore does not depend on $\eta_1$. The $\eta_1$-integration picks up the $\eta_1$ term only, so we simply get
$$D_1 D_2\; \delta^{(8)}(I)\, \delta^{(8)}(J) = \langle l_1\, l_2\rangle^4\;\; \delta^{(8)}\Big( \sum_{\text{all ext } m} |m\rangle\, \eta_{ma} \Big)\,. \qquad (5.7)$$
If the external states are two negative helicity gluons i and j and the rest are positive helicity gluons, then, no matter where the gluons i and j are placed, we get $\langle l_1 l_2\rangle^4 \langle i\, j\rangle^4$, in agreement with (5.6) of [15] and (4.9) of [11].
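The Grassmann manipulations behind (5.6)-(5.7) are simple enough to verify numerically. The following self-contained Python sketch is our own (the sign conventions for brackets and for the Berezin integrals are ours, so the result is checked only up to an overall convention-dependent sign): it implements a tiny Grassmann algebra, builds the a-th factors of $\delta^{(8)}(I)$ and $\delta^{(8)}(J)$ for random spinors, integrates out the internal η's, and compares with $\langle l_1 l_2\rangle^4\, \delta^{(8)}(\text{ext})$.

# Numeric check (ours) of the spin-sum result (5.7).
import numpy as np
from itertools import combinations

def gmul(f, g):
    """Product of Grassmann elements: dicts mapping generator tuples to
    coefficients; the sign is that of the permutation sorting the merged key."""
    out = {}
    for kf, cf in f.items():
        for kg, cg in g.items():
            if set(kf) & set(kg):
                continue                     # eta^2 = 0
            key, sign = list(kf + kg), 1
            for i in range(len(key)):        # bubble sort, tracking the sign
                for j in range(len(key) - 1):
                    if key[j] > key[j + 1]:
                        key[j], key[j + 1] = key[j + 1], key[j]
                        sign = -sign
            k = tuple(key)
            out[k] = out.get(k, 0) + sign * cf * cg
    return out

def berezin(f, gen):
    """Integrate d eta_gen: keep terms containing gen, moving it to the front."""
    out = {}
    for k, c in f.items():
        if gen in k:
            pos = k.index(gen)
            kk = k[:pos] + k[pos + 1:]
            out[kk] = out.get(kk, 0) + ((-1) ** pos) * c
    return out

def ang(u, v):                               # <uv>, our convention
    return u[0] * v[1] - u[1] * v[0]

def quad(lines, lam, a):                     # the a-th factor of delta^(8)
    return {((i, a), (j, a)): ang(lam[i], lam[j]) for i, j in combinations(lines, 2)}

rng = np.random.default_rng(1)
ext_I, ext_J, loops = ['p1', 'p2'], ['p3', 'p4'], ['l1', 'l2']
lam = {x: rng.normal(size=2) + 1j * rng.normal(size=2) for x in ext_I + ext_J + loops}
lam_J = {x: (-lam[x] if x in loops else lam[x]) for x in lam}   # -|l> on side J

# The a-th factors are Grassmann-even, so the four SU(4) index values can be
# processed independently and multiplied at the end.
lhs = {(): 1.0}
for a in range(4):
    block = gmul(quad(ext_I + loops, lam, a), quad(ext_J + loops, lam_J, a))
    for l in loops:
        block = berezin(block, (l, a))
    lhs = gmul(lhs, block)

rhs = {(): ang(lam['l1'], lam['l2']) ** 4}
for a in range(4):
    rhs = gmul(rhs, quad(ext_I + ext_J, lam, a))
ratios = [lhs.get(k, 0) / c for k, c in rhs.items() if abs(c) > 1e-12]
print(np.allclose(ratios, ratios[0]), ratios[0])   # constant ratio, |ratio| = 1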
Let us now include also the appropriate pre-factors in the generating function. For the MHV subamplitudes these are simply the cyclic products of momentum angle brackets. Collecting the cyclic product of external momenta, we can write the full generating function for the MHV × MHV 1-loop generating function as
$$F^{1\text{-loop}}_{\text{MHV}\times\text{MHV}} = \frac{\langle q, q+1\rangle\, \langle r, r+1\rangle\, \langle l_1\, l_2\rangle^2}{\langle q\, l_1\rangle\, \langle q+1, l_1\rangle\, \langle r\, l_2\rangle\, \langle r+1, l_2\rangle}\;\; \frac{1}{\prod_{\text{ext } i} \langle i, i+1\rangle}\;\; \delta^{(8)}\Big( \sum_{\text{all ext } m} |m\rangle\, \eta_{ma} \Big)\,,$$
or simply,
$$F^{1\text{-loop}}_{\text{MHV}\times\text{MHV}} = \frac{\langle q, q+1\rangle\, \langle r, r+1\rangle\, \langle l_1\, l_2\rangle^2}{\langle q\, l_1\rangle\, \langle q+1, l_1\rangle\, \langle r\, l_2\rangle\, \langle r+1, l_2\rangle}\;\; F^{\text{tree}}_{\text{MHV}}(\text{ext})\,. \qquad (5.8)$$
Note that the state dependence of the cut MHV × MHV amplitude is included entirely in the MHV generating function, and all dependence on the loop momentum is in the prefactor.
Triple cut of NMHV 1-loop amplitude: MHV × MHV × MHV
In this section we evaluate the intermediate state sum for a 1-loop NMHV amplitude with a triple cut, as illustrated in figure 4. The triple cut is different from the cuts considered at the beginning of section 5. Its primary feature is that it gives three subamplitudes which are all MHV. To see this, note that the η-counts of the subamplitudes I, J and K are
$$r_I = \sum_{\text{ext } i\in I} r_i + w_1 + 4 - w_3\,,\qquad r_J = \sum_{\text{ext } j\in J} r_j + w_2 + 4 - w_1\,,\qquad r_K = \sum_{\text{ext } k\in K} r_k + w_3 + 4 - w_2\,, \qquad (5.9)$$
where the $r_i$ are the η-counts of the external states and $w_l$ and $4 - w_l$ are the η-counts at each end of the internal lines. Since the full amplitude is NMHV, we have
$$r_I + r_J + r_K = \sum_{\text{all ext } i} r_i + 12 = 24\,. \qquad (5.10)$$
We now assume that each subamplitude I, J, and K has more than three legs and thus more than one external leg. Then each subamplitude must have η-count at least 8 to be non-vanishing (SU(4) invariance requires a multiple of 4, and amplitudes with η-count 4 vanish), so (5.10) has only one solution, namely $r_I = r_J = r_K = 8$, and each subamplitude is MHV.
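The uniqueness claim is a one-line counting exercise, which the following trivial check (ours) confirms: with each η-count constrained to the values 8, 12, 16, ..., the only triple summing to 24 is (8, 8, 8).

# Brute-force sketch (ours) of the uniqueness of the solution of (5.10).
from itertools import product
sols = [t for t in product(range(8, 25, 4), repeat=3) if sum(t) == 24]
print(sols)    # [(8, 8, 8)]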
Let us again first evaluate the spin sum and include the appropriate prefactors at the end. The spin sum factor is calculated by letting the differential operators of the internal states act on the product of the three MHV generating functions for the subamplitudes. We have
$$f_{\text{triple}} = D_1 D_2 D_3\; \delta^{(8)}(I)\, \delta^{(8)}(J)\, \delta^{(8)}(K)\,, \qquad (5.11)$$
where
$$I = |l_1\rangle\, \eta_{1a} - |l_3\rangle\, \eta_{3a} + \sum_{\text{ext } i\in I} |i\rangle\, \eta_{ia}\,,\qquad J = -|l_1\rangle\, \eta_{1a} + |l_2\rangle\, \eta_{2a} + \sum_{\text{ext } j\in J} |j\rangle\, \eta_{ja}\,, \qquad (5.12)$$
$$K = -|l_2\rangle\, \eta_{2a} + |l_3\rangle\, \eta_{3a} + \sum_{\text{ext } k\in K} |k\rangle\, \eta_{ka}\,.$$
Again we convert the differentiations to integrations, and perform the integrations one at a time to find
$$f_{\text{triple}} = \int d^4\eta_1\, d^4\eta_2\, d^4\eta_3\;\; \delta^{(8)}(I)\, \delta^{(8)}(J)\, \delta^{(8)}(K) = \int d^4\eta_2\, d^4\eta_3\;\; \delta^{(8)}(I+J)\, \delta^{(8)}(K) \prod_{a=1}^4 \Big( \sum_{\text{ext } i\in I} \langle l_1\, i\rangle\, \eta_{ia} - \langle l_1\, l_3\rangle\, \eta_{3a} \Big)$$
$$= \delta^{(8)}(I+J+K) \int d^4\eta_3 \prod_{a=1}^4 \Big( \sum_{\text{ext } i\in I} \langle l_1\, i\rangle\, \eta_{ia} - \langle l_1\, l_3\rangle\, \eta_{3a} \Big) \Big( \sum_{\text{ext } k\in K} \langle l_2\, k\rangle\, \eta_{ka} + \langle l_2\, l_3\rangle\, \eta_{3a} \Big)$$
$$= \delta^{(8)}\Big( \sum_{\text{all ext } m} |m\rangle\, \eta_{mb} \Big) \prod_{a=1}^4 \Big( -\langle l_3\, l_1\rangle \sum_{\text{ext } k\in K} \langle l_2\, k\rangle\, \eta_{ka} + \langle l_2\, l_3\rangle \sum_{\text{ext } i\in I} \langle l_1\, i\rangle\, \eta_{ia} \Big)\,. \qquad (5.13)$$
This is the generating function for the spin sum factor of the triple cut. If the external particles are all gluons with three negative helicity gluons $i'$, $j'$, $k'$ distributed on the cut with $i'\in I$, $j'\in J$, and $k'\in K$, then the triple cut spin sum factor is
$$D_{i'} D_{j'} D_{k'}\, f_{\text{triple}} = D_{i'} D_{k'} \prod_{a=1}^4 \Big( \langle j'\, i'\rangle\, \eta_{i'a} + \langle j'\, k'\rangle\, \eta_{k'a} + \ldots \Big) \Big( -\langle l_3\, l_1\rangle \langle l_2\, k'\rangle\, \eta_{k'a} + \langle l_2\, l_3\rangle \langle l_1\, i'\rangle\, \eta_{i'a} + \ldots \Big)$$
$$= \Big( \langle j'\, i'\rangle \langle l_1\, l_3\rangle \langle l_2\, k'\rangle - \langle j'\, k'\rangle \langle l_2\, l_3\rangle \langle l_1\, i'\rangle \Big)^4 = \Big( \langle l_1\, j'\rangle \langle l_3\, i'\rangle \langle k'\, l_2\rangle - \langle l_2\, j'\rangle \langle l_3\, k'\rangle \langle l_1\, i'\rangle \Big)^4\,. \qquad (5.15)$$
This agrees with (4.23) of [16] and (4.13) of [11].
To complete the calculation, we must include the appropriate prefactors. The full MHV × MHV × MHV triple cut 1-loop generating function is then
$$F^{1\text{-loop NMHV triple cut}}_{\text{MHV}^3} = \frac{\langle p, p+1\rangle \langle q, q+1\rangle \langle r, r+1\rangle}{\langle p\, l_1\rangle \langle l_1, p+1\rangle\, \langle q\, l_2\rangle \langle l_2, q+1\rangle\, \langle r\, l_3\rangle \langle l_3, r+1\rangle} \times F^{\text{tree}}_{\text{MHV}}(\text{ext}) \times \prod_{a=1}^4 \Big( -\langle l_3\, l_1\rangle \sum_{\text{ext } k\in K} \langle l_2\, k\rangle\, \eta_{ka} + \langle l_2\, l_3\rangle \sum_{\text{ext } i\in I} \langle l_1\, i\rangle\, \eta_{ia} \Big)\,. \qquad (5.16)$$
It is interesting to note that the structure of $F^{\text{tree}}_{\text{MHV}} \times \prod_a(\cdots)$ is very similar to the NMHV generating function for an MHV vertex diagram.
MHV 2-loop state sum with NMHV × MHV
As a first illustration of the application of the NMHV generating function, we calculate the intermediate state sum of a 3-line cut 2-loop MHV amplitude. The state sum splits into two separate cases NMHV × MHV and MHV × NMHV (see section 5). It suffices to derive an expression for the generating function of the NMHV × MHV state sum; from that the MHV × NMHV sum is easily obtained by relabeling momenta.
We express the NMHV subamplitude I in terms of its MHV vertex expansion. We denote by I L each MHV vertex diagram in the expansion, and we also let I L and I R label the Left and Right MHV subamplitudes of the diagram. For each MHV vertex diagram I L ⊂ I we compute the spin sum factor
$$f_{I_L} = D_1 D_2 D_3\;\; \delta^{(8)}(I) \prod_{a=1}^4 \sum_{i\in I_L} \langle i\, P_{I_L}\rangle\, \eta_{ia}\;\; \delta^{(8)}(J)\,. \qquad (5.17)$$
The prefactors of the generating functions will be included later when we sum the contributions of all the diagrams. We are free to define the left MHV subamplitude $I_L$ to be the one containing either one or none of the loop momenta. For definiteness, let us denote the loop momentum contained in $I_L$ by $l_\alpha$, the others by $l_\beta$, $l_\gamma$. (If $I_L$ does not contain any loop momentum, this assignment is arbitrary.)
(Note that using the overall $\delta^{(8)}$ and the Schouten identity, the factor in (5.13) can be rearranged cyclically as
$$-\langle l_3 l_1\rangle \sum_{\text{ext } k\in K} \langle l_2\, k\rangle\, \eta_{ka} + \langle l_2 l_3\rangle \sum_{\text{ext } i\in I} \langle l_1\, i\rangle\, \eta_{ia} = -\langle l_1 l_2\rangle \sum_{\text{ext } i\in I} \langle l_3\, i\rangle\, \eta_{ia} + \langle l_3 l_1\rangle \sum_{\text{ext } j\in J} \langle l_2\, j\rangle\, \eta_{ja} = -\langle l_2 l_3\rangle \sum_{\text{ext } j\in J} \langle l_1\, j\rangle\, \eta_{ja} + \langle l_1 l_2\rangle \sum_{\text{ext } k\in K} \langle l_3\, k\rangle\, \eta_{ka}\,.) \qquad (5.14)$$
Since $l_\beta, l_\gamma \notin I_L$ we get
$$f_{I_L} = \int d^4\eta_\alpha\, d^4\eta_\beta\, d^4\eta_\gamma\;\; \delta^{(8)}(I)\, \delta^{(8)}(J) \prod_{a=1}^4 \sum_{i\in I_L} \langle i\, P_{I_L}\rangle\, \eta_{ia} = \delta^{(8)}(I+J) \int d^4\eta_\alpha\, d^4\eta_\beta \prod_{a=1}^4 \Big( \sum_{i'\in I} \langle i'\, \gamma\rangle\, \eta_{i'a} \Big) \Big( \sum_{i\in I_L} \langle i\, P_{I_L}\rangle\, \eta_{ia} \Big)$$
$$= \delta^{(8)}(I+J) \int d^4\eta_\alpha\, d^4\eta_\beta \prod_{a=1}^4 \Big( \langle\gamma\alpha\rangle\, \eta_{\alpha a} + \langle\gamma\beta\rangle\, \eta_{\beta a} + \ldots \Big) \Big( \sum_{\text{ext } i\in I_L} \langle i\, P_{I_L}\rangle\, \eta_{ia} + \delta_{l_\alpha\in I_L}\, \langle\alpha\, P_{I_L}\rangle\, \eta_{\alpha a} \Big) = \delta_{l_\alpha\in I_L}\;\; \langle\beta\gamma\rangle^4\, \langle\alpha\, P_{I_L}\rangle^4\;\; \delta^{(8)}\Big( \sum_{\text{all ext } k} |k\rangle\, \eta_{ka} \Big)\,, \qquad (5.18)$$
where $\langle\gamma\alpha\rangle$ etc. are shorthand for $\langle l_\gamma\, l_\alpha\rangle$.
We have introduced a Kronecker delta $\delta_{l_\alpha\in I_L}$ which is 1 if $l_\alpha \in I_L$ and zero otherwise. If $l_\alpha \notin I_L$, then none of the internal momenta connect to $I_L$. The calculation shows that such "1-particle reducible" diagrams do not contribute to the spin sum. This is a common feature of all spin sums we have done.
Including now the prefactors and summing over all MHV vertex diagrams I L ⊂ I, the generating function for the cut 2-loop amplitude is
$$F^{2\text{-loop, }n\text{-pt}}_{\text{NMHV}\times\text{MHV}} = \frac{1}{\prod_{j\in J} \langle j, j+1\rangle} \sum_{I_L\subset I} W_{I_L}\; \frac{1}{\prod_{i\in I} \langle i, i+1\rangle}\; f_{I_L}\,, \qquad (5.19)$$
where W I L is the prefactor (4.9). Separating out the dependence on the external states into an overall factor F tree MHV (ext), we get
$$F^{2\text{-loop, }n\text{-pt}}_{\text{NMHV}\times\text{MHV}} = F^{\text{tree}}_{\text{MHV}}(\text{ext})\;\; \frac{\langle q, q+1\rangle\, \langle r, r+1\rangle}{\langle q, l_1\rangle \langle l_1, q+1\rangle\, \langle r, l_3\rangle \langle l_3, r+1\rangle\, \langle l_1 l_2\rangle^2 \langle l_2 l_3\rangle^2} \sum_{I_L\subset I} W_{I_L}\, (\text{S.F.})_{I_L} \qquad (5.20)$$
with
$$(\text{S.F.})_{I_L} = \langle\beta\gamma\rangle^4\, \langle\alpha\, P_{I_L}\rangle^4\;\; \delta_{l_\alpha\in I_L;\; l_\beta, l_\gamma\notin I_L}\,. \qquad (5.21)$$
Each term in the sum over $I_L \subset I$ depends on the reference spinor |X] through the prescription $|P_{I_L}\rangle = P_{I_L}|X]$, but the sum of all diagrams must be |X]-independent.
Example: 3-line cut of 4-point 2-loop amplitude
Let the external states be A, B, C, D, with A, B on the subamplitude I and C, D on J, as shown in figure 5 with L = 2. The subamplitude I of the cut is a 5-point NMHV amplitude. Its MHV vertex expansion has five diagrams (I L |I R ), which we list with their spin sum factors:
$$(A, B\,|\,l_1, l_2, l_3) \leftrightarrow 0\,,\qquad (B, l_1\,|\,l_2, l_3, A)\,,\;\; (A, B, l_1\,|\,l_2, l_3) \leftrightarrow \langle l_2\, l_3\rangle^4\, \langle l_1\, P_{I_L}\rangle^4\,, \qquad (5.22)$$
$$(l_3, A\,|\,B, l_1, l_2)\,,\;\; (l_3, A, B\,|\,l_1, l_2) \leftrightarrow \langle l_1\, l_2\rangle^4\, \langle l_3\, P_{I_L}\rangle^4\,.$$
We have checked numerically that the sum I L ⊂I W I L (S.F.) I L is independent of the reference spinor |X].
As a further check, let us assume that the two particles C and D are negative helicity gluons while the two particles A and B are positive helicity gluons. With the assumption that the cut is NMHV × MHV, there is only one choice for the internal particles: they have to be gluons, negative helicity coming out of the subamplitude I and thus positive helicity on J. So the spin sum only has one term, namely
$$A_5\big( A^+, B^+, l_1^-, l_2^-, l_3^- \big)\; A_5\big( C^-, D^-, -l_3^+, -l_2^+, -l_1^+ \big) = \frac{[A\, B]^3}{[B\, l_1][l_1 l_2][l_2 l_3][l_3 A]}\;\; \frac{\langle C\, D\rangle^3}{\langle D\, l_3\rangle \langle l_3 l_2\rangle \langle l_2 l_1\rangle \langle l_1 C\rangle}\,. \qquad (5.23)$$
This should be compared with the result of the spin sum (5.20) with the appropriate spin state dependence from $F^{\text{tree}}_{\text{MHV}}(\text{ext})$. We have checked numerically that the results agree. We can use this result to replace the spin sum over $I_L \subset I$ in (5.20) by the anti-MHV × MHV factor (5.23) and then write the full 4-point generating function in the simpler form
$$F^{2\text{-loop, 4-pt}}_{\text{NMHV}\times\text{MHV}} = \frac{[A\, B]^3\; \langle D\, A\rangle \langle A\, B\rangle \langle B\, C\rangle}{\langle C|l_1|B]\;\; \langle D|l_3|A]\;\; P^2_{l_1 l_2}\, P^2_{l_2 l_3}}\;\; F^{\text{tree}}_{\text{MHV}}(\text{ext})\,. \qquad (5.24)$$

MHV 3-loop state sum with NMHV × NMHV

For each MHV vertex diagram of the subamplitudes I and J, there is a freedom in choosing which MHV vertex we call "left". This always allows us to choose $I_L$ and $J_L$ such that neither contains the internal momentum line $l_4$. This is a convenient choice for performing the $\eta_4$ integration first and then evaluating the three other η-integrations. The spin sum factor for a product of MHV vertex diagrams $I_L$ and $J_L$ with $l_4 \notin I_L \cup J_L$ is then
$$D_1 D_2 D_3 D_4\; F^{\text{tree-diagram}}_{\text{NMHV}}(I_L)\, F^{\text{tree-diagram}}_{\text{NMHV}}(J_L) = D_1 D_2 D_3 \int d^4\eta_4\;\; \delta^{(8)}(I) \prod_{a=1}^4 \sum_{i\in I_L} \langle i\, P_{I_L}\rangle\, \eta_{ia}\;\; \delta^{(8)}(J) \prod_{b=1}^4 \sum_{j\in J_L} \langle j\, P_{J_L}\rangle\, \eta_{jb}$$
$$= \delta^{(8)}(I+J)\; D_1 D_2 D_3 \prod_{a=1}^4 \Big( \langle l_1 l_4\rangle \eta_{1a} + \langle l_2 l_4\rangle \eta_{2a} + \langle l_3 l_4\rangle \eta_{3a} + \ldots \Big) \Big( \delta_{l_1\in I_L} \langle l_1 P_{I_L}\rangle \eta_{1a} + \delta_{l_2\in I_L} \langle l_2 P_{I_L}\rangle \eta_{2a} + \delta_{l_3\in I_L} \langle l_3 P_{I_L}\rangle \eta_{3a} + \ldots \Big) \Big( \delta_{l_1\in J_L} \langle l_1 P_{J_L}\rangle \eta_{1a} + \delta_{l_2\in J_L} \langle l_2 P_{J_L}\rangle \eta_{2a} + \delta_{l_3\in J_L} \langle l_3 P_{J_L}\rangle \eta_{3a} + \ldots \Big)$$
$$= \delta^{(8)}(I+J)\;\; (\text{s.s.f.})_{I_L, J_L}\,, \qquad (5.25)$$
where
$$(\text{s.s.f.})_{I_L, J_L} = \det \begin{pmatrix} \langle l_1 l_4\rangle & \langle l_2 l_4\rangle & \langle l_3 l_4\rangle \\ \delta_{l_1\in I_L} \langle l_1 P_{I_L}\rangle & \delta_{l_2\in I_L} \langle l_2 P_{I_L}\rangle & \delta_{l_3\in I_L} \langle l_3 P_{I_L}\rangle \\ \delta_{l_1\in J_L} \langle l_1 P_{J_L}\rangle & \delta_{l_2\in J_L} \langle l_2 P_{J_L}\rangle & \delta_{l_3\in J_L} \langle l_3 P_{J_L}\rangle \end{pmatrix}^{\!4}\,. \qquad (5.26)$$
We must sum over all diagrams including the appropriate prefactors. There are $W_{I_L}$ and $W_{J_L}$ factors (4.9) from the two MHV vertex expansions, as well as cyclic products. With momentum labels q, q+1 etc. as in figure 3 we can write the NMHV × NMHV part of the full 4-line cut 3-loop MHV amplitude as
$$F^{3\text{-loop}}_{\text{NMHV}\times\text{NMHV}} = \frac{1}{\prod_{i\in I} \langle i, i+1\rangle \prod_{j\in J} \langle j, j+1\rangle} \sum_{I_L\subset I} \sum_{J_L\subset J} W_{I_L}\, W_{J_L}\;\; \delta^{(8)}(I+J)\;\; (\text{s.s.f.})_{I_L, J_L}\,. \qquad (5.27)$$
Finally, it may be noted that if $l_1, l_2, l_3 \in I_L \cap J_L$, then the determinant (5.26) vanishes thanks to the Schouten identity. These observations are general and apply for any number of external legs to reduce the number of terms contributing in the sum over all products of MHV vertex diagrams. For the case of 4 external momenta, the number of contributing diagrams is thus reduced from $9^2 = 81$ to $8^2 - 4 - 4 = 56$.
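The Schouten argument for the vanishing of (5.26) can be confirmed numerically (our own sketch): when $l_1, l_2, l_3 \in I_L \cap J_L$, all entries have the form $\langle l_i\, x_j\rangle$ for only three 2-component spinors $x_j$, so the 3x3 matrix has rank at most 2 and vanishing determinant.

# Quick rank check (ours) of the determinant in (5.26).
import numpy as np

def ang(u, v):                      # <uv>, our convention
    return u[0]*v[1] - u[1]*v[0]

rng = np.random.default_rng(3)
l = rng.normal(size=(3, 2)) + 1j*rng.normal(size=(3, 2))   # |l_1>, |l_2>, |l_3>
x = rng.normal(size=(3, 2)) + 1j*rng.normal(size=(3, 2))   # |l_4>, |P_IL>, |P_JL>
M = np.array([[ang(li, xj) for li in l] for xj in x])
print(abs(np.linalg.det(M)))        # ~1e-16: rank <= 2, determinant vanishes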
We have verified numerically for the 4-point amplitude that the sum of all diagrams is independent of both reference spinors.
6 Anti-MHV and anti-NMHV generating functions and spin sums
In the previous section we have evaluated spin sums for unitarity cuts which involved MHV and NMHV subamplitudes. (By conjugation, these results apply quite directly to cuts only involving anti-MHV and anti-NMHV subamplitudes.) However, the table in Fig. 3 shows that this is not enough. The unitarity cut at loop order L = 2, 3 includes the product of MHV and N^2MHV amplitudes, and N^3MHV is needed at 4-loop order. Our method would then require the generating functions for N^2MHV and N^3MHV amplitudes (see [12] for their construction). However, the situation is also workable if these amplitudes have a small number of external legs: an N^kMHV amplitude with n = k + 4 external lines is anti-MHV and one with n = k + 5 lines is anti-NMHV, and for these we now construct generating functions.
Anti-generating functions
An N k MHV amplitude has external states whose η-counts r i add up to a total of 4(k + 2). The total η-count is matched in the generating function, which must be a sum of monomials of degree 4(k + 2) in the variables η ia . The states of the conjugate anti-N k MHV amplitude have η-counts 4 − r i , so the total η-count for n-point amplitudes is 4(n − (k + 2)). Thus anti-N k MHV generating functions must contain monomials of degree 4(n − (k + 2)). For example, the 3-point anti-MHV generating function has degree 4, and the 6-point anti-MHV and 7-point anti-NMHV cases both have degree 16.
The η-count requirement is nicely realized if we define the anti-N^kMHV generating function as [17] the Grassmann Fourier transform of the conjugate of the corresponding N^kMHV generating function. Given a set of N Grassmann variables $\theta_I$ and their formal adjoints $\bar\theta_I$, the Fourier transform of any function $f(\bar\theta_I)$ is defined as
$$\hat f(\theta_I) \equiv \int d^N\bar\theta\;\; \exp\big( \theta_I\, \bar\theta_I \big)\; f(\bar\theta_I)\,. \qquad (6.1)$$
Any function of the $\bar\theta_I$ is a sum of monomials of degree $M \le N$, e.g. $\bar\theta_J \cdots \bar\theta_K$, which can be "pulled out" of the integral and expressed as derivatives, viz.
$$\int d^N\bar\theta\; \exp(\theta_I\bar\theta_I)\;\; \bar\theta_J \cdots \bar\theta_K = (-)^N\, \frac{\partial}{\partial\theta_J} \cdots \frac{\partial}{\partial\theta_K} \int d^N\bar\theta\; \exp(\theta_I\bar\theta_I) = \frac{\partial}{\partial\theta_J} \cdots \frac{\partial}{\partial\theta_K} \prod_{I=1}^N \theta_I = \frac{1}{(N-M)!}\; \epsilon_{J\ldots K\, I_{M+1}\ldots I_N}\;\; \theta_{I_{M+1}} \cdots \theta_{I_N}\,. \qquad (6.2)$$
The procedure to convert an N^kMHV n-point generating function into an anti-N^kMHV generating function uses conjugation followed by the Grassmann Fourier transform. The conjugate of any function $f(\langle ij\rangle, [kl], \eta_{ia})$ is defined as $\bar f([ji], \langle lk\rangle, \bar\eta^a_i)$, including reverse order of Grassmann monomials. Evaluation of the Fourier transform
$$\hat f \equiv \int \prod_{i,a} d\bar\eta^a_i\;\; \exp\Big( \sum_{b,j} \eta_{jb}\, \bar\eta^b_j \Big)\;\; \bar f\big( [ji], \langle lk\rangle, \bar\eta^a_i \big)\,, \qquad (6.3)$$
is then equivalent to the following general prescription:
1. Interchange all angle and square brackets: $\langle ij\rangle \leftrightarrow [ji]$.
2. Replace $\eta_{ia} \to \partial^a_i = \partial/\partial\eta_{ia}$.
3. Multiply the resulting expression by $\prod_{a=1}^4 \prod_{i=1}^n \eta_{ia}$ from the right.
We first apply this to find an anti-MHV generating function. We will confirm that the result is correct by showing that it solves the SUSY Ward identities. We will then apply the prescription to find an anti-NMHV generating function.
Anti-MHV generating function
Applied to the conjugate of the MHV generating function (4.1) (with (4.2)), the prescription gives the anti-MHV generating function 9
F n = 1 n i=1 [i, i + 1] 1 2 4 4 a=1 n i,j=1 [ij] ∂ a i ∂ a j η 1a · · · η na . (6.4)
Evaluating the derivatives as in (6.2) we can write this as
F n = 1 n i=1 [i, i + 1] 1 (2 (n − 2)!) 4 4 a=1 k1,...,kn k1k2···kn [k 1 k 2 ] η k3a · · · η kna (6.5)
The sum is over all external momenta k i ∈ {1, 2, . . . , n}.
To confirm that (6.5) is correct we show thatF n obeys the SUSY Ward identities and produces the correct all-gluon anti-MHV amplitude A n (1 + 2 + 3 − . . . n − ). This is sufficient because the SUSY Ward identities have a unique solution [11] in the MHV or anti-MHV sectors. Any (anti-)MHV amplitudes can be uniquely written as a spin factor times an n-gluon amplitude. The desired n-gluon amplitude is obtained by applying the product D 3 . . . D n of 4th order operators of (2.8) for the (n − 2) negative gluons to the generating function (6.5). It is easy to obtain the expected result
D 3 · · · D nFn = 1 n i=1 [i, i + 1] 1 (2 (n − 2)!) 4 4 a=1 k1,k2 (n − 2)! k1k234...n [k 1 k 2 ] = [12] 4 n i=1 [i, i + 1] . (6.6)
The supercharges which act on generating functions are [11]
Q a = n i=1 [i| ∂ a i ,Q a = n i=1 |i η ia . (6.7)
Ward identities are satisfied if Q a andQ a annihilateF n . Formally this requirement is satisfied by the Grassmann Fourier transform, but we find the following direct proof instructive. We compute
Q aF n ∝ k1,k2,i,k4,...,kn [k 1 k 2 ][i| k1k2ik4···kn η k4a · · · η kna = 0 (6.8)
by the Schouten identity. The argument forQ a is slightly more involved. First writẽ
Q aFn ∝ i,k1,...,kn |i [k 1 k 2 ] k1k2k3k4···kn η ia η k3a · · · η kna . (6.9)
Note that the product of η's is nonvanishing only when i / ∈ {k 3 , . . . , k n }, i.e. when i is k 1 or k 2 . Thus
Q aFn ∝ −2 k1,...,kn |k 2 [k 2 k 1 ] k1k2k3k4···kn η k2a η k3a · · · η kna = −2(n − 2)! n k1=1 i =k1 η ia n k2=1 |k 2 [k 2 k 1 ] = 0 ,(6.10)
due to momentum conservation. For given k 1 the product of (n − 1) factors of η lia 's with l i = k 1 is the same for all choices of k 2 and it was therefore taken out of the sum over k 2 . This completes the proof that (6.5) produces all n-point anti-MHV amplitudes correctly.
For n = 3, the generating function (6.5) reduces to the anti-MHV 3-point amplitudes recently presented in [18].
An alternative form of the anti-MHV generating function can be given for n ≥ 4. It is more convenient for calculations because it contains the usual δ (8) ( n i=1 |i η ia ) as a factor. The second factor requires the selection of two special lines, here chosen to be 1 and 2. The alternate form reads
F n = 1 12 4 n i=1 [i, i + 1] 1 (2(n − 4)!) 4 δ (8) ( n i=1 |i η ia ) 4 a=1 k3,...,kn 12k3···kn [k 3 k 4 ] η k5a · · · η kna . (6.12)
Arguments very similar to the ones above show that (6.12) satisfies the Ward identities and produces the correct gluon amplitude A n (1 − 2 − 3 + 4 + 5 − . . . n − ). Since these requirements have a unique realization, the two forms (6.5) and (6.12) must coincide.
For n = 4, 5 the anti-MHV generating function (6.12) reduces to the "superamplitudes" recently presented in [17]. It is worth noting that for n = 4, any MHV amplitude is also anti-MHV; using momentum conservation it can explicitly be seen that the anti-MHV generating function (6.5), or in the form (6.12), is equal to the MHV generating function for n = 4.
Anti-NMHV generating function
Any anti-NMHV n-point amplitude I of the N = 4 theory has an anti-MHV vertex expansion, which is justified by the validity of the MHV vertex expansion of the conjugate NMHV amplitude. For each diagram of the expansion we use the conjugate of the CSW prescription, namely
|P I L ] = P I L |X . (6.13)
This involves a reference spinor |X . The sum of all diagrams is independent of |X .
We will obtain the anti-NMHV generating function by applying the prescription above to the conjugate of the NMHV generator (4.14). This prescription directly gives
F n,I L = 1 n i=1 [i, i + 1] W I L 1 2 4 4 a=1 i,j∈I k∈I L [ij][P I L k] ∂ a i ∂ a j ∂ a k η 1a · · · η na ,(6.14)
where W I L is obtained from W I L in (4.9) by exchanging angle and square brackets.
Carrying out the differentiations and relabeling summation indices gives the desired result: (6.16) It is not trivial to show that (6.16) follows from (6.15). We present the proof in [12].
F n,I L = 1 n i=1 [i, i + 1] W I L 1 (2 (n − 3)!) 4 4 a=1 k1∈I L k2,...,kn∈I [P I L k 1 ][k 2 k 3 ] k1k2...kn η k4a · · · η kna .
In analogy with Sec. 4.3 the universal anti-NMHV generating function is the sum One check that this result is correct is to show that the SUSY charges of (6.7) annihilateF n . This check can be carried out, but it is not a complete test that (6.17) is correct because the SUSY Ward identities do not have a unique solution in the NMHV or anti-NMHV sectors. See [19] or [11]. For this reason we show in Appendix B that (6.14) is obtained for any diagram starting from the product of anti-MHV generating functions for the left and right subamplitudes. Essentially we obtain (6.14) by the complex conjugate of the process which led from (4.13) to (4.14). It then follows that the application of external line derivatives (of total order 4(n − 3)) to (6.17) produces the correct "anti-MHV vertex expansion" of the corresponding anti-NMHV amplitude.
Anti-generating functions in intermediate spin sums
With the anti-MHV and anti-NMHV generating functions we can complete the unitarity sums for 3and 4-loop 4-point amplitudes.
L-loop anti-MHV × MHV spin sum
Consider the (L + 1)-line unitarity cut of an L-loop MHV amplitude, as in figure 3. The intermediate spin sum will include a sector where one subamplitude is N L−1 MHV and the other is MHV. We assume that the full amplitude has a total of 4 external legs, with 2 on each side of the cut, as in figure 5, so the tree subamplitudes have L + 3 legs. Then 10 the N L−1 MHV subamplitude is anti-MHV and we can apply our anti-MHV generating function to obtain the spin sum.
The spin sum is
F L-loop MHV 4-point anti-MHV × MHV = D 1 · · · D L+1FL+3 (I) F L+3 (J) = D 1 · · · D L+1 1 i∈I [i, i + 1] δ (8) (I) 1 j∈J j, j + 1 δ (8) (J) , (6.18) where δ (8) (I) ≡ 1 l 1 l 2 4 1 (2(L − 1)!) 4 δ (8) (I) 4 a=1 k3,...,k L+3 [k 3 k 4 ] l1l2k3···k L+3 η k5a · · · η k L+3 a (6.19)
is obtained from the Fourier transform; we use the form (6.12) for the anti-MHV generating function, selecting the loop momenta l 1 and l 2 as the two special lines. Then, focusing on the spin sum factor only, we have s.s.f. = D 1 · · · D L+1 δ (8) (I) δ (8) (J) . (6.20)
Converting the η 1 differentiation to integration we find s.s.f. = δ (8)
(I + J) D 2 · · · D L+1 1 l 1 l 2 4 1 (2(L − 1)!) 4 4 a=1 · · · + l 1 l 2 η 2a + . . . × k3,...,k L+3 [k 3 k 4 ] l1l2k3···k L+3 η k5a · · · η k L+3 a = δ (8) (ext) D 3 · · · D L+1 1 (2(L − 1)!) 4 4 a=1 k3,...,k L+3 [k 3 k 4 ] l1l2k3···k L+3 η k5a · · · η k L+3 a = [AB] 4 δ (8) (ext) .(6.
21)
A and B are the external legs on the subamplitude I, c.f. figure 5.
As a simple check that this result is correct, let the legs A and B be positive helicity gluons and take the two other external legs C and D to be negative helicity gluons. Then there is only one term in the spin sum, namely Rewriting the prefactors to separate the dependence on the loop momenta, the full result for the L-loop (L + 1)-line cut MHV generating function is then simply
A L+3 (A + , B + , l − 1 , . . . , l − L+1 ) A L+3 (C − , D − , −l + L+1 , . . . , −l + 1 ) ,(6.F L-loop MHV 4-point anti-MHV × MHV = [AB] 4 [AB][B|l 1 |C CD D|l L+1 |A] L i=1 P 2 i,i+1 −1 δ (8) (ext) . (6.23)
The result (6.23) of an L-loop calculation is strikingly simple, yet it counts the contributions of states of total η-count 0 ≤ r ≤ 8 distributed arbitrarily on the L + 1 internal lines in Fig. 5.
1-loop triple cut spin sum with anti-MHV × MHV × MHV
Consider the triple cut of a 1-loop amplitude. In section 5.1.2 we evaluated a triple cut spin sum assuming that the amplitude was overall NMHV, such that the three subamplitudes were MHV. We now consider the case where the amplitude is overall MHV. The η-count then tells us that r I +r J +r K = 8 + 4 × 3 = 20. At least one of the subamplitudes has to be anti-MHV with η-count 4. Thus let us assume I to be anti-MHV and J and K MHV. The result is non-vanishing only if I is a 3-point amplitude, i.e. it has only one external leg, which we will label A. This is illustrated in figure 6.
We evaluate the spin sum using the anti-MHV and MHV generating functions. The expression for the spin sum factor requires some manipulation using momentum conservation, but the final result is simple:
F 1-loop MHV triple cut anti-MHV × MHV × MHV = rA A, p + 1 q, q + 1 [l 1 A] 4 l 1 l 2 4 [Al 1 ][l 1 l 3 ][l 3 A] l 1 , p + 1 ql 2 l 2 l 1 rl 3 l 3 l 2 l 2 , q + 1 F tree MHV .
4-loop anti-NMHV × NMHV spin sum
The 5-line cut of the 4-point 4-loop amplitude includes an N 3 MHV × NMHV sector in its unitarity sum. We use notation as in figure 5. The tree subamplitudes are in this case 7-point functions and N 3 MHV is therefore the same as anti-NMHV. We evaluate the spin sum using the NMHV and anti-NMHV generating functions.
F 4-loop MHV 5-line cut anti-NMHV7× NMHV7 = δ (8) (ext) I L ,J L W I L W J L (s.s.f) I L ,J L l 1 l 2 4 i∈I [i, i + 1] j∈J j, j + 1 (6.26) where (s.s.f) I L ,J L = 5 j=3 δ lj ∈J L l j P J L l 1 l 2 + δ l1∈J L l 1 P J L l 2 l j + δ l2∈J L l 2 P J L l j l 1 × δ lj ∈I L [l j P I L ][A B] + δ A∈I L [A P I L ][B l j ] + δ B∈I L [B P I L ][l j A] 4 (6.27)
The sum I L ,J L is over all 13 anti-MHV and MHV vertex diagrams in the expansions of the subamplitudes I and J. We have checked numerically that the cut amplitude generating function is independent of the two reference spinors |X I and |X J ] from the CSW prescription of |P I L ] and |P J L .
The complete spin sum for this cut of the 4-loop 4-point amplitude contains the four contributions listed in the table in figure 3. The anti-MHV × MHV contribution is obtained as the L = 4 case of (6.23) and we have here presented the result for the anti-NMHV × NMHV spin sum. The MHV × anti-MHV and NMHV × anti-NMHV contributions are obtained directly from these results.
Other cuts of the 4-loop 4-point amplitude
The full 4-loop calculation requires the study of unitarity cuts in which a 6-point subamplitude appears with all 6 lines internal and cut. The simplest case is that of a 4-point function, hence overall MHV, which can be expressed as the product ( (6.28)
We have carried out each of these spin sums explicitly. The first two cases are related to each other by conjugation (including conjugation of the external states). The last two are related by interchanging I and K and relabeling the internal momenta accordingly. The 6-point NMHV amplitude can also be regarded as anti-NMHV. We have calculated the spin sums in both ways, using the NMHV and anti-NMHV generating function for J. Different diagrams contribute in these calculations, but numerically the results agree (and they are independent of the reference spinors). with i = j. We will show that for any amplitude A n with n > 4, we can find a valid shift [i, j such that the amplitude vanishes at least as fast as 1/z for large z. This implies that there is a valid BCFW recursion relation for any tree amplitude in N = 4 SYM.
The strategy of our proof is the following. In [7] it was shown that amplitudes A n vanish at large z under a shift [i − , j if line i is a negative helicity gluon. We extend this result using supersymmetric Ward identities and show that amplitudes vanish at large z under any shift [i, j in which the SU (4) indices carried by the particle on line j are a subset of the SU (4) indices of particle i. We then show that such a choice of lines i and j exists for all non-vanishing amplitudes A n with n ≥ 4, except for some pure scalar 4-and 6-point amplitudes. The 4-point amplitude is MHV hence determined by the SUSY Ward identities. We then analyze the scalar 6-point amplitudes explicitly and find that there exist valid shifts [i, j under which they vanish at large z. Let r i be the number of SU (4) indices carried by line i. We will show (7.4) by (finite, downward) induction on r i . For r i = 4 particle i is a negative helicity gluon and the statement (7.4) reduces to (7.3) which was proven in [7]. Assume now that (7.4) is true for all amplitudes with r i =r for some 1 ≤r ≤ 4. Consider any amplitude A n which has r i =r − 1 < 4 and in which the SU (4) indices of particle j are a subset of the indices of particle i. We write this amplitude as a correlation function
Large z behavior from Ward identities
A n (1 . . . i . . . j . . . n) = O(1) . . . O(i) . . . O(j) . . . O(n) . (7.5)
Pick an SU (4) index a which is not carried by line i. Such an index exists because r i < 4. There exists an operator O a (i) such that
[Q a , O a (i)] = i O(i) . (no sum) (7.6)
By assumption, the SU (4) index a is also not carried by line j, so [Q a , O(j)] = 0. We can now write a Ward identity based on the index a as follows:
0 = Q a , O(1) . . . O a (i) . . . O(j) . . . O(n) = i O(1) . . . O(i) . . . O(j) . . . O(n) + Q a , O(1) . . . O a (i) . . . O(j) . . . O(n) + O(1) . . . O a (i) Q a , . . . O(j) . . . O(n) + O(1) . . . O a (i) . . . O(j) Q a , . . . O(n) . (7.7)
Let us choose | ∼ | = |i , |j . Then the first term on the right hand side is the original amplitude (7.5), multiplied by a non-vanishing factor i , which does not shift under the [i, j -shift. The remaining three terms on the right hand side of (7.7) all involve the operators O a (i) and O(j). The number of SU (4) indices carried by O a (i) is r i +1 =r, and therefore, by the inductive assumption, each of the remaining amplitudes fall off at least as fast as 1/z under the [i, j -shift. They are multiplied by angle brackets of the form k with k = i, j. These angle brackets do not shift. Thus the last three terms on the right side of (7.7) go as 1/z or better for large z. We conclude that the amplitude A n (1 . . . i . . . j . . . n) also vanishes at least as 1/z for large z under the [i, j -shift. This completes the inductive step and proves (7.4).
Our result implies, in particular, that any shift [i, j + gives a 1/z falloff for any state i. This is because a positive helicity gluon j + carries no SU (4) indices, and the empty set is a subset of any set.
Existence of a valid 2-line shift for any amplitude
We have proven the existence of a valid recursion relation for any amplitude which admits a shift of the type (7.4). Let us examine for which amplitudes such a shift is possible. In other words, we study which amplitudes contain two lines i and j such that the SU (4) indices carried by line j are a subset of the indices carried by line i. For n-point functions with n ≥ 4 we find:
• Any amplitude which contains one or more gluons admits a valid shift
If the amplitude contains a negative helicity gluon we pick this particle as line i. On the other hand, if the amplitude contains a positive helicity gluon we pick the positive helicity gluon as line j. Independent of the choice of particle for the other shifted line, (7.4) guarantees that the amplitude vanishes for large z under the shift [i, j .
• Any amplitude with one or more positive helicity gluinos admits a valid shift We pick the positive helicity gluino as line j. Denote the SU (4) index carried by this gluino by a. If no other line carries this index a, the amplitude vanishes. Thus in a non-vanishing amplitude there must be at least one other line i = j which carries the index a. As line j does not carry indices other than a, we can apply (7.4) and conclude that the amplitude falls off at least as 1/z under the shift [i, j .
• Any amplitude with one or more negative helicity gluinos admits a valid shift This proof is the SU (4) conjugate version of the proof above. Now we pick the negative helicity gluino as line i and denote the SU (4) As i = k, [ik] is non-vanishing, and we conclude that the amplitude must vanish.
Thus in a non-vanishing amplitude there must be at least one other line j = i which does not carry the index a. As line i carries all indices except for a, we can apply (7.4) and again conclude that the amplitude falls off at least as 1/z under the [i, j -shift.
• Any pure scalar amplitude A n with n > 6 admits a valid shift In a pure scalar amplitude each particle carries two SU (4) indices. There are 4 2 = 6 different combinations of indices possible, corresponding to the six distinct scalars of N = 4 SYM. Thus any pure scalar amplitude A n with n > 6 must have at least two lines i and j with the same particle and thus with coinciding SU (4) indices. Using (7.4) we find that the amplitude vanishes for large z under the [i, j -shift.
We are left to analyze pure scalar amplitudes with n ≤ 6. Amplitudes containing two identical scalars admit a valid shift by (7.4). Thus we need only check amplitudes which involve distinct scalars: -n = 6: We perform explicit checks of the pure scalar amplitudes A 6 in which the external particles are precisely the six distinct scalars of the theory, i.e. the amplitudes involving the particles A 12 , A 13 , A 14 , A 23 , A 24 , and A 34 . We find that all possible permutations of the color ordering of the six scalars give amplitudes which fall off as 1/z under a shift [i, i + 3 for some choice of line i. This is done by explicitly computing each amplitude using the NMHV generating function, whose validity was proven in section 3, and then numerically testing the [i, i + 3 -shifts for different choices of line i.
We conclude that for any N = 4 SYM amplitude with n > 4 there exists at least one choice of lines i and j such that under a [i, j -shift A n (1 . . .ĩ . . .j . . . n) → 0 as z → ∞ .
(7.11)
The results also holds for n = 4, with the exception of the 4-scalar amplitude mentioned above.
The input needed for our proof of (7.11) was the result [7] that a [−, j -shift gives a 1/z-falloff (or better) for any state j. In N = 4 SYM, the validity of a [−, j -shift can also be derived from the validity of shifts of type [−, − using supersymmetric Ward identities. Thus we could have started with less: to derive (7.11) it is sufficient to know that amplitudes vanish at large z under any [−, − -shift.
Summary and Discussion
In this paper we have explored the validity and application of recursion relations for n-point amplitudes with general external states in N = 4 SYM theory. We now summarize our results, discuss some difficulties which limit their extension to N = 8 supergravity, and comment on some recent related papers.
1. We were especially concerned with recursion relations following from 3-line shifts because these give the most convenient representations for NMHV amplitudes, namely the MHV vertex expansion. We were motivated by the fact that these representations are useful in the study of multi-loop amplitudes in N = 4 SYM, and it is important [20] to know that they are valid. The expansion can be derived using analyticity in the complex variable z of the 3-line shift (3.3) if the shifted amplitude vanishes as z → ∞. We proved that this condition holds if the 3 shifted lines carry at least one common SU (4) index. SU (4) invariance guarantees that at least one such shift such exists for any NMHV amplitude. For shifts with no common index, there are examples of amplitudes which do not vanish at large z and other examples which do. So the common index criterion is sufficient but not always necessary.
2. We reviewed the structure of the MHV vertex expansion in order to emphasize properties which are important for our applications. A valid 3-line shift, which always exists, is needed to derive the expansion but there is no trace of that shift in the final form of the expansion. For most amplitudes there are several valid shifts, and each leads to the same expansion, which is therefore unique. The main reason for this is that the MHV subdiagrams depend only on holomorphic spinors |i and |P I of the external and internal lines of a diagram. These are not shifted, since the shift affects only the anti-holomorphic spinors |m i ] of the 3 shifted external lines, m 1 , m 2 , m 3 . These desireable properties allow the definition of a universal NMHV generating function which describes all possible n-point processes. This generating function is written as a sum of an "over-complete" set of diagrams which can potentially contribute. Particular amplitudes are obtained by applying a 12th order differential operator in the Grassmann variables of the generating function, and each diagram then appears multiplied by its spin factor. The spin factor vanishes for diagrams which do not contribute to the MHV vertex expansion of a given amplitude. What remains are the diagrams, each in correct form, which actually contribute to the expansion. 3. In [11] it was shown how to use the MHV generating function to carry out the intermediate spin sums in the unitarity cuts from which loop amplitudes are constructed from products of trees. In this paper we have used Grassmann integration to simplify and generalize the previously treated MHV level sums, and we have computed several new examples of sums which require the NMHV generating function on one or both sides of the unitarity cut. The external states in the cut amplitudes are arbitrary and we were able to describe this state dependence with new generating functions. 4. It is well known that the full set of amplitudes in N = 4 SYM theory includes the anti-MHV sector. This contains the n-gluon amplitude in helicity configuration A n (++−−. . .−−) and all others related by SUSY transformations. Each anti-MHV amplitude is the complex conjugate of an MHV amplitude, but this description is not well suited to the evaluation of unitarity sums. Similar remarks apply to anti-NMHV amplitudes which include A n (+ + + − − · · · −) and others related by SUSY. For this reason we developed generating functions for anti-MHV and anti-NMHV n-point amplitudes. We used a systematic prescription to convert any generating function to the conjugate generating function by conjugation of brackets and a simple transformation to a new function of the same Grassmann variables η ia . We then performed 3-and 4-loop unitarity sums in which anti-MHV or anti-NMHV amplitudes appear on one side of the cut and MHV or NMHV on the other side.
5. Our study of the large z behavior of NMHV amplitudes required starting with a concrete representation for them on which we could then perform a 3-particle shift. We used the BCFW recursion relation which is based on a 2-line shift. It was very recently shown in [7] that such a recursion relation is valid for any amplitude in N = 4 SYM which contains at least one negative helicity gluon. Using SUSY Ward identities we were able to remove this restriction. The BCFW recursion relation is valid for all amplitudes. 11 It is natural to ask whether the properties found for recursion relations and generating functions in N = 4 SYM theory are true in N = 8 supergravity. Unfortunately the answer is that not all features carry over at the NMHV level. One complication is that the shifted MHV subamplitudes which appear in the MHV vertex expansion involve the shifted spinors |m i ], so the expansion is no longer shift independent or unique. Valid expansions can be established for many 6-point NMHV amplitudes, but it is known [11] that there are some amplitudes which do not vanish at large z for any 3-line shift. In these cases one must fix the reference spinor |X] such that the O(1) term at z → ∞ vanishes in order to obtain a valid MHV vertex expansion.
Concerning the 2-line shift recursion relations, there are amplitudes in N = 8 SG which do not admit any valid 2-line shifts. One example is the 6-scalar amplitude φ 1234 φ 1358 φ 1278 φ 5678 φ 2467 φ 3456 . No choice of two lines satisfies the index subset criteria needed in section 7 above, and a numerical analysis shows that there no valid 2-line shifts [11], contrary to the analogous N = 4 SYM cases.
We would like to mention several very recent developments which provide, in effect, new versions of generating functions for amplitudes in N = 4 SYM theory.
The paper [21] presents expressions for tree and loop amplitudes based on the dual conformal symmetry [22,23]. This symmetry can be proven at tree level using an interesting new recursion relation [18] for amplitudes with general external states. The formula for NMHV tree amplitudes in [21] has the feature that it does not contain the arbitrary reference spinor that characterizes the MHV vertex expansions of [1]. Dual conformal symmetry appears to be a fundamental and important property of on-shell amplitudes, but the presence of a reference spinor may well be an advantage. Indeed, MHV vertex expansions provide expressions for amplitudes that are quite easy to implement in numerical programs, and the test that the full amplitudes are independent of the reference spinor is extremely useful in practical applications.
The paper [17] has several similarities with our work. They use the same SUSY generators devised in [11] and used here, the MHV generating function of [9] is common, and for n = 3, 4, 5 the anti-MHV generating functions coincide. For n ≥ 6 there are apparent differences in the representation of NMHV amplitudes, since the MHV vertex expansion is not directly used in [21,17] and there is no reference spinor. It could be instructive to explore the relation between these representations. In [17] the application of generating functions to double and triple cuts of 1-loop amplitudes initiated in [11] and studied above are extended to quadruple cuts with interesting results for the box coefficients which occur.
The very new paper [24] uses the fermion coherent state formalism to derive a new type of treelevel recursion relation for the entire set of N = 4 amplitudes. There are many other intriguing ideas to study here. Our strategy is then as follows. In section A.1, we first express the NMHV amplitude A n in terms of the recursion relation following a [1 − , shift and examine the resulting diagrams. In section A.2, we perform a secondary [1, m 2 , m 3 | shift on the vertex expansion resulting from the [1 − , shift. We pick particle for the first shift such that it is non-adjacent to lines m 2 and m 3 . This is always possible for n ≥ 7 (except for one special case at n = 7 which we examine separately in section A.4). We show that for large z each diagram in the [1 − , -expansion falls off at least as 1/z under the [1, m 2 , m 3 |-shift, provided all NMHV amplitudes A n−1 fall off as 1/z under a 3-line shift of this same type. This allows us to prove the falloff under the shift inductively in section A.3. In section A.5 we explicitly verify the falloff for n = 6 which validates the induction and completes the proof.
A.1 Kinematics and diagrams of the [1 − , shift
The [1 − , -shift is defined as
|1] = |1] + z| ] , |1 = |1 , |˜ ] = | ] , |˜ = | − z|1 , (A.1)
where particle 1 is a negative helicity gluon, while line is arbitrary. Consider a diagram of the [1 − , -expansion with internal momentumP 1K =1 + K. The condition that the internal momentum is on-shell fixes the value of z at the pole to be z 1K = P 2
1K
1|K| ] , so that the shifted spinors at the pole are
|1] = |1] + P 2 1K 1|K| ] | ] , |˜ = | − P 2 1K 1|K| ] |1 . (A.2)
At the pole, the internal momentumP 1K can be written as
(P 1K )α β = P 1K | ] 1|P K 1|K| ] . (A.3)
This expression factorizes becauseP 1K is null. It is then convenient to define spinors associated with P 1K as
|P 1K = P 1K | ] 1 1|K| ] , [P 1K | = 1|P K 1 . (A.4)
For future reference, we also record a selection of spinor products:
1˜ = 1 , 1P 1K = 1 , ˜ P 1K = − 1 P 2 1K 1|K| ] , (A.5) [1˜ ] = [1 ] , [1P 1K ] = − K 2 1 , [˜ P 1K ] = − 1|K| ] 1 . (A.6)
We write the diagrams resulting from the [1 − , shift such that line 1 is always on the Left subamplitude L and line on the Right sub-amplitude R. We denote the total number of legs on the L (R) subamplitude by n L (n R ). Applied to the n-point amplitude A n , we have n L + n R = n + 2.
We can use kinematics to rule out the following classes of L × R diagrams:
• There are no MHV × MHV diagrams with n R = 3.
Proof: On the R side we would have a 3-vertex with lines , P 1K and one more line y ∈ { − 1, + 1}. The R vertex is MHV when r + r y + r P = 8, which requires r + r y ≥ 4. The value of the R subamplitude is fixed by "conformal symmetry" (see sec 5 of [11])
A R = y˜ ry+r −5 yP 1K 3−r ˜ P 1K 3−ry . (A.7)
Upon imposing momentum conservation P 1K = −p − p y , short calculations yield
y˜ = yP 1K = ˜ P 1K = 0 . (A.8)
So all three angle brackets entering A R vanish. Since A R has one more angle bracket in the numerator than in the denominator, the amplitude vanishes in the limit where we impose momentum conservation.
• There are no anti-MHV × NMHV diagrams.
Proof: On the L side we would have a 3-vertex with lines 1, −P 1K and one more line x ∈ {2, n}.
For this vertex to be anti-MHV we need r 1 + r x + r P = 4, and since r 1 = 4, this diagram only exists if line x is a positive helicity gluon, i.e. r x = 0. The value of this subamplitude is
A L = [xP 1x ] 3 [1x][1P 1x ]
, (A.9) but using momentum conservation we find that each square bracket vanishes:
[xP 1x ] = [1x] = [1P 1x ] = 0 . (A.10)
As A L has more square brackets in the numerator than in the denominator we conclude that the L subamplitude vanishes.
Thus only the following two types of diagrams contribute to the recursion relation:
Type A: MHV × MHV diagrams with n L ≥ 3 and n R ≥ 4.
Type B: NMHV × anti-MHV diagrams with n L = n − 1 and n R = 3.
We have thus obtained a convenient representation of the amplitude A n (1 − , . . . , m 2 , . . . , m 3 , . . . , n) using the 2-line shift [1 − , . We will now use this representation of the amplitude to examine its behavior under a 3-line shift.
|m 3 ] = |m 3 ] + z 1m 2 |X] .
By assumption, the lines m 2 and m 3 have at least one SU (4) index in common. Such a choice is possible for any NMHV amplitude. Up to now, we have not constrained our choice of line for the primary shift. It is now convenient to choose an / ∈ {m 2 , m 3 } which is not adjacent to either m 2 or m 3 . This is always possible for n ≥ 7, except for one special case with n = 7 which we examine separately below. We will now show that under the shift (A.11), amplitudes vanish at least as 1/z for large z, provided this falloff holds for all NMHV amplitudes with n − 1 external legs under the same type of shift. This will be the inductive step of our proof.
The action of the shift on the recursion diagrams depends on how m 2 and m 3 are distributed between the L and R subamplitudes. We need to consider three cases: m 2 , m 3 ∈ L, m 2 , m 3 ∈ R and m 2 ∈ R, m 3 ∈ L (or, equivalently, m 2 ∈ L, m 3 ∈ R).
Case I: m 2 , m 3 ∈ R The legs on the R subamplitude include , m 2 , m 3 ,P 1K as well as at least one line separating from m 2,3 , so n R ≥ 5. Hence the diagram must be of type A: MHV × MHV.
Since m 2 , m 3 / ∈ K, the angle-square bracket 1|K| ] is unshifted, but The remaining angle brackets shift as follows:
P 2 1K = P 2 1K − z m 2 m 3 1|K|X] ,(A.ˆ P 1K ∼ O(z) , 1P 1K ∼ O(1) , 1ˆ ∼ O(1) , (A.15)
while all other angle brackets are O(1).
We can now examine the effect of the |1, m 1 , m 2 ] shift on the MHV × MHV diagram:
• A L : On the L subamplitude, only |1] and |P 1K shift. The shift is a (rescaled) [1 − ,P 1K -shift and thus A L falls off at least as 1/z for large z by the results of [7].
• The propagator gives a factor 1/z.
• A R : Since line1 belongs to the L subamplitude, 1P 1K and 1ˆ do not appear in A R and it thus follows from (A.14) and (A.15) that all angle brackets in A R which involveP 1K orˆ are O(z) under the shift. The numerator of A R consists of four angle brackets and grows at worst as z 4 for large z. IfP 1K and˜ are consecutive lines in the R subamplitude, then the denominator of A R contains three shifted angle bracket and therefore goes as z 3 . Otherwise, the denominator contains four shifted angle brackets and goes as z 4 . Thus the worst possible behavior of A R is O(z).
We conclude that any diagrams with m 2 , m 3 ∈ R fall off as
O(z −1 ) 1 z O(z 1 ) ∼ O(z −1 ) for large z.
Case II: 12 m 3 ∈ L, m 2 ∈ R.
Since we chose non-adjacent to m 2 , the R subamplitude must have n R ≥ 4 legs. Hence all diagrams in this class must be of type A (MHV × MHV).
We need to analyze the large z behavior of the angle-brackets relevant for the MHV subamplitudes. As z → ∞ we find that the leading behavior of |ˆ and |P 1K is given by
|ˆ = | − m 2 |1 + K|X] 1m 2 [ X] |1 + O(z −1 ) , (A.16) |P 1K = 1 1m 2 |m 2 + O(z −1 ) . (A.17)
Short calculations then yield the following large z behavior for the relevant angle brackets:
m 2ˆ ∼ O(1) , ˆ P 1K ∼ O(1) , aP 1K ∼ O(1) for any a / ∈ {m 2 ,ˆ } , (A.18) but m 2P1K ∼ O(z −1 ) . (A.19)
To derive these falloffs, we used
m 2 |1 + K + |X] = 0 , (A.20)
which holds because the R subamplitude has more than 3 legs, as noted above. • The propagator gives a factor of 1/z. 12 Note that the case of m 2 ∈ L, m 3 ∈ R is obtained from this one by taking m 2 ↔ m 3 and z ↔ −z in all expressions.
For the cases n = 6, 7, our inductive step is not applicable to all diagrams because we cannot always pick line non-adjacent to m 2 and m 3 . The diagrams where we cannot pick in this way must be analyzed separately. For n = 7, our reasoning above only fails for a small class of diagrams. Let us analyze this class of diagrams next.
A.4 Special diagrams for n = 7
For 7-point amplitudes there is one color ordering of the three lines 1, m 2 and m 3 which needs to be analyzed separately, namely
A 7 (1, x 2 , m 2 , x 4 , x 5 , m 3 , x 7 ) . (A.22)
In this case we cannot choose to be non-adjacent to both m 2 and m 3 . Instead choose = x 2 . The analysis of all diagrams goes through as in section A.2, except that Case II may now include a diagram of Type B (NMHV × anti-MHV), namely
A L (1, −P 1K , x 4 , x 5 , m 3 , x 7 ) 1 P 2 1K A R (˜ , m 2 ,P 1K ) . (A.23)
It appears because is adjacent to m 2 .
As z → ∞ we find
|1] = |1] + z m 2 m 3 |X] , (A.24) |P 1K ] = |P 1K ] − z 1m 2 m 3 1 1 |X] , (A.25) |m 3 ] = |m 3 ] + z 1m 2 |X] , (A.26)
while |P 1K remains unshifted. Short calculations then yield the following large z behavior for the relevant square brackets:
[1P 1K ] ∼ O(z) , [aP 1K ] ∼ O(z) , [a1] ∼ O(z) for any a / ∈ {1,P 1K } . (A.27)
Now consider the effect of the secondary shift on the NMHV × anti-MHV diagram:
• A L : After a rescaling |P 1K → − 1m2 1 |P 1K and |P 1K ] → − 1 1m2 |P 1K ], the shift acts exactly as a 3-line [1,P 1K , m 3 |-shift. Note that lineP 1K on the L side has at least one common index with m 3 , because lineP 1K cannot carry this index on the R side. In fact, this index is already carried by line m 2 in the right subamplitude, and as the right subamplitude is a 3-point anti-MHV amplitude, each index must occur exactly once for a non-vanishing result. The behavior of the L subamplitude is thus given by the falloff of a n = 6 amplitude under a [1,P 1K , m 3 | shift, in which line 1 is a negative helicity gluon and linesP 1K and m 3 share a common index.
• The propagator gives a factor of 1/z. We have thus reduced the validity of our shift at n = 7 to its validity at n = 6. Let us now analyze 6-point amplitudes.
A.5 Proof for n = 6
Our analysis above for n > 6 only used the fact that was non-adjacent to m 2,3 by ruling out certain diagrams of type B (NMHV × anti-MHV). For n = 6, we cannot rule out these diagrams and will thus analyze them individually below. Also, we will estimate the large z behavior of the NMHV=anti-MHV 5-point subamplitudes that appear in the [1 − , -shift expansion. This will complete the explicit proof of the desired large z falloff at n = 6, without relying on a further inductive step. The denominator of the anti-MHV subamplitude will go as z 4 or z 5 , depending on whether the lines 1, m 2 , m 3 are consecutive or not. The numerator contains four square brackets, at least one of which does not shift under [1, m 2 , m 3 |. This can be seen as follows. As the lines 1, m 2 , m 3 share a common SU (4) index, the other two lines in the 5-point amplitude,P 1K and (say) y are the lines which do not carry this index. Since the numerator of an anti-MHV amplitude contains precisely the square brackets of particles which do not carry a certain SU (4) index, we conclude that there must be a factor of [yP 1K ] in the numerator. 13 This factor does not shift under the 3-line shift [1, m 2 , m 3 |, so the numerator grows as z 3 at worst. The 5-point anti-MHV L-subamplitude will thus have at least a 1/z falloff. As both the propagator and the right subamplitude remain unshifted, we conclude that the amplitudes (a)-(d) vanish at large z.
• Next, consider the amplitude (e) above: A 6 (1, x 2 , m 2 , x 4 , x 5 , m 3 ). Choose = x 4 . There are potentially two NMHV vertex diagrams: first, the ( , x 5 ) channel which has m 2 , m 3 ∈ L and we can thus apply the same argument we used for cases (a)-(d). The diagram of this channel therefore falls off at least as 1/z. Secondly, consider the (m 2 , )-channel. This diagram has the same right subamplitude that we encountered for n = 7 in the diagram (A.23) above. By the same analysis we conclude that A R ∼ O(z). Note that the left subamplitude is given B Anti-NMHV generating function from Anti-MHV vertex expansion
In section 6.1.2 we applied the Fourier transform prescription to obtain a generating function for anti-NMHV amplitudes, The purpose of this appendix is to use the anti-MHV generating function to prove that (B.1)-(B.2) indeed is the correct generating function for an anti-MHV vertex diagram. The value of each anti-MHV subamplitude is found by applying the appropriate derivative operators to the anti-MHV generating function, whose correctness we have already confirmed in section 6.1.1. The internal line must be an SU (4) invariant, so its total order is 4. Given the external states, there is a unique choice of internal state, so the 4 internal line differentiations can be taken outside the product of anti-MHV generating functions. Thus the value 14 of the diagram is D ext D IFn L (L)F n R (R) .
Consider any anti-MHV vertex diagram
(B.4)
This is true for any external states, hence the correct value of any anti-MHV vertex diagram is 14 It was described in [11] how to obtain the correct overall sign for the diagram. and similarly for δ (8) (R).
The prefactors of (B.1) and (B.5) are clearly the same, so we just need to prove that the spin factor in (B.6) is equal to that in (B.2). We start from (B.6) and write out the full expressions for the "anti-delta-functions" (B.7), seperating out the internal line I from the external lines,
(S.F.) I = D I a i L <j L [i L j L ] ∂ a i L ∂ a j L + i L [i L P I ] ∂ a i L ∂ a I k L η k L a η Ia × i R <j R [i R j R ] ∂ a i R ∂ a j R + i R [i R P I ] ∂ a i R ∂ a I η Ia k R η k R[i L P I ] ∂ a i L k L η k L a × i R <j R [i R j R ] ∂ a i R ∂ a j R η Ia k R η k R a + i R [i R P I ] ∂ a i R k R η k L a . (B.9)
Then perform the D I differentiation:
(S.F.) I = a (−) n L −1 i L <j L [i L j L ] ∂ a i L ∂ a j L k L η k L a i R [i R P I ] ∂ a i R k R η k L a + (−) 2n L −1 i L [i L P I ] ∂ a i L k L η k L a i R <j R [i R j R ] ∂ a i R ∂ a j R k R η k R a . (B.10)
Now factor out the product of all η's corresponding to the external lines to find (S.F.) I = 1 2 4
a i L ,j L ,i R [i L j L ][i R P I ] ∂ a i L ∂ a j L ∂ a i R − i R ,j R ,i L [i R j R ][i L P I ] ∂ a i R ∂ a j R ∂ a i L ext k η ka . (B.11)
Note that by the Schouten identity the antisymmetrized sum over 3 square brackets vanishes,
i R ,j R ,k R [i R j R ][k R | ∂ a i R ∂ a j R ∂ a k R = 0 . (B.12)
We can thus remove the L restriction on the index i L in the second term in (B.11) and replace it by an index m running over all external states. We then obtain
i R ,j R ,i L [i R j R ][i L P I ] ∂ a i R ∂ a j R ∂ a i L = i R ,j R ,m [i R j R ][mP I ] ∂ a i R ∂ a j R ∂ a m = − m,i R ,j R [m i R ][j R P I ] ∂ a m ∂ a i R ∂ a j R − i R ,m,j R [j R m][i R P I ] ∂ a j R ∂ a m ∂ a i R .
(B.13)
In the second line we have used the Schouten identity to split the sum. This is done in order to complete the sum over i L , j L in the first term of (B.11) to a sum over all external states m and n (S.F.) I = 1 2 4 This is precisely the expected spin factor (B.2) (given in the main text in (6.14)) that our prescription predicts.
a i L ,j L ,i R [i L j L ][i R P I ] ∂ a i L ∂ a j L ∂ a i R + m,i R ,j R [mi R ][j R P I ] ∂ a m ∂ a i R ∂ a j R + i R ,m,j R [j R m][i R P I ] ∂ a j R ∂ a
[
Q a , A(i)] = [i ] A a (i) , Q a , A b (i) = [i ] A ab (i) , Q a , A bc (i) = [i ] A abc (i) , Q a , A bcd (i) = [i ] A abcd (i) , Q a , A bcde (i) = 0 . (2.3)Note thatQ a raises the helicity of all operators and involves the spinor angle brackets i . Similarly, Q a lowers the helicity and spinor square brackets [i ] appear.
A n (1, 2, . . . , n) = O(1)O(2) . . . O(n) . (2.4) SU (4) invariance requires that the total number of (suppressed) indices is a multiple of 4, i.e. n i=1 r i = 4m. It is well known, however, that amplitudes A n with n ≥ 4 vanish if n i=1 r i = 4. To see this we use SUSY Ward identities, as in the
, O(1)O(2) . . . O(n) . (2.6) SU (4) symmetry requires that the upper index 1 appears exactly twice among the operators O(i) and that the indices 2,3,4 each appear once. The commutator again contains two terms, one from each O(i) that carries the index 1. The argument above then applies immediately. Let's continue and discuss the Ward identity (2.6) for the general case n i=1 r i = 4m, m ≥ 2. The upper index 1 must appear m + 1 times among the O(i) and the indices 2,3,4 each appear m times. The commutator then contains m + 1 terms, and each of these involves an amplitude with n i=1 r i = 4m. Ward identities with the conjugate supercharges Q a have the similar structure 0 = Q 1 , O(1)O(2) . . . O(n) .
Figure 1 :
1Diagrammatic expansion of an amplitude A n (1 − , . . . , x, . . . , n) under a 2-line shift [1 − , x .
Figure 2 :
2A generic MHV vertex diagram of an NMHV amplitude A n (m 1 , . . . , m 2 , . . . , m 3 , . . . ), arising from a 3-line shift [m 1 , m 2 , m 3 |. The set of linesm i ,m j ,m k is a cyclic permutation ofm 1 ,m 2 ,m 3 .
I and J must have an η-count r I,J which is a multiple of 4. If the overall amplitude is MHV and L = 1, then (5.2) gives r I + r J = 16, and the only possibility is that both subamplitudes I and J are MHV with η-counts 8 each. (Total η-count 4 is non-vanishing only for a 3-point anti-MHV amplitude; such spin sums are considered in section 6.) Likewise, a 2-loop MHV amplitude has r I + r J = 20 = 8 + 12 = 12 + 8, so the intermediate state sum splits into MHV × NMHV plus NMHV × MHV. The table in figure 3 summarizes the possibilities for MHV and NMHV loop amplitudes with (L + 1)-line cuts. For each split, one must sum over all intermediate states; the tree generating functions allow us to derive new generating functions for cut amplitudes with all intermediate states summed.
Figure 4 :
4Triple cut of NMHV 1-loop amplitude gives MHV subamplitudes I, J, and K.
Figure 5
5: 4-point L-loop MHV amplitude with (L + 1)-line cut.
checked numerically the agreement between (5.20) and (5.24) for 4 external lines. The generating function (5.24) gives the correct result for any MHV choice of 4 external states.5.3 MHV 3-loop state sum with NMHV × NMHVConsider the NMHV × NMHV part of the 3-loop spin sum. We express the I and J subamplitudes in terms of their MHV vertex expansions; thus in the intermediate state sum we must sum over all products of MHV vertex diagrams I L ⊂ I and J L ⊂ J. We first compute the spin sum factor associated with such a product, then include the necessary prefactors in order to get a general expression for the intermediate state sum.
F 3 -| l 2 l 3 l 4 A) , (l 1 l 2 | l 3 l 4 AB) , (l 2 l 3 | l 4 ABl 1 ) , (ABl 1 l 2 | l 3 l 4 ) , (Bl 1 l 2 l 3 | l 4 (
34444loop,n-pt NMHV × NMHV = − q, q + 1 r, r + 1 q, l 1 l 1 , q + 1 r, l 4 l 4 , r + 1 l 1 l 2 2 l 2 l 3 2 l 3 l 4 2 × F tree MHV (ext)I L ⊂I, J L ⊂J W I L W J L (s.s.f.) I L ,J L . (5.27)The product of any MHV vertex diagrams I L and J L involve two independent reference spinors X I and X J from the internal momentum prescriptions |P I L = P I L |X I ] and |P J L = P J L |X J ], but the sum over all diagrams must be independent of both reference spinors.Consider the 4-point 3-loop amplitude. Let the external states be A, B, C, D, with A, B on the subamplitude I and C, D on J, as in figure 5. The NMHV subamplitudes I and J are 6-point functions, so their MHV vertex expansions involve a sum of 9 diagrams. For the subamplitude I these diagrams are listed as (I L |I R ): (AB | l 1 l 2 l 3 l 4 ) , (Bl 1 ABl 1 | l 2 l 3 l 4 ) , (Bl 1 l 2 | l 3 l 4 A) , (l 1 l 2 l 3 | l 4 AB) .
For the subamplitude J, replace A, B by D, C to find (J L |J R ). (This gives reverse cyclic order on J.) Recall that we are assuming l 4 / ∈ I L , J L .In some cases, the spin sum factor for product of diagrams (I L |I R ) × (J L |J R ) can directly be seen to vanish. For instance, if no loop momenta are contained in I L or J L , then a row in the matrix (5.26) vanishes, and hence (s.s.f.) I L ,J L = 0. It follows that the diagrams (AB | l 1 l 2 l 3 l 4 ) and (DC | l 1 l 2 l 3 l 4 ) do not contribute to the spin sum. Another non-contributing case is when I L and J L each contain only one loop momentum l i which is common to both. Then a 2 × 2 submatrix of (5.26) vanishes, and hence (s.s.f.) I L ,J L = 0. This means that products such as (Bl 1 | l 2 l 3 l 4 A) × (DCl 1 | l 2 l 3 l 4 ) vanish.
small number of external lines. For example, if we are interested in the 4-line cut of a 3-loop 4-point function, then the N 2 MHV amplitudes we need are 6-point functions, and these are the complex conjugates of MHV amplitudes, usually called anti-MHV amplitudes. To evaluate their contribution to the intermediate state sum we need an anti-MHV generating function expressed in terms of the original η ia variables, so we can apply our integration techniques. Thus we first describe a general method to construct anti-N k MHV generating functions from N k MHV generating functions and use it to find explicit expressions for the anti-MHV and anti-NMHV cases (section 6.1). Then we apply these generating functions to evaluate several examples of unitarity sums in which (N)MHV amplitudes occur on one side of the cut and anti-(N)MHV amplitudes on the other (section 6.2). The most sophisticated example is the intermediate state sum for a 5-line cut of a 4-loop 4-point function.
1/((n − 3)!)4 compensates for the overcounting produced by the contraction of the Levi-Civita symbol with the products of η's. The expression (6.15) contains a hidden factor of δ (8) (ext) and can be written
L k4,...,kn∈I 12k3k4...kn [k 3 P I L ][k 4 k 5 ] η k6a · · · η kna .
Figure 6 :
622) whose "spin sum factor" is simply CD 4 [AB] 4 . This is exactly what our result (6.21) produces when Triple cut of MHV 1-loop amplitude with anti-MHV subamplitude I and MHV subamplitudes J and K.the two 4th order derivative operators D C and D D of the external negative helicity gluons are applied.
Figure 7 :
7check of this result is to assign all external states to be gluons, with A and B (on, say, subamplitude J) having negative helicity and the rest positive. Then the spin sum only contains one term, which gives a spin factor [l 1 l 3 ] 4 l 1 B 4 l 2 l 3 4 . This must be compared with the result of (6.24) with D A D B applied, giving a spin factor [l 1 A] 4 l 1 l 2 4 AB 4 . Momentum conservation on the 3-point subamplitude I gives [l 1 A] 4 l 1 l 2 4 AB 4 = [l 3 A] 4 l 3 l 2 4 AB 4 = [l 3 l 1 ] 4 l 3 l 2 4 l 1 B 4 , (6.25) so the results agree. A unitarity cut of diagrams that contribute to the 4-point MHV amplitude at 4 loops. Note that subamplitude J only connects to internal lines.
Consider the anti-NMHV 7 (I) × NMHV 7 (J) sector of the 5-line cut of the 4-loop 4-point amplitude. The intermediate state sum is straightforward to evaluate using the anti-NMHV generating function in the form (6.16), choosing the lines l 1 and l 2 as the special lines 1 and 2. The result of the intermediate spin sum is then
×
2 → 3)(3 → 3)(3 → 2) of 3 sub-amplitudes. See Figure 7. The spin sum requires integration over the 6 × 4 = 24 η ia variables of the internal lines, and each term contains 8 of the 16 Grassmann variables η Aa .η Ba , η Ca , η Da associated with the external states. Thus, before any integrations, we are dealing with a product of generating functions containing monomials of degree 8 + 24 = 32. The full unitarity sum contains several sectors in which the 32 η'NMHV 6 × MHV 5 .
7
Valid 2-line shifts for any N = 4 SYM amplitude In this section we turn our attention to 2-line shifts which give recursion relations of the BCFW type. We examine the behavior of a general N = 4 SYM tree level amplitude A n (1 . . . i . . . j . . . n) (7.1) under a 2-line shift of type [i, j , i.e. |ĩ] = |i] + z|j] , |ĩ = |i , |j] = |j] , |j = |j − z|i , (7.2)
For
an N = 4 n-point tree level amplitude A n of the form (7.1) it was shown in[7] thatA n (1 . . . i − . . . j . . . n) ∼ O(z −1 ) under a [i − , j shift if i is a negative helicity gluon, line j arbitrary . (7.3) Now consider any amplitude A n which has two lines i and j such that the SU (4) indices of line j are a subset of the SU (4) indices of line i. We will prove that the amplitude vanishes at large z under the BCFW [i, j -shift given in(7.2). Specifically we will show thatA n (1 . . . i . . . j . . . n) ∼ O(z −1 ) , or better, under the shift [i, j if all SU (4) indices of j are also carried by i . (7.4)
index which is not carried by this gluino by a. If all other lines carry this index a, the amplitude vanishes. To see this, pick any other line k = i. The operator O a (k) on this line carries the index a, so there exists an operator O(k) which satisfies [Q a , O(k)] = [ k] O a (k) . (7.8) Picking | ] ∼ |i] we obtain 0 = Q a , O(1) . . . O(i) . . . O(k) . . . O(n) ∼ [ik] O(1) . . . O(i) . . . O a (k) . . . O(n) . (7.9)
-n = 4 :
4There are two types of inequivalent 4-point amplitudes with four distinct scalars. The first type is the constant amplitude A 12 A 23 A 34 A 41 = 1, and SU (4) equivalent versions thereof. The only contribution to this amplitude is from the 4-scalar interaction in the Lagrangian. Clearly, it does not a have any good 2-line shifts, but since it is MHV it can be determined uniquely from SUSY Ward identities. The second type of amplitudes are SU (4) equivalent versions of A 12 A 34 A 23 A , this amplitude vanishes at large z under a [1, 3 -shift, and thus admits a valid recursion relation.-n = 5: By SU (4) invariance, there are no non-vanishing 5-point functions with 5 distinct external scalars.
HE and DZF thank the organizers of the workshop "Wonders of Gauge theory and Supergravity" in Paris June[23][24][25][26][27][28] 2008, where this work was initiated. HE thanks Saclay and LPT-ENS for their hospitality. HE and DZF gratefully acknowledge funding from the MIT-France exchange program. MK would like to thank the members of Tokyo University at IPMU and Komaba campus for stimulating discussions and hospitality in the final stages of this work.HE is supported by a Pappalardo Fellowship in Physics at MIT. DZF is supported by NSF grant PHY-0600465. The work of MK was partially supported by the World Premier International Research Center Initiative of MEXT, Japan. All three authors are supported by the US Department of Energy through cooperative research agreement DE-FG0205ER41360.
A 3 -
3line shifts of NMHV amplitudes with a negative helicity gluonIn section 3 we considered NMHV n-point amplitudes A n (1 − , . . . , m 2 , . . . , m 3 , . . . , n), with particle 1 a negative helicity gluon and m 2 and m 3 sharing at least one common SU (4) index. We claimed that one always obtains a valid MHV vertex expansion from the 3-line shift [1, m 2 , m 3 |. In this appendix we provide the detailed proof of this claim. As a starting point we use the result[7] that a [1 − , -shift of any tree amplitude of N = 4 SYM falls off at least as 1/z for large z, for any choice of particle = 1. The [1 − , -shift therefore gives a valid recursion relation without contributions from infinity.
A. 2
2The secondary |1, m 2 , m 3 ] shift We now act with the 3-line shift |1, m 2 , m 3 ] whose validity we want to prove. The shift [1, m 2 , m 3 | is defined as |1] = |1] + z m 2 m 3 |X] , |m 2 ] = |m 2 ] + z m 3 1 |X] , (A.11)
Now consider the effect of the secondary shift on the MHV × MHV diagram: • A L : It follows from (A.18) that all angle brackets in the L subamplitude are O(1), so A L ∼ O(1).
•
A R : The right subamplitude is a 3-point anti-MHV and is thus the ratio of four square brackets in the numerator and three square brackets in the denominator. According to (A.27) all square brackets are O(z) and we conclude that A R ∼ O(z) for large z.Provided n = 6 amplitudes fall off at least as 1/z under any 3-line shift [1, m 2 , m 3 | in which 1 is a negative helicity gluon and m 2 and m 3 share a common SU (4) index, we conclude that the full amplitude (A.22) goes as O(z −1 ) O(z −1 ) O(z) ∼ O(z −1 ) for large z.
Choose a [1, m 2 , m 3 |-shift where m 2 and m 3 share a common SU (4) index. Using the freedom to reverse the ordering of the states 123456 → 165432, there are six independent cases determined by the color ordering:(a) A 6 (1, x 2 , x 3 , x 4 , m 2 , m 3 ) (b) A 6 (1, x 2 , x 3 , m 2 , m 3 , x 6 ) (c) A 6 (1, x 2 , x 3 , m 2 , x 5 , m 3 ) (d) A 6 (1, m 2 , x 3 , x 4 , x 5 , m 3 ) (e) A 6 (1, x 2 , m 2 , x 4 , x 5 , m 3 ) (f) A 6 (1, x 2 , m 2 , x 4 , m 3 , x 6 )• Consider first the four cases (a)-(d). In these amplitudes, can be chosen to be non-adjacent to m 2 ,m 3 . We pick(a),(b),(c): → x 2 , (d): → x 4 . (A.28) In all four situations the NMHV diagram is a Case III diagram (m 2 , m 3 ∈ L), so we have to check the large z behavior of the Left 5-point anti-MHV amplitude under the [1, m 2 , m 3 |-shift.
W
I (S.F.) I . (B.1) In this appendix we use I to denote the diagrams of the anti-MHV vertex expansion. The sum is over all diagrams in any anti-MHV vertex expansion and the spin factor is (S.F.) , j][kP I L ] ∂ ia ∂ ja ∂ ka η 1a · · · η na . (B.2)
(
. . . ) . (B.3)
.F.) I = D I δ (8) (L) δ (8) (R) .
i L/R , j L/R , k L/R are external states on the L/R side of the vertex expansion. Evaluate first the derivatives ∂ I inside the curly brackets to get(S.F.) I = D I i L <j L [i L j L ] ∂ a i L ∂ a j L k L η k L a η Ia + (−) n L −1 i L
of the spinors |1], |m 2 ], |m 3 ] given by |1] → |1] = |1] + z m 2 m 3 |X] , |m 2 ] → |m 2 ] = |m 2 ] + z m 3 1 |X] , (3.1)
6) is one of many examples. Both [1, 2, 3| and [1, 2, 4| are valid shifts in this case.
|P 1K = |P 1K − z m 2 m 3 while |P 1K ] = |P 1K ].For arbitrary external lines a / ∈ {ˆ ,1} one can check that12)
and therefore
|1] = |1] + z m 2 m 3
1| |X]
1|K| ]
|P 1K ] ,
1| |X]
1|K| ]
|1 ,
(A.13)
|ˆ = |˜ + z m 2 m 3
1|K|X]
1|K| ]
|1 .
aP 1K ∼ O(z) ,
aˆ ∼ O(z) .
(A.14)
We had to again use (B.12) in the last step. Finally the Schouten identity allows us to convert the sum over external momenta on the R to a sum over external momenta on the L subamplitude. The result is(S.F.) I = 1 2 4 a m,n,i L [mn][i L P I ] ∂ a m ∂ a n ∂ am ∂ a
i R
ext k
η ka
=
1
2 4
a m,n,i R
[mn][i R P I ] ∂ a
m ∂ a
n ∂ a
i R
ext k
η ka .
(B.14)
i L
ext k
η ka .
(B.15)
Readers are referred to the review[4] and the references listed there.2 With the exception of one 4-scalar amplitude. See Sec. 7.
This argument does not apply to the case n = 3 because the strong constraint of momentum conservation forces either |1 = c|2 or |1] = c|2]. If the first occurs, then one cannot choose | so as to isolate one of the two terms in (2.5), and A 3 (1, 2, 3) with P n i=1 r i = 4 need not vanish for complex momenta. This amplitude is anti-MHV.
The power in[16,11] was 8, not 4, because the calculations were done in N = 8 SG instead of N = 4 SYM.
For simplicity it is assumed that the only complex numbers contained in f are the spinor components of |i and |i].9 We omit an overall factor (−1) n inFn. This has no consequence for our applications in spin sums.
An (L + 3)-point anti-MHV amplitude has η-count 4(n − (k + 2)) = 4(L + 1), so it is N L−1 MHV.
Except for one particular 4-scalar amplitude which is constant and thus inert under all shifts.
To see this, one can also consider the conjugate MHV amplitude. Its numerator must contain a factor yP 1K because the conjugated particles on linesP 1K and y share a common SU (4) index. Conjugating back, we replace angle by square brackets and obtain the factor [yP 1K ] in the numerator.
Acknowledgements

We are grateful to Zvi Bern and David Kosower for very valuable discussions and suggestions leading to this work. We thank Clifford Cheung for sharing with us his results on the 2-line shifts. We have also benefitted from discussions with Lance Dixon, Iosif Bena, Paolo Benincasa, John Joseph Carrasco and Pierre Vanhove.
• A_R: All angle brackets are O(1), except for ⟨m_2 P̂_{1K}⟩ which is O(z⁻¹) according to (A.19). Note that on the L MHV subamplitude, the internal line P̂_{1K} cannot have the common SU(4) index of m_2 and m_3, because this index is already carried by lines 1 and m_3. Therefore, P̂_{1K} on the R subamplitude must have this index in common with m_2. The "spin factor" in the numerator of the MHV subamplitude A_R thus includes at least one factor of ⟨m_2 P̂_{1K}⟩. If lines m_2 and P̂_{1K} are non-adjacent in the R subamplitude then all angle brackets in the denominator are O(1) according to (A.18). On the other hand, if lines m_2 and P̂_{1K} are adjacent, the denominator of A_R also contains one factor of ⟨m_2 P̂_{1K}⟩ and is thus O(z⁻¹). We conclude that at worst A_R ∼ O(1). As lines 1 and m_3 carry a common SU(4) index, the L subamplitude must be NMHV in order to be non-vanishing. Thus there can be no MHV × MHV diagrams in this class. All amplitudes must be of type B (NMHV × anti-MHV). The right subamplitude is anti-MHV and must have n_R = 3 legs in order for the diagram to be non-vanishing. The secondary shift acts on the L subamplitude as a 3-line shift [1, m_2, m_3|. As particle 1 is a negative helicity gluon and lines m_2 and m_3 share at least one common SU(4) index, the shift is precisely of the same type as the original secondary 3-line shift. This shift acts only on the L subamplitude, which has n − 1 legs.
• A_L: The L subamplitude A_L goes as 1/z provided a [1, m_2, m_3|-shift with line 1 a negative helicity gluon and lines m_2 and m_3 sharing a common SU(4) index is a good shift for amplitudes with n − 1 legs.
• The propagator is unshifted and thus O(1).
• A_R: The right subamplitude is unshifted and thus O(1).
We conclude that any diagrams with m_2, m_3 ∈ L fall off at least as O(z⁻¹) O(1) O(1) ∼ O(z⁻¹) for large z, assuming the validity of the same type of shift for n − 1 legs. In summary, the diagrams resulting from the |1⁻⟩ vertex expansion of the amplitude A_n all fall off at least as fast as 1/z under the secondary shift [1⁻, m_2, m_3|. For Case III, we needed to assume that a 3-line shift of type [1⁻, m_2, m_3| gives at least a falloff of 1/z for (n − 1)-point amplitudes of the same type. We can thus use a simple inductive argument to show the validity of the shift: for all n ≥ 7, all N = 4 SYM NMHV n-point amplitudes A_n satisfy A_n(1, x_2, . . . , x_n) → O(z⁻¹) (A.21)
under any [1, m_2, m_3|-shift with line 1 a negative helicity gluon and lines m_{2,3} sharing at least one common SU(4) index. Consider now a [1, m_2, m_3|-shift on A_{n+1}(1, x_2, . . . , x_{n+1}), again with m_{2,3} chosen to share at least one common SU(4) index. We have shown above that ℓ can always be chosen such that all MHV × MHV vertex diagrams of the [1⁻, ℓ⟩-shift fall off at least as 1/z under the [1, m_2, m_3|-shift. Under the assumption (A.21), we have shown that NMHV_n × anti-MHV_3
by A_L = A_5(1, x_2, P̂_{1K}, x_5, m_3). From eq. (A.27) we know that all square brackets involving 1, m_3, P̂_{1K} grow as O(z), so the numerator will be at worst O(z⁴). As the three shifted lines 1, m_3, P̂_{1K} are not all consecutive, the denominator always goes as z⁵. So A_L ∼ O(1/z), and as the propagator goes as 1/z, we conclude that the whole diagram is at worst O(1/z).
• Finally, consider case (f), A_6(1, x_2, m_2, x_4, m_3, x_6). Choose ℓ = x_2. Then the NMHV vertex appears in the diagram with channel (ℓ, m_2). This diagram can be treated just as the second diagram of case (e). To see this, note that A_L = A_5(1̂, P̂_{1K}, x_4, m_3, x_6), so the three shifted lines 1̂, P̂_{1K}, m_3 are again not all consecutive. We conclude that this diagram also falls off at least as 1/z for large z.
We conclude that for an NMHV 6-point amplitude, any 3-line shift which involves a negative helicity gluon and two other states which share at least one common SU(4) index falls off at least as 1/z for large z.
F. Cachazo, P. Svrcek and E. Witten, "MHV vertices and tree amplitudes in gauge theory," JHEP 0409, 006 (2004) [arXiv:hep-th/0403047].
R. Britto, F. Cachazo and B. Feng, "New recursion relations for tree amplitudes of gluons," Nucl. Phys. B 715, 499 (2005) [arXiv:hep-th/0412308].
R. Britto, F. Cachazo, B. Feng and E. Witten, "Direct proof of tree-level recursion relation in Yang-Mills theory," Phys. Rev. Lett. 94, 181602 (2005) [arXiv:hep-th/0501052].
Z. Bern, L. J. Dixon and D. A. Kosower, "On-Shell Methods in Perturbative QCD," Annals Phys. 322, 1587 (2007) [arXiv:0704.2798 [hep-ph]].
N. Arkani-Hamed and J. Kaplan, "On Tree Amplitudes in Gauge Theory and Gravity," JHEP 0804, 076 (2008) [arXiv:0801.2385 [hep-th]].
S. J. Bidder, D. C. Dunbar and W. B. Perkins, "Supersymmetric Ward identities and NMHV amplitudes involving gluinos," JHEP 0508, 055 (2005) [arXiv:hep-th/0505249].
C. Cheung, "On-Shell Recursion Relations for Generic Theories," arXiv:0808.0504 [hep-th].
K. Risager, "A direct proof of the CSW rules," JHEP 0512, 003 (2005) [arXiv:hep-th/0508206].
V. P. Nair, "A Current Algebra For Some Gauge Theory Amplitudes," Phys. Lett. B 214, 215 (1988).
G. Georgiou, E. W. N. Glover and V. V. Khoze, "Non-MHV tree amplitudes in gauge theory," JHEP 0407, 048 (2004) [arXiv:hep-th/0407027].
M. Bianchi, H. Elvang and D. Z. Freedman, "Generating Tree Amplitudes in N=4 SYM and N = 8 SG," arXiv:0805.0757 [hep-th].
M. Kiermaier, H. Elvang and D. Z. Freedman, "Proof of the MHV vertex expansion for all tree amplitudes in N=4 SYM theory," arXiv:0811.3624 [hep-th].
J. M. Drummond and J. M. Henn, "All tree-level amplitudes in N=4 SYM," arXiv:0808.2475 [hep-th].
S. J. Parke and T. R. Taylor, "An Amplitude for n Gluon Scattering," Phys. Rev. Lett. 56, 2459 (1986).
Z. Bern, L. J. Dixon, D. C. Dunbar and D. A. Kosower, "One loop n point gauge theory amplitudes, unitarity and collinear limits," Nucl. Phys. B 425, 217 (1994) [arXiv:hep-ph/9403226].
Z. Bern, J. J. Carrasco, D. Forde, H. Ita and H. Johansson, "Unexpected Cancellations in Gravity Theories," Phys. Rev. D 77, 025010 (2008) [arXiv:0707.1035 [hep-th]].
J. M. Drummond, J. Henn, G. P. Korchemsky and E. Sokatchev, "Generalized unitarity for N=4 super-amplitudes," arXiv:0808.0491 [hep-th].
A. Brandhuber, P. Heslop and G. Travaglini, "A note on dual superconformal symmetry of the N=4 super Yang-Mills S-matrix," arXiv:0807.4097 [hep-th].
M. T. Grisaru and H. N. Pendleton, "Some Properties Of Scattering Amplitudes In Supersymmetric Theories," Nucl. Phys. B 124, 81 (1977).
Z. Bern, private communication.
J. M. Drummond, J. Henn, G. P. Korchemsky and E. Sokatchev, "Dual superconformal symmetry of scattering amplitudes in N=4 super-Yang-Mills theory," arXiv:0807.1095 [hep-th].
J. M. Drummond, G. P. Korchemsky and E. Sokatchev, "Conformal properties of four-gluon planar amplitudes and Wilson loops," Nucl. Phys. B 795, 385 (2008) [arXiv:0707.0243 [hep-th]].
L. F. Alday and J. M. Maldacena, "Gluon scattering amplitudes at strong coupling," JHEP 0706, 064 (2007) [arXiv:0705.0303 [hep-th]].
N. Arkani-Hamed, F. Cachazo and J. Kaplan, "What is the Simplest Quantum Field Theory?," arXiv:0808.1446 [hep-th].
| [] |
[
"arXiv:physics/9805018v1 [physics.data-an]",
"arXiv:physics/9805018v1 [physics.data-an]"
] | [] | [] | [] | Using projections and correlations to approximate probability distributions Dean Karlen Ottawa-Carleton Institute for Physics, Department of Physics, Carleton University, Ottawa, Canada K1S 5B6 (May 7, 1998) Abstract A method to approximate continuous multi-dimensional probability density functions (PDFs) using their projections and correlations is described. The method is particularly useful for event classification when estimates of systematic uncertainties are required and for the application of an unbinned maximum likelihood analysis when an analytic model is not available. A simple goodness of fit test of the approximation can be used, and simulated event samples that follow the approximate PDFs can be efficiently generated. The source code for a FORTRAN-77 implementation of this method is available. | 10.1063/1.168691 | [
"https://export.arxiv.org/pdf/physics/9805018v1.pdf"
] | 16,878,397 | physics/9805018 | 6adaeaadc39a2df12ac133ed17820fe0eb6d1a00 |
Using projections and correlations to approximate probability distributions

Dean Karlen
Ottawa-Carleton Institute for Physics, Department of Physics, Carleton University, Ottawa, Canada K1S 5B6
(May 7, 1998)

Abstract

A method to approximate continuous multi-dimensional probability density functions (PDFs) using their projections and correlations is described. The method is particularly useful for event classification when estimates of systematic uncertainties are required and for the application of an unbinned maximum likelihood analysis when an analytic model is not available. A simple goodness of fit test of the approximation can be used, and simulated event samples that follow the approximate PDFs can be efficiently generated. The source code for a FORTRAN-77 implementation of this method is available.
I. INTRODUCTION
Visualization of multi-dimensional distributions is often performed by examining single variable distributions (that is, one-dimensional projections) and linear correlation coefficients amongst the variables. This can be adequate when the sample size is small, the distribution consists of essentially uncorrelated variables, or when the correlations between the variables are approximately linear. This paper describes a method to approximate multi-dimensional distributions in this manner and its applications in data analysis.
The method described in this paper, the Projection and Correlation Approximation (PCA), is particularly useful in analyses which make use of either simulated or control event samples. In particle physics, for example, such samples are used to develop algorithms that efficiently select events of one type while preferentially rejecting events of other types. The algorithm can be as simple as a set of criteria on quantities directly measured in the experiment or as complex as an application of an artificial neural network [1] on a large number of observables. The more complex algorithm may result in higher efficiency and purity, but the systematic errors can be difficult to estimate. The PCA method can be used to define a sophisticated selection algorithm with good efficiency and purity, in a way that systematic uncertainties can be reliably estimated.
Another application of the PCA method is in parameter estimation from a data set using a maximum likelihood technique. If the information available is in the form of simulated event samples, it can be difficult to apply an unbinned maximum likelihood method, because it requires a functional representation of the multidimensional probability density function (PDF). The PCA method can be used to approximate the PDFs required for the maximum likelihood method. A simple goodness of fit test is available to determine if the approximation is valid.
To verify the statistical uncertainty of an analysis, it can be useful to create a large ensemble of simulated samples, each sample equivalent in size to the data set being analyzed. In cases where this is not practical because of limited computing resources, the approximation developed in the PCA method can be used, as it is in a form that leads to an efficient method for event generation.
In the following sections, the projection and correlation approximation is described along with its applications. An example data analysis using the PCA method is shown.
II. PROJECTION AND CORRELATION APPROXIMATION
Consider an arbitrary probability density function P(x) of n variables, x_i. The basis for the approximation of this PDF using the PCA approach is the n-dimensional Gaussian distribution, centered at the origin, which is described by an n × n covariance matrix, V, by
G(y) = (2π)^(−n/2) |V|^(−1/2) exp( −(1/2) yᵀ V⁻¹ y )    (1)
where |V| is the determinant of V. The variables x are not, in general, Gaussian distributed, so this formula would be a poor approximation of the PDF if used directly. Instead, the PCA method uses parameter transformations, y_i(x_i), such that the individual distributions for y_i are Gaussian and, as a result, the n-dimensional distribution for y may be well approximated by Eq. (1). The monotonic function y(x) that transforms a variable x, having a distribution function p(x), to the variable y, which follows a Gaussian distribution of mean 0 and variance 1, is
y(x) = √2 erf⁻¹( 2F(x) − 1 )    (2)
where erf⁻¹ is the inverse error function and F(x) is the cumulative distribution of x,
F(x) = ∫_{x_min}^{x} p(x′) dx′ / ∫_{x_min}^{x_max} p(x′) dx′ .    (3)
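To make Eqs. (2)–(3) concrete: the transformation is simply the standard Gaussian quantile of an empirical CDF. A minimal Python sketch follows (the paper's actual implementation is in FORTRAN-77; the function name, binning choice, and use of numpy/scipy here are our own illustrative assumptions):

```python
import numpy as np
from scipy.stats import norm

def gaussian_transform(x, sample, n_bins=40):
    """Eq. (2): y(x) = sqrt(2) * erfinv(2F(x) - 1), with the cumulative
    distribution F of Eq. (3) estimated from a histogram of `sample`.
    norm.ppf(F) is numerically identical to sqrt(2)*erfinv(2F - 1)."""
    counts, edges = np.histogram(sample, bins=n_bins)
    cdf = np.concatenate(([0.0], np.cumsum(counts) / counts.sum()))
    F = np.interp(x, edges, cdf)          # piecewise-linear CDF estimate
    F = np.clip(F, 1e-12, 1.0 - 1e-12)    # avoid infinities at the edges
    return norm.ppf(F)
```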
The resulting n-dimensional distribution for y will not, in general, be an n-dimensional Gaussian distribution. It is only guaranteed that the projection of this distribution onto each y_i axis is Gaussian. In the PCA approximation, however, the probability density function of y is assumed to be Gaussian. Although not exact, this can represent a good approximation of a multi-dimensional distribution in which the correlation of the variables is relatively simple.
Written in terms of the projections, p_i(x_i), the approximation of P(x) using the PCA method is

P(x) = |V|^(−1/2) exp( −(1/2) yᵀ (V⁻¹ − I) y ) ∏_{i=1}^{n} p_i(x_i)    (4)

where V is the covariance matrix for y and I is the identity matrix. To approximate the projections, p_i(x_i), needed in Eqs. (3) and (4), binned frequency distributions (histograms) of x_i can be used. The projection and correlation approximation is exact for distributions with uncorrelated variables, in which case V = I. It is also exact for a Gaussian distribution modified by monotonic one-dimensional variable transformations for any number of variables; or, equivalently, multiplication by a non-negative separable function.
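A sketch of evaluating Eq. (4) for one event, under the same caveats (illustrative Python with our own function names; `proj_densities` would be built from the same histograms used for the transformation):

```python
import numpy as np

def pca_pdf(x, y, proj_densities, V):
    """Eq. (4): P(x) = |V|^(-1/2) exp(-1/2 y^T (V^{-1} - I) y) * prod_i p_i(x_i),
    where y holds the Gaussian-transformed coordinates of the event x and
    proj_densities[i] is a callable returning the projection density p_i."""
    n = len(x)
    quad = y @ (np.linalg.inv(V) - np.eye(n)) @ y
    proj = np.prod([p(xi) for p, xi in zip(proj_densities, x)])
    return np.linalg.det(V) ** -0.5 * np.exp(-0.5 * quad) * proj
```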
A large variety of distributions can be well approximated by the PCA method. However, there are distributions for which this will not be true. For the PCA method to yield a good approximation in two dimensions, the correlation between the two variables must be the same sign in all regions. If the space can be split into regions, inside of which the correlation has everywhere the same sign, then the PCA method can be used on each region separately. To determine if a distribution is well approximated by the PCA method, a goodness of fit test can be applied, as described in the next section.
The generation of simulated event samples that follow the PCA PDF is straightforward and efficient. Events are generated in y space, according to Eq. (1), and then are transformed to the x space. The procedure involves no rejection of trial events, and is therefore fully efficient.
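A sketch of this generator (again illustrative Python under the same assumptions; `inv_cdfs[i]` denotes a numerical inverse of the cumulative distribution F_i of Eq. (3), e.g. an interpolation of the histogram CDF):

```python
import numpy as np
from scipy.stats import norm

def generate_pca_events(n_events, V, inv_cdfs, seed=0):
    """Draw y from the n-dimensional Gaussian of Eq. (1), then invert
    Eq. (2) coordinate by coordinate: u = Phi(y) is uniform in (0, 1),
    and x_i = F_i^{-1}(u_i). No trial events are rejected."""
    rng = np.random.default_rng(seed)
    n = V.shape[0]
    y = rng.multivariate_normal(np.zeros(n), V, size=n_events)
    u = norm.cdf(y)
    return np.column_stack([inv(u[:, i]) for i, inv in enumerate(inv_cdfs)])
```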
III. GOODNESS OF FIT TEST
Some applications of the PCA method do not require that the PDFs be particularly well approximated. For example, to estimate the purity and efficiency of event classification, it is only necessary that the simulated or control samples are good representations of the data. Other applications, such as its use in maximum likelihood analyses, require the PDF to be a good approximation, in order that the estimators are unbiased and that the estimated statistical uncertainties are valid. Therefore it may be important to check that the approximate PDF derived with the PCA method is adequate for a given problem.
In general, when approximating a multidimensional distribution from a sample of events, it can be difficult to derive a goodness of fit statistic, like a χ² statistic. This is because the required multidimensional binning can reduce the average number of events per bin to a very small number, much less than 1.
When the PCA method is used, however, it is easy to form a statistic to test if a sample of events follows the PDF, without slicing the variable space into thousands of bins. The PCA method already ensures that the projections of the approximate PDF will match those of the event sample. A statistic that is sensitive to the correlation amongst the variables is most easily defined in the space of transformed variables, y, where the approximate PDF is an n-dimensional Gaussian. For each event the value X² is calculated,

X² = yᵀ V⁻¹ y ,    (5)

and if the events follow the PDF, the X² values will follow a χ² distribution with n degrees of freedom, where n is the dimension of the Gaussian. A probability weight, w, can therefore be formed,

w(X²) = ∫_{X²}^{∞} χ²(t; n) dt ,    (6)

which will be uniformly distributed between 0 and 1, if the events follow the PDF. The procedure can be thought of in terms of dividing the n-dimensional y space into layers centered about the origin (and whose boundaries are at constant probability in y space) and checking that the right number of events appears in each layer. The goodness of fit test for the PCA distribution is therefore reduced to a test that the w distribution is uniform.

When the goodness of fit test shows that the event sample is not well described by the projection and correlation approximation, further steps may be necessary before the PCA method can be applied to an analysis. To identify correlations which are poorly described, the goodness of fit test can be repeated for each pair of variables. If the test fails for a pair of variables, it may be possible to improve the approximation by modifying the choice of variables used in the analysis, or by treating different regions of variable space with separate approximations.
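The test statistic takes only a few lines of (illustrative) Python under the same assumptions as the earlier sketches:

```python
import numpy as np
from scipy.stats import chi2

def fit_weights(Y, V):
    """Eqs. (5)-(6): per-event X^2 = y^T V^{-1} y and the upper-tail
    chi-squared probability w(X^2), which is uniform on (0, 1) when the
    events follow the PCA PDF. Y is an (n_events, n) array of transformed
    events; a flat histogram of the returned w values signals a good fit."""
    X2 = np.einsum('ij,jk,ik->i', Y, np.linalg.inv(V), Y)
    return chi2.sf(X2, df=Y.shape[1])
```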
IV. EVENT CLASSIFICATION
Given two categories of events that follow the PDFs P₁(x) and P₂(x), the optimal event classification scheme to define a sample enriched in type 1 events selects events having the largest values for the ratio of probabilities, R = P₁(x)/P₂(x). Using simulated or control samples, the PCA method can be used to define the approximate PDFs P₁(x) and P₂(x), and in order to define a quantity limited to the range [0, 1], it is useful to define a likelihood ratio

L = P₁(x) / ( P₁(x) + P₂(x) ) .    (7)

With only two categories of events, it is irrelevant if the PDFs P₁ and P₂ are renormalized to their relative abundances in the data set. The generalization to more than two categories of events requires that the PDFs P_i be renormalized to their abundances. In either case, each event is classified on the basis of whether or not the value of L for that event is larger than some critical value. Systematic errors in the estimated purity and efficiency of event classification can result if the simulated (or control) samples do not follow the true PDFs. To estimate the systematic uncertainties of the selection, the projections and covariance matrices used to define the PCA PDFs can be varied over suitable ranges.
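Eq. (7) translates directly into code (illustrative Python; P1 and P2 stand for the two PCA-approximated PDFs, e.g. built with the pca_pdf sketch above):

```python
def likelihood_ratio(P1, P2, x):
    """Eq. (7): L = P1(x) / (P1(x) + P2(x)); events whose L exceeds a
    chosen critical value are classified into the type-1 enriched sample."""
    s, b = P1(x), P2(x)
    return s / (s + b)
```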
V. EXAMPLE APPLICATION
In this section the PCA method and its applications are demonstrated with simple analyses of simulated event samples. Two samples, one labeled signal and the other background, are generated with x₁ ∈ (0, 10) and x₂ ∈ (0, 1), according to the distributions

d_s(x₁, x₂) = [ (x₁ − a₁)² + a₂ ] / { [ a₃ (x₁ − a₄(1 + a₅x₂))⁴ + a₆ ] [ (x₂ − a₇)⁴ + a₈ ] }

d_b(x₁, x₂) = 1 / ( b₁ (x₁ + x₂)² + b₂ x₂³ + b₃ )    (8)

where the vectors of constants are given by a = (7, 2, 6, 4, 0.8, 40, 0.6, 2) and b = (0.1, 3, 0.1). These samples of 4000 events each correspond to simulated or control samples used in the analysis of a data set. In what follows it is assumed that the analytic forms of the parent distributions, Eq. (8), are unknown. The signal and background control samples are shown in Fig. 1 and Fig. 2 respectively. A third sample, considered to be data and shown in Fig. 3, is formed by mixing a further 240 events generated according to d_s and 160 events generated according to d_b.

The transformation given in Eq. (2) is applied to the signal control sample, which results in the distribution shown in Fig. 4. To define the transformation, the projections shown in Fig. 1 are used, with 40 bins for each dimension. The projections of the transformed distribution are Gaussian, and the correlation coefficient is found to be 0.40. The goodness of fit test, described in Section III, checks the assumption that the transformed distribution is a 2-dimensional Gaussian. The resulting w(X²) distribution from this test is relatively uniform, as shown in Fig. 5.
A separate transformation of the background control sample gives the distribution shown in Fig. 6, which has a correlation coefficient of 0.03. Note that a small linear correlation coefficient does not necessarily imply that the variables are uncorrelated. In this case the 2-dimensional distribution is well described by a 2-dimensional Gaussian, as shown in Fig. 5.
Since the PCA method gives a relatively good approximation of the signal and background probability distributions, an efficient event classification scheme can be developed, as described in Section IV. Care needs to be taken, however, so that the estimation of the overall efficiency and purity of the selection is not biased. In this example, the approximate signal PDF is defined by 81 parameters (two projections of 40 bins, and one correlation coefficient) derived from the 4000 events in the signal control sample. These parameters will be sensitive to the statistical fluctuations in the control sample, and thus if the same control sample is used to optimize the selection and estimate the efficiency and purity, the estimates may be biased. To reduce this bias, additional samples are generated with the method described at the end of Section II. These samples are used to define the 81 parameters, and the event classification scheme is applied to the original control samples to estimate the purity and efficiency. In this example data analysis, the bias is small. When the original control sample is used to define the 81 parameters, the optimal signal to noise is achieved with an efficiency of 0.880 and purity of 0.726. When the PCA generated samples are used instead, the selection efficiency is reduced to 0.873, for the same purity.
When the classification scheme is applied to the data sample, 261 events are classified as signal events. Given the efficiency and purity quoted above, the number of signal events in the sample is estimated to be 217 ± 19.
The number of signal events in the data sample can be more accurately determined by using a maximum likelihood analysis. The likelihood function is defined by

L = ∏_{j=1}^{400} ( f_s P_s(x_j) + (1 − f_s) P_b(x_j) )    (9)

where the product runs over the 400 data events, f_s is the fraction of events attributed to signal, and P_s and P_b are the PCA approximated PDFs, defined by Eq. (4). The signal fraction, estimated by maximizing the likelihood, is 0.617 ± 0.040, a relative uncertainty of 6.4% compared to the 8.5% uncertainty from the counting method. To check that the data sample is well described by the model used to define the likelihood function, Eq. (9), the ratio of probabilities, Eq. (7), is shown in Fig. 7, and compared to a mixture of PCA generated signal and background samples.
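As a rough modern-language sketch of maximizing Eq. (9) in the signal fraction f_s (the paper's own fit would have used its FORTRAN-77 code; the scipy optimizer and names here are our own choices):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_signal_fraction(ps, pb):
    """Maximize the likelihood of Eq. (9). `ps` and `pb` are arrays of the
    PCA-approximated signal/background PDF values on the data events."""
    def neg_log_likelihood(f):
        return -np.sum(np.log(f * ps + (1.0 - f) * pb))
    res = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 1 - 1e-6),
                          method='bounded')
    return res.x
```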
VI. FORTRAN IMPLEMENTATION
The source code for a FORTRAN-77 implementation of the methods described in this paper is available from the author. The program was originally developed for use in an analysis of data from OPAL, a particle physics experiment located at CERN, and makes use of the CERNLIB library [2]. An alternate version is also available, in which the calls to CERNLIB routines are replaced by calls to equivalent routines from NETLIB [3].
FIG. 1. The points represent the sample of 4000 events generated according to the function d_s in Eq. (8), which are used as a control sample for the signal distribution. Contours of d_s are shown to aid the eye. The two projections of the distribution are used by the PCA method to approximate the signal PDF.
FIG. 2. The points represent the sample of 4000 events generated according to the function d_b in Eq. (8), which are used as a control sample for the background distribution. Contours of d_b are shown to aid the eye. The two projections of the distribution are used by the PCA method to approximate the background PDF.
FIG. 3. The points represent the data sample of 400 events consisting of 240 events generated according to the function d_s and 160 generated according to d_b in Eq. (8).
FIG. 4. The points show the distribution of the 4000 signal events after being transformed according to Eq. (2). The projections are now Gaussian distributions, centered at 0 with width 1, and the overall distribution appears to follow a 2-dimensional Gaussian. The correlation coefficient is 0.40.

FIG. 5. The upper and lower histograms show the results of the goodness of fit test applied to the signal and background control samples. The χ² values are 31 and 14 for 19 degrees of freedom, respectively.
FIG. 6. The points show the distribution of the 4000 background events after being transformed according to Eq. (2). The correlation coefficient is 0.03, and the two variables appear to be uncorrelated.

FIG. 7. A check is made that the data sample is consistent with the model used in the maximum likelihood analysis. The distribution of the probability ratio, Eq. (7), is shown for the data events and compared to the expected distribution, as given by a mixture of PCA generated signal and background samples. The agreement is good; the value for χ² is 36 for 35 degrees of freedom.
References to artificial neural networks are numerous. One source with a focus on applications in High Energy Physics is: http://www.cern.ch/NeuralNets/nnwInHep.html.
Information on CERNLIB is available from: http://wwwinfo.cern.ch/asd/index.html.
Netlib is a collection of mathematical software, papers, and databases found at http://www.netlib.org.
| [] |
[
"A Marketplace for Data: An Algorithmic Solution",
"A Marketplace for Data: An Algorithmic Solution"
] | [
"Munther DahlehAnish Agarwal [email protected] \nLIDS\nSDSC\nIDSS at Massachusetts Institute of Technology\n\n",
"Tuhin Sarkar [email protected] \nLIDS\nSDSC\nIDSS at Massachusetts Institute of Technology\n\n"
] | [
"LIDS\nSDSC\nIDSS at Massachusetts Institute of Technology\n",
"LIDS\nSDSC\nIDSS at Massachusetts Institute of Technology\n"
] | [] | In this work, we aim to create a data marketplace; a robust real-time matching mechanism to efficiently buy and sell training data for Machine Learning tasks. While the monetization of data and pre-trained models is an essential focus of industry today, there does not exist a market mechanism to price training data and match buyers to vendors while still addressing the associated (computational and other) complexity. The challenge in creating such a market stems from the very nature of data as an asset: (i) it is freely replicable; (ii) its value is inherently combinatorial due to correlation with signal in other data; (iii) prediction tasks and the value of accuracy vary widely; (iv) usefulness of training data is difficult to verify a priori without first applying it to a prediction task. As our main contributions we: (i) propose a mathematical model for a two-sided data market and formally define the key associated challenges; (ii) construct algorithms for such a market to function and rigorously prove how they meet the challenges defined. We highlight two technical contributions: (i) a new notion of "fairness" required for cooperative games with freely replicable goods; (ii) a truthful, zero regret mechanism for auctioning a particular class of combinatorial goods based on utilizing Myerson's payment function and the Multiplicative Weights algorithm. These might be of independent interest. Preprint. Work in progress. | 10.1145/3328526.3329589 | [
"https://arxiv.org/pdf/1805.08125v2.pdf"
] | 29,156,312 | 1805.08125 | 662f57ffafe14a207c81241dbf0ed2802e9f9db9 |
A Marketplace for Data: An Algorithmic Solution
Anish Agarwal [email protected]
Munther Dahleh
LIDS
SDSC
IDSS at Massachusetts Institute of Technology
Tuhin Sarkar [email protected]
LIDS
SDSC
IDSS at Massachusetts Institute of Technology
In this work, we aim to create a data marketplace; a robust real-time matching mechanism to efficiently buy and sell training data for Machine Learning tasks. While the monetization of data and pre-trained models is an essential focus of industry today, there does not exist a market mechanism to price training data and match buyers to vendors while still addressing the associated (computational and other) complexity. The challenge in creating such a market stems from the very nature of data as an asset: (i) it is freely replicable; (ii) its value is inherently combinatorial due to correlation with signal in other data; (iii) prediction tasks and the value of accuracy vary widely; (iv) usefulness of training data is difficult to verify a priori without first applying it to a prediction task. As our main contributions we: (i) propose a mathematical model for a two-sided data market and formally define the key associated challenges; (ii) construct algorithms for such a market to function and rigorously prove how they meet the challenges defined. We highlight two technical contributions: (i) a new notion of "fairness" required for cooperative games with freely replicable goods; (ii) a truthful, zero regret mechanism for auctioning a particular class of combinatorial goods based on utilizing Myerson's payment function and the Multiplicative Weights algorithm. These might be of independent interest. Preprint. Work in progress.
Introduction
A Data Marketplace -Why Now? Machine Learning (ML) is starting to take the place in industry that "Information Technology" had in the late 1990s: businesses of all sizes and in all sectors, are recognizing the necessity to develop predictive capabilities for continued profitability. To be effective, ML algorithms rely on high-quality training data -however, obtaining relevant training data can be very difficult for firms to do themselves, especially those early in their path towards incorporating ML into their operations. This problem is only further exacerbated, as businesses increasingly need to solve these prediction problems in real-time (e.g. a ride-share company setting prices, retailers/restaurants sending targeted coupons to clear inventory), which means data gets "stale" quickly. Therefore, we aim to design a data marketplace -a real-time market structure for the buying and selling of training data for ML.
What makes Data a Unique Asset? (i) Data can be replicated at zero marginal cost -in general, modeling digital goods (i.e. freely replicated goods) as assets is a relatively new problem (cf. Aiello et al. (2001)). (ii) Its value to a firm is inherently combinatorial, i.e. the value of a particular dataset to a firm depends on what other (potentially correlated) datasets are available -hence, it is not obvious how to set prices for a collection of datasets with correlated signals. (iii) Prediction tasks and the value of an increase in prediction accuracy vary widely between different firms -for example, a 10% increase in prediction accuracy has very different value for a hedge fund maximizing profit compared to a logistics company trying to decrease inventory costs. (iv) The authenticity and usefulness of data is difficult to verify a priori without first applying it to a prediction task -continuing the example from above, a particular dataset of, say, satellite images may be very predictive for a specific financial instrument but may have little use in forecasting demand for a logistics company.
Why Current Online Markets Do Not Suffice? Arguably, the most relevant real-time markets to compare against are: (i) online ad auctions (cf. Varian (2009)); (ii) prediction markets (cf. Wolfers & Zitzewitz (2004)). Traditionally, in these markets (such as online ad auctions) the commodity (ad-space) is not a replicable good and buyers usually have a strong prior on the value of the commodity (cf. Liu & Chen (2006); Zhang et al. (2014)). In contrast, for a data market, it is infeasible for a firm to make bids on specific datasets as it has no prior on their usefulness. Secondly, it is infeasible to run something akin to a second price auction (and variants thereof) since data is freely replicable (unless a seller artificially restricts the number of replications, which may be suboptimal for maximizing revenue). This problem only gets exacerbated by the combinatorial nature of data. Thus any market which matches prediction tasks and training features on sale needs to do so based on which datasets collectively are, empirically, the most predictive and "cheap" enough for a buyer -a capability online ad markets and prediction markets do not have. See Section 1.3 for a more thorough comparison with online ad and prediction markets.
Overview of Contributions
Mathematical Model of Two-Sided Data Market. Formal Definition of Key Challenges. As the main contribution of this paper, we mathematically model a full system design for a data marketplace; we rigorously parametrize the participants of our proposed market -the buyers, sellers and the marketplace itself (Section 2.1) -and the mechanism by which they interact (Section 2.2). To the best of our knowledge, we are the first to lay out an architecture for a data marketplace that takes into account some of the key properties that make data unique -it is freely replicable, it is combinatorial (i.e. features have overlapping information), buyers have no prior on the usefulness of individual datasets on sale, and the prediction tasks of buyers vary widely. In Section 3, we study the key challenges for such a marketplace to robustly function in real-time, which include: (i) how to incentivize buyers to report their internal valuations truthfully; (ii) how to update the price for a collection of correlated datasets such that revenue is maximized over time; (iii) how to divide the generated revenue "fairly" among the training features so they get paid for their marginal contribution; (iv) how to construct algorithms that achieve all of the above and are efficiently computable (e.g. run in polynomial time in the parameters of the marketplace).
Algorithmic Solution. Theoretical Guarantees. In Section 4, we construct algorithms for the various functions the marketplace must carry out: (i) allocate training features to and collect revenue from buyers; (ii) update the price at which the features are sold; (iii) distribute revenue amongst the data sellers. In Section 5, we prove these particular constructions do indeed satisfy the desirable marketplace properties laid out in Section 3. We highlight two technical contributions: (i) Property 3.4 -a novel notion of "fairness" required for cooperative games with freely replicable goods; (ii) a truthful, zero regret mechanism for auctioning a particular class of combinatorial goods based on utilizing Myerson's payment function (cf. Myerson (1981)) and the Multiplicative Weights algorithm (cf. Arora et al. (2012)). These might be of independent interest.
Motivating Example from Inventory Optimization
We begin with an example from inventory optimization to help build intuition for our proposed architecture for a data marketplace (refer to Section 2 for a mathematical formalization of these dynamics). We refer back to it throughout the paper as we introduce various notations and algorithms to explicitly construct such a marketplace.
Example from Inventory Optimization: Imagine data sellers are retail stores selling anonymized minute-by-minute foot-traffic data streams into a marketplace and data buyers are logistics companies who want features that best forecast future inventory demand. In such a setting, even though a logistics company clearly knows there is some value in these data streams on sale, it is very unrealistic for such a company to have a prior on what collection of foot-traffic data streams are predictive for its demand forecasting and make separate bids on each of them (this is even without taking into account the additional complication arising from the overlap in signal i.e. the correlation that invariably will exist between the foot-traffic data streams of the various retail stores). Instead what a logistics company does realistically have access to is a well-defined cost model for not predicting demand well (cf. Heyman & Sobel (2004); Ma et al. (2013)) -e.g. "10% over/under-capacity costs $10,000 per week". Hence it can make a bid into a data market of what a marginal increase in forecasting accuracy of inventory demand is worth to it -e.g. "willing to pay $1000 for a percentage increase in demand forecasting accuracy from the previous week". In such a setting, the marketplace we design performs the following steps: (i) a logistics company supplies a prediction task (i.e. a time series of historical inventory demand) and a bid signifying what a marginal increase in accuracy is worth to it; (ii) the mechanism then supplies the logistics company with foot-traffic data streams that are "cheap" enough as a function of the bid made and the current price of the foot-traffic data streams; (iii) revenue is collected based only on the increased accuracy in forecasting inventory demand empirically seen and the supplied bid; (iv) revenue is divided amongst all the retail stores who provided foot-traffic data;
(v) the price associated with the foot-traffic data streams is then updated.
What we find exciting is that this example can easily be adapted to a variety of commercial settings (e.g. hedge funds sourcing alternative data to predict certain financial instruments, utility companies sourcing electric vehicle charging data to forecast electricity demand during peak hours etc.). Thus we believe the dynamic described above can potentially be a natural, scalable way for businesses to source data for ML tasks, without knowing a priori what combination of data sources will be useful.
Literature Review
Auction design and Online Matching. In this work, we are specifically concerned with online auction design in a two-sided market. There is a rich body of literature on optimal auction design theory initiated by Myerson (1981) and Riley & Samuelson (1981). We highlight some representative papers. In Rochet & Tirole (2003) and Caillaud & Jullien (2003), platform design and the function of general intermediary service providers for such markets are studied; in Gomes (2014), advertising auctions are studied; in the context of ride-sharing such as in Uber and Lyft, the efficiency of matching is studied in Chen & Sheldon (2016) and optimal pricing in Banerjee et al. (2015). An extensive survey on online matching, specifically in the context of ad allocation, can be found in Mehta et al. (2013). These papers generally focus on the tradeoff between inducing participation and extracting rent from both sides. Intrinsic to such models is the assumption that the value of the goods being sold (or service being provided) is known partially or in expectation. This is the key issue in applying these platform designs to a data marketplace -as stated earlier, it is unrealistic for a buyer to know the value of the various data streams being sold a priori (recall the inventory example in Section 1.2, in which a logistics company cannot realistically make bids on separate data streams or bundles of data streams). Secondly, these prior works do not take into account the freely replicable, combinatorial nature of a good such as data.
Online Ad Auctions. See Varian (2009) for a detailed overview. There are two key issues with online ad markets that make them infeasible for data -(i) ad-space is not a replicable good, i.e. for any particular user on an online platform, at any instant in time, only a single ad can be shown in an ad-space. Thus an online ad market does not need to do any "price discovery" -it simply allocates the ad-space to the highest bidder and, to ensure truthfulness, the highest bidder pays the second highest bid, i.e. the celebrated second price auction (and variants thereof). In contrast, for a freely replicable good such as data, a second price auction does not suffice (unless a seller artificially restricts a dataset to be replicated a fixed number of times, which may be suboptimal for maximizing revenue); (ii) buyers of online ad-space have a strong prior on the value of a particular ad-space -for example, a pharmaceutical company has access to historical click-through rates (CTR) for when a user searches for the word "cancer". So it is possible for firms to make separate bids for different ad-spaces based on access to past performance information such as CTR (cf. Liu & Chen (2006); Zhang et al. (2014)). In contrast, since prediction tasks vary so greatly, past success of a specific training feature on sale has little meaning for a firm trying to source training data for its particular ML task; again, this makes it infeasible for a firm to make bids on specific datasets as it has no prior on their usefulness.
Prediction Markets. See Wolfers & Zitzewitz (2004) for a detailed overview. Such markets are a recent phenomenon and have generated a lot of interest, rightly so. Typically in such markets, there is a discrete random variable, W , that a firm wants to accurately predict. The market executes as follows: (i) "Experts" sell probability distributions ∆ W i.e. predictions on the chance of each outcome; (ii) the true outcome, w, is observed; (iii) the market pays the experts based on ∆ W and w. In such literature, payment functions such as those inspired by Kullback-Leibler divergence are utilized as they incentivize "experts" to be truthful (cf. Hanson (2012)). Despite similarities, prediction markets remain infeasible for data -"experts" have to explicitly choose which tasks to make specific predictions for. In contrast, it is not known a priori whether a particular dataset has any importance for a prediction task; in the inventory optimization example in Section 1.2, retail stores selling foot-traffic data cannot realistically know which logistics company's demand forecast their data stream will be predictive for (not even taking into account combinatorial issues). Thus a data market must instead provide a real-time mechanism to match training features to prediction tasks based on collective empirical predictive value.
Information Economics. There has been an exciting recent line of work that directly tackles data as an economic good, which we believe to be complementary to our work. We divide it into three major buckets and highlight some representative papers: (i) data sellers have detailed knowledge of the specific prediction task, and incentives to exert effort to collect high-quality data (e.g. reduce variance) are modeled in Babaioff et al. (2012); Cummings et al. (2015); (ii) data sellers have different valuations for privacy, and mechanisms that trade off privacy loss vs. revenue gain are modeled in Ghosh & Roth (2011); Ligett & Roth (2012); (iii) there is a feedback effect between data sellers and data buyers, and incentives for individuals to provide data when they know the information will be used to adjust prices for goods they consume in the future are modeled in Bergemann et al. (2018) (e.g. individuals providing health data knowing that it could potentially affect their insurance prices in the future). These are all extremely important lines of work to pursue, but they focus on different (but complementary) objectives. Specifically (referring to the inventory optimization example in Section 1.2), we model the sellers (retail stores) as simply trying to maximize revenue by selling foot-traffic data they already collect. Hence we assume they have (i) no ability to fundamentally increase the quality of their data stream; (ii) no knowledge of the prediction task; (iii) no concerns for privacy. In many practical commercial settings, these assumptions do suffice as the data is sufficiently anonymized, and these sellers are trying to monetize data they are already collecting through their operations. We focus our work on such a setting, and it would be interesting future work to find ways of incorporating privacy, feedback and the cost of data acquisition into our model.

Sellers

Let there be M sellers, each trying to sell streams of data in this marketplace. We formally parameterize a seller through the following single quantity:
Feature. X_j ∈ R^T, j ∈ [M], is a single univariate time series (i.e. a single feature) over T time steps. For simplicity, we associate with each seller a single feature and thus restrict X_j to be in R^T. Our model is naturally extended to the case where sellers are selling multiple streams of data by considering each stream as another "seller" in the marketplace. We refer to the matrix denoting any subset of features as X_S, S ⊂ [M]. In line with the motivation we provide in Section 1.2 for our model, we assume data sellers do not have the ability to change the quality of the data stream (e.g. reducing variance) they supply into the market nor any concerns for privacy (we assume data is sufficiently anonymized, as is common in many commercial settings). Additionally, sellers have no knowledge of the prediction tasks their data will be used for and simply aim to maximize revenue from the datasets that they have at hand.
Buyers
Let there be N buyers in the market, each trying to purchase the best collection of datasets they can afford in this marketplace for a particular prediction task. We formally parameterize a buyer through the following set of quantities -For n ∈ [N ]:
Prediction Task. Y_n ∈ R^T is a time series over T time steps that buyer n wants to predict well. To avoid confusion, we clarify what we mean by a "time series" for both Y_n and X_j using the inventory optimization example in Section 1.2 -the historical inventory demand over time for the logistics company is the "time series" Y_n. Similarly, the time stamped foot-traffic data sold by retailers is referred to as the "time series" of features, X_j, j ∈ [M]. Hence each scalar in Y_n and X_j has an associated time stamp, and the "prediction task" in this example would be to forecast inventory demand from time-lagged foot traffic data. Our aim is to keep the mechanism as agnostic as possible to the particulars of the prediction task (i.e. data sellers having no knowledge of the prediction task) and so we think of time stamps as the most natural way to connect features X_j to the labels Y_n.
Prediction Gain Function. G_n : R^{2T} → [0, 1], the prediction gain function, takes as inputs the prediction task Y_n and an estimate Ŷ_n, and outputs the quality of the prediction. For regression, an example of G_n is 1 − RMSE (root-mean-squared error; see footnote 2). For classification, an example of G_n is Accuracy (footnote 3). In short, a larger value for G_n implies better prediction accuracy. To simplify the exposition (and without any loss of generality of the model), we assume that all buyers use the same gain function, i.e. G = G_n for all n.
Value for prediction quality. µ_n ∈ R_+ parametrizes how buyer n values a marginal increase in accuracy. As an illustration, recall the inventory optimization example in Section 1.2 where a logistics company makes a bid of the form -"willing to pay $1000 for a percentage increase in demand forecasting accuracy from the previous week". We then have the following definition for how a buyer values an increase in accuracy.

Definition 2.1. Let G be the prediction gain function. We define the value buyer n gets from estimate Ŷ_n as:
µ n · G(Y n ,Ŷ n )
i.e. µ n is what a buyer is willing to pay for a unit increase in G.
Though a seemingly straightforward definition, we view this as one of the key modeling decisions we make in our design of a data marketplace -in particular, a buyer's valuation for data does not come from specific datasets, but rather from an increase in prediction accuracy of a quantity of interest.
Public Bid Supplied to Market: Note that µ n is a private valuation -it is not necessarily what is provided to the marketplace. Let b n ∈ R + refer to the actual bid revealed to the marketplace (which may not necessarily be equal to µ n if the buyer is strategic).
Marketplace
The function of the marketplace is to match buyers and sellers as defined above. We formally parameterize a marketplace through the following set of quantities -For n ∈ [N ]:
Price. p n ∈ R + is the price associated with the features on sale when buyer n arrives.
Machine Learning/Prediction Algorithm. M : R^{MT} → R^T, the learning algorithm utilized by the marketplace, takes as input the features on sale X_M and produces an estimate Ŷ_n of buyer n's prediction problem Y_n. That M is supplied by the marketplace is a simplifying assumption; for example, each buyer could instead provide their own learning algorithm that they intend to use, or point towards one of the many excellent standard open-source libraries widely used such as SparkML, Tensorflow and Scikit-Learn (cf. Meng et al. (2015); Pedregosa et al. (2011); Abadi et al. (2016)).

Allocation Function. AF : (p_n, b_n; X_M) → X̃_M, the allocation function, takes as input the current price p_n and the bid b_n received, to decide the quality at which buyer n gets allocated the features on sale X_M (e.g. by adding noise or subsampling the features). In Section 4.1, we provide explicit instantiations of AF. In Section 4.2, we provide detailed reasoning for why we choose this particular class of allocation functions.
Revenue Function. RF : (p n , b n , Y n ; M, G, X M ) → r n , r n ∈ R + , the revenue function, takes as input the current price p n , in addition to the bid and the prediction task provided by the buyer (b n and Y n respectively), to decide how much revenue r n to extract from the buyer.
Payment Division Function. PD : (Y n , X M ; M, G) → ψ n , ψ n ∈ [0, 1] M , the payment-division function, takes as input the prediction task Y n along with the features that were allocated X M , to compute ψ n , a vector denoting the marginal value of each allocated feature for the prediction task.
Footnote 2: RMSE here is normalized by the range of Y_n: RMSE = (1/(Y_max − Y_min)) · √( Σ_{i=1}^T (Ŷ_i − Y_i)² / T ), where (i) Ŷ_i is the predicted value at time step i ∈ [T] produced by the machine learning algorithm M, and (ii) Y_max, Y_min are the max and min of Y_n respectively.
Footnote 3: Accuracy = (1/T) Σ_{i=1}^T 1(Ŷ_i = Y_i), with Ŷ_i defined similarly to the above.

Price Update Function. PF : (p_n, b_n, Y_n; M, G, X_M) → p_{n+1}, p_{n+1} ∈ R_+, the price-update function, takes as input the current price p_n, in addition to the bid and the prediction task provided by the buyer (b_n and Y_n respectively), to update the price associated with each of the features.
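To make the two gain functions concrete, here is a minimal Python sketch of the normalized-RMSE gain for regression and accuracy for classification; the function names are ours, not from the paper, and the clipping is an assumption we add so that G stays in the stated [0, 1] range:

```python
import numpy as np

def gain_regression(y, y_hat):
    """G = 1 - normalized RMSE (footnote 2); clipped so that G stays in [0, 1]."""
    rmse = np.sqrt(np.mean((np.asarray(y_hat) - np.asarray(y)) ** 2))
    nrmse = rmse / (np.max(y) - np.min(y))
    return float(np.clip(1.0 - nrmse, 0.0, 1.0))

def gain_classification(y, y_hat):
    """G = Accuracy (footnote 3): fraction of time steps predicted exactly right."""
    return float(np.mean(np.asarray(y) == np.asarray(y_hat)))
```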
Buyer Utility
We can now precisely define the function U : R_+ × R^T → R that each buyer is trying to maximize.
Definition 2.2. The utility buyer n receives by bidding b_n for prediction task Y_n is given by
U(b_n, Y_n) := µ_n · G(Y_n, Ŷ_n) − RF(p_n, b_n, Y_n),   (1)
where Ŷ_n = M(X̃_M) and X̃_M = AF(p_n, b_n; X_M).
In words, the first term on the right hand side (r.h.s) of (1) is the value derived from a gain in prediction accuracy (as in Definition 2.1). Note this is a function of the quality of the features that were allocated based on the bid b n . The second term on the r.h.s of (1) is the amount the buyer pays, r n . Buyer utility as in Definition 2.2 is simply the difference between these two terms.
Marketplace Dynamics
We can now formally define the per-step dynamic within the marketplace (refer to Figure 1 for a more intuitive graphical overview). Whenever a buyer n arrives, the following steps occur in sequence (we assume p 0 , b 0 , Y 0 are initialized randomly):
For n ∈ [N]:
1. Market sets price p_n, where p_n = PF(p_{n−1}, b_{n−1}, Y_{n−1}).
2. Buyer n arrives with prediction task Y_n.
3. Buyer n bids b_n, where b_n = arg max_{z∈R_+} U(z, Y_n).
4. Market allocates features X̃_M to buyer n, where X̃_M = AF(p_n, b_n; X_M).
5. Buyer n achieves a gain G(Y_n, M(X̃_M)) in prediction accuracy.
6. Market extracts revenue r_n from buyer n, where r_n = RF(p_n, b_n, Y_n; M, G).
7. Market divides r_n amongst the allocated features using ψ_n, where ψ_n = PD(Y_n, X̃_M; M, G).
It is worth noting from Step 3 of the dynamics laid out above, a buyer is "myopic" over a single-stage; it comes into the market once and leaves after being provided the estimateŶ n , maximizing utility only over that stage. In particular, we do not study the additional complication if the buyer's utility is defined over multiple-stages.
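The per-step dynamics translate directly into code. The sketch below is ours: it treats PF, AF, RF, PD, the learner M and the gain G as black-box callables with the signatures defined in Section 2.1, and the dictionary keys are hypothetical names:

```python
def market_step(state, buyer, PF, AF, RF, PD, M, G):
    """One pass of the per-step marketplace dynamics for a single arriving buyer n."""
    p_n = PF(state["p"], state["b"], state["Y"])      # 1. market sets price
    Y_n = buyer["Y"]                                  # 2. buyer arrives with task Y_n
    b_n = buyer["best_bid"](p_n)                      # 3. buyer bids arg max U(z, Y_n)
    X_alloc = AF(p_n, b_n, state["X"])                # 4. market allocates degraded features
    gain = G(Y_n, M(X_alloc))                         # 5. realized gain in accuracy
    r_n = RF(p_n, b_n, Y_n)                           # 6. market extracts revenue
    psi_n = PD(Y_n, X_alloc)                          # 7. revenue split among sellers
    state.update(p=p_n, b=b_n, Y=Y_n)                 # carry state to step n + 1
    return gain, r_n, psi_n
```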
Desirable Properties of Marketplace
We define the key properties required of such a marketplace for it to be feasible in a large-scale, real-time setting, where buyers are arriving in quick succession and need to be matched with a large number of data sellers within minutes, if not quicker. Intuitively we require the following properties: (i) buyers are truthful in their bids; (ii) overall revenue is maximized; (iii) revenue is fairly divided amongst sellers; (iv) marketplace runs efficiently. We now formally define these properties.
Truthfulness
Property 3.1 (Truthful). A marketplace is "truthful" if for all Y n , µ n = arg max z∈R+ U(z, Y n )
Property 3.1 requires that the allocation function, AF, and the revenue function, RF, incentivize buyers to bid their true valuation for an increase in prediction accuracy.
Revenue Maximization
Property 3.2 (Revenue Maximizing). Let {(µ_1, b_1, Y_1), (µ_2, b_2, Y_2), …, (µ_N, b_N, Y_N)} be a sequence of buyers entering the market. A marketplace is "revenue maximizing" if the price-update function PF(·) produces a sequence of prices {p_1, p_2, …, p_N} such that the "worst-case" average regret, relative to the optimal price p* in hindsight, goes to 0:

lim_{N→∞} (1/N) sup_{(b_n, Y_n): n∈[N]} sup_{p*∈R_+} [ Σ_{n=1}^N RF(p*, b_n, Y_n) − Σ_{n=1}^N RF(p_n, b_n, Y_n) ] = lim_{N→∞} (1/N) R(N, M) = 0,

where R(N, M), i.e. the regret, refers to the bracketed expression in the middle term. Property 3.2 is the standard worst-case regret guarantee (cf. Hazan et al. (2016)). It necessitates that the price-update function PF produce a sequence of prices p_n such that the average difference from the unknown optimal price p* goes to zero as N increases. Note Property 3.2 is a robust guarantee: it must hold over the worst-case sequence of buyers (i.e. µ_n, b_n, Y_n).
Revenue Division
In the following section, we abuse notation and let S ⊂ [M] refer both to the indices of the training features on sale and to the actual features X_S themselves.
Shapley Fairness
Property 3.3 (Shapley Fair). A marketplace is "Shapley-fair" if ∀ n ∈ [N], ∀ Y_n, the following holds on PD (and its output, ψ_n):
1. Balance: Σ_{m=1}^M ψ_n(m) = 1.
2. Symmetry: ∀ m, m′ ∈ [M], ∀ S ⊂ [M] \ {m, m′}, if PD(S ∪ m, Y_n) = PD(S ∪ m′, Y_n), then ψ_n(m) = ψ_n(m′).
3. Zero Element: ∀ m ∈ [M], ∀ S ⊂ [M], if PD(S ∪ m, Y_n) = PD(S, Y_n), then ψ_n(m) = 0.
4. Additivity: Let the outputs of PD([M], Y_n^(1)) and PD([M], Y_n^(2)) be ψ_n^(1) and ψ_n^(2) respectively, and let ψ_n be the output of PD([M], Y_n^(1) + Y_n^(2)). Then ψ_n = ψ_n^(1) + ψ_n^(2).
The conditions of Property 3.3 are the standard axioms of fairness laid out in Shapley (1952). We choose them as they are the de facto method to assess the marginal value of goods (i.e. features in our setting) in a cooperative game (i.e. prediction task in our setting). Note that if we chose a much simpler notion of fairness such as computing the marginal change in the prediction accuracy with and without every single feature, one at a time, then the correlation between features would lead to the market "undervaluing" each feature. As a toy example of this phenomenon, consider a market with only two identical features on sale. It is easy to see that the simple mechanism above would lead to zero value being allocated to each feature, even though they collectively might have had great value. This is clearly undesirable. That is why Property 3.3 is a necessary notion of fairness as it takes into account the combinatorial nature of the different features, X j .
We then have the following celebrated theorem from Shapley (1952).
Theorem 3.1 (Shapley Allocation). Let ψ_shapley ∈ [0, 1]^[M] be the output of the following algorithm:
ψ_shapley(m) = Σ_{T ⊂ [M]\{m}} [ |T|! (M − |T| − 1)! / M! ] · [ G(Y_n, M(X̃_{T∪m})) − G(Y_n, M(X̃_T)) ]   (2)
Then ψ_shapley is the unique allocation that satisfies all conditions of Property 3.3.
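For reference, a brute-force rendering of (2) in Python might look as follows; `value(S)` is a placeholder of ours that evaluates G(Y_n, M(X_S)) for a set S of feature indices (including the empty set):

```python
from itertools import combinations
from math import factorial

def shapley_exact(M, value):
    """Exact Shapley allocation per (2); enumerates all subsets, so runs in Theta(2^M)."""
    psi = [0.0] * M
    for m in range(M):
        rest = [f for f in range(M) if f != m]
        for size in range(M):
            weight = factorial(size) * factorial(M - size - 1) / factorial(M)
            for T in combinations(rest, size):
                psi[m] += weight * (value(set(T) | {m}) - value(set(T)))
    return psi
```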
It is easily seen that the running time of this algorithm is Θ(2^M), which makes it infeasible at scale if implemented as is. It nevertheless serves as a useful standard to compare against.

Robustness to Replication

Property 3.4 (Robustness to replication). For all m ∈ [M], let m_i^+ refer to the i-th replicated copy of m, i.e. X^+_{m,i} = X_m. Let [M]^+ = ∪_m (m ∪_i m_i^+) refer to the set of original and replicated features, and let ψ_n^+ = PD([M]^+, Y_n). Then a marketplace is ε-"robust-to-replication" if ∀ n ∈ [N], ∀ Y_n, the following holds on PD:
ψ_n^+(m) + Σ_i ψ_n^+(m_i^+) ≤ ψ_n(m) + ε.
Property 3.4 is a novel notion of fairness, which can be considered a necessary additional requirement to the Shapley notions of fairness for freely replicable goods. We use Example 3.1 below to elucidate how adverse replication of data can lead to grossly undesirable revenue divisions (refer to Figure 2 for a graphical illustration). Note that implicit in the definition of Property 3.4 is that the "strategy-space" of the data sellers is the number of times they replicate their data. Example 3.1. Consider a simple setting where the marketplace consists of only two sellers, A and B, each selling one feature, where the two features are identical to one another. By Property 3.3, the Shapley values of A and B are equal, i.e. ψ(A) = 1/2, ψ(B) = 1/2. However, if seller A replicated his or her feature once and sold it again in the marketplace, it is easy to see that the new Shapley allocation will be ψ(A) = 2/3, ψ(B) = 1/3. Hence the Shapley allocation is not robust to replication, since the aggregate payment remains the same (there is no change in accuracy).
Such a notion of fairness in attribution is especially important in the "computer" age where digital goods can be produced at close to zero marginal cost, and yet users get utility from bundles of digital goods with potentially complex combinatorial interactions between them (e.g. battery cost attribution among smartphone applications, reward allocation among "experts" in a prediction market).
Computational Efficiency
We assume that access to the Machine Learning algorithm, M and the Gain function, G each require computation running time of O(M ) i.e. computation complexity scales at most linearly with the number of features/sellers, M . We then have the following computational efficiency requirement of the market, Property 3.5 (Efficient). A marketplace is "efficient" if for each step, n, the marketplace as laid out in Section 2.2 runs in polynomial time in M , where M is the number of sellers. In addition, the computation complexity of the marketplace cannot grow with N .
Such a marketplace is feasible only if it functions in real-time. Thus, it is pertinent that the computational resources required for any buyer n to interface with the market are low i.e. ideally run as close to linear-time in the number of sellers, M , as possible and not be dependent on the number of buyers seen thus far. Due to the combinatorial nature of data, this is a non-trivial requirement as such combinatorial interactions normally lead to an exponential dependence in M . For example, the Shapley Algorithm in Theorem 3.1 runs in Θ(2 M ). Similarly, standard price update algorithms for combinatorial goods satisfying Property 3.2 scale very poorly in M (cf. Daskalakis & Syrgkanis (2016)) -for example, if we maintain separate prices for every data stream (i.e. if p n ∈ R M + ) it is easily seen that regret-minimizing algorithms such as Multiplicative Weights (cf. Arora et al. (2012)) or Upper Confidence Bandits (cf. Auer (2002)), will have exponential running time or exponentially loose guarantees (in M ) respectively. In fact from Daskalakis & Syrgkanis (2016), we know regret minimizing algorithms for even very simple non-additive buyer valuations are computationally intractable.
Marketplace Construction
We now explicitly construct instances of AF, RF, PD and PF and argue in Section 5 that the properties laid out in Section 3 hold for these particular constructions.
Allocation and Revenue Functions
Allocation Function. Recall the allocation function, AF, takes as input the current price p n and the bid b n received, to decide the quality at which buyer n gets allocated the features on sale X M . In essence, AF adds noise to/degrades X M based on the difference between p n and b n to supply the features at a quality level the buyers can "afford" (as quantified by b n ). If the bid is greater than the current price (i.e. b n > p n ), then X M is supplied as is. However if the bid is less than the current price, then the allocation function degrades the set of features being sold in proportion to the difference between b n and p n . This degradation can take many forms and depends on the structure of X j itself. Below, we provide some examples of commonly used allocation functions for some typical X j encountered in ML.
Example 4.1. Consider X_j ∈ R^T, i.e. a sequence of real numbers. Then an allocation function (i.e. perturbation function) AF*_1(p_n, b_n; X_j) commonly used (cf. Cummings et al. (2015); Ghosh & Roth (2011)) would be, for t ∈ [T],
X̃_j(t) = X_j(t) + max(0, p_n − b_n) · N(0, σ²),
where N(0, σ²) is a draw from a univariate Gaussian with variance σ².
Example 4.2. Consider X_j ∈ {0, 1}^T, i.e. a sequence of bits. Then an allocation function (i.e. masking function) AF*_2(p_n, b_n; X_j) commonly used (cf. Schmidt et al. (2018)) would be, for t ∈ [T],
X̃_j(t) = B(t; θ) · X_j(t),
where B(t; θ) is an independent Bernoulli random variable with parameter θ = min(b_n / p_n, 1).
In both examples if b n ≥ p n , then the buyer is given X M as is without degradation. However if b n < p n , then X j is degraded in proportion to the difference between b n and p n . While the specific form of the degradation depends on the structure of X j itself, in Section 5.1 (specifically Assumption 1), we formalize a natural requirement for any such allocation function such that Property 3.1 (i.e. truthfulness) holds.
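A minimal Python sketch of the two allocation functions above; the rendering, the function names, and the noise-scale hyperparameter `sigma` are our own assumptions:

```python
import numpy as np

def af_gaussian(p_n, b_n, X, sigma=1.0, rng=None):
    """Example 4.1: perturb real-valued features with Gaussian noise scaled by max(0, p_n - b_n)."""
    rng = rng or np.random.default_rng()
    return X + max(0.0, p_n - b_n) * rng.normal(0.0, sigma, size=X.shape)

def af_mask(p_n, b_n, X, rng=None):
    """Example 4.2: keep each bit independently with probability theta = min(b_n / p_n, 1)."""
    rng = rng or np.random.default_rng()
    theta = min(b_n / p_n, 1.0)
    return rng.binomial(1, theta, size=X.shape) * X
```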
Revenue Function. Observe from Definition 2.1, we parameterize buyer utility through a single parameter µ n -how much a buyer values a marginal increase in prediction quality. This crucial modeling choice allows us to use Myerson's celebrated payment function rule (cf. Myerson (1981)) given below,
RF*(p_n, b_n, Y_n) = b_n · G(Y_n, M(AF*(b_n, p_n))) − ∫_0^{b_n} G(Y_n, M(AF*(z, p_n))) dz.   (3)
We show in Theorem 5.1, RF * ensures buyer n is truthful (as defined in Property 3.1). Refer to Figure 3 for a graphical view of AF * and RF * .
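Since (3) involves an integral over bids, in practice it can be approximated numerically. A sketch of ours, where `gain_at(z)` is a hypothetical helper evaluating G(Y_n, M(AF*(z, p_n))) at the current price p_n:

```python
import numpy as np

def myerson_revenue(b_n, gain_at, grid=200):
    """RF* from (3): b_n * G(b_n) minus the integral of G(z) over [0, b_n] (trapezoid rule)."""
    zs = np.linspace(0.0, b_n, grid)
    gains = np.array([gain_at(z) for z in zs])
    return b_n * gain_at(b_n) - np.trapz(gains, zs)
```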
Price Update Function
Recall from Section 3.4 that designing general purpose zero-regret algorithms for combinatorial goods is computationally intractable -if we maintain a separate price for each feature, X j , such algorithms would require exponential running time, and hence Property 3.5 would not be satisfied. This is only exacerbated by the highly non-linear nature of the revenue function RF * . That is why to achieve a computationally efficient, truthful, zero-regret algorithm, we need to exploit the specific structure of data. We recall from Definition 2.1 that a buyer's utility comes solely from the quality of the estimateŶ n received, rather than the particular datasets allocated. Thus the key observation we use is that from the buyer's perspective, instead of considering each feature X j as a separate good (which leads to computational intractability), it is greatly simplifying to think of X M as the total amount of "information" on sale -the allocation function AF(p n , b n ) is then a mechanism to collectively adjust the quality of X M so it is "cheap" enough for buyer n (based on the difference between p n and b n ) as instantiated in Section 4.1.
Thus it suffices to maintain a single price p_n ∈ R_+ for all of X_M. The market is still tasked with how to pick p_n (recall from Section 2.2 that the market must pick p_n before buyer n arrives; otherwise no truthfulness guarantees can be made). We now provide some intuition for how increasing or decreasing p_n affects the amount of revenue collected, and the implicit tradeoff that lies therein. Observe from the construction of RF* in (3) that, for a fixed bid and prediction task (b_n, Y_n), if p_n is picked too large, then the positive term in RF* is small (as the degradation of the signal in X̃_M is very high), leading to lower than optimal revenue generation. Similarly, if p_n is picked too small, the negative term in RF* is large, which again leads to an undesired loss in revenue. However, since our particular construction of AF* and RF* allows p_n to be a scalar, we can apply off-the-shelf zero-regret algorithms, specifically Multiplicative Weights, to achieve zero regret, as we show in Theorem 5.2.
We now define some quantities needed to construct the price update algorithm. As we make precise in Assumption 4 in Section 5, we assume the bids come from some bounded set B ⊂ R_+. Define B_max ∈ R to be the maximum element of B, and define B_net(ε) to be a minimal ε-net of B (we endow R with the standard Euclidean metric; an ε-net of a set B is a set K ⊂ B such that for every point x ∈ B there is a point x_0 ∈ K with |x − x_0| ≤ ε). Intuitively, the elements of B_net(ε) serve as our "experts" (i.e. the different prices we experiment with) in the Multiplicative Weights algorithm. The price update algorithm is given in Algorithm 1.
Algorithm 1 PRICE-UPDATE: PF*(b_n, Y_n, B, ε, δ)
1: Let B_net(ε) be an ε-net of B
2: for c_i ∈ B_net(ε) do
3:   Set w^i_1 = 1   ▷ initialize weights of all experts to 1
4: end for
5: for n = 1 to N do
6:   W_n = Σ_{i=1}^{|B_net(ε)|} w^i_n
7:   Let p_n = c_i with probability w^i_n / W_n   ▷ note p_n is not a function of b_n
8:   for c_i ∈ B_net(ε) do
9:     Let g^i_n = RF*(c_i, b_n, Y_n) / B_max   ▷ revenue gain if price c_i was used
10:    Set w^i_{n+1} = w^i_n · (1 + δ g^i_n)   ▷ Multiplicative Weights update step
11:  end for
12: end for
13: return p_n
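An implementation sketch of Algorithm 1 (ours); `revenue(c, b_n, Y_n)` stands in for RF*, and `buyers` is an iterable of (b_n, Y_n) pairs:

```python
import numpy as np

def price_update(buyers, B_net, B_max, delta, revenue, rng=None):
    """Multiplicative Weights over the eps-net of candidate prices (Algorithm 1)."""
    rng = rng or np.random.default_rng()
    w = np.ones(len(B_net))                     # one weight per expert (candidate price)
    prices = []
    for b_n, Y_n in buyers:
        p_n = rng.choice(B_net, p=w / w.sum())  # p_n is drawn independently of b_n
        prices.append(p_n)
        g = np.array([revenue(c, b_n, Y_n) for c in B_net]) / B_max
        w *= 1.0 + delta * g                    # MW update step
    return prices
```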
Payment-Division Functions
Shapley Approximation
In our model (as seen in Section 2.2), a buyer only makes an aggregate payment to the market based on the increase in accuracy experienced (see RF* in (3)). It is thus up to the market to design a mechanism to fairly (as defined in Property 3.3) allocate the revenue among the sellers to incentivize their participation. Following the seminal work in Shapley (1952), there have been a substantial number of applications (cf. Bachrach et al. (2010); Balkanski et al. (2017)) leveraging the ideas in Shapley (1952) to fairly allocate cost/reward among strategic entities cooperating towards a common goal. Since the Shapley algorithm stated in (2) is the unique method to satisfy Property 3.3, but unfortunately runs in time Θ(2^M), the best one can do is to approximate (2) as closely as possible. We introduce a randomized version of the Shapley allocation that runs in time O(M), which gives an ε-approximation of (2) with high probability (shown in Theorem 5.3). We note some similar sampling-based methods, albeit for different applications (cf. Mann & Shapley (1952); Castro et al. (2009); Maleki et al. (2013)).
We first define some quantities needed to construct the revenue division algorithm. Let σ_[M] refer to the set of all permutations over [M]. For any permutation σ ∈ σ_[M], let [σ < m] refer to the set of features in [M] that come before m. The key observation is that instead of enumerating over all permutations in σ_[M] as in the Shapley allocation, it suffices to sample σ_k ∈ σ_[M] uniformly at random with replacement, K times, where K depends on the ε-approximation a practitioner desires. We provide guidance on how to pick K in Section 5. The revenue division algorithm is given in Algorithm 2.
Algorithm 2 SHAPLEY-APPROX: PD*_A(Y_n, X_M, K)
1: for all m ∈ [M] do
2:   for all k ∈ [K] do
3:     σ_k ∼ Unif(σ_[M])
4:     G = G(Y_n, M(X_{[σ_k < m]}))
5:     G+ = G(Y_n, M(X_{[σ_k < m] ∪ m}))
6:     ψ̂^k_n(m) = G+ − G
7:   end for
8:   ψ̂_n(m) = (1/K) Σ_{k=1}^K ψ̂^k_n(m)
9: end for
10: return ψ̂_n = [ψ̂_n(m) : m ∈ [M]]
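A Python sketch of SHAPLEY-APPROX (ours); `value(S)` again stands for G(Y_n, M(X_S)). Note that one random permutation yields a marginal contribution for every feature, so this version reuses each sample across all m, a standard optimization over the per-feature sampling written in the pseudocode:

```python
import numpy as np

def shapley_approx(M, K, value, rng=None):
    """Monte Carlo Shapley estimate over K random permutations (Algorithm 2)."""
    rng = rng or np.random.default_rng()
    psi_hat = np.zeros(M)
    for _ in range(K):
        prefix, prev = set(), value(set())   # value(S) evaluates G(Y_n, M(X_S))
        for m in rng.permutation(M):
            prefix.add(m)
            cur = value(prefix)
            psi_hat[m] += cur - prev         # marginal contribution of feature m
            prev = cur
    return psi_hat / K
```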
Robustness to Replication
Recall from Section 3.3.2 that for freely replicable goods such as data, the standard Shapley notion of fairness does not suffice (see Example 3.1 for how it can lead to undesirable revenue allocations). Though this issue may seem difficult to overcome in general, we again exploit the particular structure of data as a path forward. Specifically, we note that there are standard methods to define the "similarity" between two vectors of data. A complete treatment of similarity measures has been done in Goshtasby (2012). We provide two examples: Example 4.3. Cosine similarity, a standard metric used in text mining and information retrieval, is
|⟨X_1, X_2⟩| / (‖X_1‖_2 ‖X_2‖_2),   X_1, X_2 ∈ R^{mT}
Example 4.4. "Inverse" Hellinger distance, a standard metric to define similarity between underlying data distributions, is
1 − ( (1/2) Σ_{x∈X} (√p_1(x) − √p_2(x))² )^{1/2},   p_1 ∼ X_1, p_2 ∼ X_2.
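The two similarity measures can be written compactly as follows (our sketch; the Hellinger variant assumes `p1`, `p2` are discrete probability vectors over a common support):

```python
import numpy as np

def cosine_similarity(x1, x2):
    """Example 4.3: |<x1, x2>| / (||x1||_2 * ||x2||_2)."""
    return abs(np.dot(x1, x2)) / (np.linalg.norm(x1) * np.linalg.norm(x2))

def hellinger_similarity(p1, p2):
    """Example 4.4: 1 minus the Hellinger distance between two discrete distributions."""
    return 1.0 - np.sqrt(0.5 * np.sum((np.sqrt(p1) - np.sqrt(p2)) ** 2))
```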
We now introduce some natural properties any such similarity metric must satisfy for our purposes.
Definition 4.1 (Adapted from Goshtasby (2012)). A similarity metric is a function SM : R^{mT} × R^{mT} → [0, 1] that satisfies: (i) Limited Range: 0 ≤ SM(·,·) ≤ 1; (ii) Reflexivity: SM(X, Y) = 1 if and only if X = Y; (iii) Symmetry: SM(X, Y) = SM(Y, X); (iv) Triangle Inequality: defining dSM(X, Y) = 1 − SM(X, Y), then dSM(X, Y) + dSM(Y, Z) ≥ dSM(X, Z).
Given the observation that we can define a similarity metric for two data vectors (i.e. features), we now have all the necessary notions to define a "robust-to-replication" version of the randomized Shapley approximation algorithm we introduced in Section 4.3.1. The intuition behind the algorithm we design is that it penalizes similar features (relative to the similarity metric SM) by a sufficient amount to disincentivize replication. The revenue division algorithm is given in Algorithm 3. See Figure 4 for an illustration of the effect of Algorithm 3 on the original allocation in Example 3.1.
Algorithm 3 SHAPLEY-ROBUST: PD*_B(Y_n, X_M, K, SM, λ)
1: ψ̂_n(m) = SHAPLEY-APPROX(Y_n, M, G, K)
2: ψ_n(m) = ψ̂_n(m) · exp(−λ Σ_{j∈[M]\{m}} SM(X_m, X_j))
3: return ψ_n = [ψ_n(m) : m ∈ [M]]
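Given the approximate Shapley values, step 2 of Algorithm 3 is essentially one line. A sketch of ours, with `sim` a hypothetical precomputed M x M matrix where `sim[i, j]` = SM(X_i, X_j):

```python
import numpy as np

def shapley_robust(psi_hat, sim, lam=np.log(2)):
    """Down-weight each feature by exp(-lambda * cumulative similarity to all others)."""
    penalties = sim.sum(axis=1) - np.diag(sim)   # exclude self-similarity SM(X_m, X_m) = 1
    return psi_hat * np.exp(-lam * penalties)
```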
To give performance guarantees, we state four mild and natural assumptions we need on: (i) M (prediction algorithm); (ii) G (prediction gain function); (iii) AF* (allocation function); (iv) b_n (bids made). We list them below.
Assumption 1. M, G, AF* are such that an increase in the difference between p_n and b_n leads to a decrease in G, i.e. an increase in "noise" cannot lead to an increase in prediction accuracy. Specifically, for any Y_n, p_n, let X̃^(1)_M, X̃^(2)_M be the outputs of AF(p_n, b^(1); X_M), AF(p_n, b^(2); X_M) respectively. Then if b^(1) ≤ b^(2), we have G(Y_n, M(X̃^(1)_M)) ≤ G(Y_n, M(X̃^(2)_M)).
Assumption 2. M, G are such that replicated features do not cause a change in prediction accuracy. Specifically, ∀ S ⊂ [M], ∀ Y_n, ∀ m ∈ S, let m^+_i refer to the i-th replicated copy of m (i.e. X^+_{m,i} = X_m), and let S^+ = ∪_m (m ∪_i m^+_i) refer to the set of original and replicated features. Then G(Y_n, M(X_S)) = G(Y_n, M(X_{S^+})).
Assumption 3. The revenue function RF* is L-Lipschitz with respect to price, i.e. for any Y_n, b_n, p^(1), p^(2), we have |RF*(p^(1), b_n, Y_n) − RF*(p^(2), b_n, Y_n)| ≤ L |p^(1) − p^(2)|.
Assumption 4. For all steps n, the set of possible bids b_n comes from a closed, bounded set B, i.e. b_n ∈ B, with diameter(B) = D < ∞.
Truthfulness.
Theorem 5.1. For AF * , Property 3.1 (Truthfulness) can be achieved if and only if Assumption 1 holds. In which case, RF * guarantees truthfulness.
Theorem 5.1 is an application of Myerson's payment function (cf. Myerson (1981)), which ensures b_n = µ_n. See Appendix A for the proof. Again, the key is the definition of buyer utility in Definition 2.1: it lets us parameterize a buyer's value for increased accuracy by a scalar, µ_n, which is what allows us to use Myerson's payment function (unfortunately, generalizations of Myerson's payment function to the setting where µ_n is a vector are severely limited, cf. Daskalakis (2011)).
Revenue Maximization.
Theorem 5.2. Let Assumptions 1, 3 and 4 hold. Let p_n, n ∈ [N], be the output of Algorithm 1. Let L be the Lipschitz constant of RF* with respect to price (where L is defined as in Assumption 3). Let B_max ∈ R be the maximum element of B (where B is defined as in Assumption 4). Then by choosing the algorithm hyper-parameters
ε = 1/(L√N),   δ = √( log(|B_net(ε)|) / N ),
we have that for some positive constant C > 0, the total average regret is bounded by
(1/N) E[R(N)] ≤ C B_max √( log(B_max L √N) / N ) = O( √( log(N) / N ) ),
where the expectation is taken over the randomness in Algorithm 1. Hence, Property 3.2 (Revenue Maximization) holds.
Theorem 5.2 proves Algorithm 1 is a zero regret algorithm (where the bound is independent of the number of features sold since the maximum prediction gain is bounded above i.e. G(·) ≤ 1). See Appendix B for the proof. Note that a limitation of our mechanism is that AF * is fixed and we degrade each feature by the same scaling (since p n is the same across features). An interesting line of future work would be to study whether we can make the allocation AF adaptive -so far we fix AF * a priori, using some standard notions from the literature but it could potentially be made adaptive to the prediction tasks that arrive over time to further increase the revenue generated.
Fairness in Revenue Division.
Theorem 5.3. Let ψ_{n,shapley} be the unique vector satisfying Property 3.3 (Shapley Fairness) as given in (2). For Algorithm 2, pick the hyperparameter K > M log(2/δ) / (2ε²), where δ, ε > 0. Then with probability 1 − δ, the output ψ̂_n of Algorithm 2 achieves
‖ψ_{n,shapley} − ψ̂_n‖_∞ < ε.
Theorem 5.3 is worth noting as it gives an ε-approximation of ψ_{n,shapley}, the unique vector satisfying Property 3.3, in O(M) time; in comparison, computing it exactly would take Θ(2^M). To the best of our knowledge, the direct application of random sampling to compute feature importances for ML algorithms, along with finite-sample guarantees, is novel. We believe this random sampling method could be used as a model-agnostic tool (not dependent on the particulars of the prediction model used) to assess feature importance, a prevalent question for data scientists seeking interpretability from their prediction models. See Appendix C for the proof.
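To get a feel for the sample complexity in Theorem 5.3, the bound K > M log(2/δ) / (2ε²) is cheap to evaluate; the helper below is ours:

```python
import math

def shapley_sample_size(M, eps, delta):
    """Smallest K satisfying K > M log(2/delta) / (2 eps^2) from Theorem 5.3."""
    return math.ceil(M * math.log(2.0 / delta) / (2.0 * eps ** 2))

# e.g. shapley_sample_size(M=100, eps=0.05, delta=0.01) == 105967
```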
Theorem 5.4. Let Assumption 2 hold. For Algorithm 3, pick the hyperparameters K ≥ M log(2/δ) / (2(ε/3)²) and λ = log(2), where δ, ε > 0. Then with probability 1 − δ, the output ψ_n of Algorithm 3 is ε-"Robust to Replication", i.e. Property 3.4 (Robustness-to-Replication) holds. Additionally, Conditions 2-4 of Property 3.3 continue to hold for ψ_n with ε-precision.
Theorem 5.4 states that Algorithm 3 protects against adversarial replication of data, while maintaining the conditions of standard Shapley fairness other than balance. See Appendix C for the proof. The key observation is that, unlike general digital goods, there is a precise way to compute how similar one data stream is to another (refer to Definition 4.1). A natural question to ask is whether Condition 1 of Property 3.3 and Property 3.4 can hold together. Unfortunately, they cannot. Proposition 5.1. If the identities of the sellers in the marketplace are anonymized, the balance condition in Property 3.3 and Property 3.4 cannot simultaneously hold.
Note however, Algorithm 3, down-weights features in a "local" fashion i.e. highly correlated features are individually down-weighted, while uncorrelated features are not. Hence, Algorithm 3 incentivizes sellers to provide data that is: (i) predictive for a wide variety of tasks; (ii) uncorrelated with the other features on sale i.e. has unique information.
In Algorithm 3, we exponentially penalize (i.e. down-weight) each feature, for a given similarity metric SM. An open question for future work is: which revenue allocation mechanism is the most balance-preserving while being robust to replication? We provide a necessary and sufficient condition for a penalty function to be robust to replication for a given similarity metric SM below.
Proposition 5.2. Let Assumption 2 hold. Then for a given similarity metric SM, a penalty function f is "robust-to-replication" if and only if it satisfies the relation
(c + 1) f(x + c) ≤ f(x),   where c ∈ Z_+, x ∈ R_+.
See Appendix E for a proof.
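One can check numerically that the exponential penalty f(x) = exp(−λx) with λ = log 2, as used in Algorithm 3, satisfies the condition of Proposition 5.2; the sketch below is ours (the small tolerance is an assumption to absorb floating-point rounding in the equality case c = 1):

```python
import math

def is_robust(f, xs, cs, tol=1e-12):
    """Check (c + 1) * f(x + c) <= f(x) on a grid of similarities x and replication counts c."""
    return all((c + 1) * f(x + c) <= f(x) * (1 + tol) for x in xs for c in cs)

f = lambda x: math.exp(-math.log(2) * x)   # the exponential penalty of Algorithm 3
print(is_robust(f, xs=[0.1 * i for i in range(100)], cs=range(1, 50)))   # prints True
```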
Conclusion
In this paper, we introduce the first mathematical model for a two-sided data market: we define key challenges, construct algorithms, and give theoretical performance guarantees. We highlight two technical contributions: (i) a new notion of "fairness" required for cooperative games with freely replicable goods (and associated algorithms); (ii) a truthful, zero-regret mechanism for auctioning a particular class of combinatorial goods, based on utilizing Myerson's payment function and the Multiplicative Weights algorithm. There might exist applications of our overall framework and the corresponding theoretical guarantees outside data markets, for other types of combinatorial goods, when there is a need to design efficient, truthful, zero-regret payment and pricing mechanisms. We believe the key requirement in such a setting is to find a way to model buyer utility through a scalar parameter (e.g. number of unique views for multimedia ad campaigns, total battery usage for smartphone apps). Our framework might be of independent interest in such use cases. We end with some interesting lines of questioning for future work: (i) Which revenue division mechanism is the most balance-preserving while being robust to replication? (ii) Note that in this work we do not take into account an important attribute of data: a firm's utility for a particular dataset may depend heavily on which other firms get access to it (e.g. a hedge fund might pay a premium for a particularly predictive dataset to go only to them). So an important question is how a firm can efficiently convey to the market (through its bidding) the externalities it experiences when a dataset is replicated too many times, and how the associated mechanism would compare with the one we describe in this work.
A Truthfulness
Theorem A.1 (Theorem 5.1). For AF*, Property 3.1 (Truthfulness) can be achieved if and only if Assumption 1 holds. In which case, RF* guarantees truthfulness.
Proof. This is a classic result from Myerson (1981). We provide the arguments here for completeness and for consistency with the properties and notation we introduce in our work. We begin with the backward direction. By Assumption 1, the following holds ∀ b′_n ≥ b_n:
G(Y_n, M(AF*(b′_n, p_n))) ≥ G(Y_n, M(AF*(b_n, p_n)))   (4)
To simplify notation, let h(z; G, p n , Y n , M) = G(Y n , M(AF * (z, p n ))). In words, h(z) is the gain in prediction accuracy as a function of the bid, z, for a fixed G, Y n , M, p n .
By definition of (1), it suffices to show that if b_n = µ_n, the following holds:
µ_n · h(µ_n) − µ_n · h(µ_n) + ∫_0^{µ_n} h(z) dz ≥ µ_n · h(b_n) − b_n · h(b_n) + ∫_0^{b_n} h(z) dz   (5)
This is equivalent to showing that
∫_0^{µ_n} h(z) dz ≥ ∫_0^{b_n} h(z) dz − (b_n − µ_n) · h(b_n)   (6)
Case 1: b_n > µ_n. In this case, (6) is equivalent to
(b_n − µ_n) · h(b_n) ≥ ∫_{µ_n}^{b_n} h(z) dz   (7)
This is immediately true due to the monotonicity of h(z), which comes from (4).
Case 2: b_n < µ_n. In this case, (6) is equivalent to
∫_{b_n}^{µ_n} h(z) dz ≥ (µ_n − b_n) · h(b_n)   (8)
Again, this is immediately true due to monotonicity of h(z).
Now we prove the opposite direction, i.e. if we have a truthful payment mechanism, which we denote RF′, then an increased allocation of features cannot decrease accuracy. Our definition of a truthful payment function implies the following two inequalities ∀ b > a:
a · h(a) − RF′(·, a, ·) ≥ a · h(b) − RF′(·, b, ·)   (9)
b · h(b) − RF′(·, b, ·) ≥ b · h(a) − RF′(·, a, ·)   (10)
These two inequalities imply
a · h(a) + b · h(b) ≥ a · h(b) + b · h(a)  ⟹  h(b)(b − a) ≥ h(a)(b − a)   (11)
Since by construction b − a > 0, we can divide both sides of the inequality by b − a to get
h(b) ≥ h(a)  ⟺  G(Y_n, M(AF*(b, p_n))) ≥ G(Y_n, M(AF*(a, p_n)))   (12)
Since the allocation function AF*(z, p_n) is increasing in z, this completes the proof.
B Price Update -Proof of Theorem 5.2
Theorem B.1 (Theorem 5.2). Let Assumptions 1, 3 and 4 hold. Let p_n, n ∈ [N], be the output of Algorithm 1. Let L be the Lipschitz constant of RF* with respect to price (where L is defined as in Assumption 3). Let B_max ∈ R be the maximum element of B (where B is defined as in Assumption 4). Then by choosing the algorithm hyper-parameters ε = 1/(L√N), δ = √( log(|B_net(ε)|) / N ), we have that for some positive constant C > 0, the total average regret is bounded by
(1/N) E[R(N)] ≤ C B_max √( log(B_max L √N) / N ) = O( √( log(N) / N ) ),
where the expectation is taken over the randomness in Algorithm 1. Hence, Property 3.2 (Revenue Maximization) holds.
Proof. Our proof here is an adaptation of the classic result from Arora et al. (2012). We provide the arguments here for completeness and for consistency with the properties and notation we introduce in our work. It is easily seen by Assumption 1 that the revenue function RF* is non-negative. Now, since by construction the gain function G ∈ [0, 1], we have that the range of RF* is in [0, B_max]. This directly implies that for all i and n, g^i_n ∈ [0, 1] (recall g^i_n is the (normalized) revenue gain if we played price c_i for buyer n).
We first prove a regret bound against the best fixed price in hindsight within B_net(ε). Let g^alg_n be the expected (normalized) gain of Algorithm 1 for buyer n. By construction we have that
g^alg_n = Σ_{i=1}^{|B_net(ε)|} w^i_n g^i_n / W_n.
Observe we have the following inductive relationship for W_n:
W_{n+1} = Σ_{i=1}^{|B_net(ε)|} w^i_{n+1}   (13)
        = Σ_{i=1}^{|B_net(ε)|} ( w^i_n + δ w^i_n g^i_n )   (14)
        = W_n + δ Σ_{i=1}^{|B_net(ε)|} w^i_n g^i_n   (15)
        = W_n (1 + δ g^alg_n)   (16)
so that, unrolling the recursion,
W_{N+1} = W_1 Π_{n=1}^N (1 + δ g^alg_n)   (17)
        (a)= |B_net(ε)| · Π_{n=1}^N (1 + δ g^alg_n)   (18)
where (a) follows since W_1 was initialized to |B_net(ε)|.
Taking logs and utilizing the inequality log(1 + x) ≤ x for x ≥ 0, we have
log(W_{N+1}) = log(|B_net(ε)|) + Σ_{n=1}^N log(1 + δ g^alg_n)   (19)
             ≤ log(|B_net(ε)|) + Σ_{n=1}^N δ g^alg_n   (20)
Now using that log(1 + x) ≥ x − x² for x ≥ 0, we have for all prices c_i ∈ B_net(ε),
log(W_{N+1}) ≥ log(w^i_{N+1})   (21)
             = Σ_{n=1}^N log(1 + δ g^i_n)   (22)
             ≥ Σ_{n=1}^N ( δ g^i_n − (δ g^i_n)² )   (23)
             (a)≥ Σ_{n=1}^N δ g^i_n − δ² N   (24)
where (a) follows since g^i_n ∈ [0, 1]. Combining (20) and (24), we have that for all prices c_i ∈ B_net(ε),
δ Σ_{n=1}^N g^alg_n ≥ δ Σ_{n=1}^N g^i_n − log(|B_net(ε)|) − δ² N.   (25)
Dividing by δN and picking δ = √( log(|B_net(ε)|) / N ) gives
(1/N) Σ_{n=1}^N g^alg_n ≥ (1/N) Σ_{n=1}^N g^i_n − 2 √( log(|B_net(ε)|) / N ).
So far we have a bound on how well Algorithm 1 performs against prices in B_net(ε). We now extend it to all of B. Let g^opt_n be the (normalized) revenue gain from buyer n if we had played the optimal price p* (as defined in Property 3.2). Note that by Assumption 4 we have p* ∈ B. Then by the construction of B_net(ε), there exists c_i ∈ B_net(ε) such that |c_i − p*| ≤ ε, and by Assumption 3 we have that
|g^opt_n − g^i_n| = (1/B_max) |RF*(p*, b_n, Y_n) − RF*(c_i, b_n, Y_n)| ≤ Lε / B_max.
We thus have, multiplying through by B_max and averaging over the randomness in the algorithm,
(1/N) E[ Σ_{n=1}^N RF*(p_n, b_n, Y_n) ] ≥ (1/N) Σ_{n=1}^N RF*(p*, b_n, Y_n) − 2 B_max √( log(|B_net(ε)|) / N ) − Lε.
Picking ε = 1/(L√N), and noting that |B_net(ε)| ≤ 3 B_max / ε = 3 B_max L √N, we have that for some positive constant C > 0,
(1/N) E[ Σ_{n=1}^N RF*(p_n, b_n, Y_n) ] ≥ (1/N) Σ_{n=1}^N RF*(p*, b_n, Y_n) − C B_max √( log(B_max L √N) / N ),
which completes the proof.

Theorem C.1 (Theorem 5.3). Let ψ_{n,shapley} be the unique vector satisfying Property 3.3 as given in (2). For Algorithm 2, pick the hyperparameter K > M log(2/δ) / (2ε²), where δ, ε > 0. Then with probability 1 − δ, the output ψ̂_n of Algorithm 2 achieves
‖ψ_{n,shapley} − ψ̂_n‖_∞ < ε.
Proof. It is easily seen that ψ_{n,shapley} can be formulated as the following expectation:
ψ_{n,shapley}(m) = E_{σ∼Unif(σ_{S_n})} [ G(Y_n, M(X_{[σ<m] ∪ m})) − G(Y_n, M(X_{[σ<m]})) ]   (26)
The random variable ψ̂^k_n(m) is distributed in the following manner:
P( ψ̂^k_n(m) = G(Y_n, M(X_{[σ_k<m] ∪ m})) − G(Y_n, M(X_{[σ_k<m]})) ; σ_k ∈ σ_{S_n} ) = 1/|S_n|!   (27)
We then have
E[ ψ̂^k_n(m) ] = ψ_{n,shapley}(m)   (28)
Since each ψ̂^k_n(m) has bounded support between 0 and 1, and the ψ̂^k_n(m) are i.i.d., we can apply Hoeffding's inequality to get the following bound:
P( |ψ_{n,shapley}(m) − ψ̂_n(m)| > ε ) < 2 exp(−2ε²K)   (29)
By applying a union bound over all m ∈ S_n, where |S_n| ≤ M, we have
P( ‖ψ_{n,shapley} − ψ̂_n‖_∞ > ε ) < 2M exp(−2ε²K)   (30)
Setting δ = 2M exp(−2ε²K) and solving for K completes the proof.
Theorem C.2 (Theorem 5.4). Let Assumption 2 hold. For Algorithm 3, pick the hyperparameters K ≥ M log(2/δ) / (2(ε/3)²) and λ = log(2), where δ, ε > 0. Then with probability 1 − δ, the output ψ_n of Algorithm 3 is ε-"Robust to Replication", i.e. Property 3.4 (Robustness-to-Replication) holds. Additionally, Conditions 2-4 of Property 3.3 continue to hold for ψ_n with ε-precision.
Proof. To reduce notational overhead, we drop the dependence on n of all variables for the remainder of the proof. Let S = {X_1, X_2, …, X_K} refer to the original set of allocated features without replication. Let S^+ = {X_(1,1), X_(1,2), …, X_(1,c_1), X_(2,1), …, X_(K,c_K)} (with c_i ∈ N) be an appended version of S with replicated versions of the original features, i.e. X_(m,i) is the (i − 1)-th replicated copy of feature X_m.
Let ψ̂, ψ̂^+ be the respective outputs of Step 1 of Algorithm 3 for S and S^+ respectively. The total revenue allocations to seller m in the original and replicated settings are given by
ψ(m) = ψ̂(m) exp( −λ Σ_{j≠m} SM(X_m, X_j) ),
ψ^+(m) = Σ_{i∈[c_m]} ψ̂^+(m, i) exp( −λ Σ_{(j,l)≠(m,i)} SM(X_(m,i), X_(j,l)) ).
For Property 3.4 to hold, it suffices to show that ψ^+(m) ≤ ψ(m) + ε. Since each replicated copy X_(m,i) has c_m − 1 exact copies among the other features, each with similarity 1 by condition (ii) of Definition 4.1, and since λ, SM(·) ≥ 0, we have
ψ^+(m) ≤ exp(−λ(c_m − 1)) Σ_{i∈[c_m]} ψ̂^+(m, i).
Applying Theorem 5.3 with precision ε/3, and using the symmetry of the replicated copies, it hence suffices to show that
c_m ( ψ̂^+(m, 1) + ε/3 ) exp(−λ(c_m − 1)) ≤ ψ̂(m) + ε   ∀ c_m ∈ N.
Applying Theorem 5.3 once more, together with the fact that by Assumption 2 the Shapley values of the c_m copies sum to the Shapley value of the original feature, and noting that c_m exp(−λ(c_m − 1)) ≤ 1 for all c_m ∈ N by picking λ = log(2), the inequality follows.
The fact that Conditions 2-4 of Property 3.3 continue to hold for ψ_n with ε-precision follows easily from Theorem 5.3 and the construction of ψ_n.
Figure 1: Overview of marketplace dynamics.
Figure 2: Shapley fairness is inadequate for freely replicable goods.
Figure 3: Features allocated (AF*) and revenue collected (RF*) for a particular price vector p_n and bid b_n.
Figure 4: A simple example illustrating how SHAPLEY-ROBUST down-weights similar data to ensure robustness to replication.
Theorem 5.5. AF*, RF*, PF* run in O(M) time. PD*_A, PD*_B run in O(M²) time. Hence, Property 3.5 holds.
Proposition C.1 (Proposition 5.1). If the identities of the sellers in the marketplace are anonymized, the balance condition in Property 3.3 and Property 3.4 cannot simultaneously hold.
Proof. We show this through an extremely simple counter-example consisting of three scenarios. In the first scenario, the marketplace consists of exactly two sellers, A and B, each selling identical features, i.e. X_A = X_B. By Conditions 1 and 2 of Property 3.3, both sellers must receive an equal allocation, i.e. ψ_1(A) = ψ_1(B) = 1/2, for any prediction task. Now consider a second scenario, where the marketplace again consists of the same two sellers A and B, but this time seller A replicates his or her feature once and sells it again in the marketplace as A′. Since by assumption the identity of sellers is anonymized, to achieve the "balance" condition in Property 3.3 we require ψ_2(A) = ψ_2(B) = ψ_2(A′) = 1/3. Thus the total allocation to seller A is ψ_2(A) + ψ_2(A′) = 2/3 > 1/2 = ψ_1(A), i.e. Property 3.4 does not hold. Finally, consider a third scenario, where the marketplace consists of three sellers A, B and C, each selling identical features, i.e. X_A = X_B = X_C. It is easily seen that to achieve "balance" we require ψ_3(A) = ψ_3(B) = ψ_3(C) = 1/3. Since the marketplace cannot differentiate between the second and third scenarios, we can have either balance or Property 3.4, i.e. "robustness to replication", but not both.

D Computational Efficiency
Proof (of Theorem 5.5). This is immediately seen by studying the four functions: (i) AF* simply tunes the quality of each feature X_j for j ∈ [M], which is a linear-time operation in M; (ii) RF* again runs in linear time, as we require a constant number of calls to G and M; (iii) PF* runs in linear time, as we call G and M once for every price in B_net(ε); (iv) PD*_A has a running time of M² log(2/δ)/(2ε²) for any level of precision and confidence given by ε, δ respectively, i.e. we require M log(2/δ)/(2ε²) calls to G and M to compute the Shapley allocation for each feature. For PD*_B, Step 2 is also a linear-time operation in M (note that the pairwise similarities between X_i, X_j for any i, j ∈ [M] can be precomputed).

E Optimal Balance-Preserving, Robust-to-Replication Penalty Functions
In this section we provide a necessary and sufficient condition for "robustness-to-replication" that any penalty function f : R_+ → R_+ must satisfy, where f takes as argument the cumulative similarity of a feature with all other features. In Algorithm 3, we provide a specific example of such a penalty function, given by exponential down-weighting. We have the following result.
Proposition E.1 (Proposition 5.2). Let Assumption 2 hold. Then for a given similarity metric SM, a penalty function f is "robust-to-replication" if and only if it satisfies the relation (c + 1) f(x + c) ≤ f(x), where c ∈ Z_+, x ∈ R_+.
Proof. Consider the case where a certain data seller with feature X_i has original cumulative similarity x, and makes c additional copies of its own data. The following relation is both necessary and sufficient to ensure robustness: (c + 1) ψ̃_i f(x + c) ≤ ψ_i f(x), where ψ̃_i denotes the new Shapley value of X_i including the replicated features and ψ_i the original one. We first show sufficiency. By Assumption 2, the new Shapley value ψ̃_i is no larger than the original Shapley value ψ_i for the same feature. Then it immediately follows that (c + 1) f(x + c) ≤ f(x). We now show that it is also necessary. We study how much the Shapley allocation changes when only one player duplicates data. The Shapley allocation for feature X_i is defined as in (2). A key observation for computing the new Shapley value is that v(S ∪ {i}) − v(S) ≥ 0 if i appears before all its copies.
Define M to be the number of original sellers (without copying) and c the number of additional copies. By a counting argument one can show that ψ̃_i ≥ ψ_i − c/M. Observe this inequality turns into an equality when all the original sellers have exactly the same data. This bound tells us that when there is a large number of sellers, replicating a single data set a fixed number of times does not change the Shapley allocation too much, i.e. ψ̃_i ≈ ψ_i (with the approximation being tight in the limit as M tends to infinity). Therefore, we necessarily need to ensure that (c + 1) f(x + c) ≤ f(x).
If we make the extremely loose relaxation of letting c ∈ R_+ instead of Z_+, then the exponential weighting in Algorithm 3 is minimal in the sense that it ensures robustness with the least penalty in allocation. Observe that the penalty function (assuming differentiability) should then also satisfy f′(x) ≤ −K f(x) for some K ≥ 0. By Gronwall's inequality we can see that f(x) ≤ C e^{−Kx} for suitable C, K ≥ 0. This suggests that the exponential class of penalties ensures robustness with the "least" penalty, and is minimal in that sense.
References

Abadi, Martín, Agarwal, Ashish, Barham, Paul, Brevdo, Eugene, Chen, Zhifeng, Citro, Craig, Corrado, Gregory S., Davis, Andy, Dean, Jeffrey, Devin, Matthieu, Ghemawat, Sanjay, Goodfellow, Ian J., Harp, Andrew, Irving, Geoffrey, Isard, Michael, Jia, Yangqing, Józefowicz, Rafal, Kaiser, Lukasz, Kudlur, Manjunath, Levenberg, Josh, Mané, Dan, Monga, Rajat, Moore, Sherry, Murray, Derek Gordon, Olah, Chris, Schuster, Mike, Shlens, Jonathon, Steiner, Benoit, Sutskever, Ilya, Talwar, Kunal, Tucker, Paul A., Vanhoucke, Vincent, Vasudevan, Vijay, Viégas, Fernanda B., Vinyals, Oriol, Warden, Pete, Wattenberg, Martin, Wicke, Martin, Yu, Yuan, & Zheng, Xiaoqiang. 2016. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems. CoRR, abs/1603.04467.
Aiello, Bill, Ishai, Yuval, & Reingold, Omer. 2001. Priced oblivious transfer: How to sell digital goods. Pages 119-135 of: International Conference on the Theory and Applications of Cryptographic Techniques. Springer.
Arora, Sanjeev, Hazan, Elad, & Kale, Satyen. 2012. The Multiplicative Weights Update Method: a Meta-Algorithm and Applications. Theory of Computing, 8(1), 121-164.
Auer, Peter. 2002. Using confidence bounds for exploitation-exploration trade-offs. Journal of Machine Learning Research, 3(Nov), 397-422.
Babaioff, Moshe, Kleinberg, Robert, & Paes Leme, Renato. 2012. Optimal Mechanisms for Selling Information. Pages 92-109 of: Proceedings of the 13th ACM Conference on Electronic Commerce. EC '12. New York, NY, USA: ACM.
Bachrach, Yoram, Markakis, Evangelos, Resnick, Ezra, Procaccia, Ariel D., Rosenschein, Jeffrey S., & Saberi, Amin. 2010. Approximating power indices: theoretical and empirical analysis. Autonomous Agents and Multi-Agent Systems, 20(2), 105-122.
Balkanski, Eric, Syed, Umar, & Vassilvitskii, Sergei. 2017. Statistical cost sharing. Pages 6222-6231 of: Advances in Neural Information Processing Systems.
Banerjee, Siddhartha, Riquelme, Carlos, & Johari, Ramesh. 2015. Pricing in ride-share platforms: A queueing-theoretic approach.
Bergemann, Dirk, Bonatti, Alessandro, & Smolin, Alex. 2018. The Design and Price of Information. American Economic Review, 108(1), 1-48.
Caillaud, Bernard, & Jullien, Bruno. 2003. Chicken & egg: Competition among intermediation service providers. RAND Journal of Economics, 309-328.
Castro, Javier, Gómez, Daniel, & Tejada, Juan. 2009. Polynomial Calculation of the Shapley Value Based on Sampling. Comput. Oper. Res., 36(5), 1726-1730.
Chen, M. Keith, & Sheldon, Michael. 2016. Dynamic Pricing in a Labor Market: Surge Pricing and Flexible Work on the Uber Platform.
Cummings, Rachel, Ligett, Katrina, Roth, Aaron, Wu, Zhiwei Steven, & Ziani, Juba. 2015. Accuracy for Sale: Aggregating Data with a Variance Constraint. Pages 317-324 of: Proceedings of the 2015 Conference on Innovations in Theoretical Computer Science. ITCS '15. New York, NY, USA: ACM.
Daskalakis, C., & Syrgkanis, V. 2016 (Oct). Learning in Auctions: Regret is Hard, Envy is Easy. Pages 219-228 of: 2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS).
Daskalakis, Constantinos. 2011. Multi-Item Auctions Defying Intuition?
Ghosh, Arpita, & Roth, Aaron. 2011. Selling Privacy at Auction. Pages 199-208 of: Proceedings of the 12th ACM Conference on Electronic Commerce. EC '11. New York, NY, USA: ACM.
Gomes, Renato. 2014. Optimal auction design in two-sided markets. The RAND Journal of Economics, 45(2), 248-272.
Goshtasby, A. Ardeshir. 2012. Similarity and dissimilarity measures. Pages 7-66 of: Image registration. Springer.
Hanson, Robin. 2012. Logarithmic market scoring rules for modular combinatorial information aggregation. The Journal of Prediction Markets, 1(1), 3-15.
Hazan, Elad, et al. 2016. Introduction to online convex optimization. Foundations and Trends in Optimization, 2(3-4), 157-325.
Heyman, Daniel P., & Sobel, Matthew J. 2004. Stochastic models in operations research: stochastic optimization. Vol. 2. Courier Corporation.
Ligett, Katrina, & Roth, Aaron. 2012. Take It or Leave It: Running a Survey When Privacy Comes at a Cost. Pages 378-391 of: Goldberg, Paul W. (ed), Internet and Network Economics. Berlin, Heidelberg: Springer Berlin Heidelberg.
Liu, De, & Chen, Jianqing. 2006. Designing online auctions with past performance information. Decision Support Systems, 42(3), 1307-1320.
Ma, Yungao, Wang, Nengmin, Che, Ada, Huang, Yufei, & Xu, Jinpeng. 2013. The bullwhip effect on product orders and inventory: a perspective of demand forecasting techniques. International Journal of Production Research, 51(1), 281-302.
Maleki, Sasan, Tran-Thanh, Long, Hines, Greg, Rahwan, Talal, & Rogers, Alex. 2013. Bounding the Estimation Error of Sampling-based Shapley Value Approximation With/Without Stratifying. CoRR, abs/1306.4265.
Mann, I., & Shapley, L.S. 1952. Values for large games IV: Evaluating the electoral college exactly. Tech. rept. RAND Corp, Santa Monica, CA.
Mehta, Aranyak, et al. 2013. Online matching and ad allocation. Foundations and Trends in Theoretical Computer Science, 8(4), 265-368.
Meng, Xiangrui, Bradley, Joseph K., Yavuz, Burak, Sparks, Evan R., Venkataraman, Shivaram, Liu, Davies, Freeman, Jeremy, Tsai, D. B., Amde, Manish, Owen, Sean, Xin, Doris, Xin, Reynold, Franklin, Michael J., Zadeh, Reza, Zaharia, Matei, & Talwalkar, Ameet. 2015. MLlib: Machine Learning in Apache Spark. CoRR, abs/1505.06807.
Myerson, Roger B. 1981. Optimal auction design. Mathematics of Operations Research, 6(1), 58-73.
Pedregosa, Fabian, Varoquaux, Gaël, Gramfort, Alexandre, Michel, Vincent, Thirion, Bertrand, Grisel, Olivier, Blondel, Mathieu, Prettenhofer, Peter, Weiss, Ron, Dubourg, Vincent, Vanderplas, Jake, Passos, Alexandre, Cournapeau, David, Brucher, Matthieu, Perrot, Matthieu, & Duchesnay, Édouard. 2011. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res., 12(Nov.), 2825-2830.
Riley, John G., & Samuelson, William F. 1981. Optimal auctions. The American Economic Review, 71(3), 381-392.
Rochet, Jean-Charles, & Tirole, Jean. 2003. Platform competition in two-sided markets. Journal of the European Economic Association, 1(4), 990-1029.
Schmidt, Ludwig, Santurkar, Shibani, Tsipras, Dimitris, Talwar, Kunal, & Madry, Aleksander. 2018. Adversarially Robust Generalization Requires More Data. CoRR, abs/1804.11285.
Shapley, L.S. 1952. A Value for n-Person Games. Tech. rept. RAND Corp, Santa Monica, CA.
Varian, Hal R. 2009. Online ad auctions. American Economic Review, 99(2), 430-34.
Wolfers, Justin, & Zitzewitz, Eric. 2004. Prediction markets. Journal of Economic Perspectives, 18(2), 107-126.
Zhang, Weinan, Yuan, Shuai, & Wang, Jun. 2014. Optimal Real-time Bidding for Display Advertising. Pages 1077-1086 of: Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. KDD '14. New York, NY, USA: ACM.
| [] |
[
"Entanglement Entropy in Generalised Quantum Lifshitz Models",
"Entanglement Entropy in Generalised Quantum Lifshitz Models"
] | [
"J Angel-Ramelli ",
"V Giangreco ",
"M Puletti ",
"L Thorlacius \nDepartment of Physics\n) The Oskar Klein Centre for Cosmoparticle Physics\nStockholm University\n106 91AlbaNova, StockholmSweden\n",
"\nScience Institute\nUniversity of Iceland\nDunhaga 3107ReykjavíkIceland (\n"
] | [
"Department of Physics\n) The Oskar Klein Centre for Cosmoparticle Physics\nStockholm University\n106 91AlbaNova, StockholmSweden",
"Science Institute\nUniversity of Iceland\nDunhaga 3107ReykjavíkIceland ("
] | [] | We compute universal finite corrections to entanglement entropy for generalised quantum Lifshitz models in arbitrary odd spacetime dimensions. These are generalised free field theories with Lifshitz scaling symmetry, where the dynamical critical exponent z equals the number of spatial dimensions d, and which generalise the 2+1-dimensional quantum Lifshitz model to higher dimensions. We analyse two cases: one where the spatial manifold is a d-dimensional sphere and the entanglement entropy is evaluated for a hemisphere, and another where a d-dimensional flat torus is divided into two cylinders. In both examples the finite universal terms in the entanglement entropy are scale invariant and depend on the compactification radius of the scalar field. | 10.1007/jhep08(2019)072 | [
"https://arxiv.org/pdf/1906.08252v2.pdf"
] | 195,069,343 | 1906.08252 | da79cbd45f2f34f2c6650156c79de4884042e9be |
Entanglement Entropy in Generalised Quantum Lifshitz Models
J Angel-Ramelli
V Giangreco
M Puletti
L Thorlacius
Department of Physics
) The Oskar Klein Centre for Cosmoparticle Physics
Stockholm University
106 91AlbaNova, StockholmSweden
Science Institute
University of Iceland
Dunhaga 3107ReykjavíkIceland (
Entanglement Entropy in Generalised Quantum Lifshitz Models
We compute universal finite corrections to entanglement entropy for generalised quantum Lifshitz models in arbitrary odd spacetime dimensions. These are generalised free field theories with Lifshitz scaling symmetry, where the dynamical critical exponent z equals the number of spatial dimensions d, and which generalise the 2+1-dimensional quantum Lifshitz model to higher dimensions. We analyse two cases: one where the spatial manifold is a d-dimensional sphere and the entanglement entropy is evaluated for a hemisphere, and another where a d-dimensional flat torus is divided into two cylinders. In both examples the finite universal terms in the entanglement entropy are scale invariant and depend on the compactification radius of the scalar field.
Introduction
Quantum entanglement refers to a correlation of a purely quantum mechanical nature between degrees of freedom in a physical system. Consider a quantum system that can be divided into two subsystems A and B, such that the Hilbert space can be written as a direct product of the Hilbert spaces of the subsystems, H = H A ⊗ H B , and take the full system to be in a state described by a density matrix ρ. The entanglement entropy of subsystem A is then defined as the von Neumann entropy of the reduced density matrix, obtained by taking a trace over the degrees of freedom in subsystem B, i.e.
S[A] = −Tr (ρ A log ρ A ) ,(1.1)
with ρ A = Tr B ρ. We will take ρ to be the ground state density matrix of the full system but our results can be extended to more general states.
Entanglement entropy is a useful theoretical probe that encodes certain universal properties of field theories describing critical systems, see e.g. [1][2][3][4][5]. A well known example in this respect is the entanglement entropy of a two-dimensional conformal field theory (CFT) [6][7][8], which has a universal logarithmic term,
S[A] = c 3 log L A ε , (1.2)
where c is the central charge of the CFT in question, L A is the spatial size of the subsystem A, and ε is a UV-cutoff. Here we are assuming that A is connected and that its size is small compared to the full system size L A L. The logarithmic UV behaviour of the entanglement entropy tells us that the system has long-range entangled degrees of freedom (in contrast to an area-law where short-range entanglement would mainly contribute).
In recent years considerable effort has been devoted to investigating such universal terms in the entanglement entropy of CFTs in arbitrary dimensions (see [9,10] for reviews). What will be important for us is the following general UV behavior of entanglement entropy in even D-dimensional CFTs,
S[A] = c D−2 Σ D−2 ε D−2 + · · · + c 0 log L A ε + . . . ,(1.3)
where Σ D−2 is the area of the (D−2)-dimensional entangling surface ∂A and L A is a characteristic length associated with ∂A. Only c 0 is universal in this expression. The other coefficients depend on the regularisation scheme used. One also finds a universal sub-leading term in odd-dimensional CFTs but in this case it is finite rather than logarithmic, see e.g. [10] and references therein. The coefficient of the universal term is a function of topological and geometric invariants, such as the Euler density and Weyl invariants constructed from the entangling (hyper)-surface [11,12]. This reflects the fact that the logarithmic term in S[A] is related to the conformal anomaly of the stress-energy tensor of the corresponding CFT.
In the present work we will study entanglement entropy, including universal finite terms (i.e. of order O(1) with respect to L A ), in a family of d+1-dimensional scale invariant quantum field theories introduced in [13]. The scale symmetry is a non-relativistic Lifshitz symmetry that acts asymmetrically on the time and spatial coordinates,
x → λ x, τ → λ z τ , (1.4)
with a dynamical critical exponent equal to the number of spatial dimensions z = d. These theories are closely related to the well known quantum Lifshitz model (QLM), first studied in the seminal work [14]. This is a scale invariant free field theory in 2 + 1-dimensional spacetime with a z = 2 dynamical critical exponent. It is an effective field theory for certain quantum dimer models (and their universality class) in square lattices at a critical point and involves a compactified free massless scalar field [14]. Non-trivial Lifshitz scaling is achieved via a kinetic term that is asymmetric between time and space (with higher derivatives acting in the spatial directions). The higher-derivative construction can easily be extended to free scalar field theories in any number of spatial dimensions d with z = d Lifshitz scaling. In [13] such theories were dubbed generalized quantum Lifshitz models (GQLMs) and several interesting symmetry properties were revealed in correlation functions of scaling operators. The periodic identification of the scalar field did not figure in that work but turns out be important when one considers entanglement entropy in a GQLM defined on a topologically nontrivial geometry.
A key property of the QLM and GQLM theories is that the ground state wave functional is invariant under conformal transformations involving only the spatial dimensions [13,14]. The spatial conformal symmetry is a rather special feature (the corresponding critical points are called conformal quantum critical points [14]) and it manifests in the scaling properties of entanglement entropy. In essence, the symmetry allows us to map a (d+1)-dimensional Lifshitz field theory with z = d to a d-dimensional Euclidean CFT. In the d = 2 QLM the CFT is the standard free boson CFT but for d > 2 GQLM's the spatial CFT is a higherderivative generalised free field theory. Such higher-derivative CFTs have been discussed in a number of contexts, for instance in relation to higher spin theories, e.g. [15,16], as models in elastic theory e.g. [17], in a high-energy physics setting in connection with the naturalness problem e.g. [18,19], and in the context of dS/CFT duality e.g. [20,21]. These theories are not unitary but their n-point correlation functions are well defined in Euclidean spacetime and being free field theories they have no interactions that trigger instability. Entanglement entropy and its scaling properties have been extensively studied in the QLM [1,[22][23][24][25][26][27]. 1 The replica method in the QLM was first developed in [22], where it was found that for a smooth entanglement boundary, the scaling behaviour of the entanglement entropy in the QLM (and more generally for conformal quantum critical points in (2+1)-dimensions) is of the form
S[A] = c 1 L A ε + c 6 ∆χ log L A ε + . . . ,(1.5)
where c 1 depends on the regulator and ∆χ is the change in the Euler characteristic (upon dividing the the system in two), which in turns depends on the topology of the system and on the entangling surface. The above behavior follows from general expectations for the free energy of a two-dimensional CFT with boundary [29]. This result is for the entanglement entropy of a non-relativistic (2+1)-dimensional theory, however its computation starts from a ground state which is a "time-independent" conformal invariant. As the time coordinate only appears as a spectator, the final result displays features of a two-dimensional CFT. This is in line with the results of [13,14], where it was shown that equal-time correlation functions of local scaling operators in the QLM and GQLM can be expressed in terms of correlation functions of a d-dimensional Euclidean CFT.
Furthermore, by choosing a smooth partition, such that we have no contribution from the logarithm (i.e. ∆χ = 0), a further sub-leading (of order one in L A ) universal term appears in the entanglement entropy for the QLM [1], that is
S EE = c 1 L A ε + γ QCP + . . . . (1.6)
The universal term γ QCP , where QCP stands for quantum critical point, depends both on the geometry and topology of the manifolds [23,[25][26][27]. It depends on the geometry in the sense that it includes a scaling function written in terms of aspect ratios typical of the given subsystem, and on the topology through a contribution from zero modes and non-trivial winding modes. In this sense, the entanglement entropy of the QLM is able to capture longrange (non-local) properties of the system. In particular, γ QCP was computed by various methods for a spatial manifold in the form of a cylinder in [24][25][26][27]30], for a sphere in [27], and the toroidal case was treated in [25] by means of a boundary field theory approach. The toroidal case was further investigated in [31], where analytic results were obtained for Rényi entropies. 2 Our aim is to extend the study of these finite universal terms in the entanglement entropy to generalised quantum Lifshitz models. In particular, we analyse their scaling properties in full generality, in any number of spatial dimensions d with Lifshitz exponent z = d. For technical reasons (which we explain below) we restrict attention to the case of even integer d. More concretely, we obtain universal terms in the entanglement entropy in GQLM on two classes of manifolds. On the one hand, we divide a d-dimensional sphere into two d-dimensional hemispheres, on the other hand we consider a d-dimensional flat torus, sliced into two d-dimensional cylinders. 3 Our computations are purely field theoretical. The theories we consider represent rare examples of non-relativistic critical theories, for which entanglement entropy can be obtained analytically. We view them as toy-models where we can explore quantum entanglement for different values of the dynamical critical exponent z. Further motivation comes from a puzzling aspect of Lifshitz holography, where one considers gravitational solutions that realise the Lifshitz scaling (1.4), see e.g. [34,35]. In AdS/CFT the entanglement entropy is computed by means of the Ryu-Takayanagi formula [36,37]. The usual working assumption is to apply the RT prescription also in Lifshitz holography, but in a static Lifshitz spacetime the holographic entanglement entropy does not depend on the critical exponent at all (see e.g. [38]). From a field theory point of view, however, one expects the higher-derivative terms 2 For related studies of the scaling properties of entanglement entropy for a toroidal manifold in scale invariant (2+1)-dimensional systems, see also [32,33]. 3 Here a d-dimensional cylinder refers to the product of an interval and a (d−1)-dimensional flat torus.
to dominate at short distances, and thus the UV behaviour of entanglement should reflect the value of the dynamical critical exponent. The absence of z from the holographic entanglement entropy is puzzling if Lifshitz spacetime is the gravitational dual of a strongly coupled field theory with Lifshitz symmetry. Similar considerations motivated the work in [39][40][41]. While we clearly see a dependence on z, it is difficult to compare our results directly to those of these authors as we are not working within the same class of field theories and we focus on universal sub-leading terms rather than the leading area terms.
The paper is arranged as follows. In Section 2 we briefly review the construction of generalized quantum Lifshitz models and extend their definition to two specific compact manifolds. These are higher-derivative field theories so we must ensure that the variational problem is well posed. This amounts to imposing z conditions on the variations (2.8), (2.15) and including a boundary action (2.10) for the d-torus or (2.16) for the d-sphere. In Section 3 we discuss the replica method, which we use to compute the entanglement entropy. In essence, this approach maps the problem of computing the nth-power of the reduced density matrix to a density matrix of an n times replicated field theory. The goal is to produce a result that can be analytically continued in n, in order to calculate the entanglement entropy according to (3.1). As we explain in Section 3, the replica method forces all the replicated fields to be equal at the cut, since the cut is not physical and our original field theory only has one field. There is an additional subtlety in implementing this condition due to the periodic identification of the scalar field in the GQLM and in order to ensure the correct counting of degrees of freedom we separate the replicated fields into classical and fluctuating parts. The fluctuating fields obey Dirichlet boundary conditions as well as the vanishing of the conformal Laplacian and its integer powers at the cut. Their contribution is encoded in partition functions computed via functional determinants defined on the sphere and torus respectively. The classical fields give rise to winding sectors that are encoded in the function W (n) described in Section 3. For the spherical case this contribution is simple and only amounts to a multiplicative factor √ n. For the toroidal case, the contribution from the classical fields is less trivial, and requires summing over classical vacua of the action. For higher-derivative theories some further conditions have to be implemented in the classical sector, and we argue that a compatible prescription is to use Neumann boundary conditions for these fields. At this point no freedom and/or redundancy is left, and it is straightforward to compute the sum over winding modes. We collect the contributions from the classical and fluctuating fields to the universal finite term of the entanglement entropy for a d-sphere and a d-torus in Sections 4 and 5 respectively. We conclude with some open questions in Section 6.
Most of the technical details are relegated to appendices. In Appendix A we review the computation of the functional determinant contribution for the spherical case, which was originally worked out in [42]. In Appendix B we develop an alternative expression for the formulae presented in Appendix A, which we find more transparent and better suitable for numerical evaluation. In Appendix C we compute the functional determinant contribution for the toroidal case. Finally we compute the winding sector contribution for the d-torus in Appendix D.
Generalised quantum Lifshitz models in (d+1)-dimensions
The 2+1-dimensional quantum Lifshitz model [14] can be generalised to d+1-dimensions [13]. Whenever the dynamical critical exponent z is equal to the number of spatial dimensions, the ground state wave-functional is invariant under d-dimensional conformal transformations acting in the spatial directions, extending the connection between the quantum Lifshitz model and a free conformal field theory in one less dimension to any d. We recall that in the 2+1dimensional case the scalar field is compactified [14], and below we will also compactify the scalar field in the GQLM at general d on a circle of radius R c , that is identify φ ∼ φ + 2πR c .
The ground state wave functional of the GQLM is [13]
ψ 0 = 1 √ Z [Dφ]e − 1 2 S[φ] |φ ,(2.1)
where |φ is an orthonormal basis of states in the Hilbert space made up of eigenstates of the field operator, and the partition function Z is given by
Z = [Dφ] e −S[φ] . (2.2)
We are interested in computing the sub-leading universal finite term of the entanglement entropy (1.1) in the ground state, i.e. with ρ = |ψ 0 ψ 0 |, when the manifold M is a d-sphere or a d-torus. The subsystem A will consist of field degrees of freedom on a submanifold of M.
For technical reasons we restrict attention to the case where d is an even (positive) integer.
We follow the normalisation convention of [27] and write the action as
S[φ] = S 0 [φ] + S ∂M [φ] = κ 4π M d d x √ G φ P z, M φ + S ∂M [φ] ,(2.3)
where G = det G ab (a, b = 1, . . . , d) is the determinant of the Euclidean metric on the manifold M, and P z, M is a proper conformal differential operator of degree z in d dimensions. The specific form of P z, M depends on M, as we will discuss at the end of this section. In order to have a well-defined variational problem, the action has to include a suitable boundary term S ∂M whose specific form is also given below. Note that the scalar field φ has dimension zero under Lifshitz scaling in the GQLM at general d. We find it convenient to use the shorthand g = κ 4π . 4 We note that for a flat manifold, the compactification of the field implies a global shift symmetry compatible with conformal symmetry [43,44]. This is also true for the z = d theory on the d-sphere (and more generally on any Einstein manifold) provided the action includes appropriate terms that generalise the notion of conformal coupling to a higher-derivative setting.
Let us consider how the action in (2.3) is constructed concretely for the two cases, mentioned above. To keep the discussion somewhat general, we assume that both z and d are even 4 The normalisation of the action in (2.3) and the compactification radius Rc are not independent. A rescaling of the scalar field will affect both g and Rc, while physical quantities that are independent of rescaling are expressed in terms of 2πRc √ g [43,44].
positive integers and do not insist on z = d for the time being. The case when d is an odd integer is also interesting but raises a number of technical issues that we do not address in this work. The boundary terms in the action will be important once we divide the system into subsystems and apply the replica method (cf. Section 3).
d-torus. In section 5, we consider a torus obtained as the quotient space of R d and a ddimensional lattice. The manifold is flat, and in this case the operator appearing in the action
S 0 [φ] in (2.
3) is simply the z/2 power of the Laplace-Beltrami operator,
P z,T d = (−1) z/2+1 ∆ z/2 = (−1) z/2+1 (−∂ a ∂ a ) z/2 , (2.4) that is S 0 [φ] = (−1) z/2+1 g M d d x φ∆ z/2 φ . (2.5)
For d = z = 2 this reduces to S = g (∇φ) 2 as in [14] (after integrating by parts). Varying
S 0 [φ] we obtain δS 0 [φ] = 2(−1) z/2+1 g M d d x (∆ z/2 φ)δφ + (−1) z/2+1 g ∂M d d−1 x z−1 k=0 (−1) k (∂ k n φ)(∂ z−1−k n δφ) (2.6)
where the partial derivatives should be understood as follows
∂ 2 n = (∂ a ∂ a ) , ∂ 2 +1 n = n a ∂ a (∂ b ∂ b ) ,(2.7)
with n a an oriented unit vector normal to the boundary. We need to choose appropriate boundary conditions for the variations. One possibility is to demand that
∂ 2 n δφ ∂M = 0 , = 0, . . . , z 2 − 1 . (2.8)
The reason behind this choice is that we will be interested in the eigenvalue problem of ∆ z/2 , for which we require the operator to be self-adjoint and to have a complete set of consistent boundary conditions. The replica method forces us to choose Dirichlet conditions on the field at the boundary (cf. Section 3) and the remaining conditions are chosen to be consistent with the self-adjointness of the operator. Equipped with (2.8), the variation of the Lagrangian reduces to
δS 0 [φ] = 2(−1) z/2+1 g M d d x (∆ z/2 φ)δφ + (−1) z/2+1 g ∂M d d−1 x z 2 −1 =0 (∂ 2 n φ)(∂ z−1−2 n δφ) .
(2.9)
Hence, defining the following boundary action
S ∂ [φ] = (−1) z/2+1 g ∂M d d−1 x z 2 −1 k=0 (∂ 2k n φ)(∂ z−2k−1 n φ),(2.10)
with variation given by
δS ∂ [φ] = (−1) z/2+1 g ∂M d d−1 x z 2 −1 k=0 (∂ 2k n φ)(∂ z−2k−1 n δφ), (2.11)
clearly gives a well-defined variation for the total action
S[φ] = S 0 [φ] + S ∂ [φ]
and leads to the following equations of motion ∆ z/2 φ = 0, and ∂ 2k n δφ ∂M = 0, for k = 0, . . . ,
z 2 − 1 . (2.12) d-sphere.
When the manifold M is a unit d-sphere, the operator in (2.3) is the so-called GJMS operator on a d-sphere [45]. In essence, GJMS operators generalise the conformal Laplacian to higher derivatives and d-dimensional curved manifolds [45] (see [46][47][48][49][50][51][52][53][54] for more references on the subject). This means that a GJMS operator of degree 2k in d-dimensions (where k is a positive integer) is constructed so that it transforms in a simple way under a Weyl transformation of the metric, G ab → e 2ω G ab ,
P 2k (e 2ω G) = e −(d/2+k)ω P 2k (G) e (d/2−k)ω . (2.13)
In general, the operator P 2k is well defined for k = 1, . . . d/2 for even d, in the sense that it reduces to the standard Laplacian of degree k in flat space [47]. For odd d-dimensional manifolds operators satisfying (2.13) exist for all k ≥ 1 [47,52,53].
On a unit d-sphere the GJMS operator of degree 2k explicitly reads as
P 2k,S d = k j=1 ∆ S d + d 2 − j d 2 + j − 1 , (2.14) where ∆ S d = − 1 √ G ∂ a √ G G ab ∂ b , with a, b = 1, . . . d,
is the Laplace-Beltrami operator. The case of most interest to us is to consider a GJMS operator of degree 2k = d. This is known in the literature as the critical case, while k < d/2 is referred to as the subcritical case. It is straightforward to check, using (2.13), that the final action S 0 (2.3) is invariant under Weyl transformations. The factorisation in (2.14) is a general characteristic of Einstein manifolds. Since the eigenfunctions of the Laplace-Beltrami operator on a compact Riemannian manifold form an orthonormal basis, one can easily obtain the spectrum of the GJMS operator on the sphere from the factorisation above. This will play an important role later in the computation of partition functions in Section 4.
When S d is divided into hemispheres, the action S 0 [φ] has to be complemented by boundary terms. As before, in order to have a well-defined variational problem, we compute the variation of the action δS 0 , impose z boundary conditions on δφ and its derivatives, and then cancel any remaining terms against appropriate boundary terms. We impose the following boundary conditions
δφ ∂M = 0 , ∆ k δφ ∂M = 0 k = 1, . . . z 2 − 1 ,(2.15)
that is Dirichlet boundary conditions on the variation δφ and vanishing of its Laplacian and its powers at the boundary. This is analogous to (2.8) for a curved manifold. The explicit
expression for S ∂ [φ] is S ∂ [φ] = g d/2 =1 M d d x ∂ a √ GG ab d/2− j=1 ∆ + d 2 − j d 2 + j − 1 φ (2.16) ×∂ b d/2 k=d/2− +2 ∆ + d 2 − k d 2 + k − 1 φ ,
where the products in the above expression are taken to be empty when the upper extreme is less than the lower extreme. Finally, the classical equation of motion is
P d,S d φ = d/2 j=1 ∆ S d + d 2 − j d 2 + j − 1 φ = 0 . (2.17)
Replica method
The entanglement entropy of subsystem A can be defined as
S[A] = − lim n→1 ∂ n Tr(ρ n A ) ,(3.1)
where an analytic continuation of the index n is assumed. This definition is equivalent to the von Neumann entropy of ρ A (1.1). Following [8,55], we will use the replica approach to evaluate (3.1). At the heart of this method is to view each appearance of the density matrix ρ in Tr ρ n A as coming from an independent copy of the original theory, so that one ends up working with n replicated scalar fields. The process of taking partial traces and multiplying the replicas of ρ then induces a specific set of boundary conditions at the entanglement cuts on the replica fields.
In this section we adapt the replica trick to generalised quantum Lifshitz theories. For the QLM the replica method was reviewed in [26,27]. Our starting point is the ground state density matrix ρ = ψ 0 ψ 0 , with |ψ 0 as in (2.1). Now divide the manifold into two regions A and B and assume that the Hilbert space splits as H = H A ⊗ H B . This allows us to write the density matrix as . We then construct ρ n A by the gluing procedure represented in Figure 1. Each copy of ρ is represented by a path integral as in (3.2) with fields labelled by a replica index i = 1, . . . , n. The partial trace over field degrees of freedom with Figure 1: The gluing procedure due to the replica trick. Gluing due to the partial trace over B is represented in red, due to multiplication of the reduced density matrices ρ A in blue, and due to the final total trace in yellow. support in B gives the following reduced density matrix for the i-th replica,
ρ = 1 Z [Dφ A ][Dφ B ][Dφ A ][Dφ B ]e − 1 2 S[φ A ]− 1 2 S[φ B ]− 1 2 S[φ A ]− 1 2 S[φ B ] |φ A ⊗ |φ B φ A | ⊗ φ B |,ρ A = Tr B (ρ) = 1 Z [Dφ A i ][Dφ A i ][Dφ B i ]e − 1 2 S[φ A i ]− 1 2 S[φ A i ]−S[φ B i ] |φ A i φ A i |. (3.3)
Multiplying together two adjacent copies of the reduced density matrix gives
ρ 2 A = 1 Z 2 [Dφ A i ][Dφ A i+1 ][Dφ A i+1 ][Dφ B i ][Dφ B i+1 ]e − 1 2 S[φ A i ]− 1 2 S[φ A i+1 ]−S[φ A i+1 ]−S[φ B i ]−S[φ B i+1 ] |φ A i φ A i+1 |. (3.4)
The δ-function coming from φ A i |φ A i+1 forces the identification φ A i = φ A i+1 , effectively gluing together the primed field from replica i and the unprimed field from replica i + 1, as indicated in Figure 1. It follows that multiplying n copies of the reduced density matrix gives rise to pairwise gluing conditions φ A i = φ A i+1 for i = 1, . . . , n − 1, and when we take the trace of the complete expression we get a gluing condition between the first and last replicas, φ A 1 = φ A n . Combining this with the gluing condition φ B i = φ B i for i = 1, . . . , n from the partial trace in (3.3), and the boundary condition
φ A i Γ = φ B i Γ ,
where Γ denotes the entangling surface separating A and B, we see that all the replica fields are forced to agree on Γ. The final result for ρ n A is then
Tr A (ρ n A ) = 1 Z n A∪B bc n i=1 [Dφ A i ]e − n i=1 S[φ A i ] bc n i=1 [Dφ B i ] e − n i=1 S[φ B i ] (3.5) with bc : φ A i Γ (x) = φ B j Γ (x) ≡ cut(x) , i, j = 1, . . . , n ,(3.6)
where cut(x) is some function of the boundary coordinates. We write the denominator in (3.5) as Z n A∪B to emphasise that it contains n copies of the partition function of the original system before any subdivision into fields on A and B. In the numerator, however, the field configurations of the different replicas are integrated over independently, except that the replicated fields are subject to the boundary conditions (3.6) (up to the periodic identification φ ∼ φ + 2πR c ).
In order to take the periodic identification into account when applying boundary conditions, we separate each replicated field into a classical mode and a fluctuation, following a long tradition, see e.g. [43,44],
φ A(B) i = φ A(B) i,cl + ϕ A(B) i , i = 1, . . . , n . (3.7)
The modes φ i,cl satisfy the following classical equations of motion and boundary conditions,
P z φ A(B) i,cl (x) = 0 , φ A(B) i,cl (x) Γ = cut(x) + 2πR c w A(B) i , i = 1, . . . , n , (3.8) where w A(B) i
are integers indicating the winding sector. The classical field determines the total field value at the entanglement cut, while the fluctuating field ϕ satisfies Dirichlet boundary conditions,
ϕ A(B) i (x) Γ = 0 i = 1, . . . , n . (3.9)
In two dimensions this condition, along with the equation of motion of the classical fields, ensures that the action factorises [27], 5
S[φ A(B) i ] = S[φ A(B) i,cl ] + S[ϕ A(B) i ] , i = 1, . . . , n . (3.10)
The decomposition of the action is less trivial in higher dimensions but it can be achieved if the Dirichlet boundary condition on the fluctuating field at the entanglement cut is augmented by further conditions. It is straightforward to check that imposing (3.9) along with (2.8)-(2.15) on the fluctuating fields at the cut leads to a well-posed variational problem as well as self-adjointness of the operator P z,M on M. As was discussed earlier, this combination of conditions amounts to the vanishing of the Laplace operator and its integer powers acting on the fluctuating fields at the boundary. This turns out to be enough to ensure that the total action splits according to (3.10) (once again the equations of motion for the classical fields have to be used to achieve factorization). We note, that with this prescription and using the classical equations of motion, the boundary terms in the action can be written in a form that only depends on the classical part of the field,
S ∂ [φ A(B) i ] = S ∂ [φ A(B) i,cl ] . (3.11)
In the presence of winding modes, there remains some redundancy in the classical part of the action, as further discussed in Appendix D where we compute the contribution from the classical winding sector for the d-torus.
As a consequence of (3.10), the fluctuating modes ϕ i simply contribute as n independent fields obeying Dirichlet boundary conditions (3.9) at the entanglement cut. For the classical modes, on the other hand, we can solve for the A and B sectors simultaneously, as the boundary value problem (3.8) has a unique solution in A ∪ B Γ, up to winding numbers. At the entanglement cut only relative winding numbers matter and we can choose to write the boundary conditions for the classical fields as [26]
φ i,cl Γ (x) = cut(x) + 2πR c w i , i = 1, . . . , n − 1 , φ n,cl Γ (x) = cut(x) . (3.12)
Thus, the trace of the n-th power of the reduced density matrix reads
Tr A (ρ n A ) = 1 Z n A∪B n i=1 D [Dϕ A i ]e −S[ϕ A i ] n i=1 D [Dϕ B i ] e −S[ϕ B i ] W (n) ,(3.13)
where W (n) is the contribution coming from summing over all classical field configurations satisfying the boundary conditions (3.12). The subscript D on the integral sign is a reminder that the the fluctuating fields obey Dirichlet boundary conditions.
At this point we need to distinguish the spherical case from the toroidal one. We start by analysing the problem on the d-sphere, which turns out to be particularly simple.
d-sphere. We closely follow the treatment of the two-dimensional case in [27]. The crucial observation here is that the winding mode can be reabsorbed by the global shift symmetry of the action,
S[φ i ] = S[φ i + φ 0
i ] with constant φ 0 i , as mentioned in Section 2. Indeed, since the fields satisfying the classical equation of motion include any constant part of the total field, we can use the symmetry to rewrite their boundary conditions as
φ cl i|Γ (x) = cut(x) + 2π R c ω i + φ 0 i . (3.14)
We then choose φ 0 i = −2π R c ω i to cancel out all winding numbers. The boundary conditions then become φ cl i|Γ (x) = cut(x) , i = 1, . . . , n , (3.15) and the sum over classical configurations can be written as
W (n) = φ i,cl e − i S[φ i,cl ] = φ n,cl e −n S[φ n,cl ] = φ n,cl e −S[ √ n φ n,cl ] . (3.16)
Consequently, we have for the d-sphere
Tr A (ρ n A ) = 1 Z n A∪B n i=1 D [Dϕ A i ]e −S[ϕ A i ] n i=1 D [Dϕ B i ] e −S[ϕ B i ] φ n,cl e −S[ √ n φ n,cl ] . (3.17)
We can now combine the n-th fluctuating fields with support on A and B and the n-th classical field to define
Φ n = ϕ n + √ n φ n,cl , (3.18) with ϕ n = ϕ A(B) n in A(B).
Notice that the effective compactification radius of Φ n is now √ nR c [26,27]. The path integral over ϕ A n and ϕ B n along with the contribution W (n) from the rescaled classical field amounts to the partition function on the whole d-sphere for the combined field Φ n , which is equal to the partition function of the original field up to a factor of √ n due to the different compactification radius, and it therefore almost exactly cancels one power of the original partition function in the denominator in (3.17),
Tr A (ρ n A ) = 1 Z n A∪B n−1 i=1 D [Dϕ A i ]e −S[ϕ A i ] n−1 i=1 D [Dϕ B i ] e −S[ϕ B i ] √ n Z A∪B (3.19) = √ n Z D,A Z D,B Z A∪B n−1 , where Z D,A(B) ≡ D [Dϕ A(B) i ]e −S[ϕ A(B) i ] denotes the Dirichlet partition function on A(B).
Hence, the entanglement entropy is given by 20) and the original problem has been reduced to the computation of partition functions with appropriate boundary conditions on the regions A and B and A ∪ B.
S EE = − lim n→1 ∂ n Tr A (ρ n A ) = − log Z D,A Z D,B Z A∪B − 1 2 ,(3.
We will consider the case where the d-sphere is divided into two hemispheres. Then we only have to compute the partition function on the full sphere and a Dirichlet partition function on a hemisphere. These are in turn given by determinants of the appropriate GJMS operators. The detailed computation is described in Section 4 and Appendices A and B. d-torus. We now apply the replica method in the case of a d-torus. We cut the torus into two parts, thus introducing two boundaries which are given by two disjoint periodically identified (d − 1)-intervals (in d = 2 this is simply an S 1 ). As explained before, all the fields have to agree at the cuts Γ a (where now the index a = 1, 2 labels each cut). For the quantum fields this simply implies that they need to satisfy Dirichlet boundary conditions, that is
ϕ A(B) i Γa = 0 , i = 1, . . . , n, a = 1, 2 . (3.21)
As explained earlier, further conditions are necessary in dimensions d > 2, and we demand that the conditions (2.8) hold at the cut for the fluctuating fields.
Now consider the classical fields on the torus. We can use the global shift symmetry discussed in Section 2 to write the boundary conditions for the classical fields as
φ i,cl Γa (x) = cut a (x) + 2πR c ω a i + φ 0 i (3.22)
with φ 0 i constant on the whole torus. As in the spherical case, we can choose the φ 0 i so that they absorb the winding numbers from one of the cuts,
φ i,cl Γ 1 (x) = cut 1 (x) , i = 1, . . . , n , (3.23) φ i,cl Γ 2 (x) = cut 2 (x) + 2πR c ω i , i = 1, . . . , n ,(3.24)
where ω i : = ω 2 i − ω 1 i . We are effectively left to deal with winding sectors at a single entanglement cut and since only the relative winding number between adjacent replicas matters we can eliminate one more winding number to obtain
φ i,cl Γ 1 (x) = cut 1 (x) , i = 1, . . . , n (3.25) φ i,cl Γ 2 (x) = cut 2 (x) + 2πR c ω i , i = 1, . . . , n − 1, φ n,cl Γ 2 (x) = cut 2 (x) . (3.26)
At this point, we can use the same unitary rotation U n as in [26,27], to bring the classical fields φ i,cl i = 1, . . . , n into a canonical form constructed to separate the contribution from the winding modes from the contribution from modes subject to boundary conditions given by the functions cut 1,2 (x). Concretely, we define the matrices
U n = 1 √ 2 − 1 √ 2 0 . . . 1 √ 6 − 1 √ 6 − 2 √ 6 0 . . . . . . 1 √ n(n−1) 1 √ n(n−1) . . . . . . − 1 − 1 n 1 √ n 1 √ n . . . . . . 1 √ n ,(3.27)
and
M n−1 = diag 1, . . . , 1, 1 n U n−1 , (3.28) so that we havē φ cl i | Γ 1 (x) = 0 i = 1, . . . , n − 1 ,φ cl n | Γ 1 (x) = √ n cut 1 (x) , (3.29) φ cl i | Γ 2 (x) = 2πR c (M n−1 ) ij ω j , i = 1, . . . , n − 1 ,φ cl n | Γ 2 (x) = √ n cut 2 (x) + 2π R c √ n n−1 i=1 ω i .
Hence, the sum over all the classical configurations reduces to a sum over the vector w = (ω 1 , . . . , ω n−1 ) ∈ Z n−1 and an integral over the n-th classical mode. Notice that, as for the spherical case, the n-th classical modeφ cl n has a compactification radius amplified by √ n, due to the rotation (3.27). We want to use this mode to reconstruct a full partition function on the torus, that is define
Φ n = ϕ n +φ cl n , (3.30) so that D [dϕ A n ]e −S[ϕ A n ] D [dϕ B n ]e −S[ϕ B n ] [dφ cl n ]e −S[φ cl n ] = √ n Z A∪B ,(3.31)
where the √ n factor on the right-hand-side of (3.31) accounts for the different compactification radius. Thus, the replica method finally gives
Tr A (ρ n A ) = √ n Z D,A Z D,B Z free,A∪B n−1 W (n) ,(3.32)
where W (n) contains the contributions from the first n − 1 classical configurations satisfying the boundary conditions in (3.29) at Γ 1(2) .
In two dimensions the classical fields are uniquely determined by the equations of motion and the boundary conditions (3.29), and thus the classical action has only one vacuum. The contribution from the winding sector is then simply given by the sum over the corresponding winding modes [26,27],
W (n) = w∈Z n−1 e − n−1 i=1 S[φ cl i ] .
(3.33)
However, in higher dimensions (d > 2) the conditions (3.29) do not uniquely specify the vacua of the classical action. In other words, our construction is consistent for more than one set of boundary conditions applied on derivatives of the classical fields and the value of the boundary action depends on the boundary conditions. This is the redundancy mentioned in Section 2.
The classical field satisfies a higher-derivative equation of motion, whose general solution is parametrised by z/2 constants. The boundary condition imposed on Φ n will fix one of these constants but we need to add z/2 − 1 further boundary conditions for the classical field to fix the rest. The value of the boundary terms in the action will in general depend on the choice of boundary conditions.
In the present work we impose a generalised form of Neumann boundary conditions on derivatives of the classical fields,
∂ n ∆ kφcl i ∂r = 0 k = 0, . . . , z 2 − 2. (3.34)
This prescription is compatible with the conditions imposed on the fluctuations, 35) and, at the same time, it gives a non-vanishing classical boundary action, which is important in order for the sum over winding modes to converge. The contribution from winding modes is then accounted for in any number dimentions by computing
∆ k φ i ∂M = ∆ k (φ cl i + ϕ i ) ∂M = ∆ kφcl i ∂M , for k = 1, . . . , z 2 − 1 ,(3.W (n) = w∈Z n−1 e − n−1 i=1 S[φ cl i ] ,(3.36)
where the classical fieldsφ cl i satisfy the boundary conditions (3.29) and the Neumann conditions in (3.34). The details of the computation are reported in Appendix D.
Finally, the entanglement entropy for the d-torus is given by
S EE = − log Z D,A Z D,B Z A∪B − 1 2 − W (1) ,(3.37)
since the winding sector is normalised such that W (1) = 1. The computation of the partition functions for the d-torus is presented in Section 5 below.
We close this section by noting that even though winding numbers come into play across entanglement cuts in our computation, we are restricting our attention to a single topological sector of the original theory on the d-torus. Indeed, since we periodically identify the field, we could consider winding sectors on the d-torus itself,
φ(x 1 + L 1 , . . . , x d + L d ) = φ(x 1 , . . . , x d ) + 2πL I m I , I = 1, . . . , d , m I ∈ Z . (3.38)
We have set m I = 0 in our calculations in the present paper but a more general study can be carried out, evaluating the contribution from winding sectors W (n, m L ) associated with an entanglement cut for each topological configuration, and then summing over the m I . The corresponding topological contributions to entanglement entropy in a scalar field theory on a two-dimensional cylinder, are obtained in [26].
Entanglement entropy on a hemisphere
In this section we calculate the universal finite terms of entanglement entropy in GQLM resulting from the division of a d-sphere into two d-hemispheres A and B by an entanglement cut at the equator as shown in Figure 2. According to the replica calculation in Section 3, we have to compute (3.20), where now A and B are the two d-hemispheres, and the bulk action contains the GJMS operator (2.14) with 2k = d. The partition function on the whole manifold (Z A∪B in (3.20)) contains a zero mode, which should be treated separately [43,44],
Z A∪B = 2πR c g π A d det g π P d,S d − 1 2 (4.1)
where A d is the area of the d-sphere and the g π A d factor comes from the normalisation of the eigenfunction corresponding to the zero eigenvalue. The functional determinant det is the (regularised) product of the non-zero eigenvalues. The operator (2.14) on the unit d-hemisphere with Dirichlet boundary conditions does not have a zero eigenvalue, so the partition functions on the subsystems A and B can be directly computed via regularised functional determinants. Hence we can write (3.20) as
S[A] = 1 2 log det P d,H d D det P d,H d D det P d,S d + log 4πgA d R c − 1 2 , (4.2)
where the D subscript on H d D indicates Dirichlet boundary conditions on the fields at the boundary of the d-hemisphere H d . At the end of the day, the entanglement entropy can only depend on the combination gA d . All factors of g/π inside functional determinants must therefore cancel out in the final result and going forward we simply leave them out of our formulas.
We now turn to the explicit computation of the functional determinants appearing in (4.2). In a series of papers [42,[56][57][58], Dowker calculates determinants of GJMS operators on spheres in any even dimension d and for any degree k ≤ d/2 via ζ-function methods. We give a selfcontained review of these calculations in Appendix A, partly to adapt them to our notation and partly to have all the results we want to use in one place. Determinants of critical GJMS operators (where the degree 2k of the operator matches d) on spheres and hemispheres are expressed in terms of multiple Γ-functions in [42]. A simplified version of these results, expressing them in terms of the more familiar Riemann ζ-function, is presented in Appendix B.
The starting point of Dowker's computation is the observation that the determinant of the GJMS operator on a d-sphere, given in terms of the spectral ζ-function, can be obtained as a sum of the corresponding determinant on a d-hemisphere with Dirichlet and Neumann boundary conditions [42,59] (again expressed in terms of spectral ζ-functions). On the hemisphere with Dirichlet boundary conditions the log-determinant of the GJMS operator is given by
log det P d,H d D = −Z d (0, a D , d/2) = − d n=0 h D n (d)ζ (−n) − f D (d), (4.3) where Z d (s, a D , d/2) is the spectral ζ-function corresponding to the GJMS operator of degree 2k = d, cf. (A.1) and (A.2).
Here ζ is the Riemann ζ-function, and h D n , f D are given by
h D n (d) = − 1 (d − 1)! d n + 1 + 1 d! d + 1 n + 1 − d−n j=0 (−1) j (d − j)! d + 1 j d − j + 1 n + 1 , (4.4) f D (d) = − 1 d! d−1 l=1 log(l)(l − d) d+1 + M (d). (4.5)
The d k are Stirling numbers of the first kind, (z) k is a Pochhammer symbol, and M (d) is a sum of harmonic numbers and generalized Bernoulli polynomials whose explicit form is not important to us, as it cancels in the final expression for the entanglement entropy. The derivations of h D n and f D can be found in Appendix B.2, while the derivation of M (d) can be found in A.3, its explicit form is given in equation (A.58). These functions may seem quite complicated at first sight, but they all consist of well understood algebraic functions that can easily be evaluated using a computer. For the determinant of a critical GJMS operator on a hemisphere with Neumann boundary conditions we find a similar result
log det P d,H d N = −Z d (0, a N , d/2) = − d n=0 h N n (d)ζ (−n) − f N (d), (4.6)
with h N n and f N given by
h N n (d) = 1 d! d + 1 n + 1 − d−n j=0 (−1) j (d − j)! d j d − j + 1 n + 1 , (4.7) f N (d) = log(d − 1)! + f D (d). (4.8)
We note that our result in (4.6) differs from [42] by a sign in the term log(d − 1)!. This is because we treat the zero mode separately as is apparent in (4.1) and (4.2).
As mentioned above, the log-determinant on the whole sphere is the sum of the logdeterminants on the hemisphere with Dirichlet and Neumann boundary conditions [42],
log det P d,S d = log det P d,H d N + log det P d,H d D .
(4.9)
With an eye towards the entropy formula (4.2), we express the ratio of determinants as
2 log det P d,H d D − log det P d,S d = − d n=0 h D n (d) − h N n (d) ζ (−n) − f D (d) + f N (d) = d n=0 h n (d)ζ (−n) + log(d − 1)! , (4.10)
where, using the properties of the binomial coefficients, one can write h n in the the following form
h n (d) = 1 (d − 1)! d n + 1 + d−n j=0 (−1) j (d − j)! d j − 1 d − j + 1 n + 1 . (4.11)
Putting everything together, we obtain a surprisingly simple expression for the entanglement entropy of a hemisphere,
S[H d ] = 1 2 d n=0 h n (d)ζ (−n) + log 4πgA d (d − 1)!R c − 1 2 , (4.12)
with h n (d) given above in (4.11). For dimensions d = 2, 4, 6, and 8, in the critical case z = d, the entropy is given explicitly by
d = z = 2 : S EE = log 8πgR c − 1 2 (4.13) d = z = 4 : S EE = log 4 2gπR c − 1 2 − ζ(3) 4π 2 (4.14) d = z = 6 : S EE = log 16 π 3 gR c − 1 2 − 15 ζ(3) 32π 2 + 3 ζ(5) 32π 4 (4.15) d = z = 8 : S EE = log 32 3gπ 2 R c − 1 2 − 469 ζ(3) 720π 2 + 7 ζ(5) 24π 4 − ζ(7) 32π 6 ,(4.16)
and more values are plotted in Fig. 3. The two-dimensional case agrees with the result presented in [27]. Notice that the logarithmic term depends on the product R c √ g, which is independent of rescaling of the fields. Hence, in the case of a d-sphere cut into two dhemispheres, the finite universal terms of the entanglement entropy (4.12) are constant, they only depend on the physical compactification radius R c √ g of the target space, which appears in the above expression through the zero modes.
Explicit expressions for the subcritical case, when z < d for z and d both even integers, make apparent the relative simplicity of the results (4.12). Indeed, in sub-critical cases several additional terms involving derivatives of Riemann zeta functions appear. Tremendous simplification occurs when z = d to produce the neat expressions in (4.13). The specific expressions Figure 3: The universal finite term (4.12) in the entanglement entropy of GQLM on a hemisphere plotted against the number of spatial dimension d (which is equal to the critical exponent z). We normalise S[H d ] with respect to the two-dimensional case, and set g = R c = 1.
d z S (H d ) S (H 2 )
for the functional determinants in subcritical cases were computed originally in [42], and are included in Appendix A.
The result in (4.12) only depends on "topological data" represented by the scale invariant compactification radius of the target space and not on other geometric features. One might object that this is because we initially set the radius of the d-sphere to one, and thus our computations are insensitive to the geometry. Indeed, as mentioned in the Introduction, for smooth entangling cuts in even-dimensional CFTs, the entanglement entropy is expected to have a universal term proportional to the logarithm of a characteristic scale of the system with a constant of proportionality which depends on the central charge and on the Euler characteristic. It can be checked that introducing a radius R of the d-sphere in our problem modifies the above results by adding a term proportional to ∆χ log R , (4.17) where ∆χ is the change in the Euler characteristic due to dividing the d-sphere along the entanglement cut. For the two-dimensional case this was understood in [1]. Just as for a two-dimensional sphere, the change in the Euler characteristic vanishes for the chosen entanglement cut (while having a non-smooth entangling surface can introduce further universal logarithmic terms). Indeed, on a non-unit sphere all eigenvalues entering our determinants are rescaled, and upon regularising this contributes,
log det P 2k,H d = −d Z d (0, a D , k) log R − Z d (0, a D , k) , (4.18) log det P 2k,S d = −d (Z d (0, a D , k) + Z d (0, a N , k)) log R − Z d (0, a D , k) − Z d (0, a N , k) ,
instead of equations (A.2), (A.3). Including the contribution coming from the normalisation of the zero-mode this would leave us with
d 2 1 + Z d (0, a N , k) − Z d (0, a D , k) log R , (4.19)
but it is straightforward to check, using (A.45) and (A.54), that this combination vanishes.
In fact, Dowker's construction of the determinant for the sphere as sum of determinants on hemispheres with Dirichlet and Neumann boundary conditions makes this quite transparent, since the spectral ζ-function in the Neumann case is nothing but the Dirichlet one after subtracting the zero mode. 6 Finally, we should stress that the sub-leading universal terms as (4.17) (which vanish here due to the chosen entanglement surface) are those expected in a ddimensional CFT. The quantum field theory we are considering lives on a (d+1)-dimensional manifold, and yet due to the enhanced d-dimensional symmetries in the critical d = z case, it has entanglement properties typical of d-dimensional CFTs.
Entanglement entropy on cut d-torus
We now turn our attention to the sub-leading universal terms in the entanglement entropy on a flat d-dimensional torus with circumferences L 1 , . . . , L d ,
T d L 1 ,...,L d : = R d /(L 1 Z × . . . × L d Z), (5.1) that is cut into two d-cylinders: Y B := [−L B , 0] × T d−1 L 2 ,...,L d and Y A := [0, L A ] × T d−1 L 2 ,...,L d , where our conventions are L B > 0 and L 1 = L A + L B .
The two-dimensional case is shown in Figure 4. The replica method for the entanglement entropy on the torus was discussed in Section 3, and it requires us to compute (3.37), where the winding sector contribution is given by (3.36), with the classical fields satisfying the equations of motion and boundary conditions expressed in (3.29) and (3.34). For the d-torus, the bulk and boundary terms in the action are given by (2.5) and (2.10), respectively. The operator P d,T d in (2.4) is simply an integer power of the Laplacian. We first compute the quantum contribution to the entanglement entropy arising from the partition functions in (3.37), and after that we tackle the winding sector contribution. All the detailed calculations of functional determinants are relegated to Appendix C, and those regarding the winding sector to Appendix D. In this section we collect the results and discuss some interesting limits.
The operator P d,T d has a zero mode and as result the torus partition function is given by
Z A∪B = 2πR c g π A d det g π P d,T d − 1 2 , (5.2)
where A d is the area of the d-torus. As was the case for the sphere, the g π factor in the determinant only amounts to a rescaling of the torus to which the entanglement entropy is not sensitive, and we can ignore it in our calculations. On the d-cylinder with Dirichlet boundary conditions, on the other hand, there is no zero mode and we can write the (sub-leading terms of) entanglement entropy as
S[A] = 1 2 log det P d,Y A,D det P d,Y B,D det P d,T d + log 4πg A d R c − 1 2 − W (1) . (5.3)
The required functional determinants are evaluated in Appendix C.
By means of equations (C.31a) and (C.6), we find that the determinant on the full torus is given by
log det ∆ d/2 T d L 1 ,...,L d = − d 2 ζ T d L 1 ,...,L d (0) = d log(L 1 ) + d 2 L 1 ζ T d−1 L 2 ,...,L d (−1/2) − d 2 G (0; L 1 , . . . , L d ) ,(5.2 3/2−s L s+1/2 1 Γ(s) √ π n d−1 ∈Z d−1 ∞ n 1 =1 n 1 n T d−1 Ξ d−1 n d−1 s−1/2 K s−1/2 L 1 n 1 n T d−1 Ξ d−1 n d−1 ,
where the primed sum indicates the omission of the zero mode, K ν (z) is a modified Bessel function of the second kind and Ξ d−1 = diag 2π/L 2 2 , . . . , 2π/L d 2 is a diagonal matrix.
We have explicit expressions both for the spectral ζ-function on the torus in (C.13) and its derivative evaluated at s = 0 in (C.18), but at this stage we find it more convenient to use the above expression, and only insert explicit formulae at the end, after some cancellations.
+ d 2 Lζ T d−1 L 2 ,...,L d (−1/2) − d 4 G (0; 2L, . . . , L d ),(0)
where L = L A for Y A and L = L 1 − L A for Y B . We can rewrite the difference between the log-determinants as
log det ∆ d/2 [0,L 1 −L A ]×T d−1 L 2 ,...,L d + log det ∆ d/2 [0,L A ]×T d−1 L 2 ,...,L d − log det ∆ d/2 T d L 1 ,...,L d = d 2 log 4u(1 − u) + d 2 ζ T d−1 L 2 ,...,L d (0) − d 4 G (0; 2L A , . . . , L d )+ − d 4 G (0; 2(L 1 − L A ), . . . , L d ) + d 2 G (0; L 1 , . . . , L d ),(5.7)
where the parameter u = L A /L 1 characterises the relative size of the two d-cylinders.
The explicit expression for ζ and a relabelling of the sides L i . As discussed in Appendix C, despite its appearance the above expression is rather convenient to handle, thanks to the fast convergence of the modified Bessel functions contained in the auxiliary function G. The derivative of the function G with respect to s, evaluated at s = 0, is given by
G (0, L, L 2 , . . . , L d ) = 8L π n d−1 ∈Z d−1 ∞ n 1 =1 ( n T d−1 Ξ d−1 n d−1 ) 1/4 √ n 1 K −1/2 L n 1 n T d−1 Ξ d−1 n d−1 = 2 n d−1 ∈Z d−1 ∞ n 1 =1 exp −L n 1 n T d−1 Ξ d−1 n d−1 n 1 ,(5.8)
where we have used the explicit expression (E.4) for the modified Bessel function K − 1 2 .
As an explicit example of the above result, the determinant ratio for z = d = 2 is explicitly given by
log det ∆ [0,L 1 −L A ]×S 1 L 2 + log det ∆ [0,L A ]×S 2 L 2 − log det ∆ T 2 L 1 ,L 2 = log 2u|τ 1 |η 2 (2uτ 1 ) + log 2(1 − u)|τ 1 |η 2 (2(1 − u)τ 1 ) − log L 2 1 η 4 (τ 1 ) = −2 log L 1 + log 4u(1 − u)|τ 1 | 2 + 2 log η(2(1 − u)τ 1 ) η(2uτ 1 ) η 2 (τ 1 ) ,(5.9)
where we used (C.20) and (C.29) and introduced the notation τ k = i L 1 L k+1 , for k = 1 , . . . , d−1, for the aspect ratios of the general d-torus.
For the winding sector, the computations are detailed in Appendix D. The end result, given in (D.13), is
− W (1) = log Λ z − 1 2 − ∞ −∞ dk √ π e −k 2 log ω∈Z exp − π Λ z ω 2 − 2i π Λ z k ω ,(5.10)
with Λ z given by
Λ z = g π R 2 c (−1) z/2 z! (1 − 2 z )B z u 1−z + (1 − u) 1−z 1 |τ 1 | . . . |τ d−1 | . (5.11)
where B z are the Bernoulli numbers. For instance, in d = 2 we have
Λ 2 = 4π g R 2 c 1 u(1 − u) 1 |τ 1 | . (5.12)
Finally, putting together the contributions from the functional determinants and the winding sector, (5.7) and (5.10) respectively, we get the following (rather long) expression for the entanglement entropy (5.3),
S[A] = d 4 log 4u(1 − u) + d 4 ζ T d−1 L 2 ,...,L d (0) − d 8 G (0; 2L A , . . . , L d ) − d 8 G (0; 2L B , . . . , L d ) + d 4 G (0; L 1 , . . . , L d ) + log 4πg R 2 c + log A d − 1 − 1 2 log |τ 1 . . . τ d−1 | + 1 2 log (−1) d/2 d! 4(1 − 2 d )B d u 1−d + (1 − u) 1−d − ∞ −∞ dk √ π e −k 2 log ω∈Z exp − π Λ d ω 2 − 2i π Λ d k ω . (5.13)
It can be verified that the entanglement entropy is symmetric under the transformation u → 1 − u [33] as required for a pure state of the full system.
For the special case of d = z = 2 we obtain (using (5.9) and (D.16))
S[A 2 ] = − 1 2 log |τ 1 | + 1 2 log 4u(1 − u)|τ 1 | 2 + log η(2(1 − u)τ 1 ) η(2uτ 1 ) η 2 (τ 1 ) − 1 + log 4π g R 2 c − 1 2 log u(1 − u) − 1 2 log |τ 1 | − ∞ −∞ dk √ π e −k 2 log ω∈Z exp − π Λ 2 ω 2 − 2i π Λ 2 k ω = log η(2(1 − u)τ 1 ) η(2uτ 1 ) η 2 (τ 1 ) − 1 + log 8π g R 2 c − ∞ −∞ dk √ π e −k 2 log ω∈Z exp − π Λ 2 ω 2 − 2i π Λ 2 k ω ,(5.14)
where
Λ 2 = 4π g R 2 c 1 u(1−u) 1 |τ 1 | .
The final result looks relatively simple due to some cancellations between the classical and quantum contributions. To our knowledge, this is the first time that universal finite terms in the entanglement entropy on a torus have been obtained in closed form using path integral methods, even for the two-dimensional case. They have been computed numerically in [30] and by means of a boundary field method in [25].
We will now check some interesting limits of our general expressions.
Halved d-torus. The first simplifying special case that that we consider is when the torus is divided into two equal parts: The contribution from the functional determinants (5.7) simplifies tremendously, leaving only a single term,
L A = L B , L 1 = 2L A , u = 1 2 . (5.15)log det ∆ d/2 [0,L A ]×T d−1 L 2 ,...,L d + log det ∆ d/2 [0,L A ]×T d−1 L 2 ,...,L d − log det ∆ d/2 T d 2L A ,...,L d = d 2 ζ T d−1 L 2 ,...,L d (0) ,(5.
16) and we obtain for the universal terms in the entanglement entropy
S[A] = d 4 ζ T d−1 L 2 ,...,L d (0) + log 4πg R 2 c + d 2 log L 1 − 1 − log |τ 1 . . . τ d−1 | + 1 2 log (−1) d/2 d! 4(1 − 2 −d )B d − ∞ −∞ dk √ π e −k 2 log ω∈Z exp − π Λ d ω 2 − 2i π Λ d k ω ,(5.17)
where now
Λ d (u = 1/2) = g π R 2 c (−1) d/2 2 d d! 1 − 2 d B d 1 |τ 1 | . . . |τ d−1 | . (5.18)
In d = 2 this reduces to
S[A 2 ] = −1 + log 8π g R 2 c − ∞ −∞ dk √ π e −k 2 log ω∈Z exp − π Λ 2 ω 2 − 2i π Λ 2 k ω , (5.19)
since the contributions from the Dedekind-eta functions in (5.14) cancel against each other when u = 1/2. Moreover, Λ 2 is given here by
Λ 2 (u = 1/2) = 16 g π R 2 c 1 |τ 1 | . (5.20)
Thin d-torus. In d = 2, the infinitely thin torus limit (sometimes called also the long torus limit) amounts to |τ 1 | 1 and u fixed. It can be helpful to think of this limit as L 2 → 0 while all the other lengths (L 1 , L A ) are kept fixed. In this case, the contribution from the integral in the expression (5.14) is exponentially suppressed, and moreover, we have the asymptotic behaviour (E.7) for the Dedekind η function. It is then clear that all the contributions from the Dedekind η-functions vanish in this limit, and we are left with the simple result
S[A 2 ] = log 8π g R 2 c − 1 ,(5.21)
which agrees with [25]. In this limit the entanglement entropy for the thin torus is twice the entanglement entropy for the thin cylinder [25], since the entropy still carries information about the two boundaries of the torus.
We can take a look at the same limit for the d-torus. In the d-dimensional case we assume that L 1 , L A are fixed and of order one, while all the other sides are approaching zero, that is L 2 , . . . , L d → 0. There is an ambiguity in how to take this limit, so as a first step we consider the case
L 1 , L A L 2 · · · L d ,(5.22)
which can also be written as
1 |τ 1 | · · · |τ d−1 | .(5.ζ T d−1 L 2 ,...,L d (0) ≈ 4 (−π) p p! L 2 . . . L 2p+1 L 2p 2p+2 ζ (−2p) + 2 L 2 . . . L 2p L 2p−1 2p+1 (−2π) p (2p − 1)!! ζ(−2p + 1) ≈ 4 (−π) p p! L 2 . . . L 2p+1 L 2p 2p+2 ζ (−2p) ,(5.24)where p = d−1 2 −1 = d−1 2
for even d. We can rewrite the term more elegantly as a function of the aspect ratios, 7
ζ T d−1 L 2 ,...,L d (0) ≈ 2 Γ p + 1 2 π p+ 1 2 ζ p + 1 2 |τ 2p+1 | 2p |τ 1 | . . . |τ 2p | .
Finally, the integral over k in (5.13) is also exponentially suppressed and will not contribute to the final expression. Then, keeping only the most divergent term according to (5.22), we obtain
S[A d ] ≈ d Γ p + 1 2 2 π p+ 1 2 ζ p + 1 2 |τ 2p+1 | 2p |τ 1 | . . . |τ 2p | , p = d − 1 2 . (5.25)
Similar limits were discussed in [33] for the Renyi entropies of 3+1-dimensional relativistic fields theories with various twisted boundary conditions. Except for having the same powerlaw divergence, our results appear not to agree with their findings. The comparison is tricky though, as there are effectively three length scales in the d = 4, and since we are looking at the regularised entanglement entropy we do not have an explicit cut-off as in [33].
We should stress that when d = 2 all the sums in ζ The thin sliced d-torus. In d = 2 this limit corresponds to L A → 0 while all the other length scales involved remain fixed, that is u → 0 while |τ 1 | is kept fixed. The integral in (5.14) can then be evaluated, for instance by means of the Poisson summation formula (E.2), and at leading order it gives 1 2 log u. Considering only the leading term in the expansion of the Dedekind function (E.6), we obtain, for u → 0,
S[A 2 ] = − π 24|τ 1 |u + . . . ,(5.26)
which agrees with the entanglement entropy for the infinite long and thin sliced cylinder computed in [27]. Indeed, in this limit the torus and the cylinder are indistinguishable at leading order.
We can proceed with similar arguments in higher dimensions, assuming u → 0 while all the aspect ratios |τ i |, with i = 1, . . . , d − 1 are kept fixed. In order to simplify the computation we assume all the aspect ratios to be equal, |τ i | = σ for all i = 1, . . . , d − 1. Then, the leading divergent terms are contained in G (0, 2L A , L 2 , . . . , L d ) in (5.13), and by estimating the d−1-dimensional sum in G (0, 2L A , L 2 , . . . , L d ) (cf. (5.8)) with an integral we obtain the following leading behaviour for the entanglement entropy,
S[A d ] ≈ κ d u d−1 σ d−1 ,(5.27)
where κ d is a numerical coefficient that depends on the number of dimensions d. Similar behaviour was obtained for the three-dimensional torus in conformal field theories in [62] (see also [63]), and also in [64] from a holographic approach.
The wide d-torus. As our final example, we consider the so-called wide torus limit, that is when the directions transverse to the cut are very large while L A , L 1 are kept fixed. Let us start by considering this limit for d = 2. This means that |τ 1 | → 0 while u is kept fixed.
Using the expansion of the Dedekind-eta function (E.6), we see that the term containing the logarithm of the ratio of Dedekind-eta functions in (5.14) produces the leading divergence. Hence, from the general expression for the entanglement entropy in d = 2 (5.14), we obtain
S[A 2 ] ≈ − π 24u(1 − u)|τ 1 | + π 6|τ 1 | . (5.28)
This asymptotic behaviour is also expected for the universal function of the Renyi entropies of the two-dimensional torus, cf. [33] and references therein, and was found in holographic CFTs in [32].
In higher dimensions we can consider the limit when u is kept fixed, and all the transverse directions are very large compared to L 1 , L A , but all the aspect ratios approach zero at the same rate, that is |τ i | = ε, with i = 1, . . . , d−1 and ε → 0. In this case, the expressions in (5.13) simplify, and, as in the two-dimensional case, the leading divergent contribution is contained in the functions G (0, L, L 2 , . . . , L d ) (cf. (5.8)), where L can be L 1 , 2L A or 2(L 1 − L A ). Using a similar expansion as performed in the thin sliced torus limit, we obtain
S[A d ] ≈ f d (u) ε d−1 ,(5.29)
where f d (u) is a function symmetric under the exchange u → 1 − u.
In the above discussion, the case u = 1 2 is special for any dimension d, since the function f d (u = 1/2) = 0, so that the sub-leading but still diverging terms become important. Looking directly at (5.17) and (5.19), there is no contribution now coming from G (0, L, L 2 , . . . , L d ), and the next divergent term is logarithmic in the aspect ratios τ i , which in the two-dimensional torus is entirely coming from the integral in (5.19), while in higher dimensions it receives contributions also from the area term log √ A d and ζ T d−1 (0). In our simplified limit where all the ratios |τ i | approach zero at the same rate, we see that
S[A d ] ≈ 1 2 log ε , u = 1 2 , d ≥ 2 . (5.30)
This is consistent with the findings of [33], where for the z = 2 free boson field theory in 3+1 dimensions the universal function of Renyi entropies J n satisfies the relation lim |τ |→0 J n (u = 1/2, |τ |)|τ | = 0. 8 This clearly holds in our case since the universal term of the entanglement entropy has a logarithmic divergence. Rather different behaviour was observed in free twodimensional CFTs for Renyi entropies [33] and also in holographic CFTs in two and three space dimensions for entanglement entropy [64], where also for u = 1/2 the universal part continues to have a power-law divergence similar to (5.28) and (5.29). The disagreement was already observed in [33].
Discussion
In this work we have analytically computed the universal finite corrections to the entanglement entropy for GQLMs in arbitrary d+1 dimensions, for even integer d, and on either a d-sphere cut into two d-dimensional hemispheres or a d-torus cut into two d-dimensional cylinders. GQLMs are free field theories where the Lifshitz exponent is equal to the number of spatial dimensions, and they are described in terms of compactified massless scalars. When d = z = 2 the GQLM reduces to the quantum Lifshitz model [14], and our findings confirm the known results of [25][26][27]. The calculations are performed by means of the replica method. Caution is required when performing the cut as the massless scalar field in the GQLM is compactified. It is useful to discern between the role of the fluctuating fields and the classical modes. In essence, the periodical identification mixes with the boundary conditions imposed at the cut on the replicated fields [23,[25][26][27]30], and disentangling the winding modes from the rest leads to an additional universal sub-leading contribution to the entanglement entropy. The fluctuating fields satisfy Dirichlet conditions at the cut (as well as further conditions imposed on their even-power derivatives), while the classical fields take care of the periodic identification. The contribution from the fluctuating fields comes from the ratio of functional determinants of the relevant operators, which we compute via spectral ζ-function methods. The classical fields contribute via the zero-mode and via the winding sector summarised in the function W (n).
For the spherical case, the full analytic expression of the universal finite terms turned out to be a constant, depending only on the scale invariant compactification radius, cf. (4.12). This is a consequence of the presence of zero modes, and their normalisation.
For the toroidal case, the story is rather rich. The general expression is (5.13), while (5.17) is valid when we cut the torus by half. In both cases, the universal term is comprised of a scaling function, which depends on the relevant aspect ratios of the subsystems, and a constant term, which contains the "physical" compactification radius. The last one comes from the zero mode of the partition function of the d-torus as well as from the winding sector. We considered various limits, such as the thin torus limit, which results in the simple expressions (5.21) and (5.25) in two and d dimensions respectively, the thin sliced d-torus, cf. (5.26) and (5.27), and finally we examined the wide torus limit, cf. (5.28), (5.29), and (5.30) where the last expression is valid for u = 1/2. Notice that in the toroidal case, where the winding sector is non trivial, it also contributes to the scaling function. For example, in the thin torus limit its contribution is decisive in order to cancel divergences and leave a finite result, cf. (5.21) and (5.25). Our findings confirm expectations from the study of the (2+1)dimensional QLM [24][25][26][27], that also for critical non-relativistic theories entanglement entropy can encode both local and non-local information of the whole system. Moreover, our results give substance to the field theoretic intuition that entanglement entropy should depend on the dynamical critical exponent, see also [39][40][41] for analogous results in this direction. The specific dependence is rather non-trivial already in the simplest spherical case. The next step would be to extend our analysis to even-dimensional spacetime, that is when d is an odd integer. Progress has been recently made in e.g. [65] (and references therein), in computing determinants of GJMS operators on odd-dimensional spheres, see also [66] for a different approach based on heat kernel techniques. It would also be interesting to broaden our study of GQLMs by examining entanglement entropy for non-smooth entangling surfaces. In the QLM for non-smooth boundaries, sharp corners source a universal logarithmic contribution (i.e. log L A ), with a coefficient that depends on the central charge and the geometry of the surgery [22,29]. For d-dimensional CFTs and in presence of non-smooth entangling surfaces further UV divergences also appear whose coefficients are controlled by the opening angle. In particular, for a conical singularity, the nearly smooth expansion of the universal corner term (seen as a function of the opening angle θ) is simply proportional to the central charge of the given CFT [67][68][69][70][71][72][73][74]. 9 Another relevant direction to pursue is the study of post-quench time evolution of entanglement in these systems, see e.g. [77] for recent results in QLM and [78] for Lifshitz-type scalar theories. 10 In this respect, GQLMs can provide an interesting and rich playground where one can answer many questions in a full analytic manner.
A The determinant of GJMS operators on spheres and hemispheres
In this appendix we review the calculation of the determinants of GJMS operators on spherical domains, originally performed by Dowker in [42], and rewrite his results in a way we find more transparent. Building on his previous work, in particular [56] and [57], Dowker writes the following expression for the spectral ζ-function of a GJMS operator of degree 2k, which we denote by P 2k , on the hemisphere, Our task therefore is to evaluate Z d (0, a D/N , k). In order to do so, we first review the definition and key properties of the Barnes ζ-function. We then turn to the evaluation of the spectral ζ-function of one of the factors in (A.1) and finally put everything together to get the desired log determinants.
Z d (s, a D/N , k) = m∈N d k−1 j=0 (m · d + a D/N ) 2 − α 2 j −s , (A.1) with d = (1, . . . , 1) ∈ R d , α j = j + 1/2,
A.1 Definition and properties of the Barnes ζ-function
The Barnes ζ function is defined for s > d as
ζ(s, a|d) : = m∈N d (a + m · d) −s = ∞ m 1 ,...,m d =0 (a + m 1 d 1 + . . . m d d d ) −s . (A.4)
For the special case of d = 1 = (1, . . . , 1) we use the simplified notation,
(d − k)!(k − 1)! lim z→0 d d−k dz d−k z d e −az (1 − e −z ) d = (−1) d−k B (d) d−k (a) (d − k)!(k − 1)! = (−1) k (k − 1)! I d (k, a), k = 1, . . . , d (A.8) where B (d)
l (a) are generalized Bernoulli polynomials. For future convenience we define
R d (k, a) := (−1) d−k B (d) d−k (a) (d − k)!(k − 1)! , (A.9)
such that for k = 1, . . . , d we can simply write Res s=k ζ d (s, a) = R d (k, a). In particular this means that we have the following expansion around s = 0 for k = 1, . . . , d,
ζ d (k + s, a) = 1 s R d (k, a) + C d (k, a) + O(s). (A.10)
Calculating ∂ s sζ d (k + s, a)| s=0 using the analytic continuation (A.6) one finds that
C d (k, a) = R d (k, a) − ψ(k)R d (k, a), (A.11)
where the derivative of R should be read as R d (k, a) = lim s→0 ∂ s R d (k + s, a), and ψ is the digamma function. For non-negative k, ζ d (−k, a) can be evaluated using its analytic continuation (A.6). On page 151 of [80] the following expression is given
ζ d (−k, a) = (−1) d k! (d + k)! B (d) d+k (a), (A.12)
where again B
ζ N (s) : = ζ N (s) − (a 2 N − α 2 ) −s = m∈N d \{0} (a N + m · d) 2 − α 2 −s . (A.14)
Below, we derive the following two identities
ζ D (0) = ζ d (0, a D + α) + ζ d (0, a D − α) + d/2 r=1 α 2r r H 1 (r)R d (2r, a D ) , (A.15) ζ N (0) = ζ d (0, a N + α) − log ρ d − log(a N + α) + d/2 r=1 α 2r r H 1 (r)R d (
A.2.1 Dirichlet boundary conditions
We now proceed with the derivation of (A.15). Since we will only be dealing with Dirichlet boundary conditions throughout this section, we will omit the D-index in a D and only reintroduce it when we reach the final result. The first step in the evaluation is to perform a binomial expansion of ζ D (s). The expansion converges because (a + m · d) 2 ≥ α 2 for d = 1
in both the Dirichlet and Neumann cases, with equality only arising for the zero mode of the Neumann case, which is omitted anyway. We thus get 2s + 2r, a),
ζ D (s) = m∈N d ∞ r=0 (s) r r! α 2r (a + m · d) −(2s+2r) = ∞ r=0 (s) r r! α 2r m∈N d (a + m · d) −(2s+2r) = ζ d (2s, a) + ∞ r=1 sf (s) r! α 2r ζ d ((A.18)
where we used the definition of the Barnes ζ-function for d = 1 given in (A.5) and rewrote the Pochhammer symbol for r > 0 as (s) r = s(s + 1) · · · (s + r − 1) = : sf (s) to make the behaviour around s = 0 more transparent. We also note that f (0) = (r − 1)!. Considering that ζ d (2r) has simple poles at r = 1, . . . , d/2 and converges for higher r, we can immediately write down the following expression at s = 0
ζ D (0) = ζ d (0, a) + 1 2 d/2 r=1 α 2r r R d (2r, a) (A.19) with R d (k, a) defined in equation (A.8).
The next step is to evaluate ζ D (0). In order to do this, we first take a partial derivative with respect to s at general values of s and then let s → 0, We note that f (0) = (r − 1)!H r−1 , where H r−1 is a harmonic number, and remind the reader that for r = 1, . . . , d/2 and small s
ζ d (2s + 2r, a) = R d (2r, a) 2s + C d (2r, a) + O(s),
and similarly
ζ d (2s + 2r, a) = − R d (2r, a) 2s 2 + O(s).
Thus at s = 0 we can write
f (s)sζ d (2s + 2r, a) s=0 = (r − 1)! H r−1 2 R d (2r, a) (A.21) f (s)ζ d (2s + 2r, a) + sf (s)∂ s ζ d (2s + 2r, a) s=0 = (r − 1)!C d (2r, a) (A.22)
Putting these expressions together we find There is no contribution to ζ d after r = d/2, since all such terms in the sum get set to 0 by the s factor.
The next task is to compute the remaining infinite series in (A.23). In order to do this we first note that ζ d (s, a) admits the following integral representation for s > d
ζ d (s, a) = 1 Γ(s) ∞ 0 dt e −at t s−1 (1 − e −t ) d . (A.24)
Using this we can rewrite
∞ r=u+1 α 2r r ζ d (2r, a) = 2 ∞ 0 dt t −1 e −at (1 − e −t ) d ∞ r=d/2+1 (αt) 2r (2r)! = ∞ 0 dt t −1 e −at (1 − e −t ) d 2 cosh(αt) =e αt +e −αt −2 d/2 r=0 (αt) 2r (2r)! = lim σ→0 ∞ 0 dt t σ−1 e −(a+α)t (1 − e −t ) d + ∞ 0 dt t σ−1 e −(a−α)t (1 − e −t ) d − 2 d/2 r=0 (α) 2r (2r)! ∞ 0 dt t 2r+σ−1 e −at (1 − e −t ) d = lim σ→0 Γ(σ) ζ d (σ, a + α) + ζ d (σ, a − α) − 2ζ d (σ, a) − 2 d/2 r=1 α 2r (2r)! Γ(2r + σ)ζ d (2r + σ, a) , (A.25)
where we introduced a convergence parameter σ in order for the integral representation to make sense. For the σ → 0 limit we will need to use the analytic continuation of ζ d . We now take a closer look at the different parts of the expression above around σ = 0. The ζ d functions on the first line of (A.25) are convergent at σ = 0 while the Gamma function has a simple pole. Expanding order by order in σ gives
Γ(σ) ζ d (σ, a + α) + ζ d (σ, a − α) − 2ζ d (σ, a) = = 1 σ ζ d (0, a + α) + ζ d (0, a − α) − 2ζ d (0, a) + + ζ d (0, a + α) + ζ d (0, a − α) − 2ζ d (0, a)− − γ ζ d (0, a + α) + ζ d (0, a − α) − 2ζ d (0, a) + O(σ). (A.26)
Carrying out the corresponding expansion on the second line of (A.25) gives
−2 d/2 r=1 α 2r (2r)! Γ(2r) + Γ(2r)ψ(2r)σ + O(σ 2 ) 1 σ R d (2r, a) + C d (2r, a) + O(σ) = − 1 σ d/2 r=1 α 2r r R d (2r, a) − d/2 r=1 α 2r r ψ(2r)R d (2r, a) − d/2 r=1 α 2r r C d (2r, a) + O(σ). (A.27)
To get a finite end result we need the pole to vanish when we add (A.26) and (A.27). 11 This is equivalent to the condition 2r, a).
ζ d (0, a + α) + ζ d (0, a − α) = 2ζ d (0, a) + d/2 r=1 α 2r r R d ((A.28)
We can then take the limit and write 2r, a),
(A.25) = −γ ζ d (0, a + α) + ζ d (0, a − α) − 2ζ d (0, a) + + ζ d (0, a + α) + ζ d (0, a − α) − 2ζ d (0, a) − d/2 r=1 α 2r r ψ(2r)R d (2r, a) − d/2 r=1 α 2r r C d (2r, a) = ζ d (0, a + α) + ζ d (0, a − α) − 2ζ d (0, a)− − γ d/2 r=1 α 2r r R d (2r, a) − d/2 r=1 α 2r r ψ(2r)R d (2r, a) − d/2 r=1 α 2r r C d (2r, a) = ζ d (0, a + α) + ζ d (0, a − α) − 2ζ d (0, a)− − d/2 r=1 α 2r r H 2r−1 R d (2r, a) − d/2 r=1 α 2r r C d ((A.29)
where H 2r−1 ≡ ψ(2r) + γ is a harmonic number. For clarity and later convenience we write down the following identity resulting from the above discussion in the case of Dirichlet boundary conditions explicitly
∂ s ∞ r=1 (s) r r! α 2r ζ d (2s + 2r, a) s=0 = ζ d (0, a + α) + ζ (0, a − α) − 2ζ d (0, a)+ + d/2 r=1 α 2r r H r−1 2 − H 2r−1 R d (2r, a) (A.30)
Putting everything back together into (A.23) we can write for the complete ζ-function in the Dirichlet case
ζ D (0) : = ζ (0) = ζ d (0, a D + α) + ζ d (0, a D − α) + d/2 r=1 α 2r r H r−1 2 − H 2r−1 R d (2r, a D ) = ζ d (0, a D + α) + ζ d (0, a D − α) + d/2 r=1 α 2r r H 1 (r)R d (2r, a D ), (A.31)
where H 1 (r) is a special case of
H n (r) : = H r−1 2n − H 2r−1 . (A.32)
defined for any integer n ≥ 1. Higher values of n will be relevant when carrying out the corresponding analysis for a zeta function of the form ζ d (2ns + 2r, a) instead of ζ d (2s + 2r, a).
A.2.2 Neumann boundary conditions
Calculating the Neumann case is equivalent to setting a N − α = ε and letting ε go to 0. In this limit Barnes [81] calculated that
ζ d (0, ε) = − log ε − log ρ d + O(ε), (A.33) where ρ d ≡ ρ d (1)
is called a Γ-modular form and obeys the following identity involving the multiple Gamma function
Γ d (a) ρ d = Γ d+1 (a) Γ d+1 (a + 1)
.
(A.34)
The multiple Gamma function Γ d (a|d) is defined,
Γ d (a|d) ρ d (d) : = e ζ d (0,a|d) (A.35)
and we write Γ d (a) ≡ Γ d (a|1). Equipped with these tools we can compute the derivative of the ζ-function in the Neumann case (A.14),
ζ N (0) : = ζ N (0) − ∂ s (a 2 N − α 2 ) −s s=0 = ζ d (0, a N + α) − log ρ d − log(a N − α) + log a 2 N − α 2 + d/2 r=1 α 2r r H 1 (r)R d (2r, a N ) = ζ d (0, a N + α) − log ρ d + log(a N + α) + d/2 r=1 α 2r r H 1 (r)R d (2r, a N ), (A.36)
where ζ N is just ζ D but with a D replaced by a N . We can also write the following Neumann version of the identity (A.30) 2r, a N ).
∂ s ∞ r=1 (s) r r! α 2r ζ d (2s + 2r, a N ) − (a 2 N − α 2 ) −s s=0 = = ζ d (0, a N + α) − log ρ d + log(a N + α) − 2ζ d (0, a N )− + d/2 r=1 α 2r r H 1 (r)R d (
(A.37)
A.3 Determinant of GJMS operators
We are now prepared to tackle the calculation of the log-determinants. In section A.3.2, we discuss the Neumann case, which is a little more involved, as the zero mode makes the log-determinant ill-defined in the critical case k = d/2. For the subcritical case k = 1, . . . , d/2 − 1 we get
log det P 2k,H d N = −ζ d+1 (0, d/2 − k) + ζ d+1 (0, d/2 + k) − M (d, k), (A.39)
while for the critical case we get the slightly more complicated result
log det P d,H d N = − log(d − 1)! + log ρ d+1 + ζ d+1 (0, d) − M (d), (A.40)
where M (d) = M (d, d/2) and ρ d+1 is a Γ-modular form as defined in (A.34).
In section A.3.3 we assemble these results to construct the determinant of GJMS operators on a sphere, summarised in (A.60).
A.3.1 Determinant on the hemisphere with Dirichlet boundary conditions
We will now generalise the results of the last section. Our starting point is the spectral ζfunction defined in (A.1). As before, we start by doing a binomial expansion for every term in the product,
Z d (s, a, k) = m∈N d k−1 j=0 ∞ r j =0 (s) r j r j ! α 2r j j (m · d + a) 2s+2r j = ∞ r∈N k k−1 j=0 (s) r j r j ! α 2r j j m∈N d 1 (m · d + a) 2ks+2 k−1 j=0 r j = ∞ r∈N k k−1 j=0 (s) r j r j ! α 2r j j ζ d (2ks + 2r · d, a) = ζ d (2ks, a) r=0 + k−1 i=0 ∞ r i =1 (s) r i r i ! α 2r i i ζ d (2ks + 2r i , a)
r=(0,...,0,r i ,0,...,0) We remind the reader that α j = j + 1 2 , d = 1, and a D = (d + 1)/2 for Dirichlet boundary conditions. For simplicity we write a instead of a D throughout this section.
+ + k−1 i=1 i−1 j=0 ∞ r i ,r j =1 (s) r i r i ! (s) r j r j ! α 2r i i α 2r j j ζ d (2ks + 2r i + 2r j ,
The remaining sums, denoted by R(s) above, will vanish for Z d and Z d around s = 0. This is easy to see, as (s) r = O(s) while the ζ d functions will contribute with simple poles and ζ with double poles. When more than two Pochhammer symbols are present, they overcome the poles and give zero in the limit. We can make further simplifications in some of the sums above by noting that the r i are dummy indices and rewrite (A.41) as 2r, a) .
Z d (s, a, k) = ζ d (ds, a) + ∞ r=1 (s) r r! k−1 j=0 α 2r j ζ d (2ks + 2r, a) + ∞ r,r =1 (s) r r! (s) r r ! k−1 i=1 i−1 j=0 α 2r i α 2r j ζ d (Z d (0, a, k) = ζ d (0, a) + 1 2k d/2 r=1 1 r k−1 j=0 α 2r j R d ((A.43)
Using equation (A.28) we can rewrite this as
Z d (0, a, k) = 1 2k k−1 j=0 ζ d (0, a + α j ) + ζ d (0, a − α j ) . (A.44)
By virtue of (A.12) evaluated at k = 0, this can in turn be written in terms of generalised Bernoulli polynomials [42],
Z d (0, a, k) = 1 2k d! k−1 j=0 B (d) d (d/2 + j + 1) + B (d) d (d/2 − j) . (A.45)
Evaluating the derivative at s = 0 is also relatively painless now. For the first two terms the result follows directly from the discussion in the previous sections, in particular from equation (A.30) in the Dirichlet and (A.37) in the Neumann case. In the Dirichlet case we thus have d, a, k),
∂ s ζ d (2ks, a) + ∞ r=1 (s) r r! k−1 j=0 α 2r j ζ d (2ks + 2r, a) s=0 = = k−1 j=0 ζ d (0, a + α j ) + ζ d (0, a − α j ) + d/2 r=1 1 r k−1 j=0 α 2r j H k (r)R d (2r, a) = : k−1 j=0 ζ d (0, a + α j ) + ζ d (0, a − α j ) + M 1 ((A.46)
where we defined M 1 in the last line to simplify the notation. To calculate the contribution from the double sum in (A.42) we remind the reader that ζ d (s + n, a) = 1 s R d (n, a) + O(s 0 ) and ζ d (s + n, a) = −1 s 2 R d (n, a) + O(s) for n = 1, . . . , d. Since (s) r = : sf (s) = O(s) with f (0) = (r − 1)!, this means that the only terms in the sum that will survive are those for which 2r + 2r ≤ d. Also noting that ∂ s (s) r s=0 = (r − 1)! we get the following result n (α−x) [80] of generalized Bernoulli polynomials implies, according to (A.9), that R d (2r, a N ) = R d (2r, a D ) for even dimension, which in turn implies that M (d, k) does not depend on whether we chose a D or a N . It is possible to further simplify the result by using (A.34) and noting that α j +n = α j+n . This induces a telescope-like cancellation in the product,
∂ s ∞ r,r =1 (s) r r! (s) r r ! k−1 i=0 i−1 j=0 α 2r i α 2r j ≡A(r,r ) ζ d (2ks + 2r + 2r , a) s=0 = d/2 r=1 d/2−r r =1 1 r 1 r A(r, r ) s 2 ∂ s ζ d (2ks + 2r + 2r , a) s=0 + 2 d/2 r=1 d/2−r r =1 1 r 1 r A(r, r ) sζ d (2ks + 2r + 2r , a) s=0 = d/2 r=1 d/2−r r =1 1 r 1 r A(r, r ) − 1 2k R d (2r + 2r , a) + 2 2k R d (2r + 2r , a) = 1 2k d/2 r=1 d/2−r r =1 1 r 1 r A(r, r )R d (2r +Z d (0, a, k) = k−1 j=0 ζ d (0, a + α j ) + ζ d (0, a − α j ) + M (d, a, k) = log 1 ρ 2k d k−1 j=0 Γ d (a + α j )Γ d (a − α j ) + M (d,log 1 ρ 2k d k−1 j=0 Γ d (a + α j )Γ d (a − α j ) = log k−1 j=0 Γ d (a + α j ) ρ d Γ d (a − α j ) ρ d = log k−1 j=0 Γ d+1 (a + α j ) Γ d+1 (a + α j+1 ) Γ d+1 (a − α j ) Γ d+1 (a − α j−1 ) = log Γ d+1 (a + α 0 ) Γ d+1 (a + α k ) Γ d+1 (a − α k−1 ) Γ d+1 (a − α −1 ) = log Γ d+1 (d/2 − k + 1) Γ d+1 (d/2 + k + 1) . (A.50)
Putting everything together we get the following expression for the log-determinant
log det P 2k,H d D = −Z d (0, a D , k) = − log Γ d+1 (d/2 − k + 1) Γ d+1 (d/2 + k + 1) − M (d, k) = −ζ d+1 (0, d/2 − k + 1) + ζ d+1 (0, d/2 + k + 1) − M (d, k), (A.51)
which is valid for k = 1, . . . , d/2, i.e. in the critical as well as subcritical case.
A.3.2 Determinant on the hemisphere with Neumann boundary conditions
In the subcritical case there is no zero mode and we can continue from (A.50), inserting the appropriate a for Neumann boundary conditions, i.e. a N = (d − 1)/2. Then the expression for the spectral ζ-function is as in (A.45), thanks to the above mentioned property of the generalised Bernoulli polynomials [42], and the subcritical expression for the determinant is
log det P 2k,H d N = −Z d (0, a N , k) = − log Γ d+1 (d/2 − k) Γ d+1 (d/2 + k) − M (d, k) = −ζ d+1 (0, d/2 − k) + ζ d+1 (0, d/2 + k) − M (d, k). (A.52)
In the critical case k = d/2 the calculation is more subtle. In order to get a well-defined expression, we must subtract the zero mode contribution from the sum,
Z d (s, a N , k) = m∈N d \{0} k−1 j=0 (m · d + a N ) 2 − α 2 j −s = Z d (s, a N , k) − k−1 j=0 a 2 N − α 2Z d (0, a N , k) = 1 2k d! k−1 j=0 B (d) d (d/2 − j − 1) + B (d) d (d/2 + j) − 1 . (A.54)
We can now use (A.48) as well as (A.33), to write down the following expression,
Z (0, a N , k) = lim ε→0 k−2 j=0 ζ d (0, a N + α j ) + ζ d (0, a N − α j ) + ζ d (0, a N + α k−1 ) − log(ε) − log(ρ d ) + M (d, k) + log a 2 N − α 2 k−1 =log(a N +α k−1) +log(ε) + k−2 j=0 log a 2 N − α 2 j + O(ε), (A.55)
where we denote ε = a N − α k−1) . The logarithmic divergence cancels and we obtain
Z (0, a N , k) = k−2 j=0 ζ d (0, a N + α j ) + ζ d (0, a N − α j ) + log a 2 N − α 2 j + ζ d (0, a N + α k−1 ) − log(ρ d ) + log(d − 1) + M (d, k) = k−2 j=0 ζ d (0, a N + α j ) + ζ d (0, a N − α j ) + ζ d (0, d − 1) − log(ρ d ) + log (d − 1)! + M (d, k) = log (d − 1)! ρ 2k d Γ d (d − 1) k−2 j=0 Γ d (a N + α j )Γ(a N − α j ) + M (d, k) (A.56)
where we used the fact that a N + α k−1 = d − 1 and k−2 j=0 log a 2 N − α 2 j = log (d − 2)! for Neumann boundary conditions. As before, we can use identities involving the multiple Gamma function to induce cancellations. This leads us to the final result in the critical case,
log det P d,H d N = −Z d (0, a N , d/2) = − log (d − 1)! ρ d Γ d+1 (1) Γ d+1 (d) − M (d) = − log (d − 1)! Γ d+1 (d) − M (d) = − log (d − 1)! + log ρ d+1 + ζ d+1 (0, d) − M (d), (A.57)
where we used ρ d = Γ d+1 (1), and the abbreviated notation M (d, d/2) = : M (d). Since the critical case is the most relevant to this paper, we give the explicit form of M (d),
M (d) = d/2 r,l=1 1 r(d − 2r)!(2r − 1)! l + 1 2 2r H r−1 d − H 2r−1 B (d) d−2r d + 1 2 + 1 d d/2 r,l=1 d/2−r t=1 l m=1 1 rt(d−2r−2t)!(2r+2t−1)! l+ 1 2 2r m+ 1 2 2t B (d) d−2r−2t d+1 2 .
(A.58)
As mentioned in the main body, our result differs by Dowker's result due to the different sign in front of the log(d − 1)! term because we are considering the functional determinant of the GJMS operators on the sphere with the zero mode removed.
A.3.3 Determinant on the Sphere
The log-determinant on the sphere is obtained by adding the log-determinants on the hemisphere for Dirichlet and Neumann boundary conditions. In the critical case, the spectral ζ-function for the sphere at s = 0 is given by (A.45) and (A.54), [42], so that 59) and the log-determinant on the sphere is thus given by
Z d (0, k) = 1 k d! k−1 j=0 B (d) d (d/2 + j + 1) + B (d) d (d/2 + j) − 1 ,(A.log det P d,S d = log det P d,H d D + log det P d,H d N = − log (d − 1)! Γ d+1 (1) Γ d+1 (d)Γ d+1 (d + 1) − 2M (d) = −ζ d+1 (0, 1) + ζ d+1 (0, d + 1) + ζ d+1 (0, d) − log (d − 1)! + log ρ d+1 − 2M (d).
(A. 60) In the subcritical case we instead have
Z d (0, k) = 1 k d! k−1 j=0 B (d) d (d/2 + j + 1) + B (d) d (d/2 + j) , (A.61)
and the functional determinant reads
log det P 2k,S d = log det P 2k,H d D + log det P 2k,H d N = − log Γ d+1 (d/2 − k) Γ d+1 (d/2 + k) Γ d+1 (d/2 − k + 1) Γ d+1 (d/2 + k + 1) − 2M (d, k) = −ζ d+1 (0, d/2 − k) + ζ d+1 (0, d/2 + k)+ − ζ d+1 (0, d/2 − k + 1) + ζ d+1 (0, d/2 + k + 1) − 2M (d, k).
(A.62)
B An alternative form for determinants of GJMS operators on spheres and hemispheres
In this appendix we will show how to rewrite Dowker's expressions (A.51) and (A.57) for the log-determinants of GJMS operators on hemispheres as
log det P 2k,H d D = − d n=0 h D n (d, k)ζ (−n) − f D (d, k), log det P 2k,H d N = − d n=0 h N n (d, k)ζ (−n) − f N (d, k), (B.1)
where ζ is the Riemann ζ-function, and h n and f are functions that we derive and that depend on the boundary conditions, the dimension, and the degree of the GJMS-operator. The spherical case then follows directly as the sum of Dirichlet and Neumann hemispherical results,
log det P 2k,S d = − d n=0 h D n (d, k) + h N n (d, k) ζ (−n) − f D (d, k) − f N (d, k). (B.2)
B.1 Rewriting ζ d in terms of the Riemann ζ-function
In [82], Adamchik gives a closed form of the Barnes ζ-function in terms of a series of Riemann ζ-functions. His calculation is summarized by equations (14), (17), and (23) in the reference, which in our slightly different notation and after a bit of rearranging read
ζ d (0, z) = (−1) d+1 log G d (z) + d−1 k=0 (−1) k z k R d−k , (B.3)
where we note that the G d (z) are multiple Gamma functions with a different normalization compared to the one used by Dowker. We have the following explicit closed forms for all the parts of the above expression,
log G d (z) = (−1) d (d − 1)! d−1 k=0 P k,d (z) ζ (−k) − ζ (−k, z) , Re(z) > 0 R d−k = 1 (d − k − 1)! d−k−1 l=0 d − k l + 1 ζ (−l). (B.4)
For our purposes, it is not necessary to know how the polynomials P k,d (z) are defined, it suffices to know that they satisfy [82] d−1
k=0 P k,d (z)n k = (n − z + 1) d ≡ d−1 k=1 (n + k − z). (B.5)
We are only interested in the special case of the above formula, for which z is a positive integer. For z = 1 it is clear that log We can now define the following quantity
A(d, z) ≡ (−1) d+1 log G d (z) = 1 (d − 1)! d−1 k=0 P k,d (z) z−1 n=1 n k log n = 1 (d − 1)! z−1 n=1 log(n)(n − z + 1) d , z > 1 . (B.8)
It is further clear from ζ(s) = ζ(s, 1) that A(d, 1) = 0. Another very useful simplification comes from rewriting the R d−k sum in (B.3) as
d−1 k=0 (−1) k z k R d−k = d−1 k=0 z k (−1) k (d − k − 1)! d−k−1 l=0 d − k l + 1 ζ (−l) = d−1 l=0 d−l−1 k=0 (−1) k (d − k − 1)! z k d − k l + 1 ζ (−l) ≡ d−1 l=0 D l (d, z)ζ (−l), (B.9)
where we define D k (d, z) in the last line as
D k (d, z) ≡ d−k−1 j=0 (−1) j (d − j − 1)! z j d − j k + 1 . (B.10)
With these definitions it possible to write down the ζ d (0, z) in the following very simple way
ζ d (0, z) = A(d, z) + d−1 k=0 D k (d, z)ζ (−k), z ∈ N + . (B.11)
B.2 Determinants in terms of Riemann ζ-functions
Dirichlet boundary conditions on the hemisphere
We can now use the above technology to rewrite the log-determinant (A.51) on the hemisphere for Dirichlet boundary conditions,
log det P 2k,H d D = −ζ d+1 (0, d/2 − k + 1) + ζ d+1 (0, d/2 + k + 1) − M (d) = −A(d + 1, d/2 − k + 1) + A(d + 1, d/2 + k + 1) − M (d)− − d n=0 D n (d + 1, d/2 − k + 1) − D n (d + 1, d/2 + k + 1) ζ (−n) ≡ − d n=0 h D n (d, k)ζ (−n) − f D (d, k).
(B.12)
Here we have defined two functions
h D n (d, k) ≡ D n (d + 1, d/2 − k + 1) − D n (d + 1, d/2 + k + 1), (B.13) f D (d, k) ≡ A(d + 1, d/2 − k + 1) − A(d + 1, d/2 + k + 1) + M (d).
(B.14)
Since the critical case k = d/2 is of particular interest to us, we give the explicit form of h D n and f D in that case,
h D n (d) ≡ h D n (d, d/2) = − 1 (d − 1)! d n + 1 + 1 d! d + 1 n + 1 − d−n j=0 (−1) j (d − j)! d + 1 j d − j + 1 n + 1 , (B.15) f D (d) ≡ f D (d, d/2) = − 1 d! d−1 j=1 log(j)(j − d) d+1 + M (dlog det P 2k,H d N = −ζ d+1 (0, d/2 − k) + ζ d+1 (0, d/2 + k) − M (d) = −A(d + 1, d/2 − k) + A(d + 1, d/2 + k) − M (d)− − d n=0 D n (d + 1, d/2 − k) − D n (d + 1, d/2 + k) ζ (−n) ≡ − d n=0 h N n (d, k)ζ (−n) − f N (d, k), (B.17)
with the functions h N n (d, k) and f N (d, k) defined in the last line as
h N n (d, k) ≡ D n (d + 1, d/2 − k) − D n (d + 1, d/2 + k), (B.18) f N (d, k) ≡ A(d + 1, d/2 − k) − A(d + 1, d/2 + k) + M (d). (B.19)
The critical case (A.57) is slightly harder, as we also need to rewrite log ρ d+1 . In order to do this, we use the following formula from [82],
log ρ d = − 1 (d − 1)! d−1 n=0 d n + 1 ζ (−n). (B.20)
Reminding the reader that d d+1 = 0 we can thus write
log det P d,H d N = − log(d − 1)! + log ρ d+1 + ζ d+1 (0, d) − M (d) = − d n=0 1 d! d + 1 n + 1 − D n (d + 1, d) ζ (−n) − log(d − 1)! + A(d + 1, d) − M (d) ≡ − d n=0 h N n (d)ζ (−n) − f N (dh N n (d) = 1 d! d + 1 n + 1 − d−n j=0 (−1) j (d − j)! d j d − j + 1 n + 1 , (B.22) f N (d) = log(d − 1)! − 1 d! d−1 j=1 log(j)(j − d) d+1 + M (d). (B.23)
C Functional determinants on the flat d-torus
In this appendix we calculate functional determinants of the Laplacian on the flat torus (Appendix C.1) and on a cylinder obtained by cutting the torus along a cycle and imposing Dirichlet boundary conditions along the cut (Appendix C.2). After that, in Appendix C.3, we derive the functional determinant of k-powers of the Laplacian both on the flat torus as well as on the cylinder.
C.1 Determinant of the Laplacian on the flat torus
The d-dimensional flat torus can be defined as the hyperinterval with side lengths given by L i and opposing sides identified,
T d L 1 ,...,L d : = R d /(L 1 Z × . . . × L d Z). (C.1)
Since the manifold is flat, the Laplace-Beltrami operator is just the standard Laplacian on Euclidean space.
We compute the functional determinant for the Laplacian on the d-torus via a spectral ζ-function method. The eigenvalues of the Laplace operator −∂ a ∂ a , are given by
λ n 1 ,...,n d = 2πn 1 L 1 2 + · · · + 2πn d L d 2 , n 1 , . . . , n d ∈ Z . (C.2)
As usual, the zero mode, with n 1 = n 2 = · · · = n d = 0, needs to be removed in the computation of the spectral ζ-function. In order to calculate the determinant of the Laplacian, we need to evaluate
ζ T d L 1 ,...,L d (s) : = n∈Z d 2πn 1 L 1 2 + · · · + 2πn d L d 2 −s = n∈Z d n T d Ξ n d −s , (C.3)
where n d : = (n 1 , . . . , n d ) is a d-vector, Ξ : = diag 2π/L 1 2 , . . . , 2π/L d 2 a d × d matrix, and the prime on the sum denotes the omission of the zero mode.
In section 2.2 of [83] the analytic continuation of the sum (C.3) is evaluated recursively. In our notation the result is given by
ζ T d L 1 ,...,L d (s) = 2 L 1 2π 2s ζ(2s) + L 1 Γ(s − 1/2) 2 √ πΓ(s) ζ T dΓ(s) √ π n d−1 ∈Z d−1 ∞ n 1 =1 n 1 n T d−1 Ξ d−1 n d−1 s−1/2 K s−1/2 L 1 n 1 n T d−1 Ξ d−1 n d−1 , (C.5)
with K ν (z) being the modified Bessel function of the second kind, n d−1 = (n 2 , . . . , n d ), and Ξ d−1 = diag 2π/L 2 2 , . . . , 2π/L d 2 .
We can directly evaluate the log-determinant on the flat torus, as the analytic continuation (C.4) has only a pole at s = d 2 in the whole complex plane,
log det P 2,T d = log det ∆ T d L 1 ,...,L d = −ζ T d L 1 ,...,L d (0) = 2 log(L 1 ) + L 1 ζ T d−1 L 2 ,...,L d (−1/2) − G (0; L 1 , . . . , L d ) . (C.6)
This expression is recursive, and thus not in a closed form yet, however, it turns out to be useful in the evaluation of the functional determinant contribution to the entanglement entropy in Section 5.
We now solve the recursion directly and provide a closed form for the ζ-function. Our calculation of the closed form is slightly more direct than the one carried out in section 4.2.3 of [84] but the end result is the same. In order to make the calculation more transparent, let us for a while rewrite (C.4) as
ζ d (s) = h 1 (s) + f 1 (s)ζ d−1 s − 1 2 + g d (s), (C.7)
where the functions above summarize the information in the recursion: (C.8d)
ζ d−k (s) : = ζ T d−k L k+1 ,
In this notation it is not difficult to see, that after k steps we get
ζ d (s) = h 1 (s) + k j=2 h j s − j − 1 2 j−1 i=1 f i s − i − 1 2 + g d (s)+ + k j=2 g d−j+1 s − j − 1 2 j−1 i=1 f i s − i − 1 2 + ζ d−k s − k 2 k j=1 f j s − j − 1 2 . (C.9)
The anchor for the recursion is
ζ 1 (s) : = n∈Z 2πn L d −2s = 2 L d 2π 2s ζ(2s) = : h d (s) . (C.10)
We note that the products of f k functions in (C.9) simplify to
j−1 i=1 f i s − i − 1 2 = L 1 L 2 · · · L j−1 1 2 √ π j−1 Γ s − j−1 2 Γ(s) , (C.11)
hence, performing the recursion for k = d − 1 steps and reinserting the definitions then gives us the following expression
ζ T d L 1 ,...,L d (s) = 2 √ π 1 2π 2s d j=1 π j 2 Γ s − j−1 2 Γ(s) ζ(2s − j + 1)L 1 · · · L j−1 L 2s−j+1 j + + 1 Γ(s) d−1 j=1 L 1 · · · L j−1 1 2 √ π j−1 Γ s − j − 1 2 G s − j − 1 2 ; L j , . . . , L d , (C.12)
with the understanding that for j = 1 the product L 1 . . . L j−1 is simply 1. If we in addition insert the definition of G we can rewrite the complete expression as so the only term that survives is the one where the derivative hits the Γ-function, and we effectively set s = 0 everywhere and the Γ-function to 1. For the first sum the situation is a bit trickier, as there are two Γ-functions involved, whose poles cancel only for even j, meaning that we have to separate even and odd j for the calculation. Let us first take a look at the even j part of the first sum, that is j = 2 , we have d ds
ζ T d L 1 ,...,L d (s) = 2 √ π 1 2π 2s d j=1 π j 2 Γ s − j−1 2 Γ(s) ζ(2s − j + 1)L 1 · · · L j−1 L 2s−j+1 j + + 2 2−s Γ(s) d−1 j=1 L 1 · · · L j−1 L s− j−2 2 j (2π) j/2 n∈Z d−j ∞ n j =1 n j n T d−j Ξ d−j n d−j s−j/2 × × K s−j/2 L j n j n T d−j Ξ d−j n d−j , (C.13) where in this notation Ξ d−j is (d − j) × (d − j)-matrix2 √ π 1 2π 2s d/2 =1 π Γ s − 2 −1 2 Γ(s) ζ(2s − 2 + 1)L 1 · · · L 2 −1 L 2s−2 +1 2 s=0 = = 2 d/2 =1 π − 1 2 L 1 · · · L 2 −1 L 2 −1 2 Γ 1 2 − ζ(1 − 2 ). (C.15)
When j is odd, that is j = 2 − 1, we get d ds
2 √ π 1 2π 2s d/2 =1 π −1/2 Γ (s − + 1) Γ(s) ζ(2s − 2 + 2)L 1 · · · L 2 −2 L 2s−2 +2 2 −1 s=0 = = −2 log(L 1 ) + 4 d/2 −1 =1 L 1 · · · L 2 L 2 2 +1 (−π) ! ζ (−2 ). (C.16)
Before putting everything together, we note that the sums over the modified Bessel functions converge exponentially at s = 0 [83]. We can thus introduce the following notation for their limits
S d−j (L j , . . . , L d ) : = 1 (2π) j/2 n∈Z d−j ∞ n j =1 n T d−j Ξ d−j n d−j n j j 2 K −j/2 L j n j n T d−j Ξ d−j n d−j , (C.17) where S d−j is convergent for d > j > 0. Finally, we obtain ζ T d L 1 ,...,L d (0) = −2 log(L 1 ) + 4 d/2 −1 j=1 L 1 · · · L 2j L 2j 2j+1 (−π) j j! ζ (−2j)+ + 2 d/2 j=1 L 1 · · · L 2j−1 L 2j−1 2j (−2π) j (2j − 1)!! ζ(−(2j − 1)) + 4 d−1 j=1 L 1 · · · L j−1 L j 2 −1 j S d−j (L j+1 , . . . , L d ). (C.18)
The expression above gives us the functional determinant for a d-dimensional torus, and despite its intimidating appearance, it is rather straightforward to handle, since the modified Bessel functions hidden in S converge rapidly to zero as the integers n i increase.
It is instructive to evaluate the d = 2 case. First of all, we note that the sum on the first line of (C.18) is empty for d = 2. The remaining two sums in (C.18) consist only of the j = 1 term, giving us
ζ T 2 L 1 ,L 2 (0) = −2 log L 1 + π 3 L 1 L 2 + 4 L 1 S 1 (L 2 ) = = −2 log L 1 + π 3 L 1 L 2 + 2 n 2 ∈Z ∞ n 1 =1 e −2πn 1 |n 2 | L 1 L 2 n 1 .
Using more standard conventions, see e.g. [43], introducing the modular parameter τ = iL 1 /L 2 , as well as defining q := e 2πiτ , and then performing the sum over n 1 , we obtain
ζ T 2 L 1 ,L 2 (0) = −2 log L 1 + π 3 L 1 L 2 + 2 n 2 ∈Z ∞ n 1 =1 e −2πn 1 |n 2 | L 1 L 2 n 1 = −2 log L 1 − iπ 3 τ − 4 ∞ m=1 log (1 − q m ) = − log L 2 1 η 4 (τ ) , (C.19)
where η is the Dedekind η-function, defined as in (E.5). If we denote the area of the 2-torus by A, where A = L 1 L 2 in our convention, 12 then taking into account the contribution from the zero-mode, the log-determinant and the partition function on the torus can be written as
log det ∆ T 2 = −ζ T 2 L 1 ,L 2 (0) = log L 2 1 η 4 (τ ) , Z(τ ) = √ A exp 1 2 ζ T 2 L 1 ,L 2 (0) = 1 Im(τ )η 2 (τ ) , (C.20)
which is a well known result, see e.g. [43].
C.2 Determinant of the Laplacian on the cut d-torus
We now consider cutting the torus T d L 1 ,...,L d at x 1 = 0 and x 1 = L < L 1 , as shown in figure 4. As discussed in the main body, this gives rise to two subsystems, each of which is a cylinder represented by an interval times a d−1-dimensional torus. Imposing Dirichlet boundary conditions (3.21) the eigenvalues of the Laplacian −∂ a ∂ a are µ m,n 2 ,...,n d = mπ L 2 + λ n 2 ,...,n d , m ∈ N + , n 2 , . . . , n d ∈ Z , (C. 21) with λ n 2 ,...,n d the eigenvalue on T d−1 L 2 ,...,L d . Notice that there is no zero mode now. The spectral ζ-function on this geometry thus takes the form
ζ [0,L]×T d−1 L 2 ,...,L d (s) : = λ ∞ m=1 λ + mπ L 2 −s = L π 2s ζ(2s) + λ ∞ m=1 λ + mπ L 2 −s , (C.22)
where we schematically write λ for the (d−1)-dimensional toroidal part of the eigenvalue, and we explicitly separate the (d−1)-dimensional toroidal zero mode from the rest in the last passage. We can now evaluate the primed sum by means of the identities (E.1), (E.2), and (E.3) collected in appendix E, and we obtain
λ ∞ m=1 λ + mπ L 2 −s = 1 Γ(s) λ ∞ m=1 ∞ 0 dt t s−1 e −t λ+( mπ L ) 2 = 1 Γ(s) λ ∞ 0 dt t s−1 e −tλ ∞ m=1 e −t( mπ L ) 2 = 1 Γ(s) λ ∞ 0 dt t s−1 e −tλ − 1 2 + L 2 √ π t − 1 2 + L √ π t − 1 2 ∞ m=1 e − (mL) 2 t = − 1 2 ζ T d−1 L 2 ,...,L d (s) + L 2 √ π Γ(s − 1/2) Γ(s) ζ T d−1 L 2 ,...,L d (s − 1/2) + 2L s+ 1 2 Γ(s) √ π λ ∞ m=1 m √ λ s− 1 2 K s−1/2 2Lm √ λ . (C.23)
Notice that λ is nothing but n d−1 Ξ d−1 n d−1 as defined in Appendix C.1. Hence, adopting the same notation here, the above term containing the modified Bessel function can be written in terms of the function G as defined in (C.5), and we can finally write the expression for the ζ-function on the cut torus as follows, (C.25)
ζ
Our expression (C.25) agrees with the results of [85] obtained by contour integration.
Let us check that for d = 2 we obtain the well-known result for the log determinant of the Laplacian on a cylinder. In this case T 1 L 2 is nothing but a circle of length L 2 , and according to (C.10), we have
ζ T 1 L 2 (s) = 2 L 2 2π 2s ζ(2s) . (C.26)
Hence, we obtain This is a well-known result in literature, see e.g. [43,44]. It is convenient to leave α general, so that we can easily use the above results for the functional determinants for arbitrary cuts.
log det ∆ [0,L]×S 1 L 2 = −ζ [0,L]×S 1 L 2 (0) = log 2 L L 2 − π 3 L L 2 − 2 ∞ n 2 =1 ∞ n 1 =1 e −4π L L 2 n 1 n 2 n 1 = log 2 L L 2 − π 3 L L 2 + 2 ∞ n 2 =1
C.3 Determinant of powers of the Laplacian on the torus
When calculating the entanglement entropy of the GQLM on a d-torus with d even, the determinants that arise are those of even powers of the Laplacian. On the flat d-torus geometry the higher-derivative conformal operator P z is indeed just the z/2-th power of the standard Laplacian, cf. equation (2.4) in Section 2. In order to generalise our previous result, we first make the observation that, since the flat torus as well as the cut torus are compact manifolds, the spectrum of ∆ k is just given by the set of λ k , where λ is an eigenvalue of ∆ as in (C.2) or (C.21) in the case of the d-torus or the cut d-torus respectively. In particular, the spectral ζ-function corresponding to ∆ k is given by where we write schematically T for either T d L 1 ,...,L d or [0, L] × T d−1 L 2 ,...,L d , and λ for the corresponding eigenvalues (C.2) or (C.21) respectively. This leads to the simple result for the determinants [86] log det ∆ k where the critical case is found by setting k = d/2.
D The winding sector for the d-torus
The goal of this appendix is to compute the winding sector contribution (3.36) that originates from cutting the d-torus, as discussed in Section 3. In order to do so, we first need to solve the classical equations of motions for the n − 1 classical fields, (2.12), obeying the boundary conditions (3.29) as well as (3.34). As discussed in Section 3, the n-th classical field is reabsorbed into the constrained partition functions to create a free one, cf (3.31), thus here we are only concerned with n − 1 classical fields.
To facilitate reading, we list here again equations of motion and explicit conditions which the n − 1 classical fields have to fulfill. The equations of motion are given by The standard way of solving such a partial differential equation is by separation of variables, and here it suffices to separate only the first variable x 1 from the remaining orthogonal d − 1 directions y, as e.g.φ Notice that the last condition above has to be imposed either at x 1 = L A or at x 1 = −L B , or, in other words, we solve for classical fields in the region A and in the region B separately and then we glue the solutions at the boundary. As we discussed in Section 3, these conditions (D.5b)-(D.5c) are not sufficient to specify a solution of the equation of motion (D.5a), which is in general a polynomial of degree z − 1, expressed in terms of z coefficients. A choice of supplementary boundary conditions are given by (3.34), which now imply ∂ n ∆ kφcl i Γ 1 = 0 k = 0, . . . ,
z 2 −2 , ⇒ ∂ 2 −1 x 1 f i (0) = 0 , = 1, . . . , z 2 −1, (D.6a) ∂ n ∆ kφcl i Γ 2 = 0 k = 0, . . . , z 2 −2 , ⇒ ∂ 2 −1 x 1 f i (L A ) = ∂ 2 −1 x 1 f i (−L B ) = 0 , = 1, . . . , z 2 −1 .
(D.6b)
We then solve the equations respectively in A and B, and glue the solutions at the two cuts, that is the whole solution is given by While the explicit value of the coefficients is quite complicated, their i dependence is simple. For instance, for the first few values of z, the functions f i,A read Notice that the functions f i are continuous everywhere on A ∪ B, but they are not differentiable at the cuts Γ 1 , Γ 2 , even though the left and right derivatives exist and are finite. This is not unusual, and it is true already in d = 2, see for example the cylindric case in [26,27]. Now we are ready to evaluate the winding sector contribution (3.36), that is the function
f i (x 1 ) = f i,A (x 1 ), x 1 ∈ [0, L A ] ,f i,A (x 1 ) = 2πR cωi |x 1 | L A , z = 2, 2πR cωi −2 |x 1 | L A 3 + 3 x 1 L A 2 , z = 4,W (n) = φcl i e − n−1 i=1 S[φ cl i ] .
The action is given by the bulk term S 0 (2.5) and the boundary part S ∂ (2.10), and it is clear that only the latter will contribute to W (n). As it turns out, evaluating S ∂ [φ cl i ] results in the following rather simple closed expression (notice that only the highest derivative term contributes to the action (2.10)), 13
S[φ c i ] = g π 2 R 2 cω i ·ω i (−1) z/2 z! (1 − 2 z )B z 1 + L A L B z−1 L 2 · · · L d L z−1 A , (D.9)
where the B z are Bernoulli numbers. Notice that only the case z = d gives a scale invariant winding sector contribution as expected from a critical theory. With this result, we can use the analytic continuation found in appendix F of [27] to write W (n) = ω∈Z n−1 exp −π Λ z (L A , L B , L 2 , . . . , L d ) ω T · T n−1 ω (D.10)
= √ nΛ − n−1 2 z ∞ −∞ dk √ π e −k 2 ω∈Z exp − π Λ z ω 2 − 2i π Λ z k ω n(−1) z/2 z! (1 − 2 z )B z L z−1 A + L z−1 B L z−1 A L z−1 B L 2 · · · L d . (D.12)
The derivative with respect to n at 1 is finally given by
− W (1) = log Λ z − 1 2 − ∞ −∞ dk √ π e −k 2 log ω∈Z exp − π Λ z ω 2 − 2i π Λ z k ω .
(D. 13) In order to make comparison in a more transparent way it is useful to rewrite Λ z in terms of the aspect ratios of the d-torus. Introducing the dimensionless parameters τ , as well as u the parameter controlling the cut, as follows 14) and noticing that L B = L 1 − L A , we can write
τ k = i L 1 L k+1 , k = 1 , . . . , d − 1 , u = L A L 1 , (D.Λ z = g π R 2 c (−1) z/2 z! (1 − 2 z )B z u 1−z + (1 − u) 1−z 1 |τ 1 | . . . |τ d−1 | . (D.15)
In particular, for d = 2 and z = 2, we obtain
Λ 2 = 4π g R 2 c 1 u(1 − u) 1 |τ 1 | .
(D.16) 13 As we discussed the classical fields are not differentiable at the cuts, however left and right derivatives exist and they are finite, so here the integrals are evaluated using left and right limits, exactly as in d = 2 dimensions.
In the semi-infinite limit, that is when |τ 1 | >> 1, the integral in W (1) (D.13) is exponentially suppressed, hence at the leading order in Λ 2 we have −W (1) = 1 2 log (4π g R 2 c ) −
E Useful formulae
Here we collect some useful formulae and definitions of special functions used in the paper.
The integral representation of the Gamma function immediately leads to
−s = 1 Γ(s) ∞ 0 dt t s−1 e − t .
(E.1)
The Poisson summation formula is given by The integral representation of a modified Bessel function is
K ν (z) = 1 2 ∞ 0 du e − z 2 (u+ 1 u ) u ν−1 , (E.3)
and the explicit expression for the special case ν = − 1 2 is
K − 1 2 (x) = π 2 e −x √ x . (E.4)
The Dedekind function is defined as follows η(τ ) := q 1/24 ∞
n=1
(1 − q n ) , q := e 2iπτ . (E.5)
Its expansion for small imaginary argument is given by
η(i|τ |) ≈ e − π 12|τ
field eigenstates {|φ A } and {|φ B } provide orthonormal bases in H A and H B , respectively, and we have used that φ A and φ B are free fields with support on non-overlapping subsets of M to write S[φ] = S[φ A ] + S[φ B ]
Figure 2 :
2The sphere is cut into hemispheres A and B by an entangling cut at the equator.
Figure 4 :
4The torus is cut into cylinders Y A and Y B by the two entangling cuts Γ 1 and Γ 2 .
,...,L d (s) is the spectral ζ-function on the (d − 1)-torus. The auxiliary function G is defined in Appendix C.1, as G(s; L 1 , L 2 , . . . , L d ) : = (5.5)
For
d-cylinders with Dirichlet boundary conditions, using (C.31b) and (C.25), we obtain log det ∆ d/2 [0,L]×T d−1 L 2 ,...,L d = d 2 log(2L) + d 4 ζ T d−1 L 2 ,...,L d
is given by (C.18) with the replacement d → d−1
Figure 5 :
5We plot the final expression for the universal finite term in the entanglement entropy on a half torus (5.17) against the number of spatial dimension d (which is equal to the critical exponent z). We normalise S[T d /2] with respect to the two-dimensional case, and set g = R c = L 1 = · · · = L d = 2L A = 1 in the plot.
23 )
23Let us examine how the different terms in (5.13) behave when the inequalities in (5.22) hold. First, all the functions G (0, L, L 2 , . . . , L d ) (5.8) with L = 2L A , 2(L 1 − L A ), L 1 are exponentially suppressed, since all the elements of the matrix Ξ d−1 diverge, while L is kept fixed. Now consider the term ζ T d−1 L 2 ,...,L d (0) in (5.13). This term is defined in (C.18) with a shift d → d−1 and subsequent relabelling of the torus sides. With the choice (5.22) all the Bessel functions contained in (C.17), and thus in ζ T d−1 L 2 ,...,L d (0), are exponentially suppressed. It then follows that the leading piece of ζ T d−1 L 2 ,...,L d (0) in (C.18) is given by the highest term in the sums, that is
contribution from this term is the logarithm −2 log L 2 . Then the only divergent contributions are coming from the log terms (see e.g. (5.14)) and they cancel, leaving the finite term shown in(5.21).
a D = (d + 1)/2 for Dirichlet boundary conditions, and a N = (d − 1)/2 for Neumann boundary conditions. The GJMS operators are well defined for k = 1, . . . , d/2 and we distinguish between the subcritical case k < d/2 and the critical case k = d/2. Using the above form of the spectral ζ-function the log determinant of P 2k on the d-hemisphere H d with either Neumann or Dirichlet boundary conditions can be found aslog det P 2k,H d D/N = −Z d (0, a D/N , k). (A.2)The expression Z d (0, a D/N , k) should be interpreted as lim s→0 ∂ s Z d (s, a D/N , k). Given (A.2), one can then add the Neumann and the Dirichlet cases to find the log determinant of P 2k on the whole sphere log det P 2k,S d = −Z d (0, a D , k) − Z d(0, a N , k).(A.3)
ζ d (s, a) : = ζ(s, a|1). (A.5) Analytic continuation of ζ d (s, a) is facilitated by the relation (see [80], page 149), ζ d (s, a) = Γ(1 − s)I d (s, a), (A.6) with I d (s, a) given by the integral I d (s, a) z) s−1 e −az (1 − e −z ) n dz. (A.7) I d (s, a) is an entire function in s. The new definition of ζ d (s, a) is analytic for all s except for simple poles at s = 1, . . . , d where it has residues Res s=k ζ d (s, a) = 1
l
(a) are generalised Bernoulli polynomials.A.2 A first step towards Z dIn order to calculate Z d (0, a D/N , k) with Z d as in (A.1), we need to evaluate the derivative at s = 0 of ζ-functions of the formζ D (s) : = m∈N d (a D + m · d) 2 − α 2 −s , (A.13) with α = (d − 1)/2 and a D = d i − (d − 1)/2, with d ∈ N d forDirichlet boundary conditions, as such functions roughly correspond to the factors of Z d . From now on we set d ≡ (1, . . . , 1) as this is the only case relevant to our discussion. In particular, this implies that a D = (d + 1)/2. For Neumann boundary conditions we have to set a N = (d − 1)/2 instead of a D , which makes the term coming from the origin 0 ∈ N d ill-defined. For the Neumann case we thus have to omit the origin from the summation,
2r, a N ). (A.16) Here the R d are residues of the Barnes ζ-function given in (A.8), log ρ d is a Γ-modular form as described in equations (A.34) and (A.33) below, and H r is a harmonic number defined via, H n (r) : = H r−1 2n − H 2r−1 . (A.17)
∂
s ζ D (s) = ∂ s ζ d (2s, a) + ∞ r=1 α 2r r! f (s)sζ d (2s + 2r, a) + f (s)ζ d (2s + 2r, a) + sf (s)∂ s ζ d (2s + 2r, a) . (A.20)
ζ
D (0) = 2ζ d (0, a) + r C d (2r, a) + ∞ r=d/2+1 α 2r r ζ d (2r, a). (A.23)
In section A.3.1, for the case of Dirichlet boundary conditions on the hemisphere, we obtainlog det P 2k,H d D = −ζ d+1 (0, d/2 − k + 1) + ζ d+1 (0, d/2 + k + 1) − M (d,k), (A.38) where the function M (d, k) is defined via equations (A.46), (A.47), and (A.49). Equation (A.38) is valid for k = 1, . . . , d/2.
a) r=(0,...,0,r i ,0,...,0,r j ,0,...
sum vanishes at s = 0 as ζ d only contributes a pole proportional to 1/s while the Pochhammer symbols each contribute a factor of s. The rest of the expression looks just like the ζ-functions in the previous section, so the result follows directly from (A.19) and (A.18),
2r , a) = : M 2 (d, a, k). (A.47) Putting (A.46) and (A.47) together we get the result
the last line we used the definition of the multiple Gamma function (A.34) to rewrite the result, and where we defined M (d, k) : = M 1 (d, a, k) + M 2 (d, a, k). (A.49)We omitted a in M (d, k), because the property B
= 0 we have for the critical case k = d/2[42]
G d ( 1 ) = 0 .
10For z > 1 we can use the fact that ζ(s, z) = ζ(s)
− 1 L
12 ,...,L d (s − 1/2) + G(s; L 1 , . . . , L d ) , (C.4) where we define the function G(s; L 1 , L 2 , . . . , L d )
−k (s) : = G(s; L k+1 , . . . , L d ) .
[ 0 ,
0L]×T d−1 L 2 ,...,L d (s; 2L, L 2 , . . . , L d ).(C.24) , . . . , L d ) .
as the Dedekind function as in (E.5), we see that we can rewrite the functional determinant on the cylinder as log det ∆ [0,L]×S 1 L 2 = log 2α|τ | η 2 (2ατ ) . (C.29)
ζ
T (s, k) : = λ λ k −s = ζ T (ks), (C.30)
these equations on the flat torus given by [−L B , L A ] × [0, L 2 ] × . . . × [0, L d ] with the ends of each of the intervals identified and place the cuts Γ 1 at x 1 = 0 and Γ 2 at x 1 = L A (and thus x 1 = −L B ). With this, the boundary conditions along the x 1 direction (3.29) for i = 1, . . . , n − 1 can be rewritten asφ cl i | Γ 1 (x) =φ cl i (0, y) = 0 , (D.2a) φ cl i | Γ 2 (x) = 2πR cωi =φ cl i (L A ,y) =φ cl i (−L B , y) = 2πR cωi , (D.2b) where we denote x = (x 1 , y) = (x 1 , x 2 , . . . x d ) andω i : = (M n−1 ) ij ω j throughout this section. Notice that the above conditions (D.2a)-(D.2b) have to hold for all the coordinates y in the d−1-dimensional torus, i.e. y ∈ [0, L 2 ] × . . . × [0, L d ]. Along the d−1-toroidal directions we have the periodicity conditions φ cl i (x 1 , y) =φ cl i (x 1 , y + β) , β := (L 2 , . . . , L d ) . (D.3)
cl i (x 1 , y) = f i (x 1 ) g i (y) , i = 1, . . . , n − 1 . (D.4) The boundary condition (D.2b) shows that g i (y) can only be a constant for all i = 1, . . . , n−1, and from now on we set g i (y) = 1 and work only with the functions f i (x 1 ). Hence, the equations of motion and boundary conditions expressed on f i (i = 1, . . . i (L A , y) =φ cl i (−L B , y) = 2πR cωi ⇒ f i (L A ) = f i (−L B ) = 2πR cωi . (D.5c)
same for the functions f i,B after the replacement L A → L B .
(L A , L B , L 2 , . . . , L d ) : = g π R 2 c
for for all k. Again we write these functions explicitly for the critical case,)
…(B.21)
where we define h^N_n(d) and f^N(d) in the last line. In analogy to the Dirichlet case we can define h^N_n(d, d/2) ≡ h^N_n(d) and f^N(d, d/2) ≡ f^N(d), as with this definition equation (B.17) becomes valid.
…obtained from Ξ by removing the first j columns and j rows. From this expression we can now directly calculate ζ'_{T^d_{L_1,…,L_d}}(0), as needed for the determinant. Calculating the derivative of the second sum at s = 0 is simple, since
\[ \frac{1}{\Gamma(\varepsilon)} = \varepsilon + \ldots, \qquad \frac{d}{ds}\,\frac{1}{\Gamma(s)}\bigg|_{s=\varepsilon} = 1 + \ldots, \qquad \varepsilon \ll 1. \qquad (C.14) \]
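As a standard, self-contained illustration of the step from ζ'(0) to a functional determinant (a textbook check, not taken from this appendix): for −d²/dx² on the interval [0, L] with Dirichlet boundary conditions,
\[ \lambda_n = \Big(\frac{\pi n}{L}\Big)^{2}, \qquad \zeta_\Delta(s) = \sum_{n\ge 1}\lambda_n^{-s} = \Big(\frac{L}{\pi}\Big)^{2s}\zeta_R(2s), \]
so that, using ζ_R(0) = −1/2 and ζ'_R(0) = −(1/2) log 2π,
\[ \zeta'_\Delta(0) = 2\log\frac{L}{\pi}\,\zeta_R(0) + 2\,\zeta'_R(0) = -\log(2L), \qquad \det\Delta = e^{-\zeta'_\Delta(0)} = 2L. \]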
\[ \eta(i|\tau|) \sim \frac{1}{\sqrt{|\tau|}}\;e^{-\pi/(12|\tau|)}, \qquad \text{as } |\tau| \to 0^{+}, \qquad (E.6) \]
while for large imaginary argument we have
\[ \eta(i|\tau|) \sim e^{-\pi|\tau|/12}, \qquad \text{as } |\tau| \to \infty. \qquad (E.7) \]
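The small-|τ| behavior (E.6) follows from the large-|τ| behavior (E.7) through the standard modular transformation η(−1/τ) = √(−iτ) η(τ), a one-line consistency check:
\[ \eta(i|\tau|) = \frac{1}{\sqrt{|\tau|}}\,\eta(i/|\tau|) \sim \frac{1}{\sqrt{|\tau|}}\,e^{-\pi/(12|\tau|)}, \qquad |\tau|\to 0^{+}. \]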
See also [28] for a related study of Rényi entropies in quantum dimer models.
Derivatives of the classical fields are in general discontinuous at the cut, but the left- and right-derivatives remain bounded.
See [42, 60, 61] for related studies of conformal anomalies for GJMS operators on spherical manifolds.
Where we have used the ζ-function identity ζ'(−2n) = (−1)^n (2n)! ζ(2n+1) / (2(2π)^{2n}), n ∈ ℕ.
Note that the bosons are not compactified in [33].
See also [75, 76] for related holographic results.
See also the work [79] for a study of signatures of chaos in the QLM.
More rigorously one would first prove the convergence of the infinite sum (A.25), which after rewriting implies the vanishing of the pole and in turn (A.28).
We set here the coupling g = 1 as well as 2πR_c = 1, since we are only interested here in checking that our calculations reproduce well-known results in the literature.
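For completeness, the ζ'(−2n) identity quoted in the footnote above follows from the functional equation of the Riemann ζ-function (a standard check): since sin(πs/2) has a simple zero at s = −2n, differentiating
\[ \zeta(s) = 2^{s}\pi^{s-1}\sin\!\Big(\frac{\pi s}{2}\Big)\Gamma(1-s)\,\zeta(1-s) \]
at s = −2n leaves only the term in which the derivative hits the sine, giving
\[ \zeta'(-2n) = 2^{-2n}\pi^{-2n-1}\cdot\frac{\pi}{2}\,(-1)^{n}\,\Gamma(2n+1)\,\zeta(2n+1) = \frac{(-1)^{n}(2n)!}{2(2\pi)^{2n}}\,\zeta(2n+1). \]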
Acknowledgements
We acknowledge useful discussions with J. Bardarson
Universal entanglement entropy in 2D conformal quantum critical points. B Hsu, M Mulligan, E Fradkin, E.-A Kim, 10.1103/PhysRevB.79.115421arXiv:0812.0203Phys. Rev. 79115421B. Hsu, M. Mulligan, E. Fradkin and E.-A. Kim, "Universal entanglement entropy in 2D conformal quantum critical points", Phys. Rev. B79, 115421 (2009), arXiv:0812.0203.
Entanglement entropy and quantum field theory: A Non-technical introduction. P Calabrese, J L Cardy, 10.1142/S021974990600192Xquant-ph/0505193Workshop on Quantum Entanglement in Physical and Information Sciences. Pisa, Italy4429P. Calabrese and J. L. Cardy, "Entanglement entropy and quantum field theory: A Non-technical introduction", Int. J. Quant. Inf. 4, 429 (2006), quant-ph/0505193, in: "Workshop on Quantum Entanglement in Physical and Information Sciences Pisa, Italy, December 14-18, 2004", 429p.
Entanglement in Many-Body Systems. L Amico, R Fazio, A Osterloh, V Vedral, quant-ph/0703044Rev.Mod.Phys. 80L. Amico, R. Fazio, A. Osterloh and V. Vedral, "Entanglement in Many-Body Systems", Rev.Mod.Phys. 80, 517 (2008), quant-ph/0703044, https://arxiv.org/pdf/quant-ph/0703044v3.
Colloquium: Area laws for the entanglement entropy. J Eisert, 10.1103/RevModPhys.82.277Reviews of Modern Physics. 82277J. Eisert, "Colloquium: Area laws for the entanglement entropy", Reviews of Modern Physics 82, 277 (2010).
Quantum entanglement in condensed matter systems. N Laflorencie, 10.1016/j.physrep.2016.06.008Quantum entanglement in condensed matter systems. 646N. Laflorencie, "Quantum entanglement in condensed matter systems", Physics Reports 646, 1 (2016), Quantum entanglement in condensed matter systems, http://www.sciencedirect.com/science/article/pii/S0370157316301582.
On Geometric Entropy. C Callan, F Wilczek, hep-th/9401072Phys.Lett. 333C. Callan and F. Wilczek, "On Geometric Entropy", Phys.Lett. B333, 55 (1994), hep-th/9401072, https://arxiv.org/abs/hep-th/9401072.
Geometric and renormalized entropy in conformal field theory. C Holzhey, F Larsen, F Wilczek, 10.1016/0550-3213(94)90402-2hep-th/9403108Nucl. Phys. 424443C. Holzhey, F. Larsen and F. Wilczek, "Geometric and renormalized entropy in conformal field theory", Nucl. Phys. B424, 443 (1994), hep-th/9403108.
Entanglement entropy and quantum field theory. P Calabrese, J L Cardy, 10.1088/1742-5468/2004/06/P06002hep-th/0405152J. Stat. Mech. 04066002P. Calabrese and J. L. Cardy, "Entanglement entropy and quantum field theory", J. Stat. Mech. 0406, P06002 (2004), hep-th/0405152.
Holographic Entanglement Entropy. M Rangamani, T Takayanagi, 10.1007/978-3-319-52573-0arXiv:1609.01287Lect. Notes Phys. 9311M. Rangamani and T. Takayanagi, "Holographic Entanglement Entropy", Lect. Notes Phys. 931, pp.1 (2017), arXiv:1609.01287.
Entanglement entropy: holography and renormalization group. T Nishioka, 10.1103/RevModPhys.90.035007arXiv:1801.10352Rev. Mod. Phys. 9035007T. Nishioka, "Entanglement entropy: holography and renormalization group", Rev. Mod. Phys. 90, 035007 (2018), arXiv:1801.10352.
Entanglement entropy, conformal invariance and extrinsic geometry. S N Solodukhin, 10.1016/j.physletb.2008.05.071arXiv:0802.3117Phys. Lett. 665305S. N. Solodukhin, "Entanglement entropy, conformal invariance and extrinsic geometry", Phys. Lett. B665, 305 (2008), arXiv:0802.3117.
Distributional Geometry of Squashed Cones. D V Fursaev, A Patrushev, S N Solodukhin, 10.1103/PhysRevD.88.044054arXiv:1306.4000Phys. Rev. 8844054D. V. Fursaev, A. Patrushev and S. N. Solodukhin, "Distributional Geometry of Squashed Cones", Phys. Rev. D88, 044054 (2013), arXiv:1306.4000.
Correlation functions in theories with Lifshitz scaling. V Keranen, W Sybesma, P Szepietowski, L Thorlacius, 10.1007/JHEP05(2017)033arXiv:1611.09371JHEP. 170533V. Keranen, W. Sybesma, P. Szepietowski and L. Thorlacius, "Correlation functions in theories with Lifshitz scaling", JHEP 1705, 033 (2017), arXiv:1611.09371.
Topological order and conformal quantum critical points. E Ardonne, P Fendley, E Fradkin, 10.1016/j.aop.2004.01.004cond-mat/0311466Annals Phys. 310493E. Ardonne, P. Fendley and E. Fradkin, "Topological order and conformal quantum critical points", Annals Phys. 310, 493 (2004), cond-mat/0311466.
Free k scalar conformal field theory. C Brust, K Hinterbichler, 10.1007/JHEP02(2017)066arXiv:1607.07439JHEP. 170266C. Brust and K. Hinterbichler, "Free k scalar conformal field theory", JHEP 1702, 066 (2017), arXiv:1607.07439.
Partition function of free conformal fields in 3-plet representation. M Beccaria, A A Tseytlin, 10.1007/JHEP05(2017)053arXiv:1703.04460JHEP. 170553M. Beccaria and A. A. Tseytlin, "Partition function of free conformal fields in 3-plet representation", JHEP 1705, 053 (2017), arXiv:1703.04460.
Hidden global conformal symmetry without Virasoro extension in theory of elasticity. Y Nakayama, 10.1016/j.aop.2016.06.010arXiv:1604.00810Annals Phys. 372Y. Nakayama, "Hidden global conformal symmetry without Virasoro extension in theory of elasticity", Annals Phys. 372, 392 (2016), arXiv:1604.00810.
T Griffin, K T Grosvenor, P Horava, Z Yan, 10.1007/s00220-015-2461-2arXiv:1412.1046Scalar Field Theories with Polynomial Shift Symmetries. 340985T. Griffin, K. T. Grosvenor, P. Horava and Z. Yan, "Scalar Field Theories with Polynomial Shift Symmetries", Commun. Math. Phys. 340, 985 (2015), arXiv:1412.1046.
Cascading Multicriticality in Nonrelativistic Spontaneous Symmetry Breaking. T Griffin, K T Grosvenor, P Horava, Z Yan, 10.1103/PhysRevLett.115.241601arXiv:1507.06992Phys. Rev. Lett. 115241601T. Griffin, K. T. Grosvenor, P. Horava and Z. Yan, "Cascading Multicriticality in Nonrelativistic Spontaneous Symmetry Breaking", Phys. Rev. Lett. 115, 241601 (2015), arXiv:1507.06992.
The dS / CFT correspondence. A Strominger, 10.1088/1126-6708/2001/10/034hep-th/0106113JHEP. 011034A. Strominger, "The dS / CFT correspondence", JHEP 0110, 034 (2001), hep-th/0106113.
Higher Spin Realization of the dS/CFT Correspondence. D Anninos, T Hartman, A Strominger, 10.1088/1361-6382/34/1/015009arXiv:1108.5735Class. Quant. Grav. 3415009D. Anninos, T. Hartman and A. Strominger, "Higher Spin Realization of the dS/CFT Correspondence", Class. Quant. Grav. 34, 015009 (2017), arXiv:1108.5735.
Entanglement entropy of 2D conformal quantum critical points: hearing the shape of a quantum drum. E Fradkin, J E Moore, 10.1103/PhysRevLett.97.050404cond-mat/0605683Phys. Rev. Lett. 9750404E. Fradkin and J. E. Moore, "Entanglement entropy of 2D conformal quantum critical points: hearing the shape of a quantum drum", Phys. Rev. Lett. 97, 050404 (2006), cond-mat/0605683.
Universal Behavior of Entanglement in 2D Quantum Critical Dimer Models. B Hsu, E Fradkin, 10.1088/1742-5468/2010/09/P09004arXiv:1006.1361J. Stat. Mech. 10099004B. Hsu and E. Fradkin, "Universal Behavior of Entanglement in 2D Quantum Critical Dimer Models", J. Stat. Mech. 1009, P09004 (2010), arXiv:1006.1361.
Shannon and entanglement entropies of one- and two-dimensional critical wave functions. J.-M Stéphan, 10.1103/PhysRevB.80.184421 Physical Review B 80 J.-M. Stéphan, "Shannon and entanglement entropies of one- and two-dimensional critical wave functions", Physical Review B 80, 184421 (2009).
M Oshikawa, arXiv:1007.3739Boundary Conformal Field Theory and Entanglement Entropy in Two-Dimensional Quantum Lifshitz Critical Point. M. Oshikawa, "Boundary Conformal Field Theory and Entanglement Entropy in Two-Dimensional Quantum Lifshitz Critical Point", arXiv:1007.3739.
Logarithmic terms in entanglement entropies of 2D quantum critical points and Shannon entropies of spin chains. M P Zaletel, J H Bardarson, J E Moore, 10.1103/PhysRevLett.107.020402arXiv:1103.5452Phys. Rev. Lett. 10720402M. P. Zaletel, J. H. Bardarson and J. E. Moore, "Logarithmic terms in entanglement entropies of 2D quantum critical points and Shannon entropies of spin chains", Phys. Rev. Lett. 107, 020402 (2011), arXiv:1103.5452.
Entanglement entropy and mutual information of circular entangling surfaces in the 2+1-dimensional quantum Lifshitz model. T Zhou, X Chen, T Faulkner, E Fradkin, 10.1088/1742-5468/2016/09/093101arXiv:1607.01771J. Stat. Mech. 160993101T. Zhou, X. Chen, T. Faulkner and E. Fradkin, "Entanglement entropy and mutual information of circular entangling surfaces in the 2+1-dimensional quantum Lifshitz model", J. Stat. Mech. 1609, 093101 (2016), arXiv:1607.01771.
Renyi entanglement entropies in quantum dimer models : from criticality to topological order. J.-M Stephan, G Misguich, V Pasquier, 10.1088/1742-5468/2012/02/P02003arXiv:1108.1699J. Stat. Mech. 12022003J.-M. Stephan, G. Misguich and V. Pasquier, "Renyi entanglement entropies in quantum dimer models : from criticality to topological order", J. Stat. Mech. 1202, P02003 (2012), arXiv:1108.1699.
Finite Size Dependence of the Free Energy in Two-dimensional Critical Systems. J L Cardy, I Peschel, 10.1016/0550-3213(88)90604-9Nucl. Phys. 300377J. L. Cardy and I. Peschel, "Finite Size Dependence of the Free Energy in Two-dimensional Critical Systems", Nucl. Phys. B300, 377 (1988).
Shannon and entanglement entropies of one-and two-dimensional critical wave functions. J.-M Stéphan, S Furukawa, G Misguich, V Pasquier, https:/link.aps.org/doi/10.1103/PhysRevB.80.184421Phys. Rev. B. 80184421J.-M. Stéphan, S. Furukawa, G. Misguich and V. Pasquier, "Shannon and entanglement entropies of one-and two-dimensional critical wave functions", Phys. Rev. B 80, 184421 (2009), https://link.aps.org/doi/10.1103/PhysRevB.80.184421.
Entanglement in gapless resonating-valence-bond states. J.-M Stéphan, H Ju, P Fendley, R G Melko, 10.1088/1367-2630/15/1/015004New Journal of Physics. 1515004J.-M. Stéphan, H. Ju, P. Fendley and R. G. Melko, "Entanglement in gapless resonating-valence-bond states", New Journal of Physics 15, 015004 (2013), http://dx.doi.org/10.1088/1367-2630/15/1/015004.
Scaling of entanglement in 2 + 1-dimensional scale-invariant field theories. X Chen, G Y Cho, T Faulkner, E Fradkin, 10.1088/1742-5468/2015/02/P02010arXiv:1412.3546J. Stat. Mech. 15022010X. Chen, G. Y. Cho, T. Faulkner and E. Fradkin, "Scaling of entanglement in 2 + 1-dimensional scale-invariant field theories", J. Stat. Mech. 1502, P02010 (2015), arXiv:1412.3546.
Two-cylinder entanglement entropy under a twist. X Chen, W Witczak-Krempa, T Faulkner, E Fradkin, 10.1088/1742-5468/aa668aJournal of Statistical Mechanics: Theory and Experiment. 43104X. Chen, W. Witczak-Krempa, T. Faulkner and E. Fradkin, "Two-cylinder entanglement entropy under a twist", Journal of Statistical Mechanics: Theory and Experiment 2017, 043104 (2017), http://dx.doi.org/10.1088/1742-5468/aa668a.
Gravity duals of Lifshitz-like fixed points. S Kachru, X Liu, M Mulligan, 10.1103/PhysRevD.78.106005arXiv:0808.1725Phys. Rev. 78106005S. Kachru, X. Liu and M. Mulligan, "Gravity duals of Lifshitz-like fixed points", Phys. Rev. D78, 106005 (2008), arXiv:0808.1725.
M Taylor, arXiv:0812.0530Non-relativistic holography. M. Taylor, "Non-relativistic holography", arXiv:0812.0530.
Aspects of Holographic Entanglement Entropy. S Ryu, T Takayanagi, 10.1088/1126-6708/2006/08/045hep-th/0605073JHEP. 060845S. Ryu and T. Takayanagi, "Aspects of Holographic Entanglement Entropy", JHEP 0608, 045 (2006), hep-th/0605073.
Holographic derivation of entanglement entropy from AdS/CFT. S Ryu, T Takayanagi, 10.1103/PhysRevLett.96.181602hep-th/0603001Phys. Rev. Lett. 96181602S. Ryu and T. Takayanagi, "Holographic derivation of entanglement entropy from AdS/CFT", Phys. Rev. Lett. 96, 181602 (2006), hep-th/0603001.
Holographic geometries for condensed matter applications. V Kernen, L Thorlacius, arXiv:1307.2882Proceedings, 13th Marcel Grossmann Meeting on Recent Developments in Theoretical and Experimental General Relativity, Astrophysics, and Relativistic Field Theories (MG13). 13th Marcel Grossmann Meeting on Recent Developments in Theoretical and Experimental General Relativity, Astrophysics, and Relativistic Field Theories (MG13)Stockholm, SwedenV. Kernen and L. Thorlacius, "Holographic geometries for condensed matter applications", arXiv:1307.2882, in: "Proceedings, 13th Marcel Grossmann Meeting on Recent Developments in Theoretical and Experimental General Relativity, Astrophysics, and Relativistic Field Theories (MG13): Stockholm, Sweden, July 1-7, 2012", 902-921p.
Lifshitz entanglement entropy from holographic cMERA. S A Gentle, S Vandoren, 10.1007/JHEP07(2018)013arXiv:1711.11509JHEP. 180713S. A. Gentle and S. Vandoren, "Lifshitz entanglement entropy from holographic cMERA", JHEP 1807, 013 (2018), arXiv:1711.11509.
Entanglement Entropy in Lifshitz Theories. T He, J M Magan, S Vandoren, 10.21468/SciPostPhys.3.5.034arXiv:1705.01147SciPost Phys. 334T. He, J. M. Magan and S. Vandoren, "Entanglement Entropy in Lifshitz Theories", SciPost Phys. 3, 034 (2017), arXiv:1705.01147.
M R Mohammadi Mozaffar, A Mollabashi, 10.1007/JHEP07(2017)120arXiv:1705.00483Entanglement in Lifshitz-type Quantum Field Theories. 1707120M. R. Mohammadi Mozaffar and A. Mollabashi, "Entanglement in Lifshitz-type Quantum Field Theories", JHEP 1707, 120 (2017), arXiv:1705.00483.
Determinants and conformal anomalies of GJMS operators on spheres. J S Dowker, 10.1088/1751-8113/44/11/115402arXiv:1010.0566J. Phys. 44115402J. S. Dowker, "Determinants and conformal anomalies of GJMS operators on spheres", J. Phys. A44, 115402 (2011), arXiv:1010.0566, https://arxiv.org/abs/1010.0566.
P Di Francesco, P Mathieu, D Senechal, Conformal Field Theory. New YorkSpringer-VerlagP. Di Francesco, P. Mathieu and D. Senechal, "Conformal Field Theory", Springer-Verlag (1997), New York.
Applied Conformal Field Theory. P H Ginsparg, hep-th/9108028, in: "Les Houches Summer School in Theoretical Physics: Fields, Strings, Critical Phenomena Les Houches. FranceP. H. Ginsparg, "Applied Conformal Field Theory", hep-th/9108028, in: "Les Houches Summer School in Theoretical Physics: Fields, Strings, Critical Phenomena Les Houches, France, June 28-August 5, 1988", 1-168p.
Conformally invariant powers of the laplacian, i: Existence. L J M C R Graham, R Jenne, G A J Sparling, s2-46Journal of London Mathematical Society. 557L. J. M. C.R. Graham, R. Jenne and G. A. J. Sparling, "Conformally invariant powers of the laplacian, i: Existence", Journal of London Mathematical Society s2-46, 557 (1992).
The asymptotics of the Laplacian on a manifold with boundary II. T P Branson, P B Gilkey, D V Vassilevich, hep-th/9504029 Boll. Union. Mat. Ital. 11B T. P. Branson, P. B. Gilkey and D. V. Vassilevich, "The asymptotics of the Laplacian on a manifold with boundary II", Boll. Union. Mat. Ital. 11B, 39-67 (1997), hep-th/9504029, https://arxiv.org/pdf/hep-th/9504029.
Conformally invariant powers of the Laplacian: A Complete non-existence theorem. A R Gover, K Hirachi, 10.1090/S0894-0347-04-00450-3Journal of the American Mathematical Society. 17A. R. Gover and K. Hirachi, "Conformally invariant powers of the Laplacian: A Complete non-existence theorem", Journal of the American Mathematical Society 17, 389 (2004), http://dx.doi.org/10.1090/S0894-0347-04-00450-3.
Laplacian operators and Q-curvature on conformally Einstein manifolds. A Gover, math/0506037A. Gover, "Laplacian operators and Q-curvature on conformally Einstein manifolds", math/0506037, https://arxiv.org/pdf/math/0506037.
Conformal Powers of the Laplacian via Stereographic Projection. C R Graham, 10.3842/SIGMA.2007.121 Symmetry, Integrability and Geometry: Methods and Applications 3, 121 C. R. Graham, "Conformal Powers of the Laplacian via Stereographic Projection", Symmetry, Integrability and Geometry: Methods and Applications 3, 121 (2007), http://dx.doi.org/10.3842/SIGMA.2007.121.
Juhl's Formulae for GJMS Operators and Q-Curvatures. C Fefferman, C R Graham, arXiv:1203.0360C. Fefferman and C. R. Graham, "Juhl's Formulae for GJMS Operators and Q-Curvatures", arXiv:1203.0360, https://arxiv.org/pdf/1203.0360v1.
A Quartic Conformally Covariant Differential Operator for Arbitrary Pseudo-Riemannian Manifolds (Summary). S Paneitz, 10.3842/SIGMA.2008.036 Symmetry, Integrability and Geometry: Methods and Applications 4, 036 S. Paneitz, "A Quartic Conformally Covariant Differential Operator for Arbitrary Pseudo-Riemannian Manifolds (Summary)", Symmetry, Integrability and Geometry: Methods and Applications 4, 036 (2008), http://dx.doi.org/10.3842/SIGMA.2008.036.
On conformally covariant powers of the Laplacian. A , arXiv:0905.3992A. Juhl, "On conformally covariant powers of the Laplacian", arXiv:0905.3992, https://arxiv.org/pdf/0905.3992v3.
Explicit formulas for GJMS-operators and Q-curvatures. A , arXiv:1108.0273A. Juhl, "Explicit formulas for GJMS-operators and Q-curvatures", arXiv:1108.0273, https://arxiv.org/pdf/1108.0273v4.
H Baum, A , Conformal Differential Geometry. Birkhäuser BaselH. Baum and A. Juhl, "Conformal Differential Geometry", Birkhäuser Basel (2010).
Entanglement entropy and conformal field theory. P Calabrese, J Cardy, 10.1088/1751-8113/42/50/504005arXiv:0905.4013J. Phys. 42504005P. Calabrese and J. Cardy, "Entanglement entropy and conformal field theory", J. Phys. A42, 504005 (2009), arXiv:0905.4013.
Vacuum energy on orbifold factors of spheres. P Chang, J Dowker, 10.1016/0550-3213(93)90223-CNuclear Physics B. 395407P. Chang and J. Dowker, "Vacuum energy on orbifold factors of spheres", Nuclear Physics B 395, 407 (1993), http://www.sciencedirect.com/science/article/pii/055032139390223C.
Effective action in spherical domains. J S Dowker, 10.1007/BF02101749hep-th/9306154Commun. Math. Phys. 162633J. S. Dowker, "Effective action in spherical domains", Commun. Math. Phys. 162, 633 (1994), hep-th/9306154, https://arxiv.org/abs/hep-th/9306154.
Numerical evaluation of spherical GJMS determinants for even dimensions. J S Dowker, arXiv:1310.0759 J. S. Dowker, "Numerical evaluation of spherical GJMS determinants for even dimensions", arXiv:1310.0759 (2013), https://arxiv.org/abs/1310.0759.
Functional determinants on spheres and sectors. J S Dowker, 10.1063/1.530826, 10.1063/1.531171 hep-th/9312080 J. Math. Phys. 35, 4989 J. S. Dowker, "Functional determinants on spheres and sectors", J. Math. Phys. 35, 4989 (1994), hep-th/9312080, [Erratum: J. Math. Phys. 36, 988 (1995)].
Polyakov formulas for GJMS operators from AdS/CFT. D E Diaz, 10.1088/1126-6708/2008/07/103arXiv:0803.0571JHEP. 0807103D. E. Diaz, "Polyakov formulas for GJMS operators from AdS/CFT", JHEP 0807, 103 (2008), arXiv:0803.0571.
Holographic Weyl anomaly for GJMS operators: one Laplacian to rule them all. F Bugini, D E Daz, 10.1007/JHEP02(2019)188arXiv:1811.10380JHEP. 1902188F. Bugini and D. E. Daz, "Holographic Weyl anomaly for GJMS operators: one Laplacian to rule them all", JHEP 1902, 188 (2019), arXiv:1811.10380.
Cornering gapless quantum states via their torus entanglement. W Witczak-Krempa, L E Hayward Sierens, R G Melko, 10.1103/PhysRevLett.118.077202arXiv:1603.02684Phys. Rev. Lett. 11877202W. Witczak-Krempa, L. E. Hayward Sierens and R. G. Melko, "Cornering gapless quantum states via their torus entanglement", Phys. Rev. Lett. 118, 077202 (2017), arXiv:1603.02684.
Entanglement entropy in free quantum field theory. H Casini, M Huerta, 10.1088/1751-8113/42/50/504007arXiv:0905.2562J. Phys. 42504007H. Casini and M. Huerta, "Entanglement entropy in free quantum field theory", J. Phys. A42, 504007 (2009), arXiv:0905.2562.
Holographic torus entanglement and its renormalization group flow. P Bueno, W Witczak-Krempa, 10.1103/PhysRevD.95.066007arXiv:1611.01846Phys. Rev. 9566007P. Bueno and W. Witczak-Krempa, "Holographic torus entanglement and its renormalization group flow", Phys. Rev. D95, 066007 (2017), arXiv:1611.01846.
A technical note on the calculation of GJMS (Rac and Di) operator determinants. J S Dowker, arXiv:1807.11872J. S. Dowker, "A technical note on the calculation of GJMS (Rac and Di) operator determinants", arXiv:1807.11872.
Heat kernel methods for Lifshitz theories. A O Barvinsky, D Blas, M Herrero-Valea, D V Nesterov, G Pérez-Nadal, C F Steinwachs, 10.1007/JHEP06(2017)063 Journal of High Energy Physics 2017, 063 A. O. Barvinsky, D. Blas, M. Herrero-Valea, D. V. Nesterov, G. Pérez-Nadal and C. F. Steinwachs, "Heat kernel methods for Lifshitz theories", Journal of High Energy Physics 2017, 063 (2017), http://dx.doi.org/10.1007/JHEP06(2017)063.
Universal entanglement for higher dimensional cones. P Bueno, R C Myers, 10.1007/JHEP12(2015)168arXiv:1508.00587JHEP. 1512168P. Bueno and R. C. Myers, "Universal entanglement for higher dimensional cones", JHEP 1512, 168 (2015), arXiv:1508.00587.
Universality of corner entanglement in conformal field theories. P Bueno, R C Myers, W Witczak-Krempa, 10.1103/PhysRevLett.115.021602arXiv:1505.04804Phys. Rev. Lett. 11521602P. Bueno, R. C. Myers and W. Witczak-Krempa, "Universality of corner entanglement in conformal field theories", Phys. Rev. Lett. 115, 021602 (2015), arXiv:1505.04804.
Universal corner entanglement from twist operators. P Bueno, R C Myers, W Witczak-Krempa, 10.1007/JHEP09(2015)091arXiv:1507.06997JHEP. 150991P. Bueno, R. C. Myers and W. Witczak-Krempa, "Universal corner entanglement from twist operators", JHEP 1509, 091 (2015), arXiv:1507.06997.
Corner contributions to holographic entanglement entropy. P Bueno, R C Myers, 10.1007/JHEP08(2015)068arXiv:1505.07842JHEP. 150868P. Bueno and R. C. Myers, "Corner contributions to holographic entanglement entropy", JHEP 1508, 068 (2015), arXiv:1505.07842.
Bounds on corner entanglement in quantum critical states. P Bueno, W Witczak-Krempa, 10.1103/PhysRevB.93.045131arXiv:1511.04077Phys. Rev. 9345131P. Bueno and W. Witczak-Krempa, "Bounds on corner entanglement in quantum critical states", Phys. Rev. B93, 045131 (2016), arXiv:1511.04077.
On Shape Dependence and RG Flow of Entanglement Entropy. I R Klebanov, T Nishioka, S S Pufu, B R Safdi, 10.1007/JHEP07(2012)001arXiv:1204.4160JHEP. 12071I. R. Klebanov, T. Nishioka, S. S. Pufu and B. R. Safdi, "On Shape Dependence and RG Flow of Entanglement Entropy", JHEP 1207, 001 (2012), arXiv:1204.4160.
Exact results for corner contributions to the entanglement entropy and Rnyi entropies of free bosons and fermions in 3d. H Elvang, M Hadjiantonis, 10.1016/j.physletb.2015.08.017arXiv:1506.06729Phys. Lett. 749383H. Elvang and M. Hadjiantonis, "Exact results for corner contributions to the entanglement entropy and Rnyi entropies of free bosons and fermions in 3d", Phys. Lett. B749, 383 (2015), arXiv:1506.06729.
A holographic proof of the universality of corner entanglement for CFTs. R.-X Miao, 10.1007/JHEP10(2015)038arXiv:1507.06283JHEP. 151038R.-X. Miao, "A holographic proof of the universality of corner entanglement for CFTs", JHEP 1510, 038 (2015), arXiv:1507.06283.
Corner contributions to holographic entanglement entropy in non-conformal backgrounds. D.-W Pang, 10.1007/JHEP09(2015)133arXiv:1506.07979JHEP. 1509133D.-W. Pang, "Corner contributions to holographic entanglement entropy in non-conformal backgrounds", JHEP 1509, 133 (2015), arXiv:1506.07979.
Entanglement Entropy for Singular Surfaces in Hyperscaling violating Theories. M Alishahiha, A F Astaneh, P Fonda, F Omidi, 10.1007/JHEP09(2015)172arXiv:1507.05897JHEP. 1509172M. Alishahiha, A. F. Astaneh, P. Fonda and F. Omidi, "Entanglement Entropy for Singular Surfaces in Hyperscaling violating Theories", JHEP 1509, 172 (2015), arXiv:1507.05897.
Entanglement entropy of local operators in quantum Lifshitz theory. T Zhou, 10.1088/1742-5468/2016/09/093106Journal of Statistical Mechanics: Theory and Experiment. 93106T. Zhou, "Entanglement entropy of local operators in quantum Lifshitz theory", Journal of Statistical Mechanics: Theory and Experiment 2016, 093106 (2016), http://dx.doi.org/10.1088/1742-5468/2016/09/093106.
Entanglement Evolution in Lifshitz-type Scalar Theories. M R Mohammadi Mozaffar, A Mollabashi, 10.1007/JHEP01(2019)137arXiv:1811.11470JHEP. 1901137M. R. Mohammadi Mozaffar and A. Mollabashi, "Entanglement Evolution in Lifshitz-type Scalar Theories", JHEP 1901, 137 (2019), arXiv:1811.11470.
Scrambling in the quantum Lifshitz model. E Plamadeala, E Fradkin, 10.1088/1742-5468/aac136 Journal of Statistical Mechanics: Theory and Experiment 2018, 063102 E. Plamadeala and E. Fradkin, "Scrambling in the quantum Lifshitz model", Journal of Statistical Mechanics: Theory and Experiment 2018, 063102 (2018), http://dx.doi.org/10.1088/1742-5468/aac136.
Zeta and q-Zeta Functions and Associated Series and Integrals. H Srivastava, J Choi, H. Srivastava and J. Choi, "Zeta and q-Zeta Functions and Associated Series and Integrals", Elsevier (2012), London, 245-397p, http://www.sciencedirect.com/science/article/pii/B9780123852182000037.
The theory of the multiple gamma function. E Barnes, Trans. Camb. Philos. Soc. 19 E. Barnes, "The theory of the multiple gamma function", Trans. Camb. Philos. Soc. 19, 374 (1904), https://archive.org/details/transactions19camb.
Multiple gamma function and its application to computation of series. V S Adamchik, math/0308074 Ramanujan J. V. S. Adamchik, "Multiple gamma function and its application to computation of series", submitted to Ramanujan J. (2003), math/0308074, https://arxiv.org/abs/math/0308074.
Multidimensional extension of the generalized Chowla-Selberg formula. E Elizalde, 10.1007/s002200050472hep-th/9707257Commun. Math. Phys. 198E. Elizalde, "Multidimensional extension of the generalized Chowla-Selberg formula", Commun. Math. Phys. 198, 83 (1998), hep-th/9707257, https://arxiv.org/abs/hep-th/9707257.
Ten Physical Applications of Spectral Zeta Functions. E Elizalde, Springer-VerlagBerlin Heidelberg2 editionE. Elizalde, "Ten Physical Applications of Spectral Zeta Functions", 2 edition, Springer-Verlag Berlin Heidelberg (2012).
Zeta functions of Dirac and Laplace-type operators over finite cylinders. K Kirsten, P Loya, J Park, 10.1016/j.aop.2006.03.003Annals of Physics. 321K. Kirsten, P. Loya and J. Park, "Zeta functions of Dirac and Laplace-type operators over finite cylinders", Annals of Physics 321, 1814 (2006), http://www.sciencedirect.com/science/article/pii/S0003491606000716.
Multiplicative anomaly and zeta factorization. E Elizalde, M Tierz, 10.1063/1.1646447hep-th/0402186J. Math. Phys. 451168E. Elizalde and M. Tierz, "Multiplicative anomaly and zeta factorization", J. Math. Phys. 45, 1168 (2004), hep-th/0402186.
| [] |
[
"Transitioning to human interaction with AI systems: New challenges and opportunities for HCI professionals to enable human-centered AI",
"Transitioning to human interaction with AI systems: New challenges and opportunities for HCI professionals to enable human-centered AI"
] | [
"Wei Xu [email protected] \nCenter for Psychological Sciences\nZhejiang University\nHangzhouChina\n",
"Marvin J Dainoff \nDept. of Psychology\nMiami University\nOxfordOhio\n",
"Liezhong Ge \nCenter for Psychological Sciences\nZhejiang University\nHangzhouChina\n",
"Zaifeng Gao \nDepartment of Psychology and Behavioral Sciences\nZhejiang University\nHangzhouChina\n",
"Wei Xu ",
"Wei Xu "
] | [
"Center for Psychological Sciences\nZhejiang University\nHangzhouChina",
"Dept. of Psychology\nMiami University\nOxfordOhio",
"Center for Psychological Sciences\nZhejiang University\nHangzhouChina",
"Department of Psychology and Behavioral Sciences\nZhejiang University\nHangzhouChina"
] | [] | While AI has benefited humans, it may also harm humans if not appropriately developed.The priority of current HCI work should focus on transiting from conventional human interaction with non-AI computing systems to interaction with AI systems. We conducted a high-level literature review and a holistic analysis of current work in developing AI systems from an HCI perspective. Our review and analysis highlight the new changes introduced by AI technology and the new challenges that HCI professionals face when applying the human-centered AI (HCAI) approach in the development of AI systems. We also identified seven main issues in human interaction with AI systems, which HCI professionals did not encounter when developing non-AI computing systems. To further enable the implementation of the HCAI approach, we identified new HCI opportunities tied to specific HCAI-driven design goals to guide HCI professionals addressing these new issues. Finally, our assessment of current HCI methods shows the limitations of these methods in support of developing AI systems. We propose the alternative methods that can help overcome these limitations and effectively help HCI professionals apply the HCAI approach to the development of AI systems. We also offer strategic recommendation for HCI professionals to 3 effectively influence the development of AI systems with the HCAI approach, eventually developing HCAI systems. | 10.1080/10447318.2022.2041900 | [
"https://export.arxiv.org/pdf/2105.05424v4.pdf"
] | 236,428,680 | 2105.05424 | e0e51b697701cca5ee1a46348c73a9eb80ef8094 |
Transitioning to human interaction with AI systems: New challenges and opportunities for HCI professionals to enable human-centered AI
Wei Xu [email protected]
Center for Psychological Sciences
Zhejiang University
HangzhouChina
Marvin J Dainoff
Dept. of Psychology
Miami University
OxfordOhio
Liezhong Ge
Center for Psychological Sciences
Zhejiang University
HangzhouChina
Zaifeng Gao
Department of Psychology and Behavioral Sciences
Zhejiang University
HangzhouChina
Wei Xu
Wei Xu
Transitioning to human interaction with AI systems: New challenges and opportunities for HCI professionals to enable human-centered AI
1 Biographies
Wei Xu received his Ph.D. in Psychology with specialization in HCI/Human Factors and his M.S. in Computer Science from Miami University in 1997. He is a Professor of HCI/Human Factors at the Center for Psychological Sciences of Zhejiang University, China. His research interests include human-AI interaction, human-computer interaction, and aviation human factors.
Marvin J. Dainoff received his Ph.D. in Psychology from University of Rochester in 1969. He is a Professor Emeritus of Psychology/Human Factors at Miami University. He is a Past President of the Human Factors and Ergonomics Society. His research interests include sociotechnical system solutions for complex systems, human-computer interaction, and workplace ergonomics.
Liezhong Ge received his Ph.D. in Psychology with specialization in human factors from Zhejiang University, China, in 1992. He is a Professor of HCI/Human Factors at the Center for Psychological Sciences of Zhejiang University. His research interests include human-computer interaction, user experience, and facial recognition.
Zaifeng Gao received his Ph.D. in Psychology from Zhejiang University, China, in 2009. He is a Professor of Psychology/Human Factors at the Department of Psychology and Behavioral Sciences, Zhejiang University. His research interests include engineering psychology, autonomous driving, and cognitive psychology.
* Address correspondence to Wei Xu (corresponding author). arXiv:2105.05424v2 [cs.HC] 7 Jan 2022
Keywords: artificial intelligence; human-artificial intelligence interaction; human-centered artificial intelligence; human-computer interaction; human factors; autonomy
While AI has benefited humans, it may also harm humans if not appropriately developed. The priority of current HCI work should focus on transitioning from conventional human interaction with non-AI computing systems to interaction with AI systems. We conducted a high-level literature review and a holistic analysis of current work in developing AI systems from an HCI perspective. Our review and analysis highlight the new changes introduced by AI technology and the new challenges that HCI professionals face when applying the human-centered AI (HCAI) approach in the development of AI systems. We also identified seven main issues in human interaction with AI systems, which HCI professionals did not encounter when developing non-AI computing systems. To further enable the implementation of the HCAI approach, we identified new HCI opportunities tied to specific HCAI-driven design goals to guide HCI professionals in addressing these new issues. Finally, our assessment of current HCI methods shows the limitations of these methods in support of developing AI systems. We propose alternative methods that can help overcome these limitations and effectively help HCI professionals apply the HCAI approach to the development of AI systems. We also offer strategic recommendations for HCI professionals to effectively influence the development of AI systems with the HCAI approach, eventually developing HCAI systems.
Introduction
History is repeating
When personal computers (PCs) first emerged in the 1980s, the development of their applications basically adopted a "technology-centered approach," ignoring the needs of ordinary users. With the popularity of the PC, the problems of user experience were gradually exposed. With the initiative primarily driven by human factors, psychology, and computer science, the field of human-computer interaction (HCI) came into being (Baecker et al., 1995). As an interdisciplinary field, HCI adopts a "human-centered design" approach to develop computing products that meet user needs. Accordingly, SIGCHI (the Special Interest Group on Computer-Human Interaction) of the Association for Computing Machinery (ACM) was established in 1982. The first SIGCHI Annual Conference on Human Factors in Computing Systems was held in 1983, and the main theme of "human-centered design" has been reflected in the annual conferences since 1983 (Grudin, 2005).
History seems to be repeating itself; we are currently facing a challenge similar to that of the 1980s as we enter the era of artificial intelligence (AI). While AI technology has brought many benefits to humans, it is having a profound impact on people's work and lives. Unfortunately, the development of AI systems is mainly driven by a "technology-centered design" approach (e.g., Shneiderman, 2020a, 2020b; Xu, 2019a; Zheng et al., 2017). Many AI professionals are primarily dedicated to studying algorithms, rather than providing useful AI systems that meet user needs, resulting in the failure of many AI systems (Hoffman et al., 2016; Lazer et al., 2014; Lieberman, 2009; Yampolskiy, 2019). Specifically, the AI Incident Database has collected more than 1000 AI-related accidents (McGregor et al., 2021), such as an autonomous car killing a pedestrian, a trading algorithm causing a market "flash crash" where billions of dollars transfer between parties, and a facial recognition system causing an innocent person to be arrested. When more and more AI-based decision-making systems such as enterprise and government services are put into use, decisions made using biased "thinking" will directly affect people's daily work and life, potentially harming them. AI systems trained with distorted data may produce biased "thinking," easily amplify prejudice and inequality for certain user groups, and even harm individual humans. Like the "dual use" nature of nuclear energy and biochemical technology, the rewards and potential risks brought by AI coexist. Since the development and use of AI is a decentralized global phenomenon, the barriers to entry are relatively low, making it more difficult to control (Li, 2018).
Just as PC technology brought the HCI field into existence 40 years ago, history has again brought HCI professionals to a juncture. This time the consequence of ignoring the "human-centered design" philosophy is obviously more severe. Concerning the potential harm to humans from AI technology in the long run, prominent critics have expressed their concern that AI may create a "machine world" and eventually replace humans (Hawking, Musk et al., 2015; Russell et al., 2015). Leading researchers have also raised deep concerns (e.g., Stephanidis, Salvendy et al., 2019; Shneiderman, 2020a, 2020c; Hancock, 2019; Salmon, Hancock & Carden, 2019; Endsley, 2017, 2018; Lau et al., 2020). Salmon, Hancock & Carden (2019) urged: "the ball is in our court, but it won't be for much longer… we must take action." Hancock (2019) further described the current boom in the development of AI-based autonomous technologies as "a horse has left the stables," while Salmon (2019) believes that "we find ourselves in a similar situation: chasing a horse that has really started to run wild." Stephanidis, Salvendy et al. (2019) conducted a comprehensive review that identifies seven grand challenges with intelligent interactive systems and proposed research priorities to address the challenges. In addition, twenty-three scholars from MIT, Stanford, Harvard and other universities jointly published a paper on machine behavior in Nature (Rahwan et al., 2019), urging that society should fully understand unique AI-based machine behavior and its impact on humans and society.
To respond to the challenges, Stanford University established a "Human-Centered AI" research institution, focusing on "ethically aligned design" (Li, 2018; Donahoe, 2018).
Shneiderman and Xu have moved forward further, proposing a human-centered AI (HCAI) approach (Shneiderman, 2020a, 2020b, 2020c; Xu, 2019a, 2019b; Xu & Ge, 2020). Shneiderman (2020a, 2020d) further proposes a design framework and a governance structure of ethical AI for the implementation of HCAI. Others have also promoted HCAI from different perspectives; for instance, Cerejo (2021) proposed an alternative development process to implement HCAI. Overall, the promotion of HCAI is still in its infancy; there is a need to further advance the HCAI approach and promote it to a wide audience, especially the HCI community, as HCAI is rooted in the "human-centered design" philosophy that has been adopted by HCI.
Unique autonomous characteristics of AI technology
We encounter many unique issues and risks introduced by AI technology which do not exist when dealing with non-AI computing systems. HCI professionals need to fully understand these unique characteristics of AI technology so that we can take effective approaches to address their consequences.
AI technology has brought in unique autonomous characteristics, which specifically refer to the ability of an AI-based autonomous system to perform specific tasks independently. AI systems can exhibit unique machine behaviors and evolve to gain certain levels of human-like cognitive/intelligent capabilities, such as self-executing and self-adaptive abilities; they may successfully operate under certain situations that were possibly not fully anticipated, and the results may not be deterministic (den Broek et al., 2017; Kaber, 2018; Rahwan et al., 2019; Xu & Ge, 2020). These autonomous characteristics are typically used to define the degree of intelligence of an AI system (Watson & Scheidt, 2005).
In contrast, automation represents the typical characteristics of non-AI computing systems. Automation is the ability of a system to perform well-defined tasks and to produce deterministic results, typically relying on a fixed set of rules or algorithms based on mechanical or digital computing technology. Automation cannot perform tasks that it was not designed for; in such cases, the operators must manually take over the automated system (Kaber, 2018). For example, the autopilot on a civil aircraft flight deck can carry out certain flight tasks previously undertaken by pilots, moving them away from their traditional role of directly controlling the aircraft to a more supervisory role managing the airborne automation. However, in abnormal situations for which the autopilot was not designed, pilots must immediately intervene to take over manual control of the aircraft (Xu, 2007). Table 1 compares the characteristics of non-AI based automated systems and AI-based autonomous systems from an HCI perspective (Kaber, 2018; Xu, 2021; den Broek et al., 2017; Bansal et al., 2019a). As Table 1 shows, the fundamental difference in capabilities between the two is that autonomous systems can be built with certain levels of human-like cognitive/intelligent abilities (Kaber, 2018; Xu, 2021). On the other hand, both automated and autonomous systems require human intervention in operations for safety. These differences and similarities between the two are of great significance for HCI solutions in developing AI systems.

Table 1 Comparative analysis between non-AI-based automation and AI-based autonomy (based on Kaber, 2018; den Broek et al., 2017; Bansal et al., 2019a; Rahwan et al., 2019; Xu, 2021)

Characteristics | Non-AI Based Automated Systems (with varied levels of automation) | AI-Based Autonomous Systems (with varied degrees of autonomy)
Examples | Conventional office software, washing machine, elevator, automated manufacturing lines | Smart speakers, intelligent decision systems, autonomous vehicles, intelligent robots
Human-like sensing ability | Limited | Yes (with advanced technologies)
Human-like cognitive abilities (pattern recognition, learning, reasoning, etc.) | No | Yes (abilities vary across design scenarios)
Human-like self-executing ability | No (requires human manual activation and intervention according to predefined rules) | Yes (performs operations independently in specific situations; abilities vary across scenarios)
Human-like self-adaptive ability to unpredictable environments | No | Yes (abilities vary across scenarios)
Operation outcomes | Deterministic | Non-deterministic, unexpected
Human intervention | Human intervention is required | Human must be the ultimate decision maker

It should be noted that some of the characteristics of AI systems listed in Table 1 depend on a high degree of autonomy to be delivered by future AI technology, which may not exist today and may only become available as predicted (e.g., Bansal et al., 2019b; Demir et al., 2018a). The distinction between automation technology and AI-based autonomy technology has been debated (e.g., O'Neill et al., 2020; Kaber, 2018). Kaber (2018) believes that recent human-automation interaction research has confused the concepts of automated systems and autonomous systems, which has led to inappropriate expectations for design and misdirected criticism of design methods; he differentiates the two concepts with a new framework, where key requirements of design for autonomous systems include capabilities such as agent viability in a target context, agent self-governance in goal formulation and fulfilment of roles, and independence in defined task performance. Kaber (2018) argues that the traditional automation taxonomies (e.g., Sheridan & Verplank, 1978; Parasuraman, Sheridan & Wickens, 2000) were not referring to autonomous agents capable of formulating objectives and determining courses of action and make no mention of, for example, self-sufficiency or self-governance in context.
We argue that non-AI based automated systems and AI-based autonomous systems are not differentiated in terms of level of automation; the essential difference between the two depends on whether there is a certain level of human-like cognitive/intelligent capability enabled by AI technology. However, AI technology is a double-edged sword. On the one side, with the help of AI technologies (e.g., algorithms, deep machine learning, big data), an AI system can complete certain tasks in certain scenarios that could not be done with previous automation technology (Kaber, 2018; Madni & Madni, 2018; Xu, 2021); on the other hand, the unique autonomous characteristics of AI systems, such as self-evolving machine behaviors and potentially non-deterministic/unexpected outcomes, may cause biased and unexpected system output without human supervision, harming humans and society (Kaber, 2018; Rahwan et al., 2019; Xu, 2021).
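To make the contrast in Table 1 concrete in code, here is a minimal illustrative sketch (a toy example of our own, not taken from the cited works; all class and function names are hypothetical): a fixed-rule automated controller versus a learning agent whose decision rule evolves with feedback.

import random

class AutomatedController:
    """Non-AI automation: fixed rules, deterministic output,
    and no behavior outside its predefined envelope."""
    def act(self, temperature: float) -> str:
        # The same input always yields the same action (deterministic).
        return "cool" if temperature > 25.0 else "idle"

class LearningAgent:
    """Toy autonomous agent: adapts its threshold from feedback,
    so its behavior evolves and outcomes are not fixed in advance."""
    def __init__(self) -> None:
        self.threshold = 25.0
    def act(self, temperature: float) -> str:
        # Occasional exploration makes the outcome non-deterministic.
        if random.random() < 0.1:
            return random.choice(["cool", "idle"])
        return "cool" if temperature > self.threshold else "idle"
    def learn(self, feedback: float) -> None:
        # Self-adaptation: the decision rule itself changes over time.
        self.threshold += 0.5 * feedback

if __name__ == "__main__":
    machine, agent = AutomatedController(), LearningAgent()
    print(machine.act(26.0))    # always "cool"
    print(agent.act(26.0))      # usually "cool", occasionally explores
    agent.learn(feedback=+1.0)  # user found it too cold: raise threshold
    print(agent.threshold)      # 25.5 -- the rule has evolved

The point of the sketch is only the structural difference: the automated controller's rule is frozen at design time, while the agent's rule (and therefore its observable machine behavior) drifts with experience, which is exactly why human oversight remains necessary.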
An emerging form of the human-machine relationship in the AI era
AI technology has also brought in a paradigmatic change in the human-machine relationship. The focus of HCI work has traditionally been on the interaction between humans and non-AI computing systems. These systems rely on fixed logic rules and algorithms to respond to input from humans; humans interact with these systems in a form of "stimulus-response" (Farooq & Grudin, 2016). These computing systems (e.g., automated machines) primarily work as assistive tools, supporting humans' monitoring and execution of tasks (Wickens et al., 2015).
As we are currently entering the AI-based "autonomous world," AI technology has given new roles to machines in human-machine systems, driven by its autonomous characteristics. The interaction between humans and AI systems is essentially the interaction between humans and the autonomous/AI agents in the AI systems. As AI technology advances, intelligent agents can be developed to exhibit unique behavior and possess the autonomous characteristics with certain levels of human-like intelligent abilities summarized in Table 1 (Rahwan et al., 2019; Xu, 2021). With AI technology, a machine can evolve from an assistive tool that primarily supports human operations to a potential collaborative teammate of a human operator, playing the dual roles of "assistive tool + collaborative teammate" (e.g., Brill et al., 2018; Lyons et al., 2018; O'Neill et al., 2020). Thus, the human-machine relationship in the AI era has added a new type of human-AI collaboration, often called "human-machine teaming" (e.g., Brill et al., 2018; Brandt et al., 2018; Shively et al., 2018).
Exploratory research related to human-AI collaboration has begun in several work domains. Examples include intelligent air traffic management systems (Kistan et al., 2018), operator-intelligent robot teaming (Calhoun et al., 2018), pilot-system teaming on the intelligent flight deck (Brandt et al., 2018), and driver-system teaming in high-level autonomous vehicles (de Visser et al., 2018; Navarro, 2018). We will discuss these in detail in Section 3.2.
Human-machine teaming also has a "double-edged sword" effect. On the one hand, AI technologies (e.g., deep machine learning, big data of collective domain knowledge) can make human decision-making operations more effective under some operating scenarios than those of individual operators using non-AI systems; on the other hand, if the human-centered approach is not followed in the development of AI systems, there is no guarantee that humans have the final decision-making authority over the systems in unexpected scenarios, and the potentially unexpected and non-deterministic outcomes of the systems may cause ethical and safety failures (Yampolskiy, 2019; McGregor et al., 2021). Thus, AI technology has brought in new challenges and opportunities for HCI design.
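As one illustration of what "humans have the final decision-making authority" can mean in a teaming design, here is a minimal sketch of a common human-in-the-loop pattern (our own hedged example; the dataclass, the 0.9 confidence threshold, and the function names are hypothetical, not prescribed by the cited works):

from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    confidence: float  # 0.0 .. 1.0, supplied by the AI model
    rationale: str     # explanation shown to the human (explainable AI)

def execute_with_human_authority(rec: Recommendation,
                                 high_stakes: bool,
                                 confirm=input) -> str:
    """The AI agent only recommends; a human confirms whenever the
    action is high-stakes or the model is not confident."""
    if high_stakes or rec.confidence < 0.9:
        print(f"AI suggests '{rec.action}' ({rec.confidence:.0%}): {rec.rationale}")
        if confirm("Approve? [y/N] ").strip().lower() != "y":
            return "deferred to human operator"
    return f"executed: {rec.action}"

# Example: a low-confidence, high-stakes recommendation is routed to the human.
rec = Recommendation("reroute flight", 0.62, "storm cell ahead on current path")
print(execute_with_human_authority(rec, high_stakes=True,
                                   confirm=lambda _: "y"))

The design choice to surface a rationale alongside the confidence score reflects the explainability goals discussed later; the gate itself guarantees that unexpected or low-confidence machine behavior never executes without a human decision.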
The research questions and the aims of the paper
Based on the previous discussions, as AI technology becomes increasingly ubiquitous in people's work and lives, the machines being used by humans are no longer the "conventional" computing systems as we have known them. From the perspective of our work focus as HCI professionals, we are witnessing the transition from "conventional" human-computer interaction to human interaction with AI-based systems that exhibit new qualities (Rahwan et al., 2019; Kaber, 2018; Xu, 2021). This vision of work further challenges the HCI community to prepare for a future that requires the design of AI systems whose behaviors we cannot fully anticipate and for work that we do not know enough about (Stephanidis, Salvendy et al., 2019; Hancock, 2020; Lau et al., 2018).
There are three research questions that we intend to answer in this paper:
1. What are the challenges to HCI professionals to develop human-centered AI systems?
2. What are the opportunities for HCI professionals to lead in applying HCAI to address the challenges?
3. How can current HCI approaches be improved to apply HCAI?
The purposes of this paper are to identify the new challenges and opportunities for HCI professionals as we deal with human interaction with AI systems, and to further advance HCAI by urging HCI professionals to take action addressing the new challenges in their work.
To answer the three questions, we conducted a high-level literature review and analysis, in conjunction with our previous work in promoting HCAI. The rest of this paper is organized as follows: (1) highlights of the challenges for HCI professionals to implement HCAI as we transition to human interaction with AI systems (Section 2); (2) the opportunities for HCI professionals to address the challenges (Section 3); (3) analyses of the gaps in current HCI approaches for applying HCAI and our recommendations for closing these gaps (Section 4). We present our conclusions in Section 5.
New challenges for HCI professionals to develop human-centered AI systems
Identifying new challenges in human interaction with AI systems
Research and applications of human interaction with AI systems are not new; many different research agendas have been investigated over the past several years. People have promoted this work under a variety of labels, such as human-AI/machine teaming (e.g., Brill et al., 2018; Brandt et al., 2018), human-AI interaction (e.g., Amershi et al., 2019; Yang, Steinfeld et al., 2020), human-agent interaction (e.g., Prada & Paiva, 2014), human-autonomous system interaction, and human-AI symbiosis (e.g., Nagao, 2019). While there are different focuses across these studies, all of them essentially investigate human interaction with the "machines" in AI systems (i.e., intelligent agents, AI agents, autonomous systems) driven by AI technology; that is, humans interact with AI systems.
Thus, our goal was to answer our first research question: What are the new challenges for HCI professionals to develop human-centered AI systems as we transition from human interaction with conventional non-AI systems to interaction with AI systems? To this end, our high-level literature review and analysis focused on the following two aspects.
• The research and application work that have been done in human interaction with AI systems • The unique issues related to AI systems that HCI professionals did not encounter in conventional HCI work (i.e., human interaction with non-AI systems)
The following four electronic databases were used to find the related papers over the last 10 years as of May 2, 2021: the Association for Computing Machinery (ACM) Digital Library, IEEE Xplore Digital Library, Google Scholar, and ResearchGate. As a result, we found about 890 related papers;
we categorized the issues covered in these papers into 10 groups, and then consolidated them into seven groups based on a further analysis of the primary references cited in this paper. We believe that the categorized seven main issues (Table 2) represent the primary HCI challenges and reveal the significant differences between the familiar HCI concerns of human interaction with non-AI systems (see the Familiar HCI Concerns with Non-AI Systems column in Table 2) and the new HCI challenges of human interaction with AI systems (see the New HCI Challenges column in Table 2). Two recoverable rows of Table 2 read:
• Machine behavior (new HCI challenge): AI systems can be developed to exhibit unique machine behaviors with potentially biased and unexpected outcomes; the machine behavior may evolve as the machine learns (Rahwan et al., 2019). Primary HCAI design goal: human-controlled AI; detailed analysis in Section 3.1.
• Human-machine collaboration (familiar HCI concern): human interaction with non-AI computing systems, where the machine primarily works as an assistive tool and there is no collaboration between humans and machines. New HCI challenge: the intelligent agents of AI systems may be developed to work as teammates with humans to form human-AI collaborative relationships, although there is debate on the topic (Brill et al., 2018; O'Neill et al., 2020).
As listed in Table 2, HCI professionals did not encounter these new HCI challenges in human interaction with non-AI systems. Furthermore, based on the HCAI approach (Xu, 2019a), we specified the primary HCAI-driven design goals for each of the main issues, as shown in the Primary HCAI Design Goals column (to be further discussed in Section 2.2). The Detailed Analyses & References column indicates the section number in this paper where we specifically discuss the findings.
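The screening and grouping step described above can be prototyped in a few lines; a minimal hedged sketch follows (the keyword map below is hypothetical, since the paper's seven-issue coding was performed analytically, not by string matching):

from collections import Counter

ISSUE_KEYWORDS = {
    "machine behavior": ["machine behavior", "unexpected outcome"],
    "human-machine collaboration": ["teaming", "teammate", "collaboration"],
}

def categorize(titles):
    """First-pass bucketing of retrieved paper titles into issue groups."""
    tally = Counter()
    for title in titles:
        text = title.lower()
        for issue, keywords in ISSUE_KEYWORDS.items():
            if any(k in text for k in keywords):
                tally[issue] += 1
    return tally

print(categorize(["Human-AI teaming in aviation",
                  "Machine behavior and society"]))
# Counter({'human-machine collaboration': 1, 'machine behavior': 1})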
It should be noted that the literature review was qualitative in nature; it may not be complete, and there may be other new challenges yet to be revealed. However, we argue that these findings are sufficient to reveal the trending new challenges and opportunities for HCI professionals working on human interaction with AI systems, and they are sufficient to compel us to make a call urging HCI professionals to take action addressing the unique issues in AI systems.
Also, we assessed these new challenges and opportunities for HCI professionals from a holistic perspective, rather than taking a deep dive into individual issues. Such a holistic perspective allows us to develop a strategic view of future HCI work in the development of AI systems, so that we can offer our strategic recommendations to the HCI community.
Based on the analyses so far, we can draw the following initial conclusions. As AI technology becomes increasingly ubiquitous in people's work and lives, the machines used daily by humans are no longer just "conventional" computing systems, but AI systems whose unique characteristics pose new challenges to HCI design. As HCI professionals, we need to drive human-centered solutions that address these new challenges in a timely and effective manner. The HCI community must understand these new challenges and reassess the scope and methods of the current HCI approach regarding how we can effectively develop human-centered AI systems. Such changes will inevitably bring about a paradigmatic shift in HCI research and application in the AI era, driving new design thinking and HCI approaches in developing AI systems.
Advancing the human-centered AI (HCAI) approach to address the new challenges
While the technology-centered approach is dominating the development of AI technology, researchers have individually explored a range of human-centered approaches to address the unique issues introduced by AI technology. Examples include humanistic design research (Auernhammer, 2020), participatory design (Neuhauser & Kreps, 2011), inclusive design (Spencer et al., 2018), interaction design (Winograd, 1996), human-centered computing (Brezillon, 2003; Ford et al., 2015; Hoffman et al., 2004), and social responsibility (Riedl, 2019). Each approach provides a specific perspective and allows examination of specific aspects of an AI system design.
In response to the possible negative ethical and moral impacts of AI systems, in 2018 Stanford University established a "human-centered AI" research institution. The strategy emphasizes two aspects of work: technology and ethical design. They have realized that the next frontier for the development of AI systems cannot be just technology; it must also be ethical and beneficial to humans, with AI augmenting human capabilities rather than replacing them (Li, 2018; Donahoe, 2018).
Shneiderman and Xu have moved forward further, proposing a human-centered AI (HCAI) approach (Shneiderman, 2020a, 2020b, 2020c; Xu, 2019a, 2019b). Specifically, Xu (2019a) proposed a comprehensive approach for developing human-centered AI (HCAI). The HCAI approach includes three primary, interdependent aspects of work (technology, human factors, and ethics), promoting the concept that we must keep humans at the central position when developing AI systems. Shneiderman (2020a) promotes HCAI by providing a two-dimensional design guidelines framework and a comprehensive framework for the governance of ethical AI (Shneiderman, 2020d). Other researchers have also promoted the HCAI approach from a variety of perspectives, such as design process (Cerejo, 2021), human-centered explainable AI (Ehsan et al., 2020; Bond et al., 2019), trustworthy AI (He et al., 2021), and human-centered machine learning (Sperrle et al., 2021; Kaluarachchi et al., 2021).
However, there are gaps in previous HCAI work. First, most of the previous HCAI work was aimed primarily at either a broad audience or the AI community, but not specifically at the HCI community, leaving it unclear what actions HCI professionals should take to apply HCAI in their work and address the new challenges that we did not encounter in human interaction with non-AI systems. Second, most of the previous work promoted HCAI at a high level, with no specific HCAI-driven design goals that can be closely tied to the new HCI challenges in human interaction with AI systems; consequently, HCI professionals could not get direct guidance for their HCI work. Third, there was no holistic assessment that specifically identified, from an HCI perspective, the unique challenges as we transition to interaction with AI systems, with the result that no specific opportunities tied to HCAI-driven design goals have been identified to guide HCI professionals in addressing the new issues in developing HCAI systems. Lastly, the previous HCAI work has not included a comprehensive assessment of current HCI approaches (e.g., methods, process, skillset) to determine whether gaps exist in enabling HCAI in future HCI work.
Thus, to further advance HCAI specifically within the HCI community, we need to specify HCAI-driven design goals, tie these to the unique challenges and opportunities for HCI professionals, and then carry out an assessment of current HCI approaches. We believe that such a goal-driven approach will help further advance HCAI, guiding HCI professionals to take advantage of specific opportunities to address the new challenges in AI systems and eventually develop true HCAI systems.
To achieve the goal of further advancing HCAI, we first specified the HCAI-driven design goals and then tied these goals to the new challenges identified, as shown in the Primary HCAI Design Goals column in Table 2, so that HCI professionals can fully understand what HCAI design goals to aim for when addressing these new HCI challenges. We further elaborate the HCAI framework (Xu, 2019a) by mapping these HCAI design goals across the three interdependent aspects of work: Technology, Human Factors, and Ethics (see the three smaller circles around the big "Human" circle at the center of Figure 1).
Figure 1. The Human-Centered AI (HCAI) framework with specified design goals (adapted from Xu, 2019a)
(1) "Human Factors": We start from the needs of humans and implement the humancentered design approach advocated by the HCI community (e.g., user research, modeling, iterative user interface prototyping and testing) in the research and development of AI systems. The design goals are to develop usable & explainable AI and AI systems that guarantee human-driven decisionmaking (see the design goals next to the "Human Factors" circle as depicted in Figure 1).
(2) "Technology": We promote an approach for the development of AI technology by integrating human roles into human-machine systems and by taking complementary advantages of machine intelligence and human intelligence (see 3.3 for details). The design goals are to develop human-controlled AI and augment human abilities, rather than replacing humans (see the design goals next to the "Technology" circle in Figure 1).
(3) "Ethics": We must guarantee fairness, justice, privacy, and accountability in developing AI systems as ethical issues are much more significant in AI systems than conventional non-AI systems. The design goals are to develop ethical & responsible AI and AI systems that guarantee human-driven decision-making (see the design goals next to the "Ethics" circle in Figure 1).
Furthermore, the HCAI approach illustrated in Figure 1 is characterized as follows.
(1) Placing humans at the center. All three primary aspects of work defined in the HCAI approach place humans at the center. Specifically, the "technology" work ensures that humans are kept at the center of AI systems through a deep integration between machine intelligence and human intelligence to augment human capabilities. The "human factors" work ensures that AI systems are human-controllable by keeping humans as the ultimate decision makers, usable by providing effective interaction design, and explainable by providing understandable output to humans. The "ethics" work aims to address the specific ethical issues in AI systems by delivering ethical AI and providing "meaningful human control" for responsibility and accountability (e.g., van Diggelen et al., 2020; Beckers et al., 2019).
(2) The interdependency of human factors, technology, and ethics. The HCAI approach advocates synergy across the three aspects of work, as illustrated by the lines connecting them in Figure 1. For example, if the impact of AI on humans (e.g., ethics) is not considered in design, AI systems cannot achieve the human-centered goals and may eventually harm humans, even though the AI technology deployed is technically made "more like humans." On the other hand, ethically designed AI systems emphasize augmenting human capabilities rather than replacing humans. AI systems, such as autonomous weapons and vehicles, need to provide an effective meaningful human control mechanism (e.g., a human-controllable interface through HCI design) to ensure that human operators can quickly and effectively take over control of the systems in an emergency (van Diggelen et al., 2020). Also, the system should be able to track the accountability of human operators through the meaningful human control mechanism (see more in Section 3.7).
(3) Systematic design thinking. From a technical point of view, the HCAI approach considers humans and machines as a system and seeks to develop the complementarity of machine intelligence and human intelligence within the framework of human-machine systems. From the HCI perspective, the HCAI approach emphasizes the perspective of the human-machine-environment system: an effective AI system should be the optimal match between human needs (physiology, psychology, etc.), AI technology, and environment (physics, culture, organization, society, etc.). From the perspective of ethical design, the HCAI approach systematically considers factors such as ethics, morality, law, and fairness. Therefore, the HCAI approach emphasizes that the development of AI systems is not just a technological project, but a sociotechnical project that needs collaboration across disciplines.
It should be pointed out that we specify the HCAI design goals herein as a minimum; HCI professionals need to collaborate with AI professionals to develop AI systems that meet these specific design goals across the three aspects of their work, so that we can achieve the overall goal of HCAI: reliable, safe, and trustworthy AI (Shneiderman, 2020a; Xu, 2019a). The HCAI framework is scalable, implying that we may add more specific design goals across the three aspects of work (i.e., human factors, technology, ethics) to address additional challenges in AI systems as we move forward in the future.
To sum up, Section 2 has answered the first research question: What are the challenges to HCI professionals to develop human-centered AI systems? Based on our high-level literature review and analysis, and driven by the HCAI approach, we have identified, across the seven main issues, the unique challenges that HCI professionals need to address in the AI era where humans interact with AI systems.
The emergence of these AI-related unique issues is inevitable as technology advances, just like the emergence of HCI in the 1980s, when HCI professionals faced the issues arising from personal computer technology. Today, we are simply facing a new type of "machine": AI systems that present unique characteristics and challenges, urging us to take action. HCI professionals must fully understand the new challenges in human interaction with AI systems, so that we can adopt new design thinking about how to effectively address these challenges in the development of AI systems.
New Opportunities for HCI professionals to enable HCAI
The HCI community has recognized that new technologies present both challenges and opportunities to HCI professionals (e.g., Stephanidis, Salvendy et al., 2019; Shneiderman et al., 2016). Specifically, Stephanidis, Salvendy et al. (2019) identified grand challenges and offered comprehensive recommendations on the research priorities to address these grand challenges with intelligent interactive systems. As discussed previously, our approach is to focus on a holistic understanding of the unique challenges of human interaction with AI systems as compared to human interaction with non-AI systems from an HCAI perspective. More specifically, we identify the new HCI opportunities for HCI professionals to address the new challenges based on the HCAI approach and specific HCAI design goals.
This section is organized as follows: (1) highlights of the new challenges across the seven main issues, as listed in Table 2, for HCI professionals to address; (2) a review of the overall status of current research and application in each of the seven main issues from the HCI perspective; and (3) highlights of the new HCI opportunities where HCI professionals can make (or continue to make) contributions to developing AI systems, driven by the specific HCAI design goals.
From expected machine behavior to potentially unexpected behavior
The machine behavior (system output) of non-AI computing systems is typically expected, since the design of such systems is based on fixed rules or algorithms. Examples include washing machines, elevators, and conventional office software. However, the behavioral outcome of AI systems could be non-deterministic and unexpected. Researchers are raising the alarm about the unintended consequences, which can produce negative societal effects (e.g., Yampolskiy, 2019; McGregor et al., 2021). Machine behavior in AI systems also has a special ecological form (Rahwan et al., 2019). Currently, the majority of people studying machine behavior are AI and computer science professionals without formal training in behavioral science (Epstein et al., 2018; Leibo et al., 2018). Leading researchers in the AI community call for other disciplines to join (Rahwan et al., 2019).
The HCAI design goal is to develop human-controlled AI through manageable machine behavior in AI systems in order to avoid biased and unexpected machine behaviors (see Figure 1).
Obviously, the multidisciplinary HCI field can play a significant role here (Bromham et al., 2016). HCI professionals can help ensure that we avoid generating extreme or unexpected behaviors and reduce biases generated from AI technologies such as machine learning (Zhang et al., 2020; Leibo et al., 2018). This is also aligned with the "ethical AI" design goal defined in the HCAI approach (Figure 1). Machine behavior is one of the unique characteristics that distinguishes HCI work for AI systems from conventional HCI work for non-AI systems. HCI professionals must fully understand the design implications of machine behavior in developing AI systems and find effective ways to manage the unique behaviors of AI systems in order to deliver HCAI-based systems. Driven by the HCAI approach, HCI professionals need to put humans and their needs first in the development process (e.g., data collection, algorithm data training, testing) and partner with AI professionals to manage the development of machine behavior.
With many of today's AI systems being derived from machine learning methods, studying the mechanisms behind a machine's behavior will require assessing how machine learning methods and processes impact machine behavior (Ribeiro et al., 2016). Many AI systems are based on supervised, reinforcement, and interactive learning approaches that require work done by humans, such as providing an enormous amount of labeled data for algorithm training. Initial work has started. As an example of HCAI-aligned design approaches, based on a "human-centered machine learning" approach (Fiebrink et al., 2018; Kaluarachchi et al., 2021), interactive machine learning aims to avoid concerns over fairness, accountability, and transparency (Fiebrink et al., 2018). The interactive machine learning approach allows users to participate in algorithm training iteratively by selecting, marking, and/or generating training examples to interact with required functions of the system, supported by HCI design and evaluation methods. The interactive machine learning approach also pays special attention to the interaction between humans and the machine learning process. From the HCAI perspective, the interactive machine learning approach emphasizes human roles, goals, and capabilities in development, so that an AI system can deliver better results than humans or algorithms working alone (Amershi et al., 2014).
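To make this concrete, the sketch below illustrates one common interactive machine learning pattern, uncertainty-based querying, in which the model repeatedly asks a human to label the examples it is least sure about. This is a minimal illustration of the general idea, not the specific systems cited above; the function and parameter names are our own, and the labeling step (ask_human) stands in for whatever labeling UI the HCI design provides.

```python
# Minimal sketch of an interactive (human-in-the-loop) learning loop using
# uncertainty sampling. Assumes scikit-learn and NumPy are available, and
# that rounds does not exceed the size of the unlabeled pool.
import numpy as np
from sklearn.linear_model import LogisticRegression

def interactive_training(X_labeled, y_labeled, X_pool, ask_human, rounds=10):
    """ask_human(x) is a placeholder for a labeling UI; it returns a label."""
    model = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        model.fit(X_labeled, y_labeled)
        # Query the pooled example the model is most uncertain about.
        proba = model.predict_proba(X_pool)
        uncertainty = 1.0 - proba.max(axis=1)
        i = int(np.argmax(uncertainty))
        label = ask_human(X_pool[i])   # the human supplies the label
        # Fold the newly labeled example back into the training set.
        X_labeled = np.vstack([X_labeled, X_pool[i]])
        y_labeled = np.append(y_labeled, label)
        X_pool = np.delete(X_pool, i, axis=0)
    return model
```

Designing the ask_human step (how examples are presented, how labels and corrections are collected) is precisely where HCI expertise enters the loop.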
The unique machine behavior creates challenges for developing AI systems, but also opportunities for HCI professionals to play a role in future HCI work.
Application of HCI approaches for managing machine behavior. HCI professionals may leverage HCI methods (e.g., iterative design and evaluation with end users) to continuously improve the design until undesirable results are minimized. A machine acquires its behavior during development with AI technology; the behavior is shaped by many factors, such as algorithms, architecture (e.g., learning parameters, knowledge representation), training, and data. For instance, HCI professionals may translate user needs into data needs and understand how to generate, train, optimize, and test AI-generated behaviors during development. Machine behavior can be shaped by exposing it to particular training data. Substantial human effort is necessary to annotate or characterize the data in the process of developing autonomous capabilities (e.g., labeling that a pedestrian is in the scene) (Acuna et al., 2018; Amershi et al., 2014). For instance, classification algorithms for images and text are trained to optimize accuracy on a specific set of human-labeled datasets; the choice and labeling of the dataset and the features represented can substantially influence the behavior (Bolukbasi et al., 2016; Buolamwini et al., 2018). HCI professionals need to figure out an effective approach that can bring users' expectations into tuning the algorithm to prevent biased responses.
Continued improvement of AI systems during behavior evolution. HCI professionals need to study the evolution of AI-based machine behavior for continued improvement of AI systems. For instance, after release to the market, algorithms, learning, and training data will all affect the evolution of an AI system's behaviors. An autonomous vehicle will change its behaviors over time through software and hardware upgrades; a product recommendation algorithm makes recommendations based on continuous input from users, updating its recommendations accordingly. The AI community recognizes that it is not easy to define the expected result due to users' subjective expectations (Pásztor, 2018); HCI professionals need to find a collaborative method for improving the design through re-training based on user feedback, and for collecting training data and defining the expected results that users hope to obtain from AI systems, continuously improving the design of AI systems.
Enhancement of software testing approaches. The testing methods in traditional software engineering are based on predictable software output and machine behaviors, and were specifically created for testing non-AI computing systems. Since the outputs of AI systems will evolve as the systems learn over time, HCI professionals need to collaborate with AI and computer science professionals to find ways to improve the testing methods from a behavioral science perspective. Given the unique machine behavior, how do we effectively test and measure the evolving performance of AI systems by taking humans and AI agents as a human-machine system? Also, since many developers put themselves at the center in their design, instead of end users (Kaluarachchi et al., 2021), HCI professionals need to ensure that target end users are fully considered based on the HCAI approach.
From interaction to potential human-AI collaboration
As discussed earlier, there is an emerging form of human-machine collaboration in the AI era: human-machine teaming. As HCI professionals, we are not just dealing with the conventional "interaction" between humans and machines, but also with a new form of human-machine relationship that requires a new perspective to study.
A significant amount of work has been invested in the area of human-machine teaming.
Initial research suggests that humans and AI systems can be more effective when working together as a combined unit rather than as individual entities (Bansal et al., 2019a, 2019b; Demir et al., 2018b). People argue that the more intelligent the AI system, the greater the need for collaborative capabilities (Johnson & Vera, 2019). Designing for such potential collaboration suggests a radical shift from current HCI thinking to a more sophisticated strategy based on teaming (Johnson & Vera, 2019).
A variety of topics are being explored in the research and application of potential human-AI collaboration, such as conceptual architectures and frameworks (e.g., Madni & Madni, 2018; Johnson & Vera, 2019; O'Neill et al., 2020; Prada & Paiva, 2014) and performance measurements (e.g., Bansal et al., 2019a). To study human-AI collaboration, researchers have leveraged the frameworks of other disciplines, such as psychological human-human team theory (e.g., de Visser, 2018; Mou & Xu, 2017). For example, human-human team theory helps formulate basic principles between humans and AI systems: two-way/shared communication, trust, goals, situation awareness, language, intentions, and decision-making (e.g., Shively et al., 2018; Ho et al., 2017; Demir et al., 2017), rather than the one-way approach we currently take in the conventional HCI context.
There are debates on whether AI systems can truly work as teammates with humans (e.g., Shneiderman, 2021c; Klein, Woods, et al., 2004). We believe that a common HCAI design goal should be shared between both sides to ensure human-controlled AI and human-driven decision-making in human interaction (or collaboration) with AI (see Figure 1). Humans should not be required to adapt to non-human "teammates"; instead, designers should create technology to serve as a good team player (or super tool) alongside humans (Cooke, 2018; Shneiderman, 2021c). HCI professionals need to fully understand what the machines really can do. There is a need for future HCI work in this area.
Clarification of human and AI roles. While HCI traditionally studied the interaction between humans and non-AI computing systems, future HCI work should focus on understanding how humans and AI systems interact, negotiate, and even work as teammates in collaboration. Future HCI work needs to assess whether humans and AI agents can be true collaborative teammates, versus an AI agent serving merely as a super tool (Shneiderman, 2020c; Klein, Woods, et al., 2004), as a peer (Ötting, 2020), or as a leader (Wesche & Sonderegger, 2019). As AI technology advances, an important HCI question concerns the design center for the interaction (or mutual collaboration); that is, who are the ultimate decision makers? We need to investigate how we can ensure that, in complex or life-changing decision-making, humans rather than AI systems retain the critical decision-making role, while creating and maintaining an enjoyable user experience.
Modeling human-AI interaction and collaboration. Conventional interaction models (e.g., MHP, GOMS) can no longer meet the needs of complex interactions in the AI era (e.g., Card et al., 1983); there is a need for HCI professionals to explore the human cognitive mechanisms for modeling the interaction and collaboration with AI systems (Liu et al., 2018). The goal is to design interfaces of AI systems that facilitate effective interaction and potential collaboration. Other topics include models of teaming relationships (e.g., peer, teammate, and mentor/mentee), teaming processes (e.g., interactions, communications, and coordination), and applications of human-AI collaboration across different work domains.
Advancement of current HCI approaches. HCI professionals should not rely only on current HCI approaches but do much more. For example, Computer-Supported Cooperative Work (CSCW) has long been an active subfield of HCI, and its goal is to use the computer as a facilitator to mediate communication between people (Kies et al., 1998). Lieberman (2009) argues that AI brings the computer into the collaboration with the user as a more active collaborative participant, rather than simply a conduit for collaboration between people. We need to do a comparative assessment between CSCW and potential human-AI collaboration, and fully understand the implications for system design.
Innovative HCI design to facilitate human-AI collaboration. Future HCI work also needs to explore innovative design approaches (e.g., collaborative interaction, adaptive control strategies, human-directed authority) from the perspective of human-AI interaction. For instance, we need to model under what conditions (e.g., based on a threshold of mutual trust between humans and AI agents) an AI agent will take over or hand off the control of a system to a human in specific domains such as autonomous vehicles (Kistan et al., 2018; Shively et al., 2017); a minimal sketch of such a handover policy follows this paragraph. In the context of distributed AI and multi-agent systems, we need to figure out the collaborative relationship between at least one human operator and the collective system of agents, where and how multiple agents communicate and interact with primitives such as common goals, shared beliefs, joint intentions, and joint commitments, as well as conventions to manage any changes to plans and actions (Wooldridge, 2009; Hurts et al., 1994).
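The sketch below is one hypothetical way to express such a trust-threshold handover policy in code. The trust estimate, the threshold values, and the hysteresis band are illustrative assumptions, not a validated model; how trust is measured and how a handover is communicated to the human are open HCI design questions.

```python
# Minimal sketch of a trust-threshold handover policy for shared control.
# Hysteresis (take_over > hand_off) avoids rapid oscillation of authority
# when the trust estimate hovers near a single threshold.
def control_authority(current_holder, mutual_trust, take_over=0.75, hand_off=0.45):
    """Return 'ai' or 'human' given the current holder and a trust estimate in [0, 1]."""
    if current_holder == "human" and mutual_trust >= take_over:
        return "ai"      # trust is high enough for the agent to take over
    if current_holder == "ai" and mutual_trust <= hand_off:
        return "human"   # trust has degraded; hand control back to the human
    return current_holder  # otherwise, keep the current authority
```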
From siloed machine intelligence to human-controlled hybrid intelligence
With the development of AI technology, the AI community has begun to realize that machines with any degree of intelligence cannot completely replace human intelligence (e.g., intuition, consciousness, reasoning, abstract thinking), and the path of developing AI technology in isolation has encountered challenges. Consequently, it is necessary to introduce the role of humans, or human cognitive models, into AI systems to form a hybrid intelligence (e.g., Zheng et al., 2017).
Researchers have explored hybrid intelligence by leveraging the strengths of both human intelligence and machine intelligence (Zheng et al., 2017; Dellermann et al., 2019a, 2019b; Johnson & Vera, 2019). Recent work has found significant potential for augmentation through the integration of both kinds of intelligence. Applications of hybrid augmented intelligence help achieve collaborative decision-making in complex problems, thereby gaining superior results that cannot be achieved by either side separately (e.g., Carter & Nielsen, 2017; Crandall et al., 2018; Dellermann et al., 2019a, 2019b).
The HCAI design goal is to develop human-controlled AI through human-machine hybrid intelligence (see Figure 1). Driven by the HCAI approach, we advocate that hybrid intelligence must be developed in the context of human-machine systems, leveraging the complementary advantages of AI and human intelligence to produce a more powerful intelligence form: human-machine hybrid intelligence (Dellermann et al., 2019b). This strategy not only solves the bottleneck effect of developing AI technology as discussed earlier, but also emphasizes the use of humans and machines as a system (a human-machine system), introducing human functions and roles, and ensuring that humans make the final decisions on system control.
In the AI community, the research on hybrid intelligence can basically be divided into two categories. The first category develops human-in-the-loop AI systems at the system level, so that humans are always kept as a part of an AI system (e.g., Zanzotto, 2019). For instance, when the confidence of the system output is low, humans can intervene by adjusting input, creating a feedback loop to improve the system's performance (Zheng et al., 2017; Zanzotto, 2019). Human-in-the-loop AI systems combine the advantages of human intelligence and AI, and effectively realize human-AI interactions through user interventions, such as online assessment for crowdsourced human input (Mnih et al., 2015; Dellermann et al., 2019a) and user participation in training, adjusting, and testing of algorithms (Acuna et al., 2018; Correia et al., 2019).
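The confidence-gated intervention pattern described above can be sketched as follows. This is a minimal illustration under our own assumptions (the class name, the 0.8 threshold, and the feedback list are all hypothetical), not a prescription for how such systems are built.

```python
# Minimal sketch of confidence-gated human intervention: when the model's
# confidence falls below a threshold, the decision is escalated to a human,
# and the human-labeled case is kept to feed a later retraining loop.
from dataclasses import dataclass, field

@dataclass
class HumanInTheLoopClassifier:
    model: object                  # any classifier exposing predict_proba
    threshold: float = 0.8         # below this confidence, escalate to a human
    feedback: list = field(default_factory=list)

    def decide(self, x, ask_human):
        proba = self.model.predict_proba([x])[0]
        label, confidence = int(proba.argmax()), float(proba.max())
        if confidence < self.threshold:
            label = ask_human(x)               # the human makes the final call
            self.feedback.append((x, label))   # retained for retraining
        return label
```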
The second type of hybrid intelligence involves embedding human cognitive models into an AI system to form a hybrid intelligence based on cognitive computing (Zheng et al., 2017). This type of AI system introduces a variety of biologically inspired computing intelligence technologies, with elements including intuitive reasoning, causal models, memory, and knowledge evolution. According to the HCAI approach, this type of hybrid intelligence is not human-machine hybrid intelligence in the true sense, because such systems are not based on human-machine systems and cannot effectively secure the central decision-making role of human operators in the system. Of course, introducing human cognitive models into machine intelligence is still important for the development of machine intelligence.
There will be many opportunities for HCI professionals to play a role in developing human-machine hybrid intelligence systems.
Human-controlled hybrid intelligence. The HCAI approach advocates human-in-the-loop AI systems, but we emphasize that these systems must be based on human-controlled AI. In other words, AI systems are embedded in the "human loop" as supporting super tools (Shneiderman, 2021c), ensuring that human-driven decision-making remains primary. It has been argued that there are currently two basic schemes for the control mechanism (who is in control) of hybrid intelligence: "human-in-loop control" and "human-machine collaborative control" (Wang, 2019).
The effective handover between humans and AI systems in an emergency is currently an important topic. The loss of control in autonomous systems (e.g., decision tracking after launch of autonomous weapon systems, autonomous vehicles) has become a widespread concern from the perspective of the "ethical AI" design goal defined in the HCAI approach. The HCAI approach requires humans to have ultimate control. The HCI community needs to carry out this work from HCI perspectives. Future HCI work needs to seek solutions from an integrated perspective of system design, human-machine interaction, and ethical AI design.
Cognitive computing and modeling. As an interdisciplinary field, HCI needs to help the AI community explore cognitive computing based on human cognitive abilities (e.g., intuitive reasoning, knowledge evolution) in support of developing human-machine hybrid intelligent systems. Future HCI work should help accelerate the conversion of existing psychological research results to support cognitive computing and to define cognitive architectures for AI research (Salvi et al., 2016). This is essential because the development of existing hybrid intelligent systems based on cognitive computing does not fully consider the central role and decision-making function of human operators in the systems. HCI professionals should collaborate with AI professionals to explore integrating the cognitive computing method with the human-in-the-loop method, at the system and/or biological levels (e.g., brain-computer interface technology), developing HCAI-based hybrid intelligent systems.
HCI framework for human-machine hybrid intelligence. Future HCI work needs to lay the foundation for developing effective human-machine hybrid intelligent systems in terms of theory and methods. For example, we may apply joint cognitive systems theory (Hollnagel & Woods, 2005) and consider a human-machine hybrid intelligent system as a joint cognitive system with two cognitive agents: human and machine. Research is needed to explore how to use the two cognitive agents to support the design of autonomous systems based on the concept of human-machine hybrid intelligence, through a deep integration between human biological intelligence and machine intelligence.
Human roles in future human-computer symbiosis. Licklider (1960) proposed the well-known concept of human-computer symbiosis, pointing out that the human brain could be tightly coupled with computers to form a collaborative relationship. In the long run, people argue that human-machine hybrid intelligence will form an effective productive partnership to achieve a true human-machine symbiosis at both systemic and biological levels, eventually enabling a merger between humans and machines (e.g., Gerber et al., 2020). HCI professionals need to ensure that such a human-machine symbiosis will deliver the most value to humans, and that human functions and roles are seamlessly integrated into AI systems with humans as the ultimate decision makers, without harming humans, eventually moving towards a so-called "human-machine symbiosis world" with humans at the center (Gerber et al., 2020), instead of a "machine world".
From user interface usability to AI explainability
Designing a usable user interface (UI) for a computing system is the core work of HCI professionals in developing non-AI systems. In designing user interfaces for AI systems, we encounter new challenges beyond UI usability. As the core technology of AI, machine learning and its learning process are opaque, and the output of AI-based decisions is not intuitive.
For many non-technical users, a machine learning-based intelligent system is like a "black box", especially neural networks for pattern recognition in deep machine learning. This "black box" phenomenon may cause users to question the decisions from AI systems: why do you do this, why is this the result, when did you succeed or fail, when can I trust you? The "black box" phenomenon has become one of the major concerns in developing AI systems (Mueller et al., 2019; Hoffman et al., 2018; Bathaee, 2018; Gunning, 2017). The "black box" phenomenon may occur in various AI systems, including applications of AI in financial stocks, medical diagnosis, security testing, legal judgments, and other fields, causing inefficiency in decision-making and loss of public trust in AI.
As a solution, explainable AI intends to show the user what the machine is thinking and to explain to the user why (PwC, 2018), which is also the HCAI design goal (see Figure 1). Explainable AI is getting more and more attention; one representative research program is the five-year research plan for explainable AI launched by DARPA in 2016 (Gunning, 2017).
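As a simple, concrete illustration of what one form of explanation can look like, the sketch below computes permutation feature importance: the features whose random shuffling hurts model performance the most are the ones the model relies on most. This is only one elementary post-hoc technique, offered as a hedged example; real explainable AI systems draw on much richer methods, and the function shown is our own illustrative code rather than any cited system.

```python
# Minimal sketch of permutation feature importance, a simple post-hoc
# explanation technique: shuffle one feature at a time and measure how
# much the model's performance drops.
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
    rng = np.random.default_rng(seed)
    baseline = metric(y, model.predict(X))
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])   # break the link between feature j and y
            drops.append(baseline - metric(y, model.predict(X_perm)))
        importances.append(float(np.mean(drops)))
    return importances  # a larger drop means a more important feature
```

Whether such a ranking is actually understandable and useful to a target end user is exactly the kind of question that requires HCI evaluation, as discussed below.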
Existing psychological theories on explanation may help the explainable AI work. Psychological research has been carried out on the mechanisms, measurements, and modeling of explanation, for example, inductive reasoning, causal reasoning, self-explanation, comparative explanation, and counterfactual reasoning (Hoffman et al., 2017; Broniatowski, 2021). Although many hypotheses were verified in laboratory studies (Mueller et al., 2019), much work is still needed to build computational models for explainable AI work. Researchers have realized the importance of non-AI disciplines such as HCI in explainable AI research (e.g., Bond et al., 2019; Hoffman et al., 2018). For instance, the survey of Miller et al. (2017) indicates that most explainable AI projects were conducted only within the AI discipline. AI professionals tend to build explainable AI for themselves rather than for target end users; most explainable AI publications do not follow a human-centered approach (Kaluarachchi et al., 2021). Miller et al. (2017) argue that if explainable AI adopts appropriate models from behavioral science, and if evaluations focus more on users rather than technology, explainable AI research is more likely to be successful. Proposed approaches include collaborative task-driven interpretation (e.g., error detection) and exploratory interpretation to support the adaptation between humans and machines (Kulesza et al., 2015; Mueller et al., 2019).
Human-centered explainable AI. Future HCI work needs to further explore user-participatory solutions based on the HCAI approach, such as user exploratory and user-participation approaches (Mueller et al., 2019). This is to ensure that AI is understandable to target users beyond mere explainability, which AI professionals have previously neglected. In addition, there is a lack of user-participated experimental validation in many existing studies, along with a lack of rigorous behavioral science-based methods (Abdul et al., 2018; Mueller et al., 2019). HCI should give full play to its expertise to support validation methods. When adopting user-participated experimental evaluation, it is necessary to overcome the unilateral evaluation methods in some existing studies that only evaluate the performance of AI systems; HCI should promote the evaluation of AI systems as human-machine systems by adding an end-user perspective (Mueller et al., 2019; Hoffman et al., 2018).
Acceleration of the transfer of psychological theories. HCI professionals can leverage their multidisciplinary backgrounds to act as intermediaries, accelerating the transfer of knowledge from other disciplines. For example, many of the existing psychological theories and models have not generated usable results (Abdul et al., 2018; Mueller et al., 2019). Also, there is a large body of human factors research on automation transparency and situation awareness (e.g., Sarter & Woods, 1995; Endsley, 2017), and their comprehensive mitigation solutions may offer possible solutions to explainable AI.
Sociotechnical systems approach. From a sociotechnical systems perspective, the impacts of other social and organizational factors (e.g., user/organizational trust and acceptance) on the explainability and comprehensibility of AI should be further explored (Ehsan & Riedl, 2020; Klein et al., 2019). HCI professionals need to consider the influence of other factors, such as culture, user knowledge, decision-making behaviors, user skill growth, and users' personalities and cognitive styles.
HCI can apply its multidisciplinary expertise in this regard.
From human-centered automation to human-controlled autonomy
We are transitioning to an "autonomous world". As AI technology demonstrates its unique autonomous characteristics, society is introducing more autonomous technologies that differ from traditional automation technologies as discussed earlier, but safety and negative effects have not attracted enough attention (Hancock, 2019; Salmon, 2019). As an example, many companies have heavily invested in developing AI-based autonomous systems (e.g., autonomous vehicles), but consumers' trust in AI is still questionable. Lee and Kolodge's survey (2018) shows that among the 8571 American drivers surveyed, 35% said they definitely do not believe that autonomous vehicles have the ability to operate safely without driver control. On the other side, some consumers over-trusted AI-based autonomous systems, as evidenced by several fatal accidents of autonomous vehicles (e.g., NTSB, 2017; Endsley, 2018).
Researchers have expressed deep concerns about the safety issues caused by autonomous technologies (e.g., de Visser et al., 2018; Endsley, 2017, 2018; Hancock, 2019). Also, as we have just entered the AI age, people have begun to confuse the concepts of automation and autonomy, leading to inappropriate expectations and potential misuse of technology (Kaber, 2018).
It is also important for HCI professionals to understand how previous research on automation provides lessons and implications for what we should do with autonomous systems. In the past few decades, the human factors community has carried out extensive research on automation and promoted a human-centered automation approach (e.g., Billings, 1997; Sarter & Woods, 1995; Xu, 2007). The results show that users over-trust and over-rely on automation, and many automation systems have vulnerabilities in complex domains, such as aviation and nuclear power. These systems worked well under normal conditions, but they could cause human operators' "automation surprise" problems when unexpected events occurred (e.g., Sarter & Woods, 1995; Xu, 2004). Operators may not be able to understand what automation is doing and why, creating an "out-of-the-loop" effect and resulting in wrong manual interventions (Endsley, 2015; Endsley & Kiris, 1995; Wickens & Kessel, 1979). This vulnerability of automation has brought challenges to safety; for example, civil flight deck automation has caused deadly flight accidents (Endsley, 2015). Bainbridge (1983) defined the classic phenomenon of "ironies of automation": the higher the degree of automation, the less attention the operator pays to the system, and in an emergency, the more difficult it is for the operator to control the system manually. Endsley (2017) believes that in an autonomous system, as the "automation" level of individual functions improves and the autonomy of the overall system increases, the operator's attention to these functions and situation awareness will decrease in emergency situations, so that the possibility of the "out-of-the-loop" effect will also increase. Fatal accidents involving autonomous vehicles in recent years have shown these typical human factors issues as seen in automation (e.g., Navarro, 2018; NHTSA, 2017). More importantly, AI-based autonomous systems' human-like intelligent abilities (e.g., learning) will continue to evolve as the systems are used in different environments. The uncertainty of the operational results of autonomous systems means that a system may develop behaviors in unexpected ways. Therefore, initial research shows that autonomous systems may give operators a stronger shock than "automation surprise" (Prada & Paiva, 2014; Shively et al., 2017). These effects may further amplify the impact of the "automation surprise" issue and cause decrements in failure performance and a loss of situation awareness associated with an increasing degree of automation, just as suggested by the lumberjack effect (Onnasch et al., 2014). In addition, initial studies also show that autonomous systems may lead to highly emotional reactions of the operator, and some social factors are more likely to affect the operator's cognitive ability, personality traits, and communication attributes (e.g., Mou & Xu, 2017).
From the HCAI perspective, we advocate human-controlled AI through human-controlled autonomy (see Figure 1). At present, we are in the transition from human-centered automation to human-controlled autonomy. The effort to address the classic "ironies of automation" issue has been underway for over 30 years, but the issue is still not completely solved (Bainbridge, 1983; Strauch, 2017). Today, we encounter new ironies: autonomous systems that exhibit unique characteristics compared with traditional automation. The potentially amplified impacts of autonomy, plus its unique autonomous characteristics, compel us to take the lessons learned from automation and explore new approaches beyond what we have done in automation to address the new challenges, driving towards human-controlled autonomy and avoiding excessive autonomy (Shneiderman, 2020a), based on the HCAI design goal of human-controlled AI (see Figure 1).
Overall, the HCI research on autonomous systems is currently in its infancy. New challenges and opportunities co-exist as we enter the AI-based autonomous world, requiring HCI professionals to go beyond the current design thinking of traditional automation technology and develop innovative approaches to address the unique issues in autonomy brought about by AI technology.
Understanding the impacts of autonomous characteristics. We need to assess and fully understand the impacts of the autonomous characteristics from the HCAI perspective. There are many basic questions for HCI professionals to explore; for instance, what are the implications for HCI design? We also need to further empirically assess the "automation surprise" effect and the "lumberjack" effect for AI systems with varied degrees of autonomy. The results will inform HCI recommendations for the development of AI-based autonomous systems.
Innovative design paradigms. We need to explore alternative paradigms to optimize the design of autonomous systems. Although HCI professionals have participated in the development of autonomous systems (e.g., autonomous vehicles), recent accidents have sounded an alarm, compelling us to rethink the current approach (Xu, 2020). For example, from the human-AI collaboration perspective, research on human-autonomy teaming has been carried out over the last several years (e.g., Shively et al., 2017; O'Neill et al., 2020). We may explore designs for autonomous systems that emulate the interactions between people by considering mutual trust, shared situation awareness, and shared autonomy/control authority in a reciprocal relationship between humans and autonomous systems. Thus, we can maximize design opportunities to minimize the potential risk caused by autonomous systems. Future HCI work may explore applying the paradigm of human-machine hybrid intelligence to develop innovative solutions for effective human-machine co-driving and take-over/handoff (Jeong, 2019). For interaction design, we need to take innovative approaches beyond traditional HCI design. By applying the HCAI approach, we need to develop an effective human control mechanism with well-designed UIs that enable human operators to monitor and quickly take over control of autonomous systems in emergencies.
Design for human-controlled autonomy. The SAE J3016 (2018) regulations presume that when autonomous vehicles are equipped with high-level (L4-L5) automated driving functions, driver monitoring and manual intervention are not required. HCI professionals need to step in to ensure that humans are always the ultimate decision makers in system design (including remote operations) to address safety issues (Navarro, 2018; Shneiderman, 2021c). To reinforce the "human-controlled AI" design goal, we advocate implementing a "meaningful human control" design to ensure the accountability and traceability of autonomous systems (de Sio & den Hoven, 2018). When defining the levels of "automation" for autonomous vehicles, SAE J3016 (2018) seems to ignore the differences between automation and AI-based autonomous technology, and it takes a system-centered approach to classifying the "automation" level, instead of a human-centered approach (Hancock, 2019). HCI professionals need to obtain empirical data to help improve the classification of autonomy levels from the HCAI perspective, which may help standardize the way product requirements are defined, the measurements for certification of autonomous technology, and the need for human operator training.
A multidisciplinary HCI approach. As members of an interdisciplinary field, HCI professionals are in a unique position to address the emerging issues from a broad sociotechnical systems perspective, including the impacts of autonomous systems on human expectations, operational roles, ethics, privacy, and so on. While we have an established process for certifying humans as autonomous agents, there is no industry consensus on how to certify computerized autonomous systems for human interaction with AI systems (Cummings, 2019; Cummings & Britton, 2020). More research is needed to further assess the impacts of autonomous technology across a variety of contexts.
From conventional interactions to intelligent interactions
In the AI era, we are transitioning from conventional interaction with non-AI computing systems to intelligent interaction with AI systems driven by AI technology, such as voice input and facial recognition. Historically, interaction paradigms have guided UI development in HCI work, e.g., the WIMP (window, icon, menu, pointing) paradigm. However, WIMP's limited sensing channels and unbalanced input/output bandwidth restrict human-machine interaction (Fan et al., 2018). Researchers have explored the concepts of Post-WIMP and Non-WIMP, such as the PIBG paradigm (physical object, icon, button, gesture) for pen interaction (Dai et al., 2004) and RBI (reality-based interaction) for virtual reality (Jacob et al., 2008), but the effectiveness of these proposed paradigms requires verification (Fan et al., 2018). To further analyze the interaction paradigm for AI systems, Fan et al. (2018) proposed a software research framework including interface paradigms, interaction design principles, and psychological models. Zhang et al. (2018) proposed an RMCP paradigm for AI systems that includes roles, interactive modalities, interactive commands, and information presentation styles.
These efforts have not been fully validated and were primarily initiated by the computer science community without the participation of HCI professionals. HCI professionals face new challenges, and the HCAI design goal of "usable AI" calls for future HCI work (Figure 1).
New interaction paradigms. In the realization of multi-modality, natural UI, and pervasive computing, hardware technology is no longer an obstacle, but the user's interaction capabilities have not been correspondingly supported (Streitz, 2007). How to design effective multi-modal integration of visual, audio, touch, and gesture channels, and parallel interaction paradigms for AI systems, is an important topic for future HCI work. This work was mainly carried out in the computer science community; the HCI community should support the definition of paradigms and metaphors and their experimental verification to address the unique issues in AI systems, as we did in the personal computer era (Fu et al., 2014; Sundar, 2020).
Usable user interfaces. AI technology changes the way we design user interfaces. There is a big opportunity for HCI professionals to design usable user interfaces based on effective interaction design with AI systems. Conventional non-AI interfaces tend to be rigid in the order in which they accept instructions and the level of detail they require (Lieberman, 2009). AI-based technologies, such as gesture recognition, motion recognition, speech recognition, and emotion/intent/context detection, can let systems accept a broader range of human input through multiple modalities in parallel. Human interaction with AI systems, which is distinct from interaction with non-AI systems, requires HCI professionals to develop more effective approaches for designing usable UIs that can effectively facilitate human-AI interaction, including potential human-AI collaboration.
Adapting AI technology to human capabilities. Humans' limited cognitive resources become a bottleneck of HCI design in the pervasive computing environment. As early as 2002, Carnegie Mellon University's Aura project showed that the most precious resource in pervasive computing is not computer technology, but human cognitive resources (Garlan et al., 2002). In an implicit interaction scenario initiated by ambient intelligent systems, intelligent systems may cause competition for human cognitive resources across different modalities, and users will face a high cognitive workload. In addition, implicit, multi-modal, and pervasive interactions are anticipated to be not only conscious and intentional, but also subconscious and even unintentional or incidental (Bakker & Niemantsverdriet, 2016). Thus, HCI design must consider the "bandwidth" of human cognitive processing and its resource allocation while developing innovative approaches to reduce user cognitive workload through appropriate interaction technology, adapting AI technology to human capabilities as driven by the HCAI approach, instead of adapting humans to AI technology.
HCI design standards for AI systems. We also need interaction design standards to guide HCI design work in the development of AI systems. Existing HCI design standards were primarily developed for non-AI systems; there is a lack of design standards and guidelines that specifically support HCAI-based systems. The design standards and guidelines for AI systems need to fully consider the unique characteristics of AI systems, including design for transparency, unpredictability, learning, evolution, and shared control (Holmquist, 2017), and design for engagement, decision-making, discovery, and uncertainty (Girardin & Lathia, 2017). There are initial design guidelines available, such as the "Google AI + People Guidebook" (Google PAIR, 2019), Microsoft's 18 design guidelines (Amershi et al., 2019), and Shneiderman's 2-D HCAI design framework (Shneiderman, 2020a). The International Organization for Standardization (ISO) also sees this urgency, publishing a document on AI systems titled "Ergonomics of Human-System Interaction - Part 810: Robotic, Intelligent and Autonomous Systems" (ISO, 2020). The HCI community needs to put forward a series of specific design standards and guidelines based on empirical HCI studies, playing a key role in developing HCI design standards for HCAI systems.
From general user needs to specific ethical AI needs
The focus of conventional HCI work is on general user needs in interaction with non-AI computing systems, such as usability, functionality, and security. As one of the major challenges presented by AI technology, ethical AI design has received a lot of attention in research and application; multiple sets of ethical guidelines from various organizations around the world are currently available, and large high-tech companies have published internal ethical guidelines for the development of AI systems (e.g., IEEE, 2019; Hagendorff, 2020). Shneiderman (2021d) proposes 15 recommendations at three levels of governance: team, organization, and industry, aiming at increasing the reliability, safety, and trustworthiness of HCAI systems. However, research also shows that the effectiveness of the practical application of ethical AI guidelines is currently far from satisfactory (e.g., Mittelstadt, 2019). In this section, we approach the ethical AI issues from the HCI design perspective, with the hope that ethical AI issues can be successfully addressed on the front line of developing AI systems.
Specifically, the concept of "meaningful human control" is one of the approaches being studied (e.g., van Diggelen et al., 2020; Beckers et al., 2019). In alignment with HCAI, van Diggelen et al. (2020) define meaningful human control as having three essential components: (1) human operators are making informed and conscious decisions about the use of autonomous technology;
(2) human operators have sufficient information to ensure the lawfulness of the action they are taking; and (3) the system is designed and tested, and human operators are properly trained, to ensure effective control over its use (e.g., autonomous weapons). Currently there are many perspectives for considering meaningful human control, ranging from accountability and system transparency design (Bryson & Winfield, 2017), governance and regulation (Bryson & Theodorou, 2019), and delegation (van Diggelen et al., 2020), to certification (Cummings, 2019). There is a consensus that the human must always remain in control of ethically sensitive decisions (van Diggelen et al., 2020; de Sio & den Hoven, 2018), which is also the HCAI design goal (see Figure 1).
The HCI community can help address the ethical AI issues from the following aspects.
Adoption of meaningful human control in design. One of the key requirements for meaningful human control is allowing human operators to make informed and conscious decisions about the use of autonomous technology. To achieve this design goal, HCI professionals need to partner with system designers to ensure transparent system design and effective interaction design, keeping human operators in the loop with sufficient situation awareness. Also, HCI professionals need to leverage existing knowledge from research on automation, such as automation awareness and transparent UI design. For life-critical autonomous systems (e.g., autonomous vehicles and weapons), we advocate implementing "tracking" and "tracing" mechanisms in the system design of autonomous systems (de Sio & den Hoven, 2018), so that if an autonomous system fails, we can trace back the error data to determine the accountability of the system versus the human operators, and also use the data for future improvement of the design (de Sio & den Hoven, 2018; Xu, 2021).
Integration of HCI approaches into AI development. Although multiple sets of ethical guidelines are currently available, the AI community lacks common aims, professional norms, and methods to translate principles into practice. In many cases, ethical design was considered after the technology was developed rather than during development (Mittelstadt, 2019). Designing explainable AI systems will also help develop ethical AI.
An HCI multidisciplinary approach. As a multidisciplinary field, HCI needs to help AI professionals in ethical AI design. AI engineers typically lack formal training in applying ethics to design in their engineering courses and tend to view ethical decision making as another form of solving technical problems. The AI community now recognizes that ethical AI design requires wisdom and cooperation from a multidisciplinary field extending beyond computer science (Li & Etchemendy, 2018). Many of the ethical AI issues need solutions from social and behavioral science, such as human privacy, human acceptance of AI systems, and human needs for decision-making when using AI systems. The HCI community can leverage its interdisciplinary skills to assess the ethics-related issues and help propose solutions by adopting methods of social and behavioral science from a broader sociotechnical systems perspective.
Summary of the opportunities for HCI professionals to enable HCAI
To sum up what has been discussed and proposed for future HCI work, Table 3 summarizes the new opportunities for HCI professionals and the expected HCAI-driven design goals across the seven main issues. Specifically, the Opportunities for HCI Professionals column lists the future HCI work discussed earlier for addressing the new challenges identified, and the Primary HCAI Design Goals column lists the primary HCAI design goals to be achieved across these opportunities. Our review and analyses in this section provide the answers to the second research question:
What are the opportunities for HCI professionals to lead in applying HCAI to address the new challenges? As a result, our holistic assessment has enabled us to specifically identify, from the HCAI perspective, the unique challenges as we transition to interaction with AI systems; we also tie these new HCI opportunities to HCAI-driven design goals, eventually guiding HCI professionals in addressing the new issues in developing AI systems.
We urge HCI professionals to proactively participate in the research and application of AI systems to exploit these opportunities for addressing the new challenges. Addressing the new challenges through these opportunities will promote and further advance the HCAI approach, influencing the development of AI systems.
The need to improve HCI approaches for applying HCAI
This section provides the answer to the third research question: How can current HCI approaches be improved in the application of HCAI? Our analyses herein focus on assessing the HCI methods currently being used and how HCI professionals currently influence the development of AI systems in terms of HCI processes and professional skillsets, and, where gaps exist, how we should improve the current HCI approaches to enable the HCAI approach.
Existing HCI approaches were primarily defined for non-AI computing systems and may not effectively address the unique issues in AI systems. Future HCI work inevitably puts forward new requirements on these HCI approaches if we want to effectively address the new challenges by applying HCAI. The previous HCAI work has not included a comprehensive assessment of current HCI approaches (e.g., methods, process, skillset) to determine whether gaps exist in enabling HCAI in future HCI work. Recent research has already reported that HCI professionals encounter many challenges in the process of developing AI systems.
Enhancing current HCI methods
Research shows that there is a lack of effective methods for designing AI systems, and HCI professionals have had difficulty performing the typical HCI activities of conceptualization, rapid prototyping, and testing (Holmquist, 2017; Dove et al., 2017; van Allen, 2018).
Driven by the unique characteristics of AI technology discussed earlier, new needs pose challenges to existing HCI methods. For instance, the pervasive computing environment and the ecosystems of artifacts, services, and data have challenged existing HCI methods that focus on single user-artifact interaction (Brown et al., 2017).
The HCI community has realized the need to enhance existing methods (Stephanidis, Salvendy et al., 2019; Xu, 2018; Xu & Ge, 2018). To effectively address the unique issues of AI systems identified earlier, we need to enhance existing methods and leverage methods from other disciplines. To this end, we assessed over 20 existing methods of HCI, human factors, and other related disciplines from the HCAI perspective. As a result, Table 4 summarizes a comparison between conventional HCI methods (i.e., the typical HCI methods used in designing non-AI systems) and selected alternative methods, obtained by enhancing existing HCI methods and leveraging methods from other disciplines (e.g., Jacko, 2012; Endsley et al., 2012; Lieberman, 2009).

In Table 4, the needs listed for HCI professionals to make contributions in developing AI systems describe what is required to develop HCAI systems, i.e., the gaps left by current HCI methods, which are primarily applied to the development of non-AI systems (e.g., Jacko, 2012; Xu, 2017) and whose limitations are also noted in the table. The characteristics of the alternative HCI methods reflect the advantages advocated by the referenced researchers.

For instance, at the development stage of determining requirements for an AI system, HCI professionals may use the following methods (among others):
• The "scaled-up & ecological" method and/or the "in-the-wild" study to assess the impacts of AI technology on users and gather user requirements from a broad sociotechnical systems perspective
• The "dynamic allocation of human-machine functions" method, the "big-data-based interaction design" method, and/or the "human-machine teaming based collaborative design" method to support HCI and system design (see the allocation sketch later in this subsection)
• The "AI as a design material" approach and the Wizard of Oz (WOz) prototyping approach to quickly prototype and test design concepts for the AI system, ensuring the design meets user needs (a minimal WOz sketch follows the walkthrough below).
Next, they can follow an iterative process to improve and test the design. Once the system is released to the market, HCI professionals may apply the "scaled-up & ecological" method and/or the "in-the-wild" study to further assess the AI system and its potential impacts on users. If necessary, they may adopt the longitudinal study method to assess the performance and impacts of the AI system as its behavior evolves over time, including potential human-AI collaboration.
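As a concrete illustration of the WOz approach named in this walkthrough, the sketch below shows how a hidden human operator can stand in for the AI during early concept testing. This is a minimal sketch of our own, not an implementation from the cited literature; the function names, the single-console setup, and the log format are all illustrative assumptions.

```python
# Minimal Wizard-of-Oz (WOz) prototype sketch: a hidden human "wizard"
# plays the role of the AI assistant so designers can test conversation
# flows before any model is built. All names here are illustrative.

def user_turn() -> str:
    """Collect the test participant's input (console stands in for the UI)."""
    return input("User: ")

def wizard_turn(user_message: str) -> str:
    """The hidden operator reads the message and types the 'AI' reply."""
    print(f"[wizard sees] {user_message}")
    return input("Wizard (reply as AI): ")

def run_session(log_path: str = "woz_session.log") -> None:
    """Run one WOz session, logging every turn for later design analysis."""
    with open(log_path, "a", encoding="utf-8") as log:
        while True:
            message = user_turn()
            if message.strip().lower() == "quit":
                break
            reply = wizard_turn(message)
            print(f"AI: {reply}")
            log.write(f"USER\t{message}\nAI\t{reply}\n")

if __name__ == "__main__":
    run_session()
```

In a real study, the participant and the wizard would sit at separate, networked interfaces so that the participant believes they are interacting with a working AI system; the logged transcripts then feed the requirements analysis.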
As shown in Table 4, these alternative HCI methods can help HCI professionals overcome the limitations of conventional HCI methods when applying HCAI in developing AI systems. HCI professionals can also take advantage of these methods across the various development stages of AI systems (see the applicable R&D stages noted for each method in Table 4).
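To make the "dynamic allocation of human-machine functions" method in Table 4 more concrete, here is a minimal sketch of one way such an allocation policy could work: tasks are routed to the AI only when the model's confidence clears an adaptive threshold, and everything else stays with the human. The class, the threshold values, and the adaptation rule are illustrative assumptions of ours, not a procedure prescribed by the cited work.

```python
# Sketch of dynamic human-machine function allocation: the AI handles a
# task only when its confidence exceeds an adaptive threshold; otherwise
# the task is deferred to the human. Names and values are illustrative.

class DynamicAllocator:
    def __init__(self, threshold: float = 0.8, step: float = 0.02):
        self.threshold = threshold  # minimum AI confidence to automate
        self.step = step            # how fast the threshold adapts

    def assign(self, ai_confidence: float) -> str:
        """Decide who performs the task for this confidence level."""
        return "AI" if ai_confidence >= self.threshold else "human"

    def feedback(self, ai_was_correct: bool) -> None:
        """Adapt the allocation boundary as the AI system learns:
        relax it after correct AI decisions, tighten it after errors."""
        if ai_was_correct:
            self.threshold = max(0.5, self.threshold - self.step)
        else:
            self.threshold = min(0.99, self.threshold + self.step)

allocator = DynamicAllocator()
for confidence, correct in [(0.9, True), (0.7, True), (0.85, False)]:
    actor = allocator.assign(confidence)
    print(f"confidence={confidence:.2f} -> {actor}, "
          f"threshold now {allocator.threshold:.2f}")
    if actor == "AI":
        allocator.feedback(correct)
```

A policy of this shape keeps the allocation boundary under human control: tasks the AI is not yet trusted to perform remain with the human, which is consistent with the HCAI goal of human-controlled AI.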
Thus, we can draw an initial conclusion: gaps exist in current HCI methods for implementing HCAI, and our assessment identifies alternative methods that can make HCI professionals more effective in developing AI systems toward the HCAI design goals in future HCI work. We encourage HCI professionals to leverage these alternative methods and expect them to be further improved in future HCI work.
Influencing the development of AI systems
As HCAI is an emerging approach, HCI professionals are also challenged by how to effectively influence the development of AI systems even when HCI methods are available. We argue that the effectiveness of this influence relies on several factors, including the integration of HCI methods into development, HCI professionals' skillset and knowledge, and the acceptance of HCAI by AI professionals.
Research shows that HCI professionals have difficulty integrating HCI processes into the development of AI systems. For instance, many HCI professionals still join AI projects only after requirements are defined, a typical phenomenon when HCI was itself an emerging field (Yang, Steinfeld et al., 2020). Consequently, the HCAI approach and design recommendations from HCI professionals can easily be declined (Yang, 2018). AI professionals often claim that AI technology has solved many problems HCI could not solve in the past (e.g., voice and gesture input), and that they can design the interaction themselves. Although some designs developed by AI professionals are innovative, studies have shown that the outcomes may not be acceptable from a usability perspective (e.g., Budiu & Laubheimer, 2018). Also, some HCI professionals find it challenging to collaborate effectively with AI professionals due to the lack of a shared workflow and a common language (Girardin & Lathia, 2017). Recent studies have shown that HCI professionals do not seem prepared to provide effective design support for AI systems (Yang, 2018). Below we offer several strategic recommendations on how HCI professionals can effectively apply HCAI and influence the development of AI systems, further advancing HCAI in future HCI work.
First, HCI professionals need to integrate existing and enhanced HCI methods into the development process of AI systems to maximize interdisciplinary collaboration. A multidisciplinary team, including AI, data science, computer science, and HCI, may bring various frictions and opportunities for misunderstanding, which must be overcome through process optimization (Girardin & Lathia, 2017; Holmquist, 2017). For instance, to clarify the similarities and differences in practice between HCI professionals and professionals from other disciplines, Girardin & Lathia (2017) summarize a series of touch points and principles. Within the HCI community, researchers have indicated how the HCI process should be integrated into the development of AI systems and the role that HCI professionals should play in that development (Lau et al., 2018). For example, Cerejo (2021) proposed a "pair design" process that puts two people (one HCI professional and one AI professional) working together as a pair across the development stages of AI systems.
Second, HCI professionals should take a leading role in promoting HCAI, as we did 40 years ago in promoting the "human-centered design" approach for PC applications. In response to the challenges faced by interdisciplinary communities such as human factors and HCI, past president of the Human Factors and Ergonomics Society (HFES) William Howell put forward a "shared philosophy" model, sharing the human-centered design philosophy with other disciplines (Howell, 2001), instead of a "unique discipline" model in which one discipline claims sole ownership of human-centered design. Over the past 40 years, the participation of professionals from HCI, human factors, computer science, psychology, and other disciplines in the field of HCI embodies this "shared philosophy" model. In the early days of the AI era, it is even more necessary for HCI professionals to actively share the HCAI design philosophy with the AI community. The HCAI approach may not be fully accepted at first, but we need to minimize this time lag. Influence ultimately depends on our continuing efforts, just as our joint promotion of user experience over the last 40 years has finally made it a consensus for society.
Third, HCI professionals must update their skillset and knowledge in AI; some have already raised this concern (Yang, 2018). While AI professionals should understand HCI approaches, HCI professionals also need a basic understanding of AI technology and should apply that knowledge to facilitate process integration and collaboration.
Opportunities include continuing education, online courses, industry events and webinars, workshops, self-learning, and even handling low-risk projects to develop new skills, so that HCI professionals can fully understand the design implications posed by the unique characteristics of AI systems and overcome the reported weaknesses in their ability to influence AI systems (Yang, 2018).
In the long run, we need to train the next generation of developers and designers to create HCAI systems. Over the past 40 years, the HCI, human factors, and psychology disciplines have provided an extensive array of professional capabilities, which have grown into a relatively mature user experience culture (Xu, 2005). For this to occur for HCAI, new measures at the level of university education are required. For instance, relevant departments or programs (e.g., HCI, human factors, computer science, informatics, psychology) need to proactively cultivate interdisciplinary talent doing HCAI work for society. More specifically, students should be given multiple options for learning interdisciplinary knowledge, such as hybrid curricula of "HCI + AI (HCAI)," "AI major + social science minor," or "social science major + AI minor," and master's and doctoral programs targeting HCAI should be established.
Fourth, HCI professionals need to proactively initiate applied research and application work through cross-industry and cross-disciplinary collaboration to increase their influence. Scholars in academia should actively participate in AI-related collaborative projects across disciplines and in collaboration between industry and academia. The development of autonomous vehicles is a good example: many companies have invested heavily in developing autonomous vehicles, and as discussed earlier, there are opportunities for academia to partner with industry to overcome the challenges. Also, the human-machine hybrid augmented intelligence advocated by the HCAI approach requires collaborative work between academia and industry.
Finally, we need to foster a mature environment for implementing HCAI, including government policy, management commitment, organizational culture, development processes, and design standards and governance. We firmly believe that a mature HCAI culture will eventually emerge; history has proved our initial success in promoting HCI and user experience in the computer era. The HCI community needs to take the leading role again.
Conclusions
This paper has answered three research questions: (1) What are the challenges for HCI professionals in developing human-centered AI systems (Section 2)? (2) What are the opportunities for HCI professionals to lead in applying HCAI to address the challenges (Section 3)? (3) How can current HCI approaches be improved to apply HCAI (Section 4)? The primary contributions of this paper, and the urgent messages to the HCI community as calls to action, are as follows.
Driven by AI technology, the focus of HCI work is transitioning from human interaction with non-AI computing systems to interaction with AI systems. These systems exhibit unique characteristics and pose new challenges for developing human-centered AI systems. While they have benefited humans, they may also harm humans if not appropriately developed. We have further advanced the HCAI approach specifically for HCI professionals to act on. The HCI community must fully understand these unique challenges and take the human-centered AI approach to maximize the benefits of AI technology and avoid risks to humans.
Based on the HCAI approach and the specified HCAI-driven design goals, we have identified new opportunities across the seven main issues where the HCI community needs to take a leading role in effectively addressing human interaction with AI systems. The emergence of these unique AI-related issues is inevitable as technology advances, just as it was during the emergence of HCI in the 1980s. Today, we face a new type of machine, AI systems, that presents new challenges and compels us to act by taking the HCAI approach. This paper has identified the main gaps in current HCI approaches in applying HCAI to effectively address the new challenges and opportunities. We call for action by the HCI community.
Specifically, we need to:
• Enhance and integrate HCI methods into the development of AI systems to maximize interdisciplinary collaboration, and take the leading role in promoting the HCAI approach
• Update our skillsets and knowledge in AI, and train the next generation of developers and designers to build HCAI systems
• Proactively conduct applied research related to AI systems through cross-disciplinary collaboration
• Foster a mature organizational environment for implementing the HCAI approach.
History has proven, through our initial success in promoting human-centered design in the PC era, that the HCI community can lead; it needs to take the leading role again.
[Table 2. Summary of main issues for human interaction with AI systems. Columns: Main Issues; Familiar HCI Concerns with Non-AI Systems (e.g., Jacko, 2012); New HCI Challenges with AI Systems (selected references); Primary HCAI Design Goals (Figure 1); Detailed Analysis & References (section number). First row (machine behavior): with non-AI systems, machines behave as expected by design, and HCI design focuses on the usability of system output/UI, user mental models, user training, operation procedures, etc.]
Table 3. Summary of the opportunities for HCI professionals

Each entry lists a main issue (Sections 3.1-3.7), the opportunities for HCI professionals (Sections 3.1-3.7), and the primary HCAI design goals (Figure 1).

1. From expected machine behavior to potentially unexpected behavior (Section 3.1). Opportunities: application of HCI approaches for managing machine behavior; continued improvement of AI systems during behavioral evolution; enhancement of software testing approaches; leveraging HCI approaches in design. Primary HCAI design goal: human-controlled AI.

2. From interaction to potential human-AI collaboration (Section 3.2). Opportunities: clarification of human and AI roles; modeling human-AI interaction and collaboration; advancement of current HCI approaches; innovative HCI design to facilitate human-AI collaboration. Primary HCAI design goals: human-driven decision-making; human-controlled AI.

3. From siloed machine intelligence to human-controlled hybrid intelligence (Section 3.3). Opportunities: human-controlled hybrid intelligence; cognitive computing and modeling; an HCI framework for hybrid intelligence; the human role in future human-computer symbiosis. Primary HCAI design goals: augmenting human; human-controlled AI.

4. From user interface usability to AI explainability (Section 3.4). Opportunities: effective interaction design; human-centered explainable AI; acceleration of the transfer of psychological knowledge; a sociotechnical systems approach. Primary HCAI design goal: explainable AI.

5. From human-centered automation to human-controlled autonomy (Section 3.5). Opportunities: understanding the impacts of autonomous characteristics; innovative design paradigms; design for human-controlled autonomy; a multidisciplinary HCI approach. Primary HCAI design goal: human-controlled autonomy.

6. From conventional interaction to intelligent interactions (Section 3.6). Opportunities: new interaction paradigms; usable user interfaces; adapting AI technology to human capability; HCI design standards for AI systems. Primary HCAI design goal: usable AI.

7. From general user needs to specific ethical AI needs (Section 3.7). Opportunities: adoption of meaningful human control in design; integration of HCI approaches into AI development; a multidisciplinary HCI approach. Primary HCAI design goal: ethical & responsible AI.
Table 4. Comparison between conventional HCI methods and selected alternative HCI methods

Each entry lists the alternative method, its characteristics, the R&D stages of AI systems where it applies, the needs it addresses for HCI professionals contributing to AI systems, and the limitation of conventional HCI methods it overcomes.

1. Scaled-up & ecological method. Characteristics: study the impacts of the entire pervasive computing environment (multiple users and AI agents) and the ecosystems of artifacts, services, and data in distributed contexts of use (Brown et al., 2017). Stages: user research, HCI testing. Needs: comprehensively assess the impacts of AI technologies and optimize the design of AI systems, such as an intelligent Internet of Things, to support people's daily work and life (e.g., Oliveira et al., 2021; Jun et al., 2021). Conventional limitation: limited to lab-based studies that cannot effectively assess the broad impacts of AI on people's daily life and work.

2. "In-the-wild" study. Characteristics: carry out in-situ development and engagement, sampling experiences, and probing people in the field (e.g., home, workplace) to fully understand people's real experience and behavior while interacting with AI (Roger et al., 2017). Stages: system and user needs analysis.

3. Dynamic allocation of human-machine functions. Characteristics: dynamically allocate human-machine functions and tasks as intelligent machines learn over time, emphasizing the complementarity of human and machine intelligence. Stages: system analysis and design, human-machine functional analysis. Needs: utilize the learning ability of AI systems to dynamically and intelligently take over more manual tasks and improve the overall performance of human-machine systems (e.g., Xu, Dov, et al., 2019). Conventional limitation: static and unchanging allocation of human-machine functions and tasks, with the machine working as a tool and essentially no human-machine collaboration.

4. Human-machine teaming based collaborative design. Characteristics: the machine works as a tool plus teammate, emphasizing the human-machine teaming relationship and the shared information, goals, tasks, and autonomy between humans and AI systems (e.g., Johnson & Vera, 2019). Needs: optimize human-machine collaboration and the performance of AI systems by taking advantage of the functional complementarity and adaptability between humans and AI systems.

5. AI as a design material. Characteristics: plug in AI/intelligence as a new design material in developing AI systems without requiring technical know-how (Holmquist, 2017). Stages: system design, prototyping. Needs: AI/intelligence used as a tool to truly empower HCI designers, facilitating designers throughout iterative HCI design and evaluation (Yang, 2018). Conventional limitation: no tool effectively supports the design of AI systems, so HCI professionals need to learn the technical details of AI.

6. Prototyping of machine intelligent functions. Characteristics: use the Wizard of Oz (WOz) prototyping method to emulate and test the intelligent functions of an AI system and design ideas at an early stage (e.g., Martelaro et al., 2017). Stages: low-fidelity prototyping, HCI testing. Needs: at the early stage of development, prototype and test the intelligent capabilities of AI systems to assess and validate design ideas. Conventional limitation: focuses on non-intelligent functions and has difficulty presenting intelligent functions.

7. Big-data-based interaction design. Characteristics: model real-time user behaviors and contextual scenes using AI algorithms and big data to produce digital personas and usage scenarios, understanding personalized user needs in real time (e.g., Berndt et al., 2017). Stages: needs analysis, system design. Needs: provide personalized capabilities and content based on real-time digital personas, user behaviors, and usage context (e.g., Kleppe, 2017). Conventional limitation: difficult to predict user needs and hard to obtain real-time data such as user behaviors and contextual information.

8. Longitudinal study. Characteristics: assess the performance and impacts of human-AI systems or interfaces as AI systems evolve over time, including potential human-AI collaboration (Lieberman, 2009). Stages: HCI evaluation. Needs: assess AI systems and their behaviors as they evolve over time, optimizing interaction design and potential human-AI collaboration from a longitudinal perspective (Wang & Siau, 2018). Conventional limitation: interaction design decisions are made at a fixed time, without considering the evolution of AI-based machine behavior over time.
Acknowledgements

The authors appreciate the insightful comments from Professor Ben Shneiderman and four anonymous reviewers on an earlier draft of this paper. These insights have significantly improved the quality of this paper. Any opinions herein are those of the authors and do not reflect the views of any other individual or corporation.
References

Abdul, A., Vermeulen, J., Wang, D., Lim, B. Y., & Kankanhalli, M. (2018). Trends and trajectories for explainable, accountable and intelligible systems: An HCI research agenda. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, Paper No. 582.

Ayaz, H., & Dehais, F. (Eds.). (2018). Neuroergonomics: The Brain at Work and in Everyday Life. Cambridge, Massachusetts: Academic Press.

Acuna, D., Ling, H., Kar, A., & Fidler, S. (2018). Efficient interactive annotation of segmentation datasets with Polygon-RNN++. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 859-868).

Amershi, S., Cakmak, M., Knox, W. B., & Kulesza, T. (2014). Power to the people: The role of humans in interactive machine learning. AI Magazine, Winter 2014, 105-120.

Amershi, S., Weld, D., Vorvoreanu, M., Fourney, A., Nushi, B., Collisson, P., Suh, J., Iqbal, S., Bennett, P. N., Inkpen, K., Teevan, J., Kikin-Gil, R., & Horvitz, E. (2019). Guidelines for human-AI interaction. CHI 2019, May 4-9, 2019, Glasgow, Scotland, UK.

Auernhammer, J. (2020). Human-centered AI: The role of human-centered design research in the development of AI. In S. Boess, M. Cheung, & R. Cain (Eds.), Synergy - DRS International Conference 2020, 11-14 August, held online. https://doi.org/10.21606/drs.2020.282

Bakker, S., & Niemantsverdriet, K. (2016). The interaction-attention continuum: Considering various levels of human attention in interaction design. International Journal of Design, 10(2), 1-14.

Baecker, R., Grudin, J., Buxton, W. A., & Greenberg, S. (1995). A historical and intellectual perspective. Readings in Human-Computer Interaction: Toward the Year 2000, 35-47.

Bainbridge, L. (1983). Ironies of automation. Automatica, 19, 775-779.

Bansal, G., Nushi, B., Kamar, E., Lasecki, W. S., Weld, D. S., & Horvitz, E. (2019a). Beyond accuracy: The role of mental models in human-AI team performance. In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing (Vol. 7, No. 1, pp. 2-11).

Bansal, G., Nushi, B., Kamar, E., Weld, D. S., Lasecki, W. S., & Horvitz, E. (2019b). Updates in human-AI teams: Understanding and addressing the performance/compatibility tradeoff. Proceedings of the AAAI Conference on Artificial Intelligence, 33 (AAAI-19, IAAI-19, EAAI-20). https://doi.org/10.1609/aaai.v33i01.33012429

Bathaee, Y. (2018). The artificial intelligence black box and the failure of intent and causation. Harvard Journal of Law & Technology, 31(2), 890-938.

Baumer, E. P. (2017). Toward human-centered algorithm design. Big Data & Society, 4(2), 2053951717718854.

Beckers et al. (2019). Intelligent autonomous vehicles with an extendable knowledge base and meaningful human control. In Counterterrorism, Crime Fighting, Forensics, and Surveillance Technologies III (Vol. 11166, p. 111660C). International Society for Optics and Photonics.

Berndt, J. O., Rodermund, S. C., Lorig, F., & Timm, I. J. (2017). Modeling user behavior in social media with complex agents. HUSO 2017: The Third International Conference on Human and Social Analytics, 19-24. IARIA. ISBN: 978-1-61208-578-4.

Bolukbasi, T., Chang, K.-W., Zou, J. Y., Saligrama, V., & Kalai, A. T. (2016). Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In Advances in Neural Information Processing Systems, 4349-4357.

Bond, R., Mulvenna, M. D., Wan, H., Finlay, D. D., Wong, A., Koene, A., ... & Adel, T. (2019). Human centered artificial intelligence: Weaving UX into algorithmic decision making. In RoCHI (pp. 2-9).

Brandt, S. L., Lachter, R. R., & Shively, R. J. (2018). A human-autonomy teaming approach for a flight-following task. In C. Baldwin (Ed.), Advances in Neuroergonomics and Cognitive Engineering, Advances in Intelligent Systems and Computing. Springer International Publishing AG. DOI 10.1007/978-3-319-60642-2_2.

Brezillon, P. (2003). Focusing on context in human-centered computing. IEEE Intelligent Systems, 18, 62-66. doi:10.1109/MIS.2003.1200731.

Brill, J. C., Cummings, M. L., Evans III, A. W., Hancock, P. A., Lyons, J. B., & Oden, K. (2018). Navigating the advent of human-machine teaming. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 62, No. 1, pp. 455-459). Los Angeles, CA: SAGE Publications.

Bromham, L., Dinnage, R., & Hua, X. (2016). Interdisciplinary research has consistently lower funding success. Nature, 534, 684-687.

Broniatowski, D. A. (2021). Psychological foundations of explainability and interpretability in artificial intelligence.

Brown, B., Bødker, S., & Höök, K. (2017). Does HCI scale? Scale hacking and the relevance of HCI. Interactions, 24(5), 28-33.

Bryson, J. J., & Theodorou, A. (2019). How society can maintain human-centric artificial intelligence. In Human-Centered Digitalization and Services (pp. 305-323). Springer, Singapore.

Budiu, R., & Laubheimer, P. (2018). Intelligent assistants have poor usability: A user study of Alexa, Google Assistant, and Siri. https://www.nngroup.com

Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. In S. A. Friedler & C. Wilson (Eds.), Proceedings of the 1st Conference on Fairness, Accountability and Transparency (Vol. 81, pp. 77-91). PMLR.

Burns, C. M., & Hajdukiewicz, J. (2004). Ecological Interface Design. CRC Press.

Cooke, N. (2018). 5 ways to help robots work together with people. The Conversation. https://theconversation.com/5-ways-to-help-robots-work-together-with-people-101419

Calhoun, G. L., Ruff, H. A., Behymer, K. J., & Frost, E. M. (2018). Human-autonomy teaming interface design considerations for multi-unmanned vehicle control. Theoretical Issues in Ergonomics Science, 19, 321-352.

Card, S. K., Moran, T. P., & Newell, A. (1983). The Psychology of Human-Computer Interaction. Hillsdale: Lawrence Erlbaum Associates.

Carter, S., & Nielsen, M. (2017). Using artificial intelligence to augment human intelligence. Distill, 2, 12.

Cerejo, J. (2021). The design process of human-centered AI - Part 2. Bootcamp. https://bootcamp.uxdesign.cc/human-centered-ai-design-process-part-2-empathize-hypothesis-6065db967716

Chatzimparmpas, A., Martins, R. M., Jusufi, I., & Kerren, A. (2020). A survey of surveys on the use of visualization for interpreting machine learning models. Information Visualization, 19, 207-233.

Chittajallu, D. R., Dong, B., Tunisin, P., et al. (2019). XAI-CBIR: Explainable AI system for content based retrieval of video frames from minimally invasive surgery videos. 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019).

Clare, A. S., Cummings, M. L., & Repenning, N. P. (2015). Influencing trust for human-automation collaborative scheduling of multiple unmanned vehicles. Human Factors, 57, 1208-1218.

Correia, A., Paredes, H., Schneider, D., Jameel, S., & Fonseca, B. (2019). Towards hybrid crowd-AI centered systems: Developing an integrated framework from an empirical perspective. In 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC) (pp. 4013-4018).

Crandall, J. W., et al. (2018). Cooperating with machines. Nature Communications, 9, 233.

Cummings, M. L., & Clare, A. S. (2015). Holistic modelling for human autonomous system interaction. Theoretical Issues in Ergonomics Science, 16, 214-231. DOI: 10.1080/1463922X.2014.1003990.

Cummings, M. L. (2019). Lethal autonomous weapons: Meaningful human control or meaningful human certification? IEEE Technology and Society Magazine, 38, 20-26.

Cummings, M. L., & Britton, D. (2020). Regulating safety-critical autonomous systems: Past, present, and future perspectives. In Living with Robots (pp. 119-140). Academic Press.

de Visser, E. J., Pak, R., & Shaw, T. H. (2018). From automation to autonomy: The importance of trust repair in human-machine interaction. Ergonomics, 61, 1409-1427. DOI: 10.1080/00140139.2018.1457725.

de Sio, S. F., & den Hoven, V. J. (2018). Meaningful human control over autonomous systems: A philosophical account. Frontiers in Robotics and AI, 5, 15. doi: 10.3389/frobt.2018.00015.

den Broek, H. V., Schraagen, J. M., te Brake, G., & van Diggelin, J. (2017). Approaching full autonomy in the maritime domain: Paradigm choices and human factors challenges. In Proceedings of the MTEC, Singapore, 26-28 April 2017.

Dai, G. Z., & Wang, H. (2004). Physical object icons buttons gesture (PIBG): A new interaction paradigm with pen. In Proceedings of the 8th International Conference on Computer Supported Cooperative Work, Xiamen, 11-20.

Dellermann, D., Calma, A., Lipusch, N., Weber, T., Weigel, S., & Ebel, P. (2019a). The future of human-AI collaboration: A taxonomy of design knowledge for hybrid intelligence systems. In Hawaii International Conference on System Sciences (HICSS), Hawaii, USA.

Dellermann, D., Ebel, P., Söllner, M., & Leimeister, J. M. (2019b). Hybrid intelligence. Business & Information Systems Engineering, 61, 637-643.

Demir, M., Likens, A. D., Cooke, N. J., Amazeen, P. G., & McNeese, N. J. (2018a). Team coordination and effectiveness in human-autonomy teaming. IEEE Transactions on Human-Machine Systems, 49, 150-159.

Demir, M., Cooke, N. J., & Amazeen, P. G. (2018b). A conceptual model of team dynamical behaviors and performance in human-autonomy teaming. Cognitive Systems Research, 52, 497-507.

Donahoe, E. (2018). Human-centered AI: Building trust, democracy and human rights by design. An overview of Stanford's Global Digital Policy Incubator and the XPRIZE Foundation's June 11th event. Stanford GDPi. https://medium.com/stanfords-gdpi/human-centered-ai-building-trust-democracy-and-human-rights-by-design-2fc14a0b48af

Dove, G., Halskov, K., Forlizzi, J., & Zimmerman, J. (2017). UX design innovation: Challenges for working with machine learning as a design material. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI '17) (pp. 278-288). New York: ACM Press. https://doi.org/10.1145/3025453.3025739

Ehsan, U., & Riedl, M. O. (2020). Human-centered explainable AI: Towards a reflective sociotechnical approach. arXiv preprint arXiv:2002.01092.

Endsley, M. R., & Jones, D. G. (2012). Designing for Situation Awareness: An Approach to User-Centered Design (2nd ed.). London: CRC Press.

Endsley, M. R. (2017). From here to autonomy: Lessons learned from human-automation research. Human Factors, 59, 5-27. DOI: 10.1177/0018720816681350.

Endsley, M. R. (2018). Situation awareness in future autonomous vehicles: Beware of the unexpected. In Proceedings of the 20th Congress of the International Ergonomics Association (IEA 2018). Springer.

Epstein, Z., et al. (2018). Closing the AI knowledge gap. arXiv [cs.CY].

Fiebrink, R., & Gillies, M. (2018). Introduction to the special issue on human-centered machine learning. ACM Transactions on Interactive Intelligent Systems, 8(2), Article 7. https://doi.org/10.1145/3205942

Farooq, U., & Grudin, J. (2016). Human computer integration. Interactions, 23, 27-32.

Fan, J. J., Tian, F., Du, Y., et al. (2018). Thoughts on human-computer interaction in the age of artificial intelligence (in Chinese). Sci Sin Inform, 48, 361-375. doi: 10.1360/N112017-00221.

Ford, K. M., Hayes, P. J., Glymour, C., & Allen, J. (2015). Cognitive orthoses: Toward human-centered AI. AI Magazine, 36. doi:10.1609/aimag.v36i4.2629.

Fu, X. L., Cai, L. H., Liu, Y., et al. (2014). A computational cognition model of perception, memory, and judgment. Sci China Inf Sci, 57, 032114.

Girardin, F., & Lathia, N. (2017). When user experience designers partner with data scientists. In the AAAI Spring Symposium Series Technical Report: Designing the User Experience of Machine Learning Systems. Palo Alto, California: The AAAI Press. https://www.aaai.org/ocs/index.php/SSS/SSS17/paper/view/15364

Garlan, D., Siewiorek, D. P., Smailagic, A., & Steenkiste, P. (2002). Project Aura: Toward distraction-free pervasive computing. IEEE Pervasive Computing, 1, 22-31.

Gerber, A., Derckx, P., Döppner, D. A., & Schoder, D. (2020). Conceptualization of the human-machine symbiosis - A literature review. In Proceedings of the 53rd Hawaii International Conference on System Sciences.

Google PAIR. (2019). People + AI Guidebook: Designing human-centered AI products. pair.withgoogle.com

Grimm, D., Demir, M., Gorman, J. C., & Cooke, N. J. (2018). Team situation awareness in human-autonomy teaming: A systems level approach. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 62, 149.

Grudin, J. (2005). Three faces of human-computer interaction. IEEE Annals of the History of Computing, 27, 46-62.

Gunning, D. (2017). Explainable Artificial Intelligence (XAI) at DARPA. https://www.darpa.mil/attachments/XAIProgramUpdate.pdf

Hancock, P. A. (2019). Some pitfalls in the promises of automated and autonomous vehicles. Ergonomics, 62, 479-495. DOI: 10.1080/00140139.2018.1498136.

Hawking, S., Musk, E., Wozniak, S., et al. (2015). Autonomous weapons: An open letter from AI & robotics researchers. Future of Life Institute.

Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds and Machines, 30(1), 99-120.

He, H., Gray, J., Cangelosi, A., Meng, Q., McGinnity, T. M., & Mehnen, J. (2021). The challenges and opportunities of human-centered AI for trustworthy robots and autonomous systems. arXiv preprint arXiv:2105.04408.

Hoffman, R. R., Cullen, T. M., & Hawley, J. K. (2016). The myths and costs of autonomous weapon systems. Bulletin of the Atomic Scientists, 72, 247-255. DOI: 10.1080/00963402.2016.1194619.

Hoffman, R. R., Roesler, A., & Moon, B. M. (2004). What is design in the context of human-centered computing? IEEE Intelligent Systems, 19, 89-95. doi:10.1109/MIS.2004.36.

Hoffman, R. R., Klein, G., & Mueller, S. T. (2018). Explaining explanation for "explainable AI". In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 62, No. 1, pp. 197-201). Los Angeles, CA: SAGE Publications.

Holmquist, L. E. (2017). Intelligence on tap: Artificial intelligence as a new design material. Interactions, 24, 28-33.

Hollnagel, E., & Woods, D. D. (2005). Joint Cognitive Systems: Foundations of Cognitive Systems Engineering. London: CRC Press.

Hurts, K., & de Greef, P. (1994). Cognitive ergonomics of multi-agent systems: Observations, principles and research issues. In International Conference on Human-Computer Interaction (pp. 164-180). Berlin, Heidelberg: Springer.

Howell, W. C. (2001). The HF/E/E parade: A tale of two models. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 45, 1-5.

IEEE. (2019). Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems (First edition). The Institute of Electrical and Electronics Engineers (IEEE), Incorporated.

Inkpen, K., Chancellor, S., De Choudhury, M., Veale, M., & Baumer, E. P. (2019, May). Where is the human? Bridging the gap between AI and HCI. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1-9).

ISO (International Organization for Standardization). (2020). Ergonomics - Ergonomics of Human-System Interaction - Part 810: Robotic, Intelligent and Autonomous Systems (version for review).

Jacob, R. J. K., Girouard, A., Hirshfield, L. M., Horn, M. S., Erin, O. S., Solovey, T., & Zigelbaum, J. (2008). Reality-based interaction: A framework for post-WIMP interfaces. CHI 2008, April 5-10, 2008, Florence, Italy.

Jacko, J. A. (Ed.). (2012). Human Computer Interaction Handbook: Fundamentals, Evolving Technologies, and Emerging Applications. CRC Press.

Jeong, K. A. (2019). Human-system cooperation in automated driving. International Journal of Human-Computer Interaction, 35, 917-918.

Johnson, M., & Vera, A. (2019). No AI is an island: The case for teaming intelligence. AI Magazine, 40, 16-28. https://doi.org/10.1609/aimag.v40i1.2842

Jun, S. U., Yuming, W. E. I., & Cui, H. U. A. N. G. (2021). An integrated analysis framework of artificial intelligence social impact based on application scenarios. Science of Science and Management of S. & T., 42(05), 3.

Kaber, D. B. (2018). A conceptual framework of autonomous and automated agents. Theoretical Issues in Ergonomics Science, 19, 406-430.

Kaluarachchi, T., Reis, A., & Nanayakkara, S. (2021). A review of recent deep learning approaches in human-centered machine learning. Sensors, 21(7), 2514.

Kistan, T., Gardi, A., & Sabatini, R. (2018). Machine learning and cognitive ergonomics in air traffic management: Recent developments and considerations for certification. Aerospace, 5, 103. doi:10.3390/aerospace5040103.

Klein, H. A., Lin, M. H., Miller, N. L., Militello, L. G., Lyons, J. B., & Finkeldey, J. G. (2019). Trust across culture and context. Journal of Cognitive Engineering and Decision Making, 13, 10-29.

Klein, G., Woods, D. D., Bradshaw, J. M., Hoffman, R. R., & Feltovich, P. J. (2004). Ten challenges for making automation a "team player" in joint human-agent activity. IEEE Intelligent Systems, 6, 91-95.

Kleppe, M., & Otte, M. (2017). Analysing and understanding news consumption patterns by tracking online user behaviour with a multimodal research design. Digital Scholarship in the Humanities, 32, Supplement 2.

Kies, J. K., Williges, R. C., & Rosson, M. B. (1998). Coordinating computer-supported cooperative work: A review of research issues and strategies. Journal of the American Society for Information Science, 49(9), 776-791.

Kulesza, T., Burnett, M., Wong, W. K., & Stumpf, S. (2015). Principles of explanatory debugging to personalize interactive machine learning. In Proceedings of the 20th International Conference on Intelligent User Interfaces (pp. 126-137). ACM.

Lazer, D., Kennedy, R., King, G., & Vespignani, A. (2014). The parable of Google Flu: Traps in the big data analysis. Science, 343(6176), 1203-1205. https://doi.org/10.1126/science.1248506

Lau, N., Fridman, L., Borghetti, B. J., & Lee, J. D. (2018). Machine learning and human factors: Status, applications, and future directions. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 62, No. 1, pp. 135-138). Los Angeles, CA: SAGE Publications.

Lee, J. D., & Kolodge, K. (2018). Understanding attitudes towards self-driving vehicles: Quantitative analysis of qualitative data. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 2018, Philadelphia, PA.

Leibo, J. Z., et al. (2018). Psychlab: A psychology laboratory for deep reinforcement learning agents. arXiv [cs.AI].

Licklider, J. C. R. (1960). Man-computer symbiosis. IRE Transactions on Human Factors in Electronics, 1, 4-11.

Lieberman, H. (2009). User interface goals, AI opportunities. AI Magazine, 30, 16-22.

Li, F. F. (2018). How to make A.I. that's good for people. The New York Times. https://www.nytimes.com/2018/03/07/opinion/artificial-intelligence-human.html

Li, F. F., & Etchemendy, J. (2018). A common goal for the brightest minds from Stanford and beyond: Putting humanity at the center of AI.

Liu, Y., Wang, Y. M., Bian, Y. L., et al. (2018). A psychological model of human-computer cooperation for the era of artificial intelligence. Sci Sin Inform, 48, 376-389. doi: 10.1360/N112017-00225.

Lyons, J. B., Mahoney, S., Wynne, K. T., & Roebke, M. A. (2018). Viewing machines as teammates: A qualitative study. In 2018 AAAI Spring Symposium Series.

Madni, A. M., & Madni, C. C. (2018). Architectural framework for exploring adaptive human-machine teaming options in simulated dynamic environments. Systems, 6, 1-17. doi:10.3390/systems6040044.

St. John, M., Kobus, D. A., Morrison, J. G., & Schmorrow, D. (2004). Overview of the DARPA Augmented Cognition Technical Integration Experiment. International Journal of Human-Computer Interaction, 17, 131-149. DOI: 10.1207/s15327590ijhc1702_2.
WoZ Way: Enabling Real-Time Remote Interaction Prototyping & Observation in On-Road Vehicles. N Martelaro, W Ju, ACM Conference on Computer-Supported Cooperative Work and Social Computing. Portland, OR, USAMartelaro, N., Ju, W. (2017). WoZ Way: Enabling Real-Time Remote Interaction Prototyping & Observation in On-Road Vehicles. ACM Conference on Computer-Supported Cooperative Work and Social Computing. February 25-March 1, 2017, Portland, OR, USA.
AI Incident Database. S Mcgregor, McGregor, S. (2021). AI Incident Database. https://incidentdatabase.ai/ accessed July 12, 2021
Explainable AI: Beware of Inmates Running the Asylum. T Miller, P Howe, L Sonenberg, Miller, T.; Howe, P.; Sonenberg, L. (2017). Explainable AI: Beware of Inmates Running the Asylum. Available online: https://arxiv.org/pdf/1712.00547.pdf
AI ethics-too principled to fail?. B Mittelstadt, arXiv:1906.06668arXiv preprintMittelstadt, B. (2019). AI ethics-too principled to fail? arXiv preprint arXiv:1906.06668.
Human-level control through deep reinforcement learning. V Mnih, K Kavukcuoglu, Nature. 518Mnih V, Kavukcuoglu K et al. (2015) Human-level control through deep reinforcement learning. Nature, 518, 529-533.
The media inequality: Comparing the initial human-human and human-AI social interactions. Y Mou, K Xu, Computers in Human Behavior. 72Mou, Y., & Xu, K. (2017). The media inequality: Comparing the initial human-human and human-AI social interactions. Computers in Human Behavior, 72, 432-440.
Explanation in human-AI systems: A literature meta-review. S T Mueller, R R Hoffman, W Clancey, A Emrey, G Klein, arXiv:1902.01876synopsis of key ideas and publications, and bibliography for explainable AI. arXiv preprintMueller, S. T., Hoffman, R. R., Clancey, W., Emrey, A., & Klein, G. (2019). Explanation in human-AI systems: A literature meta-review, synopsis of key ideas and publications, and bibliography for explainable AI. arXiv preprint arXiv:1902.01876.
Training and Design Approaches for Enhancing Automation Awareness (Boeing Document D6-82577). R J Mumaw, D Boonman, J Griffin, & W Xu, Mumaw, R. J., Boonman, D., Griffin, J., & W. Xu. (1999). Training and Design Approaches for Enhancing Automation Awareness (Boeing Document D6-82577), December 1999.
A state of science on highly automated driving. J Navarro, 10.1080/1463922X.2018.1439544Theoretical Issues in Ergonomics Science. 20Navarro, J. (2018). A state of science on highly automated driving. Theoretical Issues in Ergonomics Science, 20, 366-296, DOI: 10.1080/1463922X.2018.1439544.
Collision between a car operating with automated vehicle control systems and a tractorsemitrailor truck near. Nhtsa, National Transportation Safety Board. NHTSA (The U.S. Department of National Highway Traffic Safety AdministrationAccidents ReportAutomated Vehicles for SafetyNHTSA. (2018). Automated Vehicles for Safety. NHTSA (The U.S. Department of National Highway Traffic Safety Administration) Report: https://www.nhtsa.gov/technology-innovation/automated-vehicles- safety NTSB. (2017). Collision between a car operating with automated vehicle control systems and a tractor- semitrailor truck near Williston, Florida, May 7, 2016. Accidents Report, by National Transportation Safety Board (NTSB) 2017, Washington, DC.
Participatory design and artificial intelligence: Strategies to improve health communication for diverse audiences. L Neuhauser, G L Kreps, Association for the Advancement of Artificial Intelligence. AAAI Spring SymposiumNeuhauser, L., & Kreps, G. L. (2011). Participatory design and artificial intelligence: Strategies to improve health communication for diverse audiences. In AAAI Spring Symposium (pp. 49-52). Association for the Advancement of Artificial Intelligence.
Improving the Design of Ambient Intelligence Systems: Guidelines Based on a Systematic Review. J D Oliveira, J C Couto, V S M Paixão-Cortes, R H Bordini, International Journal of Human-Computer Interaction. Oliveira, J. D., Couto, J. C., Paixão-Cortes, V. S. M., & Bordini, R. H. (2021). Improving the Design of Ambient Intelligence Systems: Guidelines Based on a Systematic Review. International Journal of Human-Computer Interaction, 1-9.
Human performance consequences of stages and levels of automation: An integrated meta-analysis. L Onnasch, C D Wickens, H Li, D Manzey, Human Factors. 56Onnasch, L., Wickens, C. D., Li H., & Manzey. D. (2014). Human performance consequences of stages and levels of automation: An integrated meta-analysis. Human Factors, 56, 476-488.
Human-autonomy teaming: A review and analysis of the empirical literature. T O'neill, N Mcneese, A Barron, B Schelble, Human Factors. 0018720820960865O'Neill, T., McNeese, N., Barron, A., & Schelble, B. (2020). Human-autonomy teaming: A review and analysis of the empirical literature. Human Factors, 0018720820960865.
Artificial intelligence as colleague and supervisor: Successful and fair interactions between intelligent technologies and employees at work. S K Ötting, Dissertation of Bielefeld UniversityÖtting, S. K. (2020). Artificial intelligence as colleague and supervisor: Successful and fair interactions between intelligent technologies and employees at work. Dissertation of Bielefeld University.
Human-centered artificial intelligence and machine learning. M O Riedl, Human Behavior and Emerging Technologies. 1Riedl, M. O. (2019). Human-centered artificial intelligence and machine learning. Human Behavior and Emerging Technologies, 1, 33-36.
Neuroergonomics: The Brain at Work. R Parasuraman, M Rizzo, Oxford University PressOxfordParasuraman, R. & Rizzo, M. (2006). Neuroergonomics: The Brain at Work. Oxford: Oxford University Press.
A model for types and levels of human interaction with automation. R Parasuraman, T B Sheridan, C D Wickens, IEEE Transactions on systems, man, and cybernetics-Part A: Systems and Humans. 30Parasuraman, R., Sheridan, T. B., & Wickens, C. D. (2000). A model for types and levels of human interaction with automation. IEEE Transactions on systems, man, and cybernetics-Part A: Systems and Humans, 30(3), 286-297.
D Pásztor, AI UX: 7 Principles of Designing Good AI Products. Pásztor, D (2018) AI UX: 7 Principles of Designing Good AI Products, UXStudio: https://uxstudioteam.com/ux-blog/ai-ux/
Human-agent interaction: Challenges for bringing humans and agents together. R Prada, A Paiva, Proc. of the 3rd Int. Workshop on Human-Agent Interaction Design and Models (HAIDM 2014) at the 13th Int. Conf. on Agent and Multi-Agent Systems. of the 3rd Int. Workshop on Human-Agent Interaction Design and Models (HAIDM 2014) at the 13th Int. Conf. on Agent and Multi-Agent SystemsAAMASPrada, R., & Paiva, A. (2014). Human-agent interaction: Challenges for bringing humans and agents together. In Proc. of the 3rd Int. Workshop on Human-Agent Interaction Design and Models (HAIDM 2014) at the 13th Int. Conf. on Agent and Multi-Agent Systems (AAMAS 2014) (pp. 1- 10).
Explainable AI: Driving Business Value through greater understanding. PwC (PricewaterhouseCoopers). PwC (PricewaterhouseCoopers). (2018). Explainable AI: Driving Business Value through greater understanding. https://www.pwc.co.uk/audit-assurance/assets/explainable-ai.pdf
. I Rahwan, M Cebrian, N Obradovich, J Bongard, J.-F Bonnefon, C Breazeal, Rahwan, I., Cebrian, M., Obradovich, N., Bongard, J., Bonnefon, J.-F., Breazeal, C., . . .
Machine behaviour. M Wellman, Nature. 568& Wellman, M. (2019). Machine behaviour. Nature, 568, 477-486.
Why should I trust you? Explaining the predictions of any classifier. M T Ribeiro, S Singh, C Guestrin, Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data MiningACMRibeiro, M. T., Singh, S. & Guestrin, C. (2016). Why should I trust you? Explaining the predictions of any classifier. in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining 1135-1144 (ACM, 2016).
Research priorities for robust and beneficial artificial intelligence. S Russell, D Dewey, M Tegmark, AI MagazineRussell, S., Dewey, D., Tegmark, M. (2015). Research priorities for robust and beneficial artificial intelligence. AI Magazine, 105-114.
Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles. SAE (Society of Automotive EngineersRecommended Practice J3016 (revised 2018-06SAE (Society of Automotive Engineers). (2018). Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles. Recommended Practice J3016 (revised 2018- 06).
The horse has bolted! Why human factors and ergonomics has to catch up with autonomous vehicles (and other advanced forms of automation). P M Salmon, 10.1080/00140139.2018.1563333Ergonomics. 62Salmon, P. M. (2019). The horse has bolted! Why human factors and ergonomics has to catch up with autonomous vehicles (and other advanced forms of automation). Ergonomics, 62, 502-504, DOI: 10.1080/00140139.2018.1563333.
To protect us from the risks of advanced artificial intelligence, we need to act now. P M Salmon, P Hancock, A W Carden, Conversation Media Group. 25ConversationSalmon, P. M., Hancock, P., & Carden, A. W. (2019). To protect us from the risks of advanced artificial intelligence, we need to act now. In Conversation (Vol. 25). Conversation Media Group.
Insight solutions are correct more often than analytic solutions. C Salvi, E Bricolo, J Kounios, 10.1080/13546783.2016.1141798Think. Reason. 22Salvi, C., Bricolo, E., Kounios, J. et al. (2016). Insight solutions are correct more often than analytic solutions. Think. Reason., 22, 443-460. http://dx.doi.org/10.1080/13546783.2016.1141798
How in the world did we ever get into that mode: Mode error and awareness in supervisory control. N B Sarter, D D Woods, Doi: 1518/001872095779049516Human Factors. 37Sarter, N. B., & Woods, D. D. (1995). How in the world did we ever get into that mode: Mode error and awareness in supervisory control. Human Factors, 37, 5-19, Doi: 1518/001872095779049516.
Human and computer control of undersea teleoperators. T B Sheridan, W L Verplank, Massachusetts Inst of Tech Cambridge Man-Machine Systems LabSheridan, T. B., & Verplank, W. L. (1978). Human and computer control of undersea teleoperators. Massachusetts Inst of Tech Cambridge Man-Machine Systems Lab.
Why humanautonomy teaming?. R J Shively, J Lachter, S L Brandt, M Matessa, V Battiste, W W Johnson, International conference on applied human factors and ergonomics. ChamSpringerShively, R. J., Lachter, J., Brandt, S. L., Matessa, M., Battiste, V., & Johnson, W. W. (2017). Why human- autonomy teaming? In International conference on applied human factors and ergonomics (pp. 3- 11). Springer, Cham.
Human-centered artificial intelligence: Reliable, safe & trustworthy. B Shneiderman, 10.1080/10447318.2020.1741118International Journal of Human-Computer Interaction. 36Shneiderman, B. (2020a) Human-centered artificial intelligence: Reliable, safe & trustworthy, International Journal of Human-Computer Interaction, 36, 495-504, DOI: 10.1080/10447318.2020.1741118
Human-centered artificial intelligence: Three fresh ideas. B Shneiderman, AIS Transactions on Human-Computer Interaction. 12Shneiderman, B. (2020b). Human-centered artificial intelligence: Three fresh ideas. AIS Transactions on Human-Computer Interaction, 12, 109-124.
Design lessons from AI's two grand goals: Human emulation and useful applications. B Shneiderman, IEEE Transactions on Technology and Society. 1Shneiderman, B. (2020c). Design lessons from AI's two grand goals: Human emulation and useful applications, IEEE Transactions on Technology and Society, 1, 73-82. https://ieeexplore.ieee.org/document/9088114.
Bridging the gap between ethics and practice: Guidelines for reliable, safe, and trustworthy Human-Centered AI systems. B Shneiderman, ACM Transactions on Interactive Intelligent Systems (TiiS). 10Shneiderman, B. (2020d). Bridging the gap between ethics and practice: Guidelines for reliable, safe, and trustworthy Human-Centered AI systems. ACM Transactions on Interactive Intelligent Systems (TiiS), 10, 1-31.
Grand challenges for HCI researchers. interactions. B Shneiderman, C Plaisant, M Cohen, S Jacobs, N Elmqvist, N Diakopoulos, 23Shneiderman, B., Plaisant, C., Cohen, M., Jacobs, S., Elmqvist, N., & Diakopoulos, N. (2016). Grand challenges for HCI researchers. interactions, 23(5), 24-25.
Designing Out Stereotypes in Artificial Intelligence: Involving users in the personality design of a digital assistant. J Spencer, J Poggi, R Gheerawo, Proceedings of the 4th EAI international conference on smart objects and technologies for social good. the 4th EAI international conference on smart objects and technologies for social goodSpencer, J., Poggi, J., & Gheerawo, R. (2018). Designing Out Stereotypes in Artificial Intelligence: Involving users in the personality design of a digital assistant. In Proceedings of the 4th EAI international conference on smart objects and technologies for social good (pp. 130-135).
A Survey of Human-Centered Evaluations in Human-Centered Machine Learning. F Sperrle, M El-Assady, G Guo, R Borgo, D H Chau, A Endert, D Keim, Computer Graphics Forum. 40Sperrle, F., El-Assady, M., Guo, G., Borgo, R., Chau, D. H., Endert, A., & Keim, D. (2021, June). A Survey of Human-Centered Evaluations in Human-Centered Machine Learning. In Computer Graphics Forum (Vol. 40, No. 3, pp. 543-567).
Seven HCI grand challenges. C Stephanidis, G Salvendy, M Antona, J Y Chen, J Dong, V G Duffy, . . Zhou, J , International Journal of Human-Computer Interaction. 3514Stephanidis, C., Salvendy, G., Antona, M., Chen, J. Y., Dong, J., Duffy, V. G., ... & Zhou, J. (2019). Seven HCI grand challenges. International Journal of Human-Computer Interaction, 35(14), 1229-1269.
Ironies of automation: still unresolved after all these years. B Strauch, 10.1109/THMS.2017.2732506IEEE Transactions on Human-Machine Systems. 99Strauch, B. (2017). Ironies of automation: still unresolved after all these years. IEEE Transactions on Human-Machine Systems, 99,1-15. DOI:10.1109/THMS.2017.2732506
From Human-Computer Interaction to Human-Environment Interaction: Ambient Intelligence and the Disappearing Computer. N Streitz, https:/link.springer.com/chapter/10.1007/978-3-540-71025-7_1Proceedings of the 9th ERCIM Workshop on User Interfaces for All. the 9th ERCIM Workshop on User Interfaces for AllSpringerStreitz, N. (2007). From Human-Computer Interaction to Human-Environment Interaction: Ambient Intelligence and the Disappearing Computer. Proceedings of the 9th ERCIM Workshop on User Interfaces for All (pp. 3-13). Springer. https://link.springer.com/chapter/10.1007/978-3-540-71025- 7_1
Rise of machine agency: A framework for studying the psychology of human-AI interaction (HAII). S S Sundar, Journal of Computer-Mediated Communication. 251Sundar, S. S. (2020). Rise of machine agency: A framework for studying the psychology of human-AI interaction (HAII). Journal of Computer-Mediated Communication, 25(1), 74-88.
Prototyping ways of prototyping AI. Interactions. P Allen, 25Allen, P. (2018). Prototyping ways of prototyping AI. Interactions, 25, 46-51.
Cognitive Work Analysis: Toward Safe, Productive, and Healthy Computer-Based Work. K J Vicente, ErlbaumHillsdale, NJVicente, K. J. (1999). Cognitive Work Analysis: Toward Safe, Productive, and Healthy Computer-Based Work. Hillsdale, NJ: Erlbaum.
J Diggelen, J Barnhoorn, R Post, J Sijs, N Van Der Stap, J Van Der Waa, Delegation in Human-machine Teaming: Progress, Challenges and Prospects. Diggelen, J., Barnhoorn, J., Post, R., Sijs, J., van der Stap, N., & van der Waa, J. (2020) Delegation in Human-machine Teaming: Progress, Challenges and Prospects.
Intelligent Control, Beijing: China Science and Technology Press. F Wang, W Wang, K Siau, Artificial intelligence: a study on governance, policies, and regulations. MWAIS 2018 proceedings. 40Wang, F. C, (2019) Intelligent Control, Beijing: China Science and Technology Press, 2019. Wang, W., & Siau, K. (2018). Artificial intelligence: a study on governance, policies, and regulations. MWAIS 2018 proceedings, 40.
Autonomous systems. D P Watson, D H Scheidt, Johns Hopkins APL technical digest. 264Watson, D. P., & Scheidt, D. H. (2005). Autonomous systems. Johns Hopkins APL technical digest, 26(4), 368-376.
When computers take the lead: The automation of leadership. J S Wesche, A Sonderegger, Computers in human Behavior. 101Wesche, J. S., & Sonderegger, A. (2019). When computers take the lead: The automation of leadership. Computers in human Behavior, 101, 197-209.
Engineering psychology and human performance. C D Wickens, J G Hollands, S Banbury, R Parasuraman, Psychology PressWickens, C. D., Hollands, J. G., Banbury, S., & Parasuraman, R. (2015). Engineering psychology and human performance. Psychology Press.
The effects of participatory mode and task workload on the detection of dynamic system failures. C D Wickens, C Kessel, 10.1109/TSMC.1979.4310070IEEE Transactions on Systems, Man, and Cybernetics. 9Wickens, C.D. & Kessel, C. (1979). The effects of participatory mode and task workload on the detection of dynamic system failures," in IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 24-34, Jan. 1979, doi: 10.1109/TSMC.1979.4310070.
History of artificial intelligence. Wikipedia, Wikipedia (2021). History of artificial intelligence. https://en.wikipedia.org/wiki/History_of_artificial_intelligence. Accessed July 12, 2021.
Shifting viewpoints: Artificial intelligence and human-computer interaction. T Winograd, 10.1016/j.artint.2006.10.011Artificial Intelligence. 170Winograd, T. (2006). Shifting viewpoints: Artificial intelligence and human-computer interaction. Artificial Intelligence, 170, 1256-1258. doi:https://doi.org/10.1016/j.artint.2006.10.011.
An introduction to multiagent systems. M Wooldridge, John wiley & sonsWooldridge, M. (2009). An introduction to multiagent systems. John wiley & sons.
Status and challenges: Human factors in developing modern civil flight decks. W Xu, Journal of Ergonomics. 104Xu, W. (2004). Status and challenges: Human factors in developing modern civil flight decks. Journal of Ergonomics, 10 (4), 53-56.
Recent trend of research and applications on human-computer interaction. W Xu, Journal of Ergonomics. 11Xu, W. (2005). Recent trend of research and applications on human-computer interaction. Journal of Ergonomics, 11, 37-40.
Identifying problems and generating recommendations for enhancing complex systems: Applying the abstraction hierarchy framework as an analytical tool. W Xu, Human Factors. 496Xu, W. (2007). Identifying problems and generating recommendations for enhancing complex systems: Applying the abstraction hierarchy framework as an analytical tool. Human Factors, 49(6), 975-994.
Revisiting user-centered design approach: new challenges and new opportunities. W Xu, Chinese Journal of Ergonomics. 231Xu, W. (2017). Revisiting user-centered design approach: new challenges and new opportunities. Chinese Journal of Ergonomics, 23(1), 82-86.
Methods for user experience and innovative design in the intelligent era. W Xu, Chinese Journal of Applied Psychology. 251User-centered design (III)Xu, W. (2018). User-centered design (III): Methods for user experience and innovative design in the intelligent era. Chinese Journal of Applied Psychology, 25(1), 3-17.
Toward human-centered AI: A perspective from human-computer interaction. W Xu, Interactions. 264Xu, W. (2019a). Toward human-centered AI: A perspective from human-computer interaction. Interactions, 26(4), 42-46.
User-centered design (IV): Human-centered artificial intelligence. W Xu, Chinese Journal of Applied Psychology. 254Xu, W. (2019b). User-centered design (IV): Human-centered artificial intelligence. Chinese Journal of Applied Psychology, 25(4), 291-305.
User-centered design (V): From automation to the autonomy and autonomous vehicles in the intelligence era. W Xu, Chinese Journal of Applied Psychology. 262Xu, W. (2020). User-centered design (V): From automation to the autonomy and autonomous vehicles in the intelligence era. Chinese Journal of Applied Psychology, 26(2),108-129.
From automation to autonomy and autonomous vehicles: Challenges and opportunities for human-computer interaction. W Xu, Interactions. 281Xu, W. (2021). From automation to autonomy and autonomous vehicles: Challenges and opportunities for human-computer interaction. Interactions, 28(1), 49-53.
Engineering psychology in the era of artificial intelligence. W Xu, L Ge, Advances in Psychological Science. 289Xu, W. & Ge, L. (2020). Engineering psychology in the era of artificial intelligence. Advances in Psychological Science, 28(9), 1409-1425.
Applications of an interaction, process, integration and intelligence (IPII) design approach for ergonomics solutions. W Xu, D Furie, M Mahabhaleshwar, B Suresh, H Chouhan, Ergonomics. 627Xu, W., Furie, D., Mahabhaleshwar, M., Suresh, B., & Chouhan, H. (2019). Applications of an interaction, process, integration and intelligence (IPII) design approach for ergonomics solutions. Ergonomics, 62(7), 954-980.
How do visual explanations foster end users' appropriate trust in machine learning?. F Yang, Z Huang, J Scholtz, D L Arendt, Proceedings of the 25th ACM International Conference on Intelligent User Interfaces. the 25th ACM International Conference on Intelligent User InterfacesYang, F., Huang, Z., Scholtz, J., & Arendt, D. L. (2020). How do visual explanations foster end users' appropriate trust in machine learning? In Proceedings of the 25th ACM International Conference on Intelligent User Interfaces (pp. 189-201).
Re-examining whether, why, and how human-AI interaction is uniquely difficult to design. Q Yang, A Steinfeld, C Rosé, J Zimmerman, 10.1145/3313831.3376301CHI Conference on Human Factors in Computing Systems (CHI '20). Honolulu, HI, USA; New York, NY, USAACM12Yang, Q., Steinfeld, A., Rosé, C. & Zimmerman, J. (2020). Re-examining whether, why, and how human- AI interaction is uniquely difficult to design. In CHI Conference on Human Factors in Computing Systems (CHI '20), April 25-30, 2020, Honolulu, HI, USA. ACM, New York, NY, USA, 12 pages. https://doi.org/10.1145/3313831.3376301.
Investigating how experienced UX designers effectively work with machine learning. Q Yang, A Scuito, J Zimmerman, J Forlizzi, A Steinfeld, 10.1145/3196709.3196730Proceedings of the 2018 Designing Interactive Systems Conference (DIS '18). the 2018 Designing Interactive Systems Conference (DIS '18)New York, NY, USAACMYang, Q., Scuito, A., Zimmerman, J., Forlizzi, J. & Steinfeld, A. (2018). Investigating how experienced UX designers effectively work with machine learning. In Proceedings of the 2018 Designing Interactive Systems Conference (DIS '18). ACM, New York, NY, USA, 585-596. https://doi.org/10.1145/3196709.3196730.
Machine learning as a UX design material: How can we imagine beyond automation, recommenders, and reminders? Conference. Q Yang, AAAI Spring Symposium Series: User Experience of Artificial Intelligence. At; Palo Alto, CAYang, Q. (2018). Machine learning as a UX design material: How can we imagine beyond automation, recommenders, and reminders? Conference: 2018 AAAI Spring Symposium Series: User Experience of Artificial Intelligence, March 2018, At: Palo Alto, CA.
Predicting future AI failures from historic examples. foresight. R V Yampolskiy, 21Yampolskiy, R.V. (2019). Predicting future AI failures from historic examples. foresight, 21, 138-152.
Human-in-the-loop Artificial Intelligence. F M Zanzotto, Journal of Artificial Intelligence Research. 64Zanzotto, F. M. (2019). Human-in-the-loop Artificial Intelligence. Journal of Artificial Intelligence Research, 64, 243-252.
Interaction paradigm in intelligent systems (in Chinese). X L Zhang, F Lyu, S W Cheng, 10.1360/N112017-00217Sci Sin Inform. 48Zhang, X. L., Lyu, F., Cheng, S. W. (2018). Interaction paradigm in intelligent systems (in Chinese). Sci Sin Inform, 48, 406-418, doi: 10.1360/N112017-00217.
Long-term impacts of fair machine learning. X Zhang, M M Khalili, M Liu, Ergonomics in Design. 28Zhang, X., Khalili, M. M., & Liu, M. (2020). Long-term impacts of fair machine learning. Ergonomics in Design, 28, 7-11.
Hybrid-augmented intelligence: collaboration and cognition. N Zheng, Z Liu, P Ren, 10.1631/FITEE.1700053Frontiers of Information Technology & Electronic Engineering. 18Zheng, N., Liu, Z., Ren, P. et al. (2017). Hybrid-augmented intelligence: collaboration and cognition. Frontiers of Information Technology & Electronic Engineering, 18, 153-179. https://doi.org/10.1631/FITEE.1700053.
2D Transparency Space-Bring Domain Users and Machine Learning Experts Together. J Zhou, F Chen, Book:Zhou, J., & Chen, FSpringer International PublishingHuman and Machine Learning: Visible, Explainable, Trustworthy and Transparent. Kindle EditionZhou, J. & Chen, F. (2018). 2D Transparency Space-Bring Domain Users and Machine Learning Experts Together. In Book:Zhou, J., & Chen, F. Human and Machine Learning: Visible, Explainable, Trustworthy and Transparent. Springer International Publishing. Kindle Edition.
| [] |
[
"Harmonic-aligned Frame Mask Based on Non-stationary Gabor Transform with Application to Content-dependent Speaker Comparison *",
"Harmonic-aligned Frame Mask Based on Non-stationary Gabor Transform with Application to Content-dependent Speaker Comparison *"
] | [
"Feng Huang [email protected] \nAcoustic Research Institute\nAustrian Academy of Sciences\n\n",
"Peter Balazs [email protected] \nAcoustic Research Institute\nAustrian Academy of Sciences\n\n"
] | [
"Acoustic Research Institute\nAustrian Academy of Sciences\n",
"Acoustic Research Institute\nAustrian Academy of Sciences\n"
] | [] | We propose harmonic-aligned frame mask for speech signals using non-stationary Gabor transform (NSGT). A frame mask operates on the transfer coefficients of a signal and consequently converts the signal into a counterpart signal. It depicts the difference between the two signals. In preceding studies, frame masks based on regular Gabor transform were applied to single-note instrumental sound analysis. This study extends the frame mask approach to speech signals. For voiced speech, the fundamental frequency is usually changing consecutively over time. We employ NSGT with pitch-dependent and therefore time-varying frequency resolution to attain harmonic alignment in the transform domain and hence yield harmonic-aligned frame masks for speech signals. We propose to apply the harmonic-aligned frame mask to content-dependent speaker comparison. Frame masks, computed from voiced signals of a same vowel but from different speakers, were utilized as similarity measures to compare and distinguish the speaker identities (SID). Results obtained with deep neural networks demonstrate that the proposed frame mask is valid in representing speaker characteristics and shows a potential for SID applications in limited data scenarios. | 10.21437/interspeech.2019-1327 | [
"https://arxiv.org/pdf/1904.10380v1.pdf"
] | 128,359,088 | 1904.10380 | 7cdcd85e2e9e44c5b1040079313d48886ce27553 |
Harmonic-aligned Frame Mask Based on Non-stationary Gabor Transform with Application to Content-dependent Speaker Comparison *
Feng Huang [email protected]
Acoustic Research Institute
Austrian Academy of Sciences
Peter Balazs [email protected]
Acoustic Research Institute
Austrian Academy of Sciences
Harmonic-aligned Frame Mask Based on Non-stationary Gabor Transform with Application to Content-dependent Speaker Comparison *
Index Terms: non-stationary Gabor transform, frame mask, harmonic alignment, pitch-dependent frequency resolution, speaker feature, speaker comparison
We propose harmonic-aligned frame mask for speech signals using non-stationary Gabor transform (NSGT). A frame mask operates on the transfer coefficients of a signal and consequently converts the signal into a counterpart signal. It depicts the difference between the two signals. In preceding studies, frame masks based on regular Gabor transform were applied to single-note instrumental sound analysis. This study extends the frame mask approach to speech signals. For voiced speech, the fundamental frequency is usually changing consecutively over time. We employ NSGT with pitch-dependent and therefore time-varying frequency resolution to attain harmonic alignment in the transform domain and hence yield harmonic-aligned frame masks for speech signals. We propose to apply the harmonic-aligned frame mask to content-dependent speaker comparison. Frame masks, computed from voiced signals of a same vowel but from different speakers, were utilized as similarity measures to compare and distinguish the speaker identities (SID). Results obtained with deep neural networks demonstrate that the proposed frame mask is valid in representing speaker characteristics and shows a potential for SID applications in limited data scenarios.
Introduction
Time-frequency (TF) analysis is the foundation of audio and speech signal processing. The short-time Fourier transform (STFT) is a widely used tool, which can be effectively implemented by the FFT [1]. The STFT offers a straightforward interpretation of a signal. It provides uniform time and frequency resolution with linearly-spaced TF bins. The corresponding theory was generalized in the framework of Gabor analysis and Gabor frames [2,3,4].
Signal synthesis is an important application area of time-frequency transforms. Signal modification, denoising, separation and so on can be achieved by manipulating the analysis coefficients to synthesize a desired signal. The theory of Gabor multipliers [5] or, in general terms, frame multipliers [6,7] provides a basis for the stability and invertibility of such operations. A frame multiplier is an operator that converts a signal into another by pointwise multiplication in the transform domain followed by resynthesis. The sequence of multiplication coefficients is called a frame mask (or symbol).
Such operators allow easy implementation of time-varying filters [8]. They have been used in perceptual sparsity [9], denoising [10] and signal synthesis [11]. Algorithms to estimate the frame mask between audio signals were investigated in [11,12], where it was demonstrated that the frame mask between two instrumental sounds (of a same note) is an effective measure to characterize timbre variations between the instruments. Such masks were used for timbre morphing and instrument categorization. In this paradigm, the two signals were of the same fundamental frequency and their harmonics were naturally aligned, which made the obtained masks meaningful under TF analysis/synthesis with uniform resolution.
This study extends the frame mask method to speech signals. One intrinsic property of (voiced) speech signals is that the fundamental frequency ($f_0$ or pitch) varies consecutively over time. Therefore, the harmonic structures are not well aligned when comparing two signals. We propose to employ the non-stationary Gabor transform (NSGT) [13] to tackle this issue. The NSGT provides flexible time-frequency resolution by incorporating dynamic time/frequency hop-sizes and dynamic analysis windows [13,14,15]. We develop an NSGT whose frequency resolution changes over time. We set the frequency hop-size in proportion to $f_0$ to achieve harmonic alignment (or partial alignment, cf. Section 4) in the transform domain. On this basis, we propose the harmonic-aligned frame mask. To demonstrate feasibility in speech, we evaluate the proposal in the context of vowel-dependent speaker comparison. Frame masks between voiced signals of the same vowel but pronounced by different speakers are proposed as similarity measures for speaker characteristics to distinguish speaker identities in a limited data scenario (cf. Section 5 for details).

This paper is organized as follows. In Section 2, we briefly review frame and Gabor theory. In Section 3, we describe the frame mask approach and its previous application in instrumental sound analysis. In Section 4, we develop the non-stationary Gabor transform with pitch-dependent frequency resolution and propose the harmonic-aligned frame mask. Section 5 presents the evaluation in vowel-dependent speaker identification. Finally, Section 6 concludes this study.
Preliminaries and Notation
Frame Theory
Denote by $\{g_\lambda : \lambda \in \Lambda\}$ a sequence of signal atoms in the Hilbert space $H$, where $\Lambda$ is an index set. This atom sequence is a frame [3] if and only if there exist constants $A$ and $B$, $0 < A \leq B < \infty$, such that
$$A \|f\|_2^2 \leq \sum_{\lambda} |c_\lambda|^2 \leq B \|f\|_2^2, \quad \forall f \in H, \quad (1)$$
where $c_\lambda = \langle f, g_\lambda \rangle$ are the analysis coefficients. $A$ and $B$ are called the lower and upper frame bounds, respectively. The frame operator $S$ is defined by $Sf = \sum_{\lambda} \langle f, g_\lambda \rangle g_\lambda$.
Given $\{h_\lambda = S^{-1} g_\lambda : \lambda \in \Lambda\}$, the canonical dual frame of $\{g_\lambda : \lambda \in \Lambda\}$, $f$ can be perfectly reconstructed from the analysis coefficients by
$$f = \sum_{\lambda} \langle f, g_\lambda \rangle h_\lambda. \quad (2)$$
The dual frame always exists [16], and for redundant cases there are infinitely many other duals allowing reconstruction.
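As a concrete illustration of (1) and (2) (a hypothetical example of ours, not taken from the paper), the following Python sketch checks the frame bounds and the dual-frame reconstruction for the three-vector Mercedes-Benz frame in $\mathbb{R}^2$, a tight frame with $A = B = 3/2$:

```python
import numpy as np

# Mercedes-Benz frame: three unit vectors at 120 degrees in R^2.
G = np.array([[0.0, 1.0],
              [-np.sqrt(3) / 2, -0.5],
              [np.sqrt(3) / 2, -0.5]])   # rows are the atoms g_lambda

S = G.T @ G                              # frame operator S = sum_lambda g g^T
H = G @ np.linalg.inv(S)                 # canonical dual atoms h = S^{-1} g

f = np.random.randn(2)
c = G @ f                                # analysis coefficients <f, g_lambda>
print(np.sum(c**2) / np.sum(f**2))       # = 3/2 = A = B (tight frame)
print(np.allclose(H.T @ c, f))           # reconstruction f = sum_lambda c h
```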
Discrete Gabor Transform
We take the Hilbert space $H$ to be $\mathbb{C}^L$. Given a non-zero prototype window $g = (g[0], g[1], \cdots, g[L-1])^T \in \mathbb{C}^L$, the translation operator $T_x$ and modulation operator $M_y$ are, respectively, defined as
$$T_x g[l] = g[l-x] \quad \text{and} \quad M_y g[l] = g[l]\, e^{\frac{2\pi i y l}{L}},$$
where $x, y \in \mathbb{Z}_L$ and the translation is performed modulo $L$. For selected constants $a, b \in \mathbb{Z}_L$, with some $N, M \in \mathbb{N}$ such that $Na = Mb = L$, we take $\Lambda$ to be a regular discrete lattice, i.e., $\lambda = (m, n)$, and obtain the Gabor system [2] $\{g_{m,n}\}_{m \in \mathbb{Z}_M, n \in \mathbb{Z}_N}$ as
$$g_{m,n}[l] = T_{na} M_{mb}\, g[l] = g[l-na]\, e^{\frac{2\pi i m b (l-na)}{L}}. \quad (3)$$
If $\{g_{m,n}\}_{m,n}$ satisfies (1) for all $f \in \mathbb{C}^L$, it is called a Gabor frame [17]. The discrete Gabor transform (DGT) of $f \in \mathbb{C}^L$ is a matrix $C = \{c_{m,n}\} \in \mathbb{C}^{M \times N}$ with $c_{m,n} = \langle f, g_{m,n} \rangle$. The associated frame operator $S: \mathbb{C}^L \to \mathbb{C}^L$ reads
$$Sf = \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} \langle f, g_{m,n} \rangle\, g_{m,n}. \quad (4)$$
The canonical dual frame $\{\tilde{g}_{m,n}\}_{m,n}$ of the Gabor frame $\{g_{m,n}\}_{m,n}$ is given by $\tilde{g}_{m,n} = T_{na} M_{mb}\, S^{-1} g$ [18], with which $f$ can be perfectly reconstructed by
$$f = \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} c_{m,n}\, \tilde{g}_{m,n}.$$
Note that the DGT coefficients are essentially sampling points of the STFT of $f$ with window $g$ at the time-frequency points $(na, mb)$, with $a$ and $b$ being the sampling steps (i.e., hop-sizes) in time and frequency [18]. In non-stationary settings, the hop-sizes are allowed to vary (cf. Section 4).
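To make the sampled-STFT view concrete, here is a minimal Python sketch of a painless-case DGT (our own illustration, not code from the paper or from LTFAT); it assumes the window support does not exceed $M$ and that $M$ divides $L$:

```python
import numpy as np

def dgt_painless(f, g, a, M):
    """Minimal painless-case DGT: C[m, n] = <f, g_{m,n}> with b = L/M.
    g holds window samples at centred offsets k = -(Lg//2), ..., Lg - Lg//2 - 1;
    requires Lg <= M and M dividing L."""
    L, Lg = len(f), len(g)
    N = L // a
    offs = np.arange(-(Lg // 2), Lg - Lg // 2)
    C = np.zeros((M, N), dtype=complex)
    for n in range(N):
        frame = f[(n * a + offs) % L] * np.conj(g)  # f[na+k] * conj(g[k])
        buf = np.zeros(M, dtype=complex)
        buf[offs % M] = frame                       # fold onto an M-point grid
        C[:, n] = np.fft.fft(buf)                   # sum_k ... e^{-2pi i mk/M}
    return C
```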
Frame mask for instrumental sound analysis
Frame Mask
Consider a pair of frames $\{g_\lambda : \lambda \in \Lambda\}$ and $\{h_\lambda : \lambda \in \Lambda\}$. A frame multiplier [19], denoted by $\mathbf{M}_{\sigma; g,h}$, is an operator that acts on a signal by pointwise multiplication in the transform domain. The symbol $\sigma = \{\sigma_\lambda, \lambda \in \Lambda\}$ is a sequence that denotes the multiplication coefficients. For a signal $f$,
$$\mathbf{M}_{\sigma; g,h}\, f = \sum_{\lambda} \sigma_\lambda \langle f, g_\lambda \rangle h_\lambda. \quad (5)$$
Here σ is called a frame mask. In the considered signal analysis/transform domain, σ can be viewed as a transfer function.
When Gabor frames $\{g_{m,n}\}_{m,n}$ and $\{h_{m,n}\}_{m,n}$ are considered, we set $\lambda = (m, n)$. In this case the frame multiplier in (5) is known as a Gabor multiplier. The corresponding frame mask $\sigma = \{\sigma_{m,n}\} \in \mathbb{C}^{M \times N}$ is also known as a Gabor mask.
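Operationally, a Gabor multiplier is an analysis-mask-synthesis chain. The sketch below (ours; it continues the `dgt_painless` snippet above with a matching painless-case synthesis routine, where `gd` would be the canonical dual window) makes this explicit:

```python
import numpy as np

def idgt_painless(C, gd, a, L):
    """Painless-case synthesis matching dgt_painless above; gd is a dual
    window given on the same centred offsets as g."""
    M, N = C.shape
    offs = np.arange(-(len(gd) // 2), len(gd) - len(gd) // 2)
    f = np.zeros(L, dtype=complex)
    for n in range(N):
        t = M * np.fft.ifft(C[:, n])          # sum_m c_{m,n} e^{2pi i mk/M}
        f[(n * a + offs) % L] += t[offs % M] * gd
    return f

def gabor_multiplier(f, sigma, g, gd, a, M):
    """Apply M_{sigma; g, gd} of Eq. (5): analysis, pointwise mask, synthesis."""
    return idgt_painless(sigma * dgt_painless(f, g, a, M), gd, a, len(f))
```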
For Instrument Timbre Analysis and Conversion
The application of frame masks to musical signals was investigated in [11,12]. Based on the DGT, the proposed signal model converts one sound into another by
$$f^B = \mathbf{M}_{\sigma^{\overrightarrow{AB}}; g, \tilde{g}}\, f^A = \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} \sigma^{\overrightarrow{AB}}_{m,n} \langle f^A, g_{m,n} \rangle\, \tilde{g}_{m,n}, \quad (6)$$
where $f^A, f^B \in \mathbb{R}^L$ are two audio signals and $\sigma^{\overrightarrow{AB}}$ is the unknown mask to be estimated. An obvious solution is to set $\sigma^{\overrightarrow{AB}}_{m,n} = c^B_{m,n} / c^A_{m,n}$, where $c^A_{m,n}$ and $c^B_{m,n}$ are the DGT coefficients of $f^A$ and $f^B$, respectively. However, this solution is unstable and unbounded, as the DGT coefficients in the denominator can be 0 or very small. To guarantee the existence of a stable solution, it was proposed to estimate the mask via
$$\min_{\sigma^{\overrightarrow{AB}}} \left\| f^B - \mathbf{M}_{\sigma^{\overrightarrow{AB}}; g, \tilde{g}}\, f^A \right\|^2 + \mu\, d(\sigma^{\overrightarrow{AB}}), \quad (7)$$
with a (convex) regularization term $d(\sigma^{\overrightarrow{AB}})$, whose influence is controlled by the parameter $\mu$ [12]. As the existence of a stable solution is assured, such an approach can in general be applied to an arbitrary pair of signals. However, it might be difficult to interpret the estimated masks (e.g., the mask between two pure-tone signals with different fundamental frequencies).
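The numerical effect of the regularization is easy to see. In the toy Python snippet below (our illustration; a simple Tikhonov penalty is used as one possible instance of $d(\cdot)$, not necessarily the penalty chosen in [12]), the naive ratio explodes on a near-zero coefficient while the regularized estimate stays bounded:

```python
import numpy as np

cA = np.array([1.0 + 0j, 0.5, 1e-9, 0.8])   # source coefficients (one tiny)
cB = np.array([0.9 + 0j, 0.6, 1e-3, 0.4])   # target coefficients

naive = cB / cA                              # blows up at the third entry
mu = 1e-6                                    # Tikhonov weight
stable = np.conj(cA) * cB / (np.abs(cA) ** 2 + mu)
print(np.abs(naive))                         # [0.9, 1.2, 1e6, 0.5]
print(np.abs(stable))                        # bounded everywhere
```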
Given that $f^A$ and $f^B$ are of the same note produced by different instruments, the frame mask between the two signals was found to be effective in characterizing the timbre difference between the two instruments [11,12]. Such masks were utilized as similarity measures for instrument classification and for timbre morphing and conversion. The rationality of these applications is rooted in two aspects:
1) Instrumental signals of a same note possess the same fundamental frequency. Harmonic structures of the signals are naturally aligned.
2) DGT performs TF analysis over a regular TF lattice, and consequently preserves the property of harmonic alignment in the transform domain.
Frame mask for speech signals using Non-stationary Gabor transform

Similar to the audio sounds of instrument notes, (voiced) speech signals are also harmonic signals.
Analogous to the above-mentioned applications, this study explores the application of frame masks to speech signals. In particular, we consider using voiced speech as source and target signals and estimating the frame mask between them. We are especially interested in the case where the source and the target are of the same content, e.g., the same vowel. For such a case, a valid frame mask could measure specific variations among the signals, such as speaker variations.

Nevertheless, attempting to use (7) for speech signals, we immediately face a fundamental problem. For speech signals, the fundamental frequency usually varies over time consecutively. Therefore, the harmonic structures of the source and target voices are mostly not aligned. To address this problem, we propose to employ the non-stationary Gabor transform, which allows flexible time-frequency resolution [13]. Within the framework of non-stationary Gabor analysis, we intend to achieve dynamic alignment of the signals' harmonic structures. In the following, we develop an NSGT with pitch-dependent frequency resolution to achieve harmonic alignment in the transform domain, and propose the harmonic-aligned frame mask for speech signals on that basis.
Non-stationary Gabor Transform with Pitch-dependent Frequency Resolution
We consider analyzing a voiced signal $f \in \mathbb{R}^L$ with a window $g$ that is symmetric around zero. As in the stationary case in Section 2.2, we use a constant time hop-size $a$, resulting in $N = L/a \in \mathbb{N}$ sampling points in time for the TF analysis. However, we set the frequency hop-size according to the fundamental frequency of the signal (see Remark 2 for a discussion of the pitch estimation issue).
Following the quasi-stationary assumption for speech signals, we assume that the fundamental frequency is approximately fixed within the interval of the analysis window. At time $n$, letting $f_0(na)$ denote the fundamental frequency in Hz, we set the corresponding frequency hop-size as
$$b_n^{f_0} = \left\lceil \frac{p\, f_0(na)}{q} \cdot \frac{L}{f_s} \right\rfloor, \quad (8)$$
where $p, q \in \mathbb{N}$ are a pair of parameters to be set, $\lceil \cdot \rfloor$ denotes rounding to the closest positive integer, and $f_s$ is the signal's sampling rate in Hz. With (8), $q$ frequency sampling points are deployed per $p f_0(na)$ Hz. The total number of frequency sampling points at $n$ is hence $M_n^{f_0} = L / b_n^{f_0} \in \mathbb{N}$. Consequently, we obtain the pitch-dependent non-stationary Gabor system (NSGS) $\{g_{m,n}\}_{m \in \mathbb{Z}_{M_n^{f_0}},\, n \in \mathbb{Z}_N}$ as
$$g_{m,n}[l] = T_{na} M_{m b_n^{f_0}}\, g[l] = g[l-na]\, e^{\frac{2\pi i\, m b_n^{f_0} (l-na)}{L}}. \quad (9)$$
$\{g_{m,n}\}_{m,n}$ is called a non-stationary Gabor frame (NSGF) if it fulfills (1) for $\mathbb{C}^L$. The sequence $\{c_{m,n}\}_{m,n} = \{\langle f, g_{m,n} \rangle\}_{m,n}$ are the non-stationary Gabor transform coefficients. In general, due to the dynamic frequency hop-size, these coefficients do not form a matrix. Eq. (8) features a time-varying and pitch-dependent frequency resolution. More importantly, it allows harmonic alignment in the NSGT coefficients with respect to the frequency index $m$: for example, with $p = 1$, for any $n$, $c_{q,n}, c_{2q,n}, c_{3q,n}, \cdots$ naturally correspond to the harmonic frequencies of the signal. The parameter $p$ allows performing partial alignment w.r.t. integer multiples of the $p$-th harmonic frequency.
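As a small illustration (ours; the function and variable names are hypothetical), the following Python sketch evaluates (8) along a pitch track and returns the per-frame hop-sizes and channel counts; in practice $L$ must be chosen (or the signal zero-padded, cf. Remark 1 below) so that each $M_n^{f_0}$ is an integer:

```python
import numpy as np

def pitch_dependent_hops(f0_track, fs, L, p=1, q=75):
    """Eq. (8): q frequency bins per p*f0(na) Hz at each analysis time n.
    f0_track holds f0(na) in Hz for n = 0, ..., N-1."""
    b = np.rint(p * np.asarray(f0_track) / q * L / fs).astype(int)
    b = np.maximum(b, 1)       # rounding to the closest *positive* integer
    M = L // b                 # number of frequency sampling points M_n
    return b, M

# e.g. a pitch glide from 120 Hz to 180 Hz at fs = 8 kHz, L = 2**16:
b, M = pitch_dependent_hops(np.linspace(120, 180, 50), fs=8000, L=2**16)
```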
Remark 1. To satisfy $M_n^{f_0} b_n^{f_0} = L$, $\forall n \in \mathbb{Z}_N$, zero-padding of $f$ may be needed for an appropriate $L$. If an extremely large $L$ is required, it is always practicable to divide the signal into segments of shorter duration using overlap-and-add windows, and to obtain the NSGT coefficients for each segment separately. A practical example of such a procedure can be found in [14].

Now we consider the canonical dual $\{\tilde{g}_{m,n}\}_{m,n}$. Denote by $\mathrm{supp}(g) \subseteq [c, d]$ the support of the window $g$, i.e., the interval where the window is nonzero. We choose $M_n^{f_0} \geq d - c$, $\forall n \in \mathbb{Z}_N$, which is referred to as the painless case [13]. In other words, we require the frequency sampling points to be dense enough. In this painless case, we have the following [13].
Proposition 1. If $\{g_{m,n}\}_{m,n}$ is a painless-case NSGF, then the frame operator $S$ (cf. (4)) is an $L \times L$ diagonal matrix with diagonal elements
$$s_{l,l} = \sum_{n=0}^{N-1} M_n^{f_0}\, \left| g[l-na] \right|^2 > 0, \quad \forall l \in \mathbb{Z}_L. \quad (10)$$
And the canonical dual frame $\{\tilde{g}_{m,n}\}_{m,n}$ is given by
$$\tilde{g}_{m,n}[l] = \frac{g_{m,n}[l]}{s_{l,l}}. \quad (11)$$
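The diagonal structure in (10)-(11) makes the canonical dual cheap to compute. The Python sketch below (our illustration, following the painless-case formula; `M_per_frame` can come from the hop-size sketch above) accumulates the diagonal of $S$, from which each dual atom is obtained by pointwise division:

```python
import numpy as np

def painless_diagonal(g, a, M_per_frame, L):
    """Diagonal s[l] of the frame operator, Eq. (10), for a window g given
    on centred offsets k = -(Lg//2), ..., Lg - Lg//2 - 1 (painless case)."""
    s = np.zeros(L)
    Lg = len(g)
    offs = np.arange(-(Lg // 2), Lg - Lg // 2)
    for n, Mn in enumerate(M_per_frame):
        s[(n * a + offs) % L] += Mn * np.abs(g) ** 2
    return s  # dual atoms, Eq. (11): g_{m,n}[l] / s[l]
```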
Harmonic-aligned Frame Mask
In this section, we present a general form of the frame mask based on the above pitch-dependent NSGT. For two voiced signals $f^A, f^B \in \mathbb{R}^L$, denote their fundamental frequencies by $f_{A;0}$ and $f_{B;0}$, respectively. Using (9) with the same window $g$ and the same time hop-size $a$ for both signals, we construct two Gabor systems $\{g^A_{m,n}\}_{m,n}$ and $\{g^B_{m,n}\}_{m,n}$ according to $f_{A;0}$ and $f_{B;0}$, respectively. To simplify the presentation of the concept without losing the frame property (1), we can extend the two systems as $\{g^A_{m,n}\}_{m \in \mathbb{Z}_M, n \in \mathbb{Z}_N}$ and $\{g^B_{m,n}\}_{m \in \mathbb{Z}_M, n \in \mathbb{Z}_N}$, e.g., with periodic extension of the modulation operator w.r.t. the index $m$. Under such circumstances, we can denote the NSGT coefficients in matrix form as $C^A = \{c^A_{m,n}\}_{m,n} \in \mathbb{C}^{M \times N}$ and $C^B = \{c^B_{m,n}\}_{m,n} \in \mathbb{C}^{M \times N}$. The harmonic-aligned frame mask (HAFM) $\sigma^{\overrightarrow{AB}} \in \mathbb{C}^{M \times N}$ between the two voiced signals therefore acts as
$$f^B = \mathbf{M}_{\sigma^{\overrightarrow{AB}}; g^A, \tilde{g}^B}\, f^A = \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} \sigma^{\overrightarrow{AB}}_{m,n} \langle f^A, g^A_{m,n} \rangle\, \tilde{g}^B_{m,n}. \quad (12)$$
To estimate the frame mask, existing methods [11,12] for the problem in (6) can be directly applied. For both Gabor systems $\{g^A\}$ and $\{g^B\}$, the parameters $p$ and $q$ in (8) need to be appropriately set. We set $q$ to the same value for both systems. However, depending on the specifics of the source and target signals (as well as the application purpose), the parameter $p$ may be set to different values for the two systems. Example 1: If $f_{A;0}$ and $f_{B;0}$ are close (enough), we consider $p = 1$ for both Gabor systems. This leads to a one-to-one alignment of all harmonics. Example 2: If $f_{A;0}$ and $f_{B;0}$ are significantly different in value, we may consider an anchor frequency $F$ and set $p_A = \lceil F/f_{A;0} \rfloor$, $p_B = \lceil F/f_{B;0} \rfloor$. This results in partial alignment of the harmonics, i.e., only the harmonics around $F$ and its multiples are aligned.

Remark 2. 1) The proposed approach practically depends on a reliable method to estimate the fundamental frequencies. A thorough discussion of this topic is beyond the scope of this paper.
In the evaluation, we applied the methods in [20,21]. 2) One may get the false impression that the harmonic alignment makes the frame mask pitch-independent. On the contrary, the resulting frame mask essentially depends on the fundamental frequencies. It equivalently describes the variations between two spectra which are warped in a pitch-dependent and linear way. It contains information related to the spectral envelopes and also depends strongly on the fundamental frequencies. It is our interest to utilize the proposed mask as a feature measure for classification tasks.
Evaluation in Content-dependent Speaker Comparison
We now evaluate harmonic-aligned frame masks for speaker identity comparison in a content-dependent context. In particular, the source and target signals are of the same vowel but pronounced by different speakers. In this setting, we estimate the frame masks between an input speaker and a fixed reference speaker. For different speakers, we compare them to the same reference speaker, and use the estimated masks as speaker features to measure and distinguish the speaker identities. It can be considered a task of closed-set speaker identification with content-dependent and limited-data constraints (see the experimental settings in 5.1).
C B − σ − → AB C A 2 + µd(σ − → AB ),(13)
where denotes entrywise product. In this evaluation, we use the following regularization term
d(σ − → AB ) = σ − → AB − σ Ref 2 2 .(14)
With (14), the objective function in (13) is a quadratic form of σ − → AB , which leads to the following explicit solution
σ − → AB = C A C B + µσ Ref |C A | 2 + µ .(15)
Here denotes complex conjugate.
Experimental Settings
For experimental evaluation, we extracted two sets of English vowels, /iy/ and /u/ 1 , from the TIMIT database [22]. The vowels were from 390 speakers. For each speaker, there were 3 samples of /iy/ as well as 3 samples of /u/ included. The signals were down-sampled at 8000 Hz. Fundamental frequency was obtained with the method proposed in [20,21] and assumed known throughout the evaluation.
We chose from the 390 speakers a reference speaker whose fundamental frequency was about the average of all speakers'. For the NSGT, we used Hann window with support interval of 20ms length. The time hop-size a was set to 4ms. For the pitch-dependent frequency hop-size, i.e., (8),
we set q = 75 according to pilot tests. For p, we used an average value of the first formant frequency (F 1) as anchor frequency and the averagef 0 of a speaker as reference and fix p = F 1 f 0 for the speaker. We used F 1 = 280 Hz and F 1 = 310 Hz for /iy/ and /u/, respectively [23]. For (15), we empirically set σ Ref = 1 (all-ones) and µ = 10 −7 . Part of the routines in the LTFAT toolbox [1,24] were used to implement the NSGT. For each vowel type, the frame masks for an input speaker were computed from 3 × 3 pairs of signals 2 . To obtain a variety of masks, for a signal pair we computed the frame masks as illustrated in Fig. 1. Hence, C A and C B in (15) were one-columnwise for the feature extraction. Fig. 2 shows performance of the harmonic-aligned frame mask (HAFM) in the vowel-dependent speaker classification tasks. For comparison, the mel-frequency cepstral coefficients (MFCC) [28] and the NSGT coefficients (C-NSGT) were also evaluated in the same way. We also tested the condition that f 0 was included as an extra feature dimension. It can be seen from the results that C-NSGT mostly performed the worst. On the other hand, HAFM which is established based on C-NSGT outperforms the others with noticeably higher accuracy. This implies that with the comparison way of feature extraction, the HAFM feature is more effective to capture and represent the speaker variations. The accuracy of HAFM is 83% for the "DNN/iy/+DNN/u/" case (i.e., DNNs of both vowels were combined for decision). It can also be noticed that to include f 0 as extra feature seems beneficial for MFCC. However, such benefit is generally not observed for both C-NGST and HAFM, as f 0 related information has already been well incorporated in these features.
Results
In the evaluation, it was also observed that the frame mask based DNNs performed extremely well in distinguishing the reference speaker from the rest of the speakers. As the frame mask features were obtained by exhaustive comparison to the reference speaker, the resulted DNN were inherently good verification models for the reference speaker. One of our future directions is to combine the verification models of all enrolled speakers to construct a more comprehensive system.
Conclusions
The frame mask approach has been extended from instrumental sound analysis to voiced speech analysis. We have addressed the related issues by developing a non-stationary Gabor transform (NSGT) with pitch-dependent and time-varying frequency resolution. The transform allows effective harmonic alignment in the transform domain. On this basis, a harmonic-aligned frame mask has been proposed for voiced speech signals. We have applied the proposed frame mask as a similarity measure to compare and distinguish speaker identities, and have evaluated the proposal in a vowel-dependent and limited-data setting. The results confirm that the proposed frame mask is feasible for speech applications. It is effective in representing speaker characteristics in the content-dependent context and shows potential for speaker-identity-related applications, especially in limited data scenarios.
Figure 1: Extraction of the frame mask feature between two signals (only the time-shifted windows are shown).
Figure 2: Performance of DNN-based speaker classification. (The total number of DNN training samples was roughly $1.5 \times 10^5$ for HAFM, and $2.5 \times 10^4$ for MFCC and C-NSGT.)
¹ We use these phonetic symbols as in the database's documents.
² As there were also 3 samples from the reference speaker.
[1] P. Søndergaard, B. Torrésani, and P. Balazs, "The linear time frequency analysis toolbox," International Journal of Wavelets, Multiresolution and Information Processing, vol. 10, no. 4, p. 1250032, 2012. [Online]. Available: http://ltfat.github.io/
[2] D. Gabor, "Theory of communication," J. IEE - Part I: General, vol. 94, no. 73, pp. 429-457, January 1947.
[3] S. Mallat, A Wavelet Tour of Signal Processing - The Sparse Way, 3rd ed. Academic Press, 2009.
[4] K. Gröchenig, Foundations of Time-Frequency Analysis. Boston, MA, USA, 2001.
[5] H. G. Feichtinger and K. Nowak, A first survey of Gabor multipliers, 2003, ch. 5, pp. 99-128.
[6] D. T. Stoeva and P. Balazs, "Invertibility of multipliers," Applied and Computational Harmonic Analysis, vol. 33, no. 2, pp. 292-299, 2012.
[7] P. Balazs and D. T. Stoeva, "Representation of the inverse of a multiplier," Journal of Mathematical Analysis and Applications, vol. 422, pp. 981-994, 2015.
[8] F. Hlawatsch, G. Matz, H. Kirchauer, and W. Kozek, "Time-frequency formulation, design, and implementation of time-varying optimal filters for signal estimation," IEEE Transactions on Signal Processing, vol. 48, no. 5, pp. 1417-1432, May 2000.
[9] P. Balazs, B. Laback, G. Eckel, and W. A. Deutsch, "Time-frequency sparsity by removing perceptually irrelevant components using a simple model of simultaneous masking," IEEE Transactions on Audio, Speech and Language Processing, vol. 18, no. 1, pp. 34-49, 2010.
[10] P. Majdak, P. Balazs, W. Kreuzer, and M. Dörfler, "A time-frequency method for increasing the signal-to-noise ratio in system identification with exponential sweeps," in Proc. 36th International Conference on Acoustics, Speech and Signal Processing, ICASSP 2011, Prag, 2011.
[11] P. Depalle, R. Kronland-Martinet, and B. Torrésani, "Time-frequency multipliers for sound synthesis," in Proc. SPIE, Wavelets XII, 2007, pp. 221-224.
[12] A. Olivero, B. Torresani, and R. Kronland-Martinet, "A class of algorithms for time-frequency multiplier estimation," IEEE Transactions on Audio, Speech, and Language Processing, vol. 21, no. 8, pp. 1550-1559, Aug 2013.
[13] P. Balazs, M. Dörfler, F. Jaillet, N. Holighaus, and G. Velasco, "Theory, implementation and applications of nonstationary Gabor frames," Journal of Computational and Applied Mathematics, vol. 236, no. 6, pp. 1481-1496, 2011.
[14] N. Holighaus, M. Dörfler, G. A. Velasco, and T. Grill, "A framework for invertible, real-time constant-Q transforms," IEEE Transactions on Audio, Speech, and Language Processing, vol. 21, no. 4, pp. 775-785, April 2013.
[15] E. S. Ottosen and M. Dörfler, "A phase vocoder based on nonstationary Gabor frames," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 25, no. 11, pp. 2199-2208, Nov 2017.
[16] P. Casazza, "The art of frame theory," Taiwanese J. Math., vol. 4, no. 2, pp. 129-202, 2000.
[17] O. Christensen, An Introduction to Frames and Riesz Bases. Birkhäuser Boston, 2003.
[18] H. G. Feichtinger and T. Strohmer, Gabor Analysis and Algorithms - Theory and Applications. Birkhäuser Boston, 1998.
[19] P. Balazs, "Basic definition and properties of Bessel multipliers," Journal of Mathematical Analysis and Applications, vol. 325, no. 1, pp. 571-585, 2007.
[20] F. Huang and T. Lee, "Pitch estimation in noisy speech using accumulated peak spectrum and sparse estimation technique," IEEE Trans. Audio, Speech and Lang. Proc., vol. 21, no. 1, pp. 99-109, Jan. 2013.
[21] F. Huang and P. Balazs, "Dictionary learning for pitch estimation in speech signals," in Proc. 2017 IEEE 27th International Workshop on Machine Learning for Signal Processing (MLSP), Sep. 2017, pp. 1-6.
[22] "DARPA TIMIT acoustic phonetic continuous speech corpus CDROM," 1993. [Online]. Available: http://www.ldc.upenn.edu/Catalog/LDC93S1.html
[23] P. Ladefoged and K. Johnson, A course in phonetics, 6th ed. Boston, MA: Wadsworth, Cengage Learning, 2011.
[24] Z. Průša, P. L. Søndergaard, N. Holighaus, C. Wiesmeyr, and P. Balazs, "The Large Time-Frequency Analysis Toolbox 2.0," in Sound, Music, and Motion, ser. Lecture Notes in Computer Science, M. Aramaki, O. Derrien, R. Kronland-Martinet, and S. Ystad, Eds. Springer International Publishing, 2014, pp. 419-442.
[25] G. E. Hinton and R. R. Salakhutdinov, "Reducing the dimensionality of data with neural networks," Science, vol. 313, no. 5786, pp. 504-507, Jul. 2006.
[26] G. Hinton, "A Practical Guide to Training Restricted Boltzmann Machines," Tech. Rep., 2010. [Online]. Available: http://www.cs.toronto.edu/~hinton/absps/guideTR.pdf
[27] Y. Xu, J. Du, L.-R. Dai, and C.-H. Lee, "An experimental study on speech enhancement based on deep neural networks," IEEE Signal Processing Letters, vol. 21, no. 1, pp. 65-68, Jan 2014.
[28] Z. Fang, Z. Guoliang, and S. Zhanjiang, "Comparison of different implementations of MFCC," J. Comput. Sci. Technol., vol. 16, no. 6, pp. 582-589, Nov. 2001.
| [] |
[
"Towards Gauge theory for a class of commutative and non-associative fuzzy spaces",
"Towards Gauge theory for a class of commutative and non-associative fuzzy spaces"
] | [
"Sanjaye Ramgoolam [email protected] \nQueen Mary College London\nE1 4NS\n"
] | [
"Queen Mary College London\nE1 4NS"
] | [] | We discuss gauge theories for commutative but non-associative algebras related to the SO(2k + 1) covariant finite dimensional fuzzy 2k-sphere algebras. A consequence of non-associativity is that gauge fields and gauge parameters have to be generalized to be functions of coordinates as well as derivatives. The usual gauge fields depending on coordinates only are recovered after a partial gauge fixing. The deformation parameter for these commutative but non-associative algebras is a scalar of the rotation group. This suggests interesting string-inspired algebraic deformations of spacetime which preserve Lorentz invariance. | 10.1088/1126-6708/2004/03/034 | [
"https://arxiv.org/pdf/hep-th/0310153v3.pdf"
] | 14,861,362 | hep-th/0310153 | a65c1c01a40073e83a7b1feb059cea5d85bfebdc |
Towards Gauge theory for a class of commutative and non-associative fuzzy spaces
22 Jan 2004 October 2003
Sanjaye Ramgoolam [email protected]
Queen Mary College London
E1 4NS
Towards Gauge theory for a class of commutative and non-associative fuzzy spaces
22 Jan 2004 October 2003
We discuss gauge theories for commutative but non-associative algebras related to the SO(2k + 1) covariant finite dimensional fuzzy 2k-sphere algebras. A consequence of non-associativity is that gauge fields and gauge parameters have to be generalized to be functions of coordinates as well as derivatives. The usual gauge fields depending on coordinates only are recovered after a partial gauge fixing. The deformation parameter for these commutative but non-associative algebras is a scalar of the rotation group. This suggests interesting string-inspired algebraic deformations of spacetime which preserve Lorentz invariance.
1. Introduction
The fuzzy four-sphere [1] has several applications in D-brane physics [2] [3]. While the fuzzy four-sphere has some similarities to the fuzzy two-sphere [4], it is also different in important ways. The Matrix Algebra of the fuzzy 2k-sphere at finite n is the space of transformations, End(R n ), of an irreducible representation, R n , of Spin(2k + 1). It contains a truncated spectrum of symmetric traceless representations of SO(2k + 1) which form a truncated spectrum of spherical harmonics, but contains additional representations.
The analysis of the representations in the Matrix Algebras for fuzzy spheres in diverse dimensions was given in [5]. For even spheres S 2k these representations span, at large n, the space of functions on a higher dimensional coset SO(2k + 1)/U (k), which are bundles over the spheres S 2k . The extra degrees of freedom in the Matrix algebra can be interpreted equivalently in terms of non-abelian fields on the spheres [6]. In the case of the fuzzy four sphere the higher dimensional geometry is SO(5)/U (2) which is one realization of CP 3 as a coset.
Any discussion of field theory on the fuzzy sphere S 2k requires a product for the fields. The configuration space of a scalar field on the fuzzy four-sphere, is the subspace of End(R n ) which transforms in the traceless symmetric representations. This vector subspace of End(R n ) admits an obvious product. It is obtained by taking the Matrix product followed by a projection back to the space of Matrices transforming as symmetric traceless representations. The vector space of truncated spherical harmonics equipped with this product is denoted by A n (S 2k ). The product is denoted by m 2 , which can be viewed as a map from A n (S 2k ) ⊗ A n (S 2k ) to A n (S 2k ). It is important to distinguish End(R n ) and A n (S 2k ), which are the full Matrix algebra and the algebra obtained after projection, respectively. The product on A n is non-associative and commutative but the non-associativity vanishes at large n [5] [6]. The Matrix algebra End(R n ) contains Matrices X µν transforming in the adjoint of SO(2k + 1). It also contains Matrices X µ transforming in the vector of SO(2k + 1). The vector of SO(2k + 1) and the adjoint combine into the adjoint of SO(2k + 2). The SO(2k + 2) acts by commutators on the whole Matrix algebra.
The projection used in defining the non-associative product on A n (S 2k ) commutes with SO(2k+1) but does not commute with SO(2k+2). Hence the generators of SO(2k+1) provide derivations on the A n (S 2k ) [6]. However derivations transforming in the vector of SO(2k + 1) would be very useful in developing gauge theory for the A n (S 2k ), where we would write a covariant derivative of the form δ α − iA α and use that as a building block for the gauge theory, a technique that has found many applications in Matrix Theories [7] [8]( see for example [9,10,11,12,13,14] ).
The classical sphere can be described in terms of an algebra generated by the coordinates $z_\alpha$ of the embedding $R^{2k+1}$, as a projection following from the constraint $\sum_{\alpha=1}^{2k+1} z_\alpha z_\alpha = 1$. When we drop the constraint of a constant radius, the algebra of functions on $R^{2k+1}$ generated by the $z_\alpha$ admits derivations $\partial_\alpha$ which obey $\partial_\alpha z_\beta = \delta_{\alpha\beta}$. These translations can be projected to the tangent space of the sphere to give derivations on the sphere. The latter $P_\alpha$ can be written as $P_\alpha = \delta_\alpha - Q_\alpha$, where $Q_\alpha$ is a derivative transverse to the sphere, and can be characterized by its action on the Cartesian coordinates.
The fuzzy sphere algebra contains analogous operators $Z_\alpha$ which obey a constraint $\sum_{\alpha=1}^{2k+1} Z_\alpha Z_\alpha = \frac{n+2k}{n}$ and form a finite dimensional algebra. It is natural to consider at finite n, a finite dimensional algebra generated by the $Z_\alpha$ obtained by dropping this quadratic constraint, but keeping the structure constants of the algebra. This can be viewed as a deformation of the algebra of polynomial functions of $Z_\alpha$. We call this finite dimensional algebra A n (R 2k+1 ). Imposing the constraint $\sum_\alpha Z_\alpha Z_\alpha = \frac{n+2k}{n}$ on this deformed polynomial algebra gives the algebra A n (S 2k ). Again one can consider 'derivatives' δ α defined such that their action on the algebra A n (R 2k+1 ) at large n is just the action of the generators of the translation group. In general they will not satisfy the Leibniz rule at finite n.
An approach to gauge theory on the algebra A n (R 2k+1 ) is to study the deformation of Leibniz Rule that is obeyed by δ α . This will allow us to obtain the gauge transformation required of A α such that the covariant derivative D α = δ α − iA α is indeed covariant. We can expect the non-associativity to lead to extra terms in the transformation of A α . To get to gauge theory on A n (S 2k+1 ), we would define the projection operators P α , Q α and study their deformed Leibniz rule, and then obtain the gauge transformation rule for A α by requiring that P α − iA α is covariant. The strategy of doing non-commutative geometry on the deformed R 3 as a way of getting at the non-commutative geometry of the fuzzy S 2 is used in [15,16].
The structure constants of the algebra A n (S D ) and A n (R D+1 ) show an interesting simplification at large D. We will study simpler algebras B n (R 2k+1 ) and B n (S 2k ), which are obtained by choosing 2k + 1 generators among the D + 1 of A n (R D+1 ). By keeping k fixed and letting D be large we get a simple algebra, whose structure constants can be easily written down explicitly. This algebra is still commutative/non-associative, and allows us to explore the issues of doing gauge theory in this set-up. We explain the relation between B n (S 2k ) and A n (S 2k ) in section 2.
In section 3, we explore 'derivatives' on B n (R 2k+1 ) which obey δ α Z β = δ αβ . To completely define these operators we specify their action on a complete basis of the deformed polynomial algebra. The action we define reduces in the large n limit to the ordinary action of infinitesimal translations. Having defined the action of the derivatives on the space of elements in the algebra, we need to explore the interplay of the derivatives with the product. The operation δ α obeys the Leibniz rule at large n but fails to do so at finite
n. The precise deformation of the Leibniz rule is described in section 3.
In section 4 we show that by using a modified product m * 2 ( which is also commutative and non-associative ) the derivatives δ α do actually obey the Leibniz rule. The modified product is related to the projected Matrix product m 2 by a 'twisting'. While our use of the term 'twist' is partly colloquial, there is some similarity to the 'Drinfeld twist' that appears in quantum groups [17] [18]. Applications of the Drinfeld twist in recent literature related to fuzzy spheres include [19][20] [21].
In section 5, using the δ α and the product m * 2 we discuss an abelian gauge theory for the finite deformed polynomial algebra B * n (R 2k+1 ). The transformation law of A α , which guarantees that D α is covariant, now has additional terms related to the associator.
A detailed description of the associator becomes necessary, some of which is given in Appendix 2. Appendix 1 is a useful prelude to the discussion of the associator. It gives the relation between the product m * 2 and the product m c 2 which is essentially the classical product on polynomials. The discussion of the gauge invariance shows a key consequence of the non-associativity, that gauge fields can pick up derivative dependences under a gauge transformation by a gauge parameter which depends only on coordinates. Hence we should enlarge our notion of gauge field to include dependences on derivatives. This naturally suggests that the gauge parameters should also be generalized to allow dependence on derivatives. We can then obtain ordinary coordinate dependent gauge fields and their gauge transformations by a partial gauge fixing. This discussion is somewhat formal since a complete discussion of gauge theory on a deformation of R 2k+1 should include a careful discussion of the integral. We will postpone discussion of the integral for the deformed R 2k+1 to the future. However the structure of gauge fields, gauge parameters, and gauge fixing in the non-associative context uncovered here carries over to the case of the gauge theory on B * n (S 2k ) which we discuss in the next section. One new ingredient needed here is the projection of the translations of R 2k+1 to the tangent space of the sphere. This requires introduction of new derivatives Q α such that the projectors to the tangent space are P α = δ α − Q α . The relevant properties of Q α are described and the construction of abelian gauge theory on B * n (S 2k ) is done in section 6. The discussion of the integral is much simpler in this case. Details on the deformed Leibniz rule for Q α are given in Appendix 3. The final appendix, Appendix 4, which is not used in the paper but is an interesting technical extension of Appendix 1, shows how to write the classical product m c 2 in terms of m 2 , the product in A n (R 2k+1 ).
We end with conclusions and a discussion of avenues for further research.
2. The Algebra B n (R 2k+1 )
We will define a deformed polynomial algebra B n (R 2k+1 ) starting from the fuzzy sphere algebras A n (S D ). Consider operators in End(R n ), where R n is the irrep of Spin(D + 1) whose highest weight is n times that of the fundamental spinor.
$$Z_{\mu_1\mu_2\cdots\mu_s} = \frac{1}{n^s} \sum_{r_1 \neq r_2 \neq \cdots \neq r_s} \rho_{r_1}(\Gamma_{\mu_1})\, \rho_{r_2}(\Gamma_{\mu_2}) \cdots \rho_{r_s}(\Gamma_{\mu_s}) \qquad (2.1)$$
Each r i index can run from 1 to n. The µ indices can take values from 1 to D + 1. It is important to note that the maximum value that s can take at finite n is n.
The operators of the form (2.1) contain the symmetric traceless representation corresponding to Young Diagrams of type (s, 0, · · ·) where the integers denote the first, second etc. row lengths. By contracting the indices, the operators of fixed s also contain lower representations, e.g (s − 2, 0). For example
$$\sum_\mu Z_{\mu\mu} = \frac{n(n-1)}{n^2}, \qquad \sum_\mu Z_{\mu\mu\alpha} = \frac{(n-1)(n-2)}{n^2}\, Z_\alpha \qquad (2.2)$$
We now define A n (R D+1 ): we keep the product structure from A n (S D ) but we drop relations between Z's with contracted indices and Z's with fewer indices, such as (2.2). Note that A n (R D+1 ) is still finite dimensional because s ≤ n, but it is a larger algebra than A n (S D ). We can obtain A n (S D ) from A n (R D+1 ) by imposing the contraction relations.
The operators of the form given in (2.1) can be obtained by multiplying Z µ 1 using the product in A n (S D ). For example
$$Z_{\mu_1} Z_{\mu_2} = Z_{\mu_1\mu_2} + \frac{1}{n}\,\delta_{\mu_1\mu_2} \qquad (2.3)$$
More generally we have
$$Z_{\mu_1\mu_2\cdots\mu_s} = Z_{\mu_1} Z_{\mu_2} \cdots Z_{\mu_s} + O(1/n) \qquad (2.4)$$
To make clear which product we are using we write
$$m_2(Z_{\mu_1} \otimes Z_{\mu_2}) = Z_{\mu_1\mu_2} + \frac{1}{n}\,\delta_{\mu_1\mu_2} \qquad (2.5)$$
The m 2 denotes the product obtained by taking the Matrix product in End(R n ) and
projecting out the representations which do not transform in the symmetric traceless representations. This kind of product can be used for both A n (R D+1 ) and A n (S D ). Since the higher Z's are being generated from the Z α we can view A n (R D+1 ) as a deformation of the algebra of polynomials in the D + 1 variables Z α . As discussed previously in the context of A n (S D ) [5,6] this product is commutative and non-associative but becomes commutative and associative in the large n limit.
Now we look at a deformed R 2k+1 subspace of the deformed R D+1 space. We have generators Z µ 1 µ 2 ···µ s where the µ indices take values from 1 to 2k + 1. This operator is symmetric under any permutation of the µ's. The largest representation of SO(2k + 1)
contained in the set of operators for fixed s is the one associated with the Young Diagram of row lengths $(r_1, r_2, \cdots) = (s, 0, \cdots)$, i.e. the symmetric traceless representation. We keep k fixed and let D be very large. We will get very simple algebras which we denote as B n (R 2k+1 ) and B n (S 2k ). The relation between B n (R 2k+1 ) and B n (S 2k ) is similar to that between A n (R 2k+1 ) and A n (S 2k ).
Let us look at some simple examples of the product.
$$\begin{aligned}
X_{\mu_1} X_{\mu_2} &= \sum_{r_1=1}^{n} \rho_{r_1}(\Gamma_{\mu_1}) \sum_{r_2=1}^{n} \rho_{r_2}(\Gamma_{\mu_2}) \\
&= \sum_{r_1 \neq r_2} \rho_{r_1}(\Gamma_{\mu_1})\, \rho_{r_2}(\Gamma_{\mu_2}) + \sum_{r_1 = r_2} \rho_{r_1}(\Gamma_{\mu_1} \Gamma_{\mu_2}) \\
&= \sum_{r_1 \neq r_2} \rho_{r_1}(\Gamma_{\mu_1})\, \rho_{r_2}(\Gamma_{\mu_2}) + \sum_{r_1 = r_2} \rho_{r_1}(\delta_{\mu_1\mu_2}) \\
&= X_{\mu_1\mu_2} + n\,\delta_{\mu_1\mu_2} \qquad (2.6)
\end{aligned}$$
In the first line we have written the product of two X operators which are both Γ matrices acting on each of n factors of a symmetrized tensor product of spinors [2]. In the second line we have separated the double sum into terms where r 1 = r 2 and where they are different. In other words we are looking separately at terms where the two Γ's act on the same tensor factor and when they act on different tensor factors. Where r 1 = r 2 the expression is symmetric under exchange of µ 1 and µ 2 . When r 1 = r 2 , there is a symmetric part and an antisymmetric part. The antisymmetric part is projected out when we want to define the product which closes on the fuzzy spherical harmonics [5]. This is why the third line only keeps δ µ 1 µ 2 . Expressing the product in terms of the normalized Z µ we have
$$Z_{\mu_1} Z_{\mu_2} = Z_{\mu_1\mu_2} + \frac{1}{n}\,\delta_{\mu_1\mu_2} \qquad (2.7)$$
By similar means we compute
$$Z_{\mu_1\mu_2} Z_{\mu_3} = Z_{\mu_1\mu_2\mu_3} + \frac{(n-1)}{n^2}\left(\delta_{\mu_1\mu_3} Z_{\mu_2} + \delta_{\mu_2\mu_3} Z_{\mu_1}\right) \qquad (2.8)$$
The LHS contains sums of the form
$$\sum_{r_1 \neq r_2}\, \sum_{r_3} = \sum_{r_1 \neq r_2 \neq r_3} + \sum_{r_3 = r_1 \neq r_2} + \sum_{r_1 \neq r_2 = r_3} \qquad (2.9)$$
The three types of sums in (2.9) lead, respectively, to the first, second and third terms on the RHS of (2.8). The factor of n − 1 in the second term, for example, comes from the fact that the r 1 = r 2 sum runs from 1 to n avoiding the single value r 3 . The 1/n 2 comes from normalizations. The relations (2.7) and (2.8) hold both in B n (R 2k+1 ) and A n (R 2k+1 ).
Using (2.7) and (2.8) it is easy to see the non-associativity. Indeed
$$\begin{aligned}
(Z_{\mu_1} Z_{\mu_2})\, Z_{\mu_3} &= Z_{\mu_1\mu_2\mu_3} + \frac{(n-1)}{n^2}\left(\delta_{\mu_1\mu_3} Z_{\mu_2} + \delta_{\mu_2\mu_3} Z_{\mu_1}\right) + \frac{1}{n}\,\delta_{\mu_1\mu_2} Z_{\mu_3} \\
Z_{\mu_1}\, (Z_{\mu_2} Z_{\mu_3}) &= Z_{\mu_1\mu_2\mu_3} + \frac{(n-1)}{n^2}\left(\delta_{\mu_1\mu_3} Z_{\mu_2} + \delta_{\mu_1\mu_2} Z_{\mu_3}\right) + \frac{1}{n}\,\delta_{\mu_2\mu_3} Z_{\mu_1} \\
(Z_{\mu_1} Z_{\mu_2})\, Z_{\mu_3} - Z_{\mu_1}\, (Z_{\mu_2} Z_{\mu_3}) &= \frac{1}{n^2}\left(\delta_{\mu_1\mu_2} Z_{\mu_3} - \delta_{\mu_2\mu_3} Z_{\mu_1}\right) \qquad (2.10)
\end{aligned}$$
We now explain the simplification that arises in B n (R 2k+1 ) as opposed to A n (R 2k+1 ).
Consider a product
$$Z_{\mu_1\mu_2}\, Z_{\mu_3\mu_4} = \frac{1}{n^4} \sum_{r_1 \neq r_2} \rho_{r_1}(\Gamma_{\mu_1})\,\rho_{r_2}(\Gamma_{\mu_2})\; .\; \sum_{r_3 \neq r_4} \rho_{r_3}(\Gamma_{\mu_3})\,\rho_{r_4}(\Gamma_{\mu_4}) \qquad (2.11)$$
In A n (R 2k+1 ) or A n (S 2k ), we will get terms of the form $Z_{\mu_1\mu_2\mu_3\mu_4}$ where $r_1, r_2, r_3, r_4$ are all distinct. In addition there will be terms of the form $\delta_{\mu_1\mu_3} Z_{\mu_2\mu_4}$ from terms where we have $r_1 = r_3$ with $r_2, r_4$ distinct. When $r_1 = r_3$, the antisymmetric part of $\Gamma_{\mu_1}\Gamma_{\mu_3}$ appears in the Matrix product of $Z_{\mu_1\mu_2}$ with $Z_{\mu_3\mu_4}$ but transforms as the representation $r = (2, 1, 0, \ldots)$. Hence these are projected out. However when we consider terms coming from $r_1 = r_3$ and $r_2 = r_4$ and take the operators of the form $\rho_{r_1}([\Gamma_{\mu_1}, \Gamma_{\mu_3}])\,\rho_{r_2}([\Gamma_{\mu_2}, \Gamma_{\mu_4}])$, these will include representations of type $(2, 2, 0, \cdots)$ which are projected out, in addition to those of type $(2, 0, 0, \cdots)$. The latter come from the trace parts $\delta_{\mu_1\mu_2}$. They have coefficients that go as $\frac{1}{D}$ and hence disappear in the large D limit. We conjecture that all terms of this sort, which transform as symmetric traceless reps, but come from antisymmetric parts of products in coincident r factors, are subleading in a 1/D expansion. The simple product on the $Z_{\mu_1\mu_2\cdots}$, obtained by dropping all the antisymmetric parts in the coincident r factors ( or, according to the conjecture, by going to the large D fixed k limit ) gives an algebra we denote as B n (R 2k+1 ). We will not try to give a proof of the conjecture that B n (R 2k+1 ) arises indeed in the large D limit of A n (R D+1 ) as outlined above. We may also just study B n (R 2k+1 ) as an algebra that has many similarities with A n (R 2k+1 ) but has simpler structure constants, while sharing the key features of being a commutative non-associative algebra which approaches the ordinary polynomial algebra of functions on R 2k+1 in the large n limit.
We now give an explicit description of the product on elements Z µ 1 ···µ s in the algebra B n (R 2k+1 ) . It will be convenient to set up some definitions as a prelude to a general formula. For a set of integers S we will denote by Z µ(S) an operator of the form (2.1) with the labels on the µ's taking values in the set S. For example Z µ 1 ···µ s is of the form Z µ(S) with the set S being the set of integers ranging from 1 to s. Instead of writing Z µ 1 µ 4 µ 5 we can write Z µ(S) with S being the set of integers {1, 4, 5}.
Given two sets of positive integers T 1 and T 2 where T 1 and T 2 have the same number of elements, which we denote as
$|T_1| = |T_2| \equiv t$, we define
$$\tilde\delta\left(\mu(T_1), \mu(T_2)\right) = \sum_\sigma \delta(\mu_{i_1}, \mu_{j_{\sigma(1)}}) \cdots \delta(\mu_{i_t}, \mu_{j_{\sigma(t)}}) \qquad (2.12)$$
The δ's on the RHS are ordinary Kronecker deltas. The i's are the integers of the set $T_1$ with a fixed ordering, which can be taken as $i_1 < i_2 < \cdots < i_t$. The j's are the integers of the set $T_2$ with a similar ordering. The σ in the sum runs over all the permutations in the group of permutations of t elements. The $\tilde\delta$ is therefore a sum over all the t! ways of pairing the integers in the set $T_1$ with those in the set $T_2$.
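For instance, the pairing sum is easy to realize in code (a small Python sketch; the function name `delta_tilde` and the tuple representation of index values are ours, not the paper's):

```python
from itertools import permutations

def delta_tilde(mus1, mus2):
    """Sum over all t! pairings of the index values mus1 with mus2 (eq. 2.12)."""
    assert len(mus1) == len(mus2)
    return sum(all(a == b for a, b in zip(mus1, perm))
               for perm in permutations(mus2))

print(delta_tilde((1, 1), (1, 1)))  # 2: both pairings of identical indices survive
print(delta_tilde((1, 2), (2, 1)))  # 1: only the crossed pairing survives
```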
With this notation the general formula for the product can be expressed as
$$Z_{\mu(S_1)}\, Z_{\mu(S_2)} = \sum_{|T_1|, |T_2|} \delta(|T_1|, |T_2|) \sum_{T_1 \subset S_1} \sum_{T_2 \subset S_2} \frac{1}{n^{2|T_1|}}\, \frac{(n - |S_1| - |S_2| + 2|T_1|)!}{(n - |S_1| - |S_2| + |T_1|)!}\; \tilde\delta\left(\mu(T_1), \mu(T_2)\right)\; Z_{\mu(S_1 \cup S_2 \setminus T_1 \cup T_2)} \qquad (2.13)$$
We have chosen $S_1$ and $S_2$ to be the sets describing the two elements of B n of the form (2.1). There is a sum over positive integers $|T_1|$ and $|T_2|$ which are the cardinalities of subsets $T_1$ and $T_2$ of $S_1$ and $S_2$ respectively. Given the restriction $|T_1| = |T_2|$, the sum over $|T_1|$ extends from 0 to $\min(|S_1|, |S_2|)$. The factor expressed in terms of factorials comes from the $|T_1|$ summations which can be done after replacing quadratic expressions in Γ's by δ's. The formula expresses the fact that the different terms in the product are obtained by summing over different ways of picking subsets $T_1$ and $T_2$ of $S_1$ and $S_2$ which contain the elements that lead to δ's. The set of remaining elements $S_1 \cup S_2 \setminus T_1 \cup T_2$ gives an operator of the form (2.1). It is instructive to check that (2.7) and (2.8) are special cases of (2.13). Note that we should set to zero all $Z_{\mu(S)}$ where $|S| > n$. For example this leads to a restriction on the sum over $|T_1|$ requiring it to start at $\max(0, |S_1| + |S_2| - n)$. This is not an issue if we are doing the 1/n expansion.
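Since (2.13) is fully explicit, it can also be checked mechanically. Below is a minimal Python sketch in our own conventions (a basis element $Z_{\mu(S)}$ is a sorted tuple of its μ-values, a general element is a dict from tuples to coefficients, and the cutoff n and the degrees are kept small so all factorials are non-negative); it reproduces (2.7) and the associator (2.10):

```python
from itertools import combinations, permutations
from math import factorial
from collections import defaultdict

N = 6  # the fuzzy cutoff n, kept small for the check

def m2(za, zb, n=N):
    """Product (2.13) of two basis elements (tuples of mu index values)."""
    out = defaultdict(float)
    s, t = len(za), len(zb)
    for k in range(min(s, t) + 1):
        coeff = factorial(n - s - t + 2 * k) / (factorial(n - s - t + k) * n ** (2 * k))
        for T1 in combinations(range(s), k):
            for T2 in combinations(range(t), k):
                for pairing in permutations(T2):   # the tilde-delta of (2.12)
                    if all(za[i] == zb[j] for i, j in zip(T1, pairing)):
                        rest = tuple(sorted([za[i] for i in range(s) if i not in T1]
                                          + [zb[j] for j in range(t) if j not in T2]))
                        out[rest] += coeff
    return dict(out)

def mul(A, B, n=N):
    """Extend m2 bilinearly to dicts {basis tuple: coefficient}."""
    out = defaultdict(float)
    for za, ca in A.items():
        for zb, cb in B.items():
            for z, c in m2(za, zb, n).items():
                out[z] += ca * cb * c
    return dict(out)

Z = lambda *mus: {tuple(sorted(mus)): 1.0}

print(mul(Z(1), Z(2)))   # (2.7): {(1, 2): 1.0}
print(mul(Z(1), Z(1)))   # (2.7): Z_11 plus 1/n on the identity ()
lhs = mul(mul(Z(1), Z(1)), Z(2))
rhs = mul(Z(1), mul(Z(1), Z(2)))
print({z: lhs.get(z, 0) - rhs.get(z, 0) for z in set(lhs) | set(rhs)})
# (2.10) with mu1 = mu2 = 1, mu3 = 2: coefficient 1/n^2 on Z_2, the rest ~ 0
```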
3. Deformed derivations on B n (R 2k+1 )
We now define operators δ α , which are derivations in the classical limit, by giving their action on the above basis.
$$\delta_\alpha\, Z_{\mu_1\mu_2\cdots\mu_s} = \sum_{i=1}^{s} \delta_{\alpha\mu_i}\, Z_{\mu(1..s \setminus i)} \qquad (3.1)$$
At finite n these are not derivations. Rather they are deformed derivations, which can be described in terms of a co-product, a structure which comes up for example in describing the action of quantum enveloping algebras on tensor products of representations or on products of elements of a q-deformed space of functions ( see for example [22,23] )
$$\begin{aligned}
\Delta(\delta_\alpha) &= (\delta_\alpha \otimes 1 + 1 \otimes \delta_\alpha) \sum_{k=0}^{\infty} \frac{(-1)^k}{n^{2k}}\; \delta_{\alpha_1}\delta_{\alpha_2}\cdots\delta_{\alpha_k} \otimes \delta_{\alpha_1}\delta_{\alpha_2}\cdots\delta_{\alpha_k} \\
&= \sum_{k=0}^{\infty} \frac{(-1)^k}{n^{2k}}\; \delta_\alpha \delta_{\alpha_1}\delta_{\alpha_2}\cdots\delta_{\alpha_k} \otimes \delta_{\alpha_1}\delta_{\alpha_2}\cdots\delta_{\alpha_k} + \sum_{k=0}^{\infty} \frac{(-1)^k}{n^{2k}}\; \delta_{\alpha_1}\delta_{\alpha_2}\cdots\delta_{\alpha_k} \otimes \delta_\alpha \delta_{\alpha_1}\delta_{\alpha_2}\cdots\delta_{\alpha_k} \qquad (3.2)
\end{aligned}$$
The leading k = 0 terms just lead to the ordinary Leibniz rule, and the corrections are subleading as n → ∞. These formulae show that the co-product is a useful structure for describing the deformation of the Leibniz rule. Another possibility one might consider is to see if adding 1/n corrections of the type $\delta_\alpha(1 + \frac{1}{n^2}\,\delta_\beta\delta_\beta)$ gives a derivative which obeys the exact Leibniz rule. This latter possibility does not seem to work.
The co-product (3.2) has the property that
$$\delta_\alpha\, .\, m_2 = m_2\, .\, \Delta(\delta_\alpha) \qquad (3.3)$$
Another way of expressing this is that
$$\begin{aligned}
\delta_\alpha(A.B) = m_2\, .\, \Delta(\delta_\alpha)\, A \otimes B = &\sum_{k=0}^{\infty} \frac{(-1)^k}{n^{2k}} \left(\delta_\alpha \delta_{\alpha_1}\delta_{\alpha_2}\cdots\delta_{\alpha_k} A\right) . \left(\delta_{\alpha_1}\delta_{\alpha_2}\cdots\delta_{\alpha_k} B\right) \\
+ &\sum_{k=0}^{\infty} \frac{(-1)^k}{n^{2k}} \left(\delta_{\alpha_1}\delta_{\alpha_2}\cdots\delta_{\alpha_k} A\right) . \left(\delta_\alpha \delta_{\alpha_1}\delta_{\alpha_2}\cdots\delta_{\alpha_k} B\right) \qquad (3.4)
\end{aligned}$$
The proof is obtained by evaluating the LHS and RHS on an arbitrary pair of elements in the algebra.
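The deformed Leibniz rule (3.4) can also be confirmed numerically. The sketch below reuses `m2`, `mul` and `Z` from the sketch in section 2 (so it is not fully self-contained), implements the action (3.1), and truncates the k-sum in (3.4) automatically once all indices are exhausted:

```python
from collections import defaultdict

DIMS = 3  # the mu indices run over 1..DIMS (2k + 1 = 3 here)

def delta(alpha, A):
    """Action (3.1) of delta_alpha on a dict of basis elements."""
    out = defaultdict(float)
    for z, c in A.items():
        for i, mu in enumerate(z):
            if mu == alpha:
                out[z[:i] + z[i + 1:]] += c
    return dict(out)

def add(A, B, scale=1.0):
    out = defaultdict(float, A)
    for z, c in B.items():
        out[z] += scale * c
    return dict(out)

A, B, alpha = Z(1, 1, 2), Z(1, 2), 1
lhs = delta(alpha, mul(A, B))

rhs, pairs, k = {}, [(A, B)], 0
while pairs:
    for (P, Q) in pairs:
        rhs = add(rhs, mul(delta(alpha, P), Q), (-1) ** k / N ** (2 * k))
        rhs = add(rhs, mul(P, delta(alpha, Q)), (-1) ** k / N ** (2 * k))
    # go from k to k+1: apply sum_a (delta_a tensor delta_a) to every pair
    pairs = [(delta(a, P), delta(a, Q))
             for (P, Q) in pairs for a in range(1, DIMS + 1)]
    pairs = [(P, Q) for (P, Q) in pairs if P and Q]
    k += 1

diff = {z: lhs.get(z, 0) - rhs.get(z, 0) for z in set(lhs) | set(rhs)}
print(max(abs(v) for v in diff.values()))  # ~ 0 up to float roundoff: (3.4) holds
```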
It is useful to summarize some properties of δ α , which act as deformed derivations on the deformed polynomial algebra A n (R 2k+1 ). They approach ∂ α in the large n limit. At finite n they obey $[\delta_\alpha, \delta_\beta] = 0$. At finite n they transform as a vector under the SO(2k + 1) Lie algebra of derivations that acts on B n , i.e. we have
$$[L_{\mu\nu}, \delta_\alpha] = \delta_{\nu\alpha}\,\delta_\mu - \delta_{\mu\alpha}\,\delta_\nu \qquad (3.5)$$
These properties can be checked by acting with the LHS and RHS on a general element of the algebra.
A useful identity on binomial coefficients
In proving that the above is the right co-product we find the following to be a useful identity.
$$\sum_{k=0}^{A} (-1)^k\, \frac{(N + 1 + 2A)!}{(A - k)!\,(N + 1 + A + k)!} = \frac{(N + 2A)!}{A!\,(N + A)!} \qquad (3.6)$$
This is a special case of the following equation from [24]:
$$\sum_{r=0}^{n} (-1)^r \binom{n}{r} \frac{(r + b - 1)!}{(r + a - 1)!} = \frac{(n + a - b - 1)!\,(b - 1)!}{(n + a - 1)!\,(a - b - 1)!} \qquad (3.7)$$
Choose n = A, b = 1 and a = N + A + 2 in (3.7) to get precisely the desired equation (3.6).

For the LHS, where we multiply first and take derivatives after, we compute
$\delta_\alpha\, .\, m_2\, .\, (A \otimes B)$. We have
$$\delta_\alpha\, .\, m_2\, .\, \left(Z_{\mu(S)} \otimes Z_{\mu(T)}\right) = \sum_{|S_1|, |S_2|} \delta(|S_1|, |S_2|) \sum_{S_1 \subset S} \sum_{S_2 \subset T} \sum_{i \subset \{S \cup T\} \setminus \{S_1 \cup S_2\}} \tilde\delta\left(\mu(S_1), \mu(S_2)\right)\, \delta_{\alpha\mu_i}\; \frac{1}{n^{2|S_1|}}\, \frac{(n - |S| - |T| + 2|S_1|)!}{(n - |S| - |T| + |S_1|)!}\; Z_{\mu(\{S \cup T\} \setminus \{S_1 \cup S_2 \cup i\})} \qquad (3.8)$$
For the RHS of (3.4) we have
$$\begin{aligned}
m_2\, .\, \Delta(\delta_\alpha)\, .\, \left(Z_{\mu(S)} \otimes Z_{\mu(T)}\right) =\; &\sum_{|T_1|, |T_2|} \sum_{|T_3|, |T_4|} \delta(|T_1|, |T_2|)\, \delta(|T_3|, |T_4|) \sum_{\substack{T_1 \subset S,\; T_2 \subset T \\ T_3 \subset S \setminus T_1,\; T_4 \subset T \setminus T_2}}\; \sum_{i \subset \{S \cup T\} \setminus \{T_1 \cup T_2 \cup T_3 \cup T_4\}} \\
&\frac{(-1)^{|T_1|}\, |T_1|!}{n^{2|T_1| + 2|T_3|}}\; \frac{(n - |S| - |T| + 2|T_1| + 2|T_3| + 1)!}{(n - |S| - |T| + 2|T_1| + |T_3| + 1)!}\; \delta_{\alpha\mu_i}\; \tilde\delta\left(\mu(T_1), \mu(T_2)\right)\, \tilde\delta\left(\mu(T_3), \mu(T_4)\right)\; Z_{\mu(\{S \cup T\} \setminus \{T_1 \cup T_2 \cup T_3 \cup T_4 \cup i\})} \qquad (3.9)
\end{aligned}$$
This follows from the definition of $\Delta(\delta_\alpha)$ given in (3.2), the action of δ α described in (3.1), and the general structure of the product described in (2.13). The µ indices associated to the pair of subsets $(T_1, T_2)$ are contracted with Kronecker deltas coming from the tensor product $\delta_{\alpha_1}\delta_{\alpha_2}\cdots\delta_{\alpha_{|T_1|}} \otimes \delta_{\alpha_1}\delta_{\alpha_2}\cdots\delta_{\alpha_{|T_1|}}$ appearing in $\Delta(\delta_\alpha)$.
The |T 1 |! arises because the same set of µ contractions can come from different ways of applying the δ α 's in ∆(δ α ).
The contractions involving the sets T 3 and T 4 of indices come from the multiplication m 2 .
We will manipulate (3.9) to show equality with (3.8).
We write S 1 = T 1 ∪ T 3 ⊂ S and S 2 = T 2 ∪ T 4 ⊂ T . We observe that the Z's in (3.9) only depend on these subsets. We have of course the relations |S 1 | = |T 1 | + |T 3 | and
|S 2 | = |T 2 | + |T 4 | and |S 1 | = |S 2 |.
For fixed subsets S 1 , S 2 we observe the identity :
$$\sum_{\substack{T_1 \subset S_1,\; T_2 \subset S_2 \\ T_3 \subset S_1 \setminus T_1,\; T_4 \subset S_2 \setminus T_2}} \tilde\delta\left(\mu(T_1), \mu(T_2)\right)\, \tilde\delta\left(\mu(T_3), \mu(T_4)\right) = \frac{(|T_1| + |T_3|)!}{|T_1|!\,|T_3|!}\; \tilde\delta\left(\mu(S_1), \mu(S_2)\right) \qquad (3.10)$$
For fixed sets $S_1, S_2$, the $\tilde\delta(\mu(S_1), \mu(S_2))$ is a sum of $|S_1|!$ terms. The sums on the LHS contain $\binom{|S_1|}{|T_1|}\binom{|S_2|}{|T_2|} = \binom{|S_1|}{|T_1|}^2$ choices of $T_1, T_2$, and each product $\tilde\delta(\mu(T_1),\mu(T_2))\,\tilde\delta(\mu(T_3),\mu(T_4))$ contains a fraction $\frac{|T_1|!\,|T_3|!}{|S_1|!}$ of the $|S_1|!$ pairings in $\tilde\delta(\mu(S_1),\mu(S_2))$.
Given that the summands can be written in terms of S 1 and S 2 , the summations over the cardinalities of the T i subsets can be rewritten using
$$\sum_{|T_1|, |T_2|} \sum_{|T_3|, |T_4|} \delta(|T_1|, |T_2|)\; \delta(|T_3|, |T_4|) = \sum_{|S_1|, |S_2|} \delta(|S_1|, |S_2|) \qquad (3.11)$$
Using (3.10) and (3.11) we can rewrite (3.9) as
$$\begin{aligned}
\sum_{|S_1|, |S_2|} \delta(|S_1|, |S_2|) \sum_{|T_1| = 0}^{|S_1|} &(-1)^{|T_1|}\, |T_1|!\; \frac{(|T_1| + |T_3|)!}{|T_1|!\,|T_3|!}\; \frac{1}{n^{2|S_1|}} \sum_{S_1 \subset S} \sum_{S_2 \subset T} \tilde\delta\left(\mu(S_1), \mu(S_2)\right) \\
&\sum_{i \subset \{S \cup T\} \setminus \{S_1 \cup S_2\}} \delta_{\alpha\mu_i}\; \frac{(N + 2|T_1| + 2|T_3| + 1)!}{(N + 2|T_1| + |T_3| + 1)!}\; Z_{\mu(\{S \cup T\} \setminus \{S_1 \cup S_2 \cup i\})} \qquad (3.12)
\end{aligned}$$
where $|T_3| = |S_1| - |T_1|$.
To simplify the combinatoric factors we have defined N ≡ n − |S| − |T |. We can further simplify as follows
$$\begin{aligned}
&\sum_{|S_1|} \sum_{|T_1| = 0}^{|S_1|} (-1)^{|T_1|}\, \frac{|S_1|!}{(|S_1| - |T_1|)!}\; \frac{(N + 2|S_1| + 1)!}{(N + |S_1| + |T_1| + 1)!}\; \frac{1}{n^{2|S_1|}} \sum_{S_1 \subset S} \sum_{S_2 \subset T} \sum_{i \subset \{S \cup T\} \setminus \{S_1 \cup S_2\}} \tilde\delta\left(\mu(S_1), \mu(S_2)\right)\, \delta_{\alpha\mu_i}\; Z_{\mu(\{S \cup T\} \setminus \{S_1 \cup S_2 \cup i\})} \\
=\; &\sum_{|S_1|} \frac{(N + 2|S_1|)!}{(N + |S_1|)!}\; \frac{1}{n^{2|S_1|}} \sum_{S_1 \subset S} \sum_{S_2 \subset T} \sum_{i \subset \{S \cup T\} \setminus \{S_1 \cup S_2\}} \tilde\delta\left(\mu(S_1), \mu(S_2)\right)\, \delta_{\alpha\mu_i}\; Z_{\mu(\{S \cup T\} \setminus \{S_1 \cup S_2 \cup i\})} \qquad (3.13)
\end{aligned}$$
In the final equality we have used (3.6) by setting A = |S 1 | and k = |T 1 |. This establishes the equality of (3.8) and (3.9).
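The subset identity (3.10) used above can also be confirmed by brute-force enumeration (a standalone Python sketch with our own helper names):

```python
from itertools import combinations, permutations
from math import factorial

def dtilde(a, b):
    """Pairing sum (2.12) over index tuples of equal length."""
    return sum(all(x == y for x, y in zip(a, p)) for p in permutations(b))

S1, S2 = (1, 1, 2), (2, 1, 1)       # two index sets with |S1| = |S2| = 3
for t1 in range(len(S1) + 1):       # split |S1| = |T1| + |T3|
    t3 = len(S1) - t1
    lhs = 0
    for T1 in combinations(range(len(S1)), t1):
        T3 = [i for i in range(len(S1)) if i not in T1]
        for T2 in combinations(range(len(S2)), t1):
            T4 = [j for j in range(len(S2)) if j not in T2]
            lhs += (dtilde([S1[i] for i in T1], [S2[j] for j in T2]) *
                    dtilde([S1[i] for i in T3], [S2[j] for j in T4]))
    rhs = factorial(t1 + t3) // (factorial(t1) * factorial(t3)) * dtilde(S1, S2)
    print(t1, lhs == rhs)           # True for every split
```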
4. Derivations for the twisted algebra B * n (R 2k+1 )
We defined above certain deformed derivations. This is reminiscent of quantum groups where the quantum group generators act on tensor products via a deformation of the usual co-product. In this context, classical and quantum co-products can be related by a Drinfeld twist [18], in which an element F living in U ⊗ U plays a role, where U is the quantum enveloping algebra. This suggests that we could twist the product m 2 of the algebra B n (R 2k+1 ) to get a new product m * 2 which defines a new algebra B * n (R 2k+1 ). These two algebras share the same underlying vector space. We want to define a star product m * 2 which is a twisting of the product m 2 , for which the δ α are really derivations rather than deformed derivations. It turns out that this becomes possible after we use in addition to the δ α a degree operator D. For symmetric elements of the form Z µ(S) the degree operator is defined to have eigenvalue |S|. In the following we will describe a twist which uses D, δ α and leads to a product m * 2 for which the δ α are really derivations. It will be interesting to see if the formulae given here are related to Drinfeld twists in a precise sense. The physical idea is that field theory actions can be written equivalently either using m 2 or m * 2 , somewhat along the lines of the Seiberg-Witten map [25].
Formula for the star product
Let us write m * 2 for the star product. It is useful to view m * 2 as a map from A n ⊗ A n to A n . With respect to the new product m * 2 , δ α obey the Leibniz rule at finite n.
$$\delta_\alpha\, .\, m^*_2 = m^*_2\, .\, \left(\delta_\alpha \otimes 1 + 1 \otimes \delta_\alpha\right) \qquad (4.1)$$
Equivalently for two arbitrary elements A and B of the algebra, we have, after writing
$m^*_2(A \otimes B) = A * B$,
$$\delta_\alpha(A * B) = (\delta_\alpha A) * B + A * (\delta_\alpha B) \qquad (4.2)$$
We look for a formula for m * 2 as an expansion in terms of m 2 , the derivatives δ α , and using a function of the degree operator to be determined. The ansatz takes the form
$$m^*_2 = \sum_{l=0}^{\infty} \frac{1}{n^{2l}}\; h_l(D)\, .\, m_2\, .\, \delta_{\alpha_1}\delta_{\alpha_2}\cdots\delta_{\alpha_l} \otimes \delta_{\alpha_1}\delta_{\alpha_2}\cdots\delta_{\alpha_l} \qquad (4.3)$$
The function h l (D) is determined by requiring that (4.2) hold for arbitrary A and B. It turns out, as we show later, that
$$h_l(D) = \binom{D}{l} \qquad (4.4)$$
For finite n the degrees of the operators do not exceed n. We can hence restrict the range of summation in (4.3) from 0 to n.
A useful identity in proving that with the choice (4.4) the product in (4.3) satisfies the Leibniz rule is
$$\sum_{l=0}^{p_1} h_l(s-1) \binom{n-s+1}{p_1 - l} = \sum_{l=0}^{p_1} h_l(s) \binom{n-s}{p_1 - l} \qquad (4.5)$$
This follows from the combinatoric identity ( see for example [26] ) which can be written in the suggestive form
$$\sum_{k=0}^{p_1} \binom{N}{k} \binom{M}{p_1 - k} = \binom{N + M}{p_1} \qquad (4.6)$$
Substituting N with s − 1 and M with n − s + 1 we have
$$\sum_{l=0}^{p_1} \binom{s-1}{l} \binom{n-s+1}{p_1 - l} = \binom{n}{p_1} \qquad (4.7)$$
Substituting N with s and M with n − s we have
$$\sum_{l=0}^{p_1} \binom{s}{l} \binom{n-s}{p_1 - l} = \binom{n}{p_1} \qquad (4.8)$$
Hence the identity (4.5) required of $h_l$ is indeed satisfied by the choice (4.4).
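Both (3.6) and the Vandermonde identity (4.6) are quick to confirm in exact arithmetic (a standalone Python sketch; (3.7) can be checked the same way):

```python
from math import comb, factorial
from fractions import Fraction

# (3.6): sum_k (-1)^k (N+1+2A)! / ((A-k)! (N+1+A+k)!) = (N+2A)! / (A! (N+A)!)
for N in range(5):
    for A in range(5):
        lhs = sum(Fraction((-1) ** k * factorial(N + 1 + 2 * A),
                           factorial(A - k) * factorial(N + 1 + A + k))
                  for k in range(A + 1))
        assert lhs == Fraction(factorial(N + 2 * A), factorial(A) * factorial(N + A))

# (4.6), Vandermonde: sum_k C(N,k) C(M,p-k) = C(N+M,p)
for N in range(6):
    for M in range(6):
        for p in range(N + M + 1):
            assert sum(comb(N, k) * comb(M, p - k) for k in range(p + 1)) == comb(N + M, p)

print("identities (3.6) and (4.6) verified")
```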
Proof of the derivation property for the twisted product
We compute $\delta_\alpha\, .\, m^*_2\, .\, (Z_{\mu(S)} \otimes Z_{\mu(T)})$ using the definitions (4.3), (3.1) and the general form of the m 2 product in (2.13), to obtain
$$\begin{aligned}
\sum_{l=0}^{\infty} &\sum_{|T_1|, |T_2|} \delta(|T_1|, l)\, \delta(|T_2|, l) \sum_{|T_3|, |T_4|} \delta(|T_3|, |T_4|) \sum_{\substack{T_1 \subset S,\; T_2 \subset T \\ T_3 \subset S \setminus T_1,\; T_4 \subset T \setminus T_2}}\; \sum_{i \in \{S \cup T\} \setminus \{T_1 \cup T_2 \cup T_3 \cup T_4\}} \frac{l!}{n^{2l}}\; h_l(|S| + |T| - 2l - 2|T_3|)\; \frac{1}{n^{2|T_3|}} \\
&\frac{(n - |S| - |T| + 2l + 2|T_3|)!}{(n - |S| - |T| + 2l + |T_3|)!}\; \tilde\delta\left(\mu(T_1), \mu(T_2)\right)\, \tilde\delta\left(\mu(T_3), \mu(T_4)\right)\; \delta_{\alpha\mu_i}\; Z_{\mu(\{S \cup T\} \setminus \{T_1 \cup T_2 \cup T_3 \cup T_4 \cup i\})} \qquad (4.9)
\end{aligned}$$

Similarly we compute $m^*_2\, .\, (\delta_\alpha \otimes 1 + 1 \otimes \delta_\alpha)\, .\, (Z_{\mu(S)} \otimes Z_{\mu(T)})$ to obtain

$$\begin{aligned}
\sum_{l=0}^{\infty} &\sum_{|T_1|, |T_2|} \delta(|T_1|, l)\, \delta(|T_2|, l) \sum_{|T_3|, |T_4|} \delta(|T_3|, |T_4|) \left( \sum_{j \in S}\; \sum_{\substack{T_1 \subset S \setminus j,\; T_2 \subset T \\ T_3 \subset S \setminus (j \cup T_1),\; T_4 \subset T \setminus T_2}} + \sum_{j \in T}\; \sum_{\substack{T_1 \subset S,\; T_2 \subset T \setminus j \\ T_3 \subset S \setminus T_1,\; T_4 \subset T \setminus (j \cup T_2)}} \right) \delta_{\alpha\mu_j}\; \tilde\delta\left(\mu(T_1), \mu(T_2)\right)\, \tilde\delta\left(\mu(T_3), \mu(T_4)\right) \\
&Z_{\mu(\{S \cup T\} \setminus \{j \cup T_1 \cup T_2 \cup T_3 \cup T_4\})}\; \frac{l!}{n^{2l}}\; \frac{1}{n^{2|T_3|}}\; \frac{(n - |S| - |T| + 2l + 2|T_3| + 1)!}{(n - |S| - |T| + 2l + |T_3| + 1)!}\; h_l(|S| + |T| - 2|T_1| - 2|T_3| - 1) \qquad (4.10)
\end{aligned}$$
At finite n the number of linearly independent operators Z we need to consider is bounded by n, i.e. |S| ≤ n and |T| ≤ n. This means that the sum over l is also a finite sum.
Observe in (4.10) that the Z only depends on the unions T 1 ∪T 3 and T 2 ∪T 4 . We denote
S 1 ≡ T 1 ∪ T 3 and S 2 = T 2 ∪ T 4 . It follows that the cardinalities are related |S 1 | = |T 1 | + |T 3 | and |S 2 | = |T 2 | + |T 4 |.
The sums over subsets in (4.10) can be rearranged as
$$\sum_{|S_1|, |S_2|} \delta(|S_1|, |S_2|) \sum_{S_1 \subset S} \sum_{S_2 \subset T} \sum_{j \in \{S \cup T\} \setminus \{S_1 \cup S_2\}} \qquad (4.11)$$
Hence we re-express (4.10) as
$$\begin{aligned}
\sum_{|S_1|, |S_2|} \delta(|S_1|, |S_2|) \sum_{S_1 \subset S} \sum_{S_2 \subset T} \sum_{j \in \{S \cup T\} \setminus \{S_1 \cup S_2\}} \sum_{l=0}^{|S_1|} &\tilde\delta\left(\mu(S_1), \mu(S_2)\right)\; \delta_{\alpha\mu_j}\; Z_{\mu(\{S \cup T\} \setminus \{S_1 \cup S_2 \cup j\})} \\
&\frac{(n - |S| - |T| + 2l + 2|T_3| + 1)!}{(n - |S| - |T| + 2l + |T_3| + 1)!}\; l!\; \frac{1}{n^{2|T_3| + 2l}}\; \frac{|S_1|!}{l!\,(|S_1| - l)!}\; h_l(|S| + |T| - 2|S_1| - 1) \qquad (4.12)
\end{aligned}$$
where $|T_3| = |S_1| - l$.
The binomial factor $\frac{|S_1|!}{l!\,(|S_1|-l)!}$ appears in using the conversion (3.10) of a sum of products of $\tilde\delta$'s to a single $\tilde\delta$. Simplifying (4.12) we get
$$\begin{aligned}
\sum_{|S_1|, |S_2|} \delta(|S_1|, |S_2|) \sum_{S_1 \subset S} \sum_{S_2 \subset T} \sum_{j \in \{S \cup T\} \setminus \{S_1 \cup S_2\}} &\frac{1}{n^{2|S_1|}}\; \tilde\delta\left(\mu(S_1), \mu(S_2)\right)\; \delta_{\alpha\mu_j}\; Z_{\mu(\{S \cup T\} \setminus \{S_1 \cup S_2 \cup j\})} \\
&\sum_{l=0}^{|S_1|} \frac{(n - |S| - |T| + 2|S_1| + 1)!}{(n - |S| - |T| + |S_1| + l + 1)!}\; h_l(|S| + |T| - 2|S_1| - 1)\; \frac{|S_1|!}{(|S_1| - l)!} \qquad (4.13)
\end{aligned}$$
Similar manipulations can be done starting from (4.9) to collect the products of $\tilde\delta$'s into a single $\tilde\delta$. We end up with
$$\begin{aligned}
\sum_{|S_1|, |S_2|} \delta(|S_1|, |S_2|) \sum_{S_1 \subset S} \sum_{S_2 \subset T} \sum_{j \in \{S \cup T\} \setminus \{S_1 \cup S_2\}} &\frac{1}{n^{2|S_1|}}\; \tilde\delta\left(\mu(S_1), \mu(S_2)\right)\; \delta_{\alpha\mu_j}\; Z_{\mu(\{S \cup T\} \setminus \{S_1 \cup S_2 \cup j\})} \\
&\sum_{l=0}^{|S_1|} \frac{(n - |S| - |T| + 2|S_1|)!}{(n - |S| - |T| + |S_1| + l)!}\; h_l(|S| + |T| - 2|S_1|)\; \frac{|S_1|!}{(|S_1| - l)!} \qquad (4.14)
\end{aligned}$$
This expression is the same as (4.13) except in the sum appearing in the last line. Since (4.13) has the derivative acting before the m * 2 , the h l (D) in the m * 2 has an argument which is the degree |S|+|T |−2|S 1 |−1. This is because Z µ(S∪T ) has degree |S|+|T | and the degree gets reduced by 2|S 1 | = 2|T 1 | + 2|T 3 | due to the |T 1 | contractions from the derivatives in m * 2 and the |T 3 | contractions from the product m 2 in the expression for m * 2 . Finally the reduction by 1 is due to the δ α which has already acted before the m * 2 and hence before h l (D). In (4.14) the argument of h l is larger by 1 because δ α acts before m * 2 . The ratio of factorials also has a relative shift of 1 because they arise from m 2 according to (2.13) and the m 2 acts before the δ α in one case and after it in the other. The validity of (4.1) or equivalently the equality of (4.9) and (4.10) will follow from the equality of the sums
$$\begin{aligned}
&\sum_{l=0}^{|S_1|} \frac{(n - |S| - |T| + 2|S_1|)!}{(n - |S| - |T| + |S_1| + l)!}\; h_l(|S| + |T| - 2|S_1|)\; \frac{|S_1|!}{(|S_1| - l)!} \\
=\; &\sum_{l=0}^{|S_1|} \frac{(n - |S| - |T| + 2|S_1| + 1)!}{(n - |S| - |T| + |S_1| + l + 1)!}\; h_l(|S| + |T| - 2|S_1| - 1)\; \frac{|S_1|!}{(|S_1| - l)!} \qquad (4.15)
\end{aligned}$$
Substituting $h_l(D) = \binom{D}{l}$ and defining $u = |S| + |T| - 2|S_1|$ to simplify formulae, the LHS of (4.15) becomes
$$\sum_{l=0}^{|S_1|} \frac{(n - u)!}{(n - u - |S_1| + l)!}\; \frac{u!}{l!\,(u - l)!}\; \frac{|S_1|!}{(|S_1| - l)!} = |S_1|! \sum_{l=0}^{|S_1|} \binom{n - u}{|S_1| - l} \binom{u}{l} = \frac{n!}{(n - |S_1|)!} \qquad (4.16)$$
In the last step we used (4.6) ( see for example [26] ) with N = u and M = (n − u). For the RHS of (4.15) we get
$$\sum_{l=0}^{|S_1|} \frac{(n - u + 1)!}{(n - u - |S_1| + l + 1)!}\; \frac{(u - 1)!}{l!\,(u - l - 1)!}\; \frac{|S_1|!}{(|S_1| - l)!} = |S_1|! \sum_{l=0}^{|S_1|} \binom{n - u + 1}{|S_1| - l} \binom{u - 1}{l} = \frac{n!}{(n - |S_1|)!} \qquad (4.17)$$
Here we have used (4.6) with N = u − 1 and M = n − u + 1. This establishes the equation (4.1), and also shows that (4.9) and (4.10) can be simplified to
$$\sum_{|S_1|, |S_2|} \delta(|S_1|, |S_2|) \sum_{S_1 \subset S} \sum_{S_2 \subset T} \sum_{j \in \{S \cup T\} \setminus \{S_1 \cup S_2\}} \frac{1}{n^{2|S_1|}}\; \tilde\delta\left(\mu(S_1), \mu(S_2)\right)\; \delta_{\alpha\mu_j}\; Z_{\mu(\{S \cup T\} \setminus \{S_1 \cup S_2 \cup j\})}\; \frac{n!}{(n - |S_1|)!} \qquad (4.18)$$
It is also useful to note, as a corollary, that
$$Z_{\mu(S)} * Z_{\mu(T)} = \sum_{|S_1|, |S_2|} \delta(|S_1|, |S_2|) \sum_{S_1 \subset S} \sum_{S_2 \subset T} \frac{n!}{(n - |S_1|)!}\; \frac{1}{n^{2|S_1|}}\; \tilde\delta\left(\mu(S_1), \mu(S_2)\right)\; Z_{\mu(S \cup T \setminus S_1 \cup S_2)} \qquad (4.19)$$
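The closed form (4.19) makes the central claim of this section, the exact Leibniz property (4.2), directly checkable, along with its failure for m 2. A self-contained Python sketch with the same tuple/dict conventions as in the section 2 sketch (all names are ours):

```python
from itertools import combinations, permutations
from math import factorial
from collections import defaultdict

N = 6  # fuzzy cutoff n

def product(za, zb, star, n=N):
    """(2.13) for star=False, (4.19) for star=True, on basis tuples."""
    out = defaultdict(float)
    s, t = len(za), len(zb)
    for k in range(min(s, t) + 1):
        if star:
            coeff = factorial(n) / (factorial(n - k) * n ** (2 * k))
        else:
            coeff = factorial(n - s - t + 2 * k) / (factorial(n - s - t + k) * n ** (2 * k))
        for T1 in combinations(range(s), k):
            for T2 in combinations(range(t), k):
                for pairing in permutations(T2):
                    if all(za[i] == zb[j] for i, j in zip(T1, pairing)):
                        rest = tuple(sorted([za[i] for i in range(s) if i not in T1]
                                          + [zb[j] for j in range(t) if j not in T2]))
                        out[rest] += coeff
    return dict(out)

def mul(A, B, star):
    out = defaultdict(float)
    for za, ca in A.items():
        for zb, cb in B.items():
            for z, c in product(za, zb, star).items():
                out[z] += ca * cb * c
    return dict(out)

def delta(alpha, A):
    """The derivative (3.1)."""
    out = defaultdict(float)
    for z, c in A.items():
        for i, mu in enumerate(z):
            if mu == alpha:
                out[z[:i] + z[i + 1:]] += c
    return dict(out)

def leibniz_defect(A, B, alpha, star):
    lhs = delta(alpha, mul(A, B, star))
    rhs = mul(delta(alpha, A), B, star)
    for z, c in mul(A, delta(alpha, B), star).items():
        rhs[z] = rhs.get(z, 0) + c
    return max(abs(lhs.get(z, 0) - rhs.get(z, 0)) for z in set(lhs) | set(rhs))

A, B = {(1, 1, 2): 1.0}, {(1, 2): 1.0}
print(leibniz_defect(A, B, 1, star=True))   # ~0: delta is a derivation of m*_2
print(leibniz_defect(A, B, 1, star=False))  # nonzero, O(1/n^2): m_2 obeys it only at large n
```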
5. Towards abelian gauge theory for B * n (R 2k+1 )
Gauge transformations and non-associativity
The construction of gauge theory starts with the definition of covariant derivatives D α = δ α − iA α . These are used to define covariant field strengths which lead to invariant actions. Consider some field Φ in the fundamental of the gauge group. Under a gauge transformation by a gauge parameter ǫ the variation is given, as usual, by
$$\delta\Phi = i\epsilon * \Phi \qquad (5.1)$$
The desired covariance property of the proposed covariant derivative is
$$\delta(D_\mu \Phi) = i\epsilon * (D_\mu \Phi) \qquad (5.2)$$
We look for the transformation δA µ which will guarantee this covariance. Expanding
$\delta(D_\mu\Phi)$ we get
$$\begin{aligned}
\delta(D_\mu \Phi) &= \delta\left((\delta_\mu - iA_\mu) * \Phi\right) \\
&= i\,\delta_\mu(\epsilon * \Phi) - i(\delta A_\mu) * \Phi + A_\mu * (\epsilon * \Phi) \\
&= i(\delta_\mu \epsilon) * \Phi + i\epsilon * (\delta_\mu \Phi) - i(\delta A_\mu) * \Phi + A_\mu * (\epsilon * \Phi) \qquad (5.3)
\end{aligned}$$
In the last line we used the fact that the derivatives δ µ obey the correct Leibniz rule with respect to the star product (4.1). This manipulation would be more complicated if we used the product m 2 , as indicated by (3.4). Expanding the RHS of (5.2) we have
$$i\epsilon * (D_\mu \Phi) = i\epsilon * (\delta_\mu \Phi) + \epsilon * (A_\mu * \Phi) \qquad (5.4)$$
Equating LHS and RHS we see that we require
$$\begin{aligned}
(\delta A_\mu) * \Phi &= (\delta_\mu \epsilon) * \Phi - iA_\mu * (\epsilon * \Phi) + i\epsilon * (A_\mu * \Phi) \\
&= (\delta_\mu \epsilon) * \Phi - i(\epsilon * \Phi) * A_\mu + i\epsilon * (\Phi * A_\mu) \qquad (5.5)
\end{aligned}$$
In the second line we took advantage of the commutativity of the star product to rewrite the RHS. Now the additional term in the variation of the gauge field is an associator.
The associator for 3 general elements $\Phi_1, \Phi_2, \Phi_3$ in B * n (R 2k+1 ), denoted as $[\Phi_1, \Phi_2, \Phi_3]$, is defined as
$$[\Phi_1, \Phi_2, \Phi_3] = (\Phi_1 * \Phi_2) * \Phi_3 - \Phi_1 * (\Phi_2 * \Phi_3) \qquad (5.6)$$
We introduce an operator E depending on Φ 1 , Φ 3 which acts on Φ 2 to give the associator.
It has an expansion in powers of derivatives.
$$[\Phi_1, \Phi_2, \Phi_3] = E(\Phi_1, \Phi_3)\,\Phi_2 = E^{\alpha_1}(\Phi_1, \Phi_3) * \delta_{\alpha_1}\Phi_2 + E^{\alpha_1\alpha_2}(\Phi_1, \Phi_3) * \delta_{\alpha_1}\delta_{\alpha_2}\Phi_2 + \cdots \qquad (5.7)$$
The explicit form of the coefficients in the expansion is worked out in Appendix 2, where we also define an operator $F(\Phi_1, \Phi_2)$ related to the associator as $[\Phi_1, \Phi_2, \Phi_3] = F(\Phi_1, \Phi_2)\,\Phi_3$.
The gauge transformation of A µ can now be written as
$$\begin{aligned}
(\delta A_\mu) * \Phi &= (\delta_\mu \epsilon) * \Phi - i[\epsilon, \Phi, A_\mu] \\
(\delta A_\mu) &= (\delta_\mu \epsilon) - iE(\epsilon, A_\mu) \\
(\delta A_\mu) &= (\delta_\mu \epsilon) - iE^{\alpha_1}(\epsilon, A_\mu) * \delta_{\alpha_1} - iE^{\alpha_1\alpha_2}(\epsilon, A_\mu) * \delta_{\alpha_1}\delta_{\alpha_2} + \cdots \qquad (5.8)
\end{aligned}$$
This leads to a surprise. We cannot in general restrict A µ to be a function of the Z's which generate B * n (R 2k+1 ). Even if we start with such gauge fields, the gauge transformation will cause a change by E(ǫ, A µ ) which is an operator that acts on B * n (R 2k+1 ), but is not an operator of multiplication by an element of B * n (R 2k+1 ) as the first term is. Instead it involves the derivatives δ α .
This means that we should generalize the gauge field $A_\mu$ to $\hat A_\mu$, which should be understood as having an expansion in which $A_\mu$ is just the first term.
$$\hat A_\mu = A_\mu(Z) + A_{\mu\alpha_1}(Z) * \delta_{\alpha_1} + A_{\mu\alpha_1\alpha_2}(Z) * \delta_{\alpha_1}\delta_{\alpha_2} + \cdots + A_{\mu\alpha_1\cdots\alpha_n} * \delta_{\alpha_1}\delta_{\alpha_2}\cdots\delta_{\alpha_n} \qquad (5.9)$$
Now if we enlarge the configuration space of gauge fields, it is also natural to enlarge the configuration space of gauge parameters, introducing $\hat\epsilon$
$$\hat\epsilon = \epsilon(Z) + \epsilon^{\alpha_1}(Z) * \delta_{\alpha_1} + \epsilon^{\alpha_1\alpha_2}(Z) * \delta_{\alpha_1}\delta_{\alpha_2} + \cdots + \epsilon^{\alpha_1\alpha_2\cdots\alpha_n} * \delta_{\alpha_1}\delta_{\alpha_2}\cdots\delta_{\alpha_n} \qquad (5.10)$$
This will allow us the possibility that, after an appropriate gauge fixing, we recover familiar gauge fields which are functions of the coordinates only and not coordinates and derivatives.
On physical grounds, given that the non-associative 2k-sphere A n (S 2k ) arises as the algebra describing the base space of field theory for spherical brane worldvolumes [2,3,27,5,6] we expect that gauge theory on these algebras, and the related algebras
A * n (R 2k+1 ), B * n (R 2k+1 ), B * n (S 2k ) should exist.
We also know that they must approach ordinary gauge theory in the large n limit. It is also reasonable to expect that the large n limit is approached smoothly. Hence we expect that gauge fields which are functions of coordinates must be a valid way to describe the configuration space at finite n. The surprise which comes from exploring the finite n structure of the gauge transformations is that such a configuration space is only a partially gauge fixed description.
We write out the gauge transformations of $\hat A_\mu$ which follow from covariance of $D_\mu$ when we keep the first term in the derivative expansion of the generalized gauge transformation $\hat\epsilon$ and the first term in the derivative expansion of $\hat A_\mu$. The two operators E and F related to the associator, which we introduced in Appendix 2, are both useful.
$$\begin{aligned}
\delta \hat A_\mu =\; &\left[\delta_\mu \epsilon + iA_{\mu\alpha_1} * (\delta_{\alpha_1}\epsilon) - i\epsilon^{\alpha_1} * (\delta_{\alpha_1} A_\mu) + \cdots\right] \\
+\; &\left[-iE^{\beta_1}(\epsilon, A_\mu) * \delta_{\beta_1} + iF^{\beta_1}(A_{\mu\alpha_1}, \delta_{\alpha_1}\epsilon) * \delta_{\beta_1} - iF^{\beta_1}(\epsilon^{\alpha_1}, \delta_{\alpha_1} A_\mu) * \delta_{\beta_1} + \cdots\right] \\
+\; &\left[-iE^{\beta_1\beta_2}(\epsilon, A_\mu) * \delta_{\beta_1}\delta_{\beta_2} - iE^{\beta_1}(\epsilon, A_{\mu\alpha_1}) * \delta_{\beta_1}\delta_{\alpha_1} + iF^{\beta_1\beta_2}(A_{\mu\alpha_1}, \delta_{\alpha_1}\epsilon) * \delta_{\beta_1}\delta_{\beta_2} - iF^{\beta_1\beta_2}(\epsilon^{\alpha_1}, \delta_{\alpha_1} A_\mu) * \delta_{\beta_1}\delta_{\beta_2} + \cdots\right] \qquad (5.11)
\end{aligned}$$
We have separated the terms which involve functions from those involving one-derivative operators, two-derivative operators, etc. We have not exhibited terms involving higher derivative transformations of $A_\mu$, but it should be possible to exhibit them more explicitly using the general formulae for the E and F operators derived in Appendix 2. If we set the derivative parts in $\hat A_\mu$ to zero, e.g. $A_{\mu\alpha_1} = 0$, (5.11) shows that the purely coordinate dependent pieces still get modified from the usual $\delta_\mu\epsilon$, and from the requirement that the first and second derivative corrections to $A_\mu$ vanish, we get
$$\begin{aligned}
- iE^{\beta_1}(\epsilon, A_\mu) - iF^{\beta_1}(\epsilon^{\alpha_1}, \delta_{\alpha_1} A_\mu) + \cdots &= 0 \\
- iE^{\beta_1\beta_2}(\epsilon, A_\mu) - iF^{\beta_1\beta_2}(\epsilon^{\alpha_1}, \delta_{\alpha_1} A_\mu) + \cdots &= 0 \qquad (5.12)
\end{aligned}$$
For finite n the terms left as dotted should be expressible as a finite sum since powers of derivatives higher than the n'th automatically vanish on any element of the algebra B * n (R 2k+1 ).
Formula for the associator
We consider the associator for a triple of general elements Z µ(S) , Z µ(T ) , Z µ(V ) where S, T, V are sets of distinct positive integers. We use (4.19) for Z µ(S) * Z µ(T ) . Then we
compute $(Z_{\mu(S)} * Z_{\mu(T)}) * Z_{\mu(V)}$ as
$$\begin{aligned}
(Z_{\mu(S)} * Z_{\mu(T)}) * Z_{\mu(V)} =\; &\sum_{|S_1|, |S_2|} \sum_{|S_3|, |S_4|} \delta(|S_1|, |S_2|)\, \delta(|S_3|, |S_4|) \sum_{\substack{S_1 \subset S,\; S_2 \subset T \\ S_3 \subset \{S \cup T\},\; S_4 \subset V}} \frac{1}{n^{2|S_1| + 2|S_3|}}\; \frac{n!}{(n - |S_1|)!}\; \frac{n!}{(n - |S_3|)!} \\
&\tilde\delta\left(\mu(S_1), \mu(S_2)\right)\, \tilde\delta\left(\mu(S_3), \mu(S_4)\right)\; Z_{\mu(\{S \cup T \cup V\} \setminus \{S_1 \cup S_2 \cup S_3 \cup S_4\})} \qquad (5.13)
\end{aligned}$$
It is useful to separate the sum over subsets $S_3$ and $S_4$ to distinguish the delta functions $\delta_{SV}$ between indices belonging to the sets S and V and delta functions $\delta_{TV}$ between indices belonging to the sets T and V. We decompose the cardinalities of the sets as
$$|S_3| = |S_3^{(1)}| + |S_3^{(2)}|, \qquad |S_4| = |S_4^{(1)}| + |S_4^{(2)}| \qquad (5.14)$$
and the sets as
$$\begin{aligned}
S_3 &= S_3^{(1)} \cup S_3^{(2)} \quad \text{where} \quad S_3^{(1)} \subset S \setminus S_1,\;\; S_3^{(2)} \subset T \setminus S_2 \\
S_4 &= S_4^{(1)} \cup S_4^{(2)} \quad \text{where} \quad S_4^{(1)} \subset V,\;\; S_4^{(2)} \subset V \qquad (5.15)
\end{aligned}$$
This gives
$$\begin{aligned}
(Z_{\mu(S)} * Z_{\mu(T)}) * Z_{\mu(V)} =\; &\sum_{|S_1|, |S_2|}\; \sum_{|S_3^{(1)}|, |S_4^{(1)}|}\; \sum_{|S_3^{(2)}|, |S_4^{(2)}|} \delta(|S_1|, |S_2|)\; \delta(|S_3^{(1)}|, |S_4^{(1)}|)\; \delta(|S_3^{(2)}|, |S_4^{(2)}|) \\
&\sum_{\substack{S_1 \subset S,\; S_2 \subset T \\ S_3^{(1)} \subset \{S \setminus S_1\},\; S_3^{(2)} \subset \{T \setminus S_2\} \\ S_4^{(1)} \subset V,\; S_4^{(2)} \subset V}} \tilde\delta\left(\mu(S_1), \mu(S_2)\right)\; \tilde\delta\left(\mu(S_3^{(1)}), \mu(S_4^{(1)})\right)\, \tilde\delta\left(\mu(S_3^{(2)}), \mu(S_4^{(2)})\right) \\
&Z_{\mu(\{S \cup T \cup V\} \setminus \{S_1 \cup S_2 \cup S_3 \cup S_4\})}\; \frac{1}{n^{2|S_1| + 2|S_3|}}\; \frac{n!}{(n - |S_1|)!}\; \frac{n!}{(n - |S_3^{(1)}| - |S_3^{(2)}|)!} \qquad (5.16)
\end{aligned}$$
Note that the combinatoric factor needed to rewrite the $\tilde\delta(\mu(S_3), \mu(S_4))$ in terms of the sum of products $\tilde\delta(\mu(S_3^{(1)}), \mu(S_4^{(1)}))\, \tilde\delta(\mu(S_3^{(2)}), \mu(S_4^{(2)}))$ is 1. The expression $\tilde\delta(\mu(S_3), \mu(S_4))$ contains $|S_3|!$ Kronecker delta's. When we sum the product of two $\tilde\delta$'s over the ways of splitting $S_4$, we recover precisely these $|S_3|!$ terms, each once.

In a similar way we can write $Z_{\mu(S)} * (Z_{\mu(T)} * Z_{\mu(V)})$. After separating the $\tilde\delta$ from the second multiplication into a product of two $\tilde\delta$'s containing pairings of S with V and pairings of S with T, we have an expression of the same form above, except that the factor
$$\frac{n!}{(n - |S_1|)!}\; \frac{n!}{(n - |S_3^{(1)}| - |S_3^{(2)}|)!} \quad \text{is replaced by} \quad \frac{n!}{(n - |S_3^{(2)}|)!}\; \frac{n!}{(n - |S_1| - |S_3^{(1)}|)!}$$
. Hence we can write down an expression for the associator (Z µ(S) * Z µ(T ) ) * Z µ(V ) − Z µ(S) * (Z µ(T ) * Z µ(V ) ) as a sum of the above form with a coefficient which is a difference of these two factors. It is instructive to rewrite the final outcome of the multiplications and the associator using a notation for the summation labels which is more informative and less prejudiced by the order in which the multiplication was done. We write R ST for the subset of S which is paired with indices in T , and R T S for the subset of indices in T which are paired with indices in S. The cardinalities are denoted as usual by |R ST | = |R T S |. Similarly we use subsets R SV , R V S for pairings between the sets S and V , and the subsets R T V , R V T . This way we can write the associator as
$$\begin{aligned}
&(Z_{\mu(S)} * Z_{\mu(T)}) * Z_{\mu(V)} - Z_{\mu(S)} * (Z_{\mu(T)} * Z_{\mu(V)}) = \\
&\sum_{\substack{R_{ST}, R_{TS} \\ R_{SV}, R_{VS} \\ R_{TV}, R_{VT}}} \frac{\tilde\delta\left(\mu(R_{ST}), \mu(R_{TS})\right)\, \tilde\delta\left(\mu(R_{SV}), \mu(R_{VS})\right)\, \tilde\delta\left(\mu(R_{TV}), \mu(R_{VT})\right)}{n^{2|R_{ST}| + 2|R_{SV}| + 2|R_{TV}|}}\; Z_{\mu(\{S \cup T \cup V\} \setminus \{R_{ST} \cup R_{TS} \cup R_{SV} \cup R_{VS} \cup R_{TV} \cup R_{VT}\})} \\
&\left[\frac{n!}{(n - |R_{ST}|)!}\; \frac{n!}{(n - |R_{SV}| - |R_{VT}|)!} - \frac{n!}{(n - |R_{TV}|)!}\; \frac{n!}{(n - |R_{ST}| - |R_{SV}|)!}\right] \qquad (5.17)
\end{aligned}$$
Let us define A(|R ST |, |R SV |, |R T V |) to be the coefficient in the expression above
$$A(|R_{ST}|, |R_{SV}|, |R_{TV}|) \equiv \frac{n!}{(n - |R_{ST}|)!}\; \frac{n!}{(n - |R_{SV}| - |R_{VT}|)!} - \frac{n!}{(n - |R_{TV}|)!}\; \frac{n!}{(n - |R_{ST}| - |R_{SV}|)!} \qquad (5.18)$$
We observe that exchanging the S and V labels in A is an antisymmetry
$$A(|R_{VT}|, |R_{VS}|, |R_{ST}|) = -A(|R_{ST}|, |R_{SV}|, |R_{TV}|) \qquad (5.19)$$
This is to be expected and is a nice check on the above formulae. Indeed for a commutative algebra as we have here
$$\begin{aligned}
[Z_{\mu(S)}, Z_{\mu(T)}, Z_{\mu(V)}] &= (Z_{\mu(S)} * Z_{\mu(T)}) * Z_{\mu(V)} - Z_{\mu(S)} * (Z_{\mu(T)} * Z_{\mu(V)}) \\
&= Z_{\mu(V)} * (Z_{\mu(S)} * Z_{\mu(T)}) - (Z_{\mu(T)} * Z_{\mu(V)}) * Z_{\mu(S)} \\
&= Z_{\mu(V)} * (Z_{\mu(T)} * Z_{\mu(S)}) - (Z_{\mu(V)} * Z_{\mu(T)}) * Z_{\mu(S)} \\
&= -[Z_{\mu(V)}, Z_{\mu(T)}, Z_{\mu(S)}] \qquad (5.20)
\end{aligned}$$
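The antisymmetry (5.19), and the fact that A is subleading at large n relative to the individual terms in (5.18), so that the associator is suppressed and associativity is recovered as n → ∞, can be confirmed in a few lines (a standalone Python sketch; the function name is ours):

```python
from math import factorial

def A(a, b, c, n):
    """Coefficient (5.18) with a = |R_ST|, b = |R_SV|, c = |R_TV|."""
    return (factorial(n) // factorial(n - a) * factorial(n) // factorial(n - b - c)
          - factorial(n) // factorial(n - c) * factorial(n) // factorial(n - a - b))

n = 20
for a in range(3):
    for b in range(3):
        for c in range(3):
            assert A(c, b, a, n) == -A(a, b, c, n)    # the antisymmetry (5.19)

print(A(2, 1, 0, n), n ** 3)  # the difference is ~n^2 while each term is ~n^3
```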
Simple Examples of Associator
As another check on the general formula (5.17) we consider an example of the form
$[Z_{\mu(S)}, Z_{\mu_1}, Z_{\mu_2}]$.
$$\begin{aligned}
Z_{\mu(S)} * Z_{\mu_1} &= Z_{\mu_1\mu(S)} + \sum_{i \in S} \frac{1}{n}\, \delta_{\mu_1\mu_i}\, Z_{\mu(S \setminus i)} \\
(Z_{\mu(S)} * Z_{\mu_1}) * Z_{\mu_2} &= Z_{\mu_1\mu_2\mu(S)} + \frac{\delta_{\mu_1\mu_2}}{n}\, Z_{\mu(S)} + \sum_{i \in S} \frac{1}{n}\, \delta_{\mu_2\mu_i}\, Z_{\mu_1\mu(S \setminus i)} + \sum_{i \in S} \frac{1}{n}\, \delta_{\mu_1\mu_i}\, Z_{\mu_2\mu(S \setminus i)} + \sum_{i,j \in S} \frac{1}{n^2}\, \delta_{\mu_1\mu_i}\, \delta_{\mu_2\mu_j}\, Z_{\mu(S \setminus i,j)} \qquad (5.21)
\end{aligned}$$
When we multiply in the other order
$$\begin{aligned}
Z_{\mu_1} * Z_{\mu_2} &= Z_{\mu_1\mu_2} + \frac{\delta_{\mu_1\mu_2}}{n} \\
Z_{\mu(S)} * (Z_{\mu_1} * Z_{\mu_2}) &= Z_{\mu_1\mu_2\mu(S)} + \frac{1}{n} \sum_{i \in S} \delta_{\mu_1\mu_i}\, Z_{\mu_2\mu(S \setminus i)} + \frac{1}{n} \sum_{i \in S} \delta_{\mu_2\mu_i}\, Z_{\mu_1\mu(S \setminus i)} \\
&\quad + \frac{n(n-1)}{n^4} \sum_{i,j \in S} \delta_{\mu_1\mu_i}\, \delta_{\mu_2\mu_j}\, Z_{\mu(S \setminus i,j)} + \frac{\delta_{\mu_1\mu_2}}{n}\, Z_{\mu(S)} \qquad (5.22)
\end{aligned}$$
Hence the associator is
$$[Z_{\mu(S)}, Z_{\mu_1}, Z_{\mu_2}] = (Z_{\mu(S)} * Z_{\mu_1}) * Z_{\mu_2} - Z_{\mu(S)} * (Z_{\mu_1} * Z_{\mu_2}) = \frac{1}{n^3} \sum_{i,j \in S} \delta_{\mu_1\mu_i}\, \delta_{\mu_2\mu_j}\, Z_{\mu(S \setminus i,j)} \qquad (5.23)$$

6. Gauge Theory on B * n (S 2k )
We will obtain an action for the deformed algebra of functions B * n (S 2k ) which is obtained by applying the constraint of constant radius to the algebra B * n (R 2k+1 ).
A finite n action which approaches the abelian Yang-Mills action on S 2k
We begin by writing the ordinary pure Yang-Mills action on S 2k in terms of the embedding coordinates in R 2k+1 . We describe the S 2k in terms of $\sum_\mu z_\mu z_\mu = 1$. The derivatives $\frac{\partial}{\partial z_\mu}$ can be expanded into an angular part and a radial part.
$$\frac{\partial}{\partial z_\mu} = \frac{\partial \theta^a}{\partial z_\mu}\, \frac{\partial}{\partial \theta^a} + \frac{\partial r}{\partial z_\mu}\, \frac{\partial}{\partial r} = P_\mu + \frac{z_\mu}{r}\, \frac{\partial}{\partial r} \qquad (6.1)$$
We have denoted by $P_\mu = P^a_\mu \frac{\partial}{\partial \theta^a}$ the projection of the derivative along the sphere. Consider the commutator $[P_\mu - iA_\mu, P_\nu - iA_\nu]$ for gauge fields $A_\mu$ satisfying $z_\mu A_\mu = 0$ and which are functions of the angular variables only. A general gauge field $A_\mu$ has an expansion
$$A_\mu = P^a_\mu A_a + \frac{z_\mu}{r}\, A_r.$$
The condition $z_\mu A_\mu = 0$ guarantees that $A_r = 0$, hence
$$A_\mu = P^a_\mu A_a \qquad (6.2)$$
The following are useful observations
$$[D_\mu, D_\nu] = [P_\mu - iA_\mu, P_\nu - iA_\nu] = P^a_\mu P^b_\nu F_{ab} + L^a_{\mu\nu} D_a = P^a_\mu P^b_\nu F_{ab} + z_\mu D_\nu - z_\nu D_\mu \qquad (6.3)$$
We can then express
$$P^a_\mu P^b_\nu F_{ab} = [D_\mu, D_\nu] - (z_\mu D_\nu - z_\nu D_\mu) \qquad (6.4)$$
Now observe that $P^a_\mu P^b_\mu = G^{ab}$, i.e. the inverse of the metric induced on the sphere by its embedding in R 2k+1 , which is also the standard round metric. Hence we can write the Yang-Mills action as
$$\int d^4\theta\, \sqrt{G}\; G^{ac} G^{bd} F_{ab} F_{cd} = \int d^4\theta\, \sqrt{G}\; \left([D_\mu, D_\nu] - (z_\mu D_\nu - z_\nu D_\mu)\right)^2 \qquad (6.5)$$
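The tangentiality statements behind (6.1)-(6.2) are easy to confirm symbolically; the following sympy sketch does so for the k = 1 case (S² embedded in R³) and is illustrative only, nothing in it is specific to the paper:

```python
import sympy as sp

z = sp.symbols('z1:4', real=True)     # embedding coordinates of R^3
r = sp.sqrt(sum(zi ** 2 for zi in z))

def P(mu, f):
    """Tangential part of d/dz_mu: subtract the radial piece (z_mu/r) d/dr."""
    radial = sum(z[nu] * sp.diff(f, z[nu]) for nu in range(3)) / r ** 2
    return sp.diff(f, z[mu]) - z[mu] * radial

assert sp.simplify(P(0, r)) == 0      # P_mu annihilates the radius: it is tangent
assert sp.simplify(sum(z[m] * P(m, z[1] * z[2]) for m in range(3))) == 0  # z.P = 0
print(sp.simplify(P(0, z[1])))        # -z1*z2/r^2: the induced projector delta - z z / r^2
```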
We define a commutative non-associative B * n (S 2k ) by imposing the condition $\sum_\alpha Z_{\alpha\alpha} = \frac{(n-1)}{n}\, R^2$. Z operators with more indices contracted, e.g. $Z_{\alpha\alpha\beta\beta}$, can be deduced because they appear in products like $Z_{\alpha\alpha} * Z_{\beta\beta}$ and their generalizations. We have introduced an extra generator R which corresponds to the radial coordinate transverse to the sphere.
The projection operators P α obey P α = δ α − Q α . We have already shown that δ α acts as a derivation of the star product. The behaviour of Q α can be obtained similarly.
In general we can expect it to be a deformed derivation. In classical geometry:
$$\delta_\alpha R = \frac{Z_\alpha}{R}, \qquad Q_\alpha R = \frac{Z_\alpha}{R}, \qquad Q_\alpha Z_\beta = \frac{Z_\alpha Z_\beta}{R^2} \qquad (6.6)$$
This suggests the definitions at finite n:
$$\delta_\alpha R = \frac{Z_\alpha}{R}, \qquad Q_\alpha R = \frac{Z_\alpha}{R}, \qquad Q_\alpha Z_{\mu(S)} = \frac{|S|}{R^2}\, Z_\alpha * Z_{\mu(S)} \qquad (6.7)$$
In the classical case Q α is a derivation so its action on a polynomial in the Z's is defined by knowing its action on the Z's and using Leibniz rule. In the finite n case we have given the action on a general element of the algebra and we leave the deformation of the Leibniz rule to be computed. The deformed Leibniz rule can be derived by techniques similar to the ones used in section 3. Following section 5, we can determine the gauge transformation of A µ by requiring covariance of D µ = P µ − iA µ . The deformation of Leibniz rule implies that the gauge transformation law will pick up extra terms due to
$$\begin{aligned}
P_\alpha(\epsilon * \Phi) &= (\delta_\alpha - Q_\alpha)(\epsilon * \Phi) \\
&= (\delta_\alpha \epsilon) * \Phi + \epsilon * (\delta_\alpha \Phi) - (Q_\alpha \epsilon) * \Phi - \epsilon * (Q_\alpha \Phi) - (Q^{(1)}_\alpha \epsilon) * (Q^{(2)}_\alpha \Phi) + \cdots \qquad (6.8)
\end{aligned}$$
The last term is a schematic indication of the form we may expect for the corrections to the Leibniz rule, following section 3. More details on the deformed Leibniz rule for Q α are given in Appendix 3.
As in section 5, the non-zero associator will require generalizing the gauge parameters and gauge fields in order to make the gauge invariance manifest. It is not hard to write down a finite n action ( without manifest gauge invariance ) which reproduces the classical action on the sphere
$$\int \left([P_\mu - iA_\mu, P_\nu - iA_\nu] - (Z_\mu * D_\nu - Z_\nu * D_\mu)\right)^2 \qquad (6.9)$$
It has the same form as (6.5). The A µ are chosen to satisfy Z µ * A µ = 0 to guarantee they have components tangential to the sphere, and they are expanded in symmetric traceless polynomials in the algebra B * n (S 2k ). The integral is defined to pick out the coefficient of the integrand which is proportional to the identity function in the algebra B n (S 2k ). It is reasonable to guess that this action can be obtained as a gauge fixed form of an action where the gauge invariances are manifest at finite n. The equations to be solved now involve the operators E and F as well as the additional terms due to the corrections to Leibniz rule for Q α .
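In the coefficient-dict representation used in the earlier sketches this integral is a one-liner (assuming, as there, that elements are dicts keyed by basis tuples, with the empty tuple playing the role of the identity function):

```python
def integral(A):
    """Integral on B_n(S^2k): the coefficient of the identity, as described above."""
    return A.get((), 0.0)

# e.g. by (2.7) the integral of Z_mu * Z_nu is delta_{mu nu}/n:
# integral(mul({(1,): 1.0}, {(1,): 1.0}, star=True)) == 1/N
```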
7. Outlook
The non-abelian generalization and Matrix Brane constructions
In the bulk of this paper we have considered abelian theories on a class of commutative but non-associative algebras related to the fuzzy 2k-spheres. It is also possible to generalize the calculations to non-abelian theories. Indeed these are the kinds of theories that show up in brane constructions [2], [3], [27]. In this case we need to consider covariant derivatives of the form δ µ − iA µ where the A µ are matrices whose entries take values in the A n (S 2k ).
The Φ are also Matrices containing entries which take values in the algebra. Again we define the transformation δΦ = ǫ * Φ or, writing out the Matrix indices, $\delta\Phi_{ij} = \epsilon_{ik} * \Phi_{kj}$. As in the abelian case, we can construct the gauge transformation law for A µ by requiring covariance of P µ − iA µ . It will be very interesting to construct these non-abelian generalizations in detail. They would answer a very natural question which arises in the D-brane applications where one constructs a higher brane worldvolume from a lower brane worldvolume. There is evidence from D-brane physics that the fuzzy sphere ansatze on the lower brane worldvolume leads to physics equivalent to that described by the higher brane worldvolume with a background non-abelian field strength. Following the logic of [11] it should be possible to start with the lower brane worldvolume and derive an action for the fluctuations which is a non-abelian theory on the higher dimensional sphere.
Thus the evidence gathered for the agreement of the physical picture ( brane charges, energies etc. ) from lower brane and higher brane worldvolumes would be extended into a derivation of the higher brane action from the lower brane action. Since the only algebra structure we can put on the truncated spectrum of fuzzy spherical harmonics at finite n is a non-associative one [5], the techniques of this paper provide an avenue for constructing this non-abelian theory.
An alternative description of the fluctuations in these Matrix brane constructions is in terms of an abelian theory on a higher dimensional geometry SO(2k + 1)/U (k) [6,28].
Combining this with the physical picture discussed above of a non-abelian theory on the commutative/non-associative sphere, we are led to expect that there should be a duality between the abelian theory on the SO(2k + 1)/U (k) and the abelian theory on S 2k [6]. The explicit construction of the detailed form of the non-abelian theories on the fuzzy sphere algebras A n (S 2k ) will also be interesting since it will allow us to explore the mechanisms of this duality. There has been a use of effective field theory on the higher dimensional geometry in the quantum hall effect [29,30].
Physical applications of the abelian theory
In applications of the fuzzy 4-sphere to the AdS/CFT correspondence, as in [31], [32], we have abelian gauge fields in spacetime (in addition to gravity fields etc.), so we should expect an abelian gauge theory on the fuzzy sphere if we want the fuzzy 4-sphere to be a model for spacetime. In the context of longitudinal 5-branes [2] in the DLCQ (Discrete Light Cone Quantization) of M-theory, it was originally somewhat surprising that only spherical 5-brane numbers of the form (n+1)(n+2)(n+3)/6 for integer n could be described; for n = 1, 2, 3 this gives 4, 10, 20 five-branes, for example, so a single brane is not in the list. We may ask, for example, why there is not a DLCQ description of a single spherical 5-brane. The abelian field theories considered here would be candidate worldvolume descriptions for the single 5-brane in the DLCQ of M-theory, but it is not clear how to directly derive them from BFSS Matrix theory.
Other short comments
We have only begun to investigate the structure of gauge theories related to commutative, non-associative algebras. For applications to Matrix Theory fuzzy spheres, it will be interesting to work directly with A_n(R^{2k+1}) and A_n(S^{2k}) (along with the twisted algebras A^*_n(R^{2k+1}) and A^*_n(S^{2k})), instead of the B_n(R^{2k+1}) and B_n(S^{2k}) (along with their twisted versions B^*_n(R^{2k+1}) and B^*_n(S^{2k})), which were introduced for simplicity by considering a large D limit of A_n(R^{D+1}). Instanton equations for gauge theories on these algebras will be interesting. In connecting these field theories on A_n(S^{2k}) to the Matrix Theory fuzzy 2k-spheres, it will be necessary to expand around a spherically symmetric instanton background. Developing gauge fixing procedures and setting up perturbation theory are other directions to be explored. The projection operators P_α together with the SO(2k+1) generators L_µν form the conformal algebra SO(2k+1, 1). It will be interesting to see how a finite n deformation of this acts on the finite n commutative/non-associative algebra of the fuzzy 2k-sphere.
By replacing δ_µν with η_µν (the SO(2k, 1) Lorentzian invariant) in (2.10), (2.13), (4.19) we get commutative, non-associative algebras which deform Lorentzian spacetime. Since the commutator [Z_µ, Z_ν] is zero, we do not have a θ_µν which breaks Lorentz invariance. If we assign standard dimensions of length to Z and introduce a dimensionful deformation parameter θ to make sure all terms in (2.13), (4.19) are of the same dimension, we only need a θ which is a scalar under the Lorentz group. It will be very interesting to study these algebras as Lorentz invariant deformations of spacetime, in the context of the transplanckian problem for example, along the lines of [33].
The generalized gauge fields and gauge parameters which we were led to, as a consequence of formulating gauge theory for the non-associative algebras, are strongly reminiscent of constructions that have appeared in descriptions of the geometry of W-algebras [34], where higher symmetric tensor fields appear as gauge parameters and gauge fields. It is possible that insights from W-geometry can be used in further studies of gauge theories on commutative, non-associative algebras. If a connection to W-algebras can be made, equations such as (5.12) and its generalizations, constraining the generalized gauge parameters (5.10) by requiring that the gauge fields remain purely coordinate dependent, would be conditions which define an embedding of the U(1) gauge symmetry inside a W-algebra.
The embedding is trivial when the non-associativity is zero, given simply by setting ǫ_{α_1}, ǫ_{α_1 α_2}, ··· = 0, but it requires the higher tensors to be non-zero and related to each other when the non-associativity is turned on. It is also notable that generalized gauge fields involving dependence on derivatives have appeared [35] when considering non-associative algebras related to non-zero background H fields [36]. Further geometrical understanding of these features appears likely to involve the BV formalism [37]. By using (5.17) together with (7.5) and converting m^c_2 to m^*_2 we can get formulas for the associator in terms of m^*_2. We find

[Φ_1, Φ_2, Φ_3] = Σ_{r_1,r_2,r_3=0} Σ_{a_1···a_{r_1}, b_1···b_{r_2}, c_1···c_{r_3}} [ f_{r_1}(n) f_{r_2+r_3}(n) − f_{r_3}(n) f_{r_1+r_2}(n) ]
  × δ^{β_1···β_m} ( δ_{α_1}···δ_{α_l} δ_{a_1}···δ_{a_{r_1}} δ_{b_1}···δ_{b_{r_2}} Φ_1 * δ_{α_1···α_l} δ_{a_1}···δ_{a_{r_1}} δ_{c_1}···δ_{c_{r_3}} Φ_2 ) * δ_{β_1}···δ_{β_m} δ_{b_1}···δ_{b_{r_2}} δ_{c_1}···δ_{c_{r_3}} Φ_3   (7.6)
The a, b, c, α, β indices all run from 1 to 2k+1. This expresses the associator [Φ_1, Φ_2, Φ_3] as an operator F depending on Φ_1, Φ_2 and acting on Φ_3:

[Φ_1, Φ_2, Φ_3] = F^{α_1}(Φ_1, Φ_2) * δ_{α_1} Φ_3 + F^{α_1 α_2}(Φ_1, Φ_2) * δ_{α_1} δ_{α_2} Φ_3 + ···   (7.7)
The successive terms are subleading in the 1/n expansion.
We can also use the second line of (7.6) to write the associator as

[Φ_1, Φ_2, Φ_3] = Σ_{r_1,r_2,r_3=0} Σ_{a_1···a_{r_1}, b_1···b_{r_2}, c_1···c_{r_3}} [ f_{r_1}(n) f_{r_2+r_3}(n) − f_{r_3}(n) f_{r_1+r_2}(n) ]
  × δ^{β_1···β_m} ( δ_{α_1···α_l} δ_{a_1}···δ_{a_{r_1}} δ_{b_1}···δ_{b_{r_2}} Φ_1 * δ_{α_1···α_l} δ_{b_1}···δ_{b_{r_2}} δ_{c_1}···δ_{c_{r_3}} Φ_3 ) * δ_{β_1}···δ_{β_m} δ_{c_1}···δ_{c_{r_3}} δ_{a_1}···δ_{a_{r_1}} Φ_2   (7.8)

This gives the associator as an operator E depending on Φ_1, Φ_3 and acting on Φ_2:

[Φ_1, Φ_2, Φ_3] = E^{α_1}(Φ_1, Φ_3) * δ_{α_1} Φ_2 + E^{α_1 α_2}(Φ_1, Φ_3) * δ_{α_1} δ_{α_2} Φ_2 + ···   (7.9)
The successive terms are subleading in the 1/n expansion.
Q_α (Z_{µ(S)} * Z_{µ(T)}) − (Q_α Z_{µ(S)}) * Z_{µ(T)} − Z_{µ(S)} * (Q_α Z_{µ(T)})
 = (1/R^2) Σ_{i, T_1⊂S, T_2⊂T} ( −2|T_1|/n + |T_1||S|/n^2 ) (n! / (n^{2|T_1|} (n − |T_1|)!)) δ_{αµ_i} δ( µ(T_1), µ(T_2) ) Z_{µ(S∪T \ {T_1∪T_2})}
 + (1/R^2) Σ_{i, T_1⊂S, T_2⊂T} ( −2|T_1|/n + |T_1||T|/n^2 ) (n! / (n^{2|T_1|} (n − |T_1|)!)) δ_{αµ_i} δ( µ(T_1), µ(T_2) ) Z_{µ(S∪T \ {T_1∪T_2})}   (7.10)
We can write this deformation of the Leibniz rule in terms of operators acting on Z_{µ(S)} ⊗ Z_{µ(T)} as follows:

Q_α (Z_{µ(S)} * Z_{µ(T)}) − (Q_α Z_{µ(S)}) * Z_{µ(T)} − Z_{µ(S)} * (Q_α Z_{µ(T)})
 = −(1/R^2) Σ_{k=0}^{n} Σ_{a_1···a_k} (2^k / n^{2k}) (n! / (k!(n − k)!)) m^c_2 (m^c_2 ⊗ 1)(1 ⊗ δ_{a_1}···δ_{a_k} ⊗ δ_{a_1}···δ_{a_k})(Z_α ⊗ Z_{µ(S)} ⊗ Z_{µ(T)})
 + (1/R^2) Σ_{k=0}^{n} Σ_{b_1···b_k} (1 / n^{2k}) (n! / (k!(n − k)!)) ( −2k/n + |S|k/n^2 ) m^c_2 (m^c_2 ⊗ 1)(1 ⊗ δ_{b_1}···δ_{b_k} ⊗ δ_{b_1}···δ_{b_k})(δ_a ⊗ 1 ⊗ δ_a)(Z_α ⊗ Z_{µ(S)} ⊗ Z_{µ(T)})
 + (1/R^2) Σ_{k=0}^{n} Σ_{b_1···b_k} (1 / n^{2k}) (n! / (k!(n − k)!)) ( −2k/n + |T|k/n^2 ) m^c_2 (m^c_2 ⊗ 1)(1 ⊗ δ_{b_1}···δ_{b_k} ⊗ δ_{b_1}···δ_{b_k})(δ_a ⊗ δ_a ⊗ 1)(Z_α ⊗ Z_{µ(S)} ⊗ Z_{µ(T)})   (7.11)
Now we can convert the classical products m^c_2 into expressions involving m^*_2 using the formulas of the first appendix; the resulting coefficients g̃_k(n, D) are derived in Appendix 4 ((7.14)-(7.16)).
The proof of the deformed derivation property (3.4) proceeds by writing out both sides of the equation for a general pair of elements A, B of the form Z_{µ(S)} and Z_{µ(T)}.
The two δ̃'s on the LHS contain a sum of |T_1|!|T_2|! = (|T_1|!)^2 terms, each being a product of Kronecker deltas; the combinatoric factor in (3.10) is the ratio of these counts. For the number of Kronecker δ's in the product of two δ̃'s we get exactly |S_3|!, which explains why the combinatoric factor is 1. If |S| = 1 the associator is zero: the single δ terms cancel in the associator, which follows from (5.17). The two-δ terms can be read off from (5.17) by noting that for |R_ST| = |R_SV| = 1 and |R_TV| = 0 we have the coefficient (1/n^{2|R_ST| + 2|R_SV| + 2|R_TV|}) A(|R_ST|, |R_SV|, |R_TV|). The expansion developed above leads to the construction of operators E and F related to the associator; the operators are described in Appendix 2 and used in the first part of this section.
Acknowledgments: I would like to thank Luigi Cantini, David Gross, Bruno Harris, Pei Ming Ho, Chris Hull, David Lowe, Shahn Majid, Robert de Mello-Koch , Rob Myers, Martin Rocek, Jae Suk Park, Bill Spence, Dennis Sullivan, Gabriele Travaglini for very instructive discussions. I thank the organizers of the Simons Workshop at Stony Brook for hospitality while part of this work was done. This research was supported by DOE grant DE-FG02/19ER40688-(Task A) at Brown, and by a PPARC Advanced Fellowship at Queen Mary.
Appendix 1: The classical product in terms of the star product m^*_2

We introduce an algebra B^c_n(R^{2k+1}) with product m^c_2 which, as a vector space, is the same as B_n(R^{2k+1}) and B^*_n(R^{2k+1}). The product resembles the classical product in that it just concatenates the indices on the Z's:

m^c_2 (Z_{µ(S)} ⊗ Z_{µ(T)}) = Z_{µ(S∪T)}   (7.1)

The formula (4.19) for the product m^*_2 can be rewritten in terms of m^c_2. It is very useful to have the inverse formula, where m^c_2 is expressed in terms of m^*_2; this can be used to write general formulae for the associator by manipulating (5.17). Proving this inversion formula makes use of a combinatoric identity. Let us define f_a(n) as above. The combinatoric identity involves summing over ways of writing the positive integer k as a sum of integers a_i ≥ 1. For each such splitting of k, the symmetry factor S(a_1, a_2, ···, a_l) is the product n_1! n_2! ··· where n_1 is the number of 1's, n_2 the number of 2's, etc. We have checked the identity for k up to 6 using Maple and find the simple form of f̃ given above. An analytic proof for general k would be desirable.

Appendix 2: Operators related to the associator

In (5.17) we have written down the associator of three general elements of the algebra. This can be expressed in terms of the product m^c_2, leading to the expansions (7.6)-(7.9) used in the first part of this section.

Appendix 3: Deformed Leibniz rule for Q_α

Using the definition of Q_α we can work out the deformation of its Leibniz rule, as in (7.10) and (7.11). Since an arbitrary element of B^*_n(R^{2k+1}) can be expanded in terms of Z_{µ(S)}, we can replace Z_{µ(S)} and Z_{µ(T)} above by arbitrary elements Φ_1, Φ_2 of the algebra.

Appendix 4: m^c_2 in terms of m_2

We have found above that expressing the non-associativity of m^*_2 in terms of operators, and finding the deformed Leibniz rule for Q_α with respect to m^*_2, are conveniently done by having a formula for the classical product in terms of m^*_2. In this paper we started with m_2 and found that if we twist it into m^*_2 we get another commutative, non-associative product with nicer properties, notably one on which δ_α acts as a derivation. If one chooses to work with m_2, it is convenient to have a formula for m^c_2 in terms of m_2. Let us first note that the defining formula (2.13) for m_2 can be expressed as a formula for m_2 in terms of m^c_2:

m_2 = Σ_{k=0}^{n} Σ_{a_1···a_k} m^c_2 . (δ_{a_1}···δ_{a_k} ⊗ δ_{a_1}···δ_{a_k}) (n − D + 2k)! / (k!(n − D + k)!)   (7.13)

Here D acting on an element of B_n ⊗ B_n is just the sum of the degrees of the two elements; for example, on Z_{µ(S)} ⊗ Z_{µ(T)} it is |S| + |T|. Let us define g_k(n, D) = (n − D + 2k)! / (k!(n − D + k)!). Inverting (7.13) to give m^c_2 in terms of m_2, i.e. finding the coefficients g̃_k(n, D) which appear in the inverse formula

m^c_2 = Σ_{k=0}^{n} Σ_{a_1···a_k} m_2 . (δ_{a_1}···δ_{a_k} ⊗ δ_{a_1}···δ_{a_k}) g̃_k(n, D)   (7.14)

leads to sums of the form

g̃_k(n, D) = Σ_{l=0}^{n} Σ_{a_1···a_l} g_{a_1}(n, D) g_{a_2}(n, D − 2a_1) ··· g_{a_l}(n, D − 2a_1 − 2a_2 − ··· − 2a_{l−1}) · k! / (a_1! a_2! ··· a_l!)   (7.15)

There is a sum over l and, for each l, a sum over choices of ordered sets of positive integers a_1, a_2, ···, a_l which satisfy a_1 + a_2 + ··· + a_l = k. The first few examples of g̃_1, g̃_2, g̃_3, g̃_4, g̃_5 were computed using Maple (some of them are also easily checked by hand). They all agree with a simple general formula:

g̃_k(n, D) = ((−1)^k / n^{2k}) (n − D + 2k) (n − D + k − 1)! / (n − D)!   (7.16)

An analytic proof for general k would be desirable. This g̃_k can be used to write the associator and deformed Leibniz rules involving the product m_2.
[1] H. Grosse, C. Klimcik and P. Presnajder, "On finite 4-D quantum field theory in noncommutative geometry," Commun. Math. Phys. 180, 429-438 (1996) [hep-th/9602115].
[2] J. Castellino, S. Lee and W. Taylor, "Longitudinal 5-branes as 4-spheres in matrix theory," Nucl. Phys. B 526, 334-350 (1998) [hep-th/9712105].
[3] N. Constable, R. Myers and O. Tafjord, "Non-abelian brane intersections," JHEP 0106, 023 (2001) [hep-th/0102080].
[4] J. Madore, "The fuzzy sphere," Class. Quant. Grav. 9, 69-88 (1992).
[5] S. Ramgoolam, "On spherical harmonics for fuzzy spheres in diverse dimensions," Nucl. Phys. B 610, 461 (2001) [hep-th/0105006].
[6] P. M. Ho and S. Ramgoolam, "Higher dimensional geometries from matrix brane constructions," Nucl. Phys. B 627, 266 (2002) [hep-th/0111278].
[7] T. Banks, W. Fischler, S. H. Shenker and L. Susskind, "M-theory as a matrix model: a conjecture," Phys. Rev. D 55, 5112-5128 (1997) [hep-th/9610043].
[8] N. Ishibashi, H. Kawai, Y. Kitazawa and A. Tsuchiya, Nucl. Phys. B 498, 467 (1997) [hep-th/9612115].
[9] O. Ganor, S. Ramgoolam and W. Taylor, "Branes, fluxes and duality in M(atrix)-theory," Nucl. Phys. B 492, 191-204 (1997) [hep-th/9611202].
[10] A. Connes, M. R. Douglas and A. Schwarz, JHEP 9802, 003 (1998) [hep-th/9711162].
[11] H. Aoki, N. Ishibashi, S. Iso, H. Kawai, Y. Kitazawa and T. Tada, Nucl. Phys. B 565, 176 (2000) [hep-th/9908141].
[12] N. Seiberg, "A note on background independence in noncommutative gauge theories, matrix model and tachyon condensation," JHEP 0009, 003 (2000) [hep-th/0008013].
[13] D. A. Lowe, H. Nastase and S. Ramgoolam, Nucl. Phys. B 667, 55 (2003) [hep-th/0303173].
[14] S. Iso, Y. Kimura, K. Tanaka and K. Wakatsuki, "Noncommutative gauge theory on fuzzy sphere from matrix model," Nucl. Phys. B 604, 121 (2001) [hep-th/0101102].
[15] A. B. Hammou, M. Lagraa and M. M. Sheikh-Jabbari, Phys. Rev. D 66, 025025 (2002) [hep-th/0110291].
[16] E. Batista and S. Majid, J. Math. Phys. 44, 107 (2003) [hep-th/0205128].
[17] T. L. Curtright, G. I. Ghandour and C. K. Zachos, "Quantum algebra deforming maps, Clebsch-Gordan coefficients, coproducts, U and R matrices," J. Math. Phys. 32, 676 (1991).
[18] V. G. Drinfeld, Leningrad Math. J. 1, 1419 (1990).
[19] A. Y. Alekseev, A. Recknagel and V. Schomerus, JHEP 9909, 023 (1999) [hep-th/9908040].
[20] A. Jevicki, M. Mihailescu and S. Ramgoolam, hep-th/0008186.
[21] H. Grosse, J. Madore and H. Steinacker, J. Geom. Phys. 43, 205 (2002) [hep-th/0103164].
[22] S. Majid, "Foundations of Quantum Group Theory," CUP, 1995.
[23] V. Chari and A. Pressley, "A Guide to Quantum Groups," CUP, 1994.
[24] I. S. Gradshteyn and I. M. Ryzhik, "Table of Integrals, Series, and Products," Academic Press, 2000, Eq. 0.160.2.
[25] N. Seiberg and E. Witten, JHEP 9909, 032 (1999) [hep-th/9908142].
[26] I. S. Gradshteyn and I. M. Ryzhik, "Table of Integrals, Series, and Products," Academic Press, 2000, Eq. 0.156.1.
[27] P. Cook, R. de Mello Koch and J. Murugan, hep-th/0306250.
[28] Y. Kimura, Nucl. Phys. B 637, 177 (2002) [hep-th/0204256].
[29] S.-C. Zhang and J.-P. Hu, "A four dimensional generalization of the quantum Hall effect," Science 294, 823 (2001).
[30] B. A. Bernevig, C.-H. Chern, J.-P. Hu, N. Toumbas and S.-C. Zhang, Annals Phys. 300, 185 (2002) [cond-mat/0206164].
[31] A. Jevicki and S. Ramgoolam, "Noncommutative gravity from the AdS/CFT correspondence," JHEP 9904, 032 (1999) [hep-th/9902059].
[32] M. Berkooz and H. Verlinde, "Matrix theory, AdS/CFT and Higgs-Coulomb equivalence," JHEP 9911, 037 (1999) [hep-th/9907100].
[33] R. Brandenberger and P. M. Ho, Phys. Rev. D 66, 023517 (2002) [AAPPS Bull. 12N1, 10 (2002)] [hep-th/0203119].
[34] C. M. Hull, hep-th/9302110.
[35] P. M. Ho, JHEP 0111, 026 (2001) [hep-th/0103024].
[36] L. Cornalba and R. Schiappa, Commun. Math. Phys. 225, 33 (2002) [hep-th/0101219].
[37] I. A. Batalin and G. A. Vilkovisky, Phys. Rev. D 28, 2567 (1983) [Erratum-ibid. D 30, 508 (1984)].
| [] |
[
"R-TOSS: A Framework for Real-Time Object Detection using Semi-Structured Pruning",
"R-TOSS: A Framework for Real-Time Object Detection using Semi-Structured Pruning"
] | [
"Abhishek Balasubramaniam [email protected] \nDepartment of Electrical and Computer Engineering\nColorado State University\nFort CollinsCOUSA\n",
"Febin P Sunny [email protected] \nDepartment of Electrical and Computer Engineering\nColorado State University\nFort CollinsCOUSA\n",
"Sudeep Pasricha [email protected] \nDepartment of Electrical and Computer Engineering\nColorado State University\nFort CollinsCOUSA\n"
] | [
"Department of Electrical and Computer Engineering\nColorado State University\nFort CollinsCOUSA",
"Department of Electrical and Computer Engineering\nColorado State University\nFort CollinsCOUSA",
"Department of Electrical and Computer Engineering\nColorado State University\nFort CollinsCOUSA"
] | [] | Object detectors used in autonomous vehicles can have high memory and computational overheads. In this paper, we introduce a novel semi-structured pruning framework called R-TOSS that overcomes the shortcomings of state-of-the-art model pruning techniques. Experimental results on the Jetson TX2 show that R-TOSS has a compression rate of 4.4× on the YOLOv5 object detector with a 2.15× speedup in inference time and 57.01% decrease in energy usage. R-TOSS also enables 2.89× compression on RetinaNet with a 1.86× speedup in inference time and 56.31% decrease in energy usage. We also demonstrate significant improvements compared to various state-of-the-art pruning techniques. | 10.48550/arxiv.2303.02191 | [
"https://export.arxiv.org/pdf/2303.02191v1.pdf"
] | 257,364,940 | 2303.02191 | e291dd9e1791d01b4b95b4671bd3eb10f8fc11eb |
R-TOSS: A Framework for Real-Time Object Detection using Semi-Structured Pruning
Abhishek Balasubramaniam [email protected]
Department of Electrical and Computer Engineering
Colorado State University
Fort Collins, CO, USA
Febin P Sunny [email protected]
Department of Electrical and Computer Engineering
Colorado State University
Fort Collins, CO, USA
Sudeep Pasricha [email protected]
Department of Electrical and Computer Engineering
Colorado State University
Fort Collins, CO, USA
R-TOSS: A Framework for Real-Time Object Detection using Semi-Structured Pruning
pruning, object detection, YOLOv5, RetinaNet, Jetson TX2, model compression, computer vision
Object detectors used in autonomous vehicles can have high memory and computational overheads. In this paper, we introduce a novel semi-structured pruning framework called R-TOSS that overcomes the shortcomings of state-of-the-art model pruning techniques. Experimental results on the Jetson TX2 show that R-TOSS has a compression rate of 4.4× on the YOLOv5 object detector with a 2.15× speedup in inference time and 57.01% decrease in energy usage. R-TOSS also enables 2.89× compression on RetinaNet with a 1.86× speedup in inference time and 56.31% decrease in energy usage. We also demonstrate significant improvements compared to various state-of-the-art pruning techniques.
I. INTRODUCTION
In recent years, autonomous vehicles (AVs) have received immense attention due to their potential to improve driving comfort and reduce injuries from vehicle crashes. A report from the National Highway Traffic Safety Administration (NHTSA) indicated that in 2021, more than 31,720 people were involved in fatal accidents on U.S. roadways [1]. These accidents were found to be caused predominantly by distracted drivers, who contributed to ~94% of them. AVs can help mitigate human errors and avoid such accidents with the help of their superior perception systems. A perception system helps AVs understand their surroundings with the help of an array of sensors that can include lidars, radars, and cameras. Object detection is an important component of such perception systems [2].
AVs must process a huge amount of data in real time to provide precise corrections to the vehicle controller to maintain its course, speed, and direction. To assist with vehicle path planning and control, AVs rely on object detectors to provide information about obstacles in their surroundings. These object detectors must satisfy two important conditions: 1) maintaining high accuracy, and 2) providing inference in real time (~tens of milliseconds). In recent years, researchers have been able to design machine learning models for object detection with high accuracy, but these models are generally very compute-intensive and are often combined with sensor fusion tasks, which provide the input to these models by combining data from various sensors [3][35]. Apart from these object detectors, AVs must also process immense data as part of Advanced Driver Assistance Systems (ADAS) for operational safety and security, such as in-vehicle communication and vehicle-to-x (V2X) protocols, which can increase the computational cost and power usage [4][25][33]. This is a challenge because onboard computers in AVs are resource-constrained, with strict limits on power dissipation and computational capabilities.
Object detection is a compute- and memory-intensive task involving both classification and regression. Typically, machine learning based object detectors can be classified into two types: 1) two-stage detectors and 2) single-stage detectors. Two-stage detectors consist of a two-stage detection process that involves a region proposal stage and a subsequent object classification stage. The region proposal stage often consists of a Region Proposal Network (RPN), which proposes several Regions of Interest (ROIs) in an input image (e.g., from a camera sensor in an AV). These ROIs are used to classify objects in them, and the objects are then surrounded by bounding boxes to localize them. Examples of two-stage detectors include R-CNN [5], Fast R-CNN [6], and Faster R-CNN [7]. In contrast to two-stage detectors, single-stage detectors use a single feed-forward network that performs both classification and regression to create the bounding boxes that localize objects. Single-stage detectors are lightweight and faster than two-stage detectors. Some examples of single-stage detectors are YOLOv5 [8] (You Only Look Once), RetinaNet [9], YOLOR [10], and YOLOX [11].
Unfortunately, even single-stage detectors are compute and memory intensive, so deploying and executing them on embedded and IoT boards in AVs remains a bottleneck [12]. To address this bottleneck, many techniques have been proposed in recent years, such as pruning, quantization, and knowledge distillation, to compress and optimize object detector execution, with an emphasis on improving inference time while preserving model accuracy. Pruning techniques in particular have been shown to be very effective in increasing the sparsity of object detector models, by carefully removing redundant weights that do not impact overall accuracy. Such sparse models require fewer computations, and can be compressed to reduce latency, memory, and energy costs.
In this paper, we introduce the R-TOSS object detector pruning framework to achieve efficient pruning of object detectors used in AVs. Unlike traditional pruning algorithms, which can generally be classified as structured pruning [19]-[23] or unstructured pruning [13]-[18], we utilize a more niche approach that involves semi-structured pruning. Our approach involves applying specific kernel patterns to prune convolutional kernels and associated connectivity. The novel contributions of our proposed pruning framework for object detectors are as follows:
- An approach for reducing the computational cost of iterative pruning by using depth first search to generate parent-child kernel computational graphs, to be pruned together;
- A pruning technique to prune 1×1 kernel weights to increase achievable model sparsity;
- An implementation of kernel pruning without connectivity pruning, to preserve kernel information for inference, which helps retain model accuracy;
- A detailed comparison against multiple state-of-the-art pruning approaches to showcase the effectiveness of our novel framework, in terms of mAP, latency, energy usage, and achieved sparsity.

The rest of the paper is organized as follows: Section II provides a detailed background on state-of-the-art object detector models and pruning techniques; Section III describes the motivation for this work; Section IV introduces our framework and provides a deep dive into the algorithms proposed; Section V showcases our experimental results; and Section VI presents our conclusions.
II. BACKGROUND AND RELATED WORK
A. Object Detectors
Object detectors are used in AVs for various tasks such as traffic sign and traffic light recognition, lane recognition, vehicle tracking, etc. These object detectors can be classified into two categories: two-stage and single-stage detectors. To evaluate an object detector, irrespective of category, mean average precision (mAP) and intersection over union (IoU) metrics are used. mAP is the mean of the ratio of precision to recall over the individual object classes, with a higher value indicating a more accurate object detector. IoU measures the overlap between the predicted bounding box and the ground truth bounding box.
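Since IoU gates how detections are scored against ground truth, a minimal sketch of the computation may be helpful; the (x1, y1, x2, y2) corner-coordinate box format is an illustrative assumption, not a convention fixed by the detectors above.

```python
def iou(box_a, box_b):
    # Boxes are (x1, y1, x2, y2) corner coordinates (an assumed convention).
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Example: a prediction overlapping the ground truth by half its area.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # ~0.333
```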
Two-stage object detectors: Two-stage detectors use a two-stage process consisting of a region proposal and object classification. R-CNN [5] was one of the first deep learning-based object detectors to be proposed. The algorithm's novelty came from an efficient selective search algorithm for ROI proposals, which dramatically decreased the overall number of regions needing to be processed. The regions were fed into a convolutional neural network (CNN) for feature extraction, and the CNN output was sent to a support vector machine (SVM) for classifying the object in each region. Even though the reduction in ROI proposals was revolutionary in terms of minimizing inference effort, the R-CNN algorithm cannot infer in real time, as it takes ~40s to process a single input image.
To address the latency challenge, the same authors proposed Fast R-CNN [6]. In Fast R-CNN, a CNN is used to generate convolution feature maps of the input images rather than for feature extraction. The feature maps are used for ROI identification and the ROIs are warped to squares using a pooling layer, which is further transformed into a vector to be fed into a fully connected (FC) layer. The feature vector from the FC layer is used for object class prediction in the ROI, using a softmax layer, and a bounding box regressor is used for coordinate prediction. Fast R-CNN exhibited inference speeds around ~2s, significantly faster than R-CNN, but the latency is still high, making it unusable in a real-time scenario.
Faster R-CNN [7] tackled the high latency caused by the region proposal mechanism in both the prior R-CNN works, by directly feeding the image to the CNN and letting the CNN learn to perform ROI prediction. This remarkably reduced the latency to ~0.2s.
Despite their desirable accuracies, two-stage detectors are bulky and have best-case latencies in the hundreds of milliseconds on high-end GPUs. These latencies and resource overheads make them impractical for embedded real-time use cases, such as in AVs.
Single-stage detectors: Single-stage detectors are much faster than two-stage detectors because they use a single feed-forward network without any intermediate stage for ROI proposals. The YOLO algorithm was revolutionary when it came out in 2016, as it reframed object detection as a single-stage regression problem, going from image feature extraction to bounding box generation and object classification in one pass. The follow-up variants of YOLO made it faster and more accurate while preserving the single-shot detection philosophy. YOLOv4 introduced two important techniques: 'bag of freebies' (BoF), which involves improved methods for data augmentation and regularization during training, and 'bag of specials' (BoS), a post-processing module that leads to better mAP and faster inference [26]. YOLOv5 [8] proposed additional data augmentation and loss calculation improvements. It also used auto-learning bounding box anchors to adapt to a given dataset. Even though YOLO models provide good inference speed, they have a class imbalance problem when detecting small objects. This issue was addressed in the RetinaNet [9] detector, which used a focal loss function during training and a separate network for classification and bounding box regression. Table 1 shows a comparison of various two-stage and single-stage object detectors on the COCO dataset [27].
While single-stage detectors are faster than two-stage detectors, they still incur significant inference times when deployed on an embedded board. To further reduce latency, model compression techniques, such as pruning, quantization, and knowledge distillation, are essential to consider. Quantization requires specialized hardware support for efficient deployment, which may not be available in embedded boards. Knowledge distillation requires the student model to be robust in order to absorb and retain the information from the teacher model, which requires both time and complex computation. Compared to its counterparts, pruning is neither computationally complex nor hardware bound, so we focus on pruning for accelerating object detector inference in this work.
B. DNN Model Pruning
Pruning an object detection model aims to reduce the model footprint and computational complexity by removing weight parameters from the model using some criteria. Consider a deep learning model with L layers, where the most compute-intensive operation is the convolution (Conv) layer. If each Conv layer has K kernels with W non-zero weights each, then during inference the computational cost of the model is a function of (L × K × W). This computational cost increases dramatically as the number of parameters involved increases, as is the trend in modern deep learning models. By performing parameter pruning, we can induce sparsity in the model, which decreases W, and through kernel pruning we can also decrease K; this decreases the overall computational cost. Emerging computing platforms provide software compression techniques [28] which can compress the input and weight matrices in response to the presence of zero-valued (pruned) parameters, thus skipping them entirely during model execution. The skipping operation may optionally also be performed by specifically designed hardware [29].
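As a rough illustration of this cost model, the sketch below tallies Conv-layer work as a sum of K_i × W_i over layers and shows how shrinking W via pattern pruning shrinks the total; the layer list and counts are made-up values, not taken from any model discussed here.

```python
# Hypothetical per-layer (kernels K_i, nonzero weights W_i per kernel) pairs.
layers = [(64, 9), (128, 9), (256, 1)]

def conv_cost(layers):
    # Cost scales with the sum over layers of K_i * W_i (the L x K x W model).
    return sum(k * w for k, w in layers)

dense = conv_cost(layers)
# Pattern pruning that keeps 2 of 9 weights in each 3x3 kernel shrinks W_i;
# the 1x1 layer (W_i = 1) is left with its single weight in this toy example.
pruned = conv_cost([(k, max(1, round(w * 2 / 9))) for k, w in layers])
print(dense, pruned, dense / pruned)  # 1984 640 3.1
```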
Pruning approaches from prior work can be classified into three major categories: unstructured pruning, structured pruning, and semi-structured or pattern-based pruning. Unstructured pruning: In unstructured pruning, redundant weights (Fig. 1(a)) are pruned opportunistically while keeping the loss to a minimum, which helps retain the accuracy of the model. Several unstructured pruning schemes have been proposed, such as: weight magnitude pruning, which sets to zero the weights below a predefined threshold [13], [14]; gradient magnitude pruning, which prunes weights whose gradients are below a predefined threshold [15], [16]; synaptic flow pruning, an iterative technique that uses a global scoring scheme and prunes weights until the global score drops below a threshold [17]; and second-order derivative pruning, which calculates the second-order derivative with respect to a set of weights, replacing them by zero while keeping the loss of the network close to the original loss [18]. These approaches negatively impact thread-level parallelism due to the load imbalance caused by different levels of sparsity across weight matrices. Irregular sparsity also affects memory performance due to the changes it creates in data access locality, leading to reduced benefits from caching across various platforms (GPUs, CPUs, TPUs).
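As one concrete instance, the weight-magnitude variant can be sketched in a few lines of PyTorch; the tensor and threshold below are illustrative only.

```python
import torch

def magnitude_prune(weight: torch.Tensor, threshold: float) -> torch.Tensor:
    # Zero out weights whose magnitude falls below the threshold, producing
    # the irregular (unstructured) sparsity pattern discussed above.
    mask = weight.abs() >= threshold
    return weight * mask

w = torch.randn(16, 16)
w_sparse = magnitude_prune(w, threshold=0.5)
print(float((w_sparse == 0).float().mean()))  # fraction of pruned weights
```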
Structured pruning: In structured pruning, an entire filter (Fig. 1(c)) [19]-[21] or consecutive channels (Fig. 1(b)) [22], [23] are pruned to increase the sparsity of the model. Filter/channel pruning provides a more uniform weight matrix and reduces the size of the model. The reduced matrix helps in reducing the number of multiply-and-accumulate (MAC) operations compared to unstructured pruning. However, structured pruning also decreases the accuracy of the model, since weights that contribute to the overall accuracy of the model may be pruned along with the redundant weights. Structured pruning can also be used with acceleration frameworks like TensorRT [24]. Unlike unstructured pruning, due to the uniform nature of the weight matrix, structured pruning can better utilize the hardware acceleration provided by various platforms in terms of memory and bandwidth [21], [23].
Semi-structured pruning: Semi-structured pruning, also called pattern pruning, is a combination of structured and unstructured pruning schemes (Fig. 1(d)). This type of pruning utilizes kernel patterns that can be used as masks on a kernel. A mask prevents the weights it covers from being pruned, inducing partial sparsity in a kernel. By evaluating the effectiveness of the pruned kernel, by utilizing the L2 norm for example, the most effective pattern masks can be identified and deployed during inference. Since the kernel patterns can only prune a fixed number of weights inside a kernel, they induce less sparsity than their counterparts [30], [31]. To overcome this issue, pattern pruning is applied together with connectivity pruning, which prunes some of the kernels entirely. However, most modern object detectors have a large number of 1×1 kernels which contain redundant weights that are not pruned during this process, because pattern pruning techniques typically focus on kernels with sizes 3×3 and above, which have more candidate weights for pruning. Connectivity pruning also reduces the accuracy of the model, since several important weights in a particular kernel are removed along the way. However, thanks to its semi-structured nature, kernel pattern pruning can still leverage hardware parallelism to reduce the inference time of the model [31].
III. MOTIVATION
Object detectors designed for use in AVs require high accuracy, but consequently these models also have overheads such as a large memory footprint and high inference time [38]. To overcome these issues, we need a model that is lightweight while achieving high accuracy. Single-stage detectors such as YOLOv5, RetinaNet, Detection Transformer (DETR), and YOLOR are a good starting point for achieving real-time object detection goals, but these models still have a high memory footprint, which can decrease model performance. Table 2 summarizes how inference time grows with object detector model size on a Jetson TX2. In order to reduce the latency of operation while retaining model accuracy, a pruning technique can be employed. Among pruning techniques, pattern-based semi-structured pruning can offer better sparsity than unstructured pruning while ensuring better accuracy than structured pruning techniques. Semi-structured pruning also allows for more regular weight matrix shapes, allowing the hardware to better accelerate model inference. Simultaneously, it does not prune entire kernel weights, unlike structured pruning, thus retaining more information and hence ensuring better accuracies. Hence, pattern-based pruning techniques can, ideally, generate models with both high sparsity and high accuracy.
However, a caveat of pattern-based pruning, which limits the achievable sparsity and hence the inference acceleration benefits, is that current techniques primarily focus on 3×3 kernels. State-of-the-art models such as YOLOv5, RetinaNet, and DETR consist of 68.42%, 56.14%, and 63.46% small 1×1 kernels, respectively. To increase the sparsity of such models, pattern-based pruning techniques sometimes employ connectivity pruning on the 3×3 kernels [30]. But the 'last kernel per layer' criterion used in connectivity pruning contributes to the loss of important information, which can affect the accuracy of the model. So, we elect to avoid connectivity pruning in our pruning framework. Moreover, this technique still does not address the 1×1 kernels, which, as mentioned above, constitute a significant portion of the kernels.
To address these shortcomings, we propose a three-step approach to prune 1×1 kernels: 1) group 1×1 kernels to form 3×3 temporary weight matrices; 2) apply kernel pattern pruning on these weight matrices; and 3) decompose the temporary weight matrices back into 1×1 kernels and reassign them to their original layers. Our approach thus increases the sparsity of the model while preserving important information that contributes to the accuracy of the model.
IV. R-TOSS PRUNING FRAMEWORK
In this section, we describe our novel R-TOSS pruning framework and detail how we have implemented the previously mentioned improvements to kernel pruning on the YOLOv5 and RetinaNet object detectors. A straightforward approach to pruning, while retaining much of the original performance of the model, is to adopt an iterative pruning approach. But this is a naïve approach, as iterative pruning can quickly become unwieldy in terms of computational cost and time requirements as model sizes increase. As mentioned in Section III, the model sizes of modern object detectors are increasing, but for many application spaces that employ them, such as AVs, their accuracy cannot be compromised. Our R-TOSS framework (Fig. 2) adopts an iterative pruning scheme with several optimizations for reducing computational cost and time overheads. We start by using a depth first search (DFS) algorithm to find the parent-child layer couplings within the model. The parent-child graphs thus obtained are used to reduce the computation required for pruning: the pruning performed at a parent layer is reflected in its child layers within the graph. We follow up DFS by identifying the 3×3 and 1×1 kernels within the sub-graphs and applying kernel-size-specific pruning to them. These algorithms are discussed in detail in the following subsections.
A. DFS algorithm
Algorithm 1 shows the pseudocode for the DFS algorithm. Using the pretrained model as input, we compute the computational graph (G) using the gradients obtained from backpropagation. An empty list (group_list) is initialized (line 2) to store the parent-child layer groups. We then traverse the model layers (l) and apply DFS on the computational graph G to identify the parent of each layer. If a layer does not have any parent layer, then we assign that layer as its own parent layer (lp) (lines 7-9), and this becomes a new group. If a layer is identified as a child layer (lc) of any layer in group_list (line 5), it is added to the group of its parent layer (lp) (lines 5-6). Each parent layer (lp) can have multiple child layers (lc), but each child layer can only have one parent layer (lp). This process continues until all the layers are assigned to a group. Since the layers in each group have coupled channels, they share kernel properties, and hence they can share the same kernel patterns.
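Algorithm 1's grouping step can be sketched as follows. Here the computational graph is a plain parent-to-children adjacency dictionary assumed to be built beforehand (in R-TOSS it comes from backpropagation gradients), and the layer names are hypothetical.

```python
def build_groups(graph, layers):
    """Group layers so each group has one parent and its DFS-reachable children.

    graph: dict mapping a layer to the layers it feeds (its child layers).
    layers: iterable of all layer names in the model.
    """
    group_of = {}   # child layer -> its (unique) parent layer
    groups = {}     # parent layer -> list of group members
    for l in layers:
        if l in group_of:       # already claimed as a child of some parent
            continue
        group_of[l] = l         # a layer with no parent becomes its own parent
        groups[l] = [l]
        stack = list(graph.get(l, []))
        while stack:            # depth-first walk over descendant layers
            c = stack.pop()
            if c not in group_of:
                group_of[c] = l
                groups[l].append(c)
                stack.extend(graph.get(c, []))
    return groups

# Toy graph: conv1 feeds conv2 and conv3; conv4 is independent.
g = {"conv1": ["conv2", "conv3"], "conv2": [], "conv3": [], "conv4": []}
print(build_groups(g, ["conv1", "conv2", "conv3", "conv4"]))
```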
B. Selecting kernel patterns
We generate pattern masks in all possible combinations via standard combinatorics, using the following equation:
C(n, k) = n! / (k! (n − k)!)   (1)
where n is the size of the matrix and k is the size of the pattern mask. We then narrow down the number of kernel patterns used, via the following two criteria: 1) we drop all patterns without adjacent non-zero weights, in order to keep the semi-structured nature of the kernel patterns; and 2) we select the most-used kernel patterns by calculating the L2 norm of the kernel using random initializations in the range [-1, 1]. The value of k can range from 1 to 8, which generates 8 different types of pattern groups. To increase the sparsity level of the model, the number of non-zero weights in a pattern should be lower. Prior work [20], [30] on kernel pattern pruning has used 4-entry patterns, which keep 4 non-zero weights in a kernel, but this leads to models with relatively low sparsity, and to overcome this issue the authors of those works have utilized connectivity pruning. Due to the drawbacks of connectivity pruning discussed in Section II, we instead propose 3-entry pattern (3EP) and 2-entry pattern (2EP) kernel patterns, which keep 3 and 2 non-zero weights respectively, in our R-TOSS framework.
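The mask enumeration and the first filtering criterion can be sketched with standard combinatorics, as below; the 4-neighbourhood connectivity test is one plausible reading of the 'adjacent non-zero weights' criterion, and the L2-norm ranking over random initializations is omitted for brevity.

```python
from itertools import combinations
from math import comb

k = 3  # number of non-zero weights kept (3EP); k = 2 gives the 2EP family
print(comb(9, k))  # all C(9, k) candidate masks before filtering, per Eq. (1)

def adjacent(i, j):
    # 4-neighbourhood adjacency of positions i, j inside the 3x3 grid.
    (r1, c1), (r2, c2) = divmod(i, 3), divmod(j, 3)
    return abs(r1 - r2) + abs(c1 - c2) == 1

def connected(mask):
    # Keep masks whose non-zero positions form one adjacent cluster.
    mask = list(mask)
    seen, stack = {mask[0]}, [mask[0]]
    while stack:
        p = stack.pop()
        for q in mask:
            if q not in seen and adjacent(p, q):
                seen.add(q)
                stack.append(q)
    return len(seen) == len(mask)

candidates = [m for m in combinations(range(9), k) if connected(m)]
print(len(candidates))  # surviving semi-structured patterns
```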
C. 3×3 kernel pruning
Algorithm 2 shows the pseudocode of the 3×3 kernel pattern pruning using the proposed kernel patterns, examples of which are illustrated in Fig. 3. We start by using the 3×3 parent kernel weights (Kw) from Algorithm 1 as input and initializing a variable (shape) to store the shape of the kernel weights (line 1). We also create a pattern dictionary (kernel_patterns_dict) consisting of the 3EP (Fig. 3(a)) and 2EP (Fig. 3(b)) patterns (line 3). We then traverse the 3×3 kernels and store the weight matrix of the current 3×3 kernel in the layer as temp_kernel (line 5). We then initialize an empty list (L2_dict) that stores the L2 norm of temp_kernel after applying each kernel pattern from the pattern dictionary. We iterate through the kernel patterns in kernel_patterns_dict and calculate the L2 norm of the kernel after applying each pattern; this L2 norm is stored in L2_dict along with the key of the current pattern from kernel_patterns_dict (lines 7-10). We then find the best kernel pattern for temp_kernel using the L2 norm values in L2_dict and store the index of that kernel pattern in the bestfit variable (line 11). The pattern indexed by bestfit is then used as the kernel pattern for the kernel and applied to its original weight matrix Kw (lines 12-14). We iterate through all the kernels in the parent layer and use this as the kernel mask for the rest of the 3×3 kernels in the parent layer group (lp) from Algorithm 1. Once suitable patterns for the parent kernels are found, those patterns are also applied to the corresponding children by utilizing the convolution mapping. We also apply this pattern matching approach to the 1×1 kernels by performing a 1×1 to 3×3 kernel transformation (see Section IV.D). Since we apply the same kernel mask to all the kernels in a particular group, we reduce the time taken by the framework to prune the entire model. From experiments, we reduced the total number of patterns required to 21. Since we have only 21 pre-defined kernel patterns at inference, kernels with similar patterns are grouped together, which can reduce the overall computational cost and speed up inference.
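A condensed sketch of this pattern-selection loop follows; it uses NumPy and a two-pattern dictionary for brevity, whereas Algorithm 2 searches the full 3EP/2EP dictionary of 21 patterns.

```python
import numpy as np

# Two illustrative masks (1 = keep weight); R-TOSS uses its full 3EP/2EP set.
kernel_patterns_dict = {
    0: np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]], dtype=float),  # a 3EP mask
    1: np.array([[0, 0, 0], [1, 1, 0], [0, 0, 0]], dtype=float),  # a 2EP mask
}

def prune_3x3(temp_kernel):
    # Score each candidate mask by the L2 norm of the surviving weights and
    # apply the best-fitting one to the kernel (lines 7-14 of Algorithm 2).
    l2_dict = {key: np.linalg.norm(temp_kernel * p)
               for key, p in kernel_patterns_dict.items()}
    bestfit = max(l2_dict, key=l2_dict.get)
    return temp_kernel * kernel_patterns_dict[bestfit], bestfit

k = np.random.randn(3, 3)
pruned, pattern_id = prune_3x3(k)
print(pattern_id, pruned)
```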
D. 1×1 kernel transformation
By performing the 1×1 to 3×3 transformation, we remove connectivity pruning from kernel pruning. This helps maintain the accuracy of the model and mitigates the losses that arise from connectivity pruning. 1×1 kernel pruning can also speed up inference by grouping similar kernel patterns together. Algorithm 3 shows the pseudocode for performing 1×1 kernel pruning. We start by using the 1×1 kernel weights Kw from the parent layer from Algorithm 1 (group_list) as input. We then initialize a list FL that is used to store the flattened 1×1 kernel weights from Kw (lines 1-2). Subsequently, a temp_array for storing the temporary weight matrices is initialized. We iterate through the flattened array FL and group every 9 weights in the list into a 3×3 temporary weight matrix that is stored in temp_array (lines 5-11). This process continues until we reach the end of the list; if fewer than 9 weights remain, the left-over weights are considered zero weights and pruned (line 13). We then use Algorithm 2 to perform 3×3 kernel pruning on the temporary 3×3 weight matrices from temp_array (line 14). The output matrices from Algorithm 2 are stored back into temp_array, which is transformed back into 1×1 kernels that are appended back into the original 1×1 kernel weights (lines 15-16).
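A compact sketch of this 1×1 handling follows; the best_pattern_prune helper is a crude stand-in for Algorithm 2 (it keeps the two largest-magnitude weights rather than searching the real 2EP mask set), and the zeroing of left-over weights mirrors line 13.

```python
import numpy as np

def best_pattern_prune(temp):
    # Stand-in for Algorithm 2: keep the 2 largest-magnitude weights, a crude
    # proxy for choosing a best-fitting 2EP mask by L2 norm (adjacency of the
    # kept weights is ignored here for brevity).
    flat = temp.ravel().copy()
    keep = np.argsort(np.abs(flat))[-2:]
    mask = np.zeros_like(flat)
    mask[keep] = 1.0
    return (flat * mask).reshape(3, 3)

def prune_1x1(weights_1x1):
    # Flatten the 1x1 kernel weights, pack every 9 of them into a temporary
    # 3x3 matrix, pattern-prune it, then scatter the survivors back.
    flat = np.asarray(weights_1x1, dtype=float).ravel()
    out = np.zeros_like(flat)
    n_full = (len(flat) // 9) * 9   # weights filling complete 3x3 groups
    for i in range(0, n_full, 9):
        temp = flat[i:i + 9].reshape(3, 3)
        out[i:i + 9] = best_pattern_prune(temp).ravel()
    # Left-over weights (fewer than 9) are treated as zero weights and pruned.
    return out.reshape(np.shape(weights_1x1))

w = np.random.randn(20)             # twenty 1x1 kernel weights, say
print(prune_1x1(w))
```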
V. EXPERIMENTS AND RESULTS
In this section, we evaluate our proposed R-TOSS framework on the Nvidia RTX 2080Ti and Jetson TX2 platforms in terms of the sparsity, mAP, inference time, and energy usage of the models pruned using our framework, and compare against state-of-the-art pruning techniques.
A. Experimental setup
Our framework is implemented on YOLOv5s, a smaller variant of the well-known YOLOv5 with 25 layers and 7.02 million parameters [8], and on RetinaNet, which consists of 186 layers and 36.49 million parameters [9]. We implemented the object detectors with R-TOSS in Python and PyTorch and trained them on an NVIDIA RTX 2080Ti GPU [36]. The trained models are then evaluated using the RTX 2080Ti GPU and also deployed on a Jetson TX2 [37] embedded AI computing device. We use the KITTI automotive dataset [39], with an input frame size of 640×640 and a 60:40 split for training and inference, respectively. The KITTI dataset is a widely used dataset comprised of traffic scenes, making it ideal for AV perception model training. We measure inference times across models in milliseconds (ms) and report mAP with an IoU threshold of 0.5 (AP@[.5:.95]).
B. Sensitivity analysis on R-TOSS pruning framework
We performed a sensitivity analysis on our R-TOSS framework to determine the impact of considering different sizes of kernel patterns. Table 3 shows the results of the study, performed on the RTX 2080Ti. We explored 4-entry patterns (4EP) and 5-entry patterns (5EP), with 4 and 5 nonzero weights in a kernel, respectively, along with our 3EP and 2EP patterns discussed earlier.
From the results it can be seen that while the 2EP pattern performs better in terms of induced sparsity, inference time, and energy usage on YOLOv5s, it has a lower mAP than the 3EP pattern. We can also observe that 2EP performs better in terms of induced sparsity, mAP, inference time, and energy usage on the RetinaNet model. The performance improvement in terms of inference time and energy usage is due to the higher achieved sparsity of the models. The results indicate that our proposed 3EP and 2EP kernel patterns can provide faster and also more accurate results than the 4EP and 5EP variants of R-TOSS.
In the next subsection, we compare the performance of 3EP and 2EP variants of R-TOSS with other state-of-the-art pruning frameworks.
Fig. 4: Comparison of sparsity achieved using different frameworks: (a) sparsity ratio (YOLOv5s); (b) sparsity ratio (RetinaNet).
C. Comparison results with other pruning frameworks
We compared the R-TOSS-3EP and R-TOSS-2EP variants of our proposed framework with: a Base Model (BM), which does not use any pruning; PATDNN (PD) [30], a pattern-based pruning technique that uses a 4-entry pattern on 3×3 kernels along with connectivity pruning to increase sparsity; Neural Magic SparseML (NMS) [14], an unstructured weight pruning approach that uses the magnitude of the weights in a layer, with weights below a threshold being pruned; Network Slimming (NS) [23], which uses channel pruning, where a channel is pruned based on a per-channel scaling factor in a layer; Pruning Filters (PF) [20], which performs filter-granularity weighted pruning, where the total sum of filter weights is calculated and filters below a corresponding threshold are pruned; and Neural Pruning (NP) [21], which combines filter pruning with unstructured weight pruning, using the L1 norm for weight pruning and L2 regularization for filter pruning. Fig. 4 shows the comparison of the sparsity ratio with the pruning frameworks from prior work, with results normalized to the baseline model BM. It can be observed that our proposed R-TOSS-2EP framework achieves very high sparsity across both object detector models. We were able to achieve 2.9× and 4.4× compression on the YOLOv5s model with R-TOSS-3EP and R-TOSS-2EP, respectively. Similarly, 2.4× and 2.89× improvements in compression ratio were achieved for RetinaNet with R-TOSS-3EP and R-TOSS-2EP, respectively. Fig. 5 shows the mAP comparison. One can observe that R-TOSS-3EP and R-TOSS-2EP were able to achieve mAPs of 79.45% and 82.9% on RetinaNet, which is 8.06% and 10.98% better than the best performing prior framework (NMS). For YOLOv5s, the R-TOSS-3EP variant was outperformed slightly by the PD framework. Our inference time results in Fig. 6 show that on the RTX 2080Ti, R-TOSS-3EP and R-TOSS-2EP were able to achieve 1.86× and 1.97× speedups in execution time for YOLOv5s and 1.87× and 2.1× speedups on RetinaNet compared to BM. We also outperform the best performing prior framework (PD) by 8% and 13.3% for YOLOv5s and 43.3% and 49.6% for RetinaNet with R-TOSS-2EP and R-TOSS-3EP, respectively. Similarly, on the Jetson TX2, R-TOSS-3EP and R-TOSS-2EP were able to achieve 2.12× and 2.15× speedups in inference time on the YOLOv5s model and 1.56× and 1.87× speedups on RetinaNet compared to BM. R-TOSS-3EP and R-TOSS-2EP also outperformed PD, with 2.6% and 4.27% faster execution time on YOLOv5s and 5.94% and 21.62% on RetinaNet. The models pruned using our framework also perform better in terms of energy consumption. Fig. 7 shows the comparison of the reduction in energy usage among the various frameworks on both YOLOv5s and RetinaNet. For YOLOv5s, R-TOSS-2EP and R-TOSS-3EP were able to achieve 45.5% and 48.23% energy reduction over BM and 6.5% and 11.2% energy reduction over PD on the RTX 2080Ti; on the Jetson TX2 they achieved 54.90% and 57.01% reduction over BM and 1.84% and 6.43% reduction over PD. We also observed similar trends on RetinaNet, with 48% and 55.75% reduction in energy usage over BM and 42.46% and 50.97% reduction over PD on the RTX 2080Ti, as well as 56.31% and 70.12% reduction over BM and 18.26% and 44.10% reduction in energy usage over PD on the Jetson TX2. From the results it can be observed that R-TOSS-2EP especially retains the ability to detect tiny objects (the car in this example), along with better confidence scores than NP and PD.
As AVs rely on fast and accurate inference to make time-critical driving decisions, R-TOSS can help achieve both speed and accuracy while keeping energy usage lower than that of the other state-of-the-art pruning techniques we have compared against.
VI. CONCLUSIONS
In this paper we proposed a new pruning framework (R-TOSS) that is able to outperform several state-of-the-art pruning frameworks in terms of compression ratio and inference time. We were also able to increase the mAP of the object detectors compared to that of the baseline models. Overall, our framework achieves significant compression ratios while improving mAP performance on two state-of-the-art object detection models, YOLOv5s and RetinaNet. The proposed framework achieves these results without any compiler optimization or additional hardware requirements. Experimental results on the Jetson TX2 show that our pruning framework has a model compression rate of 4.4× on YOLOv5s and 2.89× on RetinaNet, while outperforming the original models as well as several state-of-the-art pruning frameworks in terms of accuracy and inference time.
Fig. 1: Illustration of different pruning methods.
Fig. 2: An overview of the proposed R-TOSS pruning framework.
Fig. 5: Comparison of mAP achieved using different frameworks.
Fig. 6: Inference time in models after using the pruning frameworks.
Fig. 7: Reduction in energy in models after using the pruning frameworks: (a) energy usage (YOLOv5s); (b) energy usage (RetinaNet).
Fig. 8: Comparison of inference output with other pruning techniques on the KITTI automotive dataset using RetinaNet. Fig. 8 illustrates the performance of different frameworks on a test case from the KITTI dataset.
TABLE 1: Metrics comparison of two-stage vs single-stage detectors

Name | Type | mAP | Inference rate (fps)
R-CNN [5] | two-stage | 42% | 0.02
Fast R-CNN [6] | two-stage | 19.7% | 0.5
Faster R-CNN [7] | two-stage | 78.9% | 7
RetinaNet [9] | single-stage | 61.1% | 90
YOLOv4 [26] | single-stage | 65.7% | 62
YOLOv5 [8] | single-stage | 56.4% | 140
TABLE 2: Comparison of model sizes vs. execution time

Models | Number of parameters (Millions) | Execution time (sec)
YOLOv5 [8] | 7.02 | 0.7415
YOLOX [11] | 8.97 | 1.23
RetinaNet [9] | 36.49 | 6.8
YOLOv7 [34] | 36.90 | 6.5
YOLOR [10] | 37.26 | 6.89
DETR [32] | 41.52 | 7.6
TABLE 3: Sensitivity analysis of the R-TOSS framework in terms of induced sparsity, mAP, inference time, and energy usage for YOLOv5s and RetinaNet

Entry pattern variant | YOLOv5s reduction ratio | YOLOv5s mAP | YOLOv5s inference time (ms) | YOLOv5s energy usage (J) | RetinaNet reduction ratio | RetinaNet mAP | RetinaNet inference time (ms) | RetinaNet energy usage (J)
R-TOSS (5EP) | 1.79× | 72.6 | 11.09 | 0.97 | 1.45× | 66.09 | 157.24 | 14.27
R-TOSS (4EP) | 2.24× | 70.45 | 10.98 | 0.91 | 1.6× | 75.8 | 150.58 | 13.62
R-TOSS (3EP) | 2.9× | 78.58 | 6.9 | 0.478 | 2.4× | 79.45 | 72.98 | 6.45
R-TOSS (2EP) | 4.4× | 76.42 | 6.5 | 0.454 | 2.89× | 82.9 | 64.83 | 5.50
ACKNOWLEDGEMENTS
This research is supported by NSF grant CNS-2132385.
REFERENCES
[1] Automated Driving Systems, NHTSA, https://www.nhtsa.gov/ [last accessed: 11/04/2022].
[2] V. Kukkala, J. Tunnell, S. Pasricha, "Advanced Driver Assistance Systems: A Path Toward Autonomous Vehicles," IEEE Consumer Electronics, vol. 7, iss. 5, Sept 2018.
[3] M. Sampurna, et al., "Lyft 3D object detection for autonomous vehicles," AIFGR, Elsevier, 2021, pp. 119-136.
[4] V. K. Kukkala, S. V. Thiruloga, S. Pasricha, "Roadmap for Cybersecurity in Autonomous Vehicles," IEEE Consumer Electronics, vol. 11, iss. 6, pp. 13-23, Nov 2022.
[5] R. Girshick, et al., "Rich feature hierarchies for accurate object detection and semantic segmentation," IEEE CVPR, 2014.
[6] R. Girshick, "Fast R-CNN," IEEE ICCV, 2015.
[7] R. Gavrilescu, et al., "Faster R-CNN: an Approach to Real-Time Object Detection," IEEE EPE, 2018.
[8] YOLOv5, https://docs.ultralytics.com/ [last accessed: 11/04/2022].
[9] T. Y. Lin, et al., "Focal Loss for Dense Object Detection," arXiv:1708.02002, 2018.
[10] C. Y. Wang, et al., "You Only Learn One Representation: Unified Network for Multiple Tasks," arXiv:2105.04206, 2021.
[11] Z. Ge, et al., "YOLOX: Exceeding YOLO Series in 2021," arXiv:2107.08430, 2021.
[12] L. Liu, et al., "Deep learning for generic object detection: A survey," IJCV, 2020.
[13] T. Gale, et al., "The state of sparsity in deep neural networks," arXiv:1902.09574, 2019.
[14] M. Kurtz, et al., "Inducing and Exploiting Activation Sparsity for Fast Inference on Deep Neural Networks," ICML, 2020.
[15] P. Molchanov, et al., "Importance estimation for neural network pruning," IEEE CVPR, 2019.
[16] N. Lee, "Snip: Single-shot network pruning based on connection sensitivity," arXiv:1810.02340, 2018.
[17] H. Tanaka, et al., "Pruning neural networks without any data by iteratively conserving synaptic flow," NeurIPS, 2020.
[18] C. Wang, et al., "Picking Winning Tickets Before Training by Preserving Gradient Flow," ICLR, 2020.
[19] V. Crescitelli, et al., "Edge devices object detection by filter pruning," IEEE ETFA, 2021.
[20] H. Li, et al., "Pruning filters for efficient convnets," arXiv:1608.08710, 2016.
[21] H. Wang, et al., "Neural pruning via growing regularization," arXiv:2012.09243, 2020.
[22] X. Zihao, et al., "Localization-aware channel pruning for object detection," Neurocomputing, vol. 403, 2020.
[23] L. Zhuang, et al., "Learning efficient convolutional networks through network slimming," IEEE ICCV, 2017.
[24] J. EunJin, et al., "TensorRT-Based Framework and Optimization Methodology for Deep Learning Inference on Jetson Boards," ACM Transactions on Embedded Computing Systems, 2022.
[25] V. Kukkala, S. Pasricha, T. H. Bradley, "SEDAN: Security-Aware Design of Time-Critical Automotive Networks," IEEE Transactions on Vehicular Technology (TVT), vol. 69, no. 8, Aug 2020.
[26] B. Alexey, et al., "YOLOv4: Optimal Speed and Accuracy of Object Detection," arXiv:2004.10934, 2020.
[27] L. Tsung-Yi, et al., "Microsoft COCO: Common objects in context," ECCV, Springer, Cham, 2014.
[28] J. Pool, "Accelerating sparsity in the Nvidia Ampere architecture," https://tinyurl.com/4439phxe [last accessed: 11/4/2022].
[29] F. Sunny, M. Nikdast, S. Pasricha, "SONIC: A Sparse Neural Network Inference Accelerator with Silicon Photonics for Energy-Efficient Deep Learning," ACM ASPDAC, 2022.
[30] N. Wei, et al., "PatDNN: Achieving real-time DNN execution on mobile devices with pattern-based weight pruning," International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 2020.
[31] C. Yuxuan, et al., "YOLObile: Real-time object detection on mobile devices via compression-compilation co-design," AAAI Conference on Artificial Intelligence, 2021.
[32] C. Nicolas, et al., "End-to-end object detection with transformers," ECCV, Springer, Cham, 2020.
[33] V. K. Kukkala, S. V. Thiruloga, S. Pasricha, "LATTE: LSTM Self-Attention based Anomaly Detection in Embedded Automotive Platforms," IEEE/ACM CODES+ISSS (ESWEEK), 2021.
[34] W. Chien-Yao, et al., "YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors," arXiv:2207.02696, 2022.
[35] J. Dey, W. Taylor, S. Pasricha, "VESPA: Optimizing Heterogeneous Sensor Placement and Orientation for Autonomous Vehicles," IEEE Consumer Electronics, 10(2), Mar 2021.
[36] Nvidia RTX 20 Series, https://www.nvidia.com/en-us/geforce/20-series/ [last accessed: 11/4/2022].
[37] Jetson TX2 Module, https://elinux.org/Jetson_TX2 [last accessed: 11/4/2022].
[38] J. Dey, S. Pasricha, "Robust Perception Architecture Design for Automotive Cyber-Physical Systems," IEEE Computer Society Annual Symposium on VLSI (ISVLSI), 2022.
| [] |
[
"3-FORMS AND ALMOST COMPLEX STRUCTURES ON 6-DIMENSIONAL MANIFOLDS",
"3-FORMS AND ALMOST COMPLEX STRUCTURES ON 6-DIMENSIONAL MANIFOLDS"
] | [
"Martin Panák ",
"Jiří Vanžura "
] | [] | [] | This article deals with 3-forms on 6-dimensional manifolds, the first dimension where the classification of 3-forms is not trivial. There are three classes of multisymplectic 3-forms there. We study the class which is closely related to almost complex structures.Let V be a real vector space. Recall that a k-form ω (k ≥ 2) is called multisymplectic if the homomorphismis injective. There is a natural action of the general linear group G(V ) on Λ k V * , and also on Λ k ms V * , the subset of the multisymplectic forms. Two multisymplectic forms are called equivalent if they belong to the same orbit of the action. For any form ω ∈ Λ k V * define a subsetIf dim V = 6 and k = 3 the subset Λ 3 ms V * consists of three orbits. Let e 1 , . . . , e 6 be a basis of V and α 1 , . . . , α 6 the corresponding dual basis. Representatives of the three orbits can be expressed in the formMultisymplectic forms are called of type 1, resp. of type 2, resp. of type 3 accordning to which orbit they belong to. There is the following characterisation of the orbits:(1) ω is of type 1 if and only if ∆(2) ω is of type 2 if and only if ∆(ω) = {0}.(3) ω is of type 3 if and only if ∆(ω) is a 3-dimensional subspace.1991 Mathematics Subject Classification. 53C15, 58A10. | null | [
"https://export.arxiv.org/pdf/math/0305312v1.pdf"
] | 17,687,579 | math/0305312 | 89f62caf2b24675ca197d17918e0ed050e28314f |
3-FORMS AND ALMOST COMPLEX STRUCTURES ON 6-DIMENSIONAL MANIFOLDS
22 May 2003
Martin Panák
Jiří Vanžura
3-FORMS AND ALMOST COMPLEX STRUCTURES ON 6-DIMENSIONAL MANIFOLDS
22 May 2003
Typeset by AMS-TeX.
Key words and phrases: 3-form, almost complex structure, 6-dimensional manifold.
This article deals with 3-forms on 6-dimensional manifolds, the first dimension where the classification of 3-forms is not trivial. There are three classes of multisymplectic 3-forms there. We study the class which is closely related to almost complex structures.

Let V be a real vector space. Recall that a k-form ω (k ≥ 2) is called multisymplectic if the homomorphism V → Λ^{k−1}V*, v ↦ ι_vω, is injective. There is a natural action of the general linear group G(V) on Λ^kV*, and also on Λ^k_{ms}V*, the subset of the multisymplectic forms. Two multisymplectic forms are called equivalent if they belong to the same orbit of the action. For any form ω ∈ Λ^kV* define a subset ∆(ω) = {v ∈ V; (ι_vω) ∧ (ι_vω) = 0}. If dim V = 6 and k = 3 the subset Λ^3_{ms}V* consists of three orbits. Let e_1, ..., e_6 be a basis of V and α_1, ..., α_6 the corresponding dual basis. Representatives of the three orbits can be expressed in terms of this basis. Multisymplectic forms are called of type 1, resp. of type 2, resp. of type 3 according to which orbit they belong to. There is the following characterisation of the orbits: (1) ω is of type 1 if and only if ∆(ω) is the union of two 3-dimensional subspaces; (2) ω is of type 2 if and only if ∆(ω) = {0}; (3) ω is of type 3 if and only if ∆(ω) is a 3-dimensional subspace.

1991 Mathematics Subject Classification: 53C15, 58A10.
The forms ω 1 and ω 2 have equivalent complexifications. From this point of view the forms of type 3 are exceptional. You can find more about these forms in [V].
A multisymplectic k-form on a manifold M is a section of Λ^kT*M such that its restriction to the tangent space T_xM is multisymplectic for any x ∈ M, and is of type i at x ∈ M, i = 1, 2, 3, if the restriction to T_xM is of type i. A multisymplectic form on M can change its type, as seen on

σ = dx_1 ∧ dx_2 ∧ dx_3 + dx_1 ∧ dx_4 ∧ dx_5 + dx_2 ∧ dx_4 ∧ dx_6 + sin(x_3 + x_4) dx_3 ∧ dx_5 ∧ dx_6 + sin(x_3 + x_4) dx_4 ∧ dx_5 ∧ dx_6,

a 3-form on R^6. σ is of type 3 on the submanifold given by the equation x_3 + x_4 = kπ, k ∈ N. If x_3 + x_4 ∈ (kπ, (k+1)π), k even, then σ is of type 1, and if x_3 + x_4 ∈ (kπ, (k+1)π), k odd, then σ is of type 2. Let us point out that σ is closed and invariant under the action of the group (2πZ)^6, and we can factor σ to get a form changing its type on R^6/(2πZ)^6, which is the 6-dimensional torus; that is, σ is closed on a compact manifold.
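The classification of σ at a given point can be spot-checked numerically. The check below is an illustration we add here, not part of the original text; it uses the fact that for a 2-form A one has A ∧ A = 0 exactly when A is decomposable, i.e. when its coefficient matrix has rank at most 2, so v ∈ ∆(σ_x) iff the matrix of ι_vσ has rank ≤ 2.

```python
import numpy as np

def three_form(terms, dim=6):
    """Antisymmetric array w[i,j,k] from a list of (i, j, k, coeff), i<j<k."""
    w = np.zeros((dim, dim, dim))
    for i, j, k, c in terms:
        for (a, b, d), s in [((i, j, k), 1), ((j, k, i), 1), ((k, i, j), 1),
                             ((j, i, k), -1), ((i, k, j), -1), ((k, j, i), -1)]:
            w[a, b, d] = s * c
    return w

def interior(v, w):
    """Coefficient matrix of the 2-form (i_v w)(x, y) = w(v, x, y)."""
    return np.einsum("i,ijk->jk", v, w)

def sigma(s):
    """sigma at a point with s = sin(x3 + x4); zero-based index triples."""
    return three_form([(0, 1, 2, 1), (0, 3, 4, 1), (1, 3, 5, 1),
                       (2, 4, 5, s), (3, 4, 5, s)])

w0 = sigma(0.0)  # on the locus x3 + x4 = k*pi: type 3 expected
for v in (np.eye(6)[2], np.eye(6)[4], np.eye(6)[5]):     # e_3, e_5, e_6
    assert np.linalg.matrix_rank(interior(v, w0)) <= 2   # lie in Delta(sigma)

rng = np.random.default_rng(0)
print(np.linalg.matrix_rank(interior(rng.normal(size=6), w0)))  # 4 for generic v
```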
The goal of this paper is to study the forms of type 2. We denote ω = ω 2 .
3-forms on vector spaces
Let J be an automorphism of a 6-dimensional real vector space V satisfying J 2 = −I. Further let V C = V ⊕ iV be the complexification of V . There is the standard decomposition V C = V 1,0 ⊕ V 0,1 . Consider a non-zero form γ of type (3, 0) on V C and set γ 0 = Re γ, γ 1 = Im γ.
For any v 1 ∈ V there is v 1 + iJv 1 ∈ V 0,1 , and consequently γ(i(v 1 + iJv 1 ), v 2 , v 3 ) = 0 for any v 2 , v 3 ∈ V . This implies γ 0 (i(v 1 + iJv 1 ), v 2 , v 3 ) = 0 and γ 1 (i(v 1 + iJv 1 ), v 2 , v 3 ) = 0. Thus 0 = γ 0 (i(v 1 + iJv 1 ), v 2 , v 3 ) = γ 0 (iv 1 , v 2 , v 3 ) − γ 0 (Jv 1 , v 2 , v 3 ).
Similarly we can proceed with γ 1 and we get γ 0 (iv 1 , v 2 , v 3 ) = γ 0 (Jv 1 , v 2 , v 3 ), γ 1 (iv 1 , v 2 , v 3 ) = γ 1 (Jv 1 , v 2 , v 3 )
for any v 1 , v 2 , v 3 ∈ V . Moreover there is γ 0 (w 1 , w 2 , w 3 ) = Re(−γ(i 2 w 1 , w 2 , w 3 )) = Re(−iγ(iw 1 , w 2 , w 3 )) = Im(γ(iw 1 , w 2 , w 3 )) = γ 1 (iw 1 , w 2 , w 3 ), for any w 1 , w 2 , w 3 ∈ V C and that is γ 1 (w 1 , w 2 , w 3 ) = −γ 0 (iw 1 , w 2 , w 3 ). Finally,
γ 0 (Jv 1 , v 2 , v 3 ) = γ 0 (iv 1 , v 2 , v 3 ) = Re(γ(iv 1 , v 2 , v 3 )) = Re(iγ(v 1 , v 2 , v 3 )) = Re(γ(v 1 , iv 2 , v 3 )) = Re(γ(v 1 , Jv 2 , v 3 )) = γ 0 (v 1 , Jv 2 , v 3 ).
Along these lines we obtain
γ 0 (Jv 1 , v 2 , v 3 ) = γ 0 (v 1 , Jv 2 , v 3 ) = γ 0 (v 1 , v 2 , Jv 3 ), γ 1 (Jv 1 , v 2 , v 3 ) = γ 1 (v 1 , Jv 2 , v 3 ) = γ 1 (v 1 , v 2 , Jv 3 ),
that is both forms γ 0 and γ 1 are pure with respect to the complex structure J.
1. Lemma. The real 3-forms γ_0|V and γ_1|V (on V) are multisymplectic.
Proof. Let us assume that v 1 ∈ V is a vector such that for any vectors
v_2, v_3 ∈ V, (γ_0|V)(v_1, v_2, v_3) = 0, or equivalently γ_0(v_1, v_2, v_3) = 0. There are uniquely determined vectors w_1, w_2, w_3 ∈ V^{1,0} such that v_1 = w_1 + w̄_1, v_2 = w_2 + w̄_2, v_3 = w_3 + w̄_3. Then 0 = γ_0(v_1, v_2, v_3) = Re(γ(w_1 + w̄_1, w_2 + w̄_2, w_3 + w̄_3)) = Re(γ(w_1, w_2, w_3)) = γ_0(w_1, w_2, w_3)
(for a fixed w 1 , and arbitrary w 2 , w 3 ∈ V 1,0 ). Because iw 2 ∈ V 1,0 , we find that
γ 0 (iw 1 , w 2 , w 3 ) = γ 0 (w 1 , iw 2 , w 3 ) = 0.
Moreover γ 1 (w, w ′ , w ′′ ) = −γ 0 (iw, w ′ , w ′′ ) for any w, w ′ , w ′′ ∈ V C , and we get
γ 1 (w 1 , w 2 , w 3 ) = −γ 0 (iw 1 , w 2 , w 3 ) = 0 for arbitrary w 2 , w 3 ∈ V 1,0 . Thus γ(w 1 , w 2 , w 3 ) = γ 0 (w 1 , w 2 , w 3 ) + iγ 1 (w 1 , w 2 , w 3 ) = 0 for arbitrary w 2 , w 3 ∈ V 1,0 .
Because γ is a non-zero complex 3-form on the complex 3-dimensional vector space V 1,0 , we find that w 1 = 0, and consequently v 1 = 0. This proves that the real 3-form γ 0 |V is multisymplectic. We find that the real 3-form γ 1 |V is also multisymplectic likewise.
2. Lemma. The forms γ_0|V and γ_1|V satisfy ∆(γ_0|V) = {0} and ∆(γ_1|V) = {0}.
Proof. The complex 3-form γ is decomposable, and therefore γ ∧ γ = 0. This implies that for any w ∈ V^C, (ι_wγ) ∧ (ι_wγ) = 0. Similarly for any w ∈ V^C, (ι_wγ̄) ∧ (ι_wγ̄) = 0.
Obviously γ_0 = (1/2)(γ + γ̄). Let v ∈ V be such that (ι_vγ_0) ∧ (ι_vγ_0) = 0. Then

0 = (ι_vγ_0) ∧ (ι_vγ_0) = (1/4)(ι_vγ + ι_vγ̄) ∧ (ι_vγ + ι_vγ̄) = (1/2)(ι_vγ) ∧ (ι_vγ̄).

But ι_vγ is a form of type (2, 0) and ι_vγ̄ a form of type (0, 2). Consequently the last wedge product vanishes if and only if either ι_vγ = 0 or ι_vγ̄ = 0. By virtue of the preceding lemma this implies that v = 0.
Lemma 2 shows that the both forms γ 0 |V and γ 1 |V are of type 2. As a final result of the above considerations we get the following result.
3. Corollary. Let γ be a 3-form on V^C of type (3, 0). Then the real 3-forms (Re γ)|V and (Im γ)|V on V are multisymplectic and of type 2.
Let ω be a 3-form on V such that ∆(ω) = {0}. This means that for any v ∈ V, v ≠ 0, there is (ι_vω) ∧ (ι_vω) ≠ 0. This implies that rank ι_vω ≥ 4. On the other hand, obviously rank ι_vω ≤ 4. Consequently, for any v ≠ 0, rank ι_vω = 4. Thus the kernel K(ι_vω) of the 2-form ι_vω has dimension 2. Moreover v ∈ K(ι_vω). Now we fix a non-zero 6-form θ on V. For any v ∈ V there exists a unique vector Q(v) ∈ V such that

(ι_vω) ∧ ω = ι_{Q(v)}θ.

The mapping Q : V → V is obviously a homomorphism. If v ≠ 0 then (ι_vω) ∧ ω ≠ 0, and Q is an automorphism. It is also obvious that if v ≠ 0, then the vectors v and Q(v) are linearly independent (apply ι_v to the last equality). We evaluate ι_{Q(v)} on the last equality and we get
(ι_{Q(v)}ι_vω) ∧ ω + (ι_vω) ∧ (ι_{Q(v)}ω) = 0,
−(ι_vι_{Q(v)}ω) ∧ ω + (ι_vω) ∧ (ι_{Q(v)}ω) = 0,
−ι_v[(ι_{Q(v)}ω) ∧ ω] + 2(ι_vω) ∧ (ι_{Q(v)}ω) = 0.
Now, apply ι v to the last equality:
(ι v ω) ∧ (ι v ι Q(v) ω) = 0.
If the 1-form ι v ι Q(v) ω were not the zero one then it would exist a 1-form σ such that ι v ω = σ ∧ ι v ι Q(v) ω, and we would get
(ι v ω) ∧ (ι v ω) = σ ∧ ι v ι Q(v) ω ∧ σ ∧ ι v ι Q(v) ω = 0,
which is a contradiction. Thus we have proved the following lemma.
4. Lemma. For any v ∈ V there is ι_{Q(v)}ι_vω = 0, i.e. Q(v) ∈ K(ι_vω).
This lemma shows that if v ≠ 0, then K(ι_vω) = [v, Q(v)]. Applying ι_{Q(v)} to the equality (ι_vω) ∧ ω = ι_{Q(v)}θ and using the last lemma we obtain easily the following result.
5. Lemma. For any v ∈ V there is (ι_vω) ∧ (ι_{Q(v)}ω) = 0.
Lemma 4 shows that v ∈ K(ι Q(v) ω). Because v and Q(v) are linearly independent, we can see that
K(ι_{Q(v)}ω) = [v, Q(v)] = K(ι_vω). If v ≠ 0, then Q²(v) ∈ K(ι_{Q(v)}ω), and consequently there are a(v), b(v) ∈ R such that Q²(v) = a(v)v + b(v)Q(v). For any v ∈ V

(ι_{Q(v)}ω) ∧ ω = ι_{Q²(v)}θ.

Let us assume that v ≠ 0. Then

(ι_{Q(v)}ω) ∧ ω = a(v)ι_vθ + b(v)ι_{Q(v)}θ,

and applying ι_v we obtain b(v)ι_vι_{Q(v)}θ = 0, which shows that b(v) = 0 for any v ≠ 0. Consequently, Q²(v) = a(v)v for any v ≠ 0.

6. Lemma. Let A : V → V be an automorphism, and a : V\{0} → R a function such that A(v) = a(v)v for any v ≠ 0.
Then the function a is constant.
Proof. The condition on A means that every vector v of V is an eigenvector of A with the eigenvalue a(v). But the eigenvalues of two different vectors have to be the same otherwise their sum would not be an eigenvector.
Applying Lemma 6 to Q² we get Q² = aI. If a > 0, then V = V_+ ⊕ V_−, and Qv = √a v for v ∈ V_+, Qv = −√a v for v ∈ V_−. At least one of the subspaces V_+ and V_− is non-trivial. Let us assume for example that V_+ ≠ {0}. Then there is v ∈ V_+, v ≠ 0, with Qv = √a v, which is a contradiction because the vectors v and Qv are linearly independent. This proves that a < 0. We can now see that the automorphisms

J_+ = (1/√−a) Q and J_− = −(1/√−a) Q

satisfy J_+² = −I and J_−² = −I, i.e. they define complex structures on V and J_− = −J_+. Setting θ_+ = √−a θ, θ_− = −√−a θ we get

(ι_vω) ∧ ω = ι_{J_+v}θ_+, (ι_vω) ∧ ω = ι_{J_−v}θ_−.
In the sequel we shall denote J = J + . The same results which are valid for J + hold also for J − .
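The construction of Q and J can be traced numerically for a concrete type 2 form. In the sketch below (our own illustration; the basis and the sign of θ are arbitrary choices that drop out of J = Q/√(−a)) we take ω = Re((α_1 + iα_4) ∧ (α_2 + iα_5) ∧ (α_3 + iα_6)), which is of type 2 by Corollary 3, and θ the standard volume form.

```python
import itertools
import numpy as np

def parity(p):
    """Sign of the permutation p (a tuple with p[i] the image of i)."""
    sign, seen = 1, [False] * len(p)
    for i in range(len(p)):
        if not seen[i]:
            j, cycle_len = i, 0
            while not seen[j]:
                seen[j] = True
                j = p[j]
                cycle_len += 1
            if cycle_len % 2 == 0:
                sign = -sign
    return sign

def three_form(terms, dim=6):
    """Antisymmetric array w[i,j,k] from (i, j, k, coeff) with i < j < k."""
    w = np.zeros((dim, dim, dim))
    for i, j, k, c in terms:
        idx = (i, j, k)
        for p in itertools.permutations(range(3)):
            w[idx[p[0]], idx[p[1]], idx[p[2]]] = parity(p) * c
    return w

def interior(v, w):
    """Coefficient matrix of the 2-form (i_v w)(x, y) = w(v, x, y)."""
    return np.einsum("i,ijk->jk", v, w)

def wedge_2_3(A, w):
    """5-form A ^ w; F[m] is its value on (e_0, ..., e_5) with e_m omitted."""
    F = np.zeros(6)
    for m in range(6):
        idx = [i for i in range(6) if i != m]
        F[m] = sum(parity(p)
                   * A[idx[p[0]], idx[p[1]]]
                   * w[idx[p[2]], idx[p[3]], idx[p[4]]]
                   for p in itertools.permutations(range(5))) / (2 * 6)
    return F

# omega = a123 + a246 - a345 - a156 (zero-based index triples):
omega = three_form([(0, 1, 2, 1.0), (1, 3, 5, 1.0), (2, 3, 4, -1.0), (0, 4, 5, -1.0)])

# Q is defined by (i_v omega) ^ omega = i_{Q(v)} theta, theta = e^0^...^e^5;
# for a 5-form F one has (i_w theta)[m] = (-1)^m w[m].
Q = np.zeros((6, 6))
for i in range(6):
    F = wedge_2_3(interior(np.eye(6)[i], omega), omega)
    Q[:, i] = [(-1) ** m * F[m] for m in range(6)]

a = (Q @ Q)[0, 0]
assert np.allclose(Q @ Q, a * np.eye(6)) and a < 0   # Q^2 = aI with a < 0
J = Q / np.sqrt(-a)
assert np.allclose(J @ J, -np.eye(6))                # a complex structure
# purity: omega(J v1, v2, v3) = omega(v1, J v2, v3)
assert np.allclose(np.einsum("ia,ibc->abc", J, omega),
                   np.einsum("jb,ajc->abc", J, omega))
print("a =", a)  # -4 with these conventions
```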
7. Lemma. There exists a unique (up to the sign) complex structure J on V such that the form ω satisfies the relation
ω(Jv 1 , v 2 , v 3 ) = ω(v 1 , Jv 2 , v 3 ) = ω(v 1 , v 2 , Jv 3 ) for any v 1 , v 2 , v 3 ∈ V.
We recall that such a form ω is usually called pure with respect to J.
Proof.
We shall prove first that the complex structure J defined above satisfies the relation. By virtue of Lemma 4 for any v, v ′ ∈ V ω(v, Jv, v ′ ) = 0. Therefore we get
0 = ω(v 1 + v 2 , J(v 1 + v 2 ), v 3 ) = ω(v 1 , Jv 2 , v 3 ) + ω(v 2 , Jv 1 , v 3 ) = −ω(Jv 1 , v 2 , v 3 ) + ω(v 1 , Jv 2 , v 3 ), which gives ω(Jv 1 , v 2 , v 3 ) = ω(v 1 , Jv 2 , v 3 ).
Obviously, the opposite complex structure −J satisfies the same relation. We prove that there is no other complex structure with the same property. Let J̃ be a complex structure on V satisfying the above relation. We set A = J̃J^{−1}. Then we get

ω(v_1, Av_2, Av_3) = ω(v_1, J̃Jv_2, J̃Jv_3) = ω(v_1, Jv_2, J̃²Jv_3) = −ω(v_1, Jv_2, Jv_3) = −ω(v_1, v_2, J²v_3) = ω(v_1, v_2, v_3).
Any automorphism A satisfying this identity is ±I. Really, the identity means that A is an automorphism of the 2-form
ι v ω. Consequently, A preserves the kernel K(ι v ω) = [v, Jv].
On the other hand it is obvious that any subspace of the form [v, Jv] is the kernel of ι_vω. Considering V as a complex vector space with the complex structure J, we can say that every 1-dimensional complex subspace is the kernel of the 2-form ι_vω for some v ∈ V, v ≠ 0, and consequently is invariant under the automorphism A. Similarly as in Lemma 6 we conclude that A = λI, λ ∈ C.
If we write λ = λ 0 + iλ 1 , then A = λ 0 I + λ 1 J and
ω(v_1, v_2, v_3) = ω(v_1, Av_2, Av_3) = ω(v_1, λ_0v_2 + λ_1Jv_2, λ_0v_3 + λ_1Jv_3)
= λ_0²ω(v_1, v_2, v_3) + λ_0λ_1ω(v_1, v_2, Jv_3) + λ_0λ_1ω(v_1, Jv_2, v_3) + λ_1²ω(v_1, Jv_2, Jv_3)
= (λ_0² − λ_1²)ω(v_1, v_2, v_3) + 2λ_0λ_1ω(v_1, v_2, Jv_3),

since ω(v_1, Jv_2, v_3) = ω(v_1, v_2, Jv_3) and ω(v_1, Jv_2, Jv_3) = ω(v_1, v_2, J²v_3) = −ω(v_1, v_2, v_3). Hence

(λ_0² − λ_1² − 1)ω(v_1, v_2, v_3) + 2λ_0λ_1ω(v_1, v_2, Jv_3) = 0.
We shall use this last equation together with another one obtained by writing Jv 3 instead of v 3 . In this way we get the system
(λ 2 0 − λ 2 1 − 1)ω(v 1 , v 2 , v 3 ) + 2λ 0 λ 1 ω(v 1 , v 2 , Jv 3 ) = 0 −2λ 0 λ 1 ω(v 1 , v 2 , v 3 ) + (λ 2 0 − λ 2 1 − 1)ω(v 1 , v 2 , Jv 3 ) = 0.
Because it has a non-trivial solution there must be
| λ_0² − λ_1² − 1    2λ_0λ_1 |
| −2λ_0λ_1    λ_0² − λ_1² − 1 | = (λ_0² − λ_1² − 1)² + 4λ_0²λ_1² = 0,

which forces λ_0λ_1 = 0 and λ_0² − λ_1² = 1.
It is easy to verify that the solution of the last equation is λ 0 = ±1 and λ 1 = 0. This finishes the proof.
We shall now consider the vector space V together with a complex structure J, and a 3-form ω on V which is pure with respect to this complex structure. Firstly we define a real 3-form γ 0 on V C . We set
γ 0 (v 1 , v 2 , v 3 ) = ω(v 1 , v 2 , v 3 ), γ 0 (iv 1 , v 2 , v 3 ) = ω(Jv 1 , v 2 , v 3 ), γ 0 (iv 1 , iv 2 , v 3 ) = ω(Jv 1 , Jv 2 , v 3 ), γ 0 (iv 1 , iv 2 , iv 3 ) = ω(Jv 1 , Jv 2 , Jv 3 ), for v 1 , v 2 , v 3 ∈ V .
Then γ 0 extends uniquely to a real 3-form on V C . We can find easily that
γ 0 (iw 1 , w 2 , w 3 ) = γ 0 (w 1 , iw 2 , w 3 ) = γ 0 (w 1 , w 2 , iw 3 )
for any w 1 , w 2 , w 3 ∈ V C . Further, we set
γ 1 (w 1 , w 2 , w 3 ) = −γ 0 (iw 1 , w 2 , w 3 ) for w 1 , w 2 , w 3 ∈ V C .
It is obvious that γ 1 is a real 3-form satisfying
γ 1 (iw 1 , w 2 , w 3 ) = γ 1 (w 1 , iw 2 , w 3 ) = γ 1 (w 1 , w 2 , iw 3 )
for any w 1 , w 2 , w 3 ∈ V C . Now we define γ(w 1 , w 2 , w 3 ) = γ 0 (w 1 , w 2 , w 3 ) + iγ 1 (w 1 , w 2 , w 3 ) for w 1 , w 2 , w 3 ∈ V C .
It is obvious that γ is skew symmetric and 3-linear over R and has complex values. Moreover
γ(iw 1 , w 2 , w 3 ) = γ 0 (iw 1 , w 2 , w 3 ) + iγ 1 (iw 1 , w 2 , w 3 ) = −γ 1 (w 1 , w 2 , w 3 ) − iγ 0 (i 2 w 1 , w 2 , w 3 ) = −γ 1 (w 1 , w 2 , w 3 ) + iγ 0 (w 1 , w 2 , w 3 ) = = i[γ 0 (w 1 , w 2 , w 3 ) + iγ 1 (w 1 , w 2 , w 3 )] = iγ(w 1 , w 2 , w 3 ),
which proves that γ is a complex 3-form on V C . Now we prove that γ is a form of type (3, 0). Obviously, it suffices to prove that for
v 1 + iJv 1 ∈ V 0,1 and v 2 , v 3 ∈ V there is γ(v 1 + iJv 1 , v 2 , v 3 ) = 0. Really, γ(v 1 + iJv 1 , v 2 , v 3 ) = γ(v 1 , v 2 , v 3 ) + iγ(Jv 1 , v 2 , v 3 ) =γ 0 (v 1 , v 2 , v 3 ) + iγ 1 (v 1 , v 2 , v 3 ) + iγ 0 (Jv 1 , v 2 , v 3 ) − γ 1 (Jv 1 , v 2 , v 3 ) =γ 0 (v 1 , v 2 , v 3 ) − iγ 0 (iv 1 , v 2 , v 3 ) + iγ 0 (Jv 1 , v 2 , v 3 ) + γ 0 (iJv 1 , v 2 , v 3 )]. Now γ 0 (iJv 1 , v 2 , v 3 )] = ω(J 2 v 1 , v 2 , v 3 ) = −ω(v 1 , v 2 , v 3 ) = −γ 0 (v 1 , v 2 , v 3 ) and the real part of the last expression is zero, further γ 0 (Jv 1 , v 2 , v 3 ) = ω(Jv 1 , v 2 , v 3 ) = γ 0 (iv 1 , v 2 , v 3
) and the complex part of the expression is zero as well. Now we get easily the following proposition.
8. Proposition. Let ω be a real 3-form on V satisfying ∆(ω) = {0}, and let J be a complex structure on V (one of the two) such that
ω(Jv 1 , v 2 , v 3 ) = ω(v 1 , Jv 2 , v 3 ) = ω(v 1 , v 2 , Jv 3 ).
Then there exists on V C a unique complex 3-form γ of type (3, 0) such that ω = (Re γ)|V.
Remark. The complex structure J on V can be introduced also by means of Hitchin's invariant λ, as in [H]. Forms of type 2 form an open subset U in Λ^3V*. Hitchin has shown that this manifold also carries an almost complex structure, which is integrable. Hitchin uses the following way to introduce an almost complex structure on U. U ⊂ Λ^3V* can be seen as a symplectic manifold (let θ be a fixed element in Λ^6V*; one defines the symplectic form Θ on Λ^3V* by the equation ω_1 ∧ ω_2 = Θ(ω_1, ω_2)θ). Then the derivative of the Hamiltonian vector field corresponding to the function −λ(ω) on U gives an integrable almost complex structure on U. So much for Hitchin's construction.
There is another way of introducing the (Hitchin's) almost complex structure on U. Given a 3-form ω ∈ U we choose the complex structure J_ω on V (one of the two), whose existence is guaranteed by Lemma 7. Then we define endomorphisms A_{J_ω} and D_{J_ω} of Λ^kV* by
(A_{J_ω}Ω)(v_1, ..., v_k) = Ω(J_ωv_1, ..., J_ωv_k),
(D_{J_ω}Ω)(v_1, ..., v_k) = Σ_{i=1}^{k} Ω(v_1, ..., v_{i−1}, J_ωv_i, v_{i+1}, ..., v_k).
Then A_{J_ω} is an automorphism of ΛV* and D_{J_ω} is a derivation of ΛV*. If k = 3 then the automorphism −(1/2)(A_{J_ω} + D_{J_ω}) of Λ^3V* (= T_ωU) gives a complex structure on U and coincides with Hitchin's one.
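In coordinates, with J_ω given as a matrix and a 3-form stored as an antisymmetric array, the two endomorphisms read as follows (a small illustration we add here; index conventions are ours):

```python
import numpy as np

def A_J(J, O):
    """(A_J O)(v1, v2, v3) = O(J v1, J v2, J v3)."""
    return np.einsum("ia,jb,kc,ijk->abc", J, J, J, O)

def D_J(J, O):
    """(D_J O)(v1, v2, v3) = O(J v1, v2, v3) + O(v1, J v2, v3) + O(v1, v2, J v3)."""
    return (np.einsum("ia,ibc->abc", J, O)
            + np.einsum("jb,ajc->abc", J, O)
            + np.einsum("kc,abk->abc", J, O))

def hitchin_structure(J, O):
    """The endomorphism -(1/2)(A_J + D_J) applied to the 3-form O."""
    return -0.5 * (A_J(J, O) + D_J(J, O))
```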
3-forms on manifolds
We use facts from the previous section to obtain some global results on 3-forms on 6-dimensional manifolds. We shall denote by X, Y , Z the real vector fields on a (real) manifold M and by V , W the complex vector fields on M .
For a form ω of type 2 on M and an orientable open submanifold U ⊂ M one obtains two almost complex structures J_+ and J_− on U (see Lemma 9 at the end of this section) such that
(i) J_+ + J_− = 0,
(ii) ω(J_+X_1, X_2, X_3) = ω(X_1, J_+X_2, X_3) = ω(X_1, X_2, J_+X_3),
(iii) ω(J_−X_1, X_2, X_3) = ω(X_1, J_−X_2, X_3) = ω(X_1, X_2, J_−X_3),
for any vector fields X_1, X_2, X_3.
At each point x ∈ M consider a 1-dimensional subspace of the space T 1 1x (M ) of tensors of type (1, 1) at x generated by the tensors J +x and J −x . The above considerations show that it is a 1-dimensional subbundle J ⊂ T 1 1 (M ).
10. Lemma. The 1-dimensional vector bundles J and Λ^6T*(M) are isomorphic.
Proof. Let us choose a riemannian metric g 0 on T M . If x ∈ M and v, v ′ ∈ T x M we define a riemannian metric g by the formula
g(v, v ′ ) = g 0 (v, v ′ ) + g 0 (J + v, J + v ′ ) = g 0 (v, v ′ ) + g 0 (J − v, J − v ′ ).
It is obvious that for any v, v ′ ∈ T x M we have
g(J + v, J + v ′ ) = g(v, v ′ ), g(J − v, J − v ′ ) = g(v, v ′ ).
We now define
σ + (v, v ′ ) = g(J + v, v ′ ), σ − (v, v ′ ) = g(J − v, v ′ ).
It is easy to verify that σ + and σ − are nonzero 2-forms on T x M satisfying σ + +σ − = 0. We define an isomorphism h : J → Λ 6 T * M . Let x ∈ M and let A ∈ J x . We can write A = aJ + , A = −aJ − .
We set
hA = aσ + ∧ σ + ∧ σ + = −aσ − ∧ σ − ∧ σ − .
11. Corollary. There exist two almost complex structures J_+ and J_− on M such that
(i) J + + J − = 0, (ii) ω(J + X 1 , X 2 , X 3 ) = ω(X 1 , J + X 2 , X 3 ) = ω(X 1 , X 2 , J + X 3 ), (iii) ω(J − X 1 , X 2 , X 3 ) = ω(X 1 , J − X 2 , X 3 ) = ω(X 1 , X 2 , J − X 3 )
, for any vector fields X 1 , X 2 , X 3 if and only if the manifold M is orientable.
Hence the assertions in the rest of the article can be simplified correspondingly if M is an orientable manifold.
12. Lemma. Let J be an almost complex structure on M such that for any vector fields X 1 , X 2 , X 3 ∈ X (M ) there is
ω(JX 1 , X 2 , X 3 ) = ω(X 1 , JX 2 , X 3 ) = ω(X 1 , X 2 , JX 3 ).
If ∇ is a linear connection on M such that ∇ω = 0, then also ∇J = 0.
Proof. Let Y ∈ X (M ), and let us consider the covariant derivative ∇ Y . We get
0 = (∇ Y ω)(JX 1 , X 2 , X 3 ) = Y (ω(JX 1 , X 2 , X 3 ) − ω((∇ Y J)X 1 , X 2 , X 3 ) −ω(J∇ Y X 1 , X 2 , X 2 ) − ω(JX 1 , ∇ Y X 2 , X 3 ) − ω(JX 1 , X 2 , ∇ Y X 3 ), 0 = (∇ Y ω)(X 1 , JX 2 , X 3 ) = Y (ω(JX 1 , X 2 , X 3 ) − ω(∇ Y X 1 , JX 2 , X 3 ) −ω(X 1 , (∇ Y J)X 2 , X 3 ) − ω(X 1 , J∇ Y X 2 , X 3 ) − ω(X 1 , JX 2 , ∇ Y X 3 ).
Because the above expressions are equal we find easily that ω((∇ Y J)X 1 , X 2 , X 3 ) = ω(X 1 , (∇ Y J)X 2 , X 3 ).
We denote A = ∇ Y J. Extending in the obvious way the above equality, we get ω(AX 1 , X 2 , X 3 ) = ω(X 1 , AX 2 , X 3 ) = ω(X 1 , X 2 , AX 3 ).
Moreover J 2 = −I, and applying ∇ Y to this equality, we get AJ + JA = 0.
We know that K(ι X ω) = [X, JX]. Furthermore ω(X, AX, X ′ ) = ω(X, X, AX ′ ) = 0, ω(X, AJX, X ′ ) = ω(X, JX, AX ′ ) = 0, which shows that A preserves the distribution [X, JX]. By the very same arguments as in Lemma 7 we can see that A = λ 0 I + λ 1 J. Consequently
(λ_0I + λ_1J)J + J(λ_0I + λ_1J) = −2λ_1I + 2λ_0J = 0,

which implies λ_0 = λ_1 = 0. Thus ∇_YJ = A = 0.
The statement of the previous lemma can be in a way reversed, and we get 13. Proposition. Let ω be a real 3-form on a 6-dimensional differentiable manifold M satisfying ∆(ω x ) = {0} for any x ∈ M . Let J be an almost complex structure on M such that for any vector fields X 1 , X 2 , X 3 ∈ X (M ) there is ω(JX 1 , X 2 , X 3 ) = ω(X 1 , JX 2 , X 3 ) = ω(X 1 , X 2 , JX 3 ).
Then there exists a symmetric connection ∇̃ on M such that ∇̃ω = 0 if and only if the following conditions are satisfied: (i) dω = 0; (ii) the almost complex structure J is integrable.
Proof. First, we prove that the integrability of the structure J and the fact that ω is closed implies the existence of a symmetric connection with respect to which ω is parallel. For any connection ∇ on M we shall denote by the same symbol its complexification. Namely, we set
∇ X0+iX1 (Y 0 + iY 1 ) = (∇ X0 Y 0 − ∇ X1 Y 1 ) + i(∇ X0 Y 1 + ∇ X1 Y 0 ).
Let us assume that there exists a symmetric connection • ∇ such that • ∇J = 0. We shall consider a 3-form γ of type (3, 0) such that (Re γ)|T M = ω. Our next aim is to try to find a symmetric connection
∇ V W = • ∇ V W + Q(V, W ) satisfying ∇ V γ = 0. Obviously, the connection ∇ is symmetric if and only if Q(V, W ) = Q(W, V ). Moreover, ∇ V γ = 0 hints that ∇J = 0. 0 = (∇ V J)W = ∇ V (JW )−J∇ V W = • ∇ V (JW )+Q(V, JW )−J • ∇ V W −JQ(V, W ),
which shows that we should require Q(JV, W ) = Q(V, JW ) = JQ(V, W ).
Because
• ∇J = 0, we can immediately see that for any V ∈ X C (M ) the covariant derivative • ∇ V γ is again a form of type (3, 0). Consequently there exists a uniquely determined complex 1-form ρ such that
• ∇ V γ = ρ(V )γ. Then (∇ V γ)(W 1 , W 2 , W 3 ) =V (γ(W 1 , W 2 , W 3 )) − γ(∇ V W 1 , W 2 , W 3 ) − γ(W 1 , ∇ V W 2 , W 3 ) − γ(W 1 , W 2 , ∇ V W 3 ) =V (γ(W 1 , W 2 , W 3 )) − γ( • ∇ V W 1 , W 2 , W 3 ) − γ(W 1 , • ∇ V W 2 , W 3 ) − γ(W 1 , W 2 , • ∇ V W 3 ) − γ(Q(V, W 1 ), W 2 , W 3 ) − γ(W 1 , Q(V, W 2 ), W 3 ) − γ(W 1 , W 2 , Q(V, W 3 )) =ρ(V )γ(W 1 , W 2 , W 3 ) − γ(Q(V, W 1 ), W 2 , W 3 ) − γ(W 1 , Q(V, W 2 ), W 3 ) − γ(W 1 , W 2 , Q(V, W 3 )).
In other words ∇ V γ = 0 if and only if
ρ(V )γ(W 1 , W 2 , W 3 ) = γ(Q(V, W 1 ), W 2 , W 3 )+γ(W 1 , Q(V, W 2 ), W 3 ) + γ(W 1 , W 2 , Q(V, W 3 )).
Sublemma. If dγ = 0, then ρ is a form of type (1, 0).
Proof. Let V 1 ∈ T 0,1 (M ). Because • ∇ is symmetric dγ = −A( • ∇γ), where A denotes the alternation. We obtain 0 = −4!(dγ)(V 1 , V 2 , V 3 , V 4 ) = π sign(π)( • ∇ Vπ1 γ)(V π2 , V π3 , V π4 ) + τ sign(τ )( • ∇ V1 γ)(V τ 2 , V τ 3 , V τ 4 ) = 3!( • ∇ V1 γ)(V 2 , V 3 , V 4 ) = 3!ρ(V 1 )γ(V 2 , V 3 , V 4 ).
The first sum is taken over all permutations π satisfying π1 > 1, and the second one is taken over all permutations of the set {2, 3, 4}. The first sum obviously vanishes, and ρ(V 1 ) = 0. This finishes the proof.
We set now
Q(V, W ) = 1 8 [ρ(V )W − ρ(JV )JW + ρ(W )V − ρ(JW )JV ].
It is easy to see that Q(JV,
W ) = Q(V, JW ) = JQ(V, W ). For V, W 1 , W 2 , W 3 ∈ T 1,0 (M ) we can compute 8γ(Q(V, W 1 ), W 2 , W 3 ) =γ(ρ(V )W 1 − ρ(JV )JW 1 + ρ(W 1 )V − ρ(JW 1 )JV, W 2 , W 3 ) =γ(2ρ(V )W 1 + 2ρ(W 1 )V, W 2 , W 3 ) = 2ρ(V )γ(W 1 , W 2 , W 3 ) + 2ρ(W 1 )γ(V, W 2 , W 3 ),
where we used for V ∈ T (1,0) (M ) that ρ(JV ) = iρ (V ) and γ(JV, V ′ , V ′′ ) = iγ(V, V ′ , V ′′ ), since γ is of type (3, 0) and ρ of type (1, 0). Similarly we can compute γ(W 1 , Q(V, W 2 ), W 3 ) and γ(W 1 , W 2 , Q(V, W 3 )). Without a loss of generality we can assume that the vector fields W 1 , W 2 , W 3 are linearly independent (over C). Then we can find uniquely determined complex functions
f 1 , f 2 , f 3 such that V = f 1 W 1 + f 2 W 2 + f 3 W 3 .
Then we get
ρ(W 1 )γ(V, W 2 , W 3 ) + ρ(W 2 )γ(W 1 , V, W 3 ) + ρ(W 3 )γ(W 1 , W 2 , V ) =f 1 ρ(W 1 )γ(W 1 , W 2 , W 3 ) + f 2 ρ(W 2 )γ(W 1 , W 2 , W 3 ) + f 3 ρ(W 3 )γ(W 1 , W 2 , W 3 ) =ρ(f 1 W 1 + f 2 W 2 + f 3 W 3 )γ(W 1 , W 2 , W 3 ) = ρ(V )γ(W 1 , W 2 , W 3 ).
Finally we obtain
γ(Q(V, W 1 ), W 2 , W 3 )+γ(W 1 , Q(V, W 2 ), W 3 ) + γ(W 1 , W 2 , Q(V, W 3 )) =ρ(V )γ(W 1 , W 2 , W 3 ).
which proves ∇_Vγ = 0. Let us continue in the main stream of the proof. We shall now use the complex connection ∇. For X, Y ∈ TM we shall denote ∇^0_XY = Re ∇_XY and ∇^1_XY = Im ∇_XY. This means that we have ∇_XY = ∇^0_XY + i∇^1_XY. Since ∇γ = 0 implies ∇γ_0 = 0, and since γ_0(iw_1, w_2, w_3) = γ_0(Jw_1, w_2, w_3) on vectors tangent to M, we get

0 = X(γ_0(Y_1, Y_2, Y_3)) − γ_0(∇^0_XY_1, Y_2, Y_3) − γ_0(Y_1, ∇^0_XY_2, Y_3) − γ_0(Y_1, Y_2, ∇^0_XY_3)
− γ_0(J∇^1_XY_1, Y_2, Y_3) − γ_0(Y_1, J∇^1_XY_2, Y_3) − γ_0(Y_1, Y_2, J∇^1_XY_3)
= X(γ_0(Y_1, Y_2, Y_3)) − γ_0(∇^0_XY_1 + J∇^1_XY_1, Y_2, Y_3) − γ_0(Y_1, ∇^0_XY_2 + J∇^1_XY_2, Y_3) − γ_0(Y_1, Y_2, ∇^0_XY_3 + J∇^1_XY_3).

We define now ∇̃_XY = ∇^0_XY + J∇^1_XY.
It is easy to verify that ∇̃ is a real connection. Moreover, the previous equation shows that ∇̃γ_0 = 0. Furthermore, it is very easy to see that the connection ∇̃ is symmetric. The inverse implication can be proved easily.
Let us use the standard definition of integrability of a k-form ω on M, that is, every x ∈ M has a neighbourhood N such that ω has a constant expression in the dx_i, the x_i being suitable coordinate functions on N.
14. Corollary. Let ω be a real 3-form on a 6-dimensional differentiable manifold M satisfying ∆(ω_x) = {0} for any x ∈ M. Let J be an almost complex structure on M such that for any vector fields X_1, X_2, X_3 ∈ X(M) there is
ω(JX 1 , X 2 , X 3 ) = ω(X 1 , JX 2 , X 3 ) = ω(X 1 , X 2 , JX 3 ).
Then ω is integrable if and only if there exists a symmetric connection ∇ preserving ω, that is ∇ω = 0.
Proof. Let ∇ be a symmetric connection such that ∇ω = 0. Then according to the previous proposition dω = 0 and J is integrable. Then we construct the complex form γ on T^CM of type (3, 0) such that Re γ|T_xM = ω_x for any x ∈ M (point by point, according to Proposition 8). Moreover, if ω is closed then so is γ. That is, γ = f · dz_1 ∧ dz_2 ∧ dz_3, where z_1, z_2, and z_3 are (complex) coordinate functions on M, dz_1, dz_2, dz_3 are a basis of Λ^{1,0}M, and f is a function on M. Further

0 = dγ = ∂γ + ∂̄γ = ∂f ∧ dz_1 ∧ dz_2 ∧ dz_3 + ∂̄f ∧ dz_1 ∧ dz_2 ∧ dz_3.

Evidently ∂f ∧ dz_1 ∧ dz_2 ∧ dz_3 = 0, being a form of type (4, 0), which means ∂̄f = 0 and f is holomorphic. Now we exploit a standard trick. There exists a holomorphic function F(z_1, z_2, z_3) such that ∂F/∂z_1 = f. We introduce new complex coordinates z̃_1 = F(z_1, z_2, z_3), z̃_2 = z_2, and z̃_3 = z_3. Then γ = f dz_1 ∧ dz_2 ∧ dz_3 = dz̃_1 ∧ dz̃_2 ∧ dz̃_3. Now write z̃_1 = x_1 + ix_4, z̃_2 = x_2 + ix_5, and z̃_3 = x_3 + ix_6 for real coordinate functions x_1, x_2, x_3, x_4, x_5, and x_6 on M. There is

ω = Re γ = dx_1 ∧ dx_2 ∧ dx_3 + dx_2 ∧ dx_4 ∧ dx_6 − dx_3 ∧ dx_4 ∧ dx_5 − dx_1 ∧ dx_5 ∧ dx_6,

which has constant coefficients in these coordinates, so ω is integrable.

15. Corollary. Let ω be a real 3-form on a 6-dimensional differentiable manifold M satisfying ∆(ω_x) = {0} for any x ∈ M. Let J be an almost complex structure on M such that for any vector fields X_1, X_2, X_3 ∈ X(M) there is ω(JX_1, X_2, X_3) = ω(X_1, JX_2, X_3) = ω(X_1, X_2, JX_3).
Then ω is integrable if and only if the following conditions are satisfied (i) dω = 0, (ii) the almost complex structure J is integrable.
Observation. There is an interesting relation between structures given by a form of type 2 on 6-dimensional vector spaces and G_2-structures on 7-dimensional ones (G_2 being the exceptional Lie group, the group of automorphisms of the algebra of Cayley numbers and also the group of automorphisms of the 3-form given below), i.e. structures given by a form of the type

α_1 ∧ α_2 ∧ α_3 + α_1 ∧ α_4 ∧ α_5 − α_1 ∧ α_6 ∧ α_7 + α_2 ∧ α_4 ∧ α_6 + α_2 ∧ α_5 ∧ α_7 + α_3 ∧ α_4 ∧ α_7 − α_3 ∧ α_5 ∧ α_6,

where α_1, ..., α_7 form a basis of the vector space V. If we restrict a form of this type to any 6-dimensional subspace of V we get a form of type 2.
G_2-structures are well studied and a lot of examples of G_2-structures are known. Thus any G_2-structure on a 7-dimensional manifold gives a structure of type 2 on any 6-dimensional submanifold, and we get a vast variety of examples. See for example [J].
X(M) stands for the set of all (real) vector fields on M; X_C(M) means all the complex vector fields on M. A 3-form ω on M is called a form of type 2 if for every x ∈ M there is ∆(ω_x) = {0}. Let ω be a form of type 2 on M and let U ⊂ M be an open orientable submanifold. Then there exists an everywhere nonzero differentiable 6-form on U. In each T_xM, x ∈ U, construct J_− and J_+ as in Lemma 7. The construction is evidently smooth on U. Thus:

9. Lemma. Let ω be a form of type 2 on M and let U ⊂ M be an orientable open submanifold. Then there exist two differentiable almost complex structures J_+ and J_− on U satisfying conditions (i)-(iii) stated at the beginning of this section.
This shows that ∇^0 is a real connection while ∇^1 is a real tensor field of type (1, 2). The symmetry of ∇ shows that the connection ∇^0 is symmetric, and that the tensor ∇^1 is also symmetric. Moreover, ∇γ = 0 implies that the real part of ∇γ (as well as the imaginary one, which gives in fact the same identity) is zero; using the relations between γ_0 and γ_1 we obtain ∇̃γ_0 = 0 as above.
Conversely, if ω is integrable, then for any x ∈ M there is a basis dx_1, ..., dx_6 of T*N in some neighbourhood N ⊂ M of x such that ω has constant expression in all T_xM, x ∈ N. Then the flat connection ∇ given by the coordinate system x_1, ..., x_6 is symmetric and ∇ω = 0 on N. We use the partition of unity and extend ∇ over the whole M. We can reformulate Proposition 13 as "the Darboux theorem for type 2 forms".

References
[H] N. Hitchin, The geometry of three-forms in six dimensions, J. Differential Geometry 55 (2000), 547-576, arXiv:math.DG/0010054.
[J] D. D. Joyce, Compact manifolds with special holonomy, Oxford Mathematical Monographs, Oxford University Press, 2000.
[V] J. Vanžura, One kind of multisymplectic structures on 6-manifolds, Steps in Differential Geometry, Proceedings of the Colloquium on Differential Geometry, July 25-30, 2000, Debrecen, Hungary, 375-391.
| [] |
[
"Optical alignment and orientation of excitons in ensemble of core/shell CdSe/CdS colloidal nanoplatelets",
"Optical alignment and orientation of excitons in ensemble of core/shell CdSe/CdS colloidal nanoplatelets"
] | [
"O O Smirnova \nIoffe Institute\nRussian Academy of Sciences\n194021St. PetersburgRussia\n",
"I V Kalitukha \nIoffe Institute\nRussian Academy of Sciences\n194021St. PetersburgRussia\n",
"A V Rodina \nIoffe Institute\nRussian Academy of Sciences\n194021St. PetersburgRussia\n",
"G S Dimitriev \nIoffe Institute\nRussian Academy of Sciences\n194021St. PetersburgRussia\n",
"V F Sapega \nIoffe Institute\nRussian Academy of Sciences\n194021St. PetersburgRussia\n",
"O S Ken \nIoffe Institute\nRussian Academy of Sciences\n194021St. PetersburgRussia\n",
"V L Korenev \nIoffe Institute\nRussian Academy of Sciences\n194021St. PetersburgRussia\n",
"N V Kozyrev \nIoffe Institute\nRussian Academy of Sciences\n194021St. PetersburgRussia\n",
"S V Nekrasov \nIoffe Institute\nRussian Academy of Sciences\n194021St. PetersburgRussia\n",
"Yu G Kusrayev \nIoffe Institute\nRussian Academy of Sciences\n194021St. PetersburgRussia\n",
"D R Yakovlev \nIoffe Institute\nRussian Academy of Sciences\n194021St. PetersburgRussia\n\nExperimentelle Physik 2\nTechnische Universität Dortmund\n44221DortmundGermany\n",
"B Dubertret \nLaboratoire de Physique et d'étude des Matériaux\nESPCI\nCNRS\n75231ParisFrance\n",
"M Bayer \nExperimentelle Physik 2\nTechnische Universität Dortmund\n44221DortmundGermany\n"
] | [
"Ioffe Institute\nRussian Academy of Sciences\n194021St. PetersburgRussia",
"Ioffe Institute\nRussian Academy of Sciences\n194021St. PetersburgRussia",
"Ioffe Institute\nRussian Academy of Sciences\n194021St. PetersburgRussia",
"Ioffe Institute\nRussian Academy of Sciences\n194021St. PetersburgRussia",
"Ioffe Institute\nRussian Academy of Sciences\n194021St. PetersburgRussia",
"Ioffe Institute\nRussian Academy of Sciences\n194021St. PetersburgRussia",
"Ioffe Institute\nRussian Academy of Sciences\n194021St. PetersburgRussia",
"Ioffe Institute\nRussian Academy of Sciences\n194021St. PetersburgRussia",
"Ioffe Institute\nRussian Academy of Sciences\n194021St. PetersburgRussia",
"Ioffe Institute\nRussian Academy of Sciences\n194021St. PetersburgRussia",
"Ioffe Institute\nRussian Academy of Sciences\n194021St. PetersburgRussia",
"Experimentelle Physik 2\nTechnische Universität Dortmund\n44221DortmundGermany",
"Laboratoire de Physique et d'étude des Matériaux\nESPCI\nCNRS\n75231ParisFrance",
"Experimentelle Physik 2\nTechnische Universität Dortmund\n44221DortmundGermany"
] | [] | We report on the experimental and theoretical studies of optical alignment and optical orientation effects in an ensemble of core/shell CdSe/CdS colloidal nanoplatelets. The dependences of three Stokes parameters on the magnetic field applied in the Faraday geometry are measured under continuous wave resonant excitation of the exciton photoluminescence. Theoretical model is developed to take into account both bright and dark exciton states in the case of strong electron and hole exchange interaction and random in-plane orientation of the nanoplatelets in ensemble. The data analysis allows us to estimate the time and energy parameters of the bright and dark excitons. The optical alignment effect enables identification of the exciton and trion contributions to the photoluminescence spectrum even in the absence of a clear spectral line resolution.Colloidal semiconductor nanocrystals are of interest for various fields of chemistry, physics, biology, and medicine and are successfully used in various optoelectronic devices. Being synthesized from many different semiconductor materials, they can have different geometries, resulting in zero-dimensional quantum dots, onedimensional nanorods, or two-dimensional nanoplatelets (NPLs). CdSe NPLs demonstrated small inhomogeneous broadening [1], as confirmed later also for NPLs of different composition[2][3][4]. Semiconductor NPLs can be considered as model systems to study physics in two dimensions, similar to quantum well heterostructures and layered materials, like graphene or transition-metal dichalcogenides.arXiv:2212.06134v1 [cond-mat.mes-hall] 12 Dec 2022 | null | [
"https://export.arxiv.org/pdf/2212.06134v1.pdf"
] | 254,564,393 | 2212.06134 | 784304d9d2385dc1691f770f31ca997ab42430fc |
Optical alignment and orientation of excitons in ensemble of core/shell CdSe/CdS colloidal nanoplatelets
(Dated: December 13, 2022)
O O Smirnova
Ioffe Institute
Russian Academy of Sciences
194021St. PetersburgRussia
I V Kalitukha
Ioffe Institute
Russian Academy of Sciences
194021St. PetersburgRussia
A V Rodina
Ioffe Institute
Russian Academy of Sciences
194021St. PetersburgRussia
G S Dimitriev
Ioffe Institute
Russian Academy of Sciences
194021St. PetersburgRussia
V F Sapega
Ioffe Institute
Russian Academy of Sciences
194021St. PetersburgRussia
O S Ken
Ioffe Institute
Russian Academy of Sciences
194021St. PetersburgRussia
V L Korenev
Ioffe Institute
Russian Academy of Sciences
194021St. PetersburgRussia
N V Kozyrev
Ioffe Institute
Russian Academy of Sciences
194021St. PetersburgRussia
S V Nekrasov
Ioffe Institute
Russian Academy of Sciences
194021St. PetersburgRussia
Yu G Kusrayev
Ioffe Institute
Russian Academy of Sciences
194021St. PetersburgRussia
D R Yakovlev
Ioffe Institute
Russian Academy of Sciences
194021St. PetersburgRussia
Experimentelle Physik 2
Technische Universität Dortmund
44221DortmundGermany
B Dubertret
Laboratoire de Physique et d'étude des Matériaux
ESPCI
CNRS
75231ParisFrance
M Bayer
Experimentelle Physik 2
Technische Universität Dortmund
44221DortmundGermany
Optical alignment and orientation of excitons in ensemble of core/shell CdSe/CdS colloidal nanoplatelets
(Dated: December 13, 2022)
We report on the experimental and theoretical studies of optical alignment and optical orientation effects in an ensemble of core/shell CdSe/CdS colloidal nanoplatelets. The dependences of three Stokes parameters on the magnetic field applied in the Faraday geometry are measured under continuous wave resonant excitation of the exciton photoluminescence. A theoretical model is developed to take into account both bright and dark exciton states in the case of strong electron and hole exchange interaction and random in-plane orientation of the nanoplatelets in the ensemble. The data analysis allows us to estimate the time and energy parameters of the bright and dark excitons. The optical alignment effect enables identification of the exciton and trion contributions to the photoluminescence spectrum even in the absence of a clear spectral line resolution.

Colloidal semiconductor nanocrystals are of interest for various fields of chemistry, physics, biology, and medicine and are successfully used in various optoelectronic devices. Being synthesized from many different semiconductor materials, they can have different geometries, resulting in zero-dimensional quantum dots, one-dimensional nanorods, or two-dimensional nanoplatelets (NPLs). CdSe NPLs demonstrated small inhomogeneous broadening [1], as confirmed later also for NPLs of different composition [2-4]. Semiconductor NPLs can be considered as model systems to study physics in two dimensions, similar to quantum well heterostructures and layered materials, like graphene or transition-metal dichalcogenides.
However, the spin physics of colloidal nanocrystals is still in its infancy compared to the rather mature field of spintronics based on epitaxially grown semiconductor quantum wells and quantum dots. The properties of colloidal and epitaxial nanostructures can differ considerably due to the much stronger confinement of charge carriers in colloidal structures, which leads to properties such as a strongly enhanced Coulomb interaction (enhanced exciton binding energy and fine-structure energy splitting), the possibility of photocharging and surface magnetism, a reduced influence of the phonon bath, etc. Experimental techniques widely used in the spin physics of heterostructures can be, however, readily applied to colloidal quantum dots and NPLs. Among them is polarized photoluminescence (PL) spectroscopy, including optical orientation and optical alignment methods.
The methods of optical orientation and optical alignment are used to study the population and coherence of different states in solids. Optical orientation is the excitation of states with a certain projection of angular momenta by circularly polarized light. In optics, optical orientation manifests itself in the polarization of the photoluminescence. Optical alignment consists in the excitation of states with a certain direction of the dipole moment by linearly polarized light [5]. In semiconductors, optical orientation and alignment can be observed for both electron-hole pairs and excitons.
Optical orientation of electron-hole pairs consists in creating preferential populations of electron and hole sublevels with different spin projections [6]. Their recombination results in a circularly polarized PL whether or not this pair was generated in the same act of light absorption. On the contrary, optical alignment involves the preservation of correlation (coherence) between the spins of the electron and hole in the pair. Such correlation is possible only for the recombination of an electron and a hole generated in one act of absorption of linearly polarized light [7]. Usually, however, an electron and a hole belonging to different pairs recombine, the correlation is not preserved, and the alignment effect is not observed for electron-hole pairs.
Under resonant excitation of the exciton, a bound electron-hole pair is generated, so both optical orientation and alignment are preserved. In the case of excitons in GaAs-type nanostructures [8], the optical orientation is given by the difference of populations of bright exciton states with a projection of the total angular momentum +1 and −1 due to the absorption of circularly polarized light. The optical alignment consists in creating a coherent superposition of a pair of optically active states ±1 by linearly polarized light. Both the difference of populations and the coherence evolve in time under the influence of magnetic interactions of different nature (Zeeman, exchange, etc.). If the optical orientation and alignment of excitons are preserved during their lifetime, the PL is partially polarized. In this case, three Stokes parameters unambiguously determine the polarization state of the ensemble of bright excitons. Optical orientation and alignment are not independent: the conversion of exciton coherence to population and vice versa is observed [9].
The study of optical orientation and alignment provides comprehensive information on the fine structure of exciton spin levels. In a zero magnetic field, the ground state of the exciton in NPLs is fourfold degenerate. The exchange interaction between an electron and a hole splits this state into an optically active doublet (bright excitons) and two energetically close optically forbidden singlets (dark excitons). Exciton localization in the NPL lowers the symmetry of the system, and the optically active doublet splits into two sublevels [10,11], which are linearly polarized in two orthogonal directions, whose orientation is given by the symmetry of the localizing potential. Knowledge of the fine structure of the exciton (the directions of the main axes and the values of the splittings) allows one to infer the symmetry of the nanostructure and the selection rules, which is important for understanding the radiation efficiency and pattern of NPLs. Unfortunately, in practice one often deals with an ensemble of NPLs varying both in shape and size. In this case, characteristic splittings in the exciton fine structure are indistinguishable in the PL spectrum due to a large energy broadening of the optical transitions. Nevertheless, the exciton fine structure is evident in the polarization of the exciton PL even in the absence of spectral resolution [8,9,11,12].
Nanostructures are often doped or photocharged with electrons or holes. A significant contribution to the optical properties of such nanostructures is made by three-particle complexes, trions, consisting of two electrons (or holes) in the singlet state and one unpaired hole (or electron). Trion PL overlaps the exciton PL and complicates the interpretation of optical orientation experiments [13]. Optical alignment becomes essential in this case. There is no correlation between electron and hole spins in the trion, and therefore there is no optical alignment effect. The presence of exciton optical alignment thus allows one to single out the exciton contribution to the linear polarization even in the absence of a clear spectral resolution between excitons and trions.
Optical orientation and alignment spectroscopy have proven to be powerful tools for studying the spin structure in semiconductors. These methods have contributed a lot to the characterization of electronic excitations in semiconductors: the times of dynamic processes [14-17], the effective g-factors of carriers [18], and the parameters of the exciton fine structure [9]. Optical orientation, optical alignment, and polarization conversion effects were observed in epitaxial nanostructures [19,20]. Among the family of colloidal semiconductor nanostructures (semiconductor nanocrystals chemically synthesized in solution or dispersed in a dielectric matrix), there is a report on the observation and theoretical description of the optical orientation and alignment in inorganic perovskite nanocrystals [21].
In conventional II-VI semiconductor colloidal nanocrystals the optical orientation and alignment have not been studied so far. We observe optical alignment and optical orientation of bright and dark excitons in core/shell CdSe/CdS NPLs. We develop a theory of polarized PL in order to describe these effects, taking into account the peculiarities of colloidal NPLs: the large electron-hole exchange interaction, resulting in a large splitting between the bright and dark excitons, and the random in-plane orientation of the NPLs in the ensemble. The analysis of the experimental data allows us to estimate the anisotropic energy splitting in the bright and dark exciton states in zero magnetic field, the g-factors of the bright and dark excitons, and the bright and dark exciton pseudospin lifetimes.
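For orientation among the parameters just listed, a minimal bright-exciton pseudospin model (a standard textbook form that we add here for illustration; it is not reproduced from this paper's theory section) treats the |+1⟩, |−1⟩ doublet as a pseudospin 1/2:

```latex
% delta_1: zero-field anisotropic splitting of the bright doublet,
% g_F: bright-exciton g-factor, B: magnetic field in the Faraday geometry.
H = \frac{\hbar}{2}\,\boldsymbol{\Omega}\cdot\boldsymbol{\sigma},
\qquad
\hbar\boldsymbol{\Omega} = \bigl(\delta_1,\; 0,\; g_F\,\mu_B B\bigr).
```

Precession of the exciton pseudospin about Ω then interconverts alignment and orientation, which gives a qualitative picture of the field dependences discussed below.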
I. EXPERIMENT
NPLs with a 4-monolayer (ML) CdSe core and CdS shells (schematically represented in Fig. 1) of different thicknesses were synthesized in the group of Benoit Dubertret in Paris, see Methods. The samples were investigated previously by means of a pump-orientation-probe technique, detecting the electron spin coherence at room temperature [22]. In particular, the electron g-factor and its dependence on the shell thickness were reported. In the present paper we conduct low-temperature and temperature-dependent studies of the PL properties of the NPLs with different shell thicknesses. We study the effects of the optical alignment and optical orientation of excitons, which were not observed so far.

The effects of optical alignment and optical orientation are qualitatively the same for samples with a different shell thickness. We focus here on the NPLs with a medium CdS shell thickness of 3.1 nm at each side of the CdSe core. The PL spectrum of this sample at a temperature of T = 1.5 K measured under either resonant (E_exc = 1.960 eV, Figure 2(a)) or non-resonant (E_exc = 2.380 eV, Figure S2(a) in the Supplementary Information (SI)) excitation consists of one broad band. This is typical for CdSe/CdS core/shell NPLs [23-25], in contrast to the bare-core CdSe NPLs, whose PL spectrum has separated exciton and trion lines [26].
Fig. 2: (a) PL spectrum (black), amplitudes of the broad (blue squares) and narrow (red squares, multiplied by 5) parts of the optical alignment contour. Arrows are pointing at the detection energies for panels (b) and (d). (b) Experimental data (symbols) and theoretical calculations (solid lines of corresponding colors) of the optical alignment (orange squares), the rotation of the linear polarization plane (green triangles) and the optical orientation (black circles) measured in the Faraday geometry at E_det = 1.955 eV. (c) Temperature dependence of the amplitude of the broad (blue squares) and narrow (red squares) parts of the optical alignment contour. Inset: temperature dependence of their HWHM (denoted as B_{1/2}). B_{1/2} of the narrow part is multiplied by 10. (d) Experimental data (symbols) and model calculations (solid lines) of the optical alignment (orange squares), the rotation of the linear polarization plane (green triangles) and the optical orientation (black circles) measured in the Faraday geometry at E_det = 1.943 eV.

Polarization of light is characterized by three Stokes parameters. Two of them characterize the linear polarization degree in the x, y axes and in the x′, y′ axes rotated by 45 degrees around the z axis (P_l and P_l′, respectively). The third one characterizes the circular polarization degree (P_c). For light propagating along the z direction the polarizations are defined as
P_l = (I_x − I_y)/(I_x + I_y),  P_l′ = (I_x′ − I_y′)/(I_x′ + I_y′),  P_c = (I_+ − I_−)/(I_+ + I_−).  (1)
Here I_x(y) is the intensity of the horizontally (vertically) polarized component, I_x′(y′) is the intensity of the +45° (−45°) polarized component, and I_+(−) is the intensity of the σ+ (σ−) polarized component of the light.
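Equation (1) transcribes directly into code; a trivial helper (added here for illustration, with variable names following the text):

```python
# Stokes parameters from the six measured intensities, Eq. (1).
def stokes(I_x, I_y, I_xp, I_yp, I_plus, I_minus):
    P_l  = (I_x - I_y) / (I_x + I_y)                # linear, x/y basis
    P_lp = (I_xp - I_yp) / (I_xp + I_yp)            # linear, basis rotated by 45 deg
    P_c  = (I_plus - I_minus) / (I_plus + I_minus)  # circular
    return P_l, P_lp, P_c
```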
In general, one can perform nine measurements, taking into account the combination of three options for the polarization of the incident and detected light. We denote each measurement as P^β_α, which combines α-polarized excitation (P^0_α) and β-polarized detection (P^β), with α, β = c, l, l′. For these samples, due to the random in-plane orientation of the NPLs on the substrate, the optical alignment is independent of the specific linear polarization of the exciting light (P^l_l ∼ P^{l′}_{l′}), which is also true for the effect of rotation of the linear polarization plane (P^{l′}_l ∼ P^l_{l′}). For the same reason, all the effects associated with the conversion of the initial linear polarization to circular polarization and vice versa are absent, as evidenced by the zero experimental dependence on the magnetic field: P^l_c(B) = P^c_l(B) = P^{l′}_c(B) = P^c_{l′}(B) = 0. Therefore, to describe the polarization-dependent effects it is sufficient to investigate three nontrivial quantities: the effect of the optical orientation, measured as P^c_c, the effect of the optical alignment, measured as P^l_l, and the effect of the rotation of the linear polarization plane, measured as P^{l′}_l. While P^l_l and P^c_c are present in the absence of a magnetic field, P^{l′}_l manifests itself only in a nonzero magnetic field in the Faraday geometry. We study these three Stokes parameters in magnetic fields up to 4.5 T applied in the Faraday geometry with B ∥ k, where k is the wave vector of the incident light directed perpendicular to the substrate (and opposite to the wave vector of the emitted light). However, we keep the definition of the sign of the circular polarization, P^0_c and P^c, with respect to the same direction of z.
A. Optical alignment and optical orientation in continuous wave experiment
The optical orientation, P^c_c, and the optical alignment, P^l_l, are observed only under resonant excitation of excitons. While P^c_c is observed all over the PL spectrum, P^l_l depends crucially on the detection energy. Figure 2(a) shows the spectrum of the optical alignment (blue) under linearly polarized continuous wave (cw) excitation with the laser energy E_exc = 1.960 eV in comparison with the PL spectrum (black). The optical alignment effect has a minimum at the PL maximum and increases toward the high-energy part of the spectrum up to P^l_l = 10%. This spectral dependence suggests that the PL at the high-energy part of the spectrum is contributed by excitons. The PL around its maximum is mostly contributed by negative trions, composed of two electrons and one hole, because optical alignment is not expected for singlet trions or any fermionic quasiparticles. We verify the exciton origin of the PL at the high-energy part of the spectrum by time-resolved PL measurements (the results are discussed in subsection I C).
For the ensemble of NPLs, two preferred orientations of the NPL anisotropic axis c with respect to the substrate are expected: vertically oriented NPLs (edge-up, Figure 1(a)) with c lying in the substrate plane and horizontally oriented NPLs (face-down, Figure 1(b)) with c perpendicular to the substrate [24,25]. The edge-up NPLs emit linearly polarized light both under linearly and circularly polarized excitation (k ∥ z). However, the contribution of the PL from these edge-up NPLs to P_l^l is constant under the application of the magnetic field in the Faraday geometry, allowing us to subtract it from the experimental data and to refer to the field-dependent P_l^l as the effect of the optical alignment of excitons in the face-down NPLs.

Figure 2(b) shows the dependence of the optical alignment P_l^l (orange squares) on the magnetic field in the Faraday geometry at E_det = 1.955 eV. The optical alignment contour consists of two parts: the broad one has an HWHM (half width at half maximum, denoted as B_1/2) of about 3 T, and the narrow one of less than 0.1 T. The spectral dependences of the amplitudes of both parts of the P_l^l contour are shown in Figure 2(a) with blue (broad part) and red (narrow part) symbols. The narrow part of the optical alignment contour is resonant with respect to the laser excitation energy: its amplitude reaches a maximum when the PL is detected about 5 meV below the laser energy, regardless of the particular laser energy in a reasonable range, and rapidly vanishes as the detection energy is varied.

Figure 2(c) shows the temperature dependence of the broad (blue squares) and narrow (red squares) parts of the optical alignment. The dependences of the amplitudes are presented in the main panel. The broad part amounts to 10% at T = 1.5 K, decreases with temperature, and disappears above 80 K. The narrow part of the optical alignment contour has a dip in the region below 5 K, an amplitude of 2% around T = 5 K, and then decreases similarly to the broad part and disappears above 60 K. The dependences of the HWHMs are shown in the inset of Figure 2(c); the HWHMs of both contributions depend only weakly on temperature.

Figure 2(b) also shows the effect of the rotation of the linear polarization plane, P_l^l′(B) (green triangles), which results in a nonlinear antisymmetric dependence on the magnetic field applied in the Faraday geometry. The linear polarization degree in a magnetic field of 4 T is 3%. In addition, Figure 2(b) includes the recovery of the optical orientation P_c^c(B) (black circles), which manifests itself as an increase with magnetic field. The amplitude of the polarization recovery curve is 6%. Both effects have a characteristic field of 3 T corresponding to the broad part of the optical alignment contour.
The magnetic field dependences of all three Stokes parameters, P_l^l, P_l^l′ and P_c^c, measured at the detection energy E_det = 1.943 eV are presented in Figure 2(d). This energy is 17 meV lower than the excitation energy; thus the narrow part of the optical alignment contour is absent here. The amplitudes of the optical alignment and the polarization recovery curve decrease to 4.5% and 3%, respectively. The characteristic value of the rotation of the linear polarization plane decreases to 1.5% at B = 4 T. The characteristic magnetic field of all three dependences remains the same, 3 T, as in Figure 2(b).
B. Raman scattering
The electron g-factor g_e is determined by means of spin-flip Raman scattering (SFRS) spectroscopy. SFRS spectra in co- and cross-circular polarizations measured under resonant excitation at E_exc = 1.936 eV in the Faraday magnetic field B_F = 4 T at T = 1.5 K are shown in Figure 3(a).
Both the Stokes (positive Raman shifts) and anti-Stokes (negative Raman shifts) regions of the spectra exhibit a line at an energy of 0.38 meV. This line is attributed to the spin flip of the electron. Figure 3(b) shows the magnetic field dependence of the Raman shift of the electron spin-flip line. Its approximation with the Zeeman energy µ_B g_e B_F gives the electron g-factor g_e = 1.67. This value is close to the value of 1.59 determined for this sample at room temperature in [22].
Raman scattering spectroscopy in zero magnetic field allows one to measure the energy splitting between the bright (optically allowed) and dark (optically forbidden) exciton states. Figure 3(c) shows the Raman spectrum at E_exc = 1.964 eV at T = 1.5 K. The spectrum shows a line at an energy of 0.8 meV. This line corresponds to the emission of the dark exciton after energy relaxation from the initially excited bright state. Thus, the energy splitting between the bright and dark exciton states is ∆E_AF = 0.8 meV.
C. Time-resolved PL
The analysis of the temperature- and magnetic-field-dependent PL decays allows us to prove that the high-energy part of the spectrum (Figure 2(a)) is contributed mostly by exciton recombination. At the same time, the trion determines the dynamics at the PL maximum.
The PL spectra under pulsed excitation (both resonant and non-resonant) at T = 1.5 K consist of a broad line with a maximum at 1.930 eV, similar to the cw PL spectrum, as shown in the Supplementary Information (Figure S2(a)).
The dynamics of the PL obtained under resonant pulsed excitation with E_exc = 1.960 eV and detected at the high-energy part of the PL spectrum (E_det = 1.946 eV) is shown in Figure 4(a,b). These excitation-detection conditions correspond to the exciton PL. The exciton PL decay has short and long contributions. The short decay (within nanoseconds) is shown in Figure 4(a). It corresponds to the bright (optically allowed, A) exciton dynamics [27] with the lifetime τ_A = 0.9 ns. The long decay in zero magnetic field is shown in Figure 4(b) (blue curve). This component, characterized by a decay rate Γ_L, is observed at T = 1.5 K due to the population of the bright exciton state and its admixture to the dark (optically forbidden, F) exciton state.
The PL decay is measured over the temperature range up to 90 K (typical data are shown in Figure S3(a)). The excitation-detection conditions corresponding to the resonant excitation of the exciton PL are the same for all temperatures, since the PL spectrum practically does not shift or change shape with temperature (Figure S2(a)). Figure 4(c) shows the temperature dependence of the decay rate Γ_L. Such behavior is typical for exciton recombination [27]. It is caused by the temperature-induced redistribution of the populations of the bright (characterized by the recombination rate Γ_A) and dark (characterized by Γ_F) exciton states (see SI, section S1). The analysis of Γ_L(T) allows one to determine the exciton parameters [27], including the bright and dark exciton recombination rates, as well as the relaxation rates and the energy splitting ∆E_AF between the bright and dark exciton states caused by the electron-hole exchange interaction (see SI, S1). The bright exciton low-temperature lifetime τ_A = 0.9 ns, already mentioned above, can also be expressed as
τ_A = (Γ_A + γ_0)⁻¹,
where γ_0 gives the relaxation rate from the bright to the dark exciton [27]. In conjunction with this, fitting the Γ_L(T) dependence with Eq. (S6) allows us to determine
γ_0 = 0.31 ns⁻¹, Γ_A = 0.8 ns⁻¹, Γ_F = 0.004 ns⁻¹ and ∆E_AF = 0.8 meV.
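As a cross-check, the quoted parameters can be plugged into Eq. (S6) numerically. The following Python sketch (ours, not the authors' code; k_B in meV/K, rates in ns⁻¹) reproduces the limiting behavior of Γ_L(T):

import numpy as np

kB = 0.0862                               # Boltzmann constant, meV/K
G_A, G_F, g0, dE = 0.8, 0.004, 0.31, 0.8  # fitted rates and splitting

def gamma_L(T):
    # Long decay rate from Eq. (S6), minus branch.
    x = dE / (2 * kB * T)
    s = G_A + G_F + g0 / np.tanh(x)
    r = np.sqrt((g0 + G_A - G_F)**2 + g0**2 / np.sinh(x)**2)
    return 0.5 * (s - r)

for T in (1.5, 20.0, 80.0):
    print(T, gamma_L(T))
# Gamma_L -> Gamma_F at low T and approaches (Gamma_A + Gamma_F)/2 at high T.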
The PL decay also depends on the magnetic field applied in the Voigt geometry (Figure 4(b)). The dependence Γ_L(B), shown in Figure 4(d), is caused by the field-induced mixing of the bright and dark exciton states. The exciton parameters determined above also allow us to describe Γ_L(B) via Eq. (S6).
Temperature series of the PL decay are also recorded under the non-resonant excitation energy E_exc = 2.380 eV (Figure S3(c)) with E_det = 1.952 eV. The Γ_L(T) dependence coincides with the case of resonant excitation (Figure S3(d)). Thus, independently of the excitation energy and within some range of detection energies, the PL around 1.946 eV (where the effect of the optical alignment is observed, see Figure 2(a)) is contributed mostly by exciton recombination. In contrast, the trion PL dynamics at 1.930 eV (Figure S3(b)) does not depend on temperature (Figure 4(c)). This is typical for trions, which do not have a dark state [24,26].
The electron-hole exchange interaction, as well as the direct Coulomb interaction in NPLs, is enhanced by the strong spatial confinement along the c axis and by the dielectric contrast between the NPL and its surrounding ligands [27,28]. As a result, in bare-core 4 ML CdSe NPLs, ∆E_AF and the exciton binding energy reach 4.5 meV and 270 meV, respectively [28]. The presence of the CdS shell decreases the electron-hole overlap because of the leakage of a significant part of the electron wave function into the shell and also decreases the influence of the dielectric contrast. This results in a decrease of ∆E_AF to 0.8 meV in the studied NPLs and a corresponding decrease of the exciton binding energy. However, the latter remains large, excluding the possibility of exciting an unbound electron-hole pair at the resonant excitation energy where the effects of the optical alignment and the optical orientation are observed.
Thus, the effects of the optical alignment and optical orientation of excitons are observed in core/shell CdSe/CdS NPLs. Our detailed studies of these effects and of the time-resolved PL decay allowed us to confirm that at the high-energy edge of the spectrum the PL mostly comes from exciton recombination. Further on, we focus only on the effects of the optical alignment and optical orientation of excitons and develop a theoretical analysis for excitons with a large bright-dark splitting. We consider only the contribution from the horizontally lying NPLs (c ∥ z, Figure 1(b)), taking into account their random in-plane orientation.
II. THEORY
We develop a theory of the optical alignment and optical orientation of excitons in the face-down NPLs oriented horizontally on the substrate, as schematically shown in Figure 1(b). The NPL edges can be directed along different crystallographic axes [29] and be of different length. The in-plane anisotropy of the studied NPLs is not large [22] but is present. Therefore, we introduce two coordinate frames: the laboratory frame with axes x, y, z and the frame related to the NPL with axes X, Y along the NPL edges and the Z axis directed along c, as shown in Figure 5(a). We consider normal incidence of the exciting light on the sample with k ∥ z ∥ Z and the reverse direction of the detected light (Figure 6), so that the light polarization vector e = (e_x, e_y, 0) is always in the NPL plane. However, the X, Y axes of the NPL frame may be rotated by an angle α with respect to the x, y axes of the laboratory frame. In the following consideration, the external magnetic field is applied in the Faraday geometry, B ∥ k ∥ c.
A. Bright and dark exciton contributions to the PL polarization
The band-edge exciton states in CdSe-based NPLs comprise bright and dark excitons; for more details see SI, section S1. These excitons are formed from electrons with the spin projection s_Z = ±1/2 on the NPL c axis and heavy holes with the spin projection j_Z = ±3/2. In the absence of an external magnetic field and any anisotropy-related splittings, the bright (A) and dark (F) excitons have two-fold degenerate |±1⟩ states described by the wave functions Ψ_mA with projections m_A = s_Z + j_Z = ±1 and two-fold degenerate |±2⟩ states described by the wave functions Ψ_mF with projections m_F = s_Z + j_Z = ±2. Due to small perturbations caused by interactions with phonons or internal magnetic fields, the |+2⟩ (|−2⟩) states are coupled to the |+1⟩ (|−1⟩) states and emit circularly polarized light [30].
In the absence of an external magnetic field the bright (dark) exciton states are split by ħΩ_X (ħΩ_FX) into linearly polarized dipoles |X⟩, |Y⟩ (and the analogously composed |F_X⟩, |F_Y⟩) described by the wave functions
Ψ_X = (Ψ_+1 + Ψ_−1)/√2,  Ψ_Y = −i(Ψ_+1 − Ψ_−1)/√2,   (2)
Ψ_FX = (Ψ_+2 + Ψ_−2)/√2,  Ψ_FY = −i(Ψ_+2 − Ψ_−2)/√2.   (3)
The anisotropic splitting ħΩ_X is associated with the long-range electron-hole exchange interaction in the presence of an anisotropy of the NPL shape in the plane [32,33]. The splitting ħΩ_FX between the dark exciton states has a different nature: even without an in-plane anisotropy it can originate from the cubic-anisotropy contribution to the short-range electron-hole exchange interaction, ∼ Σ_{α=X,Y,Z} σ_α J_α³, where σ is a pseudovector composed of the Pauli matrices and J is a pseudovector of the matrices of the angular momentum 3/2 [8]. The fine structure of the band-edge exciton, taking into account the anisotropic splitting in the absence of an external magnetic field, is shown schematically in Figure S1(b).
The Hamiltonian, taking into account the electron-hole exchange terms and the Zeeman term for B ∥ z, has the following matrix form in the exciton basis {Ψ_+1, Ψ_−1, Ψ_+2, Ψ_−2}:
H_AF = ( H_A  0 ; 0  H_F ) = (ħ/2) ×
    ( Ω_Z    Ω_X    0                       0
      Ω_X   −Ω_Z    0                       0
      0      0     −2∆E_AF/ħ + Ω_FZ        Ω_FX
      0      0      Ω_FX                   −2∆E_AF/ħ − Ω_FZ ),   (4)

where ħΩ_Z = g_A µ_B B, ħΩ_FZ = g_F µ_B B, g_A and g_F are the bright and dark exciton g-factors, and µ_B is the Bohr magneton. The Hamiltonian (4) does not include any perturbations that could directly mix the bright and dark exciton states. Therefore, the eigenstates of this Hamiltonian, Ψ_A^± and Ψ_F^±, comprise only linear combinations of the functions Ψ_±1 and Ψ_±2, respectively, with the energy eigenvalues E_A^±(B) and E_F^±(B):
E_A^± = ±(ħ/2) Ω_A = ±(ħ/2) √(Ω_X² + Ω_Z²),   (5)
E_F^± = −∆E_AF ± (ħ/2) Ω_F = −∆E_AF ± (ħ/2) √(Ω_FX² + Ω_FZ²).   (6)
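As an illustration, the eigenvalues of Eqs. (5), (6) can be verified by diagonalizing the Hamiltonian (4) numerically; the sketch below is ours, with illustrative splitting values in µeV (not the fitted parameters):

import numpy as np

hbar_OX, hbar_OZ = 3.0, 2.0      # bright exciton, hbar*Omega_X(Z)
hbar_OFX, hbar_OFZ = 11.6, 5.0   # dark exciton, hbar*Omega_FX(FZ)
dE_AF = 800.0                    # bright-dark splitting

H = 0.5 * np.array([
    [ hbar_OZ,  hbar_OX, 0.0, 0.0],
    [ hbar_OX, -hbar_OZ, 0.0, 0.0],
    [0.0, 0.0, -2*dE_AF + hbar_OFZ,  hbar_OFX],
    [0.0, 0.0,  hbar_OFX, -2*dE_AF - hbar_OFZ]])
print(np.sort(np.linalg.eigvalsh(H)))               # numerical eigenvalues
print(-dE_AF - 0.5*np.hypot(hbar_OFX, hbar_OFZ),    # E_F^-
      -dE_AF + 0.5*np.hypot(hbar_OFX, hbar_OFZ),    # E_F^+
      -0.5*np.hypot(hbar_OX, hbar_OZ),              # E_A^-
       0.5*np.hypot(hbar_OX, hbar_OZ))              # E_A^+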
The exciton energy structure and its evolution in the magnetic field are shown in the SI (section S1, Figure S1). The evolution of the bright exciton states in an external magnetic field is shown on a larger scale in Figure 5(b). The polarization of light emitted by excitons can be described using the spin density matrix formalism [9,31]. The strong exchange interaction leads to a large splitting ∆E_AF ∼ 1–5 meV between the bright and dark exciton states, which is much larger than the inverse exciton lifetimes. It makes the states of the dark and bright excitons incoherent with each other. This, in turn, allows us to neglect the off-diagonal block terms of the density matrix, ρ_AF,mAmF and ρ_AF,mFmA, and to write the block-diagonal density matrix for the four exciton states
ρ_AF = ( ρ_A  0 ; 0  ρ_F ) =
    ( ρ_+1,+1  ρ_+1,−1  0        0
      ρ_−1,+1  ρ_−1,−1  0        0
      0        0        ρ_+2,+2  ρ_+2,−2
      0        0        ρ_−2,+2  ρ_−2,−2 ),   (7)
where the 2 × 2 density matrices ρ_A and ρ_F characterize the isolated two-level systems of the bright and dark exciton states {Ψ_+1, Ψ_−1} and {Ψ_+2, Ψ_−2}, respectively. For both two-level systems we can introduce the bright and dark exciton pseudospins s_A and s_F, respectively, and express the bright and dark density matrices as
ρ_A,F = (1/2)(N_A,F + σ·s_A,F) = (N_A,F/2)(1 + σ·S_A,F).   (8)
Here N_A = ρ_+1,+1 + ρ_−1,−1 and N_F = ρ_+2,+2 + ρ_−2,−2 are the bright and dark exciton populations, and the average pseudospins are S_A = s_A/N_A and S_F = s_F/N_F with |S_A,F| ≤ 1/2.
The bright exciton states with m_A = ±1 absorb and emit circularly polarized light. The dark exciton states m_F = ±2 interact with light only due to the admixture of the m_A = ±1 states and inherit their circular polarization selection rules. The direct absorption of light by the dark exciton states can be safely neglected; however, their contribution to the PL is important at low temperatures due to their significant population. Accordingly, the total intensity from the colloidal NPLs, I = I_A + I_F, consists of the intensities of the bright exciton, I_A = Γ_A N_A, and the dark exciton, I_F = Γ_F N_F (the recombination rates Γ_A,F of both excitons are assumed to be purely radiative).
Then, the polarization of light emitted by excitons from a single horizontally oriented NPL with axes X, Y can be presented as:
P_C = i(e_X e_Y* − e_X* e_Y) = A·P_CA + F·P_CF,
P_L = |e_X|² − |e_Y|² = A·P_LA + F·P_LF,   (9)
P_L′ = e_X e_Y* + e_X* e_Y = A·P_L′A + F·P_L′F.
Here e_X and e_Y are the projections of the emitted light polarization vector e on the X, Y axes; A = I_A/(I_A + I_F) and F = I_F/(I_A + I_F) characterize the contributions from the bright and dark excitons, correspondingly. The partial light polarizations are related to the density matrix and the averaged pseudospin components as
P_CA = (ρ_+1,+1 − ρ_−1,−1)/(ρ_+1,+1 + ρ_−1,−1) = 2S_AZ,
P_CF = (ρ_+2,+2 − ρ_−2,−2)/(ρ_+2,+2 + ρ_−2,−2) = 2S_FZ,   (10)
P_LA = −(ρ_+1,−1 + ρ_−1,+1)/(ρ_+1,+1 + ρ_−1,−1) = 2S_AX,
P_LF = −(ρ_+2,−2 + ρ_−2,+2)/(ρ_+2,+2 + ρ_−2,−2) = 2S_FX,
P_L′A = i(ρ_+1,−1 − ρ_−1,+1)/(ρ_+1,+1 + ρ_−1,−1) = 2S_AY,
P_L′F = i(ρ_+2,−2 − ρ_−2,+2)/(ρ_+2,+2 + ρ_−2,−2) = 2S_FY.
In the laboratory frame the registered polarization depends on the orientation of the NPL rotated around the laboratory axis z by an angle α (Figure 5(a)), while the total intensity I does not depend on α. Therefore, the linear polarizations registered in the laboratory frame depend on the rotation angle of the single NPL as

P_l(α) = P_L cos(2α) + P_L′ sin(2α),
P_l′(α) = −P_L sin(2α) + P_L′ cos(2α),   (11)
P_c = P_C.
Thus, to describe the effects of the optical orientation and alignment of the excitons, we first need to find the components of the bright and dark exciton pseudospins in the external magnetic field.
B. Pseudospin components in the magnetic field in the Faraday geometry
The dynamics of the density matrix ρ_AF is defined by the equation

∂ρ_AF/∂t = (1/iħ)[H_AF, ρ_AF],   (12)
allowing us to obtain two separate equations for the bright and dark exciton pseudospins s_A,F:

ds_A/dt + s_A × Ω_A = 0,  ds_F/dt + s_F × Ω_F = 0.   (13)
Each pseudospin rotates in its own effective magnetic field with the frequency Ω_{A,F} = Ω_{X,FX} + Ω_{Z,FZ}, which includes both the external magnetic field and the field directed along the X pseudospin axis that causes the anisotropic splitting. The interaction between the bright and dark exciton pseudospins is realized through their mutual pumping during the relaxation between them, which can be accounted for in the framework of kinetic equations. The pseudospin dynamics, taking into account generation, relaxation and recombination, is described by a system of kinetic equations phenomenologically written as follows:
ds_A/dt + s_A/T_A + s_A × Ω_A = s_A^0/τ_A,   (14)
ds_F/dt + s_F/T_F + s_F × Ω_F = s_F^0/τ_F.   (15)
Here the right-hand-side terms describe the generation of the bright and dark pseudospins, both due to the initial pumping by the polarized light and due to the mutual relaxation between the bright and dark excitons; they will be discussed in section II C. T_A,F are the pseudospin lifetimes, 1/T_A,F = 1/τ_A,F + 1/τ_sA,sF, where τ_A,F are the exciton lifetimes specified in SI, Eq. (S2), and τ_sA,sF are the spin relaxation times. Note that the kinetic Eqs. (14), (15) should be supplemented by the system of rate equations for the bright and dark exciton populations N_A and N_F given in SI, Eq. (S3). In the general case, the exciton spin relaxation times, and therefore the exciton pseudospin lifetimes T_A,F, have to be considered as second-rank tensors. Depending on the relaxation mechanism, the main axes of these tensors and the main values may depend both on the value and on the direction of the effective fields Ω_A,F. The consideration of a particular mechanism is beyond the scope of the present paper.
For the bright exciton we consider two possibilities: (i) isotropic spin relaxation times, allowing us to consider T_A as a scalar; (ii) anisotropic spin relaxation times, assuming that T_A has two independent values, T_1 and T_2, related to the longitudinal relaxation time τ_s,1 accompanied by the energy relaxation between the eigenstates Ψ_A^± and the dephasing time τ_s,2 caused by the pseudospin rotation around the effective field Ω_A (see Figure 5(b),(c)). For the dark exciton, we restrict the consideration to isotropic spin relaxation times, which allows us to consider T_F as a scalar. As will be shown later, this is sufficient to describe the effects under study.
In the following, we consider only steady-state solutions of Eqs. (14), (15) under cw excitation. In this case, the averaged pseudospins S_A = s_A/N_A^0 and S_F = s_F/N_F^0, where N_A,F^0 are the steady-state solutions of the rate Eqs. (S1), (S3), satisfy the same system of kinetic Eqs. (14), (15). Then, in the case of isotropic spin relaxation times and under the additional restrictions Ω_F T_F ≫ 1 and Ω_A T_A ≫ 1, the steady-state solutions can be written as
S_A = (T_A/τ_A)(S_A^0·Ω_A) Ω_A/Ω_A²,  S_F = (T_F/τ_F)(S_F^0·Ω_F) Ω_F/Ω_F².   (16)
One can see that under the considered restriction, which allows many pseudospin rotations around the effective field during its lifetime, the steady-state pseudospin is always directed along the effective field [9,19]. It is easy to demonstrate that the same property holds even when the spin relaxation anisotropy is taken into account. A magnetic field applied in the Faraday geometry converts the linearly polarized dipoles |X⟩, |Y⟩ into the circular components |+1⟩, |−1⟩ (Figure 5(b)). This restores the circular polarization of the PL when Ω_Z T_A ≫ 1. However, the S_Y component always vanishes in the considered geometry, and the S_Z component vanishes in zero magnetic field. For this reason, the above restrictions do not allow one to describe the effects of the rotation of the linear polarization plane and the optical orientation, which are observed in the experiment. We assume that the main effects come from the bright exciton recombination and, therefore, consider further the steady-state solutions of Eqs. (14), (15) for S_A assuming arbitrary relations between the main values of the spin lifetime tensor T_A and Ω_A for the cases (i), (ii).
In the case of an isotropic relaxation time (case (i)), the general solution for the steady-state components of the pseudospin S_A in a magnetic field in the Faraday geometry reads:
S_AX = T_A [S_AX^0(1 + Ω_X²T_A²) − Ω_Z T_A S_AY^0 + Ω_Z Ω_X T_A² S_AZ^0] / [τ_A(1 + Ω_A²T_A²)],
S_AY = T_A [S_AY^0 + Ω_Z T_A S_AX^0 − Ω_X T_A S_AZ^0] / [τ_A(1 + Ω_A²T_A²)],   (17)
S_AZ = T_A [S_AZ^0(1 + Ω_Z²T_A²) + Ω_X T_A S_AY^0 + Ω_X Ω_Z T_A² S_AX^0] / [τ_A(1 + Ω_A²T_A²)].
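A minimal numerical sketch of Eq. (17) (ours; the parameter values in the example call are illustrative, not the fitted ones) reads:

import numpy as np

def S_A(S0, Om_X, Om_Z, T_A, tau_A):
    # Steady-state bright exciton pseudospin, isotropic relaxation (case (i)).
    # Frequencies in ns^-1, times in ns.
    S0X, S0Y, S0Z = S0
    D = tau_A * (1 + (Om_X**2 + Om_Z**2) * T_A**2)
    SX = T_A * (S0X * (1 + Om_X**2 * T_A**2) - Om_Z * T_A * S0Y
                + Om_Z * Om_X * T_A**2 * S0Z) / D
    SY = T_A * (S0Y + Om_Z * T_A * S0X - Om_X * T_A * S0Z) / D
    SZ = T_A * (S0Z * (1 + Om_Z**2 * T_A**2) + Om_X * T_A * S0Y
                + Om_X * Om_Z * T_A**2 * S0X) / D
    return np.array([SX, SY, SZ])

# Linear pumping along X in zero field: only S_AX survives.
print(S_A((0.5, 0.0, 0.0), Om_X=2.0, Om_Z=0.0, T_A=0.5, tau_A=0.9))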
The solutions for the case (ii) taking into account the anisotropy of the relaxation times are given in the SI, section S4.
C. Pseudospin generation
We now consider the generation terms on the right-hand side of the kinetic Eqs. (14), (15). We characterize the polarization of the exciting light by the Stokes parameters
P_c^0 = i(e_x^0 e_y^0* − e_x^0* e_y^0),  P_l^0 = |e_x^0|² − |e_y^0|²,   (18)
P_l′^0 = e_x^0 e_y^0* + e_x^0* e_y^0,
where e_x^0, e_y^0 are the projections of the light polarization vector e^0 on the laboratory axes x, y. For resonant or close-to-resonant excitation conditions we neglect the small oscillator strength of the dark exciton and assume that only the |±1⟩ states are excited by the laser. At low temperature, when there are no transitions from the dark exciton to the bright one, the generation term for the bright exciton pseudospin is S_A^0 = S^0 and s_A^0 = N_A^0 S_A^0. For the NPL with axes X, Y rotated about the laboratory axis z by an angle α, it is related to the exciting light polarization as
S_X^0 = (γ_l/2) P_l^0 cos(2α),  S_Y^0 = (γ_l/2) P_l^0 sin(2α),   (19)
S_Z^0 = γ_c P_c^0.
Here the parameters γ_l ≤ 1 and γ_c ≤ 1 are introduced to account for the possible loss of the pumped polarization during relaxation in the case when the excitation is not exactly resonant. Hereafter, by P_l^0 and P_c^0 we mean γ_l P_l^0 and γ_c P_c^0, respectively, thus taking into account the loss of polarization for the excitons created in the lying NPLs. Moreover, when analyzing the absolute values of the polarizations, one has to take into account the loss of polarization caused by the contribution to the recombination coming from the vertically oriented NPLs. These losses can also be included in the factors γ_l and γ_c.
We assume that the generation terms S_F^0 and s_F^0 = N_F^0 S_F^0 for the dark exciton do not contain a contribution from the laser excitation and originate entirely from the relaxation of the polarized population from the bright to the dark exciton. In the case of non-resonant excitation, the dark exciton population N_F^0 can be created via relaxation from the initially excited states rather than from the band-edge bright exciton; however, in this case the polarization is lost during the relaxation process. We assume that the perturbation matrix elements responsible for the relaxation from the bright to the dark exciton states admix the exciton states |+1⟩ to |+2⟩ and |−1⟩ to |−2⟩ with the same probabilities, as shown in Figure S1(c). Therefore, the circular polarization is preserved during the excitation transfer and S_FZ^0 = S_AZ. As for the pseudospin components associated with the linear polarization, they can be lost due to the phases of the wave functions. Below we assume that S_FY^0 = 0 and consider two extreme cases for the S_FX^0 component: (a) S_FX^0 = 0 and (b) S_FX^0 = S_AX due to the transfer of the linear polarization from the anisotropically split |X⟩ to |F_X⟩ and |Y⟩ to |F_Y⟩, as shown in Figure S1(b). The case (b) can be realized, for example, if the coupling between the bright and dark states is caused by an anisotropic internal magnetic field directed predominantly along the X axis. Consideration of the microscopic mechanisms responsible for this field, or for another total or partial transfer of the linear polarization from the bright to the dark exciton, is beyond the scope of this paper.
As the temperature increases, the generation terms are modified due to the acceleration of the bright-to-dark relaxation and the thermal activation of the dark-to-bright exciton relaxation:
S_A^0 = (1 − f) S^0 + f S_F^*,   (20)

where f = (1 − Γ_A τ_A)(1 − Γ_F τ_F).
The derivation of this expression is given in the SI. Hereafter, we keep only the low-temperature equations in the main text. The transferred terms S_F^* and s_F^* = N_F^0 S_F^* also depend on the relaxation conditions between the bright and dark excitons. We assume that S_FZ^* = S_FZ and S_FY^* = 0, and for the two cases under consideration (a) S_FX^* = 0 or (b) S_FX^* = S_FX.
D. Bright exciton contribution to the PL polarization from the ensemble of NPLs
Recall that the NPLs are randomly oriented in the plane of the substrate. For this reason, some contributions to the polarization from a single NPL, such as the conversion of linear to circular polarization, disappear upon averaging over the ensemble. Since the total intensity does not depend on the in-plane orientation of the NPL, only the averaging of the polarizations coming from each NPL over the angle α is needed. The circular polarization is not affected by the angular dependence, and the optical orientation effect comes from the S_AZ component generated by P_c^0. As for the linearly polarized components, the nonvanishing contributions coming from the initial P_l^0 excitation, obtained from Eq. (17) with the generation terms of section II C, for the bright exciton in case (i) in the low-temperature regime are (see also orange arrows in Figure 6):
P_lA^l(α) = [T_A P_l^0 / (τ_A(1 + Ω_A²T_A²))] [(1 + Ω_X²T_A²) cos²(2α) + sin²(2α)],   (21)
P_lA^l′(α) = [T_A P_l^0 Ω_Z T_A / (τ_A(1 + Ω_A²T_A²))] [sin²(2α) + cos²(2α)].
After averaging, we obtain the following results for the three Stokes parameters in the laboratory frame:
P_lA^l = (T_A/2τ_A) P_l^0 (2 + Ω_X²T_A²)/(1 + Ω_A²T_A²),
P_lA^l′ = (T_A/τ_A) T_A Ω_Z P_l^0/(1 + Ω_A²T_A²),   (22)
P_cA^c = (T_A/τ_A) P_c^0 (1 + Ω_Z²T_A²)/(1 + Ω_A²T_A²).
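For illustration, the field dependences of Eq. (22) can be evaluated directly. In the following sketch (ours), the values of g_A, ħΩ_X, T_A and the initial polarizations are placeholders of the order of the values fitted below:

import numpy as np

mu_B, hbar = 57.88, 0.6582      # ueV/T and ueV*ns
g_A, hbar_OmX = 0.034, 3.0      # g-factor and anisotropic splitting (ueV)
T_A, tau_A = 0.5, 0.9           # ns
P0_l, P0_c = 0.75, 0.47

B = np.linspace(-4.5, 4.5, 181)
Om_X = hbar_OmX / hbar          # ns^-1
Om_Z = g_A * mu_B * B / hbar    # ns^-1
D = 1 + (Om_X**2 + Om_Z**2) * T_A**2
P_ll  = (T_A / (2 * tau_A)) * P0_l * (2 + Om_X**2 * T_A**2) / D
P_llp = (T_A / tau_A) * T_A * Om_Z * P0_l / D
P_cc  = (T_A / tau_A) * P0_c * (1 + Om_Z**2 * T_A**2) / D
print(P_ll[90], P_llp[-1], P_cc[-1])  # values at B = 0 and B = 4.5 T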
The conversion of circular polarization to linear and vice versa vanishes after averaging. The expressions for the effects change when the relaxation time anisotropy is taken into account in case (ii) and take the form:
P_lA^l = (P_l^0/2τ_A) [T_2(2 + Ω_X²T_2²)/(1 + Ω_A²T_2²) + (T_1 − T_2) Ω_X²/Ω_A²],
P_lA^l′ = (P_l^0/τ_A) T_2² Ω_Z/(1 + Ω_A²T_2²),   (23)
P_cA^c = (P_c^0/τ_A) [T_2(1 + Ω_Z²T_2²)/(1 + Ω_A²T_2²) + (T_1 − T_2) Ω_Z²/Ω_A²].
Eqs. (23) reduce to Eqs. (22) when T_1 = T_2 = T_A.
E. Dark exciton contribution to the ensemble polarization
Let us consider the two extreme cases of the relaxation between the bright and dark exciton states described in section II C. All possible component transformations for the transfer of the linear and circular components are shown in Figure S4. The detected contributions to the optical alignment and to the rotation of the linear polarization plane are highlighted on the right in orange and green, respectively. If only the S_Z component of the pseudospin is conserved during the excitation transfer from the bright exciton to the dark one (solid red and violet arrows in Figure S4), only two contributions each from the dark exciton occur in P_l^l and P_l^l′ at low temperature. If there is a generation of S_FX^0 as well (dashed red arrows in Figure S4), both effects include two more contributions (dashed violet arrows in Figure S4).
Here we give the solutions taking into account the anisotropy of the relaxation time in the bright exciton for the two most notable contributions. Expressions for all the contributions to the measured polarization from the dark exciton at low temperature, without taking into account the anisotropy of the relaxation time in the bright one, are given in the SI, section S3.
The effect of the optical orientation in the case when only S_Z is transferred to S_FZ consists of a single term for P_c^c:
P_cF^c = −(T_F P_c^0/τ_F) [(1 + Ω_FZ²T_F²)/(1 + Ω_F²T_F²)] × [T_2Ω_X² + T_1Ω_Z²(1 + T_2²Ω_A²)]/[τ_A Ω_A²(1 + T_2²Ω_A²)].   (24)

One can see that in zero external magnetic field P_cF^c vanishes in the limiting case Ω_FX T_F ≫ 1, which, as we will see below, is well satisfied for the dark exciton in our NPLs.
Activation of the transfer of the linear pseudospin component enables the largest contribution from the dark exciton to the optical alignment, via the direct transfer of S_X to S_FX (shown by red dashed arrows in Figure 6):
P_lF^l = (T_F P_l^0/2τ_F) [(1 + Ω_FX²T_F²)/(1 + Ω_F²T_F²)] × [T_2Ω_Z² + T_1Ω_X²(1 + T_2²Ω_A²)]/[τ_A Ω_A²(1 + T_2²Ω_A²)].   (25)

The other nonvanishing contributions are given in the SI, section S3.
It is important to note that all effects of the conversion from linear to circular polarization in the bright or dark exciton vanish upon averaging over the randomly oriented NPL ensemble. All nonvanishing contributions of the dark exciton state to the optical alignment effect are shown in SI, Figure S4. In addition to the random orientation of the NPLs in the ensemble, the exciton parameters in the NPLs, such as the anisotropic exciton splittings Ω_X and Ω_FX, can be characterized by some dispersion in the ensemble. To account for it, an additional averaging with the distribution function can be carried out, as has been done, for example, in Ref. [21] for the ensemble of perovskite nanocrystals.
To sum up, in this section we have presented the theoretical building blocks necessary to describe the polarization effects in the ensemble of colloidal nanoplatelets. Below we analyze the experimental data by means of the developed theory in order to determine quantitative ranges for the exciton parameters of the NPL ensemble.
III. MODELING OF EXPERIMENTAL DATA AND DISCUSSION
A. Analysis of the magnetic field dependences of the Stokes parameters
In this subsection, we analyze the experimental magnetic-field dependences of the three Stokes parameters, P_l^l, P_l^l′, and P_c^c, shown in Figures 2(b) and 2(d), for the low-temperature regime. We start with the contributions coming from the bright exciton, which result in the broad contours in all three Stokes parameters in Figure 2(b).
Regardless of the amplitudes, we can write down three conditions on the field dependences in a fixed magnetic field. In the field B_1/2 = 3 T the half-maximum (HWHM) of the optical alignment effect is reached. The pseudospin lifetime can be different for longitudinal and transverse relaxation and can also depend on the magnitude of the external magnetic field, as was discussed in Section II. To avoid an excessive number of undetermined parameters, we analyze two extreme cases, neglecting either the field dependence of the spin lifetime or the anisotropy of the relaxation time.
Here in the main text we consider the cases (i) without and (ii) with account of the pseudospin relaxation anisotropy, but assume that T_A, T_1, T_2 do not depend on the magnetic field. In the SI (see S4), we consider the case (iii) assuming T_1 = T_2 = T_A but taking into account the dependence of T_A on the magnetic field. For clarity, we introduce the variables X_A = Ω_X T_A = Ω_X T_1 and Z_A = Ω_Z(B_1/2) T_A = Ω_Z(B_1/2) T_1.
The other two conditions are the ratio of the linear polarizations P_lA^l′/P_lA^l in the field B_1/2 and the recovery of the initial optical orientation in the same field, P_cA^c(0)/P_cA^c(B_1/2). In case (i), without taking into account the anisotropy of the relaxation time and its magnetic field dependence, these conditions take the form:
P_lA^l(B_1/2)/P_lA^l(0) = (1 + X_A²)/(1 + X_A² + Z_A²) = 1/2,   (26)
P_lA^l′/P_lA^l(B_1/2) = 2Z_A/(2 + X_A²),
P_cA^c(0)/P_cA^c(B_1/2) = (1 + X_A² + Z_A²)/[(1 + X_A²)(1 + Z_A²)].   (27)
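These conditions can be explored numerically. The following sketch (ours) scans the (X_A, Z_A) plane and shows which values of the second and third ratios are reachable once Eq. (26) is enforced:

import numpy as np

X, Z = np.meshgrid(np.linspace(0.01, 5, 400), np.linspace(0.01, 5, 400))
c1 = (1 + X**2) / (1 + X**2 + Z**2)                 # alignment HWHM condition
c2 = 2 * Z / (2 + X**2)                             # rotation / alignment
c3 = (1 + X**2 + Z**2) / ((1 + X**2) * (1 + Z**2))  # orientation recovery
mask = np.abs(c1 - 0.5) < 0.01                      # contour fixed by Eq. (26)
print("c2 range on c1 = 1/2:", c2[mask].min(), c2[mask].max())
print("c3 range on c1 = 1/2:", c3[mask].min(), c3[mask].max())
# Comparing these ranges with the measured ratios illustrates the mismatch
# discussed below.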
All three conditions for the magnetic field dependences with only the two variables X_A, Z_A generally cannot be satisfied simultaneously, as can be seen from the blue curves in Figure S5. Accounting for the relaxation time anisotropy (case (ii)) leads to the appearance of an additional parameter characterizing the anisotropy, t_12 = T_1/T_2, where T_1, T_2 are related to the relaxation times τ_s,∥ and τ_s,⊥, respectively (Figures 5(b), 5(c)). Analogous conditions can be obtained from Eqs. (23) and are given in the SI, Eqs. (S17).
With these conditions, in case (ii) we obtain the fixed parameter bindings X_A = Ω_X T_1 = 1, Z_A = Ω_Z(B_1/2) T_1 = 1.7 and the pseudospin lifetime anisotropy t_12 = T_1/T_2 = 2 (orange curves in Figure S5). These bindings, however, give an infinite number of parameter sets (T_1, g_A = ħZ_A/(µ_B B_1/2 T_1), ħΩ_X = ħX_A/T_1) which describe the broad part of all three magnetic field dependences shown in Figure 2(b) for detection close to the resonance.
An alternative set of fixed parameters can be found for case (iii) by assuming a magnetic field dependence of the spin lifetimes, T_1 = T_2 = T_A(B), as discussed in the SI (see S4 and the green curves in Figure S5). The obtained parameters, X_A = Ω_X T_A(0) = 2.08, X_A(B_1/2) = Ω_X T_A(B_1/2) = 1.5 and Z_A(B_1/2) = Ω_Z(B_1/2) T_A(B_1/2) = 1.22, are slightly different from those found previously. Next, a similar procedure is applied to the magnetic-field dependences shown in Figure 2(d) for the slightly non-resonant detection energy. The resulting parameters of the bright exciton are close to the parameters found for the resonant detection (see SI, section S4).
We next analyze the narrow part of the P_l^l(B) dependence, which can be seen in Figure 2(b) and is absent in Figure 2(d). We first focus only on its magnetic-field dependence; its amplitude is zero in case (a) and nonzero only in case (b), when the linear polarization is transferred from the bright to the dark exciton, as shown in Figure S4 by the red dashed arrow. The HWHM of the narrow contour is reached in the field B_1/2^F = 60 mT. In the main contribution to P_lF^l of Eq. (25), the terms associated with the bright exciton in such a small field are still close to those at zero field. Therefore, the condition for the HWHM of the narrow contour can be obtained as:
P_lF^l(B_1/2^F)/P_lF^l(0) = (1 + Ω_FX²T_F²)/(1 + Ω_F²T_F²) = 1/2.   (28)
In the limiting case Ω_FX T_F ≫ 1 this condition results in Ω_FZ(B_1/2^F) = Ω_FX. We will see below that this condition is well satisfied for the dark exciton.
Thus, we have specified the fixed parameter bindings. Nevertheless, the goal of our work is to determine specific ranges of the possible exciton parameters. To extract the parameters of interest, Ω_X, Ω_FX and g_A, g_F, let us analyze the amplitudes of the effects at low temperature.
B. Amplitude analysis
In this subsection we consider the amplitudes (the values at zero magnetic field) of the effects of the optical alignment, P_l^l(0), and optical orientation, P_c^c(0), at low temperature. As mentioned above, the amplitudes can be affected by the depolarization factor coming from the recombination of the excitons in the vertically oriented (edge-up) NPLs, as well as by the initial loss of polarization in the case of non-resonant excitation. Both effects are taken into account by allowing the initial polarizations P_l^0, P_c^0 ≤ 1. Next, we recall that there are two contributions to the total polarizations, coming from the bright and dark excitons and weighted by the factors A = I_A/(I_A + I_F) = Γ_A τ_A and F = I_F/(I_A + I_F) = 1 − A = γ_0 τ_A at low temperatures. Then, the bright exciton contributions to the optical alignment and orientation effects are given by:
A·P_lA^l(0) = (P_l^0 Γ_A/2)[T_1 + T_2/(1 + Ω_X²T_2²)],   (29)
A·P_cA^c(0) = P_c^0 Γ_A T_2/(1 + Ω_X²T_2²).
The dark exciton contribution to the amplitude of the optical alignment effect is given by (see Eq. (25)):
F·P_lF^l(0) = P_l^0 T_1 T_F γ_0/(2τ_F),   (30)
while its contribution to the amplitude of the optical orientation effect vanishes in the case Ω_FX T_F ≫ 1 (according to Eq. (24)).
At this stage we recall the exciton parameters determined from the PL decay data: Γ_A = 0.8 ns⁻¹, γ_0 = 0.31 ns⁻¹ and τ_F = Γ_F⁻¹ = 250 ns. We can now use the experimental amplitude of 0.1 of the broad contour of the linear alignment effect at low temperature (see Figure 2(a)) as the bright exciton contribution and determine the value of the product P_l^0 T_1 from Eq. (29). We use the previously determined sets of parameter bindings and obtain, in case (ii), P_l^0 T_1 ≈ 0.18 ns with X_A = Ω_X T_1 = 1 and T_2 = T_1/2, and alternatively, in case (iii), P_l^0 T_1 ≈ 0.21 ns with X_A = Ω_X T_A(0) = 2.08 and T_2 = T_1. As P_l^0 ≤ 1, this gives us the lower limit for the values of T_1. The upper limit, T_1 ≤ τ_A = 0.9 ns, comes from the bright exciton lifetime.
For the range of the bright exciton pseudospin lifetime 0.2 ns ≤ T_1 ≤ 0.9 ns obtained in case (ii), we get the intervals for the anisotropic splitting of the states and the g-factor of the bright exciton: ħΩ_X ∈ [0.9, 5.9] µeV and g_A = 0.034 ± 0.015 (see Table S1 in the SI). The corresponding sets of parameters allow us to describe the dependences of the optical alignment, the rotation of the linear polarization plane and the optical orientation on the magnetic field in the Faraday geometry, shown in Figure 2(b). We used P_l^0 = 0.75 and P_c^0 = 0.47. The alternative set in case (iii) gives us ħΩ_X ∈ [1.5, 6.2] µeV and g_A = 0.02 ± 0.011 (see Table S2 in the SI). The numerical analysis of the amplitudes, including the data from Figure 2(d), is presented in the SI, section S3.
Next, we turn to the parameters of the dark exciton. We assume that the narrow contour in Figure 2(b) originates from the dark exciton contribution to the optical alignment effect. From its amplitude, we can estimate the dark exciton spin lifetime as T_F = 125 ns in case (a) or T_F = 108 ns in case (b). These values imply dark exciton spin relaxation times τ_sF = 252 ns or τ_sF = 189 ns, respectively, comparable with the dark exciton lifetime τ_F = 250 ns.
The longitudinal g-factors of the bright and dark excitons in colloidal NPLs are determined by the expressions

g_A = −g_e − 3g_h,  g_F = g_e − 3g_h,   (31)
where g_e,h are the electron and hole g-factors. The value g_e = 1.67 for the electron is obtained by spin-flip Raman scattering spectroscopy (Figure 3(b)). However, the g-factor of the hole in the studied NPLs is unknown. In CdSe NPLs with a thick shell, the value g_h = −0.4 was obtained in low magnetic fields in Ref. [24]. Knowing the g-factor of the bright exciton, g_A, we determine the range of g_h values, comparable with the value from Ref. [24], and thereby also determine the range of values g_F = 3.36 ± 0.015 for the dark exciton g-factor.
Finally, using the relation Ω_FX = Ω_FZ(B_1/2^F) and the determined g_F, we estimate the anisotropic splitting of the dark exciton in zero magnetic field as ħΩ_FX = (11.65 ± 0.05) µeV. Thus, for the determined values of T_F we obtain Ω_FX T_F ∼ 2000, which justifies the approximation Ω_FX T_F ≫ 1 used above. The fact that the splitting between the dark exciton states in zero magnetic field turned out to be larger than the anisotropic splitting between the bright exciton states does not contradict the conditions of the problem; these splittings may have a different nature. The splitting between the states of the bright exciton is driven by the in-plane anisotropy of the NPLs, which is not large for the studied NPLs. For the dark exciton, the term of the Hamiltonian cubic in J_α also leads to the splitting of its states.
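The arithmetic behind Eq. (31) and this estimate is compactly summarized by the following sketch (ours; µ_B in µeV/T, and B_1/2^F = 60 mT taken from the narrow-contour HWHM):

mu_B = 57.88            # Bohr magneton in ueV/T
g_e, g_A = 1.67, 0.034  # measured electron and fitted bright exciton g-factors

g_h = -(g_A + g_e) / 3              # inverted from g_A = -g_e - 3*g_h
g_F = g_e - 3 * g_h                 # dark exciton g-factor, Eq. (31)
B_F_half = 0.060                    # T, HWHM field of the narrow contour
hbar_Om_FX = g_F * mu_B * B_F_half  # ueV, from Om_FX = Om_FZ(B_F_half)
print(g_h, g_F, hbar_Om_FX)         # ~ -0.57, ~3.37, ~11.7 ueV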
Thus, we have determined all parameters of the bright and dark exciton fine structure with good accuracy. The largest uncertainty remains in the anisotropic splitting between the spin sublevels of the bright exciton and the relaxation time between them. For better access to these parameters, as well as for the analysis of the temperature dependences of the amplitudes of the optical alignment (see Figure 2(c)), we plan future time-resolved studies of the optical alignment and optical orientation effects in NPLs.
IV. CONCLUSIONS
We present the first observation of the exciton optical alignment and optical orientation effects in an ensemble of core/shell CdSe/CdS colloidal NPLs and develop a model describing the dependences of these effects on the magnetic field in the Faraday geometry. The presence of radiative recombination from the dark exciton state leads to the appearance of two contours of the optical alignment in a longitudinal magnetic field: a broad one and a narrow one. The transfer of the optical alignment from the bright to the dark exciton takes place only under strongly resonant conditions (when the difference between the detection and excitation energies is about 5 meV). However, the main contribution to the optical alignment, as well as to the rotation of the plane of the linear polarization and to the optical orientation, originates from the bright exciton. We determined all the main parameters of the exciton fine structure and dynamic processes in the studied NPLs. We conclude that the observation of the effects becomes possible due to the large contribution of the bright exciton to the PL intensity even at low temperatures. The presence of the CdS shell in the studied CdSe/CdS NPLs results in a decrease of the electron-hole exchange interaction and a corresponding decrease of the bright-to-dark exciton energy splitting and relaxation rate (in comparison with CdSe NPLs without a shell). We have found that the anisotropic splitting of the dark exciton states in zero magnetic field can be larger than that of the bright exciton states, which could be due to the cubic terms in the exchange interaction. It is the small anisotropic splitting of the bright exciton in the studied NPLs that allowed us to observe the rotation of the plane of the linear polarization in the magnetic field in the Faraday geometry, as well as the optical orientation effect in zero magnetic field.
METHODS
Sample fabrication.
A set of CdSe/CdS core/shell NPLs with different shell thicknesses is studied. The fabrication procedure is described in Refs. 34 and 35. The parameters of all studied samples can be found in Table 1 of Ref. 22. The NPLs are passivated with oleic acid and stored in a mixed solvent consisting of 40% heptane and 60% decane. For the study in the cryostat, the NPLs in solvent are drop-cast on a Si substrate. All NPL samples were grown from the same CdSe core with an average lateral dimension of (13.7 ± 0.2) × (10.8 ± 0.2) nm² and a thickness of 1.2 nm (i.e., 4 monolayers). The CdSe/CdS NPLs have total thicknesses of 3.8 ± 0.5 nm (very thin shell), 4.6 ± 0.6 nm (thin), 7.4 ± 1.0 nm (medium shell), 11.6 ± 1.6 nm (thick shell), and 19.1 ± 1.6 nm (very thick shell), including the thicknesses of the CdSe core and the CdS shells on both sides. Although the effects in all samples are similar, the results reported here focus on the data for the NPLs with the medium shell thickness (sample number MP170214A) of 3.1 nm on each side of the core.
Continuous wave experiment. For polarized PL spectroscopy and Raman scattering (RS) spectroscopy, the sample is placed in the variable temperature insert (1.5–300 K) of a helium bath cryostat with a superconducting solenoid (up to 5 T). The magnetic field is applied either parallel to the light wave vector (Faraday geometry) or perpendicular to it (Voigt geometry). For the excitation of PL and RS, a DCM dye laser is used. The laser power density focused on the sample does not exceed 5 W/cm². PL and RS are measured in a backscattering geometry and are analyzed by a Jobin-Yvon U-1000 double monochromator equipped with a cooled GaAs photomultiplier and conventional photon counting electronics. The linear and circular polarizations of the PL are measured using a photo-elastic modulator (PEM) in the detection path. The PEM modulates the circular polarization of light between σ+ and σ− at a frequency of 42 kHz synchronized with the detector. Together with a linear polarizer and a λ/4 plate, the PEM allows one to measure the PL polarizations P_l, P_l′, and P_c as described in the main text.
Time-resolved experiment. The sample is placed in a bath cryostat with a variable temperature insert (1.5–300 K) and a superconducting solenoid (up to 6 T). As the photoexcitation sources we use two semiconductor pulsed lasers with photon energies of 2.38 eV and 1.958 eV, a pulse duration of 50 ps, and repetition rates ranging from 0.5 MHz to 5 MHz. The average power of the photoexcitation was kept at 1 mW/cm². The PL was spectrally resolved by a double spectrometer with 900 gr/mm gratings in the dispersion subtraction regime. A part of the PL band with a width of less than 0.5 nm was detected using a photomultiplier tube designed for photon counting and measured with time resolution using a conventional time-correlated single-photon counting setup (instrumental response about 100 ps).
SUPPORTING INFORMATION

S1. Exciton level scheme and population dynamics

For square nanoplatelets in the absence of an external magnetic field, the system can be considered within the framework of a three-level model consisting of the unexcited state |G⟩ and the states of the bright |A⟩ and dark |F⟩ excitons (Figure S1). Because the exchange splitting ∆E_AF between them is large compared to the inverse characteristic times, the polarization can be described in terms of the eigenstate populations N_A, N_F. The presence of the splittings between the sublevels of the bright (ħΩ_X) and dark (ħΩ_FX) excitons requires that all four levels be taken into account explicitly. During the analysis of the experimental data, it was found that the consideration of the four-level system in terms of the populations makes it possible to describe the effect of the optical alignment P_l^l but does not allow one to obtain a nonzero orientation P_c^c in zero magnetic field, or the rotation of the linear polarization plane P_l^l′ in the magnetic field in the Faraday geometry. The effective frequency associated with the splitting between the sublevels of the bright exciton turns out to be comparable with its inverse spin lifetime (Ω_X T_A ∼ 1).
FIG. S1. (a) A scheme of the lower energy levels of the bright |A⟩ (±1) and dark |F⟩ (±2) excitons split by ∆E_AF; (b) the considered transitions between the states of the bright and dark excitons with the characteristic rates in the absence of an external magnetic field; (c) splitting of the states in the magnetic field in the Faraday geometry for the maximal g-factors and splittings at B = 0.
In the main text, the system is considered in terms of the pseudospins s_A,F and the average pseudospins S_A,F of the bright and dark excitons. To describe the polarization effects under study, P_l^l, P_l^l′, P_c^c, in terms of the total pseudospin, defined as s_A,F = N_A,F S_A,F, a complementary analysis of the population redistribution is required. The system of equations describing the dynamic transfer of excitons between the bright and dark exciton levels, taking into account the relaxation between them and the recombination, can be written in the following matrix form:
∂/∂t (N_A, N_F)ᵀ = Â (N_A, N_F)ᵀ + (G_A, G_F)ᵀ,
Â = ( −(Γ_A + γ_0 + γ_th)   γ_th
      γ_0 + γ_th           −(Γ_F + γ_th) ) = ( −1/τ_A      γ_th
                                               γ_0 + γ_th  −1/τ_F ),   (S1)
where Γ_A,F are the radiative recombination rates of the bright and dark excitons, γ_0 is the relaxation rate from the bright state to the dark one at zero temperature, and γ_th is the thermally activated phonon-assisted relaxation rate, γ_th = γ_0 N_B, where N_B(∆E_AF) is the Bose-Einstein phonon occupation, N_B(E) = 1/(exp(E/k_B T) − 1), and k_B is the Boltzmann constant. The lifetimes of the bright and dark excitons are:
τ_A⁻¹ = Γ_A + γ_0 + γ_th,  τ_F⁻¹ = Γ_F + γ_th.   (S2)
The exciton generation is determined by G_A,F (G_A + G_F = 1). We consider two modes of optical excitation: continuous-wave pumping and pulsed excitation. In the first case, we are interested in the stationary solutions with G_A(t), G_F(t) = const:
N_A = τ_A [G_A + (1 − Γ_F τ_F) G_F] / [Γ_F τ_F + Γ_A τ_A (1 − Γ_F τ_F)],   (S3)
N_F = τ_F [(1 − Γ_A τ_A) G_A + G_F] / [Γ_F τ_F + Γ_A τ_A (1 − Γ_F τ_F)].   (S4)
Under quasi-resonant excitation we assume that only the bright exciton, with a much larger oscillator strength, is excited (G_A = 1, G_F = 0). At low temperature Γ_F τ_F = 1; therefore, N_A = τ_A G_A and N_F = τ_F (1 − Γ_A τ_A).
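The steady-state populations of Eqs. (S3), (S4) are straightforward to evaluate; a short sketch (ours, using the rates quoted in the main text; k_B in meV/K) is:

import numpy as np

kB = 0.0862
G_A, G_F = 1.0, 0.0                           # quasi-resonant pumping
Gam_A, Gam_F, g0, dE = 0.8, 0.004, 0.31, 0.8  # ns^-1, ns^-1, ns^-1, meV

def populations(T):
    g_th = g0 / (np.exp(dE / (kB * T)) - 1.0)  # thermally activated rate
    tau_A = 1.0 / (Gam_A + g0 + g_th)          # Eq. (S2)
    tau_F = 1.0 / (Gam_F + g_th)
    D = Gam_F * tau_F + Gam_A * tau_A * (1 - Gam_F * tau_F)
    N_A = tau_A * (G_A + (1 - Gam_F * tau_F) * G_F) / D
    N_F = tau_F * ((1 - Gam_A * tau_A) * G_A + G_F) / D
    return N_A, N_F

for T in (1.5, 10.0, 80.0):
    print(T, populations(T))
# At low T this approaches N_A = tau_A, N_F = tau_F*(1 - Gam_A*tau_A).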
In the second mode, the states are excited by a pulse at the initial moment of time. For resonant excitation we assume that only the bright exciton, with a much larger oscillator strength, is excited (G_A = 1, G_F = 0). In the case of non-resonant excitation, after a supposedly fast relaxation from the higher-energy excited level, we consider G_A = G_F = 0.5. The dynamics of the total photoluminescence intensity from the system in question is presented as follows:
I(t) = Γ_A N_A(t) + Γ_F N_F(t) = B_1 e^{−Γ_S t} + B_2 e^{−Γ_L t},   (S5)
where Γ_S,L are the decay rates given by the eigenvalues of the matrix Â in Eq. (S1) (taken with the opposite sign) and the constants B_1, B_2 can be found from the initial conditions. In the absence of relaxation from the lower levels, i.e., when γ_th = 0, which is satisfied at zero temperature, the decay rates are equal to the inverse lifetimes τ_A⁻¹, τ_F⁻¹. Analysis of the photoluminescence kinetics makes it possible to determine some of the parameters, namely, the recombination rates of the bright and dark excitons Γ_A, Γ_F, the energy splitting ∆E_AF between these levels, and the relaxation rate at zero temperature γ_0.
The expressions for the fast (short) Γ_S and asymptotic (long) Γ_L photoluminescence decay rates in terms of the three-level model can be written as follows:
Γ_S,L = (1/2) [Γ_A + Γ_F + γ_0 coth(∆E_AF/2k_B T) ± √((γ_0 + Γ_A − Γ_F)² + γ_0² sinh⁻²(∆E_AF/2k_B T))].   (S6)
At low temperature, Γ_L = Γ_F, and at saturation Γ_L = (Γ_A + Γ_F)/2.
The parameters γ_0 and ∆E_AF determine the bend in the temperature dependence and are partially correlated. The fast component at low temperature allows one to fix the sum of the rates, Γ_A + γ_0 = 1.11 ns⁻¹.
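Equation (S6) can also be checked against a direct numerical diagonalization of the rate matrix Â of Eq. (S1); the following sketch (ours) confirms the equivalence at an arbitrary temperature:

import numpy as np

kB = 0.0862
Gam_A, Gam_F, g0, dE, T = 0.8, 0.004, 0.31, 0.8, 20.0

g_th = g0 / (np.exp(dE / (kB * T)) - 1.0)
A = np.array([[-(Gam_A + g0 + g_th), g_th],
              [g0 + g_th, -(Gam_F + g_th)]])
print(np.sort(-np.linalg.eigvals(A).real))  # (Gamma_L, Gamma_S) numerically

x = dE / (2 * kB * T)
s = Gam_A + Gam_F + g0 / np.tanh(x)
r = np.sqrt((g0 + Gam_A - Gam_F)**2 + g0**2 / np.sinh(x)**2)
print(0.5 * (s - r), 0.5 * (s + r))         # the same rates from Eq. (S6)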
The magnetic field in the Voigt geometry mixes the bright and dark exciton states, causing additional activation of radiative recombination of the lower energy states:
Γ_F(B) = Γ_F(B = 0) + Γ_A (g_e µ_B B/∆E_AF)².   (S7)
The experimentally measured dependences of the asymptotic rate on the temperature and on the magnitude of the magnetic field in the Voigt geometry are shown in Figures 4(c) and 4(d), respectively. The data are consistently described by the theoretical dependences for the parameter values Γ_A = 0.8 ns⁻¹, Γ_F = 0.004 ns⁻¹, γ_0 = 0.3 ns⁻¹, ∆E_AF = 0.8 meV.
S2. Pseudospin components
Taking into account the relaxation time anisotropy, the expressions for the pseudospin components under arbitrary initial conditions are written as follows:

S_AX = [S_AX^0(T_2Ω_Z² + T_1Ω_X²(1 + T_2²Ω_A²)) − S_AY^0 T_2²Ω_Z Ω_A² + S_Z^0 Ω_X Ω_Z (T_1(1 + T_2²Ω_A²) − T_2)] / [τ_A Ω_A² (1 + T_2²Ω_A²)],
S_AY = T_2 (S_AY^0 + S_AX^0 T_2 Ω_Z − S_Z^0 T_2 Ω_X) / [τ_A (1 + T_2²Ω_A²)],   (S8)
S_AZ = [S_Z^0(T_2Ω_X² + T_1Ω_Z²(1 + T_2²Ω_A²)) + S_AY^0 T_2²Ω_X Ω_A² + S_AX^0 Ω_X Ω_Z (T_1(1 + T_2²Ω_A²) − T_2)] / [τ_A Ω_A² (1 + T_2²Ω_A²)].

S3. Dark exciton contributions to the measured polarization

In the case when only the S_Z component is transferred, there are two terms, related to the conversion of linear polarization into circular in the bright exciton and vice versa in the dark one, for P_l^l and P_l^l′:

P_lF^l = −T_F² Ω_FX Ω_X [Ω_A²T_2² − Ω_FZ T_F Ω_Z (T_1(1 + T_2²Ω_A²) − T_2)] / [2τ_A τ_F (1 + Ω_F²T_F²) Ω_A² (1 + Ω_A²T_2²)],   (S9)

P_lF^l′ = T_F² Ω_FX Ω_X [Ω_A²T_2² Ω_FZ T_F − Ω_Z (T_1(1 + T_2²Ω_A²) − T_2)] / [2τ_A τ_F Ω_A² (1 + Ω_A²T_2²)(1 + Ω_F²T_F²)].   (S10)
The effect of optical orientation due to the transfer of the S Z component has one contribution:
P_cF^c = −T_F P_c^0 (1 + Ω_FZ²T_F²)/[τ_F(1 + Ω_F²T_F²)] × [T_2Ω_X² + T_1Ω_Z²(1 + T_2²Ω_A²)]/[τ_A Ω_A² (1 + T_2²Ω_A²)].   (S11)
With a complete transfer of the pseudospin components, there are four separate contributions to the experimental alignment effect: two are connected with the direct transfer of the linear components, and the other two with the conversion of the linear components in the bright exciton to circular ones and the inverse conversion in the dark exciton. The last two terms have the same form. The contributions associated with the transfer of the linear components of the pseudospin are:
P_lF^l = T_F P_l^0 [Ω_A²T_2 + (T_2Ω_Z² + T_1Ω_X²(1 + T_2²Ω_A²))(1 + Ω_FX²T_F²)] / [2τ_A τ_F Ω_A² (1 + Ω_A²T_2²)(1 + Ω_F²T_F²)],   (S12)

P_lF^l = −T_F T_2 P_l^0 Ω_Z T_2 Ω_FZ T_F / [τ_A τ_F (1 + Ω_A²T_2²)(1 + Ω_F²T_F²)].   (S13)
There are four different contributions to the rotation associated with the transfer of the linear components of the pseudospin.
P_lF^l′ = T_F T_2 P_l^0 (Ω_Z T_2 + Ω_FZ T_F) / [2τ_A τ_F (1 + Ω_A²T_2²)(1 + Ω_F²T_F²)],   (S14)

P_lF^l′ = T_F P_l^0 [Ω_A²T_2² Ω_Z (1 + Ω_FX²T_F²) + (T_2Ω_Z² + T_1Ω_X²(1 + T_2²Ω_A²)) Ω_FZ T_F] / [2τ_A τ_F Ω_A² (1 + Ω_A²T_2²)(1 + Ω_F²T_F²)].   (S15)
The transfer of the linear components enables two more contributions, due to the conversion of circular polarization into linear and vice versa:
P_cF^c = T_F Ω_FX T_F Ω_X [Ω_FZ T_F Ω_Z (T_1(1 + T_2²Ω_A²) − T_2) − Ω_A²T_2²] / [τ_F (1 + Ω_F²T_F²) Ω_A² (1 + Ω_A²T_2²)].   (S16)

S4. Analysis of the Stokes parameters in the magnetic field in the Faraday geometry
Relaxation time anisotropy
Magnetic-field conditions:
P_lA^l(B_1/2)/P_lA^l(B = 0) = (1 + X_A²)[2Z_A² + X_A²(1 + t_12(1 + X_A² + Z_A²))] / [(1 + t_12(1 + X_A²))(X_A² + Z_A²)(1 + X_A² + Z_A²)],   (S17)

P_lA^l′/P_lA^l = 2Z_A(X_A² + Z_A²) / [2Z_A² + X_A²(1 + t_12(1 + X_A² + Z_A²))],   (S18)
P_cA^c(B = 0)/P_cA^c(B_1/2) = (X_A² + Z_A²)(1 + X_A² + Z_A²) / [(1 + X_A²)(X_A² + t_12 Z_A²(1 + X_A² + Z_A²))].   (S19)
At low temperature I_A/(I_A + I_F) = Γ_A τ_A. Hence, for our experiment, at T_1 = 2T_2 we obtain a fourth condition and parameter binding from the amplitude of the optical alignment, Eq. (29). This condition, for fixed X_A = 1, imposes the requirement P_l^0 T_1 = 0.18 ns. Further, we can define some restrictions on the parameter values. Clearly, the lifetime of the pseudospin should be shorter than the lifetime of the exciton, T_1, T_2 ≤ τ_A. At low temperature, the fast component of the photoluminescence decay is determined by the lifetime of the bright exciton. The experimentally measured kinetics at T = 1.5 K gives τ_A = 0.9 ns (see Figure 4(a)). Thus we obtain the upper limit T_1 ≤ 0.9 ns. Another restriction is associated with the fact that P_l^0 ≤ 1: to get the proper amplitude with Γ_A = 0.8 ns⁻¹ we need T_1 ≥ 0.2 ns. Hence, by varying T_1 in the range 0.2 ns ≤ T_1 ≤ 0.9 ns we get valid intervals for the parameters. Possible values of the parameters are presented in Table S1.
An analogous condition follows from the amplitude of the optical orientation; from it we get P_l^0/P_c^0 = 0.58. Then, to extract the parameters of the dark exciton, we can use the amplitude of the narrow contour at low temperature:
F·P_lF^l(0) = γ_0 T_F T_1 P_l^0 / (2τ_F) = 0.014.   (S22)
Here we used the fact that at low temperature I_F/(I_A + I_F) = γ_0 τ_A. The long component of the photoluminescence decay at T = 1.5 K is determined by the lifetime of the dark exciton; the kinetics gives τ_F = 250 ns (see S1).

TABLE S1. Sets of bright exciton parameters in the allowed ranges at the fixed ratios T_1 = 2T_2, X_A = Ω_X T_1 = 1, Z_A = Ω_Z T_1 = 1.7, Γ_A P_l^0 T_1 = 0.14, and P_c^0/P_l^0 = 0.65, allowing the description of the broad part of the Stokes parameter dependences on the magnetic field in the Faraday geometry.

As for γ_0, we can estimate it from the low-temperature bright exciton lifetime τ_A = (Γ_A + γ_0)⁻¹ = 0.9 ns with fixed Γ_A = 0.8 ns⁻¹: γ_0 = 0.3 ns⁻¹. For our range of g_A we get g_F ∈ [3.31, 3.358]; therefore ħΩ_FX = (11.55 ± 0.05) µeV. The fact that the splitting between the dark exciton states in the absence of a magnetic field turned out to be larger than the anisotropic splitting between the bright exciton states does not contradict the problem conditions, since they have a different nature. The splitting between the states of the bright exciton is driven by the in-plane anisotropy of the nanoplatelets. For the dark exciton, the term of the Hamiltonian cubic in J also leads to the splitting of its states.
Magnetic field dependence of the relaxation
In terms of $X_A = \Omega_X T_A(0)$ and $Z_A = \Omega_Z T_A(B_{1/2})$ we can introduce a parameter that is responsible for the variation of the pseudospin lifetime in the external magnetic field:
$$\frac{1}{d} = \frac{X_{AB_{1/2}}}{X_A} = \frac{T_A(B_{1/2})}{T_A(0)} \tag{S23}$$
The conditions on the magnetic field dependences can be rewritten as follows:
$$\frac{P^l_{lA}(B_{1/2})}{P^l_{lA}(0)} = \frac{1}{d}\,\frac{2d^2 + X_A^2}{X_A^2 + d^2(1+Z_A^2)}\,\frac{1+X_A^2}{2+X_A^2} = \frac{1}{2},\qquad
\frac{P^{l'}_{lA}(B_{1/2})}{P^l_{lA}(B_{1/2})} = \frac{2d^2 Z_A}{2d^2 + X_A^2},\qquad
\frac{P^c_{cA}(0)}{P^c_{cA}(B_{1/2})} = \frac{1}{d}\,\frac{X_A^2 + d^2(1+Z_A^2)}{(1+X_A^2)(1+Z_A^2)}. \tag{S24}$$

For the given experimental values, the solution is the set of parameters $Z_A = 1.2$, $X_A = 2$, $d = 1.5$, which can be seen graphically in Figure S5 (green curves).
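As a quick numerical consistency check of Eq. (S24), the sketch below (ours, not part of the original analysis) evaluates the three ratios at $Z_A = 1.2$, $X_A = 2$, $d = 1.5$; the targets 1/2, 0.62 and 0.5 are the experimental values quoted above.

```python
# Check (ours) that Z_A = 1.2, X_A = 2, d = 1.5 satisfy the three ratios of
# Eq. (S24); target values are 1/2, 0.62 and 0.5.
Z_A, X_A, d = 1.2, 2.0, 1.5

r1 = (1 / d) * (2 * d**2 + X_A**2) / (X_A**2 + d**2 * (1 + Z_A**2)) \
     * (1 + X_A**2) / (2 + X_A**2)              # HWHM condition, ~0.50
r2 = 2 * d**2 * Z_A / (2 * d**2 + X_A**2)       # rotation/alignment, ~0.64
r3 = (1 / d) * (X_A**2 + d**2 * (1 + Z_A**2)) \
     / ((1 + X_A**2) * (1 + Z_A**2))            # circular ratio, ~0.52

print(round(r1, 3), round(r2, 3), round(r3, 3))
```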
The amplitude of optical alignment has the form:
$$P^l_{lA} = \frac{\Gamma_A P^0_l T_A(0)}{2}\,\frac{2+X_{A0}^2}{1+X_{A0}^2} = 0.1. \tag{S25}$$
From this we get $\Gamma_A S^0_{AX} T_A(0) = 0.084$. With the analogous restrictions $\Gamma_A \le 1/\tau_A = 1.33$ ns$^{-1}$ and $S^0_{AX} \le 0.5$ for the pseudospin $T_A$, we obtain $0.13 \le T_A(0) \le 0.75$ ns. Sets of the obtained bright exciton parameters are presented in Table S2.
FIG. 1. Preferred orientations of the NPLs on the substrate: (a) vertically oriented NPLs (edge-up) with c lying in the substrate plane and (b) horizontally oriented NPLs (face-down) with anisotropic NPL axis c perpendicular to the substrate.
FIG. 2. Polarized PL spectroscopy of CdSe/CdS NPLs under resonant excitation with the energy Eexc = 1.960 eV.
FIG. 3. Raman scattering spectroscopy of CdSe/CdS NPLs. (a) SFRS spectra received under Eexc = 1.936 eV in the Faraday magnetic field BF = 4 T at T = 1.5 K. Both spectra are measured under σ+ polarized excitation. The blue (red) spectrum is the σ+ (σ−) polarized component of the measured signal, respectively. (b) Experimental dependence of the Raman shift of the electron spin-flip line on magnetic field applied in the Faraday geometry (symbols) and its linear fit (line). (c) Raman scattering spectrum under excitation energy Eexc = 1.964 eV at zero magnetic field at T = 1.5 K. The laser is shown by the red filled curve.
FIG. 4. Time-resolved PL spectroscopy of CdSe/CdS NPLs under resonant excitation with the energy Eexc = 1.960 eV. (a) PL dynamics at short times (blue) with a monoexponential fit (red). The black curve represents the instrumental response function (IRF). (b) PL dynamics in different magnetic fields applied in the Voigt geometry. (c) Experimental temperature dependence of the trion decay rate (black squares) and the exciton long decay rate ΓL (blue circles), together with the one calculated by Eq. (S6) (red line). (d) Experimental dependence of the asymptotic rate ΓL on the magnetic field (blue squares) applied in the Voigt geometry and the theoretical curve (red solid line).
FIG. 5. (a) A scheme of a single nanoplatelet with its own axes X, Y, the laboratory frame x, y, z, the frame x′, y′ rotated by 45° around the z axis, and the magnetic field in the Faraday geometry. (b) Representation of the conversion of the linearly polarized states |X⟩, |Y⟩ of the bright exciton, split by the energy ħΩX, into the circularly polarized components |+1⟩, |−1⟩ in the case of gA > 0. A similar transformation takes place for the dark exciton states |FX⟩, |FY⟩, |+2⟩, |−2⟩ and is shown in Figure S1(c). (c) Pseudospin SA in the effective magnetic field with frequency ΩA. Anisotropic relaxation times τs1, τs2 correspond to longitudinal (b) and transverse (c) relaxation, respectively.
FIG. 6. Scheme of the conversion of the linear polarization $P^0_l$ in the bright exciton into the detectable effects of optical alignment $P^l_l$.
(a) corresponds to the first of conditions (26) for the optical alignment (HWHM condition); Figures S5(b) and S5(c) correspond to the two other ratios, plotted for the parameters satisfying the first HWHM condition. The horizontal dashed lines indicate the experimental values $P^{l'}_{l}(B_{1/2})/P^{l}_{l}(B_{1/2}) = 0.62$ and $P^{c}_{c}(0)/P^{c}_{c}(B_{1/2}) = 0.5$ extracted from Figure 2(b).
FIG. S3. Time-resolved PL spectroscopy of the colloidal CdSe/CdS NPLs in a wide temperature range. (a) PL dynamics under resonant excitation detected on the exciton. (b) PL dynamics under resonant excitation detected on the trion. (c) PL dynamics under non-resonant excitation detected on the exciton. (d) Experimental dependence of the decay rate ΓL on temperature for resonant (blue circles) and non-resonant (blue open circles) excitation of the exciton and resonant excitation of the trion (black squares). The theoretical curve is given by the red line.

S3. Dark exciton: two extreme cases of relaxation mechanism
FIG. S4. Polarization transfer scheme: transmission of linear (dashed red arrows) and circular (solid red arrows) components. In the last column the nonzero contributions to the optical alignment are marked in orange; the rotation of the linear polarization plane is marked in green.

FIG. S5. Three conditions on the field dependences of the Stokes parameters: at $T_1 = T_2$ there is no joint solution (blue curves); at $T_1 = 2T_2$ (orange curves) and in the case of the alternative analysis (see Appendix) at $Z_A = 1.2$, $X_{A0} = 2$, $d = 1.5$ a solution appears (green curves).
TABLE S2. Sets of bright exciton parameters in the allowed ranges in the case of $T_A(B)$ at fixed ratios $X_{A0} = \Omega_X T_A(0) = 2.08$, $X_{AB_{1/2}}/X_{A0} = T_A(B_{1/2})/T_A(0) = 2/3$, $Z_A = \Omega_Z T_A(B_{1/2}) = 1.22$, allowing description of the broad part of the Stokes parameter dependences on magnetic field in the Faraday geometry.
(b). The c axis of all NPLs under consideration is thus directed perpendicular to the substrate and along the [001] crystallographic direction. Generally, the NPL edges can be directed either along the [100] and [010] or along the [110] and $[1\bar{1}0]$ directions
Optical alignment and orientation of excitons in ensemble of core/shell CdSe/CdS colloidal nanoplatelets

O. O. Smirnova 1, I. V. Kalitukha 1, A. V. Rodina 1, G. S. Dimitriev 1, V. F. Sapega 1, O. S. Ken 1, V. L. Korenev 1, N. V. Kozyrev 1, S. V. Nekrasov 1, Yu. G. Kusrayev 1, D. R. Yakovlev 1,2, B. Dubertret 3, and M. Bayer 2

CONTENT

S1. Kinetics of the PL: temperature and magnetic field dependences
Supporting Information.
AUTHOR INFORMATION
Corresponding Authors:
Olga O. Smirnova, Email: [email protected]
Anna V. Rodina, Email: [email protected]
Dmitri R. Yakovlev, Email: dmitri.yakovlev@tu-dortmund.de
SUPPORTING INFORMATION
1 Ioffe Institute, Russian Academy of Sciences, 194021 St. Petersburg, Russia
2 Experimentelle Physik 2, Technische Universität Dortmund, 44221 Dortmund, Germany
3 Laboratoire de Physique et d'étude des Matériaux, ESPCI, CNRS, 75231 Paris, France
FIG. S2. PL and RS spectroscopy of the colloidal CdSe/CdS NPLs. (a) Normalized PL spectra at T = 1.5 K under resonant cw (red) and pulsed (black) excitation and non-resonant pulsed (blue) excitation. PL spectra (black, pulsed resonant excitation) are normalized in a way that conserves the ratio between PL intensities at different temperatures. (b) Raman scattering spectrum under excitation energy Eexc = 1.964 eV at zero magnetic field at T = 1.5 K. The laser is shown by the red line. Inset: the dependence of the bright-dark exciton energy splitting EAF on the excitation energy.
| [] |
[
"Optical Frequency Comb Noise Characterization Using Machine Learning",
"Optical Frequency Comb Noise Characterization Using Machine Learning"
] | [
"Giovanni Brajato \nDTU Fotonik\nØrsteds Pl\nTechnical University of Denmark\nBuilding 3432800Kgs. LyngbyDenmark\n",
"Lars Lundberg \nDept. of Microtechnology and Nanoscience\nPhotonics Laboratory\nChalmers University of Technology\nSE-41296GöteborgSweden\n",
"Victor Torres-Company \nDept. of Microtechnology and Nanoscience\nPhotonics Laboratory\nChalmers University of Technology\nSE-41296GöteborgSweden\n",
"Darko Zibar \nDTU Fotonik\nØrsteds Pl\nTechnical University of Denmark\nBuilding 3432800Kgs. LyngbyDenmark\n"
] | [
"DTU Fotonik\nØrsteds Pl\nTechnical University of Denmark\nBuilding 3432800Kgs. LyngbyDenmark",
"Dept. of Microtechnology and Nanoscience\nPhotonics Laboratory\nChalmers University of Technology\nSE-41296GöteborgSweden",
"Dept. of Microtechnology and Nanoscience\nPhotonics Laboratory\nChalmers University of Technology\nSE-41296GöteborgSweden",
"DTU Fotonik\nØrsteds Pl\nTechnical University of Denmark\nBuilding 3432800Kgs. LyngbyDenmark"
] | [] | A novel tool, based on Bayesian filtering framework and expectation maximization algorithm, is numerically and experimentally demonstrated for accurate frequency comb noise characterization. The tool is statistically optimum in a mean-square-errorsense, works at wide range of SNRs and offers more accurate noise estimation compared to conventional methods. | 10.1049/cp.2019.0889 | [
"https://arxiv.org/pdf/1904.11951v1.pdf"
] | 135,464,245 | 1904.11951 | 26090ef1e5d096a1987c1be2bb715706fcf456f9 |
Optical Frequency Comb Noise Characterization Using Machine Learning
Giovanni Brajato
DTU Fotonik
Ørsteds Pl
Technical University of Denmark
Building 3432800Kgs. LyngbyDenmark
Lars Lundberg
Dept. of Microtechnology and Nanoscience
Photonics Laboratory
Chalmers University of Technology
SE-41296GöteborgSweden
Victor Torres-Company
Dept. of Microtechnology and Nanoscience
Photonics Laboratory
Chalmers University of Technology
SE-41296GöteborgSweden
Darko Zibar
DTU Fotonik
Ørsteds Pl
Technical University of Denmark
Building 3432800Kgs. LyngbyDenmark
Optical Frequency Comb Noise Characterization Using Machine Learning
1MACHINE-LEARNINGFREQUENCY COMBSPHASE ESTIMATION
A novel tool, based on Bayesian filtering framework and expectation maximization algorithm, is numerically and experimentally demonstrated for accurate frequency comb noise characterization. The tool is statistically optimum in a mean-square-errorsense, works at wide range of SNRs and offers more accurate noise estimation compared to conventional methods.
Introduction
Optical frequency combs (OFCs) are envisioned to play a significant role in the next generation of high-speed optical as well as optical-wireless communication systems due to their ability to provide frequency-stable and low phase noise comb lines from a single optical source [1]. The optical phase noise of the individual comb lines is dictated by the optical reference. Importantly, since there are only two degrees of freedom in setting the comb spectrum, the optical phase noise can be correlated among frequency lines. For the application in WDM communication systems [2], accurate noise characterization in terms of the phase correlation matrix is essential. The correlation matrix provides information about the phase correlation between comb lines and is important for the design of receiver digital signal processing (DSP) algorithms [3].
In general, it is challenging to obtain the correlation matrix, as accurate optical phase tracking of comb lines is required. Homodyne detection schemes in combination with pulse shaping have been used before [4], but the solution does not measure the correlation matrix in a line-resolved manner, as required in optical communication systems. In a recent work [5], the correlation matrix has been obtained with frequency line resolution based on dual-comb spectroscopy (simultaneous electronic down-conversion with the aid of another frequency comb). This approach (see Fig. 1) captures the amplitude and phase variations of all the comb lines. The signal processing is based on standard Fourier processing. However, this method is not statistically optimum in a mean-square-error (MSE) sense and its performance cannot be guaranteed over a wide range of signal-to-noise ratios (SNRs). Typically, the SNR varies, as the power spectral density of the frequency comb is not flat. Most importantly, for low linewidths approaching 100 Hz, the conventional approach for optical phase estimation is inaccurate. It would therefore be useful to have a characterization tool that works over a wide range of SNRs and performs optimum optical phase tracking irrespective of the magnitude of the linewidth.
In this paper, we present a novel machine learning (ML) framework for computation of the phase correlation matrix from dual-comb spectroscopy measurements. The method is based on joint and statistically optimum, in terms of MSE, optical phase tracking. This approach is independent of the magnitude of the linewidth. The framework is investigated numerically and experimentally, and significant advantages over the state-of-the-art method are demonstrated in terms of the accuracy of the estimated correlation matrix and differential phase noise variance.
Machine learning framework
A detailed schematic of the set-up used for numerical and experimental investigations is shown in Fig. 1. The goal is to perform joint optical phase tracking and extract the correlation matrix of the incoming (source) optical comb that is mixed in a balanced receiver with a strong local oscillator (LO) comb. This approach assumes either that the phase noise properties of the incoming comb and the LO comb are equal, or that the phase-noise contribution of the LO comb can be neglected.
The optical phase tracking is performed after the photocurrent has been sampled by the ADC. Given the sampled photocurrent $y_k$, statistically optimal phase tracking is obtained by Bayesian filtering [6]. Implementation of the Bayesian filtering framework requires a state-space model that consists of: 1) a measurement equation, which describes the relation between $y_k$ and the time-varying optical phase, and 2) a state equation, which describes the evolution of the optical phase difference between the signal and the LO:

$$\phi_{n,k} = \phi^{\mathrm{s}}_{n,k} - \phi^{\mathrm{LO}}_{n,k}.$$
Our proposed state-space model is the following:
$$\begin{aligned}
\phi_{1,k} &= \phi_{1,k-1} + w_{1,k}\\
&\;\;\vdots\\
\phi_{L,k} &= \phi_{L,k-1} + w_{L,k}
\end{aligned} \tag{1}$$

$$y_k = \sum_{n=1}^{L} a_n \sin\big(\omega_n k T_s + \phi_{n,k}\big) + v_k \tag{2}$$
Here, $y_k$ are the discrete-time samples after the ADC, $k$ is an integer time index, $f_s = 1/T_s$ is the sampling frequency of the ADC, and $a_n = 2R\sqrt{P_n P^{\mathrm{LO}}_n}$ are the line amplitudes. $R$ is the responsivity of the photodiodes, and $P_n$ and $P^{\mathrm{LO}}_n$ are the powers of the signal and LO frequency comb lines. $v_k$ is the measurement noise contribution associated with the shot noise. It is assumed that $v_k$ has a Gaussian distribution with zero mean and variance $\sigma^2$. The frequency difference between the comb lines is expressed as $\omega_n$. The finite linewidth of the comb lines can be modelled by assuming the phase noise dynamics (1) to be a Markov random walk [7], where $\mathbf{w}_k = [w_{1,k}, \dots, w_{L,k}]^\top$ is a Langevin source with covariance matrix $\mathbf{Q}$.
Optical phase tracking can be performed once the state-space model has been defined. The main idea behind Bayesian filtering is to provide a recursive algorithm that computes a statistically optimum joint estimate of the phases $\phi_{n,k}$, for $n = 1,\dots,L$, given $y_k$. The optimum estimates will be the means $\boldsymbol{\mu}_k = [\mu_{1,k}, \dots, \mu_{L,k}]^\top$, where $\top$ represents the transpose operation, and correspond to the phases that minimize the mean squared error (MSE) [8]. In this paper, the Bayesian filtering framework is implemented using the extended Kalman filter (EKF), which we use to compute the mean value $\boldsymbol{\mu}_k$. Additionally, the system also includes unknown static parameters that need to be jointly estimated together with the dynamic parameters. The comb relative frequencies and mean amplitudes can easily be extracted by inspection of the signal power spectral density. Other parameters, such as the measurement noise variance $\sigma^2$ and the phase-noise covariance $\mathbf{Q}$, need to be inferred from the data. To learn them, we use the Expectation Maximization (EM) algorithm [9]. The EM algorithm iterates over the training data by forward filtering and backward smoothing. At each iteration, the optimal parameters $(\mathbf{Q}^{(i)}, \sigma^{2\,(i)})$ which maximize the model likelihood of the observed data are returned. The iterative process repeats until convergence.
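To make the recursion concrete, a minimal sketch of such an EKF for the model (1)-(2) is given below. It is not the authors' implementation; the function name and the initialisation are our assumptions, and the noise parameters $\mathbf{Q}$ and $\sigma^2$ (here R) are taken as known instead of being re-estimated by the EM iterations described above.

```python
import numpy as np

# Minimal extended Kalman filter for the state-space model (1)-(2).
# Our sketch, not the authors' code: Q is the phase-noise covariance,
# R the measurement-noise variance, both assumed known here.
def ekf_phase_tracking(y, a, w, Ts, Q, R):
    L = len(a)
    m = np.zeros(L)                      # posterior mean of the L phases
    P = np.eye(L)                        # posterior covariance
    means = np.empty((len(y), L))
    for k, yk in enumerate(y):
        P = P + Q                        # predict: random-walk state, Eq. (1)
        arg = w * k * Ts + m
        H = a * np.cos(arg)              # Jacobian of the measurement, Eq. (2)
        S = H @ P @ H + R                # innovation variance (scalar)
        K = P @ H / S                    # Kalman gain
        m = m + K * (yk - np.sum(a * np.sin(arg)))   # mean update
        P = P - np.outer(K, H) @ P       # covariance update, (I - K H) P
        means[k] = m
    return means                         # MSE-optimal phase estimates
```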
Numerical results
The numerical simulation of the system is done by introducing a linear relationship in the phase noise. A frequency comb with a specific inter-line correlation is generated in order to test whether the algorithms can recover the same correlation after the data has been corrupted with white noise samples. Following the signal structure of electro-optical frequency combs, we generate line-dependent phase noise, i.e. $\phi_{n,k} = \phi_{c,k} + n\,\phi_{\mathrm{RF},k}$. Here, $\phi_{c,k}$ and $\phi_{\mathrm{RF},k}$ are the carrier and RF phase noise, generated as Wiener processes. The integer line index $n$ takes values in $\{-24,\dots,24\}$, giving 49 lines in total. After generating the signal, we compare the ML method with a conventional technique for phase extraction. It consists of a bandpass filter for each comb line, using 30 MHz of bandwidth. For each filtered line, a Hilbert transform is performed to extract the phase of the individual line.
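A minimal version of this conventional baseline could look as follows (our sketch; the function name, filter order and carrier-removal step are assumptions, not the authors' code):

```python
import numpy as np
from scipy.signal import hilbert, firwin, filtfilt

# Our sketch of the conventional baseline: band-pass filter each comb line
# (30 MHz bandwidth) and take the phase of its analytic signal.
def conventional_phases(y, line_freqs, fs, bw=30e6, ntaps=501):
    t = np.arange(len(y)) / fs
    phases = []
    for f0 in line_freqs:
        taps = firwin(ntaps, [f0 - bw / 2, f0 + bw / 2],
                      pass_zero=False, fs=fs)      # band-pass FIR filter
        line = filtfilt(taps, 1.0, y)              # isolate one comb line
        phase = np.unwrap(np.angle(hilbert(line))) # instantaneous phase
        phases.append(phase - 2 * np.pi * f0 * t)  # remove the carrier term
    return np.array(phases)
```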
In Fig. 2, we show the true correlation matrix together with the correlation matrices obtained by the conventional and the ML method. The average SNR is varied from 16.53 dB to 29.05 dB. The proposed ML framework is able to extract a correlation matrix that is in excellent agreement with the true one. This is due to its ability to filter out additive measurement noise, a property that is not part of the conventional method. The filtering capabilities of the latter are limited, as it cannot remove the noise within its bandwidth. We can indeed see that the conventional method fails to capture the phase correlation, especially at lower SNR values. Next, we check the differential phase variance extracted using the proposed ML framework against the conventional approach. The differential phase is calculated from the extracted phases as $\Delta\phi_{n,k} = \phi_{n,k} - \phi_{0,k}$. From the way the data is generated, the true variance of the differential phase is a parabolic function of the line index. In Fig. 3 we can see that our ML method for extracting the variance is more accurate than the conventional one, as the ML curve overlaps the true variance curve. The conventional method suffers from measurement noise that affects the variance estimation. Our ML algorithm is capable of filtering out such noise and recovering the original phase variance.
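The parabolic dependence can be reproduced with a few lines of Python (our illustration; the Wiener step sizes are arbitrary assumptions):

```python
import numpy as np

# Our illustration of the synthetic comb phase noise: phi_n = phi_c + n*phi_RF
# for n = -24..24; the Wiener step sizes below are arbitrary assumptions.
rng = np.random.default_rng(0)
K, n = 100_000, np.arange(-24, 25)
phi_c = np.cumsum(rng.normal(0.0, 1e-3, K))        # carrier phase noise
phi_rf = np.cumsum(rng.normal(0.0, 1e-4, K))       # RF phase noise
phi = phi_c[:, None] + n[None, :] * phi_rf[:, None]  # (K, 49) phase tracks

dphi = phi - phi[:, [24]]                  # differential phase w.r.t. line 0
var = np.var(np.diff(dphi, axis=0), axis=0)
print(var)                                 # ~ (n * 1e-4)**2: parabolic in n
```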
Experimental results
Next, we test whether the optimality of our algorithm still holds when applied to experimental data. We compare the proposed ML framework with the same conventional approach described in the previous section. The optical phase tracking is performed on a digitized quadrature obtained from a down-mixed electro-optical frequency comb. The down-converted comb consists of 49 comb lines, spaced 50 MHz apart and centred at ca. 4.5 GHz, obtained using the same setup as in [5], for which we show the signal power spectrum in Fig. 1b. In Fig. 4, we compare the differential phase variance extracted using the proposed ML framework with the conventional approach. As shown in [5], we expect the differential phase to follow a quadratic trend, which can also be seen in the variance per line for both methods. However, we observe a discrepancy between the two curves, similar to the one seen in Fig. 3. Our ML method reveals a clearer quadratic line-dependent variance than the conventional method. This indicates that the proposed method is more robust to measurement noise, and that it provides a better phase estimate.
Conclusions
We have introduced, and numerically as well as experimentally investigated, a novel machine learning based framework for accurate noise characterization of optical frequency combs using dual-comb spectroscopy measurements. It has been demonstrated numerically and experimentally that the proposed framework provides more accurate phase noise characterization compared to the conventional approach and provides the degree of phase correlation over the full bandwidth in a line-resolved manner. The method holds the potential to become a reference tool for frequency noise characterization that will benefit OFC based communication systems.

Fig. 3: (Numerical) Empirical variance calculated on the extracted differential phases using machine learning (blue curve) and a conventional method (black curve).
Fig. 2: (Numerical) True correlation matrices (left matrices) shown together with the correlation matrices obtained by the conventional method (central matrices) and the machine learning method (right matrices). Each row is a simulation with a different value of the average SNR, indicated on the left.
Fig. 1: (a) System set-up for numerical and experimental investigations. Red lines represent optical paths, blue lines represent electrical/digital paths. PD: photodiode. ADC: analog-to-digital converter. BP: band-pass filter. ∠ℋ(⋅): phase extractor of the analytic signal computed using the Hilbert transform. EM-EKF: expectation maximization algorithm with extended Kalman filter; the two outputs are the phases extracted with the conventional method and with the machine learning method, respectively. (b) Power spectral density of the down-converted frequency comb in the experiment.
Fig. 4: (Experimental) Empirical variance calculated on the extracted differential phases using machine learning (blue curve) and a conventional method (black curve).
[1] Hu, H., Da Ros, F., Pu, M., et al.: 'Single-source chip-based frequency comb enabling extreme parallel data transmission', Nature Photonics, 2018, 12, (8), p. 469.
[2] Torres-Company, V., Schroeder, J., Fülöp, A., et al.: 'Laser Frequency Combs for Coherent Optical Communications', Journal of Lightwave Technology, 2019, 37, (3), pp. 1663-1670.
[3] Lundberg, L., Karlsson, M., Lorences-Riesgo, A., et al.: 'Frequency comb-based WDM transmission systems enabling joint signal processing', Applied Sciences, 2018, 8, (5), p. 718.
[4] Schmeissner, R., Roslund, J., Fabre, C., et al.: 'Spectral noise correlations of an ultrafast frequency comb', Physical Review Letters, 2014, 113, (26), 263906.
[5] Lundberg, L., Mazur, M., Fülöp, A., et al.: 'Phase Correlation Between Lines of Electro-Optical Frequency Combs', Conference on Lasers and Electro-Optics (CLEO), San Jose, CA, 2018, pp. 1-2.
[6] Särkkä, S.: 'Bayesian filtering and smoothing', vol. 3, Cambridge University Press, 2013.
[7] Kärtner, F. X., Morgner, U., Schibli, T., et al.: 'Few-cycle pulses directly from a laser', in 'Few-cycle laser pulse generation and its applications', Springer, Berlin, Heidelberg, 2004, pp. 73-136.
[8] Zibar, D., Piels, M., Jones, R., et al.: 'Machine learning techniques in optical communication', Journal of Lightwave Technology, 2016, 34, (6), pp. 1442-1452.
[9] Kokkala, J., Solin, A., Särkkä, S.: 'Sigma-Point Filtering and Smoothing Based Parameter Estimation in Nonlinear Dynamic Systems', Journal of Advances in Information Fusion, 2016, 11, (1), pp. 15-30.
| [] |
[
"Efficiency through disinformation",
"Efficiency through disinformation"
] | [
"Richard Metzler \nNew England Complex Systems Institute\n24 Mt. Auburn St02138CambridgeMAUSA\n\nDepartment of Physics\nMassachusetts Institute of Technology\n02139CambridgeMAUSA\n",
"Mark Klein \nCenter for Coordination Science\nMassachusetts Institute of Technology\n02139CambridgeMAUSA\n",
"Yaneer Bar-Yam \nNew England Complex Systems Institute\n24 Mt. Auburn St02138CambridgeMAUSA\n"
] | [
"New England Complex Systems Institute\n24 Mt. Auburn St02138CambridgeMAUSA",
"Department of Physics\nMassachusetts Institute of Technology\n02139CambridgeMAUSA",
"Center for Coordination Science\nMassachusetts Institute of Technology\n02139CambridgeMAUSA",
"New England Complex Systems Institute\n24 Mt. Auburn St02138CambridgeMAUSA"
] | [] | We study the impact of disinformation on a model of resource allocation with independent selfish agents: clients send requests to one of two servers, depending on which one is perceived as offering shorter waiting times. Delays in the information about the servers' state leads to oscillations in load. Servers can give false information about their state (global disinformation) or refuse service to individual clients (local disinformation). We discuss the tradeoff between positive effects of disinformation (attenuation of oscillations) and negative effects (increased fluctuations and reduced adaptability) for different parameter values.Competition for limited resources occurs in many different situations, and it often involves choosing the resource least popular among competitors -one can think of drivers who want to take the least crowded road, investors who want to buy hot stocks before other buyers drive the price up, computers that send requests to the least busy servers, and many more. From an individual perspective, agents in these scenarios act selfishly -they want to achieve their particular aims. At the same time, this selfish behavior can be beneficial for the system as a whole, insofar as it leads to effective resource utilization [1]; however, this is not always the case. From the point of view of the system, the problem then becomes one of distribution of resources, rather than competition.The most commonly studied model in this context, the Minority Game (MG) [2, 3, 4], has agents choose one of two alternatives, basing their decision on a short history of the global state of the system. A multitude of possible strategies for the agents can be conceived. One recurring theme in many of the MG's variations is oscillations of preferences: in some cases, preferences oscillate in time[5,6], whereas in others, a reaction of the system to a given history pattern will be followed by the opposite of that reaction the next time this pattern occurs[7,8]. The presence of oscillations indicates suboptimal resource utilization. Their source is the fact that agents make their decision based on obsolete data, i.e., there is a delay between the time that the information underlying their decision is generated and the time their decision is evaluated. This is often obscured by the use of discrete time steps in most variations of the MG. In this paper, we study a continuous-time MG-like scenario with an explicit time delay, which was inspired by the competition of computers for network server service, but can serve as a model case for other problems.We also introduce a new way to think about controlling the dynamics of the system. In previous papers, possible ways to improve efficiency were explored from the point of view of agents' strategies: how should an agent behave to achieve maximum payoff? The result, however, was measured as an aggregate quantity -the total degree of resource utilization. In this paper, we assume that the agents are selfish and short-sighted, and their strategy is not accessible to modification. Responsibility for system efficiency lies with the servers, who can influence behavior by providing incorrect information. We first introduce the system, study its native dynamics, and determine under what circumstances control measures can improve efficiency. We then present various possible scenarios of influencing the global behavior.The model -The system we consider [9] consists of two servers R 1 and R 2 , which offer the same service to a number N of clients. 
Clients send data packets to one of the servers. After a travel time τ T , the packets arrive at the server, and are added to a queue. Servers need a time τ P to process each request. We choose the time scale such that t P = 1. When a client's request is completed, the server sends a "done" message (which takes another τ T to arrive) to the client. The client is then idle for a time τ I , after which it sends a new packet. Clients receive information about the state of each server. They decide which server offers shorter waiting times based on this information, and send their packets to the respective server. However, for various reasons, the information they receive is obsolete -they have access to the length of the queues a delay time τ D ago.The system can be solved simply, if both servers accept all incoming requests and demand is distributed uniformly enough, such that both servers are busy at all times. The only relevant variables are N 1 (t) and N 2 (t), the number of clients whose data is in the queue or being processed by R 1 and R 2 , respectively, at time t; we treat them as continuous variables. Idle clients do not have to be taken into account explicitly; neither do clients who are waiting for their "done" message from the server -for our purposes, they are the same as idle agents. We will first solve the problem neglecting agents whose message is travelling to the server, then include non-vanishing travel times.There are only two processes which change the length of the queues: (a) Due to processed requests, both N 1 and N 2 decrease by 1 per time unit. (b) In the same time span, two clients (whose data was processed by R 1 and R 2 a time τ T + τ I ago) compare the obsolete values N 1 (t − τ D ) and N 2 (t − τ D ) and add their requests to the queue according | null | [
"https://export.arxiv.org/pdf/cond-mat/0312266v1.pdf"
] | 117,169,087 | cond-mat/0312266 | 879062ed633ec73af5016eae94e3c7297282bc26 |
Efficiency through disinformation
10 Dec 2003
Richard Metzler
New England Complex Systems Institute
24 Mt. Auburn St02138CambridgeMAUSA
Department of Physics
Massachusetts Institute of Technology
02139CambridgeMAUSA
Mark Klein
Center for Coordination Science
Massachusetts Institute of Technology
02139CambridgeMAUSA
Yaneer Bar-Yam
New England Complex Systems Institute
24 Mt. Auburn St02138CambridgeMAUSA
Efficiency through disinformation
10 Dec 2003
We study the impact of disinformation on a model of resource allocation with independent selfish agents: clients send requests to one of two servers, depending on which one is perceived as offering shorter waiting times. Delays in the information about the servers' state leads to oscillations in load. Servers can give false information about their state (global disinformation) or refuse service to individual clients (local disinformation). We discuss the tradeoff between positive effects of disinformation (attenuation of oscillations) and negative effects (increased fluctuations and reduced adaptability) for different parameter values.Competition for limited resources occurs in many different situations, and it often involves choosing the resource least popular among competitors -one can think of drivers who want to take the least crowded road, investors who want to buy hot stocks before other buyers drive the price up, computers that send requests to the least busy servers, and many more. From an individual perspective, agents in these scenarios act selfishly -they want to achieve their particular aims. At the same time, this selfish behavior can be beneficial for the system as a whole, insofar as it leads to effective resource utilization [1]; however, this is not always the case. From the point of view of the system, the problem then becomes one of distribution of resources, rather than competition.The most commonly studied model in this context, the Minority Game (MG) [2, 3, 4], has agents choose one of two alternatives, basing their decision on a short history of the global state of the system. A multitude of possible strategies for the agents can be conceived. One recurring theme in many of the MG's variations is oscillations of preferences: in some cases, preferences oscillate in time[5,6], whereas in others, a reaction of the system to a given history pattern will be followed by the opposite of that reaction the next time this pattern occurs[7,8]. The presence of oscillations indicates suboptimal resource utilization. Their source is the fact that agents make their decision based on obsolete data, i.e., there is a delay between the time that the information underlying their decision is generated and the time their decision is evaluated. This is often obscured by the use of discrete time steps in most variations of the MG. In this paper, we study a continuous-time MG-like scenario with an explicit time delay, which was inspired by the competition of computers for network server service, but can serve as a model case for other problems.We also introduce a new way to think about controlling the dynamics of the system. In previous papers, possible ways to improve efficiency were explored from the point of view of agents' strategies: how should an agent behave to achieve maximum payoff? The result, however, was measured as an aggregate quantity -the total degree of resource utilization. In this paper, we assume that the agents are selfish and short-sighted, and their strategy is not accessible to modification. Responsibility for system efficiency lies with the servers, who can influence behavior by providing incorrect information. We first introduce the system, study its native dynamics, and determine under what circumstances control measures can improve efficiency. We then present various possible scenarios of influencing the global behavior.The model -The system we consider [9] consists of two servers R 1 and R 2 , which offer the same service to a number N of clients. 
Clients send data packets to one of the servers. After a travel time τ T , the packets arrive at the server, and are added to a queue. Servers need a time τ P to process each request. We choose the time scale such that t P = 1. When a client's request is completed, the server sends a "done" message (which takes another τ T to arrive) to the client. The client is then idle for a time τ I , after which it sends a new packet. Clients receive information about the state of each server. They decide which server offers shorter waiting times based on this information, and send their packets to the respective server. However, for various reasons, the information they receive is obsolete -they have access to the length of the queues a delay time τ D ago.The system can be solved simply, if both servers accept all incoming requests and demand is distributed uniformly enough, such that both servers are busy at all times. The only relevant variables are N 1 (t) and N 2 (t), the number of clients whose data is in the queue or being processed by R 1 and R 2 , respectively, at time t; we treat them as continuous variables. Idle clients do not have to be taken into account explicitly; neither do clients who are waiting for their "done" message from the server -for our purposes, they are the same as idle agents. We will first solve the problem neglecting agents whose message is travelling to the server, then include non-vanishing travel times.There are only two processes which change the length of the queues: (a) Due to processed requests, both N 1 and N 2 decrease by 1 per time unit. (b) In the same time span, two clients (whose data was processed by R 1 and R 2 a time τ T + τ I ago) compare the obsolete values N 1 (t − τ D ) and N 2 (t − τ D ) and add their requests to the queue according
to this information. We write delay-differential equations for $N_i$:

$$\frac{dN_1}{dt} = 2\,\Theta\big(N_2(t-\tau_D) - N_1(t-\tau_D)\big) - 1;\qquad
\frac{dN_2}{dt} = 2\,\Theta\big(N_1(t-\tau_D) - N_2(t-\tau_D)\big) - 1, \tag{1}$$
where Θ stands for the Heaviside step function. This can be simplified even more by introducing $A(t) = N_1(t) - N_2(t)$, the difference in queue lengths:

$$\frac{dA}{dt} = -2\,\mathrm{sign}\big(A(t - \tau_D)\big). \tag{2}$$
This has a steady-state solution
$$A(t) = 2\tau_D\,\mathrm{tri}\!\left(\frac{t}{4\tau_D} + \phi\right), \tag{3}$$

where tri(x) is the triangle function

$$\mathrm{tri}(x) = \begin{cases} 4x - 1 & \text{for } 0 \le x < 1/2,\\ -4(x - 1/2) + 1 & \text{for } 1/2 \le x < 1, \end{cases} \qquad \text{periodic with period } 1, \tag{4}$$
and φ is a phase determined by initial conditions. Eq. (3) shows that the solution is oscillatory. The frequency of oscillation is determined only by the delay, and the amplitude by the ratio of delay time to processing time; the total number of clients does not play a role. Clients typically spend much of their time with their request in the queue, and adding more clients only makes both queues longer. Also, if the delay goes to zero, so does the amplitude of oscillations: the minority game is trivial if agents can instantaneously and individually respond to the current state. Fig. 1 shows that computer simulations are in good agreement with Eq. (3) and in particular that the treatment of $N_i$ as continuous variables works well even for small amplitudes.
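The triangle-wave solution is easy to verify by direct Euler integration of Eq. (2); the following sketch (ours, with illustrative parameters) recovers the amplitude $2\tau_D$:

```python
import numpy as np

# Euler integration of Eq. (2), dA/dt = -2 sign(A(t - tau_D)); our sketch with
# illustrative parameters. The steady state is the triangle wave of Eq. (3)
# with period 4*tau_D and amplitude 2*tau_D.
tau_D, dt, T = 10.0, 0.01, 200.0
steps, lag = int(T / dt), int(tau_D / dt)
A = np.empty(steps)
A[:lag + 1] = 1.0                          # constant initial history
for k in range(lag, steps - 1):
    A[k + 1] = A[k] - 2.0 * np.sign(A[k - lag]) * dt

print(A.min(), A.max())                    # -> close to -2*tau_D and +2*tau_D
```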
Introducing a non-vanishing travel time τ T has the same effect on A(t) as increasing the delay time: it leads to the delay-differential equation
$$\frac{dA}{dt}(t + \tau_T) = -2\,\mathrm{sign}\big(A(t - \tau_D)\big). \tag{5}$$
The solution is given by Eq. (3) with $\tau_D$ replaced by $\tau_D + \tau_T$.
The impact of idle servers - The case where oscillations become so strong that servers go idle periodically ($2\tau_D > N$) can be treated in a similar framework, for $\tau_T = 0$: once the difference in queue lengths reaches the value $\pm N$, one queue ceases to process requests. Hence, the rate of requests at the other server drops from 2 to 1, exactly the rate at which it keeps processing them. The queue length at the active server therefore stays constant for some time $\tau_L$. An example of the resulting curve can be seen in Fig. 1 (bottom). Starting from the time where $A(t)$ crosses the zero line, it will take a time $\tau_D$ for clients to realize that they are using the "wrong" server, so $\tau_D = \tau_L + N/2$, or $\tau_L = \tau_D - N/2$. The period $T$ of the oscillations is then $T = 2\tau_L + 2N = 2\tau_D + N$, which is smaller than $4\tau_D$. Data throughput of the system drops from 2 to $1 + N/(2\tau_D)$ (in units of $1/\tau_P$). All of this is again in good agreement with simulations.
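For concreteness, the idle-regime expressions can be wrapped in a small helper (our sketch; the numbers are illustrative and satisfy $2\tau_D > N$):

```python
# Our helper for the idle-server regime (2*tau_D > N), using the expressions
# derived above; the numbers are illustrative.
def idle_regime(N, tau_D):
    tau_L = tau_D - N / 2             # plateau time of the active queue
    period = 2 * tau_D + N            # oscillation period (< 4*tau_D)
    throughput = 1 + N / (2 * tau_D)  # data throughput in units of 1/tau_P
    return tau_L, period, throughput

print(idle_regime(N=40, tau_D=30))    # -> (10.0, 100, 1.666...)
```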
The results above specify the system parameters for which oscillations affect throughput, and how strong the impact is. We now consider ways for the servers to reduce oscillations. Two methods suggest themselves: global disinformation from both servers and individual rejection by each server.
Global disinformation -If the servers have control over the information that clients receive on the servers' status, they can intentionally make the information unreliable. Let us assume clients have a probability p of receiving the wrong answer, and accordingly choose the "wrong" server. The update equations are:
$$\frac{dN_1}{dt} = 2\big[(1-p)\,\Theta\big(-A(t-\tau_D)\big) + p\,\Theta\big(A(t-\tau_D)\big)\big] - 1;\qquad
\frac{dN_2}{dt} = 2\big[(1-p)\,\Theta\big(A(t-\tau_D)\big) + p\,\Theta\big(-A(t-\tau_D)\big)\big] - 1, \tag{6}$$
leading to

$$\frac{dA}{dt} = -2(1-2p)\,\mathrm{sign}\big(A(t-\tau_D)\big). \tag{7}$$

This equation has the form of Eq. (2) with a prefactor of $1-2p$, and has a steady-state solution

$$A(t) = 2\tau_D(1-2p)\,\mathrm{tri}\!\left(\frac{t}{4\tau_D} + \phi\right), \tag{8}$$
for p < 1/2. At p = 1/2, no information is available: clients' decisions are random, and queue lengths perform a random walk, whose fluctuations are not captured by the deterministic framework we are using. Even for values p < 1/2, fluctuations may become larger than the typical amplitude of oscillations, and thus dominate the dynamics. For p > 1/2, users migrate systematically from the less busy to the busier server, until one is idle much of the time, and the other has almost all clients in its queue. The trade-off between reduced oscillations and increased fluctuations can be seen in Fig. 2. Rather than measuring the amplitude of oscillations, the root mean square $A_{\mathrm{rms}} = \langle A^2 \rangle^{1/2}$ of A(t) is shown. For a pure triangle function of amplitude a, one gets $A_{\mathrm{rms}} = a/\sqrt{3}$. For small p, the amplitude is reduced linearly; for larger p, fluctuations increase, dominating the dampened oscillations. When the amplitude of the undisturbed system is small, fluctuations have a large impact. As the amplitude of oscillations gets larger, the impact of fluctuations becomes smaller, and the value of p where fluctuations dominate moves closer to 1/2.
Under the influence of randomness, A performs a biased random walk: let us assume that server $R_2$ is currently preferred. In each unit of time, $A/2$ increases by 1 with probability $p^2$ (the two clients processed both go to $R_1$), stays constant with probability $2p(1-p)$, and decreases by 1 with probability $(1-p)^2$ (both clients move from $R_1$ to $R_2$). To reproduce quantitatively the effects of fluctuations, one can numerically average $A^2$ over such a random walk that takes place in two phases: the first phase lasts until $A = 0$ is reached; the second takes another $\tau_D$ steps until the direction of the bias is reversed. The probability distribution of A at the beginning of the half-period has to be chosen self-consistently such that it is a mirror image of the probability distribution at the end; the proper choice for $A/2$ is a Gaussian restricted to positive values with mean $(1-2p)\tau_D$ and variance $2p(1-p)\tau_D$. The numerical results are shown in Fig. 2; they agree well with values from the simulation. Note that in the above treatment, we neglected complications like multiple crossings of the $A = 0$ line.
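A direct Monte-Carlo version of this random walk (our sketch, with illustrative parameters) reproduces the trade-off between damped oscillations and growing fluctuations:

```python
import numpy as np

# Monte-Carlo version of the biased random walk (our sketch): per unit time two
# requests arrive, each joining the "wrong" queue with probability p according
# to the delayed sign of A. Parameter values are illustrative.
rng = np.random.default_rng(1)
tau_D, p, T = 50, 0.3, 200_000
A = np.zeros(T)
A[0] = 2 * tau_D * (1 - 2 * p)               # start near the amplitude, Eq. (8)
for t in range(1, T):
    s = np.sign(A[max(t - 1 - tau_D, 0)])    # delayed queue information
    n1 = rng.binomial(2, p if s > 0 else 1 - p)  # requests joining queue 1
    A[t] = A[t - 1] + 2 * n1 - 2             # dA = (# to R1) - (# to R2)

print(np.sqrt(np.mean(A[T // 2:] ** 2)))     # A_rms, cf. Fig. 2
```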
Adaptability -Another aspect that determines what degree of disinformation should be chosen is adaptability of the system. The reason to have public information about the servers in the first place is to be able to compensate for changes in the environment: e.g., one server might change its data throughput due to hardware problems, or some clients might, for whatever reason, develop a strong preference for one of the servers. If the other agents can respond to these changes, global coordination remains good; if their ability to understand the situation is too distorted by disinformation, global efficiency suffers.
Let us assume that a fraction b of agents decide to send their requests to server 1, regardless of the length of queues. Under global disinformation, out of the fraction 1 − b that is left, another fraction p will send their requests to the wrong server, and yet another fraction p is needed to compensate for that. So a fraction (1 − b)(1 − 2p) is left to compensate for the action of the biased group, and the maximum level of disinformation that still leads to reasonable coordination is p = (1 − 2b)/(2 − 2b) - larger levels lead to large differences in queue length, and finally to loss of data throughput by emptying a queue. That estimate is confirmed by simulations (see Fig. 2). Similar arguments hold if the preferences vary slowly compared to oscillation times. On the other hand, if the preferences of the biased agents oscillate in time with a period smaller than the delay time, they average out and have little effect on the dynamics.
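The stated threshold follows from balancing the compensating fraction against the biased one; here is a one-line symbolic check (using sympy; the balance equation is our reading of the argument above):

```python
import sympy as sp

b, p = sp.symbols("b p", positive=True)
# at the threshold, the compensating fraction (1 - b)(1 - 2p)
# exactly balances the biased fraction b
threshold = sp.solve(sp.Eq((1 - b) * (1 - 2 * p), b), p)[0]
print(sp.simplify(threshold - (1 - 2 * b) / (2 - 2 * b)))  # -> 0
```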
A similar argument also applies if the servers have different capacity. Let us say R_1 has a processing time τ_P1, whereas R_2 has τ_P2 > τ_P1. A fraction f = τ_P2/(τ_P1 + τ_P2) > 1/2 of clients should choose R_1 - if p is smaller than 1 − f, the queue of R_1 will not become empty; otherwise it will.
Individual rejection - Even if servers cannot influence the public information on queue status, they can influence the behavior of clients directly: they claim they are not capable of processing a request, and reject it - let us say, with a constant probability r. Compared to global disinformation, a new possibility arises that a request bounces back and forth several times between the servers, but that adds nothing new in principle: the fraction of requests that end up at the server that they tried at first is (1 − r) + r^2(1 − r) + r^4(1 − r) + ... = 1/(r + 1), whereas a fraction r/(r + 1) will be processed at the other server. This is equivalent to setting p = r/(1 + r) in the "global disinformation" scheme, and gives equivalent results. Choosing r close enough to 1 reduces the amplitude of oscillations dramatically; however, each message is rejected a large number of times on average, generating large amounts of extra traffic.
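The geometric series is easy to verify numerically (r = 0.6 is an arbitrary example value):

```python
# Probability that a request ends up at its first-choice server after
# bouncing between servers with rejection probability r (even bounce counts).
r = 0.6
p_first = sum((1 - r) * r**k for k in range(0, 400, 2))
print(p_first, 1 / (1 + r))          # both ~ 0.625
print(1 - p_first, r / (1 + r))      # diverted fraction, ~ 0.375
```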
Load-dependent rejection - Rather than setting a constant rejection rate, it seems intuitive to use a scheme of load-dependent rejection (LDR), in which r_i depends on the current length of the queue. This is being considered for preventing the impact of single server overload in Ref. [10]. For example, let us consider the case where r_i = cN_i if cN_i < 1, and 1 otherwise, with some appropriately chosen constant c. The analysis from the "individual rejection" section can be repeated with the additional slight complication of two different rejection rates r_1 and r_2. A fraction (1 − r_1)/(1 − r_1 r_2) of agents who initially try server 1 ends up being accepted by it, whereas a fraction r_1(1 − r_2)/(1 − r_1 r_2) finally winds up at server 2, and vice versa for clients who attempted server 2 first. Combining the resulting delay-differential equations for N_1 and N_2 into one for A, one obtains
$$\frac{dA}{dt} = \frac{2}{1 - r_1 r_2}\Big[\Theta(-A(t - \tau_D))\,(1 - 2r_1 + r_1 r_2) - \Theta(A(t - \tau_D))\,(1 - 2r_2 + r_1 r_2)\Big]. \tag{9}$$
We can now substitute the load-dependent rates. We write them as follows: r_1 = r̄ + c′A, r_2 = r̄ − c′A, with r̄ = (r_1 + r_2)/2 and c′ = c/2. For small amplitudes A relative to the total number of players N, the deviation from r̄ does not play a significant role, and it is r̄ that determines behavior, yielding the same results as a constant rejection rate. For larger relative amplitudes, the oscillations are no longer pure triangle waves, but have a more curved tip. Figure 3 shows A_rms for load-dependent rejection, compared to constant-rate rejection with r = r̄. These nonlinear effects make LDR more efficient at suppressing oscillations, at least if 2τ_D is not small compared to N. They also provide for a restoring force that suppresses fluctuations effectively. It follows that LDR is better at improving data throughput in parameter regimes where servers empty out, which is confirmed by simulations.
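A small symbolic check (sympy; symbol names ours) that at A = 0 the rate in Eq. (9) coincides with that of constant-rate rejection at r = r̄, consistent with the claim that r̄ alone determines the small-amplitude behavior:

```python
import sympy as sp

A, rbar, cp = sp.symbols("A rbar cprime", positive=True)
r1, r2 = rbar + cp * A, rbar - cp * A              # load-dependent rates
rate = 2 * (1 - 2 * r1 + r1 * r2) / (1 - r1 * r2)  # Eq. (9), branch A < 0
rate0 = sp.simplify(rate.subs(A, 0))
print(rate0)                                       # 2(1 - rbar)/(1 + rbar)
# equals constant-rate rejection with r = rbar, since p = r/(1 + r):
print(sp.simplify(2 * (1 - 2 * rbar / (1 + rbar)) - rate0))   # -> 0
```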
We note one problem with LDR: if c > 2/N, both servers have the maximal number 1/c < N/2 of clients in their queue most of the time, while the rest of the clients are rejected with probability 1 from both servers. For effective operation, this means that the constant c in LDR should be chosen smaller than 2/N, which requires knowledge of N.
Discussion -We have introduced a model for the coordination of clients' requests for service from two equivalent servers, found the dependencies of the resulting oscillations on the parameters of the model, and determined when and how these oscillations decrease data throughput.
We have then suggested a number of server-side ways to dampen the oscillations, which involve purposely spreading wrong information about the state of the servers. All of these schemes can achieve an improvement, showing that the presence of faulty or incomplete information can be beneficial for coordination. The margins for improvement are higher in the regime of large numbers, when the amplitude is on the order of many tens or hundreds rather than a few individuals -in the latter case, increased fluctuations can outweigh the benefits of reduced oscillation. While some disinformation generally improves performance, monitoring of the average load and amplitude is necessary to choose the optimal degree of disinformation. The basic ingredients of the server-client scenario (delayed public information and minority-game-like structure) appear in many circumstances. One can think of traffic guidance systems that recommend one of two alternative routes, stock buying recommendations in magazines, career recommendations by employment centers, and others. Exploring the impact of disinformation on these problems is certainly worthwhile.
FIG. 1: Unmodified dynamics of the system. Top: Comparison between simulations with N = 100 and τ_T = 0 to Eq. (3) (legend: simulation and theory curves for τ_D = 5, 10, 15). The delay is τ_D = 5, yielding a period of 20 and an amplitude of 10. Bottom: An example of A(t) in the regime where servers go idle periodically.
FIG. 2: Top: Global disinformation. The y-axis shows the root mean square difference in queue lengths, rescaled by the amplitude of the undisturbed system. The theory for negligible fluctuations is given by Eq. (8); agreement with simulations becomes better as absolute amplitudes increase. The impact of fluctuations can be modeled using a discrete random walk. Bottom: Adaptability. If a percentage b of clients always chooses the same server, disinformation disrupts coordination for values p ≳ (1 − 2b)/(2 − 2b), indicated by vertical lines for several values of b (legend: τ_D = 10 with b = 0, 0.1, 0.2, 0.3, plus the fluctuation-free theory for b = 0).

FIG. 3: Load-dependent rejection (shown for τ_D = 15, N = 200, against constant-rate rejection): compared to rejection with a constant rate, LDR is more efficient at suppressing both oscillations and fluctuations. The former becomes more pronounced if 2τ_D is not small compared to N.
[1] A. Smith, An Inquiry into the Nature and Causes of the Wealth of Nations (Methuen and Co., Ltd., London, 1904).
[2] D. Challet and Y.-C. Zhang, Physica A 246, 407 (1997).
[3] D. Challet and M. Marsili, Phys. Rev. E 60, R6271 (1999).
[4] Minority game homepage (with extensive bibliography), http://www.unifr.ch/econophysics/minority.
[5] E. Nakar and S. Hod (2002), cond-mat/0206056.
[6] G. Reents, R. Metzler, and W. Kinzel, Physica A 299, 253 (2001).
[7] M. Marsili and D. Challet (2000), cond-mat/0004376.
[8] R. Savit, R. Manuca, and R. Riolo, Phys. Rev. Lett. 82, 2203 (1999).
[9] M. Klein and Y. Bar-Yam, Handling Resource Use Oscillation in Open Multi-Agent Systems, AAMAS Workshop (2003).
[10] B. Braden, D. Clark, et al., Recommendations on Queue Management and Congestion Avoidance in the Internet, Network Working Group (1998).
| [] |
[
"Hierarchical Subquery Evaluation for Active Learning on a Graph",
"Hierarchical Subquery Evaluation for Active Learning on a Graph"
] | [
"Oisin Mac \nUniversity College London\n\n",
"Aodha Neill \nUniversity College London\n\n",
"D F Campbell \nUniversity College London\n\n",
"Jan Kautz \nUniversity College London\n\n",
"Gabriel J Brostow \nUniversity College London\n\n"
] | [
"University College London\n",
"University College London\n",
"University College London\n",
"University College London\n",
"University College London\n"
] | [] | To train good supervised and semi-supervised object classifiers, it is critical that we not waste the time of the human experts who are providing the training labels. Existing active learning strategies can have uneven performance, being efficient on some datasets but wasteful on others, or inconsistent just between runs on the same dataset. We propose perplexity based graph construction and a new hierarchical subquery evaluation algorithm to combat this variability, and to release the potential of Expected Error Reduction.Under some specific circumstances, Expected Error Reduction has been one of the strongest-performing informativeness criteria for active learning. Until now, it has also been prohibitively costly to compute for sizeable datasets. We demonstrate our highly practical algorithm, comparing it to other active learning measures on classification datasets that vary in sparsity, dimensionality, and size. Our algorithm is consistent over multiple runs and achieves high accuracy, while querying the human expert for labels at a frequency that matches their desired time budget. | 10.1109/cvpr.2014.79 | [
"https://arxiv.org/pdf/1504.08219v1.pdf"
] | 5,639,593 | 1504.08219 | a1dfdbdabe23e252fd07abd6b05dbea8a3d31bc5 |
Hierarchical Subquery Evaluation for Active Learning on a Graph
Oisin Mac Aodha
University College London
Neill D. F. Campbell
University College London
Jan Kautz
University College London
Gabriel J. Brostow
University College London
Hierarchical Subquery Evaluation for Active Learning on a Graph
To train good supervised and semi-supervised object classifiers, it is critical that we not waste the time of the human experts who are providing the training labels. Existing active learning strategies can have uneven performance, being efficient on some datasets but wasteful on others, or inconsistent just between runs on the same dataset. We propose perplexity based graph construction and a new hierarchical subquery evaluation algorithm to combat this variability, and to release the potential of Expected Error Reduction.Under some specific circumstances, Expected Error Reduction has been one of the strongest-performing informativeness criteria for active learning. Until now, it has also been prohibitively costly to compute for sizeable datasets. We demonstrate our highly practical algorithm, comparing it to other active learning measures on classification datasets that vary in sparsity, dimensionality, and size. Our algorithm is consistent over multiple runs and achieves high accuracy, while querying the human expert for labels at a frequency that matches their desired time budget.
Introduction
Bespoke object recognizers are almost mature enough to be useful to people in practice. A major hurdle is how to procure enough training labels to tune a semi-supervised model for a specified classification task. While unskilled Mechanical Turkers are willing to label images of food at $1.40 per image [25], the costs are massive for recruiting and paying specialists like doctors or scientists. Whether they are experts or part of an online crowd, people need practical and reliable Active Learning (AL) to suggest which unlabeled image they, as the oracle, should label next. Choosing the query images in the right order gives better classification after fewer interrogations of the oracle.
During a training session, the classifier model starts with only unlabeled examples, picks one, queries the human for its label, and then quickly re-trains the classifier so the process can repeat with queries selected among the remaining unlabeled examples. We therefore work within the popular graph based semi-supervised learning (SSL) framework, where each image is represented as a vertex in a weighted graph, weights encode similarity between image feature vectors, and vertices that have already been queried have labels. Whether the human is done providing class labels or not, classification of all datapoints is performed directly in feature space by propagating available label information over the graph.
Designing a graph based AL framework requires three steps: 1) building a graph of the unlabeled datapoints in feature-space, 2) selection of an AL criterion for measuring the informativeness of possible queries, and 3) selecting an inference method for evaluating the criterion on the graph. There are many benefits to this framework, but forming the right combination of these three is an acknowledged challenge. The other steps are especially influenced by the AL criterion, chosen to decide which unlabeled image will be the next query. In particular, Expected Error Reduction (EER) is very attractive (see § 3.1), but naive incarnations of it are prohibitively costly. Each query put to the oracle is preceded by computing "subqueries" to each unlabeled example; a subquery simulates how the updated predictions would change if that individual datapoint received this or that label from the oracle.
We therefore propose a method for graph construction that is good in its own right, but crucially, organizes the data so that the EER criterion can be exploited effectively. Building on our graph construction, our main contribution is the proposed hierarchical subquery evaluation, which allows us to ask the oracle for a label that maximizes EER, without having to compute EER exhaustively for all unlabeled images, and without heuristics that hurt the overall learning curve. Our many experiments show that the significant benefits of computing EER by traversing our hierarchical representation of the data are 1) that we can cope with datasets having a broad variety of sparsity, dimensionality, and size, 2) that we balance exploration vs. exploitation to get good accuracy quickly and refine decision boundaries as needed within the time budget specified by the user, and 3) that empirically, we have highly consistent accuracy when labeling a given dataset. Our experiments benchmark our approach against alternative AL criteria and alternative graph constructions, and establish the repeatability of our approach across different datasets.
Related Work
Here we cover only the most relevant related works, and recommend [27] for a thorough overview of active learning. Active learning has been successfully applied to many different computer vision problems including tracking [32], image categorization [17], object detection [31], semantic segmentation [30], and image [1] and video segmentation [8], with both human and automatic oracles [18]. Compared to the body of work on active learning in general, there are relatively few active learning methods for image classification which facilitate interactive annotation. The challenge with creating interactive algorithms is that the time to retrain the model, once a labeled example is provided, can be long if not performed incrementally. This delay can also be further exacerbated by the type of active learning criterion used. Yao et al. [37] propose object detection based on efficient incremental training of Hough voting forests. Operating in real-time, their system is able to predict an annotation cost for an image and provides feedback to the user. However, they do not exploit the unlabeled data in the pool when updating their model. Batra et al.
[1] present a system for interactive image co-segmentation which asks the user to annotate the region deemed to be most informative by the current model. Wang et al. [34] perform cell image annotation using a semi-supervised graph labeling approach and exploit fast updating of the graph for interactive annotations. Unlike our work, they do not explore the merits of different active learning criteria.
Semi-Supervised Active Learning
In pool based active learning we have access to the unlabeled data up front, before querying the oracle. In contrast to standard supervised learning, semi-supervised learning (SSL) exploits structure in the unlabeled data. In this paper we are concerned with graph based SSL; however, our proposed subquery evaluation scheme can be applied to any pool based active learning task where the unlabeled data is available during training. In graph based SSL, datapoints are represented as nodes in a graph and edges between the nodes encode similarity in feature space. The premise is that datapoints near each other in feature space share the same label. Graph based transductive algorithms can be efficient to evaluate in closed form, typically only requiring simple matrix operations to propagate label information around the graph.

Graph Based SSL: Zhu et al. [40] propose an approach to SSL based on defining harmonic functions on Gaussian random fields. The advantage of their method is that, unlike graph cut based formulations [2], it produces a probability distribution over the possible class labels for each datapoint. Having real probabilities opens the door to a broader range of active learning strategies. The LGC method of Zhou et al. [38] adds additional regularization by balancing the information a node receives from the labeled set and its neighbors, but at the expense of allowing a labeled node to change class. For both methods it is also possible to include a label regularization term to address class imbalance in the data [34].
As the number of datapoints increases, it can quickly become infeasible to perform the large matrix inversions that are required by many graph based SSL algorithms. Iterative algorithms do not require a matrix inversion but can take many iterations to converge [39, 38]. Options to overcome this scalability issue include reducing the effective graph size using mixture models in feature space [41], nonparametric regression of the labels through a subset of anchor nodes [22], or assuming the data to be dimensionally separable in order to approximate the eigenvectors of the normalized graph Laplacian [10].

Graph Construction: It is well known that graph based methods are highly sensitive to the choice of edge weights [16]. A standard approach for graph construction is to first sparsify the fully-connected graph and then reweight the remaining edges. Sparsification is important because, in higher dimensions, the distances between far away points become less meaningful. K-nearest neighbor and distance thresholding are common choices for sparsification. However, they suffer from the problem that the resulting graph can be uneven, as there is no guarantee on the number of edges at each node. Approaches exist to guarantee regular graphs (the same number of edges at each node) but can be computationally costly [16]. However, for a small decrease in graph quality, it is possible to build approximately regular graphs at reduced cost [36]. In the reweighting step, a similarity measure between datapoints must be defined. One standard choice of similarity is the RBF kernel, and several methods have been proposed to define a suitable bandwidth parameter. If there are labeled datapoints it can be learned [40]; alternatively it can be defined per dimension, based on the average distance between all neighbors [4], local distance [12], or by direct optimization [33]. Wang et al. [35] jointly learn the graph structure and label prediction by minimizing a cost function over the graph and its labeling. In this paper we propose a method for graph reweighting inspired by ideas from dimensionality reduction [13].

Active Learning on Graphs: Many different active learning criteria exist in the literature. Methods range from random querying, uncertainty sampling, margin reduction, density sampling, expected model change, and expected error reduction [27]. An optimal strategy would trade off between exploration and exploitation: initially exploring the space when there are few labels and uncertainty is high and then, when more annotations have been acquired, exploiting this information to perform boundary refinement between the classes. Algorithms that switch between density based and uncertainty sampling typically require hyperparameters that are dataset specific [3]; however, more complex approaches strive to do this automatically [20, 7]. Expected error reduction (EER) [26] performs this trade off naturally. Instead of measuring a surrogate, it seeks out datapoints that will make the overall class distributions on the unlabeled data more discriminative by attempting to reduce the model's future generalization error.
However, full EER requires O(N^2) operations to determine which example minimizes the expected error under the current model, where N is the size of the dataset. This complexity stems from needing to retrain the model for each of the N subqueries in the unlabeled pool to evaluate their expected error. Efficient update methods exist for some commonly used algorithms, e.g. in graph based SSL, but even then full EER is only feasible on small graphs. Zhu et al. [42] demonstrated the superior performance of EER over other active learning criteria when combining it with their Gaussian fields formulation [40], and this serves as one of our baselines.

Clustering Approaches: To cope with larger datasets, different approaches have been proposed to reduce the number of subqueries that must be evaluated. Strategies include only considering a subsample of the full data [26], or using the inherent structure of the data to limit the influence and selection of subqueries [23]. Using the same manifold assumption as SSL, these methods cluster the data in its original feature space. Macskassy [23] explores graph based metrics, commonly used in community detection, to identify cluster centers (each assumed to contain the same class) that are then evaluated using EER. This is related to the hierarchical clustering method for category discovery of Vatturi [29]. However, by limiting subqueries to cluster centers, these clustering based approaches are unable to perform boundary refinement.
In [6], hierarchical clustering is used to define bounds on sampling statistics. Each of their samples (a full query to the oracle) is randomly selected from a strict partition of a prespecified clustering (similar to a breadth first search) and only shares label information within its cluster. Our proposed method also uses a hierarchical representation, but differs in that it uses the hierarchy for efficient sampling with EER, with the added advantages of graph based SSL, and without sacrificing the ability to refine class boundaries.
Graph Based Semi-Supervised Framework
Here we review graph based SSL, and detail our innovations in § 4. In pool based learning, one has a dataset D = {(x_1, y_1), ..., (x_N, y_N)} where each x_i is a Q-dimensional feature vector and y_i ∈ {1, ..., C} is its corresponding class label. We split D into two disjoint sets D_u and D_l, corresponding to the sets of unlabeled and labeled examples. For active learning, the set of labeled examples is initially empty as only the oracle knows the values of each y_i. One can define a graph G with a set of vertices V, corresponding to the pool of N examples in D, and the set of edges is represented by a connectivity weight matrix W ∈ R^{N×N}. Each entry w_{ij} in W represents the similarity in some feature space between datapoints x_i and x_j. Our goal is to estimate the distribution over the class labels for each of the nodes in the graph, f_{ic} = P(y_i = c | x_i). In matrix notation, these distributions, F, are represented as an N × C matrix, where each row is a different datapoint.
Zhu et al. [40] propose a method for semi-supervised learning based on Gaussian random fields and harmonic energy minimization (GRF). Their harmonic energy minimization can be computed in closed form using matrix operations on the graph Laplacian,
$$F_u = (D_{uu} - W_{uu})^{-1} W_{ul} Y_l, \tag{1}$$
where D is a diagonal matrix with entries $d_{ii} = \sum_j w_{ij}$. The matrices are split into labeled and unlabeled parts
$$W = \begin{pmatrix} W_{ll} & W_{lu} \\ W_{ul} & W_{uu} \end{pmatrix}, \quad \text{and} \quad Y = \begin{pmatrix} Y_l \\ Y_u \end{pmatrix}. \tag{2}$$
Again using matrix notation, Y is the same size as F where all entries are set to 0 except where the oracle labels datapoint x_i with class c, making y_{ic} = 1.
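As an illustration, Eq. (1) can be evaluated directly with a few lines of numpy (a sketch on a toy chain graph; the weights, labels, and function names are our own assumptions, not the authors' code, which builds W as described in Sec. 4.1):

```python
import numpy as np

def harmonic_solution(W, labeled, Y_l):
    """Eq. (1): class marginals F_u for the unlabeled nodes.

    W       : (N, N) symmetric nonnegative similarity matrix
    labeled : indices of labeled nodes
    Y_l     : (n_l, C) one-hot labels for those nodes
    """
    N = W.shape[0]
    unlabeled = np.setdiff1d(np.arange(N), labeled)
    D = np.diag(W.sum(axis=1))
    L_uu = D[np.ix_(unlabeled, unlabeled)] - W[np.ix_(unlabeled, unlabeled)]
    F_u = np.linalg.solve(L_uu, W[np.ix_(unlabeled, labeled)] @ Y_l)
    return F_u, unlabeled

# toy example: a 6-node chain with a weak bridge between two clusters
W = np.zeros((6, 6))
for i, j, w in [(0, 1, 1), (1, 2, 1), (2, 3, 0.05), (3, 4, 1), (4, 5, 1)]:
    W[i, j] = W[j, i] = w
F_u, idx = harmonic_solution(W, labeled=np.array([0, 5]),
                             Y_l=np.array([[1.0, 0.0], [0.0, 1.0]]))
print(idx, F_u.round(3))   # rows sum to 1; each cluster follows its label
```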
Expected Error Reduction
Let P(y|x) be the unknown conditional distribution of output y over input x, and P(x) be the marginal input distribution. Taking the labeled data D_l, we can produce a learner that estimates the class output distribution $\hat{P}_{D_l}(y|x)$ for a given input x. The expected error of such a learner is
$$\mathcal{E}_{\hat{P}_{D_l}} = \int_x L\big(P(y|x),\, \hat{P}_{D_l}(y|x)\big)\, dP(x), \tag{3}$$
where we define L(·, ·) as a loss function that quantifies any error between the predicted output and the true value. In our learning problem, we consider multi-class classification tasks and use a 0/1 loss function
$$L\big(P(y|x),\, \hat{P}_{D_l}(y|x)\big) = \sum_{y=1}^{C} P(y|x)\, \mathbb{I}\,[y \neq \hat{y}], \tag{4}$$
where $\hat{y} = \arg\max_y \hat{P}_{D_l}(y|x)$ is the learner's MAP estimate of the class of x, and $\mathbb{I}[\cdot]$ is a binary indicator function.
In the case of graph based SSL, we represent the marginal input distribution by the set of input samples $\{x_i\}$ and evaluate the integral of (3) as a summation over this set to produce
$$\mathcal{E}_{\hat{P}_{D_l}} = \sum_{i=1}^{N} \sum_{y_i=1}^{C} P(y_i|x_i)\, \mathbb{I}\,[y_i \neq \hat{y}_i] \tag{5}$$
as the expected error. In practice, the true conditional distribution P(y|x) is unknown, so we approximate it using the current estimate of the learner, $\hat{P}_{D_l}(y|x)$.
In the context of active learning, we would like to select the oracle's next query $(x_q, \hat{y}_q)$ from the unlabeled data D_u, such that adding it to the labeled data D_l would result in a new learner with a lower expected error. This leads to a greedy selection strategy. First, we determine the expected error (or risk) for combinations of each unlabeled example $x_q \in D_u$ taking each possible label $y_q \in \{1, \dots, C\}$:
$$\mathcal{E}^{+(x_q, y_q)}_{\hat{P}_{D_l}} = \sum_{i=1}^{N} \sum_{y_i=1}^{C} \hat{P}^{+(x_q, y_q)}_{D_l}(y_i|x_i)\, \mathbb{I}\left[y_i \neq \hat{y}_i^{+(x_q, y_q)}\right], \tag{6}$$
where $\hat{P}^{+(x_q, y_q)}_{D_l}$ is the learner with $(x_q, y_q)$ added to the labeled data. We then calculate the expectation of this risk across the possible label values for $y_q$. We use the learner's current posterior $\hat{P}_{D_l}(y_q | x_q)$ to approximate the unknown true distribution across $y_q$, to provide
$$\mathbb{E}\left[\mathcal{E}^{+(x_q, y_q)}_{\hat{P}_{D_l}}\right] = \sum_{y'=1}^{C} \hat{P}_{D_l}(y_q = y' \mid x_q)\, \mathcal{E}^{+(x_q, y_q = y')}_{\hat{P}_{D_l}} \tag{7}$$
as the expected risk. Finally, we select the queryx q with the smallest expected risk. For the remainder of the paper, we refer to this expected risk as the expected error that the EER criterion seeks to minimize.
Zhu et al. [42] integrated active learning into their GRF framework by exhaustively calculating the expected error over all possible unlabeled nodes. Even with the proposed matrix update efficiencies of Zhu et al., calculating the expected error for a datapoint is a linear operation, and evaluating it over all unlabeled examples results in a time complexity of O(|D_u|^2). This quadratic cost is prohibitively expensive as the dataset increases in size. We address this limitation using our proposed hierarchical subquery sampling approach presented in § 4.2.
Hierarchical Subquery Evaluation
Our method uses the EER active learning criterion while overcoming the expense of exhaustive sampling. It does this without sacrificing the desirable exploration/exploitation properties of EER, an issue with previous subsampling approaches. Before we discuss our hierarchical subquery search method, we first describe our graph construction technique that we have found to work well with the EER criterion and to be robust across a wide variety of datasets.
Perplexity Based Graph Construction
As noted previously, graph based SSL algorithms are very sensitive to the choice of similarity matrix W. If two datapoints x_i and x_j have the same label, we want their corresponding affinity w_{ij} to be high, and if they are different we want it to be low. One popular choice of similarity kernel is the radial basis function (RBF),
$$w_{ij} = \exp\!\left(-\gamma_i \|x_i - x_j\|_2^2\right). \tag{8}$$
Here we use the $L_2$ distance, but other distances may be more appropriate depending on the data representation (e.g. histograms). We have now introduced a set of parameters $\gamma_i$ that control the bandwidth of the kernel. A single choice of $\gamma$ is unlikely to be optimal across the whole dataset. We want each $\gamma_i$ to model the density of the local space. Intuitively, we want a larger value of $\gamma_i$ in dense regions of the feature space and a smaller value in more sparse regions. We now define our similarity based on a successful unsupervised technique from dimensionality reduction. In Stochastic Neighbor Embedding (SNE) [13] the nonsymmetric similarity between points is represented as a conditional probability: $w_{ji}$ can be interpreted as the probability that $x_i$ would pick $x_j$ as its neighbor, assuming there is a Gaussian with variance $\sigma_i^2$ centered at $x_i$, where $\gamma_i = 1/(2\sigma_i^2)$. We perform the same binary search as SNE to find the values of $\gamma_i$ that best match a given level of perplexity (a measure of the effective number of local neighbors). The perplexity for a given choice of $\gamma_i$ is defined as
$$\mathrm{Perp}(\gamma_i) = 2^{-\sum_j w_{ji} \log_2 w_{ji}}. \tag{9}$$
We enforce a valid similarity matrix W by symmetrizing the conditional probabilities, so $w_{ji} = \frac{1}{2}(w_{ij} + w_{ji})$.
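The binary search for each $\gamma_i$ can be sketched as follows (a hedged reimplementation in the spirit of the SNE calibration [13]; the bracketing interval, iteration count, and numerical guard are our own choices). The calibrated conditional weights would then be symmetrized as above.

```python
import numpy as np

def conditional_weights(d2_i, gamma):
    w = np.exp(-gamma * d2_i)          # RBF over squared distances, Eq. (8)
    return w / w.sum()

def calibrate_gamma(d2_i, target_perp=30.0, iters=50):
    """Binary search for gamma_i so that Perp(gamma_i) of Eq. (9)
    matches the target (assumed bracketing interval)."""
    lo, hi = 1e-10, 1e4
    for _ in range(iters):
        gamma = 0.5 * (lo + hi)
        w = conditional_weights(d2_i, gamma)
        perp = 2.0 ** (-(w * np.log2(w + 1e-30)).sum())
        if perp > target_perp:
            lo = gamma                 # too many effective neighbors: sharpen
        else:
            hi = gamma
    return gamma

rng = np.random.default_rng(0)
d2 = rng.random(200)                   # squared distances from x_i to neighbors
g = calibrate_gamma(d2, target_perp=30)
w = conditional_weights(d2, g)
print(2.0 ** (-(w * np.log2(w + 1e-30)).sum()))   # ~ 30
```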
Hierarchical EER
The EER criterion dictates that we pick the datapoint giving the lowest expected error to be labeled next. We refer to calculating the expected error of a single unlabeled datapoint as a subquery; the complexity of a single subquery is linear in the number of unlabeled datapoints. Together, the subqueries are internal calculations used to determine the next query that is sent to the oracle for labeling. We want to find the next query within a specified query budget. This means we do not have sufficient time to perform subqueries on all possible unlabeled nodes since this results in a quadratic cost ( § 3.1). Instead, we must identify an adaptive number of the best subqueries to sample within an allotted time, ideally sub-linear in the number of unlabeled nodes.
The smooth nature of the harmonic solutions, with respect to proximity of nodes on the graph, creates a redundancy in densely sampling all nodes; neighboring nodes will likely produce a very similar reduction in error when labeled. A hierarchical clustering of the graph, for example Subquery with the best overall EE chosen as next Query 1 Children of the best expected error (EE) in the current Active Set are the next to subquery and add to the expanded Active Set (a) The AuthorityShift clustering (b) Our hierarchical subquery sampling algorithm for the next active learning query Figure 1. Hierarchical clustering and subquery sampling strategy. (a) A hierarchical clustering is built using [5], shown here as a tree. At each level, every node in the tree is represented by a unique allocation (denoted by color) to a specific datapoint (the authority point in bold). (b) We use a hierarchical algorithm to determine the subqueries to perform; a subquery evaluates the expected error (EER criterion) shown as a number inside the node. (left) An active set, shown in orange, is constructed containing the children of labeled nodes; these are evaluated as the first subqueries, prioritizing from top to bottom. The active set is then expanded in a greedy fashion by including the children of the subquery with the lowest expected error, shown in pink. (right) We repeat this process until we have exhausted our subquery budget. The query for the oracle to label is chosen as the subquery with the lowest expected error (greatest EER). Figure 1(a), exploits these local correlations between neighboring nodes. Previous approaches to reducing the number of subqueries have included random sub-sampling [26] and using community detection to propose candidates [23]. The latter method is equivalent to performing a breadth first (coarse to fine) search of a cluster hierarchy where graph communities are represented as high level clusters. Similar breadth first searches of hierarchies have been used in active learning, albeit without the EER criterion [6,29].
The main advantage of the EER criterion is that it will trade-off the reduction in error achieved by either labeling an unknown region (exploration) or refining the decision boundaries under its current model (exploitation). Typically, the exploration mode will label nodes high up in the hierarchy whereas the detailed boundary refinement will occur in the leaves of the tree. While a breadth first approach can achieve good initial results, the active learner is stuck in an exploratory mode since it is effectively sampling on a graph density measure.
In our proposed approach, we allow the EER measure to perform the exploration/exploitation trade-off while still sub-sampling the unknown nodes to dramatically reduce the number of subqueries and therefore the cost. We achieve this by performing an adaptive search of the hierarchy.
Hierarchical Subquery Sampling
Authority-Shift Hierarchy Creation: We provide an illustrative example of the hierarchical clustering in Figure 1(a). We make use of the Authority-Shift algorithm of Cho and Lee [5]. It does not require a feature space but operates on the perplexity graph directly. This technique produces a hierarchical clustering on a graph by authority seeking: the process of allocating each node to a local 'authority' node (that represents the cluster). The calculation explores the steady state of a set of random walks on the graph at an appropriate scale. By increasing the scale parameter iteratively, a hierarchy of clusters can be built up to form a tree. This approach has two advantages. First, each cluster in the tree is represented by a specific datapoint that can be used to perform a subquery. Second, the clusters themselves encode walks on the graph under the same transition matrix used to evaluate the harmonic function, and therefore produce a summary of the results of calculating the expected error for all the datapoints in the cluster.

Subquery Sampling: An overview of our hierarchical sampling algorithm is provided in Figure 1(b). We differ from previous breadth first searching strategies by allowing an adaptive search on the tree that greedily seeks the minimum expected error. Referring to the diagram, consider a set of data with the cluster hierarchy of Figure 1(a), where two nodes have already been queried and labeled; see the left side of Figure 1(b). First, we build an active set of unlabeled nodes containing the children of labeled nodes, starting at the root. We proceed to perform a batch of subqueries on this active set (shown in orange) to obtain the expected error (the numbers inside the nodes). We then expand the active set by adding the children of the subquery in the current active set with the minimum expected error (shown in pink). As the children are added to the active set, they are evaluated as subqueries; see the right side of Figure 1(b). This process repeats until we have exhausted our budget of subqueries (a limit on the size of the active set). We now select the member of this active set with the minimal expected error as the next query to be labeled by the oracle. We prioritize the subquery evaluation by the level in the hierarchy (top-to-bottom) and then by ranking the nodes based on the total number of their descendants.
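A structural sketch of this sampling loop follows (tree-node attributes, tie-breaking, and the simplification of the level/descendant-count prioritization to a single greedy best-first heap are our own assumptions):

```python
import heapq

def next_query_hierarchical(root, labeled, expected_error, budget):
    """Sketch of the subquery loop of Fig. 1(b).

    root           : tree node with attributes .point (datapoint id)
                     and .children
    labeled        : set of datapoint ids already queried
    expected_error : callable performing one subquery (Eqs. 5-7)
    budget         : cap on the number of subqueries, e.g. 25 log N
    """
    heap, frontier, used, tick = [], [], 0, 0
    stack = [root]                       # collect children of labeled nodes
    while stack:
        node = stack.pop()
        if node.point in labeled:
            stack.extend(node.children)
        else:
            frontier.append(node)
    for node in frontier:                # initial active set
        if used == budget:
            break
        heapq.heappush(heap, (expected_error(node.point), tick, node))
        used += 1
        tick += 1
    best_risk, best_point = float("inf"), None
    while heap:                          # greedily open the best subquery
        risk, _, node = heapq.heappop(heap)
        if risk < best_risk:
            best_risk, best_point = risk, node.point
        for child in node.children:
            if used == budget or child.point in labeled:
                continue
            heapq.heappush(heap, (expected_error(child.point), tick, child))
            used += 1
            tick += 1
    return best_point                    # next query for the oracle
```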
The boxes in Figure 1(b) provide a toy illustration of the advantage of this approach. To refine the boundary between the two classes, we need to ask the oracle to label nodes at the edges of clusters; these are usually found low down in the hierarchy. Because the EER improves as one moves toward a decision boundary, the active set can move down into the tree when the EER criterion favors exploitation over the improvement of exploration; exploration occurs by labeling clusters at the top of the tree. Under breadth first search, a large number of queries would have to be performed before reaching nodes at the exploitation depth. As the learning curve evolves, the boundary refinement nodes will become increasingly localized, making it more unlikely that they will be found by random subqueries alone. We always take the root node of the tree as our first query (an open question for many algorithms), which we observe empirically to confer good performance and makes our algorithm deterministic. The tree construction means that the entire hierarchy has the potential to be navigated in O(N log N).

Experiments

Table 1 describes the 13 vision and standard machine learning datasets used for our experiments. These were chosen because they vary in size, density in their respective feature spaces, and have different numbers of classes. For all experiments, we start out with 3 random queries, construct graphs with 10 nearest neighbors based on the $L_2$ distance, use a perplexity value of 30, and query the oracle 50 times. For our method (HSE), we set the number of subqueries to be 25 log(N), where N is the number of datapoints for a given dataset, and the initial queries are set as the first 3 nodes in the hierarchy. Data and code are available on our project webpage.
Graph Construction: Graph based SSL algorithms can produce inferior performance with poor graphs. Using the method of Zhu et al. [42] to evaluate graphs, Table 2 compares our perplexity based graph construction method to four other baseline algorithms, testing this contribution in isolation. For mean, the bandwidth of the RBF kernel is set using the average distance between neighbors. For binary, we set a constant value for any two nodes that are connected and zero elsewhere. For knn, the bandwidth is set per datapoint proportional to its K-nearest neighbors. Finally, lle is the local linear embedding approach of [33]. Our perplexity based graph performs best overall.

Active Learning Criteria: We compare our algorithm to seven different baselines, including GRF [40] with random, entropy, and margin based criteria [27], full EER [42], and the recent time varying combination approach RALF [7]. We also compare to two different subquery evaluation strategies, random [26] and breadth first [23]. Both competing subquery strategies are evaluated using the same number of subqueries as our method. All methods use our perplexity based graph with the exception of RALF, which uses a binary based graph representation. Empirically, we found RALF to perform worse using other graphs. Table 1 summarizes our overall results as area under the learning curve on the unlabelled set.
Interestingly, our method outperforms full EER, which requires O(N^2) computations. We note that full EER is still a greedy algorithm at each iteration and is therefore not necessarily globally optimal. Our approach will encourage exploration at the start, when only a few queries have been performed and the active set is at the top of the hierarchy, which is observed to offer improved performance. One noticeable exception is the Cropped Pascal dataset from [7]. Due to the high variability in each class, it is likely that this dataset does not conform to the clustering assumption of semi-supervised learning. Using an iterative label propagation algorithm with few propagation steps prevents RALF [7] from overfitting the dataset at the expense of worse marginals. Figure 2 illustrates learning curves for a subset of the datasets.
Discussion
Accurate AL is the key to saving human effort, but speed is also a factor when a human oracle's patience is finite. Generalizing slightly, our Active Learning approach performs as accurately as or better than Zhu et al. [42], but does so with an effective computational complexity on par with Ebert et al. [7]. Their computational complexities are O(N^2) and O(N) respectively, while ours is O(N log N) with a low log(N). In practice, with our Matlab implementation and default settings (used throughout), the combined subqueries needed to pick the oracle's next query finished in under a second, even for the largest datasets tested. For bigger datasets, users may opt to use our algorithm with fewer subqueries to keep the labeling interactive. Both those main competitors are very good, excelling on specific datasets. Therefore it is important that validation of our AL approach has considered accuracy, efficiency, and generalizability to a variety of situations. The online supplementary material further illustrates that across these datasets, our hierarchical subquery evaluation leads to accurate results in the form of steep learning curves with large areas under the curve, and that these results are consistent across multiple runs, as plotted with ±1 standard deviation from each curve's mean.
To tease apart the impact of our hierarchical subquery evaluation vs. our perplexity-based graph construction, we gave our graphs to the compatible AL baseline algorithms. Zhu et al. is among them, and without our graphs, performs worse than RALF. Within the flexible graph based SSL framework, other choices can also have an impact, so as part of the supplemental files, we also show that LGC, used by RALF, is not as effective for our label propagation as Zhu et al.'s GRF.
There are several exciting avenues for future work. Our approach is transductive, so it would be attractive to either embed new datapoints into our existing graph online, or to transfer learned parameters to an inductive model. It would also be interesting to budget subqueries to account for some labels taking more of the oracle's time or effort than others. Finally, our similarity graph is computed once offline and never updated. In future, we may wish to use the label information from the user to learn a feature representation online.
Figure 2. Learning curves illustrating the performance of our approach versus three other baselines from Table 1. The shaded regions around each learning curve represent one standard deviation. Our method outperforms the other baselines, including full EER (Zhu et al. [42]), despite requiring far fewer subquery evaluations, and, as it is deterministic, results do not vary over different runs. In the last plot we illustrate the effect of increasing the number of subqueries for our method. As the number increases, so does the area under the curve.

Table 2. Comparison of different graph construction methods. The results represent area under learning curves for the GRF method of Zhu et al. [42]. Our perplexity based method outperforms the other baselines.

Dataset      mean   binary  knn [12]  lle [33]  per (ours)
Glass        0.775  0.743   0.758     0.787     0.818
Ecoli        0.795  0.768   0.777     0.791     0.832
Segment      0.837  0.860   0.853     0.892     0.903
FlickrMat    0.196  0.159   0.198     0.222     0.261
Coil20       0.641  0.597   0.616     0.729     0.729
LFW10        0.362  0.356   0.365     0.381     0.421
UIUCSport    0.528  0.452   0.527     0.529     0.650
Gait         0.686  0.646   0.672     0.579     0.668
Oil          0.941  0.937   0.924     0.962     0.943
Caltech4     0.981  0.973   0.977     0.971     0.986
Eth80        0.572  0.596   0.562     0.604     0.649
CpPascal08   0.146  0.102   0.159     0.141     0.074
15Scenes     0.344  0.304   0.353     0.378     0.535
Mean         0.600  0.576   0.595     0.613     0.651
Wins         1      0       1         2         10
Table 3 depicts the average time required to present the next query to the user for the different active learning methods. RALF [7] scales linearly, while full EER [42] soon becomes impractical as the number of examples increases. On average, our method computes queries in under a second and performs better than both methods in terms of accuracy.

Table 3. Average time (in seconds) per query for active learning methods with differing area under the learning curve, across datasets of varying complexity. Both RALF and HSE pick the next query in under a second. In our method, we allow 25 log(N) subqueries per query rather than the full N^2 required for the Zhu method.

Dataset      RALF [7]  Zhu [42]  HSE (ours)
Glass        0.003     0.008     0.291
Ecoli        0.004     0.016     0.302
Segment      0.005     0.056     0.276
FlickrMat    0.007     0.231     0.136
Coil20       0.011     0.950     0.369
LFW10        0.009     0.535     0.172
UIUCSport    0.009     0.507     0.172
Gait         0.012     1.610     0.257
Oil          0.010     1.008     0.339
Caltech4     0.011     1.435     0.351
Eth80        0.014     2.793     0.378
CpPascal08   0.041     12.189    0.753
15Scenes     0.033     9.405     0.710
Acknowledgements: Funding for this research was provided by EPSRC grants EP/K015664/1, EP/J021458/1 and EP/I031170/1.
[1] D. Batra, A. Kowdle, D. Parikh, J. Luo, and T. Chen. iCoseg: Interactive co-segmentation with intelligent scribble guidance. CVPR, 2010.
[2] A. Blum and S. Chawla. Learning from labeled and unlabeled data using graph mincuts. ICML, 2001.
[3] N. Cebron and M. R. Berthold. Active learning for object classification: from exploration to exploitation. Data Mining and Knowledge Discovery, 2009.
[4] O. Chapelle, B. Schölkopf, A. Zien, et al. Semi-supervised learning. MIT Press, Cambridge, 2006.
[5] M. Cho and K. Mu Lee. Authority-shift clustering: Hierarchical clustering by authority seeking on graphs. CVPR, 2010.
[6] S. Dasgupta and D. Hsu. Hierarchical sampling for active learning. ICML, 2008.
[7] S. Ebert, M. Fritz, and B. Schiele. RALF: A reinforced active learning formulation for object class recognition. CVPR, 2012.
[8] A. Fathi, M. F. Balcan, X. Ren, and J. M. Rehg. Combining self training and active learning for video segmentation. BMVC, 2011.
[9] L. Fei-Fei, R. Fergus, and P. Perona. One-shot learning of object categories. PAMI, 2006.
[10] R. Fergus, Y. Weiss, and A. Torralba. Semi-supervised learning in gigantic image collections. NIPS, 2009.
[11] A. Frank and A. Asuncion. UCI machine learning repository. 2010.
[12] M. Hein and M. Maier. Manifold denoising. NIPS, 2006.
[13] G. Hinton and S. Roweis. Stochastic neighbor embedding. NIPS, 2002.
[14] T. Hospedales, S. Gong, and T. Xiang. Finding rare classes: Active learning with generative and discriminative models. TKDE, 2013.
[15] G. B. Huang, M. Mattar, H. Lee, and E. G. Learned-Miller. Learning to align from scratch. NIPS, 2012.
[16] T. Jebara, J. Wang, and S.-F. Chang. Graph construction and b-matching for semi-supervised learning. ICML, 2009.
[17] A. J. Joshi, F. Porikli, and N. Papanikolopoulos. Multi-class active learning for image classification. CVPR, 2009.
[18] V. Karasev, A. Ravichandran, and S. Soatto. Active frame, location, and detector selection for automated and manual video annotation. CVPR, 2014.
[19] S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. CVPR, 2006.
[20] X. Li and Y. Guo. Adaptive active learning for image classification. CVPR, 2013.
[21] L.-J. Li and L. Fei-Fei. What, where and who? Classifying events by scene and object recognition. CVPR, 2007.
[22] W. Liu, J. He, and S.-F. Chang. Large graph construction for scalable semi-supervised learning. ICML, 2010.
[23] S. A. Macskassy. Using graph-based metrics with empirical risk minimization to speed up active learning on networked data. KDD, 2009.
[24] S. A. Nene, S. K. Nayar, and H. Murase. Columbia object image library (COIL-20). 1996.
[25] J. Noronha, E. Hysen, H. Zhang, and K. Z. Gajos. Platemate: Crowdsourcing nutrition analysis from food photographs. UIST, 2011.
[26] N. Roy and A. McCallum. Toward optimal active learning through sampling estimation of error reduction. ICML, 2001.
[27] B. Settles. Active Learning. Morgan & Claypool, 2012.
[28] L. Sharan, R. Rosenholtz, and E. Adelson. Material perception: What can you see in a brief glance? Journal of Vision, 2009.
[29] P. Vatturi and W.-K. Wong. Category detection using hierarchical mean shift. KDD, 2009.
[30] A. Vezhnevets, J. M. Buhmann, and V. Ferrari. Active learning for semantic segmentation with expected change. CVPR, 2012.
[31] S. Vijayanarasimhan and K. Grauman. Large-scale live active learning: Training object detectors with crawled data and crowds. CVPR, 2011.
[32] C. Vondrick and D. Ramanan. Video annotation and tracking with active learning. NIPS, 2011.
[33] F. Wang and C. Zhang. Label propagation through linear neighborhoods. TKDE, 2008.
[34] J. Wang, S.-F. Chang, X. Zhou, and S. Wong. Active microscopic cellular image annotation by superposable graph transduction with imbalanced labels. CVPR, 2008.
[35] J. Wang, T. Jebara, and S.-F. Chang. Graph transduction via alternating minimization. ICML, 2008.
[36] J. Wang and Y. Xia. Fast graph construction using auction algorithm. UAI, 2012.
[37] A. Yao, J. Gall, C. Leistner, and L. Van Gool. Interactive object detection. CVPR, 2012.
[38] D. Zhou, O. Bousquet, T. N. Lal, J. Weston, and B. Schölkopf. Learning with local and global consistency. NIPS, 2004.
[39] X. Zhu and Z. Ghahramani. Learning from labeled and unlabeled data with label propagation. Technical Report MSU-CSE-00-2, School of Computer Science, CMU, 2002.
[40] X. Zhu, Z. Ghahramani, and J. Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. ICML, 2003.
[41] X. Zhu and J. Lafferty. Harmonic mixtures: combining mixture models and graph-based methods for inductive and scalable semi-supervised learning. ICML, 2005.
[42] X. Zhu, J. Lafferty, and Z. Ghahramani. Combining active learning and semi-supervised learning using Gaussian fields and harmonic functions. ICML workshops, 2003.
| [] |
[
"Gowers Norms for Automatic Sequences",
"Gowers Norms for Automatic Sequences",
"Gowers Norms for Automatic Sequences",
"Gowers Norms for Automatic Sequences"
] | [
"Jakub Byszewski ",
"Jakub Konieczny ",
"Clemens Müllner ",
"Jakub Byszewski ",
"Jakub Konieczny ",
"Clemens Müllner "
] | [] | [
"DISCRETE ANALYSIS",
"DISCRETE ANALYSIS"
] | We show that any automatic sequence can be separated into a structured part and a Gowers uniform part in a way that is considerably more efficient than guaranteed by the Arithmetic Regularity Lemma. For sequences produced by strongly connected and prolongable automata, the structured part is rationally almost periodic, while for general sequences the description is marginally more complicated. In particular, we show that all automatic sequences orthogonal to periodic sequences are Gowers uniform. As an application, we obtain for any l ≥ 2 and any automatic set A ⊂ N 0 lower bounds on the number of l-term arithmetic progressions -contained in A -with a given difference. The analogous result is false for general subsets of N 0 and progressions of length ≥ 5. | null | [
"https://export.arxiv.org/pdf/2002.09509v3.pdf"
] | 211,258,694 | 2002.09509 | 15771118aea4104bfa4408bd6e118de90232f721 |
Gowers Norms for Automatic Sequences
2023
Jakub Byszewski
Jakub Konieczny
Clemens Müllner
Gowers Norms for Automatic Sequences
DISCRETE ANALYSIS
DOI: 10.19086/da.75201. Received 17 May 2021; Published 24 May 2023. Key words and phrases: Gowers norms, automatic sequences, higher degree uniformity.
We show that any automatic sequence can be separated into a structured part and a Gowers uniform part in a way that is considerably more efficient than guaranteed by the Arithmetic Regularity Lemma. For sequences produced by strongly connected and prolongable automata, the structured part is rationally almost periodic, while for general sequences the description is marginally more complicated. In particular, we show that all automatic sequences orthogonal to periodic sequences are Gowers uniform. As an application, we obtain for any l ≥ 2 and any automatic set A ⊂ N 0 lower bounds on the number of l-term arithmetic progressions -contained in A -with a given difference. The analogous result is false for general subsets of N 0 and progressions of length ≥ 5.
1 Introduction

The study of various notions of uniformity for automatic sequences can be traced back at least as far as 1968, when Gelfond [Gel68] showed that the integers whose sum of base-$k$ digits lies in a given residue class modulo $l$ are well distributed in arithmetic progressions (subject to certain congruence conditions). In the same paper, Gelfond posed several influential questions on the distribution of the sum of base-$k$ digits within residue classes along subsequences, which sparked much subsequent research [Kim99, MR09, MR10, MR15, DMR13, Mül18, MR18, DMR11, MS15, Spi18]. An accessible introduction can be found in [Mor08].
A systematic study of various notions of pseudorandomness was undertaken by Mauduit and Sárközy in [MS98] for the Thue-Morse and Rudin-Shapiro sequences. Specifically, they showed that these sequences do not correlate with periodic sequences, but do have large self-correlations. In this paper we consider a notion of pseudorandomness originating from higher order Fourier analysis, corresponding to Gowers uniformity norms (for more on Gowers norms, see Section 2). The second-named author showed [Kon19] that the Thue-Morse and Rudin-Shapiro sequences are highly Gowers uniform of all orders. Here, we obtain a similar result in a much more general context.
The celebrated Inverse Theorem for Gowers uniformity norms [GTZ12] provides a helpful criterion for Gowers uniformity. It asserts, roughly speaking, that any sequence which does not correlate with nilsequences of bounded complexity has small Gowers norms. We do not follow this path here directly, but want to point out some striking similarities to related results. For the purposes of this paper, there is no need to define what we mean by a nilsequence or its complexity, although we do wish to point out that nilsequences include polynomial phases, given by $n \mapsto e(p(n))$ where $e(t) = e^{2\pi i t}$ and $p \in \mathbb{R}[x]$.
For a number of natural classes of sequences, in order to verify Gowers uniformity of all orders it is actually sufficient to verify the lack of correlation with linear phases $n \mapsto e(n\alpha)$, where $\alpha \in \mathbb{R}$, or even just with periodic sequences. In particular, Frantzikinakis and Host [FH17] showed that a multiplicative sequence which does not correlate with periodic sequences is Gowers uniform of all orders. Eisner and the second-named author showed [EK18] that an automatic sequence which does not correlate with periodic sequences also does not correlate with any polynomial phases. This motivates the following result. For the sake of brevity, we will say that a bounded sequence $a : \mathbb{N}_0 \to \mathbb{C}$ is highly Gowers uniform if for each $d \ge 1$ there exists $c = c_d > 0$ such that
\[ \|a\|_{U^d[N]} \ll_d N^{-c}. \tag{1} \]
(See Sec. 2.2.1 for the asymptotic notation and Sec. 2.2.2 for the definition of $\|a\|_{U^d[N]}$.)
Theorem A. Let $a : \mathbb{N}_0 \to \mathbb{C}$ be an automatic sequence and suppose that $a$ does not correlate with periodic sequences in the sense that
\[ \lim_{N\to\infty} \frac{1}{N} \sum_{n=0}^{N-1} a(n) b(n) = 0 \]
for any periodic sequence $b : \mathbb{N}_0 \to \mathbb{C}$. Then $a$ is highly Gowers uniform.
In fact, we obtain a stronger decomposition theorem. The Inverse Theorem is essentially equivalent to the Arithmetic Regularity Lemma [GT10a], which asserts, again roughly speaking, that any 1-bounded sequence $f : [N] \to [-1,1]$ can be decomposed into a sum
\[ f = f_{\mathrm{nil}} + f_{\mathrm{sml}} + f_{\mathrm{uni}}, \tag{2} \]
where the structured component $f_{\mathrm{nil}}$ is a (bounded complexity) nilsequence, $f_{\mathrm{sml}}$ has small $L^2$ norm and $f_{\mathrm{uni}}$ has small Gowers norm of a given order. In light of the discussion above, one might expect that in the case when $f$ is an automatic sequence, it should be possible to ensure that $f_{\mathrm{nil}}$ is essentially a periodic sequence. This expectation is confirmed by the following new result, which is a special case of our main theorem. For standard terminology used, see Sections 2 (for Gowers norms) and 3 (for automatic sequences). Rationally almost periodic sequences were first introduced in [BR02], and their properties are studied in more detail in [BKPLR16]. A sequence is rationally almost periodic (RAP) if it can be approximated by periodic sequences arbitrarily well in the Besicovitch metric; i.e., $x : \mathbb{N}_0 \to \Omega$ is RAP if for any $\varepsilon > 0$ there is a periodic sequence $y : \mathbb{N}_0 \to \Omega$ with $|\{n < N \mid x(n) \ne y(n)\}|/N \le \varepsilon$ for all large enough $N$.
Theorem B. Let $a : \mathbb{N}_0 \to \mathbb{C}$ be an automatic sequence produced by a strongly connected, prolongable automaton. Then there exists a decomposition
\[ a(n) = a_{\mathrm{str}}(n) + a_{\mathrm{uni}}(n), \tag{3} \]
where $a_{\mathrm{str}}$ is rationally almost periodic and $a_{\mathrm{uni}}$ is highly Gowers uniform (cf. (1)).
Note that any RAP sequence can be decomposed as the sum of a periodic sequence and a sequence with a small L 1 norm. Hence, (3) can be brought into the form analogous to (2), with a periodic sequence in place of a general nilsequence. Furthermore, this decomposition works simultaneously for all orders.
For general automatic sequences we need a more general notion of a structured sequence. There are three basic classes of k-automatic sequences which fail to be Gowers uniform, which we describe informally as follows:
1. periodic sequences, whose periods may be assumed to be coprime to k;
2. sequences which are only sensitive to terminal digits, such as $\nu_k(n) \bmod 2$, where $\nu_k(n)$ is the largest $\nu$ such that $k^\nu$ divides $n$;

3. sequences which are only sensitive to initial digits, such as $\nu_k(n^{\mathrm{rev}_k} + 1) \bmod 2$, where $n^{\mathrm{rev}_k}$ denotes the result of reversing the base-$k$ digits of $n$.
By changing the base, we can include in the last category also sequences which depend on the length of the expansion of $n$. For instance, if $\mathrm{length}_k(n)$ denotes the length of the expansion of $n$ in base $k$, then $\mathrm{length}_k(n) \bmod 2$ depends only on the leading digit of $n$ in base $k^2$.
Our main result asserts that any automatic sequence can be decomposed as the sum of a structured part and a highly Gowers uniform part, where the structured part is a combination of the examples outlined above. More precisely, let us say that a $k$-automatic sequence $a : \mathbb{N}_0 \to \Omega$ is weakly structured if there exist a periodic sequence $a_{\mathrm{per}} : \mathbb{N}_0 \to \Omega_{\mathrm{per}}$ with period coprime to $k$, a forward synchronising $k$-automatic sequence $a_{\mathrm{fs}} : \mathbb{N}_0 \to \Omega_{\mathrm{fs}}$ and a backward synchronising $k$-automatic sequence $a_{\mathrm{bs}} : \mathbb{N}_0 \to \Omega_{\mathrm{bs}}$, as well as a map $F : \Omega_{\mathrm{per}} \times \Omega_{\mathrm{fs}} \times \Omega_{\mathrm{bs}} \to \Omega$, such that
\[ a(n) = F\big( a_{\mathrm{per}}(n), a_{\mathrm{fs}}(n), a_{\mathrm{bs}}(n) \big). \tag{4} \]
(For definitions of synchronising sequences, we again refer to Sec. 3.)
Theorem C. Let $a : \mathbb{N}_0 \to \mathbb{C}$ be an automatic sequence. Then there exists a decomposition
\[ a(n) = a_{\mathrm{str}}(n) + a_{\mathrm{uni}}(n), \]
where $a_{\mathrm{str}}$ is weakly structured (cf. (4)) and $a_{\mathrm{uni}}$ is highly Gowers uniform (cf. (1)).
Remark 1.1. The notion of a weakly structured sequence is very sensitive to the choice of the base. If $k, k' \ge 1$ are both powers of the same integer $k_0$ then $k$-automatic sequences are the same as $k'$-automatic sequences, but $k$-automatic weakly structured sequences are not the same as $k'$-automatic weakly structured sequences. If the sequence $a$ in Theorem C is $k$-automatic, then $a_{\mathrm{str}}$ is only guaranteed to be weakly structured in some base $k'$ that is a power of $k$, but it does not need to be weakly structured in the base $k$.
Example 1.2. Let $a : \mathbb{N}_0 \to \mathbb{R}$ be the 2-automatic sequence computed by the following automaton. Formal definitions of automata and the associated sequences can be found in Section 3. For now, it suffices to say that in order to compute $a(n)$, $n \in \mathbb{N}_0$, one needs to expand $n$ in base 2 and traverse the automaton using the edges corresponding to the consecutive digits of $n$, and then read off the output at the final state. For instance, the binary expansion of $n = 26$ is $(26)_2 = \mathtt{11010}$, so the visited states are $s_0, s_1, s_3, s_3, s_2, s_3$ and $a(26) = 2$.
Let $b : \mathbb{N}_0 \to \mathbb{R}$ be the sequence given by $b(n) = (-1)^{\nu_2(n+1)}$, where $\nu_2(m)$ is the largest value of $\nu$ such that $2^\nu \mid m$. For instance, $\nu_2(27) = 0$ and $b(26) = 1$. Then the structured part of $a$ is $a_{\mathrm{str}} = 2 + b$, and the uniform part is necessarily given by $a_{\mathrm{uni}} = a - a_{\mathrm{str}}$. Note that $b$ (and hence also $a_{\mathrm{str}}$ and $a_{\mathrm{uni}}$) can be computed by an automaton with the same states and transitions as above, but with different outputs. Let also $c : \mathbb{N}_0 \to \mathbb{R}$ denote the sequence given by $c(n) = (-1)^{f(n)}$, where $f(n)$ is the number of those maximal blocks of $\mathtt{1}$s in the binary expansion of $n$ that have length congruent to 2 or 3 modulo 4. For instance, $f(26) = 1$ and $c(26) = -1$. Then $a_{\mathrm{uni}} = (\frac{1}{2} + \frac{1}{2}b)c$. This example is very convenient as it allows one to give easy representations of the structured and the uniform part. However, the situation can be more complicated in general, and we include another example to emphasize this fact. It turns out that the structured part can again be expressed using $b$, i.e., $a_{\mathrm{str}} = 3b - 1$, but it is very difficult to find a simple closed form for the uniform part. Indeed, even writing it as an automatic sequence requires an automaton with 6 states rather than the 5 states needed for $a$.
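Since the definitions of $b$, $f$ and $c$ are purely digital, they are easy to experiment with. The following short Python sketch (our illustration; the function names are ours, not part of the paper) computes them and checks the values quoted above.

# Illustrative sketch (ours): the sequences b, f, c from Example 1.2,
# computed directly from their digit definitions.

def nu2(m):
    """Largest nu such that 2^nu divides m (m >= 1)."""
    v = 0
    while m % 2 == 0:
        m //= 2
        v += 1
    return v

def b(n):
    return (-1) ** nu2(n + 1)

def f(n):
    """Number of maximal blocks of 1s in (n)_2 of length 2 or 3 mod 4."""
    blocks = [len(run) for run in bin(n)[2:].split("0") if run]
    return sum(1 for length in blocks if length % 4 in (2, 3))

def c(n):
    return (-1) ** f(n)

# Values quoted in the example: nu2(27) = 0, b(26) = 1, f(26) = 1, c(26) = -1.
assert nu2(27) == 0 and b(26) == 1
assert f(26) == 1 and c(26) == -1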
We discuss three possible applications of Theorems B and C as well as of the related estimates of Gowers norms of automatic sequences. Firstly, they can be used to study subsequences of automatic sequences along various sparse sequences. Secondly, they allow us to count solutions to linear equations with variables taking values in automatic sets, that is, subsets of N 0 whose characteristic functions are automatic sequences. Lastly, they give a wide class of explicit examples of sequences with small Gowers norms of all orders. We will address these points independently.
We start by discussing the treatment of automatic sequences along primes by the third author to highlight the usefulness of a structural result as in Theorem B. In [Mül17] a similar decomposition was used (with the uniform component satisfying a weaker property (called the Fourier-Property in [ADM]), which is almost the same as being Gowers uniform of order 1) together with the so called carry Property (see already [MR15] and a more general form in [Mül18]). This essentially allows one to reduce the problem to the case of structured and uniform sequences. The structured component is very simple to deal with, as it suffices to study primes in arithmetic progressions. The study of the uniform component followed the method of Mauduit and Rivat developed to treat the Rudin-Shapiro sequence along primes [MR15]. A similar approach was used by Adamczewski, Drmota and the third author to study the occurrences of digits in automatic sequences along squares [ADM]. It seems likely that a higher-order uniformity as in Theorem B might allow one to study the occurrences of blocks in automatic sequences along squares (see for example [DMR13,Mül18] for related results).
Recently, Spiegelhofer used the fact that the Thue-Morse sequence is highly Gowers uniform to show that the level of distribution of the Thue-Morse sequence is 1 [Spi18]. As a result, he proves that the sequence is simply normal along ⌊n c ⌋ for 1 < c < 2, i.e. the asymptotic frequency of both 0 and 1 in the Thue-Morse sequence along ⌊n c ⌋ is 1/2. This result, together with our structural result (Theorem B) indicates a possible approach to studying automatic sequences produced by strongly connected, prolongable automata along ⌊n c ⌋. As the structured component is rationally almost periodic, we can simply study ⌊n c ⌋ mod m to deal with the first component. The uniform component needs to be dealt with similarly to Spiegelhofer's treatment of the Thue-Morse sequence, but conditioned on ⌊n c ⌋ mod m, to take care of the structured component at the same time. For the possible treatment of all the subsequences of automatic sequences discussed above it is essential to have (for the uniform component) both some sort of Gowers uniformity as well as the carry Property. Both these properties are guaranteed by the decomposition used in this paper, while the Arithmetic Regularity Lemma cannot guarantee the carry Property for the uniform component.
Secondly, let us recall one of the many formulations of the celebrated theorem of Szemerédi on arithmetic progressions, which says that any set $A \subset \mathbb{N}_0$ with positive upper density $\overline{d}(A) = \limsup_{N\to\infty} |A \cap [N]|/N > 0$ contains arbitrarily long arithmetic progressions. It is natural to ask what number of such progressions is guaranteed to exist in $A \cap [N]$, depending on the length $N$ and the density of $A$.
Following the work of Bergelson, Host and Kra (and Ruzsa) [BHK05], Green and Tao [GT10a] showed that for progressions of length $\le 4$, the count of $l$-term arithmetic progressions in a subset $A \subset [N]$ is essentially greater than or equal to what one would expect for a random set of similar magnitude.
Theorem 1.4. Let $2 \le l \le 4$, $\alpha > 0$ and $\varepsilon > 0$. Then for any $N \ge 1$ and any $A \subset [N]$ of density $|A|/N \ge \alpha$ there exist $\gg_{\alpha,\varepsilon} N$ values of $m \in [N]$ such that $A$ contains $\ge (\alpha^l - \varepsilon)N$ $l$-term arithmetic progressions with common difference $m$. The analogous statement is false for any $l \ge 5$.
For automatic sets, the situation is much simpler: Regardless of the length l ≥ 1, the count of l-term arithmetic progressions in A ∩ [N] is, up to a small error, at least what one would expect for a random set.
Theorem D. Let $l \ge 3$, and let $A$ be an automatic set (that is, a subset of $\mathbb{N}_0$ whose characteristic sequence is automatic). Then there exists $C = O_{l,A}(1)$ such that for any $N \ge 1$ and $\varepsilon > 0$ there exist $\gg_{l,A} \varepsilon^C N$ values of $m \in [N]$ such that $A \cap [N]$ contains $\ge (\alpha^l - \varepsilon)N$ $l$-term arithmetic progressions with common difference $m$, where $\alpha = |A \cap [N]|/N$.
Thirdly, we remark that there are few examples of sequences that are simultaneously known to be highly Gowers uniform and given by a natural, explicit formula. Polynomial phases $e(p(n))$ ($p \in \mathbb{R}[x]$) are standard examples of sequences that are uniform of order $\deg p - 1$ but dramatically non-uniform of order $\deg p$. Random sequences are highly uniform (cf. [TV06, Ex. 11.1.17]) but are not explicit. As already mentioned, many multiplicative sequences are known to be Gowers uniform of all orders, but with considerably worse bounds than the power saving which we obtain. For a similar result for a much simpler class of $q$-multiplicative sequences, see [FK19]. Examples of highly Gowers uniform sequences of number-theoretic origin in finite fields of prime order were found in [FKM13]; see also [Liu11] and [NR09], where Gowers uniformity of certain sequences is derived from much stronger discorrelation estimates.
2 Gowers norms

2.1 Notation
We use standard asymptotic notation -if f and g are two functions defined on (sufficiently large) positive integers, we write f ≪ g or f = O(g) if there exists a constant C > 0 such that | f (n)| ≤ C|g(n)| for all sufficiently large n. If the constant C is allowed to depend on some extra parameters (α, ε, etc.), we may specify that by writing f ≪ α,ε g or f = O α,ε (g). In some cases when such dependence is clear from the context, we may omit such indices (this is the case for example for the order d of Gowers uniformity norms, defined below).
We also use the Iverson bracket notation $\llbracket P \rrbracket$ for the value of a logical statement $P$, that is, $\llbracket P \rrbracket = 1$ if $P$ is true, and $\llbracket P \rrbracket = 0$ otherwise.
2.2 Basic facts and definitions
Gowers norms, originally introduced by Gowers in his work on Szemerédi's theorem [Gow01], are a fundamental object in what came to be known as higher order Fourier analysis. For extensive background, we refer to [Gre] or [Tao12]. Here, we just list several basic facts. Throughout, we treat d (see below) as fixed unless explicitly stated otherwise, and allow all implicit error terms to depend on d.
For a finite abelian group $G$ and an integer $d \ge 1$, the Gowers uniformity norm on $G$ of order $d$ is defined for $f : G \to \mathbb{C}$ by the formula
\[ \|f\|_{U^d(G)}^{2^d} = \mathop{\mathbb{E}}_{\vec n \in G^{d+1}} \prod_{\vec\omega \in \{0,1\}^d} \mathcal{C}^{|\vec\omega|} f(1\vec\omega \cdot \vec n), \tag{6} \]
where $\mathcal{C}$ denotes the complex conjugation, $\vec\omega$ and $\vec n$ are shorthands for $(\omega_1, \dots, \omega_d)$ and $(n_0, n_1, \dots, n_d)$, respectively, $|\vec\omega| = |\{i \le d \mid \omega_i = 1\}|$, and $1\vec\omega \cdot \vec n = n_0 + \sum_{i=1}^d \omega_i n_i$. More generally, for a family of functions $f_{\vec\omega} : G \to \mathbb{C}$ with $\vec\omega \in \{0,1\}^d$ we can define the corresponding Gowers product
\[ \big( (f_{\vec\omega})_{\vec\omega\in\{0,1\}^d} \big)_{U^d(G)} = \mathop{\mathbb{E}}_{\vec n \in G^{d+1}} \prod_{\vec\omega \in \{0,1\}^d} \mathcal{C}^{|\vec\omega|} f_{\vec\omega}(1\vec\omega \cdot \vec n). \tag{7} \]
A simple computation shows that $\|f\|_{U^1(G)} = \big| \mathbb{E}_{n\in G} f(n) \big|$ and
\[ \|f\|_{U^2(G)}^4 = \mathop{\mathbb{E}}_{n,m,l\in G} f(n)\,\overline{f(n+m)}\,\overline{f(n+l)}\, f(n+m+l) = \sum_{\xi\in\widehat G} \big|\widehat f(\xi)\big|^4, \]
where $\widehat G$ is the group of characters $G \to S^1$ and $\widehat f(\xi) = \mathbb{E}_{n\in G} \overline{\xi(n)} f(n)$.
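The identity for the $U^2$ norm is easy to verify numerically. The following Python sketch (ours, not part of the paper; it uses the normalisation $\widehat f(\xi) = \mathbb{E}_n f(n) e(-\xi n/N)$ fixed above) does so for a random function on $\mathbb{Z}/7\mathbb{Z}$.

# Numerical sanity check (ours) of ||f||_{U^2(G)}^4 = sum_xi |fhat(xi)|^4
# on G = Z/NZ, with fhat(xi) = E_n f(n) e(-xi n / N).
import numpy as np

N = 7
rng = np.random.default_rng(0)
f = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# Left-hand side: average over all parallelepipeds n, n+m, n+l, n+m+l.
lhs = np.mean([f[n % N] * f[(n + m) % N].conjugate()
               * f[(n + l) % N].conjugate() * f[(n + m + l) % N]
               for n in range(N) for m in range(N) for l in range(N)])

# Right-hand side: fourth moment of the normalised Fourier coefficients.
fhat = np.fft.fft(f) / N        # fhat[xi] = E_n f(n) e(-2*pi*i*xi*n/N)
rhs = np.sum(np.abs(fhat) ** 4)

assert abs(lhs - rhs) < 1e-10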
One can show that definition (6) is well-posed in the sense that the right hand side of (6) is real and non-negative. If $d \ge 2$, then $\|\cdot\|_{U^d(G)}$ is indeed a norm, meaning that it obeys the triangle inequality $\|f+g\|_{U^d(G)} \le \|f\|_{U^d(G)} + \|g\|_{U^d(G)}$, is positive definite in the sense that $\|f\|_{U^d(G)} \ge 0$ with equality if and only if $f = 0$, and is homogeneous in the sense that $\|\lambda f\|_{U^d(G)} = |\lambda|\, \|f\|_{U^d(G)}$ for all $\lambda \in \mathbb{C}$. If $d = 1$, then $\|\cdot\|_{U^d(G)}$ is only a seminorm. Additionally, for any $d \ge 1$ we have the nesting property
\[ \|f\|_{U^d(G)} \le \|f\|_{U^{d+1}(G)}. \]
In this paper we are primarily interested in the uniformity norms on the interval $[N]$, where $N \ge 1$ is an integer. Any such interval can be identified with the subset $[N] = \{0, 1, \dots, N-1\}$ of a cyclic group $\mathbb{Z}/\tilde N\mathbb{Z}$, where $\tilde N$ is an integer significantly larger than $N$. For $d \ge 1$ and $f : [N] \to \mathbb{C}$ we put
\[ \|f\|_{U^d[N]} = \big\| 1_{[N]} f \big\|_{U^d(\mathbb{Z}/\tilde N\mathbb{Z})} \Big/ \big\| 1_{[N]} \big\|_{U^d(\mathbb{Z}/\tilde N\mathbb{Z})}. \tag{8} \]
The value of $\|f\|_{U^d[N]}$ given by (8) is independent of $\tilde N$ as long as $\tilde N$ exceeds $2dN$, and for the sake of concreteness we let $\tilde N = \tilde N(N,d)$ be the least prime larger than $2dN$ (the primality assumption will make Fourier analysis considerations slightly easier at a later point). As a consequence of the corresponding properties for cyclic groups, $\|\cdot\|_{U^d[N]}$ is a norm for all $d \ge 2$ and a seminorm for $d = 1$, and for all $d \ge 1$ we have a slightly weaker nesting property
\[ \|f\|_{U^d[N]} \ll_d \|f\|_{U^{d+1}[N]}. \]
Definition (8) can equivalently be expressed as
\[ \|f\|_{U^d[N]}^{2^d} = \mathop{\mathbb{E}}_{\vec n \in \Pi(N)} \prod_{\vec\omega \in \{0,1\}^d} \mathcal{C}^{|\vec\omega|} f(1\vec\omega \cdot \vec n), \tag{9} \]
where the average is taken over the set (implicitly dependent on $d$)
\[ \Pi(N) = \Big\{ \vec n \in \mathbb{Z}^{d+1} \;\Big|\; 1\vec\omega \cdot \vec n \in [N] \text{ for all } \vec\omega \in \{0,1\}^d \Big\}. \tag{10} \]
As a direct consequence of (9), we have the following phase-invariance: if $p \in \mathbb{R}[x]$ is a polynomial of degree $< d$ and $g : [N] \to \mathbb{C}$ is given by $g(n) = e(p(n))$, then $\|f\|_{U^d[N]} = \|f \cdot g\|_{U^d[N]}$ for all $f : [N] \to \mathbb{C}$. (Here and elsewhere, $e(t) = \exp(2\pi i t)$.) In particular, $\|g\|_{U^d[N]} = 1$. The analogous statement is also true for finite cyclic groups. In particular, if $p \in \mathbb{Z}[x]$ is a polynomial of degree $< d$ and $g : \mathbb{Z}/N\mathbb{Z} \to \mathbb{C}$ is given by $g(n) = e(p(n)/N)$, then $\|f\|_{U^d(\mathbb{Z}/N\mathbb{Z})} = \|f \cdot g\|_{U^d(\mathbb{Z}/N\mathbb{Z})}$ for all $f : \mathbb{Z}/N\mathbb{Z} \to \mathbb{C}$.
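Formula (9) also makes the interval norms directly computable by brute force for small parameters. The following Python sketch (ours, not part of the paper) evaluates $\|\cdot\|_{U^2[N]}$ this way and confirms that linear phases have norm 1, as the phase-invariance predicts.

# Brute-force evaluation (ours) of ||f||_{U^2[N]} via formula (9):
# average the multiplicative derivative over the cube set Pi(N) and take
# the 2^d-th root (d = 2 here).
import cmath, itertools

def u2_norm(f, N):
    total, count = 0.0 + 0.0j, 0
    for n0, n1, n2 in itertools.product(range(-N, N + 1), repeat=3):
        pts = [n0, n0 + n1, n0 + n2, n0 + n1 + n2]
        if all(0 <= p < N for p in pts):
            total += f(pts[0]) * f(pts[1]).conjugate() \
                     * f(pts[2]).conjugate() * f(pts[3])
            count += 1
    return (abs(total) / count) ** 0.25

N, alpha = 8, 0.3
g = lambda n: cmath.exp(2j * cmath.pi * alpha * n)   # linear phase
print(u2_norm(g, N))                      # = 1.0 up to rounding
print(u2_norm(lambda n: g(n) * (-1) ** n, N))   # also a linear phase: 1.0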
We will say that a bounded sequence $a : \mathbb{N}_0 \to \mathbb{C}$ is uniform of order $d \ge 1$ if $\|a\|_{U^d[N]} \to 0$ as $N \to \infty$.
The interest in Gowers norms stems largely from the fact that uniform sequences behave much like random sequences in terms of counting additive patterns. To make this intuition precise, for a $(d+1)$-tuple of sequences $f_0, f_1, \dots, f_d : \mathbb{N}_0 \to \mathbb{C}$ let us consider the corresponding weighted count of arithmetic progressions
\[ \Lambda^N_d(f_0, \dots, f_d) = \sum_{n,m\in\mathbb{Z}} \prod_{i=0}^d \big( f_i 1_{[N]} \big)(n + im), \]
so that in particular $\Lambda^N_d(1_A, \dots, 1_A)$ is the number of arithmetic progressions of length $d+1$ in $A \cap [N]$.
The following proposition is an easy variant of the generalised von Neumann theorem; see for example [Tao12, Exercise 1.3.23]. We say that a function $f : X \to \mathbb{C}$ is 1-bounded if $|f(x)| \le 1$ for all $x \in X$.
Proposition 2.1. Let $d \ge 1$ and let $f_0, f_1, \dots, f_d : \mathbb{N}_0 \to \mathbb{C}$ be 1-bounded sequences. Then
\[ \Lambda^N_d(f_0, \dots, f_d) \ll N^2 \min_{0\le i\le d} \|f_i\|_{U^d[N]}. \]
As a direct consequence, if $f_i, g_i : \mathbb{N}_0 \to \mathbb{C}$ are 1-bounded and $\|f_i - g_i\|_{U^d[N]} \le \varepsilon$ for all $0 \le i \le d$, then
\[ \Lambda^N_d(f_0, \dots, f_d) = \Lambda^N_d(g_0, \dots, g_d) + O(\varepsilon N^2). \]
In particular, if $A \subset \mathbb{N}_0$ has positive asymptotic density $\alpha$ and $1_A - \alpha 1_{\mathbb{N}_0}$ is uniform of order $d$, then the count of $(d+1)$-term arithmetic progressions in $A \cap [N]$ is asymptotically the same as it would be if $A$ was a random set with density $\alpha$.
It is often helpful to control Gowers norms by other norms which are potentially easier to understand. In particular, we will repeatedly use the following $L^p$ bound; here $\|f\|_{L^p[N]} = \big( \mathbb{E}_{n<N} |f(n)|^p \big)^{1/p}$.

Proposition 2.2. Let $d \ge 2$ and put $p_d = 2^d/(d+1)$. Then for any $f : \mathbb{Z}/N\mathbb{Z} \to \mathbb{C}$ we have $\|f\|_{U^d(\mathbb{Z}/N\mathbb{Z})} \le \|f\|_{L^{p_d}(\mathbb{Z}/N\mathbb{Z})}$, and consequently $\|f\|_{U^d[N]} \ll_d \|f\|_{L^{p_d}[N]}$ for any $f : [N] \to \mathbb{C}$.
2.3 Fourier analysis and reductions
We will use some simple Fourier analysis on finite cyclic groups $\mathbb{Z}/N\mathbb{Z}$. We equip $\mathbb{Z}/N\mathbb{Z}$ with the normalised counting measure and its dual group $\widehat{\mathbb{Z}/N\mathbb{Z}}$ (which is isomorphic to $\mathbb{Z}/N\mathbb{Z}$) with the counting measure. With these conventions, the Plancherel theorem asserts that for $f : \mathbb{Z}/N\mathbb{Z} \to \mathbb{C}$ we have
\[ \mathop{\mathbb{E}}_{n\in\mathbb{Z}/N\mathbb{Z}} |f(n)|^2 = \|f\|_{L^2(\mathbb{Z}/N\mathbb{Z})}^2 = \big\| \widehat f\, \big\|_{\ell^2(\mathbb{Z}/N\mathbb{Z})}^2 = \sum_{\xi\in\mathbb{Z}/N\mathbb{Z}} \big| \widehat f(\xi) \big|^2, \]
where $\widehat f(\xi) = \mathbb{E}_{n\in\mathbb{Z}/N\mathbb{Z}} f(n) e(-\xi n/N)$.
Recall also that for $f, g : \mathbb{Z}/N\mathbb{Z} \to \mathbb{C}$ we have $\widehat{f * g} = \widehat f \cdot \widehat g$, where $f * g(n) = \mathbb{E}_{m\in\mathbb{Z}/N\mathbb{Z}} f(m) g(n-m)$.
The following lemma will allow us to approximate characteristic functions of arithmetic progressions with smooth functions. While much more precise variants exist (cf. the Erdős-Turán inequality), this basic result will be sufficient for the applications we have in mind. We say that a set $P \subset \mathbb{Z}/N\mathbb{Z}$ is an arithmetic progression of length $M$ if $|P| = M$ and $P$ takes the form $\{am + b \mid m \in [M]\}$ with $a, b \in \mathbb{Z}/N\mathbb{Z}$.
Lemma 2.3. Let $N$ be prime and let $P \subset \mathbb{Z}/N\mathbb{Z}$ be an arithmetic progression of length $M \le N$. Then for any $0 < \eta \le 1$ there exists a function $f = f_{P,\eta} : \mathbb{Z}/N\mathbb{Z} \to [0,1]$ such that
1. $\|f - 1_P\|_{L^p(\mathbb{Z}/N\mathbb{Z})} \le \eta^{1/p}$ for each $1 \le p < \infty$;
2. $\big\| \widehat f\, \big\|_{\ell^1(\mathbb{Z}/N\mathbb{Z})} \ll \eta^{-1/2}$.

Remark 2.4. We will usually take $\eta = N^{-\varepsilon}$ where $\varepsilon > 0$ is a small constant.
Proof. We pick $f = 1_P * \frac{N}{K} 1_{a[K]}$, where $a$ is the common difference of the arithmetic progression and the integer $K \ge 1$ remains to be optimised. Note that $f(n) \ne 1_P(n)$ for at most $2K$ values of $n \in \mathbb{Z}/N\mathbb{Z}$, and $|f(n) - 1_P(n)| \le 1$ for all $n \in \mathbb{Z}/N\mathbb{Z}$. Hence,
\[ \|f - 1_P\|_{L^p} \le (2K/N)^{1/p}. \]
Using the Cauchy-Schwarz inequality and the Plancherel theorem we may also estimate
\[ \big\| \widehat f\, \big\|_{\ell^1} = \frac{N}{K} \big\| \widehat{1_P} \cdot \widehat{1_{a[K]}} \big\|_{\ell^1} \le \frac{N}{K} \big\| \widehat{1_P} \big\|_{\ell^2} \cdot \big\| \widehat{1_{a[K]}} \big\|_{\ell^2} = \frac{N}{K} \|1_P\|_{L^2} \cdot \big\| 1_{a[K]} \big\|_{L^2} \le (N/K)^{1/2}. \]
It remains to put $K = \max(\lfloor \eta N/2 \rfloor, 1)$ and note that if $K = 1$, then $f = 1_P$.
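The construction is concrete enough to test numerically. The following Python sketch (ours, not part of the paper; the specific parameters are arbitrary) builds $f$ on $\mathbb{Z}/101\mathbb{Z}$ and compares both quantities against the stated bounds.

# Numerical illustration (ours) of Lemma 2.3: f = 1_P * (N/K) 1_{a[K]},
# with the convolution taken w.r.t. the normalised counting measure.
import numpy as np

N = 101                      # prime modulus
a, b, M = 3, 5, 40           # P = {a*m + b : m in [M]} in Z/NZ
eta = 0.2
K = max(int(eta * N / 2), 1)

P = np.zeros(N); P[[(a * m + b) % N for m in range(M)]] = 1
Q = np.zeros(N); Q[[(a * m) % N for m in range(K)]] = N / K

# (u * v)(n) = E_m u(m) v(n - m), implemented via the convolution theorem.
f = np.real(np.fft.ifft(np.fft.fft(P) * np.fft.fft(Q))) / N

fhat_l1 = np.sum(np.abs(np.fft.fft(f) / N))       # ||fhat||_{l^1}
print(np.mean(np.abs(f - P)), "<=", 2 * K / N)     # L^1 case of part 1
print(fhat_l1, "<=", np.sqrt(N / K))               # part 2, up to constants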
As a matter of general principle, the restriction of a Gowers uniform sequence to an arithmetic progression is again Gowers uniform. We record the following consequence of Lemma 2.3 which makes this intuition more precise.
Proposition 2.5. Let $d \ge 2$ and $\alpha_d = (d+1)/(2^{d-1}+d+1)$. Let $a : [N] \to \mathbb{C}$ be a 1-bounded function and let $P \subset [N]$ be an arithmetic progression. Then
\[ \|a 1_P\|_{U^d[N]} \ll \|a\|_{U^d[N]}^{\alpha_d}. \]
(Recall that we allow the implicit constants to depend on $d$.)
Proof. Throughout the argument we consider $d$ as fixed and allow implicit error terms to depend on $d$. Let $\tilde N = \tilde N(N,d)$ be the prime with $N < \tilde N \ll N$ defined in Section 2.2. Let $\eta > 0$ be a small parameter, to be optimised in the course of the proof, and let $f : \mathbb{Z}/\tilde N\mathbb{Z} \to [0,1]$ be the approximation of $1_P$ such that
\[ \|f - 1_P\|_{L^{p_d}(\mathbb{Z}/\tilde N\mathbb{Z})} \ll \eta^{1/p_d} \quad\text{and}\quad \big\| \widehat f\, \big\|_{\ell^1(\mathbb{Z}/\tilde N\mathbb{Z})} \ll \eta^{-1/2}. \]
Then
\[ \|a 1_P\|_{U^d[N]} \ll \|a 1_P\|_{U^d(\mathbb{Z}/\tilde N\mathbb{Z})} = \big\| a f 1_{[N]} + a(1_P - f) 1_{[N]} \big\|_{U^d(\mathbb{Z}/\tilde N\mathbb{Z})} \le \big\| a f 1_{[N]} \big\|_{U^d(\mathbb{Z}/\tilde N\mathbb{Z})} + \big\| a(1_P - f) 1_{[N]} \big\|_{U^d(\mathbb{Z}/\tilde N\mathbb{Z})}. \]
We consider the two summands independently. For the first one, expanding $f(n) = \sum_\xi \widehat f(\xi) e(\xi n/\tilde N)$ and using the phase-invariance of Gowers norms we obtain
\[ \big\| a f 1_{[N]} \big\|_{U^d(\mathbb{Z}/\tilde N\mathbb{Z})} \le \big\| \widehat f\, \big\|_{\ell^1(\mathbb{Z}/\tilde N\mathbb{Z})} \cdot \big\| a 1_{[N]} \big\|_{U^d(\mathbb{Z}/\tilde N\mathbb{Z})} \ll \eta^{-1/2} \|a\|_{U^d[N]}. \]
For the second one, it follows from Proposition 2.2 that
\[ \big\| a(1_P - f) 1_{[N]} \big\|_{U^d(\mathbb{Z}/\tilde N\mathbb{Z})} \ll \big\| a(1_P - f) 1_{[N]} \big\|_{L^{p_d}(\mathbb{Z}/\tilde N\mathbb{Z})} \le \|1_P - f\|_{L^{p_d}(\mathbb{Z}/\tilde N\mathbb{Z})} \le \eta^{1/p_d}. \]
It remains to combine the two estimates and insert the near-optimal value $\eta = \|a\|_{U^d[N]}^{1/(1/2+1/p_d)}$.
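For the reader's convenience, we spell out this routine optimisation (our own addition; recall $p_d = 2^d/(d+1)$ from Proposition 2.2). Writing $A = \|a\|_{U^d[N]}$, the two estimates combine to
\[ \|a 1_P\|_{U^d[N]} \ll \eta^{-1/2} A + \eta^{1/p_d}, \]
and the choice $\eta = A^{1/(1/2+1/p_d)}$ equalises the two terms, yielding the bound $A^{\alpha_d}$ with
\[ \alpha_d = \frac{1/p_d}{1/2 + 1/p_d} = \frac{2}{p_d + 2} = \frac{d+1}{2^{d-1} + d + 1}, \]
in agreement with the exponent in the statement of Proposition 2.5.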
We will use Proposition 2.5 multiple times to estimate Gowers norms of restrictions of uniform sequences to sets which can be covered by few arithmetic progressions. For now, we record one immediate consequence, which will simplify the task of showing that a given sequence is Gowers uniform by allowing us to restrict our attention to uniformity norms on initial intervals whose length is a power of k.
Corollary 2.6. Let $d \ge 2$ and $k \ge 2$. Let $a : \mathbb{N}_0 \to \mathbb{C}$ be a 1-bounded sequence, and suppose that
\[ \|a\|_{U^d[k^L]} \ll k^{-cL} \quad\text{as } L \to \infty \tag{11} \]
for a constant $c > 0$. Then
\[ \|a\|_{U^d[N]} \ll_k N^{-\alpha_d c} \quad\text{as } N \to \infty. \tag{12} \]
Proof. Let $N$ be a large integer and put $L = \lceil \log_k N \rceil$. We may then estimate
\[ \|a\|_{U^d[N]} \ll \big\| a 1_{[N]} \big\|_{U^d[k^L]} \ll \|a\|_{U^d[k^L]}^{\alpha_d}. \]
Remark 2.7. The argument is not specific to powers of $k$. The same argument shows that to prove that $\|a\|_{U^d[N]} \ll N^{-c}$, it suffices to check the same condition for an increasing sequence $N_i$ where the quotients $N_{i+1}/N_i$ are bounded.
3 Automatic sequences
3.1 Definitions
In this section we review the basic terminology concerning automatic sequences. Our general reference for this material is [AS03]. To begin with, we introduce some notation concerning digital expansions. For $k \ge 2$, we let $\Sigma_k = \{\mathtt{0}, \mathtt{1}, \dots, \mathtt{k-1}\}$ denote the set of digits in base $k$. For a set $X$ we let $X^*$ denote the monoid of words over the alphabet $X$, with the operation of concatenation and the neutral element being the empty word $\varepsilon$. In particular, $\Sigma_k^*$ is the set of all possible expansions in base $k$ (allowing leading zeros). While formally $\Sigma_k \subset \mathbb{N}_0$, we use different fonts to distinguish between the digits $\mathtt{0}, \mathtt{1}, \mathtt{2}, \dots$ and the numbers $0, 1, 2, \dots$; in particular $\mathtt{11} = \mathtt{1}^2$ denotes the string of two $\mathtt{1}$s, while $11 = 10 + 1$ denotes the integer eleven. For a word $w \in X^*$, we let $|w|$ denote the length of the word $w$, that is, the number of letters it contains, and we let $w^{\mathrm{rev}}$ denote the word whose letters have been written in the opposite order (for instance, $\mathtt{10110}^{\mathrm{rev}} = \mathtt{01101}$).
For an integer $n \in \mathbb{N}_0$, the expansion of $n$ in base $k$ without leading zeros is denoted by $(n)_k \in \Sigma_k^*$ (in particular $(0)_k = \varepsilon$). Conversely, for a word $w \in \Sigma_k^*$ the corresponding integer is denoted by $[w]_k$. We also let $\mathrm{length}_k(n) = |(n)_k|$ be the length of the expansion of $n$ (in particular $\mathrm{length}_k(0) = 0$).
Leading zeros are a frequent source of technical inconveniences, the root of which is the fact that we cannot completely identify $\mathbb{N}_0$ with $\Sigma_k^*$. This motivates us to introduce another piece of notation. For $n \in \mathbb{N}_0$ we let $(n)_k^l \in \Sigma_k^l$ denote the expansion of $n$ in base $k$ truncated or padded with leading zeros to length $l$; that is, $(n)_k^l$ is the suffix of the infinite word $\mathtt{0}^\infty (n)_k$ of length $l$ (for example, $(43)_2^8 = \mathtt{00101011}$ and $(43)_2^4 = \mathtt{1011}$).
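To keep these conventions straight, the following short Python sketch (our illustration; the function names are ours) implements $(n)_k$, $[w]_k$ and $(n)_k^l$ and checks the values quoted above.

# Helper functions (ours) matching the digit conventions of the paper.

def expand(n, k):
    """(n)_k: base-k expansion without leading zeros; (0)_k is empty."""
    digits = []
    while n > 0:
        digits.append(n % k)
        n //= k
    return digits[::-1]

def value(w, k):
    """[w]_k: the integer encoded by the word w over Sigma_k."""
    n = 0
    for d in w:
        n = k * n + d
    return n

def expand_fixed(n, k, l):
    """(n)^l_k: expansion truncated/padded with leading zeros to length l."""
    w = [0] * l + expand(n, k)
    return w[-l:]

assert expand(26, 2) == [1, 1, 0, 1, 0]                     # (26)_2 = 11010
assert expand_fixed(43, 2, 8) == [0, 0, 1, 0, 1, 0, 1, 1]   # (43)^8_2
assert expand_fixed(43, 2, 4) == [1, 0, 1, 1]               # (43)^4_2
assert value([1, 0, 1, 1], 2) == 11                         # [1011]_2 = eleven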
A (deterministic finite) $k$-automaton without output $A = (S, s_0, \Sigma_k, \delta)$ consists of the following data:
• a finite set of states $S$ with a distinguished initial state $s_0$;
• a transition function $\delta : S \times \Sigma_k \to S$.
A (deterministic finite) $k$-automaton with output $A = (S, s_0, \Sigma_k, \delta, \Omega, \tau)$ additionally includes
• an output function $\tau : S \to \Omega$ taking values in an output set $\Omega$.
By an automaton we mean a k-automaton for some unspecified k ≥ 2. By default, all automata are deterministic, finite and with output. When we refer to automata without output, we say so explicitly.
The transition map $\delta : S \times \Sigma_k \to S$ extends naturally to a map (denoted by the same letter) $\delta : S \times \Sigma_k^* \to S$ so that $\delta(s, uv) = \delta(\delta(s, u), v)$. If $A = (S, s_0, \Sigma_k, \delta, \Omega, \tau)$ is an automaton with output, then $a_A$ denotes the automatic sequence produced by $A$, which is defined by the formula $a_A(n) = \tau(\delta(s_0, (n)_k))$. More generally, for $s \in S$, $a_{A,s}$ denotes the automatic sequence produced by $(S, s, \Sigma_k, \delta, \Omega, \tau)$; if $A$ is clear from the context, we simply write $a_s$. A sequence $a : \mathbb{N}_0 \to \Omega$ is $k$-automatic if it is produced by some $k$-automaton.
We say that an automaton (with or without output) with initial state $s_0$ and transition function $\delta$ is prolongable (or ignores the leading zeros) if $\delta(s_0, \mathtt{0}) = s_0$. Any automatic sequence can be produced by an automaton ignoring leading zeros. We call an automaton $A$ idempotent if it ignores the leading zeros and $\delta(s, \mathtt{00}) = \delta(s, \mathtt{0})$ for each $s \in S$, that is, if the map $\delta(\cdot, \mathtt{0}) : S \to S$ is idempotent.
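As an illustration (ours, not part of the paper's formal development), the following Python sketch implements the evaluation $a_A(n) = \tau(\delta(s_0, (n)_k))$ and instantiates it with the classical Thue-Morse automaton, which is prolongable and idempotent.

# A minimal sketch (ours) of a k-automaton with output and the sequence
# it produces; the Thue-Morse automaton is used as a concrete example.

def run(delta, s, word):
    for j in word:
        s = delta[(s, j)]
    return s

def automatic_sequence(delta, s0, tau, k, n):
    word = []
    while n > 0:
        word.append(n % k)
        n //= k
    return tau[run(delta, s0, word[::-1])]

# Thue-Morse: two states, delta(s, j) = s XOR j, tau = identity.
delta = {(s, j): s ^ j for s in (0, 1) for j in (0, 1)}
tau = {0: 0, 1: 1}

tm = [automatic_sequence(delta, 0, tau, 2, n) for n in range(8)]
assert tm == [0, 1, 1, 0, 1, 0, 0, 1]             # sum of binary digits mod 2
assert all(delta[(s, 0)] == s for s in (0, 1))     # prolongable and idempotent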
Note that with the above definitions, automata read input forwards, that is, starting with the most significant digit. One can also consider the opposite definition, where the input is read backwards, starting from the least significant digit, that is, $a_A^{\mathrm{rev}}(n) = \tau\big( \delta(s_0, (n)_k^{\mathrm{rev}}) \big)$. The class of sequences produced by automata reading input forwards is precisely the same as the class of sequences produced by automata reading input backwards. However, the two concepts lead to different classes of sequences if we impose additional assumptions on the automata, such as synchronisation.
An automaton $A$ is synchronising if there exists a synchronising word $w \in \Sigma_k^*$, that is, a word $w$ such that the value of $\delta(s, w)$ does not depend on the state $s \in S$. Note that a synchronising word is by no means unique; indeed, any word $w'$ containing a synchronising word as a factor is itself synchronising. As a consequence, if $A$ is synchronising then the number of words $w \in \Sigma_k^l$ that are not synchronising for $A$ is $\ll k^{l(1-c)}$ for some constant $c > 0$. An automatic sequence is forwards (resp. backwards) synchronising if it is produced by a synchronising automaton reading input forwards (resp. backwards).
An automaton $A$ is invertible if for each $j \in \Sigma_k$ the map $\delta(\cdot, j) : S \to S$ is bijective and additionally $\delta(\cdot, \mathtt{0}) = \mathrm{id}_S$. A sequence is invertible if it is produced by an invertible automaton (reading input forwards). One can show that reading input backwards leads to the same notion, but we do not need this fact. Any invertible sequence is a coding of a generalised Thue-Morse sequence, meaning that there exist a group $G$ and group elements $e_G = g_0, g_1, \dots, g_{k-1}$ such that the sequence is produced by an automaton with $S = G$, $s_0 = e_G$ and $\delta(s, j) = s g_j$ for each $j \in \Sigma_k$ [DM12].
A state $s$ in an automaton $A$ is reachable if $\delta(s_0, w) = s$ for some $w \in \Sigma_k^*$. Unreachable states in an automaton are usually irrelevant, as we may remove them from the automaton without changing the automatic sequence produced by it. We call two distinct states $s, s' \in S$ satisfying $\tau(\delta(s, v)) = \tau(\delta(s', v))$ for all $v \in \Sigma_k^*$ nondistinguishable. One sees directly that we could merge them (preserving the outgoing arrows of one of the states) and still obtain a well-defined automaton producing $a$ and having a smaller number of states. This leads us to the definition of a minimal automaton, i.e. an automaton with no unreachable states and no nondistinguishable states. It is classical that for any automatic sequence there exists a minimal automaton producing that sequence (see for example [AS03, Corollary 4.1.9]).
An automaton $A$ is strongly connected if for any two states $s, s'$ of $A$ there exists $w \in \Sigma_k^*$ with $\delta(s, w) = s'$. A strongly connected component of $A$ is a strongly connected automaton $A'$ whose set of states $S'$ is a subset of $S$ and whose transition function $\delta'$ is the restriction of the transition function $\delta$ of $A$; we often identify $A'$ with $S'$. The following observation is standard, but we include the proof for the convenience of the reader.
Lemma 3.1. Let $A$ be an automaton, as introduced above. Then there exists a word $w \in \Sigma_k^*$ such that if $v \in \Sigma_k^*$ contains $w$ as a factor, then for every state $s$ of $A$, $\delta(s, v)$ belongs to a strongly connected component of $A$.
Proof. Let $S = \{s_0, s_1, \dots, s_{N-1}\}$ be an enumeration of $S$. We construct inductively a sequence of words $\varepsilon = w_0, \dots, w_N$ with the property that $\delta(s_i, w_j)$ belongs to a strongly connected component for any $0 \le i < j \le N$. Once $w_j$ has been constructed, it is enough to define $w_{j+1} = w_j u$, where $u \in \Sigma_k^*$ is an arbitrary word such that $\delta(\delta(s_j, w_j), u)$ belongs to a strongly connected component, which is possible since from any state there exists a path leading to a strongly connected component.
We can consider $k$-automata with or without output as a category. A morphism between automata without output $A = (S, s_0, \Sigma_k, \delta)$ and $A' = (S', s'_0, \Sigma_k, \delta')$ is a map $\varphi : S \to S'$ such that $\varphi(s_0) = s'_0$ and $\varphi(\delta(s, j)) = \delta'(\varphi(s), j)$ for all $s \in S$ and $j \in \Sigma_k$. A morphism between automata with output $A = (S, s_0, \Sigma_k, \delta, \Omega, \tau)$ and $A' = (S', s'_0, \Sigma_k, \delta', \Omega', \tau')$ is a pair $(\varphi, \sigma)$ where $\varphi$ is a morphism between the underlying automata without output and $\sigma : \Omega \to \Omega'$ is a map such that $\sigma(\tau(s)) = \tau'(\varphi(s))$. In the situation above, $a_{A'}$ is the image of $a_A$ via a coding, that is, $a_{A'}(n) = \sigma(a_A(n))$ for all $n \in \mathbb{N}_0$. While this (perhaps overly abstract) terminology is not strictly speaking needed for our purposes, it will be helpful at a later point when we consider morphisms between group extensions of automata.
3.2 Change of base
A sequence $a : \mathbb{N}_0 \to \Omega$ is eventually periodic if there exist $n_0 \ge 0$ and $d \ge 1$ such that $a(n+d) = a(n)$ for all $n \ge n_0$. Two integers $k, k' \ge 2$ are multiplicatively independent if $\log(k)/\log(k')$ is irrational. A classical theorem of Cobham asserts that if $k, k' \ge 2$ are two multiplicatively independent integers, then the only sequences which are both $k$- and $k'$-automatic are the eventually periodic ones, and those are automatic in all bases. On the other hand, if $k, k' \ge 2$ are multiplicatively dependent, meaning that $k = k_0^l$ and $k' = k_0^{l'}$ for some integers $k_0, l, l' \ge 1$, then the classes of $k$-automatic and $k'$-automatic sequences coincide.
Hence, when we work with a given automatic sequence that is not ultimately periodic, the base (denoted by $k$) is determined uniquely up to the possibility to replace it by its power $k' = k^t$, $t \in \mathbb{Q}$ (such that $k^t$ is an integer). We will take advantage of this possibility, which is useful because some of the properties discussed above (specifically synchronisation and idempotence) depend on the choice of base. We devote the remainder of this section to recording how various properties of automatic sequences behave when the base is changed. An instructive example to keep in mind is that $n \mapsto \mathrm{length}_2(n) \bmod 2$ is backwards synchronising in base 4 but not in base 2 (see Proposition 3.3 for details).
We first briefly address the issue of idempotency. Any automatic sequence is produced by an idempotent automaton, possibly after a change of base [BK19b, Lem. 2.2]. Additionally, if the sequence $a_A$ is produced by the automaton $A = (S, s_0, \Sigma_k, \delta, \Omega, \tau)$, then for any power $k' = k^l$, $l \in \mathbb{N}$, there is a natural construction of a $k'$-automaton $A'$ which produces the same sequence $a_{A'} = a_A$ and is idempotent.
We next consider synchronising sequences. The following lemma provides a convenient criterion for a sequence to be synchronising.
Lemma 3.2. Let $a : \mathbb{N}_0 \to \Omega$ be a $k$-automatic sequence and let $w \in \Sigma_k^*$. Then the following conditions are equivalent:
1. the sequence $a$ is produced by a $k$-automaton $A$ reading input forwards (resp. backwards) for which $w$ is synchronising;
2. there exists a map $b : \Sigma_k^* \to \Omega$ such that for any $u, v \in \Sigma_k^*$ we have $a([uwv]_k) = b(v)$ (resp. $a([uwv]_k) = b(u)$).
Proof. For the sake of clarity we only consider the "forward" variant; the "backward" case is fully analogous. It is clear that (1) implies (2), so it remains to prove the reverse implication. Let $A$ be a minimal $k$-automaton which produces $a$. We will show that if $w$ satisfies (2) then it is synchronising for $A$.
Let $s, s' \in S$ be any two states. Pick $u, u'$ such that $s = \delta(s_0, u)$ and $s' = \delta(s_0, u')$. Since
\[ \tau(\delta(s, wv)) = a([uwv]_k) = b(v) = a([u'wv]_k) = \tau(\delta(s', wv)) \]
for any $v \in \Sigma_k^*$, we get that $\tau(\delta(\delta(s, w), v)) = \tau(\delta(\delta(s', w), v))$ for all $v \in \Sigma_k^*$. This implies by minimality of $A$ that $\delta(s, w) = \delta(s', w)$. Thus, we have shown that the word $w$ is synchronising.
As a consequence, we obtain a good understanding of how a change of base affects the property of being synchronising.
Proposition 3.3. Let $a : \mathbb{N}_0 \to \Omega$ be a $k$-automatic sequence and let $l \in \mathbb{N}$.
1. If $a$ is forwards (resp. backwards) synchronising as a $k$-automatic sequence, then $a$ is also forwards (resp. backwards) synchronising as a $k^l$-automatic sequence.
2. If $a$ is forwards synchronising as a $k^l$-automatic sequence, then $a$ is also forwards synchronising as a $k$-automatic sequence.
3. If $l \ge 2$ then there exist backwards synchronising $k^l$-automatic sequences which are not backwards synchronising as $k$-automatic sequences.
Proof.
1. Let $w \in \Sigma_k^*$ be a synchronising word for a $k$-automaton producing $a$. Replacing $w$ with a longer word if necessary, we may assume without loss of generality that the length of $w$ is divisible by $l$. Hence, we may identify $w$ with an element of $\Sigma_{k^l}^* \simeq (\Sigma_k^l)^*$ in a natural way. It follows from Lemma 3.2 that $w$ is a synchronising word for a $k^l$-automaton producing $a$.
2. Let $w \in \Sigma_{k^l}^* \simeq (\Sigma_k^l)^*$ be a synchronising word for a $k^l$-automaton which produces $a$, and consider the word $w' = (w\mathtt{0})^l \in \Sigma_k^*$. This is set up so that if the expansion $(n)_k$ of an integer $n \ge 0$ contains $w'$ as a factor, then $(n)_{k^l}$ contains $w$ as a factor. It follows from Lemma 3.2 that $w'$ is a synchronising word for a $k$-automaton producing $a$.
3. Consider the sequence $b(n) = \mathrm{length}_k(n) \bmod l$. In base $k^l$, the value of $b(n)$ depends only on the leading digit of $n$, whence $b$ is backwards synchronising. On the other hand, $b([v]_k) \ne b([v\mathtt{0}]_k)$ for all $v \in \Sigma_k^*$ with $[v]_k \ne 0$, whence $b$ is not backwards synchronising as a $k$-automatic sequence.
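The example in part 3 is easy to check by direct computation. The following Python sketch (ours, not part of the paper) verifies both assertions for $k = 2$ and $l = 2$ on a small range.

# Quick check (ours) of the example in Proposition 3.3(3):
# b(n) = length_k(n) mod l depends only on the leading base-k^l digit,
# but appending a 0 in base k always changes its value.

def length(n, k):
    L = 0
    while n > 0:
        n //= k
        L += 1
    return L

k, l = 2, 2
kl = k ** l                        # = 4
b = lambda n: length(n, k) % l

# In base k^l: same leading digit => same value of b.
for n in range(1, kl ** 3):
    L = length(n, kl)
    lead = n // kl ** (L - 1)
    assert b(n) == b(lead * kl ** (L - 1))

# In base k: b([v]_k) != b([v0]_k) whenever [v]_k != 0.
assert all(b(n) != b(k * n) for n in range(1, 100))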
4 Derivation of the main theorems

4.1 Strongly connected case
Having set up the relevant terminology in Sections 2 and 3, we are now ready to deduce our main results, Theorems A, B, C and D, from the following variant, applicable to strongly connected automata. We also address the issue of uniqueness of the decomposition in Theorems B and C. We say that a $k$-automatic sequence $a : \mathbb{N}_0 \to \Omega$ is strongly structured if there exist a periodic sequence $a_{\mathrm{per}} : \mathbb{N}_0 \to \Omega_{\mathrm{per}}$ with period coprime to $k$ and a forwards synchronising $k$-automatic sequence $a_{\mathrm{fs}} : \mathbb{N}_0 \to \Omega_{\mathrm{fs}}$, as well as a map $F : \Omega_{\mathrm{per}} \times \Omega_{\mathrm{fs}} \to \Omega$, such that
\[ a(n) = F\big( a_{\mathrm{per}}(n), a_{\mathrm{fs}}(n) \big). \tag{13} \]
Note that thanks to Proposition 3.3 this notion does not change upon replacing the base $k$ by a multiplicatively dependent one.
Theorem 4.1. Let $a : \mathbb{N}_0 \to \mathbb{C}$ be a $k$-automatic sequence produced by a strongly connected, prolongable automaton. Then there exists a decomposition
\[ a = a_{\mathrm{str}} + a_{\mathrm{uni}}, \tag{14} \]
where $a_{\mathrm{str}}$ is strongly structured (cf. (13)) and $a_{\mathrm{uni}}$ is highly Gowers uniform (cf. (1)).
Note that the formulation of Theorem 4.1 is very reminiscent of Theorem B, except that the assumptions on the structured part are different. Indeed, one is an almost immediate consequence of the other.
Proof of Theorem B assuming Theorem 4.1. The only difficulty is to show that any forwards synchronising automatic sequence is rationally almost periodic. This is implicit in [DDM15], and is shown in detail in [BKPLR16, Proposition 3.4]. It follows that any strongly structured sequence is rationally almost periodic.
The derivation of Theorem C is considerably longer, and involves reconstruction of an automatic sequence produced by an arbitrary automaton from the automatic sequences produced by the strongly connected components.
Proof of Theorem C assuming Theorem 4.1. Let $a : \mathbb{N}_0 \to \mathbb{C}$ be an automatic sequence. We may assume (changing the base if necessary) that $a$ is produced by an idempotent automaton $A = (S, s_0, \Sigma_k, \delta, \mathbb{C}, \tau)$ with $\delta(s_0, \mathtt{0}) = s_0$. Throughout the argument we consider $A$ to be fixed and we do not track dependencies of implicit error terms on $A$.
Let $S_0$ denote the set of states $s \in S$ which lie in some strongly connected component of $S$ and which also satisfy $\delta(s, \mathtt{0}) = s$ (or, equivalently, $\delta(s', \mathtt{0}) = s$ for some $s' \in S_0$). Note that each strongly connected component of $S$ contains a state in $S_0$. For each $s \in S_0$, the sequence $a_s = a_{A,s}$ is produced by a strongly connected automaton, so it follows from Theorem 4.1 that there exists a decomposition $a_s = a_{s,\mathrm{str}} + a_{s,\mathrm{uni}}$, where $a_{s,\mathrm{str}}$ is strongly structured and $a_{s,\mathrm{uni}}$ is highly Gowers uniform. For $s \in S_0$ let $a_{s,\mathrm{str}}(n) = F_s\big( a_{s,\mathrm{per}}(n), a_{s,\mathrm{fs}}(n) \big)$ be a representation of $a_{s,\mathrm{str}}$ as in (13). Let $M$ be an integer coprime to $k$ and divisible by the period of $a_{s,\mathrm{per}}$ for each $s \in S_0$ (for instance, the least common multiple of these periods). Let $z \in \Sigma_k^*$ be a word that is synchronising for $a_{s,\mathrm{fs}}$ for each $s \in S_0$ (it can be obtained by concatenating synchronising words for all strongly connected components of $A$).
We will also need a word $y \in \Sigma_k^*$ with the property that if we run $A$ on an input which includes $y$ as a factor, we will visit a state from $S_0$ at some point when the input read so far encodes an integer divisible by $M$. More formally, we require that for each $u \in \Sigma_k^*$ there exists a decomposition $y = x_1 x_2$ such that $\delta(s_0, u x_1) \in S_0$ and $M \mid [u x_1]_k$. The word $y$ can be constructed as follows. Take a word $y_0 \in \Sigma_k^*$ with the property that $\delta(s, y_0)$ belongs to a strongly connected component for each $s \in S$, whose existence is guaranteed by Lemma 3.1. Let $A \ge 1$ be an integer that is multiplicatively rich enough that $M \mid k^A - 1$, and let $B \ge M - 1$. Put $y = y_0 (\mathtt{0}^{A-1}\mathtt{1})^B$. Then, using the notation above, we can take $x_1 = y_0 (\mathtt{0}^{A-1}\mathtt{1})^i$, where $i \equiv -[u y_0]_k \pmod{M}$.
For $n \in \mathbb{N}_0$ such that $(n)_k$ contains $yz$ as a factor, fix the decomposition $(n)_k = u_n v_n$ where $\delta(s_0, u_n) \in S_0$, $M \mid [u_n]_k$ and $u_n$ is the shortest possible subject to these constraints. Note that $v_n$ contains $z$ as a factor. Let $Z \subset \mathbb{N}_0$ be the set of those $n$ for which $(n)_k$ does not contain $yz$ as a factor, and for the sake of completeness define $u_n = v_n = \diamondsuit$ for $n \in Z$, where $\diamondsuit$ is a symbol not belonging to $\Sigma_k^*$. Note also that there exists a constant $\gamma > 0$ such that
\[ |Z \cap [N]| \ll N^{1-\gamma}. \]
We are now ready to identify the structured part of $a$, which is given by
\[ a_{\mathrm{str}}(n) = \sum_{s\in S_0} \llbracket \delta(s_0, u_n) = s \rrbracket\, a_{s,\mathrm{str}}(n). \tag{15} \]
(If $n \in Z$, the statement $\delta(s_0, u_n) = s$ is considered to be false by convention, whence in particular $a_{\mathrm{str}}(n) = 0$; recall that $\llbracket \delta(s_0, u_n) = s \rrbracket$ uses the Iverson bracket notation, that is, it equals $1$ if $\delta(s_0, u_n) = s$, and equals $0$ otherwise.) The uniform part is now necessarily given by $a_{\mathrm{uni}} = a - a_{\mathrm{str}}$. It remains to show that $a_{\mathrm{str}}$ and $a_{\mathrm{uni}}$ are weakly structured and highly Gowers uniform, respectively (note that weakly structured sequences are necessarily automatic). We begin with $a_{\mathrm{str}}$. For any $s \in S_0$, we will show that $n \mapsto \llbracket \delta(s_0, u_n) = s \rrbracket$ is a backwards synchronising $k$-automatic sequence. This is most easily accomplished by describing a procedure which computes it. To this end, we consider an automaton that mimics the behaviour of $A$ and additionally keeps track of the remainder modulo $M$ of the part of the input read so far. Next, we modify it so that if an arbitrary state $s'$ in $S_0$ and residue $0$ are reached, the output becomes fixed to $\llbracket s' = s \rrbracket$. The output for all remaining pairs of states and residues is $0$. More formally, we take
\[ A' = \big( S \times (\mathbb{Z}/M\mathbb{Z}),\, (s_0, 0),\, \Sigma_k,\, \delta',\, \{0,1\},\, \tau' \big), \]
where $\delta'$ is given by
\[ \delta'\big( (r,i), j \big) = \begin{cases} \big( \delta(r,j),\, ki + j \bmod M \big) & \text{if } i \ne 0 \text{ or } r \notin S_0, \\ (r,i) & \text{otherwise,} \end{cases} \]
and the output function is given by
\[ \tau'(r,i) = \begin{cases} 0 & \text{if } i \ne 0 \text{ or } r \notin S_0, \\ \llbracket r = s \rrbracket & \text{otherwise.} \end{cases} \]
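The passage from $A$ to $A'$ is entirely mechanical. The following Python sketch (our rendering of the construction; the function and variable names are ours) builds $\delta'$ and $\tau'$ from the data of $A$ and checks the freezing behaviour on a toy example.

# Generic sketch (ours) of the auxiliary automaton A' on S x Z/MZ: it
# simulates A while tracking the input value mod M, and freezes once a
# state of S0 is reached with residue 0.

def extend(delta, S, S0, k, M, s_target):
    """Return (delta', tau') of A' as dictionaries; states are pairs (r, i)."""
    dp, tp = {}, {}
    for r in S:
        for i in range(M):
            for j in range(k):
                if i != 0 or r not in S0:
                    dp[((r, i), j)] = (delta[(r, j)], (k * i + j) % M)
                else:
                    dp[((r, i), j)] = (r, i)   # output already fixed
            tp[(r, i)] = int(i == 0 and r in S0 and r == s_target)
    return dp, tp

# Smoke test on the Thue-Morse transitions, artificially taking S0 = {1}
# (note delta(1, 0) = 1, so the S0 requirement holds):
delta = {(s, j): s ^ j for s in (0, 1) for j in (0, 1)}
dp, tp = extend(delta, {0, 1}, {1}, 2, 3, 1)
assert dp[((1, 0), 0)] == (1, 0) and dp[((1, 0), 1)] == (1, 0)   # frozen
assert tp[(1, 0)] == 1 and tp[(0, 0)] == 0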
It is clear that $a_{A'}(n) = \llbracket \delta(s_0, u_n) = s \rrbracket$ for all $n \in \mathbb{N}_0$. Additionally, since the output becomes constant once we read $yz$, this procedure gives rise to a backwards synchronising sequence. Hence, each of the summands in (15) is the product of a backwards synchronising sequence and a strongly structured sequence. Moreover, we have by Lemma 3.2 that the cartesian product of forwards (resp. backwards) synchronising $k$-automatic sequences is again a forwards (resp. backwards) synchronising $k$-automatic sequence. A synchronising word for the new automaton can be constructed by concatenating synchronising words of the individual automata. Thus, $a_{\mathrm{str}}$ is weakly structured. Next, let us consider $a_{\mathrm{uni}}$. Thanks to Corollary 2.6, we only need to show that for any $d \ge 2$ there exists a constant $c > 0$ such that $\|a_{\mathrm{uni}}\|_{U^d[k^L]} \ll k^{-cL}$. Fix a choice of $d$ and let $L$ be a large integer. If $n \in \mathbb{N}_0 \setminus Z$ and $s = \delta(s_0, u_n)$, then
\begin{align*} a(n) &= a([u_n v_n]_k) = a_s([v_n]_k) = a_{s,\mathrm{str}}([v_n]_k) + a_{s,\mathrm{uni}}([v_n]_k) \\ &= F_s\big( a_{s,\mathrm{per}}([v_n]_k),\, a_{s,\mathrm{fs}}([v_n]_k) \big) + a_{s,\mathrm{uni}}([v_n]_k) \\ &= F_s\big( a_{s,\mathrm{per}}(n),\, a_{s,\mathrm{fs}}(n) \big) + a_{s,\mathrm{uni}}([v_n]_k) = a_{s,\mathrm{str}}(n) + a_{s,\mathrm{uni}}([v_n]_k), \end{align*}
where in the last line we have used the fact that $M \mid [u_n]_k$ and that $v_n$ is synchronising for $a_{s,\mathrm{fs}}$. Since $a_{\mathrm{str}}(n) = a_{s,\mathrm{str}}(n)$, it follows that
\[ a_{\mathrm{uni}}(n) = a_{s,\mathrm{uni}}([v_n]_k). \]
For a word $x \in \Sigma_k^*$ containing $yz$ as a factor and an integer $l \ge 0$, consider the interval
\[ P = \big\{ [w]_k \;\big|\; w \in x\Sigma_k^l \big\} = \big[ [x]_k\, k^l,\, ([x]_k + 1) k^l \big). \tag{16} \]
Since $u_n$ and $|v_n|$ are constant on $P$, it follows from Proposition 2.5 and the assumption that the $a_{s,\mathrm{uni}}$ are highly Gowers uniform that
\[ \|a_{\mathrm{uni}} 1_P\|_{U^d[k^L]} = \|a_{s,\mathrm{uni}} 1_P\|_{U^d[k^L]} \ll \max_{s\in S_0} \|a_{s,\mathrm{uni}}\|_{U^d[k^L]}^{\alpha_d} \ll k^{-c'L} \]
for some constant $1 > c' > 0$ which does not depend on $P$. It remains to cover $[k^L]$ with a moderate number of intervals $P$ of the form (16) and a small remainder set. Let $\eta > 0$ be a small parameter, to be optimised in the course of the argument, and let $R$ be the set of those $n \in [k^L]$ which are not contained in any progression $P$ given by (16) with $l \ge (1-\eta)L$. Hence, if $n \in R$ then the word $yz$ does not appear in the leading $\lfloor \eta L \rfloor$ digits of $(n)_k^L$. It follows that $|R| \ll k^{(1 - c''_0 \eta)L}$, and consequently
\[ \|a_{\mathrm{uni}} 1_R\|_{U^d[k^L]} \ll \|a_{\mathrm{uni}} 1_R\|_{L^{p_d}[k^L]} \ll k^{-c''\eta L} \]
by Proposition 2.2, where $c''_0 > 0$ and $c'' = c''_0/p_d$ are constants. Each $n \in [k^L] \setminus R$ belongs to a unique interval $P$ given by (16) with $l \ge (1-\eta)L$ and such that no proper prefix of $x$ contains $yz$. There are $\le k^{\eta L}$ such intervals, corresponding to the possible choices of the initial $\lfloor \eta L \rfloor$ digits of $(n)_k^L$ for $n \in P$. It now follows from the triangle inequality that
\[ \|a_{\mathrm{uni}}\|_{U^d[k^L]} \le \|a_{\mathrm{uni}} 1_R\|_{U^d[k^L]} + \sum_P \|a_{\mathrm{uni}} 1_P\|_{U^d[k^L]} \ll k^{-c''\eta L} + k^{(\eta - c')L}. \]
It remains to pick $\eta = c'/2$, leading to $\|a_{\mathrm{uni}}\|_{U^d[k^L]} \ll k^{-cL}$ with $c = c' \min(c'', 1)/2$.
Finally, we record another reduction which will allow us to alter the initial state of the automaton in the proof of Theorem 4.1. As the proof of the following result is very similar to, and somewhat simpler than, the proof of Theorem C discussed above, we skip some of the technical details. In fact, one could repeat said argument directly, only replacing $S_0$ with a smaller set (namely, a singleton); we do not pursue this route because a simpler and more natural argument is possible.
Proposition 4.2. Let $A = (S, s_0, \Sigma_k, \delta, \Omega, \tau)$ be a strongly connected, prolongable automaton and let $S_0 \subset S$ be the set of $s \in S$ such that $\delta(s, \mathtt{0}) = s$. Then the following statements are equivalent:
1. Theorem 4.1 holds for $a_{A,s}$ for some $s \in S_0$;
2. Theorem 4.1 holds for $a_{A,s}$ for all $s \in S_0$.
Proof. It is clear that (2) implies (1). For the other implication, we may assume that Theorem 4.1 holds for $a_{A,s_0} = a_A$. Hence, there exists a decomposition $a_A = a_{\mathrm{str}} + a_{\mathrm{uni}}$ of $a_A$ as the sum of a strongly structured and a highly Gowers uniform sequence. Let $a_{\mathrm{str}}(n) = F\big( a_{\mathrm{per}}(n), a_{\mathrm{fs}}(n) \big)$ be a representation of $a_{\mathrm{str}}$ as in (13).
Pick any $s \in S_0$ and pick $u \in \Sigma_k^*$, not starting with $\mathtt{0}$ and such that $\delta(s_0, u) = s$, whence $a_{A,s}(n) = a_A([u(n)_k]_k)$ for all $n \in \mathbb{N}_0$. Since $\delta(s, \mathtt{0}) = s$, we also have $a_{A,s}(n) = a_A([u\mathtt{0}^m(n)_k]_k)$ for any $m, n \in \mathbb{N}_0$.
Let $Q$ be a multiplicatively large integer, so that the period of $a_{\mathrm{per}}$ divides $k^Q - 1$, and put $m(n) := Q - (\mathrm{length}_k(n) \bmod Q) \in \{1, 2, \dots, Q\}$. For $n \in \mathbb{N}_0$ put $a'_{\mathrm{str}}(n) := a_{\mathrm{str}}([u\mathtt{0}^{m(n)}(n)_k]_k)$ and $a'_{\mathrm{uni}}(n) := a_{A,s}(n) - a'_{\mathrm{str}}(n)$. Clearly, $a_{A,s} = a'_{\mathrm{str}} + a'_{\mathrm{uni}}$. Since the period of $a_{\mathrm{per}}$ divides $k^Q - 1$, for all $n \in \mathbb{N}_0$ we have
\[ a_{\mathrm{per}}([u\mathtt{0}^{m(n)}(n)_k]_k) = a_{\mathrm{per}}(n + [u]_k). \tag{17} \]
Define the sequences $a'_{\mathrm{per}}$ and $a'_{\mathrm{fs}}$ by the formulas
\[ a'_{\mathrm{per}}(n) := a_{\mathrm{per}}([u\mathtt{0}^{m(n)}(n)_k]_k), \qquad a'_{\mathrm{fs}}(n) := a_{\mathrm{fs}}([u\mathtt{0}^{m(n)}(n)_k]_k). \]
It follows from (17) that $a'_{\mathrm{per}}$ is periodic. Since the sequence $m(n)$ is $k$-automatic, so is $a'_{\mathrm{fs}}$. Indeed, in order to compute $a'_{\mathrm{fs}}(n)$ it is enough to compute $m(n)$ and $a_{\mathrm{fs}}([u\mathtt{0}^i(n)_k]_k)$ for $1 \le i \le Q$. Since $a_{\mathrm{fs}}$ is forwards synchronising, it follows from Lemma 3.2 that so is $a'_{\mathrm{fs}}$. (Alternatively, one can also show that $a'_{\mathrm{fs}}$ is automatic and forwards synchronising by an easy modification of an automaton which computes $a_{\mathrm{fs}}$ reading input from the least significant digit.) Since $a'_{\mathrm{str}}$ is given by
\[ a'_{\mathrm{str}}(n) = F\big( a'_{\mathrm{per}}(n), a'_{\mathrm{fs}}(n) \big), \]
it follows that $a'_{\mathrm{str}}$ is strongly structured. To see that $a'_{\mathrm{uni}}$ is highly Gowers uniform, we estimate the Gowers norms $\|a'_{\mathrm{uni}}\|_{U^d[k^L]}$ by covering $[k^L]$ with the intervals $P = [k^l, k^{l+1})$ ($0 \le l < L$) and using Proposition 2.5 to estimate $\|a'_{\mathrm{uni}} 1_P\|_{U^d[k^L]}$.
4.2 Uniqueness of decomposition
The structured automatic sequences we introduce in (4) and (13) are considerably easier to work with than general automatic sequences (cf. the proof of Theorem D below). However, they are still somewhat complicated and it is natural to ask if they can be replaced with a smaller class in the decompositions in Theorems C and 4.1. Equivalently, one can ask if there exist any sequences which are structured in our sense and highly Gowers uniform.
In this section we show that the weakly structured sequences defined in (4) are essentially the smallest class of sequences for which Theorem C is true, and that the decomposition in (14) is essentially unique. As an application, we derive Theorem A as an easy consequence of Theorem C.

Lemma 4.3. Let $a : \mathbb{N}_0 \to \mathbb{C}$ be a weakly structured $k$-automatic sequence such that
\[ \lim_{N\to\infty} \mathop{\mathbb{E}}_{n<N} a(n) b(n) = 0 \tag{18} \]
for any periodic sequence $b : \mathbb{N}_0 \to \mathbb{C}$. Then there exists a constant $c > 0$ such that
\[ |\{ n < N \mid a(n) \ne 0 \}| \ll N^{1-c}. \tag{19} \]
Proof. Since $a$ is weakly structured, we can represent it as
\[ a(n) = F\big( a_{\mathrm{per}}(n), a_{\mathrm{fs}}(n), a_{\mathrm{bs}}(n) \big), \tag{20} \]
using the same notation as in (4). Let $M$ be the period of $a_{\mathrm{per}}$. Pick any residue $r \in \mathbb{Z}/M\mathbb{Z}$ and synchronising words $w, v \in \Sigma_k^*$ for $a_{\mathrm{fs}}$ and $a_{\mathrm{bs}}$, respectively. Assume additionally that $w$ and $v$ do not start with $\mathtt{0}$. Put $x = a_{\mathrm{per}}(r) \in \Omega_{\mathrm{per}}$, $y = a_{\mathrm{fs}}([w]_k)$ and $z = a_{\mathrm{bs}}([v]_k)$. Our first goal is to show that $F(x, y, z) = 0$.
Let $P$ be the infinite arithmetic progression
\[ P = \{ n \in \mathbb{N}_0 \mid n \bmod M = r \text{ and } (n)_k \in \Sigma_k^* w \}. \tag{21} \]
Since $1_P$ is periodic, we have the estimate
\[ \sum_{n=0}^{N-1} a(n) 1_P(n) = \sum_{n=0}^{N-1} F(x, y, a_{\mathrm{bs}}(n)) 1_P(n) = o(N) \quad\text{as } N \to \infty. \tag{22} \]
Let $L$ be a large integer and put $N_0 = [v]_k k^L$ and $N_1 = ([v]_k + 1) k^L$. Applying the above estimate (22) with $N = N_0, N_1$ we obtain
\[ \sum_{n=N_0}^{N_1-1} a(n) 1_P(n) = |[N_0, N_1) \cap P| \cdot F(x, y, z) = o(k^L) \quad\text{as } L \to \infty. \tag{23} \]
This is only possible if F(x, y, z) = 0.
Since $r$, $w$, $v$ were arbitrary, it follows that $a(n) = 0$ if $(n)_k$ is synchronising for both $a_{\mathrm{fs}}$ and $a_{\mathrm{bs}}$. The estimate (19) follows immediately from the estimate on the number of non-synchronising words, discussed in Section 3.
Corollary 4.4.
1. If $a : \mathbb{N}_0 \to \mathbb{C}$ is both weakly structured and highly Gowers uniform, then there exists a constant $c > 0$ such that $|\{n < N \mid a(n) \ne 0\}| \ll N^{1-c}$.
2. If $a = a_{\mathrm{str}} + a_{\mathrm{uni}} = a'_{\mathrm{str}} + a'_{\mathrm{uni}}$ are two decompositions of a sequence $a : \mathbb{N}_0 \to \mathbb{C}$ as the sum of a weakly structured part and a highly Gowers uniform part, then there exists a constant $c > 0$ such that
\[ |\{ n < N \mid a_{\mathrm{str}}(n) \ne a'_{\mathrm{str}}(n) \}| \ll N^{1-c}. \]
Proof of Theorem A assuming Theorem C. Let $a = a_{\mathrm{str}} + a_{\mathrm{uni}}$ be the decomposition of $a$ as the sum of a weakly structured and a highly Gowers uniform part, whose existence is guaranteed by Theorem C. Then
\[ \limsup_{N\to\infty} \Big| \mathop{\mathbb{E}}_{n<N} a_{\mathrm{str}}(n) b(n) \Big| = \limsup_{N\to\infty} \Big| \mathop{\mathbb{E}}_{n<N} a(n) b(n) \Big| = 0 \]
for any periodic sequence $b : \mathbb{N}_0 \to \mathbb{C}$; the first equality holds since $a_{\mathrm{uni}}$ does not correlate with periodic sequences (for instance by Proposition 2.5), and the second holds by hypothesis. Hence, it follows from Lemma 4.3 that there exists $c > 0$ such that $|\{n < N \mid a_{\mathrm{str}}(n) \ne 0\}| \ll N^{1-c}$. In particular, $a_{\mathrm{str}}$ is highly Gowers uniform, and hence so is $a$.
Remark 4.5. Since there exist non-zero weakly structured sequences which vanish almost everywhere, the decomposition in Theorem C is not quite unique. A prototypical example of such a sequence is the Baum-Sweet sequence $b(n)$, taking the value 1 if all maximal blocks of zeros in $(n)_2$ have even length and taking the value 0 otherwise. It seems plausible that with a more careful analysis one could make the decomposition canonical. We do not pursue this issue further.
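For concreteness, the Baum-Sweet sequence can be computed directly from the digit description. The following Python sketch (ours, not part of the paper) lists its initial values.

# The Baum-Sweet sequence from Remark 4.5 (our sketch): b(n) = 1 iff
# every maximal block of zeros in (n)_2 has even length.

def baum_sweet(n):
    if n == 0:
        return 1
    blocks = [len(run) for run in bin(n)[2:].split("1") if run]
    return int(all(length % 2 == 0 for length in blocks))

print([baum_sweet(n) for n in range(13)])
# [1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1]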
Combinatorial application
In this section we apply Theorem C to derive a result in additive combinatorics with a more direct appeal, namely Theorem D. We will need the following variant of the generalised von Neumann theorem.

Lemma 4.6. Let $d \ge 2$, let $P \subset [N]$ be an arithmetic progression, and let $f_0, f_1, \dots, f_d : \mathbb{N}_0 \to \mathbb{C}$ be 1-bounded sequences. Then
\[ \mathop{\mathbb{E}}_{n,m<N} \prod_{i=0}^d \big( 1_{[N]} f_i \big)(n+im)\, 1_P(m) \ll \min_{0\le i\le d} \|f_i\|_{U^d[N]}^{2/3}. \]
Proof. This is essentially Lemma 4.2 in [GT10a]. Using Lemma 2.3 to decompose $1_P$ into a sum of a trigonometric polynomial and an error term that is small in the $L^1$ norm, for any $\eta > 0$ we obtain the estimate
\[ \mathop{\mathbb{E}}_{n,m<N} \prod_{i=0}^d (1_{[N]} f_i)(n+im)\, 1_P(m) \ll (1/\eta)^{1/2} \sup_{\theta\in\mathbb{R}} \Big| \mathop{\mathbb{E}}_{n,m<N} \prod_{i=0}^d (1_{[N]} f_i)(n+im)\, e(\theta m) \Big| + \eta. \tag{24} \]
Given $\theta \in \mathbb{R}$, put $f'_0(n) = e(-\theta n) f_0(n)$, $f'_1(n) = e(\theta n) f_1(n)$, and $f'_i(n) = f_i(n)$ for $1 < i \le d$, so that $\|f_i\|_{U^d[N]} = \|f'_i\|_{U^d[N]}$ for all $0 \le i \le d$ and
\[ \prod_{i=0}^d (1_{[N]} f_i)(n+im)\, e(\theta m) = \prod_{i=0}^d (1_{[N]} f'_i)(n+im) \quad\text{for all } n, m \in \mathbb{N}_0. \]
Applying [GT10a, Lemma 4.2] to the $f'_i$ we conclude that
\[ \sup_{\theta\in\mathbb{R}} \Big| \mathop{\mathbb{E}}_{n,m<N} \prod_{i=0}^d (1_{[N]} f_i)(n+im)\, e(\theta m) \Big| \ll \min_{0\le i\le d} \|f_i\|_{U^d[N]}. \tag{26} \]
The claim now follows by optimising $\eta$.
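The final optimisation is equally routine; we record it for completeness (our own addition). Writing $m = \min_{0\le i\le d} \|f_i\|_{U^d[N]} \le 1$, the combination of the two displayed estimates reads $\ll \eta^{-1/2} m + \eta$, and the choice $\eta = m^{2/3}$ gives
\[ \eta^{-1/2} m + \eta = 2 m^{2/3}, \]
which accounts for the exponent $2/3$ in the statement of the lemma.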
Proof of Theorem D. Our argument follows a similar basic structure to the proof of Theorem 1.12 in [GT10a], although it is considerably simpler. Throughout the argument, $d = l - 1 \ge 2$ and the $k$-automatic set $A \subset \mathbb{N}_0$ are fixed, and all error terms are allowed to depend on $d$, $k$ and $A$. We also let $N$ denote a large integer and put $L = \lceil \log_k N \rceil$ and $\alpha = |A \cap [N]|/N$. Let $1_A = a_{\mathrm{str}} + a_{\mathrm{uni}}$ be the decomposition given by Theorem C, and let $c_1$ be the constant such that $\|a_{\mathrm{uni}}\|_{U^d[N]} \ll N^{-c_1}$. Let $M$ be the period of the periodic component $a_{\mathrm{per}}$ of $a_{\mathrm{str}}$, and let $\eta > 0$ be a small parameter, to be optimised in the course of the argument. For notational convenience we additionally assume that $\eta L$ is an integer. Consider the arithmetic progression
\[ P = \big\{ n < N \;\big|\; n \equiv 0 \ (\mathrm{mod}\ M) \text{ and } (n)_k^L \in \mathtt{0}^{\eta L} \Sigma_k^{L-2\eta L} \mathtt{0}^{\eta L} \big\}. \]
Note that $|P|/N \gg N^{-2\eta}$ and that the second condition is just another way of saying that $n \equiv 0 \ (\mathrm{mod}\ k^{\eta L})$ and $n/k^L < k^{-\eta L}$. Our general goal is, roughly speaking, to show that many $m \in P$ are common differences of many $(d+1)$-term arithmetic progressions in $A \cap [N]$. Towards this end, we will estimate the average
\[ \mathop{\mathbb{E}}_{m\in P} \mathop{\mathbb{E}}_{n<N} \prod_{i=0}^d 1_{A\cap[N]}(n+im). \tag{27} \]
Substituting $1_{A\cap[N]} = 1_{[N]}(a_{\mathrm{str}} + a_{\mathrm{uni}})$ into (27) and expanding the product, we obtain the sum of $2^{d+1}$ expressions of the form
\[ \mathop{\mathbb{E}}_{m\in P} \mathop{\mathbb{E}}_{n<N} \prod_{i=0}^d 1_{[N]} a_i(n+im), \tag{28} \]
where $a_i = a_{\mathrm{str}}$ or $a_i = a_{\mathrm{uni}}$ for each $0 \le i \le d$. If $a_i = a_{\mathrm{uni}}$ for at least one $i$, then it follows from Lemma 4.6 that
\[ \Big| \mathop{\mathbb{E}}_{m\in P} \mathop{\mathbb{E}}_{n<N} \prod_{i=0}^d 1_{[N]} a_i(n+im) \Big| \ll \frac{N}{|P|} \|a_{\mathrm{uni}}\|_{U^d[N]}^{2/3} \ll N^{2\eta - 2c_1/3}. \tag{29} \]
Inserting this into (27) we conclude that
\[ \mathop{\mathbb{E}}_{m\in P} \mathop{\mathbb{E}}_{n<N} \prod_{i=0}^d 1_{A\cap[N]}(n+im) = \mathop{\mathbb{E}}_{m\in P} \mathop{\mathbb{E}}_{n<N} \prod_{i=0}^d 1_{[N]} a_{\mathrm{str}}(n+im) + O(N^{2\eta - 2c_1/3}). \tag{30} \]
Next, we will replace each of the terms $(1_{[N]} a_{\mathrm{str}})(n+im)$ with $(1_{[N]} a_{\mathrm{str}})(n)$, at the cost of introducing another error term. If $(1_{[N]} a_{\mathrm{str}})(n+im) \ne (1_{[N]} a_{\mathrm{str}})(n)$ for some $0 \le i \le d$, $m \in P$ and $n \in [N]$, then at least one of the following holds:
1. either the words $(n+im)_k^L$ and $(n)_k^L$ differ at one of the first $\eta L/2$ positions, or $n < N \le n + im$;

2. the first $\eta L/2$ digits of $(n)_k^L$ do not contain a synchronising word for the backward synchronising component $a_{\mathrm{bs}}$ of $a_{\mathrm{str}}$;

3. the last $\eta L$ digits of $(n)_k^L$ do not contain a synchronising word for the forward synchronising component $a_{\mathrm{fs}}$ of $a_{\mathrm{str}}$.
Indeed, if none of these conditions held, the first $\eta L/2$ digits of $n$ and $n+im$ would coincide, as would their last $\eta L$ digits (because $m \in P$ implies that the last $\eta L$ digits of $m$ are zeros), and we would have $a_{\mathrm{per}}(n) = a_{\mathrm{per}}(n+im)$ (because $m \in P$ implies that $m$ is divisible by $M$, the period of $a_{\mathrm{per}}$); moreover, we would have $a_{\mathrm{fs}}(n) = a_{\mathrm{fs}}(n+im)$ (because the common last $\eta L$ digits of $(n)_k^L$ and $(n+im)_k^L$ contain a synchronising word) and $a_{\mathrm{bs}}(n) = a_{\mathrm{bs}}(n+im)$ (because the common first $\eta L/2$ digits of $(n)_k^L$ and $(n+im)_k^L$ contain a synchronising word). It would then follow that $a_{\mathrm{str}}(n+im) = a_{\mathrm{str}}(n)$. Moreover, the negation of condition (1) would guarantee that $1_{[N]}(n) = 1_{[N]}(n+im)$, contradicting our assumption $(1_{[N]} a_{\mathrm{str}})(n+im) \ne (1_{[N]} a_{\mathrm{str}})(n)$.
If $m \in P$ and $n \in [N]$ are chosen uniformly at random, then (1) holds with probability $\ll N^{-\eta/2}$, and there exist constants $c_{\mathrm{bs}}$ and $c_{\mathrm{fs}}$ (dependent on the synchronising words for the respective components of $a_{\mathrm{str}}$) such that (2) and (3) hold with probabilities $\ll N^{-c_{\mathrm{bs}}\eta}$ and $\ll N^{-c_{\mathrm{fs}}\eta}$, respectively. Letting $c_2 = \min(1/2, c_{\mathrm{bs}}, c_{\mathrm{fs}})$ and using the union bound we conclude that
\[ \mathop{\mathbb{E}}_{m\in P} \mathop{\mathbb{E}}_{n<N} \sum_{i=1}^d \big\llbracket (1_{[N]} a_{\mathrm{str}})(n+im) \ne (1_{[N]} a_{\mathrm{str}})(n) \big\rrbracket \ll N^{-c_2\eta}. \tag{31} \]
Inserting (31) into (30) and removing the average over $P$, we conclude that
\[ \mathop{\mathbb{E}}_{m\in P} \mathop{\mathbb{E}}_{n<N} \prod_{i=0}^d 1_{A\cap[N]}(n+im) = \mathop{\mathbb{E}}_{n<N} a_{\mathrm{str}}^{d+1}(n) + O\big( N^{2\eta - 2c_1/3} + N^{-c_2\eta} \big). \tag{32} \]
The main term in (32) can now be estimated using the Hölder inequality:
\[ \mathop{\mathbb{E}}_{n<N} a_{\mathrm{str}}^{d+1}(n) \ge \Big( \mathop{\mathbb{E}}_{n<N} a_{\mathrm{str}}(n) \Big)^{d+1} \ge \alpha^{d+1} - O(N^{-c_1}), \tag{33} \]
where in the last transition we use the fact that
\[ \mathop{\mathbb{E}}_{n<N} a_{\mathrm{str}}(n) = \alpha - \mathop{\mathbb{E}}_{n<N} a_{\mathrm{uni}}(n) = \alpha - O(N^{-c_1}). \]
Combining (32) and (33) and letting η be small enough that c 2 η < min (2c 1 /3 − 2η, c 1 ), we obtain the desired bound for the average (27):
E m∈P E n<N d ∏ i=0 1 A∩[N] (n + im) ≥ α d+1 − O(N −c 2 η ),(34)
Finally, applying a reverse Markov's inequality to (34) we conclude that
E m∈P E n<N d ∏ i=0 1 A∩[N] (n + im) ≥ α d+1 − ε ≥ ε − O(N −c 2 η )(35)
for any ε > 0. Optimising the value of η for a given ε > 0 we conclude that there exists ≫ ε C N values of m such that
E n<N d ∏ i=0 1 A∩[N] (n + im) ≥ α d+1 − ε,
provided that ε > N −1/C for a certain constant C > 0 dependent on d, k and A. When ε < N −1/C , it is enough to use m = 0.
Remark 4.7. The proof is phrased in terms which appear most natural when η is a constant and ε is a small power of N. This choice is motivated by the fact that this case is the most difficult. However, the theorem is valid for all ε in the range (N −1/C , 1), including the case when ε is constant as N → ∞.
Alternative line of attack
In this section we describe an alternative strategy one could try to employ in the proof of our main theorems. Since this approach is possibly more natural, we find it interesting to see where the difficulties arise and to speculate on how this hypothetical argument would differ from the one presented in the remainder of the paper. As the material in this section is not used anywhere else and has purely motivational purpose, we do not include all of the definitions (which the reader can find in [GT10a]) nor do we prove all that we claim. Let a : N 0 → C be a sequence with |a(n)| ≤ 1 for all n ≥ 0. Fix d ≥ 1 and a small positive constant ε > 0 and let also F : R >0 → R >0 denote a rapidly increasing sequence (its meaning will become apparent in the course of the reasoning). The Arithmetic Regularity Lemma [GT10a] ensures that for each N > 0 there exists a parameter M = O(1) (allowed to depend on d, ε, F but not on N) and a decomposition a(n) = a str (n) + a sml (n) + a uni (n),
(n ∈ [N]),(36)
where a str , a sml and a uni : [N] → C are respectively structured, small and uniform in the following sense:
• a str (n) = F(g(n)Γ, n mod Q, n/N) where F is a function with Lipschitz norm ≤ M, Q is an integer with 1 ≤ Q ≤ M, g : N 0 → G/Γ is a (F(M), N)-irrational polynomial sequence of degree ≤ d − 1 and complexity ≤ M, taking values in a nilmanifold G/Γ;
• ∥a sml ∥ L 2 [N] ≤ ε; • ∥a uni ∥ U d [N] ≤ 1/F(M).
Note that F can always be replaced with a more rapidly increasing function and that definitions of many terms related to a str are currently not provided. The decomposition depends on N, but for now we let N denote a large integer and keep this dependence implicit.
Suppose now that a is additionally k-automatic. We can use the finiteness of the kernel of a to find α ≥ 0 and 0 ≤ r < s < k α such that a(k α n + r) = a(k α n + s) for all n ≥ 0. For the sake of simplicity, suppose that a stronger condition holds: for each q ∈ Z/QZ, there exist 0 ≤ r < s < k α as above with r ≡ s ≡ q (mod Q). Define also N ′ = N/k α and b str (n) = a str (k α n + s) − a str (k α n + r) for all n ∈ [N ′ ], and accordingly for b sml and b uni . Then b str + b sml + b uni = 0. In particular,
E n<N ′ |b str (n)| 2 ≤ E n<N ′ b sml (n)b str (n) + E n<N ′ b uni (n)b str (n) .
The first summand is O(ε) by Cauchy-Schwarz. It follows from the Direct Theorem for Gowers norms that, as long as F increases fast enough (the required rate depends on ε), the second summand is ≤ ε. Hence,
E n<N ′ |a str (k α n + r) − a str (k α n + s)| 2 = E n<N ′ |b str (n)| 2 = O(ε).(37)
Bearing in mind that k α n + r and k α n + s differ by a multiple of Q which is small compared to N, one can hope to derive from (37) that for each q ∈ Z/QZ and each t ∈ [0, 1], F(g(k α n + r)Γ, q,t) ≈ F(g(k α n + s)Γ, q,t),
(n ∈ [N ′ ]).(38)
(We intentionally leave vague the meaning of the symbol "≈".) From here, it is likely that one could show that F(x, n,t) is essentially constant with respect to x ∈ G/Γ. This could possibly be achieved by a more sophisticated variant of the argument proving Theorem B in [BK19a]. For the sake of exposition, let us rather optimistically suppose that F(x, n,t) = F(n,t) is entirely independent of x.
We are then left with the structured part taking the form a str (n) = F(n mod Q, n/N), which bears a striking similarity to the definition of a weakly structured automatic sequence. Unfortunately, there is no guarantee that a str produced by the Arithmetic Regularity Lemma is k-automatic (or that it can be approximated with a k-automatic sequence in an appropriate sense). Ensuring k-automaticity of a str seems to be a major source of difficulty. We note that (37) can be construed as approximate equality between a str (k α n + r) and a str (k α n + s), which suggests (but does not prove) that a str should be approximately equal to a k-automatic sequence a ′ str . If the line of reasoning outlined above succeeded, it would allow us to decompose an arbitrary automatic sequence as the sum of a weakly structured automatic sequence and an error term, which is small in an appropriate sense. However, it seems rather unlikely that this reasoning could give better bounds on the error terms than the rather poor bounds provided by the Arithmetic Regularity Lemma. Hence, in order to obtain the power saving, we are forced to argue along similar lines as in Section 6. It is also worth noting that while the decomposition produced by our argument can be made explicit, it is not clear how to extract an explicit decomposition from an approach using the Arithmetic Regularity Lemma. Finally, our approach also ensures that the uniform component fulfills the carry Property, which is essential to the possible applications discussed in Section 1, and which would be completely lost with the use of the Arithmetic Regularity Lemma.
group extensions of automata
Definitions
In order to deal with automatic sequences more efficiently, we introduce the notion of a group extension of an automaton. 1 A group extension of a k-automaton without output (k-GEA) is a sextuple T = (S, s 0 , Σ k , δ , G, λ ) consisting of the following data:
• a finite set of states S with a distinguished initial state s 0 ;
• a transition function δ : S × Σ k → S;
• a labelling λ : S × Σ k → G where (G, ·) is a finite group.
Note that T contains the data defining an automaton (S, s 0 , Σ k , δ ) without output and additionally associates group labels to each transition. Recall that the transition function δ extends naturally to a map (denoted by the same letter) δ : S × Σ * k → S such that δ (s, vu) = δ (δ (s, v), u) for all u, v ∈ Σ * k . The labelling function similarly extends to a map λ :
S × Σ * k → G such that λ (s, vu) = λ (s, v) · λ (δ (s, v), u)
for all u, v ∈ Σ * k . Thus, T can be construed as a means to relate a word w ∈ Σ * k to a pair consisting of the state δ (s 0 , w) and the group element λ (s 0 , w).
A group extension of a k-automaton with output (k-GEAO) T = (S, s 0 , Σ k , δ , G, λ , Ω, τ) additionally includes • an output function τ : S × G → Ω, where Ω is a finite set.
We use the term group extension of an automaton (GEA) to refer to a group extension of a k-automaton where k is left unspecified. The term group extension of an automaton with output (GEAO) is used accordingly.
Let T = (S, s 0 , Σ k , δ , G, λ , Ω, τ) be a group extension of a k-automaton with output. Then T produces the k-automatic map a T : Σ * k → Ω given by
a T (u) = τ (δ (s 0 , u), λ (s 0 , u)) ,(39)
which in particular gives rise to the k-automatic sequence (denoted by the same symbol) a T : N 0 → Ω via the natural inclusion N 0 → Σ * k , n → (n) k . Accordingly, we say that the GEA T produces a sequence a : N 0 → Ω if there exists a choice of the output function τ such that a = a T . More generally, to a pair (s, h) ∈ S × G we associate the k-automatic sequence
a T ,s,h (u) = τ (δ (s, u), h · λ (s, u)) .(40)
If the GEA T is clear from the context, we omit it in the subscript. Note that with this terminology, GEAs read input starting with the most significant digit. We could also define analogous concepts where the input is read from the least significant digit, but these will not play a role in our reasoning. A morhphism from T to another k-GEA T ′ = (S ′ , s ′ 0 , Σ k , δ ′ , G ′ , λ ′ ) without output is a pair (φ , π) where φ : S → S ′ is a map and π : G → G ′ is a morphism of groups obeying the following compatibility conditions:
• φ (s 0 ) = s ′ 0 and δ ′ (φ (s), j) = φ (δ (s, j)) for all s ∈ S, j ∈ Σ k ;
• λ ′ (φ (s), j) = π(λ (s, j)) for all s ∈ S, j ∈ Σ k .
If φ and π are surjective, we will say that T ′ is a factor of T . A morphism from T to another group extension of a k-automaton with output τ(s, g)) for all s ∈ S, g ∈ G.
T ′ = (S ′ , s ′ 0 , Σ k , δ ′ , G ′ , λ ′ , Ω ′ , τ ′ ) is a triple (φ , π, σ ) where (φ , π) is a morphism from T 0 = (S, s 0 , Σ k , δ , G, λ ) to T ′ 0 = (S ′ , s ′ 0 , Σ k , δ ′ , G ′ , λ ′ ) and σ : Ω → Ω ′ is compatible with (φ , π) in the sense that • τ ′ (φ (s), π(g)) = σ (
In the situation above the sequence a T ′ produced by T ′ is a coding of the sequence a T produced by T , that is, a T ′ (n) = σ • a T (n).
We say that a GEA T (with or without output) is strongly connected if the underlying automaton without output A = (S, s 0 , Σ k , δ ) is strongly connected. The situation is slightly more complicated for synchronisation. We say that a word w ∈ Σ * k synchronises T to a state s ∈ S if δ (s ′ , w) = s and λ (s ′ , w) = id G for each s ′ ∈ S, and that T is synchronising if it has a word that synchronises it to the state s 0 . 2 (This is different than terminology used in [Mül17].) Note that if T is synchronising then so is the underlying automaton but not vice versa, and that even if T is strongly connected and synchronising there is no guarantee that all states s ∈ S have a synchronising word. We also say that T (or T ) is prolongable if δ (s 0 , 0) = s 0 and λ (s 0 , 0) = id G . Finally, T is idempotent if it ignores the leading zeros and δ (s, 0) = δ (s, 00) and λ (s, 00) = λ (s, 0) for all s ∈ S.
As alluded to above, the sequence a T produced by the GEAO T is k-automatic. More explicitly, the GEAO T = (S, s 0 , Σ k , δ , G, λ , Ω, τ) gives rise to the automaton
A T = (S ′ , s ′ 0 , Σ k , δ ′ , Ω, τ) where S ′ = S × G, s ′ 0 = (s 0 , id G )
and δ ′ ((s, g), j) = (δ (s, j), g·λ (s, j)). Conversely, any automaton A = (S, s 0 , Σ k , δ , Ω, τ) can be identified with a GEAO T A = (S, s 0 , Σ k , δ , {id}, λ id , Ω, τ ′ ) with trivial group, λ id (s, j) = id and τ ′ (s, id) = τ(s). At the opposite extreme, any invertible automaton A can be identified with a GEAO
T inv A = ({s ′ 0 }, s ′ 0 , Σ k , δ ′ 0 , Sym(S), λ , Ω, τ ′ ) with trivial state set where δ ′ 0 (s ′ 0 , j) = s ′ 0 , λ (s ′ 0 , j) = δ (·, j) and τ ′ (s ′ 0 , g) = τ(g(s 0 )
). Accordingly, we will call any GEAO (or GEA) with a single state invertible and we omit the state set from its description: any invertible GEAO is fully described by the data (G, λ , Ω, τ).
Example 5.1. The Rudin-Shapiro sequence r(n) is given recursively by r(0) = +1 and r(2n) = r(n), r(2n + 1) = (−1) n r(n). It is produced by the following 2-automaton: where s 0 is the initial state, edge labelled j/± from s to s ′ is present if δ (s, j) = s ′ and λ (s, j) = ±1, and the output function is given by τ(s, g) = g. This is an example of an efficient GEAO, which will be defined shortly. Example 5.3. We also present a GEAO that produces the sequence a(n) defined in Example 1.3. The group is given by the symmetric group on 3 elements Sym(3), where we use the cyclic notation to denote the permutations. The output is given by τ(s 0,1,2 , id) = τ(s 0,1,2 , (23)) = 1, τ(s 3,4,2 , id) = τ(s 3,4,2 , (23)) = 4, τ(s 0,1,2 , (12)) = τ(s 0,1,2 , (132)) = 2, τ(s 3,4,2 , (12)) = τ(s 3,4,2 , (132)) = 5, τ(s 0,1,2 , (13)) = τ(s 0,1,2 , (123)) = 3, τ(s 3,4,2 , (13)) = τ(s 3,4,2 , (123)) = 3.
Efficient group extensions of automata
As we have seen, all sequences produced by GEAOs are automatic and conversely any automatic sequence is produced by a GEAO. In [Mül17] it is shown that any sequence can be produced by an especially well-behaved GEAO. We will now review the key points of the construction in [Mül17] and refer to that paper for more details. For the convenience of the Reader, we add the notation used in [Mül17] in square brackets. where Sym(m) acts onŜ by g · (s 1 , . . . , s m ) = (s g −1 (1) , . . . , s g −1 (m) ). Finally, forŝ ∈Ŝ and g ∈ G we set τ(ŝ, g) = τ (pr 1 (g ·ŝ)), where pr 1 denotes the projection onto the first coordinate. Put T = T A := (Ŝ,ŝ 0 , Σ k ,δ , G, λ , Ω,τ). Then the construction discussed so far guarantees that a A = a T [Mül17, Prop.
Let A = (S, s 0 , Σ k , δ , Ω, τ) [A = (S ′ , s ′ 0 , Σ k , δ ′ , τ ′ )] be
2.5] and also that T is strongly connected and that the underlying automaton of T is synchronising
[Mül17, Prop. 2.2].
The GEAO T is essentially unique with respect to the properties mentioned above, except for two important degrees of freedom: we may rearrange the elements of the m-tuples inŜ and we may changê s 0 to any other state beginning with s 0 . Let S 0 denote the image of δ (·, 0) and letŜ 0 ⊂ S m 0 denote the image ofδ (·, 0). The assumption that A is idempotent guarantees that for eachŝ ∈Ŝ 0 we haveδ (ŝ, 0) =ŝ and λ (ŝ, 0) = id. It follows that we may chooseŝ 0 ∈Ŝ 0 , so that T ignores the leading zeros, i.e. it is prolongable. Consequently, we may assume that T is idempotent.
Rearranging the m-tuples inŜ corresponds to replacing the labels λ (ŝ, j) (ŝ ∈Ŝ, j ∈ Σ k ) with conjugated labels λ ′ (h(ŝ), j) = h(ŝ)λ (ŝ, j)h(δ (ŝ, j)) −1 for any h :Ŝ → Sym(m) (to retainŝ 0 as a valid initial state, we also need to guarantee that h(ŝ 0 )(1) = 1). More generally, for u ∈ Σ * k we have λ ′ (h(ŝ), u) = h(ŝ)λ (ŝ, u)h(δ (ŝ, u)) −1 [Mül17, Prop. 2.6]. To avoid redundancies, we always assume that the group G is the subgroup of Sym(m) generated by all of the labels λ (ŝ, j) (ŝ ∈Ŝ, j ∈ Σ k ); such conjugation may allow us to replace G with a smaller group. In fact, we may ensure a minimality property [Mül17, Thm.
+ Cor. 2.26]:
(T 1 ) For anyŝ,ŝ ′ ∈Ŝ and sufficiently large l ∈ N we have
λ (ŝ, w) w ∈ Σ l k ,δ (ŝ, w) =ŝ ′ = G.
This property is preserved by any further conjugations, as long as we restrict to h :Ŝ → G.
The conditionT 1 guarantees that all elements of G appear as labels attached to paths between any two states. It is natural to ask what happens if additional restrictions are imposed on the integer [w] k corresponding to a path. The remainder of [w] k modulo k l (l ∈ N) records the terminal l entries of w and hence is of limited interest. We will instead be concerned with the remainder of [w] k modulo integers coprime to k. This motivates us to let gcd * k (A) denote the greatest among the common divisors of a set A ⊂ N 0 which are coprime to k and put (following nomenclature from [Mül17])
d ′ = d ′ T = gcd * k [w] k w ∈ Σ * k ,δ (ŝ 0 , w) =ŝ 0 , λ (ŝ, w) = id .(41)
After applying further conjugations, we can find a normal subgroup G 0 < G together with a group element g 0 ∈ G such that [Mül17, Thm. 2.16 + Cor. 2.26]:
(T 2 ) For anyŝ,ŝ ′ ∈Ŝ and 0 ≤ r < d ′ it holds that
λ (ŝ, w) w ∈ Σ * k , δ (ŝ, w) =ŝ ′ , [w] k ≡ r mod d ′ = G 0 g r 0 = g r 0 G 0 .
(T 3 ) For anyŝ,ŝ ′ ∈Ŝ, any g ∈ G 0 and any sufficiently large l ∈ N it holds that
gcd * k [w] k w ∈ Σ l k ,δ (ŝ, w) =ŝ ′ , λ (ŝ, w) = g = d ′ .
The properties listed above imply in particular that G/G 0 is a cyclic group of order d ′ generated by g 0 . We also mention that [Mül17] has a somewhat stronger variant ofT 3 which is not needed for our purposes. Let w be a word synchronising the underlying automaton of T toŝ 0 . Prolonging w if necessary we may assume without loss of generality that d ′ | [w] k and that w begins with 0. Repeating w if necessary we may further assume that λ (ŝ 0 , w) = id. Conjugating by h(ŝ) = λ −1 (ŝ, w) ∈ G 0 we may finally assume that λ (ŝ, w) = id for allŝ ∈Ŝ, and hence that the GEAO T is synchronising. Note that thanks to idempotence, for eachŝ ∈ S we have λ (ŝ, 0) = λ (ŝ, 0w) = λ (ŝ, w) = id G .
In broader generality, let us say that a GEAO T = (S, s 0 , Σ k , δ , G, λ , Ω, τ) (not necessarily arising from the construction discussed above) is efficient if it is strongly connected, idempotent, synchronising, λ (s, 0) = id G for all s ∈ S and it satisfies the "unhatted" versions of the propertiesT 1 ,T 2 andT 3 , that is, there exist d ′ = d ′ T , g 0 ∈ G and G 0 < G such that (T 1 ) For any s, s ′ ∈ S and sufficiently large l ∈ N we have
λ (s, w) w ∈ Σ l k , δ (s, w) = s ′ = G.
(T 2 ) For any s, s ′ ∈ S and 0 ≤ r < d ′ it holds that
λ (s, w) w ∈ Σ * k , δ (s, w) = s ′ , [w] k ≡ r mod d ′ = G 0 g r 0 = g r 0 G 0 .
(T 3 ) For any s, s ′ ∈ S, any g ∈ G 0 and any sufficiently large l ∈ N it holds that
gcd * k [w] k w ∈ Σ l k , δ (s, w) = s ′ , λ (s, w) = g = d ′ .
We let w T 0 denote a synchoronising word for T . The above discussion can be summarised by the following theorem. We note that this theorem is essentially contained in [Mül17], except for some of the reductions presented here. Additionally, [Mül17] contains a slightly stronger version of property T 2 where w is restricted to Σ l k for large l, which can be derived from properties T 1 and T 2 .
Theorem 5.4. Let A be a strongly connected idempotent automaton. Then there exists an efficient GEAO T which produces the same sequence: a A = a T .
In analogy with Proposition 4.2, the veracity of Theorem 4.1 is independent of the initial state of the group extension of an automaton with output.
Proposition 5.5. Let T = (S, s 0 , Σ k , δ , G, λ , Ω, τ) be an efficient GEAO and let S 0 ⊂ S denote the set of all states s ∈ S such that δ (s, 0) = s and λ (s, 0) = id G . Then the following conditions are equivalent.
1. Theorem 4.1 holds for a T ,s,h for some s ∈ S 0 , h ∈ G;
2. Theorem 4.1 holds for a T ,s,h for all s ∈ S 0 , h ∈ G;
Proof. Assume without loss of generality that Theorem 4.1 holds for a T , and let s ∈ S, h ∈ G. It follows from condition T 1 there exists u ∈ Σ * k such that a T ,s,h (n) = a T ([u(n) k ] k ). The claim now follows from Proposition 4.2 applied to the automaton A T corresponding to T discussed at the end of Section 5.1.
Representation theory
Let T = (S, s 0 , Σ k , δ , G, λ , Ω, τ) be an efficient GEAO (cf. Theorem 5.4) and T 0 = (S, s 0 , Σ k , δ , G, λ ) be the underlying GEA. In this section we use representation theory to separate the sequence a T produced by T into simpler components, later shown to be either strongly structured or highly Gowers uniform.
We begin by reviewing some fundamental results from representation theory. A (unitary) representation ρ of the finite group G is a homomorphism ρ : G → U(V ), where U(V ) denotes the group of unitary automorphisms of a finitely dimensional complex vector space V equipped with a scalar product. The representation ρ is called irreducible if there exists no non-trivial subspace W ⊊ V such that ρ(g)W ⊆ W for all g ∈ G. Every representation uniquely decomposes as the direct sum of irreducible representations.
The representation ρ induces a dual representation ρ * defined on the dual space V * , given by ρ * (g)(ϕ) = ϕ •ρ(g −1 ). Note that any element ϕ of V * can be represented as ϕ = ϕ v , where ϕ v (u) = ⟨u, v⟩ for v ∈ V , and V * inherits from V the scalar product given by the formula ⟨ϕ v , ϕ u ⟩ = ⟨u, v⟩. The representation ρ * is unitary with respect to this scalar product. For a given choice of orthonormal basis, the endomorphisms on V can be identified with matrices and V * can be identified with V . Under this identification, ρ * (g) is simply the complex conjugate of ρ(g).
There only exist finitely many equivalence classes of unitary irreducible representations of G and the matrix coefficients of irreducible representations of G span the space of all functions f : G → C (see e.g. [FH91, Cor 2.13, Prop. 3.29]; the latter can also be seen as a special case of the Peter-Weyl theorem).
Here, matrix coefficients of ρ are maps G → C of the form g → ⟨u, ρ(g)v⟩ for some u, v ∈ V . Hence, we have the following decomposition result.
Lemma 5.6. Let T be an efficient group extension of an automaton. The C-vector space of maps G → C is spanned by maps of the form α • ρ where ρ : G → V is an irreducible unitary representation of G and α is a linear map End(V ) → C.
We will call b : N 0 → C a basic sequence produced by T if it takes the form
b(n) = α • ρ(λ (s 0 , (n) k )) δ (s 0 , (n) k ) = s (n ∈ N 0 ),(42)
where ρ : G → U(V ) is an irreducible unitary representation of G, α is a linear map End(V ) → C, and s ∈ S is a state. As a direct consequence of Lemma 5.6 we have the following.
Corollary 5.7. Let T be an efficient group extension of an automaton. The C-vector space of sequences N 0 → C produced by T is spanned by basic sequences defined in (42).
It follows that in order to prove Theorem 4.1 in full generality it is enough to prove it for basic sequences. There are two significantly different cases to consider, depending on the size of the kernel ker ρ = {g ∈ G | ρ(g) = id V }. Theorem 4.1 follows immediately from the following result combined with Theorem 5.4 and Corollary 5.7.
Theorem 5.8. Let T be an efficient group extension of an automaton and let b be a basic sequence given by (42).
1. If G 0 ⊂ ker ρ then b is strongly structured.
2. If G 0 ̸ ⊂ ker ρ then b is highly Gowers uniform.
One of the items above is relatively straightforward and we prove it now. The proof of the other one occupies the remainder of the paper.
Proof of Theorem 5.8(1). We use the same notation as in Theorem 5.4. Since ρ vanishes on G 0 , it follows from property T 2 that ρ(λ (s, w)) = ρ(g [w] k 0 ) for any w ∈ Σ * k . In particular, the sequence n → α • ρ (λ (s 0 , (n) k )) is periodic with period d ′ . Since the underlying automaton of T is synchronising, so is the sequence n → δ (s 0 , (n) k ) = s . It follows that b is the product of a periodic sequence and a synchronising sequence, whence b is strongly structured.
Example 5.9. Let a, b, c be the sequences defined in Example 1.2. Recall the corresponding GEAO is introduced in Example 5.2. The group of the labels is G = {+1, −1}, and the corresponding group G 0 equals G. Note that G has two irreducible representations: the trivial one g → 1, and the non-trivial one g → g. The trivial representation gives rise to the basic sequences 1+b 2 and 1−b 2 , which are strongly structured. The non-trivial representations gives rise to the basic sequences 1+b 2 c and 1−b 2 c, which are highly Gowers uniform. We have a = 3 1+b 2 + 1−b 2 + 1+b 2 c. We close this section with a technical result which will play an important role in the proof of Theorem 5.8(2). Given two representations ρ : G → U(V ) and σ : H → U(W ) we can consider their tensor product
ρ ⊗ σ : G × H → U(V ⊗W ) which is uniquely determined by the property that (ρ ⊗ σ )(g, h)(v ⊗ w) = ρ(g)(v) ⊗ σ (h)(w) for all v ∈ V, w ∈ W .
(Note that V ⊗ W carries a natural scalar product such that ⟨v ⊗ w, v ′ ⊗ w ′ ⟩ V ⊗W = ⟨v, v ′ ⟩ V ⟨w, w ′ ⟩ W , with respect to which ρ ⊗ σ is unitary.) In particular, for D ≥ 0 we can define the D-fold tensor product ρ ⊗D : G D → U(V ⊗D ).
Proposition 5.10. Let ρ : G → U(V ) be an irreducible representation of a group G and let G 0 be a subgroup of G such that G 0 ̸ ⊂ ker ρ. Then for any D ≥ 1 we have
∑ g∈G D 0 ρ ⊗D (g) = 0. (43)
Proof. By the definition of the tensor product we find
∑ g∈G D 0 ρ ⊗D (g) = ω∈[D] ∑ g ω ∈G 0 ρ(g ω ) .
Thus it is sufficient to show that
P := E g∈G 0 ρ(g) = 0.(44)
A standard computation shows that ρ(h)P = P for each h ∈ G 0 , whence in particular P 2 = P. It follows that P is a projection onto the space U < V consisting of the vectors u ∈ V such that ρ(g)u = u for all g ∈ G 0 . Note that U ⊊ V because G 0 ̸ ⊂ ker ρ. We claim that U is an invariant space for ρ. It will suffice to verify that U is preserved by ρ(g 0 ), meaning that ρ(h)ρ(g 0 )u = ρ(g 0 )u for each u ∈ U and each h ∈ G 0 . Pick any h and let h ′ := g −1 0 hg 0 ∈ G 0 . Then, for each u ∈ U we have ρ(h)ρ(g 0 )u = ρ(g 0 )ρ(h ′ )u = ρ(g 0 )u.
Since ρ is irreducible, it follows that U = {0} is trivial. Consequently, P = 0.
6 Recursive relations and the cube groupoid 6.1 Introducing the Gowers-type averages
The key idea behind our proof of Theorem 5.8(2) is to exploit recursive relations connecting ∥a∥ U d [k L ] with ∥a∥ U d [k L−l ] for 0 < l < L. In fact, in order to find such relations we consider somewhat more general averages which we will shortly introduce. A similar idea, in a simpler form, was used in [Kon19].
Throughout this section, T = (S, s 0 , Σ k , δ , G, λ , Ω, τ) denotes an efficient GEAO, d ≥ 1 denotes an integer, and ρ : G → U(V ) denotes an irreducible unitary representation. All error terms are allowed to depend on d and T .
In order to study Gowers norms of basic sequences, we need to define certain averages of linear operators obtained from the representation ρ in a manner rather analogous as in the definition of Gowers norms, the key difference being that the tensor product replaces the product of scalars. We define the space (using the terminology of [Tao12, §2.2], we can construe it as a higher order Hilbert space)
E(V ) = E d (V ) := ⃗ ω∈{0,1} d |⃗ ω| even V ⊗ ⃗ ω∈{0,1} d |⃗ ω| odd V * .(45)
Recall that E(V ) has a natural scalar product; we let ∥·∥ denote the corresponding norm on E(V ) and the operator norm on End(E(V )).
The representation ρ of G on V induces a representation ρ ρ ρ of the group
G [d] = ∏ ⃗ ω∈{0,1} d G on E(V ), given by the formula ρ ρ ρ(g) := ⃗ ω∈{0,1} d C |⃗ ω| ρ(g ⃗ ω ) = ⃗ ω∈{0,1} d |⃗ ω| even ρ(g ⃗ ω ) ⊗ ⃗ ω∈{0,1} d |⃗ ω| odd ρ * (g ⃗ ω ),(46)
where g = (g ⃗ ω ) ⃗ ω∈{0,1} d and C ρ = ρ * denotes the dual representation (C 2 ρ = ρ). This is nothing else than the external tensor product of copies of ρ on V and ρ * on V * , and as such it is irreducible and unitary with respect to the induced scalar product on E(V ).
Using r as a shorthand for (r ⃗ ω ) ⃗ ω∈{0,1} d , we consider the set
R := r ∈ Z [d] ∃ ⃗ t ∈ [0, 1) d+1 ∀⃗ ω ∈ {0, 1} d r ⃗ ω = 1⃗ ω · ⃗ t . Definition 6.1. For s = (s ⃗ ω ) ⃗ ω∈{0,1} d ∈ S [d] , r = (r ⃗ ω ) ⃗ ω∈{0,1} d ∈ R and L ≥ 0 we define the averages A(s, r; L) ∈ End(E(V )) by the formula A(s, r; L) = 1 k (d+1)L ∑ ⃗ n∈Z d+1 ∏ ⃗ ω∈{0,1} d 1⃗ ω ·⃗ n + r ⃗ ω ∈ [k L ] (47) × ∏ ⃗ ω∈{0,1} d δ (s 0 , (1⃗ ω ·⃗ n + r ⃗ ω ) k ) = s ⃗ ω × ⃗ ω∈{0,1} d C |⃗ ω| ρ (λ (s 0 , (1⃗ ω ·⃗ n + r ⃗ ω ) k )) .
Let us now elucidate the connection between the averages (47) and Gowers norms. For s ∈ S we let s [d] = (s) ⃗ ω∈{0,1} d denote the 'constant' cube with copies of s on each coordinate. Lemma 6.2. Let b be a basic sequence produced by T , written in the form (42) for some linear map α : End(V ) → C and s ∈ S. Then
∥b∥ U d [k L ] ≪ A(s [d] , 0; L) 1/2 d ,(48)
where the implicit constant depends on α.
Proof. Let α * : End(V * ) → C denote the conjugate dual map given by the formula α * (ψ * ) = α(ψ).
For ⃗ ω ∈ {0, 1} d let α ⃗ ω := α if |⃗ ω| is even and α ⃗ ω := α * if |⃗ ω| odd. Using the natural identification End(E(V )) ∼ = ⃗ ω∈{0,1} d |⃗ ω| even End(V ) ⊗ ⃗ ω∈{0,1} d |⃗ ω| odd End(V * ),
we define a linear map α α α : End(E(V )) → C by the formula
α α α ⃗ ω∈{0,1} d ψ ⃗ ω = ∏ ⃗ ω∈{0,1} d α ⃗ ω (ψ ⃗ ω ).
With these definitions, an elementary computation shows that
∥b∥ 2 d U d [k L ] = k (d+1)L Π(k L ) α α α(A(s [d] , 0; L)).(49)
The factor k (d+1)L /Π(k L ), corresponding to the different normalisations used in (47) and (9), has a finite limit as L → ∞. Since α α α is linear, we have |α α α(B)| ≪ ∥B∥ and (48) follows.
Remark 6.3. 1. Generalising (49), the average α α α(A(s, r; L)) can be construed (up to a multiplicative factor and a small error term) as the Gowers product of the 2 d functions n → b(n + r ⃗ ω ) for all ⃗ ω ∈ {0, 1} d .
2. As seen from the formulation of Lemma 6.2, we are ultimately interested in the averages (47) when r = 0. The non-zero values of r correspond to ancillary averages, which naturally appear in the course of the argument.
3. Note that for r = 0 the first product on the right hand side of (47) simply encodes the condition that ⃗ n ∈ Π(k L ). The normalising factor k −(d+1)L ensures that A(s, r; L) remain bounded as L → ∞.
Our next goal is to obtain a recursive relation for the averages given by (47). Note that any ⃗ n ∈ Z d+1 can be written uniquely in the form
⃗ n = k l ⃗ m +⃗ e where ⃗ e ∈ [k l ] d+1 and ⃗ m ∈ Z d+1 . Let v = (s, r) ∈ S [d] × Rk (d+1)L ∑ s ′ ∈S [d] ∑ r ′ ∈N [d] 0 ∑ ⃗ m∈Z d+1 ∑ ⃗ e∈[k l ] d+1 (50) ∏ ⃗ ω∈{0,1} d 1⃗ ω · ⃗ m + r ′ ⃗ ω ∈ [k L−l ] · 1⃗ ω ·⃗ e + r ⃗ ω k l = r ′ ⃗ ω × ∏ ⃗ ω∈{0,1} d δ (s 0 , (1⃗ ω · ⃗ m + r ′ ⃗ ω ) k ) = s ′ ⃗ ω · δ (s ′ ⃗ ω , (1⃗ ω ·⃗ e + r ⃗ ω ) l k ) = s ⃗ ω × ⃗ ω∈{0,1} d C |⃗ ω| ρ λ (s 0 , (1⃗ ω · ⃗ m + r ′ ⃗ ω ) k ) · C |⃗ ω| ρ λ (s ′ ⃗ ω , (1⃗ ω ·⃗ e + r ⃗ ω ) l k ) .
In this formula the term corresponding to (s ′ , r ′ ,⃗ m,⃗ e ) vanishes unless r ′ belongs to R. Indeed, since r is in R, we can write r ⃗ ω = ⌊1⃗ ω · ⃗ t⌋ for some ⃗ t ∈ [0, 1) d+1 , and then the corresponding term vanishes unless
r ′ ⃗ ω = 1⃗ ω ·⃗ e + r ⃗ ω k l = 1⃗ ω ·⃗ e + 1⃗ ω · ⃗ t k l = 1⃗ ω ·⃗ e + 1⃗ ω · ⃗ t k l = ⌊1⃗ ω · ⃗ t ′ ⌋, where ⃗ t ′ := (⃗ e + ⃗ t )/k l ∈ [0, 1) d+1 .
The key feature of formula (50) is that the two inner sums over ⃗ m and ⃗ e can be separated, leading to
A(v; L) = ∑ v ′ ∈S [d] ×R A(v ′ ; L − l) · M(v ′ , v; l),(51)
where the expression M(v ′ , v; l) is given for any v = (s, r) and v ′ = (s ′ , r ′ ) in S [d] × R by the formula
M(v ′ , v; l) = 1 k (d+1)l ∑ ⃗ e∈[k l ] d+1 1⃗ ω ·⃗ e + r ⃗ ω k l = r ′ ⃗ ω (52) × ∏ ⃗ ω∈{0,1} d δ (s ′ ⃗ ω , (1⃗ ω ·⃗ e + r ⃗ ω ) l k ) = s ⃗ ω × ⃗ ω∈{0,1} d C |⃗ ω| ρ λ (s ′ ⃗ ω , (1⃗ ω ·⃗ e + r ⃗ ω ) l k ) .
The form of the expression above is our main motivation for introducing in the next section the category V.
The category V d (T )
To keep track of the data parametrising the averages defined above, we define the d-dimensional category V = V d (T ) associated to the GEAO T (or, strictly speaking, to the underlying group extension of an automaton without output). The objects Ob V of this category are the pairs v = (s, r) ∈ S [d] × R. Since R and S are finite, there are only finitely many objects. The morphisms of V will help us keep track of the objects v ′ = (s ′ , r ′ ) appearing in formulae (51) and (52). These morphisms are parametrised by the tuples
(l,⃗ e, s ′ , r) ∈ N 0 × [k l ] d+1 × S [d] × R = Mor V .
The tuple (l,⃗ e, s ′ , r) describes an arrow from v ′ = (s ′ , r ′ ) to v = (s, r), where s = (s ⃗ ω ) ⃗ ω and r ′ = (r ′ ⃗ ω ) ⃗ ω are given by the formulae
s ⃗ ω = δ (s ′ ⃗ ω , (1⃗ ω ·⃗ e + r ⃗ ω ) l k ) and r ′ ⃗ ω = 1⃗ ω ·⃗ e + r ⃗ ω k l .(53)
We will denote this morphism by e = (l,⃗ e ) : v ′ → v. The number deg( e) := l is called the degree of e. In order to define the composition of morphisms, we state the following lemma.
Lemma 6.4. If e ′ = (l ′ , ⃗ e ′ ) is a morphism from v ′′ to v ′ and e = (l,⃗ e) is a morphism from v ′ to v, then e ′′ = (l + l ′ , k l ⃗ e ′ +⃗ e) is a morphism from v ′′ to v.
Proof. Using the same notation as above, for each ⃗ ω ∈ {0, 1} d we have the equality
(1⃗ ω · ⃗ e ′′ + r ′′ ⃗ ω ) l+l ′ k = (1⃗ ω · ⃗ e ′ + r ′ ⃗ ω ) l ′ k (1⃗ ω ·⃗ e + r ⃗ ω ) l k (54)
which allows us to verify that
δ s ′′ ⃗ ω , (1⃗ ω · ⃗ e ′′ + r ′′ ⃗ ω ) l ′′ k = δ s ′′ ⃗ ω , (1⃗ ω · ⃗ e ′ + r ′ ⃗ ω ) l ′ k (1⃗ ω ·⃗ e + r ⃗ ω ) l k = δ s ′ ⃗ ω , (1⃗ ω ·⃗ e + r ⃗ ω ) l k = s ⃗ ω ,
and by basic algebra we have
r ′′ ⃗ ω = 1⃗ ω · ⃗ e ′ + r ′ ⃗ ω k l ′ = 1⃗ ω · ⃗ e ′ + 1ω·⃗ e+r ⃗ ω k l k l ′ = k l (1⃗ ω · ⃗ e ′ ) + 1ω ·⃗ e + r ⃗ ω k l ′ +l = 1⃗ ω · ⃗ e ′′ + r ⃗ ω k l ′′ .
Lemma 6.4 allows us to define the composition of two morphisms e ′ = (l ′ , ⃗ e ′ ) : v ′′ → v ′ and e = (l,⃗ e ) : v ′ → v as e ′′ = e ′ • e := (l ′′ , ⃗ e ′′ ) = (l + l ′ , k l ⃗ e ′ +⃗ e) : v ′′ → v. The composition is clearly associative, and for each object v the map (0, ⃗ 0) : v → v is the identity map. This shows that V is indeed a category.
We let Mor(v ′ , v) denote the set of morphism from v ′ to v. The degree induces an N 0 -valued gradation on this set, which means that
Mor(v ′ , v) decomposes into a disjoint union ∏ ∞ l=0 Mor l (v ′ , v), where Mor l (v ′ , v)
is the set of morphisms e : v ′ → v of degree l. The degree of the composition of two morphisms is equal to the sum of their degrees. A crucial property of the category V is that morphisms can also be uniquely decomposed in the following sense.
e ′′ = k l ⃗ e ′ +⃗ e, where ⃗ e ′ ∈ [k l ′ ] d+1 and ⃗ e ∈ [k l ] d+1 . Thus, we can define v ′ = (s ′ , r ′ ) by the formulae s ′ ⃗ ω := δ (s ′′ ⃗ ω , (1⃗ ω · ⃗ e ′ + r ′ ω ) l ′ k ) and r ′ ⃗ ω := 1⃗ ω ·⃗ e + r ⃗ ω k l ′ .
A computation analogous to the one showing that composition of morphisms is well-defined shows that (l ′ + l, ⃗ e ′′ ) = (l ′ , ⃗ e ′ ) • (l,⃗ e). Conversely, it is immediate that such a decomposition is unique.
Remark 6.6. As a particular case of (51), we can recover A(v; L) from M(v ′ , v; L). Indeed, it follows from (51) that
A(v; L) = ∑ v ′ ∈S [d] ×R A(v ′ ; 0) · M(v ′ , v; L).(55)
Recalling the definition of A(v ′ ; 0) in (47) we see that the only non-zero terms in the sum (55) above correspond to objects of the form v ′ = (s
[d] 0 , r ′ ) where r ′ ∈ R is such that there exists ⃗ n ∈ Z d+1 with r ′ ⃗ ω = 1⃗ ω ·⃗ n for each ⃗ ω ∈ {0, 1} d .
Let R ′ ⊂ R denote the set of all r ′ with the property just described and note that if r ′ ∈ R ′ then A(s
[d] 0 , r ′ ; 0) = id E(V ) is the identity map. It follows that A(v; L) = ∑ r ′ ∈R ′ M((s [d] 0 , r ′ ), v; L).(56)
We stress that 0 ∈ R ′ , but as long as d ≥ 2, R ′ contains also other elements. For instance, when d = 2 the set R consists of exactly the elements (r 00 , r 01 , r 10 , r 11 ) of the form (0, 0, 0, 0), (0, 0, 0, 1), (0, 0, 1, 1), (0, 1, 0, 1), (0, 1, 1, 1), (0, 1, 1, 2), while R ′ consists of elements of the form (0, 0, 0, 0), (0, 0, 1, 1), (0, 1, 0, 1), (0, 1, 1, 2).
The subcategory U d (T )
The object
v 0 = v T 0 = (s [d] 0 , 0 [d] ) ∈ Ob V(57)
is called the base object. In the recurrence formulae above the objects of particular importance are those which map to the base object. We define a (full) subcategory U of V, whose objects are those among v ∈ Ob V for which Mor(v, v 0 ) ̸ = / 0 and Mor(v 0 , v) ̸ = / 0 (in fact, we will prove in Lemma 6.7 that the former condition is redundant), and whose morphisms are the same as those in V.
Lemma 6.7. There exists l 0 ≥ 0 such that Mor l (v, v 0 ) ̸ = / 0 for any v ∈ Ob V and any l ≥ l 0 .
Proof. We first consider objects of the form v = (s, 0). Letting e 0 = [w T 0 ] k 3 and e i = 0 for 1 ≤ i ≤ d and taking sufficiently large l we find a morphism e = (l,⃗ e) : v → v 0 .
In the general case, since
Mor(v, v 0 ) ⊂ Mor(v, v ′ ) • Mor(v ′ , v 0 ), it only remains to show that for each object v = (s, r) ∈ Ob V there exists some v ′ = (s ′ , 0) ∈ Ob V such that Mor(v, v ′ ) ̸ = / 0. Since r ∈ R, there exists a vector ⃗ t ∈ [0, 1) d+1 such that r ⃗ ω = ⌊1⃗ ω · ⃗ t⌋ for all ⃗ ω ∈ {0, 1} d .(58)
It follows from piecewise continuity of the floor function that there exists an open set of ⃗ t ∈ [0, 1) d+1 that fulfill (58). Hence, one can pick, for any sufficiently large l ≥ 0, ⃗ t of the form ⃗ t =⃗ e/k l , where ⃗ e ∈ [k l ] d+1 . Choosing s ′ ⃗ ω = δ (s ⃗ ω , (1⃗ ω ·⃗ e) l k ) finishes the proof.
Corollary 6.8. Let v, v ′ ∈ Ob V . If Mor(v, v ′ ) ̸ = / 0 and v ∈ Ob U , then v ′ ∈ Ob U .
Proof. By Lemma 6.7, we have Mor(v ′ , v 0 ) ̸ = / 0. Moreover, we find by Lemma 6.4, that Mor
(v 0 , v ′ ) ⊃ Mor(v 0 , v) • Mor(v, v ′ ) ̸ = / 0.
Lemma 6.9. Let s ∈ S and let v = (s [d] , 0) ∈ Ob V . Then v ∈ Ob U .
Proof. It is enough to show that v 0 is reachable from v. Let w ∈ Σ * k be a word synchronising the underlying automaton of T to s. Let e 0 = [w] k , e i = 0 for 1 ≤ i ≤ d and let l > |w| + log k (d). Then we have the morphism e = (l,⃗ e ) : v 0 → v, as needed. 3 We recall that w T 0 is a synchronizing word for T , i.e. for any s ∈ S we have δ (s, w T 0 ) = s 0 , λ (s, w T 0 ) = id.
The cube groupoid
By essentially the same argument as in (51) we conclude that for any
v, v ′ , v ′′ ∈ S [d] × R we have M(v, v ′′ ; L) = ∑ v ′ ∈S [d] ×R M(v, v ′ ; L − l) · M(v ′ , v ′′ ; l).(59)
Regarding the group G [d] as a category with one object, we define the d-dimensional fundamental functor λ λ λ = λ λ λ d
T : V d (T ) → G [d]
as follows. All objects are mapped to the unique object of G [d] and an arrow e = (l,⃗ e)
: v = (s, r) → v ′ = (s ′ , r ′ ) is mapped to λ λ λ ( e) = (λ ⃗ ω ( e)) ⃗ ω∈{0,1} d = λ (s ⃗ ω , (1⃗ ω ·⃗ e + r ′ ⃗ ω ) l k ⃗ ω∈{0,1} d .(60)
It follows from Lemma 6.4 that λ λ λ is indeed a functor.
We are now ready to rewrite M in a more convenient form:
M(v, v ′ ; l) = ∑ e∈Mor l (v,v ′ ) ρ ρ ρ(λ λ λ ( e)).(61)
In order to keep track of the terms appearing in (61), we introduce the families of cubes Q d l . For two
objects v, v ′ ∈ Ob V the cube family Q d l (T )(v, v ′ ) is defined to be the subset of G [d] given by Q d l (T )(v, v ′ ) = {λ λ λ ( e) | e ∈ Mor l (v, v ′ )}.(62)
Frobenius-Perron theory
In this section we review some properties of nonnegative matrices and their spectra. For a matrix W we let ρ(W ) denote its spectral radius. By Gelfand's formula, for any matrix norm ∥·∥ we have
ρ(W ) = lim l→∞ W l 1/l .(63)
If W,W ′ are two matrices of the same dimensions, then we say that W ≥ W ′ if the matrix W −W ′ has nonnegative entries. Accordingly, W > W ′ if W −W ′ has strictly positive entries. In particular, W has nonnegative entries if and only if W ≥ 0.
Let W = (W i j ) i, j∈I be a nonnegative matrix with rows and columns indexed by a (finite) set I. For J ⊂ I, we let W [J] = (W i j ) i, j∈J denote the corresponding principal submatrix. We define a directed graph with the vertex set I and with an arrow from i ∈ I to j ∈ I whenever W i j > 0. We say that i ∈ I dominates j ∈ I if there is a directed path from i to j 4 , and that i and j are equivalent if they dominate each other. We refer to the equivalence classes of this relation as the classes of W . We say that a class J 1 dominates a class J 2 if j 1 dominates j 2 for some (equivalently, all) j 1 ∈ J 1 and j 2 ∈ J 2 . This is a weak partial order on the set of classes.
A nonnegative matrix W is called irreducible if it has only one class. The Frobenius-Perron theorem says that every irreducible matrix has a real eigenvalue λ equal to its spectral radius, its multiplicity is Proposition 6.10. Let W = (W i j ) i, j∈I be a nonnegative matrix such that the matrices W l are jointly bounded for all l ≥ 0. Let N ≤ W be a nonnegative matrix, and let J ⊂ I be a basic class of W such that
N[J] ̸ = W [J]
. Then there is a constant γ < 1 such that for all i ∈ I and j ∈ J we have
(N l ) i j ≪ γ l as l → ∞.(64)
Proof. Let V = R I denote the vector space with basis I equipped with the standard Euclidean norm. We identify matrices indexed by I with linear maps on V and let ∥A∥ denote the operator norm of a matrix A (in fact, we could use any norm such that 0 ≤ A 1 ≤ A 2 implies ∥A 1 ∥ ≤ ∥A 2 ∥). For J ⊂ I let V [J] denote the vector subspace of V with basis J. By Gelfand's theorem, the spectral radius of W can be computed as ρ(W ) = lim l→∞ W l 1/l . Since the matrices W l are jointly bounded, we have ρ(W ) ≤ 1. Furthermore, if ρ(W ) < 1, then there is some λ < 1 such that W l ≤ λ l for l large enough, and hence all entries of W l (and a fortiori of N l ) tend to zero at an exponential rate, proving the claim. Thus, we may assume that ρ(W ) = 1.
Step 1. No two distinct basic blocks of W dominate each other.
Proof. Let J 1 and J 2 be distinct basic blocks of W , and for the sake of contradiction suppose that J 1 dominates J 2 . By Frobenius-Perron theorem applied to the matrices W [J 1 ] and W [J 2 ], there are vectors x 1 ∈ V [J 1 ] and x 2 ∈ V [J 2 ] with x 1 , x 2 > 0 and W [J 1 ]x 1 = x 1 , W [J 2 ]x 2 = x 2 . Since J 1 dominates J 2 , there exists m ≥ 1 such that any vertex i ∈ J 1 is connected to any vertex j ∈ J 2 by a path of length < m. Let U := 1 m (I +W + · · · +W m−1 ). It follows (cf. [Min88, Thm. I.2.1]) for a sufficiently small value of ε > 0 that we have
Ux 1 ≥ x 1 + εx 2 , Ux 2 ≥ x 2 .(65)
Iterating (65), for any l ≥ 0 we obtain U l x 1 ≥ x 1 + lεx 2 .
On the other hand, powers of U are jointly bounded because the powers of W are jointly bounded, which yields a contradiction. Step 2. We have N[J] l x < W [J] l x for all l ≥ |J|.
Let x ∈ V [J], x > 0,Proof. As N[J] ̸ = W [J], there exist i, i ′ ∈ J such that N[J] i,i ′ < W [J] i,i ′ . Since x > 0, we have (N[J] l x) j < (W [J] l x
) j for each j ∈ J that is an endpoint of a path of length l containing the arrow i, i ′ . As W [J] is irreducible, such path exists for all l ≥ |J|.
Step 3. We have N l x ≪ γ l for some γ < 1 as l → ∞ Proof. Since ρ(W [K]) < 1, it follows from Gelfand's theorem that for any sufficiently large n we have
∥N[K] n ∥ ≤ ∥W [K] n ∥ < 1.(67)
By
Step 2, for any sufficiently large n there exist λ < 1 and v ∈ V [K] such that
N n x ≤ λ x + v.(68)
Pick n, λ and v such that (67) and (68) hold, and assume additionally that λ is close enough to 1 so that
∥N[K] n ∥ ≤ λ , whence N n v = N[K] n v ≤ λ v.(69)
Applying (68) iteratively, for any l ≥ 0 we obtain N ln x ≤ λ l x + lλ l−1 v.
It follows that
Step 3 holds with any γ such that γ < λ 1/n .
Since x > 0 (as an element of V [J]) the claim (64) follows immediately from Step 3.
From recursion to uniformity
In Section 7 we obtain a fairly complete description of the cubes Q d l (v, v ′ ). The main conclusion is the following (for a more intuitively appealing equivalent formulation, see Theorem 7.17).
Theorem 6.11. There exist cubes g v ∈ G [d] , v ∈ Ob U , and a threshold l 0 ≥ 0 such that for each l ≥ l 0 and each v, v ′ ∈ Ob U we have
Q d (T )(v, v ′ ) = g −1 v G [d] 0 Hg v ′ ,
where H < G [d] is given by
H = g 1⃗ ω·⃗ e 0 ⃗ ω∈{0,1} d ⃗ e ∈ N d+1 0 .
Presently, we show how the above result completes the derivation of our main theorems. We will need the following corollary.
Corollary 6.12. There exists l 0 ≥ 0 such that for all l ≥ l 0 we have
G [d] 0 ⊂ Q d l (v 0 , v 0 ).(70)
Proof. Follows directly from the observation that id
[d]
G ∈ H (where we use the notation from Theorem 6.11) and G 0 is normal in G.
Proof of Theorem 5.8(2). Recall that in (49) we related the Gowers norms in question to the averages A(v; L) with v ∈ Ob V taking the form v = (s [d] , 0) and that by Lemma 6.9 the relevant cubes belong to Ob U . Hence, it will suffice to show that for any v ∈ Ob U we have the bound ∥A(v; L)∥ ≪ k −cL for a positive constant c > 0.
Let us write A and M (defined in (47) and (52) respectively) in the matrix forms:
A(L) = A(v; L) v∈Ob V and M(L) = M(v, v ′ ; L) v,v ′ ∈Ob V ;
note that the entries of the matrices A(L) and M(L) are elements of End(E(V )). This allows us to rewrite the recursive relations (51) and (59) as matrix multiplication:
A(l + l ′ ) = A(l)M(l ′ ), M(l + l ′ ) = M(l)M(l ′ ), (l, l ′ ≥ 0).(71)
Consider also the real-valued matrices N(L) and W (L), of the same dimension as M(L), given by
N(L) v,v ′ = M(L) v,v ′ 2 = 1 k (d+1)L ∑ e∈Mor L (v,v ′ ) ρ ρ ρ(λ λ λ ( e)) 2 W (L) v,v ′ = |Mor L (v, v ′ )| k (d+1)L .
Note that 0 ≤ N(l) ≤ W (l) for each l ≥ 0 by a straightforward application of the triangle inequality and the fact that ρ ρ ρ is unitary. Moreover, for reasons analogous to (71) we also have
N(l + l ′ ) ≤ N(l)N(l ′ ) W (l + l ′ ) = W (l)W (l ′ ), (l, l ′ ≥ 0).(72)
As a consequence, W (l) = W l , where W := W (1). It also follows directly from how morphisms are defined that W (l) v,v ′ ≤ 1 for all v, v ′ ∈ Ob V and l ≥ 0. Let l 0 be the constant from Corollary 6.12. Then, by Proposition 5.10 we have N(l) v 0 ,v 0 ̸ = W (l) v 0 ,v 0 for all l ≥ l 0 . We are now in position to apply Proposition 6.10, which implies that there exits γ < 1 such that for any v ∈ V and any u ∈ U we have
N(l 0 ) l v,u ≪ γ l/l 0 .(73)
Using with (72), (73) can be strengthened to
N(L) v,u ≪ γ L .(74)
Finally, using (71) and the fact that all norms on finitely dimensional spaces are equivalent, for any u ∈ Ob U and L ≥ 0 we conclude that
∥A(u; L)∥ = ∥A(L) u ∥ = ∑ v∈V A(0) v M(L) v,u ≪ ∑ v∈V N(L) v,u ≪ γ L .(75)
7 Cube groups 7.1 Groupoid structure
We devote the remainder of this paper to proving Theorem 6.11, which provides a description of the cube sets Q d l (v, v ′ ). In this section we record some basic relations between the Q d
l (v, v ′ ) for different v, v ′ ∈ Ob V .
Our key intention here is to reduce the problem of describing Q d l (T )(v, v ′ ) for arbitrary v, v ′ ∈ Ob U to the special case when v = v ′ = v T 0 . Lemma 7.1. Let T be an efficient GEA and let v, v ′ , v ′′ ∈ Ob V and l, l ′ ≥ 0. Then
Q d l ′ (T )(v, v ′ ) · Q d l (T )(v ′ , v ′′ ) ⊆ Q d l+l ′ (T )(v, v ′′ ).
Proof. This is an immediate consequence of the fact that λ λ λ is a functor.
Lemma 7.2. Let T be an efficient GEA and v, v ′ ∈ Ob U . Then the limit
Q d (T )(v, v ′ ) = lim l→∞ Q d l (T )(v, v ′ )(76)
exists. Moreover, there exist cubes g v ∈ G [d] such that for any v, v ′ ∈ Ob U the limit in (76) is given by (76) is just a shorthand for the statement that there exists
Q d (T )(v, v ′ ) = g −1 v · Q d (T )(v T 0 , v T 0 ) · g v ′ . (77) Remark 7.3. Since Q d l (T )(v, v ′ ) are finite,l 0 = l 0 (T , v, v ′ ) ≥ 0 and a set Q d (T )(v, v ′ ) such that Q d l (T )(v, v ′ ) = Q d (T )(v, v ′ ) for all l ≥ l 0 . Proof. Note first that Q d 1 (T )(v T 0 , v T 0 ) ̸ = 0 contains the identity cube id [d]
G , arising from the morphism
(1, ⃗ 0) : v T 0 → v T 0 . It follows from Lemma 7.1 that the sequence Q d l (T )(v T 0 , v T 0 ) is increasing in the sense that Q d l (T )(v T 0 , v T 0 ) ⊆ Q d l+1 (T )(v T 0 , v T 0 ) for each l ≥ 0. Since the ambient space G [d] is finite, it follows that the sequence Q d l (T )(v T 0 , v T 0 )
needs to stabilise, and in particular the limit (76) exists for v = v ′ = v T 0 . It follows from Lemma 7.2 that for any m, m ′ , l ≥ 0 we have the inclusion
Q d m ′ (T )(v T 0 , v) · Q d l (T )(v, v ′ ) · Q d m (T )(v ′ , v T 0 ) ⊆ Q d m+m ′ +l (T )(v T 0 , v T 0 ). Since there exist morphisms v T 0 → v, v ′ → v T 0 , there exist m, m ′ ≥ 0 and g v , g v ′ (any elements of Q d m (T )(v T 0 , v) and Q d m ′ (T )(v ′ , v T 0 ) −1 respectively) such that for all l ≥ 0 we have g v · Q d l (T )(v, v ′ ) · g −1 v ′ ⊆ Q d m+m ′ +l (T )(v T 0 , v T 0 ).
We thus conclude that if l ≥ 0 is sufficiently large then
Q d l (T )(v, v ′ ) ⊆ g −1 v · Q d (T )(v T 0 , v T 0 ) · g v ′ .(78)
Reasoning in a fully analogous manner (with pairs (v, v ′ ) and (v T 0 , v T 0 ) swapped), for sufficiently large l we obtain the reverse inclusion
Q d (T )(v T 0 , v T 0 ) ⊆ h −1 v · Q d l (T )(v, v ′ ) · h v ′ ,(79)
for some cubes h v , h v ′ ∈ G [d] . Comparing cardinalities we conclude that both (78) and (79) are in fact equalities. Hence, the limit (76) exists for all v, v ′ ∈ Ob U and
Q d (T )(v, v ′ ) = g −1 v · Q d (T )(v T 0 , v T 0 ) · g v ′ .(80)
Note that g v and g v are determined up to multiplication on the left by an element of Q d (T )(v T 0 , v T 0 ) and we may take
g v T 0 = g v T 0 = id [d] G . Hence, Q d (T )(v T 0 , v T 0 ) is a group. It now follows from Lemma 7.1 that Q d (T )(v T 0 , v) · Q d (T )(v, v T 0 ) ⊆ Q d (T )(v T 0 , v T 0 ), or equivalently Q d (T )(v T 0 , v T 0 ) · g v g −1 v · Q d (T )(v T 0 , v T 0 ) ⊆ Q d (T )(v T 0 , v T 0 ),(81)meaning that g v g −1 v ∈ Q d (T )(v T 0 , v T 0 )
. Hence, we may take g v = g v , since we can multiply g v from the left with (
g v g −1 v ) −1 ∈ Q d (T )(v T 0 , v T 0 ).
As a consequence of Lemma 7.2, the sets Q d (T )(v, v ′ ) for v, v ′ ∈ Ob U form a groupoid, in the sense that we have the following variant of Lemma 7.1.
Corollary 7.4. Let T be an efficient GEA and let v, v ′ , v ′′ ∈ Ob U . Then
Q d (T )(v, v ′ ) · Q d (T )(v ′ , v ′′ ) = Q d (T )(v, v ′′ ).
In particular, in order to understand all of the sets Q d (v, v ′ ) (up to conjugation) it will suffice to understand one of them. This motivates us to put
Q d (T ) = Q d (T )(v T 0 , v T 0 ).(82)
We also mention that the sets Q d (T ) are easy to describe for small values of d.
Lemma 7.5. Let T be an efficient GEA and d ∈ {0, 1}. Then
Q d (T ) = G [d] .
Proof. Immediate consequence of the definition of Q d (T ) and property T 1 .
Characteristic factors
A morphism between GEA T andT given by (φ , π) is a factor map if both φ : S →S and π : G →Ḡ are surjective. In this case,T is a factor of T . The group homomorphism π induces a projection map π π π : G [d] →Ḡ [d] . As λ λ λ is a functor, π π π(Q d (T )) ⊂ Q d (T ) for all d ≥ 0. In fact, for large l ≥ 0 we have the following commutative diagram:
Mor l (v 0 , v 0 ) Mor l (v 0 ,v 0 ) Q d l (T ) Q d l (T ) id λ λ λ λ λ λ π π π
The map labelled id takes the morphism (l,⃗ e) : v 0 → v 0 to morphism given by the same data (l,⃗ e) :v 0 → v 0 . We will say that the factorT of T is characteristic if for each d ≥ 0 we have the equality Q d (T ) = π π π −1 Q d (T ) . Note that ifT is a characteristic factor of T then the cube groups Q d (T ) are entirely described in terms of the simpler cube groups Q d (T ). It is also easy to verify that ifT is a characteristic factor of T then any characteristic factor ofT is also a characteristic factor of T . For instance, a GEA is always its own factor, which is always characteristic. A possibly even more trivial 5 example of a factor is the trivial GEA T triv with a single state, trivial group, and the other data defined in the only possible way. In fact, T triv is the terminal object, meaning that it is a factor of any GEA . The trivial GEA is a characteristic factor of T if and only if Q d (T ) = G [d] for all d ≥ 0.
Lemma 7.6. Let T be an efficient GEA and let (φ , π) be a factor map from T toT . If ker π ⊂ G 0 then T is an efficient GEA and d ′ T = d ′T .
Proof. We verify each of the defining properties of an efficient GEA in turn. It is clear thatT is strongly connected and thatT is synchronising; in fact, if w ∈ Σ * k is synchronising to the state s ∈ S for T then w is also synchronising to the state φ (s) ∈S forT . We also find thatT is idempotent andλ (s, 0) = id for alls ∈S. Put alsoḠ 0 = π(G 0 ) andḡ 0 = π(g 0 ).
For T 1 , lets,s ′ ∈S and let s ∈ φ −1 (s) and s ′ ∈ φ −1 (s ′ ). Then
λ (s, w) w ∈ Σ l k ,δ (s) =s ′ ⊇ π (λ (s, w)) w ∈ Σ l k , δ (s) = s ′ =Ḡ,
and the reverse inclusion is automatic. For T 2 , let. Lets,s ′ ∈S and s ∈ φ −1 (s). Then
λ (s, w) w ∈ Σ * k ,δ (s, w) =s ′ , [w] k ≡ r mod d ′ = s ′ ∈φ −1 (s ′ ) π(λ (s, w)) w ∈ Σ * k , δ (s, w) = s ′ , [w] k ≡ r mod d ′ = s ′ ∈φ −1 (s ′ ) π(g r 0 G 0 ) =ḡ r 0Ḡ 0 .
For T 3 , lets,s ′ ∈S,ḡ ∈Ḡ 0 , let s ∈ φ −1 (s). Then
gcd * k [w] k w ∈ Σ l k ,δ (s, w) =s ′ ,λ (s, w) =ḡ = gcd * k g∈π −1 (ḡ) s ′ ∈φ −1 (s ′ ) [w] k w ∈ Σ l k , δ (s, w) = s ′ , λ (s, w) = g = gcd * k c(g, s ′ ) g ∈ π −1 (ḡ), s ′ ∈ φ −1 (s ′ ) = d ′ ,
where c(g, s ′ ) is, thanks to T 3 for T , given by
c(g, s ′ ) = gcd * k [w] k w ∈ Σ l k , δ (s, w) = s ′ , λ (s, w) = g = d ′ .
no pun intended
Group quotients
Let T = (S, s 0 , Σ k , δ , G, λ ) be a GEA . One of the basic ways to construct a factor of T is to leave the state set unaltered and replace G with a quotient group. More precisely, for a normal subgroup H < G, we can consider the quotient GEA without output T /H = (S, s 0 , Σ k , δ , G/H,λ ) with the same underlying automaton and group labels given byλ (s, j) = λ (s, j) ∈ G/H for s ∈ S, j ∈ Σ k . Thus defined GEA is a factor of T , with the factor map given by (id S , π), where π : G → G/H is the quotient map. The purpose of this section is to identify an easily verifiable criterion ensuring that the factor T /H is characteristic. As a convenient byproduct, this will allow us to mostly suppress the dependency on the dimension d from now on. In fact, it is not hard to identify the maximal normal subgroup of G such that the corresponding factor is characteristic. Let H < G be normal and let π : G → G/H denote the quotient map. For any d ≥ 0, the map π π π : Q d (T ) → Q d (T /H) is surjective and for any g ∈ Q d (T ) we have π π π −1 (π π π(g)) = gH [d] . It follows that T /H is characteristic if and only if H [d] ⊂ Q d (T ). In particular, if T /H is characteristic then Q d (T ) contains all cubes with an element of h at one vertex and id G elsewhere. In order to have convenient access to such cubes, for g ∈ G and ⃗ σ ∈ {0, 1} d put
c d ⃗ σ (h) = h ⃗ ω=⃗ σ ⃗ ω∈{0,1} d = (c ⃗ ω ) ⃗ ω∈{0,1} d where c ⃗ ω = h if ⃗ ω = ⃗ σ , id G if ⃗ ω ̸ = ⃗ σ .(83)
We also use the shorthand ⃗ 1 = (1, 1, . . . , 1) ∈ {0, 1} d , where d will always be clear from the context. This motivates us to define
K = K(T ) = h ∈ G c d ⃗ σ (h) ∈ Q d (T ) for all d ≥ 0 and ⃗ σ ∈ {0, 1} d .(84)
Since
c d ⃗ σ : G → G [d]
is a group homomorphism for each d ≥ 0 and ⃗ σ ∈ {0, 1} d , K is a group. As any cube can be written as a product of cubes with a single non-identity entry, the condition H [d] ⊂ Q d (T ) for all d ≥ 0 holds if and only if H < K. If T is an efficient group extension of an automaton then (84) and T 2 guarantee that K < G 0 .
Proposition 7.7. Let T be an efficient GEA and let H < G be a normal subgroup. Then the following conditions are equivalent:
1. T /H is a characteristic; 2. H < K(T ).
Proof. Immediate consequence of the above discussion.
We devote the remainder of this section to obtaining a description of K that is easier to work with. Fix a value of d ≥ 0 for now, and let T be a GEA. For each 1 ≤ j ≤ d + 1, there is a natural projection π j : {0, 1} d+1 → {0, 1} d which discards the j-th coordinate, that is, π j (ω 1 , ω 2 , . . . , ω j−1 , ω j , ω j+1 , . . . ω d+1 ) = (ω 1 , . . . , ω j−1 , ω j+1 , . . . , ω d+1 ) Accordingly, for each 1 ≤ j ≤ d + 1, we have the embedding ι j :
G [d] → G [d+1]
which copies the entries along the j-th coordinate, that is,
ι j (g) = g π j (⃗ ω) ⃗ ω∈{0,1} d+1 .
Lemma 7.8. Let 1 ≤ j ≤ d + 1 and let T be an efficient GEA. Then
ι j Q d (T ) ⊂ Q d+1 (T ).(85)
Proof. Let e = (l,⃗ e) : v T 0 → v T 0 be a morphism in V d (T ), and let g = λ λ λ ( e) be an element of Q d (T ). Then there is a corresponding morphism f = (l,
⃗ f ) : v T 0 → v T 0 in V d+1 (T )
obtained by inserting 0 in ⃗ e at j-th coordinate, that is,
( f 0 , f 1 , . . . , f j−1 , f j , f j+1 , .
. . , f d+1 ) = (e 0 , e 1 , . . . , e j−1 , 0, e j , . . . , e d ).
It follows directly from the definition of λ λ λ that λ λ λ ( f ) = ι j (λ λ λ ( e)). Since e was arbitrary, (85) follows.
Corollary 7.9. Let T be an efficient GEA. Then g [d] ∈ Q d (T ) for all d ≥ 0 and g ∈ G. Moreover, the group K is normal in G and contained in G 0 .
Proof. The first statement follows from Lemma 7.5. The second one follows, since
c d ⃗ σ (ghg −1 )g [d] = g [d] c d ⃗ σ (h) for all d ≥ 0, σ ∈ {0, 1} d and g, h ∈ G.
Lemma 7.10. Let T be an efficient GEA and let h ∈ G. Suppose that for each d ≥ 0 there exists
⃗ ρ = ⃗ ρ(d) ∈ {0, 1} d such that c d ⃗ ρ (h) ∈ Q d (T ). Then h ∈ K.
Proof. We need to show that c d ⃗ σ (h) ∈ Q d (T ) for each d ≥ 0 and ⃗ σ ∈ {0, 1} d . We proceed by double induction, first on d and then on |{i ≤ d | σ i ̸ = ρ i }|, where ⃗ ρ = ⃗ ρ(d). The cases d = 0 and ⃗ σ = ⃗ ρ are clear.
Suppose now that d ≥ 1 and ⃗ σ ̸ = ⃗ ρ. For the sake of notational convenience, assume further that ⃗ ρ = ⃗ 1; one can easily reduce to this case by reflecting along relevant axes. By inductive assumption (with respect to ⃗ σ ),
Q d (T ) contains c d ⃗ ω (h) for all ⃗ ω ∈ {0, 1} d with |⃗ ω| > |⃗ σ |.
Moreover, by inductive assumption (with respect to d) and as
Q d−1 (T ) is a group, we have {id, h} [d−1] ⊆ Q d−1 (T ). Consider the product g = ∏ ⃗ ω≥⃗ σ c d ⃗ ω (h) = (g ⃗ ω ) ⃗ ω∈{0,1} d where g ⃗ ω = h if ⃗ ω ≥ ⃗ σ , id G otherwise,
where the order on {0, 1} d is defined coordinatewise, meaning that ⃗ ω ≥ ⃗ σ if and only if ω j ≥ σ j for all 1 ≤ j ≤ d. It follows from Lemma 7.8 that g ∈ Q d (T ). In fact g ∈ ι j ({id, h} [d−1] ) ⊆ ι j (Q d−1 (T )) for each 1 ≤ j ≤ d such that σ j = 0. It remains to notice that all terms in the product defining g, except for c d ⃗ σ (h), are independently known to belong to Q d (T ).
The following reformulation of Lemma 7.10 above will often be convenient.
Corollary 7.11. Let T be an efficient GEA and let g, h ∈ G. Suppose that for each d ≥ 0, the group Q d (T ) contains a cube with h on one coordinate and g on all the remaining 2 d − 1 coordinates. Then g ≡ h mod K.
We are now ready to state the criterion for characteristicity of the quotient GEA in terms of the generating set.
Corollary 7.12. Let T be an efficient GEA, let X ⊂ G be any set and put H := ⟨X⟩ G be the normal closure of X. Suppose that for each h ∈ X and d ≥ 0 there exists ⃗ ρ ∈ {0, 1} d such that c d ⃗ ρ (h) ∈ Q d (T ). Then the factor T /H is characteristic.
State space reduction
In this section we consider another basic way of constructing factor maps, namely by removing redundancies in the set of states. Ultimately, we will reduce the number of states to 1 by repeatedly applying Proposition 7.7 (which simplifies the group structure and hence makes some pairs of states equivalent) and Proposition 7.14 below (which identifies equivalent states, leading to a smaller GEA). The following example shows the kind of redundancy we have in mind.
Example 7.13. Consider the base-3 analogue of the Rudin-Shapiro sequence, given by the following GEA with G = {+1, −1} and output function τ(s, g) = g (cf. Example 5.1). Motivated by the example above, for a GEA T we consider the equivalence relation ∼ of S, where s ∼ s ′ if and only if λ (s, u) = λ (s ′ , u) for all u ∈ Σ * k . Equivalently, ∼ is the minimal equivalence relation such that s ∼ s ′ implies that λ (s, j) = λ (s ′ , j) and δ (s, j) ∼ δ (s ′ , j) for all j ∈ Σ k . We define the reduced GEA T red = (S,s 0 , Σ k ,δ ,λ , G), whereS = S/∼,δ (s, j) = δ (s, j) andλ (s, j) = λ (s, j) for all s ∈ S, j ∈ Σ k . There is a natural factor map T →T given by (φ , id G ) where φ : S → S/∼ takes s ∈ S to its equivalence class. Note that if T is natural, then Lemma 7.6 guarantees that so is T red .
Proposition 7.14. Let T be an efficient GEA. Then the factor T red is characteristic.
Q d (T red ) = s,s ′ ∈S [d] 0 Q d (T )((s, 0), (s ′ , 0)).(86)
Let l be a large integer and let
⃗ f = ([w T 0 ] k , 0, . . . , 0) ∈ N d+1 0 6 . Then 1⃗ ω · ⃗ f = [w T 0 ] k for each ⃗ ω ∈ {0, 1} d , whence we have the morphism f = (l, ⃗ f ) : (s, 0) → v T 0 with λ λ λ ( f ) = id [d] G for any s ∈ S [d]
0 . It follows from Lemma 7.2, that we can take g (s,0) = id [d] G , and said Lemma guarantees that
Q d (T ) ((s, 0), (s ′ , 0)) = Q d (T ) for all s, s ′ ∈ S [d]
0 . Inserting this into (86) we conclude that Q d (T red ) = Q d (T ), meaning that T red is a characteristic factor of T .
Host-Kra cube groups
The groups Q d (T ) can be viewed as distant analogues of Host-Kra cube groups, originating from the work of these two authors in ergodic theory [HK05,HK08] (the name, in turn, originates from [GT10b]).
Let G be a group and let d ≥ 0. The Host-Kra cube group HK d (G) is the subgroup of G [d] generated by the upper face cubes g ω j =1 ⃗ ω∈{0,1} d where 1 ≤ j ≤ d and g ∈ G. If G is abelian then HK d (G) consists of the cubes g
= (g ⃗ ω ) ⃗ ω∈{0,1} d where g ⃗ ω = h 0 ∏ d j=1 h ω j j for some sequence h 0 , h 1 , . . . , h d ∈ G.
In general, let G = G 0 = G 1 ⊇ G 2 ⊇ . . . be the lower central series of G, where for each i ≥ 1 the group G i+1 is generated by the commutators ghg −1 h −1 with g ∈ G i , h ∈ G. Let also ⃗ σ 1 ,⃗ σ 2 , . . . ,⃗ σ 2 d be an ordering of {0, 1} d consistent with inclusion in the sense that if ⃗ σ i ≤ ⃗ σ j (coordinatewise) then i ≤ j. Then HK d (G) consists precisely of the cubes which can be written as g 1 g 2 . . . g 2 d where for each j there exists g j ∈ G | ⃗ σ j | such that g j = g j,⃗ ω ⃗ ω∈{0,1} d and g j,⃗ ω = g j if ⃗ ω ≥ ⃗ σ j (coordinatewise) and g j,⃗ ω = id G otherwise. The Host-Kra cube groups are usually considered for nilpotent groups G, that is, groups such that G s+1 = {id G } for some s ∈ N, called the step of G. (In fact, one can consider the Host-Kra cube groups corresponding to filtrations other than the lower central series, but these are not relevant to the discussion at hand.)
Let T be an invertible efficient GEA given by (Σ k , G, λ ). Then a direct inspection of the definition shows that Q d (T ) consists of all the cubes of the form (λ ((1⃗ ω ·⃗ e) k )) ⃗ ω∈{0,1} d where⃗ e ∈ N k 0 . In particular, 6 We recall that w T 0 is a synchronizing word for T , i.e. for any s ∈ S we have δ (s, w T 0 ) = s 0 , λ (s, w T 0 ) = id.
letting e i = 0 for i ̸ = j and taking e j ∈ N 0 such that λ ((e j ) k ) = g (whose existence is guaranteed by T 1 ) we conclude that Q d (T ) contains the upper face cube corresponding to any g ∈ G and 1 ≤ j ≤ d. Hence,
Q d (T ) ⊇ HK d (G).(87)
In fact, the cube (λ ((1⃗ ω ·⃗ e) k )) ⃗ ω∈{0,1} d belongs to HK d (G) if ⃗ e ∈ N d+1 0 has non-overlapping digits in the sense that for each m there is at most one j such that the m-th digit of (e j ) k is non-zero. Since the cube groups HK d (G) are relatively easy to describe, especially in the abelian case, one can view the indices [Q d (T ) : HK d (G)] (d ≥ 0) as a measure of complexity of T . We will ultimately reduce to the case when Q d (T ) = HK d (G).
As alluded to above, the inclusion in (87) can be strict. For instance, one can show that Q 2 (T ) = HK 2 (G) if and only if λ ((e 0 ) k )λ ((e 0 +e 1 +e 2 ) k ) ≡ λ ((e 0 +e 1 ) k )λ ((e 0 +e 2 ) k ) mod G 2 for all e 0 , e 1 , e 2 ∈ N 0 .
Suppose now, more generally, that Q d (T ) = HK d (G) for all d ≥ 0. Put G ∞ := lim n→∞ G n . It follows from Lemma 7.10 that K(T ) = G ∞ . If G is nilpotent then K(T ) = {id G } is trivial and consequently T has no proper characteristic factors. If G is not nilpotent then the factor T /G ∞ is characteristic, and one can check that Q d (T /G ∞ ) = HK d (G/G ∞ ). In particular, iterating this reasoning we see that if Q d (T ) = HK d (G) then T has a characteristic factor given by (Σ k ,Ḡ,λ ) where G is a nilpotent group. In fact, this is only possible if G is a cyclic group, as shown by the following lemma. Since its importance is purely as a motivation and we do not use it in the proof of our main results, we only provide a sketch of the proof.
Lemma 7.15. Let T be an invertible efficient GEA given by (Σ k , G, λ ). Assume further that G is nilpotent and Q d (T ) = HK d (G) for all d ≥ 0. Then G is a subgroup of Z/(k − 1)Z and λ ((n) k ) = λ (1) n for all n ∈ Σ k .
Sketch of a proof. Let s be the step of G so that G s+1 = {id G }, and for ease of writing identify λ with a map λ : N 0 → G. Since λ λ λ = λ [d] maps parallelepipeds of the form (1⃗ ω ·⃗ e) ⃗ ω∈{0,1} d for ⃗ e ∈ N d+1 0 to Q d (T ) = HK d (G), the sequence λ is a polynomial with respect to the lower central series (see e.g. [GT12, Def. 1.8 and Prop. 6.5 ] for the relevant definition of a polynomial sequence). It follows [GT10a, Lem. A.1] that there exist g i ∈ G i for 0 ≤ i ≤ s such that λ (n) = g 0 g n 1 g
( n 2 ) 2 . . . g ( n s ) s , (n ∈ N 0 ).(88)
Moreover, g i are uniquely determined by the sequence λ . Note also that g 0 = id G since λ (0) = id G . We will show that g i = id G for all i ≥ 2. In fact, we will show by induction on r that g 2 , g 3 , . . . , g r ∈ G r+1 for each r ≥ 1 (the case r = 1 being vacuously true). Pick r ≥ 2 and assume that g 2 , g 3 , . . . , g r ∈ G r . We will work modulo G r+1 , which means that (the projections of) all of g 1 , g 2 , . . . , g r commute: g i g j G r+1 = g j g i G r+1 . It follows directly from how the sequence λ is computed by T that for any m ≥ 0 and any I ⊂ N 0 with |I| = m we have λ ∑ l∈I k l = λ ([10 j 1 10 j 2 . . . 10 j l ] k ) = λ (1) m = g m 1 ,
for some j 1 , . . . , j r ≥ 0. Let J = {l 1 , . . . , l r } be any set of cardinality |J| = r. Substituting (88) in (89) and taking the oscillating product over all subsets I ⊂ J we conclude that
g k l 1 k l 2 ·····k lr r ≡ ∏ I⊂J λ ∑ l∈I k l (−1) |I| ≡ id G (mod G r+1 ),(90)
meaning that the order of g r in G/G r+1 divides a power of k: g k Lr r ∈ G r+1 for some L r ≥ 0. (Equation (90) can be verified by a direct computation, relying on the fact that the finite difference operator reduces the degree of any polynomial by 1.)Reasoning inductively, we show that for each j = r − 1, r − 2, . . . , 2 there exists L j ≥ 0 such that g k L j j ∈ G r+1 : towards this end, it is enough to repeat the same computation as above with |J| = j and min J ≥ max(L j+1 , . . . , L r ). In particular, there exists L * ≥ 0 such that for all n ≥ 0 divisible by L * we have
λ (n) = g n 1 g ( n 2 ) 2 . . . g ( n s ) s ≡ g n 1 mod G r+1 .(91)
Next, recall that from how λ is computed by T it also follows that λ is invariant under dilation by k in the sense that for any n ≥ 0 and any l ≥ 0 we have
λ nk l = λ (n).(92)
Taking l ≥ L * and combining (88), (91) and (92), for any n ≥ 0 we obtain g k l n 1 ≡ λ (k l n) = λ (n) = g n 1 g
( n 2 ) 2 . . . g ( n s ) s mod G r+1 .(93)
Since the representation of the sequence λ in the form (88) is unique, it follows that g r ≡ g r−1 ≡ · · · ≡ g 2 ≡ id G mod G r+1 , which finishes this part of the argument.
We have now shown that g 2 = g 3 = · · · = g s = id G . It remains to notice that since g k 1 = λ (k) = λ (1) = g 1 and λ : N 0 → G is surjective, the group G is cyclic and |G| | k − 1.
As suggested by the above lemma, group extensions of automata which arise from cyclic groups will play an important role in our considerations. Let k ≥ 2 denote the basis, which we view as fixed. For m ≥ 1 define the invertible GEA
Z(m) := (Σ k , Z/mZ, λ m ) , λ m : Σ k ∋ j → j mod m ∈ Z/mZ.(94)
We will primarily be interested in the case when m | k − 1. 1. There exists a pair of distinct mistakable states s, s ′ ∈ S.
2. There exists a pair of distinct strongly mistakable states s, s ′ ∈ S.
3. There exist infinitely many words in Σ * k which are not synchronising for A.
Proof. As any pair of strongly mistakable states is mistakable, (2) implies (1). Moreover, as we have remarked above, (1) implies (3).
In the reverse direction, (3) implies (1): indeed, if (3) holds, then there exist infinitely many words u i ∈ Σ * k (i ∈ N) with corresponding quadruples r i , r ′ i , s i , s ′ i ∈ S such that s i ̸ = s ′ i and δ (r i , u i ) = s i , δ (r ′ i , u i ) = s ′ i . Any pair s, s ′ ∈ S such that s = s i and s ′ = s ′ i for infinitely many values of i is mistakable, so (1) holds. It remains to show that (1) implies (2). By definition, it follows from (1) that there exists a word u = u 1 u 2 . . . u l ∈ Σ * k with |u| = l ≥ |S| 2 and states r, r ′ , s, s ′ ∈ S with s ̸ = s ′ such that δ (r, u) = s and δ (r ′ , u) = s ′ . For 0 ≤ i ≤ l, let s i and s ′ i be the states reached form r and r ′ respectively after reading the first i digits of u. More precisely, s i , s ′ i are given by s 0 = r, s ′ 0 = r ′ and s i = δ (s i−1 , u i ), s ′ i = δ (s ′ i−1 , u i ) for all 1 ≤ i ≤ l. Note that since s l ̸ = s ′ l we have more generally s i ̸ = s ′ i for all 0 ≤ i ≤ l. By the pigdeonhole principle, there exists a pair of indices 0 ≤ i < j ≤ l and a pair of states t,t ′ such that s i = s j = t and s ′ i = s ′ j = t ′ . Put v = u i+1 u i+2 . . . u j so that δ (t, v) = t and δ (t ′ , v) = t ′ . Finally, put w = v |G| so that δ (t, w) = t and δ (t ′ , w) = t ′ and by the Lagrange's theorem we have λ (t, w) = λ (t, v) |G| = id G and likewise λ (t ′ , w) = id G . It follows that t,t ′ are strongly mistakable.
Proposition 7.19. Let T be an efficient GEA. Then T has a characteristic factorT such that every sufficiently long word is synchronizing for the underlying automaton.
The proof of Proposition 7.19 proceeds by iterating the following lemma.
Lemma 7.20. Let T be an efficient GEA and let H < G be given by H = λ (s, u) −1 λ (s ′ , u) : s and s ′ are strongly mistakable, u ∈ Σ * k G .
(95)
ThenT /H is a characteristic factor of T .
Proof. Recall from Section 7.3 that it will suffice to verify that H < K = K(T ). Let h be one of the generators of H in (95). Pick a pair of strongly mistakable states s, s ′ ∈ S and a word u ∈ Σ * k such that h = λ (s, u) −1 λ (s ′ , u). Replacing u with uw T 0 , where w T 0 is a synchronizing word of T , we may assume without loss of generality that u synchronises the underlying automaton to s 0 , so in particular δ (s, u) = δ (s ′ , u) = s 0 .
In order to construct the relevant morphism (l,⃗ e) : v T 0 → v T 0 , we first need to specify several auxiliary words with certain helpful properties, described by the diagram below. Let w be a word such that δ (s 0 , w) = s and λ (s 0 , w) = id G , whose existence is guaranteed by property T 1 . Let v 1 be a word such that δ (s, v 1 ) = s, δ (s ′ , v 1 ) = s ′ , and λ (s, v 1 ) = λ (s ′ , v 1 ) = id G , which exists because s, s ′ are strongly mistakable. Lastly, let v 0 be a word such that δ (s, v 0 ) = δ (s ′ , v 0 ) = s ′ and λ (s ′ , v 0 ) = λ (s ′ , v 0 ) = id G .
One can obtain such a word by concatenating w T 0 with a word taking s 0 to s ′ with identity group label, whose existence is guaranteed by property T 1 .
s 0 s s ′ w/id G v 1 /id G v 1 /id G v 0 /id G 0/id G v 0 /id G
We may additionally assume that the words v 1 and v 0 have the same length m; otherwise we can replace them with v |v 1 | 0 and v |v 2 | 1 respectively. Note that v 0 ̸ = v 1 since s ̸ = s ′ . Assume for concreteness that
[v 0 ] k < [v 1 ] k ; the argument in the case [v 0 ] k > [v 1 ] k is analogous. Let v = ([v 1 ] k − [v 0 ] k ) m
k be the result of subtracting v 0 from v 1 . Put also l = |w| + dm + |u|. We are now ready to define the coordinates e i , which are given by
e 0 = [w v 0 v 0 . . . v 0 d times u] k ; e j = [v 0 m 0 m . . . 0 m d − j times 0 |u| ] k (0 < j ≤ d).
This definition is set up so that for each ⃗ ω ∈ {0, 1} d we have
1⃗ ω ·⃗ e = [wv ω 1 v ω 2 . . . v ω d u] k .
Since u synchronises the underlying automaton of T to s 0 and 1⃗ ω ·⃗ e < k l for each ⃗ ω ∈ {0, 1} d , it follows directly from (53) that we have a morphism e = (l,⃗ e) : v T 0 → v T 0 , and so λ λ λ ( e) ∈ Q d (T ). Our next step is to compute λ λ λ ( e).
It follows directly from the properties of w, v 0 and v 1 listed above that δ (s 0 , wv ω 1 v ω 2 . . . v ω j ) = s, if ω 1 = ω 2 = · · · = ω j = 1, s ′ , otherwise.
for any ⃗ ω ∈ {0, 1} d and 0 ≤ j ≤ d (the case j = 0 corresponds to δ (s 0 , w) = s). Hence, for any ⃗ ω ∈ {0, 1} d different from ⃗ 1 we have
λ (s 0 , (1ω ·⃗ e) l k ) = λ (s 0 , w)λ (s, v 1 ) j−1 λ (s, v 0 )λ (s ′ , v ω j+1 ) . . . λ (s ′ , v ω d )λ (s ′ , u) = λ (s ′ , u),
where j is the first index with ω j = 0. For ⃗ ω = ⃗ 1 we obtain a similar formula, which simplifies to λ (s 0 , ( ⃗ 1 ·⃗ e) l k ) = λ (s, u).
Since d ≥ 0 was arbitrary, it follows from Corollary 7.11 that λ (s, u) ≡ λ (s ′ , u) mod K, and consequently H < K, as needed.
Proof of Proposition 7.19. Let T ′ := (T /H) red , where H = H(T ) is given by (95). Recall that T ′ is efficient by Lemma 7.6. Note that either 1. T ′ is a proper factor of T ; or 2. all sufficiently long words synchronise the underlying automaton of T .
Indeed, if (2) does not hold then it follows from Lemma 7.18 that there exists a pair of distinct strongly mistakable states s, s ′ ∈ S. The definition of H guarantees that the images of those states in T /H give rise to the same label maps:λ (s, u) =λ (s ′ , u) for all u ∈ Σ * k . It follows that s and s ′ are mapped to the same state in (T /H) red . In particular, (T /H) red has strictly fewer states than T .
Iterating the construction described above, we obtain a sequence of characteristic factors
T ′ → T ′′ → · · · → T (n) → T (n+1) → . . . ,
where T (n+1) = T (n) ′ = T (n) /H(T (n) ) red for each n ≥ 0. Since all objects under consideration are finite, this sequence needs to stabilise at some point, meaning that there exists n ≥ 0 such that T (n) = T (n+1) = · · · :=T . SinceT ′ =T , it follows from the discussion above that all sufficiently long words are synchronising for the underlying automaton ofT . By Lemma 7.20,T is a characteristic factor of T .
Example 7.21. Consider the GEA described by the following diagram, where g, h ∈ G are two distinct group elements. The word 0 is synchronising for the GEA and no word in {1, 2} * is synchronising for the underlying automaton. The states s 1 and s 2 are strongly mistakable and the loops are given by 1 m where m is any common multiple of the orders of g and h. The group H in Lemma 7.20 is generated by gh −1 and its conjugates, and the GEA T ′ =T in the proof of Proposition 7.19 is obtained by collapsing s 1 and s 2 into a single state.
Invertible factors
In this section we further reduce the number of states of the GEA under consideration. In fact, we show that it is enough to consider GEA with just a single state. Recall that such GEAs with one states are called invertible. where x ⃗ ω = (k M − d + |⃗ ω|) M k ∈ Σ M k . Since for each ⃗ ω ∈ {0, 1} d the word (1⃗ ω ·⃗ e) l k ends with w T 0 and (1⃗ ω ·⃗ e) k < k L , the data constructed above describes a morphism e = (l,⃗ e) : v T 0 → v T 0 . s s 0 start
s 1 s ′ 1 v/id u/λ (s, u) u ′ /λ (s, u ′ ) x ⃗ ω w T 0 /λ (s ′ 1 , x ⃗ ω ) 0 M w T 0 /id
Our next step is to compute λ λ λ ( e). In fact, we only need some basic facts rather than a complete description. For ⃗ ω ̸ = 1 d we have
λ s 0 , (1⃗ ω ·⃗ e) l k ) = λ (s 0 , v)λ (s, u ′ )λ (δ (s, u ′ ), x ⃗ ω )λ (δ (s, u ′ x ⃗ ω ), w T 0 ) = λ (s 0 , u ′ )λ (s ′ 1 , x ⃗ ω )
, where the state s ′ 1 = δ (s, u ′ ) is independent of s because u ′ is synchronising for A, and λ (s, u ′ ) = λ (s 0 , u ′ ) because T is (N, L)-nondiscriminating. Similarly, λ s 0 , ( ⃗ 1 ·⃗ e) l k ) = λ (s, u0 M ) = λ (s, u).
Note that out of all the coordinates of λ λ λ ( e), only one depends on s. Let s ′ ∈ S be any other state, and let e ′ : v T 0 → v T 0 be the result of applying the same construction as above with s ′ in place of s. Then λ λ λ ( e)λ λ λ ( e ′ ) −1 = c d ⃗ 1 λ (s, u)λ (s ′ , u) −1 ∈ Q d (T ).
Since d ≥ 0 was arbitrary, it follows from Lemma 7.10 that λ (s, u) ≡ λ (s ′ , u) mod K. Since s, s ′ ∈ S were arbitrary, H < K and hence T /H is a characteristic factor. LetT = T /H. ThenT is (N, L)-nondiscriminating because T is. Moreover, it follows directly from the definition of H thatλ (s, u) =λ (s ′ , u) for all s, s ′ ∈ S, whenceT is (N + 1, L)-nondiscriminating.
Proof of Proposition 7.22. Let L ≥ 0 be large enough that all words of length ≥ L are synchronising for A. Applying Lemma 7.24 we can construct a sequence of characteristic factors T = T 0 → T 1 → · · · → T k L such that for each 0 ≤ N ≤ k L the GEA T N is (N, L)-nondiscriminating. In particular, T has a characteristic factorT = T k L which is (k L , L)-nondiscriminating. Hence,T ′ is (N, M)-nondiscriminating for all N, M ≥ 0 by Lemma 7.23. Next, it follows directly from the construction thatT red is invertible. It remains to recall thatT red is a characteristic factor of T by Lemma 7.14.
Example 7.25. Consider the GEA described by the following diagram. Then each of the first three applications of Lemma 7.24 removes one of the group labels g i . s 0 s 1 0/id 1/id 2/id 3/id 0/id 1/g 1 2/g 2 3/g 3
Invertible group extensions of automata
In this section we deal exclusively with invertible group extensions of automata. As pointed out in Section 5.1, an invertible GEA can be identified with a triple (Σ k , G, λ ) where λ : Σ k → G is a labelling map. By a slight abuse of notation we identify λ with a map N 0 → G, denoted with the same symbol, λ (n) = λ ((n) k ). Recall that the cyclic group extensions of automata Z(m) were defined in Section 7.5.
Proposition 7.26. Let T be an invertible efficient group extension of a k-automaton. Then T has a characteristic factor of the form Z(m) for some m which divides k − 1.
Proof. Following the usual strategy (cf. Propositions 7.19 and 7.22), we will consider the normal subgroup of G given by H = λ (n + 1)λ (1) −1 λ (n) −1 : n ≥ 0 G .
A simple inductive argument shows that λ (n) ≡ λ (1) n mod H for all n ≥ 0, and in fact H is the normal subgroup of G generated by λ (n)λ (1) −n for n ≥ 0. In particular, G/H is cyclic. We will show that the factor T /H is characteristic. Fix d ≥ 0, take any n ≥ 0. Let t = |G| so that g t = id G for all g ∈ G. Consider the vector ⃗ e ∈ N d+1 0 given by e 0 = nk td + 1; e j = (k t − 1)k (d− j)t (1 ≤ j ≤ d).
Put also l = |(n) k | + td + 1 so that 1⃗ ω ·⃗ e < k l for all ⃗ ω ∈ {0, 1} d and hence we have a morphism e = (l,⃗ e) : v T 0 → v T 0 . We next compute λ λ λ ( e). If ⃗ ω ∈ {0, 1} d \ { ⃗ 1} and 0 ≤ j ≤ d be the largest index such that ω j = 0, then (1⃗ ω ·⃗ e) l k = 0(n) k v ω 1 v ω 2 . . . v ω j−1 0 t−1 10 t(d− j) ,
where v 1 = (k t − 1) k ∈ Σ t k and v 0 = 0 t ∈ Σ t k . Since λ (v 0 ) = λ (v 1 ) = id G , we have λ (1⃗ ω ·⃗ e) l k = λ (n)λ (1).
By a similar reasoning, λ ( ⃗ 1 ·⃗ e) l k = λ (n + 1) .
Since d ≥ 0 was arbitrary, it follows by Corollary 7.11 that λ (n + 1) ≡ λ (n)λ (1) mod K. Since n was arbitrary, H < K and T /H = (Σ k , G/H,λ ) is characteristic. Let m denote the order the cyclic group G/H. Becauseλ (n) =λ (1) n for all n ≥ 0, T /H is isomorphic to Z(m), and because λ (1) = λ (k) ≡ λ (1) k mod H, m is a divisor of k − 1.
The end of the chase
In this section we finish the proof of the main result of this section. This task is virtually finished -we just need to combine the ingredients obtained previously.
Proof of Theorem 6.11. Chaining together Propositions 7.19, 7.22 and 7.26 we conclude that the efficient GEA T has a characteristic factor of the form Z(m) with m | k − 1. By Lemma 7.16 it follows that m = d ′ T .
Clemens Müllner
Institut für Diskrete Mathematik und Geometrie TU Wien Wiedner Hauptstr. 8-10 1040 Wien, Austria clemens muellner tuwien ac at
Example 1 . 3 .
13Let a : N 0 → R be the 2-automatic sequence computed by the following automaton. DISCRETE ANALYSIS, 2023:4, 62pp.
Lemma 4 . 6 .
46Fix d ≥ 2. Let f 0 , f 1 , . . . , f d : [N] → C be 1-bounded sequences and let P ⊂ [N] be an arithmetic progression. Then
E
00 is the initial state, an edge labelled j from s to s ′ is present if δ (s, j) = s ′ and the output function is given by τ(s 00 ) = τ(s 01 ) = +1 and τ(s 10 ) = τ(s 11 ) = −1. Alternatively, r is produced by the GEAO with group G = {+1, −1},
Example 5. 2 .
2Recall the sequence a(n) defined in Example 1.2. It is produced by the GEAO with group G = {+1, −1}, use the same conventions as in Example 5.1 above and the output is τ(s 0,2 , +1) = 4, τ(s 0,2 , −1) = 2, τ(s 1,3 , +1) = 1, τ(s 1,3 , −1) = 1.
an idempotent k-automaton. Let m [n 0 ] be the smallest possible cardinality of a set {δ (s, w) | s ∈ S} with w ∈ Σ * k . The states of the GEAOŜ ⊂ S m [S ⊂ (S ′ ) n 0 ] consist of ordered m-tuples of distinct statesŝ = (s 1 , s 2 , . . . , s m ) of A, no two of which contain the same set of entries. The transition function is defined by the condition that forŝ = (s 1 , . . . , s m ) ∈Ŝ and j ∈ Σ k the entries ofδ (ŝ, j) are, up to rearrangement, δ (s 1 , j), . . . , δ (s m , j). The initial state is any m-tupleŝ 0 = (s 0,1 , . . . , s 0,m ) ∈Ŝ with s 0,1 = s 0 . The group G [∆] consists of permutations of {1, 2, . . . , m}, G ⊂ Sym(m). The group labels are chosen so that forŝ =(s 1 , .. . , s m ) ∈Ŝ and j ∈ Σ k the label g = λ (ŝ, j) is the unique permutation such that δ (ŝ, j) = δ (s g(1) , j), . . . , δ (s g(m) , j) .Hence, δ (s 1 , j), . . . , δ (s m , j) can be recovered by permuting the entries ofδ (ŝ, j) according to λ (ŝ, j) [Mül17, Lem. 2.4]. More generally, for all u ∈ Σ * k we have (δ (s 1 , u), . . . , δ (s m , u)) = λ (ŝ, u) ·δ (ŝ, u),
be arbitrary. Writing ⃗ n as above in the definition of A(v; L), and letting s ′ ∈ S [d] and r ′ ∈ N
Lemma 6 . 5 .
65Let e ′′ : v ′′ → v be a morphism and let 0 ≤ l ′ ≤ deg( e ′′ ) be an integer. Then there exist unique morphisms e ′ and e with e ′′ = e ′ • e and deg( e ′ ) = l ′ . Proof. Put v ′′ = (s ′′ , r ′′ ), v = (s, r), l = deg( e ′′ ) − l ′ and e ′′ = (l ′ + l ′′ , ⃗ e ′′ ). Then there exists a unique decompositon ⃗
nonempty subset J ⊂ I we have ρ(W [J]) ≤ ρ(W ), and the inequality is strict if W is irreducible and J ̸ = I [Min88, Cor. II.2.1 & II.2.2]. We call a class J ⊂ I basic if ρ(W [J]) = ρ(W ), and nonbasic otherwise.
be the eigenvector of W [J] with eigenvalue 1. Let K be the union of all the classes of W dominated by J except for J itself. By Step 1 all the classes in K are nonbasic, and the subspace V [K] is W -invariant. The spectral radius of the matrix W [K] is equal to the maximum of the spectral radii of W [J ′ ] taken over all the classes J ′ ⊂ K, and hence ρ(W [K]) < 1.
The states s 1 and s 2 serve the same purpose and can be identified, leading to a smaller GEA:
Proof. Pick any d ≥ 0. Let S 0 = {s ∈ S | s ∼ s 0 } be the equivalence class of s 0 . Any morphism e = (l,⃗ e) :v 0 →v 0 in T red can be lifted to a morphism (l,⃗ e) : (s, 0) → (s ′ , 0) in T , where s, s ′ ∈ S any morphism (l,⃗ e) : (s, 0) → (s ′ , 0) in T with s, s ′ ∈ S [d] 0 gives rise to the corresponding morphism (l,⃗ e) :v 0 →v 0 . Hence,
Lemma 7 . 16 .
716Fix k ≥ 2 and let m, m ′ ≥ 1 and let T be an efficient group extension of a k-automaton. 1. If m | k − 1 then the GEA Z(m) is efficient, λ m (u) = [u] k mod m for all u ∈ Σ * k , and Q d (Z(m)) = HK d (Z/mZ). 2. If m, m ′ | k − 1 then Z(m) is a factor of Z(m ′ ) if and only if m | m ′ . The factor is not characteristic unless m = m ′ .
We equip [N] with the normalised counting measure, whence ∥ f ∥ L p ([N]) = ( En<N | f (n)| p ) 1/p . The following bound is a consequence of Young's inequality (see e.g. [ET12] for a derivation). Proposition 2.2. Let d ≥ 1 and p d = 2 d /(d + 1). Then ∥ f ∥ U d [N] ≪ ∥ f ∥ L p d ([N]) for any f : [N] → C.
we conclude that we may replace the function 1 A∩[N] under the average with 1 [N] a str at the cost of introducing a small error term:
This construction was called a (naturally induced) transducer in[Mül17], but this name seems better suited here. One main motivation for this name is the fact that this construction corresponds to a group extension for the related dynamical systems, as was shown in[LM18].
It is not common to require a synchronizing word to a specific state, but this will not be a serious restriction for this paper.
We note that i always dominates itself via the empty path.
DISCRETE ANALYSIS, 2023:4, 62pp.
AcknowledgmentsThe authors thank the anonymous reviewers for their careful reading of the paper and the feedback.Proof.1. Each of the defining properties of an efficient GEA can be verified directly (we take d ′ 0 = 1 and G 0 = G).2. This easily follows from the fact that Z/mZ is a subgroup of Z/m ′ Z if and only if m | m ′ .3. Suppose first that Z(m) is a factor of T and the factor map is given by (φ , π). Then for any w ∈ Σ * k with δ (s 0 , w) = s 0 and λ (s 0 , w) = id G we haveHence, by property T 2 , m | d ′ . In the opposite direction, property T 2 guarantees that Z(d ′ ) is a factor of T , with the group homomorphism given by g r(three coordinates of g determine the projection of the fourth to Z/d ′ T Z). On the other hand, since Z(m) is characteristic, we have p = 1/m. It follows that m ≥ d ′ T .We are now ready to reformulate our description of the cube groups Q d (T ) in Theorem (6.11) in a more succinct way using the language of characteristic factors. Equivalence of the said theorem and the following result is easily seen once one unwinds the definitions.Theorem 7.17. Let T be an efficient GEA. Then Z(d ′ T ) is a characteristic factor of T .Strong synchronisationRecall that efficient GEA are built on automata that are synchronising. A stronger synchronisation property is enjoyed, for example, by the GEA producing the Rudin-Shapiro sequence discussed in Example 5.1: all sufficiently long words are synchronising for the underlying automaton (in fact, all nonempty words have this property). In this section we show that, passing to a characteristic factor, we can ensure this stronger synchronisation property for the underlying automata in general. Let T be a GEA. For the purposes of this section, we will say that a pair of states s, s ′ ∈ S is mistakable if for every length l there exists a word u ∈ Σ * k with |u| ≥ l and two states r, r ′ ∈ S such that δ (r, u) = s and δ (r ′ , u) = s ′ . Note that in this situation u cannot be a synchronising word for the underlying automaton unless s = s ′ . We will also say that the pair s, s ′ ∈ S is strongly mistakable if there exists a nonempty word w ∈ Σ * k \ {ε} such that δ (s, w) = s and δ (s ′ , w) = s ′ , while λ (s, w) = λ (s ′ , w) = id G . As the terminology suggests, if s, s ′ are strongly mistakable then they are also mistakable (we may take u = w l and r = s, r ′ = s ′ ). The following lemma elucidates the connection between mistakable states and synchronisation.Lemma 7.18. Let T be a natural tranducer and let A be the underlying automaton. Then the following properties are equivalent:Proof. It is clear that the property of being (N, L)-nondiscriminating becomes stronger as N increases. The values of N above k L will be mostly irrelevant: if T is (k L , L)-nondiscriminating then it is immediate that it is (N, k L )-nondiscriminating for all N ≥ 0. By assumption, T is (k L , L)-nondiscriminating for at least one L ≥ 1. Let L denote the set of all L ≥ 0 with the aforementioned property (in particular, 0 ∈ L).If L 1 , L 2 ∈ L then also L 1 + L 2 ∈ L. Indeed, any u ∈ Σ L 1 +L 2 k can be written as u = u 1 u 2 with u 1 ∈ Σ L 1 k and u 2 ∈ Σ L 2 k , whence for any s, s ′ ∈ S we have λ (s, u) = λ (s 0 , u 1 )λ (s 0 , u 2 ) = λ (s ′ , u). Moreover, if L ∈ L and L ̸ = 0 then L − 1 ∈ L. Indeed, if u ∈ Σ L−1 k then for any s, s ′ ∈ S we have λ (s, u) = λ (s 0 , u0) = λ (s ′ , u). It remains to note that the only set L ⊂ N 0 with all of the properties listed above is N 0 .Lemma 7.24. 
Let T be an efficient group extension of a k-automaton, let A be the underlying automaton and 0 < N < k L . Suppose that every word in Σ L k is synchronising for A and that T is (N, L)nondiscriminating. Then T has a characteristic factor T ′ which is (N + 1, L)-nondiscriminating.Proof. Following a strategy similar to the one employed in the proof of Proposition 7.19, let u = (N) L k and consider the normal subgroup of G given byWe aim to use Proposition 7.7 to show that T /H is a characteristic factor of T . Fix for now the dimension d ≥ 0 and an integer M such that k M > d. Pick s ∈ S and a word v such that δ (s 0 , v) = s and λ (s 0 , v) = id G , whose existence is guaranteed by property T 1 . We recall that w T 0 denotes a word that synchronizes T to s 0 . Consider ⃗ e ∈ N d+1 0 given byPut also l := |v| + L + M + w T 0 and let u ′ := (N − 1) L k . These definitions are arranged so that for each ⃗ ω ∈ {0, 1} d the word (1⃗ ω ·⃗ e) l k takes the formAUTHORS
It will be convenient to say for any N, L ≥ 0 that a GEA T is (N, L)-nondiscriminating if λ (s, u) = λ (s ′ , u) for all s, s ′ ∈ S and all u ∈ Σ L k such that [u] k < N. In particular, any GEA T is vacuously (0, L)-nondiscriminating for all L ≥ 0, and if T is additionally efficient then it is (1, L)-nondiscriminating for all L ≥ 0 (recall that efficiency implies that λ (s, 0) = id G for all s ∈ S). Then T has an invertible characteristic factor. Our proximate goal on the path to prove Proposition 7.22 is to find a characteristic factor that is (N, L)-nondiscriminating for all N, L ≥ 0. Indeed, note that any invertible GEA is (N, L)-nondiscriminating for all N, L ≥ 0. Conversely, as we will shortly see, a GEA that is (N, L)-nondiscriminating for all N, L ≥ 0 can be reduced to an invertible GEA by removing redundant statesProposition 7.22. Let T be an efficient GEA such that all sufficiently long words are synchronising for the underlying automaton. Then T has an invertible characteristic factor. It will be convenient to say for any N, L ≥ 0 that a GEA T is (N, L)-nondiscriminating if λ (s, u) = λ (s ′ , u) for all s, s ′ ∈ S and all u ∈ Σ L k such that [u] k < N. In particular, any GEA T is vacuously (0, L)- nondiscriminating for all L ≥ 0, and if T is additionally efficient then it is (1, L)-nondiscriminating for all L ≥ 0 (recall that efficiency implies that λ (s, 0) = id G for all s ∈ S). Our proximate goal on the path to prove Proposition 7.22 is to find a characteristic factor that is (N, L)-nondiscriminating for all N, L ≥ 0. Indeed, note that any invertible GEA is (N, L)-nondiscriminating for all N, L ≥ 0. Conversely, as we will shortly see, a GEA that is (N, L)-nondiscriminating for all N, L ≥ 0 can be reduced to an invertible GEA by removing redundant states.
Let T be an efficient group extension of a k-automaton. Lemma, Suppose that there exist L ≥ 1 and N ≥ k L such that T is (N, L)-nondiscriminating. Then T is (N, L)-nondiscriminating for all N, L ≥ 0. ReferencesLemma 7.23. Let T be an efficient group extension of a k-automaton. Suppose that there exist L ≥ 1 and N ≥ k L such that T is (N, L)-nondiscriminating. Then T is (N, L)-nondiscriminating for all N, L ≥ 0. References
(logarithmic) densities for automatic sequences along primes and squares. Boris Adamczewski, Michael Drmota, and Clemens Müllner.Boris Adamczewski, Michael Drmota, and Clemens Müllner. (logarithmic) densities for automatic sequences along primes and squares. 5
Automatic sequences. Jean- , Paul Allouche, Jeffrey Shallit, Cambridge University Press1112CambridgeJean-Paul Allouche and Jeffrey Shallit. Automatic sequences. Cambridge University Press, Cambridge, 2003. 11, 12
Multiple recurrence and nilsequences. Vitaly Bergelson, Bernard Host, Bryna Kra, Invent. Math. 1602With an appendix by Imre RuzsaVitaly Bergelson, Bernard Host, and Bryna Kra. Multiple recurrence and nilsequences. Invent. Math., 160(2):261-303, 2005. With an appendix by Imre Ruzsa. 6
Automatic sequences and generalised polynomials. Jakub Byszewski, Jakub Konieczny, Canadian Journal of Mathematics. To appear. 24Jakub Byszewski and Jakub Konieczny. Automatic sequences and generalised polynomials. Canadian Journal of Mathematics, 2019. To appear. 24
A density version of Cobham's Theorem. To appear in Acta Arithmetica. Jakub Byszewski, Jakub Konieczny, arXiv:1710.07261[math.CO].13Jakub Byszewski and Jakub Konieczny. A density version of Cobham's Theorem. To appear in Acta Arithmetica, 2019+. arXiv: 1710.07261 [math.CO]. 13
Rationally almost periodic sequences, polynomial multiple recurrence and symbolic dynamics. Vitaly Bergelson, Joanna Kułaga-Przymus, Mariusz Lemańczyk, Florian K Richter, 315Vitaly Bergelson, Joanna Kułaga-Przymus, Mariusz Lemańczyk, and Florian K. Richter. Ra- tionally almost periodic sequences, polynomial multiple recurrence and symbolic dynamics, 2016. 3, 15
Squarefree numbers, IP sets and ergodic theory. V Bergelson, I Ruzsa, Paul Erdős and his mathematics. Budapest11János Bolyai Math. Soc.V. Bergelson and I. Ruzsa. Squarefree numbers, IP sets and ergodic theory. In Paul Erdős and his mathematics, I (Budapest, 1999), volume 11 of Bolyai Soc. Math. Stud., pages 147-160. János Bolyai Math. Soc., Budapest, 2002. 3
Automatic sequences generated by synchronizing automata fulfill the Sarnak conjecture. Jean-Marc Deshouillers, Michael Drmota, Clemens Müllner, Studia Math. 231115Jean-Marc Deshouillers, Michael Drmota, and Clemens Müllner. Automatic sequences generated by synchronizing automata fulfill the Sarnak conjecture. Studia Math., 231(1):83- 95, 2015. 15
Generalized Thue-Morse sequences of squares. Michael Drmota, Johannes F Morgenbesser, Israel J. Math. 19012Michael Drmota and Johannes F. Morgenbesser. Generalized Thue-Morse sequences of squares. Israel J. Math., 190:157-193, 2012. 12
The sum-of-digits function of polynomial sequences. Michael Drmota, Christian Mauduit, Joël Rivat, J. Lond. Math. Soc. 841Michael Drmota, Christian Mauduit, and Joël Rivat. The sum-of-digits function of polyno- mial sequences. J. Lond. Math. Soc., 84(1):81-102, 2011. 2
The Thue-Morse sequence along squares is normal. Michael Drmota, Christian Mauduit, Joël Rivat, Preprint. 25Michael Drmota, Christian Mauduit, and Joël Rivat. The Thue-Morse sequence along squares is normal, 2013. Preprint. 2, 5
Automatic sequences as good weights for ergodic theorems. Tanja Eisner, Jakub Konieczny, Discrete Contin. Dyn. Syst. 388Tanja Eisner and Jakub Konieczny. Automatic sequences as good weights for ergodic theorems. Discrete Contin. Dyn. Syst., 38(8):4087-4115, 2018. 2
Large values of the Gowers-Host-Kra seminorms. Tanja Eisner, Terence Tao, J. Anal. Math. 1178Tanja Eisner and Terence Tao. Large values of the Gowers-Host-Kra seminorms. J. Anal. Math., 117:133-186, 2012. 8
Representation theory. William Fulton, Joe Harris, Graduate Texts in Mathematics. 12930Springer-VerlagA first course, Readings in MathematicsWilliam Fulton and Joe Harris. Representation theory, volume 129 of Graduate Texts in Mathematics. Springer-Verlag, New York, 1991. A first course, Readings in Mathematics. 30
Higher order Fourier analysis of multiplicative functions and applications. Nikos Frantzikinakis, Bernard Host, J. Amer. Math. Soc. 301Nikos Frantzikinakis and Bernard Host. Higher order Fourier analysis of multiplicative functions and applications. J. Amer. Math. Soc., 30(1):67-157, 2017. 2
On uniformity of q-multiplicative sequences. Aihua Fan, Jakub Konieczny, Bulletin of the London Mathematical Society. 6Aihua Fan and Jakub Konieczny. On uniformity of q-multiplicative sequences. Bulletin of the London Mathematical Society, 2019. 6
An inverse theorem for Gowers norms of trace functions over F p. Étienne Fouvry, Emmanuel Kowalski, Philippe Michel, Math. Proc. Cambridge Philos. Soc. 1552Étienne Fouvry, Emmanuel Kowalski, and Philippe Michel. An inverse theorem for Gowers norms of trace functions over F p . Math. Proc. Cambridge Philos. Soc., 155(2):277-295, 2013. 6
Gel'fond. Sur les nombres qui ont des propriétés additives et multiplicatives données. A O , Acta Arith. 132A. O. Gel'fond. Sur les nombres qui ont des propriétés additives et multiplicatives données. Acta Arith., 13:259-265, 1967/1968. 2
A new proof of Szemerédi's theorem. W T Gowers, Geom. Funct. Anal. 113W. T. Gowers. A new proof of Szemerédi's theorem. Geom. Funct. Anal., 11(3):465-588, 2001. 7
Higher-Order Fourier Analysis, I. (Notes available from the author). Ben Green, Ben Green. Higher-Order Fourier Analysis, I. (Notes available from the author). 7
An arithmetic regularity lemma, an associated counting lemma, and applications. Ben Green, Terence Tao, An irregular mind. Budapest2149János Bolyai Math. Soc.Ben Green and Terence Tao. An arithmetic regularity lemma, an associated counting lemma, and applications. In An irregular mind, volume 21 of Bolyai Soc. Math. Stud., pages 261-334. János Bolyai Math. Soc., Budapest, 2010. 2, 6, 20, 23, 49
Linear equations in primes. Ben Green, Terence Tao, Ann. of Math. 171248Ben Green and Terence Tao. Linear equations in primes. Ann. of Math. (2), 171(3):1753- 1850, 2010. 48
The quantitative behaviour of polynomial orbits on nilmanifolds. Ben Green, Terence Tao, Ann. of Math. 175249Ben Green and Terence Tao. The quantitative behaviour of polynomial orbits on nilmani- folds. Ann. of Math. (2), 175(2):465-540, 2012. 49
An inverse theorem for the Gowers U s+1. Ben Green, Terence Tao, Tamar Ziegler, Ann. of Math. 1762Ben Green, Terence Tao, and Tamar Ziegler. An inverse theorem for the Gowers U s+1 [N]- norm. Ann. of Math. (2), 176(2):1231-1372, 2012. 2
Nonconventional ergodic averages and nilmanifolds. Bernard Host, Bryna Kra, Ann. of Math. 161248Bernard Host and Bryna Kra. Nonconventional ergodic averages and nilmanifolds. Ann. of Math. (2), 161(1):397-488, 2005. 48
Parallelepipeds, nilpotent groups and Gowers norms. Bernard Host, Bryna Kra, Bull. Soc. Math. France. 136348Bernard Host and Bryna Kra. Parallelepipeds, nilpotent groups and Gowers norms. Bull. Soc. Math. France, 136(3):405-437, 2008. 48
On the joint distribution of q-additive functions in residue classes. Dong-Hyun Kim, J. Number Theory. 742Dong-Hyun Kim. On the joint distribution of q-additive functions in residue classes. J. Number Theory, 74(2):307-336, 1999. 2
Gowers norms for the Thue-Morse and Rudin-Shapiro sequences. Jakub Konieczny, arXiv:1611.09985To appear in Annales de l'Institut Fourier. 232math.NTJakub Konieczny. Gowers norms for the Thue-Morse and Rudin-Shapiro sequences. To appear in Annales de l'Institut Fourier, 2019+. arXiv: 1611.09985 [math.NT]. 2, 32
Gowers uniformity norm and pseudorandom measures of the pseudorandom binary sequences. Huaning Liu, Int. J. Number Theory. 75Huaning Liu. Gowers uniformity norm and pseudorandom measures of the pseudorandom binary sequences. Int. J. Number Theory, 7(5):1279-1302, 2011. 6
Automatic sequences are orthogonal to aperiodic multiplicative functions. Mariusz Lemańczyk, Clemens Müllner, arXiv:1811.00594arXiv e-printsMariusz Lemańczyk and Clemens Müllner. Automatic sequences are orthogonal to aperiodic multiplicative functions. arXiv e-prints, page arXiv:1811.00594, Nov 2018. 24
Henryk Minc, Nonnegative matrices. Wiley-Interscience Series in Discrete Mathematics and Optimization. New YorkWiley-Interscience Publication39Henryk Minc. Nonnegative matrices. Wiley-Interscience Series in Discrete Mathematics and Optimization. John Wiley & Sons, Inc., New York, 1988. A Wiley-Interscience Publication. 39
Gelfond's sum of digits problems. Johannes Morgenbesser, TU WienMaster's thesisJohannes Morgenbesser. Gelfond's sum of digits problems. Master's thesis, TU Wien, 2008. 2
La somme des chiffres des carrés. Christian Mauduit, Joël Rivat, Acta Math. 2031Christian Mauduit and Joël Rivat. La somme des chiffres des carrés. Acta Math., 203(1):107- 148, 2009. 2
Sur un problème de Gelfond: la somme des chiffres des nombres premiers. Christian Mauduit, Joël Rivat, Ann. of Math. 1712Christian Mauduit and Joël Rivat. Sur un problème de Gelfond: la somme des chiffres des nombres premiers. Ann. of Math. (2), 171(3):1591-1646, 2010. 2
Prime numbers along Rudin-Shapiro sequences. Christian Mauduit, Joël Rivat, J. Eur. Math. Soc. (JEMS). 17105Christian Mauduit and Joël Rivat. Prime numbers along Rudin-Shapiro sequences. J. Eur. Math. Soc. (JEMS), 17(10):2595-2642, 2015. 2, 5
Rudin-Shapiro sequences along squares. Christian Mauduit, Joël Rivat, Trans. Amer. Math. Soc. 37011Christian Mauduit and Joël Rivat. Rudin-Shapiro sequences along squares. Trans. Amer. Math. Soc., 370(11):7899-7921, 2018. 2
On finite pseudorandom binary sequences. II. The Champernowne, Rudin-Shapiro, and Thue-Morse sequences, a further construction. Christian Mauduit, András Sárközy, J. Number Theory. 732Christian Mauduit and András Sárközy. On finite pseudorandom binary sequences. II. The Champernowne, Rudin-Shapiro, and Thue-Morse sequences, a further construction. J. Number Theory, 73(2):256-276, 1998. 2
Normality of the Thue-Morse sequence along Piatetski-Shapiro sequences. Clemens Müllner, Lukas Spiegelhofer, arXiv:1511.01671IIPreprint.math.NTClemens Müllner and Lukas Spiegelhofer. Normality of the Thue-Morse sequence along Piatetski-Shapiro sequences, II, 2015. Preprint. arXiv:1511.01671 [math.NT]. 2
Automatic sequences fulfill the Sarnak conjecture. Clemens Müllner, Duke Math. J. 1661730Clemens Müllner. Automatic sequences fulfill the Sarnak conjecture. Duke Math. J., 166(17):3219-3290, 2017. 5, 24, 26, 27, 28, 29, 30
The Rudin-Shapiro sequence and similar sequences are normal along squares. Clemens Müllner, Canad. J. Math. 7055Clemens Müllner. The Rudin-Shapiro sequence and similar sequences are normal along squares. Canad. J. Math., 70(5):1096-1129, 2018. 2, 5
On the Gowers norm of pseudorandom binary sequences. Harald Niederreiter, Joël Rivat, Bull. Aust. Math. Soc. 792Harald Niederreiter and Joël Rivat. On the Gowers norm of pseudorandom binary sequences. Bull. Aust. Math. Soc., 79(2):259-271, 2009. 6
The level of distribution of the Thue-Morse sequence. Lukas Spiegelhofer, arXiv:1803.0168925arXiv e-printsLukas Spiegelhofer. The level of distribution of the Thue-Morse sequence. arXiv e-prints, page arXiv:1803.01689, Mar 2018. 2, 5
Higher order Fourier analysis. Terence Tao, Graduate Studies in Mathematics. 14232American Mathematical SocietyTerence Tao. Higher order Fourier analysis, volume 142 of Graduate Studies in Mathemat- ics. American Mathematical Society, Providence, RI, 2012. 7, 8, 32
Additive combinatorics. Terence Tao, Van Vu, Cambridge Studies in Advanced Mathematics. 1056Cambridge University PressTerence Tao and Van Vu. Additive combinatorics, volume 105 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 2006. 6
| [] |
[
"LOCAL TERMS FOR TRANSVERSAL INTERSECTIONS",
"LOCAL TERMS FOR TRANSVERSAL INTERSECTIONS"
] | [
"Yakov Varshavsky "
] | [] | [] | The goal of this note is to show that in the case of "transversal intersections" the "true local terms" appearing in the Lefschetz trace formula are equal to the "naive local terms". To prove the result, we extend the method of [Va], where the case of contracting correspondences is treated. Our new ingredients are the observation of Verdier [Ve] that specialization of anétale sheaf to the normal cone is monodromic and the assertion that local terms are "constant in families". As an application, we get a generalization of the Deligne-Lusztig trace formula [DL]. | 10.1112/s0010437x23007091 | [
"https://arxiv.org/pdf/2003.06815v3.pdf"
] | 212,725,133 | 2003.06815 | 2d5685619824748d2a72d92ba890e679547a4f3f |
LOCAL TERMS FOR TRANSVERSAL INTERSECTIONS
25 Nov 2021
Yakov Varshavsky
LOCAL TERMS FOR TRANSVERSAL INTERSECTIONS
25 Nov 2021
The goal of this note is to show that in the case of "transversal intersections" the "true local terms" appearing in the Lefschetz trace formula are equal to the "naive local terms". To prove the result, we extend the method of [Va], where the case of contracting correspondences is treated. Our new ingredients are the observation of Verdier [Ve] that specialization of anétale sheaf to the normal cone is monodromic and the assertion that local terms are "constant in families". As an application, we get a generalization of the Deligne-Lusztig trace formula [DL].
Introduction
Let f : X → X be a morphism of schemes of finite type over an algebraically closed field k, let ℓ be a prime number different from the characteristic of k, and let F ∈ D b c (X, Q ℓ ) be equipped with a morphism u : f * F → F . Then for every fixed point x ∈ Fix(f ) ⊆ X, one can consider the restriction u x : F x → F x , hence we can consider its trace Tr(u x ) ∈ Q ℓ , called the "naive local term" of u at x.
On the other hand, if x ∈ Fix(f ) ⊆ X is an isolated fixed point, one can also consider the "true local term" LT x (u) ∈ Q ℓ , appearing in the Lefschetz-Verdier trace formula, so the natural question is when these two locals terms are equal.
Motivated by work of many people, including Illusie [Il], Pink [Pi] and Fujiwara [Fu], it was shown in [Va] that this is the case when f is "contracting near x", by which we mean that the induced map of normal cones N x (f ) : N x (X) → N x (X) maps N x (X) to the zero section. In particular, this happens when the induced map of Zariski tangent spaces d x (f ) : T x (X) → T x (X) is zero.
A natural question is whether the equality LT x (u) = Tr(u x ) holds for a more general class of morphisms. For example, Deligne asked whether the equality holds when x is the only fixed point of d x (f ) : T x (X) → T x (X), or equivalently, when the linear map d x (f ) − Id : T x (X) → T x (X) is invertible. Note that when X is smooth at x, this condition is equivalent to the fact that the graph of f intersects transversally with the diagonal at x.
The main result of this note gives an affirmative answer to Deligne's question. Moreover, in order to get an equality LT x (u) = Tr(u x ) it suffices to assume a weaker condition that x is the only fixed point of N x (f ) : N x (X) → N x (X) (see Corollary 4.11). In particular, we show this in the case when f is an automorphism of X of finite order, prime to the characteristic of k, or, more generally, a "semisimple" automorphism (see Corollary 5.6).
Actually, as in [Va], we show a more general result (see Theorem 4.10) in which a morphism f is replaced by a correspondence, and a fixed point x is replaced by a c-invariant closed subscheme Date: November 29, 2021. This research was partially supported by the ISF grant 822/17. Z ⊆ X. Moreover, instead of showing the equality of local terms we show a more general "local" assertion that in some cases the so-called "trace maps" commute with restrictions. Namely, we show it in the case when c has "no almost fixed points in the punctured tubular neighborhood of Z" (see Definition 4.4).
As an easy application, we prove a generalization of the Deligne-Lusztig trace formula (see Theorem 5.9).
To prove our result, we follow the strategy of [Va]. First, using additivity of traces, we reduce to the case when F x ≃ 0. In this case, Tr(u x ) = 0, thus we have to show that LT x (u) = 0. Next, using specialization to the normal cone, we reduce to the case when f : X → X is replaced by N x (f ) : N x (X) → N x (X) and F by its specialization sp x (F ). In other words, we can assume that X is a cone with vertex x, and f is G m -equivariant.
In the contracting case, treated in [Va], the argument stops there. Indeed, after passing to normal cones we can assume that f is the constant map with image x. In this case, our assumption F x ≃ 0 implies that f * F ≃ 0, thus u = 0, hence LT x (u) = 0.
In general, by a theorem of Verdier [Ve], we can assume that F is monodromic. Since it is enough to show an analogous assertion for sheaves with finite coefficients, we can thus assume that F is G m -equivariant with respect to the action (t, y) → t n (y) for some n.
Since f is homotopic to the constant map with image x (via the homotopy f t (y) := t n f (y)) it suffices to show that local terms are "constant in families". We deduce the latter assertion from the fact that local terms commute with nearby cycles.
The paper is organized as follows. In Section 1 we introduce correspondences, trace maps and local terms. In Section 2 we define relative correspondences and formulate Proposition 2.5 asserting that in some cases trace maps are "constant in families". In Section 3 we study a particular case of relative correspondences, obtained from schemes with an action of an algebraic monoid (A 1 , ·).
In Section 4 we formulate our main result (Theorem 4.10) asserting that in some cases trace maps commute with restrictions to closed subschemes. We also deduce an affirmative answer to Deligne's question, discussed earlier. In Section 5 we apply the results of Section 4 to the case of an automorphism and deduce a generalization of the Deligne-Lusztig trace formula.
Finally, we prove Theorem 4.10 in Section 6 and prove Proposition 2.5 in Section 7. I thank Luc Illusie, who explained to me a question of Deligne several years ago and expressed his interest on many occasions. I also thank David Hansen and Jared Weinstein for their comments and stimulating questions (see 5.7) and thank Helene Esnault and Nick Rozenblyum for their interest.
Notation
For a scheme X, we denote by X red the corresponding reduced scheme. For a morphism of schemes f : Y → X and a closed subscheme Z ⊆ X, we denote by f −1 (Z) ⊆ Y the schematic inverse image of Z.
Throughout most of the paper all schemes will be of finite type over a fixed algebraically closed field k. The only exception is Section 7, where all schemes will be of finite type over a spectrum of a discrete valuation ring over k with residue field k.
We fix a prime ℓ, invertible in k, and a commutative ring with identity Λ, which is either finite and is annihilated by some power of ℓ, or a finite extension of Z ℓ or Q ℓ .
To each scheme X as above, we associate a category D b ctf (X, Λ) of "complexes of finite tordimension with constructible cohomology" (see [SGA4 1 2 , Rapport 4.6] when Λ is finite and [De, in other cases). This category is known to be stable under the six operations f * , f ! , f * , f ! , ⊗ and RHom (see [SGA4 1 2 ,Th. finitude,1.7]). For each X as above, we denote by π X : X → pt := Spec k the structure morphism, by Λ X ∈ D b ctf (X, Λ) the constant sheaf with fiber Λ, and by K X = π ! X (Λ pt ) the dualizing complex of X. We will also write RΓ(X, ·) (resp. RΓ c (X, ·)) instead of π X * (resp π X! ).
For an embedding i : Y ֒→ X and F ∈ D b ctf (X, Λ), we will often write F | Y instead of i * F . We will freely use various base change morphisms (see, for example, [SGA4,XVII,2.1.3 and XVIII,3.1.12.3,3.1.13.2, 3.1.14.2]), which we will denote by BC.
1. Correspondences and trace maps 1.1. Correspondences. (a) By a correspondence, we mean a morphism of schemes of the form c = (c l , c r ) : C → X × X, which can be also viewed as a diagram X
c l ←− C cr −→ X. (b) Let c : C → X × X and b : B → Y × Y be correspondences. By a morphism from c to b, we mean a pair of morphisms [f ] = (f, g), making the following diagram commutative (1.1) X c l ← −−− − C cr − −−− → X f g f Y b l ← −−− − B br − −−− → Y. (c) A correspondence c : C → X × X gives rise to a Cartesian diagram Fix(c) − −−− → C c X ∆ − −−− → X × X,
where ∆ : X → X × X is the diagonal map. We call Fix(c) the scheme of fixed points of c.
(d) We call a morphism [f ] from (b) Cartesian, if the right inner square of (1.1) is Cartesian.
Restriction of correspondences.
Let c : C → X × X be a correspondence, W ⊆ C an open subscheme, and Z ⊆ X a locally closed subscheme.
(a) We denote by c| W : W → X × X the restriction of c.
(b) Let c| Z : c −1 (Z × Z) → Z × Z be the restriction of c. By definition, the inclusion maps Z ֒→ X and c −1 (Z × Z) ֒→ C define a morphism c| Z → c of correspondences. (c) We say that Z is (schematically) c-invariant, if c −1 r (Z) ⊆ c −1 l (Z).
This happens if and only if we have c −1 (Z × Z) = c −1 r (Z) or, equivalently, the natural morphism of correspondences c| Z → c from (b) is Cartesian.
1.3. Remark. Our conventions slightly differ from those of [Va,1.5.6]. For example, we do not assume that Z is closed, our notion of c-invariance is stronger than the one of [Va,1.5.1], and when Z is c-invariant, then c| Z in the sense of [Va] is the correspondence c −1 (Z × Z) red → Z × Z.
Cohomological correspondences.
Let c : C → X × X be a correspondence, and let F ∈ D b ctf (X, Λ). (a) By c-morphism or a cohomological correspondence lifting c, we mean an element of
Hom c (F , F ) := Hom(c * l F , c ! r F ) ≃ Hom(c r! c * l F , F ). (b) Let [f ] : c → b be a Cartesian morphism of correspondences (see 1.1(d)). Then every b- morphism u : b * l F → b ! r F gives rise to a c-morphism [f ] * (u) : c * l (f * F ) → c ! r (f * F ) defined as a composition c * l (f * F ) ≃ g * (b * l F ) u −→ g * (b ! r F ) BC −→ c ! r (f * F ), where base change morphism BC exists, because [f ] is Cartesian.
(c) As in [Va,1.1.9], for an open subset W ⊆ C, every c-morphism u gives rise to a c| W -morphism
u| W : (c * l F )| W → (c ! r F )| W . (d)
It follows from (b) and 1.2(c) that for a c-invariant subscheme Z ⊆ X, every c-morphism u gives rise to a c| Z -morphism u| Z (compare [Va,1.5.6(a)]).
1.5. Trace maps and local terms. Fix a correspondence c : C → X × X.
(a) As in [Va,1.2.2], to every F ∈ D b ctf (X, Λ) we associate the trace map
T r c : Hom c (F , F ) → H 0 (Fix(c), K Fix(c) ). (b) For an open subset β of Fix(c), 1 we denote by T r β : Hom c (F , F ) → H 0 (β, K β )
the composition of T r c and the restriction map
H 0 (Fix(c), K Fix(c) ) → H 0 (β, K β ).
(c) If in addition β is proper over k, we denote by
LT β : Hom c (F , F ) → Λ
the composition of T r β and the integration map π β! : H 0 (β, K β ) → Λ. (d) In the case when β is a connected component of Fix(c), 2 which is proper over k, LT β (u) is usually called the (true) local term of u at β.
Relative correspondences
2.1. Relative correspondences. Let S be a scheme over k. By a relative correspondences over S, we mean a morphism c = (c l , c r ) : C → X × S X of schemes over S, or equivalently, a correspondence c = (c l , c r ) : C → X × X such that c l and c r are morphisms over S.
(a) For a correspondence c as above and a morphism g : S ′ → S of schemes over k we can form a relative correspondence g * (c) := c × S S ′ over S ′ . Moreover, it follows from 1.4(b) that every c-morphism u ∈ Hom c (F , F ) gives rise to the g * (c)-morphism g * (u) ∈ Hom g * (c) (g * F , g * F ), where g * F ∈ D b ctf (X × S S ′ , Λ) denotes the * -pullback of F . (b) For a geometric point s of S, let i s : {s} → S be the canonical map, and we set c s := i * s (c). Then, by (a), every c-morphism u ∈ Hom c (F , F ) gives rise to a c s -morphism u s := i * s (u) ∈ Hom cs (F s , F s ). Thus we can form the trace map T r cs (u s ) ∈ H 0 (Fix(c s ), K Fix(cs) ).
Remark.
In other words, a relative correspondence c over S gives rise a family of correspondences c s : C s → X s × X s , parameterized by a collection of geometric points s of S. Moreover, every c-morphism u gives rise to a family of c s -morphisms u s ∈ Hom cs (F s , F s ), thus a family of trace maps T r cs (u s ) ∈ H 0 (Fix(c s ), K Fix(cs )).
Proposition 2.5 below, whose proof will be given in Section 7, asserts that in some cases the assignment s → T r cs (u s ) is "constant".
2.3. Notation. We say that a morphism f : X → S is a topologically constant family, if the reduced scheme X red is isomorphic to a product Y × S red over S.
Claim 2.4. Assume that f : X → S is a topologically constant family, and that S is connected. Then for every two geometric points s, t of S, we have a canonical identification RΓ(X_s, K_{X_s}) ≃ RΓ(X_t, K_{X_t}), hence H^0(X_s, K_{X_s}) ≃ H^0(X_t, K_{X_t}).
Proof. Set K_{X/S} := f^!(Λ_S) ∈ D^b_ctf(X, Λ) and F := f_*(K_{X/S}) ∈ D^b_ctf(S, Λ). Our assumption on f implies that for every geometric point s of S, the base change morphisms
F_s = RΓ(s, F_s) → RΓ(X_s, i_s^*(K_{X/S})) → RΓ(X_s, K_{X_s})
are isomorphisms. Furthermore, the assumption also implies that F is constant, that is, isomorphic to a pullback of an object in D^b_ctf(pt, Λ). Then for every specialization arrow α : t → s, the specialization map α^* : F_s → F_t (see [SGA4, VIII, 7]) is an isomorphism (because F is locally constant) and does not depend on the specialization arrow α (only on s and t). Thus the assertion follows from the assumption that S is connected.
Proposition 2.5. Let c : C → X × X be a relative correspondence over S such that S is connected and Fix(c) → S is a topologically constant family. Then for every c-morphism u ∈ Hom_c(F, F) such that F is ULA over S, the assignment s ↦ Tr_{c_s}(u_s) is "constant", that is, for every two geometric points s, t of S, the identification H^0(X_s, K_{X_s}) ≃ H^0(X_t, K_{X_t}) from Claim 2.4 identifies Tr_{c_s}(u_s) with Tr_{c_t}(u_t).
In particular, we have Tr_{c_s}(u_s) = 0 if and only if Tr_{c_t}(u_t) = 0.
3. An (A¹, ·)-equivariant case
3.1. Construction. Fix a scheme S over k and a morphism μ : X × S → X.
(a) A correspondence c : C → X × X gives rise to the correspondence c_S = c_S^μ : C_S → X_S ×_S X_S over S, where C_S := C × S and X_S := X × S, while c_{Sl}, c_{Sr} : C × S → X × S are given by c_{Sr} := c_r × Id_S and c_{Sl} := (μ, pr_S) ∘ (c_l × Id_S), that is, c_{Sl}(y, s) = (μ(c_l(y), s), s) and c_{Sr}(y, s) = (c_r(y), s) for all y ∈ C and s ∈ S.
(b) For every geometric point s of S, we get an endomorphism μ_s := μ(−, s) : X_s → X_s. Then c_s := i_s^*(c_S) is the correspondence c_s = (μ_s ∘ c_l, c_r) : C_s → X_s × X_s. In particular, for every s ∈ S(k) we get a correspondence c_s : C → X × X.
(c) Suppose we are given F ∈ D^b_ctf(X, Λ), a c-morphism u ∈ Hom_c(F, F) and a morphism v : μ^*F → F_S in D^b_ctf(X_S, Λ), where we set F_S := F ⊠ Λ_S ∈ D^b_ctf(X_S, Λ). To this data we associate a c_S-morphism u_S ∈ Hom_{c_S}(F_S, F_S), defined as the composition
c_{Sl}^*(F_S) ≃ (c_l × Id_S)^*(μ^*F) --v--> (c_l × Id_S)^*(F_S) ≃ (c_l^*F) ⊠ Λ_S --u--> (c_r^!F) ⊠ Λ_S ≃ c_{Sr}^!(F_S).
(d) For every geometric point s of S, the morphism v restricts to a morphism v_s = i_s^*(v) : μ_s^*F → F, and the c_s-morphism u_s := i_s^*(u_S) : c_l^* μ_s^* F → c_r^! F decomposes as
u_s : c_l^* μ_s^* F --v_s--> c_l^* F --u--> c_r^! F.
3.2. Remarks. For a morphism μ : X × S → X and a closed point a ∈ S, we set S^a := S ∖ {a} and μ^a := μ|_{X × S^a} : X × S^a → X. Let F ∈ D^b_ctf(X, Λ) be such that μ_a^*F ≃ 0.
(a) Every morphism v^a : (μ^a)^*F → F_{S^a} extends uniquely to a morphism v : μ^*F → F_S. Indeed, let j : X × S^a ↪ X × S and i : X × {a} ↪ X × S be the inclusions. Using the distinguished triangle j_! j^* μ^*F → μ^*F → i_* i^* μ^*F and the assumption that i^* μ^*F ≃ μ_a^*F ≃ 0, we conclude that the map j_! j^* μ^*F → μ^*F is an isomorphism. Therefore the restriction map j^* : Hom(μ^*F, F_S) → Hom(j^* μ^*F, j^* F_S) ≃ Hom(j_! j^* μ^*F, F_S) is an isomorphism, as claimed.
(b) Our assumption μ_a^*F ≃ 0 implies that Hom_{c_a}(F, F) = Hom(c_l^* μ_a^* F, c_r^! F) ≃ 0.
3.3. Equivariant case. (a) Assume that S is an algebraic monoid acting on X, and that μ : X × S → X is the action map. We say that F ∈ D^b_ctf(X, Λ) is weakly S-equivariant if we are given a morphism v : μ^*F → F_S such that v_1 : F = μ_1^*F → F is the identity.
In particular, the construction of 3.1 applies, so to every c-morphism u ∈ Hom c (F , F ) we associate a c S -morphism u S ∈ Hom cS (F S , F S ).
(b) In the situation of (a), the correspondence c 1 equals c, and the assumption on v 1 implies that the c-morphism u 1 equals u.
3.4. Basic example. (a) Let X be a scheme, equipped with an action μ : X × A¹ → X of the algebraic monoid (A¹, ·), let μ_0 : X → X be the induced (idempotent) endomorphism, and let Z = Z_X ⊆ X be the scheme of μ_0-fixed points, also called the zero section. Then Z_X ⊆ X is a locally closed subscheme, while μ_0 : X → X factors as X → Z_X ↪ X, thus inducing a projection pr_X : X → Z_X, whose restriction to Z_X is the identity.
(b) The correspondence X ↦ (Z_X ⊆ X --pr_X--> Z_X) is functorial. Namely, every (A¹, ·)-equivariant morphism f : X′ → X induces a morphism Z_f : Z_{X′} → Z_X between zero sections, and we have an equality Z_f ∘ pr_{X′} = pr_X ∘ f of morphisms X′ → Z_X.
(c) Let c : C → X × X be any correspondence. Then the construction of 3.1 gives rise to a relative correspondence c_{A¹} : C_{A¹} → X_{A¹} × X_{A¹} over A¹, hence to a family of correspondences c_t : C → X × X, parameterized by t ∈ A¹(k).
(d) For every t ∈ A¹(k), the zero section Z ⊆ X is μ_t-invariant, and the induced map μ_t|_Z is the identity. Therefore we have an inclusion Fix(c|_Z) ⊆ Fix(c_t|_Z) of schemes of fixed points.
(e) For every t ∈ G_m(k), we have an equality Fix(c_t|_Z) = Fix(c|_Z). Indeed, one inclusion was shown in (d), while the opposite one follows from the first one together with the identity (c_t)_{t^{-1}} = c.
(f) Since μ_0 factors through Z ⊆ X, we have an equality Fix(c_0|_Z) = Fix(c_0). Moreover, if Z is c-invariant, we have an equality Fix(c_0|_Z) = Fix(c|_Z). Indeed, one inclusion was shown in (d), while the opposite one follows from the inclusion Fix(c_0|_Z) ⊆ c_r^{-1}(Z) = c^{-1}(Z × Z).
3.5. Twisted action. Assume that we are in the situation of 3.4. For every n ∈ N, we can consider the n-twisted action μ(n) : X × A¹ → X of (A¹, ·) on X, given by the formula μ(n)(x, t) = μ(x, t^n). It gives rise to the family of correspondences c_t^{μ(n)} : C → X × X such that c_t^{μ(n)} = c_{t^n}. Clearly, μ(n) restricts to an n-twisted action of G_m on X.
Proposition 3.6. Let X be an (A¹, ·)-equivariant scheme, and let c : C → X × X be a correspondence such that Z = Z_X ⊆ X is closed and c-invariant, Fix(c) ∖ Fix(c|_Z) = ∅, and the set {t ∈ A¹(k) | Fix(c_t^μ) ∖ Fix(c_t^μ|_Z) ≠ ∅} is finite. Then for every weakly G_m-equivariant F ∈ D^b_ctf(X, Λ) (see 3.3(a)) with respect to the n-twisted action (see 3.5) such that F|_Z = 0, and every c-morphism u ∈ Hom_c(F, F), we have Tr_c(u) = 0.
Proof. Consider the n-twisted action μ(n) : X × A¹ → X, and let μ(n)^0 : X × G_m → X be the induced n-twisted action of G_m. The weakly G_m-equivariant structure on F gives rise to a morphism v^0 : (μ(n)^0)^*F → F_{G_m} (see 3.3(a)).
Next, since μ(n)_0 = μ_0 : X → X factors through Z, while F|_Z = 0, we conclude that (μ(n)_0)^*F ≃ 0. Therefore the morphism v^0 extends uniquely to a morphism v : μ(n)^*F → F_{A¹} (see 3.2(a)). Thus, by the construction of 3.1(c), our c-morphism u gives rise to a c_{A¹}^{μ(n)}-morphism u_{A¹} ∈ Hom_{c_{A¹}^{μ(n)}}(F_{A¹}, F_{A¹}) such that u_1 = u (see 3.3(b)).
Notice that since u_0 ∈ Hom_{c_0}(F, F) = 0 (see 3.2(b)), we have Tr_{c_0}(u_0) = 0. We would like to apply Proposition 2.5 to deduce that Tr_c(u) = Tr_{c_1}(u_1) = 0.
Consider the set T := {t ∈ A¹(k) | Fix(c^μ_{t^n}) ∖ Fix(c^μ_{t^n}|_Z) ≠ ∅}.
Then 0 ∉ T (by 3.4(f)), and our assumption says that T is finite and 1 ∉ T. Hence S := A¹ ∖ T ⊆ A¹ is an open subscheme, and 0, 1 ∈ S. Let c_S^{μ(n)} be the restriction of c_{A¹}^{μ(n)} to S; it suffices to show that Fix(c_S^{μ(n)}) → S is a topologically constant family, so that Proposition 2.5 applies.
We claim that we have equalities Fix(c_S^{μ(n)})_red = Fix(c_S^{μ(n)}|_{Z×S})_red = Fix(c|_Z)_red × S of locally closed subschemes of C × S.
For this it suffices to show that for every t ∈ S(k) we have equalities Fix(c_{t^n})_red = Fix(c_{t^n}|_Z)_red = Fix(c|_Z)_red. The left equality follows from the identity Fix(c^μ_{t^n}) ∖ Fix(c^μ_{t^n}|_Z) = ∅ used to define S, while the right equality follows, since Z is c-invariant, from observations 3.4(e),(f).
3.7. Equivariant correspondences. Let c : C → X × X be an (A¹, ·)-equivariant correspondence, by which we mean that both C and X are equipped with an action of the monoid (A¹, ·), and both projections c_l, c_r : C → X are (A¹, ·)-equivariant.
(a) Note that the subscheme of fixed points Fix(c) ⊆ C is (A¹, ·)-invariant, the correspondence c induces a correspondence Z_c : Z_C → Z_X × Z_X between zero sections, and we have an equality Fix(Z_c) = Z_{Fix(c)} of locally closed subschemes of C.
(b) By 3.1(a), the correspondence c gives rise to a relative correspondence c_{A¹} : C_{A¹} → X_{A¹} × X_{A¹} over A¹. Let the monoid (A¹, ·) act on X_{A¹} and C_{A¹} by the product of its actions on X and C and the trivial action on A¹. Then c_{A¹} is an (A¹, ·)-equivariant correspondence, and the induced correspondence Z_{c_{A¹}} between zero sections is the product of Z_c (see (a)) and Id_{A¹} : A¹ → A¹ × A¹.
(c) Using (b), for every t ∈ A¹(k) we get an (A¹, ·)-equivariant correspondence c_t : C → X × X, which satisfies Z_{c_t} = Z_c and Z_{Fix(c_t)} = Fix(Z_c) (use (a)).
3.8. Cones. (a) Recall (see 3.4(a)) that for every (A¹, ·)-equivariant scheme X there is a natural projection pr_X : X → Z_X. We say that X is a cone if the projection pr_X : X → Z_X is affine. In concrete terms this means that X ≃ Spec(A), where A = ⊕_{n=0}^∞ A_n is a graded quasi-coherent O_Z-algebra such that A_0 = O_Z and each A_n is a coherent O_Z-module. In this case, the zero section Z_X ⊆ X is automatically closed.
(b) In the situation of (a), the open subscheme X ∖ Z_X ⊆ X is G_m-invariant, and the quotient (X ∖ Z_X)/G_m is isomorphic to Proj(A) over Z_X, hence is proper over Z_X.
(c) Notice that if c : C → X × X is an (A¹, ·)-equivariant correspondence such that C and X are cones, then Fix(c) is a cone as well (compare 3.7(a)).
Our next goal is to show that in some cases the finiteness assumption in Proposition 3.6 is automatic.
Lemma 3.9. Let c : C → X × X be an (A¹, ·)-equivariant correspondence over k such that X is a cone with zero section Z, C is a cone with zero section c_r^{-1}(Z), and Fix(c|_Z) is proper over k.
Then the set {t ∈ A¹(k) | Fix(c_t) ∖ Fix(c_t|_Z) ≠ ∅} is finite.
Proof. We let c_{A¹} be as in 3.7(b), and set Fix(c_{A¹})′ := Fix(c_{A¹}) ∖ Fix(c_{A¹}|_{Z_{A¹}}) ⊆ Fix(c_{A¹}). We have to show that the image of the projection π : Fix(c_{A¹})′ → A¹ is a finite set.
Note that the fiber of Fix(c_{A¹})′ over 0 ∈ A¹ is Fix(c_0) ∖ Fix(c_0|_Z) = ∅ (by 3.4(f)). It thus suffices to show that the image of π is closed. By 3.8(c), we conclude that Fix(c_{A¹}) is a cone, while using 3.7(a),(b) we conclude that Z_{Fix(c_{A¹})} = Fix(c_{A¹}|_{Z_{A¹}}) = Fix(c|_Z) × A¹.
It now follows from 3.8(b) that the open subscheme Fix(c_{A¹})′ ⊆ Fix(c_{A¹}) is G_m-invariant, and that π factors through the quotient Fix(c_{A¹})′/G_m, which is proper over Fix(c|_Z) × A¹. Since Fix(c|_Z) is proper over k by assumption, the projection Fix(c_{A¹})′/G_m → A¹ is therefore proper. Hence the image of π is closed, completing the proof.
4.1. Normal cones. (a) Recall that to a pair (X, Z), where X is a scheme and Z ⊆ X a closed subscheme, one associates the normal cone N_Z(X), defined to be N_Z(X) = Spec(⊕_{n=0}^∞ (I_Z)^n/(I_Z)^{n+1}), where I_Z ⊆ O_X is the sheaf of ideals of Z. By definition, N_Z(X) is a cone in the sense of 3.8, and Z ⊆ N_Z(X) is the zero section.
(b) The assignment (X, Z) ↦ (N_Z(X), Z) is functorial. Namely, every morphism f : X′ → X such that Z′ ⊆ f^{-1}(Z) gives rise to an (A¹, ·)-equivariant morphism N_{Z′}(X′) → N_Z(X), whose induced morphism between zero sections is f|_{Z′} : Z′ → Z.
(c) By (b), every morphism f : X′ → X induces a morphism N_Z(f) : N_{f^{-1}(Z)}(X′) → N_Z(X), lifting f|_{f^{-1}(Z)} : f^{-1}(Z) → Z. Moreover, the induced map N_{f^{-1}(Z)}(X′) → N_Z(X) ×_Z f^{-1}(Z) is a closed embedding, and we have an equality
N_Z(f)^{-1}(Z) = f^{-1}(Z) ⊆ N_{f^{-1}(Z)}(X′).
The following standard assertion will be important later.
Lemma 4.2. Assume that N_Z(X) is set-theoretically supported on the zero section, that is, N_Z(X)_red = Z_red. Then Z_red ⊆ X_red is open.
Proof. Since the assertion is local on X, we can assume that X is affine. Moreover, replacing X by X_red, we can assume that X is reduced. Then our assumption implies that there exists n such that I_Z^n = I_Z^{n+1}. Using the Nakayama lemma, we conclude that the localization of I_Z^n at every x ∈ Z is zero. Since X is reduced, the localization of I_Z at every point z ∈ Z is then zero as well, which implies that Z ⊆ X is open, as claimed.
4.3. Application to correspondences. (a) Let c : C → X × X be a correspondence, and Z ⊆ X a closed subscheme. Then, by 4.1, the correspondence c gives rise to an (A¹, ·)-equivariant correspondence
N_Z(c) : N_{c^{-1}(Z×Z)}(C) → N_Z(X) × N_Z(X)
such that the induced correspondence between zero sections is c|_Z : c^{-1}(Z × Z) → Z × Z.
(b) Combining observations 3.8(c) and 3.7(a), we get that Fix(N_Z(c)) is a cone with zero section Fix(c|_Z). Moreover, N_{Fix(c|_Z)}(Fix(c)) is a closed subscheme of Fix(N_Z(c)) (see [Va, Cor 1.4.5]).
(c) By 3.7(b), for every t ∈ A¹(k) we get a correspondence
N_Z(c)_t : N_{c^{-1}(Z×Z)}(C) → N_Z(X) × N_Z(X).
Moreover, every Fix(N_Z(c)_t) is a cone with zero section Fix(c|_Z) (use 3.8(c) and 3.7(c)).
Definition 4.4. Let c : C → X × X be a correspondence, and let Z ⊆ X be a closed subscheme.
(a) We say that c has no fixed points in the punctured tubular neighborhood of Z if the correspondence N_Z(c) satisfies Fix(N_Z(c)) ∖ Fix(c|_Z) = ∅.
(b) We say that c has no almost fixed points in the punctured tubular neighborhood of Z if Fix(N_Z(c)) ∖ Fix(c|_Z) = ∅ and the set {t ∈ A¹(k) | Fix(N_Z(c)_t) ∖ Fix(c|_Z) ≠ ∅} is finite.
4.5. Remarks. (a) The difference N_{c^{-1}(Z×Z)}(C) ∖ c^{-1}(Z × Z) can be thought of as the punctured tubular neighborhood of c^{-1}(Z × Z) ⊆ C. Therefore our condition 4.4(a) means that no point y ∈ N_{c^{-1}(Z×Z)}(C) ∖ c^{-1}(Z × Z) is a fixed point of N_Z(c), that is, N_Z(c)_l(y) ≠ N_Z(c)_r(y).
(b) Condition 4.4(b) means that there exists an open neighbourhood U ∋ 1 in A¹ such that for every y ∈ N_{c^{-1}(Z×Z)}(C) ∖ c^{-1}(Z × Z) we have μ_t(N_Z(c)_l(y)) ≠ N_Z(c)_r(y) for every t ∈ U. In other words, y is not an almost fixed point of N_Z(c).
4.6. The case of a morphism. (a) Let f : X → X be a morphism, and let x ∈ Fix(f) be a fixed point. We take c to be the graph Gr_f = (f, Id_X) of f, and set Z := {x}. Then N_x(X) := N_Z(X) is a closed conical subset of the tangent space T_x(X), the morphism N_x(f) : N_x(X) → N_x(X) is (A¹, ·)-equivariant, and Fix(N_x(c)) = Fix(N_x(f)) is a conical subset of N_x(X) ⊆ T_x(X). Thus Gr_f has no fixed points in the punctured tubular neighborhood of x if and only if set-theoretically Fix(N_x(f)) = {x}.
(b) Let T_x(f) : T_x(X) → T_x(X) be the differential of f at x. Then Fix(T_x(f)) = {x} if and only if the linear map T_x(f) − Id : T_x(X) → T_x(X) is invertible, that is, Gr_f intersects ∆_X at x transversally in the strongest possible sense. In this case, Gr_f has no fixed points in the punctured tubular neighborhood of x (by (a)).
(c) Assume now that X is smooth at x. Then, by (a) and (b), Gr_f has no fixed points in the punctured tubular neighborhood of x if and only if Gr_f intersects ∆_X at x transversally.
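As a concrete toy case of (b) and (c) (a worked example of our own, not taken from [Va]):

    \[
      X=\mathbb{A}^1,\qquad f(x)=\lambda x,\qquad \lambda\in k\setminus\{1\},\qquad x=0\in\mathrm{Fix}(f).
    \]
    % Here T_0(f) is multiplication by \lambda, so T_0(f)-\mathrm{Id} is invertible and
    % \mathrm{Gr}_f meets \Delta_X transversally at 0. Hence f has no fixed points in the
    % punctured tubular neighborhood of 0, and Corollary 4.11 below gives LT_0(u)=\mathrm{Tr}(u_0).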
Though the next result is not needed for what follows, it shows that our setting generalizes the one studied in [Va].
Lemma 4.7. Assume that c is contracting near Z in the neighborhood of fixed points in the sense of [Va,2.1.1(c)]. Then c has no almost fixed points in the punctured tubular neighborhood of Z. Moreover, the subset of A 1 (k), defined in Definition 4.4(b), is empty.
Proof.
Choose an open neighborhood W ⊆ C of Fix(c) such that c| W is contracting near Z (see [Va,2.1.1(b)]). Then Fix(c| W ) = Fix(c), hence we can replace c by c| W , thus assuming that c is contracting near Z. In this case, the set-theoretic image of N Z (c) l : N c −1 (Z×Z) (C) → N Z (X) lies in the zero section. Therefore for every t ∈ A 1 (k) the set-theoretic image of the map Fix(N Z (c) t ) → N Z (X) lies in the zero section, implying the assertion.
By Lemma 4.7, the following result is a generalization of [Va,Thm 2.1.3(a)].
Lemma 4.8. Let c : C → X × X be a correspondence which has no fixed points in the punctured tubular neighborhood of Z ⊆ X. Then the closed subscheme Fix(c|_Z)_red ⊆ Fix(c)_red is open.
Proof. Recall that we have inclusions Fix(c|_Z)_red ⊆ N_{Fix(c|_Z)}(Fix(c))_red ⊆ Fix(N_Z(c))_red (by 4.1(b)), while our assumption implies that we have an equality Fix(c|_Z)_red = Fix(N_Z(c))_red. Therefore we have an equality Fix(c|_Z)_red = N_{Fix(c|_Z)}(Fix(c))_red, from which our assertion follows by Lemma 4.2.
4.9. Notation. Let c : C → X × X be a correspondence which has no fixed points in the punctured tubular neighborhood of Z ⊆ X. Then, by Lemma 4.8, Fix(c|_Z) ⊆ Fix(c) is an open subset, thus (see 1.5(b)) to every c-morphism u ∈ Hom_c(F, F) one can associate an element Tr_{Fix(c|_Z)}(u) ∈ H^0(Fix(c|_Z), K_{Fix(c|_Z)}).
Now we are ready to formulate the main result of this note, which by Lemma 4.7 generalizes [Va, Thm 2.1.3(b)].
Theorem 4.10. Let c : C → X × X be a correspondence, and let Z ⊆ X be a c-invariant closed subscheme such that c has no fixed points in the punctured tubular neighborhood of Z.
(a) Assume that c has no almost fixed points in the punctured tubular neighborhood of Z. Then for every c-morphism u ∈ Hom_c(F, F), we have an equality
Tr_{Fix(c|_Z)}(u) = Tr_{c|_Z}(u|_Z) ∈ H^0(Fix(c|_Z), K_{Fix(c|_Z)}).
(b) Every connected component β of Fix(c|_Z) which is proper over k is also a connected component of Fix(c). Moreover, for every c-morphism u ∈ Hom_c(F, F), we have an equality
LT_β(u) = LT_β(u|_Z).
As an application, we now deduce the result, stated in the introduction.
Corollary 4.11. Let f : X → X be a morphism, and let x ∈ Fix(f) be a fixed point such that the induced map of normal cones N_x(f) : N_x(X) → N_x(X) has no non-zero fixed points. Then:
(a) The point x is an isolated fixed point of f.
(b) For every morphism u : f^*F → F with F ∈ D^b_ctf(X, Λ), we have an equality LT_x(u) = Tr(u_x). In particular, if F = Λ and u is the identity, then LT_x(u) = 1.
Proof. As it was observed in 4.6(a), the assumption implies that {x} ⊆ X is a closed Gr f -invariant subscheme, and correspondence Gr f has no fixed points in the punctured tubular neighborhood of {x}. Therefore part (a) follows from Lemma 4.8, while the first assertion of (b) is an immediate corollary of Theorem 4.10. The second assertion of (b) now follows from the obvious observation that Tr(u x ) = 1.
5. The case of group actions
Lemma 5.1. Let D be a diagonalizable algebraic group acting on a scheme X, and let Z ⊆ X be a D-invariant closed subscheme. Then D acts on the normal cone N_Z(X), and the induced morphism N_{Z^D}(X^D) → N_Z(X)^D on D-fixed points is an isomorphism.
Proof. By the functoriality of the normal cone (see 4.1(b)), D acts on the normal cone N_Z(X), so it remains to show that the map N_{Z^D}(X^D) → N_Z(X)^D is an isomorphism.
Assume first that D is a finite group of order prime to the characteristic of k. Note that every z ∈ Z^D has a D-invariant open affine neighbourhood U ⊆ X. Thus, replacing X by U and Z by Z ∩ U, we can assume that X and Z are affine. Then we have to show that the map
k[N_Z(X)]_D ≅ k[N_Z(X)^D] → k[N_{Z^D}(X^D)]
is an isomorphism. The latter assertion follows from the fact that the functor of coinvariants M ↦ M_D is exact on k[D]-modules. Namely, the exactness of (−)_D implies that the isomorphism k[X]_D ≅ k[X^D] induces an isomorphism (I_Z)_D ≅ I_{Z^D}, and the rest is easy. To treat the case of a general D, notice that the set of torsion elements D_tor ⊆ D is Zariski dense. Since X, Z and N_Z(X) are Noetherian, there exists a finite subgroup D′ ⊆ D such that X^D = X^{D′}, and similarly for Z and N_Z(X). Therefore the assertion for D follows from that for D′, shown above.
Corollary 5.2. Let D be as in Lemma 5.1, let g ∈ D, and let Z ⊆ X be a g-invariant closed subscheme. Then g induces an endomorphism of the normal cone N Z (X), and the induced morphism N Z g (X g ) → N Z (X) g on g-fixed points is an isomorphism.
Proof. Let D ′ := g ⊆ D be the Zariski closure of the cyclic group g ⊆ D. Then D ′ is a diagonalizable group, and we have an equality X g = X D ′ and similarly for Z g and N Z (X) g . Thus the assertion follows from Lemma 5.1 for D ′ . 5.3. Example. Let g : X → X be an automorphism of finite order, which is prime to the characteristic of k. Then the cyclic group g ⊆ Aut(X) is a diagonalizable group, thus Corollary 5.2 applies in this case. So for every g-invariant closed subscheme Z ⊆ X, the natural morphism N Z g (X g ) → N Z (X) g is an isomorphism.
As a consequence we get a class of examples when the condition of Definition 4.4(a) is satisfied.
Corollary 5.4. Let G be a linear algebraic group acting on X.
(a) Let g ∈ G, let ⟨g⟩ ⊆ G be the Zariski closure of the cyclic group generated by g, let s ∈ ⟨g⟩ be a semisimple element, and let Z ⊆ X be an s-invariant closed subscheme such that (X ∖ Z)^s = ∅. Then g has no fixed points in the punctured tubular neighborhood of Z.
(b) Let g ∈ G be semisimple, and let Z ⊆ X be a g-invariant closed subscheme such that (X ∖ Z)^g = ∅. Then g has no fixed points in the punctured tubular neighborhood of Z.
Proof. (a) We have to show that N_Z(X)^g ∖ Z = ∅. By assumption, we have N_Z(X)^g ⊆ N_Z(X)^s. Therefore it suffices to show that N_Z(X)^s ∖ Z = N_Z(X)^s ∖ Z^s = ∅. Since s is semisimple, we conclude from Corollary 5.2 that N_Z(X)^s = N_{Z^s}(X^s). Since (X^s)_red = (Z^s)_red by assumption, we conclude that N_{Z^s}(X^s)_red = (Z^s)_red, implying the assertion.
(b) is a particular case of (a).
5.5. Example. An important particular case of Corollary 5.4(a) is when s = g_s is the semisimple part of g, that is, g = g_s g_u is the Jordan decomposition.
The following result gives a version of Corollary 4.11, whose assumptions are easier to check.
Corollary 5.6. Let G and g ∈ G be as in Corollary 5.4(b), and let x ∈ X g be an isolated fixed point of g. Then the induced map of normal cones g : N x (X) → N x (X) has no non-zero fixed points. Therefore for every morphism u : g * F → F with F ∈ D b ctf (X, Λ), we have an equality LT x (u) = Tr(u x ).
Proof. The first assertion follows from Corollary 5.4(b), while the second one follows from Corollary 4.11(b).
5.7. An application. Corollary 5.6 is used in the work of D. Hansen, T. Kaletha and J. Weinstein (see [HKW, Prop. 5.6.2]).
As a further application, we get a slight generalization of the Deligne-Lusztig trace formula.
5.8. Notation. To every proper endomorphism f : X → X and a morphism u : f^*F → F with F ∈ D^b_ctf(X, Λ), one associates an endomorphism RΓ_c(u) : RΓ_c(X, F) → RΓ_c(X, F) (compare [Va, 1.1.7]).
Moreover, for an f-invariant closed subscheme Z ⊆ X, we set U := X ∖ Z and form endomorphisms RΓ_c(u|_Z) : RΓ_c(Z, F|_Z) → RΓ_c(Z, F|_Z) and RΓ_c(u|_U) : RΓ_c(U, F|_U) → RΓ_c(U, F|_U) (compare 1.4(d)).
Theorem 5.9. Let G be a linear algebraic group acting on a separated scheme X, let g ∈ G be such that X has a g-equivariant compactification, and let s ∈ ⟨g⟩ be a semisimple element.
Then X^s ⊆ X is a closed g-invariant subscheme, and for every morphism u : g^*F → F with F ∈ D^b_ctf(X, Λ), we have an equality of traces Tr(RΓ_c(u)) = Tr(RΓ_c(u|_{X^s})) (see 5.8).
Proof. Using the equality Tr(RΓ_c(u)) = Tr(RΓ_c(u|_{X^s})) + Tr(RΓ_c(u|_{X∖X^s})), it remains to show that Tr(RΓ_c(u|_{X∖X^s})) = 0. Thus, replacing X by X ∖ X^s and u by u|_{X∖X^s}, we may assume that X^s = ∅, and we have to show that Tr(RΓ_c(u)) = 0. Choose a g-equivariant compactification X̄ of X, and set Z := (X̄ ∖ X)_red. Let j : X ↪ X̄ be the open inclusion, and set F̄ := j_!F ∈ D^b_ctf(X̄, Λ). Since X ⊆ X̄ is g-invariant, our morphism u extends to a morphism ū = j_!(u) : g^*F̄ → F̄, and we have an equality Tr(RΓ_c(u)) = Tr(RΓ_c(ū)) (compare [Va, 1.1.7]). Thus, since X̄ is proper, the Lefschetz-Verdier trace formula says that
Tr(RΓ_c(u)) = Tr(RΓ_c(ū)) = Σ_{β ∈ π₀(X̄^g)} LT_β(ū),
so it suffices to show that each local term LT_β(ū) vanishes. Since X^g ⊆ X^s = ∅, we have (X̄^g)_red = (Z^g)_red. Thus every β is a connected component of Z^g. In addition, g has no fixed points in the punctured tubular neighborhood of Z (by Corollary 5.4(a)). Therefore, by Theorem 4.10, we have an equality LT_β(ū) = LT_β(ū|_Z). However, the latter expression vanishes, because F̄|_Z = 0 and therefore ū|_Z = 0. This completes the proof.
Corollary 5.10. Let X be a scheme over k, let g : X → X be an automorphism of finite order, and let s be a power of g whose order is prime to the characteristic of k. Then for every morphism u : g^*F → F with F ∈ D^b_ctf(X, Λ), we have an equality of traces Tr(RΓ_c(u)) = Tr(RΓ_c(u|_{X^s})).
Proof. Notice that since g is an automorphism of finite order, X has a g-invariant open dense subscheme U. Using the additivity of traces Tr(RΓ_c(u)) = Tr(RΓ_c(u|_U)) + Tr(RΓ_c(u|_{X∖U})) and Noetherian induction on X, we can therefore assume that X is affine. Then X has a g-equivariant compactification, so the assertion follows from Theorem 5.9.
5.11. Example. Applying Corollary 5.10 in the case when F = Q_ℓ and u is the identity, we recover the identity Tr(g, RΓ_c(X, Q_ℓ)) = Tr(g, RΓ_c(X^s, Q_ℓ)), proven in [DL, Thm. 3.2].
6. Proof of Theorem 4.10
6.1. Deformation to the normal cone (see [Va, 1.4.1 and Lem 1.4.3]). Let R = k[t]_{(t)} be the localization of k[t] at (t), set D := Spec R, and let η and s be the generic and the special points of D, respectively.
(a) Let X be a scheme over k, and Z ⊆ X a closed subscheme. Recall ([Va, 1.4.1]) that to this data one associates a scheme X̃_Z over X_D := X × D, whose generic fiber (that is, fiber over η ∈ D) is X_η := X × η, and whose special fiber is the normal cone N_Z(X).
(b) We have a canonical closed embedding Z_D ↪ X̃_Z, whose generic fiber is the embedding Z_η ↪ X_η, and whose special fiber is Z ↪ N_Z(X).
(c) The assignment (X, Z) ↦ X̃_Z is functorial, that is, for every morphism f : (X′, Z′) → (X, Z) there exists a unique morphism X̃′_{Z′} → X̃_Z lifting f_D (see [Va, Lem 1.4.3]). In particular, f gives rise to the canonical morphism N_{Z′}(X′) → N_Z(X) from 4.1(b).
(d) Let c : C → X × X be a correspondence, and let Z ⊆ X be a closed subscheme. Then, by (c), one gets a correspondence c̃_Z : C̃_{c^{-1}(Z×Z)} → X̃_Z × X̃_Z over D, whose generic fiber is c_η, and whose special fiber is the correspondence N_Z(c) : N_{c^{-1}(Z×Z)}(C) → N_Z(X) × N_Z(X) from 4.3(a).
(e) By (b), we have a canonical closed embedding Fix(c|_Z)_D ↪ Fix(c̃_Z) over D, whose generic fiber is the embedding Fix(c|_Z)_η ↪ Fix(c)_η, and whose special fiber is Fix(c|_Z) ↪ Fix(N_Z(c)).
6.2. Specialization to the normal cone. Assume that we are in the situation of 6.1.
(a) As in [Va, 1.3.2], we have a canonical functor sp_{X̃_Z} : D^b_ctf(X, Λ) → D^b_ctf(N_Z(X), Λ). Moreover, for every F ∈ D^b_ctf(X, Λ), we have a canonical morphism sp_{c̃_Z} : Hom_c(F, F) → Hom_{N_Z(c)}(sp_{X̃_Z}(F), sp_{X̃_Z}(F)).
(b) As in [Va, 1.3.3(b)], we have a canonical specialization map
sp_{Fix(c̃_Z)} : H^0(Fix(c), K_{Fix(c)}) → H^0(Fix(N_Z(c)), K_{Fix(N_Z(c))}),
which is an isomorphism when Fix(c̃_Z) → D is a topologically constant family.
(c) Applying [Va, Prop 1.3.5] in this case, we conclude that for every F ∈ D^b_ctf(X, Λ), the following diagram is commutative:

(6.1)
Hom_c(F, F) --Tr_c--> H^0(Fix(c), K_{Fix(c)})
    | sp_{c̃_Z}                              | sp_{Fix(c̃_Z)}
Hom_{N_Z(c)}(sp_{X̃_Z}(F), sp_{X̃_Z}(F)) --Tr_{N_Z(c)}--> H^0(Fix(N_Z(c)), K_{Fix(N_Z(c))}).
Now we are ready to prove Theorem 4.10, mostly repeating the argument of [Va,Thm 2.1.3(b)].
6.3. Proof of Theorem 4.10(a).
Step 1. We may assume that Fix(c) red = Fix(c| Z ) red .
Proof. By Lemma 4.8, there exists an open subscheme W ⊆ C such that W ∩ Fix(c) red = Fix(c| Z ) red . Replacing c by c| W and u by u| W , we can assume that Fix(c) red = Fix(c| Z ) red .
Step 2. We may assume that F|_Z ≃ 0, and it suffices to show that in this case Tr_c(u) = 0.
Proof. Set U := X ∖ Z, and let i : Z ↪ X and j : U ↪ X be the embeddings. Since Z is c-invariant, one can associate to u two c-morphisms [i_Z]_!(u|_Z) ∈ Hom_c(i_!(F|_Z), i_!(F|_Z)) and [j_U]_!(u|_U) ∈ Hom_c(j_!(F|_U), j_!(F|_U)) (see [Va, 1.5.9]). Then, by the additivity of the trace map [Va, Prop. 1.5.10], we conclude that
Tr_c(u) = Tr_c([i_Z]_!(u|_Z)) + Tr_c([j_U]_!(u|_U)).
Moreover, using the assumption Fix(c|_Z)_red = Fix(c)_red and the commutativity of the trace map with closed embeddings [Va, Prop 1.2.5], we conclude that
Tr_c([i_Z]_!(u|_Z)) = Tr_{c|_Z}(u|_Z).
Thus it remains to show that Tr_c([j_U]_!(u|_U)) = 0. For this we can replace F by j_!(F|_U) and u by [j_U]_!(u|_U). In this case F|_Z ≃ 0, and it remains to show that Tr_c(u) = 0, as claimed.
Step 3. Specialization to the normal cone. By the commutative diagram (6.1), we have an equality Tr_{N_Z(c)}(sp_{c̃_Z}(u)) = sp_{Fix(c̃_Z)}(Tr_c(u)).
Thus, to show the vanishing of Tr_c(u), it suffices to show that (i) the map sp_{Fix(c̃_Z)} is an isomorphism, and (ii) we have Tr_{N_Z(c)}(sp_{c̃_Z}(u)) = 0.
Step 4. Proof of Step 3(i). By observation 6.2(b), it suffices to show that the closed embedding Fix(c|_Z)_{D,red} ↪ Fix(c̃_Z)_red (see 6.1(b)) is an isomorphism. Moreover, we can check the corresponding assertions for the generic and special fibers separately.
For the generic fibers, the assertion follows from our assumption Fix(c)_red = Fix(c|_Z)_red (see Step 1), while the assertion for the special fibers, Fix(c|_Z)_red = Fix(N_Z(c))_red, follows from our assumption that c has no fixed points in the punctured tubular neighborhood of Z.
Step 5. Proof of Step 3(ii). By a standard reduction, one can assume that Λ is finite. We are going to deduce the assertion from Proposition 3.6, applied to the correspondence N_Z(c) and the weakly G_m-equivariant sheaf sp_{X̃_Z}(F) ∈ D^b_ctf(N_Z(X), Λ).
Note that the zero section Z ⊆ N_Z(X) is closed (by 4.1(a)). Next, since Z is c-invariant, we have c^{-1}(Z × Z) = c_r^{-1}(Z). Therefore it follows from 4.1(c) that Z ⊆ N_Z(X) is N_Z(c)-invariant, and the correspondence N_Z(c)_t|_Z is identified with Z_{N_Z(c)} = c|_Z. Since c has no almost fixed points in the punctured tubular neighborhood of Z, we therefore conclude that N_Z(c) satisfies the assumptions of Proposition 3.6. So it remains to show that sp_{X̃_Z}(F)|_Z ≃ 0 and that sp_{X̃_Z}(F) is weakly G_m-equivariant with respect to the n-twisted action for some n.
Both assertions follow from results of Verdier [Ve]. Namely, the vanishing assertion follows from the isomorphism sp_{X̃_Z}(F)|_Z ≃ F|_Z (see [Ve, §8, (SP5)] or [Va, Prop. 1.4.2]) and our assumption F|_Z ≃ 0 (see Step 2). The equivariance assertion follows from the fact that sp_{X̃_Z}(F) is monodromic (see [Ve, §8, (SP1)]), because Λ is finite (use [Ve, Prop 5.1]).
6.4. Proof of Theorem 4.10(b). The first assertion follows from Lemma 4.8. To show the second one, choose an open subscheme W ⊆ C such that W ∩ Fix(c)_red = β_red. Replacing c by c|_W, we can assume that β_red = Fix(c)_red = Fix(c|_Z)_red; thus Fix(c|_Z) is proper over k.
As was already observed in Step 5 of 6.3, the correspondence N_Z(c)|_Z is identified with c|_Z. Thus Fix(N_Z(c)|_Z) = Fix(c|_Z) is proper over k. It now follows from Lemma 3.9 that the finiteness condition in Definition 4.4(b) is satisfied automatically; therefore c has no almost fixed points in the punctured tubular neighborhood of Z (see 4.5(c)). Now the equality LT_β(u) = LT_β(u|_Z) follows from the obvious equalities Tr_β(u) = Tr_{Fix(c|_Z)}(u), Tr_β(u|_Z) = Tr_{c|_Z}(u|_Z) and part (a).
7. Proof of Proposition 2.5
We are going to deduce the result from the assertion that trace maps commute with nearby cycles.
7.1. Set up. Let D be a spectrum of a discrete valuation ring over k with residue field k, and let f : X → D be a morphism of schemes of finite type.
(a) Let η, η̄ and s be the generic, the geometric generic and the special point of D, respectively. We denote by X_η, X_η̄ and X_s the generic, the geometric generic and the special fiber of X, respectively, and let i_η : X_η → X, i_η̄ : X_η̄ → X, i_s : X_s → X and π_η̄ : X_η̄ → X_η be the canonical morphisms.
(b) For every F ∈ D(X, Λ), we set F_η := i_η^*(F), F_η̄ := i_η̄^*(F) and F_s := i_s^*(F). For every F_η ∈ D(X_η, Λ), we set F_η̄ := π_η̄^*(F_η).
(c) Let Ψ = Ψ_X : D^b_ctf(X_η, Λ) → D^b_ctf(X_s, Λ) be the nearby cycle functor. It is defined by the formula Ψ_X(F_η) := i_s^* i_{η̄*}(F_η̄).
(d) Consider the functor Ψ̃_X := i_s^* ∘ i_{η̄*} : D(X_η̄, Λ) → D(X_s, Λ). Then we have Ψ_X(F_η) = Ψ̃_X(F_η̄) for all F_η ∈ D^b_ctf(X_η, Λ).
7.2. ULA sheaves. Assume that we are in the situation of 7.1.
(a) We have a canonical isomorphism Ψ_X ∘ i_η^* ≃ i_s^* ∘ i_{η̄*} ∘ i_η̄^* of functors D^b_ctf(X, Λ) → D^b_ctf(X_s, Λ). In particular, the unit map Id → i_{η̄*} ∘ i_η̄^* induces a morphism of functors i_s^* → Ψ_X ∘ i_η^*.
(b) If F ∈ D^b_ctf(X, Λ) is ULA over D, the induced morphism F_s → Ψ_X(F_η) from (a) is an isomorphism.
7.3. Construction. Assume that we are in the situation of 7.1.
(a) For every F_η̄ ∈ D(X_η̄, Λ), consider the composition [display omitted].
(d) Using the observation K_{X_η̄} ≃ π_η̄^*(K_{X_η}), we denote by Sp_X the composition [display omitted].
Lemma 7.4. Assume that f : X → D is a topologically constant family (see 2.3). Then the specialization map Sp̃_X : RΓ(X_η̄, K_{X_η̄}) → RΓ(X_s, K_{X_s}) of 7.3(c) coincides with the canonical identification of Claim 2.4.
Proof. Though the assertion follows by a straightforward unwinding of the definitions, we sketch the argument for the convenience of the reader. As in the proof of Claim 2.4, we set K_{X/D} := f^!(Λ_D) and F := f_*(K_{X/D}). Consider diagram (7.1) [display omitted]. We claim that diagram (7.1) is commutative. Since the top left, top right and bottom left inner squares are commutative by functoriality, it remains to show the commutativity of the bottom right inner square. In other words, it suffices to show the commutativity of the corresponding diagram [display omitted]. By the commutativity of (7.1), it remains to show that the top arrow [display omitted] ≃ RΓ(s, F_s) = F_s of (7.1) equals the inverse of the specialization map [display omitted]. But this follows from the commutativity of a further diagram [display omitted].
7.5. Specialization of cohomological correspondences. Let c : C → X × X be a correspondence over D, let c_η : C_η → X_η × X_η, c_η̄ : C_η̄ → X_η̄ × X_η̄ and c_s : C_s → X_s × X_s be the generic, the geometric generic and the special fibers of c, respectively. Fix F_η ∈ D^b_ctf(X_η, Λ).
(a) Using the fact that the projection π_η̄ : η̄ → η is pro-étale, we have a commutative diagram [display omitted].
Proposition 7.6. In the situation of 7.5, the following diagram is commutative [display omitted].
Proof. The assertion is a small modification of [Va, Prop 1.3.5], and the argument of [Va, Prop 1.3.5] proves our assertion as well. Alternatively, the assertion can be deduced from the general criterion of [Va, Section 4]. Namely, repeating the argument of [Va, 4.1.4(b)] word-by-word, one shows that the nearby cycle functors Ψ together with the base change morphisms define a compactifiable cohomological morphism in the sense of [Va, 4.1.3]. Therefore the assertion follows from (a small modification of) [Va, Cor 4.3.2].
Lemma 7.7. Let c : C → X × X be a correspondence over D. Then for every F ∈ D^b_ctf(X, Λ) and u ∈ Hom_c(F, F), the following diagram is commutative [display omitted].
Proof. The assertion is a rather straightforward diagram chase. Indeed, it suffices to show the commutativity of diagram (7.2) [display omitted]. We claim that all inner squares of (7.2) are commutative. Namely, the middle inner square is commutative by functoriality, while the commutativity of the left and right inner squares follows from the formula Ψ = i_s^* ∘ i_{η̄*} and the definitions of the base change morphisms.
Now we are ready to show Proposition 2.5.
7.8. Proof of Proposition 2.5. Without loss of generality we can assume that s is a specialization of t of codimension one. Then there exists a spectrum of a discrete valuation ring D and a morphism f : D → S whose image contains s and t.
Taking base change with respect to f, we can assume that S = D and t = η̄ is the geometric generic point, while s is the special point. Then we have equalities
Tr_{c_s}(u_s) = Tr_{c_s}(Ψ_c(u_η)) = Sp_{Fix(c)}(Tr_{c_η}(u_η)) = Sp̃_{Fix(c)}(π_η̄^*(Tr_{c_η}(u_η))) = Sp̃_{Fix(c)}(Tr_{c_η̄}(u_η̄)),
where
• the first equality follows from the fact that the isomorphism F_s → Ψ_X(F_η) from 7.2(b) identifies u_s with Ψ_c(u_η) (by Lemma 7.7);
• the second equality follows from the commutative diagram of Proposition 7.6;
• the third equality follows from the definition of Sp_X in 7.3(d);
• the last equality follows from the commutative diagram of 7.5(a).
Now the assertion follows from Lemma 7.4.
[SGA4] M. Artin, A. Grothendieck, J.-L. Verdier et al., Théorie des topos et cohomologie étale des schémas, Lecture Notes in Mathematics 269, 270, 305, Springer-Verlag, 1972-1973.
P. Deligne et al., Cohomologie étale, Lecture Notes in Mathematics 569, Springer-Verlag, 1977.
P. Deligne, La conjecture de Weil. II, Inst. Hautes Études Sci. Publ. Math. 52 (1980), 137-252.
[DL] P. Deligne and G. Lusztig, Representations of reductive groups over finite fields, Ann. of Math. 103 (1976), 103-161.
K. Fujiwara, Rigid geometry, Lefschetz-Verdier trace formula and Deligne's conjecture, Invent. Math. 127 (1997), no. 3, 489-533.
[HKW] D. Hansen, T. Kaletha and J. Weinstein, On the Kottwitz conjecture for moduli spaces of local shtukas, preprint, arXiv:1709.06651.
L. Illusie, Formule de Lefschetz, in Cohomologie ℓ-adique et fonctions L, SGA5, Lecture Notes in Mathematics 589, Springer-Verlag, 1977, pp. 73-137.
R. Pink, On the calculation of local terms in the Lefschetz-Verdier trace formula and its application to a conjecture of Deligne, Ann. of Math. 135 (1992), no. 3, 483-525.
[Va] Y. Varshavsky, Lefschetz-Verdier trace formula and a generalization of a theorem of Fujiwara, Geom. Funct. Anal. 17 (2007), 271-319.
[Ve] J.-L. Verdier, Spécialisation de faisceaux et monodromie modérée, in Analysis and topology on singular spaces, 332-364, Astérisque 101-102, 1983.
| [] |
[
"Nuclear shape fluctuations in high-energy heavy ion collisions",
"Nuclear shape fluctuations in high-energy heavy ion collisions"
] | [
"Aman Dimri \nDepartment of Chemistry\nStony Brook University\n11794Stony BrookNYUSA\n",
"Somadutta Bhatta \nDepartment of Chemistry\nStony Brook University\n11794Stony BrookNYUSA\n",
"Jiangyong Jia \nDepartment of Chemistry\nStony Brook University\n11794Stony BrookNYUSA\n\nPhysics Department\nBrookhaven National Laboratory\nUpton11976NYUSA\n"
] | [
"Department of Chemistry\nStony Brook University\n11794Stony BrookNYUSA",
"Department of Chemistry\nStony Brook University\n11794Stony BrookNYUSA",
"Department of Chemistry\nStony Brook University\n11794Stony BrookNYUSA",
"Physics Department\nBrookhaven National Laboratory\nUpton11976NYUSA"
] | [] | Atomic nuclei often exhibit a quadrupole shape that fluctuates around some average profile. We investigate the impact of nuclear shape fluctuation on the initial state geometry in heavy ion collisions, particularly its eccentricity ε2 and inverse size d⊥, which can be related to the elliptic flow and radial flow in the final state. The fluctuation in overall quadrupole deformation enhances the variances and modifies the skewness and kurtosis of the ε2 and d⊥ in a controllable manner. The fluctuation in triaxiality reduces the difference between prolate and oblate shape for any observable, whose values, in the large fluctuation limit, approach those obtained in collisions of rigid triaxial nuclei. The method to disentangle the mean and variance of the quadrupole deformation is discussed. PACS numbers: 25.75.Gz, 25.75.Ld, 25.75.-1I. INTRODUCTIONUltra-relativistic heavy ion physics aims to understand the dynamics and properties of the Quark-Gluon Plasma (QGP) created in collisions of atomic nuclei at very high energy[1]. Achieving this goal is currently limited by the lack of understanding of the initial condition, i.e. how the energy is deposited in the overlap region before the formation of QGP [2]. The energy deposition process is not calculable from first principles and is often parameterized via phenomenological approaches with multiple free parameters[3]. On the other hand, heavy atomic nuclei are well-studied objects exhibiting a wide range of shapes and radial profiles[4], which are often characterized by a few collective nuclear structure parameters such as quadrupole, triaxial, and octupole deformations, nuclear radius and skin thickness. One can leverage species with similar mass numbers but different structures, such as isobars, to directly probe the energy deposition mechanism and hence constrain the initial condition. The efficacy of this approach has been investigated recently[5][6][7].One good example demonstrating this possibility is the 96 Ru+ 96 Ru and 96 Zr+ 96 Zr collisions, recently carried out by the STAR Collaboration at the relativistic heavy ion collider[8,9]. Ratios of many bulk observables between the isobars, such as harmonic flow v n , charged particle multiplicity N ch , and average transverse momentum ⟨p T ⟩, have been measured, which show significant and observable-and centrality-dependent deviation from unity. Model studies show that these ratios are insensitive to final-state effects and are controlled mainly by the differences of the collective nuclear structure parameters between 96 Ru and 96 Zr [10]. Comparing the calculations with experimental data, Refs.[5,11]have estimated structure parameters that are broadly consistent with general knowledge from low energy. However, these studies also suggest a sizable octupole collectivity for Zr, not predicted by mean field structure models[12]. The rich and versatile information from isobar or isobar-like collisions provides a new constraint on the heavy ion initial condition and a new way to probe nuclear structure at high energy[13].However, it is important to point out that atomic nuclei in the ground state often do not have a static shape, but can fluctuate due to interplay between collective modes and single-particle states[14]. The potential energy surface of such species usually has shallow minimums as a function of deformation parameters, such as quadruple deformation β and triaxiality γ. 
The ground state nuclear wave function is often treated as a mixture of configurations with different (β, γ) values[4,15]. Then there are the phenomena of shape coexistence, which happens when the same nuclei can have multiple low-lying states with widely different shapes but small energy differences[16]. From the nuclear structure side, the quadrupole fluctuations can be estimated from the sum rules of matrix elements of various moments of quadrupole operators that can be measured experimentally[17,18]. From the heavy ion collision side, the shape fluctuations can be accessed using multi-particle correlations, which probe moments of the nucleon position in the initial condition[19]. For instance, the elliptic flow v 2 in each event is proportional to the elliptic eccentricity ε 2 , v 2 = kε 2 calculable from participating nucleons[20]. Therefore, the fluctuations of flow are related to fluctuations of quadruple deformation via their respective moments: ⟨v m 2 ⟩ = k m ⟨ε m 2 ⟩ ∝ ⟨β m ⟩ , m = 2, 4... In principle, one could * Electronic address: [email protected] | 10.1140/epja/s10050-023-00965-1 | [
"https://export.arxiv.org/pdf/2301.03556v1.pdf"
] | 255,546,211 | 2301.03556 | 5aedf82e64819f44b7726042f1cdf5bd77be0a92 |
Nuclear shape fluctuations in high-energy heavy ion collisions
Aman Dimri
Department of Chemistry
Stony Brook University
11794Stony BrookNYUSA
Somadutta Bhatta
Department of Chemistry
Stony Brook University
11794Stony BrookNYUSA
Jiangyong Jia
Department of Chemistry
Stony Brook University
11794Stony BrookNYUSA
Physics Department
Brookhaven National Laboratory
Upton11976NYUSA
Nuclear shape fluctuations in high-energy heavy ion collisions
(Dated: January 10, 2023)
PACS numbers: 25.75.Gz, 25.75.Ld, 25.75.-1
Atomic nuclei often exhibit a quadrupole shape that fluctuates around some average profile. We investigate the impact of nuclear shape fluctuation on the initial state geometry in heavy ion collisions, particularly its eccentricity ε₂ and inverse size d⊥, which can be related to the elliptic flow and radial flow in the final state. The fluctuation in overall quadrupole deformation enhances the variances and modifies the skewness and kurtosis of the ε₂ and d⊥ in a controllable manner. The fluctuation in triaxiality reduces the difference between prolate and oblate shapes for any observable, whose values, in the large-fluctuation limit, approach those obtained in collisions of rigid triaxial nuclei. The method to disentangle the mean and variance of the quadrupole deformation is discussed.

PACS numbers: 25.75.Gz, 25.75.Ld, 25.75.-1

I. INTRODUCTION

Ultra-relativistic heavy ion physics aims to understand the dynamics and properties of the Quark-Gluon Plasma (QGP) created in collisions of atomic nuclei at very high energy [1]. Achieving this goal is currently limited by the lack of understanding of the initial condition, i.e. how the energy is deposited in the overlap region before the formation of the QGP [2]. The energy deposition process is not calculable from first principles and is often parameterized via phenomenological approaches with multiple free parameters [3]. On the other hand, heavy atomic nuclei are well-studied objects exhibiting a wide range of shapes and radial profiles [4], which are often characterized by a few collective nuclear structure parameters such as quadrupole, triaxial, and octupole deformations, nuclear radius and skin thickness. One can leverage species with similar mass numbers but different structures, such as isobars, to directly probe the energy deposition mechanism and hence constrain the initial condition. The efficacy of this approach has been investigated recently [5-7].

One good example demonstrating this possibility is the ⁹⁶Ru+⁹⁶Ru and ⁹⁶Zr+⁹⁶Zr collisions, recently carried out by the STAR Collaboration at the Relativistic Heavy Ion Collider [8, 9]. Ratios of many bulk observables between the isobars, such as harmonic flow v_n, charged particle multiplicity N_ch, and average transverse momentum ⟨p_T⟩, have been measured, which show significant and observable- and centrality-dependent deviations from unity. Model studies show that these ratios are insensitive to final-state effects and are controlled mainly by the differences of the collective nuclear structure parameters between ⁹⁶Ru and ⁹⁶Zr [10]. Comparing the calculations with experimental data, Refs. [5, 11] have estimated structure parameters that are broadly consistent with general knowledge from low energy. However, these studies also suggest a sizable octupole collectivity for Zr, not predicted by mean-field structure models [12]. The rich and versatile information from isobar or isobar-like collisions provides a new constraint on the heavy ion initial condition and a new way to probe nuclear structure at high energy [13].

However, it is important to point out that atomic nuclei in the ground state often do not have a static shape, but can fluctuate due to the interplay between collective modes and single-particle states [14]. The potential energy surface of such species usually has shallow minima as a function of deformation parameters, such as the quadrupole deformation β and triaxiality γ.
The ground state nuclear wave function is often treated as a mixture of configurations with different (β, γ) values [4, 15]. Then there is the phenomenon of shape coexistence, which occurs when the same nucleus can have multiple low-lying states with widely different shapes but small energy differences [16]. From the nuclear structure side, the quadrupole fluctuations can be estimated from the sum rules of matrix elements of various moments of quadrupole operators that can be measured experimentally [17, 18]. From the heavy ion collision side, the shape fluctuations can be accessed using multi-particle correlations, which probe moments of the nucleon positions in the initial condition [19]. For instance, the elliptic flow v₂ in each event is proportional to the elliptic eccentricity ε₂, v₂ = kε₂, calculable from the participating nucleons [20]. Therefore, the fluctuations of flow are related to fluctuations of the quadrupole deformation via their respective moments: ⟨v₂^m⟩ = k^m⟨ε₂^m⟩ ∝ ⟨β^m⟩, m = 2, 4, ... In principle, one could

* Electronic address: [email protected]
constrain the mean and variance of the quadrupole fluctuations from ⟨β²⟩ and ⟨β⁴⟩, which in turn can be determined from ⟨v₂²⟩ and ⟨v₂⁴⟩. This paper extends our previous study to investigate the influence of fluctuations of the quadrupole deformation parameters (β, γ) on several selected two-, three- and four-particle heavy-ion observables. We first derive simple analytical relations between these observables and the means and variances of (β, γ). We then perform a more realistic Glauber model simulation, assuming Gaussian fluctuations, to quantify the region of validity of these relations. We discuss the sensitivity of these observables to the nuclear shape, as well as the prospect of separating the average shape from shape fluctuations.
II. EXPECTATION AND MODEL SETUP
We consider the eccentricity vector ε⃗₂ and the inverse transverse size d⊥, which are estimators for the elliptic flow V₂ ≡ v₂e^{2iΨ₂} and the average transverse momentum ⟨p_T⟩ or radial flow, calculated from the transverse positions of the nucleon participants in each event:

ε⃗₂ = −⟨r⊥² e^{i2φ}⟩ / ⟨r⊥²⟩,    d⊥ = √(N_part / ⟨r⊥²⟩),    (1)

where r⊥ is the transverse radius and N_part is the number of participating nucleons.
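For illustration, Eq. (1) is straightforward to evaluate numerically; below is a minimal NumPy sketch (our own, with hypothetical function and variable names; the recentering of the participant coordinates is an assumption of the sketch, not specified above):

    import numpy as np

    def eccentricity_and_size(x, y):
        """Eq. (1): complex eccentricity vector and inverse transverse size d_perp.

        x, y : arrays of transverse participant coordinates (fm) in one event.
        """
        x = x - x.mean()                    # recenter on the participant plane
        y = y - y.mean()
        r2 = x**2 + y**2                    # r_perp^2 per participant
        phi = np.arctan2(y, x)
        eps2 = -np.mean(r2 * np.exp(2j * phi)) / np.mean(r2)
        d_perp = np.sqrt(len(x) / np.mean(r2))   # sqrt(N_part / <r_perp^2>)
        return eps2, d_perp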
Following the heuristic argument from Ref. [21], for collisions of nuclei with small quadrupole deformation, the eccentricity vector and d⊥ in a given event have the following leading-order form:

δd⊥/d⊥ ≈ δ_d + p₀(Ω_p, γ_p)β_p + p₀(Ω_t, γ_t)β_t,
ε⃗₂ ≈ ε⃗₀ + p₂(Ω_p, γ_p)β_p + p₂(Ω_t, γ_t)β_t,    (2)

where the scalar δ_d and the vector ε⃗₀ are the values for spherical nuclei, and we consider the general situation where the projectile and target, denoted by subscripts "p" and "t", have different deformation values. The p₀ and p₂ are phase-space factors, which depend on γ and the Euler angles Ω.
Since the fluctuations of δ_d (ε⃗₀) are uncorrelated with p₀ (p₂), an average over collisions with different Euler angles is expected to give the following leading-order expressions for the variances, skewness, and kurtosis of the fluctuations:

c_{2,ε}{2} ≡ ⟨ε₂²⟩ = ⟨ε₀²⟩ + ⟨p₂(γ_p)p₂*(γ_p)⟩ β_p² + ⟨p₂(γ_t)p₂*(γ_t)⟩ β_t²
c_d{2} ≡ ⟨(δd⊥/d⊥)²⟩ = ⟨δ_d²⟩ + ⟨p₀(γ_p)²⟩ β_p² + ⟨p₀(γ_t)²⟩ β_t²
Cov ≡ ⟨ε₂² δd⊥/d⊥⟩ = ⟨ε₀² δ_d⟩ + ⟨p₀(γ_p)p₂(γ_p)p₂*(γ_p)⟩ β_p³ + ⟨p₀(γ_t)p₂(γ_t)p₂*(γ_t)⟩ β_t³
c_d{3} ≡ ⟨(δd⊥/d⊥)³⟩ = ⟨δ_d³⟩ + ⟨p₀(γ_p)³⟩ β_p³ + ⟨p₀(γ_t)³⟩ β_t³
c_{2,ε}{4} ≡ ⟨ε₂⁴⟩ − 2⟨ε₂²⟩² = ⟨ε₀⁴⟩ − 2⟨ε₀²⟩² + [⟨p₂²p₂*²⟩⟨β⁴⟩ − 2⟨p₂p₂*⟩²⟨β²⟩²]_p + [⟨p₂²p₂*²⟩⟨β⁴⟩ − 2⟨p₂p₂*⟩²⟨β²⟩²]_t.    (3)
These quantities relate directly to the final-state observables ⟨v₂²⟩, ⟨(δp_T/⟨p_T⟩)²⟩, ⟨v₂² δp_T/⟨p_T⟩⟩, ⟨(δp_T/⟨p_T⟩)³⟩ and ⟨v₂⁴⟩ − 2⟨v₂²⟩², respectively.
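Correspondingly, the initial-state cumulants above can be estimated from per-event samples; a short sketch (our own naming conventions; eps2 is the complex ε⃗₂ per event and dd is δd⊥/d⊥, i.e. (d⊥ − ⟨d⊥⟩)/⟨d⊥⟩, per event):

    import numpy as np

    def shape_cumulants(eps2, dd):
        """Estimate the cumulants of Eq. (3) from per-event samples."""
        e2sq = np.abs(eps2)**2
        c2_2 = e2sq.mean()                        # <eps2^2>
        cd_2 = (dd**2).mean()                     # <(dd)^2>
        cov  = (e2sq * dd).mean()                 # <eps2^2 dd>
        cd_3 = (dd**3).mean()                     # <(dd)^3>
        c2_4 = (e2sq**2).mean() - 2 * c2_2**2     # <eps2^4> - 2<eps2^2>^2
        return c2_2, cd_2, cov, cd_3, c2_4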
Previous studies have demonstrated that the moments ⟨p₀²⟩, ⟨p₂p₂*⟩, and ⟨p₂²p₂*²⟩ are independent of γ, while ⟨p₀p₂p₂*⟩ and ⟨p₀³⟩ have a leading-order dependence on γ of the form c + b cos(3γ). Here, c ≪ b for ⟨p₀p₂p₂*⟩, whereas c ≲ b for ⟨p₀³⟩ [21]. In the presence of quadrupole fluctuations, we also need to average these quantities over "independent" fluctuations for projectile and target, giving
⟨(δd⊥/d⊥)²⟩ = a₀ + (b₀/2)(⟨β_p²⟩ + ⟨β_t²⟩) = a₀ + b₀⟨β²⟩
⟨ε₂²⟩ = a₁ + (b₁/2)(⟨β_p²⟩ + ⟨β_t²⟩) = a₁ + b₁⟨β²⟩
⟨ε₂² δd⊥/d⊥⟩ = a₂ − (1/2)[⟨(c₂ + b₂cos(3γ_p))β_p³⟩ + ⟨(c₂ + b₂cos(3γ_t))β_t³⟩] = a₂ − ⟨(c₂ + b₂cos(3γ))β³⟩
⟨(δd⊥/d⊥)³⟩ = a₃ + (1/2)[⟨(c₃ + b₃cos(3γ_p))β_p³⟩ + ⟨(c₃ + b₃cos(3γ_t))β_t³⟩] = a₃ + ⟨(c₃ + b₃cos(3γ))β³⟩
⟨ε₂⁴⟩ − 2⟨ε₂²⟩² = a₄ + (b₄/2)(⟨β_p⁴⟩ + ⟨β_t⁴⟩) − (c₄/2)(⟨β_p²⟩² + ⟨β_t²⟩²) = a₄ + b₄⟨β⁴⟩ − c₄⟨β²⟩²,    (4)

where the averages are performed over fluctuations in β and γ, and the coefficients a_n, b_n and c_n are centrality-dependent positive quantities satisfying c₂ ≪ b₂ and c₃ ≲ b₃ [21]. The second part of each equation is obtained by assuming that the fluctuations of the projectile and target are sampled from the same probability density distributions. For a more quantitative estimate, we consider the liquid-drop model, where the nucleon density distribution has a sharp surface. For head-on collisions with zero impact parameter, it predicts the following simple relations [21]:
δd⊥/d⊥ = √(5/16π) (β/2) [cos γ D²₀,₀ + (sin γ/√2)(D²₀,₂ + D²₀,₋₂)],
ε⃗₂ = −√(15/2π) (β/2) [cos γ D²₂,₀ + (sin γ/√2)(D²₂,₂ + D²₂,₋₂)],    (5)

where the D^l_{m,m′}(Ω) are the Wigner matrices. The analytical results obtained for the various cumulants are listed in Table I. They provide approximate estimates for the values of b_n in the most central collisions. To make further progress, we consider the case where the fluctuations of β and γ are independent of each other. The observables in Eq. (4) and Table I can then be expressed in terms of central moments.
TABLE I. Analytical liquid-drop results for the various cumulants, from Eq. (5):

⟨(δd⊥/d⊥)²⟩ = (1/32π)⟨β²⟩
⟨(δd⊥/d⊥)³⟩ = (√5/896π^{3/2})⟨cos(3γ)β³⟩
⟨(δd⊥/d⊥)⁴⟩ − 3⟨(δd⊥/d⊥)²⟩² = −(3/14336π²)(7⟨β²⟩² − 5⟨β⁴⟩)
⟨ε₂²⟩ = (3/4π)⟨β²⟩
⟨ε₂⁴⟩ − 2⟨ε₂²⟩² = −(9/112π²)(7⟨β²⟩² − 5⟨β⁴⟩)
⟨ε₂⁶⟩ − 9⟨ε₂⁴⟩⟨ε₂²⟩ + 12⟨ε₂²⟩³ = (81/256π³)[⟨β²⟩³ − (45/14)⟨β⁴⟩⟨β²⟩ − (1175/6006)⟨β⁶⟩ + (25/3003)⟨cos(6γ)β⁶⟩]
⟨ε₂²(δd⊥/d⊥)⟩ = −(3√5/112π^{3/2})⟨cos(3γ)β³⟩
⟨ε₂²(δd⊥/d⊥)²⟩ − ⟨ε₂²⟩⟨(δd⊥/d⊥)²⟩ = −(3/1792π²)(7⟨β²⟩² − 5⟨β⁴⟩)
⟨ε⃗₂² ε⃗₄*⟩ = (45/56π²)⟨β⁴⟩

Assuming Gaussian fluctuations with means β̄ and γ̄ and widths σ_β and σ_γ, Eq. (4) becomes

⟨ε₂²⟩ = a₁ + b₁(β̄² + σ_β²),   ⟨(δd⊥/d⊥)²⟩ = a₀ + b₀(β̄² + σ_β²)
⟨ε₂² δd⊥/d⊥⟩ = a₂ − (b₂ e^{−9σ_γ²/2} cos(3γ̄) + c₂) β̄(β̄² + 3σ_β²)
⟨(δd⊥/d⊥)³⟩ = a₃ + (b₃ e^{−9σ_γ²/2} cos(3γ̄) + c₃) β̄(β̄² + 3σ_β²)
⟨ε₂⁴⟩ − 2⟨ε₂²⟩² = a₄ + b₄(β̄⁴ + 6σ_β²β̄² + 3σ_β⁴) − c₄(β̄² + σ_β²)²,    (6)
where we have used the well-known expression for the Gaussian smearing of an exponential function, ⟨e^{inγ}⟩ = e^{−n²σ_γ²/2} e^{inγ̄}.
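Both this smearing identity and the Gaussian β moments entering Eq. (6) are easy to verify by direct sampling; a quick Monte-Carlo check (our own illustration, with arbitrary parameter values):

    import numpy as np

    rng = np.random.default_rng(seed=7)
    beta_bar, sig_b = 0.28, 0.07
    gam_bar, sig_g, n = np.pi / 9, np.pi / 12, 3

    beta = rng.normal(beta_bar, sig_b, 2_000_000)
    gam = rng.normal(gam_bar, sig_g, 2_000_000)

    # <beta^3> = beta_bar (beta_bar^2 + 3 sig^2); <beta^4> = beta_bar^4 + 6 sig^2 beta_bar^2 + 3 sig^4
    print((beta**3).mean(), beta_bar * (beta_bar**2 + 3 * sig_b**2))
    print((beta**4).mean(), beta_bar**4 + 6 * sig_b**2 * beta_bar**2 + 3 * sig_b**4)

    # <e^{i n gamma}> = exp(-n^2 sig^2 / 2) exp(i n gamma_bar)
    print(np.exp(1j * n * gam).mean(),
          np.exp(-n**2 * sig_g**2 / 2) * np.exp(1j * n * gam_bar))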
If the fluctuations of β and γ are non-Gaussian, one should also consider the higher cumulants of β. For example, ⟨β³⟩ = β̄(β̄² + 3σ_β²) + k_{3,β} and ⟨β⁴⟩ = β̄⁴ + 6σ_β²β̄² + 3σ_β⁴ + 4β̄k_{3,β} + k_{4,β}, where k_{3,β} = ⟨(β−β̄)³⟩ and k_{4,β} = ⟨(β−β̄)⁴⟩ − 3⟨(β−β̄)²⟩² are the skewness and kurtosis of the β fluctuation. The expectation value of cos(nγ) can be expressed via the cumulant-generating function of γ. Keeping the cumulants κ_{m,γ} up to the leading-order corrections in skewness and kurtosis, k_{3,γ} = ⟨(γ−γ̄)³⟩ and k_{4,γ} = ⟨(γ−γ̄)⁴⟩ − 3⟨(γ−γ̄)²⟩², we have

⟨cos(nγ)⟩ = ½(⟨e^{inγ}⟩ + ⟨e^{−inγ}⟩) = ½[exp(Σ_{m=1}^∞ κ_{m,γ}(in)^m/m!) + exp(Σ_{m=1}^∞ κ_{m,γ}(−in)^m/m!)]
= exp(Σ_{m=1}^∞ κ_{2m,γ}(−1)^m n^{2m}/(2m)!) cos(Σ_{m=1}^∞ κ_{2m+1,γ}(−1)^m n^{2m+1}/(2m+1)! + nγ̄)
≈ e^{−n²σ_γ²/2 + n⁴k_{4,γ}/24} cos(nγ̄ − (n³/6)k_{3,γ})
≈ e^{−n²σ_γ²/2} [cos(nγ̄) + sin(nγ̄)(n³/6)k_{3,γ}](1 + (n⁴/24)k_{4,γ}).    (7)
Clearly, the net effect of skewness is a rotation of γ̄ by k_{3,γ}n²/6, and the net effect of kurtosis is to increase or decrease the overall variation with γ, depending on its sign.
For a more realistic estimate of the influence of shape fluctuations, we perform a Monte-Carlo Glauber model simulation of ²³⁸U+²³⁸U collisions. The setup of the model and the data used in this analysis are the same as those used in our previous work [19]. We simulate ultra-central collisions with zero impact parameter, where the impact of nuclear deformation reaches its maximum. The nucleon distribution is described by a deformed Woods-Saxon function
ρ(r, θ, φ) = ρ 0 1 + e [r−R(θ,φ) a] , R(θ, φ) = R 0 (1 + β[cos γY 2,0 (θ, φ) + sin γY 2,2 (θ, φ)]) ,(8)
where the nuclear surface R(θ, φ) is expanded into spherical harmonics Y 2,m in the intrinsic frame. Each nucleus is assigned a random (β, γ) value, sampled from Gaussian distributions with means (β,γ) and variances (σ β , σ γ ). The nucleus is then rotated by random Euler angles before they are set on a straight line trajectory towards each other along the z direction. Furthermore, three quark constituents are generated for each nucleon according to the quark Glauber model from Ref. [22]. From this, the nucleons or the constituent quarks in the overlap region are identified, which are used to calculate the ε 2 and d ⊥ defined in Eqs. (1), and the results are presented as a function of deformation parameters.
For the study of the β fluctuation, we fix the γ = 0 (prolate nucleus) and choose 11 values each forβ 2 and σ 2 β with 0, 0.01,...,0.09, 0.1. So a total of 11 × 11 = 121 simulations have been performed. For the study of the γ fluctuation, we fix β = 0.28 and choose sevenγ and seven σ γ values: cos(3γ) = 1, 0.87, 0.5, 0, −0.5, 0.87, −1 and σ γ = 0, π 18, 2π 18, ..., 6π 18, so a total of 7 × 7 = 49 simulation have been performed. For each case, about 50 Million events were generated and all the observables were calculated. Our discussion is mainly based on the nucleon Glauber model, and the results from the quark Glauber model are included in the Appendix.
III. IMPACT OF TRIAXIALITY FLUCTUATION
Due to the three-fold symmetry of nuclei shape in triaxiality, the γ dependence of a given observable can be
generally expressed as a 0 + ∑ ∞ n=1 [a n cos(3nγ) + b n sin(3nγ)] e − n 2 σ 2 γ 2
. If we further impose the condition that a random fluctuation for a triaxial nucleus does not impact the value of the observable, which is found to be true in our analysis.
This leads to the γ dependence of the form a 0 + ∑ ∞ n=1 a n (cos(3nγ) − cos(3n π
6 )) + b n (sin(3nγ) − sin(3n π 6 )) e − n 2 σ 2 γ 2 .
We first discuss the impact of triaxiality fluctuation on three-particle observables ⟨ε 2 2 δd⊥ d⊥ ⟩ and ⟨(δd ⊥ d ⊥ ) 3 ⟩. We first subtract them by the values for the undeformed case, to isolate the second term in Eq. 4 containing the triaxiality. Figure 1 show the results obtained for different values of cos 3γ as a function of σ γ . The values for triaxial nucleus cos(3γ) = 0 are indeed independent of σ γ . The fluctuation of γ reduces the difference between the prolateγ = 0 and the oblateγ = π 3 shape. This reduction is largely described by e − 9σ 2 γ 2 cos(3γ), except for a small asymmetry between γ = 0 andγ = π 3, clearly visible for ⟨(δd ⊥ d ⊥ ) 3 ⟩.
We account for this small asymmetry by including higher-order terms in the fit function allowed by symmetry. Keeping leading and subleading terms, we have,
⟨ε 2 2 δd ⊥ d ⊥ ⟩ − ⟨ε 2 2 δd ⊥ d ⊥ ⟩ β=0 = a ′ 0 + (a ′ 1 cos(3γ) + b ′ 1 [sin(3γ) − 1])e − 9σ 2 γ 2 + (a ′ 2 [cos(6γ) + 1] + b ′ 2 sin(6γ))e − 36σ 2 γ 2 β 3 (9) = a 0 + (a 1 cos(3γ) + b 1 [sin(3γ) − 1])e − 9σ 2 γ 2 + (a 2 [cos(6γ) + 1] + b 2 sin(6γ))e − 36σ 2 γ 2 .(10)
The same fit function is also used to describe ⟨(δd ⊥ d ⊥ ) 3 ⟩. The parameters in the first line and those in the second line differ by a scale factorβ 3 = 0.28 3 = 0.021. From the values of parameters displayed in Fig. 1, we concluded that the magnitude of the high-order order terms is less than 2% of the magnitude of a 1 for ⟨ε 2 2 δd⊥ d⊥ ⟩ but reaches up to 5% for ⟨(δd ⊥ d ⊥ )
3 ⟩. Figure 1 shows that the signature of triaxiality in heavy ion collisions is greatly reduced for large value of σ γ , often found in γ-soft nuclei. A twenty-degree fluctuation in triaxiality, for example, reduces the signal by nearly 50%. It would be difficult to distinguish between static rigid triaxial nuclei and nuclei with large fluctuations aroundγ = π 6 using heavy ion collisions. In particular, nuclei that fluctuate uniformly between prolate and oblate shapes would give the same three-particle correlation signal as rigid triaxial nuclei! Such strong smearing also degrades the prospects of using higher-order cumulants of ε 2 to infer the value of σ γ .
For the other three observables, ⟨ε 2 2 ⟩, ⟨(δd ⊥ d ⊥ ) 2 ⟩ and ⟨ε 4 2 ⟩ − 2 ⟨ε 2 2 ⟩ 2 , γ dependence are known to be very weak [19].
Nevertheless, up to a few percent dependence is observed, which can also be parameterized by Eq. 9, except that we should changeβ 3 toβ 2 for the variances and toβ 4 for ⟨ε 4 2 ⟩ − 2 ⟨ε 2 2 ⟩ 2 . However, sinceβ is fixed at 0.28, all these observables can be parameterized by Eq. 10. The data and the results of the fits are shown in Fig. 2. First, we observe that the parameter a 0 , representing the baseline contribution associated withβ is by far the largest, and the other terms only cause a few percent of modulation. Secondly, while the ⟨ε 2 2 ⟩ and ⟨ε 4 2 ⟩ − 2 ⟨ε 2 2 ⟩ 2 can be largely described by including the cos(3γ) term, the description of ⟨(δd ⊥ d ⊥ ) 2 ⟩ requires the inclusion of sin(3γ), cos(6γ) and sin (6γ) terms with comparable magnitudes. Lastly, all three observables have no sensitivity toγ at large σ γ .
IV. IMPACT OF QUADRUPLE DEFORMATION FLUCTUATION
Next, we consider the impact of β fluctuations. For this purpose, we shall fix the γ to be prolate shape, e.g cos(3γ) = 1. Figure 3 displays the finding for two-particle observables ⟨ε 2 2 ⟩ and ⟨(δd ⊥ d ⊥ ) 2 ⟩, again they are corrected by the undeformed baseline. Although approximately-linear dependencies onβ 2 are observed for both observables, the slopes of the data points also vary with σ β . To describe this feature, we include two higher-order terms,
⟨ε 2 2 ⟩ − ⟨ε 2 2 ⟩ β=0 or ⟨ δd ⊥ d ⊥ 2 ⟩ − ⟨ δd ⊥ d ⊥ 2 ⟩ β=0 = c 1 ⟨β 2 ⟩ + c 2 ⟨β 3 ⟩ + c 3 ⟨β 4 ⟩ = c 1 (β 2 + σ 2 β ) + c 2β (β 2 + 3σ 2 β ) + c 3 (β 4 + 6σ 2 ββ 2 + 3σ 4 β )(11)
The fits including only the leading term and all three terms are shown in the first row and the last row of Fig. 3, respectively. The fits in the middle row include the c 1 and c 3 terms for ⟨ε 2 2 ⟩, while they include c 1 and c 2 terms for ⟨(δd ⊥ d ⊥ ) 2 ⟩. Clearly, the behavior of ⟨(δd ⊥ d ⊥ ) 2 ⟩ at largeβ or σ β requires the presence of the ⟨β 3 ⟩ term in Eq. 11
with a negative coefficient c 2 < 0. In general, a large fluctuation σ β tends to reduce the slope of the dependence on β 2 . For the three-particle correlators, we include three terms in the fitting function as
⟨ε 2 2 δd ⊥ d ⊥ ⟩ − ⟨ε 2 2 δd ⊥ d ⊥ ⟩ β=0 or ⟨ δd ⊥ d ⊥ 3 ⟩ − ⟨ δd ⊥ d ⊥ 3 ⟩ β=0 = c 1 ⟨β 3 cos(3γ)⟩ + c 2 ⟨β 4 cos(3γ)⟩ + c 3 ⟨β 5 cos(3γ)⟩ = [c 1β (β 2 + 3σ 2 β ) + c 2 (β 4 + 6σ 2 ββ 2 + 3σ 4 β ) + c 3 (β 5 + 10σ 3 ββ 2 + 15βσ 4 β )] cos(3γ)(12)
The fitting results are shown in Fig. 4 as a function ofβ 3 for the prolate case cos(3γ) = 1. Inclusion of the high-order terms, mostly contribution from the ⟨β 5 ⟩ component, improves the description of ⟨ε 2 2 δd⊥ d⊥ ⟩ in the region of large σ β . However, they are not sufficient to describe the ⟨(δd ⊥ d ⊥ ) 3 ⟩ in the region of largeβ and σ β . In particular, the fit also misses all points atβ = 0. We checked that the fit can be systematically improved by including more higher moment terms, albeit only very slowly. Lastly, we consider the four-particle observable c 2,ε {4} = ⟨ε 4 2 ⟩ − 2 ⟨ε 2 2 ⟩ 2 . According to findings in Fig. 3, the Taylor expansion of ⟨ε 2 2 ⟩ should give the first two terms as c 1 ⟨β 2 ⟩ + c 2 ⟨β 4 ⟩, similarly the first few terms of ⟨ε 4 2 ⟩ has the form of a 0 ⟨β 2 ⟩ 2 + a 1 ⟨β 4 ⟩ + a 2 ⟨β 6 ⟩. Therefore, the natural expression for c 2,ε {4} up to second order correction should be
c 2,ε {4} − c 2,ε {4} β=0 = a 0 ⟨β 2 ⟩ 2 + a 1 ⟨β 4 ⟩ + a 2 ⟨β 6 ⟩ − (c 1 ⟨β 2 ⟩ + c 2 ⟨β 4 ⟩) 2 ≈ a 1 ⟨β 4 ⟩ − b 1 ⟨β 2 ⟩ 2 + a 2 ⟨β 6 ⟩ − b 2 ⟨β 2 ⟩ ⟨β 4 ⟩ = a 1 (β 4 + 6β 2 σ 2 β + 3σ 4 β ) − b 1 (β 2 + σ 2 β ) 2 + a 2 (β 6 + 15β 4 σ 2 β + 45β 2 σ 4 β + 15σ 6 β ) − b 2 (β 2 + σ 2 β )(β 4 + 6β 2 σ 2 β + 3σ 4 β )(13)
with b 1 = c 2 1 − a 0 and b 2 = 2c 1 c 2 . The leading order correction includes the first two terms with a 1 and b 1 , while the remaining two terms are the subleading-order corrections.
The results from the Glauber model and the fit to Eq. 13 are shown in the left panel of Fig. 5. The strong variation of c 2,ε {4} with bothβ and σ β is captured nicely by the fit. For small values of σ β , the deformation has a negative contribution to c 2,ε {4} that is proportional toβ 4 . For large values of σ β , c 2,ε {4} becomes positive. A previous study shows that the centrality fluctuation also tends to give a positive value of c 2,ε {4} [23]. Therefore, a negative 2 ⟩ (β, σ β ) (left column) and ⟨(δd ⊥ d⊥) 2 ⟩ (β, σ β ) (right column) calculated in U+U collisions with zero impact parameter. The top row shows the fits to Eq. 11 with only the leading term and the last row shows the fits with all three terms. The middle row show the fits including c1 and c3 terms for ⟨ε 2 2 ⟩ and c1 and c2 terms for ⟨(δd ⊥ d⊥) 2 ⟩. distribution can also be described by the following alternative form, 5: The fit of the c2,ε{4}(β, σ β ) data calculated in U+U collisions with zero impact parameter to Eq. 13 (left) and Eq. 14 (right).
c 2,ε {4} − c 2,ε {4} β=0 ≈ a 1 2 (6β 2 σ 2 β + 3σ 4 β −β 4 ) + a 2 (β 4 σ 2 β + 27β 2 σ 4 β + 9σ 6 β −β 6 )(14)
The contribution of residual terms is only a few percent. Indeed, a fit of this form describes the data very well as shown in the right panel of Fig. 5. This behavior provides clear intuition on how the fluctuation terms containing σ β compete with the terms containing onlyβ. For example, assumingβ = σ β , the contribution from fluctuation-related terms is a factor of 9 (37) times theβ 4 (β 6 ) in the leading-order (subleading order). Thus, even a relatively small 0 0.005 0.01 and quark Glauber model (right). The inserts shows the K dependence of χ 2 , which is calculated as
4 β 0.5 − 0.4 − 0.3 − 0.2 − 0.1 − 0 3 − 10 × 0,0 2 〉 2 2 ε 〈 -k 〉 2 4 ε 〈 - 2 〉 2 2 ε 〈 -k 〉 2 4 ε 〈 2 β σ 0.4 β 0.5 − 0.4 − 0.3 − 0.2 − 0.1 − 0 3 − 10 × 0,0 2 〉 2 2 ε 〈 -k 〉 2 4 ε 〈 - 2 〉 2 2 ε 〈 -k 〉 2 4 ε 〈 2χ 2 = ∑ i ∑ j (fij −fi) 2 σ 2 i.j ,
where fij = f (βi, σ β,j ,fi = ∑ j fij ∑ j and σij is the statistical error bar on the i, j-th data point.
fluctuation could have a stronger impact on c 2,ε {4} than the modestly largeβ. Note that the liquid-drop model results in Table I predict b 1 = 7 5a 1 , slightly smaller than the Glauber model expectation.
Experimentally, we can measure ⟨v 2 2 ⟩ and ⟨v 2 4 ⟩, which are linearly related to ⟨ε 2 2 ⟩ and ⟨ε 4 2 ⟩, respectively. Thus, it is natural to ask whether one could constrain theβ and σ β from these two quantities. So far we have learned that the combination in the cumulant definition c 2,ε {4} = ⟨ε 4 2 ⟩ − 2 ⟨ε 2 2 ⟩ 2 is not sufficient to achieve such separation. Motivated by this fact, we tried a more general combination f (β, σ β ; k) = ⟨ε 4 2 ⟩ − k ⟨ε 2 2 ⟩ 2 , and identify the k value for which the f (β, σ β ; k) have the least variation in σ β . The best value found is k = k 0 = 2.541, for which the data points follow an approximately-linear dependenceβ 4 as shown in the left panel of Fig. 6. The right panel of Fig. 6 shows a similar exercise in the quark Glauber model which gives a nearly identical k 0 value. The data points yet do not collapse on a single curve, implying a small σ β dependence remaining. The amount of spread is estimated to be about 25% relative for a givenβ, corresponding to a variation ofβ of about 1 − 4 √ 0.75 = 7%. This 7% value is the best precision for determining theβ in the Glauber model using this method. The determinedβ value can then be plugged into Eq. 11 (considering only the leading order is sufficient for ⟨ε 2 2 ⟩ as shown in Fig. 3) to determine the σ β . In the study of flow fluctuations in heavy ion collisions, it is often desirable to calculate the normalized quantities between high-order cumulant and low-order cumulants, which have the advantage of canceling the final state effects. Here we study the following three quantities following the convention from Ref. [21],
ρ = ⟨ε 2 2 δd⊥ d⊥ ⟩ − ⟨ε 2 2 δd⊥ d⊥ ⟩ β=0 ⟨ε 2 2 ⟩ − ⟨ε2⟩ β=0 ⟨ δd⊥ d⊥ 2 ⟩ − ⟨ δd⊥ d⊥ 2 ⟩ β=0 , nc d {3} = ⟨( δd⊥ d⊥ ) 3 ⟩ − ⟨( δd⊥ d⊥ ) 3 ⟩ β=0 ⟨ δd⊥ d⊥ 2 ⟩ − ⟨ δd⊥ d⊥ 2 ⟩ β=0 3 2 , ncε{4} = c2,ε{4} − c2,ε{4} β=0 ⟨ε 2 2 ⟩ − ⟨ε 2 2 ⟩ β=0 2(15)
Since 96 Zr has little quadruple deformation β Zr ≈ 0, these quantities can be constructed from measurements in 96 Ru+ 96 Ru and 96 Zr+ 96 Zr collisions. The results from our Glauber model calculation are shown in Fig. 7 for prolate nuclei cos(3γ) = 0. The values of ρ are nearly independent ofβ and have a weak dependence on σ β . In the largeβ region, ρ quickly converges to a value around −0.62 nearly independent of σ β . In the moderateβ region sayβ ∼ 0.2, the ρ first decreases quickly to a value around -0.6, but then increases gradually with σ β . The values of nc d {3} have similar convergence trends towards largē β around 0.4, but much more slowly compare to ρ. The nc ε {4} has a negative and nearly constant value when σ β = 0, while it increases rather quickly with σ β . Even for a value of σ 2 β = 0.01, the nc ε {4} stays positive untilβ 2 > 0.06. For larger values of σ 2 β , the nc ε {4} decreases with increasingβ 2 , but always remains positive over the range ofβ studied.
V. SUMMARY
We studied the impact of the fluctuations of nuclear quadrupole deformation on the heavy ion observables in a Monte Carlo Glauber model. In particular, we focus on eccentricity ε 2 and inverse size d ⊥ in each event, which can be related to the event-wise elliptic flow and mean transverse momentum in the final state. The triaxiality γ has a strong impact on three-particle correlators, but the impact diminishes for larger σ γ . In particular, when σ γ is large, the observables do not distinguish between prolate deformation and oblate deformation, i.e. the values of all observables approach those obtained in collisions of rigid triaxial nuclei with the same β. The mean and variance of quadrupole fluctuations, β and σ 2 β , have a strong influence on all observables. The influence on two-particle observables ⟨ε 2 2 ⟩ and ⟨(δd ⊥ d ⊥ ) 2 ⟩ is proportional to ⟨β 2 ⟩ =β 2 + σ 2 β , however, the ⟨(δd ⊥ d ⊥ ) 2 ⟩ also has a sizable subleading order term proportional to ⟨β 3 ⟩. The three-particle observables to the leading order are proportional to ⟨cos(3γ)β 3 ⟩ = cos(3γ)β(β + 3σ 2 β ), whereas the four-particle observables to the leading order are proportional to ⟨β 4 ⟩ =β 4 + 6σ 2 ββ 2 + 3σ 4 β . Hence, the variance of β fluctuation has a stronger impact thanβ for these higher-order observables.
By combining two and four-particle cumulant of ε 2 , we have constructed a simple formula to constrain parametersβ and σ β simultaneously. Such separation becomes less effective when σ β is comparable or larger thanβ. In the future, it would be interesting to carry out a full hydrodynamic model simulation to quantify the efficacy of this method on the final state flow observables. This research is supported by DOE DE-FG02-87ER40331.
⟩ (left) and ⟨(δd ⊥ d⊥) 3 ⟩ (right) on smearing in triaxiality σγ for different values ofγ. The lines indicate a simultaneous fit to Eq. 10 with the parameter values displayed on the plot.
FIG. 2 :
2The dependence of ⟨ε 2 2 ⟩ (left), ⟨(δd ⊥ d⊥) 2 ⟩ (middle), and ⟨ε 4 2 ⟩ − 2 ⟨ε 2 2 ⟩ 2 (right) on σγ for different values ofγ. The dashed lines indicate a simultaneous fit to Eq. 10, with fit results are displayed on the plot.
FIG. 3 :
3c 2,ε {4} which decreases further in central collisions would be an unambiguous indication for a large static quadrupole deformation of the colliding nuclei. The values of the fit parameters show some interesting relations, i.e. b 1 ≈ 3 2a 1 and b 2 ≈ 2a 2 . This means that the The simultaneous fit of the ⟨ε 2
FIG. 4 :
4The simultaneous fit of the ⟨ε 2 2 δd⊥ d⊥ ⟩ (β, σ β ) (left column) and ⟨(δd ⊥ d⊥) 3 ⟩ (β, σ β ) (right column) calculated in U+U collisions with zero impact parameter. The top row shows the results of the fit to Eq. 12 with only the leading term and the second row shows the fits with all three terms. The fit results imply that the contribution from ⟨β 4 ⟩ is negligible, though.
FIG. 7 :
7The normalized three-particle correlators, ρ (left) and nc d {3} (middle) and normalized four-particle correlator ncε{4} (right) defined in Eq. 11 as a function ofβ 2 for different values of σ 2 β .
FIG. 8 :FIG. 9 :
89Comparison of the five observables between nucleon Glauber model (symbols) and quark Glauber model (lines with matching colors) as a function of σγ for different values ofγ. Comparison of the five observables between nucleon Glauber model (symbols) and quark Glauber model (lines with matching colors) as a function ofβ 2 for different values of σ β .
TABLE I :
IThe leading-order results of various cumulants calculated for the nucleus with a sharp surface via Eq. 5. The two nuclei are placed with zero impact parameter and results are obtained by averaging over random orientations.
AppendixThe default results in this paper are obtained with the nucleon Glauber model. We have repeated the analysis for the quark Glauber model and compared it with the nucleon Glauber model results in Figs. 8 and 9 for the impact of γ fluctuation and β fluctuation, respectively. The trends are mostly very similar. A few exceptions are observed. In particular, the results of the two models are shifted vertically from each other inFig. 8. In the case of β fluctuation inFig. 9, the variance c d {2} and skewness c d {3} are systematically different between the two models in the highβ region.
. W Busza, K Rajagopal, W Van Der Schee, 10.1146/annurev-nucl-101917-020852arXiv:1802.04801Ann. Rev. Nucl. Part. Sci. 68hep-phW. Busza, K. Rajagopal, and W. van der Schee, Ann. Rev. Nucl. Part. Sci. 68, 339 (2018), arXiv:1802.04801 [hep-ph] .
. J E Bernhard, J S Moreland, S A Bass, J Liu, U Heinz, 10.1103/PhysRevC.94.024907arXiv:1605.03954Phys. Rev. C. 9424907nucl-thJ. E. Bernhard, J. S. Moreland, S. A. Bass, J. Liu, and U. Heinz, Phys. Rev. C 94, 024907 (2016), arXiv:1605.03954 [nucl-th] .
. G Giacalone, arXiv:2208.06839nucl-thG. Giacalone, (2022), arXiv:2208.06839 [nucl-th] .
. M Bender, P.-H Heenen, P.-G Reinhard, 10.1103/RevModPhys.75.121Rev. Mod. Phys. 75121M. Bender, P.-H. Heenen, and P.-G. Reinhard, Rev. Mod. Phys. 75, 121 (2003).
. J Jia, C.-J Zhang, arXiv:2111.15559nucl-thJ. Jia and C.-J. Zhang, (2021), arXiv:2111.15559 [nucl-th] .
. G Nijs, W Van Der Schee, arXiv:2112.13771nucl-thG. Nijs and W. van der Schee, (2021), arXiv:2112.13771 [nucl-th] .
. J Jia, G Giacalone, C Zhang, arXiv:2206.10449nucl-thJ. Jia, G. Giacalone, and C. Zhang, (2022), arXiv:2206.10449 [nucl-th] .
. M Abdallah, STAR10.1103/PhysRevC.105.014901arXiv:2109.00131Phys. Rev. C. 10514901nucl-exM. Abdallah et al. (STAR), Phys. Rev. C 105, 014901 (2022), arXiv:2109.00131 [nucl-ex] .
Constraints on neutron skin thickness and nuclear deformations using relativistic heavy-ion collisions from STAR. Haojie Xu, STAR CollabrationChunjian Zhang, STAR CollabrationHaojie Xu and Chunjian Zhang (STAR Collabration), Constraints on neutron skin thickness and nuclear deformations using relativistic heavy-ion collisions from STAR, "https://indico.cern.ch/event/895086/contributions/4724887/,https: //indico.cern.ch/event/895086/contributions/4749420/," (2022).
. C Zhang, S Bhatta, J Jia, 10.1103/PhysRevC.106.L031901arXiv:2206.01943Phys. Rev. C. 10631901nucl-thC. Zhang, S. Bhatta, and J. Jia, Phys. Rev. C 106, L031901 (2022), arXiv:2206.01943 [nucl-th] .
. C Zhang, J Jia, 10.1103/PhysRevLett.128.022301arXiv:2109.01631Phys. Rev. Lett. 12822301nucl-thC. Zhang and J. Jia, Phys. Rev. Lett. 128, 022301 (2022), arXiv:2109.01631 [nucl-th] .
. Y Cao, S E Agbemava, A V Afanasjev, W Nazarewicz, E Olsen, 10.1103/PhysRevC.102.024311arXiv:2004.01319Phys. Rev. C. 10224311nucl-thY. Cao, S. E. Agbemava, A. V. Afanasjev, W. Nazarewicz, and E. Olsen, Phys. Rev. C 102, 024311 (2020), arXiv:2004.01319 [nucl-th] .
. B Bally, arXiv:2209.11042nucl-exB. Bally et al., (2022), arXiv:2209.11042 [nucl-ex] .
. T Otsuka, Y Tsunoda, T Togashi, N Shimizu, T Abe, 10.1051/epjconf/201817802003European Physical Journal Web of Conferences. 1782003European Physical Journal Web of ConferencesT. Otsuka, Y. Tsunoda, T. Togashi, N. Shimizu, and T. Abe, in European Physical Journal Web of Conferences, European Physical Journal Web of Conferences, Vol. 178 (2018) p. 02003.
. M Bender, P.-H Heenen, 10.1103/PhysRevC.78.024309arXiv:0805.4383Phys. Rev. C. 7824309nucl-thM. Bender and P.-H. Heenen, Phys. Rev. C 78, 024309 (2008), arXiv:0805.4383 [nucl-th] .
. K Heyde, J L Wood, 10.1103/RevModPhys.83.1467Rev. Mod. Phys. 831467K. Heyde and J. L. Wood, Rev. Mod. Phys. 83, 1467 (2011).
. K Kumar, 10.1103/PhysRevLett.28.249Phys. Rev. Lett. 28249K. Kumar, Phys. Rev. Lett. 28, 249 (1972).
. A Poves, F Nowacki, Y Alhassid, 10.1103/PhysRevC.101.054307arXiv:1906.07542Phys. Rev. C. 10154307nucl-thA. Poves, F. Nowacki, and Y. Alhassid, Phys. Rev. C 101, 054307 (2020), arXiv:1906.07542 [nucl-th] .
. J Jia, 10.1103/PhysRevC.105.014905arXiv:2106.08768Phys. Rev. C. 10514905nucl-thJ. Jia, Phys. Rev. C 105, 014905 (2022), arXiv:2106.08768 [nucl-th] .
. D Teaney, L Yan, 10.1103/PhysRevC.83.064904arXiv:1010.1876Phys. Rev. C. 8364904nucl-thD. Teaney and L. Yan, Phys. Rev. C 83, 064904 (2011), arXiv:1010.1876 [nucl-th] .
. J Jia, 10.1103/PhysRevC.105.044905arXiv:2109.00604Phys. Rev. C. 10544905nucl-thJ. Jia, Phys. Rev. C 105, 044905 (2022), arXiv:2109.00604 [nucl-th] .
. C Loizides, 10.1103/PhysRevC.94.024914arXiv:1603.07375Phys. Rev. 9424914nucl-exC. Loizides, Phys. Rev. C94, 024914 (2016), arXiv:1603.07375 [nucl-ex] .
. M Zhou, J Jia, 10.1103/PhysRevC.98.044903arXiv:1803.01812Phys. Rev. C. 9844903nucl-thM. Zhou and J. Jia, Phys. Rev. C 98, 044903 (2018), arXiv:1803.01812 [nucl-th] .
| [] |
[
"Geronimus transformations for sequences of d-orthogonal polynomials",
"Geronimus transformations for sequences of d-orthogonal polynomials"
] | [
"D Barrios Rolanía \nDpto. Matemática Aplicada a la Ingeniería Industrial\nUniversidad Politécnica de Madrid\n\n",
"J C García-Ardila \nDpto. Matemática Aplicada a la Ingeniería Industrial\nUniversidad Politécnica de Madrid\n\n"
] | [
"Dpto. Matemática Aplicada a la Ingeniería Industrial\nUniversidad Politécnica de Madrid\n",
"Dpto. Matemática Aplicada a la Ingeniería Industrial\nUniversidad Politécnica de Madrid\n"
] | [] | In this paper an extension of the concept of Geronimus transformation for sequences of d-orthogonal polynomials {P n } is introduced. The transformed sequences {P (k) n }, for k = 1, . . . , d, are analyzed and some relationships between these new sequences of polynomials are given. Also the associated Hessenberg matrices and their transformed are studied. | 10.1007/s13398-019-00765-7 | [
"https://arxiv.org/pdf/1905.08746v1.pdf"
] | 160,009,379 | 1905.08746 | b657d20d13f61e5fd843bf115fcedc5bac17b52c |
Geronimus transformations for sequences of d-orthogonal polynomials
D Barrios Rolanía
Dpto. Matemática Aplicada a la Ingeniería Industrial
Universidad Politécnica de Madrid
J C García-Ardila
Dpto. Matemática Aplicada a la Ingeniería Industrial
Universidad Politécnica de Madrid
Geronimus transformations for sequences of d-orthogonal polynomials
arXiv:1905.08746v1 [math.FA] 21 May 2019
In this paper an extension of the concept of Geronimus transformation for sequences of d-orthogonal polynomials {P n } is introduced. The transformed sequences {P (k) n }, for k = 1, . . . , d, are analyzed and some relationships between these new sequences of polynomials are given. Also the associated Hessenberg matrices and their transformed are studied.
Introduction
For a fixed d ∈ N, let us consider a vector of linear functionals (u 1 , . . . , u d ),
u i : P[x] −→ C q(x) −→ u i , q , i = 1, 2, . . . , d ,
defined on the space P[x] of polynomials with complex coefficients. The notion of dorthogonality for this kind of vectors (u 1 , . . . , u d ) ∈ (P[x] ′ ) d was introduced in [6] and used since 1987 in several applications of Approximation Theory (see [1,2] for example).
Definition 1 (J. Van Iseghem [6]) We say that the sequence of polynomials {P n }, n ∈ N, is a d-orthogonal sequence (d-OPS in the sequel) if deg P n (x) = n for all n ∈ N and there exists a vector of functionals (u 1 , . . . , u d ) such that u j , x m P n = 0, n ≥ md + j, m ≥ 0, u j , x n P dn+j−1 = 0, n ≥ 0, (1) for each j = 1, . . . , d.
We say that the vector of functionals (u 1 , u 2 , . . . , u d ) is regular if there exists a sequence of polynomials {P n }, n ∈ N, satisfying (1).
Definition 2 If
(1) is verified, then we say that (u 1 , . . . , u d ) is a vector of orthogonality for the d-OPS {P n }.
For each {P n } being a d-OPS, the uniqueness of an associated vector of orthogonality is not guaranteed. In fact, given one of such vectors (u 1 , . . . , u d ), {P n } is also a d-OPS with respect to (v 1 , . . . , v d ) defined as
v 1 = u 1 , v 2 = u 2 + λ (2) 1 u 1 , . . . v d = u d + λ (d) d−1 u d−1 + · · · + λ (d) 2 u 2 + λ (d) 1 u 1 , where λ (j)
i ∈ C, j = 2, . . . , d, i = 1, . . . , j − 1, are randomly chosen. Conversely, if (u 1 , . . . , u d ) is a vector of linear functionals and the sequence {P n } is a d-OPS with respect to (u 1 , . . . , u d ), then {P n } is uniquely determined except a constant for each polynomial (see [8]). In particular, if k n denotes the leading coefficient of P n then the sequence { 1 kn P n } is called sequence of monic d-orthogonal polynomials (monic d-OPS). Hereafter we alway deal with this kind of sequences of polynomials.
A relevant characterization of d-OPS is the following (see also [8]).
Theorem 1 (J. Van Iseghem [6]) {P n }, n ∈ N, is a d-OPS if and only if there exist coefficients {a n,n−k , n ≥ 0 , k = 0, . . . , d} such that the following (d + 2)-term recurrence relation is satisfied,
xP n (x) = P n+1 (x) + d k=0
a n,n−k P n−k (x), a n,n−d = 0, n ≥ 0,
P 0 (x) = 1, P −1 (x) = P −2 (x) = · · · = P −d (x) = 0. (2)
In our research, starting from a d-OPS {P n } and an associated vector of orthogonality (u 1 , u 2 , . . . , u d ), we introduce d new vectors of functionals (u . . , d, whose study is the object of this paper. In fact, for m = 1, it is possible to take a vector of functionals (u
(1) 1 , u (1) 2 , . . . , u (1) d ) verifying (x − a)u (1) 1 = u d , u (1) i = u i−1 , i = 2, . . . , d.(3)
In this case u
(1) 1 = u d x − a + M 1 δ a ,(4)
where δ a is the Dirac Delta function supported in x = a (this is, δ a , q = q(a) for any q ∈ P[x]) and M 1 ∈ C is an arbitrary value. Therefore, u
1 is not uniquely determined from (u 1 , u 2 , . . . , u d ). Similary, if m = 1, 2, . . . , d − 1, and (u
(m) 1 , u (m) 2 , . . . , u (m) d ) is a vector of orthogonality for the d-OPS {P (m) n }, then (u (m+1) 1 , u (m+1) 2 , . . . , u (m+1) d ) verifying (x − a)u (m+1) 1 = u (m) d , u (m+1) i = u (m) i−1 , i = 2, . . . , d.(5)
is a new vector of functionals. As in (4) more than one vector of functionals can be defined, being
u (m+1) 1 = u (m) d x − a + M m+1 δ a with M 2 , . . . , M d ∈ C.
From the above, the vectors of functionals (u
u (r) i = u (r+q) i+q , i = 1, . . . , d − q. (x − a)u (r+q) i−d+q , i = d − q + 1, . . . , d,(6)
The vectors of functionals (u [7] and studied in several later works in the case d = 1 (see [3,4,9] for instance).
The values chosen for M 1 , . . . , M d can help to determine the regularity of the corresponding vectors of functionals. As we explain later, the vector of orthogonality (u 1 , u 2 , . . . , u d ) from which we start plays an essential role in the construction of (u
(m) n = 0, n ∈ N, where d (m) n = u (m) 1 , P 0 · · · u (m) n , P 0 . . . · · · . . . u (m) 1 , P n−1 · · · u (m) n , P n−1 , m > n, u (m) 1 , P n−m · · · u (m) m , P n−m . . . · · · . . .P (m) n (x) = 1 d (m) n u (m) 1 , P n−m · · · u (m) m , P n−m P n−m (x) . . . . . . . . . u (m) 1 , P n−1 · · · u (m) m , P n−1 P n−1 (x) u (m) 1 , P n · · · u (m) m , P n P n (x)
, n ≥ m.
Theorem 2 extends previous results corresponding to the case d = 1. More specifically, the following corollary is an immediate consequence, which was proved in [5] with other arguments.
Corollary 1 Let (u 1 , . . . , u d ) be a regular vector of linear functionals and consider {P n }, n ∈ N, its corresponding d-OPS. Then (u
(1) 1 , . . . , u (1) d ) is regular if only if u (1) 1 , P n = 0 for each n ≥ 0. In such case, {P(1)
n } exists and
P (1) n (x) = 1 u (1) 1 , P n−1 P n (x) u (1) 1 , P n P n−1 (x) u(1)1 , P n−1 , n ∈ N.
The (d + 2)-recurrence relation (2) motives our interes in the following Hessenberg banded matrix, univocally determined by each d-OPS. Set
J = a 0,0 1 a 1,0 a 1,1 1 . . . . . . . . . . . . a d,0 a d,1 · · · a d,d 1 0 a d+1,1 . . . . . . . . . . . . , a d+i,i = 0, i = 0, 1, . . . ,
whose entries are the coefficients of the recurrence relation (2). Assume that (u 1 , . . . , u d ) is a vector of orthogonality associated with J and consider that (u n } verifying the corresponding (d + 2)-term recurrence relation whose coefficients define a new Hessenberg banded matrix
J (m) = a (m) 0,0 1 a (m) 1,0 a (m) 1,1 1 . . . . . . . . . . . . a (m) d,0 a (m) d,1 · · · a (m) d,d 1 0 a (m) d+1,1 . . . . . . . . . . . . , a (m) d+i,i = 0, i = 0, 1, . . . ,L (m) = 1 γ (m) 1 1 γ (m) 2 . . . . . . , m = 1, . . . , d,(8)
and there exists an upper triangular matrix
U = u 1 1 u 2 1 . . . . . . such that J (m) − aI = L (m) · · · L (1) U L (d) · · · L (m+1) , m = 1, . . . , d.(9)
In the rest of the paper we consider a regular vector of functionals (u 1 , . . . , u d ) and we assume that {P n } is the monic d-OPS associated with (u 1 , . . . , u d ).
In Section 2 some connections between the d sequences of polynomials {P (m) n } are studied, as well as some factorizations for matrices J (m) − aI, in the case that the vectors of functionals (u (m) 1 , . . . , u (m) d ), m = 1, . . . , d, are regular. The auxiliary results introduced in Section 2 are the basis of the proof of Theorem 3, which will be easily obtained from these relationships. Finally, Section 3 is devoted to prove Theorem 2 and Theorem 3.
Geronimus transformation for vectors of orthogonality
Theorem 2 provides some conditions easy to check which guarantee the regularity of the analized vectors of functionals. For instance, if m = 1 we have that (u
(1) 1 , . . . , u (1) d ), is regular when d (1) n = u (1) 1 , P n−1 = 0 , n ∈ N. This is, from (4), u d x − a , P n−1 + M 1 P n−1 (a) = 0 , n ≥ 1.
Hence, condition P n (a) = 0 , n ∈ N,
permits to choose M 1 ∈ C, M 1 = − u d x − a , P n−1 P n (a) , ∀n ∈ N,(10)
for obtaining a vector of orthogonality (u
there exists {P (k) n }, n ∈ N, which is a d-OPS with respect to (u (k) 1 , u (k) 2 , . . . , u (k) d ). The sequence {P (k) n } satisfy a (d + 2)-term recurrence relation, xP (k) n (x) = P (k) n+1 (x) + d s=0 a (k) n,n−s P (k) n−s (x), a (k) n,n−d = 0, n ≥ 0, P (k) 0 (x) = 1, P (k) −1 (x) = P (k) −2 (x) = · · · = P (k) −d (x) = 0. (12)
Next, we analyze the relation between the sequences {P Lemma 1 With the above notation, for each n ∈ N there exists a set of complex numbers {γ (r,q) n,k : k = n − q, . . . , n − 1, γ (r,q) n,n−q } = 0 such that
P (r+q) n (x) = P (r) n (x) + n−1 s=n−q γ (r,q) n,s P (r) s (x), n ≥ 0.(13)
Proof.-From (6), for j ∈ {1, . . . , d} fixed we have
u (r) i , x k P (r+q) dn+j = 0 , k = 0, 1, . . . , n − 1 u (r) i , x n P (r+q) dn+j = 0 , i + q ≤ j , i = 1, . . . , d − q,(14)
and
u (r) i , x k P (r+q) dn+j = 0 , k = 0, 1 . . . , n − 2 u (r) i , x n−1 P (r+q) dn+j = 0 , i − d + q ≤ j , i = d − q + 1, . . . , d.(15)
On the other hand, for each n ∈ N there exists a set {γ (r,q) dn+j,s : s = 0, . . . , dn + j − 1} such that
P (r+q) dn+j (x) = P (r) dn+j (x) + dn+j−1 s=0 γ (r,q) dn+j,s P (r) s (x) .(16)
Thus, using (14) and (15)
(k+1)d−1 s=0 γ (r,q) dn+j,s u (r) d , x k P (r) s = 0 , k = 0, . . . , n − 2.(17)
Due to u
γ (r,q) dn+j,s u (r) d−q , x n−1 P (r) s = 0 . Then, γ (r,q) dn+j,d(n−1) = γ (r,q) dn+j,d(n−1)+1 = · · · = γ (r,q) dn+j,dn−q−1 = 0 .(19)
Using (18) and (19), (16) reduces to
P (r+q) dn+j (x) = P (r) dn+j (x) + dn+j−1 s=dn−q γ (r,q) dn+j,s P (r) s (x) , j = 1, . . . , d .(20)
To obtain (13), firstly assume j ≤ q. Then from (15),
γ (r,q) dn+j,dn−q u (r) d−q+1 , x n−1 P (r) dn−q = 0 γ (r,q) dn+j,dn−q u (r) d−q+2 , x n−1 P (r) dn−q + γ (m) dn+j,dn−q+1 u (r) d − q + 2, x n−1 P (r) dn−q+1 = 0 . . . . . . . . . γ (r,q) dn+j,dn−q u (r) d−q+j , x n−1 P (r) dn−q + · · · + γ (r,q) dn+j,dn+j−q−1 u (r) d−q+j , x n−1 P (r) dn+j−q−1 = 0 Thus γ (r,q) dn+j,dn−q = γ (r,q) dn+j,dn−q+1 = · · · = γ (r,q) dn+j,dn+j−q−1 = 0. (21) Moreover, since (x − a)u (r+q) j+1 = u (r) d−q+j+1 , γ (r,q) dn+j,dn+j−q = u (r) d−q+j+1 , x n−1 P (r+q) dn+j u (r) d−q+j+1 , x n−1 P (r) dn+j−q = u (r+q) j+1 , (x − a)x n−1 P (r+q) dn+j u (r) d−q+j+1 , x n−1 P (r) dn+j−q = 0.
Secondly, assume j > q. Then, from (14) and (15), Lemma 2 For each n ∈ N we have
γ (r,q) dn+j,dn−q u (r) d−q+1 , x n−1 P (r) dn−q = 0 . . . γ (r,q) dn+j,dn−q u (r) d , x n−1 P (r) dn−q + · · · + γ (r,q) dn+j,dn−1 u (r) d , x n−1 P (r) dn−1 = 0 γ (r,q) dn+j,dn−q u (r) 1 , x n P (r) dn−q + · · · + γ (r,q) dn+j,dn u (r) 1 , x n P (r) dn = 0 . . . . . . . . . . . . γ (r,q) dn+j,dn−q u (r) j−q , x n P (r) dn−q + · · · + γ (r,q) dn+j,dn+j−q−1 u (r) j−q , x n P (r) dn+j−q−1 = 0 j+1 = u (r) j−q+1 (see (6)), γ (r,q) dn+j,dn+j−q−1 = u (r) j−q+1 , x n P (r+q) dn+j u (r) j−q+1 , x n P (r) dn+j−q = u (r+q) j+1 , x n P (r+q) dn+j u (r) j−q+1 , x n P((x − a)P (r) n (x) = P (r+q) n+1 (x) + n s=n−d+q α (r,q) n,s P (r+q) s (x) ,(23)
where α (r,q) n,n−d+q = 0.
Proof.-For n ∈ N and j ∈ {1, . . . , d} the polynomial (x − a)P
From (6) we have
u (r+q) i , (x − a)x k P (r) dn+j = 0 , k = 0, 1 . . . n − 1 u (r+q) i , (x − a)x n P (r) dn+j = 0 , d − q + i ≤ j , i = 1, . . . , q,(24)
and u
(r+q) i , x k P (r) dn+j = 0, k = 0, 1 . . . , n − 1, i = q + 1, . . . , d,(25)
Then, similar arguments to those used in the proof of Lemma 1 lead to (23).
Remark 1 (13) can be rewritten as
P (r+q) (x) = L (r,q) P (r) (x),(26)
where
P (k) (x) = P (k) 0 (x), P (k) 1 (x), . . . T , k = r, r + q,
and L (r,q) is a lower triangular banded matrix,
L (r,q) = 1 γ (r,q) 1,0 1 . . . . . . . . . γ (r,q) q,0 γ (r,q) q,1 · · · 1 0 γ (r,q) q+1,1 . . . . . . 0 . . . . . . . . . . . . ,
In the case that (u
(k) 1 , . . . , u (k) d )
, k ∈ {r + 1, . . . , r + q − 1}, are regular vectors of functionals then the same above reasoning can be done replacing q by 1 and substituting r succesively by r + 1, . . . , r + q − 1 in (26). In this way,
P (r+2) (x) = L (r+1,1) P (r+1) (x) = L (r+1,1) L (r,1) P (r) (x) P (r+3) (x) = L (r+2,1) P (r+2) (x) = L (r+2,1) L (r+1,1) L (r,1) P (r) (x) . . . . . . . . . P (r+q) (x) = L (r+q−1,1) P (r+q−1) (x) = L (r+q−1,1) · · · L (r,1) P (r) (x).
Hence,
P (r+q) (x) = L (r+q) L (r+q−1) · · · L (r+1) P (r) (x).(27)
In (27) and in the sequel we denote L (s+1) := L (s,1) , s = r, r + 1, . . . , r + q − 1.
Notice that these matrices L (s+1) are bi-diagonal as in (8) ) are regular vectors of functionals). We underline that, under these conditions, the following factorization of L (r,q) is verified,
L (r,q) = L (r+q) L (r+q−1) · · · L (r+1) .(29)
In the same way, (23) can be rewritten as
(x − a)P (r) (x) = N (r,q) P (r+q) (x) ,(30)
where N (r,q) is a lower Hessenberg (d + 2 − q)-banded matrix,
N (r,q) = α (r,q) 0,0 1 α (r,q) 1,0 α (r,q) 1,1 1 . . . . . . . . . . . . α (r,q) d−q,0 α (r,q) d−q,1 · · · α (r,q) d−q,d−q 1 0 α (r,q) d−q+1,1 . . . . . . 0 . . . . . . . . . . . . . . . , α (m) d−q+i,i = 0 , i = 0, 1, . . . (31)
Lemma 1 and Lemma 2 provide relationships between the matrices J (r) and J (r+q) . This is sumarized in the following result.
Theorem 4 With the above notation we have
J (r) − aI = N (r,q) L (r,q)(32)
and
J (r+q) − aI = L (r,q) N (r,q) .(33)
Proof.-As we have done
(J (r) − aI)P (r) (x) = (x − a)P (r) (x) = N (r,q) P (r+q) (x) = N (r,q) L (r,q) P (r) (x).
From here, and taking into account that {P
Proofs of the main results
Proof of Theorem 2
We will give the proof when n ≥ m. The case n < m follows in a similar way.
In the first place, we take m ∈ {1, . . . , d} and we suppose that (u
1 , P n−m · · · u (m) 1 , P n−1 . . . · · · . . . u (m) m , P n−m · · · u (m) m , P n−1 γ (0,m) n,n−m . . . γ (0,m) n,n−1 = − u (m) 1 , P n . . . u (m) m , P n .(35)
This is, (t 1 , . . . , t m ) = (γ
We check that (36) has a unique solution. In fact, if t (m) n , . . . , t (m) n−m+1 T is another solution then we define the polynomial
Q (m) n (x) = P n (x) + t (m) n P n−1 (x) + · · · + t (m) n−m+1 P n−m (x).
In what follows we will prove u (m) k , (x − a) r Q (m) n = 0, k = 1, . . . , d, n ≥ dr + k, n ≥ m, r ≥ 0 .
In the case that k ∈ {1, . . . , m}, then u T is a solution of (35). Therefore, for r > 0 and n ≥ dr + k, since (6) With our hypotheses, we have the factorization (29) for L (q,r) . Then, if r + q = d, we have L (r,d−r) = L (d) L (d−1) · · · L (r+1) .
On the other hand, from (30) (also for q = d − r),
Finally, using (46) and (49) in (32) we arrive to (9) and the result is proved.
d
), m = 1, . . . , d, constructed in (3)-(5) extend the concept of Geronimus transformation of (u 1 , . . . , u d ) introduced in
. . . , d. One of our goals is to characterize the regularity of the new vectors of functionals verifying (3)-(5), which is done in the following result.
Theorem 2
2Let (u 1 , . . . , u d ) be a vector of orthogonality and let {P n }, n ∈ N, be its corresponding d-OPS. Then (u , m = 1, . . . , d, is regular if and only if d
. . . , d, are vectors of orthogonality verifying (3)-(5). Then, for each m = 1, . . . , d, there exists a d-OPS {P (m)
d
). We are concerned to finding relations between J and J (m) , m = 1, . . . , d. Our second main result is the following.
. . . , d, be a set of vector of orthogonality verifying (3)-(5) and assume that {P
, n ∈ N , m = 0, 1, . . . , d, are the corresponding d-OPS (where (u = (u 1 , u 2 , . . . , u d ) and {P (0) n } = {P n }). Then there exist d lower triangular matrices
, for m = 1, 2, . . . , k, k ≤ d, when P n (a), P(1) n (a), . . . , P be directly defined from (u 1 , u 2 , . . . , u d ) verifying (6) even if the vectors of functionals , m = 1, . . . , k − 1, are not previously defined. On the other hand, we emphasize that the vectors of functionals verifying (3)-(5) are related by(6), which is independent on whether (u is or not regular.In the remainder of this section we take r ∈ {0, . . . , d − 1} and q ∈ {1, . . . , d − r} fixed and we assume that u = r, r + q, are regular. Then for k = r, r + q,
− 2
2s = dk + i − 1, taking successively k = 0, 1, . . . , n
n
} is a basis of P[x], (32) is verified. To prove (33) notice that(J (r+q) − aI)P (r+q) (x) = (x − a)L (r,q) P (r) (x) = L (r,q) N (r,q) P (r+q) (x).Using again the fact that {P (r+q) n }, n ∈ N, is a basis of P[x], (33) follows.
regular. From Lemma 1 for r = 0 and q = m, we have P (m) (x) = L (0,m) P(x) (see (26)), where L (0,m) is a lower triangular (m + 1)-banded matrix as in (30).This is, P (m) n (x) = P n (x) + γ
. . . . . . . . . . . . . . . . . . u (m) m , P n−m t 1 + · · · + u (m) m , P n−1 t m = − u
(x − a) r−1 P n−m = u d−m+k , (x − a) r−1 P n + t (m) n u d−m+k , (x − a)
(x − a)P (r) (x) = N (r,d−r) P (d) (x).(47)If r = 0 then (47) becomes (44). If r > 0, substituting r by r − 1 in (47) and multiplying by L (r−1,1) we obtain(x − a)L (r−1,1) P (r−1) (x) = L (r−1,1) N (r−1,d−r+1) P (d) (x).From this and (26), using the notation introduced in(28),(x − a)P (r) (x) = L (r) N (r−1,d−r+1) P (d) (x).(48)Comparing(47)and (48), N (r,d−r) = L (r) N (r−1,d−r+1) , which is verified for r = 1, 2, . . . , d. Iterating the procedure, N (r,d−r) = L (r) L (r−1) N (r−2,d−r+2) = · · · = L (r) L (r−1) . . . L (1) N (0,d) = L (r) L (r) . . . L (1) U .
{0, 1, . . . , d − 1}, q ∈ {1, . . . , d}, are related by(r)
1 , u
(r)
2 , . . . , u
(r)
d ) and (u
(r+q)
1
, u
(r+q)
2
, . . . , u
(r+q)
d
),
r ∈
On the contrary, if k = m + p with p ∈ {1, . . . , d − m} and n ≥ dr + k then, again since (6), we obtainThen, since (38)-(39) we see that (37) is verified for any k ∈ {1, . . . , d}.Obviously, (37) implies u (m)k , x r Q (m) n = 0, n ≥ rd + k. Doing a similar analysis, we obtain that for n = drn }, n ∈ N, is also a monic d-OPS, which contradicts the uniqueness of {P (m) n }. This proves the uniqueness of solutions for the system (35). Finally, applying the well-known Cramer Rule to solve (36) we arrive to(7).Reciprocally, assume d (m) n = 0 and define P (m) n as in(7). We want to show that {P(40) If k ≤ m and r = 0, then we see on the right hande side of (40) that u(41) because the entries in the last column of the determinant are zero.If k > m, then k = m + p with p ∈ {1, . . . , d − m}. Therefore,(42) also because the last column of the determinant has all the entries equal to zero.Moreover, substituting n by ds + k − 1, s ∈ N in (41)-(42) and expanding the determinants by their last columns we seeProof of Theorem 3Now, we will see that the proof of Theorem 3 is an easy consequence of Section 2.For k = 0, 1, . . . , d − 1, taking r = d − k − 1 and q = k + 1 in(30),In particular, (x − a)P(x) = U P (d) (x) ,where U = N (0,d) is a bi-diagonal upper triangular matrix whose entries in the diagonal (denoted by s 0 , s 1 , . . .) are different from zero. That is, (44) can be rewritten as (x − a)P n (x) = P (d)n+1 (x) + s n P (d) n (x) , n = 0, 1, . . . .
Saff Higher-Order Three-Term Recurrences and Asymptotics of Multiple Orthogonal Polynomials. A I Aptekarev, V A Kalyagin, E B , Constr Approx. 30175223A.I. Aptekarev, V.A. Kalyagin, E.B. Saff Higher-Order Three-Term Recurrences and Asymptotics of Multiple Orthogonal Polynomials, Constr Approx 30 (2009), 175223.
High-order recurrence relations, Hermite-Padeé approximation and Nikishin systems. D Barrios Rolanía, J S Geronimo, G López Lagomasino, Mat. Sb. 2093102137D. Barrios Rolanía, J. S. Geronimo, G. López Lagomasino, High-order recurrence re- lations, Hermite-Padeé approximation and Nikishin systems, Mat. Sb. 209 (3) (2018), 102137.
Darboux transformation and perturbation of linear functionals. M I Bueno, F Marcellán, Linear Algebra Appl. 384M. I. Bueno, F. Marcellán, Darboux transformation and perturbation of linear func- tionals, Linear Algebra Appl. 384 (2004), 215-242.
Multiple Geronimus transformations. M Derevyagin, J C García-Ardila, F Marcellán, Linear Algebra Appl. 454M. Derevyagin, J. C. García-Ardila, F. Marcellán, Multiple Geronimus transforma- tions. Linear Algebra Appl. 454 (2014), 158-183.
A note on the Geronimus transformation and Sobolev orthogonal polynomials. M Derevyagin, F Marcellán, Numer. Algorithms. 67M. Derevyagin, F. Marcellán, A note on the Geronimus transformation and Sobolev orthogonal polynomials, Numer. Algorithms. 67 (2014) 271-287 .
approximants de Padé vectoriales. V Iseghem, These. Univ. des Sci et Tech. de LiIle Flandre Artois. V. Iseghem, approximants de Padé vectoriales, These. Univ. des Sci et Tech. de LiIle Flandre Artois, (1987)
On polynomials orthogonal with regard to a given sequence of numbers. J Geronimus, Comm. Inst. Sci. Math. Mec. Univ. Kharkoff [Zapiski Inst. Mat. Mech. 4RussianJ. Geronimus, On polynomials orthogonal with regard to a given sequence of numbers, Comm. Inst. Sci. Math. Mec. Univ. Kharkoff [Zapiski Inst. Mat. Mech.] (4) 17 (1940), 3-18 (Russian).
P Maroni, L'orthogonalité et les récurrences de polynômes d'ordre supérieurà deux Annales de la Faculté des Sciences de Toulouse 5. 10P. Maroni, L'orthogonalité et les récurrences de polynômes d'ordre supérieurà deux Annales de la Faculté des Sciences de Toulouse 5, 10 (1) (1989) 105-139
Rational spectral transformations and orthogonal polynomials. A Zhedanov, J. Comput. Appl. Math. 85A. Zhedanov, Rational spectral transformations and orthogonal polynomials, J. Com- put. Appl. Math. 85 (1997), 67-86.
| [] |
[
"QUANTITATIVE GREEN'S FUNCTION ESTIMATES FOR LATTICE QUASI-PERIODIC SCHRÖDINGER OPERATORS",
"QUANTITATIVE GREEN'S FUNCTION ESTIMATES FOR LATTICE QUASI-PERIODIC SCHRÖDINGER OPERATORS"
] | [
"Hongyi Cao ",
"ANDYunfeng Shi ",
"Zhifei Zhang "
] | [] | [] | In this paper, we establish quantitative Green's function estimates for some higher dimensional lattice quasi-periodic (QP) Schrödinger operators. The resonances in the estimates can be described via a pair of symmetric zeros of certain functions and the estimates apply to the sub-exponential type non-resonant conditions. As the application of quantitative Green's function estimates, we prove both the arithmetic version of Anderson localization and finite volume version of ( 1 2 −)-Hölder continuity of the integrated density of states (IDS) for such QP Schrödinger operators. This gives an affirmative answer to Bourgain's problem in[Bou00]. We say ω ∈ R satisfies the Diophantine condition if there are τ > 1 and γ > 0 so that kω = inf l∈Z |l − kω| ≥ γ |k| τ for ∀ k ∈ Z \ {0}. | 10.1007/s11425-022-2126-8 | [
"https://export.arxiv.org/pdf/2209.03808v2.pdf"
] | 252,118,922 | 2209.03808 | 80d45bc5c9c20370c33fa18a5faf8c16213f5461 |
QUANTITATIVE GREEN'S FUNCTION ESTIMATES FOR LATTICE QUASI-PERIODIC SCHRÖDINGER OPERATORS
5 Oct 2022
Hongyi Cao
ANDYunfeng Shi
Zhifei Zhang
QUANTITATIVE GREEN'S FUNCTION ESTIMATES FOR LATTICE QUASI-PERIODIC SCHRÖDINGER OPERATORS
5 Oct 2022
In this paper, we establish quantitative Green's function estimates for some higher dimensional lattice quasi-periodic (QP) Schrödinger operators. The resonances in the estimates can be described via a pair of symmetric zeros of certain functions and the estimates apply to the sub-exponential type non-resonant conditions. As the application of quantitative Green's function estimates, we prove both the arithmetic version of Anderson localization and finite volume version of ( 1 2 −)-Hölder continuity of the integrated density of states (IDS) for such QP Schrödinger operators. This gives an affirmative answer to Bourgain's problem in[Bou00]. We say ω ∈ R satisfies the Diophantine condition if there are τ > 1 and γ > 0 so that kω = inf l∈Z |l − kω| ≥ γ |k| τ for ∀ k ∈ Z \ {0}.
Introduction
Consider the QP Schrödinger operators
H = ∆ + λV (θ + nω)δ n,n ′ on Z d , (1.1)
where ∆ is the discrete Laplacian, V : T d = (R/Z) d → R is the potential and nω = (n 1 ω 1 , . . . , n d ω d ). Typically, we call θ ∈ T d the phase, ω ∈ [0, 1] d the frequency and λ ∈ R the coupling . Particularly, if V = 2 cos 2πθ and d = 1, then the operators (1.1) become the famous almost Mathieu operators (AMO). Over the past decades, the study of spectral and dynamical properties of lattice QP Schrödinger operators has been one of the central themes in mathematical physics. Of particular importance is the phenomenon of Anderson localization (i.e., pure point spectrum with exponentially decaying eigenfunctions). Determining the nature of the spectrum and the eigenfunctions properties of (1.1) can be viewed as a small divisor problem, which depends sensitively on features of λ, V, ω, θ and d. Then substantial progress has been made following Green's function estimates based on a KAM type multi-scale analysis (MSA) of Fröhlich-Spencer [FS83]. More precisely, Sinai [Sin87] first proved the Anderson localization for a class of 1D QP Schrödinger operators with a C 2 cosine-like potential assuming the Diophantine frequency 1 . The proof focuses on eigenfunctions parametrization and the resonances are overcome via a KAM iteration scheme. Independently, Fröhlich-Spencer-Wittwer [FSW90] extended the celebrated method of Fröhlich-Spencer [FS83] originated from random Schrödinger operators case to the QP one, and obtained similar Anderson localization result with [Sin87]. The proof however uses estimates of finite volume Green's functions based on the MSA and the eigenvalue variations. Both [Sin87] and [FSW90] were inspired essentially by arguments of [FS83]. Eliasson [Eli97] applied a reducibility method based on KAM iterations to general Gevrey QP potentials and established the pure point spectrum for corresponding Schrödinger operators. All these 1D results are perturbative in the sense that the required perturbation strength depends heavily on the Diophantine frequency (i.e., localization holds for |λ| ≥ λ 0 (V, ω) > 0). The great breakthrough was then made by Jitomirskaya [Jit94,Jit99], in which the non-perturbative methods for control of Green's functions (cf. [Jit02]) were developed first for AMO. The nonperturbative methods can avoid the usage of multi-scale scheme and the eigenvalue variations. This will allow effective (even optimal in many cases) and independent of ω estimate on λ 0 . In addition, such methods can provide arithmetic version of Anderson localization which means the removed sets on both ω and θ when obtaining localization have an explicit arithmetic description (cf. [Jit99,JL18] for details). In contrast, the current perturbation methods seem only providing certain measure or complexity bounds on these sets. Later, Bourgain-Jitomirskaya [BJ02] extended some results of [Jit99] to the exponential long-range hopping case (thus the absence of Lyapunov exponent) and obtained both nonperturbative and arithmetic Anderson localization. Significantly, Bourgain-Goldstein [BG00] generalized the non-perturbative Green's function estimates of Jitomirskaya [Jit99] by introducing the new ingredients of semi-algebraic sets theory and subharmonic function estimates, and established the non-perturbative Anderson localization 2 for general analytic QP potentials. The localization results of [BG00] hold for arbitrary θ ∈ T and a.e. Diophantine frequencies (the permitted set of frequencies depends on θ), and there seems no arithmetic version of Anderson localization results in this case. 
We would like to mention that the Anderson localization can also be obtained via reducibility arguments based on Aubry duality [JK16,AYZ17].
If one increases the lattice dimensions of QP operators, the Anderson localization proof becomes significantly difficult. In this setting, Chulaevsky-Dinaburg [CD93] and Dinaburg [Din97] first extended results of Sinai [Sin87] to the exponential long-range operator with a C 2 cosine type potential on Z d for arbitrary d ≥ 1. However, in this case, the localization holds assuming further restrictions on the frequencies (i.e., localization only holds for frequencies in a set of positive measure, but without explicit arithmetic description). Subsequently, the remarkable work of Bourgain-Goldstein-Schlag [BGS02] established the Anderson localization for the general analytic QP Schrödinger operators with (n, θ, ω) ∈ Z 2 × T 2 × T 2 via Green's function estimates. In [BGS02] they first proved the large deviation theorem (LDT) for the finite volume Green's functions by combining MSA, matrix-valued Cartan's estimates and semi-algebraic sets theory. Then by using further semi-algebraic arguments together with LDT, they proved the Anderson localization for all θ ∈ T 2 and ω in a set of positive measure (depending on θ). While the restrictions of the frequencies when achieving LDT are purely arithmetic and do not depend on the choice of potentials, in order to obtain the Anderson localization it needs to remove an additional frequencies set of positive measure. The proof of [BGS02] is essentially 2 i.e., Anderson localization assuming the positivity of the Lyapunov exponent. In the present context by nonperturbative Anderson localization we mean localization if |λ| ≥ λ 0 = λ 0 (V ) > 0 with λ 0 being independent of ω.
two-dimensional and a generation of it to higher dimensions is significantly difficult. In 2007, Bourgain [Bou07] successfully extended the results of [BGS02] to arbitrary dimensions, and one of his key ideas is allowing the restrictions of frequencies to depend on the potential by means of delicate semi-algebraic sets analysis when proving LDT for Green's functions. In other words, for the proof of LDT in [Bou07] there has already been additional restrictions on the frequencies, which depends on the potential V and is thus not arithmetic. The results of [Bou07] have been largely generalized by Jitomirskaya-Liu-Shi [JLS20] to the case of both arbitrarily dimensional multi-frequencies and exponential long-range hopping. Very recently, Ge-You [GY20] applied a reducibility argument to higher dimensional long-range QP operators with the cosine potential, and proved the first arithmetic Anderson localization assuming the Diophantine frequency.
Definitely, the LDT type Green's function estimates methods are powerful to deal with higher dimensional QP Schrödinger operators with general analytic potentials. However, such methods do not provide the detailed information on Green's functions and eigenfunctions that may be extracted by purely perturbative method based on Weierstrass preparation type theorem. As an evidence, in the celebrated work [Bou00], Bourgain developed the method of [Bou97] further to first obtain the finite volume version of ( 1 2 −)-Hölder continuity of the IDS for AMO. The proof shows that the Green's functions can be controlled via certain quadratic polynomials, and the resonances are completely determined by zeros of these polynomials. Using this method then yields a surprising quantitative result on the Hölder exponent of the IDS, since the celebrated method of Goldstein-Schlag [GS01] which is non-perturbative and works for more general potentials does not seem to provide explicit information on the Hölder exponent. In 2009, by using KAM reducibility method of Eliasson [Eli92], Amor [Amo09] obtained the first 1 2 -Hölder continuity result of the IDS for 1D and multi-frequency QP Schrödinger operators with small analytic potentials and Diophantine frequencies. Later, the one-frequency result of Amor was largely generalized by Avila-Jitomirskaya [AJ10] to the non-perturbative case via the quantitative almost reducibility and localization method. In the regime of the positive Lyapunov exponent, Goldstein-Schlag [GS08] successfully proved the ( 1 2m −)-Hölder continuity of the IDS for 1D and one-frequency QP Schrödinger operators with potentials given by analytic perturbations of certain trigonometric polynomials of degree m ≥ 1. This celebrated work provides in fact the finite volume version of estimates on the IDS. We remark that the Hölder continuity of the IDS for 1D and multi-frequency QP Schrödinger operators with large potentials is hard to prove. In [GS01], by using the LDT for transfer matrix and the avalanche principle, Goldstein-Schlag showed the weak Hölder continuity (cf. (1.2)) of the IDS for 1D and multi-frequency QP Schrödinger operators assuming the positivity of the Lyapunov exponent and strong Diophantine frequencies. The weak Hölder continuity of the IDS for higher dimensional QP Schrödinger operators has been established in [Sch01,Bou07,Liu20]. Very recently, Ge-You-Zhao [GYZ22] proved the ( 1 2m −)-Hölder continuity of the IDS for higher dimensional QP Schrödinger operators with small exponential long-range hopping and trigonometric polynomial (of degree m) potentials via the reducibility argument. By Aubry duality, they can obtain the ( 1 2m −)-Hölder continuity of the IDS for 1D and multi-frequency QP operators with a finite range hopping.
Of course, the references mentioned as above are far from complete and we refer the reader to [Bou05,MJ17,Dam17] for more recent results on the study of both Anderson localization and the Hölder regularity of the IDS for lattice QP Schrödinger operators.
1.1. Bourgain's problems. The remarkable Green's function estimates of [Bou00] should be not restricted to the proof of ( 1 2 −)-Hölder regularity of the IDS for AMO only. In fact, in [Bou00] (cf. Page 89), Bourgain made three comments on the possible extensions of his method:
(1) In fact, one may also recover the Anderson localization results from [Sin87] and [FSW90] in the perturbative case; (2) One may hope that it may be combined with nonperturbative arguments in the spirit of [BG00, GS01] to establish ( 1 2 −)-Hölder regularity assuming positivity of the Lyapunov exponent only;
(3) It may also allow progress in the multi-frequency case (perturbative or nonperturbative) where regularity estimates of the form (0.28) 3 are the best obtained so far. An extension of (2) has been accomplished by Goldstein-Schlag [GS08]. The answer to the extension of (1) is highly nontrivial due to the following reasons:
• The Green's function on good sets (cf. Section 3 for details) only has a sub-exponential off-diagonal decay estimate rather than an exponential one required by proving Anderson localization; • At the s-th iteration step (s ≥ 1), the resonances of [Bou00] are characterized as min{ θ + kω − θ s,1 , θ + kω − θ s,2 } ≤ δ s ∼ δ C s 0 , C > 1. However, the symmetry information of θ s,1 and θ s,2 is missing. Actually, in [Bou00], it might be θ s,1 + θ s,2 = 0 because of the construction of resonant blocks;
• If one tries to extend the method of Bourgain [Bou00] to higher lattice dimensions, a new difficulty arises: the resonant blocks at each iteration step cannot be cubes analogous to the intervals appearing in the 1D case.
To extend the method of Bourgain [Bou00] to higher lattice dimensions and recover Anderson localization, one has to address the above issues; this is the main motivation of the present paper.
1.2. Main results. Throughout, we consider on ℓ²(Z^d) the operator
H(θ) = ε∆ + cos 2π(θ + n·ω)δ_{n,n′}, ε > 0, (1.3)
where the discrete Laplacian ∆ is defined as
∆(n, n′) = δ_{‖n−n′‖₁,1}, where ‖n‖₁ := Σ_{i=1}^d |n_i|.
³ That is, a weak Hölder continuity estimate
|N(E) − N(E′)| ≤ e^{−(log 1/|E−E′|)^ζ}, ζ ∈ (0, 1), (1.2)
where N(·) denotes the IDS.
For the diagonal part of (1.3), we have θ ∈ T = R/Z, ω ∈ [0, 1]^d and n·ω = Σ_{i=1}^d n_i ω_i. Throughout the paper, we assume that ω ∈ R_{τ,γ} for some 0 < τ < 1 and γ > 0, where
R_{τ,γ} = {ω ∈ [0, 1]^d : ‖n·ω‖ = inf_{l∈Z} |l − n·ω| ≥ γ e^{−‖n‖^τ} for all n ∈ Z^d \ {0}}, (1.4)
and ‖n‖ := sup_{1≤i≤d} |n_i|.
We aim to extend the method of Bourgain [Bou00] to higher lattice dimensions and establish quantitative Green's function estimates assuming (1.4). As applications, we prove the arithmetic version of Anderson localization and the finite volume version of the (1/2−)-Hölder continuity of the IDS for (1.3).
1.2.1. Quantitative Green's function estimates. The first main result of this paper is a quantitative version of Green's function estimates, which will imply both the arithmetic Anderson localization and the finite volume version of the (1/2−)-Hölder continuity of the IDS. The estimates on the Green's functions are based on a multi-scale induction argument.
Let Λ ⊂ Z d and denote by R Λ the restriction operator. Given E ∈ R, the Green's function (if exists) is defined by
T −1 Λ (E; θ) = (H Λ (θ) − E) −1 , H Λ (θ) = R Λ H(θ)R Λ .
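For readers who wish to experiment numerically, the following minimal Python sketch (illustrative only; the function name H_Lambda and all sample parameters are our own hypothetical choices, not part of the paper) assembles the finite-volume matrix H_Λ(θ) of (1.3) on a cube Λ_N = {k ∈ Z^d : ‖k‖ ≤ N}; the Green's function T_Λ^{−1}(E; θ) is then simply the inverse of H_Λ(θ) − E whenever E ∉ σ(H_Λ(θ)).

    import numpy as np
    from itertools import product

    # Illustrative sketch only: assemble H_Lambda(theta) for (1.3) on the cube
    # Lambda_N = {k in Z^d : |k|_infty <= N}; 'H_Lambda' is our own name.
    def H_Lambda(theta, omega, eps, N, d=2):
        sites = list(product(range(-N, N + 1), repeat=d))
        index = {k: i for i, k in enumerate(sites)}
        H = np.zeros((len(sites), len(sites)))
        for k, i in index.items():
            # diagonal part: cos 2*pi*(theta + k . omega)
            H[i, i] = np.cos(2 * np.pi * (theta + np.dot(k, omega)))
            # off-diagonal part: eps * Delta (hopping for |n - n'|_1 = 1)
            for e in range(d):
                for sgn in (-1, 1):
                    kp = list(k)
                    kp[e] += sgn
                    j = index.get(tuple(kp))
                    if j is not None:
                        H[i, j] = eps
        return H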
Recall that ω ∈ R_{τ,γ} and τ ∈ (0, 1). We fix a constant c > 0 so that 1 < c²⁰ < 1/τ. At the s-th iteration step, let δ_s^{−1} (resp. N_s) describe the resonance strength (resp. the size of the resonant blocks), defined by
N_{s+1} = [ |log_γ δ_s|^{1/(c⁵τ)} ],  |log_γ δ_{s+1}| = |log_γ δ_s|^{c⁵},  δ₀ = ε^{1/10},
where [x] denotes the integer part of x ∈ R. If a ∈ R, let ‖a‖ = dist(a, Z) = inf_{l∈Z} |l − a|. For z = a + √−1 b ∈ C with a, b ∈ R, define ‖z‖ = (|b|² + ‖a‖²)^{1/2}.
Denote by dist(·, ·) the distance induced by the supremum norm on R^d. Then we have
Theorem 1.1. Let ω ∈ R τ,γ . Then there is some ε 0 = ε 0 (d, τ, γ) > 0 so that, for 0 < ε ≤ ε 0 and E ∈ [−2, 2], there exists a sequence {θ s = θ s (E)} s ′ s=0 ⊂ C (s ′ ∈ N ∪ {+∞})
with the following properties. Fix any θ ∈ T. If a finite set Λ ⊂ Z d is s-good (cf. (e) s of the Statement 3.1 for the definition of s-good sets, and Section 3 for the definitions of {θ s } s ′ s=0 , the sets P s , Q s , Ω s k ), then
‖T_Λ^{−1}(E; θ)‖ < δ_{s−1}^{−3} sup_{k∈P_s: Ω̃^s_k⊂Λ} ‖θ + k·ω − θ_s‖^{−1} · ‖θ + k·ω + θ_s‖^{−1} < δ_s^{−3},
|T_Λ^{−1}(E; θ)(x, y)| < e^{−(1/4)|log ε|·‖x−y‖₁} for ‖x − y‖ > N_s^{c³}.
In particular, for any finite set Λ ⊂ Z^d, there exists some Λ̃ satisfying
Λ ⊂ Λ̃ ⊂ {k ∈ Z^d : dist(k, Λ) ≤ 50N_s^{c²}}
so that, if min_{k∈Λ̃*} min_{σ=±1} ‖θ + k·ω + σθ_s‖ ≥ δ_s, then
‖T_{Λ̃}^{−1}(E; θ)‖ ≤ δ_{s−1}^{−3} δ_s^{−2} ≤ δ_s^{−3},
|T_{Λ̃}^{−1}(E; θ)(x, y)| ≤ e^{−(1/4)|log ε|·‖x−y‖} for ‖x − y‖ > N_s^{c³},
where Λ̃* = {k ∈ ½Z^d : dist(k, Λ̃) ≤ ½}.
Let us refer to Section 3 for a complete description of our Green's function estimates.
1.2.2. Arithmetic Anderson localization and Hölder continuity of the IDS. As an application of the quantitative Green's function estimates, we first prove the following arithmetic version of Anderson localization for H(θ). Let τ₁ > 0 and define
Θ_{τ₁} = {(θ, ω) ∈ T × R_{τ,γ} : ‖2θ + n·ω‖ ≤ e^{−‖n‖^{τ₁}} holds for finitely many n ∈ Z^d}.
We have
Theorem 1.2. Let H(θ) be given by (1.3) and let 0 < τ₁ < τ. Then there exists some ε₀ = ε₀(d, τ, γ) > 0 such that, if 0 < ε ≤ ε₀, then for (θ, ω) ∈ Θ_{τ₁}, H(θ) satisfies Anderson localization.
Remark 1.1. It is easy to check both mes(T\Θ τ1,ω ) = 0 and mes(R τ,γ \Θ τ1,θ ) = 0, where Θ τ1,ω = {θ ∈ T : (θ, ω) ∈ Θ τ1 }, Θ τ1,θ = {ω ∈ R τ,γ : (θ, ω) ∈ Θ τ1 } and mes(·) denotes the Lebesgue measure. Thus Anderson localization can be established either by fixing ω ∈ R τ,γ and removing θ in the spirit of [Jit99], or by fixing θ ∈ T and removing ω in the spirit of [BG00,BGS02].
The second application is a proof of the finite volume version of ( 1 2 −)-Hölder continuity of the IDS for H(θ). For a finite set Λ, denote by #Λ the cardinality of Λ. Let
N Λ (E; θ) = 1 #Λ #{λ ∈ σ(H Λ (θ)) : λ ≤ E}
and denote by
N (E) = lim N →∞ N ΛN (E; θ) (1.5) the IDS, where Λ N = {k ∈ Z d : k ≤ N } for N > 0.
It is well-known that the limit in (1.5) exists and is independent of θ for a.e. θ.
Theorem 1.3. Let H(θ) be given by (1.3) and let ω ∈ R τ,γ . Then there exists some ε 0 = ε 0 (d, τ, γ) > 0 such that if 0 < ε ≤ ε 0 , then for any small µ > 0 and 0 < η < η 0 (d, τ, γ, µ), we have for sufficiently large N depending on η,
sup_{θ∈T, E∈R} (N_{Λ_N}(E + η; θ) − N_{Λ_N}(E − η; θ)) ≤ η^{1/2−µ}. (1.6)
In particular, the IDS is Hölder continuous of exponent ι for any ι ∈ (0, 1/2).
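As a purely numerical illustration of the finite volume quantity N_Λ(E; θ) (reusing the hypothetical H_Lambda sketch given after the definition of H_Λ(θ) above; the frequency values below are arbitrary samples, not verified against (1.4)), one may approximate the increment appearing in (1.6) as follows.

    import numpy as np

    # Illustrative sketch only: finite-volume IDS by eigenvalue counting;
    # requires H_Lambda from the earlier sketch to be in scope.
    def N_Lambda(E, theta, omega, eps, N, d=2):
        evals = np.linalg.eigvalsh(H_Lambda(theta, omega, eps, N, d))
        return np.count_nonzero(evals <= E) / len(evals)

    omega = np.array([0.7548776662, 0.5698402910])   # arbitrary sample frequencies
    # increment of the finite-volume IDS over a window of width 2*eta, cf. (1.6)
    print(N_Lambda(0.1, 0.3, omega, 1e-3, 8) - N_Lambda(-0.1, 0.3, omega, 1e-3, 8))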
Let us give some remarks on our results.
(1) The Green's function estimates can be extended to the exponential long-range hopping case, and need not be restricted to the cosine potential. Beyond the proofs of arithmetic Anderson localization and the finite volume version of the (1/2−)-Hölder regularity of the IDS, the quantitative Green's function estimates should have potential applications to other problems, such as estimates on the Lebesgue measure of the spectrum, dynamical localization, estimates on the level spacings of eigenvalues, and the finite volume version of localization. One may even expect fine results for the Melnikov persistency problem (cf. [Bou97]) by employing our Green's function estimates method.
(2) As mentioned previously, Ge-You [GY20] proved the first arithmetic Anderson localization result for higher dimensional QP operators with exponential long-range hopping and the cosine potential via their reducibility method. Our result is valid for frequencies satisfying the sub-exponential non-resonance condition (cf. (1.4)) of Rüssmann type [Rüs80], which slightly generalizes the Diophantine type localization result of [GY20]. While the Rüssmann type condition is sufficient for the use of the classical KAM method, it is not clear whether such a condition still suffices for the validity of the MSA method. Certainly, the localization results of both [GY20] and the present work are perturbative⁴. Finally, since our proof of arithmetic Anderson localization is based on Green's function estimates, it could be improved to obtain the finite volume version of Anderson localization as obtained in [GS11]. (3) Using the Aubry duality together with Amor's result [Amo09] already leads to the 1/2-Hölder continuity of the IDS for higher dimensional QP operators with small exponential long-range hopping and the cosine potential assuming Diophantine frequencies. So our (1/2−)-Hölder continuity result is weaker than that of [Amo09] in the Diophantine frequency case. However, we want to emphasize that the method of Amor seems valid only for estimating the limit N(E), and provides no precise information on the finite volume quantity N_Λ(E; θ). In this context, our result (cf. (1.6)) is also new, as it gives a uniform upper bound on the number of eigenvalues inside a small interval. In addition, our result improves the upper bound on the number of eigenvalues of Schlag (cf. Proposition 2.2 of [Sch01]) in the special case that the potential is given by the cosine function.
1.3. Notations and structure of the paper.
• Given A ∈ C and B ∈ C, we write A ≲ B (resp. A ≳ B) if there is some C = C(d, τ, γ) > 0 depending only on d, τ, γ so that |A| ≤ C|B| (resp. |A| ≥ C|B|). We also denote A ∼ B ⇔ 1/C < A/B < C, and for some D > 0, A ∼_D B ⇔ 1/(CD) < A/B < CD.
• The determinant of a matrix M is denoted by det M.
⁴ In fact, Bourgain [Bou02] has proven that non-perturbative localization cannot be expected in dimensions d ≥ 2. More precisely, consider H⁽²⁾ = λ∆ + 2 cos 2π(θ + n·ω)δ_{n,n′} on Z². Using the Aubry duality together with a result of Bourgain [Bou02] yields: for any λ ≠ 0, there exists a set Ω ⊂ T² of positive measure with the following property, namely, for ω ∈ Ω, there exists a set Θ ⊂ T of positive measure, such that, for θ ∈ Θ, H⁽²⁾ does not satisfy Anderson localization.
• For n ∈ R^d, let ‖n‖₁ := Σ_{i=1}^d |n_i| and ‖n‖ := sup_{1≤i≤d} |n_i|. Denote by dist(·, ·) the distance induced by ‖·‖ on R^d, and define
diam Λ = sup_{k,k′∈Λ} ‖k − k′‖. Given n ∈ Z^d, Λ₁ ⊂ ½Z^d and L > 0, denote Λ_L(n) = {k ∈ Z^d : ‖k − n‖ ≤ L} and Λ_L(Λ₁) = {k ∈ Z^d : dist(k, Λ₁) ≤ L}. In particular, write Λ_L = Λ_L(0).
• Assume Λ′ ⊂ Λ ⊂ Z^d. Define the relative boundaries ∂⁺_Λ Λ′ = {k ∈ Λ : dist(k, Λ′) = 1}, ∂⁻_Λ Λ′ = {k ∈ Λ : dist(k, Λ \ Λ′) = 1} and ∂_Λ Λ′ = {(k, k′) : ‖k − k′‖ = 1, k ∈ ∂⁻_Λ Λ′, k′ ∈ ∂⁺_Λ Λ′}.
• Let Λ ⊂ Z^d and let T : ℓ²(Z^d) → ℓ²(Z^d) be a linear operator. Define T_Λ = R_Λ T R_Λ,
where R_Λ is the restriction operator. Denote by ⟨·, ·⟩ the standard inner product on ℓ²(Z^d), and set T_Λ(x, y) = ⟨δ_x, T_Λ δ_y⟩ for x, y ∈ Λ. By ‖T_Λ‖ we mean the standard operator norm of T_Λ. The spectrum of the operator T is denoted by σ(T). Finally, I typically denotes the identity operator.
The paper is organized as follows. The key ideas of the proof are introduced in §2. The proofs of Theorems 1.1, 1.2 and 1.3 are presented in §3, §4 and §5, respectively. Some useful estimates can be found in the appendix.
Key ideas of the proof
The main scheme of our proof is adapted from Bourgain [Bou00]. The key ingredient of the proof in [Bou00] is that the resonances arising in the Green's function estimates can be completely determined by the roots of certain quadratic polynomials. These polynomials are produced in a Fröhlich-Spencer type MSA induction procedure. However, in the estimates of Green's functions restricted to the resonant blocks, Bourgain applied Cramer's rule directly and provided estimates on certain determinants. It turns out that these determinants can be well controlled via the estimates of previous induction steps, the Schur complement argument and the Weierstrass preparation theorem. It is the preparation type technique that yields the desired quadratic polynomials. We emphasize that this new method of Bourgain is entirely free of eigenvalue variations and eigenfunction parametrization.
However, in order to extend the method to achieve arithmetic version of Anderson localization in higher dimensions, some new ideas are required:
• The off-diagonal decay of the Green's function obtained by Bourgain [Bou00] is sub-exponential rather than exponential, which is not sufficient for a proof of Anderson localization. We resolve this issue by modifying the definitions of the resonant blocks Ω^s_k ⊂ Ω̃^s_k ⊂ Z^d, and allowing diam Ω^s_k ∼ (diam Ω̃^s_k)^ρ, 0 < ρ < 1. This sublinear bound is crucial for a proof of exponential off-diagonal decay; Bourgain's argument in effect requires ρ = 1. Another issue we want to highlight is that in many places of [Bou00] Bourgain provided only the outputs of iterating the resolvent identity, without presenting the details. This motivates us to write down the whole iteration argument, which is also important for the exponential decay estimate.
• To prove Anderson localization, one has to eliminate the energy E ∈ R appeared in the Green's function estimates by removing θ or ω further. Moreover, if one wants to prove an arithmetic version of Anderson localization, a geometric description of resonances (i.e., the symmetry of zeros of certain functions appearing as the perturbations of quadratic polynomials in the present context) is essential. Precisely, at the s-th iteration step, using the Weierstrass preparation theorem Bourgain [Bou00] had shown the existence of zeros θ s,1 (E) and θ s,2 (E), but provided no symmetry information. Indeed, the symmetry property of θ s,1 (E) and θ s,2 (E) relies highly on that of resonant blocks Ω s k . However, in the construction of Ω s k in [Bou00], the symmetry property is missing. In this paper, we prove in fact
θ s,1 (E) + θ s,2 (E) = 0.
The main idea is that we reconstruct Ω s k so that it is symmetrical about k and allow the center k ∈ 1 2 Z d . • In the construction of resonant blocks [Bou00], the property that
Ω s ′ k ′ ∩ Ω s k = ∅ ⇒ Ω s ′ k ′ ⊂ Ω s k for s ′ < s (2.1)
plays a central role. In the 1D case, Ω̃^s_k can be defined as an interval so that (2.1) holds true. This interval structure of Ω̃^s_k is important for obtaining the desired estimates via the resolvent identity. However, to generalize this argument to higher dimensions, one needs to give up the "cube" (interval-like) structure of Ω̃^s_k in order to fulfill the property (2.1). As a result, the geometric description of Ω̃^s_k becomes significantly more complicated, and the estimates relying on the resolvent identity are no longer clear. We address this issue by proving that Ω̃^s_k can be constructed satisfying (2.1) while staying inside certain enlarged cubes, namely
Λ N c 2 s ⊂ Ω s k − k ⊂ Λ N c 2 s +50N c 2 s−1 .
• We want to mention that in the estimates of zeros of certain perturbations of quadratic polynomials, we use the standard Rouché theorem rather than the Weierstrass preparation theorem as in [Bou00]. This technical modification avoids controlling the first order derivatives of determinants and simplifies the proof significantly (a small numerical illustration is given right after this list).
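As a small numerical illustration of this Rouché-type step (with entirely hypothetical numbers), one may count the zeros of a perturbed quadratic f(z) = (z − z₁)(z + z₁) + r(z) inside a disc via the argument principle; Rouché's theorem guarantees the count equals that of the unperturbed quadratic once |r| is dominated on the boundary circle.

    import numpy as np

    # Illustrative sketch only: zeros of f(z) = (z - z1)(z + z1) + r(z) in
    # |z| < R counted by the argument principle; here |r| << R^2 - z1^2 on
    # |z| = R, so Rouche's theorem predicts exactly 2 zeros (near +-z1).
    z1, R = 0.02, 0.1
    f = lambda z: (z - z1) * (z + z1) + 1e-6 * np.exp(z)
    df = lambda z: 2 * z + 1e-6 * np.exp(z)
    t = np.linspace(0.0, 2.0 * np.pi, 20001)
    z = R * np.exp(1j * t)
    integrand = df(z) / f(z) * 1j * z          # f'(z)/f(z) dz with dz = i z dt
    n_zeros = (integrand[:-1].sum() * (t[1] - t[0]) / (2j * np.pi)).real
    print(round(n_zeros))                      # -> 2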
The proofs of both Theorem 1.2 and Theorem 1.3 follow from the estimates in Theorem 1.1.
Quantitative Green's function estimates
The spectrum satisfies σ(H(θ)) ⊂ [−2, 2], since ‖H(θ)‖ ≤ 1 + 2dε < 2 if 0 < ε < 1/(2d). In this section, we fix θ ∈ T and E ∈ [−2, 2]. Write E = cos 2πθ₀ with θ₀ ∈ C. Consider
T(E; θ) = H(θ) − E = D_n δ_{n,n′} + ε∆, (3.1)
where
D_n = cos 2π(θ + n·ω) − E. (3.2)
For simplicity, we may omit the dependence of T (E; θ) on E, θ below. We will use a multi-scale analysis induction to provide estimates on Green's functions. Of particular importance is the analysis of resonances, which will be described by zeros of certain functions appearing as perturbations of some quadratic polynomials. Roughly speaking, at the s-th iteration step, the set Q s ⊂ 1 2 Z d of singular sites will be completely described by a pair of symmetric zeros of certain functions, i.e.,
Q s = σ=±1 {k ∈ P s : θ + k · ω + σθ s < δ s } .
While the Green's functions restricted on Q s can not be generally well controlled, the algebraic structure of Q s combined with the non-resonant condition of ω may lead to fine separation property of singular sites. As a result, one can cover Q s with a new generation of resonant blocks Ω s+1 k (k ∈ P s+1 ). It turns out that one can control T −1 Ω s+1 k via zeros ±θ s+1 of some new functions which are also perturbations of quadratic polynomials in the sense that
det T Ω s+1 k ∼ δ −2 s θ + k · ω − θ s+1 · θ + k · ω + θ s+1 .
The key point is that some
T −1 Ω s+1 k while Ω s+1 k intersecting Q s become controllable 5
in the (s + 1)-th step. Moreover, the completely uncontrollable singular sites form the (s + 1)-th singular sites, i.e.,
Q s+1 = σ=±1 {k ∈ P s+1 : θ + k · ω + σθ s+1 < δ s+1 } .
Now we turn to the statement of our main result on the multi-scale type Green's function estimates. Define the induction parameters as follows.
N_{s+1} = [ |log_γ δ_s|^{1/(c⁵τ)} ],  |log_γ δ_{s+1}| = |log_γ δ_s|^{c⁵}. Thus N_s^{c⁵} − 1 ≤ N_{s+1} ≤ (N_s + 1)^{c⁵}.
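To get a feel for these double-exponential scales, here is a tiny sketch (all parameter values are hypothetical samples, and we read log_γ x as log(γ/x)); the point is simply that N_s grows and δ_s decays super-exponentially in s.

    import math

    # Illustrative sketch only.  With delta_0 = eps^{1/10} and
    # L_s := log(gamma/delta_s), the induction reads
    #   N_{s+1} = [ L_s^{1/(c^5 tau)} ],  L_{s+1} = L_s^{c^5}.
    gamma, tau, c, eps = 1.0, 0.5, 1.2, 1e-100
    L = math.log(gamma / eps ** 0.1)           # L_0
    for s in range(5):
        N_next = int(L ** (1.0 / (c ** 5 * tau)))
        print(f"s={s + 1}: N ~ {N_next}, log(gamma/delta) ~ {L:.3e}")
        L = L ** (c ** 5)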
We first introduce the following statement.
Statement 3.1 (P_s (s ≥ 1)). Let
Q^±_{s−1} = {k ∈ P_{s−1} : ‖θ + k·ω ± θ_{s−1}‖ < δ_{s−1}}, Q_{s−1} = Q⁺_{s−1} ∪ Q⁻_{s−1}, (3.3)
Q̃^±_{s−1} = {k ∈ P_{s−1} : ‖θ + k·ω ± θ_{s−1}‖ < δ_{s−1}^{1/100}}, Q̃_{s−1} = Q̃⁺_{s−1} ∪ Q̃⁻_{s−1}. (3.4)
We distinguish the following two cases:
(C1)_{s−1}. dist(Q̃⁻_{s−1}, Q⁺_{s−1}) > 100N_s^c, (3.5)
(C2)_{s−1}. dist(Q̃⁻_{s−1}, Q⁺_{s−1}) ≤ 100N_s^c. (3.6)
Let Z^d ∋ l_{s−1} = 0 if (3.5) holds true, and l_{s−1} = i_{s−1} − j_{s−1} if (3.6) holds true, where i_{s−1} ∈ Q⁺_{s−1} and j_{s−1} ∈ Q̃⁻_{s−1} are such that ‖i_{s−1} − j_{s−1}‖ ≤ 100N_s^c in the case (C2)_{s−1}. Set Ω⁰_k = {k} (k ∈ Z^d). Let Λ ⊂ Z^d be a finite set. We say Λ is (s − 1)-good iff
k′ ∈ Q_{s′}, Ω^{s′}_{k′} ⊂ Λ, Ω^{s′}_{k′} ⊂ Ω^{s′+1}_k ⇒ Ω̃^{s′+1}_k ⊂ Λ for s′ < s − 1, and {k ∈ P_{s−1} : Ω̃^{s−1}_k ⊂ Λ} ∩ Q_{s−1} = ∅. (3.7)
Then (a) s . There are P s ⊂ Q s−1 so that the following holds true. We have in the case (C1) s−1 that
P s = Q s−1 ⊂ k ∈ Z d + 1 2 s−1 i=0 l i : min σ=±1 θ + k · ω + σθ s−1 < δ s−1 . (3.8) For the case (C2) s−1 , we have P s ⊂ k ∈ Z d + 1 2 s−1 i=0 l i : θ + k · ω < 3δ 1 100 s−1 , or P s ⊂ k ∈ Z d + 1 2 s−1 i=0 l i : θ + k · ω + 1 2 < 3δ 1 100 s−1 .
(3.9)
For every k ∈ P s , we can find resonant blocks Ω s k , Ω s k ⊂ Z d with the following properties. If (3.5) holds true, then
Λ Ns (k) ⊂ Ω s k ⊂ Λ Ns+50N c 2 s−1 (k), Λ N c s (k) ⊂ Ω s k ⊂ Λ N c s +50N c 2 s−1 (k),
and if (3.6) holds true, then
Λ 100N c s (k) ⊂ Ω s k ⊂ Λ 100N c s +50N c 2 s−1 (k), Λ N c 2 s (k) ⊂ Ω s k ⊂ Λ N c 2 s +50N c 2 s−1 (k).
These resonant blocks are constructed satisfying the following two properties.
(a1) s . Ω s k ∩ Ω s ′ k ′ = ∅ (s ′ < s) ⇒ Ω s ′ k ′ ⊂ Ω s k , Ω s k ∩ Ω s ′ k ′ = ∅ (s ′ < s) ⇒ Ω s ′ k ′ ⊂ Ω s k , dist( Ω s k , Ω s k ′ ) > 10 diam Ω s k for k = k ′ ∈ P s . (3.10) (a2) s . The translation of Ω s k , Ω s k − k ⊂ Z d + 1 2 s−1 i=0 l i
is independent of k ∈ P s and symmetrical about the origin.
(b) s . Q s−1 is covered by Ω s k (k ∈ P s ) in the sense that, for every k ′ ∈ Q s−1 , there exists k ∈ P s such that Ω s−1 k ′ ⊂ Ω s k . (3.11) (c) s . For each k ∈ P s , Ω s k contains a subset A s k ⊂ Ω s k with #A s k ≤ 2 s such that Ω s k \ A s k is (s − 1)-good. Moreover, A s k − k is independent of k and is symmetrical about the origin. (d) s .
There is θ s = θ s (E) ∈ C with the following properties. Replacing θ + n · ω by z + (n − k) · ω, and restricting z in
{z ∈ C : min σ=±1 z + σθ s < δ 1 10 4 s }, (3.12) then T Ω s k becomes M s (z) = T (z) Ω s k −k = (cos 2π(z + n · ω)δ n,n ′ − E + ε∆) Ω s k −k . Then M s (z) ( Ω s k −k)\(A s k −k) is invertible and we can define the Schur comple- ment S s (z) = M s (z) A s k −k − R A s k −k M s (z)R ( Ω s k −k)\(A s k −k) M s (z) ( Ω s k −k)\(A s k −k) −1 × R ( Ω s k −k)\(A s k −k) M s (z)R A s k −k . Moreover, if z belongs to the set defined by (3.12), then we have max x y |S s (z)(x, y)| < 4 + s−1 l=0 δ l < 10, (3.13) and det S s (z) δs−1 ∼ z − θ s · z + θ s . (3.14) (e) s . We say a finite set Λ ⊂ Z d is s-good iff k ′ ∈ Q s ′ , Ω s ′ k ′ ⊂ Λ, Ω s ′ k ′ ⊂ Ω s ′ +1 k ⇒ Ω s ′ +1 k ⊂ Λ for s ′ < s, {k ∈ P s : Ω s k ⊂ Λ} ∩ Q s = ∅.
(3.15)
Assuming Λ is s-good, then
T −1 Λ < δ −3 s−1 sup {k∈Ps: Ω s k ⊂Λ} θ + k · ω − θ s −1 · θ + k · ω + θ s −1 (3.16) < δ −3 s , T −1 Λ (x, y) < e −γs x−y 1 for x − y > N c 3 s , (3.17) where γ 0 = 1 2 | log ε|, γ s = γ s−1 (1 − N 1 c −1 s ) 3 . Thus γ s ց γ ∞ ≥ 1 2 γ 0 = 1 4 | log ε|. (f ) s . We have k ∈ Z d + 1 2 s−1 i=0 l i : min σ=±1 θ + k · ω + σθ s < 10δ 1 100 s ⊂ P s . (3.18)
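The Schur complement S_s(z) appearing in (d)_s is the standard block-matrix object; the following toy sketch (hypothetical matrix, with index sets chosen arbitrarily) checks the exact block-inversion identity (M^{−1})_{AA} = S^{−1} that underlies Schur complement arguments such as Lemma B.1.

    import numpy as np

    # Illustrative sketch only: for an invertible block matrix M and the Schur
    # complement S of the block B, one has (M^{-1})_{AA} = S^{-1}.
    rng = np.random.default_rng(1)
    M = rng.standard_normal((6, 6)) + 7.0 * np.eye(6)   # hypothetical, well invertible
    A, B = [0, 1], [2, 3, 4, 5]                         # A plays the role of A^s_k
    S = (M[np.ix_(A, A)]
         - M[np.ix_(A, B)] @ np.linalg.inv(M[np.ix_(B, B)]) @ M[np.ix_(B, A)])
    print(np.allclose(np.linalg.inv(M)[np.ix_(A, A)], np.linalg.inv(S)))  # True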
The main theorem of this section is Theorem 3.2. Let ω ∈ R τ,γ . Then there is some ε 0 (d, τ, γ) > 0 so that for 0 < ε ≤ ε 0 , the statement P s holds for all s ≥ 1.
The following three subsections are devoted to prove Theorem 3.2.
3.1. The initial step. Recalling (3.1)-(3.2) and cos 2πθ 0 = E, we have
|D n | = 2| sin π(θ + n · ω + θ 0 ) sin π(θ + n · ω − θ 0 )| ≥ 2 θ + n · ω + θ 0 · θ + n · ω − θ 0 .
Denote δ 0 = ε 1/10 and
P 0 = Z d , Q 0 = {k ∈ P 0 : min( θ + k · ω + θ 0 , θ + k · ω − θ 0 ) < δ 0 }. We say a finite set Λ ⊂ Z d is 0-good iff Λ ∩ Q 0 = ∅. Lemma 3.3. If the finite set Λ ⊂ Z d is 0-good, then T −1 Λ < 2 D −1 Λ < δ −2 0 , (3.19) |T −1 Λ (x, y)| < e −γ0 x−y 1 for x − y > 0. (3.20) where γ 0 = 5| log δ 0 | = 1 2 | log ε|. Proof of Lemma 3.3. Assuming Λ is 0-good, we have D −1 Λ < 1 2 δ −2 0 , εD −1 Λ ∆ Λ < dεδ −2 0 < 1 2 δ 7 0 < 1 2 .
Thus
T −1 Λ = I + εD −1 Λ ∆ Λ −1 D −1 Λ and I + εD −1 Λ ∆ Λ −1 may be expanded in the Neumann series (I + εD −1 Λ ∆ Λ ) −1 = +∞ i=0 (−εD −1 Λ ∆ Λ ) i . Hence T −1 Λ < 2 D −1 Λ < δ −2 0 , which implies (3.19).
In addition, if x − y 1 > i, then
(εD −1 Λ ∆ Λ ) i D −1 Λ (x, y) = 0. Hence |T −1 Λ (x, y)| = | i≥ x−y 1 (εD −1 Λ ∆ Λ ) i D −1 Λ (x, y)| < δ 7 x−y 1−2 0 .
In particular,
|T −1 Λ (x, y)| < e −γ0 x−y 1 for x − y > 0 with γ 0 = 5| log δ 0 | = 1 2 | log ε|, which yields (3.20).
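The Neumann series step above is easy to test numerically; the following sketch (random diagonal with a 0-good-style lower bound built in; all numbers hypothetical) compares the truncated series (I + εD⁻¹∆)⁻¹D⁻¹ ≈ Σ_i (−εD⁻¹∆)^i D⁻¹ with the exact inverse on a 1D block.

    import numpy as np

    # Illustrative sketch only: Neumann series for T = D + eps*Delta on a
    # block whose diagonal is bounded below (the 0-good situation).
    rng = np.random.default_rng(0)
    n, eps = 50, 1e-8
    delta0 = eps ** 0.1
    D = np.diag(np.sign(rng.standard_normal(n)) * (2 * delta0 ** 2 + rng.random(n)))
    Delta = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    T = D + eps * Delta
    Dinv = np.linalg.inv(D)
    S, term = np.eye(n), np.eye(n)
    for _ in range(5):                      # truncated Neumann sum
        term = term @ (-eps * Dinv @ Delta)
        S = S + term
    print(np.linalg.norm(S @ Dinv - np.linalg.inv(T)))   # negligible residual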
3.2. Verification of P 1 . If Λ ∩ Q 0 = ∅, then the Neumann series argument of previous subsection does not work. Thus we use the resolvent identity argument to estimate T −1 Λ , where Λ is 1-good (1-good will be specified later) but might intersect with Q 0 (not 0-good).
First, we construct blocks Ω¹_k (k ∈ P₁) to cover the singular set Q₀. Second, we prove the bound estimate
‖T^{−1}_{Ω̃¹_k}‖ < δ₀^{−2} ‖θ + k·ω − θ₁‖^{−1} · ‖θ + k·ω + θ₁‖^{−1},
where Ω̃¹_k is an extension of Ω¹_k, and θ₁ is obtained by analyzing the root in z of the equation det T(z − k·ω)_{Ω̃¹_k} = 0. Finally, we combine the estimates of the T^{−1}_{Ω̃¹_k} to obtain that of T^{−1}_Λ by the resolvent identity, assuming Λ is 1-good. Recall that 1 < c²⁰ < 1/τ. Let N₁ = [ |log_γ δ₀|^{1/(c⁵τ)} ]. Define
Q^±₀ = {k ∈ Z^d : ‖θ + k·ω ± θ₀‖ < δ₀}, Q₀ = Q⁺₀ ∪ Q⁻₀,
Q̃^±₀ = {k ∈ Z^d : ‖θ + k·ω ± θ₀‖ < δ₀^{1/100}}, Q̃₀ = Q̃⁺₀ ∪ Q̃⁻₀.
We distinguish three steps.
STEP1: The case (C1)₀ occurs, i.e.,
dist(Q̃⁻₀, Q⁺₀) > 100N₁^c. (3.21)
Remark 3.1. We have in fact dist(Q̃⁻₀, Q⁺₀) = dist(Q̃⁺₀, Q⁻₀). Thus (3.21) also implies dist(Q̃⁺₀, Q⁻₀) > 100N₁^c.
We refer to the Appendix A for a detailed proof.
Assuming (3.21), we define
P 1 = Q 0 = {k ∈ Z d : min( θ + k · ω + θ 0 , θ + k · ω − θ 0 ) < δ 0 }. (3.22)
Associate every k ∈ P 1 an N 1 -block Ω 1 k := Λ N1 (k) and an N c
1 -block Ω 1 k := Λ N c 1 (k). Then Ω 1 k − k ⊂ Z d is independent of k ∈ P 1 and symmetrical about the origin. If k = k ′ ∈ P 1 , k − k ′ ≥ min 100N c 1 , | log γ 2δ 0 | 1 τ ≥ 100N c 1 . Thus dist Ω 1 k , Ω 1 k ′ > 10 diam Ω 1 k for k = k ′ ∈ P 1 . For k ∈ Q − 0 , we consider M 1 (z) := T (z) Ω 1 k −k = (cos 2π(z + n · ω)δ n,n ′ − E + ε∆) n∈ Ω 1 k −k defined in z ∈ C : |z − θ 0 | < δ 1 10 0 . (3.23) For n ∈ ( Ω 1 k − k) \ {0}, we have for 0 < δ 0 ≪ 1, z + n · ω − θ 0 ≥ n · ω − |z − θ 0 | ≥ γe −N cτ 1 − δ 1 10 0 ≥ γe −| log γ δ 0 | 1 c 4 − δ 1 10 0 > δ 1 10 4 0 . For n ∈ Ω 1 k − k, we have z + n · ω + θ 0 ≥ θ + (n + k) · ω + θ 0 − |z − θ 0 | − θ + k · ω − θ 0 ≥ δ 1 100 0 − δ 1 10 0 − δ 0 > 1 2 δ 1 100 0 . Since δ 0 ≫ ε, we have by Neumann series argument M 1 (z) ( Ω 1 k −k)\{0} −1 < 3δ − 1 50 0 .
Now we can apply the Schur complement lemma (cf. Lemma B.1 in the appendix) to provide desired estimates. By Lemma B.1, M 1 (z) −1 is controlled by the inverse of the Schur complement (of ( Ω 1
k − k) \ {0}) S 1 (z) = M 1 (z) {0} − R {0} M 1 (z)R ( Ω 1 k −k)\{0} M 1 (z) ( Ω 1 k −k)\{0} −1 R ( Ω 1 k −k)\{0} M 1 (z)R {0} = −2 sin π(z − θ 0 ) sin π(z + θ 0 ) + r(z) = g(z)((z − θ 0 ) + r 1 (z)),
where g(z) and r 1 (z) are analytic functions in the set defined by (3.23) satisfying
|g(z)| ≥ 2 z + θ 0 > δ 1 100 0 and |r 1 (z)| < ε 2 δ −1 0 < ε. Since |r 1 (z)| < |z − θ 0 | for |z − θ 0 | = δ 1 10 0 , using Róuche theorem implies the equation (z − θ 0 ) + r 1 (z) = 0
has a unique root θ 1 in the set of (3.23) satisfying
|θ 0 − θ 1 | = |r 1 (θ 1 )| < ε, |(z − θ 0 ) + r 1 (z)| ∼ |z − θ 1 |. Moreover, θ 1 is the unique root of det M 1 (z) = 0 in the set (3.23). Since z + θ 0 > 1 2 δ 1 100 0 and |θ 0 − θ 1 | < ε, we get z + θ 1 ∼ z + θ 0 ,
which shows for z being in the set of (3.23),
|S 1 (z)| ∼ z + θ 1 · z − θ 1 , (3.24) M 1 (z) −1 < 4 1 + (M 1 (z) ( Ω 1 k −k)\{0} ) −1 2 1 + |S 1 (z)| −1 < δ −2 0 z + θ 1 −1 · z − θ 1 −1 . (3.25)
where in the first inequality we use Lemma B.1. Now, for k ∈ Q + 0 , we consider
M 1 (z) in z ∈ C : |z + θ 0 | < δ 1 10 0 .
(3.26)
The similar argument shows that det M 1 (z) = 0 has a unique root θ ′ 1 in the set of (3.26). We will show θ 1 + θ ′ 1 = 0. In fact, by Lemma C.1, det M 1 (z) is an even function of z. Then the uniqueness of the root implies θ ′ 1 = −θ 1 . Thus for z being in the set of (3.26), both (3.24) and (3.25) hold true as well. Finally, since M 1 (z) is 1-periodic, (3.24) and (3.25) remain valid for
z ∈ {z ∈ C : min σ=±1 z + σθ 0 < δ 1 10 0 }.
(3.27) From (3.22), we have θ + k · ω belongs to the set of (3.27). Thus for k ∈ P 1 , we get
T −1 Ω 1 k = M 1 (θ + k · ω) −1 < δ −2 0 θ + k · ω − θ 1 −1 · θ + k · ω + θ 1 −1 . (3.28) STEP2: The case (C2) 0 Occurs: i.e., dist Q − 0 , Q + 0 ≤ 100N c 1 . Then there exist i 0 ∈ Q + 0 and j 0 ∈ Q − 0 with i 0 − j 0 ≤ 100N c 1 , such that θ + i 0 · ω + θ 0 < δ 0 , θ + j 0 · ω − θ 0 < δ 1 100 0 . Denote l 0 = i 0 − j 0 . Then l 0 = dist(Q + 0 , Q − 0 ) = dist( Q + 0 , Q − 0 ). Define O 1 = Q − 0 ∪ (Q + 0 − l 0 ). For k ∈ Q + 0 , we have θ + (k − l 0 ) · ω − θ 0 < θ + k · ω + θ 0 + l 0 · ω + 2θ 0 < δ 0 + δ 0 + δ 1 100 0 < 2δ 1 100 0 . Thus O 1 ⊂ o ∈ Z d : θ + o · ω − θ 0 < 2δ 1 100 0 . For every o ∈ O 1 , define its mirror point o * = o + l 0 . Next define P 1 = 1 2 (o + o * ) : o ∈ O 1 = o + l 0 2 : o ∈ O 1 . (3.29) Associate every k ∈ P 1 with a 100N c 1 -block Ω 1 k := Λ 100N c 1 (k) and a N c 2 1 -block Ω 1 k := Λ N c 2 1 (k). Thus Q 0 ⊂ k∈P1 Ω 1 k and Ω 1 k − k ⊂ Z d + l0
2 is independent of k ∈ P 1 and symmetrical about the origin. Notice that
min l 0 2 · ω + θ 0 , l 0 2 · ω + θ 0 − 1 2 = 1 2 l 0 · ω + 2θ 0 ≤ 1 2 ( θ + i 0 · ω + θ 0 + θ + j 0 · ω − θ 0 ) < δ 1 100 0 .
Since δ 0 ≪ 1, only one of
l 0 2 · ω + θ 0 < δ 1 100 0 , l 0 2 · ω + θ 0 − 1 2 < δ 1 100 0 holds true. First, we consider the case l 0 2 · ω + θ 0 < δ 1 100 0 . (3.30) Let k ∈ P 1 . Since k = 1 2 (o + o * ) = (o + l0 2 ) (for some o ∈ O 1 ), we have θ + k · ω ≤ θ + o · ω − θ 0 + l 0 2 · ω + θ 0 < 3δ 1 100 0 . (3.31) Thus if k = k ′ ∈ P 1 , we obtain k − k ′ ≥ log γ 6δ 1 100 0 1 τ ∼ N c 5 1 ≫ 10N c 2 1 , which implies dist( Ω 1 k , Ω 1 k ′ ) > 10 diam Ω 1 k for k = k ′ ∈ P 1 . Consider M 1 (z) := T (z) Ω 1 k −k = ((cos 2π(z + n · ω)δ n,n ′ − E + ε∆) n∈ Ω 1 k −k in z ∈ C : |z| < δ 1 10 3 0 .
(3.32)
For n = ± l0 2 and n ∈ Ω 1 k − k, we have
n · ω ± θ 0 ≥ n ∓ l 0 2 · ω − l 0 2 ω + θ 0 > γe −(2N c 2 1 ) τ − δ 1 100 0 > 2δ 1 10 4 0 .
Thus for z being in the set of (3.32) and n = ± l0 2 , we have
z + n · ω ± θ 0 ≥ n · ω ± θ 0 − |z| > δ 1 10 4 0 . Hence | cos 2π(z + n · ω) − E| ≥ δ 2× 1 10 4 0 ≫ ε.
Using Neumann series argument concludes
M 1 (z) ( Ω 1 k −k)\{± l 0 2 } −1 < δ −3× 1 10 4 0 .
(3.33) Thus by Lemma B.1, M 1 (z) −1 is controlled by the inverse of the Schur complement of ( Ω 1
k − k) \ {± l0 2 }, i.e., S 1 (z) = M 1 (z) {± l 0 2 } − R {± l 0 2 } M 1 (z)R ( Ω 1 k −k)\{± l 0 2 } × M 1 (z) ( Ω 1 k −k)\{± l 0 2 } −1 R ( Ω 1 k −k)\{± l 0 2 } M 1 (z)R {± l 0 2 } . Clearly, det S 1 (z) = det M 1 (z) {± l 0 2 } + O(ε 2 δ − 3 10 4 0 ) = 4 sin π(z + l 0 2 · ω − θ 0 ) sin π(z + l 0 2 · ω + θ 0 ) × sin π(z − l 0 2 · ω − θ 0 ) sin π(z − l 0 2 · ω + θ 0 ) + O(ε 1.5 ).
If l 0 = 0, then det S 1 (z) = −2 sin π(z − θ 0 ) sin π(z + θ 0 ) + O(ε 1.5 ).
In this case, the argument is easier, and we omit the discussion. In the following, we deal with l 0 = 0. By (3.30) and (3.32), we have
z + l 0 2 · ω − θ 0 ≥ l 0 · ω − l 0 2 · ω + θ 0 − |z| > γe −(100N c 1 ) τ − δ 1 100 0 − δ 1 10 3 0 > δ 1 10 4 0 , z − l 0 2 · ω + θ 0 ≥ l 0 · ω − l 0 2 · ω + θ 0 − |z| > γe −(100N c 1 ) τ − δz 1 ≡ l 0 2 · ω + θ 0 (mod Z), |z 1 | = l 0 2 · ω + θ 0 < δ 1 100 0 . Then det S 1 (z) ∼ z + l 0 2 · ω − θ 0 · z − l 0 2 · ω + θ 0 · |(z − z 1 )(z + z 1 ) + r 1 (z)| δ 2 10 4 0 ∼ | (z − z 1 ) (z + z 1 ) + r 1 (z)|,
where r₁(z) is an analytic function in the set of (3.32) with |r₁(z)| < ε ≪ δ₀^{1/10³}. Applying the Rouché theorem shows that the equation (z − z₁)(z + z₁) + r₁(z) = 0 has exactly two roots θ₁, θ′₁ in the set of (3.32), which are perturbations of ±z₁. Notice that the set |z| < δ₀^{1/10³} is symmetrical about the origin and that det S₁(z) is an even function of z (cf. Lemma C.1); the uniqueness of the pair of roots then forces
θ′₁ = −θ₁. Moreover, we have |θ₁ − z₁| ≤ |r₁(θ₁)|^{1/2} < ε^{1/2} and |(z − z₁)(z + z₁) + r₁(z)| ∼ |(z − θ₁)(z + θ₁)|.
Thus for z being in the set of (3.32), we have
det S 1 (z) δ0 ∼ z − θ 1 · z + θ 1 , (3.34) which concludes S 1 (z) −1 ≤ Cδ −1 0 z − θ 1 −1 · z + θ 1 −1 .
Recalling (3.33), we get since Lemma B.1
M 1 (z) −1 < 4 1 + (M 1 (z) ( Ω 1 k −k)\{0} ) −1 2 1 + S 1 (z) −1 < δ −2 0 z + θ 1 −1 · z − θ 1 −1 (3.35)
Thus for the case (3.30), both (3.34) and (3.35) are established for z belonging to z ∈ C : z < δ 1 10 3 0 since M 1 (z) is 1-periodic (in z). By (3.31), for k ∈ P 1 , we also have
T −1 Ω 1 k = M 1 (θ + k · ω) −1 < δ −2 0 θ + k · ω − θ 1 −1 · θ + k · ω + θ 1 −1 . (3.36)
For the case l 0 2 · ω + θ 0 − 1 2 < δ 1 100 0 ,
(3.37)
we have for k ∈ P 1 ,
θ + k · ω − 1 2 < 3δ 1 100 0 . (3.38) Consider M 1 (z) := T (z) Ω 1 k −k = ((cos 2π(z + n · ω)δ n,n ′ − E + ε∆) n∈ Ω 1 k −k in z ∈ C : |z − 1 2 | < δ 1 10 3 0 .
(3.39)
By the similar argument as above, we get
M 1 (z) ( Ω 1 k −k)\{± l 0 2 } −1 < δ −3× 1 10 4 0 .
Thus M 1 (z) −1 is controlled by the inverse of the Schur complement of ( Ω 1
k − k) \ {± l0 2 }: S 1 (z) = M 1 (z) {± l 0 2 } − R {± l 0 2 } M 1 (z)R ( Ω 1 k −k)\{± l 0 2 } × M 1 (z) ( Ω 1 k −k)\{± l 0 2 } −1 R ( Ω 1 k −k)\{± l 0 2 } M 1 (z)R {± l 0 2 } . Direct computation shows det S 1 (z) = det M 1 (z) {± l 0 2 } + O(ε 2 δ − 3 10 4 0 ) = 4 sin π(z + l 0 2 · ω − θ 0 ) sin π(z + l 0 2 · ω + θ 0 ) × sin π(z − l 0 2 · ω − θ 0 ) sin π(z − l 0 2 · ω + θ 0 ) + O(ε 1.5 ).
By (3.37) and (3.39), we have
z + l 0 2 · ω − θ 0 ≥ l 0 · ω − l 0 2 · ω + θ 0 − 1 2 − |z − 1 2 | > γe −(100N c 1 ) τ − δ 1 100 0 − δ 1 10 3 0 > δ 1 10 4 0 , z − l 0 2 · ω + θ 0 ≥ l 0 · ω − l 0 2 · ω + θ 0 − 1 2 − |z − 1 2 | > γe −(100N c 1 ) τ − δ 1 100 0 − δ 1 10 3 0 > δ 1 10 4 0 . Let z 1 satisfy z 1 ≡ l 0 2 · ω + θ 0 (mod Z), |z 1 − 1 2 | = l 0 2 · ω + θ 0 − 1 2 < δ 1 100 0 . Then det S 1 (z) ∼ z + l 0 2 · ω − θ 0 · z − l 0 2 · ω + θ 0 · |(z − z 1 )(z − (1 − z 1 )) + r 1 (z)| δ 2 10 4 0 ∼ | (z − z 1 ) (z − (1 − z 1 )) + r 1 (z)|,
where r₁(z) is an analytic function in the set of (3.39) with |r₁(z)| < ε ≪ δ₀^{1/10³}. Applying the Rouché theorem shows that the equation (z − z₁)(z − (1 − z₁)) + r₁(z) = 0 has exactly two roots θ₁, θ′₁ in the set of (3.39), which are perturbations of z₁ and 1 − z₁, respectively; since the set of (3.39) is symmetrical about z = 1/2 and det S₁(z) is even in z (mod Z), the uniqueness of the roots forces
θ′₁ = 1 − θ₁. Moreover, |θ₁ − z₁| ≤ |r₁(θ₁)|^{1/2} < ε^{1/2} and |(z − z₁)(z − 1 + z₁) + r₁(z)| ∼ |(z − θ₁)(z − (1 − θ₁))|.
Thus for z belonging to the set of (3.39), we have
det S 1 (z) δ0 ∼ z − θ 1 · z − (1 − θ 1 ) = z − θ 1 · z + θ 1 and M 1 (z) −1 < δ −2 0 z − θ 1 −1 · z + θ 1 −1 .
Thus for the case (3.37), both (3.34) and (3.35) hold for z being in {z ∈ C : z − 1 2 < δ 1 10 3 0 }. By (3.38), for k ∈ P 1 , we obtain
T −1 Ω 1 k = M 1 (θ + k · ω) −1 < δ −2 0 θ + k · ω − θ 1 −1 · θ + k · ω + θ 1 −1 . (3.40) For k ∈ P 1 , we define A 1 k ⊂ Ω 1 k to be A 1 k := {k} case (C1) 0 {o} ∪ {o * } case (C2) 0 , (3.41) where k = 1 2 (o + o * ) for some o ∈ O 1 (cf.
(3.29)) in the case (C2) 0 . We have verified (a) 1 -(d) 1 and (f ) 1 .
STEP3: Application of resolvent identity Now we verify (e) 1 which is based on iterating resolvent identity. Note that
log γ δ 1 = log γ δ 0 c 5 .
Recall that
Q ± 1 = {k ∈ P 1 : θ + k · ω ± θ 1 < δ 1 } , Q 1 = Q + 1 ∪ Q − 1 . We say a finite set Λ ⊂ Z d is 1-good iff Λ ∩ Q 0 ∩ Ω 1 k = ∅ ⇒ Ω 1 k ⊂ Λ, {k ∈ P 1 : Ω 1 k ⊂ Λ} ∩ Q 1 = ∅.
(3.42)
Theorem 3.4. If Λ is 1-good, then
‖T^{−1}_Λ‖ < δ₀^{−3} sup_{k∈P₁: Ω̃¹_k⊂Λ} ‖θ + k·ω − θ₁‖^{−1} · ‖θ + k·ω + θ₁‖^{−1}, (3.43)
|T^{−1}_Λ(x, y)| < e^{−γ₁‖x−y‖₁} for ‖x − y‖ > N₁^{c³}, (3.44)
where γ₁ = γ₀(1 − N₁^{1/c−1})³.
Proof of Theorem 3.4. Denote
2Ω¹_k := Λ_{diam Ω¹_k}(k). We have
Lemma 3.5. For k ∈ P₁ \ Q₁, we have
|T^{−1}_{Ω̃¹_k}(x, y)| < e^{−γ̃₀‖x−y‖₁} for x ∈ ∂⁻Ω̃¹_k, y ∈ 2Ω¹_k, (3.45)
where γ̃₀ = γ₀(1 − N₁^{1/c−1}).
Proof of Lemma 3.5. From our construction, we have
Q 0 ⊂ k∈P1 A 1 k ⊂ k∈P1 Ω 1 k . Thus ( Ω 1 k \ A 1 k ) ∩ Q 0 = ∅,
which shows Ω 1 k \ A 1 k is 0-good. As a result, one has by (3.20),
|T −1 Ω 1 k \A 1 k (x, w)| < e −γ0 x−w 1 for x ∈ ∂ − Ω 1 k , w ∈ ( Ω 1 k \ A 1 k ) ∩ 2Ω 1 k .
Since (3.40) and k / ∈ Q 1 , we have
T −1 Ω 1 k < δ −2 0 δ −2 1 < δ −3 1 .
Using resolvent identity implies
|T −1 Ω 1 k (x, y)| = |T −1 Ω 1 k \A 1 k (x, y)χ Ω 1 k \A 1 k (y) − (w ′ ,w)∈∂A 1 k T −1 Ω 1 k \A 1 k (x, w)Γ(w, w ′ )T −1 Ω 1 k (w ′ , y)| < e −γ0 x−y 1 + 4d sup w∈∂ + A 1 k e −γ0 x−w 1 T −1 Ω 1 k < e −γ0 x−y 1 + sup w∈∂ + A 1 k e −γ0( x−y 1− y−w 1)+C | log δ1| < e −γ0 x−y 1 + e −γ0 1−C x−y 1 c −1 1 + | log δ 1 | x−y 1 x−y 1 < e −γ0 x−y 1 + e −γ0 1−N 1 c −1 1 x−y 1 = e − γ0 x−y 1 since N c 1 diam Ω 1 k ∼ x − y 1 , y − w 1 diam Ω 1 k diam Ω 1 k 1 c and | log δ 1 | ∼ | log δ 0 | c 5 ∼ N c 10 τ 1 < N 1 c 1 . (3.46)
We are able to prove Theorem 3.4. First, we prove the estimate (3.43) by Schur's test. Define
P 1 = {k ∈ P 1 : Λ ∩ Ω 1 k ∩ Q 0 = ∅}, Λ ′ = Λ\ k∈ P1 Ω 1 k .
Then Λ ′ ∩ Q 0 = ∅, which shows Λ ′ is 0-good, and (3.19)-(3.20) hold for Λ ′ . We have the following cases.
(
1). Let x / ∈ k∈ P1 2Ω 1 k . Thus N 1 ≤ dist(x, ∂ − Λ Λ ′ )
. For y ∈ Λ, the resolvent identity reads as
T −1 Λ (x, y) = T −1 Λ ′ (x, y)χ Λ ′ (y) − (w,w ′ )∈∂ΛΛ ′ T −1 Λ ′ (x, w)Γ(w, w ′ )T −1 Λ (w ′ , y). Since y∈Λ ′ |T −1 Λ ′ (x, y)χ Λ ′ (y)| ≤ |T −1 Λ ′ (x, x)| + x−y 1>0 |T −1 Λ ′ (x, y)χ Λ ′ (y)| ≤ T −1 Λ ′ + x−y 1>0 e −γ0 x−y 1 ≤ 2δ −2 0 and w∈∂ − Λ Λ ′ |T −1 Λ ′ (x, w)| ≤ x−w 1≥N1 e −γ0 x−w 1 < e − 1 2 γ0N1 , we get y∈Λ |T −1 Λ (x, y)| ≤ y∈Λ ′ |T −1 Λ ′ (x, y)χ Λ ′ (y)| + y∈Λ,(w,w ′ )∈∂ΛΛ ′ |T −1 Λ ′ (x, w)Γ(w, w ′ )T −1 Λ (w ′ , y)| ≤ 2δ −2 0 + 2d w∈∂ − Λ Λ ′ |T −1 Λ ′ (x, w)| · sup w ′ ∈Λ y∈Λ |T −1 Λ (w ′ , y)| ≤ 2δ −2 0 + 1 10 sup w ′ ∈Λ y∈Λ |T −1 Λ (w ′ , y)|.
(2). Let x ∈ 2Ω 1 k for some k ∈ P 1 . Thus by (3.42), we have Ω 1 k ⊂ Λ and k / ∈ Q 1 . For y ∈ Λ, using resolvent identity shows
T −1 Λ (x, y) = T −1 Ω 1 k (x, y)χ Ω 1 k (y) − (w,w ′ )∈∂Λ Ω 1 k T −1 Ω 1 k (x, w)Γ(w, w ′ )T −1 Λ (w ′ , y).
By (3.40) , (3.45) and since
N 1 < diam Ω 1 k dist(x, ∂ − Λ Ω 1 k ), we get y∈Λ |T −1 Λ (x, y)| ≤ y∈Λ |T −1 Ω 1 k (x, y)χ Ω 1 k (y)| + y∈Λ,(w,w ′ )∈∂Λ Ω 1 k |T −1 Ω 1 k (x, w)Γ(w, w ′ )T −1 Λ (w ′ , y)| < # Ω 1 k · T −1 Ω 1 k + CN c 2 d 1 e − γ0N1 sup w ′ ∈Λ y∈Λ |T −1 Λ (w ′ , y)| < CN c 2 d 1 δ −2 0 θ + k · ω − θ 1 −1 · θ + k · ω + θ 1 −1 + 1 10 sup w ′ ∈Λ y∈Λ |T −1 Λ (w ′ , y)| < 1 2 δ −3 0 θ + k · ω − θ 1 −1 · θ + k · ω + θ 1 −1 + 1 10 sup w ′ ∈Λ y∈Λ |T −1 Λ (w ′ , y)|.
Combining estimates of the above two cases yields
T −1 Λ ≤ sup x∈Λ y∈Λ |T −1 Λ (x, y)| < δ −3 0 sup {k∈P1: Ω 1 k ⊂Λ} θ + k · ω − θ 1 −1 · θ + k · ω + θ 1 −1 . (3.47)
Now we prove the off-diagonal decay estimate (3.44). For every w ∈ Λ, define its block in Λ
J w = Λ 1 2 N1 (w) ∩ Λ if w / ∈ k∈ P1 2Ω 1 k , 1 ○ Ω 1 k if w ∈ 2Ω 1 k for some k ∈ P 1 . 2 ○ Then diam J w ≤ max diam Λ 1 2 N1 (w), diam Ω 1 k < 3N c 2 1 . For 1 ○, since dist(w, Λ ∩ Q 0 ) ≥ dist(w, k∈ P1 Ω 1 k ) ≥ N 1 , we have J w ∩ Q 0 = ∅. Thus J w is 0-good. Noticing that dist(w, ∂ − Λ J w ) ≥ 1 2 N 1 , from (3.20), we have |T −1 Jw (w, w ′ )| < e −γ0 w−w ′ 1 for w ′ ∈ ∂ − Λ J w . For 2 ○, by (3.45), we have |T Jw (w, w ′ )| < e − γ0 w−w ′ 1 for w ′ ∈ ∂ − Λ J w . Let x − y > N c 3
1 . Using resolvent identity shows
T −1 Λ (x, y) = T −1 Jx (x, y)χ Jx (y) − (w,w ′ )∈∂ΛJx T −1 Jx (x, w)Γ(w, w ′ )T −1 Λ (w ′ , y).
The first term of the above identity is zero because y / ∈ J x (since x − y > N c 3 1 > 3N c 2 1 ). It follows that
|T −1 Λ (x, y)| ≤ CN c 2 d 1 e − min(γ0(1−2N −1 1 ), γ0(1−N −1 1 )) x−x1 1 |T −1 Λ (x 1 , y)| ≤ CN c 2 d 1 e − γ0(1−N −1 1 ) x−x1 1 |T −1 Λ (x 1 , y)| < e − γ0(1−N −1 1 − C log N 1 N 1 ) x−x1 1 |T −1 Λ (x 1 , y)| < e −γ0(1−N 1 c −1 1 ) 2 x−x1 1 |T −1 Λ (x 1 , y)| = e −γ ′ 0 x−x1 1 |T −1 Λ (x 1 , y)| for some x 1 ∈ ∂ + Λ J x , where γ ′ 0 = γ 0 (1 − N 1 c −1 1 ) 2 .
Then iterate and stop for some step L such that x L − y < 3N c 2 1 . Recalling (3.46) and (3.47), we get
|T −1 Λ (x, y)| ≤ e −γ ′ 0 x−x1 1 · · · e −γ ′ 0 xL−1−xL 1 |T −1 Λ (x L , y)| ≤ e −γ ′ 0 ( x−y 1−3N c 2 1 ) T −1 Λ < e −γ ′ 0 (1−3N c 2 −c 3 1 ) x−y 1 δ −3 1 < e −γ ′ 0 (1−3N c 2 −c 3 1 −3 | log δ 1 | N c 3 1 ) x−y 1 < e −γ ′ 0 (1−N 1 c −1 1 ) x−y 1 = e −γ1 x−y 1 .
This completes the proof of Theorem 3.4.
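The exponential off-diagonal decay just established is easy to visualize numerically; in the sketch below (hypothetical parameters, with E placed outside [−2, 2] so that every diagonal entry is non-resonant and even the crude Neumann bound applies), log₁₀|T⁻¹(x, y)| decreases essentially linearly in |x − y|.

    import numpy as np

    # Illustrative sketch only: off-diagonal decay of the Green's function for
    # a 1D sample; E = 2.5 lies outside [-2, 2], so no resonances occur.
    n, eps = 41, 1e-3
    theta, omega, E = 0.11, 0.7548776662, 2.5
    k = np.arange(-(n // 2), n // 2 + 1)
    T = (np.diag(np.cos(2 * np.pi * (theta + k * omega)) - E)
         + eps * (np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)))
    G = np.linalg.inv(T)
    print(np.round(np.log10(np.abs(G[n // 2, :])), 1))   # ~ linear in |x - y|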
3.3. Proof of Theorem 3.2: from P s to P s+1 .
Proof of Theorem 3.2. We have finished the proof of P₁ in Subsection 3.2. Assume now that P_s holds true. In order to complete the proof of Theorem 3.2, it suffices to establish P_{s+1}; for this purpose, we will establish (a)_{s+1}-(f)_{s+1} assuming (a)_s-(f)_s. We divide the proof into three steps. Let
Q ± s = {k ∈ P s : θ + k · ω ± θ s < δ s } , Q s = Q + s ∪ Q − s . (3.48)
and
Q̃^±_s = {k ∈ P_s : ‖θ + k·ω ± θ_s‖ < δ_s^{1/100}}, Q̃_s = Q̃⁺_s ∪ Q̃⁻_s. (3.49)
STEP1: The case (C1)_s occurs, i.e.,
dist(Q̃⁻_s, Q⁺_s) > 100N^c_{s+1}. (3.50)
As in Remark 3.1, we have dist(Q̃⁻_s, Q⁺_s) = dist(Q̃⁺_s, Q⁻_s). Thus (3.50) also implies that
dist(Q̃⁺_s, Q⁻_s) > 100N^c_{s+1}. (3.51)
By (3.18) and the definitions of Q ± s (cf. (3.48)) and Q ± s (cf. (3.49)), we obtain
Q ± s = {k ∈ Z d + 1 2 s−1 i=0 l i : θ + k · ω ± θ s < δ s }, (3.52) Q ± s = {k ∈ Z d + 1 2 s−1 i=0 l i : θ + k · ω ± θ s < δ 1 100 s }.
Then the proof is similar to that of Remark 3.1 and we omit the details.
Assuming (3.50), then we define P s+1 = Q s , l s = 0.
(3.53) By (3.8) and (3.9), we have
P s+1 ⊂ k ∈ Z d + 1 2 s i=0 l i : min σ=±1 ( θ + k · ω + σθ s ) < δ s . (3.54)
Thus from (3.51), we get for k, k ′ ∈ P s+1 with k = k ′ ,
k − k ′ > min | log γ 2δ s | 1 τ , 100N c s+1 ≥ 100N c s+1 .
(3.55)
In the following, we will associate every k ∈ P s+1 with blocks Ω s+1 k and Ω s+1 k so that
Λ Ns+1 (k) ⊂ Ω s+1 k ⊂ Λ Ns+1+50N c 2 s (k), Λ N c s+1 (k) ⊂ Ω s+1 k ⊂ Λ N c s+1 +50N c 2 s (k), and Ω s+1 k ∩ Ω s ′ k ′ = ∅ (s ′ < s + 1) ⇒ Ω s ′ k ′ ⊂ Ω s+1 k , Ω s+1 k ∩ Ω s ′ k ′ = ∅ (s ′ < s + 1) ⇒ Ω s ′ k ′ ⊂ Ω s+1 k , dist( Ω s+1 k , Ω s+1 k ′ ) > 10 diam Ω s+1 k for k = k ′ ∈ P s+1 .
(3.56)
In addition, the set
Ω s+1 k − k ⊂ Z d + 1 2 s i=0 l i
is independent of k ∈ P s+1 and is symmetrical about the origin.
Such Ω s+1 k and Ω s+1 k can be constructed by the following argument (We only consider Ω s+1 k since Ω s+1 k is discussed by the similar argument). Fixing k 0 ∈ Q + s , we start from
J 0,0 = Λ N c s+1 (k 0 ). Define H r = (k 0 − P s+1 + P s−r ) ∪ (k 0 + P s+1 − P s−r ) (0 ≤ r ≤ s − 1).
Notice that by (3.54), we have k 0 − P s+1 ∈ Z d and, P s−r ⊂ Z d + 1 2 s−r−1 i=0 l i since (3.8)-(3.9). Thus
H s−r ⊂ Z d + 1 2 s−r−1 i=0 l i .
Define inductively J r,0 J r,1 · · · J r,tr := J r+1,0 , where
J r,t+1 = J r,t {h∈Hr: Λ 2N c 2 s−r (h)∩Jr,t =∅} Λ 2N c 2 s−r (h)
and t r is the largest integer satisfying the relationship (the following argument shows that t r < 10). Thus
h ∈ H r , Λ 2N c 2 s−r (h) ∩ J r+1,0 = ∅ ⇒ Λ 2N c 2 s−r (h) ⊂ J r+1,0 .
(3.57)
For k ∈ k 0 − P s+1 , we have since (3.54) min k · ω , k · ω + 2θ s < 2δ s .
For k′ ∈ P_{s−r}, we get from (3.8) and (3.9) that
min_{σ=±1} ‖θ + k′·ω + σθ_{s−r−1}‖ < δ_{s−r−1} if (C1)_{s−r−1} holds true, (3.58)
‖θ + k′·ω‖ < 3δ_{s−r−1}^{1/100} or ‖θ + k′·ω + 1/2‖ < 3δ_{s−r−1}^{1/100} if (C2)_{s−r−1} holds true. (3.59)
Thus for h ∈ k₀ − P_{s+1} + P_{s−r}, we obtain in the case (3.58)
min_{σ=±1} (‖θ + h·ω + σθ_{s−r−1}‖, ‖θ + h·ω + 2θ_s + σθ_{s−r−1}‖) < 2δ_{s−r−1},
and in the case (3.59)
min(‖θ + h·ω‖, ‖θ + h·ω + ½‖, ‖θ + h·ω + 2θ_s‖, ‖θ + h·ω + ½ + 2θ_s‖) < 4δ_{s−r−1}^{1/100}.
Notice that k₀ + P_{s+1} − P_{s−r} = 2k₀ − (k₀ − P_{s+1} + P_{s−r}) is symmetrical to k₀ − P_{s+1} + P_{s−r} about k₀. Thus, if a set Λ (⊂ Z^d + ½Σ_{i=0}^{s−r−1} l_i) contains 10 distinct elements of H_r, then
diam Λ > |log_γ (8δ_{s−r−1}^{1/100})|^{1/τ} ≫ 100N^{c²}_{s−r}. (3.60)
We claim that t r < 10. Otherwise, there exist distinct h t ∈ H r (1 ≤ t ≤ 10), such that Λ 2N c 2
s−r (h 1 ) ∩ J r,0 = ∅, Λ 2N c 2 s−r (h t ) ∩ Λ 2N c 2 s−r (h t+1 ) = ∅.
In particular, dist(h t , h t+1 ) ≤ 4N c 2 s−r .
Thus h t ∈ Λ 40N c 2 s−r (0) + h 1 (1 ≤ t ≤ 10).
This contradicts (3.60). Thus we have shown
J r+1,0 = J r,tr ⊂ Λ 40N c 2 s−r (J r,0 ). (3.61) Since s−1 r=0 40N c 2 s−r < 50N c 2 s ,
we find J s,0 to satisfy
Λ N c s+1 (k 0 ) = J 0,0 ⊂ J s,0 ⊂ Λ 50N c 2 s (J 0,0 ) ⊂ Λ N c s+1 +50N c 2 s (k 0 ). Now, for any k ∈ P s+1 , define Ω s+1 k = J s,0 + (k − k 0 ). (3.62) Using k − k 0 ∈ Z d and Ω s+1 k ⊂ Z d yields Λ N c s+1 (k) ⊂ Ω s+1 k ⊂ Λ N c s+1 +50N c 2 s (k).
We are able to verify (3.56). In fact, since (3.55) and 50N c 2
s ≪ N c s+1 , we get dist( Ω s+1 k , Ω s+1 k ′ ) > 10 diam Ω s+1 k for k = k ′ ∈ P s+1 .
Assume that for some k ∈ P s+1 and k ′ ∈ P s ′ (1 ≤ s ′ ≤ s),
Ω s+1 k ∩ Ω s ′ k ′ = ∅. Then Ω s+1 k + (k 0 − k) ∩ Ω s ′ k ′ + (k 0 − k) = ∅. (3.63) From Λ N c s ′ (k ′ ) ⊂ Ω s ′ k ′ ⊂ Λ N c s ′ +50N c 2 s ′ −1 (k ′ ) ⊂ Λ 1.5N c 2 s ′ (k ′ ), Ω s+1 k + (k 0 − k) = J s,0 and (3.63), we obtain J s,0 ∩ Λ 1.5N c 2 s ′ (k ′ + k 0 − k) = ∅.
Recalling (3.61), we have
J s,0 ⊂ Λ 50N c 2 s ′ −1 (J s−s ′ +1,0 ). Thus Λ 50N c 2 s ′ −1 (J s−s ′ +1,0 ) ∩ Λ 1.5N c 2 s ′ (k ′ + k 0 − k) = ∅. From 50N c 2 s ′ −1 ≪ 0.5N c 2 s ′ , it follows that J s−s ′ +1,0 ∩ Λ 2N c 2 s ′ (k ′ + k 0 − k) = ∅.
Since k ′ ∈ P s ′ , we have k ′ + k 0 − k ∈ H s−s ′ , and by (3.57),
Λ 2N c 2 s ′ (k ′ + k 0 − k) ⊂ J s−s ′ +1,0 ⊂ J s,0 .
Hence
Ω s ′ k ′ ⊂ Λ 2N c 2 s ′ (k ′ ) ⊂ J s,0 + (k − k 0 ) = Ω s+1 k .
Next, we will show Ω s+1 k − k is independent of k. For this, recalling (3.62) and from
l i ∈ Z d , Ω s+1 k ⊂ Z d , k ∈ P s+1 ⊂ Z d + 1 2 s i=0 l i , we obtain that Ω s+1 k − k ⊂ Z d − 1 2 s i=0 l i = Z d + 1 2 s i=0 l i and Ω s+1 k − k = J s,0 + (k − k 0 ) − k = Ω s+1 k0 − k 0 is independent of k.
Finally, we prove the symmetry property of Ω s+1 k . The definition of H r implies that it is symmetrical about k 0 , which implies all J r,t is symmetrical about k 0 as well. In particular, Ω s+1 k0 = J s,0 is symmetrical about k 0 , i.e., Ω s+1 k0 − k 0 is symmetrical about origin. In summary, we have established (a) s+1 and (b) s+1 in the case (C1) s . Now we turn to the proof of (c) s+1 . First, in this construction we have for every
k ′ ∈ Q s (= P s+1 ), Ω s k ′ ⊂ Ω s+1 k ′ . For every k ∈ P s+1 , define A s+1 k = A s k . Then A s+1 k ⊂ Ω s k ⊂ Ω s+1 k and #A s+1 k = #A s k ≤ 2 s . It remains to show Ω s+1 k \ A s+1 k is s-good, i.e., l ′ ∈ Q s ′ , Ω s ′ l ′ ⊂ ( Ω s+1 k \ A s+1 k ), Ω s ′ l ′ ⊂ Ω s ′ +1 l ⇒ Ω s ′ +1 l ⊂ ( Ω s+1 k \ A s+1 k ) for s ′ < s, l ∈ P s : Ω s l ⊂ ( Ω s+1 k \ A s+1 k ) ∩ Q s = ∅. Assume that l ′ ∈ Q s ′ , Ω s ′ l ′ ⊂ ( Ω s+1 k \ A s+1 k ), Ω s ′ l ′ ⊂ Ω s ′ +1 l .
We have the following two cases. The first case is s′ ≤ s − 2. In this case, since ∅ ≠ Ω^{s′}_{l′} ⊂ Ω^{s′+1}_l ∩ Ω̃^{s+1}_k, we get by using (3.56) that Ω̃^{s′+1}_l ⊂ Ω̃^{s+1}_k. Assuming
Ω^{s′+1}_l ∩ A^{s+1}_k ≠ ∅, (3.64)
then Ω^{s′+1}_l ∩ Ω^s_k ≠ ∅. Thus from (3.10) (since s′ + 1 < s), one has Ω^{s′+1}_l ⊂ Ω^s_k, which implies Ω^{s′}_{l′} ⊂ (Ω̃^s_k \ A^s_k). Since (Ω̃^s_k \ A^s_k) is (s − 1)-good, we get Ω̃^{s′+1}_l ⊂ (Ω̃^s_k \ A^s_k) ⊂ (Ω̃^{s+1}_k \ A^{s+1}_k); if instead (3.64) fails, then Ω̃^{s′+1}_l ⊂ (Ω̃^{s+1}_k \ A^{s+1}_k) holds directly. The second case is s′ = s − 1, in which (3.64) would force Ω^{s−1}_{l′} ⊂ (Ω̃^s_k \ A^s_k). This contradicts {l ∈ P_{s−1} : Ω̃^{s−1}_l ⊂ (Ω̃^s_k \ A^s_k)} ∩ Q_{s−1} = ∅ because (Ω̃^s_k \ A^s_k) is (s − 1)-good.
Finally, if l ∈ Q_s and Ω^s_l ⊂ Ω̃^{s+1}_k, then l = k, since k is the only element of Q_s such that Ω^s_k ⊂ Ω̃^{s+1}_k by the separation property of Q_s. As a result, Ω̃^s_l ⊄ (Ω̃^{s+1}_k \ A^{s+1}_k), which implies {l ∈ P_s : Ω̃^s_l ⊂ (Ω̃^{s+1}_k \ A^{s+1}_k)} ∩ Q_s = ∅. Moreover, the set A^{s+1}_k − k = A^s_k − k
is independent of k ∈ P s+1 and symmetrical about the origin since the induction assumptions on A s k of the s-th step. This finishes the proof of (c) s+1 in the case (C1) s .
In the following, we try to prove (d) s+1 and (f ) s+1 in the case (C1) s . For the case k ∈ Q − s , we consider the analytic matrix-valued function
M s+1 (z) := T (z) Ω s+1 k −k = (cos 2π(z + n · ω)δ n,n ′ − E + ε∆) n∈ Ω s+1 k −k defined in {z ∈ C : |z − θ s | < δ 1 10 s }. (3.65) If k ′ ∈ P s and Ω s k ′ ⊂ ( Ω s+1 k \ A s+1 k ), then 0 = k ′ − k ≤ 2N c s+1 . Thus θ + k ′ · ω − θ s ≥ (k ′ − k) · ω − θ + k · ω − θ s ≥ γe −(2N c s+1 ) τ − δ s ≥ γe −2 τ | log γ δs | 1 c 4 − δ s > δ 1 10 4 s .
By (3.51), we have k ′ / ∈ Q + s and thus
θ + k ′ · ω + θ s > δ 1 100 s . From (3.16), we obtain T −1 Ω s+1 k \A s+1 k < δ −3 s−1 sup {k ′ ∈Ps: Ω s k ′ ⊂( Ω s+1 k \A s+1 k )} θ + k ′ · ω − θ s −1 · θ + k ′ · ω + θ s −1 < 1 2 δ −2× 1 100 s .
(3.66)
One may restate (3.66) as
M s+1 (θ + k · ω) ( Ω s+1 k \A s+1 k )−k −1 < 1 2 δ −2× 1 100 s . Notice that z − (θ + k · ω) ≤ |z − θ s | + θ + k · ω − θ s < δ 1 10 s + δ s < 2δ 1 10 s ≪ δ 2× 1 100 s . (3.67)
Thus by Neumann series argument, we can show
M s+1 (z) ( Ω s+1 k \A s+1 k )−k −1 < δ −2× 1 100 s . (3.68)
We may then control M s+1 (z) −1 by the inverse of
S s+1 (z) = M s+1 (z) A s+1 k −k − R A s+1 k −k M s+1 (z)R ( Ω s+1 k \A s+1 k )−k × M s+1 (z) ( Ω s+1 k \A s+1 k )−k −1 R ( Ω s+1 k \A s+1 k )−k M s+1 (z)R A s+1 k −k .
Our next aim is to analyze the function det S_{s+1}(z). Since A^{s+1}_k − k = A^s_k − k ⊂ Ω^s_k − k and dist(Ω^s_k, ∂⁺Ω̃^s_k) > 1, we obtain
R_{A^{s+1}_k−k} M_{s+1}(z) R_{(Ω̃^{s+1}_k\A^{s+1}_k)−k} = R_{A^s_k−k} M_{s+1}(z) R_{(Ω̃^s_k\A^s_k)−k}.
Thus
S_{s+1}(z) = M_{s+1}(z)_{A^s_k−k} − R_{A^s_k−k} M_{s+1}(z) R_{(Ω̃^s_k\A^s_k)−k} (M_{s+1}(z)_{(Ω̃^{s+1}_k\A^{s+1}_k)−k})^{−1} R_{(Ω̃^s_k\A^s_k)−k} M_{s+1}(z) R_{A^s_k−k}.
Since Ω̃^s_k \ A^s_k is (s − 1)-good, by (3.16)-(3.17) we get
‖T^{−1}_{Ω̃^s_k\A^s_k}‖ < δ^{−3}_{s−1}, |T^{−1}_{Ω̃^s_k\A^s_k}(x, y)| < e^{−γ_{s−1}‖x−y‖₁} for ‖x − y‖ > N^{c³}_{s−1}.
Equivalently,
M s+1 (θ + k · ω) ( Ω s k \A s k )−k −1 < δ −3 s−1 , (3.69) M s+1 (θ + k · ω) ( Ω s k \A s k )−k −1 (x, y) < e −γs−1 x−y 1 for x − y > N c 3 s−1 . (3.70)
In the set defined by (3.67), we claim that
M s+1 (z) ( Ω s k \A s k )−k −1 (x, y) < δ 10 s for x − y > N c 4 s−1 . (3.71)
Proof of the Claim (i.e., (3.71)). Denote
T₁ = M_{s+1}(θ + k·ω)_{(Ω̃^s_k\A^s_k)−k}, T₂ = M_{s+1}(z)_{(Ω̃^s_k\A^s_k)−k}. Then D = T₁ − T₂ is diagonal with ‖D‖ < 4πδ_s^{1/10}, so that
T₂^{−1} = (I − T₁^{−1}D)^{−1} T₁^{−1} = Σ_{i=0}^{+∞} (T₁^{−1}D)^i T₁^{−1}. (3.72)
Since (3.69) and (3.70), we have
T −1 1 (x, y) < δ −3 s−1 e −γs−1( x−y 1−N c 3 s−1 ) .
Thus for x − y > N c 4 s−1 and 0 ≤ i ≤ 200,
|( T −1 1 D i T −1 1 )(x, y)| ≤ 4πδ 1 10 s i w1,··· ,wi |T 1 (x, w 1 ) · · · T 1 (w i−1 , w i )T 1 (w i , y)| < 4πδ 1 10 s i CN c 2 d s δ −3(i+1) s−1 e −γs−1( x−y 1−(i+1)N c 3 s−1 ) < δ 1 20 (i−1) s e −γs−1(N c 4 s−1 −(i+1)N c 3 s−1 ) . From 2 < γ s−1 , 201N c 3 s−1 < 1 2 N c 4 s−1 and | log δ s | ∼ | log δ s−1 | c 5 ∼ N c 10 τ s ∼ N c 15 τ s−1 < N c 3 s−1 , we get e −γs−1(N c 4 s−1 −(i+1)N c 3 s−1 ) < e −N c 4 s−1 < δ 20 s . Hence 200 i=0 |( T −1 1 D i T −1 1 )(x, y)| < 1 2 δ 10 s . (3.73) For i > 200, |( T −1 1 D i T −1 1 )(x, y)| < 4πδ 1 10 s i δ −3(i+1) s−1 < δ 1 20 i s < δ 10 s δ 1 20 (i−200) s . Thus i>200 |( T −1 1 D i T −1 1 )(x, y)| < 1 2 δ 10 s . (3.74)
Combining (3.72), (3.73) and (3.74), we get
T −1 2 (x, y) < δ 10 s for x − y > N c 4 s−1 .
This completes the proof of (3.71).
Denote X = ( Ω s k \ A s k ) − k and Y = ( Ω s+1 k \ A s+1 k ) − k. Let x ∈ X satisfy dist(x, A s k − k) ≤ 1.
By the resolvent identity, we have for any y ∈ Y,
(M_{s+1}(z)_Y)^{−1}(x, y) − χ_X(y)(M_{s+1}(z)_X)^{−1}(x, y) = − Σ_{(w,w′)∈∂_Y X} (M_{s+1}(z)_X)^{−1}(x, w) Γ(w, w′) (M_{s+1}(z)_Y)^{−1}(w′, y). (3.75)
Since dist(x, w) ≥ dist(A^s_k − k, ∂⁻Ω^s_k − k) − 2 > N_s > N^{c⁴}_{s−1}, we have by (3.71) that |(M_{s+1}(z)_X)^{−1}(x, w)| < δ^{10}_s. It then follows that
R_{A^s_k−k} M_{s+1}(z) R_X (M_{s+1}(z)_Y)^{−1} = R_{A^s_k−k} M_{s+1}(z) R_X (M_{s+1}(z)_X)^{−1} R_X + O(δ^9_s).
As a result,
R A s k −k M s+1 (z)R X (M s+1 (z) Y ) −1 R X M s+1 (z)R A s k −k = R A s k −k M s+1 (z)R X (M s+1 (z) X ) −1 R X M s+1 (z)R A s k −k + O(δ 9 s ) = R A s k −k M s (z)R X (M s (z) X ) −1 R X M s (z)R A s k −k + O(δ 9 s ) and S s+1 (z) = M s (z) A s k −k − R A s k −k M s (z)R X (M s (z) X ) −1 R X M s (z)R A s k −k + O(δ 9 s ) = S s (z) + O(δ 9 s ),
which implies (3.13) for the (s + 1)-th step. Recalling (3.65) and (3.12), we have since (3.14)
det S s (z)
δs−1 ∼ z − θ s · z + θ s .
By Hadamard's inequality, we obtain det S s+1 (z) = det S s (z) + O((2 s ) 2 10 2 s δ 9 s ) = det S s (z)
+ O(δ 8 s ),
where we use the fact that #(A s k − k) ≤ 2 s , (3.13) and log log | log δ s | ∼ s. Notice that
‖z + θ_s‖ ≥ ‖θ + k·ω + θ_s‖ − ‖z − θ_s‖ − ‖θ + k·ω − θ_s‖ > δ_s^{1/100} − δ_s^{1/10} − δ_s > ½ δ_s^{1/100}.
Then we have det S s+1 (z) δs ∼ (z − θ s ) + r s+1 (z),
where r s+1 (z) is an analytic function defined in (3.65) with |r s+1 (z)| < δ 7 s . Finally, by the Róuche theorem, the equation
(z − θ s ) + r s+1 (z) = 0
has a unique root θ s+1 in the set defined by (3.65) satisfying
|θ s+1 − θ s | = |r s+1 (θ s+1 )| < δ 7 s , |(z − θ s ) + r s+1 (z)| ∼ |z − θ s+1 |.
Moreover θ s+1 is also the unique root of det M s+1 (z) = 0 in the set defined by (3.65). From z + θ s > 1 2 δ 1 100 s and |θ s+1 − θ s | < δ 7 s , we have z + θ s ∼ z + θ s+1 .
Thus if z belongs to the set defined by (3.65), we have
det S s+1 (z) δs ∼ z − θ s+1 · z + θ s+1 .
(3.76)
Since |log δ_{s+1}| ∼ |log δ_s|^{c⁵}, we get δ_{s+1}^{1/10⁴} < δ_s^{1/10}. Now, for k ∈ Q⁺_s, we consider M_{s+1}(z) in
{z ∈ C : |z + θ_s| < δ_s^{1/10}}. (3.77)
The same argument shows that det M s+1 (z) = 0 has a unique root θ ′ s+1 in the set defined by (3.77). Since det M s+1 (z) is an even function of z, we get θ ′ s+1 = −θ s+1 . Thus if z belongs to the set defined by (3.77), we also have (3.76). In conclusion, (3.76) is established for z belonging to z ∈ C : min σ=±1 z + σθ s+1 < δ 1 10 4 s+1 , which proves (3.14) for the (s + 1)-th step. Combining l s = 0, (3.52)-(3.53) and the following
θ + k · ω ± θ s+1 < 10δ 1 100 s+1 , |θ s+1 − θ s | < δ 7 s ⇒ θ + k · ω ± θ s < δ s , we get k ∈ Z d + 1 2 s i=0 l i : min σ=±1 θ + k · ω + σθ s+1 < 10δ 1 100 s+1 ⊂ P s+1 ,
which proves (3.18) at the (s + 1)-th step. Finally, we want to estimate T −1 Ω s+1 k . For k ∈ P s+1 , by (3.54), we obtain θ + k · ω ∈ {z ∈ C : min σ=±1 z + σθ s < δ 1 10 s }, which together with (3.76) implies
det(T A s+1 k − R A s+1 k T R Ω s+1 k \A s+1 k T −1 Ω s+1 k \A s+1 k R Ω s+1 k \A s+1 k T R A s+1 k ) = | det S s+1 (θ + k · ω)| ≥ 1 C δ s θ + k · ω − θ s+1 · θ + k · ω + θ s+1 .
By Cramer's rule and Hadamard's inequality, one has
(T A s+1 k − R A s+1 k T R Ω s+1 k \A s+1 k T −1 Ω s+1 k \A s+1 k R Ω s+1 k \A s+1 k T R A s+1 k ) −1 < C2 s 10 2 s δ −1 s θ + k · ω − θ s+1 −1 · θ + k · ω + θ s+1 −1 .
From the Schur complement argument (cf. Lemma B.1) and (3.66), we get
‖T^{−1}_{Ω̃^{s+1}_k}‖ < 4(1 + ‖T^{−1}_{Ω̃^{s+1}_k\A^{s+1}_k}‖)² (1 + ‖(T_{A^{s+1}_k} − R_{A^{s+1}_k} T R_{Ω̃^{s+1}_k\A^{s+1}_k} T^{−1}_{Ω̃^{s+1}_k\A^{s+1}_k} R_{Ω̃^{s+1}_k\A^{s+1}_k} T R_{A^{s+1}_k})^{−1}‖)
< δ^{−2}_s ‖θ + k·ω − θ_{s+1}‖^{−1} · ‖θ + k·ω + θ_{s+1}‖^{−1}. (3.78)
STEP2: The case (C2)_s occurs, i.e., dist(Q̃⁻_s, Q⁺_s) ≤ 100N^c_{s+1}. Then there exist i_s ∈ Q⁺_s and j_s ∈ Q̃⁻_s with ‖i_s − j_s‖ ≤ 100N^c_{s+1}, such that
‖θ + i_s·ω + θ_s‖ < δ_s, ‖θ + j_s·ω − θ_s‖ < δ_s^{1/100}.
Denote l_s = i_s − j_s.
Using (3.8) and (3.9) yields
Q + s , Q − s ⊂ P s ⊂ Z d + 1 2 s−1 i=0 l i . Thus i s ≡ j s (mod Z d ) and l s ∈ Z d . Define O s+1 = Q − s ∪ (Q + s − l s ). (3.79)
For every o ∈ O s+1 , define its mirror point
o * = o + l s .
Then we have
O s+1 ⊂ o ∈ Z d + 1 2 s−1 i=0 l i : θ + o · ω − θ s < 2δ 1 100 s and O s+1 + l s ⊂ o * ∈ Z d + 1 2 s−1 i=0 l i : θ + o * · ω + θ s < 2δ 1 100 s .
Then by (3.18), we obtain
O s+1 ∪ (O s+1 + l s ) ⊂ P s . (3.80) Define P s+1 = 1 2 (o + o * ) : o ∈ O s+1 = o + l s 2 : o ∈ O s+1 . (3.81) Notice that min l s 2 · ω + θ s , l s 2 · ω + θ s − 1 2 = 1 2 l s · ω + 2θ s ≤ 1 2 ( θ + i s · ω + θ s + θ + j s · ω − θ s ) < δ 1 100
s .
Since δ s ≪ 1, only one of the following l s 2 · ω + θ s < δ 1 100 s , l s 2 · ω + θ s − 1 2 < δ 1 100 s occurs. First, we consider the case l s 2 · ω + θ s < δ 1 100 s .
(3.82)
Let k ∈ P s+1 . Since k = o + ls 2 for some o ∈ O s+1 and (3.82), we get
θ + k · ω ≤ θ + o · ω − θ s + l s 2 · ω + θ s < 3δ 1 100 s , which implies P s+1 ⊂ k ∈ Z d + 1 2 s i=0 l i : θ + k · ω < 3δ 1 100 s . (3.83) Moreover, if k = k ′ ∈ P s+1 , then k − k ′ > log γ 6δ 1 100 s ∼ N c 5 s+1 ≫ 10N c 2 s+1 .
Similar to the proof appeared in STEP1 (i.e., the (C1) s case), we can associate k ∈ P s+1 the blocks Ω s+1 k and Ω s+1
k with Λ 100N c s+1 (k) ⊂ Ω s+1 k ⊂ Λ 100N c s+1 +50N c 2 s (k), Λ N c 2 s+1 (k) ⊂ Ω s+1 k ⊂ Λ N c 2 s+1 +50N c 2 s (k) satisfying Ω s+1 k ∩ Ω s ′ k ′ = ∅ (s ′ < s + 1) ⇒ Ω s ′ k ′ ⊂ Ω s+1 k , Ω s+1 k ∩ Ω s ′ k ′ = ∅ (s ′ < s + 1) ⇒ Ω s ′ k ′ ⊂ Ω s+1 k , dist( Ω s+1 k , Ω s+1 k ′ ) > 10 diam Ω s+1 k for k = k ′ ∈ P s+1 .
(3.84)
In addition, the set
Ω s+1 k − k ⊂ Z d + 1 2 s i=0 l i
is independent of k ∈ P s+1 and symmetrical about the origin. Clearly, in this construction, for every k ′ ∈ Q s , there exists k = k ′ − ls 2 or k ′ + ls 2 ∈ P s+1 , such that Ω s k ′ ⊂ Ω s+1 k . For every k ∈ P s+1 , we have o, o * ∈ P s since (3.80). Define (3.81)). Then
A s+1 k = A s o ∪ A s o * , where o ∈ O s+1 and k = o + o * (cf.A s+1 k ⊂ Ω s o ∪ Ω s o * ⊂ Ω s+1 k , #A s+1 k = #A s o + #A s o * ≤ 2 s+1 .
Now we will verify that ( Ω s+1
k \ A s+1 k ) is s-good, i.e., l ′ ∈ Q s ′ , Ω s ′ l ′ ⊂ ( Ω s+1 k \ A s+1 k ), Ω s ′ l ′ ⊂ Ω s ′ +1 l ⇒ Ω s ′ +1 l ⊂ ( Ω s+1 k \ A s+1 k ) for s ′ < s, l ∈ P s : Ω s l ⊂ ( Ω s+1 k \ A s+1 k ) ∩ Q s = ∅.
For this purpose, assume that
l′ ∈ Q_{s′}, Ω^{s′}_{l′} ⊂ (Ω̃^{s+1}_k \ A^{s+1}_k), Ω^{s′}_{l′} ⊂ Ω^{s′+1}_l.
If s′ ≤ s − 2, since ∅ ≠ Ω^{s′}_{l′} ⊂ Ω^{s′+1}_l ∩ Ω̃^{s+1}_k, we have by (3.84) that Ω̃^{s′+1}_l ⊂ Ω̃^{s+1}_k. If Ω^{s′+1}_l ∩ A^{s+1}_k ≠ ∅, then Ω^{s′+1}_l ∩ A^s_o ≠ ∅ or Ω^{s′+1}_l ∩ A^s_{o*} ≠ ∅. Thus by (3.10) (s′ + 1 < s), we get Ω^{s′+1}_l ⊂ Ω^s_o or Ω^{s′+1}_l ⊂ Ω^s_{o*}, which implies Ω^{s′}_{l′} ⊂ (Ω̃^s_o \ A^s_o) or Ω^{s′}_{l′} ⊂ (Ω̃^s_{o*} \ A^s_{o*}). Thus we have either Ω̃^{s′+1}_l ⊂ (Ω̃^s_o \ A^s_o) ⊂ (Ω̃^{s+1}_k \ A^{s+1}_k) or Ω̃^{s′+1}_l ⊂ (Ω̃^s_{o*} \ A^s_{o*}) ⊂ (Ω̃^{s+1}_k \ A^{s+1}_k), since both (Ω̃^s_o \ A^s_o) and (Ω̃^s_{o*} \ A^s_{o*}) are (s − 1)-good. The case s′ = s − 1 is handled as in the case (C1)_s, and similarly {l ∈ P_s : Ω̃^s_l ⊂ (Ω̃^{s+1}_k \ A^{s+1}_k)} ∩ Q_s = ∅.
Moreover, we have
A^{s+1}_k − k = (A^s_o − k) ∪ (A^s_{o*} − k) = ((A^s_o − o) − l_s/2) ∪ ((A^s_{o*} − o*) + l_s/2),
which is independent of k ∈ P_{s+1} and symmetrical about the origin. This proves (c)_{s+1} in the case (C2)_s. Next, consider M_{s+1}(z) := T(z)_{Ω̃^{s+1}_k−k} defined in
{z ∈ C : |z| < δ_s^{1/10³}}. (3.85)
If k′ ∈ P_s and Ω̃^s_{k′} ⊂ (Ω̃^{s+1}_k \ A^{s+1}_k), then k′ ≠ o, o* and ‖k′ − o‖, ‖k′ − o*‖ ≤ 4N^{c²}_{s+1}. Thus
‖θ + k′·ω − θ_s‖ ≥ ‖(k′ − o)·ω‖ − ‖θ + o·ω − θ_s‖ ≥ γe^{−(4N^{c²}_{s+1})^τ} − 2δ_s^{1/100} ≥ γe^{−4^τ|log_γ δ_s|^{1/c³}} − 2δ_s^{1/100} > δ_s^{1/10⁴},
and
‖θ + k′·ω + θ_s‖ ≥ ‖(k′ − o*)·ω‖ − ‖θ + o*·ω + θ_s‖ ≥ γe^{−(4N^{c²}_{s+1})^τ} − 2δ_s^{1/100} > δ_s^{1/10⁴}.
By (3.16), we have
‖T^{−1}_{Ω̃^{s+1}_k\A^{s+1}_k}‖ < δ^{−3}_{s−1} sup_{k′∈P_s: Ω̃^s_{k′}⊂(Ω̃^{s+1}_k\A^{s+1}_k)} ‖θ + k′·ω − θ_s‖^{−1} · ‖θ + k′·ω + θ_s‖^{−1} < ½ δ_s^{−3×1/10⁴}, (3.86)
and we obtain by the Neumann series argument
‖(M_{s+1}(z)_{(Ω̃^{s+1}_k\A^{s+1}_k)−k})^{−1}‖ < δ_s^{−3×1/10⁴}.
(3.88)
We may control M s+1 (z) −1 by the inverse of
S s+1 (z) = M s+1 (z) A s+1 k −k − R A s+1 k −k M s+1 (z)R ( Ω s+1 k \A s+1 k )−k × M s+1 (z) ( Ω s+1 k \A s+1 k )−k −1 R ( Ω s+1 k \A s+1 k )−k M s+1 (z)R A s+1 k −k .
Our next aim is to analyze det S s+1 (z). Since A s+1 Let x ∈ X and dist(x, A s o − k) ≤ 1. By resolvent identity, we have for any y ∈ Y , Thus both z − ls 2 · ω and z + ls 2 · ω belong to the set defined by (3.12), which together with (3.14) implies
k − k = (A s o − k) ∪ (A s o * − k), A s o − k ⊂ Ω s o − k, A s o * − k ⊂ Ω s o * − k and dist(Ω s o − k, Ω s o * − k) > 10 diam Ω s o , we have M s+1 (z) A s+1 k −k = M s+1 (z) A s o −k ⊕ M s+1 (z) A s o * −k . From dist(Ω s o , ∂ + Ω s o ) and dist(Ω s o * , ∂ + Ω s o * ) > 1, we have R A s o −k M s+1 (z)R ( Ω s+1 k \A s+1 k )−k = R A s o −k M s+1 (z)R ( Ω s o \A s o )−k , R A s o * −k M s+1 (z)R ( Ω s+1 k \A s+1 k )−k = R A s o * −k M s+1 (z)R ( Ω s o * \A s o * )−k . Denote X = ( Ω s o \ A s o ) − k, X * = ( Ω s o * \ A s o * ) − k, Y = ( Ω s+1 k \ A s+1 k ) − k. Then direct computations yield Ss+1(z) = Ms+1(z) A s o −k ⊕ Ms+1(z) A s o * −k − (R A s o −k ⊕ R A s o * −k )Ms+1(z)RY Ms+1(z) −1 Y RY Ms+1(z)R A s+1 k −k = Ms+1(z) A s o −k − R A s o −k Ms+1(z)RX Ms+1(z) −1 Y RY Ms+1(z)R A s+1 k −k ⊕ Ms+1(z) A s o * −k − R A s o * −k Ms+1(z)RX * Ms+1(z) −1 Y RY Ms+1(z)R A s+1 k −k . (3.89) Since Ω s o \ A s o is (s − 1)-good, we have by (3.16)-(3.17) T −1 Ω s o \A s o < δ −3 s−1 , T −1 Ω s o \A s o (x, y) < e −γs−1 x−y 1 for x − y > N c 3 s−1 . In other words, (M s+1 (θ + k · ω) X ) −1 < δ −3 s−1 , (3.90) (M s+1 (θ + k · ω) X ) −1 (x, y) < e −γs−1 x−y 1 for x − y > N c 3 s−1 .(M s+1 (z) Y ) −1 (x, y) − χ X (y) (M s+1 (z) X ) −1 (x, y) = − (w,w ′ )∈∂Y X (M s+1 (z) X ) −1 (x, w)Γ(w, w ′ ) (M s+1 (z) Y ) −1 (w ′ , y). (3.93) From dist(x, w) ≥ dist(A s o − k, ∂ − Ω s o − k) − 2 > N s > N c 4 s−1 ,(3.R A s o −k M s+1 (z)R X (M s+1 (z) Y ) −1 = R A s o −k M s+1 (z)R X (M s+1 (z) X ) −1 R X + O(δ 9 s ). Similarly, R A s o * −k M s+1 (z)R X * (M s+1 (z) Y ) −1 = R A s o * −k M s+1 (z)R X * (M s+1 (z) X * ) −1 R X * + O(δ 9 s ). Recalling (3.89), we get S s+1 (z) = M s+1 (z) A s o −k − R A s o −k M s+1 (z)R X (M s+1 (z) X ) −1 R ( Ω s o \A s o )−k M s+1 (z)R A s o −k ⊕ M s+1 (z) A s o * −k − R A s o * −k M s+1 (z)R X * (M s+1 (z) X * ) −1 R X * M s+1 (z)R A s o * −k + O(δ 9 s ) = Ss(z − ls 2 · ω) ⊕ Ss(z + ls 2 · ω) + O(δ 9 s ).det S s (z − l s 2 · ω) δs−1 ∼ (z − l s 2 · ω) − θ s · (z − l s 2 · ω) + θ s , (3.95) det S s (z + l s 2 · ω) δs−1 ∼ (z + l s 2 · ω) − θ s · (z + l s 2 · ω) + θ s . (3.96) Moreover, det S s+1 (z) = det S s (z − l s 2 ω) · det S s (z + l s 2 ω) + O((2 s+1 ) 2 10 2 s+1 δ 9 s ) = det S s (z − l s 2 ω) · det S s (z + l s 2 ω) + O(δ 8 s ) (3.97) since #(A s+1 k − k) ≤ 2 s+1 , (3.13) and log log | log δ s | ∼ s. Notice that z + l s 2 · ω − θ s ≥ l s · ω − z − l s 2 · ω − θ s > γe −(100N c s ) τ − δ 1 10 4 s > δ 1 10 4 s , (3.98) z − l s 2 · ω + θ s ≥ l s · ω − z + l s 2 · ω + θ s > γe −(100N c s ) τ − δz s+1 ≡ l s 2 · ω + θ s (mod Z), |z s+1 | = l s 2 · ω + θ s < δ 1 100 s . (3.100) From (3.95)-(3.99), we get det S s+1 (z) δs ∼ (z − z s+1 ) · (z + z s+1 ) + r s+1 (z),
where r_{s+1}(z) is an analytic function in the set defined by (3.85) with |r_{s+1}(z)| < δ^7_s. By the Rouché theorem, the equation (z − z_{s+1})(z + z_{s+1}) + r_{s+1}(z) = 0 has exactly two roots θ_{s+1}, θ′_{s+1} in the set defined by (3.85), which are perturbations of ±z_{s+1}, respectively. Notice that the set |z| < δ_s^{1/10³} is symmetrical about the origin and that det S_{s+1}(z) is an even function of z; the uniqueness of the pair of roots then forces
θ′_{s+1} = −θ_{s+1}. Moreover, we get
|θ_{s+1} − z_{s+1}| ≤ |r_{s+1}(θ_{s+1})|^{1/2} < δ^3_s (3.101)
and
| (z − z s+1 ) (z + z s+1 ) + r s+1 (z)| ∼ | (z − θ s+1 ) (z + θ s+1 ) |.
Thus for z in the set defined by (3.85), we have

det S_{s+1}(z) ∼_{δ_s} ‖z − θ_{s+1}‖ · ‖z + θ_{s+1}‖.   (3.102)

Hence (3.102) also holds true for z belonging to {z ∈ C : ‖z ± θ_{s+1}‖ < δ_{s+1}^{1/10^4}}, which proves (3.14) for the (s+1)-th step.
Notice that

‖θ + k·ω + θ_{s+1}‖ < 10δ_{s+1}^{1/100}, |θ_{s+1} − z_{s+1}| < δ_s^3 ⇒ ‖θ + (k + l_s/2)·ω + θ_s‖ < δ_s.

Thus if k ∈ Z^d + ½Σ_{i=0}^{s} l_i and ‖θ + k·ω + θ_{s+1}‖ < 10δ_{s+1}^{1/100}, then k + l_s/2 ∈ Z^d + ½Σ_{i=0}^{s−1} l_i and ‖θ + (k + l_s/2)·ω + θ_s‖ < δ_s.
Thus by (3.52), we have k + ls 2 ∈ Q + s . Recalling also (3.79) and (3.81), we have k ∈ P s+1 . Thus
k ∈ Z d + 1 2 s i=0 l i : θ + k · ω + θ s+1 < 10δ 1 100 s+1 ⊂ P s+1 .
Similarly,
k ∈ Z d + 1 2 s i=0 l i : θ + k · ω − θ s+1 < 10δ 1 100 s+1 ⊂ P s+1 .
Hence we prove (3.18) for the (s + 1)-th step.
Finally, we will estimate ‖T^{-1}_{Ω̃^{s+1}_k}‖. For k ∈ P_{s+1}, we have by (3.83) that θ + k·ω ∈ {z ∈ C : ‖z‖ < δ_s^{1/10^3}}. Thus from (3.102), we obtain

det(T_{A^{s+1}_k} − R_{A^{s+1}_k} T R_{Ω̃^{s+1}_k\A^{s+1}_k} T^{-1}_{Ω̃^{s+1}_k\A^{s+1}_k} R_{Ω̃^{s+1}_k\A^{s+1}_k} T R_{A^{s+1}_k}) = |det S_{s+1}(θ + k·ω)| ≥ (1/C) δ_s ‖θ + k·ω − θ_{s+1}‖ · ‖θ + k·ω + θ_{s+1}‖.
Using Cramer's rule and Hadamard's inequality, we obtain

‖(T_{A^{s+1}_k} − R_{A^{s+1}_k} T R_{Ω̃^{s+1}_k\A^{s+1}_k} T^{-1}_{Ω̃^{s+1}_k\A^{s+1}_k} R_{Ω̃^{s+1}_k\A^{s+1}_k} T R_{A^{s+1}_k})^{-1}‖ < C 2^{s+1} 10^{2^{s+1}} δ_s^{-1} ‖θ + k·ω − θ_{s+1}‖^{-1} · ‖θ + k·ω + θ_{s+1}‖^{-1}.
Recalling the Schur complement argument (cf. Lemma B.1) and (3.86), we get

‖T^{-1}_{Ω̃^{s+1}_k}‖ < 4(1 + ‖T^{-1}_{Ω̃^{s+1}_k\A^{s+1}_k}‖)^2 (1 + ‖(T_{A^{s+1}_k} − R_{A^{s+1}_k} T R_{Ω̃^{s+1}_k\A^{s+1}_k} T^{-1}_{Ω̃^{s+1}_k\A^{s+1}_k} R_{Ω̃^{s+1}_k\A^{s+1}_k} T R_{A^{s+1}_k})^{-1}‖) < δ_s^{-2} ‖θ + k·ω − θ_{s+1}‖^{-1} · ‖θ + k·ω + θ_{s+1}‖^{-1}.   (3.103)
For the case

‖(l_s/2)·ω + θ_s − 1/2‖ < δ_s^{1/100},   (3.104)

we have

P_{s+1} ⊂ {k ∈ Z^d + ½Σ_{i=0}^{s} l_i : ‖θ + k·ω − 1/2‖ < 3δ_s^{1/100}}.   (3.105)

Thus we can consider

M_{s+1}(z) := T(z)_{Ω̃^{s+1}_k−k} = (cos 2π(z + n·ω)δ_{n,n′} − E + ε∆)_{n∈Ω̃^{s+1}_k−k}  in  {z ∈ C : |z − 1/2| < δ_s^{1/10^3}}.   (3.106)
By similar arguments as above, we obtain that both θ_{s+1} and 1 − θ_{s+1} belong to the set defined by (3.106). Moreover, all the corresponding conclusions in the case of (3.82) hold for the case (3.104). Recalling (3.78), estimate (3.103) holds for the case (3.104) as well. STEP 3: Application of the resolvent identity. Finally, we aim to establish (e)_{s+1} by iterating the resolvent identity.
Recall that

log(γ/δ_{s+1}) = (log(γ/δ_s))^{c^5}.

Define

Q_{s+1} = {k ∈ P_{s+1} : min_{σ=±1} ‖θ + k·ω + σθ_{s+1}‖ < δ_{s+1}}.
Assume the finite set Λ ⊂ Z^d is (s+1)-good, i.e.,

k′ ∈ Q_{s′}, Ω̃^{s′}_{k′} ⊂ Λ, Ω^{s′}_{k′} ⊂ Ω^{s′+1}_k ⇒ Ω̃^{s′+1}_k ⊂ Λ for s′ < s + 1,
{k ∈ P_{s+1} : Ω̃^{s+1}_k ⊂ Λ} ∩ Q_{s+1} = ∅.   (3.107)
It remains to verify the implications (3.16) and (3.17) with s being replaced with s + 1. For k ∈ P t (1 ≤ t ≤ s + 1), denote by
2Ω t k := Λ diam Ω t k (k)
the "double"-size block of Ω t k . Define moreover
P̃_t = {k ∈ P_t : ∃ k′ ∈ Q_{t−1} s.t. Ω̃^{t−1}_{k′} ⊂ Λ, Ω^{t−1}_{k′} ⊂ Ω^t_k}  (1 ≤ t ≤ s + 1).   (3.108)

Lemma 3.6. For k ∈ P̃_{s+1} \ Q_{s+1}, we have

|T^{-1}_{Ω̃^{s+1}_k}(x, y)| < e^{−γ̃_s‖x−y‖_1}  for x ∈ ∂^−Ω̃^{s+1}_k and y ∈ 2Ω^{s+1}_k,   (3.109)

where γ̃_s = γ_s(1 − N_{s+1}^{1/c−1}).

Proof of Lemma 3.6. Notice first that
dist(∂^−Ω̃^{s+1}_k, 2Ω^{s+1}_k) ≳ diam Ω^{s+1}_k > N_{s+1} ≫ N_s^{c^3}. Since Ω̃^{s+1}_k \ A^{s+1}_k is s-good, we have by (3.17)

|T^{-1}_{Ω̃^{s+1}_k\A^{s+1}_k}(x, w)| < e^{−γ_s‖x−w‖_1}  for x ∈ ∂^−Ω̃^{s+1}_k, w ∈ (Ω̃^{s+1}_k \ A^{s+1}_k) ∩ 2Ω^{s+1}_k.
From (3.103) and k / ∈ Q s+1 , we obtain
T −1 Ω s+1 k < δ −2 s δ −2 s+1 < δ −3 s+1 .
Using the resolvent identity (since x ∈ ∂^−Ω̃^{s+1}_k), we obtain

|T^{-1}_{Ω̃^{s+1}_k}(x, y)| = |T^{-1}_{Ω̃^{s+1}_k\A^{s+1}_k}(x, y)χ_{Ω̃^{s+1}_k\A^{s+1}_k}(y) − Σ_{(w′,w)∈∂A^{s+1}_k} T^{-1}_{Ω̃^{s+1}_k\A^{s+1}_k}(x, w)Γ(w, w′)T^{-1}_{Ω̃^{s+1}_k}(w′, y)|
< e^{−γ_s‖x−y‖_1} + 2d·2^{s+1} sup_{w∈∂^+A^{s+1}_k} e^{−γ_s‖x−w‖_1} ‖T^{-1}_{Ω̃^{s+1}_k}‖
< e^{−γ_s‖x−y‖_1} + sup_{w∈∂^+A^{s+1}_k} e^{−γ_s(‖x−y‖_1 − ‖y−w‖_1) + C|log δ_{s+1}|}
< e^{−γ_s‖x−y‖_1} + e^{−γ_s(1 − C‖x−y‖_1^{1/c−1} − |log δ_{s+1}|/‖x−y‖_1)‖x−y‖_1}
< e^{−γ_s(1 − N_{s+1}^{1/c−1})‖x−y‖_1} = e^{−γ̃_s‖x−y‖_1},

since N_{s+1}^c ≳ diam Ω̃^{s+1}_k ∼ ‖x−y‖_1, ‖y−w‖_1 ≲ diam Ω^{s+1}_k ≲ (diam Ω̃^{s+1}_k)^{1/c} and |log δ_{s+1}| ∼ |log δ_s|^{c^5} ∼ N_{s+1}^{c^{10}τ} < N_{s+1}^{1/c}.   (3.110)
This proves the lemma.
Next we consider the general case and will finish the proof of (e) s+1 . Define
Λ′ = Λ \ ⋃_{k∈P̃_{s+1}} Ω̃^{s+1}_k.
We claim that Λ′ is s-good. In fact, for s′ ≤ s − 1, assume Ω̃^{s′}_{l′} ⊂ Λ′, Ω^{s′}_{l′} ⊂ Ω^{s′+1}_l and Ω̃^{s′+1}_l ∩ ⋃_{k∈P̃_{s+1}} Ω̃^{s+1}_k ≠ ∅. Then by (3.84), we obtain Ω̃^{s′+1}_l ⊂ ⋃_{k∈P̃_{s+1}} Ω̃^{s+1}_k, which contradicts Ω̃^{s′}_{l′} ⊂ Λ′. If there exists k′ such that k′ ∈ Q_s and Ω̃^s_{k′} ⊂ Λ′ ⊂ Λ, then by (3.107) there exists k ∈ P_{s+1} such that Ω^s_{k′} ⊂ Ω^{s+1}_k. Hence, recalling (3.108), one has k ∈ P̃_{s+1} and Ω̃^s_{k′} ⊂ ⋃_{k∈P̃_{s+1}} Ω̃^{s+1}_k. This contradicts Ω̃^s_{k′} ⊂ Λ′. We have proven the claim. As a result, the estimates (3.16) and (3.17) hold true with Λ replaced by Λ′.

We can now estimate ‖T^{-1}_Λ‖. For this purpose, we have the following two cases.

(1) Assume that x ∉ ⋃_{k∈P̃_{s+1}} 2Ω^{s+1}_k. Then, for y ∈ Λ, using the resolvent identity together with the bounds on T^{-1}_{Λ′} yields an estimate of the same type as in case (2) below.

(2) Assume that x ∈ 2Ω^{s+1}_k for some k ∈ P̃_{s+1}. Then by (3.107), we have Ω̃^{s+1}_k ⊂ Λ and k ∉ Q_{s+1}. For y ∈ Λ, using the resolvent identity together with (3.103) and (3.109) shows

Σ_{y∈Λ} |T^{-1}_Λ(x, y)| ≤ Σ_{y∈Λ} |T^{-1}_{Ω̃^{s+1}_k}(x, y)χ_{Ω̃^{s+1}_k}(y)| + Σ_{y∈Λ, (w,w′)∈∂_Λ Ω̃^{s+1}_k} |T^{-1}_{Ω̃^{s+1}_k}(x, w)Γ(w, w′)T^{-1}_Λ(w′, y)|
< #Ω̃^{s+1}_k · ‖T^{-1}_{Ω̃^{s+1}_k}‖ + CN_{s+1}^{c^2 d} e^{−γ̃_s N_{s+1}} sup_{w′∈Λ} Σ_{y∈Λ} |T^{-1}_Λ(w′, y)|
< CN_{s+1}^{c^2 d} δ_s^{-2} ‖θ + k·ω − θ_{s+1}‖^{-1} · ‖θ + k·ω + θ_{s+1}‖^{-1} + (1/10) sup_{w′∈Λ} Σ_{y∈Λ} |T^{-1}_Λ(w′, y)|
< (1/2) δ_s^{-3} ‖θ + k·ω − θ_{s+1}‖^{-1} · ‖θ + k·ω + θ_{s+1}‖^{-1} + (1/10) sup_{w′∈Λ} Σ_{y∈Λ} |T^{-1}_Λ(w′, y)|.
Combining the above two cases, we obtain
T −1 Λ ≤ sup x∈Λ y∈Λ |T −1 Λ (x, y)| < δ −3 s sup {k∈Ps+1: Ω s+1 k ⊂Λ} θ + k · ω − θ s+1 −1 · θ + k · ω + θ s+1 −1 .
(3.111)
Finally, we turn to the off-diagonal decay estimates. From (3.11), (3.107) and (3.108), it follows that for k ′ ∈ P t ∩ Q t (1 ≤ t ≤ s) there exists k ∈ P t+1 such that
Ω t k ′ ⊂ Ω t+1 k and P s+1 ∩ Q s+1 = ∅. Moreover, 1≤t≤s+1 k∈ Pt Ω t k ⊂ Λ.
Hence for any w ∈ Λ, if w ∈ k∈ P1
2Ω 1 k , then there exists 1 ≤ t ≤ s + 1 such that
w ∈ k∈ Pt\Qt 2Ω t k .
For every w ∈ Λ, define its block in Λ
J_w = Λ_{½N_1}(w) ∩ Λ, if w ∉ ⋃_{k∈P̃_1} 2Ω^1_k;  ①
J_w = Ω̃^t_k, if w ∈ 2Ω^t_k for some k ∈ P̃_t \ Q_t.  ②

Then diam J_w ≤ diam Ω̃^{s+1}_k < 3N_{s+1}^{c^2}. For ①, we have J_w ∩ Q_0 = ∅ and dist(w, ∂^−_Λ J_w) ≥ ½N_1. Thus

|T^{-1}_{J_w}(w, w′)| < e^{−γ_0‖w−w′‖_1}  for w′ ∈ ∂^−_Λ J_w.

For ②, by (3.109), we have

|T^{-1}_{J_w}(w, w′)| < e^{−γ̃_{t−1}‖w−w′‖_1}  for w′ ∈ ∂^−_Λ J_w.

Let ‖x − y‖ > N_{s+1}^{c^3}.
The resolvent identity reads as
T −1 Λ (x, y) = T −1 Jx (x, y)χ Jx (y) − (w,w ′ )∈∂ΛJx T −1 Jx (x, w)Γ(w, w ′ )T −1 Λ (w ′ , y).
The first term in the above identity is zero since x − y > N c 3 s+1 > 3N c 2 s+1 (so that y / ∈ J x ). It follows that
|T^{-1}_Λ(x, y)| ≤ CN_{s+1}^{c^2 d} e^{−min_t(γ_0(1−2N_1^{-1}), γ̃_{t−1}(1−N_t^{-1}))‖x−x_1‖_1} |T^{-1}_Λ(x_1, y)|
≤ CN_{s+1}^{c^2 d} e^{−γ̃_s(1−N_{s+1}^{-1})‖x−x_1‖_1} |T^{-1}_Λ(x_1, y)|
< e^{−γ̃_s(1−N_{s+1}^{-1}−C log N_{s+1}/N_{s+1})‖x−x_1‖_1} |T^{-1}_Λ(x_1, y)|
< e^{−γ_s(1−N_{s+1}^{1/c−1})^2 ‖x−x_1‖_1} |T^{-1}_Λ(x_1, y)| = e^{−γ′_s‖x−x_1‖_1} |T^{-1}_Λ(x_1, y)|

for some x_1 ∈ ∂^+_Λ J_x, where γ′_s = γ_s(1 − N_{s+1}^{1/c−1})^2.
Iterate the above procedure and stop it if for some L, x L − y < 3N c 2 s+1 . Recalling (3.110) and (3.111), we get
|T^{-1}_Λ(x, y)| ≤ e^{−γ′_s‖x−x_1‖_1} ··· e^{−γ′_s‖x_{L−1}−x_L‖_1} |T^{-1}_Λ(x_L, y)|
≤ e^{−γ′_s(‖x−y‖_1 − 3N_{s+1}^{c^2})} ‖T^{-1}_Λ‖
< e^{−γ′_s(1 − 3N_{s+1}^{c^2−c^3})‖x−y‖_1} δ_{s+1}^{-3}
< e^{−γ′_s(1 − 3N_{s+1}^{c^2−c^3} − 3|log δ_{s+1}|/N_{s+1}^{c^3})‖x−y‖_1}
< e^{−γ′_s(1 − N_{s+1}^{1/c−1})‖x−y‖_1} = e^{−γ_{s+1}‖x−y‖_1}.
This gives the off-diagonal decay estimates. We have completed the proof of Theorem 3.2.
Arithmetic Anderson localization
As an application of Green's function estimates of previous section, we prove the arithmetic version of Anderson localization below.
Proof of Theorem 1.2. Recall first

Θ_{τ_1} = {(θ, ω) ∈ T × R_{τ,γ} : the relation ‖2θ + n·ω‖ ≤ e^{−‖n‖^{τ_1}} holds for finitely many n ∈ Z^d},

where 0 < τ_1 < τ.
We prove that for 0 < ε ≤ ε_0, ω ∈ R_{τ,γ} and (θ, ω) ∈ Θ_{τ_1}, H(θ) has only pure point spectrum with exponentially decaying eigenfunctions. Let ε_0 be given by Theorem 3.2. Fix ω and θ so that ω ∈ R_{τ,γ} and (θ, ω) ∈ Θ_{τ_1}. Let E ∈ [−2, 2] be a generalized eigenvalue of H(θ) and let u = {u(n)}_{n∈Z^d} ≠ 0 be the corresponding generalized eigenfunction satisfying |u(n)| ≤ (1 + ‖n‖)^d. By Schnol's theorem, it suffices to show that u decays exponentially. For this purpose, note first that, since (θ, ω) ∈ Θ_{τ_1}, there exists some s̄ ∈ N such that ‖2θ + n·ω‖ > e^{−‖n‖^{τ_1}} for all n satisfying ‖n‖ ≥ N_{s̄}.
(4.1)
We claim that there exists s_0 > 0 such that, for s ≥ s_0,

Λ_{2N_s^{c^4}} ∩ ⋃_{k∈Q_s} Ω̃^s_k ≠ ∅.   (4.2)
Otherwise, there would exist a subsequence s_i → +∞ (as i → ∞) such that

Λ_{2N_{s_i}^{c^4}} ∩ ⋃_{k∈Q_{s_i}} Ω̃^{s_i}_k = ∅.   (4.3)

Then we can enlarge Λ_{N_{s_i}^{c^4}} to Λ̃_i satisfying Λ_{N_{s_i}^{c^4}} ⊂ Λ̃_i ⊂ Λ_{N_{s_i}^{c^4}+50N_{s_i}^{c^2}} ⊂ Λ_{2N_{s_i}^{c^4}}, and Λ̃_i ∩ Ω̃^{s′}_k ≠ ∅ ⇒ Ω̃^{s′}_k ⊂ Λ̃_i for s′ ≤ s_i and k ∈ P_{s′}. From (4.3), we have

Λ̃_i ∩ ⋃_{k∈Q_{s_i}} Ω̃^{s_i}_k = ∅,

which shows that Λ̃_i is s_i-good. As a result, for n ∈ Λ_{N_{s_i}}, since dist(n, ∂^−Λ̃_i) ≳ N_{s_i}^{c^4}, we have

|u(n)| ≤ Σ_{(n′,n″)∈∂Λ̃_i} |T^{-1}_{Λ̃_i}(n, n′)u(n″)| ≤ 2d Σ_{n′∈∂^−Λ̃_i} |T^{-1}_{Λ̃_i}(n, n′)| · sup_{n″∈∂^+Λ̃_i} |u(n″)| ≤ C(N_{s_i}^{c^4})^{2d} e^{−½γ_∞ N_{s_i}^{c^4}} → 0 (as i → ∞).

From N_{s_i} → +∞, it follows that u(n) = 0 for all n ∈ Z^d. This contradicts u ≠ 0, and the claim is proved. Next, define

U_s = Λ_{8N_{s+1}^{c^4}} \ Λ_{4N_s^{c^4}},  U*_s = Λ_{10N_{s+1}^{c^4}} \ Λ_{3N_s^{c^4}}.
We can also enlarge U*_s to Ũ*_s so that U*_s ⊂ Ũ*_s ⊂ Λ_{50N_s^{c^2}}(U*_s), and Ũ*_s ∩ Ω̃^{s′}_k ≠ ∅ ⇒ Ω̃^{s′}_k ⊂ Ũ*_s for s′ ≤ s and k ∈ P_{s′}. Let n satisfy ‖n‖ > max(4N_{s̄}^{c^4}, 4N_{s_0}^{c^4}). Then there exists some s ≥ max(s̄, s_0) such that n ∈ U_s. By (4.2), without loss of generality, we may assume Λ_{2N_s^{c^4}} ∩ Ω̃^s_k ≠ ∅ for some k ∈ Q⁺_s. Then for k ≠ k′ ∈ Q⁺_s, we have

‖k − k′‖ > (log(γ/2δ_s))^{1/τ} ≳ N_{s+1}^{c^5} ≫ diam Ũ*_s.

Thus Ũ*_s ∩ ⋃_{l∈Q⁺_s} Ω̃^s_l = ∅. Now, if there exists l ∈ Q⁻_s such that Ũ*_s ∩ Ω̃^s_l ≠ ∅, then

N_s < N_s^{c^4} − 100N_s^{c^2} ≤ ‖l − k‖ and ‖l + k‖ ≤ ‖l‖ + ‖k‖ < 11N_{s+1}^{c^4}.

Recalling Q_s ⊂ P_s ⊂ Z^d + ½Σ_{i=0}^{s−1} l_i, we have l + k ∈ Z^d. Hence by (4.1),

e^{−(11N_{s+1}^{c^4})^{τ_1}} < ‖2θ + (l + k)·ω‖ ≤ ‖θ + l·ω − θ_s‖ + ‖θ + k·ω + θ_s‖ < 2δ_s.

This contradicts |log δ_s| ∼ N_{s+1}^{c^5 τ} ≫ N_{s+1}^{c^4 τ_1}. We thus have shown that Ũ*_s ∩ ⋃_{l∈Q_s} Ω̃^s_l = ∅, which implies that Ũ*_s is s-good. Finally, recalling (4.4), we have dist(n, ∂^−Ũ*_s) ≥ min(10N_{s+1}^{c^4} − ‖n‖, ‖n‖ − 3N_s^{c^4}) ≳ ‖n‖, so the Green's function estimates (3.16)-(3.17) on the s-good set Ũ*_s, combined with the resolvent identity, give exponential smallness of u(n) in ‖n‖, which yields the exponential decay of u.
We complete the proof of Theorem 1.2.
Remark 4.1. Assume that for some E ∈ [−2, 2] the inductive process stops at a finite stage (i.e., Q_s = ∅ for some s < ∞). Then for N > N_s^{c^5}, we can enlarge Λ_N to Λ̃_N with Λ_N ⊂ Λ̃_N ⊂ Λ_{N+50N_s^{c^2}}, and Λ̃_N ∩ Ω̃^{s′}_k ≠ ∅ ⇒ Ω̃^{s′}_k ⊂ Λ̃_N for s′ ≤ s and k ∈ P_{s′}. Thus Λ̃_N is s-good. For n ∈ Λ_{½N}, since dist(n, ∂^−Λ̃_N) > N_s^{c^3}, we have

|u(n)| ≤ Σ_{(n′,n″)∈∂Λ̃_N} |T^{-1}_{Λ̃_N}(n, n′)u(n″)| ≤ 2d Σ_{n′∈∂^−Λ̃_N} |T^{-1}_{Λ̃_N}(n, n′)| · sup_{n″∈∂^+Λ̃_N} |u(n″)| ≤ CN^{2d} · e^{−½γ_∞N} → 0 (as N → ∞).

Hence such E is not a generalized eigenvalue of H(θ).
5. (1/2−)-Hölder continuity of the IDS

In this section, we apply our estimates to obtain the (1/2−)-Hölder continuity of the IDS.
Proof of Theorem 1.3. Let T be given by (3.1). Fix µ > 0, θ ∈ T and E ∈ [−2, 2]. Let ε_0 be defined in Theorem 3.2 and assume 0 < ε ≤ ε_0. Fix

0 < η < η_0 = min(e^{−(4/µ)^{c/(c−1)}}, e^{−|log δ_0|^c}).   (5.1)

Denote by {ξ_r : r = 1, ..., R} ⊂ span(δ_n : n ∈ Λ_N) the ℓ²-orthonormal eigenvectors of T_{Λ_N} with eigenvalues belonging to [−η, η]. We aim to prove that, for sufficiently large N (depending on η), R ≤ (#Λ_N)η^{1/2−µ}. From (5.1), we can choose s ≥ 1 such that |log δ_{s−1}|^c ≤ |log η| < |log δ_s|^c.
Enlarge Λ_N to Λ̃_N so that Λ_N ⊂ Λ̃_N ⊂ Λ_{N+50N_s^{c^2}} and Λ̃_N ∩ Ω̃^{s′}_k ≠ ∅ ⇒ Ω̃^{s′}_k ⊂ Λ̃_N for s′ ≤ s and k ∈ P_{s′}. Define further

K = {k ∈ P_s : Ω̃^s_k ⊂ Λ̃_N, min_{σ=±1} ‖θ + k·ω + σθ_s‖ < η^{1/2−µ/2}}

and Λ′_N = Λ̃_N \ ⋃_{k∈K} Ω̃^s_k. Thus by (3.10), we obtain

k′ ∈ Q_{s′}, Ω̃^{s′}_{k′} ⊂ Λ′_N, Ω^{s′}_{k′} ⊂ Ω^{s′+1}_k ⇒ Ω̃^{s′+1}_k ⊂ Λ′_N for s′ < s.
Since |log η| < |log δ_s|^c ∼ |log δ_{s−1}|^{c^6} ∼ N_s^{c^{11}τ} < N_s^{1/c}, we get from the resolvent identity

‖T^{-1}_{Λ′_N}‖ < δ_{s−1}^{-3} sup_{{k∈P_s : Ω̃^s_k ⊂ Λ′_N}} ‖θ + k·ω − θ_s‖^{-1} · ‖θ + k·ω + θ_s‖^{-1} < δ_{s−1}^{-3} η^{µ−1} < ½η^{-1},   (5.2)

where the last inequality follows from (5.1).
where the last inequality follows from (5.1). By the uniform distribution of {n · ω} n∈Z d in T, we have
#(Λ̃_N \ Λ′_N) ≤ #Ω̃^s_k · #K ≤ CN_s^{cd} · #{k ∈ Z^d + ½Σ_{i=0}^{s−1} l_i : ‖k‖ ≤ N + 50N_s^{c^2}, min_{σ=±1} ‖θ + k·ω + σθ_s‖ < η^{1/2−µ/2}} ≤ CN_s^{cd} · η^{1/2−µ/2} (N + 50N_s^{c^2})^d ≤ CN_s^{cd} · η^{1/2−µ/2} #Λ_N

for sufficiently large N.
For a vector ξ ∈ C^Λ with Λ ⊂ Z^d, we define ‖ξ‖ to be its ℓ²-norm. Let ξ ∈ {ξ_r : r ≤ R} be an eigenvector of T_{Λ_N}. Then ‖T_{Λ_N}ξ‖ = ‖R_{Λ_N}Tξ‖ ≤ η. Hence

η ≥ ‖R_{Λ′_N}T_{Λ_N}ξ‖ = ‖R_{Λ′_N}T R_{Λ′_N}ξ + R_{Λ′_N}T R_{Λ̃_N\Λ′_N}ξ − R_{Λ′_N\Λ_N}Tξ‖.   (5.3)

Applying T^{-1}_{Λ′_N} to (5.3), together with (5.2), implies

‖R_{Λ′_N}ξ + T^{-1}_{Λ′_N}(R_{Λ′_N}T R_{Λ̃_N\Λ′_N}ξ − R_{Λ′_N\Λ_N}Tξ)‖ < ½.   (5.4)

Denote H = Range(T^{-1}_{Λ′_N}(R_{Λ′_N}T R_{Λ̃_N\Λ′_N} − R_{Λ′_N\Λ_N}T)). Then

dim H ≤ Rank(T^{-1}_{Λ′_N}(R_{Λ′_N}T R_{Λ̃_N\Λ′_N} − R_{Λ′_N\Λ_N}T)) ≤ #(Λ̃_N \ Λ′_N) + #(Λ̃_N \ Λ_N) ≤ CN_s^{cd} · η^{1/2−µ/2} #Λ_N + CN_s^{c^2 d} N^{d−1} ≤ CN_s^{cd} · η^{1/2−µ/2} #Λ_N.
Denote by P H the orthogonal projection to H. Applying I − P H to (5.4), we get
R Λ ′ N ξ − P H R Λ ′ N ξ 2 = R Λ ′ N ξ 2 − P H R Λ ′ N ξ 2 ≤ 1 4 .
Before concluding the proof, we need a useful lemma.
Lemma 5.1. Let H be a Hilbert space and let H_1, H_2 be its subspaces. Let {ξ_r : r = 1, ..., R} be a set of orthonormal vectors. Then we have

Σ_{r=1}^R ‖P_{H_1}P_{H_2}ξ_r‖² ≤ dim H_1.

Proof of Lemma 5.1. Denote by ⟨·,·⟩ the inner product on H. Let {φ_i} be an orthonormal basis of H_1. By Parseval's equality and Bessel's inequality, we have

Σ_{r=1}^R ‖P_{H_1}P_{H_2}ξ_r‖² = Σ_{r=1}^R Σ_i |⟨φ_i, P_{H_2}ξ_r⟩|² = Σ_i Σ_{r=1}^R |⟨P_{H_2}φ_i, ξ_r⟩|² ≤ Σ_i ‖P_{H_2}φ_i‖² ≤ Σ_i ‖φ_i‖² ≤ dim H_1.
Finally, it follows from Lemma 5.1 that
R = Σ_{r=1}^R ‖ξ_r‖² = Σ_{r=1}^R ‖R_{Λ′_N}ξ_r‖² + Σ_{r=1}^R ‖R_{Λ_N\Λ′_N}ξ_r‖²
≤ ¼R + Σ_{r=1}^R (‖P_H R_{Λ′_N}ξ_r‖² + ‖R_{Λ_N\Λ′_N}ξ_r‖²)
≤ ¼R + dim H + #(Λ_N \ Λ′_N)
≤ ¼R + CN_s^{cd} · η^{1/2−µ/2} #Λ_N.
Hence we get

R ≤ CN_s^{cd} · η^{1/2−µ/2} #Λ_N ≤ η^{1/2−µ} #Λ_N.
We finish the proof of Theorem 1.3.
Remark 5.1. In the above proof, if the inductive process stops at a finite stage (i.e., Q_s = ∅ for some s) and |log δ_s|^c ≤ |log η|, then Λ̃_N is s-good and

‖T^{-1}_{Λ̃_N}‖ < δ_{s−1}^{-3} δ_s^{-2} < ½η^{-1},

which implies R ≤ (4/3)#(Λ̃_N \ Λ_N) ≤ CN_s^{c^2 d} N^{-1} #Λ_N.
Letting N → ∞, we get N (E + η) − N (E − η) = 0, which means E / ∈ σ(H(θ)).
Appendix A.
Proof of Remark 3.1. Let i ∈ Q⁺_0 and j ∈ Q⁻_0 satisfy

‖θ + i·ω + θ_0‖ < δ_0,  ‖θ + j·ω − θ_0‖ < δ_0^{1/100}.

Then (1.4) implies that 1, ω_1, ..., ω_d are rationally independent and {k·ω}_{k∈Z^d} is dense in T. Thus there exists k ∈ Z^d such that ‖2θ + k·ω‖ is sufficiently small with

‖θ + (k − j)·ω + θ_0‖ ≤ ‖2θ + k·ω‖ + ‖θ + j·ω − θ_0‖ < δ_0^{1/100},
‖θ + (k − i)·ω − θ_0‖ ≤ ‖2θ + k·ω‖ + ‖θ + i·ω + θ_0‖ < δ_0.

We then obtain k − j ∈ Q⁺_0 and k − i ∈ Q⁻_0, which implies
dist Q + 0 , Q − 0 ≤ dist Q − 0 , Q + 0 .
The similar argument shows
dist Q + 0 , Q − 0 ≥ dist Q − 0 , Q + 0 .
We have shown
dist Q + 0 , Q − 0 = dist Q − 0 , Q + 0 .
Appendix B.
Lemma B.1 (Schur Complement Lemma). Let A ∈ C^{d_1×d_1}, D ∈ C^{d_2×d_2}, B ∈ C^{d_1×d_2}, C ∈ C^{d_2×d_1} be matrices and

M = (A  B; C  D).

Assume further that A is invertible and ‖B‖, ‖C‖ ≤ 1. Then we have:
(1) det M = det A · det S, where S = D − CA^{-1}B is called the Schur complement of A.
(2) M is invertible iff S is invertible, and

‖S^{-1}‖ ≤ ‖M^{-1}‖ < 4(1 + ‖A^{-1}‖²)(1 + ‖S^{-1}‖).   (B.1)

Proof of Lemma B.1. Direct computation shows

M^{-1} = (A^{-1} + A^{-1}BS^{-1}CA^{-1},  −A^{-1}BS^{-1};  −S^{-1}CA^{-1},  S^{-1}),

which implies (B.1).
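For completeness, the "direct computation" can be displayed explicitly; the following LaTeX fragment is a worked check added here (it is not part of the original proof), verifying M M^{-1} = I blockwise with S = D − CA^{-1}B:

% Worked check of the block inverse used in Lemma B.1.
\[
\begin{pmatrix} A & B \\ C & D \end{pmatrix}
\begin{pmatrix} A^{-1} + A^{-1} B S^{-1} C A^{-1} & -A^{-1} B S^{-1} \\ -S^{-1} C A^{-1} & S^{-1} \end{pmatrix}
=
\begin{pmatrix}
I + B S^{-1} C A^{-1} - B S^{-1} C A^{-1} & -B S^{-1} + B S^{-1} \\
C A^{-1} + (C A^{-1} B - D) S^{-1} C A^{-1} & (D - C A^{-1} B) S^{-1}
\end{pmatrix}
=
\begin{pmatrix} I & 0 \\ 0 & I \end{pmatrix},
\]
% since (C A^{-1} B - D) S^{-1} = -I and (D - C A^{-1} B) S^{-1} = I.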
Appendix C.

Lemma C.1. Let l ∈ ½Z^d and let Λ ⊂ Z^d + l be a finite set which is symmetrical about the origin (i.e., n ∈ Λ ⇔ −n ∈ Λ). Then det T(z)_Λ = det((cos 2π(z + n·ω)δ_{n,n′} − E + ε∆)_{n∈Λ}) is an even function of z.

Proof of Lemma C.1. Define the unitary map U_Λ : ℓ²(Λ) → ℓ²(Λ) with (U_Λφ)(n) = φ(−n). Then

U_Λ^{-1} T(z)_Λ U_Λ = (cos 2π(z − n·ω)δ_{n,n′} − E + ε∆)_{n∈Λ} = T(−z)_Λ,

which implies det T(z)_Λ = det T(−z)_Λ.
| [] |
[
"Reliable Optimization of Arbitrary Functions over Quantum Measurements",
"Reliable Optimization of Arbitrary Functions over Quantum Measurements"
] | [
"Jing Luo \nSchool of Physics\nKey Laboratory of Advanced Optoelectronic Quantum Architecture and Measurement of Ministry of Education\nBeijing Institute of Technology\n100081BeijingChina\n",
"Jiangwei Shang \nSchool of Physics\nKey Laboratory of Advanced Optoelectronic Quantum Architecture and Measurement of Ministry of Education\nBeijing Institute of Technology\n100081BeijingChina\n"
] | [
"School of Physics\nKey Laboratory of Advanced Optoelectronic Quantum Architecture and Measurement of Ministry of Education\nBeijing Institute of Technology\n100081BeijingChina",
"School of Physics\nKey Laboratory of Advanced Optoelectronic Quantum Architecture and Measurement of Ministry of Education\nBeijing Institute of Technology\n100081BeijingChina"
] | [] | As the connection between classical and quantum worlds, quantum measurements play a unique role in the era of quantum information processing. Given an arbitrary function of quantum measurements, how to obtain its optimal value is often considered as a basic yet important problem in various applications. Typical examples include but are not limited to optimizing the likelihood functions in quantum measurement tomography, searching the Bell parameters in Bell-test experiments, and calculating the capacities of quantum channels. In this work, we propose reliable algorithms for optimizing arbitrary functions over the space of quantum measurements by combining the so-called Gilbert's algorithm for convex optimization with certain gradient algorithms. With extensive applications, we demonstrate the efficacy of our algorithms with both convex and nonconvex functions. | 10.3390/e25020358 | [
"https://export.arxiv.org/pdf/2302.07534v1.pdf"
] | 256,868,346 | 2302.07534 | c6dd09050ff2041637140fba1373cdeb60fb1b17 |
Reliable Optimization of Arbitrary Functions over Quantum Measurements
Published: 15 February 2023
Jing Luo
School of Physics
Key Laboratory of Advanced Optoelectronic Quantum Architecture and Measurement of Ministry of Education
Beijing Institute of Technology
100081BeijingChina
Jiangwei Shang
School of Physics
Key Laboratory of Advanced Optoelectronic Quantum Architecture and Measurement of Ministry of Education
Beijing Institute of Technology
100081BeijingChina
Reliable Optimization of Arbitrary Functions over Quantum Measurements
Published: 15 February 2023. DOI: 10.3390/e25020358. Received: 25 January 2023; Revised: 9 February 2023; Accepted: 14 February 2023. Citation: Luo, J.; Shang, J. Reliable Optimization of Arbitrary Functions over Quantum Measurements. Entropy 2023, 25, 358. Academic Editor: Jay Lawrence. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Keywords: quantum measurement; Gilbert's algorithm; convex optimization; nonconvex optimization
As the connection between classical and quantum worlds, quantum measurements play a unique role in the era of quantum information processing. Given an arbitrary function of quantum measurements, how to obtain its optimal value is often considered as a basic yet important problem in various applications. Typical examples include but are not limited to optimizing the likelihood functions in quantum measurement tomography, searching the Bell parameters in Bell-test experiments, and calculating the capacities of quantum channels. In this work, we propose reliable algorithms for optimizing arbitrary functions over the space of quantum measurements by combining the so-called Gilbert's algorithm for convex optimization with certain gradient algorithms. With extensive applications, we demonstrate the efficacy of our algorithms with both convex and nonconvex functions.
Introduction
In quantum information science, numerous complex mathematical problems remain to be solved. Since the set of quantum states as well as quantum measurements form convex sets, various important tasks in this field, such as the calculation of ground state energy, violation of the Bell inequality, and the detection and quantification of quantum entanglement [1,2], conform to the framework of convex optimization theory. The primary tool in convex optimization is semidefinite programming (SDP) [3,4], which can be used to derive relaxed constraints and provide accurate solutions for a large number of computationally challenging tasks. However, serious drawbacks also exist for SDP including its slow computation speed and low accuracy. For instance, SDP can only compute up to four qubits in quantum state tomography (QST), while improved superfast algorithms [5] can quickly go up to eleven qubits with a higher precision. Consequently, developing more efficient algorithms in convex optimization is becoming more and more crucial as quantum technologies rapidly advance.
Recently, an efficient convex optimization algorithm [6] was proposed by Brierley et al. based on the so-called Gilbert's algorithm [7]. Concurrently, Ref. [8] used Gilbert's algorithm to investigate whether nonlocal correlations can be discriminated in polynomial time. In Ref. [9], Gilbert's algorithm was employed as a tool to satisfy certain constraints, based on which two reliable convex optimization schemes over the quantum state space were proposed. In addition, some nonconvex optimization algorithms have been put forward for QST; for instance, the one in Ref. [10] is faster and more accurate as compared to previous approaches. One notices that all these studies concern only optimization over the quantum state space; optimization over the quantum measurement space is rarely mentioned.
In fact, various important and meaningful problems related to quantum measurements exist in convex optimization, including, for example, searching the Bell parameters in Bell-test experiments [11], optimizing the correlation of quantum measurements under different measurement settings [12][13][14][15], and maximizing the likelihood functions in quantum measurement tomography. Meanwhile, the characterization of quantum measurements forms the basis for quantum state tomography [16][17][18] and quantum process tomography [19][20][21]. Therefore, convex optimization over the quantum measurement space stands as an independent yet important problem in quantum information theory. However, the space of quantum measurements is much more complex as compared to the quantum state space, since a measurement may have any number of outcomes as long as the corresponding probabilities of these outcomes sum to one. Recently, Ref. [22] proposed a method to optimize over the measurement space based on SDP, but it fails to solve complex tasks due to the intrinsic problems with SDP. Worst of all, nonconvex functions [23] easily appear in the space of quantum measurements. Unlike convex functions, local optima might be found during the process of optimization. Hence, nonconvex optimization is regarded as more difficult than convex optimization. In this work, we propose two reliable algorithms for optimizing arbitrary functions over the space of quantum measurements by combining the so-called Gilbert's algorithm for convex optimization with the direct gradient (DG) algorithm as well as the accelerated projected gradient (APG) algorithm. With extensive applications, we demonstrate the efficacy of our algorithms with both convex and nonconvex functions.
This work is organized as follows: In Section 2, we propose two reliable algorithms for optimizing over quantum measurement space by combining Gilbert's algorithm with the DG and APG algorithms, respectively. The universality of our method is demonstrated by several examples with both convex and nonconvex functions in Section 3. The last Section 4 provides the conclusions.
Function Optimization
In the quantum state space Q, an arbitrary state ρ should satisfy the conditions
ρ ≥ 0 ,(1)tr(ρ) = 1 .(2)
Given a smaller convex subset C ⊂ Q, Gilbert's algorithm can be used to approximately find the closest state ρ_C ∈ C with respect to ρ [9]. In general, for an arbitrary matrix M in the matrix space M, we employ Gilbert's algorithm to search for the closest quantum state ρ_Q ∈ Q with respect to M. Throughout this work, we denote this operation by using Gilbert's algorithm as
ρ Q ≡ S M .(3)
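As an illustration, the following Python sketch (ours, not the authors' code) realizes the map S of Equation (3) exactly for small dimensions: it returns the closest density matrix to a given matrix in Frobenius norm via eigenvalue truncation. Gilbert's algorithm approximates precisely this kind of projection iteratively for large-scale problems, so this exact routine serves only as a stand-in.

import numpy as np

def closest_density_matrix(M):
    # Project (the Hermitian part of) M onto the set Q of density matrices,
    # i.e., positive semidefinite with unit trace, in Frobenius norm.
    # This plays the role of S(M) in Eq. (3); Gilbert's algorithm would
    # approximate this projection iteratively in large dimensions.
    H = (M + M.conj().T) / 2
    evals, evecs = np.linalg.eigh(H)
    # Euclidean projection of the spectrum onto the probability simplex.
    mu = np.sort(evals)[::-1]
    cssv = np.cumsum(mu) - 1.0
    idx = np.nonzero(mu - cssv / np.arange(1, len(mu) + 1) > 0)[0][-1]
    theta = cssv[idx] / (idx + 1.0)
    w = np.maximum(evals - theta, 0.0)
    return (evecs * w) @ evecs.conj().T  # rho = V diag(w) V^dagger

# Example: project a random matrix and check the result is a valid state.
rho = closest_density_matrix(np.random.randn(3, 3) + 1j * np.random.randn(3, 3))
assert np.isclose(np.trace(rho).real, 1.0)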
Given experimental data, it is critical to identify the measurement settings that are most compatible with the data. Here, we consider the quantum measurement space Ω as all the positive operator-valued measures (POVMs). A quantum measurement device is characterized by a set of operators Π l , which have to satisfy two constraints
Π l ≥ 0 ,(4)L ∑ l=1 Π l = I ,(5)
where L is the total number of operators in the set. Denote a function F Π l defined over the quantum measurement space Ω. We assume that F Π l is differentiable with the gradient ∇F Π l ≡ G Π l . The objective is to optimize F Π l over the entire quantum measurement space, and we have
optimize F Π l ,(6a)s.t. Π l ∈ Ω .(6b)
A simple gradient method is very likely to take Π l outside of the quantum measurement space; for this, we employ Gilbert's algorithm to guarantee the condition in Equation (4). In addition, we rewrite the POVM as Π l = Π 1 , Π 2 , . . . , Π L−1 , I − ∑ L−1 l=1 Π l to satisfy the condition in Equation (5). Then, the structure of optimization proceeds as follows.
Taking a to-be-minimized objective function as an example, for the (k+1)-th iteration, first update the first (L−1) measurement operators with the DG scheme to obtain

Π_{l,k+1} = Π_{l,k} − εG(Π_{l,k}) ≡ DG(Π_{l,k}, G(Π_{l,k}), ε).   (7)

Here, ε represents the step size of the update, which can be any positive value, and k is the number of iterations. Second, normalize the measurement operators Π_{l,k+1} as density matrices ρ_{l,k+1}, such that
ρ l,k+1 = Π l,k+1 tr(Π l,k+1 ) ,(8)
which could be nonphysical. Third, use Gilbert's algorithm to project ρ l,k+1 back to the quantum state space Q, i.e., ρ l,k+1 → ρ Q l,k+1 = S(ρ l,k+1 ). Finally, reconstruct the physical measurement operators as
Π^Ω_{l,k+1} = t_{l,k+1} ρ^Q_{l,k+1},  l = 1, ..., L−1,   (9)

Π^Ω_{L,k+1} = I − Σ_{l=1}^{L−1} Π^Ω_{l,k+1},   (10)
where the parameters t_{l,k+1} are obtained, with the projected states ρ^Q_{l,k+1} fixed, from

{t_{l,k+1}}_{l=1}^{L−1} = argmin F({t_{l,k+1}}_{l=1}^{L−1}).

Here, to ensure that the first (L−1) measurement operators satisfy the condition in Equation (4), only t_{l,k+1} ≥ 0 is required, since ρ^Q_{l,k+1} ≥ 0 is guaranteed by Gilbert's algorithm. Meanwhile, in order to ensure that the last element of the new POVM satisfies the condition in Equation (4), let
Π Ω L,k+1 = I − L−1 ∑ l=1 ρ Q l,k+1 t l,k+1 ≥ 0 .(11)
Hence, we obtain a new POVM {Π^Ω_{l,k+1}} that satisfies the condition in Equation (6b) after each iteration. Whenever the difference between the values of the objective function in adjacent iterations is less than a certain threshold, the iteration stops and the optimal POVM is obtained. Otherwise, the iteration continues, with the step size controlled by a step factor β: when F_k < F_{k−1}, the current step size is kept; when F_k > F_{k−1}, the step size is too large and is reduced by the factor β. See the DG algorithm in Algorithm 1.
However, the DG algorithm has some disadvantages, such as slow convergence and low accuracy. For faster convergence, one can choose the APG algorithm [5,24]. The APG algorithm adjusts the direction of the gradient at each step, which improves the convergence speed. In simple terms, the APG algorithm introduces a companion operator

E_{l,k} = Π_{l,k} + ((θ_{k−1} − 1)/θ_k)(Π_{l,k} − Π_{l,k−1}),

which provides the momentum of the previous step controlled by the parameter θ, and the measurement operators are then updated as Π_{l,k} = E_{l,k−1} − εG(E_{l,k−1}). See the specific process in Algorithm 2.
Algorithm 1: DG algorithm
Input: ε > 0, 0 < β < 1; choose any {Π_{l,0}}_{l=1}^{L−1} ∈ Ω, F_0 = F({Π_{l,0}}).
Output: {Π_l}.
1 for k = 1, ... do
2   for l = 1, ..., L−1 do
3     Update Π_{l,k} = DG(Π_{l,k−1}, G(Π_{l,k−1}), ε). Calculate ρ_{l,k} and ρ^Q_{l,k} = S(ρ_{l,k}).
4   end
5   Gain {t_{l,k}}_{l=1}^{L−1} = argmin F_k({t_{l,k}}_{l=1}^{L−1}). Calculate Π^Ω_{l,k}, F_k = F({Π^Ω_{l,k}}).
6   Termination criterion!
7   if F_k > F_{k−1} then
8     Reset ε = βε, and Π_{l,k} = Π^Ω_{l,k−1}.
9   end
10 end

Algorithm 2: APG algorithm
Input: ε > 0, 0 < β < 1; choose any {Π_{l,0}}_{l=1}^{L−1} ∈ Ω, E_{l,0} = Π_{l,0}, θ_0 = 1, and F_0 = F({Π_{l,0}}).
Output: {Π_l}.
1 for k = 1, ... do
2   for l = 1, ..., L−1 do
3     Update Π_{l,k} = E_{l,k−1} − εG(E_{l,k−1}). Calculate ρ_{l,k} and ρ^Q_{l,k} = S(ρ_{l,k}).
4   end
5   Gain {t_{l,k}}_{l=1}^{L−1} = argmin F_k({t_{l,k}}_{l=1}^{L−1}). Calculate Π^Ω_{l,k}, F_k = F({Π^Ω_{l,k}}).
6   Termination criterion!
7   if F_k > F_{k−1} then
8     Reset ε = βε, Π_{l,k} = Π^Ω_{l,k−1}, E_{l,k} = Π_{l,k}, and θ_k = 1.
9   else
10    Set θ_k = ½(1 + √(1 + 4θ_{k−1}²));
11    Update E_{l,k} = Π_{l,k} + ((θ_{k−1} − 1)/θ_k)(Π_{l,k} − Π_{l,k−1}).
12  end
13 end
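To make the control flow of Algorithms 1 and 2 concrete, here is a minimal Python paraphrase of the shared outer loop (our sketch, not the authors' implementation). The user-supplied callbacks objective, gradient, project_state (the map S) and fit_weights (the argmin over {t_l} subject to t_l ≥ 0 and Equation (11)) are assumptions of this sketch.

import numpy as np

def optimize_povm(Pi0, objective, gradient, project_state, fit_weights,
                  eps=0.1, beta=0.5, tol=1e-8, max_iter=1000, accelerated=True):
    # Pi0: list of the L-1 free POVM elements; the L-th one is I - sum(Pi).
    # accelerated=False mimics Algorithm 1 (DG); True adds the APG momentum.
    Pi = [P.copy() for P in Pi0]
    E = [P.copy() for P in Pi]            # companion operators (APG)
    theta, F_old = 1.0, objective(Pi)
    for _ in range(max_iter):
        Pi_prev = [P.copy() for P in Pi]
        base = E if accelerated else Pi
        rhoQ = []
        for l, B in enumerate(base):
            step = B - eps * gradient(base, l)                      # Eq. (7)
            rhoQ.append(project_state(step / np.trace(step).real))  # Eqs. (8)-(9)
        t = fit_weights(rhoQ)             # argmin over {t_l} >= 0, cf. Eq. (11)
        Pi = [tl * r for tl, r in zip(t, rhoQ)]
        F_new = objective(Pi)
        if abs(F_new - F_old) < tol:      # termination criterion
            break
        if F_new > F_old:                 # overshoot: shrink step, reset momentum
            eps *= beta
            Pi, E, theta = Pi_prev, [P.copy() for P in Pi_prev], 1.0
        else:
            theta_new = 0.5 * (1 + np.sqrt(1 + 4 * theta**2))
            E = [P + ((theta - 1) / theta_new) * (P - Q)
                 for P, Q in zip(Pi, Pi_prev)]
            theta, F_old = theta_new, F_new
    return Pi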
Applications
In this section, we demonstrate the efficacy of our algorithms by optimizing arbitrary convex as well as nonconvex functions over the space of quantum measurements.
Convex Functions
In quantum measurement tomography [25][26][27], a set of known probe states ρ m is measured to provide the information needed to reconstruct an unknown POVM Π l . The probability that the device would respond to the quantum state ρ m by producing the outcome Π l is given by
p lm = tr ρ m Π l .(12)
Typically, the linear inversion method [28] can be used to obtain the ideal POVM, but nonphysical results are likely to be obtained. Then, the maximum likelihood estimation (MLE) [29] is proposed to reconstruct the POVM that satisfies all the conditions. However, MLE fails to return any meaningful results when the target POVM is of low rank, which is quite typical, especially in higher-dimensional spaces. These problems can be avoided by using our algorithms.
To estimate the operators Π l , we maximize the likelihood function
L({Π_l}) = ∏_{l=1}^{L} ∏_{m=1}^{M} [tr(ρ_m Π_l)]^{f_{lm}},   (13)
where M is the number of different input states ρ m , and
f lm = n lm n ,(14)
with n_{lm} denoting the number of occurrences of the l-th outcome when measuring the m-th state ρ_m, and n representing the total number of measured input states. One can see that L({Π_l}) is not strictly concave, while the log-likelihood ln L({Π_l}) is. Here, we minimize the negative log-likelihood function F({Π_l}) = −ln L({Π_l}) with
ln L Π l = L ∑ l=1 M ∑ m=1 f lm ln p lm .(15)
To satisfy the condition in Equation (5), rewrite the objective function as
ln L({Π_l}) = Σ_{l=1}^{L−1} Σ_{m=1}^{M} f_{lm} ln tr(ρ_m Π_l) + Σ_{m=1}^{M} f_{Lm} ln tr(ρ_m (I − Σ_{l=1}^{L−1} Π_l)).   (16)
The gradient of ln L({Π_l}) with respect to Π_l is

∇ln L({Π_l}) = Σ_{m=1}^{M} (f_{lm}/p_{lm} − f_{Lm}/(1 − Σ_{l′=1}^{L−1} p_{l′m})) ρ_m.   (17)
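For concreteness, a short Python sketch (ours, under the same notation) evaluating the negative log-likelihood of Equation (16) and its gradient, which follows from Equation (17):

import numpy as np

def neg_log_likelihood_and_grad(Pi, rho, f):
    # Pi: list of the L-1 free POVM elements (the L-th is I - sum(Pi)).
    # rho: list of M probe density matrices; f: L x M array of frequencies.
    # Returns F = -ln L and the gradients dF/dPi_l for l = 1..L-1.
    d = rho[0].shape[0]
    p = np.array([[np.trace(R @ P).real for R in rho] for P in Pi])  # (L-1) x M
    p_L = 1.0 - p.sum(axis=0)                                        # tr(rho_m Pi_L)
    F = -(f[:-1] * np.log(p)).sum() - (f[-1] * np.log(p_L)).sum()
    grads = [-sum((f[l, m] / p[l, m] - f[-1, m] / p_L[m]) * rho[m]
                  for m in range(len(rho)))
             for l in range(len(Pi))]
    return F, grads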
For numerical simulations, we mainly consider Pauli measurements which are the most commonly-used measurements in quantum information processing. Then, the cases of one qubit, one qutrit, two qubits, and two qutrits are used for the experimental setup, respectively. Specifically, the setups of these four scenarios are described below.
One Qubit
For one qubit, we take the eigenstates of σ_z and the superposition states (1/√2)(|0⟩_z ± |1⟩_z) and (1/√2)(|0⟩_z ± i|1⟩_z) as the input states. In the measurement setup, we select the projection of the spin along the x-axis, i.e.,
Π 1 = |0 x 0 x | ; Π 2 = |1 x 1 x | .(18)
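A small Python snippet (ours) building this one-qubit scenario explicitly:

import numpy as np

ket0, ket1 = np.array([1, 0], complex), np.array([0, 1], complex)
# Six probe states: sigma_z eigenstates and the four superpositions above.
probes = [np.outer(v, v.conj()) for v in
          [ket0, ket1] +
          [(ket0 + s * ket1) / np.sqrt(2) for s in (1, -1, 1j, -1j)]]

# Projectors onto the sigma_x eigenstates, Eq. (18).
plus_x = (ket0 + ket1) / np.sqrt(2)
minus_x = (ket0 - ket1) / np.sqrt(2)
Pi = [np.outer(plus_x, plus_x.conj()), np.outer(minus_x, minus_x.conj())]
assert np.allclose(Pi[0] + Pi[1], np.eye(2))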
One Qutrit
For one qutrit, we use 12 different input states: the three eigenstates of σ_z, |−1⟩_z, |0⟩_z and |1⟩_z, and the nine superposition states (1/√2)(|−1⟩_z + e^{iψ_j}|0⟩_z), (1/√2)(|0⟩_z + e^{iψ_j}|1⟩_z) and (1/√2)(|−1⟩_z + e^{iψ_j}|1⟩_z),
where j = 1, 2, 3; and ψ 1 = 0, ψ 2 = π 2 , and ψ 3 = π. The device measures the projection of the spin along the x-axis, and the POVM are projectors
Π 1 = | − 1 x −1 x | ; Π 2 = |0 x 0 x | ; Π 3 = |1 x 1 x | .(19)
Two Qubits
In the case of two qubits, we take the tensor products of the four eigenstates of two Pauli-Z operators |0 z 0 z , |1 z 1 z , |0 z 1 z , |1 z 0 z and the superposition states 1
√ 2 |0 z 0 z + e iψ j |0 z 1 z , 1 √ 2 |0 z 0 z + e iψ j |1 z 0 z , 1 √ 2 |0 z 0 z + e iψ j |1 z 1 z , 1 √ 2 |0 z 1 z + e iψ j |1 z 0 z , 1 √ 2 |0 z 1 z + e iψ j |1 z 1 z , 1 √ 2
|1 z 0 z + e iψ j |1 z 1 z as the probe states, where j = 1, 2, 3; ψ 1 = 0, ψ 2 = π 2 , and ψ 3 = π. Then, we choose the following POVM for the experimental simulation:
Π_1 = |0_x 0_x⟩⟨0_x 0_x| ; Π_2 = |0_x 1_x⟩⟨0_x 1_x| ; Π_3 = |1_x 0_x⟩⟨1_x 0_x| ; Π_4 = |1_x 1_x⟩⟨1_x 1_x| .   (20)
Two Qutrits
Finally, for the case of two qutrits, we perform a numerical simulation of the Stern-Gerlach apparatus measuring two particles with spin-1. We assume 45 different input states:
|1 z − 1 z , | − 1 z 0 z , | − 1 z 1 z , |0 z − 1 z , |0 z 0 z , |0 z 1 z , |1 z 0 z , |1 z 1 z , | − 1 z − 1 z
and 36 superposition states. In the simulation, the device measures the projection of the spin along the x-axis, and the POVM elements are the projectors
Π 1 = |0 x 1 x 0 x 1 x | ; Π 2 = |0 x − 1 x 0 x − 1 x | ; Π 3 = |1 x 0 x 1 x 0 x | ; Π 4 = |1 x − 1 x 1 x − 1 x | ; Π 5 = |0 x 0 x 0 x 0 x | ; Π 6 = | − 1 x 0 x −1 x 0 x | ; Π 7 = |1 x 1 x 1 x 1 x | ; Π 8 = | − 1 x 1 x −1 x 1 x | ; Π 9 = | − 1 x − 1 x −1 x − 1 x | .(21)
For each simulation case, the number of measurements for each probe state is 300, 10^5, 10^5, and 5 × 10^5, respectively. Then, according to the frequencies obtained from the simulated data, we use our algorithms to reconstruct the POVM. The fidelity between two POVM elements is defined via the fidelity between two states σ and ρ, i.e.,

F(σ, ρ) := (tr√(√σ ρ √σ))²,  applied to σ = Π_l/tr(Π_l) and ρ = Π_j/tr(Π_j).   (22)
In addition, the overall fidelity between two POVMs Π l L l=1 and Π j L j=1 on a d-dimensional Hilbert space is defined by
F(Π l , Π j ) := L ∑ l=1 w l F Π l tr(Π l ) , Π j tr(Π j ) 2 ,(23)
with w_l = √(tr(Π_l)tr(Π_j))/d [30]. The overall fidelities of the reconstructed POVMs are shown in Figure 1. Figures 2 and 3 present the variations of the fidelity of the POVM elements reconstructed using the DG algorithm and the APG algorithm with respect to the number of iteration steps in the different cases. We can see that these two algorithms are almost identical in accuracy, and the fidelities of the measurement operators are close to 1. Generally speaking, the APG algorithm converges faster than the DG algorithm. In addition, one notices that the fidelity of the last element in some of the simulations is not always increasing, which is a result of the constraint that we set in Equation (11). Figure 1. Comparison of the two algorithms in terms of the overall fidelity of the reconstructed measurements for the different cases. The number of measurements used in each simulation for each probe state is 300, 10^5, 10^5, and 5 × 10^5, respectively. For most cases, the APG algorithm converges faster than the DG algorithm.
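As a quick illustration of Equations (22) and (23), a Python sketch (ours) computing the element-wise and overall fidelities of two POVMs; here each element fidelity enters through the squared fidelity of Equation (22), and the weights w_l follow the definition above:

import numpy as np
from scipy.linalg import sqrtm

def state_fidelity(sigma, rho):
    # Squared Uhlmann fidelity (tr sqrt(sqrt(sigma) rho sqrt(sigma)))^2, Eq. (22).
    s = sqrtm(sigma)
    return np.real(np.trace(sqrtm(s @ rho @ s)))**2

def overall_fidelity(povm1, povm2):
    # Weighted overall fidelity of two POVMs, cf. Eq. (23).
    d = povm1[0].shape[0]
    total = 0.0
    for P, Q in zip(povm1, povm2):
        w = np.sqrt(np.trace(P).real * np.trace(Q).real) / d
        total += w * state_fidelity(P / np.trace(P).real, Q / np.trace(Q).real)
    return total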
Nonconvex Functions
Quantum detector self-characterization (QDSC) tomography is another method for characterizing quantum measurements. Unlike quantum measurement tomography, this method does not require knowing the specific form of the input probe states, but directly optimizes the cost function based on the measurement statistic f m to reconstruct the measurements. For POVM with L outcomes detected by m states, a data set of the measurement statistic f lm is obtained. We write the distribution of the data for each state as a vector
f_m = (f_{1m}, f_{2m}, ..., f_{Lm})^T.   (24)
For the one qubit case, define N i,l = b T i b l and write the POVM as
Π l = a l I + b l · σ(25)
under the Bloch representation, where i and l (1 ≤ i, l ≤ L) index the rows and columns of the matrix N, a = (a_1, ..., a_L)^T, b_l = (b_{l,x}, b_{l,y}, b_{l,z}), and σ = (σ_x, σ_y, σ_z). The matrix N and vector a can be represented as
N i,l = b T i b l = 1 2 tr(Π i Π l ) − 1 4 tr(Π i )tr(Π l ) ,(26)a l = 1 2 tr(Π l ) .(27)
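In code, Equations (26) and (27) read as follows (a small Python sketch of ours):

import numpy as np

def bloch_data(povm):
    # N_{i,l} = tr(Pi_i Pi_l)/2 - tr(Pi_i) tr(Pi_l)/4 and a_l = tr(Pi_l)/2,
    # cf. Eqs. (26)-(27), for a list of qubit POVM elements.
    L = len(povm)
    a = np.array([np.trace(P).real / 2 for P in povm])
    N = np.empty((L, L))
    for i in range(L):
        for l in range(L):
            N[i, l] = (np.trace(povm[i] @ povm[l]).real / 2
                       - np.trace(povm[i]).real * np.trace(povm[l]).real / 4)
    return N, a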
Then, the optimization of the cost function F(N⁺, a) is given by [23]

min Σ_m [1 − (f_m − a)^T N⁺ (f_m − a)]²,   (28a)
s.t. a_l² − N_{l,l} ≥ 0,   (28b)
where N + stands for the Moore-Penrose pseudoinverse of N. One notices that the objective function is nonconvex. Optimization of nonconvex functions is difficult as local minima might be found. Interestingly, we find that our algorithm can also be used to optimize nonconvex functions. Since our algorithm guarantees the conditions for quantum measurements, one only needs to optimize the objective function regardless of the constraint in Equation (28b). For numerical simulations, we choose 50 probe states:
1 2 I + σ z , 1 2 I − σ z , 1 2 I + sin iπ 4 cos nπ 8 σ x + sin iπ 4 sin nπ 8 σ y + cos iπ 4 σ z ,(29)
where i = 1, 2, · · · , 6; n = 1, 2, · · · , 8. In addition, we use the two-dimensional SIC POVM as the measurement device, and each state is measured 200 times. The APG algorithm is used to optimize the objective function. First, select any set of POVM operators in the measurement space, and use Equations (26) and (27) to obtain the initial values N + k and a k , respectively. Similarly, we calculate the gradient of the objective function in Equation (28a). The gradient of the objective function is given by
δF/δa = Σ_m 2[1 − (f_m − a)^T N⁺(f_m − a)] (N⁺ᵀ f_m + N⁺ f_m − (N⁺ + N⁺ᵀ) a),   (30)

δF/δN⁺ = Σ_m −2[1 − (f_m − a)^T N⁺(f_m − a)] (f_m − a)(f_m − a)^T.   (31)
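The following Python sketch (ours) evaluates the QDSC cost of Equation (28a) and the gradients (30) and (31); N_plus denotes the matrix N⁺ treated as a free variable:

import numpy as np

def qdsc_cost_and_grads(N_plus, a, F):
    # F: L x M array whose columns are the measured distributions f_m.
    # Returns the cost of Eq. (28a) and the gradients of Eqs. (30)-(31).
    cost, grad_a, grad_N = 0.0, np.zeros_like(a), np.zeros_like(N_plus)
    for m in range(F.shape[1]):
        r = F[:, m] - a                       # f_m - a
        q = r @ N_plus @ r
        cost += (1.0 - q)**2
        grad_a += 2 * (1.0 - q) * (N_plus.T @ F[:, m] + N_plus @ F[:, m]
                                   - (N_plus + N_plus.T) @ a)
        grad_N += -2 * (1.0 - q) * np.outer(r, r)
    return cost, grad_a, grad_N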
The values of N k+1 and a k+1 are obtained by iterating over N k and a k using gradient descent; then, b l,k+1 is obtained by decomposing N k+1 . In the experiment, we specify that the reference frame, i.e., the vector b 1 is parallel to the z-direction of the Bloch sphere, and set the xz plane of the Bloch sphere as the plane determined by the vectors b 1 and b 2 . This is equivalent to b 1,x = b 1,y = b 2,y = 0. Then, Π l,k+1 L−1 l=1 can be obtained by using Equation (25), which is the update for Π l,k L−1 l=1 . The fidelity of each POVM element can approach 1 in a very small number of iteration steps; see Figure 4. Then, the fidelities of the measurements are compared with the ones reported in [23], demonstrating that the performance of our algorithm is slightly better; see Figure 5.
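Recovering the Bloch vectors b_l from the updated N, and fixing the gauge b_1 ∥ z with b_2 in the xz-plane as described above, can be sketched as follows; this decomposition-plus-rotation recipe is our illustration of the step, not the authors' exact implementation:

import numpy as np

def bloch_vectors_from_N(N):
    # Factor N = B^T B with B of shape (3, L) via eigendecomposition,
    # then rotate so that b_1 is along z and b_2 lies in the xz-plane.
    evals, evecs = np.linalg.eigh(N)
    idx = np.argsort(evals)[::-1][:3]                  # keep the rank-3 part
    B = np.sqrt(np.clip(evals[idx], 0, None))[:, None] * evecs[:, idx].T
    def rot_to_z(v):
        # Rodrigues rotation taking unit vector v to the z-axis.
        v = v / np.linalg.norm(v)
        z = np.array([0.0, 0.0, 1.0])
        c = np.cross(v, z); s = np.linalg.norm(c)
        if s < 1e-12:
            return np.eye(3) if v @ z > 0 else np.diag([1.0, -1.0, -1.0])
        K = np.array([[0, -c[2], c[1]], [c[2], 0, -c[0]], [-c[1], c[0], 0]]) / s
        return np.eye(3) + s * K + (1 - v @ z) * (K @ K)
    B = rot_to_z(B[:, 0]) @ B
    phi = np.arctan2(B[1, 1], B[0, 1])                 # xy-angle of b_2
    R2 = np.array([[np.cos(phi), np.sin(phi), 0],
                   [-np.sin(phi), np.cos(phi), 0],
                   [0, 0, 1]])                         # rotate b_2 into xz-plane
    return R2 @ B                                      # columns are the b_l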
Conclusions
We have proposed two reliable algorithms for optimizing arbitrary functions over the quantum measurement space. For demonstration, we have shown several examples on convex functions in quantum measurement tomography with different dimensions, as well as a nonconvex function of one qubit in quantum detector self-characterization tomography. Surprisingly, our method does not encounter the problem of rank deficiency. Compared with SDP, our method can be easily applied to higher-dimensional cases as well as to optimize nonconvex functions. Moreover, our method reports better results as compared to previous approaches. For future work, we will consider the optimization over the joint space of quantum states and quantum measurements, for tasks such as calculating the capacity of quantum channels.

Institutional Review Board Statement: Not applicable.
Data Availability Statement:
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Figure 2. For different cases of the quantum measurement tomography, fidelities of the measurements obtained by the DG algorithm vary with the number of iteration steps. In general, the fidelity of each POVM element saturates to the maximum very quickly.

Figure 3. For different cases of the quantum measurement tomography, fidelities of the measurements obtained by the APG algorithm vary with the number of iteration steps. In general, the fidelity of each POVM element saturates to the maximum very quickly.

Figure 4. In the case of QDSC, the fidelity of each element of the two-dimensional SIC POVM saturates to the maximum by using only two steps.

Figure 5. Comparison of the fidelities of the reconstructed quantum measurements between the APG algorithm (blue) and the method in [23] (green).

Author Contributions: J.L. performed the numerical calculations. All authors contributed to the interpretation of the results, preparation, and writing of the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding: This work has been supported by the National Natural Science Foundation of China (Grants No. 11805010, No. 12175014, and No. 92265115).
Acknowledgments: We thank Ye-Chao Liu for fruitful discussions.

Conflicts of Interest: The authors declare no conflict of interest.
References

Gühne, O.; Tóth, G. Entanglement detection. Phys. Rep. 2009, 474, 1-75.
Amico, L.; Fazio, R.; Osterloh, A.; Vedral, V. Entanglement in many-body systems. Rev. Mod. Phys. 2008, 80, 517.
Grant, M.C.; Boyd, S.P. Graph implementations for nonsmooth convex programs. In Recent Advances in Learning and Control; Springer: London, UK, 2008; pp. 95-110.
Boyd, S.P.; Vandenberghe, L. Convex Optimization. IEEE Trans. Automat. Contr. 2006, 51, 1859-1859.
Shang, J.; Zhang, Z.; Ng, H.K. Superfast maximum-likelihood reconstruction for quantum tomography. Phys. Rev. A 2017, 95, 062336.
Brierley, S.; Navascues, M.; Vertesi, T. Convex separation from convex optimization for large-scale problems. arXiv 2016, arXiv:1609.05011.
Gilbert, E.G. An iterative procedure for computing the minimum of a quadratic form on a convex set. SIAM J. Control. Optim. 1966, 4, 61-80.
Montina, A.; Wolf, S. Can non-local correlations be discriminated in polynomial time? arXiv 2016, arXiv:1609.06269.
Shang, J.; Gühne, O. Convex optimization over classes of multiparticle entanglement. Phys. Rev. Lett. 2018, 120, 050506.
Kyrillidis, A.; Kalev, A.; Park, D.; Bhojanapalli, S.; Caramanis, C.; Sanghavi, S. Provable compressed sensing quantum state tomography via non-convex methods. NPJ Quantum Inf. 2018, 4, 1-7.
Smania, M.; Kleinmann, M.; Cabello, A.; Bourennane, M. Avoiding apparent signaling in Bell tests for quantitative applications. arXiv 2018, arXiv:1801.05739.
Becker, S.R.; Candès, E.J.; Grant, M.C. Templates for convex cone problems with applications to sparse signal recovery. Math. Program. Comput. 2011, 3, 165-218.
Kleinmann, M.; Cabello, A. Quantum correlations are stronger than all nonsignaling correlations produced by n-outcome measurements. Phys. Rev. Lett. 2016, 117, 150401.
Kleinmann, M.; Vértesi, T.; Cabello, A. Proposed experiment to test fundamentally binary theories. Phys. Rev. A 2017, 96, 032104.
Hu, X.M.; Liu, B.H.; Guo, Y.; Xiang, G.Y.; Huang, Y.F.; Li, C.F.; Guo, G.C.; Kleinmann, M.; Vértesi, T.; Cabello, A. Observation of stronger-than-binary correlations with entangled photonic qutrits. Phys. Rev. Lett. 2018, 120, 180402.
Cramer, M.; Plenio, M.B.; Flammia, S.T.; Somma, R.; Gross, D.; Bartlett, S.D.; Landon-Cardinal, O.; Poulin, D.; Liu, Y.K. Efficient quantum state tomography. Nat. Commun. 2010, 1, 1-7.
Torlai, G.; Mazzola, G.; Carrasquilla, J.; Troyer, M.; Melko, R.; Carleo, G. Neural-network quantum state tomography. Nat. Phys. 2018, 14, 447-450.
Gross, D.; Liu, Y.K.; Flammia, S.T.; Becker, S.; Eisert, J. Quantum state tomography via compressed sensing. Phys. Rev. Lett. 2010, 105, 150401.
Mohseni, M.; Rezakhani, A.T.; Lidar, D.A. Quantum-process tomography: Resource analysis of different strategies. Phys. Rev. A 2008, 77, 032322.
Altepeter, J.B.; Branning, D.; Jeffrey, E.; Wei, T.; Kwiat, P.G.; Thew, R.T.; O'Brien, J.L.; Nielsen, M.A.; White, A.G. Ancilla-assisted quantum process tomography. Phys. Rev. Lett. 2003, 90, 193601.
O'Brien, J.L.; Pryde, G.; Gilchrist, A.; James, D.; Langford, N.K.; Ralph, T.; White, A. Quantum process tomography of a controlled-NOT gate. Phys. Rev. Lett. 2004, 93, 080502.
Cattaneo, M.; Borrelli, E.M.; García-Pérez, G.; Rossi, M.A.; Zimborás, Z.; Cavalcanti, D. Semidefinite programming for self-consistent quantum measurement tomography. arXiv 2022, arXiv:2212.10262.
Zhang, A.; Xie, J.; Xu, H.; Zheng, K.; Zhang, H.; Poon, Y.T.; Vedral, V.; Zhang, L. Experimental self-characterization of quantum measurements. Phys. Rev. Lett. 2020, 124, 040402.
Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183-202.
Natarajan, C.M.; Zhang, L.; Coldenstrodt-Ronge, H.; Donati, G.; Dorenbos, S.N.; Zwiller, V.; Walmsley, I.A.; Hadfield, R.H. Quantum detector tomography of a time-multiplexed superconducting nanowire single-photon detector at telecom wavelengths. Opt. Express 2013, 21, 893-902.
D'Ariano, G.M.; Maccone, L.; Presti, P.L. Quantum calibration of measurement instrumentation. Phys. Rev. Lett. 2004, 93, 250407.
Coldenstrodt-Ronge, H.B.; Lundeen, J.S.; Pregnell, K.L.; Feito, A.; Smith, B.J.; Mauerer, W.; Silberhorn, C.; Eisert, J.; Plenio, M.B.; Walmsley, I.A. A proposed testbed for detector tomography. J. Mod. Opt. 2009, 56, 432-441.
Fano, U. Description of states in quantum mechanics by density matrix and operator techniques. Rev. Mod. Phys. 1957, 29, 74.
Fiurášek, J. Maximum-likelihood estimation of quantum measurement. Phys. Rev. A 2001, 64, 024102.
Deterministic realization of collective measurements via photonic quantum walks. Z Hou, J F Tang, J Shang, H Zhu, J Li, Y Yuan, K D Wu, G Y Xiang, C F Li, G C Guo, 10.1038/s41467-018-03849-xNat. Commun. 9Hou, Z.; Tang, J.F.; Shang, J.; Zhu, H.; Li, J.; Yuan, Y.; Wu, K.D.; Xiang, G.Y.; Li, C.F.; Guo, G.C. Deterministic realization of collective measurements via photonic quantum walks. Nat. Commun. 2018, 9, 1-7. [CrossRef]
The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods. Disclaimer/Publisher's Note, instructions or products referred to in the contentDisclaimer/Publisher's Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Determining the Proximity Effect Induced Magnetic Moment in Graphene by Polarized Neutron Reflectivity and X-ray Magnetic Circular Dichroism
21 Mar 2022
R O M Aboljadayel
Physics Department
Cavendish Laboratory
University of Cambridge
CB3 0HECambridgeUnited Kingdom
C J Kinane
ISIS Facility
STFC Rutherford Appleton Laboratory
Harwell Science and Innovation Campus
OX11 0QXOxonUnited Kingdom
C A F Vaz
Swiss Light Source
Paul Scherrer Institut
5232Villigen PSISwitzerland
D M Love
Physics Department
Cavendish Laboratory
University of Cambridge
CB3 0HECambridgeUnited Kingdom
R S Weatherup
Department of Engineering
University of Cambridge
CB3 0FACambridgeUnited Kingdom
P Braeuninger-Weimer
Department of Engineering
University of Cambridge
CB3 0FACambridgeUnited Kingdom
M.-B Martin
Department of Engineering
University of Cambridge
CB3 0FACambridgeUnited Kingdom
A Ionescu
Physics Department
Cavendish Laboratory
University of Cambridge
CB3 0HECambridgeUnited Kingdom
A J Caruana
ISIS Facility
STFC Rutherford Appleton Laboratory
Harwell Science and Innovation Campus
OX11 0QXOxonUnited Kingdom
T R Charlton
ISIS Facility
STFC Rutherford Appleton Laboratory
Harwell Science and Innovation Campus
OX11 0QXOxonUnited Kingdom
J Llandro
Physics Department
Cavendish Laboratory
University of Cambridge
CB3 0HECambridgeUnited Kingdom
P M S Monteiro
Physics Department
Cavendish Laboratory
University of Cambridge
CB3 0HECambridgeUnited Kingdom
C H W Barnes
Physics Department
Cavendish Laboratory
University of Cambridge
CB3 0HECambridgeUnited Kingdom
S Hofmann
Department of Engineering
University of Cambridge
CB3 0FACambridgeUnited Kingdom
S Langridge
ISIS Facility
STFC Rutherford Appleton Laboratory
Harwell Science and Innovation Campus
OX11 0QXOxonUnited Kingdom
(Dated: March 29, 2022)
Abstract

We report the magnitude of the induced magnetic moment in CVD-grown epitaxial and rotated-domain graphene in proximity with a ferromagnetic Ni film, using polarized neutron reflectivity (PNR) and X-ray magnetic circular dichroism (XMCD). The XMCD spectra at the C K -edge confirm the presence of a magnetic signal in the graphene layer, and the sum rules give a magnetic moment of up to ∼ 0.47 µ B /C atom induced in the graphene layer. For a more precise estimation, we conducted PNR measurements. The PNR results indicate an induced magnetic moment of ∼ 0.53 µ B /C atom at 10 K for rotated graphene and ∼ 0.38 µ B /C atom at 10 K for epitaxial graphene. Additional PNR measurements on graphene grown on a non-magnetic Ni 9 Mo 1 substrate, where no magnetic moment in graphene is measured, suggest that the origin of the induced magnetic moment is the opening of the graphene's Dirac cone as a result of the strong C p z -3d hybridization.

I. INTRODUCTION

Graphene is a promising material for many technological and future spintronic device applications, such as spin-filters, 1-7 spin-valves and spin field-effect transistors, due to its excellent transport properties. 8,9 Graphene can have an intrinsic charge carrier mobility of more than 200,000 cm 2 V −1 s −1 at room temperature (RT), 10 and a long spin relaxation time as a result of its long electron mean free path and its negligible spin-orbit and hyperfine couplings. 2,11 Manipulating spins directly in the graphene layer has attracted great attention, as it opens new ways of using this 2D material in spintronics applications. 2,12 This has been realized via various approaches, such as the proximity-induced effect, 2,13-16 chemical doping of the graphene surface 11 or a chemically-induced sublattice. 17 Here, we report the feasibility of the first method, utilizing the exchange coupling of local moments between graphene and a ferromagnetic (FM) material to induce a magnetic moment in graphene.

Graphene is a zero-gap semiconductor because the π and π * bands meet at the Fermi energy (E F ) at the corners of graphene's Brillouin zone (K points), i.e. at degenerate points forming the Dirac point (E D ), where the electronic structure of these bands can be described
using the tight-binding model. 18,19 However, the adsorption of graphene on a strongly interacting metal distorts its intrinsic band structure around E D . This is a result of the overlap of the graphene's valence band with that of the metal substrate, due to the breaking of degeneracy around E D in a partially-filled d-metal, as discussed in the universal model proposed by Voloshina and Dedkov. 20 Their model was supported by density functional theory calculations and confirmed experimentally using angle-resolved photoemission spectroscopy. 18,[20][21][22][23][24][25] Furthermore, a small magnetic signal was detected in the X-ray magnetic circular dichroism (XMCD) spectra of a graphene layer in proximity with a FM transition metal (TM) film, suggesting that a magnetic moment is induced in the graphene. 2,13,26,27 However, no direct quantitative analysis of the total induced magnetic moment has been reported.
It is widely accepted that graphene's C atoms assemble in what is known as the top-fcc configuration on close-packed (111) surfaces, where the C atoms sit on top of the atoms in the first and third layers of the TM substrate. 20,22,23 The strength of the graphene-TM interaction is influenced by the lattice mismatch, the graphene-TM bond length and the position of the d orbitals of the TM relative to E F . Therefore, a Ni(111) substrate was used as the FM, since it has a small lattice mismatch of -1.2%, a bond length of 2.03 Å and d orbitals positioned ∼ 1.1 eV below E F (i.e. forming π−d hybrid states around the K points). [20][21][22]28 Epitaxial and rotated-domain graphene structures were investigated, since rotated graphene is expected to interact more weakly with the TM film underneath. This is a result of the loss of the epitaxial relationship and a lower charge transfer from the TM, owing to the missing direct Ni top −C interaction and the smaller region covered by an extended graphene layer as a result of the 3° rotation between the graphene and the Ni. 22,[29][30][31] Therefore, a smaller magnetic moment is expected to be induced in rotated-domain graphene.
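As a quick check of the quoted mismatch, taking the graphene in-plane lattice parameter as the standard bulk value of 2.46 Å and the Ni(111) surface nearest-neighbour spacing as $a_{\rm Ni}/\sqrt{2} = 3.524/\sqrt{2} \approx 2.49$ Å gives

$$\frac{2.46 - 2.49}{2.49} \approx -1.2\%,$$

consistent with the value stated above (these lattice parameters are textbook bulk values, not quantities measured in this work).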
We have studied the structural, magnetic and electronic properties of epitaxial- and rotated-domain graphene grown on Ni films, confirmed the presence of a magnetic moment in graphene by element-specific XMCD, and measured the induced magnetic moment at 10 K and 300 K for rotated and epitaxial graphene using polarized neutron reflectivity (PNR).
To our knowledge, this is the first reported approach combining PNR and the XMCD sum rules to estimate the total induced magnetic moment in graphene. The difficulty stems from the thinness of graphene, which is close to the resolution of the PNR technique, and from processing the XMCD C K -edge signal in the presence of carbon contamination in the beamline optics. We attribute the induced magnetic moment in graphene to the hybridization of the C p z orbital with the 3d bands of the TM, which is supported by additional PNR measurements on graphene grown on a non-magnetic Ni 9 Mo 1 (111) substrate, where no magnetic moment is detected in the graphene layer.
II. RESULTS AND DISCUSSION
A. Raman Spectroscopy Measurements
In order to evaluate the quality, number of graphene layers, doping and defect density of the grown graphene samples, we used Raman spectroscopy, a non-destructive technique known to be particularly sensitive to the structural and electronic properties of graphene. 32,33 For these measurements, the graphene layer was first transferred by a chemical etching process from the metallic film onto a Si substrate with a thermally oxidized SiO 2 layer, similar to that reported in Ref. [34]. This was done to avoid loss of the resonance conditions due to the strong chemical interaction between the graphene π orbital and the d states of Ni and Ni 9 Mo 1 , which also alters the graphene's p z orbitals (see the Experimental Section for further details). Figure 1 shows the Raman scans taken at three different regions of the graphene after being transferred from the Ni films (see Section IV B of the Experimental Section). All the spectra possess the D, G, D', D+D" and 2D peaks. 35,36 Although all the 2D peaks shown in Figure 1 were fitted with single Lorentzians, they have relatively broad full-widths at half-maximum (FWHM). The average FWHMs of the 2D peak of epitaxial and rotated-domain graphene transferred from the Ni film are 40.8 cm −1 and 46.2 cm −1 , respectively. Furthermore, the spectra of both samples show a high I D /I G ratio (an average of 1.49 for epitaxial graphene and 2.35 for rotated graphene). The variation in the spectra of each sample, the presence of second-order and defect-induced peaks, the large FWHM of the 2D peak and the high I D /I G ratio could be a result of the chemical etching and transfer process (see the Sample Preparation section) or of chemical doping from the HNO 3 used to etch the metallic films. Therefore, it is difficult to estimate the number of graphene layers based on the positions of the G and 2D peaks and the I 2D /I G ratio. However, the SEM scan and LEED diffraction pattern in Figure 7 (a) and (c), respectively, show a single epitaxial graphene layer grown on Ni(111). The broader FWHM of the 2D peak and the higher average I 2D /I G ratio in the rotated graphene compared to the epitaxial structure could be attributed to the formation of more defective or turbostratic (multilayer graphene with relative rotation between the layers) graphene as a result of the occasional overlap of the graphene domains. 37 The Raman spectra for the graphene/Ni 9 Mo 1 sample, as well as the full list of the peak positions and the average 2D FWHM of all the measured samples, are provided in the supplementary material (SI).
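To make the peak analysis concrete, the following is a minimal sketch of the single-Lorentzian fit used to extract the position and FWHM of the 2D peak; the spectrum generated here is synthetic, and all peak parameters are illustrative placeholders rather than our measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, x0, fwhm, amp, offset):
    """Single Lorentzian line shape, parameterized directly by its FWHM."""
    gamma = fwhm / 2.0
    return amp * gamma**2 / ((x - x0) ** 2 + gamma**2) + offset

# Synthetic 2D-peak region standing in for a measured Raman spectrum.
shift = np.linspace(2550.0, 2850.0, 301)                  # Raman shift (cm^-1)
rng = np.random.default_rng(0)
counts = lorentzian(shift, 2700.0, 46.0, 1.0e3, 50.0) + rng.normal(0.0, 10.0, shift.size)

p0 = [2700.0, 40.0, counts.max() - counts.min(), counts.min()]  # initial guesses
popt, pcov = curve_fit(lorentzian, shift, counts, p0=p0)
x0, fwhm, amp, offset = popt
print(f"2D peak at {x0:.1f} cm^-1, FWHM = {abs(fwhm):.1f} cm^-1")
```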
B. X-ray Magnetic Circular Dichroism (XMCD)
The X-ray absorption spectra (µ XAS ) and the XMCD response (µ XMCD ) at the Ni L 2,3 -edge in the rotated graphene/Ni sample are shown in Figure 2. The spectra show no sign of oxidation, proving that graphene acts as a good passivation layer against oxidation. 38,39 The region between the L 3 and L 2 edges with a constant negative intensity is known as the diffuse magnetism region, µ diff , and has been observed and reported for Co, Ni and Fe XMCD spectra. 40,41 µ diff is expected to arise as a result of the opposite spin directions of the 4s and 3d electrons and of interstitial and sp-projected magnetic moments, and it couples antiferromagnetically to the sample's total magnetic moment in 3d elements, except for Mn. 40 Although µ diff has been reported to contribute about −7% to the total magnetic moment in Ni, 40,42 the sum rules do not account for µ diff , so the integration range over the L 3 edge was stopped just before µ diff (858.7 eV) for the calculation of the orbital magnetic moment, m o , and the spin magnetic moment, m s . On the other hand, the main L 3 peak and the shoulder, µ shoulder , are due to multiple initial-state configurations, 3d 8 and 3d 9 , respectively, and were therefore accounted for in the sum-rule calculations. 40 Although the non-resonant contribution was subtracted from the µ + and µ − spectra, a higher background is measured at the post-edge (E > 880 eV). This tail has been excluded from the sum rules, as it is considered part of the non-resonant contribution. The calculated m o and m s are 0.084 µ B /Ni atom and 0.625 µ B /Ni atom, respectively, and thus m total is 0.709 µ B /Ni atom (see the SI for the expressions for m o , m s and m total at the L-edge). Considering the 20% accuracy of the XMCD technique in estimating the magnetic moments of materials, the results obtained for Ni are consistent with the values reported in the literature. 43,44 The C K -edge spectra for the rotated graphene/Ni sample are shown in Figure 2 (c) and (d).
For the C K -edge, larger errors are expected in the XMCD estimation, owing to the difficulty of applying the sum rules to the C K -edge spectra in comparison with the Ni L 2,3 -edge. For instance, various studies have been reported for Ni, 13,23,42,43,[45][46][47] which can be used as references for our measurements, but the application of the sum rules has not been reported for graphene before. Also, the number of holes, n h , has not previously been measured for C. Moreover, the gyromagnetic factor (g) of graphene was found to differ depending on the underlying substrate, 48-50 and it has not been reported for graphene on Ni. It is also noteworthy to mention the difficulty associated with measuring the C K -edge due to C contamination of the optical elements, which appears as a significant reduction in the incoming intensity at this particular energy.
Nonetheless, we can obtain an upper limit to the orbital moment of the graphene layer by integrating the modulus of the dichroic signal, |µ XMCD |, which is shown in Figure 2 (c), red curve. Although the magnetic dichroism response is expected mainly at the peak corresponding to the 1s→ π * transition, as a result of the C p z −Ni 3d hybridization, 13 a small magnetic signal is observed at the 1s→ σ * transition peak as well; a similar behaviour was reported for graphene/2 ML Co/Ir(111). 24 The calculated upper-bound m o for graphene is 0.062 µ B /C atom using n h = 4, which corresponds to an m s of 0.412 µ B /C atom using g = 2.3, the value reported for graphene grown on SiC. 48 Therefore, m total of the rotated-domain graphene grown on Ni is ∼ 0.474 µ B /C atom (see the SI for the expressions for m o , m s and m total at the K -edge).
Despite the large uncertainties expected for the estimated graphene moments, the XMCD results demonstrate the presence of magnetic polarization in graphene. For more precise, quantified and independent estimates, we turn to PNR. Panel (e) of Figures 3 and 4 shows the magnetic scattering length density (mSLD) for each temperature.
The fitting procedure is fully described in the SI, which contains the various models used to fit the rotated graphene/Ni sample. The 10 K and 300 K data shown in Figures 3 and 4 were fitted simultaneously with a shared nSLD and independent mSLD. The bulk of the fitting model selection was done on the rotated graphene/Ni sample. The study presented in the SI shows the importance of prior knowledge of the sample properties in obtaining the best PNR fit to estimate the induced magnetic moment in graphene. We used the information obtained from the structural characterizations (SEM and Raman spectroscopy) to set a lower bound on the graphene layer thickness. The fit tends to shrink the graphene thickness to less than one monolayer if unrestrained, which does not agree with the SEM and Raman scans (refer to the SI). This could be due to the limited Q (wavevector transfer) range measured in the time available for the experiment, as the fit is found to rely strongly on the high Q statistics. It should be noted that in model 9, shown in Figure 3, the magnetic moment of the graphene layer was allowed to fit to zero, but the analysis always required a non-zero value for the magnetic moment to obtain a good fit with low uncertainty, which is consistent with the XMCD results. The unrestrained graphene-thickness models are shown in the SI. The results of the fits to the PNR data are summarised in Table I. Interestingly, it is the rotated-domain graphene rather than the epitaxially grown graphene that has the larger induced magnetic moment, counter to the initial hypothesis that the rotated graphene would couple more weakly to the Ni film. The Ni films have the full Ni moment at 10 K, which is only slightly reduced at 300 K. This reduction is associated with a magnetic gradient across the Ni(111) film. In the rotated-domain sample this gradient, as shown in Figure 3 (d), starts with a value close to the bulk nSLD of Ni (9.414 × 10 −6 Å −2 ) and reduces towards the surface, and it is required in order to fit the lower Q features and allow the higher Q features to converge, paramount to getting certainty on the thin graphene layers. The mSLD consequently also has a gradient that oppositely mirrors the nSLD at 300 K, reaching the full moment of Ni (0.6 µ B /Ni atom, equivalent to an mSLD of 1.4514 × 10 −6 Å −2 ) near the surface and being slightly reduced near the substrate. The Ni(111) used for the epitaxial graphene displays a much weaker gradient, being almost uniform across the Ni thickness, but it shows the same general trends. We attribute the origin of the difference in the nSLD profiles to the fact that the samples were deposited at different times, following the growth recipe described in the Sample Preparation section.
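As a cross-check on the numbers quoted above, the conversion between an atomic moment and an mSLD can be sketched as follows; it assumes the standard neutron magnetic scattering length of ≈ 2.695 × 10 −5 Å per Bohr magneton and the bulk fcc Ni lattice constant, so it reproduces the quoted equivalence only to within a few percent.

```python
# Conversion between an atomic magnetic moment and a magnetic SLD (mSLD):
# rho_M = b_m * mu * n, with b_m ~ 2.695e-5 A per Bohr magneton the magnetic
# scattering length and n the atomic number density in atoms/A^3.
B_MAG = 2.695e-5  # A per Bohr magneton

def moment_to_msld(mu_per_atom, number_density):
    """mu_per_atom in mu_B/atom, number_density in atoms/A^3 -> mSLD in A^-2."""
    return B_MAG * mu_per_atom * number_density

n_ni = 4.0 / 3.524**3  # fcc Ni: 4 atoms per (3.524 A)^3 unit cell ~ 0.0914 /A^3
print(moment_to_msld(0.6, n_ni))  # ~1.48e-6 A^-2, within a few percent of the
                                  # 1.4514e-6 A^-2 quoted for 0.6 mu_B/Ni atom
```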
The Ni 9 Mo 1 sample was used to clarify whether the induced magnetic moments in graphene are due to the C p z -3d hybridization, which results in the opening of the Dirac cone, as postulated in Refs. [2,18,20,23,28], or to electron transfer (spin doping) and surface reconstruction, which distorts the d band of the TM, as proposed for fullerene/non-magnetic TM systems in Refs. [53][54][55]. For this purpose, a Ni 9 Mo 1 film was used with the aim of preserving the fcc crystal structure of Ni while suppressing its magnetization, as suggested in Ref. [56]. Since the Ni is doped by only 10%, the lattice mismatch and bond length to graphene are expected to be similar to those of the graphene/Ni(111) sample, but the d-orbital position is considerably downshifted with respect to E F . The growth procedure and the structural properties of the sample are discussed in detail in the Experimental Section.
The results of the PNR measurements of the graphene/Ni 9 Mo 1 sample are shown in Figure 5. Again there is a gradient across the Ni 9 Mo 1 film. At 10 K, a small but detectable spin splitting and a minute variation in the SA are observed. Surprisingly, a higher magnetization is detected in the graphene (0.12 (0.07, 0.17) µ B /C atom) than in the Ni 9 Mo 1 film (0.064 (0.06, 0.07) µ B /C atom), but the 95% confidence intervals indicate that, given the Q range and counting statistics in the data, we are not sensitive enough to say this for sure. All we can ascertain is that at 10 K there is a small moment in the Ni 9 Mo 1 and a non-zero moment in the graphene. This small residual moment could arise from clusters of unalloyed Ni throughout the layer that become ferromagnetic at low temperature and then polarize the graphene, as in the Ni(111) samples. At 300 K, no magnetic moment is detected in either the Ni 9 Mo 1 or the graphene within the 95% confidence intervals. The results therefore support the hypothesis of the universal model, whereby the measured induced magnetic moment in graphene is due to the opening of E D rather than to a distortion of the d band, since no magnetic moment is detected in either the Ni 9 Mo 1 or the graphene layer.
The PNR fits have shown that at 10 K a magnetic moment of ∼ 0.53 µ B /C atom (∼ 0.38 µ B /C atom) was induced in the rotated-domain (epitaxial) graphene grown on Ni films.
These results indicate larger moments than previously reported. 13,26 In Ref. [26], Mertins et al. estimated the magnetic moment of graphene to be 0.14 ± 0.3 µ B /C atom when grown on an hcp Co(0001) film, a value obtained by comparing the XMCD reflectivity signal of graphene to that of the Co film underneath. 26 A direct comparison between graphene-based heterostructures and other C-allotrope/TM systems is difficult, because, of the carbon allotropes, the Dirac cone is a characteristic feature only of graphene and CNTs. Therefore, one cannot exclude that a mechanism other than the breaking of degeneracy around E D may be responsible for the magnetic moment detected in the C layer of a C/Fe multilayer system. On the other hand, for the CNTs/Co heterostructure, although Dirac cones exist in CNTs, a direct quantitative analysis of the induced magnetic moment was not possible from the MFM images reported in Ref. [59]. In contrast, PNR provides a direct estimation of the induced magnetic moment in graphene. The SEM images shown in Figure 7 illustrate the structure of epitaxial and rotated graphene on Ni, with a surface coverage of ∼ 70%−90%.

[...] from the incoming X-ray beam. An electromagnet was fixed at 40° to the incoming X-ray beam, and a magnetic field of 0.11 T was applied for 30 seconds in-plane to the surface of the samples to align the film magnetization along the beam direction. It was then reduced to 0.085 T during the X-ray absorption spectroscopy measurements. The intensity of the incident X-ray beam was measured with a clean, carbon-free gold mesh placed just before the sample position. This is particularly important for normalizing the signal at the C K -edge, due to the presence of carbon on the surface of the X-ray optical components.
D. Polarized Neutron Reflectivity
The PNR measurements were conducted at 10 K and 300 K, under an in-plane magnetic field of 0.5 T, using the Polref instrument at the ISIS spallation neutron source (UK). The fitting of the data was done using the Refl1D 61 software package, with preliminary fits done in GenX. 62 Although Ni is ferromagnetic at RT, the 10 K measurements are expected to provide a better estimation of the induced magnetic moment due to the lower thermal excitation of the electron spins at low temperature. Both the 10 K and 300 K data sets were fitted simultaneously to provide further constraint on the fits. This is analogous to the isotopic contrast matching 63,64 used in soft-matter neutron reflectivity experiments. This is very important in this case, owing to the attempt to measure a thin layer of graphene within a limited total Q range for PNR. The PNR is sensitive to only part of the broad fringe from the graphene layer, which acts as an envelope function on the higher-frequency fringes from the thicker Ni layer underneath (the modelling methodology is discussed in detail in the SI).

S1. RAMAN SPECTROSCOPY

Graphene has two main characteristic peaks in the Raman spectra: a first-order Raman scattering (RS) G peak at ∼ 1582 cm −1 , which is a graphite-like line that can be observed in different carbon-based materials; 1-3 and a second-order (double-resonance) RS 2D peak at ∼ 2700 cm −1 , which is usually used as an indication of a perfect crystalline honeycomb-like structure. 1,3 Graphene also possesses other second-order RS peaks, such as D+D" located at ∼ 2450 cm −1 and 2D' at ∼ 3200 cm −1 , and disorder-induced peaks, such as D at ∼ 1350 cm −1 and D' at ∼ 1600 cm −1 . 3,4 Single-layer graphene is known to have three characteristic features: an I 2D /I G ratio > 2 and a 2D peak fitted with a single Lorentzian with a full-width at half-maximum (FWHM) of less than 40 cm −1 . 5 Furthermore, I D /I G is usually used as an indicator of the defects present in graphene, increasing with the degree of disorder in the graphene structure. 1,5 Therefore, good-quality graphene possesses a small I D /I G ratio. Also, the D peak was reported to change in shape, position and width with increasing number of graphene layers, while the G peak is downshifted with increasing number of graphene layers but upshifted with increasing doping level. 3 Moreover, the width of the 2D peak increases with the number of graphene layers. 4,6-8 Therefore, we assess the quality of our transferred graphene based on these features. Figure S1 shows the RT Raman spectra for graphene transferred from the Ni 9 Mo 1 film. The average peak positions and the average 2D FWHM of all the samples are listed in Table I.

a) Electronic mail: [email protected] b) Electronic mail: [email protected]
The Raman spectra of the transferred graphene from Ni 9 Mo 1 (Figure S1 (a)) show a high I D' . Therefore, an argument similar to that used for the graphene transferred from Ni(111) can be applied here to explain the spectra. However, the wide spatial variation across the surface of the sample, which could be attributed to the formation of occasional strong Mo−C bonding between the graphene and the Ni 9 Mo 1 film, makes it difficult to assess the quality of the graphene grown on Ni 9 Mo 1 based on the 2D FWHM.
S2. X-RAY MAGNETIC CIRCULAR DICHROISM (XMCD) FORMULAE
The X-ray absorption amplitude (µ XAS ) and the X-ray magnetic circular dichroism (µ XMCD ) are expressed as:
\mu_{\mathrm{XAS}} = \frac{1}{2}\left(\mu^{+} + \mu^{-}\right), \qquad (1)
and
\mu_{\mathrm{XMCD}} = \mu^{+} - \mu^{-}, \qquad (2)
where µ + and µ − are the absorption coefficients for right and left circularly polarized light, respectively, normalized to a common value. 9 According to the sum rules, the XAS and XMCD spectra can be used to determine the 3d orbital angular momentum < L z > and the spin angular momentum < S z > at the L 2,3 -edge using the following expressions: 10,11
\langle L_z \rangle = -2 n_h \cdot \frac{\int_{L_3+L_2} (\mu^{+} - \mu^{-})\, dE}{\int_{L_3+L_2} (\mu^{+} + \mu^{-} + \mu_{\mathrm{XAS}})\, dE}, \qquad (3)

\langle S_z \rangle = -\frac{3}{2} n_h \cdot \frac{\int_{L_3} (\mu^{+} - \mu^{-})\, dE - 2\int_{L_2} (\mu^{+} - \mu^{-})\, dE}{\int_{L_3+L_2} (\mu^{+} + \mu^{-} + \mu_{\mathrm{XAS}})\, dE} \cdot \left(1 + \frac{7 \langle T_z \rangle}{2 \langle S_z \rangle}\right)^{-1}. \qquad (4)
Here, n h = (10 − n d ) is the number of holes in the 3d states, where n d is the electron occupation number of the 3d states. L 3 + L 2 denotes the integration range over the L 3 and L 2 edges, and < T z >, the expectation value of the magnetic dipole operator, is known to be very small in TMs and was hence ignored. 12 In Eq. (2), µ XMCD is corrected for the degree of light polarization, P, and the light's incident angle, θ. It was therefore multiplied by a factor 1/(P cos θ), 13 where θ is measured with respect to the sample's surface, while µ XAS remains unchanged. 14 The orbital magnetic moment, m o , is equal to µ B < L z >, whereas the spin magnetic moment, m s , is given by m s = 2µ B < S z >. 11 The total magnetic moment is m total = m o + m s . 11,15,16 In contrast, since XMCD at the K-edge measures transitions from a non-spin-split s orbital to a p orbital, only m o can be obtained: 17,18
m_o = -\frac{1}{3} \cdot \frac{n_h}{P \cos\theta} \cdot \frac{\int_K \mu_{\mathrm{XMCD}}\, dE}{\int_K \mu_{\mathrm{XAS}}\, dE}, \qquad (5)
For the K-edge, n h becomes equal to 6 − n p , where n p is the electron occupation number in the 2p bands. 18 Since m s = 2m o /(g − 2), where g is the gyromagnetic factor, 19 m total can be estimated for the K-edge if g is known.
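To make Eqs. (1)-(5) concrete, a minimal numerical sketch is given below. The spectra, integration masks, n h , P, θ and g are user-supplied inputs (none of the arrays here are our measured data), and the closing lines simply reproduce the K-edge arithmetic quoted in the main text.

```python
import numpy as np

def l_edge_moments(E, mu_plus, mu_minus, l3, l2, n_h):
    """L2,3-edge sum rules, Eqs. (1)-(4), neglecting <Tz> as in the text.
    l3 and l2 are boolean masks selecting the two edge integration ranges."""
    mu_xas = 0.5 * (mu_plus + mu_minus)            # Eq. (1)
    mu_xmcd = mu_plus - mu_minus                   # Eq. (2)
    both = l3 | l2
    denom = np.trapz((mu_plus + mu_minus + mu_xas)[both], E[both])
    Lz = -2.0 * n_h * np.trapz(mu_xmcd[both], E[both]) / denom
    Sz = -1.5 * n_h * (np.trapz(mu_xmcd[l3], E[l3])
                       - 2.0 * np.trapz(mu_xmcd[l2], E[l2])) / denom
    m_o, m_s = Lz, 2.0 * Sz                        # in units of mu_B
    return m_o, m_s, m_o + m_s

def k_edge_moments(E, mu_xmcd, mu_xas, n_h, P, theta_deg, g):
    """K-edge sum rule, Eq. (5), plus m_s = 2*m_o/(g - 2)."""
    ratio = np.trapz(mu_xmcd, E) / np.trapz(mu_xas, E)
    m_o = -(n_h / 3.0) / (P * np.cos(np.radians(theta_deg))) * ratio
    m_s = 2.0 * m_o / (g - 2.0)
    return m_o, m_s, m_o + m_s

# Arithmetic check against the main text: m_o = 0.062 mu_B and g = 2.3 give
m_o = 0.062
print(2.0 * m_o / (2.3 - 2.0))          # m_s ~ 0.413 mu_B
print(m_o + 2.0 * m_o / (2.3 - 2.0))    # m_total ~ 0.475 mu_B
```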
S3. POLARIZED NEUTRON REFLECTOMETRY ANALYSIS
The primary analysis of the PNR data was performed using the Refl1D software package from the National Center for Neutron Research (NCNR) at the National Institute for Standards and Technology (NIST). 20 Refl1D uses the Bayesian analysis fitting package Bumps 21 both for obtaining the best fit and for performing an uncertainty analysis on the parameters used to model the data. Refl1D is based on the Abeles optical matrix method, 22 with interfaces approximated using the method of Névot and Croce. 23 Refl1D allows multiple data sets to be fitted simultaneously, which is advantageous in this case due to the difficulty of detecting such a thin layer of graphene on top of a thick layer of Ni metal. Hence the data taken at the two temperatures (10 K and 300 K) were fitted simultaneously for each sample, to reduce the uncertainty in the fitted parameters by exploiting the magnetic contrast variation. This is akin to the isotopic contrast variation method used as standard in non-polarized neutron reflectivity soft-matter experiments. 24
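For orientation, a minimal sketch of the underlying optics follows. It uses the Parratt recursion, which is numerically equivalent to the Abeles matrix formalism, with Névot-Croce roughness factors; the layer values are illustrative placeholders rather than our fitted parameters, and for polarized neutrons the two non-spin-flip channels are obtained by replacing each nuclear SLD with ρ ± ρ M .

```python
import numpy as np

def reflectivity(q, sld, thickness, roughness):
    """Specular reflectivity of a layer stack (ambient first, substrate last).
    q in 1/A; sld in 1/A^2; thickness in A (ambient/substrate entries unused);
    roughness[j] is the rms roughness of the interface between media j and j+1."""
    q = np.asarray(q, dtype=complex)
    # Out-of-plane wavevector in each medium, referenced to the ambient.
    kz = [np.sqrt((q / 2.0) ** 2 - 4.0 * np.pi * (rho - sld[0])) for rho in sld]
    r = np.zeros_like(q)
    # Parratt recursion from the substrate interface up to the surface.
    for j in range(len(sld) - 2, -1, -1):
        k0, k1 = kz[j], kz[j + 1]
        # Fresnel coefficient damped by the Nevot-Croce roughness factor.
        f = (k0 - k1) / (k0 + k1) * np.exp(-2.0 * k0 * k1 * roughness[j] ** 2)
        if j == len(sld) - 2:
            r = f
        else:
            phase = np.exp(2j * kz[j + 1] * thickness[j + 1])
            r = (f + r * phase) / (1.0 + f * r * phase)
    return np.abs(r) ** 2

# Illustrative stack: vacuum / graphene / Ni / Al2O3 (nSLDs in 1/A^2).
q = np.linspace(0.01, 0.3, 600)
sld = np.array([0.0, 7.0e-6, 9.414e-6, 5.72e-6])
thickness = np.array([0.0, 6.7, 800.0, 0.0])
roughness = np.array([3.0, 5.0, 3.0])
R = reflectivity(q, sld, thickness, roughness)
```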
A. Modeling methodology
The analysis was started on the rotated-domain graphene/Ni sample using the simplest model possible as a starting point. Once convergence was reached, as defined in the Refl1D documentation, 25 an assessment was made, taking into account the improvement in χ 2 and the uncertainty in the parameters, as to whether to increase the complexity of the model by adding more parameters. Prior knowledge of the sample growth conditions and sample characterisation was also employed to aid in this process. We echo what is said in the Refl1D documentation 25 that, since this is a parametric modeling technique, we cannot say that any one model is truly correct, only that, while it fits the observed data well, there may well be other possible solutions. We couple this knowledge with the following secondary information:
• A well-defined question that the modeling is intended to answer. This provides a measure of how well the model is working but, most importantly, a point at which to stop fitting the data. It can also be instrumental in determining whether the data quality is insufficient to answer the posed question.
• Knowledge of the sample growth conditions to provide initial priors (maximum and minimum bounds on those initial parameters) and a nominal starting structure. In particular, the substrate material parameters should be well known.
• Magnetic contrast variation, which is akin to solving simultaneous equations.
• Secondary characterization techniques to provide a cross-reference for a particular parameter and lock it down, e.g. XMCD for the total moment. This can also be crucial in dealing with issues ultimately arising from the inverse phase problem, such as multiple solutions with similar χ 2 , i.e. multi-modal fit solutions.
Such information provides confidence in the validity of the model and avoids over-parameterization.
In the present case, the question is: can the model confirm that a graphene layer is present, is it magnetic, and can the magnitude of its magnetic moment be measured with any certainty?
Model 1 consists of a single layer of Ni on an Al 2 O 3 substrate with the Ni magnetism commensurate with the boundaries of the Ni layer. There are no magnetic dead-layers included and no graphene layer. The aim is to provide a null result to see if the graphene is required to fit the dataset. The results are shown in Figure S2. Panel (a) displays the Fresnel reflectivity (R Fresnel ) given by:
R_{\mathrm{Fresnel}} = \frac{R}{R_{\mathrm{Substrate}}}, \qquad (6)
which is the total reflectivity (R) divided by the calculated substrate reflectivity (R Substrate ). This visually aids in finding discrepancies between the fit and the data. In this case, it is obvious that the fit does not match the depth of the fringes at low Q, as ringed in blue.
Panels (b) and (c) show the spin asymmetries (SA). It is clear that the fit does not work well magnetically at high Q for either 10 K or 300 K. This is in the same region where the R Fresnel curves up at high Q, which is indicative of a thin, magnetic layer missing from the model, such as a thin graphene layer. The oscillations also do not match at low Q. Panel (d) shows the nuclear scattering length density (nSLD) profile, with the inset showing the 68% and 95% Bayesian confidence intervals from the Bumps fitting package. This gives some indication of how certain the model is with regard to the fit of the data, but we stress that this only indicates how well the current model fits the data; it does not indicate whether the model is the correct one.
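A minimal sketch of the two quantities plotted in these figures, assuming the measured non-spin-flip reflectivities R + and R − are available as arrays:

```python
import numpy as np

def substrate_fresnel(q, rho_substrate):
    """Reflectivity of a bare, sharp substrate, used as R_Substrate in Eq. (6)."""
    q = np.asarray(q, dtype=complex)
    kz0 = q / 2.0
    kz1 = np.sqrt(kz0**2 - 4.0 * np.pi * rho_substrate)
    return np.abs((kz0 - kz1) / (kz0 + kz1)) ** 2

def fresnel_normalize(R, q, rho_substrate):
    """Eq. (6): divide out the sharp-substrate reflectivity to flatten the
    roughly q^-4 decay so that fringe mismatches are visible across the Q range."""
    return R / substrate_fresnel(q, rho_substrate)

def spin_asymmetry(R_up, R_down):
    """SA = (R+ - R-)/(R+ + R-); identically zero for a non-magnetic sample."""
    return (R_up - R_down) / (R_up + R_down)

# Example usage with an Al2O3 substrate nSLD of ~5.72e-6 1/A^2:
# R_fres = fresnel_normalize(R_measured, q, 5.72e-6)
# SA = spin_asymmetry(R_plus, R_minus)
```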
It follows from model 1 that a thin layer or layers is required to fit the high Q data. This is not necessarily a graphene layer; it could be a thin magnetic dead-layer. Therefore we constructed model 2, shown in Figure S3. This model consists of a single layer of Ni on an Al 2 O 3 substrate where the Ni magnetism is now incommensurate with the boundaries of the Ni layer, i.e. we allow Ni dead-layers with a moment that can vary from zero to the full Ni moment and with thicknesses and roughnesses independent of the nSLD profile. Again, this is intended as a null case. This model slightly improves the χ 2 but still fails to fit the low and high Q regions that model 1 did not reproduce.
Model 3, shown in Figure S4, consists of a single layer of Ni on an Al 2 O 3 substrate with the Ni magnetism commensurate with the boundaries of the Ni layer and a non-magnetic graphene layer on the top surface. This produces a fit with a χ 2 similar to model 2 that still fails to resolve the issues at high and low Q seen in the previous models.
The next step is to allow the graphene layer to be magnetic, as shown in Figure S5 for model 4, while keeping the rest of the model the same as model 3. Although this model produces a small step change in the χ 2 , with the high Q data significantly improved, it does not capture the low Q fringes. However, it provides some evidence that a magnetic graphene layer is needed to fit the data. At this point it seems likely that some combination of Ni dead-layers and magnetic graphene may be at play; however, certainty on the values eludes us, since the fit cannot reproduce the low Q data.
Model 5, shown in Figure S6, is another null model, where the statistics dominate, consisting of a single layer of Ni on an Al 2 O 3 substrate with Ni magnetic dead-layers top and bottom and a non-magnetic graphene layer at the surface. This produces a significant improvement in χ 2 , as expected from the earlier models, hinting that some combination is required. Model 6 takes model 5 and allows the graphene to be magnetic. This results in a magnetic discrepancy at the top interface, where a small region of magnetically dead Ni leads into magnetic graphene, as shown in the mSLD in Figure S7 (e). This manifests as a spike in the magnetization of the graphene layer and is not physical.
Model 7, presented in Figure S8, has issues highlighted by red circles in panels (d) and (e), where this time the Ni roughness violates the Névot-Croce 23 approximation and exceeds the thickness of the graphene. This produces non-physical magnetic profiles, with a peak and dip at the interface regions, which is also present in model 6. At this point it is obvious that the fit is limited by the low Q features ringed in blue in Figure S8. The fit cannot improve at high Q unless the low Q is improved, due to the weighting of the statistics.
Models 1−7 show that small changes to the graphene and magnetic dead-layers have no effect on the low Q oscillations, whose statistics dominate the χ 2 and so prevent a good fit of the high Q. It is noteworthy that the high Q fringes contain information on the graphene layer, whereas those at low Q provide details about the Ni film. Model 8 (Figure S9) takes model 7 and splits the Ni layer into two, allowing a large roughness between the two layers to grade the nSLD and mSLD, while the two are kept magnetically linked.
Having two very similar nSLD layers allows some extra contrast and beat frequencies that can modify the amplitude of the Kiessig oscillations. This dramatically improves the χ 2 and the uncertainties on the graphene layer, as the fit is no longer dominated by the residuals at low Q. We have little justification for this, but as the Ni layer is relatively thick, if there is clamping to the Al 2 O 3 substrate then the approximately 80 nm Ni(111) layer may relax its stress/strain across this thickness, producing an nSLD gradient across the layer.
Model 8 is a workable fit demonstrating magnetism in the graphene layer; however, the graphene layer thickness reduces to ∼ 2 Å, which is right at the fitting limit set for one monolayer. The uncertainty in the fitted parameter indicates that we are not very sensitive to it. At this point we had not included any information from other characterization techniques. Both SEM and Raman spectroscopy measurements indicate that the Ni(111) with rotated-domain graphene should have approximately two to three monolayers of graphene on the surface. Therefore, if we use this information to set the lower bound on the prior for the graphene thickness, we get model 9 (shown in Figure 3 of the main text). This produces a fit within the error of the χ 2 of model 8 but utilising all the information we have available.
Four further checks were performed to study the sensitivity of the PNR modelling to the magnetic moment in the graphene layer. This is a crucial check, as all the fits discussed previously (i.e. before model 9) were dominated by the residuals at low Q, which were not adequately modelled until the Ni layer was allowed to be graded. Model 10 (Figure S10) is therefore based on model 9, but with the magnetic moment in graphene fixed to zero. The fit produced a χ2 which is only slightly worse than model 9 (see Table II), indicating that we still have little sensitivity to any magnetism in the graphene.
In the next test the graphene layer is removed entirely; the results are shown in Figure S11. Again, the magnetism in the Ni has dead layers top and bottom. Surprisingly, this almost matches model 9's χ2, being fractionally worse but with overlapping error bars.
Models 12 and 13, shown in Figures S12 and S13, are copies of models 10 and 11, respectively, but with dead layers at the bottom interface only; the magnetism at the top is conformal with the nuclear structure, as in model 9. Both produce worse fits than model 9, which supports the idea that either a graphene layer or a top magnetic dead-layer in the Ni is needed to improve the fit.
We are faced with the conundrum of having 5 models, all with very similar χ2 values. Taking into account the extra evidence provided by the SEM, Raman and XMCD allows us to qualitatively select the model with two monolayers of magnetic graphene. Ideally, we would like a quantitative measure applicable to the PNR fits that also shows this is the most likely case.
It is possible to obtain a more quantitative measure by refitting the models using a nested sampler.26,27 Here we used the UltraNest package.28 The details of how nested sampling works are beyond the scope of this work and we direct the reader to the work of Skilling et al.,26 who developed nested sampling as a method of Bayesian evidence determination. The work of McCluskey et al.29 deploys nested sampling for analysing neutron reflectivity data from soft matter systems, obtaining the Bayesian evidence term and using it to compare models of different complexity and to decide which most meaningfully fits the data. This approach avoids the risk of over-fitting, as the Bayesian evidence is derived from an integral in parameter space and therefore scales with the number of parameters. It is akin to having a built-in Occam's razor, which means that any additional free parameters of the model must significantly improve the likelihood to be justified. However, it is also important to note that the accuracy of the determined evidence depends on the prior probabilities chosen for each of the free parameters, and therefore care should be taken to ensure that these are meaningful.29 All shared parameters between the models have strictly the same priors (uniform ranges and distributions). It is the setting of the priors on the additional parameters, in our case the graphene thickness and magnetism, as maximum and minimum ranges (assuming a uniform probability distribution, as in our case) via our secondary evidence (SEM, Raman and XMCD) that allows us to get some measure of how including this information in the fit lends validity to the models even when they have very similar χ2 values. Hence, the Bayesian evidence allows for the comparison of two models with different numbers of parameters, given that the data they are applied to are the same.
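As a rough illustration of this workflow (not the authors' actual fitting code, which couples the sampler to the reflectivity forward model), the sketch below runs UltraNest's ReactiveNestedSampler on a toy likelihood with uniform priors whose lower bound on the graphene thickness encodes the SEM/Raman evidence. The forward model, data and parameter names are all placeholder assumptions.

```python
# Rough illustration of nested-sampling evidence estimation with UltraNest.
# The forward model below is a toy stand-in for the PNR reflectivity
# calculation; all names and values are placeholder assumptions.
import numpy as np
import ultranest

q = np.linspace(0.01, 0.2, 50)                      # toy momentum-transfer grid

def model(q, thickness, moment):
    # Toy forward model standing in for the layered reflectivity calculation.
    return np.exp(-q * thickness) * (1.0 + 0.1 * moment)

rng = np.random.default_rng(0)
err = 0.02 * np.ones_like(q)
data = model(q, 9.0, 0.5) + rng.normal(0.0, err)    # synthetic "measurement"

param_names = ["gr_thickness", "gr_moment"]         # hypothetical free parameters

def prior_transform(cube):
    # Uniform priors; the lower bound on the graphene thickness encodes the
    # SEM/Raman evidence of at least two monolayers (~6.7 A, an assumption).
    params = np.empty_like(cube)
    params[0] = 6.7 + cube[0] * (12.0 - 6.7)        # thickness in Angstrom
    params[1] = cube[1]                             # moment in mu_B / atom
    return params

def log_likelihood(params):
    resid = (data - model(q, *params)) / err
    return -0.5 * np.sum(resid**2)

sampler = ultranest.ReactiveNestedSampler(param_names, log_likelihood, prior_transform)
result = sampler.run()
print(result["logz"], result["logzerr"])            # ln Z and its uncertainty
```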
The evidence term is output on a logarithmic scale and is negative here.26 In this case, the larger (less negative) the number, the more probable the model is; for example, −2 is more probable than −5. There are ways of further utilising the evidence term, but we refer the reader to the literature with regard to these.30,31 Models 9-13 were refitted in this manner following the outline of McCluskey et al.29 The evidence terms derived from these fits are shown in Figure S14 (c) and Table II.
FIG. S10. PNR Model 10: Ni layer with a dead layer at the substrate interface and continuous magnetism up to the graphene layer at the surface. The Ni has been split into two continuous Ni layers to allow a gradient across the layer. The graphene layer is constrained so it cannot be thinner than 2 monolayers of graphene, in line with the SEM and Raman results, and has been set to be non-magnetic. (a) Fresnel reflectivity for 10 K and 300 K, (b) and (c) show the spin asymmetries, (d) is the nuclear scattering length density (nSLD) profile and (e) the magnetic scattering length density (mSLD) profile. The grey banded regions around the SLD lines are the 95% Bayesian confidence intervals.
FIG. S11. PNR Model 11: the Ni layer has dead layers at the substrate interface and at the top Ni/graphene interface. The Ni has been split into two continuous Ni layers to allow a gradient across the layer. The graphene layer has been removed. (a) Fresnel reflectivity for 10 K and 300 K, (b) and (c) show the spin asymmetries, (d) is the nuclear scattering length density (nSLD) profile and (e) the magnetic scattering length density (mSLD) profile. The grey banded regions around the SLD lines are the 95% Bayesian confidence intervals.
Figure S14 (a) also shows the normalized χ2 values as a function of the number of parameters and of the model number. It is clear that models 8 and 9 are very close, with 8 being slightly better than 9. There were several bi-modal correlations in the fit, in particular regarding the graphene thickness and the graphene and top Ni interface roughnesses; some of the cleanest examples are shown in Figure S15. This shows how crucial the inclusion of the secondary information is in determining the best model, as it effectively allows the selection of the right node. The result from the nested sampler, following the rule that the largest value represents the greatest evidence, indicates that model 9 is the most probable.
In summary, the decision on the best PNR model was taken by considering the information obtained from the complementary techniques (SEM, Raman spectroscopy and XMCD). Therefore, model 9 was used to estimate the magnetic moment induced in graphene in the rotated-domain graphene/Ni sample. The same model was then used to fit the PNR data of the other systems: the epitaxial graphene/Ni and graphene/Ni9Mo1 samples.
FIG. S14. Panels (a) and (b) show the trends in χ2 vs the number of parameters and the model number, respectively. Models 8 and 9 have the lowest (best) χ2 values. However, Model 8 is dismissed as the thickness of the graphene was allowed to vary freely and shrank to 2 Å, which is not physical; the Raman and SEM results indicate that two monolayers of graphene are present. (c) Bayesian evidence term as computed by the UltraNest nested sampler package.28 The highest value has the greatest evidence; the dashed grey line indicates model 9's cut-off vs the 4 closest physical models with similar χ2. These models (10-13) all dismiss the secondary information from the SEM, Raman and XMCD, which shows that two monolayers of graphene are present and that the graphene is magnetic in nature. The error bars in panel (c) are smaller than the data points.
FIG. 1. Room temperature Raman spectroscopy measurements taken at three different regions (1-3) after transferring the graphene from (a) the epitaxial Ni and (b) rotated Ni samples to a Si/SiO2 wafer, showing the graphene's characteristic peaks. The dashed vertical lines separate the regions of the different peaks.
FIG. 2. X-ray absorption spectra for circularly polarized light and the areas used to apply the sum rules for the rotated graphene/Ni sample measured at 300 K: (a) and (b) XMCD and XAS spectra for the Ni L2,3-edge. (c) and (d) XMCD and XAS spectra for the graphene layer. The vertical and horizontal dashed lines in (a) indicate the maximum integration range over the L2,3 peak and the values used for the calculation of mo and ms, respectively.
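For readers who want the mechanics, a minimal sketch of the L2,3 sum-rule analysis implied by this caption is given below, following Chen et al. [PRL 75, 152 (1995)], which is cited in this work. It neglects the magnetic dipole term ⟨Tz⟩, and the number of valence holes n_holes is an assumed material input, not a quantity measured here.

```python
# Minimal sketch of the L2,3 XMCD sum rules of Chen et al. [PRL 75, 152
# (1995)], neglecting the magnetic dipole term <Tz>. The arrays are assumed
# to be background-subtracted spectra on a common energy grid (eV).
import numpy as np

def xmcd_sum_rules(energy, xmcd, xas, e_split, n_holes):
    """p: XMCD integral over the L3 edge; q: over L3 + L2;
    r: integral of the isotropic XAS over both edges."""
    l3 = energy < e_split                        # energies belonging to L3
    p = np.trapz(xmcd[l3], energy[l3])
    q = np.trapz(xmcd, energy)
    r = np.trapz(xas, energy)
    m_orb = -4.0 * q * n_holes / (3.0 * r)       # orbital moment  [mu_B/atom]
    m_spin = -(6.0 * p - 4.0 * q) * n_holes / r  # effective spin moment
    return m_orb, m_spin
```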
PNR measurements were carried out to measure the magnetic properties of each layer of the samples individually and to determine quantitatively the value of the induced magnetic moment in graphene. The PNR results for the rotated-domain graphene/Ni(111) and epitaxial graphene/Ni(111) samples, measured at 10 K and 300 K, are displayed in Figures 3 and 4, respectively. Panel (a) of each figure shows the Fresnel reflectivity profiles, and panels (b) and (c) the spin asymmetry (SA = [R+ − R−]/[R+ + R−], where R+ and R− are the spin-up and spin-down neutron specular reflectivities, respectively). The SA scales with the magnetic signal; a flat SA line at zero, shown as a blue dashed line, represents no net magnetic induction present in the system. Panel (d) displays the nuclear scattering length density (nSLD) for the sample structure. This structure is shared at both temperatures in the co-refinement, and panel (e) shows the corresponding magnetic scattering length density (mSLD) profile.
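For clarity, the spin asymmetry plotted in panels (b) and (c) is just this normalized difference of the two spin channels; a one-line helper (the reflectivity arrays are placeholders):

```python
import numpy as np

def spin_asymmetry(r_up, r_down):
    # SA = (R+ - R-) / (R+ + R-); zero everywhere means no net magnetization.
    r_up, r_down = np.asarray(r_up), np.asarray(r_down)
    return (r_up - r_down) / (r_up + r_down)
```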
It is noteworthy that the change in the figure of merit (χ2) from the unconstrained to the constrained graphene thickness is within the error bars. However, the results show that the graphene thickness has only a subtle influence, if any, on the amount of induced magnetization, as can be deduced from the values of the measured magnetic moments. To obtain greater certainty on the model selection given the data, we used a nested sampler51,52 to calculate the Bayesian evidence taking into account the structural information, providing a high degree of confidence in the final fit model, as shown in the SI. Additional scenarios were also tested. For example, oxidation of the Ni layer and the formation of Ni-carbide were examined by embedding an intermediate NiO or Ni2C layer, respectively, at the interface between the Ni and graphene, but this led to poorer fits; the best results, shown in Figures 3 and 4, were achieved using a simpler model: substrate/a FM layer split into two regions/graphene (see the modelling methodology in the SI).
FIG. 3. PNR Model 9: consists of a Ni layer split into two regions for the rotated graphene/Ni sample, with a magnetic dead-layer at the substrate interface and continuous magnetic moment variation up to the graphene layer, which is allowed to be magnetic. (a) Fresnel reflectivity for 10 K and 300 K, (b) and (c) show the spin asymmetries, (d) is the nuclear scattering length density (nSLD) profile and (e) the magnetic scattering length density (mSLD) profile. The grey banded regions around the SLD lines are the 95% Bayesian confidence intervals.
FIG. 4. Ni layer split into two regions for the epitaxial graphene/Ni sample. (a) Fresnel reflectivity for 10 K and 300 K, (b) and (c) show the spin asymmetries, (d) is the nuclear scattering length density (nSLD) profile and (e) the magnetic scattering length density (mSLD) profile.
graphene and Ni(111) would be weaker in the rotated-domain case. The greater uncertainty in the values of the moment obtained for the graphene in the epitaxial case can be attributed to the Bayesian analysis being sensitive to the reduced count time and the short Q range with which the epitaxial Ni was measured, due to finite available beam time. Both Ni(111) samples
FIG. 5. Ni9Mo1 layer split into two regions for the graphene/Ni9Mo1 sample. (a) Fresnel reflectivity for 10 K and 300 K, (b) and (c) show the spin asymmetries, (d) is the nuclear scattering length density (nSLD) profile and (e) the magnetic scattering length density (mSLD) profile.
Weser et al. suggested a similar value (0.05-0.1 µB/C atom) for the magnetic moment induced in a monolayer of graphene grown on a Ni(111) film.13 However, their assumption was based on comparing the graphene/Ni system with other C/3d TM structures, such as a C/Fe multilayer with 0.55 nm of C,58 and carbon nanotubes (CNTs) on a Co film,59 where magnetic moments of 0.05 µB/C atom and 0.1 µB/C atom were estimated, respectively. Nonetheless, it is difficult to compare graphene-
In summary, we have successfully grown graphene by chemical vapor deposition (CVD) on different TM substrates. The magnetic moment induced in rotated-domain graphene as a result of the proximity effect in the vicinity of a FM substrate was detected by element-specific XMCD measurements at the C K-edge. PNR experiments were carried out to determine the magnitude of the magnetic moment detected by XMCD. Although a higher magnetic moment was expected to be induced in the epitaxial graphene/Ni sample, the PNR results indicate the epitaxial graphene film had a magnetic moment of ∼0.38 µB/C atom, as compared to ∼0.53 µB/C atom for the rotated-domain graphene. Both values are higher than those predicted in other studies.13,58,59 PNR measurements on graphene/Ni9Mo1 support the universal model proposed by Voloshina and Dedkov, in which the induced magnetic moment in graphene arises from the opening of the graphene's Dirac cone as a result of the strong C pz-Ni 3d hybridization. Our measurements provide the first quantitative estimation of the induced magnetization in graphene by PNR.
IV. EXPERIMENTAL SECTION
A. Sample Preparation
The sample preparation procedure involved two stages: the growth of the TM films using magnetron sputtering and the growth of graphene by CVD. The TM films were deposited at RT on 1 mm thick Al2O3(0001) substrates using a CEVP magnetron sputtering chamber with a base pressure of 1.2-2 × 10−8 mTorr. The thick substrates were used to reduce the possibility of sample deformation, which could affect the reflectivity measurements. The deposition of the TM films was performed using 99.9% pure Ni and Ni9Mo1 targets. A DC current of 0.1 A and a constant flow of pure argon of 14 sccm were used to grow 80 nm of highly textured Ni(111) and Ni9Mo1(111) films at a rate of 0.02 nm s−1 in a plasma pressure of 2 mTorr (3 mTorr for Ni9Mo1). Figure 6 shows the X-ray diffraction (XRD) measurements of the deposited films, acquired with a Bruker D8 Discover HRXRD with a Cu Kα monochromatic beam (40 kV, 40 mA). The scans show highly textured pure films oriented in the [111] direction for the Ni and Ni9Mo1 films.
The samples were then transferred into a CVD system for the growth of graphene directly on the Ni(111) and Ni9Mo1(111) films on ∼2 cm × 2 cm substrates. Growth recipes similar to those reported by Patera et al.29 were adapted to obtain epitaxial and rotated-domain graphene directly on the Ni film. For the Ni9Mo1 film, rotated-domain graphene was grown by first introducing pure H2 gas at a rate of 200 sccm into the CVD chamber with a base pressure of 2.7 × 10−6 mbar. The CVD growth chamber was then heated to 650 °C for 12 minutes and the sample was exposed to C2H4 at a flow rate of 0.24 sccm for 40 minutes before being cooled down to RT in vacuum. This approach reduces any oxidized TM back to a clean metallic surface before the growth of graphene.
FIG. 6. X-ray diffraction measurements (scanning range of 40°-53°) of highly textured films: (a) Al2O3(0001)/Ni(111) and (b) Al2O3(0001)/Ni9Mo1(111).
FIG. 7. SEM images at 1 kV showing the graphene domains for (a) epitaxial graphene/Ni and (b) rotated graphene/Ni, and (c) the LEED diffraction pattern of epitaxial graphene on a Ni(111) substrate at 300 eV. The red circle in (b) highlights a single graphene domain with a diameter of ∼0.25 µm.
Figure 7 (a) shows a homogeneous monolayer of graphene on Ni. The darker grey regions in Figure 7 (b) are the differently oriented graphene domains, whereas the bright areas in Figure 7 (a) and (b) are the bare Ni film. The low-energy electron diffraction (LEED) pattern in Figure 7 (c) shows the graphene's hexagonal pattern epitaxially grown on the Ni(111) substrate. The (1 × 1) grown graphene structure is confirmed since no additional diffraction spots are observed in the LEED pattern.
B. Raman Spectroscopy Measurements
RT Raman scans were taken at three different regions of each sample using a 532 nm excitation laser wavelength (50× objective lens and ∼1 µm laser spot size). Before the measurements, the graphene was first transferred by a chemical etching process from the metallic films onto a Si substrate with a 300 nm thermally oxidized SiO2 layer. This approach overcomes the fact that closely lattice-matched films lead to a loss of the resonance conditions for observing Raman spectra, as a result of the strong chemical interaction between the graphene π orbital and the d-states of Ni and Ni9Mo1, which also alters the graphene's pz orbitals. Furthermore, the increase in the C−C bond length to match the lattice of the FM leads to significant changes in the graphene's phonon spectrum.22,60 For the transfer process, the samples were cleaved into ∼5 mm × 5 mm squares and HNO3, diluted to 5% for Ni9Mo1 and 10% for Ni, was used to etch the metallic films slowly while preserving the graphene layer.
C. X-ray Magnetic Circular Dichroism (XMCD)
We carried out element-selective XMCD measurements to detect and distinguish the magnetization of the graphene from that of the FM layer. The XMCD experiments were performed at 300 K at the SIM end station of the Swiss Light Source (SLS) at the Paul Scherrer Institut (PSI), Switzerland, using total electron yield (TEY) detection mode with 100% circularly polarized light. The rotated-domain graphene/Ni sample was set to an incident angle of 30°
O. M. Aboljadayel, 1, a) C. J. Kinane, 2 C. A. F. Vaz, 3 D. M. Love, 1 R. S. Weatherup, 4 P. Braeuninger-Weimer, 4 M.-B. Martin, 4 A. Ionescu, 1 A. J. Caruana, 2 T. R. Charlton, 2 J. Llandro, 1 P. M. S. Monteiro, 1 C. H. W. Barnes, 1 S. Hofmann, 4 and S. Langridge 2, b)
FIG. S1. Room temperature Raman spectroscopy measurements for graphene transferred from the Ni9Mo1(111) film taken at three different regions, showing the graphene's characteristic peaks. The dashed vertical lines indicate the regions of the different peaks.
FIG. S2. PNR Model 1: Single Ni layer with commensurate magnetism. (a) Fresnel reflectivity for 10 K and 300 K, (b) and (c) show the spin asymmetries, (d) is the nuclear scattering length density (nSLD) profile and (e) the magnetic scattering length density (mSLD) profile. The grey banded regions around the SLD lines are the 95% Bayesian confidence intervals.
FIG. S3. PNR Model 2: Ni layer with dead-layers top and bottom. (a) Fresnel reflectivity for 10 K and 300 K, (b) and (c) show the spin asymmetries, (d) is the nuclear scattering length density (nSLD) profile and (e) the magnetic scattering length density (mSLD) profile. The grey banded regions around the SLD lines are the 95% Bayesian confidence intervals.
FIG. S4. PNR Model 3: Ni layer with commensurate magnetism and a non-magnetic graphene layer on the surface. (a) Fresnel reflectivity for 10 K and 300 K, (b) and (c) show the spin asymmetries, (d) is the nuclear scattering length density (nSLD) profile and (e) the magnetic scattering length density (mSLD) profile. The grey banded regions around the SLD lines are the 95% Bayesian confidence intervals.
FIG. S5. PNR Model 4: Ni layer with commensurate magnetism and a magnetic graphene layer on the surface. (a) Fresnel reflectivity for 10 K and 300 K, (b) and (c) show the spin asymmetries, (d) is the nuclear scattering length density (nSLD) profile and (e) the magnetic scattering length density (mSLD) profile. The grey banded regions around the SLD lines are the 95% Bayesian confidence intervals.
FIG. S6. PNR Model 5: Ni layer with dead-layers top and bottom and a non-magnetic graphene layer on the surface. (a) Fresnel reflectivity for 10 K and 300 K, (b) and (c) show the spin asymmetries, (d) is the nuclear scattering length density (nSLD) profile and (e) the magnetic scattering length density (mSLD) profile. The grey banded regions around the SLD lines are the 95% Bayesian confidence intervals.
FIG. S7. PNR Model 6: Ni layer with dead-layers top and bottom and a magnetic graphene layer on the surface. (a) Fresnel reflectivity for 10 K and 300 K, (b) and (c) show the spin asymmetries, (d) is the nuclear scattering length density (nSLD) profile and (e) the magnetic scattering length density (mSLD) profile. The grey banded regions around the SLD lines are the 95% Bayesian confidence intervals.
FIG. S8. PNR Model 7: Ni layer with a dead layer at the substrate interface and continuous magnetism into a magnetic graphene layer on the surface. (a) Fresnel reflectivity for 10 K and 300 K, (b) and (c) show the spin asymmetries, (d) is the nuclear scattering length density (nSLD) profile and (e) the magnetic scattering length density (mSLD) profile. The grey banded regions around the SLD lines are the 95% Bayesian confidence intervals.
FIG. S9. PNR Model 8: Ni layer with a dead layer at the substrate interface and continuous magnetism into a magnetic graphene layer on the surface; the graphene layer thickness is unconstrained. (a) Fresnel reflectivity for 10 K and 300 K, (b) and (c) show the spin asymmetries, (d) is the nuclear scattering length density (nSLD) profile and (e) the magnetic scattering length density (mSLD) profile. The grey banded regions around the SLD lines are the 95% Bayesian confidence intervals.
TABLE I. Summary of the PNR results for the rotated graphene/Ni, epitaxial graphene/Ni and graphene/Ni9Mo1 samples using model 9: sapphire/a FM layer split into two regions/graphene. The values in parentheses are the lower and upper 95% Bayesian confidence limits.57

                                   FM layer 1 + FM layer 2                       Graphene
Sample           Temperature [K]   Thickness [nm]     Magnetic Moment [µB/atom]  Thickness [nm]     Magnetic Moment [µB/atom]
Rotated Gr/Ni    10                80.6 (80.2, 81.3)  0.60 (0.60, 0.61)          0.86 (0.81, 1.04)  0.53 (0.52, 0.54)
                 300                                  0.57 (0.56, 0.57)                             0.53 (0.51, 0.54)
Epitaxial Gr/Ni  10                77.5 (77.1, 78.0)  0.61 (0.60, 0.61)          0.92 (0.2, 1.2)    0.38 (0.12, 0.50)
                 300                                  0.579 (0.575, 0.583)                          0.2 (0.0, 0.5)
Gr/Ni9Mo1        10                75.0 (74.0, 76.4)  0.064 (0.056, 0.071)       0.84 (0.80, 1.00)  0.12 (0.07, 0.17)
                 300                                  0.003 (−0.004, 0.01)                          0.02 (0.001, 0.05)
TABLE I. The results of the Raman measurements, summarising the average peak positions and the 2D average FWHM for graphene transferred from the rotated Ni, epitaxial Ni and Ni9Mo1 films.

Sample             D [cm−1]  G [cm−1]  D' [cm−1]  D+D" [cm−1]  2D [cm−1]  2D FWHM [cm−1]
Rotated Ni(111)    1346.3    1590.7    1624.8     2464.2       2681.9     46.2
Epitaxial Ni(111)  1344.6    1590.7    1623.1     2462.6       2682.6     40.8
Ni9Mo1(111)        1346.3    1590.7    1619.9     2464.0       2682.6     39.6
TABLE II. Table of evidence: comparison of model parameters, χ2 values and Bayesian evidence terms.

Model Number  Number of Parameters  χ2    χ2 error  Logz (Evidence)  Logz error
9             28                    2.73  ±0.06     −868.0           ±0.3
10            30                    2.86  ±0.06     −915.0           ±0.7
11            27                    2.79  ±0.05     −1097.6          ±0.5
12            26                    3.52  ±0.05     −888.7           ±0.4
13            23                    2.95  ±0.05     −927.1           ±0.4
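Because the table reports ln Z directly, the relative support for the models can be read off by exponentiating evidence differences (equal prior odds assumed, in the spirit of Kass and Raftery30); a minimal sketch using the values above:

```python
# Posterior model probabilities from the ln-evidence values of Table II,
# assuming equal prior odds across the five models.
import numpy as np

logz = {9: -868.0, 10: -915.0, 11: -1097.6, 12: -888.7, 13: -927.1}
models = sorted(logz)
lz = np.array([logz[m] for m in models], dtype=float)
prob = np.exp(lz - lz.max())
prob /= prob.sum()
for m, p in zip(models, prob):
    print(f"model {m}: ln Z = {logz[m]:7.1f}, relative probability = {p:.3g}")
# The ln Bayes factor of model 9 over the runner-up (model 12) is
# -868.0 - (-888.7) = 20.7, i.e. overwhelming support for model 9.
```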
I. Shlimak, A. Haran, E. Zion, T. Havdala, Y. Kaganovskii, A. V. Butenko, L. Wolfson, V. Richter, D. Naveh, A. Sharoni, E. Kogan, and M. Kaveh, Physical Review B 91, 045414 (2015).
A. Allard and L. Wirtz, Nano Letters 10, 4335 (2010).
Acknowledgements
We would like to thank the ISIS Neutron and Muon Source for the provision of beam time (RB1510330 and RB1610424). The data are available at the following DOIs: https://doi. Other data presented in this study are available from the corresponding author upon request.
V. M. Karpan, G. Giovannetti, P. A. Khomyakov, M. Talanana, A. A. Starikov, M. Zwierzycki, J. van den Brink, G. Brocks, and P. J. Kelly, Physical Review Letters 99 (2007).
M. Weser, E. N. Voloshina, K. Horn, and Y. S. Dedkov, Physical Chemistry Chemical Physics 13, 7534 (2011).
J. Zhang, B. Zhao, Y. Yao, and Z. Yang, Scientific Reports 5, 1 (2015).
P. Högl, T. Frank, K. Zollner, D. Kochan, M. Gmitra, and J. Fabian, Physical Review Letters 124, 136403 (2020).
J. Meng, J. J. Chen, Y. Yan, D. P. Yu, and Z. M. Liao, Nanoscale 5, 8894 (2013).
Y. Song and G. Dai, Applied Physics Letters 106 (2015).
J. Xu, S. Singh, J. Katoch, G. Wu, T. Zhu, I. Žutić, and R. K. Kawakami, Nature Communications 9, 2 (2018).
E. W. Hill, A. K. Geim, K. Novoselov, F. Schedin, and P. Blake, IEEE Transactions on Magnetics 42, 2694 (2006).
Y. G. Semenov, K. W. Kim, and J. M. Zavada, Applied Physics Letters 91, 153105 (2007).
S. M. Sze and K. K. Ng, Physics of Semiconductor Devices (Wiley, 2006).
K. Pi, W. Han, K. M. McCreary, A. G. Swartz, Y. Li, and R. K. Kawakami, Physical Review Letters 104, 187201 (2010).
Y. Zhang, X. Sui, D. L. Ma, K. K. Bai, W. Duan, and L. He, Physical Review Applied 10, 1 (2018).
M. Weser, Y. Rehder, K. Horn, M. Sicot, M. Fonin, A. B. Preobrajenski, E. N. Voloshina, E. Goering, and Y. S. Dedkov, Applied Physics Letters 96, 012504 (2010).
J. C. Leutenantsmeyer, A. A. Kaverzin, M. Wojtaszek, and B. J. van Wees, 2D Materials 4, 014001 (2016).
Z. Wang, C. Tang, R. Sachs, Y. Barlas, and J. Shi, Physical Review Letters 114, 016603 (2015).
H. Haugen, D. Huertas-Hernando, and A. Brataas, Physical Review B 77, 115406 (2008).
D. V. Tuan and S. Roche, Physical Review Letters 116, 106601 (2016).
Y. Dedkov and E. Voloshina, Journal of Physics: Condensed Matter 27, 303002 (2015).
V. Fal'ko, Nature Physics 3, 151 (2007).
E. N. Voloshina and Y. S. Dedkov, Materials Research Express 1, 035603 (2014).
E. Voloshina and Y. Dedkov, in Physics and Applications of Graphene - Experiments, Vol. 499 (InTech, 2011) pp. 75-78.
A. Dahal and M. Batzill, Nanoscale 6, 2548 (2014).
Y. S. Dedkov and M. Fonin, New Journal of Physics 12, 125004 (2010).
H. Vita, S. Böttcher, P. Leicht, K. Horn, A. B. Shick, and F. Máca, Physical Review B 90, 1 (2014).
D. Marchenko, A. Varykhalov, J. Sánchez-Barriga, O. Rader, C. Carbone, and G. Bihlmayer, Physical Review B 91, 1 (2015).
H. C. Mertins, C. Jansing, M. Krivenkov, A. Varykhalov, O. Rader, H. Wahab, H. Timmers, A. Gaupp, A. Sokolov, M. Tesch, and P. M. Oppeneer, Physical Review B 98, 1 (2018).
J. B. Mendes, O. Alves Santos, T. Chagas, R. Magalhães-Paniago, T. J. Mori, J. Holanda, L. M. Meireles, R. G. Lacerda, A. Azevedo, and S. M. Rezende, Physical Review B 99, 1 (2019).
V. M. Karpan, P. A. Khomyakov, A. A. Starikov, G. Giovannetti, M. Zwierzycki, M. Talanana, G. Brocks, J. van den Brink, and P. J. Kelly, Physical Review B 78, 195419 (2008).
L. L. Patera, C. Africh, R. S. Weatherup, R. Blume, S. Bhardwaj, C. Castellarin-Cudia, A. Knop-Gericke, R. Schloegl, G. Comelli, S. Hofmann, and C. Cepek, ACS Nano, 7901 (2013).
R. S. Weatherup, H. Amara, R. Blume, B. Dlubak, B. C. Bayer, M. Diarra, M. Bahri, A. Cabrero-Vilatela, S. Caneva, P. R. Kidambi, M. B. Martin, C. Deranlot, P. Seneor, R. Schloegl, F. Ducastelle, C. Bichara, and S. Hofmann, Journal of the American Chemical Society 136, 13698 (2014).
S. M. Kozlov, F. Viñes, and A. Görling, Journal of Physical Chemistry C 116, 7360 (2012).
Y. Y. Wang, Z. H. Ni, Z. X. Shen, H. M. Wang, and Y. H. Wu, Applied Physics Letters 92 (2008), arXiv:0801.4595.
I. Chilres, L. A. Jauregui, W. Park, H. Cao, and Y. P. Chen, in New Developments in Photon and Materials Research, edited by J. I. Jang (Nova Science Publishers, Incorporated, 2013) Chap. 19, pp. 553-595.
A. Reina, H. Son, L. Jiao, B. Fan, M. S. Dresselhaus, Z. Liu, and J. Kong, The Journal of Physical Chemistry C 112, 17741 (2008).
L. M. Malard, M. A. Pimenta, G. Dresselhaus, and M. S. Dresselhaus, Physics Reports 473, 51 (2009).
A. C. Ferrari and D. M. Basko, Nature Nanotechnology 8, 235 (2013).
A. Cabrero-Vilatela, R. S. Weatherup, P. Braeuninger-Weimer, S. Caneva, and S. Hofmann, Nanoscale 8, 2149 (2016).
M.-B. Martin, B. Dlubak, R. S. Weatherup, M. Piquemal-Banci, H. Yang, R. Blume, R. Schloegl, S. Collin, F. Petroff, S. Hofmann, J. Robertson, A. Anane, A. Fert, and P. Seneor, Applied Physics Letters 107, 012408 (2015).
B. Dlubak, M.-B. Martin, R. S. Weatherup, H. Yang, C. Deranlot, R. Blume, R. Schloegl, A. Fert, A. Anane, S. Hofmann, P. Seneor, and J. Robertson, ACS Nano 6, 10930 (2012).
W. L. O'Brien and B. P. Tonner, Physical Review B 50, 12672 (1994).
O. Eriksson, A. M. Boring, R. C. Albers, G. W. Fernando, and B. R. Cooper, Physical Review.
W. L. O'Brien, B. P. Tonner, G. R. Harp, and S. S. P. Parkin, Journal of Applied Physics 76, 6462 (1994).
O. Eriksson, B. Johansson, R. C. Albers, A. M. Boring, and M. S. S. Brooks, Physical Review B 42, 2707 (1990).
C. Vaz, J. Bland, and G. Lauhoff, Reports on Progress in Physics 71, 056501 (2008).
R. Nakajima, J. Stöhr, and Y. U. Idzerda, Physical Review B 59, 6421 (1999).
C. T. Chen, F. Sette, Y. Ma, and S. Modesti, Physical Review B 42, 7262 (1990).
P. Söderlind, O. Eriksson, B. Johansson, R. C. Albers, and A. M. Boring, Physical Review B 45, 12911 (1992).
N. Menezes, V. S. Alves, E. C. Marino, L. Nascimento, L. O. Nascimento, and C. Morais Smith, Physical Review B 95, 1 (2017).
Y. J. Song, A. F. Otte, Y. Kuk, Y. Hu, D. B. Torrance, P. N. First, W. A. De Heer, H. Min, S. Adam, M. D. Stiles, A. H. MacDonald, and J. A. Stroscio, Nature 467, 185 (2010).
P. V. Semenikhin, A. N. Ionov, and M. N. Nikolaeva, Technical Physics Letters 46, 186 (2020).
J. Skilling, AIP Conference Proceedings 735, 395 (2004).
A. R. McCluskey, J. F. K. Cooper, T. Arnold, and T. Snow, Machine Learning: Science and Technology 1, 035002 (2020).
F. Al Ma'Mari, T. Moorsom, G. Teobaldi, W. Deacon, T. Prokscha, H. Luetkens, S. Lee, G. E. Sterbinsky, D. A. Arena, D. A. MacLaren, M. Flokstra, M. Ali, M. C. Wheeler, G. Burnell, B. J. Hickey, and O. Cespedes, Nature 524, 69 (2015).
W. W. Pai, H. T. Jeng, C. M. Cheng, C. H. Lin, X. Xiao, A. Zhao, X. Zhang, G. Xu, X. Q. Shi, M. A. Van Hove, C. S. Hsue, and K. D. Tsuei, Physical Review Letters 104, 1 (2010).
T. Moorsom, M. Wheeler, T. Mohd Khan, F. Al Ma'Mari, C. Kinane, S. Langridge, D. Ciudad, A. Bedoya-Pinto, L. Hueso, G. Teobaldi, V. K. Lazarov, D. Gilks, G. Burnell, B. J. Hickey, and O. Cespedes, Physical Review B 90, 1 (2014).
M. Hansen, R. P. Elliott, and F. A. Shunk, Constitution of Binary Alloys (McGraw-Hill, New York, 1958) p. 1305.
P. Kienzle, J. Krycka, N. Patel, and I. Sahin, https://bumps.readthedocs.io/en/latest/ (2011), accessed: 28-09-2021.
H.-C. Mertins, S. Valencia, W. Gudat, P. M. Oppeneer, O. Zaharko, and H. Grimmer, Europhysics Letters (EPL) 66, 743 (2004).
O. Céspedes, M. S. Ferreira, S. Sanvito, M. Kociak, and J. M. D. Coey, Journal of Physics: Condensed Matter 16, L155 (2004).
A. Allard and L. Wirtz, Nano Letters 10, 4335 (2010).
P. Kienzle, B. Maranville, K. O'Donovan, J. Ankner, N. Berk, and C. Majkrzak, https://www.nist.gov/ncnr/reflectometry-software (2017), accessed: 28-09-2021.
M. Björck and G. Andersson, Journal of Applied Crystallography 40, 1174 (2007).
C. F. Majkrzak, N. F. Berk, and U. A. Perez-Salas, Langmuir 19, 7796 (2003).
FIG. S12. PNR Model 12: a less realistic version of Model 10, included for completeness. The Ni layer has a dead layer at the substrate interface only. The Ni has been split into two continuous Ni layers to allow a gradient across the layer. The graphene layer is constrained so it cannot be thinner than 2 monolayers of graphene, in line with the SEM and Raman results, and is set to be non-magnetic. (a) Fresnel reflectivity for 10 K and 300 K, (b) and (c) show the spin asymmetries, (d) is the nuclear scattering length density (nSLD) profile and (e) the magnetic scattering length density (mSLD) profile. The grey banded regions around the SLD lines are the 95% Bayesian confidence intervals.
FIG. S13. PNR Model 13: a less realistic version of Model 11. The Ni layer has a dead layer at the substrate interface only. The Ni has been split into two continuous Ni layers to allow a gradient across the layer, and the graphene layer has been removed. (a) Fresnel reflectivity for 10 K and 300 K, (b) and (c) show the spin asymmetries, (d) is the nuclear scattering length density (nSLD) profile and (e) the magnetic scattering length density (mSLD) profile. The grey banded regions around the SLD lines are the 95% Bayesian confidence intervals.
L. M. Malard, M. A. Pimenta, G. Dresselhaus, and M. S. Dresselhaus, Physics Reports 473, 51 (2009).
A. C. Ferrari and D. M. Basko, Nature Nanotechnology 8, 235 (2013).
A. Cabrero-Vilatela, R. S. Weatherup, P. Braeuninger-Weimer, S. Caneva, and S. Hofmann, Nanoscale 8, 2149 (2016).
A. C. Ferrari, Solid State Communications 143, 47 (2007).
S. Reichardt and L. Wirtz, in Optical Properties of Graphene (World Scientific, 2017) pp. 85-132.
A. C. Ferrari, J. C. Meyer, V. Scardaci, C. Casiraghi, M. Lazzeri, F. Mauri, S. Piscanec, D. Jiang, K. S. Novoselov, S. Roth, and A. K. Geim, Physical Review Letters 97, 187401 (2006).
W. L. O'Brien and B. P. Tonner, Physical Review B 50, 12672 (1994).
H. Wang, C. Bryant, D. W. Randall, L. B. LaCroix, E. I. Solomon, M. LeGros, and S. P. Cramer, The Journal of Physical Chemistry B 102, 8347 (1998).
C. T. Chen, Y. U. Idzerda, H.-J. Lin, N. V. Smith, G. Meigs, E. Chaban, G. H. Ho, E. Pellegrin, and F. Sette, Physical Review Letters 75, 152 (1995).
Y. U. Idzerda, C. T. Chen, H. J. Lin, H. Tjeng, and G. Meigs, Physica B 208-209, 746 (1995).
K. Amemiya, T. Yokoyama, Y. Yonamoto, D. Matsumura, and T. Ohta, Physical Review B 64, 132405 (2001).
Z. Sun, Y. Zhan, S. Shi, and M. Fahlman, Organic Electronics 15, 1951 (2014).
H. Wende, A. Scherz, C. Sorg, K. Baberschke, E. K. U. Gross, H. Appel, K. Burke, J. Minár, H. Ebert, A. L. Ankudinov, and J. J. Rehr, AIP Conference Proceedings 882, 78 (2007).
J. Vogel, A. Fontaine, V. Cros, and F. Petroff, Physical Review B 55, 3663 (1997).
H. Vita, S. Böttcher, P. Leicht, K. Horn, A. B. Shick, and F. Máca, Physical Review B 90, 1 (2014).
D. J. Huang, H. T. Jeng, C. F. Chang, G. Y. Guo, J. Chen, W. P. Wu, S. C. Chung, S. G. Shyu, C. C. Wu, H. J. Lin, and C. T. Chen, Physical Review B 66, 174440 (2002).
J. Pelzl, R. Meckenstock, D. Spoddig, F. Schreiber, J. Pflaum, and Z. Frait, Journal of Physics: Condensed Matter 15 (2003).
P. Kienzle, B. Maranville, K. O'Donovan, J. Ankner, N. Berk, and C. Majkrzak, https://www.nist.gov/ncnr/reflectometry-software (2017), accessed: 28-09-2021.
P. Kienzle, J. Krycka, N. Patel, and I. Sahin, https://bumps.readthedocs.io/en/latest/ (2011), accessed: 28-09-2021.
F. Abelés, J. Phys. Radium 11, 307 (1950).
L. Névot and P. Croce, Revue de Physique Appliquée 15, 761 (1980).
A. Koutsioubas, Journal of Applied Crystallography 52, 538 (2019).
P. Kienzle, J. Krycka, N. Patel, and I. Sahin, https://refl1d.readthedocs.io/en/latest/guide/fitting.html#reporting-results (2017), accessed: 28-09-2021.
J. Skilling, AIP Conference Proceedings 735, 395 (2004).
J. Buchner, "Nested sampling methods" (2021), arXiv:2101.09675 [stat.CO].
J. Buchner, The Journal of Open Source Software 6, 3001 (2021), arXiv:2101.09604 [stat.CO].
A. R. McCluskey, J. F. K. Cooper, T. Arnold, and T. Snow, Machine Learning: Science and Technology 1, 035002 (2020).
R. E. Kass and A. E. Raftery, Journal of the American Statistical Association 90, 773 (1995).
D. Makowski, M. S. Ben-Shachar, S. H. A. Chen, and D. Lüdecke, Frontiers in Psychology 10 (2019).
FIG. S15. Correlation plots and posterior probability distributions for the parameters exhibiting bi-modal behaviour from Model 9. Secondary information (obtained from the SEM and Raman spectroscopy measurements) was used to select one mode as preferable in each case.
| [] |
[
"DETECTION OF CLOUDS IN MULTIPLE WIND VELOCITY FIELDS USING GROUND-BASED INFRARED SKY IMAGES",
"DETECTION OF CLOUDS IN MULTIPLE WIND VELOCITY FIELDS USING GROUND-BASED INFRARED SKY IMAGES"
] | [
"Guillermo Terrén-Serrano [email protected] \nDepartment of Electrical and Computer Engineering\nDepartment of Electrical and Computer Engineering\nThe University of New Mexico Albuquerque\n87131NMUnited States\n",
"Manel Martínez-Ramón \nThe University of New Mexico Albuquerque\n87131NMUnited States\n"
] | [
"Department of Electrical and Computer Engineering\nDepartment of Electrical and Computer Engineering\nThe University of New Mexico Albuquerque\n87131NMUnited States",
"The University of New Mexico Albuquerque\n87131NMUnited States"
] | [] | Horizontal atmospheric wind shear causes wind velocity fields to have different directions and speeds. In images of clouds acquired using ground-based sky imagers, clouds may be moving in different wind layers. To increase the performance of an intra-hour global solar irradiance forecasting algorithm, it is important to detect multiple layers of clouds. The information provided by a solar forecasting algorithm is necessary to optimize and schedule the solar generation resources and storage devices in a smart grid. This investigation studies the performance of unsupervised learning techniques when detecting the number of cloud layers in infrared sky images. The images are acquired using an innovative infrared sky imager mounted on a solar tracker. Different mixture models are used to infer the distribution of the cloud features. Multiple Bayesian metrics and a sequential hidden Markov model are implemented to find the optimal number of clusters in the mixture models, and their performances are compared. The motion vectors are computed using a weighted implementation of the Lucas-Kanade algorithm. The correlations between the cloud velocity vectors and temperatures are analyzed to find the method that leads to the most accurate results. We have found that the sequential hidden Markov model outperformed the detection accuracy of the Bayesian metrics. | 10.1016/j.knosys.2023.110628 | [
"https://export.arxiv.org/pdf/2105.03535v3.pdf"
] | 234,338,302 | 2105.03535 | 6e6c050bd61b7da65fe19a0d05b094dd261d1a7e |
DETECTION OF CLOUDS IN MULTIPLE WIND VELOCITY FIELDS USING GROUND-BASED INFRARED SKY IMAGES
November 2, 2021 27 Jun 2021
Guillermo Terrén-Serrano [email protected]
Department of Electrical and Computer Engineering
Department of Electrical and Computer Engineering
The University of New Mexico Albuquerque
87131NMUnited States
Manel Martínez-Ramón
The University of New Mexico Albuquerque
87131NMUnited States
DETECTION OF CLOUDS IN MULTIPLE WIND VELOCITY FIELDS USING GROUND-BASED INFRARED SKY IMAGES
Keywords: Cloud Detection · Hidden Markov Model · Machine Learning · Mixture Models · Sky Imaging · Solar Forecasting · Weighted Lucas-Kanade
Horizontal atmospheric wind shear causes wind velocity fields to have different directions and speeds. In images of clouds acquired using ground-based sky imagers, clouds may be moving in different wind layers. To increase the performance of an intra-hour global solar irradiance forecasting algorithm, it is important to detect multiple layers of clouds. The information provided by a solar forecasting algorithm is necessary to optimize and schedule the solar generation resources and storage devices in a smart grid. This investigation studies the performance of unsupervised learning techniques when detecting the number of cloud layers in infrared sky images. The images are acquired using an innovative infrared sky imager mounted on a solar tracker. Different mixture models are used to infer the distribution of the cloud features. Multiple Bayesian metrics and a sequential hidden Markov model are implemented to find the optimal number of clusters in the mixture models, and their performances are compared. The motion vectors are computed using a weighted implementation of the Lucas-Kanade algorithm. The correlations between the cloud velocity vectors and temperatures are analyzed to find the method that leads to the most accurate results. We have found that the sequential hidden Markov model outperformed the detection accuracy of the Bayesian metrics.
Introduction
The ongoing transition toward energy generation systems that produce low-carbon emissions is increasing the penetration of renewable energies in the power grid [1]. However, the only three renewable sources that can produce enough power to fulfill the demand are geothermal, biomass and solar. In particular, solar energy has the potential to become the primary source of power due to its availability and capability [2]. A Smart Grid (SG) may optimize the dispatch of energy in large Photovoltaic (PV) power plants to meet demand using recent advances in information and communication technologies [3].
The power generated by PV systems is affected by Global Solar Irradiance (GSI) fluctuations that reach the surface of PV panels [4,5]. Shadows projected by moving clouds produce mismatch losses [6]. Although certain configurations of PV arrays reduce the impact of the losses, they still are outside of the allowed range demanded by grid operators [7]. Forecasting of power output will equip a SG powered by PV systems with the technology necessary for regulating the dispatch of energy [8,9]. Nevertheless, PV power plants have different physical configurations [10]. PV cells and batteries degrade following unique patterns [11]. For these reasons, the predicted power output cannot be directly extrapolated among the PV systems connected to the same SG [12,13,14,15]. The forecasting of GSI over a grid projected on the Earth's surface facilitates the prediction of power output from PV systems located within the span of the grid [16,17].
The formation of clouds is a phenomenon restricted by the Tropopause [18]. Different types of clouds are expected to form at different altitudes within the Troposphere [19]. The magnitude of the wind velocity field increases with altitude in the lower atmosphere [20,21]. The wind gradient may also change direction due to the friction of the wind with the surface of the Earth. The planetary boundary layer (the lowest part of the Troposphere) [22] is where wind shear causes low-level clouds to move in a different direction and at a different speed from high-level clouds [23,24,25].
Numerical Weather Prediction (NWP) models are computationally expensive for the forecasting resolution necessary in these applications [26,27,28,29,30,31]. GSI forecasting models which include ground weather features from meso-scale meteorology have problems of collinearity [32]. Cloud information extracted from geostationary satellite images improved the performance of solar irradiance forecasting with respect to NWP models [33,34]. However, real-time applications of GSI forecasting using satellite imaging are not feasible due to communications delays [35,36]. Ground-based sky imaging systems are an efficient and low-cost alternative for satellite imaging [37,38,39]. The performances of solar irradiance or PV power output forecasting algorithms are increased when visible and infrared (IR) ground-based sky imaging systems are installed on a solar tracker [40,41].
Attaching a fish-eye lens to a low-cost visible light camera can provide sky images with a large Field of View (FOV) [42,43]. The disadvantage of using visible light sky imaging systems in solar forecasting is that the intensity of the pixels in the circumsolar area is saturated [44]. Radiometric long-wave IR cameras based on uncooled microbolometers are low-cost and widely available [45]. These types of cameras have been used to analyze the radiation emitted by gases and clouds in the atmosphere [46]. Cloud statistics may be computed [47] to establish optical links for applications that involve Earth-space communication [48]. The measured temperature of clouds depends on the air temperature on the ground [49], so the calibration of these cameras is important to perform accurate measurements [50]. Merging thermal images acquired from multiple IR cameras mounted on a dome-shaped plane can provide thermal images with a larger FOV [51].
The wind velocity field shown in a sequence of cloud images is a physical process that is assumed to have a limited complexity [52]. A sequence of IR images allows the derivation of physical features from moving clouds in a wind velocity field. These are more interpretable for modelling physical processes. The features are temperature, velocity vectors, and height [53,54]. It has been found that the advantage of using unsupervised learning algorithms is that the response to a sequence of images is expected to depend on the physical process that the images represent rather than the intensity of their pixels [55]. Unsupervised learning methods, in special mixture models, infer the probability density functions of the observations without prior information [56].
Hidden Markov Models (HMM) were introduced to model linear sequences of discrete latent variables or states as a Markov process. These models are popular in computer vision and pattern recognition applications in which the current state of a system in an image is modelled in terms of previous states [57]. HMM have been used to detect and analyze temperature distributions of images acquired from IR thermal imaging systems [58]. It is possible to apply HMM to model stochastic physical processes [59,60] in image classification [61], and object recognition [62].
The unsupervised learning algorithm proposed in this investigation infers the number of wind velocity fields in a sequence of images using features extracted from the clouds. IR images of clouds are obtained using an innovative Data Acquisition (DAQ) system mounted on a solar tracker [49]. The velocity vectors are computed using a weighted variation of the standard Lucas-Kanade (LK) method [63]. The velocity vector of each pixel is computed in a weighted window of neighboring pixels. The weight of a pixel is the posterior probability of belonging to the lower or the upper cloud layer. The obtained velocity vectors for each cloud layer are averaged together, weighted by the posterior probability of each cloud layer.
A real-time probabilistic model is implemented to detect the number of layers in an IR image [64]. The proposed model is an HMM that models the hidden process of the number of wind velocity fields in a sequence of images [65,66]. The motion of the clouds in a sequence of images is used to calculate the velocity vectors. The temperature and height of the pixels are also extracted from the cloud images. The distributions of the features are inferred with different parametric mixture models [67,68]. The mixture models are optimized using the Expectation-Maximization (EM) algorithm [69,70]. The label switching of the mixture models is resolved using the average height of each distribution in the mixture model [71].
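A minimal sketch of this step, using scikit-learn's GaussianMixture as a stand-in for the mixture models described here (the paper also uses beta and gamma mixtures, which sklearn does not provide): the components are sorted by their mean height to undo label switching, and the resulting posteriors are the weights later used by the WLK method. The feature layout (height in the last column) is an assumption.

```python
# Sketch of the mixture-model step: fit a GMM to per-pixel cloud features
# with EM, then order components by mean height to resolve label switching.
# 'features' is a placeholder (n_pixels, n_features) array whose last column
# is assumed to be the cloud height.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_layers(features, n_layers):
    gmm = GaussianMixture(n_components=n_layers, covariance_type="full",
                          max_iter=200, random_state=0).fit(features)
    order = np.argsort(gmm.means_[:, -1])       # lower layer first, by mean height
    posteriors = gmm.predict_proba(features)[:, order]
    return gmm, order, posteriors

# posteriors[:, k] is the per-pixel probability of belonging to layer k and
# serves as the weight in the weighted Lucas-Kanade velocity estimation.
```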
Methodology
The long-wave IR camera provides a uniform thermal image by applying Wien's displacement law to the radiation emitted by a black body. Wien's displacement law states that the wavelength of peak emission is inversely proportional to the temperature, so the black body radiation maximum occurs at different wavelengths depending on the temperature. In our application, the feasible cloud temperatures are within the long-wave infrared spectrum [72].
A pixel of the camera frame is defined by a pair of Euclidean coordinates $\mathcal{X} = \{(i, j) \mid \forall i = 1, \ldots, M, \ \forall j = 1, \ldots, N\}$, and the temperature of each pixel is defined in Kelvin degrees as $\mathbf{T}^{(t)} = \{T^{(t)}_{i,j} \in \mathbb{R} \mid \forall i = 1, \ldots, M, \ \forall j = 1, \ldots, N\}$, where $t \in (0, \infty]$ indexes a sequence of IR images ordered chronologically.
When there are multiple layers of clouds in an image, a mixture model is expected to have multiple clusters. In order to infer the distribution of the temperatures or heights using a Beta Mixture Model (BeMM), the features are first normalized to the domain of a beta distribution, $\tilde{T}_{i,j} = [T^{(t)}_{i,j} - \min(\mathbf{T}^{(t)})]/[\max(\mathbf{T}^{(t)}) - \min(\mathbf{T}^{(t)})]$. When the inference is performed with a Gamma Mixture Model (GaMM), the temperatures are normalized to the domain of the gamma distribution, $\tilde{T}_{i,j} = T^{(t)}_{i,j} - \min(\mathbf{T}^{(t)})$. The heights (in kilometers) are already within the domain of the gamma distribution. When the inference is performed using a Gaussian Mixture Model (GMM), the temperatures and heights do not require normalization.
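As a minimal sketch of these normalizations, assuming the temperature frame is a NumPy array (the function names are illustrative, not part of the original method):

```python
import numpy as np

def normalize_beta(T, eps=1e-6):
    """Map temperatures onto the support of a beta distribution.
    The clip keeps values inside the open interval (0, 1)."""
    T_norm = (T - T.min()) / (T.max() - T.min())
    return np.clip(T_norm, eps, 1.0 - eps)

def normalize_gamma(T):
    """Shift temperatures onto the positive support of a gamma distribution."""
    return T - T.min()
```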
Weighted Lucas-Kanade
In the current computer vision literature, there are three primary methods to estimate the motion of objects in a sequence of images: the Lucas-Kanade [63], Horn-Schunck [73] and Farnebäck [74] methods. These three methods are based on the space-time partial derivatives between two consecutive frames. The techniques to estimate the motion vectors in an image are sensitive to the intensity gradient of the pixels. An atmospheric model is implemented to remove the gradient produced by the Sun's direct irradiance and the atmospheric scattered irradiance (both of which routinely appear on the images in the course of the year). A persistent model of the outdoor germanium window of the IR camera removes debris and water spots that appear in the image [75]. In this investigation, a Weighted Lucas-Kanade (WLK) method is implemented.
Optical Flow
The optical flow equation assumes that there exists a small displacement $\Delta x$ and $\Delta y$ in the direction of an object in an image. The object is assumed to have constant intensity $I$ between two consecutive frames, separated in time by a small increment $\Delta t$,
$$I(x, y, t) = I(x + \Delta x, y + \Delta y, t + \Delta t). \quad (1)$$
Assuming that the difference in intensity between neighboring pixels is smooth and that the brightness of a pixel is constant in consecutive frames, a Taylor series expansion is applied to obtain
$$I(x + \Delta x, y + \Delta y, t + \Delta t) = I(x, y, t) + \frac{\partial I}{\partial x}\Delta x + \frac{\partial I}{\partial y}\Delta y + \frac{\partial I}{\partial t}\Delta t. \quad (2)$$
Combining the last two equations and simplifying yields
$$\frac{\partial I}{\partial x}\Delta x + \frac{\partial I}{\partial y}\Delta y + \frac{\partial I}{\partial t}\Delta t = 0. \quad (3)$$
The velocity of an object is derived by dividing the displacement terms by the time increment $\Delta t$,
$$\frac{\partial I}{\partial x}\frac{\Delta x}{\Delta t} + \frac{\partial I}{\partial y}\frac{\Delta y}{\Delta t} + \frac{\partial I}{\partial t}\frac{\Delta t}{\Delta t} = 0. \quad (4)$$
The velocity components are defined as $u$ and $v$, so that
$$\frac{\partial I}{\partial x}u + \frac{\partial I}{\partial y}v + \frac{\partial I}{\partial t} = 0. \quad (5)$$
This single equation in two unknowns is underdetermined, which is known as the aperture problem,
$$I_x u + I_y v = -I_t, \quad (6)$$
where $I_x = \partial I/\partial x$, $I_y = \partial I/\partial y$ and $I_t = \partial I/\partial t$ are the derivatives, written in this shorthand for notational simplicity.
The 2-dimensional derivatives are approximated using convolutional filters on an image [76]. Let us define a discrete time and space sequence of images as $\mathbf{I}^{(t)} = \{I^{(t)}_{i,j} \in \mathbb{R}_{[0, 2^8)} \mid i = 1, \ldots, N, \ j = 1, \ldots, M\}$. The finite differences method is applied to compute the derivatives,
$$\mathbf{I}_x = \mathbf{I}^{(t-1)} * \mathbf{K}_x, \quad \mathbf{I}_y = \mathbf{I}^{(t-1)} * \mathbf{K}_y, \quad \mathbf{I}_t = \mathbf{I}^{(t-1)} * \mathbf{K}_t + \mathbf{I}^{(t)} * (-\mathbf{K}_t), \quad (7)$$
where $*$ represents a 2-dimensional convolution, and $\mathbf{I}^{(t-1)}$ and $\mathbf{I}^{(t)}$ are the first and second consecutive frames. $\mathbf{K}_x$, $\mathbf{K}_y$ and $\mathbf{K}_t$ are the differential kernels in the $x$, $y$ and $t$ directions respectively,
$$\mathbf{K}_x = \begin{bmatrix} -1 & 1 \\ -1 & 1 \end{bmatrix}, \quad \mathbf{K}_y = \begin{bmatrix} -1 & -1 \\ 1 & 1 \end{bmatrix}, \quad \mathbf{K}_t = \sigma \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}, \quad (8)$$
where the parameter $\sigma$ is the amplitude of the temporal kernel. This parameter may be cross-validated when the velocity field is known.
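A minimal sketch of Eqs. (7)-(8), assuming the frames are NumPy arrays and using scipy.signal.convolve2d (note that convolution flips the kernel; scipy.signal.correlate2d can be used instead if the unflipped kernels are intended):

```python
import numpy as np
from scipy.signal import convolve2d

def spatiotemporal_derivatives(frame_prev, frame_curr, sigma=1.0):
    """Approximate I_x, I_y, I_t with the 2x2 finite-difference
    kernels of Eq. (8), applied as in Eq. (7)."""
    Kx = np.array([[-1.0, 1.0], [-1.0, 1.0]])
    Ky = np.array([[-1.0, -1.0], [1.0, 1.0]])
    Kt = sigma * np.ones((2, 2))
    Ix = convolve2d(frame_prev, Kx, mode="same", boundary="symm")
    Iy = convolve2d(frame_prev, Ky, mode="same", boundary="symm")
    It = (convolve2d(frame_prev, Kt, mode="same", boundary="symm")
          + convolve2d(frame_curr, -Kt, mode="same", boundary="symm"))
    return Ix, Iy, It
```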
Lucas-Kanade
The LK method proposes to find the solution of the optical flow equations via Least Squares (LS). The optical flow equation is solved using a local patch of pixels within a sliding window. In this research, the LK method is extended with multiple importance weights. A sliding window is defined with odd width $W = 2w + 1$, where $w$ is the window size parameter, which has to be cross-validated.
Now we assume that the image may contain more than one velocity field. Then, at pixel $(i, j)$ of layer $l$, we define $1 \le l \le L$ hypotheses over the possible velocities $\mathbf{v}^{(l)}_{i,j}$. If the position of the central pixel of the window is defined as $(i, j)$, then the dependent and independent variables can be defined as
$$\mathbf{x}_{i-m,j-n} = \begin{bmatrix} I_x(i-m, j-n) \\ I_y(i-m, j-n) \end{bmatrix}, \quad \mathbf{v}^{(l)}_{i,j} = \begin{bmatrix} u^{(l)}_i \\ v^{(l)}_j \end{bmatrix}, \quad y_{i-m,j-n} = -I_t(i-m, j-n), \quad -w \le m \le w, \ -w \le n \le w. \quad (9)$$
Each velocity vector is associated with a posterior probability $\gamma^{(l)}_{i,j} \triangleq p(z_{i,j} = l \mid T_{i,j})$, where $z_{i,j} = l$ is a latent variable indicating that the velocity field at pixel $(i, j)$ corresponds to cloud layer $l$. These posteriors will be estimated in the next section.
We introduce a Weighted Least Squares (WLS) approach [77], whose weights are the posterior probabilities $\gamma^{(l)}_{i,j}$. Assume an extended vector $\mathbf{y}_{i,j}$ containing all instances of $y_{i-m,j-n}$ and a matrix $\mathbf{X}_{i,j}$ containing all $\mathbf{x}_{i-m,j-n}$.
Instead of minimizing the mean square error, we can maximize the expectation of the unnormalized log-posterior
$$\begin{aligned}
\mathbb{E}\left[\log p\left(\mathbf{y}_{i,j} \mid \mathbf{X}_{i,j}, \mathbf{v}^{(l)}_{i,j}\right) p\left(\mathbf{v}^{(l)}_{i,j}\right)\right] &= \mathbb{E}\left[\log \prod_{m,n,l} p\left(y_{i-m,j-n} \mid \mathbf{x}_{i-m,j-n}, \mathbf{v}^{(l)}_{i,j}\right)^{\mathbb{I}(z_{i,j}=l)} p\left(\mathbf{v}^{(l)}_{i,j}\right)\right] \\
&= \sum_{m,n,l} \mathbb{E}\left[\mathbb{I}\left(z_{i-m,j-n} = l\right)\right] \log p\left(y_{i-m,j-n} \mid \mathbf{x}_{i-m,j-n}, \mathbf{v}^{(l)}_{i,j}\right) + \log p\left(\mathbf{v}^{(l)}_{i,j}\right) \\
&= \sum_{m,n,l} \gamma^{(l)}_{i-m,j-n} \log p\left(y_{i-m,j-n} \mid \mathbf{x}_{i-m,j-n}, \mathbf{v}^{(l)}_{i,j}\right) + \log p\left(\mathbf{v}^{(l)}_{i,j}\right) \\
&= -\sum_{m,n,l} \gamma^{(l)}_{i-m,j-n} \left\| \mathbf{v}^{(l)\top}_{i,j} \mathbf{x}_{i-m,j-n} - y_{i-m,j-n} \right\|^2 - \tau \left\| \mathbf{v}^{(l)}_{i,j} \right\|^2 + \text{constant}, \quad (10)
\end{aligned}$$
where $\gamma^{(l)}_{i-m,j-n} = \mathbb{E}[\mathbb{I}(z_{i-m,j-n} = l)]$ is the posterior probability of the velocity field, $\mathbb{I}(\cdot)$ is the indicator function, and we assumed that both probabilities are Gaussian distributions: the first is a multivariate Gaussian modelling the error, with a given variance $\sigma^2_n$, and the second is a multivariate Gaussian prior over the velocities, whose covariance is an identity $\mathbf{I}_{2\times 2}$. Thus, $\tau = \sigma^2_n$ plays the role of a regularization parameter, and it may be validated or inferred by maximizing the likelihood term [78].
By computing the gradient of the expression with respect to $\mathbf{v}^{(l)}_{i,j}$ and setting it to zero, we obtain the solution
$$\mathbf{v}^{(l)}_{i,j} = \left( \mathbf{X}^\top_{i,j} \boldsymbol{\Gamma}^{(l)}_{i,j} \mathbf{X}_{i,j} + \tau \mathbf{I}_{2\times 2} \right)^{-1} \mathbf{X}^\top_{i,j} \boldsymbol{\Gamma}^{(l)}_{i,j} \mathbf{y}_{i,j}, \quad (11)$$
where $\boldsymbol{\Gamma}^{(l)}_{i,j}$ is a diagonal matrix containing all the posteriors of the $l$-th cluster.
The estimated velocity components are defined as $\mathbf{U}^{(l)} = \{u^{(l)}_{i,j} \in \mathbb{R} \mid i = 1, \ldots, N, \ j = 1, \ldots, M\}$ and $\mathbf{V}^{(l)} = \{v^{(l)}_{i,j} \in \mathbb{R} \mid i = 1, \ldots, N, \ j = 1, \ldots, M\}$.
The obtained velocity components for each posterior $l$ are averaged, weighting the vector components by their posterior,
$$\bar{\mathbf{U}} = \sum_{l=1}^{L} \boldsymbol{\Gamma}^{(l)} \circ \mathbf{U}^{(l)}; \quad \bar{\mathbf{V}} = \sum_{l=1}^{L} \boldsymbol{\Gamma}^{(l)} \circ \mathbf{V}^{(l)}, \quad (12)$$
where $\circ$ is the Hadamard product. The result is the velocity components for each pair of coordinates in the original frame, $\bar{\mathbf{U}}, \bar{\mathbf{V}} \in \mathbb{R}^{N \times M}$. The velocity vectors defined in polar coordinates have magnitude $\mathbf{R} = (\bar{\mathbf{U}} \circ \bar{\mathbf{U}} + \bar{\mathbf{V}} \circ \bar{\mathbf{V}})^{1/2}$ and angle $\boldsymbol{\Phi} = \arctan2(\bar{\mathbf{U}}, \bar{\mathbf{V}})$.
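A minimal sketch of the per-window WLS solve of Eq. (11), assuming the derivative patches and responsibilities are NumPy arrays (the function and argument names are illustrative):

```python
import numpy as np

def wls_velocity(Ix_win, Iy_win, It_win, gamma_win, tau=1e-8):
    """Weighted least-squares velocity for one window, Eq. (11).

    Ix_win, Iy_win, It_win : (W, W) derivative patches.
    gamma_win : (W, W) posterior probabilities of one cloud layer,
                used as importance weights.
    Returns the 2-vector (u, v)."""
    X = np.stack([Ix_win.ravel(), Iy_win.ravel()], axis=1)  # (W*W, 2)
    y = -It_win.ravel()                                     # (W*W,)
    g = gamma_win.ravel()                                   # diagonal of Gamma
    A = X.T @ (g[:, None] * X) + tau * np.eye(2)            # X' G X + tau I
    b = X.T @ (g * y)                                       # X' G y
    return np.linalg.solve(A, b)
```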
Maximum a Posteriori Mixture Model
When the clouds are moving in multiple wind velocity fields, the distributions of the temperatures, heights and velocity vector components are expected to form multiple clusters in the feature space of the observations.
Mixture models are implemented to infer the distributions of the physical features extracted from IR cloud images. These physical features have different domains, so the inference is implemented using probability functions defined on each one of these domains. Thus, the distribution of the velocity vectors defined in Cartesian coordinates is inferred with a multivariate GMM. The inference of the velocity vectors, when they are defined in polar coordinates, is performed independently for each component, using a GaMM for the magnitude and a Von Mises Mixture Model (VMMM) for the angle.
In this section, we propose the use of different mixture models to infer probability functions that better approximate the actual distribution of the features, with the aim of detecting the most likely number of wind velocity fields in an image and their pixelwise posterior probabilities $\gamma^{(l)}_{i,j}$. The formulation of the proposed mixture models includes a prior distribution on the cluster weights in order to avoid overfitting.
Expectation-Maximization
Let us consider that $\mathbf{x}_i$ are observations (i.e. feature vectors) which we wish to model as a mixture model, and that $z_i$ are the corresponding latent variables of their cluster indices. The optimal set of parameters of a mixture model can be computed using the EM algorithm. The implementation of the EM algorithm guarantees smooth convergence to a local maximum following an iterative approach consisting of two steps [69]. The maximized function is the expected complete data log-likelihood (CDLL) plus the log-prior,
$$\begin{aligned}
Q(\theta, \theta_{t-1}) &\triangleq \mathbb{E}\left[\sum_{i=1}^{N} \log p(\mathbf{x}_i, z_i \mid \theta)\right] + \log p(\boldsymbol{\pi} \mid \boldsymbol{\alpha}) \\
&= \sum_{i=1}^{N} \mathbb{E}\left[\log \prod_{l=1}^{L} \left( \pi^{(l)} p(\mathbf{x}_i \mid \theta^{(l)}) \right)^{\mathbb{I}(z_i=l)}\right] + \log p(\boldsymbol{\pi} \mid \boldsymbol{\alpha}) \\
&= \sum_{i=1}^{N} \sum_{l=1}^{L} p\left(z_i = l \mid \mathbf{x}_i, \theta^{(l)}_{t-1}\right) \log\left[ \pi^{(l)} p(\mathbf{x}_i \mid \theta^{(l)}) \right] + \log p(\boldsymbol{\pi} \mid \boldsymbol{\alpha}) \\
&= \sum_{i=1}^{N} \sum_{l=1}^{L} \gamma^{(l)}_i \log\left[ \pi^{(l)} p(\mathbf{x}_i \mid \theta^{(l)}) \right] + \log p(\boldsymbol{\pi} \mid \boldsymbol{\alpha}), \quad (13)
\end{aligned}$$
where $t$ represents the iteration of the algorithm, and the posterior probability introduced in Eq. (10) appears here as
$$\gamma^{(l)}_i \triangleq p\left(z_i = l \mid \mathbf{x}_i, \theta^{(l)}_{t-1}, \boldsymbol{\alpha}\right), \quad (14)$$
which is commonly referred to as the responsibility of cluster $l$ for sample $i$. A prior $p(\boldsymbol{\pi} \mid \boldsymbol{\alpha})$ with parameters $\boldsymbol{\alpha}$ is introduced for the probabilities $\boldsymbol{\pi}$. The initialization of the EM starts by randomly assigning a set of parameters and a prior.
In the expectation step of the EM algorithm, a posterior $\gamma^{(l)}_i$ is assigned to each sample using the likelihood function,
$$\gamma^{(l)}_i = \frac{\pi^{(l)} p\left(\mathbf{x}_i \mid \theta^{(l)}_{t-1}\right)}{\sum_{l'=1}^{L} \pi^{(l')} p\left(\mathbf{x}_i \mid \theta^{(l')}_{t-1}\right)}. \quad (15)$$
In the maximization step, the parameters that maximize the complete data log-likelihood plus the log-prior are found analytically, as will be shown below. The prior $p(\boldsymbol{\pi} \mid \boldsymbol{\alpha})$ is a Dirichlet distribution $\boldsymbol{\pi} \sim \mathrm{Dir}(\boldsymbol{\alpha})$, with $\alpha^{(l)} \ge 1$. The mixture weights are updated using the posterior probabilities [70],
$$\pi^{(l)} = \frac{\alpha^{(l)} - 1 + \sum_{i=1}^{N} \gamma^{(l)}_i}{N - L + \sum_{l=1}^{L} \alpha^{(l)}}; \quad (16)$$
when $\alpha^{(l)} = 1$, the prior is noninformative (the log-prior is constant in $\boldsymbol{\pi}$), and thus the Maximum A Posteriori (MAP) estimation is equivalent to the Maximum Likelihood (ML) estimate, $\pi^{(l)} = [\sum_{i=1}^{N} \gamma^{(l)}_i]/N$. The E and M steps are repeated until the complete data log-likelihood has converged to a local maximum. In the case of a quadratic loss function this problem has an analytical solution, for instance in a GMM [70]. When the loss function has no analytical solution, it can be solved by implementing a numerical optimization method based on gradient descent.
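A minimal sketch of the generic EM loop with the Dirichlet-MAP weight update of Eq. (16), assuming a shared prior parameter alpha for all clusters; the per-cluster log-likelihood and M-step are pluggable and the initialization scheme is illustrative:

```python
import numpy as np

def em_map(x, L, log_lik, m_step, alpha=1.0, n_iter=100, seed=0):
    """Generic EM loop with the MAP update of the mixture weights.

    log_lik(x, theta_l) -> (N,) per-sample log-likelihood of cluster l.
    m_step(x, gamma_l) -> updated parameters of cluster l
    (closed form for a GMM, gradient ascent for GaMM/VMMM/BeMM)."""
    rng = np.random.default_rng(seed)
    N = len(x)
    pi = np.full(L, 1.0 / L)
    theta = [m_step(x, rng.dirichlet(np.ones(N))) for _ in range(L)]
    for _ in range(n_iter):
        # E-step, Eq. (15): responsibilities from the current parameters.
        log_r = np.stack([np.log(pi[l]) + log_lik(x, theta[l])
                          for l in range(L)], axis=1)
        log_r -= log_r.max(axis=1, keepdims=True)       # numerical stability
        gamma = np.exp(log_r)
        gamma /= gamma.sum(axis=1, keepdims=True)
        # M-step, Eq. (16): MAP update of the mixture weights.
        pi = (alpha - 1.0 + gamma.sum(axis=0)) / (N - L + L * alpha)
        theta = [m_step(x, gamma[:, l]) for l in range(L)]
    return pi, theta, gamma
```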
Gamma Mixture Model
The distribution of the magnitude of the velocity vectors or of the heights can be approximated by a mixture of Gamma distributions $X \sim \mathcal{G}(\alpha^{(l)}, \beta^{(l)})$, whose density function is
$$f\left(x_i; \alpha^{(l)}, \beta^{(l)}\right) = \frac{x_i^{\alpha^{(l)}-1} e^{-x_i/\beta^{(l)}}}{\beta^{(l)\alpha^{(l)}} \Gamma\left(\alpha^{(l)}\right)}, \quad x_i > 0, \ \alpha^{(l)}, \beta^{(l)} > 0, \quad (17)$$
where $\Gamma(\alpha^{(l)})$ is the Gamma function.
The log-likelihood of the Gamma density function, needed to compute the expected complete data log-likelihood in a GaMM, is
$$\log p\left(x_i \mid \alpha^{(l)}, \beta^{(l)}\right) = \left(\alpha^{(l)} - 1\right) \log x_i - \frac{x_i}{\beta^{(l)}} - \alpha^{(l)} \log \beta^{(l)} - \log \Gamma\left(\alpha^{(l)}\right). \quad (18)$$
The maximization step has to be solved via numerical optimization. The gradient w.r.t. $\alpha^{(l)}$ is
$$\frac{\partial Q(\theta^{(l)})}{\partial \alpha^{(l)}} = \sum_{l=1}^{L} \sum_{i=1}^{N} \gamma^{(l)}_i \frac{\partial}{\partial \alpha^{(l)}} \log p\left(x_i \mid \alpha^{(l)}, \beta^{(l)}\right) = \sum_{l=1}^{L} \sum_{i=1}^{N} \gamma^{(l)}_i \left[ \log x_i - \log \beta^{(l)} - \frac{\Gamma'(\alpha^{(l)})}{\Gamma(\alpha^{(l)})} \right], \quad (19)$$
where $\Gamma'(\alpha^{(l)})$ is the derivative of the Gamma function. The gradient w.r.t. $\beta^{(l)}$ is
$$\frac{\partial Q(\theta^{(l)})}{\partial \beta^{(l)}} = \sum_{l=1}^{L} \sum_{i=1}^{N} \gamma^{(l)}_i \frac{\partial}{\partial \beta^{(l)}} \log p\left(x_i \mid \alpha^{(l)}, \beta^{(l)}\right) = \sum_{l=1}^{L} \sum_{i=1}^{N} \gamma^{(l)}_i \frac{1}{\beta^{(l)}} \left[ \frac{x_i}{\beta^{(l)}} - \alpha^{(l)} \right]. \quad (20)$$
The generalizations of the Gamma distribution to multiple dimensions do not have a unified expression. In fact, no multivariate Gamma distribution is known within the exponential family [79].
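A minimal sketch of the GaMM gradients of Eqs. (19)-(20) for one cluster, using the identity $\Gamma'(\alpha)/\Gamma(\alpha) = \psi(\alpha)$ with scipy.special.digamma (the function name is illustrative):

```python
import numpy as np
from scipy.special import digamma

def gamma_mm_gradients(x, gamma_l, alpha_l, beta_l):
    """Gradients of Q w.r.t. one cluster's Gamma parameters,
    Eqs. (19)-(20), with Gamma'(a)/Gamma(a) = digamma(a).

    x : (N,) positive observations (e.g. velocity magnitudes).
    gamma_l : (N,) responsibilities of cluster l."""
    d_alpha = np.sum(gamma_l * (np.log(x) - np.log(beta_l)
                                - digamma(alpha_l)))
    d_beta = np.sum(gamma_l * (x / beta_l - alpha_l) / beta_l)
    return d_alpha, d_beta

# A gradient-ascent M-step would take small steps along these gradients,
# e.g. alpha_l += lr * d_alpha, keeping the parameters positive.
```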
Bivariate Gamma Mixture Model
The distribution of the magnitude of the velocity vectors and the heights can be approximated by a mixture of bivariate Gamma distributions $X, Y \sim \mathcal{BG}(\alpha^{(l)}, \beta^{(l)}, a^{(l)})$, whose density function is [79]
$$f\left(x_i, y_i; \alpha^{(l)}, \beta^{(l)}, a^{(l)}\right) = \frac{\beta^{(l)\alpha^{(l)}} x_i^{\alpha^{(l)}+a^{(l)}-1} y_i^{a^{(l)}-1}}{\Gamma\left(a^{(l)}\right) \Gamma\left(\alpha^{(l)}\right)} e^{-\beta^{(l)} x_i} e^{-x_i y_i}, \quad x_i, y_i > 0, \quad (21)$$
where $\Gamma(\alpha^{(l)})$ is the Gamma function, and the parameters satisfy $\alpha^{(l)}, \beta^{(l)}, a^{(l)} > 0$.
The log-likelihood of the bivariate Gamma density function, needed for computing the expected complete data log-likelihood in a Bivariate Gamma Mixture Model (BGaMM), is
$$\log p\left(x_i, y_i \mid \alpha^{(l)}, \beta^{(l)}, a^{(l)}\right) = \alpha^{(l)} \log \beta^{(l)} + \left(\alpha^{(l)} + a^{(l)} - 1\right) \log x_i + \left(a^{(l)} - 1\right) \log y_i - \beta^{(l)} x_i - x_i y_i - \log \Gamma\left(\alpha^{(l)}\right) - \log \Gamma\left(a^{(l)}\right). \quad (22)$$
As the maximization of Eq. (13) has no analytical solution when the likelihood is a bivariate Gamma, the maximization step is solved by numerical optimization. The gradient w.r.t. $\alpha^{(l)}$ is
$$\frac{\partial Q(\theta^{(l)})}{\partial \alpha^{(l)}} = \sum_{l=1}^{L} \sum_{i=1}^{N} \gamma^{(l)}_i \frac{\partial}{\partial \alpha^{(l)}} \log p\left(x_i, y_i \mid \alpha^{(l)}, \beta^{(l)}, a^{(l)}\right) = \sum_{l=1}^{L} \sum_{i=1}^{N} \gamma^{(l)}_i \left[ \log \beta^{(l)} + \log x_i - \frac{\Gamma'(\alpha^{(l)})}{\Gamma(\alpha^{(l)})} \right]. \quad (23)$$
The gradient w.r.t. $\beta^{(l)}$ is
$$\frac{\partial Q(\theta^{(l)})}{\partial \beta^{(l)}} = \sum_{l=1}^{L} \sum_{i=1}^{N} \gamma^{(l)}_i \frac{\partial}{\partial \beta^{(l)}} \log p\left(x_i, y_i \mid \alpha^{(l)}, \beta^{(l)}, a^{(l)}\right) = \sum_{l=1}^{L} \sum_{i=1}^{N} \gamma^{(l)}_i \left[ \frac{\alpha^{(l)}}{\beta^{(l)}} - x_i \right]. \quad (24)$$
The gradient w.r.t. $a^{(l)}$ is
$$\frac{\partial Q(\theta^{(l)})}{\partial a^{(l)}} = \sum_{l=1}^{L} \sum_{i=1}^{N} \gamma^{(l)}_i \frac{\partial}{\partial a^{(l)}} \log p\left(x_i, y_i \mid \alpha^{(l)}, \beta^{(l)}, a^{(l)}\right) = \sum_{l=1}^{L} \sum_{i=1}^{N} \gamma^{(l)}_i \left[ \log x_i + \log y_i - \frac{\Gamma'(a^{(l)})}{\Gamma(a^{(l)})} \right]. \quad (25)$$
Applying an independence assumption to each component of the Gamma model, the general form of the joint density of a multivariate Gamma distribution can be derived, but it must be assumed that the marginal density functions of each one of the random variables are available [79].
Von Mises Mixture Model
The angular component of the velocity vectors is approximated by a Von Mises distribution $X \sim \mathcal{VM}(\mu^{(l)}, \kappa^{(l)})$. The density function of this distribution is
$$f\left(x_i; \mu^{(l)}, \kappa^{(l)}\right) = \frac{e^{\kappa^{(l)} \cos(x_i - \mu^{(l)})}}{2\pi I_0(\kappa^{(l)})}, \quad x_i, \mu^{(l)} \in [-\pi, \pi], \ \kappa^{(l)} > 0, \quad (26)$$
where $I_0$ represents the modified Bessel function of order 0, given by
$$I_\nu\left(\kappa^{(l)}\right) = \left( \frac{\kappa^{(l)}}{2} \right)^\nu \sum_{n=0}^{\infty} \frac{\left( \frac{1}{4} \kappa^{(l)2} \right)^n}{n! \, \Gamma(\nu + n + 1)}. \quad (27)$$
In the case of a mixture of Von Mises distributions, the data log-likelihood for each cluster is
$$\log p\left(x_i \mid \mu^{(l)}, \kappa^{(l)}\right) = \kappa^{(l)} \cos\left(x_i - \mu^{(l)}\right) - \log 2\pi - \log I_0\left(\kappa^{(l)}\right). \quad (28)$$
The maximization step is solved by computing the gradient w.r.t. $\mu^{(l)}$,
$$\frac{\partial Q(\theta^{(l)})}{\partial \mu^{(l)}} = \sum_{l=1}^{L} \sum_{i=1}^{N} \gamma^{(l)}_i \frac{\partial}{\partial \mu^{(l)}} \log p\left(x_i \mid \mu^{(l)}, \kappa^{(l)}\right) = \sum_{l=1}^{L} \sum_{i=1}^{N} \gamma^{(l)}_i \kappa^{(l)} \sin\left(x_i - \mu^{(l)}\right), \quad (29)$$
and the gradient w.r.t. $\kappa^{(l)}$,
$$\frac{\partial Q(\theta^{(l)})}{\partial \kappa^{(l)}} = \sum_{l=1}^{L} \sum_{i=1}^{N} \gamma^{(l)}_i \frac{\partial}{\partial \kappa^{(l)}} \log p\left(x_i \mid \mu^{(l)}, \kappa^{(l)}\right) = \sum_{l=1}^{L} \sum_{i=1}^{N} \gamma^{(l)}_i \left[ \cos\left(x_i - \mu^{(l)}\right) - \frac{I_1(\kappa^{(l)})}{I_0(\kappa^{(l)})} \right], \quad (30)$$
where the Bessel function of order 1 is obtained from $\partial I_0(\kappa)/\partial \kappa = I_1(\kappa)$.
An extension to the multivariate Von Mises distribution can be found in [80]; other solutions to the VMMM problem are presented in [81,82].
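A minimal sketch of the VMMM gradients of Eqs. (29)-(30) for one cluster, with the ratio $I_1/I_0$ evaluated via scipy.special.iv (the function name is illustrative):

```python
import numpy as np
from scipy.special import iv  # modified Bessel function of the first kind

def von_mises_mm_gradients(x, gamma_l, mu_l, kappa_l):
    """Gradients of Q w.r.t. one cluster's Von Mises parameters,
    Eqs. (29)-(30).

    x : (N,) angles in [-pi, pi] (e.g. velocity vector angles).
    gamma_l : (N,) responsibilities of cluster l."""
    d_mu = np.sum(gamma_l * kappa_l * np.sin(x - mu_l))
    d_kappa = np.sum(gamma_l * (np.cos(x - mu_l)
                                - iv(1, kappa_l) / iv(0, kappa_l)))
    return d_mu, d_kappa
```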
Beta Mixture Model
The distribution of the normalized temperatures or heights can be approximated by a mixture of beta distributions $X \sim \mathcal{B}(\alpha^{(l)}, \beta^{(l)})$, whose density function is
$$f\left(x_i; \alpha^{(l)}, \beta^{(l)}\right) = \frac{1}{B\left(\alpha^{(l)}, \beta^{(l)}\right)} x_i^{\alpha^{(l)}-1} (1 - x_i)^{\beta^{(l)}-1}, \quad \alpha^{(l)}, \beta^{(l)} > 0, \quad (31)$$
where $x_i \in (0, 1)$, the beta function is $B(\alpha^{(l)}, \beta^{(l)}) = [\Gamma(\alpha^{(l)})\Gamma(\beta^{(l)})]/\Gamma(\alpha^{(l)} + \beta^{(l)})$, and $\Gamma(\alpha^{(l)})$ is the Gamma function.
The log-likelihood of the beta density function, needed to compute the expected complete data log-likelihood in the mixture model, is
$$\log p\left(x_i \mid \alpha^{(l)}, \beta^{(l)}\right) = \left(\alpha^{(l)} - 1\right) \log x_i + \left(\beta^{(l)} - 1\right) \log(1 - x_i) - \log B\left(\alpha^{(l)}, \beta^{(l)}\right). \quad (32)$$
The maximization step has to be solved by gradient descent. The gradient w.r.t. $\alpha^{(l)}$ is
$$\frac{\partial Q(\theta^{(l)})}{\partial \alpha^{(l)}} = \sum_{l=1}^{L} \sum_{i=1}^{N} \gamma^{(l)}_i \frac{\partial}{\partial \alpha^{(l)}} \log p\left(x_i \mid \alpha^{(l)}, \beta^{(l)}\right) = \sum_{i=1}^{N} \sum_{l=1}^{L} \gamma^{(l)}_i \left[ \log x_i - \psi\left(\alpha^{(l)}\right) + \psi\left(\alpha^{(l)} + \beta^{(l)}\right) \right], \quad (33)$$
where $\partial B(\alpha^{(l)}, \beta^{(l)})/\partial \alpha^{(l)} = B(\alpha^{(l)}, \beta^{(l)})[\psi(\alpha^{(l)}) - \psi(\alpha^{(l)} + \beta^{(l)})]$, and $\psi(\cdot)$ is the digamma function, $\psi(\alpha^{(l)}) = \Gamma'(\alpha^{(l)})/\Gamma(\alpha^{(l)})$. The gradient w.r.t. $\beta^{(l)}$ is
$$\frac{\partial Q(\theta^{(l)})}{\partial \beta^{(l)}} = \sum_{l=1}^{L} \sum_{i=1}^{N} \gamma^{(l)}_i \frac{\partial}{\partial \beta^{(l)}} \log p\left(x_i \mid \alpha^{(l)}, \beta^{(l)}\right) = \sum_{l=1}^{L} \sum_{i=1}^{N} \gamma^{(l)}_i \left[ \log(1 - x_i) - \psi\left(\beta^{(l)}\right) + \psi\left(\alpha^{(l)} + \beta^{(l)}\right) \right]. \quad (34)$$
In previous work carried out on the implementation of BeMM clustering, it was found that it is not optimal to assign the number of clusters in these models by applying the Bayesian Information Criterion (BIC) [83] (see Subsection 2.3). The authors proposed to implement the Integrated Classification Likelihood (ICL) instead.
Gaussian Mixture Model
The distribution of the velocity components in a Cartesian coordinate system can be approximated by a mixture of multivariate normal distributions $\mathbf{x} \sim \mathcal{N}(\boldsymbol{\mu}^{(l)}, \boldsymbol{\Sigma}^{(l)})$. The multivariate normal likelihood is
$$f\left(\mathbf{x}; \boldsymbol{\mu}^{(l)}, \boldsymbol{\Sigma}^{(l)}\right) = \frac{1}{\sqrt{(2\pi)^d \left|\boldsymbol{\Sigma}^{(l)}\right|}} \exp\left[ -\frac{1}{2} \left(\mathbf{x} - \boldsymbol{\mu}^{(l)}\right)^\top \boldsymbol{\Sigma}^{(l)-1} \left(\mathbf{x} - \boldsymbol{\mu}^{(l)}\right) \right]. \quad (35)$$
The log-likelihood of the multivariate density function [70], for computing the expected complete data log-likelihood in the GMM, is
$$\log p\left(\mathbf{x}_i \mid \boldsymbol{\mu}^{(l)}, \boldsymbol{\Sigma}^{(l)}\right) = -\frac{d}{2} \log 2\pi - \frac{1}{2} \log \left|\boldsymbol{\Sigma}^{(l)}\right| - \frac{1}{2} \left(\mathbf{x}_i - \boldsymbol{\mu}^{(l)}\right)^\top \boldsymbol{\Sigma}^{(l)-1} \left(\mathbf{x}_i - \boldsymbol{\mu}^{(l)}\right). \quad (36)$$
In the maximization stage, the mean and covariance of each cluster that maximize the log-likelihood have the analytical solution
$$\boldsymbol{\mu}^{(l)} = \frac{\sum_{i=1}^{N} \gamma^{(l)}_i \mathbf{x}_i}{\sum_{i=1}^{N} \gamma^{(l)}_i}, \quad \boldsymbol{\Sigma}^{(l)} = \frac{\sum_{i=1}^{N} \gamma^{(l)}_i \mathbf{x}_i \mathbf{x}_i^\top}{\sum_{i=1}^{N} \gamma^{(l)}_i} - \boldsymbol{\mu}^{(l)} \boldsymbol{\mu}^{(l)\top}. \quad (37)$$
The temperatures or heights can be approximated with a univariate normal distribution. The extension of the GMM is the same for the case of one variable or multiple variables. The theory behind mixture models, as well as the EM algorithm, is fully developed in [70].
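A minimal sketch of the closed-form GMM M-step of Eq. (37) for one cluster, assuming NumPy arrays (the function name is illustrative):

```python
import numpy as np

def gmm_m_step(X, gamma_l):
    """Closed-form M-step of one GMM cluster, Eq. (37).

    X : (N, d) feature vectors (e.g. Cartesian velocity components).
    gamma_l : (N,) responsibilities of cluster l."""
    Nk = gamma_l.sum()
    mu = (gamma_l[:, None] * X).sum(axis=0) / Nk
    # Weighted second moment E[x x^T] minus mu mu^T.
    second_moment = (gamma_l[:, None, None]
                     * np.einsum("ni,nj->nij", X, X)).sum(axis=0) / Nk
    Sigma = second_moment - np.outer(mu, mu)
    return mu, Sigma
```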
Bayesian Metrics
BIC is a metric to choose between models that penalizes models with a higher number of parameters and more samples [84]. The BIC of a mixture model is
$$\mathrm{BIC}(\theta)_L = \lambda \log N - 2 \log Q(\theta) = \lambda \log N - 2 \sum_{l=1}^{L} \sum_{i=1}^{N} \mathbb{I}(z_i = l) \left[ \log \pi^{(l)} + \log p\left(\mathbf{x}_i \mid \theta^{(l)}\right) \right], \quad (38)$$
where $\lambda$ is the number of parameters in the model, and $N$ is the number of samples. As a pixel is assumed to be in one wind layer or another, $\mathbb{I}(z_i = l) \in \{0, 1\}$.
The BIC is closely related to the Akaike Information Criterion (AIC) [85],
$$\mathrm{AIC}(\theta)_L = 2\lambda - 2 \log Q(\theta). \quad (39)$$
Other metrics, such as the Classification Likelihood Criterion (CLC),
$$\mathrm{CLC}(\theta)_L = 2H(\theta) - 2 \log Q(\theta), \quad (40)$$
use the entropy function $H(\cdot)$ in the context of information theory. CLC is similar to the AIC [86], but applies the entropy as a penalizing factor instead of the number of parameters. The entropy of a mixture model is
$$H(\theta)_L = \sum_{l=1}^{L} \sum_{i=1}^{N} \gamma^{(l)}_i \log \gamma^{(l)}_i. \quad (41)$$
The ICL, which is
$$\mathrm{ICL}(\theta)_L = \mathrm{BIC}(\theta)_L + 2H(\theta), \quad (42)$$
is based on both the BIC and the entropy. The number of clusters $L$ and the likelihood function are different in each model $\mathcal{M}_L$; thus each model is expected to have a different $\mathrm{BIC}(\theta)_L$, $\mathrm{AIC}(\theta)_L$, $\mathrm{CLC}(\theta)_L$, and $\mathrm{ICL}(\theta)_L$. For all these metrics, the optimal number of clusters is the one for which the value of the metric is lowest.
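A minimal sketch of Eqs. (38)-(42) for one fitted mixture model, following the document's sign convention for the entropy of Eq. (41) (the function and argument names are illustrative):

```python
import numpy as np

def bayesian_metrics(log_q, gamma, n_params):
    """Model-selection metrics for a mixture model with L clusters.

    log_q : maximized complete data log-likelihood, log Q(theta).
    gamma : (N, L) responsibilities.
    n_params : lambda, the number of free parameters of the model."""
    N = gamma.shape[0]
    bic = n_params * np.log(N) - 2.0 * log_q          # Eq. (38)
    aic = 2.0 * n_params - 2.0 * log_q                # Eq. (39)
    # Entropy of the responsibilities, Eq. (41); 0 log 0 is taken as 0.
    H = np.sum(gamma * np.log(np.where(gamma > 0, gamma, 1.0)))
    clc = 2.0 * H - 2.0 * log_q                       # Eq. (40)
    icl = bic + 2.0 * H                               # Eq. (42)
    return {"BIC": bic, "AIC": aic, "CLC": clc, "ICL": icl}
```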
Hidden Markov Model
A HMM is a state space model whose latent variables (i.e. system states) are discrete. In the problem of detecting the number of wind velocity fields in an image, a HMM is implemented to infer the cluster number $L$ of a mixture model. We assume that the current state of the system $L_t$ (i.e. the cluster number) is a Markov process conditional on the previously observed states, $p(L_t \mid L_1, \ldots, L_{t-1})$ [69]. For simplification, we propose to model the process as a first-order Markov chain, whose current state is only conditional on the immediately previous state, $p(L_t \mid L_1, \ldots, L_{t-1}) = p(L_t \mid L_{t-1})$. Henceforth, $L_t$ is defined as the HMM hidden variable, which represents the number of detected wind velocity fields $L_t \in \{1, 2\}$ in image $t$, and $\mathbf{x}_{i,t}$ is an observation (i.e. feature vector).
The parameters $\theta^{(l)}_t = \{\pi^{(l)}_t, \boldsymbol{\mu}^{(l)}_t, \boldsymbol{\Sigma}^{(l)}_t\}$ of each distribution $l$ in the mixture model, and the hidden state of the system $L_t$ in image $t$, are the MAP estimates obtained by applying Bayes' theorem,
$$p(z_{i,t} \mid \mathbf{x}_{i,t}, \Theta_t) = \frac{p(\mathbf{x}_{i,t}, z_{i,t} \mid \Theta_t)}{p(\mathbf{x}_{i,t})} \propto p\left(\mathbf{x}_{i,t}, z^{(l)}_{i,t} \mid \theta^{(l)}_t\right) p(L_t \mid L_{t-1}) \propto p\left(\mathbf{x}_{i,t}, z^{(l)}_{i,t} \mid \boldsymbol{\mu}^{(l)}_t, \boldsymbol{\Sigma}^{(l)}_t\right) p\left(\pi^{(l)}_t \mid \alpha^{(l)}\right) p(L_t \mid L_{t-1}). \quad (43)$$
The set of all the parameters of the mixture model in state $L_t$ is $\Theta_t = \{\pi^{(1)}_t, \boldsymbol{\mu}^{(1)}_t, \boldsymbol{\Sigma}^{(1)}_t, \ldots, \pi^{(L_t)}_t, \boldsymbol{\mu}^{(L_t)}_t, \boldsymbol{\Sigma}^{(L_t)}_t\}$, and the parameters of the prior distribution of the cluster weights $\pi^{(l)}$ are $\alpha^{(l)}$.
The joint distribution $p(\mathbf{x}_{i,t}, z^{(l)}_{i,t} \mid \theta^{(l)}_t) = p(\mathbf{x}_{i,t} \mid z^{(l)}_{i,t}, \theta^{(l)}_t) p(z^{(l)}_{i,t})$ is factorized to independently infer a mixture model for each feature,
$$p\left(x^{(1)}_{i,t}, x^{(2)}_{i,t} \mid z^{(l)}_{i,t}, \theta^{(l)}_t\right) = p\left(x^{(1)}_{i,t} \mid z^{(1,l)}_{i,t}, \theta^{(l)}_{1,t}\right) p\left(x^{(2)}_{i,t} \mid x^{(1)}_{i,t}, z^{(1,l)}_{i,t}, z^{(2,l)}_{i,t}, \theta^{(l)}_{2,t}\right), \quad (44)$$
where $\theta^{(l)}_{1,t}$ and $\theta^{(l)}_{2,t}$ are the parameters of the independent mixture models and $z^{(1,l)}_{i,t}$ and $z^{(2,l)}_{i,t}$ the responsibilities respectively. The posterior distribution of a mixture model is proportional to the joint probability of the observations $\mathbf{x}_{i,t}$ and the responsibilities $z^{(l)}_{i,t}$ of each cluster detected in image $t$. The state of the system $L_t$, which represents the number of clusters $L$ in the mixture models, is modelled using a HMM. Therefore, the mixture model parameters inferred using Eq. (13) are
$$p\left(\mathbf{X}_t, \mathbf{Z}^{(l)}_t \mid \hat{\theta}^{(l)}_t, \hat{\alpha}^{(l)}\right) \triangleq \prod_{i=1}^{N} \prod_{l=1}^{L_t} \left[ \pi^{(l)} p\left(\mathbf{x}_{i,t} \mid \hat{\boldsymbol{\mu}}^{(l)}_t, \hat{\boldsymbol{\Sigma}}^{(l)}_t\right) \right]^{\mathbb{I}(z_{i,t}=l)} p\left(\pi^{(l)} \mid \alpha^{(l)}\right), \quad (45)$$
where $\hat{\theta}^{(l)}_t$ and $\hat{\alpha}^{(l)}$ are the parameters that maximize the CDLL of the mixture model with $L_t$ clusters. The prior on the latent variable $L_t$ is defined as a distribution of the exponential family,
$$p(L_t \mid L_{t-1}) = \frac{1}{Z} \exp\left[ -\psi(L_t, L_{t-1}) \right], \quad (46)$$
where the exponent $\psi(L_t, L_{t-1})$ is a function that depends on the previous state of the system $L_{t-1}$,
$$\psi(L_t, L_{t-1}) \triangleq \begin{cases} -\beta & \text{if } L_t = L_{t-1} \\ +\beta & \text{if } L_t \neq L_{t-1}, \end{cases} \quad (47)$$
and the parameter $\beta$ has to be cross-validated.
Combining Eq. (45) and Eq. (46) in Eq. (43), and taking logarithms and expectations with respect to $z_{i,t}$, the CDLL of an image $t$ being in state $L_t$ is
$$Q(\theta, \theta_{t-1}) = \sum_{i=1}^{N} \sum_{l=1}^{L_t} \gamma^{(l)}_{i,t} \log \pi^{(l)}_t + \sum_{i=1}^{N} \sum_{l=1}^{L_t} \gamma^{(l)}_{i,t} \log p\left(\mathbf{x}_{i,t} \mid \theta^{(l)}_t\right) + \log p\left(\pi^{(l)} \mid \alpha^{(l)}\right) - \psi(L_t, L_{t-1}) + \text{constant}. \quad (48)$$
It should be noted that ψ (L t , L t−1 ) is a constant with respect to z i,t , so maximizing this equation is equivalent to maximizing the original CDLL in Eq. (45).
After completing the inference of the mixture model parameters $\hat{\theta}^{(l)}_t$ and $\hat{\alpha}^{(l)}$ for $L_t = 1$ and $L_t = 2$, the optimal state of the system $\hat{L}_t \in \{1, 2\}$ is the MAP estimate obtained from
$$\hat{L}_t = \operatorname*{argmax}_{L_t \in \{1, 2\}} \sum_{i=1}^{N} \sum_{l=1}^{L_t} \log p\left(z_{i,t} \mid \mathbf{x}_{i,t}, \hat{\theta}^{(l)}_t, \hat{\alpha}^{(l)}, L_t\right) - \psi(L_t, L_{t-1}), \quad (49)$$
where the latent variable L t defines the number of different wind velocity fields detected in an image.
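A minimal sketch of the transition penalty of Eq. (47) and the MAP state selection of Eq. (49), assuming the summed log-posteriors of the two candidate mixture models have already been computed (the function names are illustrative):

```python
def psi(L_t, L_prev, beta):
    """Transition penalty of Eq. (47): staying in the same state is
    rewarded (-beta), switching states is penalized (+beta)."""
    return -beta if L_t == L_prev else beta

def select_state(log_post_by_state, L_prev, beta):
    """MAP state selection of Eq. (49).

    log_post_by_state : dict mapping L_t in {1, 2} to the summed
    log-posterior of the mixture model fitted with L_t clusters."""
    scores = {L_t: lp - psi(L_t, L_prev, beta)
              for L_t, lp in log_post_by_state.items()}
    return max(scores, key=scores.get)
```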
Experiments
Study Area and Data Acquisition System
The climate of Albuquerque, NM is arid semi-continental with little precipitation, which is more likely during the summer months. The average altitude of the city is 1, 620m. Between mid May and mid June, the sky is clear or partly cloudy 80% of the time. Approximately 170 days of the year are sunny, with less than 30% cloud coverage, and 110 are partly sunny, with 40% to 80% cloud coverage. Temperatures range from a minimum of 268.71K in winter to a maximum of 306.48K in summer. Combined rainfall and snowfall are approximately 27.94cm per year.
The proposed detection methods utilize data acquired by a DAQ system equipped with a solar tracker that updates its pan and tilt every second, maintaining the Sun in a central position in the images throughout the day. The IR sensor is a Lepton 1 radiometric camera with wavelength from 8 to 14 µm. The pixels in a frame are temperature measurements in centikelvin. The resolution of the IR images is 80 × 60 pixels. The DAQ is located on the roof area of the UNM-ECE building in Albuquerque, NM. The dataset composed of GSI measurements and IR images is available in a repository [87].
The weather features that were used to compute the cloud height, as well as to remove cyclostationary artifacts from the IR images, are: atmospheric pressure, air temperature, dew point and humidity. The weather station measures every 10 minutes; the data was interpolated to match the IR image samples. The weather station is located at the University of New Mexico Hospital and is publicly accessible2.
Image Preprocessing
The IR images were preprocessed to remove the effects of the direct irradiance from the Sun, the scattered irradiance from the atmosphere and the scattered irradiance from the germanium IR camera window (see Fig. 1). The effect of the direct irradiance from the Sun is constant in the IR images, and is modelled and removed. The scattering effect produced by the atmosphere is cyclostationary, so the optimal model in each frame is different. The parameters of the atmospheric irradiance model depend on the azimuth and elevation angles of the Sun, and on weather features. The scattering effect produced by the germanium IR camera window is modelled and removed using the median IR image of the last in a set of clear-sky images (see Fig. 2). The image processing methods and atmospheric conditions model are fully described in [75].
The proposed algorithm for the detection of clouds in multiple wind velocity fields requires that pixels containing clouds be previously segmented in the images. In this way, only features from cloud-containing pixels are analyzed. The cloud segmentation algorithm implemented in this investigation is a voting scheme that uses three different cloud segmentation models. The segmentation models are a Gaussian process, a support vector machine and an unsupervised Markov Random Field (see Fig. 3). The cloud segmentation models and feature extraction are explained in [54].

Figure 3: Cloud segmentation in IR images. The first image shows an IR image normalized to 8 bits, the second image shows the probabilities computed by the segmentation algorithm of pixels belonging to a cloud, and the third image shows the segmentation after applying a ≥ 0.5 threshold to the probabilities. The normalized images are also used to compute the velocity vectors in the proposed WLK.
WLK Parameters Cross-Validation
A series of images with clouds flowing in different directions was simulated to cross-validate the set of parameters of each one of the mentioned methods [49]. The investigation searched for a dense implementation of a motion vector method to approximate the dynamics of a cloud, and the WLK method was found to be the most suitable for this application. The optimal window size, WLS regularization and temporal kernel amplitude are: W = 16 pixels², τ = 1 × 10⁻⁸ and σ = 1.
Mixture Models Parameters Cross-Validation
The parameters that have to be cross-validated for each mixture model are $\alpha_l$ and $\beta$. The parameter $\alpha_l$ in Eq. (16) is the parameter of the prior distribution of the cluster weight $\pi_l$ in a mixture model. The parameters $\boldsymbol{\alpha}$ in a mixture model of the temperatures are cross-validated as $\boldsymbol{\alpha} \triangleq \alpha_0 \mathbf{1}_{L\times 1}$ for simplification. Equivalently, the parameters $\boldsymbol{\alpha}$ in a mixture model of the velocity vectors are cross-validated as $\boldsymbol{\alpha} \triangleq \alpha_1 \mathbf{1}_{L\times 1}$. The parameter $\beta$ in Eq. (47) is the prior of the number of cloud layers in an image used in the sequential HMM. Both in training and in testing, the state $L_t$ is initialized to the opposite of the number of cloud layers in the IR image sequence (e.g. if $L_t = 2$ in the sequence, $L$ is initialized as $L_0 = 1$).
The parameters' cross-validation was implemented using a High Performance Computer (HPC). Even when implemented on an HPC, the cross-validation is still computationally expensive, and the number of validation samples is prohibitive. The cross-validation used two nodes and was distributed across sixteen cores, which corresponds to the number of possible values of β in the cross-validation, β = {0, ..., 1000}. In each core, all possible combinations of α₀ (the parameter of the priors of the clusters corresponding to temperature) and α₁ (the parameter of the velocity vector cluster priors) were cross-validated, α_l = {1, ..., 1000}. The performance of each combination of parameters for each mixture model was evaluated on the six training sequences. The optimal combination of parameters for each mixture model is the one which achieved the highest accuracy.
The training dataset is formed by six sequences of 21 consecutive images acquired on six different days. The training sequences were captured in different seasons and at different times of the day. IR images were manually labelled as having one cloud layer (L = 1) or two cloud layers (L = 2). The images from three of the six days show a layer of cirrostratus in winter during the morning, altostratus in spring during the afternoon, and stratocumulus in summer during the afternoon. The other three days show two layers: altostratus and stratocumulus in winter at noon, cirrostratus and altocumulus in spring during the afternoon, and cirrocumulus and cumulus in summer during the morning. As the proposed method is an online machine learning algorithm, the training dataset is used only for validating the prior distribution parameters. The optimal parameters of the mixture models are computed for each new sample during the implementation.
Testing Performance
The testing dataset is composed of ten consecutive sequences of 30 images. The sequences were acquired at different hours of the day and in different seasons. The images in the testing dataset were acquired after the training dataset, and were manually labelled in the same way as the training set. The testing dataset includes five sequences of images that have one layer of clouds, and five sequences of images that have two layers of clouds.

Table 2: Detection accuracy achieved when multivariate probability functions are used in the likelihood of the mixture model. The detection accuracy is compared using different Bayesian metrics and a maximum a posteriori implementation of a mixture model with a prior on the weights and cluster numbers.
The clouds in the sequences with one layer are: stratocumulus on a summer morning, cumulus on a summer morning, stratus on a summer afternoon, cumulus on a fall morning, and stratocumulus on a winter morning. The clouds in the sequences with two layers are: cumulus and cirrostratus on a summer morning, cumulus and altostratus on a fall morning, cumulus and cirrus on a fall afternoon, stratocumulus and altostratus on a fall morning, and cumulus and nimbostratus on a winter afternoon.
We assume that the distribution of the velocity vectors is different in each cloud layer that appears in an IR image. In addition, we assume that a correlation exists between the height of a cloud and its velocity vectors. As the height of a cloud is a function of its temperature, we propose to use the temperature of the pixels and the velocity vectors to infer the number of cloud layers in an IR image. The distribution of temperatures is inferred using a BeMM, GaMM and GMM (see Fig. 4). The performances of each distribution are analyzed and compared in tables 1-2. The posterior probabilities of the temperature mixture models in Fig. 4 are the weights used in the WLK.

Table 3: Detection accuracy achieved when the mixture model likelihood is factorized in the product of independent likelihood functions for each feature. The detection accuracy is compared using different Bayesian metrics and adding a prior of the weights and the cloud layers number to the mixture model.

The distribution of the velocity vector components is inferred using a multivariate GMM. The performance of the multivariate GMM is compared to that obtained when the distribution of the velocity vector magnitude and angle is inferred by factorizing the probability of the velocity vectors into two independent probability functions (see table 2). In Fig. 5, the distributions of the velocity angles and magnitudes are inferred using a VMMM and a GaMM respectively. When weights are not applied to the LK method, the distribution of the temperatures, velocity vector angles and magnitudes can be inferred using a multivariate GMM. Similarly, a BGaMM is also proposed to infer the distribution of the temperatures and velocity vector magnitudes. The multivariate GMM and BGaMM likelihoods are displayed in Fig. 6. In this case, the probability of the velocity vector angles is factorized and inferred independently using a VMMM. The performance of these two mixture models is also shown in table 2.
The experiments were carried out on the Wheeler high performance computer of UNM-CARC, which uses SGI AltixXE Xeon X5550 processors at 2.67GHz with 6 GB of RAM per core, has 8 cores per node and 304 nodes in total, and runs at 25 TFLOPS theoretical peak. It has Linux CentOS 7 installed.
Discussion
In the problem at hand, the BIC and AIC criteria do not improve the detection accuracy with respect to the accuracy achieved by the ML criterion. The best detection accuracy achieved by a model that uses the ML criterion is 78.15%. In contrast, the same model achieved a detection accuracy of 76.67% and 77.41% when the criteria were minimum BIC and AIC respectively (see table 3). This mixture model has a factorized likelihood which uses a normal probability function to infer the temperature distribution and a Von Mises distribution to infer the velocity vector angles. However, when the minimum CLC and ICL criteria are applied, the detection accuracy improves with respect to that achieved by the ML criterion. The detection accuracies achieved by the CLC, ICL and ML criteria were 81.11%, 79.63% and 74.07% respectively (see table 1). Therefore, the best detection using a Bayesian metric was performed by a mixture model with a normal likelihood on the temperatures.
The detection accuracy of the proposed algorithm increases when the mixture model includes prior distributions on the mixture weights and the number of clusters. In these mixture models, the decision criterion is MAP. Adding a prior to the mixture weights and the cluster number is equivalent to regularizing the parameters: the prior adds certain known information to the model. In our problem, the prior on the number of clusters produces the following effect: if the previous frame had $L_{t-1}$ clusters, the next frame is more likely to have the same number of clusters as well. Similarly, the prior on the mixture weights ensures that when the likelihood of two cloud layers is inferred, the cluster weights cannot vanish to zero. In table 1, when we look at the model that achieved the best detection accuracy using a Bayesian metric (ICL), the detection accuracy increased from 81.11% to 87.11%. Nevertheless, the best detection accuracy using the MAP criterion reached 97.41% (see table 3). The model that presents the best detection accuracy is a MAP mixture model with a factorized likelihood which uses a beta probability function to infer the temperature distribution and a Von Mises distribution to infer the velocity vector angles. This validates our assumption that different cloud layers are at different heights (i.e. temperatures) and hence the wind shear is also different (i.e. velocity vector angles). The proposed likelihood factorization allows us to find the optimal probability function of each feature independently.
The results show that it is feasible to identify different cloud layers in ground-based IR sky images (see Fig. 7-9). The main advantage of this algorithm is that it provides the capability of independently estimating the motion of different cloud layers in an IR image using the posterior probabilities in Fig. 4. This is useful in predicting when different clouds will occlude the Sun. The features and dynamics can be analyzed independently to increase the performance of a solar forecasting algorithm. Another advantage of the proposed learning algorithm is that it is unsupervised, so the user does not need to provide labels, which makes the training process automatic.

Figure 8: Testing sequence of IR images with two detected cloud layers (first row). The time interval between images is 1 minute. The images in the second row show the pixels that belong to the low and high temperatures, in gray and white respectively. The pixels in black were classified as not belonging to a cloud by the segmentation algorithm. The third and fourth row show the distribution of the temperature and velocity vectors of the best model.
As can be seen in tables 1-3, the Bayesian metrics are not useful in this application. The highest accuracy achieved by a Bayesian metric was 81.11%, with a GMM of the temperatures; that model selection was performed using the minimum ICL criterion. The performance of the BGaMM is lower than that of the rest of the mixture models, so it is not practical for assessing the number of cloud layers in IR images. This is because the BGaMM tends to overfit even when the cluster weights are regularized using a prior distribution. A disadvantage of the proposed unsupervised learning algorithm is that the EM requires several initializations to guarantee that it converges to the best local maximum. This is problematic when the cloud layer detection algorithm is meant for real-time applications. An implementation of the algorithm feasible in real-time applications will require multiple CPUs to run different initializations in parallel.
Conclusions
This investigation proposes an online unsupervised learning algorithm to detect moving clouds in different wind velocity fields. The mixture model of the pixel temperatures is used to know when a cloud is below or on top of another. The posterior probabilities of the mixture model are used to compute the velocity vectors: the algorithm to compute them is a weighted implementation of the Lucas-Kanade optical flow, in which the weights are the posterior probabilities of the mixture model. The velocity vectors are computed in a scenario that assumes one cloud layer and in another scenario that assumes two cloud layers (using the cloud segmentation or the posterior probabilities respectively). The distributions of the velocity vectors and the temperatures are used to determine which one of the analyzed scenarios is the most likely. The proposed algorithm implements the MAP criterion.
The detection of clouds flowing in different wind velocity fields is useful to increase the accuracy of a forecasting algorithm that predicts the global solar irradiance that will reach a photovoltaic power plant. The prediction will aid a smart grid to adjust the generation mix to compensate for the decrease of energy generated by the photovoltaic panels.
In particular, the posterior probabilities of the pixel temperatures may aid the extraction of features using either image processing techniques, gradient-based learning (e.g. deep neural networks) or both. However, the posterior probabilities are only advantageous when there are multiple cloud layers in an IR image. The proposed method models a prior distribution of the cluster weights, and a prior function of each possible scenario. The prior function of the scenarios is a temporal implementation of a hidden Markov model. This investigation shows that the proposed method increases the detection accuracy compared to the accuracy achieved by the most common Bayesian metrics used in practice.
Future work in this area will implement cloud detection algorithms in a ramp-down and intra-hour solar forecasting algorithm. The dynamics of clouds may be analyzed independently to extract features from clouds moving in different wind velocity fields. The improvement in the performance can be assessed to determine how to combine the features extracted from different clouds to model their respective influence on the GSI that will reach the surface of a photovoltaic system. Another investigation may focus on the implementation of the proposed algorithm in images acquired using ground-based all-sky imagers that are sensitive to the visible light spectrum instead of the infrared. The most interesting aspect will be to fuse information acquired using visible and infrared light cameras.
Figure 1: Irradiance effect models applied in the image preprocessing. From left to right: direct irradiance effect of the Sun, scattering effect produced by the atmosphere, combination of the irradiance effects of the Sun and the atmosphere, and scattering effect produced by the germanium camera window.
Figure 2: Preprocessing of the IR images. The images in the first row show the 3-dimensional rendering of the images in the second row. From left to right: raw IR image, IR image after removing the effects of the atmospheric irradiance, and IR image after removing the effects of the atmospheric irradiance and the scattering effect produced by the camera window. The models applied are shown in Fig. 1.
Figure 4: The first row shows the distribution of the temperatures inferred using a GMM, GaMM and BeMM. The mixture model likelihood is evaluated with the optimal parameters. The mixture model posterior probabilities for the top cloud layer and bottom cloud layer are shown in the middle and bottom rows respectively.
Figure 5: Distribution of the velocity vectors' angle and magnitude. The mixture model likelihood is evaluated with the optimal parameters. The distribution of the velocity vector angle was inferred using a VMMM (left). The distribution of the velocity vector magnitude was inferred using a GaMM (right).
Figure 6: The distributions of the temperatures and the velocity vectors. The graphs show the density of the mixture model likelihood evaluated with the optimal parameters. The distribution of the velocity vectors was inferred using a multivariate GMM (top left graph). The distribution of the temperatures and the velocity vector magnitude was inferred using a BGaMM (top right graph). The distribution of the temperatures and the velocity vectors was inferred using a multivariate GMM (bottom graph).
Figure 7: Testing sequence of consecutive IR images acquired at 1 minute intervals. In the first row, the IR images show a cloud flowing in a single detected wind velocity field. The images in the second row show the cloud segmentation. The graphs in the third and fourth row show the selected model distribution of the temperatures and the velocity vector angles respectively.
Figure 9: Testing image with two detected cloud layers. The time interval between the IR images is 1 minute. The consecutive IR images are shown in the first row. The pixel labels of the top (gray) or the bottom cloud layer (white) are shown in the second row. The third and fourth row show the factorized likelihood that a VMMM and a GaMM use for the velocity vector angles and magnitude respectively.
1 https://www.flir.com/
2 https://www.wunderground.com/dashboard/pws/KNMALBUQ473
Acknowledgments
This work has been supported by NSF EPSCoR grant number OIA-1757207 and the King Felipe VI endowed Chair. The authors would like to thank the UNM Center for Advanced Research Computing, supported in part by the National Science Foundation, for providing the high performance computing and large-scale storage resources used in this work.
Ning Zhao and Fengqi You. Can renewable generation, energy storage and energy efficient technologies enable carbon neutral energy transition? Applied Energy, 279:115889, 2020.
Ehsanul Kabir, Pawan Kumar, Sandeep Kumar, Adedeji A. Adelodun, and Ki-Hyun Kim. Solar energy: Potential and future prospects. Renewable and Sustainable Energy Reviews, 82:894-900, 2018.
Cheng Feng, Yi Wang, Qixin Chen, Yi Ding, Goran Strbac, and Chongqing Kang. Smart grid encounters edge computing: opportunities and applications. Advances in Applied Energy, 1:100006, 2021.
Kenji Otani, Jyunya Minowa, and Kosuke Kurokawa. Study on areal solar irradiance for analyzing areally-totalized pv systems. Solar Energy Materials and Solar Cells, 47(1):281-288, 1997.
Yuchi Sun, Gergely Szűcs, and Adam R. Brandt. Solar pv output prediction from video streams using convolutional neural networks. Energy & Environmental Science, 11(7):1811-1818, 2018.
Kari Lappalainen, Anssi Mäki, and Seppo Valkealahti. Effects of the sharpness of shadows on the mismatch losses of pv generators under partial shading conditions caused by moving clouds. In Proceedings of 28th European Photovoltaic Solar Energy Conference, pages 4081-4086, 2013.
Kari Lappalainen and Seppo Valkealahti. Output power variation of different pv array configurations during irradiance transitions caused by moving clouds. Applied Energy, 190:902-910, 2017.
Luis Martín, Luis F. Zarzalejo, Jesús Polo, Ana Navarro, Ruth Marchante, and Marco Cony. Prediction of global solar irradiance based on time series analysis: Application to solar thermal power plants energy production planning. Solar Energy, 84(10):1772-1781, 2010.
R. Ahmed, V. Sreeram, Y. Mishra, and M.D. Arif. A review and evaluation of the state-of-the-art in pv solar power forecasting: Techniques and optimization. Renewable and Sustainable Energy Reviews, 124:109792, 2020.
Hao Quan and Dazhi Yang. Probabilistic solar irradiance transposition models. Renewable and Sustainable Energy Reviews, 125:109814, 2020.
Gilberto Figueiredo, Marcelo Pinho Almeida, Alex R.A. Manito, and Roberto Zilles. Assessment of an early degraded pv generator. Solar Energy, 189:385-388, 2019.
Ken ichi Shimose, Hideaki Ohtake, Joao Gari da Silva Fonseca, Takumi Takashima, Takashi Oozeki, and Yoshinori Yamada. Improvement of the japan meteorological agency meso-scale model for the forecasting the photovoltaic power production: Modification of the cloud scheme. Energy Procedia, 57:1346-1353, 2014. 2013 ISES Solar World Congress.
Björn Wolff, Jan Kühnert, Elke Lorenz, Oliver Kramer, and Detlev Heinemann. Comparing support vector regression for pv power forecasting to a physical modeling approach using measurement, numerical weather prediction, and cloud motion data. Solar Energy, 135:197-208, 2016.
Huaizhi Wang, Haiyan Yi, Jianchun Peng, Guibin Wang, Yitao Liu, Hui Jiang, and Wenxin Liu. Deterministic and probabilistic forecasting of photovoltaic power based on deep convolutional neural network. Energy Conversion and Management, 153:409-422, 2017.
Abinet Tesfaye Eseye, Jianhua Zhang, and Dehua Zheng. Short-term photovoltaic solar power forecasting using a hybrid wavelet-pso-svm model based on scada and meteorological information. Renewable Energy, 118:357-367, 2018.
Fei Wang, Zhiming Xuan, Zhao Zhen, Yu Li, Kangping Li, Liqiang Zhao, Miadreza Shafie-khah, and João P.S. Catalão. A minutely solar irradiance forecasting method based on real-time sky image-irradiance mapping model. Energy Conversion and Management, 220:113075, 2020.
Steven D. Miller, Matthew A. Rogers, John M. Haynes, Manajit Sengupta, and Andrew K. Heidinger. Short-term solar irradiance forecasting via satellite/model coupling. Solar Energy, 168:102-117, 2018. Advances in Solar Resource Assessment and Forecasting.
Eugenia Kalnay. Atmospheric Modeling, Data Assimilation and Predictability. Cambridge University Press, 2003.
D. Lamb and J. Verlinde. Physics and Chemistry of Clouds. Cambridge University Press, 2011.
Tore Wizelius. Developing wind power projects: theory and practice. Earthscan, 2007.
T. Wizelius. 2.13 - design and implementation of a wind power project. In Ali Sayigh, editor, Comprehensive Renewable Energy, pages 391-430. Elsevier, Oxford, 2012.
Robert J. Charlson. 7 - the atmosphere. In Michael C. Jacobson, Robert J. Charlson, Henning Rodhe, and Gordon H. Orians, editors, Earth System Science, volume 72 of International Geophysics, pages 132-158. Academic Press, 2000.
Olivier Bousquet, Pierre Tabary, and Jacques Parent du Châtelet. On the value of operationally synthesized multiple-doppler wind fields. Geophysical Research Letters, 34(22), 2007.
Olivier Bousquet, Pierre Tabary, and Jacques Parent du Châtelet. Operational multiple-doppler wind retrieval inferred from long-range radial velocity measurements. Journal of Applied Meteorology and Climatology, 47(11):2929-2945, 2008.
Guillermo Terrén-Serrano and Manel Martínez-Ramón. Multi-layer wind velocity field visualization in infrared images of clouds for solar irradiance forecasting. Applied Energy, 288:116656, 2021.
Patrick Mathiesen and Jan Kleissl. Evaluation of numerical weather prediction for intra-day solar forecasting in the continental united states. Solar Energy, 85(5):967-977, 2011.
Richard Perez, Elke Lorenz, Sophie Pelland, Mark Beauharnois, Glenn Van Knowe, Karl Hemker, Detlev Heinemann, Jan Remund, Stefan C. Müller, Wolfgang Traunmüller, Gerald Steinmauer, David Pozo, Jose A. Ruiz-Arias, Vicente Lara-Fanego, Lourdes Ramirez-Santigosa, Martin Gaston-Romero, and Luis M. Pomares. Comparison of numerical weather prediction solar irradiance forecasts in the us, canada and europe. Solar Energy, 94:305-326, 2013.
Patrick Mathiesen, Craig Collier, and Jan Kleissl. A high-resolution, cloud-assimilating numerical weather prediction model for solar irradiance forecasting. Solar Energy, 92:47-61, 2013.
Remco A. Verzijlbergh, Petra W. Heijnen, Stephan R. de Roode, Alexander Los, and Harm J.J. Jonker. Improved model output statistics of numerical weather prediction based irradiance forecasts for solar power applications. Solar Energy, 118:634-645, 2015.
L. Mazorra Aguiar, B. Pereira, P. Lauret, F. Díaz, and M. David. Combining solar irradiance measurements, satellite-derived data and a numerical weather prediction model to improve intra-day solar forecasting. Renewable Energy, 97:599-610, 2016.
Akinobu Murata, Hideaki Ohtake, and Takashi Oozeki. Modeling of uncertainty of solar irradiance forecasts on numerical weather predictions with the estimation of multiple confidence intervals. Renewable Energy, 117:193-201, 2018.
O. García-Hinde, G. Terrén-Serrano, M.Á. Hombrados-Herrera, V. Gómez-Verdejo, S. Jiménez-Fernández, C. Casanova-Mateo, J. Sanz-Justo, M. Martínez-Ramón, and S. Salcedo-Sanz. Evaluation of dimensionality reduction methods applied to numerical weather models for solar radiation forecasting. Engineering Applications of Artificial Intelligence, 69:157-167, 2018.
Franco Marchesoni-Acland and Rodrigo Alonso-Suárez. Intra-day solar irradiation forecast using rls filters and satellite images. Renewable Energy, 161:1140-1154, 2020.
R. Alonso-Suárez, M. David, V. Branco, and P. Lauret. Intra-day solar probabilistic forecasts including local short-term variability and satellite information. Renewable Energy, 158:554-573, 2020.
H. S. Jang, K. Y. Bae, H. S. Park, and D. K. Sung. Solar power prediction based on satellite images and support vector machine. IEEE Transactions on Sustainable Energy, pages 1255-1263, 2016.
Lakshmi Mallika I, D. Venkata Ratnam, Saravana Raman, and G. Sivavaraprasad. Machine learning algorithm to forecast ionospheric time delays using global navigation satellite system observations. Acta Astronautica, 173:221-231, 2020.
M. Cervantes, H. Krishnaswami, W. Richardson, and R. Vega. Utilization of low cost, sky-imaging technology for irradiance forecasting of distributed solar generation. In 2016 IEEE Green Technologies Conference (GreenTech), pages 142-146. IEEE, 2016.
Walter Richardson, Hariharan Krishnaswami, Rolando Vega, and Michael Cervantes. A low cost, edge computing, all-sky imager for cloud tracking and intra-hour irradiance forecasting. Sustainability, 9(4):482, 2017.
Weicong Kong, Youwei Jia, Zhao Yang Dong, Ke Meng, and Songjian Chai. Hybrid approaches based on deep whole-sky-image learning to photovoltaic generation forecasting. Applied Energy, 280:115875, 2020.
Yinghao Chu, Mengying Li, and Carlos F.M. Coimbra. Sun-tracking imaging system for intra-hour dni forecasts. Renewable Energy, 96:792-799, 2016.
A. Mammoli, A. Ellis, A. Menicucci, S. Willard, T. Caudell, and J. Simmins. Low-cost solar micro-forecasts for pv smoothing. In 2013 1st IEEE Conference on Technologies for Sustainability (SusTech), pages 238-243, 2013.
M. Caldas and R. Alonso-Suárez. Very short-term solar irradiance forecast using all-sky imaging and real-time irradiance measurements. Renewable Energy, 143:1643-1658, 2019.
Peter Shaffery, Aron Habte, Marcos Netto, Afshin Andreas, and Venkat Krishnan. Automated construction of clear-sky dictionary from all-sky imager data. Solar Energy, 212:73-83, 2020.
Chi Wai Chow, Bryan Urquhart, Matthew Lave, Anthony Dominguez, Jan Kleissl, Janet Shields, and Byron Washom. Intra-hour forecasting with a total sky imager at the uc san diego solar energy testbed. Solar Energy, 85(11):2881-2893, 2011.
Physics principles in radiometric infrared imaging of clouds in the atmosphere. A Joseph, Paul W Shaw, Nugent, European Journal of Physics. 346Joseph A Shaw and Paul W Nugent. Physics principles in radiometric infrared imaging of clouds in the atmosphere. European Journal of Physics, 34(6):S111-S121, oct 2013.
Radiometric cloud imaging with an uncooled microbolometer thermal infrared camera. Joseph A Shaw, Paul W Nugent, Nathan J Pust, Brentha Thurairajah, Kohei Mizutani, Opt. Express. 1315Joseph A. Shaw, Paul W. Nugent, Nathan J. Pust, Brentha Thurairajah, and Kohei Mizutani. Radiometric cloud imaging with an uncooled microbolometer thermal infrared camera. Opt. Express, 13(15):5807-5817, Jul 2005.
Cloud statistics measured with the infrared cloud imager (ici). B Thurairajah, J A Shaw, IEEE Transactions on Geoscience and Remote Sensing. 439B. Thurairajah and J. A. Shaw. Cloud statistics measured with the infrared cloud imager (ici). IEEE Transactions on Geoscience and Remote Sensing, 43(9):2000-2007, Sep. 2005.
Infrared cloud imaging in support of earth-space optical communication. Paul W Nugent, Joseph A Shaw, Sabino Piazzolla, Opt. Express. 1710Paul W. Nugent, Joseph A. Shaw, and Sabino Piazzolla. Infrared cloud imaging in support of earth-space optical communication. Opt. Express, 17(10):7862-7872, May 2009.
Data acquisition and image processing for solar irradiance forecast. Guillermo Terrén, - Serrano, Manel Martínez-Ramón, Guillermo Terrén-Serrano and Manel Martínez-Ramón. Data acquisition and image processing for solar irradiance forecast, 2020.
Correcting for focal-plane-array temperature dependence in microbolometer infrared cameras lacking thermal stabilization. J Pust Nathan, W Paul, Joseph A Nugent, Shaw, Optical Engineering. 526Nathan J. Pust Paul W. Nugent, Joseph A. Shaw. Correcting for focal-plane-array temperature dependence in microbolometer infrared cameras lacking thermal stabilization. Optical Engineering, 52(6):1 -8 -8, 2013.
An experimental method to merge far-field images from multiple longwave infrared sensors for short-term solar forecasting. Andrea Mammoli, Guillermo Terrén-Serrano, Anthony Menicucci, P Thomas, Manel Caudell, Martínez-Ramón, Solar Energy. 187Andrea Mammoli, Guillermo Terrén-Serrano, Anthony Menicucci, Thomas P Caudell, and Manel Martínez- Ramón. An experimental method to merge far-field images from multiple longwave infrared sensors for short-term solar forecasting. Solar Energy, 187:254-260, 2019.
Numerical simulation and prediction of spatial wind field under complex terrain. Shujin Hehe Ren, Wen-Li Laima, Bo Chen, Anxin Zhang, Hui Guo, Li, Journal of Wind Engineering and Industrial Aerodynamics. 180Hehe Ren, Shujin Laima, Wen-Li Chen, Bo Zhang, Anxin Guo, and Hui Li. Numerical simulation and prediction of spatial wind field under complex terrain. Journal of Wind Engineering and Industrial Aerodynamics, 180:49 - 65, 2018.
Cloud detection, classification and motion estimation using geostationary satellite imagery for cloud cover forecast. H Escrig, Francisco Batlles, Joaquín Alonso-Montesinos, F M Baena, Juan Bosch, I Salbidegoitia, Juan Burgaleta, Energy. 55H. Escrig, Francisco Batlles, Joaquín Alonso-Montesinos, F.M. Baena, Juan Bosch, I. Salbidegoitia, and Juan Burgaleta. Cloud detection, classification and motion estimation using geostationary satellite imagery for cloud cover forecast. Energy, 55, 06 2013.
Comparative analysis of methods for cloud segmentation in ground-based infrared images. Guillermo Terrén, - Serrano, Manel Martínez-Ramón, 175Renewable EnergyGuillermo Terrén-Serrano and Manel Martínez-Ramón. Comparative analysis of methods for cloud segmentation in ground-based infrared images. Renewable Energy, 175:1025-1040, 2021.
Unsupervised learning: foundations of neural computation. Terrence Joseph Geoffrey E Hinton, Sejnowski, A Tomaso, Poggio, MIT pressGeoffrey E Hinton, Terrence Joseph Sejnowski, Tomaso A Poggio, et al. Unsupervised learning: foundations of neural computation. MIT press, 1999.
Unsupervised learning. Trevor Hastie, Robert Tibshirani, Jerome Friedman, SpringerTrevor Hastie, Robert Tibshirani, and Jerome Friedman. Unsupervised learning. Springer, 2009.
Hidden Markov models: applications in computer vision. Horst Bunke, Terry Michael Caelli, World Scientific. 45Horst Bunke and Terry Michael Caelli. Hidden Markov models: applications in computer vision, volume 45. World Scientific, 2001.
Infrared-image classification using hidden markov trees. P Bharadwaj, L Carin, IEEE Transactions on Pattern Analysis and Machine Intelligence. 2410P. Bharadwaj and L. Carin. Infrared-image classification using hidden markov trees. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(10):1394-1398, 2002.
A hidden markov model for downscaling synoptic atmospheric patterns to precipitation amounts. Enrica Bellone, P James, Peter Hughes, Guttorp, Climate research. 151Enrica Bellone, James P Hughes, and Peter Guttorp. A hidden markov model for downscaling synoptic atmospheric patterns to precipitation amounts. Climate research, 15(1):1-12, 2000.
A non-homogeneous hidden markov model for precipitation occurrence. P James, Peter Hughes, Stephen P Guttorp, Charles, Journal of the Royal Statistical Society: Series C (Applied Statistics). 481James P Hughes, Peter Guttorp, and Stephen P Charles. A non-homogeneous hidden markov model for precipita- tion occurrence. Journal of the Royal Statistical Society: Series C (Applied Statistics), 48(1):15-30, 1999.
Image classification by a two-dimensional hidden markov model. Jia Li, Amir Najmi, Robert M Gray, IEEE transactions on signal processing. 482Jia Li, Amir Najmi, and Robert M Gray. Image classification by a two-dimensional hidden markov model. IEEE transactions on signal processing, 48(2):517-533, 2000.
Object trajectory-based activity classification and recognition using hidden markov models. I Faisal, Bashir, A Ashfaq, Dan Khokhar, Schonfeld, IEEE transactions on Image Processing. 167Faisal I Bashir, Ashfaq A Khokhar, and Dan Schonfeld. Object trajectory-based activity classification and recognition using hidden markov models. IEEE transactions on Image Processing, 16(7):1912-1919, 2007.
An iterative image registration technique with an application to stereo vision. B D Lucas, T Kanade, B. D. Lucas and T. Kanade. An iterative image registration technique with an application to stereo vision, 1981.
Adaptive background mixture models for real-time tracking. Chris Stauffer, W Eric, L Grimson, Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149). 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)IEEE2Chris Stauffer and W Eric L Grimson. Adaptive background mixture models for real-time tracking. In Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149), volume 2, pages 246-252. IEEE, 1999.
Spatio-temporal markov random field for video denoising. Jia Chen, Chi-Keung Tang, 2007 IEEE Conference on Computer Vision and Pattern Recognition. IEEEJia Chen and Chi-Keung Tang. Spatio-temporal markov random field for video denoising. In 2007 IEEE Conference on Computer Vision and Pattern Recognition, pages 1-8. IEEE, 2007.
A hidden spatial-temporal markov random field model for network-based analysis of time course gene expression data. The Annals of applied statistics. Zhi Wei, Hongzhe Li, 2Zhi Wei, Hongzhe Li, et al. A hidden spatial-temporal markov random field model for network-based analysis of time course gene expression data. The Annals of applied statistics, 2(1):408-429, 2008.
Mixture models: Inference and applications to clustering. J Geoffrey, Kaye E Mclachlan, Basford, M. Dekker. 38Geoffrey J McLachlan and Kaye E Basford. Mixture models: Inference and applications to clustering, volume 38. M. Dekker New York, 1988.
Finite mixture models. J Geoffrey, David Mclachlan, Peel, John Wiley & SonsGeoffrey J McLachlan and David Peel. Finite mixture models. John Wiley & Sons, 2004.
Christopher M Bishop, Pattern Recognition and Machine Learning. SpringerChristopher M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
Kevin P Murphy, Machine Learning: A Probabilistic Perspective. The MIT PressKevin P. Murphy. Machine Learning: A Probabilistic Perspective. The MIT Press, 2012.
Dealing with label switching in mixture models. Matthew Stephens, Journal of the Royal Statistical Society: Series B (Statistical Methodology). 624Matthew Stephens. Dealing with label switching in mixture models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 62(4):795-809, 2000.
Criteria for selection of infrared camera system. Soumitra K Ghosh, P E Paul, Galeski, Proceedings of 1994 IEEE Industry Applications Society Annual Meeting. 1994 IEEE Industry Applications Society Annual MeetingIEEE2Soumitra K Ghosh, J Paul, and PE Galeski. Criteria for selection of infrared camera system. In Proceedings of 1994 IEEE Industry Applications Society Annual Meeting, volume 2, pages 1893-1897. IEEE, 1994.
Determining optical flow. K P Berthold, Brian G Horn, Schunck, Artificial intelligence. 171-3Berthold KP Horn and Brian G Schunck. Determining optical flow. Artificial intelligence, 17(1-3):185-203, 1981.
Two-frame motion estimation based on polynomial expansion. Gunnar Farnebäck, Image analysis. Gunnar Farnebäck. Two-frame motion estimation based on polynomial expansion. Image analysis, pages 363-370, 2003.
Processing of global solar irradiance and ground-based infrared sky images for very short-term solar forecasting. Guillermo Terrén, - Serrano, Manel Martínez-Ramón, Guillermo Terrén-Serrano and Manel Martínez-Ramón. Processing of global solar irradiance and ground-based infrared sky images for very short-term solar forecasting, 2021.
Simple filter design for first and second order derivatives by a double filtering approach. Anders Hast, Pattern Recognition Letters. 42Anders Hast. Simple filter design for first and second order derivatives by a double filtering approach. Pattern Recognition Letters, 42:65 -71, 2014.
Lucas-kanade 20 years on: A unifying framework: Part 2. Simon Baker, Ralph Gross, Takahiro Ishikawa, Iain Matthews, International Journal of Computer Vision. 56Simon Baker, Ralph Gross, Takahiro Ishikawa, and Iain Matthews. Lucas-kanade 20 years on: A unifying framework: Part 2. International Journal of Computer Vision, 56:221-255, 2003.
Predictive approaches for choosing hyperparameters in gaussian processes. S Sundararajan, Sathiya Keerthi, Advances in neural information processing systems. S Sundararajan and S Sathiya Keerthi. Predictive approaches for choosing hyperparameters in gaussian processes. In Advances in neural information processing systems, pages 631-637, 2000.
A bivariate distribution with conditional gamma and its multivariate form. Sumen Sen, Rajan Lamichhane, Norou Diawara, Journal of Modern Applied Statistical Methods. 13Sumen Sen, Rajan Lamichhane, and Norou Diawara. A bivariate distribution with conditional gamma and its multivariate form. Journal of Modern Applied Statistical Methods, 13:169-184, 11 2014.
The multivariate generalised von mises distribution: Inference and applications. K W Alexandre, Jes Navarro, Richard E Frellsen, Turner, Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, AAAI'17. the Thirty-First AAAI Conference on Artificial Intelligence, AAAI'17AAAI PressAlexandre K. W. Navarro, Jes Frellsen, and Richard E. Turner. The multivariate generalised von mises distribution: Inference and applications. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, AAAI'17, pages 2394-2400. AAAI Press, 2017.
Von mises-fisher clustering models. Siddharth Gopal, Yiming Yang, PMLRProceedings of the 31st International Conference on Machine Learning. Eric P. Xing and Tony Jebarathe 31st International Conference on Machine Learning32Siddharth Gopal and Yiming Yang. Von mises-fisher clustering models. In Eric P. Xing and Tony Jebara, editors, Proceedings of the 31st International Conference on Machine Learning, volume 32 of Proceedings of Machine Learning Research, pages 154-162. PMLR, 6 2014.
Clustering on the unit hypersphere using von mises-fisher distributions. Arindam Banerjee, S Inderjit, Joydeep Dhillon, Suvrit Ghosh, Sra, J. Mach. Learn. Res. 6Arindam Banerjee, Inderjit S. Dhillon, Joydeep Ghosh, and Suvrit Sra. Clustering on the unit hypersphere using von mises-fisher distributions. J. Mach. Learn. Res., 6:1345-1382, December 2005.
Applications of beta-mixture models in bioinformatics. Yuan Ji, Chunlei Wu, Ping Liu, Jing Wang, Kevin R Coombes, Bioinformatics. 219Yuan Ji, Chunlei Wu, Ping Liu, Jing Wang, and Kevin R Coombes. Applications of beta-mixture models in bioinformatics. Bioinformatics, 21(9):2118-2122, 2005.
Estimating the dimension of a model. Gideon Schwarz, Ann. Statist. 62Gideon Schwarz. Estimating the dimension of a model. Ann. Statist., 6(2):461-464, 03 1978.
A new look at the statistical model identification. Hirotugu Akaike, IEEE Transactions on Automatic Control, AC. 196Hirotugu Akaike. A new look at the statistical model identification. IEEE Transactions on Automatic Control, AC-19(6):716-723, 12 1974.
Laplace mixture of linear experts. D Hien, Geoffrey J Nguyen, Mclachlan, Computational Statistics & Data Analysis. 93Hien D. Nguyen and Geoffrey J. McLachlan. Laplace mixture of linear experts. Computational Statistics & Data Analysis, 93:177 -191, 2016.
Girasol, a sky imaging and global solar irradiance dataset. Data in Brief. Guillermo Terrén-Serrano, Adnan Bashir, Trilce Estrada, Manel Martínez-Ramón, 106914Guillermo Terrén-Serrano, Adnan Bashir, Trilce Estrada, and Manel Martínez-Ramón. Girasol, a sky imaging and global solar irradiance dataset. Data in Brief, page 106914, 2021.
| [] |
[
"Learning Primal Heuristics for Mixed Integer Programs",
"Learning Primal Heuristics for Mixed Integer Programs"
] | [
"Yunzhuang Shen \nSchool of Computing Technologies\nSchool of Mathematics\nRMIT University Melbourne\nAustralia\n",
"Yuan Sun [email protected] \nSchool of Science\nMonash University Melbourne\nAustralia\n",
"Andrew Eberhard [email protected] \nSchool of Computing Technologies\nRMIT University Melbourne\nAustralia\n",
"Xiaodong Li [email protected] \nRMIT University Melbourne\nAustralia\n"
] | [
"School of Computing Technologies\nSchool of Mathematics\nRMIT University Melbourne\nAustralia",
"School of Science\nMonash University Melbourne\nAustralia",
"School of Computing Technologies\nRMIT University Melbourne\nAustralia",
"RMIT University Melbourne\nAustralia"
] | [] | This paper proposes a novel primal heuristic for Mixed Integer Programs, by employing machine learning techniques. Mixed Integer Programming is a general technique for formulating combinatorial optimization problems. Inside a solver, primal heuristics play a critical role in finding good feasible solutions that enable one to tighten the duality gap from the outset of the Branch-and-Bound algorithm (B&B), greatly improving its performance by pruning the B&B tree aggressively. In this paper, we investigate whether effective primal heuristics can be automatically learned via machine learning. We propose a new method to represent an optimization problem as a graph, and train a Graph Convolutional Network on solved problem instances with known optimal solutions. This in turn can predict the values of decision variables in the optimal solution for an unseen problem instance of a similar type. The prediction of variable solutions is then leveraged by a novel configuration of the B&B method, Probabilistic Branching with guided Depthfirst Search (PB-DFS) approach, aiming to find (near-)optimal solutions quickly. The experimental results show that this new heuristic can find better primal solutions at a much earlier stage of the solving process, compared to other state-of-the-art primal heuristics. | 10.1109/ijcnn52387.2021.9533651 | [
"https://arxiv.org/pdf/2107.00866v1.pdf"
] | 235,727,773 | 2107.00866 | e10cc81c7f82ce496de75697991c012f4f7d5a0d |
Learning Primal Heuristics for Mixed Integer Programs
Yunzhuang Shen
School of Computing Technologies
School of Mathematics
RMIT University Melbourne
Australia
Yuan Sun [email protected]
School of Science
Monash University Melbourne
Australia
Andrew Eberhard [email protected]
School of Computing Technologies
RMIT University Melbourne
Australia
Xiaodong Li [email protected]
RMIT University Melbourne
Australia
Learning Primal Heuristics for Mixed Integer Programs
Index Terms-Mixed Integer Programming, Primal Heuristics, Machine Learning
This paper proposes a novel primal heuristic for Mixed Integer Programs, by employing machine learning techniques. Mixed Integer Programming is a general technique for formulating combinatorial optimization problems. Inside a solver, primal heuristics play a critical role in finding good feasible solutions that enable one to tighten the duality gap from the outset of the Branch-and-Bound algorithm (B&B), greatly improving its performance by pruning the B&B tree aggressively. In this paper, we investigate whether effective primal heuristics can be automatically learned via machine learning. We propose a new method to represent an optimization problem as a graph, and train a Graph Convolutional Network on solved problem instances with known optimal solutions. This in turn can predict the values of decision variables in the optimal solution for an unseen problem instance of a similar type. The prediction of variable solutions is then leveraged by a novel configuration of the B&B method, Probabilistic Branching with guided Depthfirst Search (PB-DFS) approach, aiming to find (near-)optimal solutions quickly. The experimental results show that this new heuristic can find better primal solutions at a much earlier stage of the solving process, compared to other state-of-the-art primal heuristics.
I. Introduction
Combinatorial Optimization problems can be formulated as Mixed Integer Programs (MIPs). To solve general MIPs, sophisticated software uses the Branch-and-Bound (B&B) framework, which recursively decomposes a problem and enumerates over the sub-problems. A bounding function determines whether a sub-problem can be safely discarded by comparing the objective value of the current best solution (the incumbent) to a dual bound, generally obtained by solving a linear programming relaxation (i.e., relaxing the integrality constraints) of that sub-problem.
Inside solvers, primal heuristics play an important role in finding good primal solutions at an early stage [1]. A good primal solution strengthens the bounding function, which allows one to prune suboptimal branches more aggressively [2]. Moreover, finding good feasible solutions early greatly reduces the duality gap, which is important for user satisfaction [3]. Acknowledging the importance of primal heuristics, the modern open-source MIP solver SCIP [4] employs dozens of heuristics [2], including meta-heuristics [5], heuristics supported by mathematical theory [6], and heuristics mined by experts and verified by extensive experimental evidence [1]. These heuristics are triggered to run with engineered timings during the B&B process.
In many situations, users are required to solve MIPs of a similar structure on a regular basis [7], so it is natural to seek Machine Learning (ML) solutions. In particular, a number of studies leverage ML techniques to speed up finding good primal solutions for MIP solvers. He et al. [8] train a Support Vector Machine (SVM) to decide whether to explore or discard a certain sub-problem, aiming to devote more of the computational budget to sub-problems that are likely to contain an optimal solution. Khalil et al. [9] train an SVM model to select which primal heuristic to run at a certain sub-problem. More closely related to our work, Sun et al. [10] and Ding et al. [11] leverage ML to predict values of decision variables in the optimal solution, which are then used to fix a proportion of the decision variables and so reduce the size of the original problem, in the hope that the reduced space still contains the optimal solution of the original problem.
In this work, we propose a novel B&B algorithm guided by ML, aiming to search for high-quality primal solutions efficiently. Our approach works in two steps. First, we train an ML model on a dataset formed by optimally-solved small-scale problem instances, where the decision variables are labeled with their optimal solution values. Specifically, we employ the Graph Convolutional Network [12] (GCN), where the input graph associated with a problem instance is formed by representing each decision variable as a node and assigning an edge between two nodes if the corresponding decision variables appear in the same constraint of the MIP formulation (see Section II-A). Given an unseen problem instance, the trained GCN with the proposed graph representation method can then efficiently predict, for each decision variable, its probability of belonging to an optimal solution (e.g., whether a vertex is part of the optimal solution for the Maximum Independent Set Problem). The predicted probabilities are then used to guide a novel B&B configuration, called Probabilistic Branching with guided Depth-first Search (PB-DFS). PB-DFS enumerates the search space starting from the region most likely to contain good primal solutions and moving towards the region of unpromising ones, as indicated by the GCN.
Although both the problem-reduction approaches [10], [11] and our proposed PB-DFS utilize solution prediction by ML, they are inherently different. The former can be viewed as a pre-processing step to prune the search space of the original problem, and the reduced problem is then solved by a B&B algorithm. In contrast, our PB-DFS algorithm configures the search order of the B&B method itself and directly operates on the original problem. In this sense, our PB-DFS is an exact method if given sufficient running time. However, as the PB-DFS algorithm does not take into account the size of the B&B search tree, it is not good at proving optimality. Therefore, we will limit the running time of PB-DFS and use it as a primal heuristic.
Our contributions can be summarized as follows:
1) We propose the Probabilistic Branching technique with guided Depth-first Search, a configuration of the B&B method specialized for boosting primal solutions, empowered by ML techniques.
2) We propose a novel graph representation method that captures the relations between decision variables in a problem instance. The constructed graph can be input to a GCN to make predictions efficiently.
3) Extensive experimental evaluation on NP-hard covering problems shows that: a) the GCN with the proposed graph representation method is very competitive in terms of efficiency and effectiveness compared to a tree-based model, a linear model, and a variant of Graph Neural Network [11]; and b) PB-DFS can find (near-)optimal solutions at a much earlier stage compared to existing primal heuristics as well as to problem-reduction approaches using ML.
II. Background
A. MIP and MIP solvers
An MIP takes the form arg min{c^T x | x ∈ F}. For an MIP instance with n decision variables, c ∈ R^n is the objective coefficient vector and x denotes the vector of decision variables. In this study we consider problems whose decision variables x are binary, although our method can easily be extended to the discrete domain [13]. F is the set of feasible solutions (the search space), which is typically defined by integrality constraints together with linear constraints Ax ≤ b, where A ∈ R^{m×n} and b ∈ R^m are the constraint matrix and the constraint right-hand-side vector, respectively, and m is the number of constraints. The goal is to find an optimal solution in F that minimizes the linear objective function.
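For concreteness, the toy instance below (entirely our own illustration, not from the paper) writes a two-variable binary MIP in this (c, A, b) form and finds its optimum by brute-force enumeration of F:

```python
import numpy as np

# A toy binary MIP in the form min{c^T x | Ax <= b, x in {0,1}^n}:
# minimize  -x1 - 2*x2   subject to  x1 + x2 <= 1   (n = 2, m = 1).
c = np.array([-1.0, -2.0])   # objective coefficient vector
A = np.array([[1.0, 1.0]])   # constraint matrix (m x n)
b = np.array([1.0])          # constraint right-hand side

# Enumerate the four binary assignments to find F and the optimum.
best = None
for x1 in (0, 1):
    for x2 in (0, 1):
        x = np.array([x1, x2])
        if np.all(A @ x <= b):                    # feasibility check
            if best is None or c @ x < c @ best:  # keep the minimizer
                best = x
print(best, c @ best)  # -> [0 1] with objective -2.0
```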
For solving MIPs, exact solvers employ the B&B framework as their backbone, as outlined in Algorithm 1. There are two essential components in the B&B framework: the branching policy and the node selection strategy.

Algorithm 1 Branch-and-Bound Algorithm
Require: a problem instance I;
 1: the node queue: L ← {I};
 2: the incumbent and its objective value: x̂ ← ∅, ĉ ← ∞;
 3: while L is not empty do
 4:   choose Q from L; L ← L \ {Q};
 5:   solve the linear relaxation Q_LP of Q;
 6:   if Q_LP is infeasible then
 7:     go to Line 3;
 8:   end if
 9:   denote the LP solution x̂_LP;
10:   denote the LP objective ĉ_LP;
11:   if ĉ_LP ≤ ĉ then
12:     if x̂_LP is feasible in Q then
13:       x̂ ← x̂_LP; ĉ ← ĉ_LP;
14:     else
15:       split Q into subproblems Q = Q_1 ∪ ... ∪ Q_n;
16:       L ← L ∪ {Q_1, ..., Q_n};
17:     end if
18:   end if
19: end while
20: return x̂

A node selection strategy determines the next (sub-)problem (node) Q to solve from the queue L, which maintains a list of all unexplored nodes. B&B obtains a lower bound on the objective value of Q by solving its Linear Programming (LP) relaxation Q_LP. If the LP objective (lower bound) is larger than the objective ĉ ≡ c^T x̂ of the incumbent x̂ (upper bound), then the sub-tree rooted at node Q can be pruned safely. Otherwise, this sub-tree possibly contains better solutions and should be explored further. If the LP solution x̂_LP is feasible in Q and of better objective value, the incumbent is updated by x̂_LP. Otherwise, Q is decomposed into smaller problems by fixing a candidate variable to a possible integral value, and the resulting sub-problems are added to the node queue. The branching policy is an algorithm for choosing the branching variable from a set of candidate variables, which contains the decision variables taking fractional values in the solution x̂_LP. Modern solvers implement sophisticated algorithms for both components, aiming to find good primal solutions quickly while maintaining a relatively small tree size. See [4] for a detailed review.
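To make the control flow concrete, the sketch below (our own minimal Python rendering, not SCIP's implementation) follows Algorithm 1 for binary MIPs, using scipy's LP solver for the relaxations, with plain last-in-first-out node selection and a first-fractional branching rule standing in for the strategies discussed above:

```python
import numpy as np
from scipy.optimize import linprog

def branch_and_bound(c, A, b):
    """Minimal sketch of Algorithm 1 for min{c^T x | Ax <= b, x binary}."""
    incumbent, inc_obj = None, np.inf
    queue = [[(0.0, 1.0)] * len(c)]           # a node = per-variable bounds
    while queue:                              # Line 3
        bounds = queue.pop()                  # node selection (here: DFS)
        lp = linprog(c, A_ub=A, b_ub=b, bounds=bounds, method="highs")
        if lp.status != 0:                    # Lines 6-8: LP infeasible
            continue
        if lp.fun >= inc_obj:                 # Line 11: pruned by the bound
            continue
        frac = [i for i, v in enumerate(lp.x) if min(v, 1.0 - v) > 1e-6]
        if not frac:                          # Lines 12-13: integral solution
            incumbent, inc_obj = lp.x.round(), lp.fun
            continue
        i = frac[0]                           # branching (here: first fractional)
        for val in (0.0, 1.0):                # Lines 15-16: split into children
            child = list(bounds)
            child[i] = (val, val)
            queue.append(child)
    return incumbent, inc_obj                 # Line 20
```

On the toy instance above, branch_and_bound(c, A, b) already returns ([0., 1.], -2.0) at the root node, since the LP relaxation happens to be integral there.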
B. Heuristics in MIP solvers
During computation, primal heuristics can be executed at any node, devoted to improving the incumbent, such that more sub-optimal nodes can be found earlier and thus pruned without further exploration. Berthold [2] classifies these primal heuristics into two categories: start heuristics and improvement heuristics. Start heuristics aim to find feasible solutions at an early solving stage, while improvement heuristics build upon a feasible solution (typically the incumbent) and seek better solutions. All heuristics run on external memory (e.g., a copy of a sub-problem) and do not modify the structure of the B&B tree. For a more comprehensive description of primal heuristics, we refer readers to [2], [3].

Algorithm 2 An MIP Instance to Linkage Graph
Require: the constraint matrix A ∈ R^{m×n};
 1: the adjacency matrix: G^adj ← 0^{n×n};
 2: the row index of A: i ← 0;
 3: the index set of variables: C ← ∅;
 4: while i < m do
 5:   C ← {j | A_{i,j} ≠ 0};
 6:   for k, l ∈ C, k ≠ l do
 7:     G^adj_{k,l} ← 1; G^adj_{l,k} ← 1;
 8:   end for
 9:   i ← i + 1;
10: end while
11: return G^adj
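Algorithm 2 transcribes directly into a few lines of Python; the sketch below is our own rendering of it (the tolerance argument is an added convenience):

```python
import numpy as np

def linkage_graph(A, tol=0.0):
    """Algorithm 2: adjacency matrix linking variables that share a constraint."""
    m, n = A.shape
    G = np.zeros((n, n), dtype=int)
    for i in range(m):
        C = np.flatnonzero(np.abs(A[i]) > tol)  # variables in constraint i
        for k in C:                             # Lines 6-8 of Algorithm 2
            for l in C:
                if k != l:
                    G[k, l] = 1                 # undirected edge k-l
    return G
```

For the toy constraint matrix A above (one constraint over x1 and x2), linkage_graph(A) links the two variables with a single edge.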
III. Method
In Section III-A, we describe how to train a machine learning model to predict, for each binary decision variable, the probability of its taking the value 1 in the optimal solution. In Section III-B, we present the proposed PB-DFS, which leverages the predicted probabilities to boost the search for high-quality primal solutions.
A. Solution Prediction
Given a combinatorial optimization problem, we construct a training dataset from multiple optimally-solved problem instances, where each instance is associated with an optimal solution. A training example corresponds to one decision variable x_i from a solved problem instance. The label y_i of x_i is the solution value of x_i in the optimal solution associated with that problem instance. The features f_i of x_i are extracted from the MIP formulation and describe the role of x_i in that problem instance; we describe those features in Appendix A. Given the training data, an ML model can be trained by minimizing the cross-entropy loss function [14] to separate the training examples with different class labels [10], [11].
We adapt the Graph Convolutional Network (GCN) [12], a type of Graph-based Neural Network (GNN), for this classification task, so as to take the relations between the decision variables of a particular problem instance into account. To model these relations, we propose a simple method to extract information from the constraint matrix of an MIP; Algorithm 2 outlines the procedure. Given an MIP, we represent each decision variable as a node in a graph, and assign an edge between two nodes if the corresponding decision variables appear in the same constraint. This graph representation can capture the linkage of decision variables effectively, especially for graph-based problems. For example, the constructed graph for the Maximum Independent Set Problem is exactly the graph on which the problem is defined, and the constructed graph for the Traveling Salesman Problem is the line graph of the graph given by the problem definition.
Given the dataset containing training examples (f_i, y_i) grouped by problem instances, with each problem instance associated with an adjacency matrix G^adj representing the relations between its decision variables, we can then train the GCN. Specifically, for a problem instance, the adjacency matrix G^adj is used to precompute the normalized graph Laplacian

L := I − D^{−1/2} G^adj D^{−1/2},    (1)

where I and D are the identity matrix and the diagonal degree matrix of G^adj, respectively. The propagation rule is defined as

H^{l+1} := σ(L H^l W^l + H^l),    (2)
where l denotes the index of a layer. Inside the activation function σ(·), W^l denotes the weight matrix, and H^l is a matrix containing the hidden feature representations of the decision variables in that problem instance, initialized by the feature vectors of the decision variables, H^0 = [f_1, ..., f_n]^T. The second term is referred to as the residual connection, which preserves information from the previous layer. For hidden layers with an arbitrary number of neurons, we adopt ReLU(x) = max(x, 0) as the activation function [15]. The output layer L has only one neuron, which outputs a scalar value for each decision variable, and the sigmoid function is used as the activation function for the prediction H^L = [ŷ_1, ..., ŷ_n]. We train the GCN using Stochastic Gradient Descent to minimize the cross-entropy loss between the predicted values ŷ_i of the decision variables and their optimal values y_i, defined as
min − (1/N) ∑_{i=1}^{N} [ y_i log(ŷ_i) + (1 − y_i) log(1 − ŷ_i) ],    (3)

where N is the total number of decision variables collected from the training problem instances. Given an unseen problem instance at test time, we can use the trained GCN to predict for each decision variable x_i a value ŷ_i, which can be interpreted as the probability of that decision variable taking the value 1 in the optimal solution, p_i = P(x_i = 1). We refer to the array of predicted values as the probability vector.
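The pieces above fit together as in the following numpy sketch (our simplification of the paper's TensorFlow model, for a single instance; a fixed hidden width, a row-per-variable feature matrix F, and the absence of isolated nodes are assumptions of the sketch):

```python
import numpy as np

def normalized_laplacian(G_adj):
    """Eq. (1): L = I - D^{-1/2} G_adj D^{-1/2} (assumes no isolated nodes)."""
    d = G_adj.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.eye(len(d)) - D_inv_sqrt @ G_adj @ D_inv_sqrt

def gcn_forward(L, F, weights, w_out):
    """Eq. (2): ReLU hidden layers with residual connections, sigmoid output."""
    H = F                                    # H^0: node feature matrix (n x d)
    for W in weights:                        # one (d x d) weight matrix per layer
        H = np.maximum(L @ H @ W + H, 0.0)   # ReLU(L H W + H)
    logits = (H @ w_out).ravel()             # output layer: one neuron per node
    return 1.0 / (1.0 + np.exp(-logits))     # sigmoid -> probabilities p_i

def bce_loss(p, y):
    """Eq. (3): cross-entropy between predictions p and optimal labels y."""
    eps = 1e-9
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
```

A single SGD step would differentiate bce_loss through this forward pass; the paper performs the training in TensorFlow.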
B. Probabilistic Branching with guided Depth-first Search
We can then apply the predicted probability vector to guide the search process of the B&B method. The proposed branching strategy, Probabilistic Branching (PB), greedily selects the candidate variable x_i with the highest score z_i to branch on. The score of variable x_i is computed by

z_i ← max(p_i, 1 − p_i),    (4)
where p_i is the probability of x_i being assigned the value 1, as predicted by the GCN model. This function assigns a higher score to variables whose p_i is closer to either 0 or 1, i.e., variables about whose prediction the ML model is more certain. We then branch on the decision variable with the highest score,

i ← arg max_{i ∈ C} z_i,    (5)
where C is the index set of candidate variables that are not fixed at the current node. In this way, our PB method prefers to branch on the variables for which the prediction is more "confident" at the shallow levels of the search tree, while exploring the uncertain variables at the deep levels. We propose to use a guided Depth-first Search (DFS) as the node selection strategy to select the next sub-problem to explore. After branching on a node, we have at most two child nodes to explore (because the decision variables are binary). Guided DFS selects the child node that results from fixing the decision variable to the nearest integer of p_i. When reaching a leaf node, guided DFS backtracks to the deepest node in the search tree among all unexplored nodes. Therefore, guided DFS always explores the node most likely to contain optimal solutions, as instructed by the prediction of the ML model. We refer to this implementation of the B&B method as PB-DFS. Figure 1 illustrates PB-DFS applied to a problem with three decision variables. We note that, as a configuration of the B&B framework, PB-DFS is an exact search method if given enough time. However, it is specialized for aggressively seeking high-quality primal solutions while trading off the size of the B&B tree created during the computation. Hence, we implement and evaluate it as a primal heuristic, which partially solves an external copy of the (sub-)problem under a certain termination criterion.
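Equations (4)-(5) and the guided child selection reduce to a few lines; the sketch below is our own illustration (SCIP's branching callbacks are not shown) and returns both the branching variable and the child that guided DFS explores first:

```python
import numpy as np

def pb_branch(p, candidates):
    """Eqs. (4)-(5): pick the candidate whose prediction is most confident."""
    scores = {i: max(p[i], 1.0 - p[i]) for i in candidates}  # z_i
    i_star = max(scores, key=scores.get)                     # arg max over C
    preferred_value = int(round(p[i_star]))  # guided DFS explores this child first
    return i_star, preferred_value

# Example: with p = [0.9, 0.2, 0.55] and all variables unfixed,
# PB branches on x_0 (z = 0.9) and guided DFS first fixes x_0 = 1.
print(pb_branch(np.array([0.9, 0.2, 0.55]), {0, 1, 2}))  # -> (0, 1)
```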
IV. Experiments
In this section, we use numerical experiments to evaluate the efficacy of our proposed method. After describing the experiment setup, we first analyse different ML models in terms of both effectiveness and efficiency. Then, we evaluate the proposed PB-DFS equipped with different ML models against a set of primal heuristics. Further, we show the significance of PB-DFS with the proposed GCN model by comparing it to the full-fledged SCIP solver and to problem-reduction approaches using the SCIP solver [11].

A. Experiment Setup

a) Problems: We select a set of representative NP-hard problems: the Maximum Independent Set Problem (MISP), the Dominant Set Problem (DSP), the Vertex Cover Problem (VCP), and an additional problem from Operational Research, the Combinatorial Auction Problem (CAP). For each problem, we generate instances of three different scales, i.e., small, medium, and large. Small-scale and medium-scale problem instances are solved to optimality for training and evaluating ML models. Large-scale problem instances are used for evaluating different solution approaches. Details of the problem formulations and instance generation are provided in Appendix B.

b) ML Models: We refer to the GCN that takes the proposed graph representation of an MIP as LG-GCN, where LG stands for Linkage Graph. We compare LG-GCN against three other machine learning (ML) models: Logistic Regression (LR), XGBoost [16], and a variant of Graph Neural Network (GNN) which represents the constraint matrix of an MIP as a tripartite graph [11]. We refer to this GNN variant as TRIG-GCN, where TRIG stands for Tripartite Graph. We set the number of layers to 20 for LG-GCN, while that of TRIG-GCN is set to 2 due to its high computational cost (explained later). For these two Graph Neural Networks, the dimension of the hidden vector of a decision variable is set to 32. For LR and XGBoost, the hyper-parameters are the default ones in the Scikit-learn [17] package. For all ML models, the feature vector of a decision variable has 57 dimensions, containing statistics extracted from the MIP formulation of a problem instance (listed in Appendix A). For each feature, we normalize its values to the range [0, 1] using min-max normalization with respect to the decision variables in a particular problem instance. Besides, LG-GCN and TRIG-GCN are trained with graphs of different structures with respect to each problem instance. For a given problem, an ML model is trained using 500 optimally-solved small-scale instances.

c) Evaluation of ML Models: We evaluate the classification performance of the ML models on two test datasets, constructed from 50 small problem instances (different from the training instances) and from medium-sized problem instances, respectively. We measure a model's performance with the Average Precision (AP) [18], defined by accumulating the product of the precision and the change in recall when moving the decision threshold over a set of ranked variables. AP is a more informative metric than accuracy in our context, because it takes the ranking of the decision variables into account. This allows AP to better measure the classification performance for imbalanced data, which is common in the context of solution prediction for NP-hard problems. Further, AP better reflects the performance of the proposed PB-DFS, because exploring a node not containing the optimal solution at the upper levels of the B&B tree is more harmful than exploring one at the lower levels.
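AP as used here can be computed directly with scikit-learn's average_precision_score; a minimal usage sketch (the array contents are made up for illustration) is:

```python
from sklearn.metrics import average_precision_score

y_true = [1, 0, 0, 1, 0]              # optimal solution values of five variables
y_score = [0.9, 0.4, 0.35, 0.8, 0.1]  # predicted probabilities p_i
print(average_precision_score(y_true, y_score))  # AP in [0, 1]; here 1.0
```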
d) Evaluation of Solution Methods: We evaluate PB-DFS equipped with the best ML model against heuristic methods as well as problem-reduction approaches on large-scale problem instances. In the first part, PB-DFS is compared with primal heuristics that do not require a feasible solution on large problem instances. By examining the performance of the heuristics enabled by default in SCIP, four types of heuristics are selected as baselines: the Feasibility Pump [19] (FP), the Relaxation Enhanced Neighborhood Search [2] (RENS), a set of 15 diving heuristics, and a set of 8 rounding heuristics [1]. We allow PB-DFS to run only at the root node and terminate it as soon as the first feasible solution is found. The compared heuristic methods run multiple times during the B&B process under a cutoff time of 50 seconds, with the default running frequencies and termination criteria tuned by the SCIP developers. Generating cutting planes is disabled to best measure the time at which solutions are found by the different heuristics. In the second part, we demonstrate the effectiveness of PB-DFS by comparing the SCIP solver with PB-DFS as its only primal heuristic against the full-fledged SCIP solver, where all heuristics are enabled, as well as against a problem-reduction approach by Ding et al. [11]. The problem-reduction approach splits the root node of the search tree by a constraint generated from the probability vector; the resulting problem is then solved by SCIP-DEF. To alleviate the effects of ML predictions, we use the probability vector generated by LG-GCN for both PB-DFS and ML-Split. The cutoff time is set to 1000 seconds. Note that for DSP, we employ an alternative score function, z_i ← p_i; the corresponding guided DFS selects the child node that results from fixing the decision variable to one when nodes are at the same depth. A comparison of the alternative score functions is detailed in Appendix C.

e) Experimental Environment: We conduct experiments on a cluster with 32 Intel 2.3 GHz CPUs and 128 GB RAM. PB-DFS is implemented using the C API provided by the state-of-the-art open-source solver SCIP, version 6.0.1. The implementations of Logistic Regression and XGBoost are taken from Scikit-learn [17]. Both LG-GCN and TRIG-GCN are implemented using the Tensorflow package [20], where offline training and online prediction are done by multiple CPUs in parallel. All solution approaches are evaluated on a single CPU core. Our code is available online¹.
B. Results on Solution Prediction
Table I presents the ML models' performance on solution prediction. The small-scale test instances are of the same size as the training instances, and the medium-scale ones are used for examining the generalization performance of the tested ML models. We cannot measure the classification performance of the ML models on large-scale instances because the optimal solutions for them are not available. Comparing the mean AP values, we observe that LG-GCN is very competitive across all problems. We conduct a Student's t-test comparing LG-GCN against the other baselines, where the AP values of LG-GCN over a group of test problem instances are compared with those of the other ML models. In practice, a p-value less than 0.05 (a typical significance level) indicates that the difference between the two samples is significant. We thus confirm that the proposed LG-GCN predicts solutions of better quality than the other ML models on MISP, VCP, and CAP; the performances of the ML models on DSP are comparable. Note that on CAP, which is not originally formulated on a graph, LG-GCN's AP value is significantly better than TRIG-GCN's. This shows that the proposed graph construction method is more robust when extended to non-graph-based problems, as compared to TRIG-GCN. LR is slightly better than XGBoost overall. All models show the capability to generalize to larger problem instances.

In addition to prediction accuracy, the prediction time of an ML model is part of the total solving time, which should also be considered when developing efficient solution methods. Figure 2 shows the growth of the prediction time when enlarging the problem size for the compared ML models. Comparing the graph-based models, we observe that in practice the mean prediction time of LG-GCN is much lower than that of TRIG-GCN. The high computational cost of TRIG-GCN prevents us from building it with a large number of layers. The computation time of LG-GCN is close to those of the linear models on VCP and MISP, and shifts away on DSP and CAP. This is understandable because the complexity of LG-GCN is linear in the number of edges in the graph [12], and hence polynomial in the number of decision variables, as compared to the linear growth of LR and XGBoost. However, for MISP and VCP, where the constraint coefficient matrix is sparse (i.e., the fraction of zero entries is high), in practice the difference in the growth of computation time with increasing problem size is not as dramatic; it may be significant when an MIP instance has a dense constraint matrix, e.g., for Combinatorial Auction Problems.

C. Results For Finding Primal Solutions

Table II shows the computational results of PB-DFS compared to the most effective heuristic methods used in the SCIP solver. Recall that FP and RENS are two standalone heuristics, Roundings refers to a set of 8 rounding heuristics, and Divings covers 15 diving heuristics. PB-DFS-GCN and PB-DFS-LR stand for the PB-DFS method equipped with the LG-GCN and LR models, respectively. Note that the PB-DFS method only runs at the root node and terminates upon finding the first feasible solution. Besides, we analyze an additional termination criterion using PB-DFS-GCN, terminating with a cutoff time of 20 seconds (statistics are shown in brackets). For each problem, we run a heuristic 30 times on different instances. Since the SCIP solver assigns heuristics different running frequencies based on the characteristics of a problem, we only show heuristics called at least once per instance. The column "# Instances no feasible solution" reports the number of instances on which a heuristic does not find a feasible solution. The other columns show statistics as a geometric mean shifted by one, averaged over the instances where a heuristic finds at least one feasible solution. Note that we only consider the best solutions found by a heuristic; solutions found by branching are not included. We also show the average number of calls of a heuristic and its total running time in the last columns for reference. For the PB-DFS heuristics, the mean prediction time of the models is added to the Best Solution Time and Heuristic Total Time. Primal heuristics that do not meet the running criteria set by SCIP for a certain problem are excluded from the results.
Overall, the PB-DFS methods find better primal solutions much earlier on VCP, DSP, and MISP. They are less competitive on CAP. This is understandable because the AP value of the prediction for CAP is low (Table I), indicating that the ML prediction is less effective there. Comparing PB-DFS with different ML models, PB-DFS-GCN finds better solutions than PB-DFS-LR on VCP, MISP, and CAP, and they are comparable on DSP. This means that the AP values of the probability vectors produced by different models reflect, to a certain extent, the performance of the corresponding PB-DFS heuristics. When comparing the two termination criteria using PB-DFS-GCN, we observe that giving the proposed heuristic more time can lead to better solutions. These results show that the PB-DFS method can be a very strong primal heuristic when the ML prediction is of high quality (i.e., high AP values).
We further demonstrate the effectiveness of PB-DFS by comparing SCIP equipped with PB-DFS as its only primal heuristic against both the full-fledged SCIP solver (SCIP-DEF) and a problem-reduction approach [11] using SCIP-DEF as the solver (ML-Split). Figure 3 presents the change of the primal bound during the solving process, and detailed solving statistics are shown in Table III. From Figure 3, we observe that PB-DFS finds (near-)optimal solutions at the very beginning and outperforms the other methods on VCP and MISP. On DSP, the early good solutions found by PB-DFS are still very useful, such that they help the solver without any other primal heuristic outperform the full-fledged solvers for a while. Further, PB-DFS is computationally cheap. Therefore, when incorporated into the SCIP solver, PB-DFS does not introduce a large computational overhead and keeps more computational resources for the B&B process. This explains why the quality of its incumbent solution quickly catches up with the other approaches on CAP.
The detailed solving statistics in Table III are consistent with the above analysis, confirming that the PB-DFS method is very competitive.

V. Conclusion

In this work, we propose a primal heuristic based on machine learning, Probabilistic Branching with guided Depth-First Search (PB-DFS). PB-DFS is a B&B configuration specialized for boosting the search for high-quality primal solutions, leveraging a predicted solution to guide the search process of the B&B method. Results show that PB-DFS can find better primal solutions, and find them much faster, on several NP-hard covering problems as compared to other general heuristics in the state-of-the-art open-source MIP solver SCIP. Further, we demonstrate that PB-DFS makes better use of high-quality predicted solutions as compared to recent solution prediction approaches.
We would like to note several promising directions beyond the scope of this work. Firstly, we demonstrate that PB-DFS can be deployed as a primal heuristic that runs only at the root node during a solver's B&B process; more sophisticated implementations, e.g., triggering PB-DFS to run at different nodes with engineered timings, could lead to further performance improvements. Secondly, PB-DFS relies on high-quality predicted solutions, and we observe a drop in the performance of existing ML models when extending them to general MIP problems. We expect that improving ML models in the context of solution prediction for Mixed Integer Programs could be a fruitful avenue for future research.
Figure 1: Probabilistic Branching with guided Depth-first Search on three decision variables. Given a node, PB branches on the variable about whose prediction the model is most confident. Variables that have already been branched on are removed from the candidate set. The order of node selection by guided DFS is indicated by the arrow lines.

Figure 2: Increase of model prediction time when enlarging the size of problem instances: the y-axis is the prediction time in seconds in log scale, and the x-axis is the size of problem instances in 3 scales.

Figure 3: Change of the primal bound during computation on large-scale problem instances: the x-axis is the solving time in seconds, and the y-axis is the objective value of the incumbent.
Table I: Comparison between ML models. The AP columns show the mean of the Average Precision values over 50 problem instances. We conduct a Student's t-test comparing LG-GCN against the other baselines; a p-value less than 0.05 (a typical significance level) indicates that LG-GCN is significantly better than the corresponding ML model.

Problem Size | Model    | Independent Set   | Dominant Set    | Vertex Cover      | Combinatorial Auction
             |          | AP     p-value    | AP     p-value  | AP     p-value    | AP     p-value
Small        | LG-GCN   | 96.53  -          | 87.25  -        | 98.05  -          | 46.10  -
Small        | TRIG-GCN | 88.62  8.7E-28    | 86.90  8.0E-02  | 92.42  2.8E-87    | 41.77  4.7E-02
Small        | XGBoost  | 74.11  3.9E-135   | 86.45  2.6E-01  | 84.82  1.7E-140   | 39.05  1.4E-03
Small        | LR       | 74.24  1.6E-128   | 86.59  3.5E-01  | 85.29  2.2E-137   | 41.31  2.3E-02
Medium       | LG-GCN   | 96.41  -          | 87.52  -        | 98.24  -          | 47.07  -
Medium       | TRIG-GCN | 88.16  4.8E-31    | 87.18  5.1E-02  | 92.74  2.1E-51    | 42.67  6.0E-04
Medium       | XGBoost  | 73.07  1.1E-125   | 86.80  2.0E-01  | 84.70  5.2E-67    | 40.85  1.2E-06
Medium       | LR       | 73.99  2.0E-148   | 86.86  2.3E-01  | 85.33  3.3E-57    | 42.35  1.8E-04
Table II: Comparison between PB-DFS and primal heuristics. Columns: Problem | Heuristic | Best Solution Objective | Best Solution Time | # Instances no feasible solution | # Calls | Heuristic Total Time. Rows are grouped by problem, beginning with VCP (Min.).
Table III: PB-DFS compared with SCIP-DEF and ML-Split.

Problem     | Method   | Best Solution Objective | Best Solution Time | Optimality Gap (%) | Heuristic Total Time
VCP (Min.)  | SCIP-DEF | 1636.4 | 542.8 | 3.97  | 198.5
VCP (Min.)  | ML-Split | 1630.1 | 202.7 | 3.62  | 202.8
VCP (Min.)  | PB-DFS   | 1629.0 | 5.7   | 3.46  | 5.7
DSP (Min.)  | SCIP-DEF | 315.5  | 390.8 | 3.0   | 114.2
DSP (Min.)  | ML-Split | 315.3  | 358.9 | 2.95  | 110.1
DSP (Min.)  | PB-DFS   | 316.1  | 303.4 | 3.15  | 11.5
MISP (Max.) | SCIP-DEF | 1364.9 | 559.9 | 4.67  | 174.5
MISP (Max.) | ML-Split | 1370.4 | 281.6 | 4.34  | 200.4
MISP (Max.) | PB-DFS   | 1371.0 | 4.5   | 4.17  | 4.5
CAP (Max.)  | SCIP-DEF | 3824.3 | 535.0 | 11.62 | 29.2
CAP (Max.)  | ML-Split | 3821.2 | 667.5 | 12.01 | 30.8
CAP (Max.)  | PB-DFS   | 3831.4 | 514.6 | 11.35 | 5.5
¹ Code is available at https://github.com/Joey-Shen/pb-dfs.
Appendix A

The features for decision variables are outlined as follows:
• original, positive, and negative objective coefficients;
• number of non-zero, positive, and negative coefficients in constraints;
• the variable's LP solution x̃ of the original problem; x̃ − ⌊x̃⌋; ⌈x̃⌉ − x̃; a boolean indicator for whether x̃ is fractional;
• the variable's upward and downward pseudo costs; the ratio between these pseudo costs; the sum and product of these pseudo costs; the variable's reduced cost;
• global lower bound and upper bound;
• mean, standard deviation, minimum, and maximum degree of the constraints in which the variable has a non-zero coefficient, where the degree of a constraint is the number of non-zero coefficients of that constraint;
• the maximum and the minimum ratio between the left-hand side and the right-hand side over the constraints in which the variable has a non-zero coefficient;
• statistics (sum, mean, standard deviation, maximum, minimum) of the variable's positive and negative constraint coefficients, respectively;
• coefficient statistics (sum, mean, standard deviation, maximum, minimum) of all variables in the variable's constraints, with respect to three weighting schemes: unit weight, dual cost, and the inverse of the sum of the coefficients in the constraint.

Appendix B

The MIP formulations for the tested problems are as follows.

a) Maximum Independent Set Problem (MISP): In an undirected graph G(V, E), a subset of nodes S is independent iff there is no edge between any pair of nodes in S. The MISP is to find an independent set in G of maximum cardinality. The MIP formulation of the MISP is

max_x ∑_{v∈V} x_v, subject to x_u + x_v ≤ 1, ∀(u, v) ∈ E, and x_v ∈ {0, 1}, ∀v ∈ V.

b) Dominant Set Problem (DSP): In an undirected graph G(V, E), a subset of nodes S ⊂ V dominates the complementary subset V \ S iff every node not in S is adjacent to at least one node in S. The objective of the DSP is to find a dominant set in G of minimum cardinality. The MIP of the DSP is

min_x ∑_{v∈V} x_v, subject to x_v + ∑_{u∈N(v)} x_u ≥ 1, ∀v ∈ V, and x_v ∈ {0, 1}, ∀v ∈ V,

where N(v) denotes the set of neighbours of v.

c) Vertex Cover Problem (VCP): In an undirected graph G(V, E), a subset of nodes S ⊂ V is a cover of G iff every edge e ∈ E has at least one endpoint in S. The objective of the VCP is to find a cover of G of minimum cardinality. The MIP of the VCP is

min_x ∑_{v∈V} x_v, subject to x_u + x_v ≥ 1, ∀(u, v) ∈ E, and x_v ∈ {0, 1}, ∀v ∈ V.

d) Combinatorial Auction Problem (CAP): A seller selectively accepts offers from bidders. Each offer, indexed by i, contains a bid p_i for a set of items (bundle) C_i by a particular bidder. The seller has a limited amount of goods and aims to allocate the goods in a way that maximizes the total revenue. We use I and J to denote the index set of offers and the index set of items, respectively. Formally, the MIP formulation of the problem can be expressed as

max_x ∑_{i∈I} p_i x_i, subject to ∑_{i∈I : j∈C_i} x_i ≤ 1, ∀j ∈ J, and x_i ∈ {0, 1}, ∀i ∈ I.

For MISP, DSP, and VCP, we sample random graphs using the Erdős-Rényi generator with the affinity set to 4. The training data consists of examples from solved small graph instances with between 500 and 1001 nodes. We form the small-scale and medium-scale test datasets from solved graph instances of 1000 nodes and of 2000 nodes, respectively. We evaluate heuristics on large-scale graph instances of 3000 nodes. The CAP instances are generated using an arbitrary relationship procedure. The small-scale CAP instances are sampled with items in the range [100, 150] and bids in the range [500, 750]. Instances in the medium and large scales are generated with 150 items for 750 bids and with 200 items for 1000 bids, respectively. The detailed parameter settings are given by following [21].
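To make the formulations concrete, the sketch below builds the (c, A, b) matrices of a MISP instance from a random graph; the networkx call and its parameters are our assumptions for illustration (the paper's instance generation, including the affinity parameter, follows [21]), and the objective is negated because the B&B sketch in Section II-A minimizes:

```python
import numpy as np
import networkx as nx

def misp_mip(graph):
    """MISP as min{-1^T x | x_u + x_v <= 1 for (u, v) in E, x binary}."""
    n, edges = graph.number_of_nodes(), list(graph.edges())
    c = -np.ones(n)                  # negated: maximize cardinality
    A = np.zeros((len(edges), n))
    for row, (u, v) in enumerate(edges):
        A[row, u] = A[row, v] = 1.0  # one edge constraint per row
    b = np.ones(len(edges))
    return c, A, b

# Example on a small random graph (parameters are illustrative only):
G = nx.erdos_renyi_graph(n=10, p=0.3, seed=0)
c, A, b = misp_mip(G)
```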
Appendix C

We consider two alternative score functions, z_i ← p_i and z_i ← 1 − p_i, for i ∈ C. When using z_i ← p_i as the score function to select the decision variable with the maximum score to branch on, our guided DFS always prefers the node that results from fixing the decision variable to 1. The behavior of PB-DFS under this score function can be interpreted as incrementally adding variables that are more likely to belong to an optimal solution until a feasible solution is obtained. When using the other score function, z_i ← 1 − p_i, guided DFS always prefers the node that results from fixing the decision variable to 0, and the resulting behavior of PB-DFS can be interpreted as continuously removing variables that are less likely to belong to an optimal solution until a feasible solution is obtained. In Table IV, we observe that the score functions do not significantly affect the first-found solution.
[1] T. Achterberg, T. Berthold, and G. Hendel, "Rounding and propagation heuristics for mixed integer programming," in Operations Research Proceedings 2011. Springer, 2012, pp. 71-76.
[2] T. Berthold, "Primal heuristics for mixed integer programs," 2006.
[3] M. Fischetti and A. Lodi, "Heuristics in mixed integer programming," Wiley Encyclopedia of Operations Research and Management Science, 2010.
[4] T. Achterberg, "SCIP: Solving constraint integer programs," Mathematical Programming Computation, vol. 1, no. 1, pp. 1-41, 2009.
[5] E. Aarts, E. H. Aarts, and J. K. Lenstra, Local Search in Combinatorial Optimization. Princeton University Press, 2003.
[6] T. Berthold, "RENS," Mathematical Programming Computation, vol. 6, no. 1, pp. 33-54, 2014.
[7] M. Gasse, D. Chételat, N. Ferroni, L. Charlin, and A. Lodi, "Exact combinatorial optimization with graph convolutional neural networks," in Advances in Neural Information Processing Systems, 2019, pp. 15554-15566.
[8] H. He, H. Daume III, and J. M. Eisner, "Learning to search in branch and bound algorithms," in Advances in Neural Information Processing Systems, 2014, pp. 3293-3301.
[9] E. B. Khalil, B. Dilkina, G. L. Nemhauser, S. Ahmed, and Y. Shao, "Learning to run heuristics in tree search," in IJCAI, 2017, pp. 659-666.
[10] Y. Sun, X. Li, and A. Ernst, "Using statistical measures and machine learning for graph reduction to solve maximum weight clique problems," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019.
[11] J.-Y. Ding, C. Zhang, L. Shen, S. Li, B. Wang, Y. Xu, and L. Song, "Accelerating primal solution findings for mixed integer programs based on solution prediction," arXiv preprint arXiv:1906.09575, 2019.
[12] T. N. Kipf and M. Welling, "Semi-supervised classification with graph convolutional networks," arXiv preprint arXiv:1609.02907, 2016.
[13] V. Nair, S. Bartunov, F. Gimeno, I. von Glehn, P. Lichocki, I. Lobov, B. O'Donoghue, N. Sonnerat, C. Tjandraatmadja, P. Wang et al., "Solving mixed integer programs using neural networks," arXiv preprint arXiv:2012.13349, 2020.
[14] C. M. Bishop, Pattern Recognition and Machine Learning. Springer, 2006.
[15] V. Nair and G. E. Hinton, "Rectified linear units improve restricted Boltzmann machines," in ICML, 2010.
[16] T. Chen and C. Guestrin, "XGBoost: A scalable tree boosting system," in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, pp. 785-794.
[17] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg et al., "Scikit-learn: Machine learning in Python," Journal of Machine Learning Research, vol. 12, pp. 2825-2830, 2011.
[18] M. Zhu, "Recall, precision and average precision," Department of Statistics and Actuarial Science, University of Waterloo, vol. 2, p. 30, 2004.
[19] M. Fischetti, F. Glover, and A. Lodi, "The feasibility pump," Mathematical Programming, vol. 104, no. 1, pp. 91-104, 2005.
[20] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard et al., "TensorFlow: A system for large-scale machine learning," in 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), 2016, pp. 265-283.
[21] K. Leyton-Brown, M. Pearson, and Y. Shoham, "Towards a universal test suite for combinatorial auction algorithms," in Proceedings of the 2nd ACM Conference on Electronic Commerce, 2000, pp. 66-76.
| [
"https://github.com/Joey-Shen/pb-dfs."
] |
[
"Direct imaging of lattice strain-induced stripe phases in an optimally-doped manganite",
"Direct imaging of lattice strain-induced stripe phases in an optimally-doped manganite"
] | [
"L Sudheendra ",
"V Moshnyaga ",
"B Damaschke ",
"K Samwer ",
"\nPhysikalisches Institut\nFriedrich-Hund-Platz 1\n",
"\nUniversitaet Goettingen\nD37077GoettingenGermany\n"
] | [
"Physikalisches Institut\nFriedrich-Hund-Platz 1",
"Universitaet Goettingen\nD37077GoettingenGermany"
] | [] | In a manganite film without quenched disorder, we show texturing in the form of insulating and metallic stripes above and below the Curie temperature (T_c), respectively, by high-resolution scanning tunneling microscopy/spectroscopy (STM/STS). The formation of these stripes involves competing orbital and charge orders and is an outcome of overlapping electron wave-functions mediated by long-range lattice strain. Contrary to popular perception, the electronically homogeneous stripe phase underlines the efficacy of the lattice strain in bringing about charge-density modulation and in impeding the cross-talk between the order parameters, which otherwise evolve inhomogeneously in the form of orbitally-ordered insulating and orbitally-disordered metallic phases. Complex phases in manganites are a consequence of a direct interplay between spin, charge, and orbital order parameters [1, 2]. The resulting modulated phases observed in manganites, analogous to other oxide materials [3], are termed electronically soft [4], as the competing order parameters show indifference towards the formation of either a homogeneous ordered or an inhomogeneous disordered phase. In colossal magnetoresistant (CMR) manganites [5], of the form La_{1-x}Ca_{x}MnO_3 (0.2 ≤ x ≤ 0.33), the localizing effect of the charge due to the Coulomb repulsion and Jahn-Teller (JT) [6,7] coupling at the Mn-site results in correlated polarons, with CE-type ordering, confined to nano-regions above T_C [8-10]. Such phases symbolize the coupling of the spin, charge | 10.1103/physrevb.75.172407 | [
"https://arxiv.org/pdf/cond-mat/0605712v2.pdf"
] | 9,171,748 | cond-mat/0605712 | 702667bbdf5c82ef67d8ee681cc3db3b2e6dbb09 |
Direct imaging of lattice strain-induced stripe phases in an optimally-doped manganite
L Sudheendra
V Moshnyaga
B Damaschke
K Samwer
Physikalisches Institut
Friedrich-Hund-Platz 1
Universitaet Goettingen
D37077GoettingenGermany
Direct imaging of lattice strain-induced stripe phases in an optimally-doped manganite
In a manganite film without quenched disorder, we show texturing in the form of insulating and metallic stripes above and below the Curie temperature (T_c), respectively, by high-resolution scanning tunneling microscopy/spectroscopy (STM/STS). The formation of these stripes involves competing orbital and charge orders and is an outcome of overlapping electron wave-functions mediated by long-range lattice strain. Contrary to popular perception, the electronically homogeneous stripe phase underlines the efficacy of the lattice strain in bringing about charge-density modulation and in impeding the cross-talk between the order parameters, which otherwise evolve inhomogeneously in the form of orbitally-ordered insulating and orbitally-disordered metallic phases. Complex phases in manganites are a consequence of a direct interplay between spin, charge, and orbital order parameters [1, 2]. The resulting modulated phases observed in manganites, analogous to other oxide materials [3], are termed electronically soft [4], as the competing order parameters show indifference towards the formation of either a homogeneous ordered or an inhomogeneous disordered phase. In colossal magnetoresistant (CMR) manganites [5], of the form La_{1-x}Ca_{x}MnO_3 (0.2 ≤ x ≤ 0.33), the localizing effect of the charge due to the Coulomb repulsion and Jahn-Teller (JT) [6,7] coupling at the Mn-site results in correlated polarons, with CE-type ordering, confined to nano-regions above T_C [8-10]. Such phases symbolize the coupling of the spin, charge
and the orbital order parameters brought about by the lattice degree of freedom. The increase of the one-electron bandwidth below T_c results in a metallic state. The metallicity relaxes the condition for orbital and charge ordering, corroborated by the loss of correlation in the charge order parameter [8-10].
Apart from JT distortions, the transport properties of the manganite are strongly influenced by the size mismatch between the A-site cations (e.g., La and Ca). The random Coulomb potential emanating from the quenched disorder at the A-site bears a strong influence on the electronic characteristics of manganites [11]. Indeed, electronic inhomogeneity, in the form of JT-distorted insulating and undistorted metallic phases, is justified based on the presence of such a disorder [12]. Further, as the JT distortions are stabilized by the additional rotational degrees of freedom [11], it is believed that the phase coexistence and the percolative nature of the metal-insulator transition (MIT) are also intrinsic to manganites [12,13].
The absence of substrate-induced strain within the La_{0.75}Ca_{0.25}MnO_3 (LCMO) film and, in contrast, the presence of a La-Ca cation-ordered rhombohedral superstructure were recently shown to suppress the electronic inhomogeneity down to the 1 nm scale [14].
Nevertheless, the unit-cell deformation and the associated strain arising from the difference in the radii of the La and Ca cations are intrinsic and could persist even in the absence of A-site disorder. As the present study shows, novel homogeneous electronic phases can surface as a corollary to a co-operative effect involving distorted octahedra and the strain field generated by the ordering of the deformed unit-cells.
An epitaxial rhombohedral film exhibiting La and Ca ordering was grown by the metal-organic aerosol technique [14]. STM and STS were performed at a base pressure of ~1-5×10^-10 torr. The film was loaded into the vacuum chamber after it was cleaned with an iso-propanol solution.
The criteria for obtaining atomic resolution in tunneling experiments on manganites were recently expounded, with the results pointing towards reduced screening of the charge due to defect-induced confinement (a trapped polaron) as one such possibility [15]. In hole-doped manganites, it is well known from x-ray scattering experiments that the number of such detectable quasiparticles is, in general, very small, and it is almost non-existent in the rhombohedral phase [16]. This further augments the difficulties in obtaining atomic resolution by tunneling experiments on manganites [15,17,18].
Although the presence of small or large polarons can explain the near-atomic resolution perceived, the formation of stripes with two different periodicities signifies a different origin for the charge localization and structural modulation. Apart from point-defect [15] and strong JT-induced charge localization [17,18], strain effects [19], especially those originating from domain walls [4,20], have a great proclivity towards charge localization and have also been illustrated to yield exotic new phases in manganites. Atomic-scale theory concerning lattice deformation, indeed, postulates such coupled electronic and elastic textures [21]. In the LCMO film, the presence of strain can, at a qualitative level, be described based on the atomic displacements caused by the ordering of the La-Ca cations. A checkered-board arrangement of uniaxially dilated La and compressed Ca unit-cells can be deciphered from the cross-sectional high-resolution transmission electron microscopy (HRTEM) images [14]. The unique ordering of uniaxially strained (e_1) unit-cells generates shear (e_2) and 'shuffle' (s_x and s_y) deformations (Fig. 3a) [19]. These deformations, although symmetrically distributed, control the microscopic electronic and topological features of the manganite film [19]. Hence, the strain associated with the deformations effects structural distortions at the Mn octahedra; this in turn creates a charge disparity (Mn1 and Mn3), enhancing the atomic contrast in the tunneling experiments (Figs. 3b and 3c). As the ordering of the A-cation is long-range, these stripes point towards a correlated phenomenon relating charge modulation, structural distortions, and the associated orbital ordering. Such a correlation would be absent in a small- or large-polaron description [16].
In manganites, based on the electron-phonon coupling and the long-range strain interactions, the presence of bond (along the Mn-O-Mn or <100> directions in cubic notation) and diagonal (<110>) orbital ordering was theoretically predicted [22].
Such bond- or diagonal-type stripes materialize when the quantity C_11 − C_12 − 2C_44, where the C_ij's are elastic moduli, is either greater than zero or less than zero, respectively [22]. In line with this theory, as the structural distortions are reflected in the orbital ordering, the stripes with a periodicity along the <110> directions (Fig. 3b) can be classified as diagonal orbital-ordered stripes [22]. Similarly, the electron polarization along the Mn-O-Mn direction along c yields bond stripes (Fig. 3c). A recent Brillouin scattering experiment on a similar composition has shown an increase in C_11 with decreasing temperature [23].
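For reference, the orientation criterion can be restated compactly in standard elastic-moduli notation (a restatement of the condition from [22], not new analysis):

```latex
% Stripe-orientation criterion (after Ref. [22]):
%   A > 0  =>  bond stripes along <100>;   A < 0  =>  diagonal stripes along <110>.
A = C_{11} - C_{12} - 2\,C_{44}
```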
Clearly, the shear and uniaxial deformations play a vital part in the stripe formation. It must be noted that the orbital stripes are predicted without the overlapping strain field from the lattice [22]. However, in reality, the structural distortions (JT distortions) are stabilized and the JT-induced strain is annealed out due to rotations of the octahedra.
The strain-mediated coupling of the charge and the orbital orders introduces lattice "incommensuration", resulting in a CE-type of charge-orbital ordering, as can be comprehended from the diagonal stripes (Fig. 3b). On the other hand, below T_c, a 2a_p periodicity of the bond stripes implies that the charge-orbital modulation is "commensurate" with the lattice, illustrating a weakened charge-orbital coupling. Further, as the Mn octahedra corresponding to the La unit-cells (Fig. 3a) are associated with a larger anisotropic distortion than the octahedra of the La-Ca unit-cells, the similarly distorted octahedra have a periodicity of 2a_p rather than a_p (Fig. 3c). It must be mentioned that both strain and magnetic interactions affect the bandwidth across the MIT, and as the energetics (~20 meV/Mn) are similar [24,25], these stripes provide signatures of a possible magnetic interaction. Thus, such incommensurate and commensurate stripes [1,4] can also be labeled as antiferro- and ferro-quadrupolar stripes, respectively, arising from a charge-orbital density wave.
As these stripes in the LCMO film are an outcome of inadequate screening of the electron wave-function due to the overlapping long-range strain interaction, a possible self-organization of polarons is suggested. Of relevance to this discussion is the nano-scale polaron-polaron interaction deduced from x-rays in a layered manganite [26]. The STS offers a better understanding of the intricacy of the competing charge and orbital orders in engineering the polaron-polaron correlation, as the difference in the screening of the charge carrier reflects on the conductance of the stripe phase. The tunneling current-voltage characteristic (Fig. 4) on the bond stripes reveals a metallic behavior (red curve) distinct from the diagonal stripes, which appear insulator-like (green curve). Therefore, the room-temperature density of states (∝ dI/dV) with a depletion (dI/dV ≈ 0 as V → 0) near the Fermi level (inset of Fig. 4) substantiates electron-hole localization and orbital ordering within the diagonal stripes [27,28]. Furthermore, the linear part of the I-V curve, which mirrors the 'Drude' part of the conductivity [17], extends to less than 0.2 V in the bond stripes. Hence, the onset of nonlinear I-V characteristics at lower voltages reveals a dominant polaronic nature of the conductance. In addition, the coexistence of metallicity and bond stripes entails charge delocalization and orbital ordering. Such coexistence could be comprehended by evoking an electronic inhomogeneity in the form of orbitally-ordered insulating and orbitally-disordered metallic states within the stripes. However, the conductance spectra on the bond stripes do not support a gap (dI/dV ≈ 0 as V → 0) scenario.
The electronic inhomogeneity in the form of insulating and metallic regions can, therefore, be ruled out. In contrast, this puzzle can be understood as arising from a decoupling of the charge and orbital order parameters: the strain modulates the structural distortions, pinning the orbital order, while, in the limit of weak electron-phonon coupling, the electron itinerancy could be due to reduced dimensionality, as the stripes constitute a form of charge-orbital density wave structure [29].
In conclusion, in the limit of weak electron-phonon coupling, the polaron-polaron interaction leading to a stripe structure, mediated by the elastic continuum due to the lattice mismatch, provides the rationale for stripes in manganites [30]. These strain-induced stripes also provide insight into the complex structure-property relationship brought out by the propensity of manganites towards new phases, such as a tendency towards a fragile CE phase within a rhombohedral structure [31], and also a polaron liquid-like metallic state [29] in the absence of quenched disorder or strong JT distortions.
Mechanically cut Pt-Ir tips were employed to obtain high-resolution images in the constant-current mode (I = 0.1-0.3 nA) with a tip bias ranging from 0.35-0.7 V. Stable high-resolution images were obtained on flat terraces of the as-prepared film both at room and low temperatures (see supplementary information). Layer-by-layer growth with unit-cell heights of 4 Å and, in certain cases, integral multiples of 2 Å was detected by STM. The high-resolution room-temperature STM images of a 45×45 Å^2 area (Figs. 1a and b) demonstrate the stripe-like features with a periodicity of √2·a_p (where a_p is the cubic-perovskite cell length). These stripes reveal two different corrugations: a large corrugation (black arrows in Fig. 1c) with a width (spread) of 6 Å, quite in contrast to the smaller corrugation (orange arrows in Fig. 1c), which is 4 Å wide. The distances between the two nearest large features are ~11 Å (Fig. 1b). High-resolution STM at 115 K is visualized in Figs. 2a and b. One can clearly see the stripe-like features even in the ferromagnetic metallic state (T_c ≈ T_MI = 273 K). The peak-to-peak modulations are ~6.5-7.5 Å (~2a_p). From Figs. 2a and c, non-planar pyramidal-like structures of the stripes are implicit. Such stripe features are contrary to expectations since, in the metallic state, the delocalization of the e_g electron quenches the JT-distorted octahedra, resulting in screening of the charge.
The stripes in Figs. 1a and 1b can thus be classified as diagonal orbital-ordered stripes, although there is an absence of a clear atomic lattice, conceivably due to charge fluctuations. The CE-type stripes with two different LDOS features transpire due to the polarization of the electron wave-function involving next-nearest-neighbour interactions.
Figure captions
Figure 1. Room-temperature (294 K) STM images of the La-Ca ordered La_{0.75}Ca_{0.25}MnO_3 film. a and b, The images (45×45 Å^2) represent the stripes originating from charge-orbital ordering. c, Line profiles corresponding to Figs. a and b. There are two distinct Mn sites (pink arrows): large sites that are broader in comparison to the smaller sites indicated by black and orange arrows. The peak-to-peak distance is ~√2·a_p, indicating that the stripe modulation is along the <110> directions. The tip is held at a positive voltage with respect to the sample; the peak features indicate an 'electron-like' character coming from the occupied states.
Figure 2. Stripes observed on the LCMO film by STM at T << T_c. a, 49.5×49.5 Å^2 (115 K). b, 50×50 Å^2 (115 K). The images show stripes of Mn-octahedra with a peak-to-peak (black arrows) distance of ~2a_p. The modulations of these stripes in certain images are 'pyramidal-like' (see line profiles of a).

Figure 3. Schematic representation of the unit-cell deformations of the LCMO film and the probable nearest and next-nearest interactions leading to stripe phases. a, The Mn displacements deduced from cross-sectional (a-b plane) HRTEM [14]. The unit-cell lattice deformations are classified as long-range (e_1 and e_2) and short-range (s_x and s_y) [19]. Corners of the polygon are occupied by the Mn ion (termed the Mn-site). The strain-free superstructure in the b-a plane is shown by the orange square. b, One of the possible attractive next-nearest interactions is depicted. The black square is the unit-cell in the c-a plane; Mn1, Mn2, and Mn4 are representative Mn^3+-like sites, while Mn3 is Mn^4+-like. The blue block arrows indicate the direction of the next-nearest attraction and are a measure of the electron polarization directions. The polarization of the electron wave-functions generates the two different LDOS features, Mn^3+-like (Mn1) and Mn^4+-like (Mn3) sites, predominantly observed within the diagonal stripes. c, The nearest-neighbour interactions generating bond stripes.
Figure 4. Current-voltage (I-V) characteristics of the stripe phases. Insulating-like behavior at 294 K (green) and a metallic feature at 115 K (50×50 Å^2, red) are obtained on the diagonal (STM shown at the top-left corner) and bond stripes (the corresponding STM is displayed at the bottom-right corner), respectively. The curves are averages of four data sets taken at the points (yellow) selected in the STM images. The room-temperature dI/dV spectrum (top-right inset) shows a depletion of the LDOS near the Fermi level (V = 0). The broken vertical lines indicate the linear part of the I-V curves. The modulations (brown and green) indicate the direction of the charge-orbital density wave, which is also a measure of the variation of the structural distortion.
Acknowledgments

The authors are grateful to P. B. Littlewood for helpful suggestions.
[1] C. H. Chen and S.-W. Cheong, Phys. Rev. Lett. 76, 4042 (1996).
[2] J. C. Loudon, N. D. Mathur, and P. Midgley, Nature 420, 797 (2002).
[3] S. A. Kivelson, E. Fradkin, and V. J. Emery, Nature 393, 550 (1998).
[4] G. C. Milward, M. J. Calderón, and P. B. Littlewood, Nature 433, 607 (2005).
[5] R. von Helmolt, J. Wecker, B. Holzapfel, L. Schultz, and K. Samwer, Phys. Rev. Lett. 71, 2331 (1993).
[6] A. J. Millis, Nature 392, 147 (1998).
[7] R. Kilian and G. Khaliullin, Phys. Rev. B 60, 13458 (1999).
[8] C. P. Adams, J. W. Lynn, Y. M. Mukovskii, A. A. Arsenov, and D. Shulyatev, Phys. Rev. Lett. 85, 3954 (2000).
[9] P. Dai et al., Phys. Rev. Lett. 85, 2553 (2000).
[10] J. M. Zuo and J. Tao, Phys. Rev. B 63, 060407(R) (2001).
[11] L. M. Rodriguez-Martinez and J. P. Attfield, Phys. Rev. B 58, 2426 (1998).
[12] E. Dagotto, Nanoscale Phase Separation and Colossal Magnetoresistance (Springer Series in Solid State Science Vol. 136, Springer, Berlin, Germany, 2002).
[13] M. Uehara, S. Mori, C. H. Chen, and S.-W. Cheong, Nature 399, 560 (1999).
[14] V. Moshnyaga et al., preprint at http://www.arxiv.org/cond-mat/0512350 (2005).
[15] H. M. Rønnow, Ch. Renner, G. Aeppli, T. Kimura, and Y. Tokura, Nature 440, 1025 (2006).
[16] V. Kiryukhin et al., Phys. Rev. B 70, 214424 (2004).
[17] Ch. Renner, G. Aeppli, B.-G. Kim, Y.-A. Soh, and S.-W. Cheong, Nature 416, 518 (2002).
[18] J. X. Ma, D. T. Gillaspie, E. W. Plummer, and J. Shen, Phys. Rev. Lett. 95, 237210 (2005).
[19] K. H. Ahn, T. Lookman, and A. R. Bishop, Nature 428, 401 (2004).
[20] K. H. Ahn, T. Lookman, A. Saxena, and A. R. Bishop, Phys. Rev. B 68, 092101 (2003).
[21] K. H. Ahn, T. Lookman, A. Saxena, and A. R. Bishop, Phys. Rev. B 71, 212102 (2005).
[22] D. I. Khomskii and K. I. Kugel, Phys. Rev. B 67, 134401 (2003).
[23] Md. Motin Seikh, C. Narayana, L. Sudheendra, A. K. Sood, and C. N. R. Rao, J. Phys.: Condens. Matter 16, 4381 (2004).
[24] M. J. Calderón, A. J. Millis, and K. H. Ahn, Phys. Rev. B 68, 100410(R) (2003).
[25] S. Yunoki, T. Hotta, and E. Dagotto, Phys. Rev. Lett. 84, 3714 (2000).
[26] B. J. Campbell et al., Phys. Rev. B 65, 014427 (2001).
[27] L. Brey and P. B. Littlewood, Phys. Rev. Lett. 95, 117205 (2005).
[28] T. Hotta, A. Feiguin, and E. Dagotto, Phys. Rev. Lett. 86, 4922 (2001).
[29] N. Mannella et al., Nature 438, 474 (2005).
[30] J. C. Loudon et al., Phys. Rev. Lett. 94, 097202 (2005).
[31] H. Aliaga et al., Phys. Rev. B 68, 104405 (2003).
| [] |
[
"FlexiBERT: Are Current Transformer Architectures too Homogeneous and Rigid?",
"FlexiBERT: Are Current Transformer Architectures too Homogeneous and Rigid?"
] | [
"Shikhar Tuli [email protected] ",
"Bhishma Dedhia [email protected] ",
"Shreshth Tuli [email protected] ",
"Niraj K Jha [email protected] ",
"\nDept. of Electrical & Computer Engineering\nDepartment of Computing\nPrinceton University Princeton\n08544NJUSA\n",
"\nDept. of Electrical & Computer Engineering\nImperial College London London\nSW7 2AZUK\n",
"\nPrinceton University\n08544PrincetonNJUSA\n"
] | [
"Dept. of Electrical & Computer Engineering\nDepartment of Computing\nPrinceton University Princeton\n08544NJUSA",
"Dept. of Electrical & Computer Engineering\nImperial College London London\nSW7 2AZUK",
"Princeton University\n08544PrincetonNJUSA"
] | [] | The existence of a plethora of language models makes the problem of selecting the best one for a custom task challenging. Most state-of-the-art methods leverage transformerbased models (e.g., BERT) or their variants. Training such models and exploring their hyperparameter space, however, is computationally expensive. Prior work proposes several neural architecture search (NAS) methods that employ performance predictors (e.g., surrogate models) to address this issue; however, analysis has been limited to homogeneous models that use fixed dimensionality throughout the network. This leads to sub-optimal architectures. To address this limitation, we propose a suite of heterogeneous and flexible models, namely FlexiBERT, that have varied encoder layers with a diverse set of possible operations and different hidden dimensions. For better-posed surrogate modeling in this expanded design space, we propose a new graph-similarity-based embedding scheme. We also propose a novel NAS policy, called BOSHNAS, that leverages this new scheme, Bayesian modeling, and second-order optimization, to quickly train and use a neural surrogate model to converge to the optimal architecture. A comprehensive set of experiments shows that the proposed policy, when applied to the FlexiBERT design space, pushes the performance frontier upwards compared to traditional models. FlexiBERT-Mini, one of our proposed models, has 3% fewer parameters than BERT-Mini and achieves 8.9% higher GLUE score. A FlexiBERT model with equivalent performance as the best homogeneous model achieves 2.6× smaller size. FlexiBERT-Large, another proposed model, achieves state-of-the-art results, outperforming the baseline models by at least 5.7% on the GLUE benchmark.1. Here, by heterogeneity, we mean that different encoder layers can have distinct attention operations, feed-forward stack depths, etc. By flexibility, we mean that the hidden dimensions for different encoder layers, in a transformer architecture, are allowed to be mismatched. | 10.1613/jair.1.13942 | [
"https://arxiv.org/pdf/2205.11656v1.pdf"
] | 249,017,980 | 2205.11656 | 1bcd42583a7b4475d3b456678e7f3752acd9edd1 |
FlexiBERT: Are Current Transformer Architectures too Homogeneous and Rigid?
Shikhar Tuli [email protected]
Bhishma Dedhia [email protected]
Shreshth Tuli [email protected]
Niraj K Jha [email protected]
Dept. of Electrical & Computer Engineering
Department of Computing
Princeton University Princeton
08544NJUSA
Dept. of Electrical & Computer Engineering
Imperial College London London
SW7 2AZUK
Princeton University
08544PrincetonNJUSA
FlexiBERT: Are Current Transformer Architectures too Homogeneous and Rigid?
Preprint. In review. Submitted -/-; published -/-
The existence of a plethora of language models makes the problem of selecting the best one for a custom task challenging. Most state-of-the-art methods leverage transformer-based models (e.g., BERT) or their variants. Training such models and exploring their hyperparameter space, however, is computationally expensive. Prior work proposes several neural architecture search (NAS) methods that employ performance predictors (e.g., surrogate models) to address this issue; however, analysis has been limited to homogeneous models that use fixed dimensionality throughout the network. This leads to sub-optimal architectures. To address this limitation, we propose a suite of heterogeneous and flexible models, namely FlexiBERT, that have varied encoder layers with a diverse set of possible operations and different hidden dimensions. For better-posed surrogate modeling in this expanded design space, we propose a new graph-similarity-based embedding scheme. We also propose a novel NAS policy, called BOSHNAS, that leverages this new scheme, Bayesian modeling, and second-order optimization, to quickly train and use a neural surrogate model to converge to the optimal architecture. A comprehensive set of experiments shows that the proposed policy, when applied to the FlexiBERT design space, pushes the performance frontier upwards compared to traditional models. FlexiBERT-Mini, one of our proposed models, has 3% fewer parameters than BERT-Mini and achieves 8.9% higher GLUE score. A FlexiBERT model with equivalent performance as the best homogeneous model achieves 2.6× smaller size. FlexiBERT-Large, another proposed model, achieves state-of-the-art results, outperforming the baseline models by at least 5.7% on the GLUE benchmark.

1. Here, by heterogeneity, we mean that different encoder layers can have distinct attention operations, feed-forward stack depths, etc. By flexibility, we mean that the hidden dimensions for different encoder layers, in a transformer architecture, are allowed to be mismatched.
Introduction
In recent years, self-attention (SA)-based transformer models (Vaswani et al., 2017; Devlin et al., 2019) have achieved state-of-the-art results on tasks that span the natural language processing (NLP) domain. This burgeoning success has largely been driven by large-scale pre-training datasets, increasing computational power, and robust training techniques. A challenge that remains is efficient optimal model selection for a specific task and a set of user requirements. In this context, only those models should be trained that have the maximum predicted performance. This falls in the domain of neural architecture search (NAS) (Zoph & Le, 2016).
Challenges
The design space of transformer models is vast. Several models have been proposed in the past after rigorous search. Popular models include BERT, XLM, XLNet, BART, ConvBERT, and FNet (Devlin et al., 2019;Conneau & Lample, 2019;Yang et al., 2019;Lewis et al., 2020;Jiang et al., 2020;Lee-Thorp et al., 2021). Transformer design involves a choice of several hyperparameters, including the number of layers, size of hidden embeddings, number of attention heads, and size of the hidden layer in the feed-forward network (Khetan & Karnin, 2020). This leads to an exponential increase in the design space, making a brute-force approach to explore the design space computationally infeasible (Ying et al., 2019). The aim is to converge to an optimal model as quickly as possible, by testing the lowest possible number of datapoints (Pham et al., 2018). Moreover, model performance may not be deterministic, requiring heteroscedastic modeling (Ru et al., 2020).
Existing solutions
Recent NAS advancements use various techniques to explore and optimize different models in the deep learning domain, from image recognition to speech recognition and machine translation (Zoph & Le, 2016;Mazzawi et al., 2019). In the computer-vision domain, many convolutional neural network (CNN) architectures have been designed using various search approaches, such as genetic algorithms, reinforcement learning, structure adaptation, etc. Some even introduce new basic operations (Zhang et al., 2018) to enhance performance on different tasks. Many works leverage a performance predictor, often called a surrogate model, to reliably predict model accuracy. Such a surrogate can be trained through active learning by querying a few models from the design space and regressing their performance to the remaining space (under some theoretical assumptions), thus significantly reducing search times (Siems et al., 2020;White et al., 2021b).
However, unlike CNN frameworks (Ying et al., 2019; Tan & Le, 2019) that are only meant for vision tasks, there is no universal framework for NLP that differentiates among transformer architectural hyperparameters. Works that do compare different design decisions often do not consider heterogeneity and flexibility in their search space and explore the space over a limited hyperparameter set (Khetan & Karnin, 2020; Xu et al., 2021; Gao et al., 2021)^1. For instance, Primer (So et al., 2021) only adds depth-wise convolutions to the attention heads; AutoBERT-Zero (Gao et al., 2021) lacks deep feed-forward stacks; AutoTinyBERT (Yin et al., 2021) does not consider linear transforms (LTs) that have been shown to outperform traditional SA operations in terms of parameter efficiency; AdaBERT only considers a design space of convolution and pooling operations. Most works in the field of NAS for transformers target model compression while trying to maintain the same performance (Yin et al., 2021; Wang et al., 2020), which is orthogonal to our objectives in this work, i.e., searching for novel architectures that push the performance frontier.

Table 1: Overview of baseline NAS frameworks for transformer architectures and their search techniques (ES = evolutionary search, DS = differentiable architecture search, ST = super-network training).

Framework | Search technique
Primer (So et al., 2021) | ES
AdaBERT | DS
AutoTinyBERT (Yin et al., 2021) | ST
DynaBERT (Hou et al., 2020) | ST
NAS-BERT | ST
AutoBERT-Zero (Gao et al., 2021) | ES
FlexiBERT (ours) | BOSHNAS
In addition, all previous works only consider rigid architectures. For instance, DynaBERT (Hou et al., 2020) only adapts the width of the network by varying the number of attention heads (and not the hidden dimension of each head), which is only a simple extension to traditional architectures. Further, their individual models still have the same hidden dimension throughout the network. AutoTinyBERT (Yin et al., 2021) and HAT (Wang et al., 2020), among others, fix the input and output dimensions for each encoder layer (see Appendix A.1 for a background on the SA operation), which leads to rigid architectures. Table 1 gives an overview of various baseline NAS frameworks for transformer architectures. It presents the aforementioned works and the respective search techniques they employ. Primer (So et al., 2021) and AutoBERT-Zero (Gao et al., 2021) exploit evolutionary search (ES), which faces various drawbacks that limit elitist algorithms (Dang et al., 2021; White et al., 2021a; Siems et al., 2020). AdaBERT leverages differentiable architecture search (DS), a popular technique used in many CNN design spaces (Siems et al., 2020). On the other hand, some recent works like AutoTinyBERT (Yin et al., 2021), DynaBERT (Hou et al., 2020), and NAS-BERT leverage super-network training (ST), where one large transformer is trained and its sub-networks are searched in a one-shot manner. However, this technique is not amenable to diverse design spaces, as the super-network size would drastically increase, limiting the gains from weight transfer to the relatively minuscule sub-network. Moreover, previous works limit their search to either the standard SA operation, i.e., the scaled dot-product (SDP), or the convolution operation. We extend the basic attention operation to also include the weighted multiplicative attention (WMA). Taking motivation from recent advances with LT-based transformer models (Lee-Thorp et al., 2021), we also add the discrete Fourier transform (DFT) and the discrete cosine transform (DCT) to our design space. AutoTinyBERT and DynaBERT also allow adaptive widths in the transformer architectures in their design space; however, each instance still has the same dimensionality throughout the network (in other words, every encoder layer has the same hidden dimension, as explained above). We detail why this is inherently a limitation in traditional transformer architectures in Appendix A.1. Finally, we also leverage a novel NAS technique: Bayesian Optimization using Second-Order Gradients and Heteroscedastic Models for Neural Architecture Search (BOSHNAS).
Our contributions
To address the limitations of homogeneous and rigid models, we make the following technical contributions:
• We expand the design space of transformer hyperparameters to incorporate heterogeneous architectures that venture beyond simple SA by employing other operations like convolutions and LTs.
• We propose novel projection layers and relative/trained positional encodings to make hidden sizes flexible across layers -hence the name FlexiBERT.
• We propose Transformer2vec that uses similarity measures to compare computational graphs of transformer models to obtain a dense embedding that captures model similarity in a Euclidean space.
• We propose a novel NAS framework, namely, BOSHNAS. It uses a neural network as a heteroscedastic surrogate model and second-order gradient-based optimization using backpropagation to input (GOBI) (Tuli et al., 2021) to speed up search for the next query in the exploration process. It leverages nearby trained models to transfer weights in order to reduce the amortized search time for every query.
• Experiments on the GLUE benchmark (Wang et al., 2018) show that BOSHNAS applied to the FlexiBERT design space results in a score improvement of 0.4% compared to the baseline, i.e., NAS-BERT . The proposed model, FlexiBERT-Mini, has 3% fewer parameters than BERT-Mini and achieves 8.9% higher GLUE score. FlexiBERT also outperforms the best homogeneous architecture by 3%, while requiring 2.6× fewer parameters. FlexiBERT-Large, our BERT-Large (Devlin et al., 2019) counterpart, outperforms the state-of-the-art models by at least 5.7% average accuracy on the first eight tasks in the GLUE benchmark (Wang et al., 2018).
The rest of the paper is organized as follows. Section 2 presents related work. Section 3 describes the set of steps and decisions that undergird the FlexiBERT framework. In Section 4, we present the results of design space exploration experiments. Finally, Section 5 concludes the article.
Related Work
We briefly describe related work next.
Transformer design space
Traditionally, transformers have primarily relied on the SA operation (details in Appendix A.1). Nevertheless, several works have proposed various compute blocks to reduce the number of model parameters and hence computational cost, without compromising performance. For instance, ConvBERT uses dynamic span-based convolutional operations that replace SA heads to directly model local dependencies . Recently, FNet improved model efficiency using LTs instead (Lee-Thorp et al., 2021). MobileBERT, another recent architecture, uses bottleneck structures and multiple feed-forward stacks to obtain smaller and faster models while achieving competitive results on well-known benchmarks (Sun et al., 2020). For completeness, we present other previously proposed advances to improve the BERT model in Appendix A.2.
Neural architecture search
NAS is an important machine learning technique that algorithmically searches for new neural network architectures within a pre-specified design space under a given objective (He et al., 2021). Prior work has implemented NAS using a variety of techniques, albeit limited to the CNN design space. A popular approach is to use a reinforcement learning algorithm, REINFORCE, that has been shown to be superior to other tabular approaches (Williams, 1992). Other approaches include Gaussian-Process-based Bayesian Optimization (GP-BO) (Snoek et al., 2012), Evolutionary Search (ES) Lu et al., 2019), etc. However, these methods come with challenges that limit their ability to reach state-of-the-art results in the CNN design space (White et al., 2021a).
Recently, NAS has also seen application of surrogate models for performance prediction in CNNs (Siems et al., 2020). This results in training of much fewer models to predict accuracy for the entire design space under some confidence constraints. However, these predictors are computationally expensive to train. This leads to a bottleneck, especially in large design spaces, in the training of subsequent models since new queries are produced only after this predictor is trained for every batch of trained models in the search space. Siems et al. (2020) use a Graph Isomorphism Net (Xu et al., 2019) that regresses performance values directly on the computational graphs formed for each CNN model.
Although previously restricted to CNNs (Zoph et al., 2017), NAS has recently seen applications in the transformer space as well. So et al. (2019) use standard NAS techniques to search for optimal transformer architectures. However, their method requires every new model to be trained from scratch. They do not employ knowledge transfer, in which weights from previously trained neighboring models are used to speed up subsequent training. This is important in the transformer space since pre-training every model is computationally expensive. Further, the attention heads in their model follow the same dimensionality, i.e., are not fully flexible.
One of the state-of-the-art NAS techniques, BANANAS, implements Bayesian Optimization (BO) over a neural network model and predicts performance uncertainty using ensemble networks that are, however, too compute-heavy (White et al., 2021a). BANANAS uses mutation/crossover on the current set of best-performing models and obtains the next best predicted model in this local space. Instead, we propose the use of GOBI (Tuli et al., 2021) in order to efficiently search for the next query in the global space. Thanks to random cold restarts, GOBI can search over diverse models in the architecture space. BANANAS also uses path embeddings, which have been shown to perform sub-optimally for search over a diverse space (Cheng et al., 2021).
Graph Embeddings that Drive NAS
Many works on NAS for CNNs have primarily used graph embeddings to model their performance predictor. These embeddings are formed for each computational graph, representing a specific CNN architecture in the design space. A popular approach to learning with graph-structured data is to make use of graph kernel functions that measure similarity between graphs. A recent work, NASGEM (Cheng et al., 2021), uses the Weisfeiler-Lehman (WL) sub-tree kernel, which compares tree-like substructures of two computational graphs. This helps distinguish between substructures that other kernels, like the random walk kernel, may deem identical (Shervashidze et al., 2011). Also, the WL kernel has an attractive computational complexity. This has made it one of the most widely used graph kernels. Graph-distance-driven NAS often leads to enhanced representation capacity that yields optimal search results (Cheng et al., 2021). However, the WL kernel only computes sub-graph similarities based on the overlap in graph nodes. It does not consider whether two nodes are inherently similar or not. For example, a computational 'block' (or its respective graph node) for an SA head with h = 128 and o = SDP would be closer to another attention block with, say, h = 256 and o = WMA, but would be farther from a block representing a feed-forward layer (for details on SA types, see Section 3.1).
Once we have similarities computed between every possible graph pair in the design space, we next learn dense embeddings whose Euclidean distances should follow the similarity function. These embeddings are helpful not only for effective visualization of the design space, but also for fast computation of neighboring graphs in the active-learning loop. Further, a dense embedding helps us practically train a finite-input surrogate function (as opposed to the sparse path encodings used in White et al., 2021a). Many works have achieved this using different techniques. Narayanan et al. (2017) train task-specific graph embeddings using a skip-gram model and negative sampling, taking inspiration from word2vec (Mikolov et al., 2013). In this work, we take inspiration from GloVe instead (Pennington et al., 2014), by applying manifold learning to all distance pairs (Kruskal, 1964). Hence, using global similarity distances built over domain knowledge, and batched gradient-based training, we obtain the proposed Transformer2vec embeddings that are superior to traditional generalized graph embeddings.
We take motivation from NASGEM (Cheng et al., 2021), which showed that training a WL kernel-guided encoder has advantages in scalable and flexible search. Thus, we train a performance predictor on the Transformer2vec embeddings, which not only aid in transfer of weights between neighboring models, but also support better-posed continuous performance approximation. More details on the computation of these embedding are given in Section 3.3.
Methodology
In this work, we train a heteroscedastic surrogate model that predicts the performance of a transformer architecture and use it to run second-order optimization in the design space. We do this by decoupling the training procedure from the pre-processing of the embedding of every model in the design space to speed up training. First, we train embeddings to map the space of computational graphs to a Euclidean space (Transformer2vec) and then train the surrogate model on the embeddings.
Our work involves exploring a vast and heterogeneous design space and searching for optimal architectures with a given task. To this end, we (a) define a design space via a flexible set of architectural choices (see Section 3.1), (b) generate possible computational graphs (G; see Section 3.2), (c) learn an embedding for each point in the space using a distance metric for graphs (∆; see Section 3.3), and (d) employ a novel search technique (BOSHNAS) based on surrogate modeling of the performance and its uncertainty over the continuous embedding space (see Section 3.4). In addition, to tackle the enormous design space, we propose a hierarchical search technique that iteratively searches over finer-grained models derived from (e) a crossover of the best models obtained in the current iteration and their neighbors. Figure 1 gives a broad overview of the FlexiBERT pipeline as explained above. An unrolled version of this iterative flow is presented below:
Design Space → G_1 --(T)--> Δ_1 --(BOSHNAS)--> g* --(crossover)--> G_2 --(T)--> ...
However, for simplicity of notation, we omit the iteration index in further references. We now discuss the key elements of this pipeline in detail.
FlexiBERT Design Space
We now describe the FlexiBERT design space, i.e., box (a) in Figure 1.
Table 2: Design space description; the subscript j indexes the encoder layer.

Design element | Allowed values
Number of encoder layers (l) | {2, 4}
Type of attention operation (o_j) | {SA, LT, DSC}
Number of operation heads (n_j) | {2, 4}
Hidden dimension (h_j) | {128, 256}
Feed-forward dimension (f_j) | {512, 1024}
Number of feed-forward stacks | {1, 3}
Operation parameters (p_j): if o_j = SA, self-attention type | {SDP, WMA}
  else if o_j = LT, linear transform type | {DFT, DCT}
  else if o_j = DSC, convolution kernel size | {5, 9}
Set of operations in FlexiBERT
The traditional BERT model is composed of multiple layers, each containing a bidirectional multi-headed SA module followed by a feed-forward module. Several modifications have been proposed to the original encoder, primarily to the attention module. This gives rise to a richer design space. We consider WMA-based SA in addition to SDP-based operations (Luong et al., 2015). We also incorporate the LT-based attention used in FNet (Lee-Thorp et al., 2021) and the dynamic-span-based convolution (DSC) used in ConvBERT, in place of the vanilla SA mechanism. Whereas the original FNet implementation uses the DFT, we also consider the DCT. The motivation behind using the DCT is its widespread application in lossy data compression, which we believe can lead to sparse weights, thus leaving room for optimizations with sparsity-aware machine learning accelerators (Yu & Jha, 2022). Our design space allows variable kernel sizes for convolution-based attention. Consolidating different attention module types that vary in their computational costs into a single design space enables the models to have inter-layer variance in expression capacity. Inspired by MobileBERT (Sun et al., 2020), we also consider architectures with multiple feed-forward stacks. We summarize the entire design space with the range of each operation type in Table 2. The ranges of different hyperparameters are in accordance with the design space spanned by BERT-Tiny to BERT-Mini (Turc et al., 2019), with additional modules included as discussed. We call this the Tiny-to-Mini space. This restricts our curated testbed to models with up to 3.3 × 10^7 trainable parameters. This curated parameter space allows us to perform extensive experiments, comparing the proposed approach against various baselines (see Section 4.3).
Every model in the design space can therefore be expressed via a model card, a dictionary containing the chosen value for each design decision. BERT-Tiny (Turc et al., 2019), in this formulation, can be represented as the model card {l: 2, o: [SA, SA], n: [2, 2], h: [128, 128], f: [[512], [512]], p: [SDP, SDP]}, where the length of the list for every entry in f denotes the size of the feed-forward stack. The model card can be used to derive the computational graph of the model using smaller modules inferred from the design choice (details in Section 3.2).
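As an illustration, the model card can be encoded as a plain dictionary and validated against the ranges in Table 2; the field names mirror the text, while the validation logic is our own sketch, not released code.

```python
# Illustrative encoding of the BERT-Tiny model card and a design-space check.
BERT_TINY = {
    "l": 2,
    "o": ["SA", "SA"],
    "n": [2, 2],
    "h": [128, 128],
    "f": [[512], [512]],   # one feed-forward layer per stack
    "p": ["SDP", "SDP"],
}

ALLOWED = {"o": {"SA", "LT", "DSC"}, "h": {128, 256}, "f": {512, 1024}}

def is_valid(card: dict) -> bool:
    layers = card["l"]
    return (
        all(len(card[k]) == layers for k in ("o", "n", "h", "f", "p"))
        and all(o in ALLOWED["o"] for o in card["o"])
        and all(h in ALLOWED["h"] for h in card["h"])
        and all(dim in ALLOWED["f"] for stack in card["f"] for dim in stack)
        and all(len(stack) in {1, 3} for stack in card["f"])
    )

assert is_valid(BERT_TINY)
```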
Flexible hidden dimensions
In traditional transformer architectures, the flow of information is restricted through the use of a constant embedding dimension across the network (a matrix of dimensions N_T × h from one layer to the next, where N_T denotes the number of tokens and h the hidden dimension; more details in Appendix A.1). We allow architectures in our design space to have flexible dimensions across layers, which, in turn, enables different layers to capture information of different dimensions as the network learns more abstract features deeper into the network. For this, we make the following modifications:
• Projection layers: We add an affine projection network between encoder layers with dissimilar hidden sizes to transform encoding dimensionality.
• Relative positional encoding: The vanilla-BERT implementation uses an absolute positional encoding at the input and propagates it ahead through residual connections.
Since we relax the restriction of a constant hidden size across layers, this is not applicable to many models in our design space (as the learned projections for absolute encodings may not be one-to-one). Instead, we add a relative positional encoding at each layer (Shaw et al., 2018;Huang et al., 2018;Yang et al., 2019). Such an encoding can entirely replace absolute positional encodings with relative position representations learned using the SA mechanism. Whereas the SA module implementation remains the same as in previous works, for DSC-based and LT-based attention, we learn the relative encodings separately using SA and add them to the output of the attention module.
Formally, let Q and V denote the query and the value layers, respectively. Let R denote the relative embedding tensor that is to be learned. Let Z and X denote the output and the input tensors of the attention module, respectively. In addition, let us define LT-based attention and DSC-based attention as LT(·) and DSC(·), respectively. Then,
RelativeAttention(X) = softmax(QR^T / √d_Q) V
Z_LT = LT(X) + RelativeAttention(X)
Z_DSC = DSC(X) + RelativeAttention(X)
It should be noted that the proposed approach is only applicable when the positional encodings are trained, instead of being predetermined (Vaswani et al., 2017). Thanks to relative and trained positional encodings, we are able to make the dimensionality of the data flow flexible across the network layers. This also means that each layer in the feed-forward stack can have a distinct hidden dimension.
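A minimal PyTorch sketch of the LT branch with the learned relative-position term follows the equations above; the module layout, the banded gather over relative offsets, and the FNet-style 2D FFT are illustrative assumptions rather than the exact FlexiBERT implementation.

```python
# Illustrative sketch: LT-based attention plus a learned relative-position term.
import math
import torch
import torch.nn as nn

class LTWithRelativeAttention(nn.Module):
    def __init__(self, hidden_dim: int, max_len: int = 512):
        super().__init__()
        self.query = nn.Linear(hidden_dim, hidden_dim)
        self.value = nn.Linear(hidden_dim, hidden_dim)
        # R: one learnable embedding per relative offset in [-(max_len-1), max_len-1].
        self.rel_emb = nn.Parameter(torch.randn(2 * max_len - 1, hidden_dim))
        self.max_len = max_len

    def relative_scores(self, q: torch.Tensor) -> torch.Tensor:
        # q: (batch, seq, hidden) -> (batch, seq, seq) scores from Q R^T,
        # gathered along the band of relative offsets (i - j).
        seq = q.size(1)
        pos = torch.arange(seq, device=q.device)
        r = self.rel_emb[pos[:, None] - pos[None, :] + self.max_len - 1]
        return torch.einsum("bih,ijh->bij", q, r)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z_lt = torch.fft.fft2(x).real                 # FNet-style token mixing
        q, v = self.query(x), self.value(x)
        scores = self.relative_scores(q) / math.sqrt(q.size(-1))
        return z_lt + torch.softmax(scores, dim=-1) @ v  # Z_LT = LT(X) + RelAttn(X)
```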
Graph Library
We now describe the graph library, i.e., box (b) in Figure 1.
Block-level computational graphs
To learn a lower-dimensional dense manifold of the given design space, characterized by a large number of FlexiBERT models, we convert each model into a computational graph. This is formulated based on the forward flow of connections for each compute block. For our design space, we take all possible combinations of the compute blocks derived from the design decisions presented in Table 2 (see Appendix B.1). Using this design space and the set of compute blocks, we create all possible computational graphs within the design space for every transformer model. We then use recursive hashing as follows (Ying et al., 2019). For every node in this graph, we concatenate the hash of its inputs, the hash of that node itself, and the hash of its outputs, and then take the hash of the result. We use SHA256 as our hashing function. Doing this for all nodes and then hashing the concatenated hashes gives us the resultant hash of a given computational graph. This helps us detect isomorphic graphs and remove redundancy. Figure 2 shows the block-level computational graph for BERT-Tiny. Using the connection patterns for every possible block permutation (as presented in Appendix B.1), we can generate multiple graphs for the given design space.
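The recursive hashing step can be sketched in a few lines; the graph encoding (a dict mapping a node id to its block label and predecessor list) is our assumption for illustration.

```python
# Illustrative sketch of the SHA256-based graph hashing described above.
import hashlib

def sha256(s: str) -> str:
    return hashlib.sha256(s.encode()).hexdigest()

def graph_hash(nodes: dict) -> str:
    # nodes: {node_id: (block_label, [predecessor_ids])}
    successors = {n: [] for n in nodes}
    for n, (_, preds) in nodes.items():
        for p in preds:
            successors[p].append(n)
    per_node = []
    for n, (label, preds) in sorted(nodes.items()):
        in_hash = sha256("".join(sha256(nodes[p][0]) for p in sorted(preds)))
        out_hash = sha256("".join(sha256(nodes[s][0]) for s in sorted(successors[n])))
        # Concatenate the hash of the inputs, of the node, and of the outputs,
        # then hash the result.
        per_node.append(sha256(in_hash + sha256(label) + out_hash))
    # Hash the concatenated per-node hashes to obtain the graph's hash.
    return sha256("".join(per_node))
```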
Levels of hierarchy
The total number of possible graphs in the design space with heterogeneous feed-forward hidden layers is ∼3.32 billion. This is substantially larger than any transformer design space used in the past.
To make our approach tractable, we propose a hierarchical search method. Each model in the design space can be considered to be composed of multiple stacks, where each stack contains at least one encoder layer. In the first step, we restrict each stack to s = 2 layers, where each layer in a stack shares the same design configuration. Naturally, this limits the search space size (the set of all graphs in this space is denoted by G 1 ). Hence, for instance, BERT-Tiny falls under G 1 since the two encoder layers have the same configuration. We learn embeddings in this space and then run NAS to obtain the best-performing models. In the subsequent step, we consider a design space constituted by a finer-grained neighborhood of these models. The neighborhood is derived by pairwise crossover between the best-performing models and their neighbors in a space where the number of layers per stack is set to s/2 = 1, denoted by G 2 (details in Appendix B.3). Finally, we include heterogeneous feed-forward stacks (s = 1 * ) and denote the space by G 3 .
Transformer2vec
We now describe the Transformer2vec embedding and how we create an embedding library from a graph library G, i.e., box (c) in Figure 1.
Graph edit distance
Taking inspiration from Cheng et al. (2021) and Pennington et al. (2014), we train dense embeddings using a global distance metric, the Graph Edit Distance (GED) (Abu-Aisheh et al., 2015). These embeddings enable fast derivation of neighboring graphs in the active learning loop to facilitate transfer of weights. We call them Transformer2vec embeddings. Unlike other approaches, such as the WL kernel, GED bakes domain knowledge into graph comparisons, as explained in Section 2.3, by using a weighted sum of node insertion, deletion, and substitution costs.
For the GED computation, we first sort all possible compute blocks in the order of their computational complexity. Then, we weight the insertion and deletion cost for every block based on its index in this sorted list, and the substitution cost between two blocks based on the difference in the indices in this sorted list. For computing the GED, we use a depth-first algorithm that requires less memory than traditional methods (Abu-Aisheh et al., 2015).
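This cost scheme can be sketched as follows, e.g., with NetworkX's built-in `graph_edit_distance` (the block ordering below is illustrative; the actual complexity ranking and the depth-first algorithm of Abu-Aisheh et al. (2015) are as described above). Nodes are assumed to carry a "block" attribute.

```python
import networkx as nx

# Compute blocks sorted by computational complexity (illustrative ordering).
BLOCK_ORDER = ["add_norm", "lt_dft", "lt_dct", "dsc_5", "dsc_9",
               "sa_sdp", "sa_wma"]
IDX = {b: i + 1 for i, b in enumerate(BLOCK_ORDER)}

def ins_cost(n):            # insertion weighted by the block's complexity index
    return IDX[n["block"]]

def del_cost(n):            # deletion weighted the same way
    return IDX[n["block"]]

def subst_cost(n1, n2):     # substitution weighted by the index difference
    return abs(IDX[n1["block"]] - IDX[n2["block"]])

def weighted_ged(g1: nx.DiGraph, g2: nx.DiGraph) -> float:
    return nx.graph_edit_distance(
        g1, g2,
        node_subst_cost=subst_cost,
        node_del_cost=del_cost,
        node_ins_cost=ins_cost,
    )
```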
Training embeddings
Given that there are S graphs in G, we compute the GED for all possible computational graph pairs. This gives us a dataset of $N = \binom{S}{2}$ distances. To train the embedding, we minimize the mean-square error between the predicted Euclidean distance and the corresponding GED. For the design space in consideration, embeddings of d dimensions are generated for every level of the hierarchy. Concretely, to train the embedding T, we minimize the loss
$$\mathcal{L}_T = \sum_{1 \le i \le N,\ 1 \le j \le N,\ i \ne j} \big( d(T(g_i), T(g_j)) - \mathrm{GED}(g_i, g_j) \big)^2,$$
where d(·, ·) is the Euclidean distance and the GED is calculated for the corresponding computational graphs $g_i, g_j \in G$.
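A minimal PyTorch sketch of this fit (an MDS-style optimization; tensor shapes and hyperparameters are illustrative assumptions):

```python
import itertools
import torch

def train_embeddings(geds, num_graphs, d=16, epochs=500, lr=1e-2):
    """Fit d-dimensional Transformer2vec embeddings to pairwise GEDs.

    `geds[(i, j)]` holds GED(g_i, g_j) for i < j; the free parameters are
    the embeddings themselves.
    """
    emb = torch.randn(num_graphs, d, requires_grad=True)
    opt = torch.optim.Adam([emb], lr=lr)
    pairs = list(itertools.combinations(range(num_graphs), 2))
    for _ in range(epochs):
        opt.zero_grad()
        # Squared error between embedding distances and target GEDs.
        loss = sum((torch.dist(emb[i], emb[j]) - geds[(i, j)]) ** 2
                   for i, j in pairs)
        loss.backward()
        opt.step()
    return emb.detach()
```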
Weight transfer among neighboring models
Pre-training each model in the design space is computationally expensive. Hence, we rely on weight sharing to initialize a query model in order to directly fine-tune it and minimize exploration time (details in Appendix B.2). We do this by generating k nearest neighbors of a graph in the design space (we use k = 100 for our experiments). Naturally, we would like to transfer weights from the corresponding fine-tuned neighbor that is closest to the query, as such models intuitively have similar initial internal representations.
We calculate this similarity using a biased overlap measure that counts the number of encoder layers, from the input to the output, that are common between the query graph and a neighbor (i.e., have exactly the same set of hyperparameter values). We stop counting the overlap when we encounter differing encoder layers, regardless of subsequent overlaps. Under this ranking, there could be more than one graph with the same biased overlap with the current graph. Since the internal representations learned also depend on the subsequent set of operations, we break ties based on the embedding distance of these graphs from the current graph. This gives us, for every model q, a set of neighbors, denoted by N q , that are ranked based on both the biased overlap and the embedding distance. It increases the probability of finding a trained neighbor with high overlap.
As a hard constraint, we only consider transferring weights if the biased overlap fraction ($O_f(q, n) = \text{biased overlap}/l_q$, where q is the query model, n ∈ N q is the neighbor in consideration, and l_q is the number of layers in q) between the queried model and its neighbor is above a threshold τ. If the constraint is met, the weights of the shared part from the corresponding neighbor are transferred to the query and the query is fine-tuned. Otherwise, we pre-train the query. The weight transfer operation is denoted by W q ← W n .
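The overlap computation and donor selection can be sketched as below (the layer encoding and attribute names are hypothetical, not the paper's exact data structures):

```python
def biased_overlap(query_layers, neighbor_layers):
    """Count matching encoder layers from the input until the first mismatch.

    Layers are hyperparameter tuples, e.g. ("SA", 128, 2, (512,), "SDP").
    """
    count = 0
    for lq, ln in zip(query_layers, neighbor_layers):
        if lq != ln:
            break            # stop at the first differing layer
        count += 1
    return count

def pick_donor(query, neighbors, embed_dist, tau=0.8):
    """Rank neighbors by biased overlap, break ties by embedding distance,
    and return a donor only if the overlap fraction O_f clears tau."""
    ranked = sorted(neighbors,
                    key=lambda n: (-biased_overlap(query.layers, n.layers),
                                   embed_dist(query, n)))
    if not ranked:
        return None
    best = ranked[0]
    o_f = biased_overlap(query.layers, best.layers) / len(query.layers)
    return best if o_f >= tau else None   # None => pre-train the query
```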
BOSHNAS
We now describe the BOSHNAS search policy, i.e., box (d) in Figure 1.
Uncertainty types
To overcome the challenges of an unexplored design space, it is important to consider the uncertainty in model predictions to guide the search process. Predicting model performance deterministically is not enough to estimate the next most promising model. We therefore leverage upper confidence bound (UCB) exploration on the predicted performance of unexplored models (Russell & Norvig, 2010). Uncertainty in the prediction could arise not only from approximations in the surrogate modeling process, but also from parameter initializations and variations in model performance due to different training recipes. These are called epistemic and aleatoric uncertainties, respectively. The former, also called reducible uncertainty, arises from a lack of knowledge or information; the latter, also called irreducible uncertainty, refers to the inherent variation in the system to be modeled.
Surrogate model
In BOSHNAS, we use Monte-Carlo (MC) dropout (Gal & Ghahramani, 2016) and a Natural Parameter Network (NPN) (Wang et al., 2016) to model the epistemic and aleatoric uncertainties, respectively. The NPN not only provides a distinct estimate of the aleatoric uncertainty that can be used for optimizing the training recipe once we are close to the optimal architecture, but also serves as a superior model to Gaussian Processes, Bayesian Neural Networks (BNNs), and other Fully-Connected Neural Networks (FCNNs) (Tuli et al., 2021). Consider the NPN network f_S(x; θ) with a transformer embedding x as input and parameters θ. The output of such a network is the pair (µ, σ) ← f_S(x; θ), where µ is the predicted mean performance and σ is the aleatoric uncertainty. To model the epistemic uncertainty, we use two deep surrogate models: (1) a teacher (g_S) and (2) a student (h_S) network. The teacher network is a surrogate for the performance of a transformer, using its embedding x as input; it is an FCNN with MC dropout (parameters θ′). To compute the epistemic uncertainty, we generate n samples using g_S(x, θ′). The standard deviation of the sample set is denoted by ξ. To run GOBI (Tuli et al., 2021) and avoid numerical gradients due to their poor performance, we use a student network (an FCNN with parameters θ″) that directly predicts $\hat{\xi}$ ← h_S(x, θ″), a surrogate of ξ.
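A sketch of the epistemic part (MC dropout over a teacher FCNN); layer sizes and the dropout rate below are illustrative assumptions:

```python
import torch
import torch.nn as nn

class Teacher(nn.Module):
    """FCNN with dropout; kept in train mode at inference for MC sampling."""
    def __init__(self, d=16, hidden=64, p=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

def epistemic_uncertainty(teacher: Teacher, x: torch.Tensor, n: int = 100):
    """xi: standard deviation of n stochastic forward passes (MC dropout)."""
    teacher.train()  # keep dropout active at inference time
    with torch.no_grad():
        samples = torch.stack([teacher(x) for _ in range(n)])
    return samples.std(dim=0)
```

The student network is then regressed onto these ξ values so that differentiable gradients of the epistemic term are available to GOBI.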
Active learning and optimization
For a design space G, we first form an embedding space ∆ by transforming all graphs in G using the Transformer2vec embedding. Assuming we have the three networks f_S, g_S, and h_S for our surrogate model, we use the following UCB estimate:
$$\mathrm{UCB} = \mu + k_1 \cdot \sigma + k_2 \cdot \hat{\xi} = f_S(x; \theta)[0] + k_1 \cdot f_S(x; \theta)[1] + k_2 \cdot h_S(x, \theta''), \tag{1}$$
where x ∈ ∆, and k_1 and k_2 are hyperparameters.
To generate the next transformer to test, we run GOBI using neural network inversion and the AdaHessian optimizer (Yao et al., 2021), which uses second-order updates to x ($\nabla^2_x \mathrm{UCB}$), until convergence. From this, we get a new query embedding, x′. The nearest transformer architecture is found based on the Euclidean distance over all available transformer architectures in the design space ∆, giving the next closest model x. This model is fine-tuned (or pre-trained if there is no nearby trained model with sufficient overlap; see Section 3.3) on the required task to obtain its performance. Once the new datapoint, (x, o), is received, we train the models using the following loss functions on the updated corpus δ:
$$\mathcal{L}_{\mathrm{NPN}}(f_S, x, o) = \sum_{(x,o) \in \delta} \frac{(\mu - o)^2}{2\sigma^2} + \frac{1}{2} \ln \sigma^2,$$
$$\mathcal{L}_{\mathrm{Teacher}}(g_S, x, o) = \sum_{(x,o) \in \delta} \big( g_S(x, \theta') - o \big)^2,$$
$$\mathcal{L}_{\mathrm{Student}}(h_S, x) = \sum_{x,\ \forall (x,o) \in \delta} \big( h_S(x, \theta'') - \xi \big)^2, \tag{2}$$
where µ, σ = f_S(x, θ), and ξ is obtained by sampling g_S(x, θ′). The first is the aleatoric loss used to train the NPN model (Wang et al., 2016); the other two are squared-error loss functions. Appendix B.4 presents the flow of these models in a schematic. We run multiple random cold restarts of GOBI to get multiple queries for the next step in the search process.

Algorithm 1 summarizes the BOSHNAS workflow. Starting from an initial pre-trained set δ in the first level of hierarchy G 1 , we run the following steps until convergence in a multi-worker compute cluster. To trade off between exploration and exploitation, we consider two probabilities: uncertainty-based exploration (α) and diversity-based exploration (β). With probability 1 − α − β, we run second-order GOBI using the surrogate model to minimize UCB in Eq. (1). Adding the converged point (x, o) to δ, we minimize the loss values in Eq. (2) (line 6 in Algorithm 1). We then generate a new query point, transfer weights from a neighboring model, and train it (lines 7-11). With probability α, we sample the search space using the combination of aleatoric and epistemic uncertainties, k_1 · σ + k_2 · ξ̂, to find a point where the performance estimate is uncertain (line 15). To avoid getting stuck in a localized search subset, we also choose a random point with probability β (line 18). Once we converge in the first level, we continue with the second and third levels, G 2 and G 3 , as described in Section 3.2.

Algorithm 1: BOSHNAS
Result: best architecture
1  Initialize: overlap threshold (τ), convergence criterion, uncertainty sampling prob. (α), diversity sampling prob. (β), surrogate model (f_S, g_S, and h_S) on initial corpus δ, design space g ∈ G ⇔ x ∈ ∆;
2  while convergence criterion not met do
3      wait till a worker is free;
4      if prob ∼ U(0, 1) < 1 − α − β then
5          δ ← δ ∪ {new performance point (x, o)};
6          fit(surrogate, δ) using Eqn. (2);
7          x ← GOBI(f_S, h_S);                 /* Optimization step */
8          for n in N_x do
9              if n is trained & O_f(x, n) ≥ τ then
10                 W_x ← W_n;
11                 send x to worker;
12                 break;
13     else
14         if 1 − α − β ≤ prob. < 1 − β then
15             x ← argmax(k_1 · σ + k_2 · ξ̂);  /* Uncertainty sampling */
16             send x to worker;
17         else
18             send random x to worker;         /* Diversity sampling */
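The three-way sampling decision of Algorithm 1 reduces to the following sketch; the `run_gobi` and `ucb_terms` callables stand in for the GOBI step and the k_1·σ + k_2·ξ̂ estimate described above.

```python
import random

def next_query(alpha, beta, run_gobi, ucb_terms, design_space):
    """One scheduling decision of Algorithm 1 (sketch).

    run_gobi() returns the GOBI-optimized query; ucb_terms(x) returns
    k1 * sigma + k2 * xi_hat for a candidate embedding x.
    """
    r = random.random()
    if r < 1 - alpha - beta:
        return run_gobi()                        # optimization step (line 7)
    if r < 1 - beta:
        return max(design_space, key=ucb_terms)  # uncertainty sampling (line 15)
    return random.choice(design_space)           # diversity sampling (line 18)
```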
Experimental Results
In this section, we show how the FlexiBERT model obtained from BOSHNAS outperforms the baselines.
Setup
For our experiments, we set the number of layers in each stack to s = 2 for the first level of the hierarchy, where models have the same configurations in every stack. In the second level, we use s = 1. Finally, we also make the feed-forward stacks heterogeneous (s = 1 * ) in the third level (details given in Section 3.2). For the range of design choices in Table 2 and setting s = 2, we obtained 9312 unique graphs after removing isomorphic graphs. The dimension of the Transformer2vec embedding is set to d = 16 after running grid search. To do this, we minimize the distance prediction error while also keeping d small using knee-point detection. The hyperparameter values in Algorithm 1 are obtained through grid search. We use overlap threshold τ = 80%, α = β = 0.1, and k 1 = k 2 = 0.5 in our experiments. The convergence criterion is met in BOSHNAS when the change in performance is within 10 −4 for five iterations. Further experimental setup details are given in Appendix B.5.
Pre-training and Fine-tuning Models
Our pre-training recipe is adapted from the one used in RoBERTa, proposed by Liu et al. (2019), with slight variations in order to reduce the training budget (details in Appendix B.5).
We initialize the architecture space with models adapted from equivalent models presented in the literature (Turc et al., 2019; Lee-Thorp et al., 2021; Jiang et al., 2020). The 12 initial models used to initiate the search process include BERT-Tiny, BERT-2/256 (with two encoder layers and a fixed hidden dimension of 256), and ConvBERT-Mini, along with FNet and ConvBERT variants adapted from the original models (with p^j = DFT for FNets and p^j = 9 for ConvBERTs). These models form the initial set δ in Algorithm 1.
Ablation Study of BOSHNAS
We compare BOSHNAS against other popular techniques from the CNN space, namely Random Search (RS), ES, REINFORCE, GP-BO, and a recent state-of-the-art method, BANANAS. We present performance on the GLUE benchmark. Figure 3 presents the best GLUE scores reached by the respective baseline NAS techniques, along with BOSHNAS used with naive (i.e., feature-based one-hot) or Transformer2vec embeddings, on a representative design space. We use the space in the first level of the hierarchy (i.e., with 9312 graphs, s = 2) and run all these algorithms in an active-learning scenario (all targeted homogeneous models form a subset of this space) over 50 runs for each algorithm. The plot highlights that enhancing the richness of the design space enables the algorithms to search for more accurate models (6% improvement averaged across all models). We also see that Transformer2vec embeddings help NAS algorithms reach better-performing architectures (9% average improvement). Overall, BOSHNAS with the Transformer2vec embeddings performs the best in this representative design space, outperforming the state-of-the-art (i.e., BANANAS on naive embeddings) by 13%.

Figure 4(a) shows the best GLUE score reached by each baseline NAS algorithm along with the number of models it trained. Again, these runs are performed on the representative design space described above, using the Transformer2vec encodings. As can be seen from the figure, BOSHNAS reaches the best GLUE score. Ablation analysis justifies the need for heteroscedastic modeling and second-order optimization (see Figure 4(b)). The heteroscedastic model forces the optimization of the training recipe when the framework approaches optimal architectural design decisions. Second-order gradients, on the other hand, help the search avoid local optima and saddle points, and also aid faster convergence. Table 3 shows the scores of the ablation models on the GLUE benchmarking tasks.

We refer to the best model obtained from BOSHNAS in the Tiny-to-Mini space as FlexiBERT-Mini. Once we get the best architecture from the search process (using the same, albeit limited, compute budget for feasible search times), it is pre-trained and fine-tuned with a larger compute budget (details in Appendix B.5). As can be seen from the table, FlexiBERT-Mini outperforms the baseline, NAS-BERT 10, by 0.4% on the GLUE benchmark. Since NAS-BERT finds its higher-performing architecture while only considering the first eight GLUE tasks (i.e., without the WNLI dataset), for fair comparisons, we find a neighboring model in the FlexiBERT design space that only optimizes performance on the first eight tasks. We call this model FlexiBERT-Mini†. We see that although FlexiBERT-Mini† does not have the highest GLUE score, it outperforms NAS-BERT 10 by significant margins on the first eight tasks.

Figure 5 demonstrates that FlexiBERT pushes the performance frontier beyond that of traditional homogeneous architectures. In other words, the best-performing models in the expanded (Tiny-to-Mini) space outperform traditional models for the same number of parameters. Here, the homogeneous models incorporate the same design decisions for all encoder layers, even with the expanded set of operations (i.e., including convolutional and LT-based attention operations). FlexiBERT-Mini has 3% fewer parameters than BERT-Mini and achieves an 8.9% higher GLUE score. FlexiBERT achieves 3% higher performance than the best homogeneous model, while the FlexiBERT model with equivalent performance is 2.6× smaller.
Best Architecture in the Design Space
After running BOSHNAS for each level of the hierarchy, we get the respective best-performing models, whose model cards are presented in Appendix B.6. From these best-performing models, we can extract the following rules that lead to high-performing transformer architectures:

• Models with DCT in the deeper layers are preferable for higher performance on the GLUE benchmark.
• Models with more attention heads, but smaller hidden dimension, are preferable in the deeper layers.
• Feed-forward networks with larger widths, but smaller depth, are preferable in the deeper layers.
Using these guidelines, we extrapolate the model card for FlexiBERT-Mini to get the design decisions for FlexiBERT-Large, which is an equivalent counterpart of BERT-Large (Devlin et al., 2019). Appendix B.6 presents the approach for extrapolating the hyperparameter choices from FlexiBERT-Mini to obtain FlexiBERT-Large. We train FlexiBERT-Large with the larger compute budget (see Appendix B.5) and show its GLUE score in Table 4. FlexiBERT-Large outperforms the baseline RoBERTa by 0.6% on the entire GLUE benchmarking suite, and AutoBERT-Zero Large by 5.7% when only considering the first eight tasks. Just as FlexiBERT-Large is the counterpart of BERT-Large, we similarly form the equivalents of BERT-Small and BERT-Base (Turc et al., 2019). Figure 6 presents the performance frontiers of these FlexiBERT models against different baseline works. As can be seen, FlexiBERT consistently outperforms the baselines for different constraints on model size, thanks to its search in a vast, heterogeneous, and flexible design space of architectures.
Conclusion
In this work, we presented FlexiBERT, a suite of heterogeneous and flexible transformer models. We characterized the effects of this expanded design space and proposed a novel Transformer2vec embedding scheme to train a surrogate model that searches the design space for high-performance models. We described a novel NAS algorithm, BOSHNAS, and showed that it outperforms the state-of-the-art by 13%. The FlexiBERT-Mini model searched in this design space has a GLUE score that is 8.9% higher than that of BERT-Mini, while requiring 3% fewer parameters. It also beats the baseline, NAS-BERT 10, by 0.4%. A FlexiBERT model with performance equivalent to that of the best homogeneous model is 2.6× smaller. FlexiBERT-Large outperforms state-of-the-art models by at least 5.7% average accuracy on the first eight tasks in the GLUE benchmark.
Appendix A. Background
Here, we discuss some supplementary background concepts.
A.1 Self-Attention
Traditionally, transformers have relied on the SA operation. It is basically a trainable associative memory. We depict the vanilla SA operation as SDP and introduce the WMA operation in our design space as well. For a source vector s and a hidden-state vector h:
$$\mathrm{SDP} := \frac{s^{\top} h}{\sqrt{d}}, \qquad \mathrm{WMA} := s^{\top} W_a h$$
where d is the dimension of the source vector and W a is a trainable weight matrix in the attention layer. Naturally, a WMA layer is more expressive than an SDP layer. The SA mechanism used in the context of transformers also involves the softmax function and matrix multiplication. More concretely, in a multi-headed SA operation (with n heads), there are four matrices:
$W^q_i \in \mathbb{R}^{d_{\mathrm{inp}} \times h/n}$, $W^k_i \in \mathbb{R}^{d_{\mathrm{inp}} \times h/n}$, $W^v_i \in \mathbb{R}^{d_{\mathrm{inp}} \times h/n}$, and $W^o_i \in \mathbb{R}^{h/n \times d_{\mathrm{out}}}$, and it takes the hidden states of the previous layer as input $H \in \mathbb{R}^{N_T \times d_{\mathrm{inp}}}$, where i refers to an attention head, $d_{\mathrm{inp}}$ is the input dimension, $d_{\mathrm{out}}$ is the output dimension, and h is the hidden dimension. The output of the attention head ($H_i \in \mathbb{R}^{N_T \times d_{\mathrm{out}}}$) is then calculated as follows:
$$Q_i, K_i, V_i = HW^q_i,\ HW^k_i,\ HW^v_i$$
$$H_i = \mathrm{softmax}\left(\frac{Q_i K_i^{\top}}{\sqrt{h}}\right) V_i W^o_i$$
For traditional homogeneous transformer models, $d_{\mathrm{out}}$ has to equal $d_{\mathrm{inp}}$ (usually, $d_{\mathrm{inp}} = d_{\mathrm{out}} = h$) due to the residual connections. However, thanks to the relative and trained positional encodings and the added projection layer at the end of each encoder ($W^p \in \mathbb{R}^{d_{\mathrm{out}} \times d_p}$), we can relax this constraint. This leads to an expansion of the flexibility of transformer models in the FlexiBERT design space.
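For concreteness, a NumPy sketch of one attention head with decoupled input/output dimensions; matrix shapes follow the definitions above, and the function name is illustrative.

```python
import numpy as np

def attention_head(H, Wq, Wk, Wv, Wo, h):
    """One head of multi-headed SA with d_inp possibly != d_out.

    H: (N_T, d_inp); Wq/Wk/Wv: (d_inp, h/n); Wo: (h/n, d_out).
    """
    Q, K, V = H @ Wq, H @ Wk, H @ Wv
    scores = Q @ K.T / np.sqrt(h)
    # Numerically stable softmax over the token dimension.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V @ Wo  # (N_T, d_out)
```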
A.2 Improving BERT's performance
BERT is one of the most widely used transformer architectures (Devlin et al., 2019).
Researchers have improved BERT's performance by revamping the pre-training technique. RoBERTa proposed a more robust pre-training approach to improve BERT's performance by considering dynamic masking in the Masked Language Modeling (MLM) objective (Liu et al., 2019). Functional improvements have also been proposed for pre-training: XLNet introduced Permuted Language Modeling (PLM) (Yang et al., 2019), and MPNet extended it by unifying the MLM and PLM techniques (Song et al., 2020). Other approaches, including denoising autoencoders, have also been proposed (Lewis et al., 2020). On the other hand, Khetan and Karnin (2020) consider optimizing the set of architectural design decisions for BERT: the number of encoder layers l, size of hidden embeddings h, number of attention heads a, size of the hidden layer in the feed-forward network f, etc. However, their approach is only concerned with pruning BERT and does not target optimization of accuracy over different tasks. Further, it has a limited search space consisting of only homogeneous models.
Appendix B. Experimental Details
We present the details of the experiments performed next.
B.1 Possible Compute Blocks
Based on the design space presented in Table 2, we consider all possible compute blocks, as presented next (a short enumeration sketch follows the list):
• For layer j, when the operation is SA, we have two or four heads with: h-128/SA-SDP, h-128/SA-WMA, h-256/SA-SDP, and h-256/SA-WMA. If the encoder layer has an LT operation, then we have two or four heads with: h-128/LT-DFT, h-128/LT-DCT, h-256/LT-DFT, and h-256/LT-DCT; the latter entry being the type of LT operation. For a convolutional (DSC) operation, we have two or four heads with: h-128/DSC-5, h-128/DSC-9, h-256/DSC-5, and h-256/DSC-9; the latter entry referring to the kernel size.
• For layer j, the size of the hidden layer in the feed-forward network is either 512 or 1024. Also, the feed-forward network may either have just one hidden layer or a stack of three layers. At higher levels of the hierarchy in the hierarchical search framework (details in Section 3.2), all the layers in the stack of hidden layers have the same dimension, until we relax this constraint in the last leg of the hierarchy.
• Other blocks, such as Add&Norm, Input, and Output.
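As mentioned above the list, these blocks can be enumerated mechanically; a minimal sketch follows (the block-name format is illustrative, not the paper's exact naming):

```python
from itertools import product

hidden_sizes = [128, 256]
num_heads = [2, 4]
op_params = {"SA": ["SDP", "WMA"], "LT": ["DFT", "DCT"], "DSC": [5, 9]}

# Attention compute blocks, e.g., "h-128/SA-SDP_n-2".
attention_blocks = [
    f"h-{h}/{op}-{p}_n-{n}"
    for h, n, (op, params) in product(hidden_sizes, num_heads, op_params.items())
    for p in params
]

# Feed-forward blocks: hidden width 512 or 1024, depth of one or three layers.
feed_forward_blocks = [(width,) * depth for width in (512, 1024) for depth in (1, 3)]

assert len(attention_blocks) == 2 * 2 * (2 + 2 + 2)  # 24 attention blocks
```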
B.2 Knowledge Transfer
Knowledge transfer has been used in recent works, but restricted to long short-term memories and simple recurrent neural networks (Mazzawi et al., 2019). Wang et al. (2020) train a super-transformer and share its weights with smaller models. However, this is not feasible for diverse heterogeneous and flexible architectures. To the best of our knowledge, we propose the use of knowledge transfer in transformers for the first time, by comparing the computational graphs of nearby models when deciding which weights to transfer. Furthermore, previous works only consider a static training recipe for all the models in the design space, an assumption we relax in our experiments. We directly fine-tune models for which nearby models are already pre-trained. We test for this using the biased overlap metric defined in Section 3.3. Figure 7 presents the time gains from knowledge transfer. Since some fraction of the models could directly be fine-tuned, thanks to their neighboring pre-trained models, we were able to speed up the overall training time by 38%.
B.3 Crossover between Transformer Models
We obtain new transformer models of the subsequent level in the hierarchy by taking a crossover between the best models in the previous level (which had s layers per stack) and their neighbors. The stack configuration of the children is chosen from all unique hyperparameter values present in the parent models at the same depth. We present a simple example of this scheme in Figure 8. The design space of permissible operation blocks for the layers in a stack, s, is computed by the product of the respective design choices of the parents for that stack. These layers are then independently formed with the new constraint of s/2 layers having the same choice of hyperparameter values. Expanding the design space in such a fashion retains the original hyperparameters that give good performance while also exploring the internal representations learned by combinations of the hyperparameters at the same level.

Figure 8: Crossover between two parent models yields a finer-grained design space. Each stack configuration in the children is derived from the product of the parent design choices at the same depth.
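A sketch of the crossover; parent configurations are encoded as per-stack labels, a hypothetical encoding used only for illustration.

```python
from itertools import product

def crossover(parent_a, parent_b):
    """Children draw each stack's configuration from the union of the two
    parents' choices at the same depth (cf. Figure 8)."""
    per_stack_choices = [set(pair) for pair in zip(parent_a, parent_b)]
    return [list(combo) for combo in product(*per_stack_choices)]

# Example: parents [A, B, F] and [D, C, E] yield the 8 children in
# {A, D} x {B, C} x {F, E}.
children = crossover(["A", "B", "F"], ["D", "C", "E"])
```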
B.4 BOSHNAS Training Flow
Different surrogate models in the BOSHNAS pipeline (f S , g S , and h S ) have been presented in the order of flow in Figure 9. As explained in Section 3.4, the NPN network (f S ) models the model performance and the aleatoric uncertainty, and the student network (h S ) models the epistemic uncertainty from the teacher network (g S ).
B.5 Model Training
We pre-train our models with a combination of publicly available text corpora, viz. BookCorpus (BookC) (Zhu et al., 2015), Wikipedia English (Wiki), OpenWebText (OWT) (Gokaslan & Cohen, 2019), and CC-News (CCN) (Mackenzie et al., 2020). Most training hyperparameters are borrowed from RoBERTa. We set the batch size to 256; the learning rate is warmed up over the first 10,000 steps to its peak value of 1 × 10⁻⁵ and then decays linearly; weight decay is set to 0.01; the Adam optimizer's parameters are β1 = 0.9, β2 = 0.98 (shown to improve stability; Liu et al., 2019), and ε = 1 × 10⁻⁶; and we run pre-training for 1,000,000 steps.
Once the best models are found, we pre-train and fine-tune the selected models with a larger compute budget. For pre-training, we add the C4 dataset (Raffel et al., 2019) and train for 3,000,000 steps before fine-tuning. We also fine-tune on each GLUE task for 10 epochs instead of 5 (further details are given below). This was done for the FlexiBERT-Mini and FlexiBERT-Large models. Table 5 shows the improvement in performance of FlexiBERT-Mini, which was trained using knowledge transfer (where the weights were transferred from a nearby trained model), after this additional training. When compared to the model directly fine-tuned after knowledge transfer, we see only a marginal improvement when we pre-train from scratch. This reaffirms the advantage of knowledge transfer: it reduces training time (see Appendix B.2) with negligible loss in performance. Training with a larger compute budget further improves performance on the GLUE benchmark, validating the importance of data size and diversity in pre-training. Running a full-fledged BOSHNAS on the larger design space (i.e., with 2 to 24 layers, Tiny-to-Large) would be an easy extension of this work.
While running BOSHNAS, we fine-tune our models on the nine GLUE tasks over five epochs with a batch size of 64, with early stopping. We also run automatic hyperparameter tuning for the fine-tuning process using the Tree-structured Parzen Estimator algorithm (Akiba et al., 2019). The learning rate is selected log-uniformly at random in the [2 × 10⁻⁵, 5 × 10⁻⁴] range, and the batch size uniformly in {32, 64, 128}. Table 6 shows the best hyperparameters for fine-tuning on each GLUE task, selected using this auto-tuning technique. This hyperparameter optimization uses random initialization every time, which results in variation in performance each time a model is queried (see the aleatoric uncertainty explained in Section 3.4).
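With Optuna's TPE sampler, such a tuning loop can be sketched as below; the ranges match those stated above, while the fine-tuning function itself is a placeholder (not part of this sketch).

```python
import optuna

def fine_tune_glue_task(learning_rate, batch_size):
    # Placeholder: fine-tune the model on one GLUE task and return the
    # validation metric (hypothetical helper).
    ...

def objective(trial):
    # Log-uniform learning rate and uniform categorical batch size.
    lr = trial.suggest_float("learning_rate", 2e-5, 5e-4, log=True)
    batch_size = trial.suggest_categorical("batch_size", [32, 64, 128])
    return fine_tune_glue_task(lr, batch_size)

study = optuna.create_study(direction="maximize",
                            sampler=optuna.samplers.TPESampler())
study.optimize(objective, n_trials=20)  # illustrative trial budget
```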
We have included baselines trained with the pre-training + fine-tuning procedure proposed by Turc et al. (2019) for like-for-like comparisons, and not their knowledge distillation counterparts. Nevertheless, FlexiBERT is orthogonal to (and thus can easily be combined with) knowledge distillation, because FlexiBERT focuses on searching for the best architecture while knowledge distillation focuses on better training of a given architecture.
All models were trained on NVIDIA A100 GPUs and 2.6 GHz AMD EPYC Rome processors. The entire process of running BOSHNAS for all levels of the hierarchy took around 300 GPU-days of training.
B.6 Best-performing Models
From different hierarchy levels (s = 2, 1, and 1*), we get the respective best-performing models after running BOSHNAS, as follows:

s = 2: {l: 4, o: [LT, LT, LT, LT], h: [256, 256, 256, 256], n: [4, 4, 2, 2], f: [[1024], [1024], [512, 512, 512], [512, 512, 512]], p: [DCT, DCT, DCT, DCT]}

s = 1: {l: 4, o: [SA, SA, LT, LT], h: [256, 256, 128, 128], n: [2, 2, 4, 4], f: [[512, 512, 512], [512, 512, 512], [1024], [1024]], p: [SDP, SDP, DCT, DCT]}

Here, in the last leg of the hierarchy, the stack length is 1, but the feed-forward stacks are also heterogeneous (see Section 3.2). Both s = 1 and s = 1* gave the same solution despite the finer granularity of the latter case. Thus, the second model card above is that of FlexiBERT-Mini.
The model cards of the FlexiBERT-Mini ablation models, as presented in Table 3, are given below:

(w/o S.): {l: 2, o: [SA, SA], h: [128, 128], n: [4, 4], f: [[1024], [1024]], p: [SDP, WMA]}

(w/o H.): {l: 4, o: [LT, LT, SA, SA], h: [256, 256, 128, 128], n: [4, 4, 4, 4], f: [[1024, 1024, 1024], [1024, 1024, 1024], [512, 512, 512], [512, 512, 512]], p: [DCT, DCT, SDP, SDP]}

Figure 10 shows a working schematic of the design choices in the FlexiBERT-Mini and FlexiBERT-Large models. As explained in Section 4.4, FlexiBERT-Large was formed by extrapolating the design choices in FlexiBERT-Mini to obtain a BERT-Large counterpart (Devlin et al., 2019).
Figure 1: Overview of the FlexiBERT pipeline.
BERT-Tiny, for instance, is represented by the model card {l: 2, o: [SA, SA], h: [128, 128], n: [2, 2], f: [[512], [512]], p: [SDP, SDP]}.
Figure 2: Block-level computation graph for BERT-Tiny in FlexiBERT. The projection layer implements an identity function since the hidden sizes of the input and output encoder layers are equal.
Figure 3: Bar plot comparing all NAS techniques with (a) naive embeddings and a design space of homogeneous models, (b) naive embeddings and an expanded design space of homogeneous and heterogeneous models, and (c) Transformer2vec (T2v) embeddings with the expanded design space. Plotted with 90% confidence intervals.
Figure 4: Performance results: (a) best GLUE score with trained models for NAS baselines and (b) ablation of BOSHNAS. Plotted with 90% confidence intervals.
Figure 5: Performance frontiers of FlexiBERT on an expanded design space (under the constraints defined in Table 2) and for traditional homogeneous models.
Figure 6: Performance of FlexiBERT and other baseline methods on various GLUE tasks: (a) SST-2, (b) QNLI, (c) MNLI (accuracy of MNLI-m is plotted), and (d) CoLA.
Figure 7: Bar plot showing the average time for training a transformer model (in GPU-hours) with and without knowledge transfer. (a) Pre-train + Fine-tune: total training time. (b) Direct Fine-tune: training time for a pre-trained model. (c) Knowledge Transfer: training using weight transfer from a trained nearby model, which gives a 38% speedup. Plotted with 90% confidence intervals.
Figure 9: Overview of the BOSHNAS pipeline. Variables are defined in Section 3.4.
Figure 10: Obtained FlexiBERT models after running the BOSHNAS pipeline: (a) FlexiBERT-Mini, and its design choices extrapolated to obtain (b) FlexiBERT-Large.
Table 1: Comparison of related works with different parameters (✓ indicates that the corresponding feature is present). Adaptive width refers to different architectures having possibly different hidden dimensions (albeit each layer within the architecture having the same hidden dimension). Full flexibility corresponds to each encoder layer having, possibly, a different hidden dimension. The table compares frameworks (e.g., Primer) along: Self-Attention (SDP, WMA); Conv.; Lin. Transform (DFT, DCT); flexible no. of attn. ops.; flexible feed-fwd. stacks; flexible hidden dim. (ad. width, full flexibility); and search technique.
Table 2: Design space description. Super-script (j) depicts the value for layer j.

| Design Element | Allowed Values |
|---|---|
| Number of encoder layers (l) | {2, 4} |
| Type of attention operation used (o^j) | {SA, LT, DSC} |
| Number of operation heads (n^j) | {2, 4} |
| Hidden size (h^j) | {128, 256} |
| Feed-forward dimension (f^j) | {512, 1024} |
| Number of feed-forward stacks | {1, 3} |
| Operation parameters (p^j) | SA: {SDP, WMA}; LT: {DFT, DCT}; DSC: {5, 9} |
Table 3: Comparison between FlexiBERT and baselines. Results are evaluated on the development set of the GLUE benchmark. We use Matthews correlation for CoLA, Spearman correlation for STS-B, and accuracy for other tasks. MNLI is reported on the matched set. Ablation models for BOSHNAS without second-order gradients (w/o S.) and without using the heteroscedastic model (w/o H.) are also included. Best (second-best) performance values are in boldface (underlined). *Performance for NAS-BERT 10 was not reported on the WNLI dataset and was reproduced using an equivalent model in our design space. †FlexiBERT-Mini model that only optimizes performance on the first eight tasks, for fair comparisons with NAS-BERT.

| Model | Parameters | CoLA | MNLI | MRPC | QNLI | QQP | RTE | SST-2 | STS-B | WNLI | Avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|
| BERT-Mini (Turc et al., 2019) | 16.6M | 0 | 74.8 | 71.8 | 84.1 | 66.4 | 57.9 | 85.9 | 73.3 | 62.3 | 64.0 |
| NAS-BERT 10 (Xu et al., 2021) | 10M | 27.8 | 76.0 | 81.5 | 86.3 | 88.4 | 66.6 | 88.6 | 84.8 | 53.7* | 72.6 |
| FlexiBERT-Mini (ours, w/o S.) | 7.2M | 16.7 | 72.3 | 72.9 | 81.7 | 76.9 | 64.1 | 80.9 | 77.0 | 65.3 | 67.5 |
| FlexiBERT-Mini (ours, w/o H.) | 20M | 12.3 | 74.4 | 72.3 | 76.4 | 76.3 | 59.5 | 81.2 | 75.4 | 67.8 | 66.2 |
| FlexiBERT-Mini† (ours) | 13.8M | 28.7 | 77.5 | 82.3 | 86.9 | 87.8 | 67.6 | 89.7 | 83.0 | 51.8 | 72.7 |
| FlexiBERT-Mini (ours) | 16.1M | 23.8 | 76.1 | 82.4 | 87.1 | 88.7 | 69.0 | 81.0 | 78.9 | 69.3 | 72.9 |
Table 4: Comparison between FlexiBERT-Large (outside of the constraints defined in Table 2) and baselines on GLUE score. *GLUE scores reported do not consider the WNLI dataset.

| Model | Parameters | GLUE score |
|---|---|---|
| RoBERTa (Liu et al., 2019) | 345M | 88.5 |
| FNet-Large (Lee-Thorp et al., 2021) | 357M | 81.9* |
| AutoTinyBERT (Yin et al., 2021) | 85M | 81.2* |
| DynaBERT (Hou et al., 2020) | 345M | 81.6* |
| NAS-BERT 60 (Xu et al., 2021) | 60M | 83.2* |
| AutoBERT-Zero Large (Gao et al., 2021) | 318M | 84.5* |
| FlexiBERT-Large (ours) | 319M | 89.1/90.2* |
Table 5: Performance of FlexiBERT-Mini from BOSHNAS after knowledge transfer from a nearby trained model, and after pre-training from scratch along with a larger compute budget.

| Model | Pre-training data | Pre-training steps | Fine-tuning epochs | GLUE score |
|---|---|---|---|---|
| FlexiBERT-Mini w/ knowledge transfer | BookC, Wiki, OWT, CCN | 1,000,000 | 5 | 69.7 |
| + pre-training from scratch | BookC, Wiki, OWT, CCN | 1,000,000 | 5 | 70.4 |
| + larger compute budget | BookC, Wiki, OWT, CCN, C4 | 3,000,000 | 10 | 72.9 |
Table 6: Hyperparameters used for fine-tuning FlexiBERT-Mini on the GLUE tasks.

| Task | Learning rate | Batch size |
|---|---|---|
| CoLA | 2.0 × 10⁻⁴ | 64 |
| MNLI | 9.4 × 10⁻⁵ | 64 |
| MRPC | 2.23 × 10⁻⁵ | 32 |
| QNLI | 5.03 × 10⁻⁵ | 128 |
| QQP | 3.7 × 10⁻⁴ | 64 |
| RTE | 1.9 × 10⁻⁴ | 128 |
| SST-2 | 1.2 × 10⁻⁴ | 128 |
| STS-B | 7.0 × 10⁻⁵ | 32 |
| WNLI | 4.0 × 10⁻⁵ | 128 |
AcknowledgmentsThis work was supported by NSF Grant No. CNS-1907381 and CCF-2203399. The experiments reported in this paper were substantially performed on the computational resources managed and supported by Princeton Research Computing at Princeton University. We also thank Xiaorun Wu for initial discussions.
References

Abu-Aisheh, Z., Raveaux, R., Ramel, J.-Y., & Martineau, P. (2015). An exact graph edit distance algorithm for solving pattern recognition problems. In Proceedings of the International Conference on Pattern Recognition Applications and Methods, Vol. 1, pp. 271-278.

Akiba, T., Sano, S., Yanase, T., Ohta, T., & Koyama, M. (2019). Optuna: A next-generation hyperparameter optimization framework. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 2623-2631.

Chen, D., Li, Y., Qiu, M., Wang, Z., Li, B., Ding, B., Deng, H., Huang, J., Lin, W., & Zhou, J. (2021). AdaBERT: Task-adaptive BERT compression with differentiable neural architecture search. In Proceedings of the 29th International Joint Conference on Artificial Intelligence, pp. 2463-2469.

Cheng, H.-P., Zhang, T., Zhang, Y., Li, S., Liang, F., Yan, F., Li, M., Chandra, V., Li, H., & Chen, Y. (2021). NASGEM: Neural architecture search via graph embedding method. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 35, pp. 7090-7098.

Conneau, A., & Lample, G. (2019). Cross-lingual language model pretraining. In Advances in Neural Information Processing Systems, Vol. 32, pp. 7059-7069.

Dang, D.-C., Eremeev, A., & Lehre, P. K. (2021). Escaping local optima with non-elitist evolutionary algorithms. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 35, pp. 12275-12283.

Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol. 1, pp. 4171-4186.

Gal, Y., & Ghahramani, Z. (2016). Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In Proceedings of The 33rd International Conference on Machine Learning, Vol. 48, pp. 1050-1059.

Gao, J., Xu, H., Shi, H., Ren, X., Yu, P. L. H., Liang, X., Jiang, X., & Li, Z. (2021). AutoBERT-Zero: Evolving BERT backbone from scratch. CoRR, abs/2107.07445.

Gokaslan, A., & Cohen, V. (2019). OpenWebText corpus. http://Skylion007.github.io/OpenWebTextCorpus.

He, X., Zhao, K., & Chu, X. (2021). AutoML: A survey of the state-of-the-art. Knowledge-Based Systems, 212, 106622.

Hou, L., Huang, Z., Shang, L., Jiang, X., Chen, X., & Liu, Q. (2020). DynaBERT: Dynamic BERT with adaptive width and depth. In Advances in Neural Information Processing Systems, Vol. 33, pp. 9782-9793.

Huang, C.-Z. A., Vaswani, A., Uszkoreit, J., Simon, I., Hawthorne, C., Shazeer, N., Dai, A. M., Hoffman, M. D., Dinculescu, M., & Eck, D. (2018). Music transformer: Generating music with long-term structure. In Proceedings of the International Conference on Learning Representations.

Jiang, Z.-H., Yu, W., Zhou, D., Chen, Y., Feng, J., & Yan, S. (2020). ConvBERT: Improving BERT with span-based dynamic convolution. In Advances in Neural Information Processing Systems, Vol. 33, pp. 12837-12848.

Khetan, A., & Karnin, Z. (2020). schuBERT: Optimizing elements of BERT. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 2807-2818.

Kruskal, J. B. (1964). Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis. Psychometrika, 29(1), 1-27.

Lee-Thorp, J., Ainslie, J., Eckstein, I., & Ontanon, S. (2021). FNet: Mixing tokens with Fourier transforms. CoRR, abs/2105.03824.

Lewis, M., Liu, Y., Goyal, N., Ghazvininejad, M., Mohamed, A., Levy, O., Stoyanov, V., & Zettlemoyer, L. (2020). BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 7871-7880.

Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., & Stoyanov, V. (2019). RoBERTa: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692.

Lu, Z., Whalen, I., Boddeti, V., Dhebar, Y., Deb, K., Goodman, E., & Banzhaf, W. (2019). NSGA-Net: Neural architecture search using multi-objective genetic algorithm. In Proceedings of the Genetic and Evolutionary Computation Conference, pp. 419-427.

Luong, T., Pham, H., & Manning, C. D. (2015). Effective approaches to attention-based neural machine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pp. 1412-1421.

Mackenzie, J., Benham, R., Petri, M., Trippas, J. R., Culpepper, J. S., & Moffat, A. (2020). CC-News-En: A large English news corpus. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, pp. 3077-3084.

Mazzawi, H., Gonzalvo, X., Kracun, A., Sridhar, P., Subrahmanya, N., Lopez-Moreno, I., Park, H., & Violette, P. (2019). Improving keyword spotting and language identification via neural architecture search at scale. In Proceedings of Interspeech, pp. 1278-1282.

Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., & Dean, J. (2013). Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, Vol. 26, pp. 3111-3119.

Narayanan, A., Chandramohan, M., Venkatesan, R., Chen, L., Liu, Y., & Jaiswal, S. (2017). graph2vec: Learning distributed representations of graphs. CoRR, abs/1707.05005.

Pennington, J., Socher, R., & Manning, C. (2014). GloVe: Global vectors for word representation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pp. 1532-1543.

Pham, H., Guan, M., Zoph, B., Le, Q., & Dean, J. (2018). Efficient neural architecture search via parameters sharing. In Proceedings of the 35th International Conference on Machine Learning, pp. 4095-4104.

Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., & Liu, P. J. (2019). Exploring the limits of transfer learning with a unified text-to-text transformer. CoRR, abs/1910.10683.

Real, E., Aggarwal, A., Huang, Y., & Le, Q. V. (2019). Regularized evolution for image classifier architecture search. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, pp. 4780-4789.

Ru, R., Esperanca, P., & Carlucci, F. M. (2020). Neural architecture generator optimization. In Advances in Neural Information Processing Systems, Vol. 33, pp. 12057-12069.

Russell, S., & Norvig, P. (2010). Artificial Intelligence: A Modern Approach (3rd edition). Prentice Hall.

Shaw, P., Uszkoreit, J., & Vaswani, A. (2018). Self-attention with relative position representations. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol. 2, pp. 464-468.

Shervashidze, N., Schweitzer, P., van Leeuwen, E. J., Mehlhorn, K., & Borgwardt, K. M. (2011). Weisfeiler-Lehman graph kernels. Journal of Machine Learning Research, 12(77), 2539-2561.

Siems, J., Zimmer, L., Zela, A., Lukasik, J., Keuper, M., & Hutter, F. (2020). NAS-Bench-301 and the case for surrogate benchmarks for neural architecture search. CoRR, abs/2008.09777.

Snoek, J., Larochelle, H., & Adams, R. P. (2012). Practical Bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems, Vol. 25, pp. 2951-2959.

So, D., Mańke, W., Liu, H., Dai, Z., Shazeer, N., & Le, Q. V. (2021). Searching for efficient transformers for language modeling. In Advances in Neural Information Processing Systems, Vol. 34, pp. 6010-6022.

So, D. R., Liang, C., & Le, Q. V. (2019). The evolved transformer. CoRR, abs/1901.11117.

Song, K., Tan, X., Qin, T., Lu, J., & Liu, T.-Y. (2020). MPNet: Masked and permuted pre-training for language understanding. In Advances in Neural Information Processing Systems, Vol. 33, pp. 16857-16867.

Sun, Z., Yu, H., Song, X., Liu, R., Yang, Y., & Zhou, D. (2020). MobileBERT: A compact task-agnostic BERT for resource-limited devices. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 2158-2170.

Tan, M., & Le, Q. (2019). EfficientNet: Rethinking model scaling for convolutional neural networks. In Proceedings of the 36th International Conference on Machine Learning, pp. 6105-6114.

Tuli, S., Poojara, S. R., Srirama, S. N., Casale, G., & Jennings, N. R. (2021). COSCO: Container orchestration using co-simulation and gradient based optimization for fog computing environments. IEEE Transactions on Parallel and Distributed Systems, 33(1), 101-116.

Turc, I., Chang, M., Lee, K., & Toutanova, K. (2019). Well-read students learn better: The impact of student initialization on knowledge distillation. CoRR, abs/1908.08962.

Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., & Polosukhin, I. (2017). Attention is all you need. In Advances in Neural Information Processing Systems, Vol. 30, pp. 5998-6008.

Wang, A., Singh, A., Michael, J., Hill, F., Levy, O., & Bowman, S. (2018). GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 353-355.

Wang, H., Wu, Z., Liu, Z., Cai, H., Zhu, L., Gan, C., & Han, S. (2020). HAT: Hardware-aware transformers for efficient natural language processing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 7675-7688.

Wang, H., Shi, X., & Yeung, D.-Y. (2016). Natural-parameter networks: A class of probabilistic neural networks. In Advances in Neural Information Processing Systems, Vol. 29, pp. 118-126.

White, C., Neiswanger, W., & Savani, Y. (2021a). BANANAS: Bayesian optimization with neural architectures for neural architecture search. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 35, pp. 10293-10301.

White, C., Zela, A., Ru, B., Liu, Y., & Hutter, F. (2021b). How powerful are performance predictors in neural architecture search? CoRR, abs/2104.01177.

Williams, R. J. (1992). Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4), 229-256.

Xu, J., Tan, X., Luo, R., Song, K., Li, J., Qin, T., & Liu, T.-Y. (2021). NAS-BERT: Task-agnostic and adaptive-size BERT compression with neural architecture search. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pp. 1933-1943.

Xu, K., Hu, W., Leskovec, J., & Jegelka, S. (2019). How powerful are graph neural networks? In Proceedings of the International Conference on Learning Representations.

Yang, Z., Dai, Z., Yang, Y., Carbonell, J. G., Salakhutdinov, R., & Le, Q. V. (2019). XLNet: Generalized autoregressive pretraining for language understanding. CoRR, abs/1906.08237.

Yao, Z., Gholami, A., Shen, S., Mustafa, M., Keutzer, K., & Mahoney, M. (2021). ADAHESSIAN: An adaptive second order optimizer for machine learning. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 35, pp. 10665-10673.

Yin, Y., Chen, C., Shang, L., Jiang, X., Chen, X., & Liu, Q. (2021). AutoTinyBERT: Automatic hyper-parameter optimization for efficient pre-trained language models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Vol. 1, pp. 5146-5157.

Ying, C., Klein, A., Christiansen, E., Real, E., Murphy, K., & Hutter, F. (2019). NAS-Bench-101: Towards reproducible neural architecture search. In Proceedings of the 36th International Conference on Machine Learning, Vol. 97, pp. 7105-7114.

Yu, Y., & Jha, N. K. (2022). SPRING: A sparsity-aware reduced-precision monolithic 3D CNN accelerator architecture for training and inference. IEEE Transactions on Emerging Topics in Computing, 10(1), 237-249.

Zhang, X., Zhou, X., Lin, M., & Sun, J. (2018). ShuffleNet: An extremely efficient convolutional neural network for mobile devices. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6848-6856.

Zhu, Y., Kiros, R., Zemel, R., Salakhutdinov, R., Urtasun, R., Torralba, A., & Fidler, S. (2015). Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the IEEE International Conference on Computer Vision, pp. 19-27.

Zoph, B., & Le, Q. V. (2016). Neural architecture search with reinforcement learning. CoRR, abs/1611.01578.

Zoph, B., Vasudevan, V., Shlens, J., & Le, Q. V. (2017). Learning transferable architectures for scalable image recognition. CoRR, abs/1707.07012.
| [] |
[
"Handling Epistemic and Aleatory Uncertainties in Probabilistic Circuits",
"Handling Epistemic and Aleatory Uncertainties in Probabilistic Circuits"
] | [
"Federico Cerutti [email protected] \nDepartment of Information Engineering\nUniversità degli Studi di Brescia\nBresciaItaly\n\nCardiff University, Crime and Security Research Institute\nCardiffUK\n",
"Lance M Kaplan [email protected] \nCCDC Army Research Laboratory\nAdelphiMDUSA\n",
"Angelika Kimmig [email protected] \nDepartment of Computer Science\nKU Leuven\nBelgium\n",
"Murat Şensoy [email protected] \nDepartment of Computer Science\nBlue Prism AI Labs\nOzyegin University\nIstanbulLondon, UKTurkey\n"
] | [
"Department of Information Engineering\nUniversità degli Studi di Brescia\nBresciaItaly",
"Cardiff University, Crime and Security Research Institute\nCardiffUK",
"CCDC Army Research Laboratory\nAdelphiMDUSA",
"Department of Computer Science\nKU Leuven\nBelgium",
"Department of Computer Science\nBlue Prism AI Labs\nOzyegin University\nIstanbulLondon, UKTurkey"
] | [] | When collaborating with an AI system, we need to assess when to trust its recommendations. If we mistakenly trust it in regions where it is likely to err, catastrophic failures may occur, hence the need for Bayesian approaches for probabilistic reasoning in order to determine the confidence (or epistemic uncertainty) in the probabilities in light of the training data. We propose an approach to overcome the independence assumption behind most of the approaches dealing with a large class of probabilistic reasoning that includes Bayesian networks as well as several instances of probabilistic logic. We provide an algorithm for Bayesian learning from sparse, albeit complete, observations, and for deriving inferences and their confidences keeping track of the dependencies between variables when they are manipulated within the unifying computational formalism provided by probabilistic circuits. Each leaf of such circuits is labelled with a betadistributed random variable that provides us with an elegant framework for representing uncertain probabilities. We achieve better estimation of epistemic uncertainty than state-of-the-art approaches, including highly engineered ones, while being able to handle general circuits and with just a modest increase in the computational effort compared to using point probabilities. | 10.1007/s10994-021-06086-4 | [
"https://arxiv.org/pdf/2102.10865v1.pdf"
] | 231,985,488 | 2102.10865 | ccadb79f523b7fe76a872590f2af8d3f92fc2ac9 |
Handling Epistemic and Aleatory Uncertainties in Probabilistic Circuits
Federico Cerutti [email protected]
Department of Information Engineering
Università degli Studi di Brescia
BresciaItaly
Cardiff University, Crime and Security Research Institute
CardiffUK
Lance M Kaplan [email protected]
CCDC Army Research Laboratory
AdelphiMDUSA
Angelika Kimmig [email protected]
Department of Computer Science
KU Leuven
Belgium
Murat Şensoy [email protected]
Department of Computer Science
Blue Prism AI Labs
Ozyegin University
IstanbulLondon, UKTurkey
Handling Epistemic and Aleatory Uncertainties in Probabilistic Circuits
Submitted to MACH: Under Review
When collaborating with an AI system, we need to assess when to trust its recommendations. If we mistakenly trust it in regions where it is likely to err, catastrophic failures may occur, hence the need for Bayesian approaches for probabilistic reasoning in order to determine the confidence (or epistemic uncertainty) in the probabilities in light of the training data. We propose an approach to overcome the independence assumption behind most of the approaches dealing with a large class of probabilistic reasoning that includes Bayesian networks as well as several instances of probabilistic logic. We provide an algorithm for Bayesian learning from sparse, albeit complete, observations, and for deriving inferences and their confidences keeping track of the dependencies between variables when they are manipulated within the unifying computational formalism provided by probabilistic circuits. Each leaf of such circuits is labelled with a betadistributed random variable that provides us with an elegant framework for representing uncertain probabilities. We achieve better estimation of epistemic uncertainty than state-of-the-art approaches, including highly engineered ones, while being able to handle general circuits and with just a modest increase in the computational effort compared to using point probabilities.
Introduction
Even in simple collaboration scenarios-like those in which an artificial intelligence (AI) system assists a human operator with predictions-the success of the team hinges on the human correctly deciding when to follow the recommendations of the AI system and when to override them [6]. Extracting benefits from collaboration with the AI system depends on the human developing insights (i.e., a mental model) of when to trust the AI system with its recommendations [6]. If the human mistakenly trusts the AI system in regions where it is likely to err, catastrophic failures may occur. This is a strong argument in favour of Bayesian approaches to probabilistic reasoning: research in the intersection of AI and HCI has found that interaction improves when setting expectations right about what the system can do and how well it performs [39,5]. Guidelines have been produced [1], and they recommend to Make clear what the system can do (G1), and Make clear how well the system can do what it can do (G2).
To identify such regions where the AI system is likely to err, we need to distinguish between (at least) two different sources of uncertainty: aleatory (or aleatoric), and epistemic uncertainty [26,27]. Aleatory uncertainty refers to the variability in the outcome of an experiment which is due to inherently random effects (e.g. flipping a fair coin): no additional source of information but Laplace's daemon 1 can reduce such a variability. Epistemic uncertainty refers to the epistemic state of the agent using the model, hence its lack of knowledge that-in principle-can be reduced on the basis of additional data samples. Particularly when considering sparse data, the epistemic uncertainty around the learnt model can significantly affect decision making [2,3], for instance when used for computing an expected utility [58].
In this paper, we propose an approach to probabilistic reasoning that manipulates distributions of probabilities without assuming independence and without resorting to sampling approaches, within the unifying computational formalism provided by arithmetic circuits [59], sometimes named probabilistic circuits when manipulating probabilities, or simply circuits. This is clearly a novel contribution, as the few approaches [47,29,10] that, like us, resort to distribution estimation via moment matching still assume statistical independence, in particular when manipulating distributions within the circuit. Instead, we provide an algorithm for Bayesian learning from sparse, albeit complete, observations, and for probabilistic inferences that keep track of the dependencies between variables when they are manipulated within the circuit. In particular, we focus on the large class of approaches to probabilistic reasoning that rely upon algebraic model counting (AMC) [37] (Section 2.1), which has been proven to encompass probabilistic inferences under [50]'s semantics, thus covering not only Bayesian networks [49], but also probabilistic logic programming approaches such as ProbLog [21], and others as discussed by [11]. As AMC is defined in terms of the set of models of a propositional logic theory, we can exploit the results of [16] (Section 2.2) who studied the succinctness relations between various types of circuits and thus their applicability to model counting. To stress the applicability of this setting, circuit compilation techniques [13,14,44] are behind state-of-the-art algorithms for (1) exact and approximate inference in discrete probabilistic graphical models [12,38,22]; and (2) probabilistic programs [21,8]. Also, learning tractable circuits is the current method of choice for discrete density estimation [23,48,57,56,42]. Finally, [60] also used circuits to enforce logical constraints on deep neural networks.
In this paper, we label each leaf of the circuit with a beta-distributed random variable (Section 3). The beta distribution is a well-defined theoretical framework that specifies a distribution of probabilities representing all the possible values of a probability when the exact value is unknown. In this way, the expected value of a beta-distributed random variable relates to the aleatory uncertainty of the phenomenon, and the variance to the epistemic uncertainty: the higher the variance, the less certain the machine is, thus targeting directly [1, G1 and G2]. In previous work [10] we provided operators for manipulating beta-distributed random variables under strong independence assumptions (Section 4). This paper significantly extends and improves our previous approach by eliminating the independence assumption in manipulating beta-distributed random variables within a circuit.
Indeed, our main contribution (Section 5) is an algorithm for reasoning over a circuit whose leaves are labelled with beta-distributed random variables, with the additional piece of information describing which of those are actually independent (Section 5.1). This is the input to an algorithm that shadows the circuit by superimposing a second circuit for computing the probability of a query conditioned on a set of pieces of evidence (Section 5.2) in a single feed-forward pass. While this might at first seem unnecessary, it is actually essential when inspecting the main algorithm that evaluates such a shadowed circuit (Section 5.3), where a covariance matrix plays an essential role by keeping track of the dependencies between random variables while they are manipulated within the circuit. We also include discussions on memory management of the covariance matrix in Section 5.4.
We evaluate our approach against a set of competing approaches in an extensive set of experiments detailed in Section 6, comparing against leading approaches to dealing with uncertain probabilities, notably: (1) Monte Carlo sampling; (2) our previous proposal [10] taken as representative of the class of approaches using moment matching with strong independence assumptions;
(3) Subjective Logic [30], that provides an alternative representation of beta distributions as well as a calculus for manipulating them applied already in a variety of domains, e.g. [31,43,52]; (4) Subjective Bayesian Network (SBN) on circuits derived from singly-connected Bayesian networks [28,32,33], that already showed higher performance against other traditional approaches dealing with uncertain probabilities, such as (5) Dempster-Shafer Theory of Evidence [18,53], and (6) replacing single probability values with closed intervals representing the possible range of probability values [61]. We achieve better estimation of epistemic uncertainty than state-of-the-art approaches, including highly engineered ones for a narrow domain such as SBN, while being able to handle general circuits and with just a modest increase in the computational effort compared to using point probabilities.
2 Background

2.1 Algebraic Model Counting

[37] introduce the task of algebraic model counting (AMC). AMC generalises weighted model counting (WMC) to the semiring setting and supports various types of labels, including numerical ones as used in WMC, but also sets, polynomials, Boolean formulae, and many more. The underlying mathematical structure is that of a commutative semiring.
A semiring is a structure $(A, \oplus, \otimes, e^{\oplus}, e^{\otimes})$, where addition $\oplus$ and multiplication $\otimes$ are associative binary operations over the set $A$, $\oplus$ is commutative, $\otimes$ distributes over $\oplus$, $e^{\oplus} \in A$ is the neutral element of $\oplus$, $e^{\otimes} \in A$ that of $\otimes$, and for all $a \in A$, $e^{\oplus} \otimes a = a \otimes e^{\oplus} = e^{\oplus}$. In a commutative semiring, $\otimes$ is commutative as well.
Algebraic model counting is now defined as follows. Given:
• a propositional logic theory T over a set of variables V,
• a commutative semiring $(A, \oplus, \otimes, e^{\oplus}, e^{\otimes})$, and

• a labelling function $\rho : L \to A$, mapping literals $L$ of the variables in $V$ to elements of the semiring set $A$,

compute

$$A(T) = \bigoplus_{I \in M(T)} \bigotimes_{l \in I} \rho(l), \quad (1)$$

where $M(T)$ denotes the set of models of $T$. Among others, AMC generalises the task of probabilistic inference according to [50]'s semantics (PROB), [37, Thm. 1], [24,19,4,7,36].
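To make (1) concrete, the following minimal Python sketch (our own illustration; the toy theory and all names are assumptions, not from the original text) computes $A(T)$ by brute-force enumeration of models, using probabilities as labels (addition as $\oplus$, multiplication as $\otimes$):

from itertools import product

# A tiny propositional theory over variables {a, b}: T = a OR b.
variables = ["a", "b"]
rho = {("a", True): 0.3, ("a", False): 0.7,
       ("b", True): 0.6, ("b", False): 0.4}

def theory(assignment):
    # Models of T: all assignments satisfying a OR b.
    return assignment["a"] or assignment["b"]

def amc(variables, rho, theory):
    total = 0.0                      # e_plus of the semiring
    for values in product([True, False], repeat=len(variables)):
        I = dict(zip(variables, values))
        if theory(I):                # I is a model of T
            label = 1.0              # e_times of the semiring
            for v in variables:
                label *= rho[(v, I[v])]
            total += label
    return total

print(amc(variables, rho, theory))   # 1 - 0.7*0.4 = 0.72

Of course such an enumeration is exponential in the number of variables; the circuits discussed next avoid it.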
A query $q$ is a finite set of algebraic literals $q \subseteq L$. We denote the set of interpretations where the query is true by $I(q)$,

$$I(q) = \{I \mid I \in M(T) \wedge q \subseteq I\} \quad (2)$$

The label of query $q$ is defined as the label of $I(q)$,

$$A(q) = A(I(q)) = \bigoplus_{I \in I(q)} \bigotimes_{l \in I} \rho(l). \quad (3)$$
As both operators are commutative and associative, the label is independent of the order of both literals and interpretations.
In the context of this paper, we extend AMC for handling PROB of queries with evidence by introducing an additional division operator $\oslash$ that defines the conditional label of a query as follows:

$$A(q \mid \mathbf{E} = \mathbf{e}) = A(I(q \wedge \mathbf{E} = \mathbf{e})) \oslash A(I(\mathbf{E} = \mathbf{e})) \quad (4)$$

where $A(I(q \wedge \mathbf{E} = \mathbf{e})) \oslash A(I(\mathbf{E} = \mathbf{e}))$ returns the label of $q \wedge \mathbf{E} = \mathbf{e}$ given the label of a set of pieces of evidence $\mathbf{E} = \mathbf{e}$.
In the case of probabilities as labels, i.e. $\rho(\cdot) \in [0,1]$, (5) presents the AMC-conditioning parametrisation $S_p$ for handling PROB of (conditioned) queries:

$$A = \mathbb{R}_{\geq 0}; \quad a \oplus b = a + b; \quad a \otimes b = a \cdot b; \quad e^{\oplus} = 0; \quad e^{\otimes} = 1;$$
$$\rho(f) \in [0,1]; \quad \rho(\neg f) = 1 - \rho(f); \quad a \oslash b = \frac{a}{b} \quad (5)$$
A naïve implementation of (4) is clearly exponential: [14] introduced the first method for deriving tractable circuits (d-DNNFs) that allow polytime algorithms for clausal entailment, model counting and enumeration.
2.2 Probabilistic Circuits
As AMC is defined in terms of the set of models of a propositional logic theory, we can exploit the succinctness results of the knowledge compilation map of [16]. The restriction to two-valued variables allows us to directly compile AMC tasks to circuits without adding constraints on legal variable assignments to the theory.
In their knowledge compilation map, [16] provide an overview of succinctness relationships between various types of circuits. Instead of focusing on classical, flat target compilation languages based on conjunctive or disjunctive normal forms, [16] consider a richer, nested class based on representing propositional sentences using directed acyclic graphs: NNFs. A sentence in negation normal form (NNF) over a set of propositional variables $V$ is a rooted, directed acyclic graph where each leaf node is labeled with true ($\top$), false ($\bot$), or a literal of a variable in $V$, and each internal node with disjunction ($\vee$) or conjunction ($\wedge$).
An NNF is decomposable if for each conjunction node $\bigwedge_{i=1}^{n} \phi_i$, no two children $\phi_i$ and $\phi_j$ share any variable.

An NNF is deterministic if for each disjunction node $\bigvee_{i=1}^{n} \phi_i$, each pair of different children $\phi_i$ and $\phi_j$ is logically contradictory, that is $\phi_i \wedge \phi_j \models \bot$ for $i \neq j$. In other terms, only one child can be true at any time.

The function Eval specified in Algorithm 1 evaluates an NNF circuit for a commutative semiring $(A, \oplus, \otimes, e^{\oplus}, e^{\otimes})$ and labelling function $\rho$. Evaluating an NNF representation $N_T$ of a propositional theory $T$ for a semiring $(A, \oplus, \otimes, e^{\oplus}, e^{\otimes})$ and labelling function $\rho$ is a sound AMC computation iff $\mathrm{Eval}(N_T, \oplus, \otimes, e^{\oplus}, e^{\otimes}, \rho) = A(T)$.
Algorithm 1 Evaluating an NNF circuit N for a commutative semiring (A, ⊕, ⊗, e⊕, e⊗) and labelling function ρ.

procedure Eval(N, ⊕, ⊗, e⊕, e⊗, ρ)
  if N is a true node ⊤ then return e⊗
  if N is a false node ⊥ then return e⊕
  if N is a literal node l then return ρ(l)
  if N is a disjunction ∨ᵢ₌₁..ₘ Nᵢ then
    return ⊕ᵢ₌₁..ₘ Eval(Nᵢ, ⊕, ⊗, e⊕, e⊗, ρ)
  end if
  if N is a conjunction ∧ᵢ₌₁..ₘ Nᵢ then
    return ⊗ᵢ₌₁..ₘ Eval(Nᵢ, ⊕, ⊗, e⊕, e⊗, ρ)
  end if
end procedure
In particular, [37, Theorem 4] shows that evaluating a d-DNNF representation of the propositional theory $T$ for a semiring and labelling function with neutral $(\oplus, \rho)$ is a sound AMC computation. A semiring addition and labelling function pair $(\oplus, \rho)$ is neutral iff $\forall v \in V : \rho(v) \oplus \rho(\neg v) = e^{\otimes}$.

Unless specified otherwise, in the following we will refer to d-DNNF circuits labelled with probabilities or distributions of probability simply as circuits, and any addition and labelling function pair $(\oplus, \rho)$ is neutral. Also, we extend the definition of the labelling function such that it also operates on $\{\bot, \top\}$, i.e. $\rho(\bot) = e^{\oplus}$ and $\rho(\top) = e^{\otimes}$.

Let us now introduce a graphical notation for circuits in this paper: Figure 1 illustrates a d-DNNF circuit where each node has a unique integer (positive or negative) identifier. Moreover, circled nodes are labelled either with $\oplus$ for disjunction (a.k.a. $\oplus$-gates) or with $\otimes$ for conjunction (a.k.a. $\otimes$-gates). Leaf nodes are marked with a squared box and are labelled with the literal, $\top$, or $\bot$, as well as its label via the labelling function $\rho$.
Unless specified otherwise, in the following we will slightly abuse the notation by defining an overline operator $\overline{\cdot}$ both for variables and $\top$, $\bot$, i.e. for $x \in V \cup \{\bot, \top\}$,

$$\overline{x} = \begin{cases} \neg x & \text{if } x \in V \\ \bot & \text{if } x = \top \\ \top & \text{if } x = \bot \end{cases} \quad (6)$$

and for elements of the set $A$ of labels, s.t. $\overline{\rho(x)} = \rho(\overline{x})$. Finally, each leaf node has an additional parameter λ (i.e. the indicator variable, cf. [21]) that assumes values 0 or 1, and we will be using it for reusing the same circuit for different purposes.
In the following, we will make use of a running example based upon the burglary example as presented in [21, Example 6]. In this way, we hope to convey better to the reader the value of our approach, as the circuit derived from it using [14] will have a clear, intuitive meaning. However, our approach is independent from the system that employs circuit compilation for its reasoning process, as long as it can make use of d-DNNF circuits.

1 0.1::burglary.
2 0.2::earthquake.
3 0.7::hears_alarm(john).
4 alarm :- burglary.
5 alarm :- earthquake.
6 calls(john) :- alarm, hears_alarm(john).
7 evidence(calls(john)).
8 query(burglary).

Listing 1: ProbLog code for the Burglary example, originally Example 6 in [21].

The d-DNNF circuit for our running example is depicted in Figure 1 and has been derived by compiling the ProbLog [21] code listed in Listing 1 [21, Example 6] into a d-DNNF using [14]. For compactness, in the graph each literal of the program is represented only by its initials, i.e. burglary becomes b, hears_alarm(john) becomes h(j). ProbLog is an approach to augment Prolog programs [40,9] by annotating facts with probabilities: see Appendix A for an introduction. As discussed in [21], the Prolog language admits a propositional representation of its semantics. For the example, the propositional representation of Listing 1 is:
$$\text{alarm} \leftrightarrow \text{burglary} \vee \text{earthquake}$$
$$\text{calls(john)} \leftrightarrow \text{alarm} \wedge \text{hears\_alarm(john)}$$
$$\text{calls(john)} \quad (7)$$

Figure 1 thus shows the result of the compilation of (7) into a circuit, annotated with a unique id that is either a number $x$ or $\overline{x}$, the latter indicating the node that represents the negation of the variable represented by node $x$; and with weights (probabilities) as per Listing 1.
We need to enforce that calls(john) is true (see line 7 of Listing 1). This translates into having λ = 1 for the double-boxed node with index 2 in Figure 1, which indeed is labelled with the shorthand for calls(john), i.e. c(j), while λ = 0 for the double-boxed node with index $\overline{2}$ that is instead labelled with the shorthand for ¬calls(john), i.e. $\overline{\text{c(j)}}$.
The λ indicators modify the execution of the function Eval (Alg. 1) in the way illustrated by Algorithm 2: note that Algorithm 2 is analogous to Algorithm 1 when all λ = 1. Hence, in the following, when considering the function Eval, we will be referring to the one defined in Algorithm 2.
Algorithm 2 Evaluating an NNF circuit N for a commutative semiring (A, ⊕, ⊗, e⊕, e⊗) and labelling function ρ, considering indicators λ.

procedure Eval(N, ⊕, ⊗, e⊕, e⊗, ρ)
  if N is a true node ⊤ then return e⊗
  if N is a false node ⊥ then return e⊕
  if N is a literal node l then
    if λ = 1 then return ρ(l) else return e⊕
  end if
  if N is a disjunction ∨ᵢ₌₁..ₘ Nᵢ then
    return ⊕ᵢ₌₁..ₘ Eval(Nᵢ, ⊕, ⊗, e⊕, e⊗, ρ)
  end if
  if N is a conjunction ∧ᵢ₌₁..ₘ Nᵢ then
    return ⊗ᵢ₌₁..ₘ Eval(Nᵢ, ⊕, ⊗, e⊕, e⊗, ρ)
  end if
end procedure
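As an illustration, the following is a minimal Python sketch of Eval with λ indicators for the probability semiring (the node encoding and names are our own assumptions, not the paper's implementation):

from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    kind: str                      # "literal", "or", "and", "true", "false"
    prob: float = 1.0              # rho(l); only meaningful for literal nodes
    lam: int = 1                   # the lambda indicator
    children: List["Node"] = field(default_factory=list)

def eval_circuit(n: Node) -> float:
    if n.kind == "true":
        return 1.0                                # e_otimes
    if n.kind == "false":
        return 0.0                                # e_oplus
    if n.kind == "literal":
        return n.prob if n.lam == 1 else 0.0      # lambda gates the label
    if n.kind == "or":
        return sum(eval_circuit(c) for c in n.children)
    if n.kind == "and":
        out = 1.0
        for c in n.children:
            out *= eval_circuit(c)
        return out
    raise ValueError(n.kind)

# p(burglary AND calls(john)) vs p(calls(john)) can then be obtained by
# running eval_circuit twice with different lambda settings on the leaves.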
À m i"1 Eval(N i , ', b, e ' , e b , ρ) 13: end if 14: if N is a conjunction Ź m i"1 N i then 15: return  m i"1 Eval(N i , ', b, e ' , e b , ρ) 16:
end if 17: end procedure ppburglary | calls(john)q " ppburglary^calls(john)q ppcalls(john)q
While the denominator of (8) is given by Eval of the circuit in Figure 1, we need to modify the circuit in order to obtain the numerator $p(\text{burglary} \wedge \text{calls(john)})$, as depicted in Figure 2, where λ = 0 for the node labelled with ¬burglary. Eval on the circuit in Figure 2 will thus return the value of the numerator in (8).
It is worth highlighting that computing $p(\text{query} \mid \text{evidence})$ for an arbitrary query and an arbitrary set of evidence requires Eval to be executed at least twice, on slightly modified circuits.
In this paper, similarly to [38], we are interested in learning the parameters of our circuit, i.e. the ρ function for each of the leaf nodes, or ρ in the following, thus representing it as a vector. We will learn ρ from a set of examples, where each example is an instantiation of all propositional variables: for $n$ propositional variables, there are $2^n$ such instantiations. In the case the circuit is derived from a logic program, an example is a complete interpretation of all the ground atoms. A complete dataset $D$ is then a sequence (allowing for repetitions) of examples, each of which is a vector of instantiations of independent Bernoulli distributions with true but unknown parameter $p_x$. From this, the likelihood is thus:

$$p(D \mid \mathbf{p}) = \prod_{i=1}^{|D|} p(x_i \mid p_{x_i}) \quad (9)$$

[Figure 1: Circuit computing p(calls(john)) for the Burglary example (Listing 1). Solid box for query, double box for evidence.]

[Figure 2: Circuit computing p(burglary ∧ calls(john)) for the Burglary example (Listing 1). Solid box for query, double box for evidence. White over black for the numeric value that has changed from Figure 1. In particular, in this case, λ for the node labelled with ¬burglary is set to 0.]
where $x_i$ represents the $i$-th example in the dataset $D$. Differently from [38], however, we do not search for a maximum likelihood solution to this problem; rather, we provide a Bayesian analysis of it in Section 3.
The following analysis provides the distribution of the probabilities (second-order probabilities) for each propositional variable. For complete datasets, these distributions factor, meaning that the second-order probabilities for the propositional variables are statistically independent (see Appendix C). Nevertheless, it is shown that second-order probabilities of a variable and its negation are correlated, because the first-order probabilities (i.e. the expected values of the distributions) sum up to one.
The inference process proposed in this paper does not assume independent second-order probabilities as it encompasses dependencies between the random variables associated to the proposition in the form of a covariance matrix. For complete datasets, the covariances at the leaves are only non-zero between a variable and its negation. More generally, when the training dataset D is not complete (i.e., variable values cannot always be observed for various instantiations), the second-order probabilities become correlated. The derivations of these correlations during the learning process with partial observations is left for future work. Nevertheless, the proposed inference method can accommodate such correlations without any modifications. This is one of our main contributions, that separates our approach from the literature. Indeed, ours is clearly not the only Bayesian approach to learning parameters in circuits, see for instance [29,62,54,57,47,63]. In addition, similarly to [47,29] we also apply the idea of moment matching instead of using sampling.
3 A Bayesian Account of Uncertain Probabilities
Let us now expand further (9): for simplicity, let us consider here only the case of a single propositional variable, i.e. a single binary random variable $x \in \{0, 1\}$, e.g. flipping a coin, not necessarily fair, whose probability is thus conditioned by a parameter $0 \leq p_x \leq 1$:

$$p(x = 1 \mid p_x) = p_x \quad (10)$$
The probability distribution over $x$ is known as the Bernoulli distribution:

$$\mathrm{Bern}(x \mid p_x) = p_x^x (1 - p_x)^{1-x} \quad (11)$$
Given a data set $D$ of i.i.d. observations $(x_1, \ldots, x_N)^T$ drawn from the Bernoulli with parameter $p_x$, which is assumed unknown, the likelihood of the data given $p_x$ is:

$$p(D \mid p_x) = \prod_{n=1}^{N} p(x_n \mid p_x) = \prod_{n=1}^{N} p_x^{x_n} (1 - p_x)^{1 - x_n} \quad (12)$$
To develop a Bayesian analysis of the phenomenon, we can choose as prior the beta distribution, with parameters $\boldsymbol{\alpha} = \langle \alpha_x, \alpha_{\overline{x}} \rangle$, $\alpha_x \geq 1$ and $\alpha_{\overline{x}} \geq 1$, that is conjugate to the Bernoulli:

$$\mathrm{Beta}(p_x \mid \boldsymbol{\alpha}) = \frac{\Gamma(\alpha_x + \alpha_{\overline{x}})}{\Gamma(\alpha_x)\Gamma(\alpha_{\overline{x}})} \, p_x^{\alpha_x - 1} (1 - p_x)^{\alpha_{\overline{x}} - 1} \quad (13)$$
where

$$\Gamma(t) = \int_0^{\infty} u^{t-1} e^{-u} \, du \quad (14)$$

is the gamma function. Given a beta-distributed random variable $X$,

$$s_X = \alpha_x + \alpha_{\overline{x}} \quad (15)$$

is its Dirichlet strength and

$$E[X] = \frac{\alpha_x}{s_X} \quad (16)$$

is its expected value. From (15) and (16) the beta parameters can equivalently be written as:

$$\boldsymbol{\alpha}_X = \langle E[X] s_X,\ (1 - E[X]) s_X \rangle. \quad (17)$$
The variance of a beta-distributed random variable $X$ is

$$\mathrm{var}[X] = \mathrm{var}[1 - X] = \frac{E[X](1 - E[X])}{s_X + 1} \quad (18)$$

and because $X + (1 - X) = 1$, it is easy to see that

$$\mathrm{cov}[X, 1 - X] = -\mathrm{var}[X]. \quad (19)$$

From (18) we can rewrite $s_X$ (15) as

$$s_X = \frac{E[X](1 - E[X])}{\mathrm{var}[X]} - 1. \quad (20)$$
Considering a beta distribution prior and the binomial likelihood function, and given $N$ observations of $x$ such that for $r$ observations $x = 1$ and for $s = N - r$ observations $x = 0$,

$$p(p_x \mid D, \boldsymbol{\alpha}^0) = \frac{p(D \mid p_x)\, p(p_x \mid \boldsymbol{\alpha}^0)}{p(D)} \propto p_x^{r + \alpha^0_x - 1} (1 - p_x)^{s + \alpha^0_{\overline{x}} - 1} \quad (21)$$

Hence $p(p_x \mid r, s, \boldsymbol{\alpha}^0)$ is another beta distribution such that, after normalisation via $p(D)$,

$$p(p_x \mid r, s, \boldsymbol{\alpha}^0) = \frac{\Gamma(r + \alpha^0_x + s + \alpha^0_{\overline{x}})}{\Gamma(r + \alpha^0_x)\Gamma(s + \alpha^0_{\overline{x}})} \, p_x^{r + \alpha^0_x - 1} (1 - p_x)^{s + \alpha^0_{\overline{x}} - 1} \quad (22)$$
We can specify the parameters for the prior we are using for deriving our beta-distributed random variable $X$ as $\boldsymbol{\alpha}^0 = \langle a_X W, (1 - a_X) W \rangle$, where $a_X$ is the prior assumption, i.e. $p(x = 1)$ in the absence of observations; and $W > 0$ is a prior weight indicating the strength of the prior assumption. Unless specified otherwise, in the following we will assume $\forall X,\ a_X = 0.5$ and $W = 2$, so as to have an uninformative, uniformly distributed, prior.
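For illustration, a minimal sketch of the posterior update in (22), with the above prior and hypothetical counts (our own code):

def posterior_beta(r: int, s: int, a: float = 0.5, W: float = 2.0):
    """Posterior Beta parameters after r positive and s negative observations,
    starting from the prior <a*W, (1-a)*W>."""
    alpha_x = r + a * W
    alpha_not_x = s + (1.0 - a) * W
    mean = alpha_x / (alpha_x + alpha_not_x)
    strength = alpha_x + alpha_not_x                      # Dirichlet strength, eq. (15)
    variance = mean * (1.0 - mean) / (strength + 1.0)     # eq. (18)
    return alpha_x, alpha_not_x, mean, variance

# 3 successes out of 10 sparse observations:
print(posterior_beta(3, 7))   # Beta(4, 8): mean 1/3, variance ~0.017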
The complete dataset $D$ is modelled as samples from independent binomial distributions for facts and rules. As such, the posterior factors as a product of beta distributions, each representing the posterior distribution for a fact or rule as in (22) for a single fact (see Appendix C for further details). This posterior distribution enables the computation of the means and covariances for the leaves of the circuit, and because it factors, the different variables are statistically independent, leading to zero covariances. Only the leaves associated to a variable and its complement exhibit nonzero covariance via (19). Now, the means and covariances of the leaves can be propagated through the circuit to determine the distribution of the queried conditional probability as described in Section 5.
Given an inference, like the conditioned query of our running example (8), we approximate its distribution by a beta distribution by finding the corresponding Dirichlet strength to match the computed variance. Given a random variable $Z$ with known mean $E[Z]$ and variance $\mathrm{var}[Z]$, we can use the method of moments and (20) to estimate the $\alpha$ parameters of a beta-distributed variable $Z'$ of mean $E[Z'] = E[Z]$ and

$$s_{Z'} = \max\left\{ \frac{E[Z](1 - E[Z])}{\mathrm{var}[Z]} - 1,\ \frac{W a_Z}{E[Z]},\ \frac{W (1 - a_Z)}{1 - E[Z]} \right\}. \quad (23)$$

The maximum in (23) is needed to ensure that the resulting beta-distributed random variable $Z'$ does not lead to an $\boldsymbol{\alpha}_{Z'} < \langle 1, 1 \rangle$.
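A small sketch of this moment-matching step, mirroring (17), (20) and (23) (our own code):

def match_beta(mean: float, var: float, a: float = 0.5, W: float = 2.0):
    """Approximate a random variable with given mean/variance by a Beta."""
    s = max(mean * (1.0 - mean) / var - 1.0,   # eq. (20)
            W * a / mean,                      # floors ensuring alpha >= <1,1>
            W * (1.0 - a) / (1.0 - mean))
    return mean * s, (1.0 - mean) * s          # eq. (17)

# e.g., the mean 0.3571 and variance 0.0528 obtained for the running-example
# query later in Section 5.3 yield approximately Beta(1.20, 2.15):
print(match_beta(0.3571, 0.0528))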
Subjective Logic
Subjective logic [30] provides (1) an alternative, more intuitive way of representing the parameters of beta-distributed random variables, and (2) a set of operators for manipulating them. A subjective opinion about a proposition $X$ is a tuple $\omega_X = \langle b_X, d_X, u_X, a_X \rangle$, representing the belief, disbelief and uncertainty that $X$ is true at a given instance, and, as above, $a_X$ is the prior probability that $X$ is true in the absence of observations. These values are non-negative and $b_X + d_X + u_X = 1$. The projected probability $p(x) = b_X + u_X \cdot a_X$ provides an estimate of the ground truth probability $p_x$. The mapping from a beta-distributed random variable $X$ with parameters $\boldsymbol{\alpha}_X = \langle \alpha_x, \alpha_{\overline{x}} \rangle$ to a subjective opinion is:

$$\omega_X = \left\langle \frac{\alpha_x - W a_X}{s_X},\ \frac{\alpha_{\overline{x}} - W(1 - a_X)}{s_X},\ \frac{W}{s_X},\ a_X \right\rangle \quad (24)$$
With this transformation, the mean of $X$ is equivalent to the projected probability $p(x)$, and the Dirichlet strength is inversely proportional to the uncertainty of the opinion:

$$E[X] = p(x) = b_X + u_X a_X, \qquad s_X = \frac{W}{u_X} \quad (25)$$

Conversely, a subjective opinion $\omega_X$ translates directly into a beta-distributed random variable with:

$$\boldsymbol{\alpha}_X = \left\langle \frac{W}{u_X} b_X + W a_X,\ \frac{W}{u_X} d_X + W(1 - a_X) \right\rangle \quad (26)$$
Subjective logic is a framework that includes various operators to indirectly determine opinions from various logical operations. In particular, we will make use of $\oplus_{SL}$, $\otimes_{SL}$, and $\oslash_{SL}$, resp. summing, multiplying, and dividing two subjective opinions as they are defined in [30] (Appendix B). Those operators aim at faithfully matching the projected probabilities: for instance, the multiplication of two subjective opinions $\omega_X \otimes_{SL} \omega_Y$ results in an opinion $\omega_Z$ such that $p(z) = p(x) \cdot p(y)$.
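The mappings (24) and (26) are straightforward to implement; a minimal sketch (our own code, assuming the default prior a = 0.5 and W = 2):

def beta_to_opinion(alpha_x, alpha_not_x, a=0.5, W=2.0):
    s = alpha_x + alpha_not_x
    b = (alpha_x - W * a) / s              # eq. (24)
    d = (alpha_not_x - W * (1 - a)) / s
    u = W / s
    return b, d, u, a

def opinion_to_beta(b, d, u, a, W=2.0):
    return (W / u) * b + W * a, (W / u) * d + W * (1 - a)   # eq. (26)

print(beta_to_opinion(4, 8))                      # (0.25, 0.583..., 0.166..., 0.5)
print(opinion_to_beta(*beta_to_opinion(4, 8)))    # back to (4.0, 8.0)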
4 AMC-conditioning parametrisation with strong independence assumptions
Building upon our previous work [10], we allow manipulation of imprecise probabilities as labels in our circuits. Figure 3 shows an example of the circuits we will be manipulating, where the probabilities from the circuit depicted in Fig. 1 have been replaced by uncertain probabilities represented as beta-distributed random variables and formalised as SL opinions, in a shorthand format listing only belief and uncertainty values.
4.1 SL AMC-conditioning parametrisation with strong independence assumptions
The straightforward approach to derive an AMC-conditioning parametrisation under complete independence assumptions at each step of the evaluation of the probabilistic circuit using subjective logic is to use the operators $\oplus_{SL}$, $\otimes_{SL}$, and $\oslash_{SL}$. This gives rise to the SL AMC-conditioning parametrisation $S_{SL}$, defined as follows:

$$A_{SL} = \mathbb{R}^4_{\geq 0}$$
$$a \oplus b = \begin{cases} a & \text{if } b = e^{\oplus_{SL}} \\ b & \text{if } a = e^{\oplus_{SL}} \\ a \oplus_{SL} b & \text{otherwise} \end{cases} \qquad a \otimes b = \begin{cases} a & \text{if } b = e^{\otimes_{SL}} \\ b & \text{if } a = e^{\otimes_{SL}} \\ a \otimes_{SL} b & \text{otherwise} \end{cases}$$
$$e^{\oplus_{SL}} = \langle 0, 1, 0, 0 \rangle \qquad e^{\otimes_{SL}} = \langle 1, 0, 0, 1 \rangle$$
$$\rho_{SL}(f_i) = \langle b_{f_i}, d_{f_i}, u_{f_i}, a_{f_i} \rangle \qquad \rho_{SL}(\neg f_i) = \langle d_{f_i}, b_{f_i}, u_{f_i}, 1 - a_{f_i} \rangle$$
$$a \oslash b = \begin{cases} a & \text{if } b = e^{\otimes_{SL}} \\ a \oslash_{SL} b & \text{if defined} \\ \langle 0, 0, 1, 0.5 \rangle & \text{otherwise} \end{cases} \quad (27)$$

[Figure 3: Figure 1 with leaves labelled with imprecise probabilities represented as Subjective Logic opinions, listing only $b_X$ and $u_X$: $d_X = 1 - b_X - u_X$, and $a_X = 0.5$. Solid box for query, double box for evidence.]
Note that $\langle A_{SL}, \oplus_{SL}, \otimes_{SL}, e^{\oplus_{SL}}, e^{\otimes_{SL}} \rangle$ does not form a commutative semiring in general. If we consider only the projected probabilities, i.e. the means of the associated beta distributions, then $\oplus$ and $\otimes$ are indeed commutative, associative, and $\otimes$ distributes over $\oplus$. However, the uncertainty of the resulting opinion depends on the order of the operands.
4.2 Moment Matching AMC-conditioning parametrisation with strong independence assumptions
In [10] we derived another set of operators based on moment matching: they aim at maintaining a stronger connection to the beta distribution as the result of the manipulation. Indeed, while SL operators try to faithfully characterise the projected probabilities, they employ an uncertainty maximisation principle to limit the belief commitments, hence they have a looser connection to the beta distribution. Instead, in [10] we first represented beta distributions (and thus also SL opinions) not as parametric in $\boldsymbol{\alpha}$, but rather as parametric in mean and variance. We then proposed operators that manipulate means and variances, and transformed the results back into beta distributions by moment matching.
In [10] we first defined a sum operator between two independent beta-distributed random variables $X$ and $Y$ as the beta-distributed random variable $Z$ such that $E[Z] = E[X + Y]$ and $\sigma^2_Z = \sigma^2_{X+Y}$. The sum (and in the following the product as well) of two beta random variables is not necessarily a beta random variable. Consistently with [33], the resulting distribution is then approximated as a beta distribution via moment matching on mean and variance.
Given $X$ and $Y$ independent beta-distributed random variables represented by the subjective opinions $\omega_X$ and $\omega_Y$, the sum of $X$ and $Y$ ($\omega_X \oplus_\beta \omega_Y$) is defined as the beta-distributed random variable $Z$ such that:

$$E[Z] = E[X + Y] = E[X] + E[Y] \quad (28)$$

and

$$\sigma^2_Z = \sigma^2_{X+Y} = \sigma^2_X + \sigma^2_Y. \quad (29)$$

$\omega_Z = \omega_X \oplus_\beta \omega_Y$ can then be obtained as discussed in Section 3, taking (23) into consideration. The same applies for the following operators as well.
The product operator between two independent beta-distributed random variables $X$ and $Y$ is then defined as the beta-distributed random variable $Z$ such that $E[Z] = E[XY]$ and $\sigma^2_Z = \sigma^2_{XY}$. Given $X$ and $Y$ independent beta-distributed random variables represented by the subjective opinions $\omega_X$ and $\omega_Y$, the product of $X$ and $Y$ ($\omega_X \otimes_\beta \omega_Y$) is defined as the beta-distributed random variable $Z$ such that:

$$E[Z] = E[XY] = E[X]\, E[Y] \quad (30)$$

and

$$\sigma^2_Z = \sigma^2_{XY} = \sigma^2_X (E[Y])^2 + \sigma^2_Y (E[X])^2 + \sigma^2_X \sigma^2_Y. \quad (31)$$
Finally, the conditioning-division operator between two independent beta-distributed random variables $X$ and $Y$, represented by subjective opinions $\omega_X$ and $\omega_Y$, is the beta-distributed random variable $Z$ such that $E[Z] = E[X/Y]$ and $\sigma^2_Z = \sigma^2_{X/Y}$.

Given $\omega_X = \langle b_X, d_X, u_X, a_X \rangle$ and $\omega_Y = \langle b_Y, d_Y, u_Y, a_Y \rangle$ subjective opinions such that $X$ and $Y$ are beta-distributed random variables, $Y = A(I(\mathbf{E} = \mathbf{e})) = A(I(q \wedge \mathbf{E} = \mathbf{e})) \oplus A(I(\neg q \wedge \mathbf{E} = \mathbf{e}))$, with $A(I(q \wedge \mathbf{E} = \mathbf{e})) = X$. The conditioning-division of $X$ by $Y$ ($\omega_X \oslash_\beta \omega_Y$) is defined as the beta-distributed random variable $Z$ such that:

$$E[Z] = E\!\left[\frac{X}{Y}\right] = E[X]\, E\!\left[\frac{1}{Y}\right] \approx \frac{E[X]}{E[Y]} \quad (32)$$

and

$$\sigma^2_Z \approx (E[Z])^2 (1 - E[Z])^2 \left( \frac{\sigma^2_X}{(E[X])^2} + \frac{\sigma^2_Y + \sigma^2_X}{(E[Y] - E[X])^2} + \frac{2\,\sigma^2_X}{E[X](E[Y] - E[X])} \right) \quad (33)$$
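A sketch of these three operators on (mean, variance) pairs, implementing (28)-(33) under the stated independence assumption (our own code; the moment-matched beta can then be recovered via (23)):

def sum_beta(mx, vx, my, vy):
    # eqs (28)-(29)
    return mx + my, vx + vy

def prod_beta(mx, vx, my, vy):
    # eqs (30)-(31)
    return mx * my, vx * my**2 + vy * mx**2 + vx * vy

def cond_div_beta(mx, vx, my, vy):
    # eqs (32)-(33): X is the "query and evidence" label, Y the evidence label;
    # assumes 0 < E[X] < E[Y]
    mz = mx / my
    vz = (mz**2) * (1 - mz)**2 * (vx / mx**2
                                  + (vy + vx) / (my - mx)**2
                                  + 2 * vx / (mx * (my - mx)))
    return mz, vz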
Similarly to (27), the moment matching AMC-conditioning parametrisation $S_\beta$ is defined as follows:

$$A_\beta = \mathbb{R}^4_{\geq 0}$$
$$a \oplus b = a \oplus_\beta b \qquad a \otimes b = a \otimes_\beta b$$
$$e^{\oplus_\beta} = \langle 0, 1, 0, 0.5 \rangle \qquad e^{\otimes_\beta} = \langle 1, 0, 0, 0.5 \rangle$$
$$\rho_\beta(f_i) = \langle b_{f_i}, d_{f_i}, u_{f_i}, a_{f_i} \rangle \in [0,1]^4 \qquad \rho_\beta(\neg f_i) = \langle d_{f_i}, b_{f_i}, u_{f_i}, 1 - a_{f_i} \rangle$$
$$a \oslash b = a \oslash_\beta b \quad (34)$$
As per (27), $\langle A_\beta, \oplus_\beta, \otimes_\beta, e^{\oplus_\beta}, e^{\otimes_\beta} \rangle$ is also not in general a commutative semiring. Means are correctly matched to projected probabilities, therefore for them $S_\beta$ actually operates as a semiring. However, for what concerns variance, by using (31) and (29), thus under the independence assumption, the product is not distributive over addition:

$$\mathrm{var}[X(Y+Z)] = \mathrm{var}[X](E[Y] + E[Z])^2 + (\mathrm{var}[Y] + \mathrm{var}[Z])E[X]^2 + \mathrm{var}[X](\mathrm{var}[Y] + \mathrm{var}[Z])$$
$$\neq \mathrm{var}[X](E[Y]^2 + E[Z]^2) + (\mathrm{var}[Y] + \mathrm{var}[Z])E[X]^2 + \mathrm{var}[X](\mathrm{var}[Y] + \mathrm{var}[Z]) = \mathrm{var}[(XY) + (XZ)].$$
To illustrate the discrepancy, let's consider node 6 in Figure 3: the disjunction operator there sums up probabilities that are not statistically independent, despite the independence assumption used in developing the operator. Indeed, for a deterministic disjunction whose children decompose $Y$ as $Y \otimes X$ and $Y \otimes \overline{X}$, a sound propagation should satisfy

$$\mathrm{var}[Y \otimes X] \oplus \mathrm{var}[Y \otimes \overline{X}] = \mathrm{var}[Y] \quad (35)$$

which the operators above cannot guarantee. Due to the dependencies between nodes in the circuit, the error grows during propagation, and then the numerator and denominator in the conditioning operator exhibit strong correlation due to redundant operators. Therefore, (33) introduces further error, leading to an overall inadequate characterisation of variance. The next section reformulates the operations to account for the existing correlations.

Algorithm 3 Solving the PROB problem on a circuit $N_A$ labelled with identifiers of beta-distributed random variables and the associative table $A$, and covariance matrix $C_A$.

1: procedure CovProbBeta(N_A, C_A)
2:   N̂_A := ShadowCircuit(N_A)
3:   return EvalCovProbBeta(N̂_A, C_A)
4: end procedure
Algorithm 3 provides an overview of CPB, which comprises three stages: (1) pre-processing; (2) circuit shadowing; and (3) evaluation. The overall approach is to view the second-order probability of each node in the circuit as a beta distribution. The distributions are determined by moment matching on the first and second moments via (18) and (20). Effectively, the collection of nodes is treated as a multivariate Gaussian characterised by a mean vector and a covariance matrix that is computed via the propagation process described below. When analysing the distribution for a particular node (via marginalisation of the Gaussian), it is approximated via the best-fitting beta distribution through moment matching.
5.1 Pre-processing
We assume that the circuit we are receiving has its leaves labelled with unique identifiers of beta-distributed random variables. We also allow for the specification of the covariance matrix between the beta-distributed random variables, bearing in mind that $\mathrm{cov}[X, 1 - X] = -\mathrm{var}[X]$, cf. (18) and (19). We do not provide a specific algorithm for this, as it would depend on the way the circuit is computed. In our running example, we assume the ProbLog code from Listing 1 has been transformed into the aProbLog code in Listing 2.
We also expect there to be a table associating each identifier with the actual value of the corresponding beta-distributed random variable. In the following, we assume that ω1 is a reserved identifier for Beta(∞, 1) (in Subjective Logic terms ⟨1.0, 0.0, 0.0, 0.5⟩). For instance, Table 1 provides the associations for the code in Listing 2, and Table 2 the covariance matrix for those beta-distributed random variables, which we assume have been learnt from complete observations of independent random variables, hence the posterior beta-distributed random variables are also independent (cf. Appendix C).
1 ω2::burglary.
2 ω3::earthquake.
3 ω4::hears_alarm(john).
4 alarm :- burglary.
5 alarm :- earthquake.
6 calls(john) :- alarm, hears_alarm(john).
7 evidence(calls(john)).
8 query(burglary).

Listing 2: ProbLog code for the Burglary example with unique identifiers for the random variables associated to the database, originally Example 6 in [21].

Table 1: Associations between identifiers and beta-distributed random variables for the code in Listing 2.

Identifier   Beta parameters   Subjective Logic opinion
ω1           Beta(∞, 1)        ⟨1.00, 0.00, 0.00, 0.50⟩
ω̄1           Beta(1, ∞)        ⟨0.00, 1.00, 0.00, 0.50⟩
...

Table 2: Covariance matrix for the beta-distributed random variables of Table 1. We use a short-hand notation for clarity: σ²ᵢ = cov[ωᵢ]. Zeros are omitted.

       ω1     ω̄1     ω2     ω̄2     ω3     ω̄3     ω4     ω̄4
ω1     σ²₁   −σ²₁
ω̄1    −σ²₁    σ²₁
ω2                   σ²₂   −σ²₂
ω̄2                  −σ²₂    σ²₂
ω3                                 σ²₃   −σ²₃
ω̄3                                −σ²₃    σ²₃
ω4                                               σ²₄   −σ²₄
ω̄4                                              −σ²₄    σ²₄
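A Table-2-style leaf covariance matrix can be assembled mechanically from the per-variable variances; a minimal sketch (our own code):

import numpy as np

def leaf_covariance(variances):
    """2x2 blocks per variable: cov[X,X] = var, cov[X, 1-X] = -var (eq. 19)."""
    k = len(variances)
    C = np.zeros((2 * k, 2 * k))
    for i, v in enumerate(variances):
        C[2*i, 2*i] = C[2*i+1, 2*i+1] = v
        C[2*i, 2*i+1] = C[2*i+1, 2*i] = -v
    return C

# hypothetical variances for omega_1..omega_4 (and their negations):
print(leaf_covariance([0.0, 0.02, 0.01, 0.015]))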
5.2 Circuit shadowing
We then augment the circuit by adding shadow nodes, superimposing a second circuit, to enable the possibility of assessing, in a single forward pass, both $p(\text{query} \wedge \text{evidence})$ and $p(\text{evidence})$. This can provide a benefit time-wise at the expense of memory, but, more importantly, it simplifies the bookkeeping of indexes in the covariance matrix, as we will see below.
Algorithm 4 focuses on the node that identifies the negation of the query we want to evaluate with this circuit, identified as qnode($N_A$): indeed, to evaluate $p(\text{query} \wedge \text{evidence})$, the λ parameter for such a node must be set to 0. In lines 4-18, Algorithm 4 superimposes a new circuit by creating shadow nodes, e.g. $\hat{c}$ at line 9, that will represent random variables affected by the change in the λ parameter for qnode($N_A$). The result of Algorithm 4 on the circuit for our running example is depicted in Figure 4.
In Algorithm 4 we make use of a stack data structure with associated pop and push functions (cf. lines 3, 5, 8, 16): this is for ease of presentation, as the algorithm does not require an ordered list.
5.3 Evaluating the shadowed circuit
Each node in the shadowed circuit (e.g. Figure 4) has an associated (beta-distributed) random variable. In the following, and in Algorithm 5, given a node $n$, its associated random variable is identified as $X_n$. For the nodes for which a ρ label exists, the associated random variable is the beta-distributed random variable labelled via the ρ function, cf. Figure 4.

Algorithm 4
...
19:   return N̂_A
20: end procedure

[Figure 4: Shadowing of the circuit represented in Figure 1 according to Algorithm 4. Solid box for query, double box for evidence, in grey the shadow nodes added to the circuit. If a node has a shadow, they are grouped together with a dashed box. Dashed arrows connect shadow nodes to their children.]

Algorithm 5 begins with building a vector of means (means) and a matrix of covariances (cov) of the random variables associated to the leaves of the circuit (lines 2-16), derived from the $C_A$ covariance matrix provided as input. The algorithm can be made more robust by handling the case where $C_A$ is empty or a matrix of zeroes: in this case, assuming independence among the variables, it is straightforward to obtain a matrix such as the one in Table 2.
Then, Algorithm 5 proceeds to compute the means and covariances for all the remaining nodes in the circuit (lines 17-31). Here two cases arise.
Let $n$ be a ⊕-gate over $C$ nodes, its children: hence (lines 22-25)

$$E[X_n] = \sum_{c \in C} E[X_c], \quad (36)$$

$$\mathrm{cov}[X_n] = \sum_{c \in C} \sum_{c' \in C} \mathrm{cov}[X_c, X_{c'}], \quad (37)$$

$$\mathrm{cov}[X_n, X_z] = \sum_{c \in C} \mathrm{cov}[X_c, X_z] \quad \text{for } z \in \hat{N}_A \setminus \{n\} \quad (38)$$

with

$$\mathrm{cov}[X, Y] = E[XY] - E[X]E[Y] \quad (39)$$

and $\mathrm{cov}[X] = \mathrm{cov}[X, X] = \mathrm{var}[X]$.

Let $n$ be a ⊗-gate over $C$ nodes, its children (lines 26-30). Due to the nature of the variable $X_n$, following [25, §4.3.2] we perform a Taylor approximation.
Algorithm 5 Evaluating the shadowed circuit $\hat{N}_A$ taking into consideration the given $C_A$ covariance matrix.

...
21:   nvisited := nvisited ∪ {n}
22:   if n is a (shadowed) disjunction over C := children(N̂_A, n) then
...
25:     cov[z, n] := cov[n, z] := Σ_{c∈C} cov[c, z]   ∀z ∈ N̂_A \ {n}
26:   else if n is a (shadowed) conjunction over C := children(N̂_A, n) then
...

Let us assume $X_n = \Pi(X_C) = \prod_{c \in C} X_c$, with $X_C = (X_{c_1}, \ldots, X_{c_k})^T$ and $k = |C|$.
Expanding the first two terms yields:
$$X_n \approx \Pi(E[X_C]) + (X_C - E[X_C])^T \nabla\Pi(X_C)\big|_{X_C = E[X_C]}$$
$$= E[X_n] + (X_{c_1} - E[X_{c_1}]) \prod_{c \in C \setminus \{c_1\}} E[X_c] + \ldots + (X_{c_k} - E[X_{c_k}]) \prod_{c \in C \setminus \{c_k\}} E[X_c]$$
$$= E[X_n] + \sum_{c \in C} \frac{\prod_{c' \in C} E[X_{c'}]}{E[X_c]} (X_c - E[X_c]) = E[X_n] + \sum_{c \in C} \frac{E[X_n]}{E[X_c]} (X_c - E[X_c]) \quad (40)$$

where the first term can be seen as an approximation for $E[X_n]$.
Using this approximation, then (lines 34-35 of Algorithm 5)

$$\mathrm{cov}[X_n] \approx \sum_{c \in C} \sum_{c' \in C} \frac{E[X_n]^2}{E[X_c] E[X_{c'}]} \mathrm{cov}[X_c, X_{c'}], \quad (41)$$

$$\mathrm{cov}[X_n, X_z] \approx \sum_{c \in C} \frac{E[X_n]}{E[X_c]} \mathrm{cov}[X_c, X_z] \quad \text{for } z \in \hat{N}_A \setminus \{n\}. \quad (42)$$
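The following minimal sketch (our own code, not the paper's implementation) shows the propagation step for a single ⊕-gate or ⊗-gate over a preallocated mean vector and covariance matrix indexed by node, per (36)-(38) and (40)-(42):

import numpy as np

def add_or_node(means, cov, n, children):
    # eq. (36): the mean of the disjunction is the sum of the children's means
    means[n] = sum(means[c] for c in children)
    # eq. (38): covariance with every other node is the sum over the children
    for z in range(len(means)):
        if z != n:
            cov[n, z] = cov[z, n] = sum(cov[c, z] for c in children)
    # eq. (37): variance of the new node
    cov[n, n] = sum(cov[c, c2] for c in children for c2 in children)

def add_and_node(means, cov, n, children):
    # first term of (40): the mean is the product of the children's means
    means[n] = float(np.prod([means[c] for c in children]))
    # first-order Taylor weights E[X_n] / E[X_c]; assumes nonzero child means
    w = {c: means[n] / means[c] for c in children}
    # eq. (42)
    for z in range(len(means)):
        if z != n:
            cov[n, z] = cov[z, n] = sum(w[c] * cov[c, z] for c in children)
    # eq. (41)
    cov[n, n] = sum(w[c] * w[c2] * cov[c, c2]
                    for c in children for c2 in children)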
Finally, Algorithm 5 computes a conditioning between $X_r$ and $X_{\hat{r}}$, the random variables associated to the root and to its shadow, via a first-order Taylor approximation of their ratio,

$$\frac{X_r}{X_{\hat{r}}} \approx \frac{E[X_r]}{E[X_{\hat{r}}]} + \frac{X_r - E[X_r]}{E[X_{\hat{r}}]} - \frac{E[X_r]}{E[X_{\hat{r}}]^2} \left( X_{\hat{r}} - E[X_{\hat{r}}] \right) \quad (43)$$

which implies

$$E\!\left[\frac{X_r}{X_{\hat{r}}}\right] \approx \frac{E[X_r]}{E[X_{\hat{r}}]}, \quad (44)$$

$$\mathrm{cov}\!\left[\frac{X_r}{X_{\hat{r}}}\right] \approx \left(\frac{E[X_r]}{E[X_{\hat{r}}]}\right)^2 \left( \frac{\mathrm{cov}[X_r]}{E[X_r]^2} + \frac{\mathrm{cov}[X_{\hat{r}}]}{E[X_{\hat{r}}]^2} - \frac{2\, \mathrm{cov}[X_r, X_{\hat{r}}]}{E[X_r] E[X_{\hat{r}}]} \right). \quad (45)$$

Tables 3 and 4 depict respectively the non-zero values of the means vector and of the cov matrix for our running example. Overall, the mean and variance for $p(\text{burglary} \mid \text{calls(john)})$ are 0.3571 and 0.0528, respectively. Figure 5 depicts the resulting beta-distributed random variable (solid line) against a Monte Carlo simulation.
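The final conditioning then needs only the means, variances, and covariance of the two roots; a sketch of (44)-(45) (our own code):

def condition(m_r, m_rhat, var_r, var_rhat, cov_r_rhat):
    mean = m_r / m_rhat                                      # eq. (44)
    var = mean ** 2 * (var_r / m_r ** 2
                       + var_rhat / m_rhat ** 2
                       - 2.0 * cov_r_rhat / (m_r * m_rhat))  # eq. (45)
    return mean, var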
5.4 Memory performance
Algorithm 3 returns the mean and variance of the probability for the query conditioned on the evidence. Algorithm 4 adds shadow nodes to the initial circuit formed by the evidence to avoid redundant computations in the second pass. For the sake of clarity, Algorithm 5 is presented in its most simple form. As formulated, it requires a $|\hat{N}_A| \times |\hat{N}_A|$ array to store the covariance values between the nodes. For large circuits, this memory requirement can significantly slow down the processing (e.g., disk swaps) or simply become prohibitive. The covariances of a particular node are only required after it is computed via lines 24-25 or 34-35 in Algorithm 5. Furthermore, these covariances are no longer needed once all the parent node values have been computed. Thus, it is straightforward to dynamically allocate/de-allocate portions of the covariance array as needed. In fact, the selection of the node $n$ to compute in line 19, which is currently arbitrary, can be designed to minimise processing time in light of the resident memory requirements for the covariance array. Such an optimisation depends on the computing architecture and complicates the presentation. Thus, further details are beyond the scope of this paper.
6 Experimental results
The benefits of considering covariances
To illustrate the benefits of Algorithm 3 (Section 5), we run an experimental analysis involving several circuits with unspecified labelling functions. For each circuit, labels are first derived for the case of parametrisation $S_p$ (5) by selecting the ground truth probabilities from a uniform random distribution. Then, for each label, we derive a subjective opinion by observing $N_{ins}$ instantiations of a random variable derived from the chosen probability, so as to simulate data sparsity [33].
We then proceed analysing the inference on specific query nodes q in the presence of a set of evidence E " e using:
• CPB as articulated in Section 5;
• S β , cf. (34);
• S SL , cf. (27);
• MC , a Monte Carlo analysis with 100 samples from the derived random variables to obtain probabilities, and then computing the probability of queries in presence of evidence using the parametrisation S p .
We then compare the RMSE to the actual ground truth. This process of inference to determine the marginal beta distributions is repeated 1000 times, by considering 100 random choices for each label of the circuit, i.e. the ground truth, and for each ground truth 10 repetitions of sampling the interpretations.
Following [10], we consider the famous Friends & Smokers problem, cf. Listing 3, 8 with fixed queries and set of evidence. Table 5 provides the root mean square error (RMSE) between the projected probabilities and the ground truth probabilities for all the inferred query variables for N ins = 10, 50, 100. The table also includes the predicted RMSE by taking the square root of the average-over the number of runs-variances from the inferred marginal beta distributions, cf. (18). Figure 6 plots the desired and actual significance levels for the confidence intervals (best closest to the diagonal), i.e. the fractions of times the ground truth falls within confidence bounds set to capture x% of the data. Figure 7 depicts the distribution of execution time for running the various algorithm over this dataset: for each circuit, all algorithms have been computed on a Intel(R) Xeon(R) Skylake Quad-core 2.30GHz and 128 GB of RAM. Finally, Figure 8 depicts the correlation of Dirichlet strengths between MC runs with variable number of samples and the golden standard (i.e. a Monte Carlo run with 10,000 samples), as well as between CPB and the golden standard, which is clearly independent of the number of samples used for MC . Given X g q (resp. X q ) the random variable associated to the queries q computed using the golden standard (resp. computed using either MC or CPB ), the Pearson's correlation coefficient displayed in Figure 8 is given by:
r " covrs X g q , s Xq s covrs X g q scovrs Xq s(46)
This is a measure of the quality of the epistemic uncertainty associated with the evaluation of the circuit using MC with a varying number of samples, and CPB: the closer the Dirichlet strengths are to those of the golden standard, the better the computed epistemic uncertainty represents the actual uncertainty; hence the closer the correlations are to 1 in Figure 8, the better. From Table 5, CPB exhibits the lowest RMSE and the best prediction of its own RMSE. As already noticed in [10], $S_\beta$ is a little conservative in estimating its own RMSE, while $S_{SL}$ is overconfident. This is reflected in Figure 6, with the results of $S_\beta$ being over the diagonal and those of $S_{SL}$ being below it, while CPB sits exactly on the diagonal, as does MC. However, MC with 100 samples does not exhibit the lowest RMSE according to Table 5, although the difference with the best one is much lower compared with $S_{SL}$. Considering the execution time, Figure 7, we can see that there is a substantial difference between CPB and MC with 100 samples.
Finally, Figure 8 depicts the correlation of the Dirichlet strength between the golden standard, i.e. a Monte Carlo simulation with 10,000 samples, and both CPB and MC, the latter with a varying number of samples. It is straightforward to see that MC improves the accuracy of the computed epistemic uncertainty when increasing the number of samples considered, approaching the same level as CPB when considering more than 200 samples.
Comparison with other approaches for dealing with uncertain probabilities
To compare our approach against the state-of-the-art approaches for reasoning with uncertain probabilities, following [10] we restrict ourselves to the case of circuits representing inferences over a Bayesian network. For instance, Listing 4 shows an aProbLog code that can also be interpreted as a Bayesian network. We considered three circuits and their Bayesian network representations: Net1 (Listing 4); Net2; and Net3. Figure 12 in Appendix D depicts the Bayesian networks that can be derived from such circuits. In the following, we will refer to NetX as both the circuit and the Bayesian network without distinction. We then compared CPB against three approaches specifically designed for dealing with uncertain probabilities in Bayesian networks: Subjective Bayesian Networks; Belief Networks; and Credal Networks.
Subjective Bayesian Network. SBN [28,32,33] was first proposed in [28], and it is an uncertain Bayesian network where the conditionals are subjective opinions instead of dogmatic probabilities. In other words, the conditional probabilities are known within a beta distribution. SBN uses subjective belief propagation (SBP), which was introduced for trees in [32] and extended for singly-connected networks in [33], and which extends the Belief Propagation (BP) inference method of [45]. In BP, π- and λ-messages are passed from parents and children, respectively, to a node, i.e., variable. The node uses these messages to formulate the inferred marginal probability of the corresponding variable. The node also uses these messages to determine the π- and λ-messages to send to its children and parents, respectively. In SBP, the π- and λ-messages are subjective opinions characterised by a projected probability and Dirichlet strength.
The SBP formulation approximates output messages as beta-distributed random variables using the method of moments and a first-order Taylor series approximation to determine the mean and variance of the output messages in light of the beta-distributed input messages. The details of the derivations are provided in [32,33].
Belief Networks. GBT [53] introduced a computationally efficient method to reason over networks via Dempster-Shafer theory [18]. It is an approximation of a valuation-based system. Namely, a (conditional) subjective opinion $\omega_X = \langle b_x, b_{\overline{x}}, u_X \rangle$ from our circuit obtained from data is converted to the following belief mass assignment: $m(x) = b_x$, $m(\overline{x}) = b_{\overline{x}}$ and $m(x \cup \overline{x}) = u_X$. Note that in the binary case, the belief function overlaps with the belief mass assignment. The method exploits the disjunctive rule of combination to compose beliefs conditioned on the Cartesian product space of the binary power sets. This enables both forward propagation and backward propagation after inverting the belief conditionals via the generalized Bayes' theorem (GBT). By operating in the Cartesian product space of the binary power sets, the computational complexity grows exponentially with respect to the number of parents.
Credal Networks. Credal [61]. A credal network over binary random variables extends a Bayesian network by replacing single probability values with closed intervals representing the possible range of probability values. The extension of Pearl's message-passing algorithm to credal networks by the 2U algorithm is described in [61]. This algorithm works by determining the maximum and minimum value (an interval) for each of the target probabilities based on the given input intervals. It turns out that these extreme values lie at the vertices of the polytope dictated by the extreme values of the input intervals. As a result, the computational complexity grows exponentially with respect to the number of parent nodes. For the sake of comparison, we assume that the random variables with which we label our circuits, elicited from the given data, correspond to a credal network in the following way: if $\omega_X = [b_x, b_{\bar{x}}, u_X]$ is a subjective opinion on the probability $p_x$, then we take $[b_x, b_x + u_X]$ as the interval corresponding to this probability in the credal network. It should be noted that this mapping from beta-distributed random variables to intervals is consistent with past studies of credal networks [35].
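As a minimal sketch of the two conversions just described (illustrative Python; the function names are ours), a binary subjective opinion $\omega_X = [b_x, b_{\bar{x}}, u_X]$ maps to the belief masses used by GBT and to the interval fed to the 2U algorithm:

def opinion_to_belief_masses(b_x, b_notx, u_x):
    """GBT-style belief mass assignment m from a binary subjective opinion
    omega_X = [b_x, b_notx, u_X]; in the binary case the belief function
    coincides with the belief mass assignment."""
    return {"x": b_x, "not_x": b_notx, "x_or_not_x": u_x}

def opinion_to_credal_interval(b_x, u_x):
    """Interval [b_x, b_x + u_X] corresponding to the opinion on p_x."""
    return (b_x, b_x + u_x)

print(opinion_to_belief_masses(0.6, 0.3, 0.1))   # {'x': 0.6, 'not_x': 0.3, 'x_or_not_x': 0.1}
print(opinion_to_credal_interval(0.6, 0.1))      # (0.6, 0.7)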
As before, Table 6 provides the root mean square error (RMSE) between the projected probabilities and the ground truth probabilities for all the inferred query variables for N_ins = 10, 50, 100, together with the RMSE predicted by taking the square root of the average variances from the inferred marginal beta distributions. Figure 9 plots the desired and actual significance levels for the confidence intervals (best closest to the diagonal). Figure 10 depicts the distribution of execution times for running the various algorithms, and Figure 11 the correlation of the Dirichlet strength between the golden standard, i.e. a Monte Carlo simulation with 10,000 samples, and both CPB and MC with a varying number of samples. Table 6 shows that CPB almost consistently shares the best performance with the state-of-the-art SBN and with S_β. This is clearly a significant achievement considering that SBN is the state-of-the-art approach when dealing only with singly-connected Bayesian networks with uncertain probabilities, while we can also handle much more complex problems. Consistently with Table 5, and also with [10], S_β has lower RMSE than S_SL, and it appears that S_β overestimates its predicted RMSE while S_SL underestimates it: S_SL predicts a smaller error than is realised, and vice versa for S_β.
From visual inspection of Figure 9, it is evident that CPB, SBN, and MC are all very close to the diagonal, thus correctly assessing their own epistemic uncertainty. The performance of S_β is heavily affected by the fact that it computes the conditional distributions at the very end of the process and relies, in (33), on an assumption of independence. CPB, which keeps track of the covariance between the various nodes in the circuit, does not suffer from this problem. This positive result has been achieved without a substantial deterioration of performance in terms of execution time, as displayed in Figure 10, to which the same commentary as for Figure 7 applies.
Finally, Figure 11 depicts the correlation of the Dirichlet strength between the golden standard, i.e. a Monte Carlo simulation with 10,000 samples, and both CPB and MC, the latter with a varying number of samples. As in Figure 8, it is straightforward to see that MC improves the accuracy of its computed epistemic uncertainty as the number of samples increases, approaching the same level as CPB when more than 200 samples are considered, while CPB performs very close to the optimal value of 1.
Conclusion
In this paper, we introduce (Section 5) an algorithm for reasoning over a probabilistic circuit whose leaves are labelled with beta-distributed random variables, together with the additional piece of information describing which of those are actually independent (Section 5.1). This provides the input to an algorithm that shadows the circuit derived for computing the probability of the pieces of evidence, superimposing a second circuit modified for computing the joint probability of a given query and the pieces of evidence, thus providing all the necessary components for computing the probability of a query conditioned on the pieces of evidence (Section 5.2). These components are then combined when evaluating the shadowed circuit (Section 5.3), with the covariance matrix playing an essential role by keeping track of the dependencies between random variables as they are manipulated within the circuit. We also include a discussion of memory management in Section 5.4.
In our extensive experimental analysis (Section 6) we compare against leading approaches for computing uncertain probabilities, notably: (1) Monte Carlo sampling; (2) our previous proposal [10], as representative of the family of approaches using moment matching with strong independence assumptions; (3) Subjective Logic [30]; (4) Subjective Bayesian Networks (SBN) [28,32,33]; (5) the Dempster-Shafer Theory of Evidence [18,53]; and (6) credal networks [61]. We achieve results equal to or better than those of state-of-the-art approaches for dealing with epistemic uncertainty, including highly engineered ones for a narrow domain such as SBN, while being able to handle general probabilistic circuits and with only a modest increase in computational effort. In fact, this work has inspired us to leverage probabilistic circuits to expand second-order inference for SBN to arbitrary directed acyclic graphs whose variables are multinomials. In work soon to be released [34], we prove the mathematical equivalence of the updated SBN inference approach to that of [55], but with significantly lower computational burden.
We focused our attention on probabilistic circuits derived from d-DNNFs: work by [15], and later by [38], introduced Sentential Decision Diagrams (SDDs) as a new canonical formalism for propositional and for probabilistic circuits, respectively. However, as noted in [15, p. 819], SDDs are a strict subset of d-DNNFs, which is thus the least constrained type of propositional circuit we can safely rely on according to [37, Theorem 4]. In future work we will enable our approach to efficiently make use of SDDs.
In addition, we will also work in the direction of enabling learning with partial observations-incomplete data where the instantiations of each of the propositional variables are not always visible over all training instantiations-on top of the ability of tracking the covariance values between the various random variables for a better estimation of epistemic uncertainty.

A aProbLog

In the last years, several probabilistic variants of Prolog have been developed, such as ICL [46], Dyna [20], PRISM [51] and ProbLog [17], with its aProbLog extension [36] to handle arbitrary labels from a semiring. They all are based on definite clause logic (pure Prolog) extended with facts labelled with probability values. Their meaning is typically derived from Sato's distribution semantics [50], which assigns a probability to every literal. The probability of a Herbrand interpretation, or possible world, is the product of the probabilities of the literals occurring in this world. The success probability is the probability that a query succeeds in a randomly selected world.
For a set J of ground facts, we define the set of literals $\mathcal{L}(J)$ and the set of interpretations $\mathcal{I}(J)$ as follows:
$$\mathcal{L}(J) = J \cup \{\neg f \mid f \in J\} \qquad (47)$$
$$\mathcal{I}(J) = \{S \mid S \subseteq \mathcal{L}(J) \wedge \forall l \in J : l \in S \leftrightarrow \neg l \notin S\} \qquad (48)$$
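A brute-force Python sketch of definitions (47) and (48); the encoding of negated literals as ('not', f) tuples is an illustration choice of ours:

from itertools import product

def literals(J):
    """L(J): every fact together with its negation, eq. (47)."""
    return set(J) | {("not", f) for f in J}

def interpretations(J):
    """I(J): total truth assignments over J, eq. (48); each fact appears
    either positively or negated, never both."""
    J = sorted(J)
    for bits in product([True, False], repeat=len(J)):
        yield {f if b else ("not", f) for f, b in zip(J, bits)}

for I in interpretations({"burglary"}):
    print(I)   # {'burglary'} and {('not', 'burglary')}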
An algebraic Prolog (aProbLog) program [36] consists of:

• a commutative semiring $\langle \mathcal{A}, \oplus, \otimes, e^{\oplus}, e^{\otimes} \rangle$
• a finite set of ground algebraic facts $F = \{f_1, \ldots, f_n\}$
• a finite set BK of background knowledge clauses
• a labeling function $\rho : \mathcal{L}(F) \to \mathcal{A}$

Background knowledge clauses are definite clauses, but their bodies may contain negative literals for algebraic facts. Their heads may not unify with any algebraic fact. For instance, in the following aProbLog program

alarm :- burglary.
0.05::burglary.

burglary is an algebraic fact with label 0.05, and alarm :- burglary represents a background knowledge clause, whose intuitive meaning is: in case of burglary, the alarm should go off. The idea of splitting a logic program into a set of facts and a set of clauses goes back to Sato's distribution semantics [50], where it is used to define a probability distribution over interpretations of the entire program in terms of a distribution over the facts. This is possible because a truth value assignment to the facts in F uniquely determines the truth values of all other atoms defined in the background knowledge. In the simplest case, as realised in ProbLog [17,21], this basic distribution considers facts to be independent random variables and thus multiplies their individual probabilities. aProbLog uses the same basic idea, but generalises from the semiring of probabilities to general commutative semirings. While the distribution semantics is defined for countably infinite sets of facts, the set of ground algebraic facts in aProbLog must be finite.
In aProbLog, the label of a complete interpretation $I \in \mathcal{I}(F)$ is defined as the product of the labels of its literals

$$A(I) = \bigotimes_{l \in I} \rho(l) \qquad (49)$$

and the label of a set of interpretations $S \subseteq \mathcal{I}(F)$ as the sum of the interpretation labels

$$A(S) = \bigoplus_{I \in S} \bigotimes_{l \in I} \rho(l) \qquad (50)$$
A query q is a finite set of algebraic literals and atoms from the Herbrand base,10 $q \subseteq \mathcal{L}(F) \cup HB(F \cup BK)$. We denote the set of interpretations where the query is true by $\mathcal{I}(q)$,

$$\mathcal{I}(q) = \{I \mid I \in \mathcal{I}(F) \wedge I \cup BK \models q\} \qquad (51)$$
The label of query q is defined as the label of $\mathcal{I}(q)$,

$$A(q) = A(\mathcal{I}(q)) = \bigoplus_{I \in \mathcal{I}(q)} \bigotimes_{l \in I} \rho(l). \qquad (52)$$
As both operators are commutative and associative, the label is independent of the order of both literals and interpretations. ProbLog [21] is an instance of aProbLog with
A " R ě0 ; a ' b " a`b; a b b " a¨b; e ' " 0; e b " 1; δpf q P r0, 1s; δp f q " 1´δpf q(53)
B Subjective Logic Operators of Sum, Multiplication, and Division
Let us recall the following operators as defined in [30]. In the following, let $\omega_X = \langle b_X, d_X, u_X, a_X \rangle$ and $\omega_Y = \langle b_Y, d_Y, u_Y, a_Y \rangle$ be two subjective logic opinions.
B.1 Sum
The opinion about $X \cup Y$ (sum, $\omega_X \oplus_{SL} \omega_Y$) is defined as $\omega_{X \cup Y} = \langle b_{X \cup Y}, d_{X \cup Y}, u_{X \cup Y}, a_{X \cup Y} \rangle$, where:

• $b_{X \cup Y} = b_X + b_Y$;

• $d_{X \cup Y} = \frac{a_X (d_X - b_Y) + a_Y (d_Y - b_X)}{a_X + a_Y}$;

• $u_{X \cup Y} = \frac{a_X u_X + a_Y u_Y}{a_X + a_Y}$; and

• $a_{X \cup Y} = a_X + a_Y$.
B.2 Product
The opinion about $X \wedge Y$ (product, $\omega_X \otimes_{SL} \omega_Y$) is defined-under the assumption of independence-as $\omega_{X \wedge Y} = \langle b_{X \wedge Y}, d_{X \wedge Y}, u_{X \wedge Y}, a_{X \wedge Y} \rangle$, where:

• $b_{X \wedge Y} = b_X b_Y + \frac{(1 - a_X) a_Y b_X u_Y + a_X (1 - a_Y) u_X b_Y}{1 - a_X a_Y}$;

• $d_{X \wedge Y} = d_X + d_Y - d_X d_Y$;

• $u_{X \wedge Y} = u_X u_Y + \frac{(1 - a_Y) b_X u_Y + (1 - a_X) u_X b_Y}{1 - a_X a_Y}$; and

• $a_{X \wedge Y} = a_X a_Y$.
B.3 Division
The opinion about the division of X by Y (division, $\omega_X \oslash_{SL} \omega_Y$) is defined as $\omega_{X/Y} = \langle b_{X/Y}, d_{X/Y}, u_{X/Y}, a_{X/Y} \rangle$, where:

• $b_{X/Y} = \frac{a_Y (b_X + a_X u_X)}{(a_Y - a_X)(b_Y + a_Y u_Y)} - \frac{a_X (1 - d_X)}{(a_Y - a_X)(1 - d_Y)}$;

• $d_{X/Y} = \frac{d_X - d_Y}{1 - d_Y}$;

• $u_{X/Y} = \frac{a_Y (1 - d_X)}{(a_Y - a_X)(1 - d_Y)} - \frac{a_Y (b_X + a_X u_X)}{(a_Y - a_X)(b_Y + a_Y u_Y)}$; and

• $a_{X/Y} = a_X / a_Y$,

subject to:

• $a_X < a_Y$; $d_X \geq d_Y$;

• $b_X \geq \frac{a_X (1 - a_Y)(1 - d_X) b_Y}{(1 - a_X) a_Y (1 - d_Y)}$; and

• $u_X \geq \frac{(1 - a_Y)(1 - d_X) u_Y}{(1 - a_X)(1 - d_Y)}$.
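Below is a direct Python transcription of the sum (B.1) and product (B.2) operators above, with opinions represented as (b, d, u, a) tuples; the function names are ours. Division is omitted since it is only defined under the constraints just listed.

def sl_sum(wx, wy):
    """Subjective logic sum (Sec. B.1) of two opinions (b, d, u, a)."""
    bx, dx, ux, ax = wx
    by, dy, uy, ay = wy
    return (bx + by,
            (ax * (dx - by) + ay * (dy - bx)) / (ax + ay),
            (ax * ux + ay * uy) / (ax + ay),
            ax + ay)

def sl_product(wx, wy):
    """Subjective logic product (Sec. B.2), assuming independence."""
    bx, dx, ux, ax = wx
    by, dy, uy, ay = wy
    denom = 1.0 - ax * ay
    b = bx * by + ((1 - ax) * ay * bx * uy + ax * (1 - ay) * ux * by) / denom
    d = dx + dy - dx * dy
    u = ux * uy + ((1 - ay) * bx * uy + (1 - ax) * ux * by) / denom
    return (b, d, u, ax * ay)

wx = (0.6, 0.3, 0.1, 0.5)
wy = (0.4, 0.4, 0.2, 0.5)
print(sl_product(wx, wy))   # product opinion as a (b, d, u, a) tuple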
C Independence of posterior distributions when learning from complete observations

Let us instantiate AMC using probabilities as labels (cf. (5)) and let us consider a propositional logic theory over M variables. We can thus re-write (1) as:
$$p(T) = \sum_{I \in \mathcal{M}(T)} p(I \in \mathcal{M}(T)), \qquad (54)$$

that is, the probability of a theory is a function of the probabilities of its interpretations $p(I \in \mathcal{M}(T))$. Since the variables for which we are learning probabilities are independent, for an interpretation assigning the literals $l_1, \ldots, l_M$ we have

$$p(l_1, \ldots, l_M) = \prod_{m=1}^{M} p(l_m). \qquad (55)$$
Let us assume that we want to learn such probabilities from a dataset $D = (x_1, \ldots, x_N)^T$. By (55), we can re-write the likelihood (9) as:

$$p(D \mid p_x) = \prod_{i=1}^{N} \prod_{m=1}^{M} p_{x_m}^{x_{i,m}} (1 - p_{x_m})^{1 - x_{i,m}}$$
Assuming a uniform prior, and letting $r_m$ be the number of observations with $x_m = 1$ and $s_m$ the number of observations with $x_m = 0$, we can thus compute the posterior as:
$$p(p_x \mid D, \alpha^0) \propto p(D \mid p_x) \cdot p(p_x \mid \alpha^0) \propto \prod_{m=1}^{M} p_{x_m}^{r_m + \alpha^0_{x_m} - 1} (1 - p_{x_m})^{s_m + \alpha^0_{\bar{x}_m} - 1} \qquad (58)$$
which, in turn, shows that independence is maintained also for the posterior beta distributions.
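A small Python sketch of this computation (function and variable names are ours): with complete observations, each variable's posterior is obtained independently from its own counts, matching (58):

from collections import Counter

def beta_posteriors(D, variables, alpha0=1.0):
    """Per-variable Beta posteriors under a uniform prior, eq. (58):
    p_xm | D ~ Beta(r_m + alpha0, s_m + alpha0), with r_m / s_m the counts
    of x_m = 1 / x_m = 0. Complete observations keep the posteriors
    independent across variables."""
    posts = {}
    for m in variables:
        counts = Counter(x[m] for x in D)
        posts[m] = (counts[1] + alpha0, counts[0] + alpha0)
    return posts

D = [{"burglary": 0}, {"burglary": 0}, {"burglary": 1}]
print(beta_posteriors(D, ["burglary"]))  # {'burglary': (2.0, 3.0)}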
D Bayesian networks derived from aProbLog programs

Figure 12 depicts the Bayesian networks that can be derived from the three circuits considered in the experiments described in Section 6.2.

Figure 12: Network structures tested where the exterior gray variables are directly observed and the remaining are queried: (a) Net1, a tree; (b) Net2, singly connected network with one node having two parents; (c) Net3, singly connected network with one node having three parents.
[Algorithm 3, Eval(N, ⊕, ⊗, e⊕, e⊗, ρ): only fragments of the pseudocode (its covariance-propagation steps over the nodes of the shadowed circuit) survive extraction and are omitted here. The surviving commentary reads: "... r, with r being the root of the circuit (r := root(N̂^A) at line 46). This shows how critical it is to keep track of the non-zero covariances where they exist."]

Figure 3: Variation on the circuit represented in [caption truncated in extraction].

Figure 5: Resulting distribution of probabilities for our running example using Algorithm 3 (solid line), and a Monte Carlo simulation with 100,000 samples grouped in 25 bins and then interpolated with a cubic polynomial (dashed line).

Listing 3: Smoker and Friends aProbLog code used to derive the subjective opinion labels observing N_ins instantiations of all the variables.

Figure 6: Actual versus desired significance of bounds derived from the uncertainty for Smokers & Friends with: (a) N_ins = 10; (b) N_ins = 50; and (c) N_ins = 100. Best closest to the diagonal. Monte Carlo approach has been run over 100 samples.

Figure 7: Distribution of execution time for running the different algorithms for Smokers & Friends with: (a) N_ins = 10; (b) N_ins = 50; and (c) N_ins = 100. Best lowest. Monte Carlo approach has been run over 100 samples.

Figure 8: Correlation of Dirichlet strengths between runs of the Monte Carlo approach varying the number of samples and the golden standard (i.e. a Monte Carlo run with 10,000 samples), as well as between the proposed approach and the golden standard with cubic interpolation-which is independent of the number of samples used in Monte Carlo-for Smokers & Friends with: (a) N_ins = 10; (b) N_ins = 50; and (c) N_ins = 100.

Figure 9: Actual versus desired significance of bounds derived from the uncertainty for: (a) Net1 with N_ins = 10; (b) Net1 with N_ins = 50; (c) Net1 with N_ins = 100; (d) Net2 with N_ins = 10; (e) Net2 with N_ins = 50; (f) Net2 with N_ins = 100; (g) Net3 with N_ins = 10; (h) Net3 with N_ins = 50; (i) Net3 with N_ins = 100. Best closest to the diagonal. Monte Carlo approach has been run over 100 samples.
Figure 10: Distribution of computational time for running the different algorithms for: (a) Net1 with N_ins = 10; (b) Net1 with N_ins = 50; (c) Net1 with N_ins = 100; (d) Net2 with N_ins = 10; (e) Net2 with N_ins = 50; (f) Net2 with N_ins = 100; (g) Net3 with N_ins = 10; (h) Net3 with N_ins = 50; (i) Net3 with N_ins = 100. Monte Carlo approach has been run over 100 samples.

Figure 11: Correlation of Dirichlet strengths between runs of the Monte Carlo approach varying the number of samples and the golden standard (i.e. a Monte Carlo run with 10,000 samples), as well as between the proposed approach and the golden standard with cubic interpolation-which is independent of the number of samples used in Monte Carlo-for: (a) Net1 with N_ins = 10; (b) Net1 with N_ins = 50; (c) Net1 with N_ins = 100; (d) Net2 with N_ins = 10; (e) Net2 with N_ins = 50; (f) Net2 with N_ins = 100; (g) Net3 with N_ins = 10; (h) Net3 with N_ins = 50; (i) Net3 with N_ins = 100.
Table 2: Covariance matrix for the associative table (Tab. 1) under the assumption that all the beta-distributed random variables are independent of each other.
Algorithm 4 Shadowing the circuit N^A.

1:  procedure ShadowCircuit(N^A)
2:      N̂^A := N^A
3:      links := stack()
4:      for p ∈ parents(N^A, qnode(N^A)) do
5:          push(links, ⟨qnode(N^A), p⟩)
6:      end for
7:      while ¬empty(links) do
8:          ⟨c, p⟩ := pop(links)
9:          N̂^A := N̂^A ∪ {ĉ}
10:         if p̂ ∉ N̂^A then
11:             N̂^A := N̂^A ∪ {p̂}
12:             children(N̂^A, p̂) := children(N̂^A, p)
13:         end if
14:         children(N̂^A, p̂) := (children(N̂^A, p̂) \ {c}) ∪ {ĉ}
15:         for p′ ∈ parents(N^A, p) do
16:             push(links, ⟨p, p′⟩)
17:         end for
18:     end while
19:     [remainder of the procedure truncated in extraction]
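The following compact Python sketch mirrors the procedure above, assuming the circuit is given as children/parents adjacency dictionaries and representing a shadow node as a tagged tuple ('~', n); both representation choices are ours, for illustration only.

def shadow_circuit(children, parents, qnode):
    """Duplicate every node on a path from the query node up to the root,
    rewiring each shadow parent to point at the shadow child (line 14)."""
    shadow = lambda n: ('~', n)
    ch = {n: list(cs) for n, cs in children.items()}
    links = [(qnode, p) for p in parents.get(qnode, [])]
    visited = set()
    while links:
        c, p = links.pop()                  # one edge on a path to the root
        sp = shadow(p)
        if sp not in ch:                    # first time p is shadowed:
            ch[sp] = list(ch.get(p, []))    # copy p's children
        ch[sp] = [x for x in ch[sp] if x != c] + [shadow(c)]
        if p not in visited:                # push p's own incoming edges
            visited.add(p)
            links.extend((p, pp) for pp in parents.get(p, []))
    return ch

Every original node is left untouched, and one shadow is added per node on a path from the query node to the root, mirroring the superimposition described in Section 5.2.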
Table 3: Means as computed by Algorithm 5 on our running example. In grey the shadow nodes. Values very close or equal to zero are omitted. Also, values for nodes labelled with negated variables are omitted. 7̂, i.e. the shadow of qnode(N^A), is included for illustration purposes.
Table 4: Covariances (×10⁻²) as computed by Algorithm 5 on our running example. In grey the shadow nodes. Values very close or equal to zero are omitted. Also, values for nodes labelled with negated variables are omitted. 7̂, i.e. the shadow of qnode(N^A), is included for illustration purposes.
Table 5: RMSE for the queried variables in the Friends & Smokers program: A stands for Actual, P for Predicted. Best results-also considering hidden decimals-for the actual RMSE boxed. Monte Carlo approach has been run over 100 samples.
Table 6: RMSE for the queried variables in the various networks: A stands for Actual, P for Predicted. Best results-also considering hidden decimals-for the Actual RMSE boxed. Monte Carlo approach has been run over 100 samples.
"An intelligence that, at a given instant, could comprehend all the forces by which nature is animated and the respective situation of the beings that make it up"[41, p.2].
In the case $\varphi_i$ and $\varphi_j$ are seen as events in a sample space, determinism can be equivalently rewritten as $\varphi_i \cap \varphi_j = \emptyset$ and hence $P(\varphi_i \cap \varphi_j) = 0$.
We refer readers interested in probabilistic augmentation of logical theories in general to [11].
4 Albeit ProbLog allows for rules to be annotated with probabilities: rules of the form p::h :- b are translated into h :- b,t with t a new fact of the form p::t.
Please note that (33) corrects a typo that is present in its version in [10].
CPB: Covariance-aware Probabilistic inference with beta-distributed random variables

We now propose an entirely novel approach to the AMC-conditioning problem that considers the covariances between the various distributions we are manipulating. Indeed, our approach for computing Covariance-aware Probabilistic entailment with beta-distributed random variables (CPB) is designed to satisfy the total probability theorem, and in particular to enforce that, for any beta-distributed random variables X and Y,
aProbLog[36] is the algebraic version of ProbLog that allows for arbitrary labels to be used.
In this paper we focus on a query composed of a single literal.
https://dtai.cs.kuleuven.be/problog/tutorial/basic/05_smokers.html (on 29th April 2020).
The Dirichlet strengths are inversely proportional to the epistemic uncertainty.
I.e., the set of ground atoms that can be constructed from the predicate, functor and constant symbols of the program.
Acknowledgement

This research was sponsored by the U.S. Army Research Laboratory and the U.K. Ministry of Defence under Agreement Number W911NF-16-3-0001. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Army Research Laboratory, the U.S. Government, the U.K. Ministry of Defence or the U.K. Government. The U.S. and U.K. Governments are authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon.
[Fragments of Listings 3 (Smokers & Friends) and 4 recovered from extraction; the opinion indices are only partially legible:]

ω1::stress(X) :- person(X).
ω2::influences(X,Y) :- person(X), person(Y).
smokes(X) :- stress(X).
ω3::asthma(X) :- smokes(X).
ω9::n5 :- \+n3.
ω10::n5 :- n3.
ω15::n8 :- \+n5.
ω16::n8 :- n5.
Listing 4: An example of aProbLog code that can be seen also as a Bayesian network, cf. Fig. 12a in Appendix D. e_i are randomly assigned as either True or False.
References

[1] Saleema Amershi, Dan Weld, Mihaela Vorvoreanu, Adam Fourney, Besmira Nushi, Penny Collisson, Jina Suh, Shamsi Iqbal, Paul N. Bennett, Kori Inkpen, Jaime Teevan, Ruth Kikin-Gil, and Eric Horvitz. Guidelines for human-AI interaction. In Conference on Human Factors in Computing Systems - Proceedings, pages 1-13, New York, NY, USA, May 2019. Association for Computing Machinery.
[2] R. Anderson, N. Hare, and S. Maskell. Using a Bayesian model for confidence to make decisions that consider epistemic regret. In 19th International Conference on Information Fusion, pages 264-269, 2016.
[3] Alessandro Antonucci, Alexander Karlsson, and David Sundgren. Decision making with hierarchical credal sets. In Anne Laurent, Oliver Strauss, Bernadette Bouchon-Meunier, and Ronald R. Yager, editors, Information Processing and Management of Uncertainty in Knowledge-Based Systems, pages 456-465, 2014.
[4] Fahiem Bacchus, Shannon Dalmao, and Toniann Pitassi. Solving #SAT and Bayesian inference with backtracking search. Journal of Artificial Intelligence Research, 34:391-442, January 2009.
[5] Gagan Bansal, Besmira Nushi, Ece Kamar, Walter Lasecki, Dan Weld, and Eric Horvitz. Beyond accuracy: The role of mental models in human-AI team performance. In HCOMP. AAAI, October 2019.
[6] Gagan Bansal, Besmira Nushi, Ece Kamar, Daniel S. Weld, Walter S. Lasecki, and Eric Horvitz. Updates in human-AI teams: Understanding and addressing the performance/compatibility tradeoff. In AAAI, pages 2429-2437, 2019.
[7] John S. Baras and George Theodorakopoulos. Path problems in networks. Synthesis Lectures on Communication Networks, 3:1-77, January 2010.
[8] Elena Bellodi and Fabrizio Riguzzi. Expectation maximization over binary decision diagrams for probabilistic logic programs. Intelligent Data Analysis, 17(2):343-363, March 2013.
[9] Ivan Bratko. Prolog Programming for Artificial Intelligence. Addison Wesley, 2001.
[10] Federico Cerutti, Lance M. Kaplan, Angelika Kimmig, and Murat Sensoy. Probabilistic logic programming with beta-distributed random variables. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 7769-7776. AAAI Press, 2019.
[11] Federico Cerutti and Matthias Thimm. A general approach to reasoning with probabilities. International Journal of Approximate Reasoning, 111:35-50, 2019.
[12] Mark Chavira and Adnan Darwiche. On probabilistic inference by weighted model counting. Artificial Intelligence, 172(6):772-799, 2008.
[13] Arthur Choi and Adnan Darwiche. Dynamic minimization of Sentential Decision Diagrams. In Proceedings of the 27th AAAI Conference on Artificial Intelligence, AAAI'13, pages 187-194. AAAI Press, 2013.
[14] Adnan Darwiche. New advances in compiling CNF to Decomposable Negation Normal Form. In Proceedings of the 16th European Conference on Artificial Intelligence, ECAI'04, pages 318-322. IOS Press, 2004.
[15] Adnan Darwiche. SDD: A new canonical representation of propositional knowledge bases. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, IJCAI'11, pages 819-826. AAAI Press, 2011.
[16] Adnan Darwiche and Pierre Marquis. A knowledge compilation map. Journal of Artificial Intelligence Research, 17(1):229-264, September 2002.
[17] Luc De Raedt, Angelika Kimmig, and Hannu Toivonen. ProbLog: A probabilistic Prolog and its application in link discovery. In Proceedings of the 20th International Joint Conference on Artificial Intelligence, pages 2462-2467, 2007.
[18] A. P. Dempster. A generalization of Bayesian inference. Journal of the Royal Statistical Society, Series B (Methodological), 30(2):205-247, 1968.
[19] Jason Eisner. Parameter estimation for probabilistic finite-state transducers. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 1-8, Philadelphia, Pennsylvania, USA, July 2002. Association for Computational Linguistics.
[20] Jason Eisner, Eric Goldlust, and Noah A. Smith. Compiling comp ling: Practical weighted dynamic programming and the Dyna language. In Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing, HLT'05, pages 281-290, 2005.
[21] Daan Fierens, Guy Van den Broeck, Joris Renkens, Dimitar Shterionov, Bernd Gutmann, Ingo Thon, Gerda Janssens, and Luc De Raedt. Inference and learning in probabilistic logic programs using weighted Boolean formulas. Theory and Practice of Logic Programming, 15(03):358-401, May 2015.
[22] Tal Friedman and Guy Van den Broeck. Approximate knowledge compilation by online collapsed importance sampling. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 8024-8034. Curran Associates, Inc., 2018.
[23] Robert Gens and Pedro Domingos. Learning the structure of sum-product networks. In Sanjoy Dasgupta and David McAllester, editors, 30th International Conference on Machine Learning, ICML 2013, volume 28 of Proceedings of Machine Learning Research, pages 1910-1917, Atlanta, Georgia, USA, 2013. PMLR.
[24] Joshua Goodman. Semiring parsing. Computational Linguistics, 25(4):573-606, 1999.
[25] Haym Benaroya, Seon Mi Han, and Mark Nagurka. Probability Models in Engineering and Science. CRC Press, 2005.
[26] Stephen C. Hora. Aleatory and epistemic uncertainty in probability elicitation with an example from hazardous waste management. Reliability Engineering & System Safety, 54(2):217-223, 1996.
[27] Eyke Hüllermeier and Willem Waegeman. Aleatoric and epistemic uncertainty in machine learning: A tutorial introduction, 2019.
[28] Magdalena Ivanovska, Audun Jøsang, Lance Kaplan, and Francesco Sambo. Subjective networks: Perspectives and challenges. In Proc. of the 4th International Workshop on Graph Structures for Knowledge Representation and Reasoning, pages 107-124, Buenos Aires, Argentina, 2015.
[29] Priyank Jaini, Abdullah Rashwan, Han Zhao, Yue Liu, Ershad Banijamali, Zhitang Chen, and Pascal Poupart. Online algorithms for sum-product networks with continuous variables. In Conference on Probabilistic Graphical Models, pages 228-239, 2016.
[30] Audun Jøsang. Subjective Logic: A Formalism for Reasoning Under Uncertainty. Springer, 2016.
[31] Audun Jøsang, Ross Hayward, and Simon Pope. Trust network analysis with subjective logic. In Proceedings of the 29th Australasian Computer Science Conference, Volume 48, pages 85-94, 2006.
[32] Lance Kaplan and Magdalena Ivanovska. Efficient subjective Bayesian network belief propagation for trees. In 19th International Conference on Information Fusion, pages 1300-1307, 2016.
[33] Lance Kaplan and Magdalena Ivanovska. Efficient belief propagation in second-order Bayesian networks for singly-connected graphs. International Journal of Approximate Reasoning, 93:132-152, 2018.
[34] Lance Kaplan, Magdalena Ivanovska, Kumar Vijay Mishra, Federico Cerutti, and Murat Sensoy. Second-order inference in uncertain Bayesian networks. To be submitted to the International Journal of Artificial Intelligence, 2020.
[35] Alexander Karlsson, Ronnie Johansson, and Sten F. Andler. An empirical comparison of Bayesian and credal networks for dependable high-level information fusion. In Intl. Conf. on Information Fusion (FUSION), pages 1-8, 2008.
[36] Angelika Kimmig, Guy Van den Broeck, and Luc De Raedt. An algebraic Prolog for reasoning about possible worlds. In Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence, pages 209-214, 2011.
[37] Angelika Kimmig, Guy Van den Broeck, and Luc De Raedt. Algebraic model counting. Journal of Applied Logic, 22(C):46-62, July 2017.
[38] Doga Kisa, Guy Van den Broeck, Arthur Choi, and Adnan Darwiche. Probabilistic Sentential Decision Diagrams. In Proceedings of the Fourteenth International Conference on Principles of Knowledge Representation and Reasoning, KR'14, pages 558-567. AAAI Press, 2014.
[39] Rafal Kocielnik, Saleema Amershi, and Paul N. Bennett. Will you accept an imperfect AI? Exploring designs for adjusting end-user expectations of AI systems. In Conference on Human Factors in Computing Systems - Proceedings, pages 1-14, New York, NY, USA, May 2019. Association for Computing Machinery.
[40] Robert A. Kowalski. The early years of logic programming. Communications of the ACM, 31(1):38-43, 1988.
[41] Pierre Simon Laplace. A Philosophical Essay on Probabilities. Springer, 1825. Translated by Andrew I. Dale, published in 1995.
[42] Yitao Liang, Jessa Bekker, and Guy Van den Broeck. Learning the structure of probabilistic sentential decision diagrams. In Uncertainty in Artificial Intelligence - Proceedings of the 33rd Conference, UAI 2017, 2017.
[43] Magnus Moglia, Ashok K. Sharma, and Shiroma Maheepala. Multi-criteria decision assessments using subjective logic: Methodology and the case of urban water strategies. Journal of Hydrology, 452-453:180-189, 2012.
[44] Umut Oztok and Adnan Darwiche. A top-down compiler for Sentential Decision Diagrams. In Proceedings of the 24th International Conference on Artificial Intelligence, IJCAI'15, pages 3141-3148. AAAI Press, 2015.
[45] Judea Pearl. Fusion, propagation, and structuring in belief networks. Artificial Intelligence, 29(3):241-288, 1986.
[46] David Poole. Abducing through negation as failure: Stable models within the independent choice logic. The Journal of Logic Programming, 44(1):5-35, 2000.
[47] Abdullah Rashwan, Han Zhao, and Pascal Poupart. Online and distributed Bayesian moment matching for parameter learning in sum-product networks. In Artificial Intelligence and Statistics, pages 1469-1477, 2016.
[48] Amirmohammad Rooshenas and Daniel Lowd. Learning sum-product networks with direct and indirect variable interactions. In Proceedings of the 31st International Conference on Machine Learning, ICML'14, pages I-710-I-718. JMLR.org, 2014.
[49] Tian Sang, Paul Beame, and Henry Kautz. Performing Bayesian inference by weighted model counting. In Proceedings of the 20th National Conference on Artificial Intelligence, Volume 1, pages 475-481, 2005.
[50] Taisuke Sato. A statistical learning method for logic programs with distribution semantics. In Proceedings of the 12th International Conference on Logic Programming (ICLP-95), 1995.
[51] Taisuke Sato and Yoshitaka Kameya. Parameter learning of logic programs for symbolic-statistical modeling. Journal of Artificial Intelligence Research, 15(1):391-454, December 2001.
[52] Murat Sensoy, Lance Kaplan, and Melih Kandemir. Evidential deep learning to quantify classification uncertainty. In 32nd Conference on Neural Information Processing Systems (NIPS 2018), 2018.
[53] P. Smets. Belief functions: The disjunctive rule of combination and the generalized Bayesian theorem. International Journal of Approximate Reasoning, 9:1-35, 1993.
[54] Martin Trapp, Robert Peharz, Hong Ge, Franz Pernkopf, and Zoubin Ghahramani. Bayesian learning of sum-product networks. In Advances in Neural Information Processing Systems, pages 6344-6355, 2019.
[55] Tim Van Allen, Ajit Singh, Russell Greiner, and Peter Hooper. Quantifying the uncertainty of a belief net response: Bayesian error-bars for belief net inference. Artificial Intelligence, 172(4):483-513, 2008.
[56] Antonio Vergari, Nicola Di Mauro, and Floriana Esposito. Simplifying, regularizing and strengthening sum-product network structure learning. In Annalisa Appice, Pedro Pereira Rodrigues, Vítor Santos Costa, João Gama, Alípio Jorge, and Carlos Soares, editors, Machine Learning and Knowledge Discovery in Databases, pages 343-358, Cham, 2015. Springer International Publishing.
[57] Antonio Vergari, Alejandro Molina, Robert Peharz, Zoubin Ghahramani, Kristian Kersting, and Isabel Valera. Automatic Bayesian density analysis. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 5207-5215, 2019.
[58] John Von Neumann and Oskar Morgenstern. Theory of Games and Economic Behavior (commemorative edition). Princeton University Press, 2007.
[59] Joachim von zur Gathen. Algebraic complexity theory. Annual Review of Computer Science, 3(1):317-348, June 1988.
[60] Jingyi Xu, Zilu Zhang, Tal Friedman, Yitao Liang, and Guy Van den Broeck. A semantic loss function for deep learning with symbolic knowledge. In 35th International Conference on Machine Learning, ICML 2018, pages 8752-8760, 2018.
[61] M. Zaffalon and E. Fagiuoli. 2U: An exact interval propagation algorithm for polytrees with binary variables. Artificial Intelligence, 106(1):77-107, 1998.
[62] Han Zhao, Tameem Adel, Geoff Gordon, and Brandon Amos. Collapsed variational inference for sum-product networks. In International Conference on Machine Learning, pages 1310-1318, 2016.
[63] Han Zhao, Pascal Poupart, and Geoff Gordon. A unified approach for learning the parameters of sum-product networks. In Proceedings of the 30th International Conference on Neural Information Processing Systems, NIPS'16, pages 433-441, Red Hook, NY, USA, 2016. Curran Associates Inc.
[
"Reverse Engineering Self-Supervised Learning",
"Reverse Engineering Self-Supervised Learning"
] | [
"Ido Ben-Shaul [email protected] ",
"Ravid Shwartz-Ziv [email protected] ",
"Tomer Galanti [email protected] ",
"Shai Dekel [email protected] ",
"Yann Lecun ",
"\nDepartment of Applied Mathematics\nDepartment of Applied Mathematics\nTel-Aviv University & eBay Research\nNew York University\nMassachusetts Institute of Technology Cambridge\nMAUSA\n",
"\nTel-Aviv University\nNew York University & Meta AI\nFAIR\n"
] | [
"Department of Applied Mathematics\nDepartment of Applied Mathematics\nTel-Aviv University & eBay Research\nNew York University\nMassachusetts Institute of Technology Cambridge\nMAUSA",
"Tel-Aviv University\nNew York University & Meta AI\nFAIR"
Self-supervised learning (SSL) is a powerful tool in machine learning, but understanding the learned representations and their underlying mechanisms remains a challenge. This paper presents an in-depth empirical analysis of SSL-trained representations, encompassing diverse models, architectures, and hyperparameters. Our study reveals an intriguing aspect of the SSL training process: it inherently facilitates the clustering of samples with respect to semantic labels, which is surprisingly driven by the SSL objective's regularization term. This clustering process not only enhances downstream classification but also compresses the data information. Furthermore, we establish that SSL-trained representations align more closely with semantic classes rather than random classes. Remarkably, we show that learned representations align with semantic classes across various hierarchical levels, and this alignment increases during training and when moving deeper into the network. Our findings provide valuable insights into SSL's representation learning mechanisms and their impact on performance across different sets of classes. | null | [
"https://export.arxiv.org/pdf/2305.15614v2.pdf"
] | 258,887,664 | 2305.15614 | a2b8ff257658b8291deb9e40ec1c164c8fefeb06 |
Reverse Engineering Self-Supervised Learning
Ido Ben-Shaul [email protected]
Ravid Shwartz-Ziv [email protected]
Tomer Galanti [email protected]
Shai Dekel [email protected]
Yann Lecun
Department of Applied Mathematics
Department of Applied Mathematics
Tel-Aviv University & eBay Research
New York University
Massachusetts Institute of Technology Cambridge
MAUSA
Tel-Aviv University
New York University & Meta AI
FAIR
Reverse Engineering Self-Supervised Learning
Self-supervised learning (SSL) is a powerful tool in machine learning, but understanding the learned representations and their underlying mechanisms remains a challenge. This paper presents an in-depth empirical analysis of SSL-trained representations, encompassing diverse models, architectures, and hyperparameters. Our study reveals an intriguing aspect of the SSL training process: it inherently facilitates the clustering of samples with respect to semantic labels, which is surprisingly driven by the SSL objective's regularization term. This clustering process not only enhances downstream classification but also compresses the data information. Furthermore, we establish that SSL-trained representations align more closely with semantic classes rather than random classes. Remarkably, we show that learned representations align with semantic classes across various hierarchical levels, and this alignment increases during training and when moving deeper into the network. Our findings provide valuable insights into SSL's representation learning mechanisms and their impact on performance across different sets of classes.
Introduction
Self-supervised learning (SSL) [Bromley et al., 1993] has made significant progress over the last few years, almost reaching the performance of supervised baselines on many downstream tasks [Larsson et al., 2016, Bachman et al., 2019, Gidaris et al., 2018, Misra and van der Maaten, 2019, Grill et al., 2020, Shwartz-Ziv et al., 2022b, Chen et al., 2020b, He et al., 2020, Zbontar et al., 2021, Chen and He, 2021]. However, understanding the learned representations and their underlying mechanisms remains a persistent challenge due to the complexity of the models and the lack of labeled training data [Shwartz-Ziv et al., 2022a]. Moreover, the pretext tasks used in self-supervision frequently lack direct relevance to the specific downstream tasks, further complicating the interpretation of the learned representations.
In supervised classification, however, the structure of the learned representations is often very simple. For instance, a recent line of work [Papyan et al., 2020, Han et al., 2022] has identified a universal phenomenon called neural collapse, which occurs at the terminal stages of training. Neural collapse reveals several properties of the top-layer embeddings, such as mapping samples from the same class to the same point, maximizing the margin between the classes, and certain dualities between activations and the top-layer weight matrix. Later studies [Ben-Shaul and Dekel, 2022, Galanti et al., 2022a, Galanti et al., 2022, Rangamani et al., 2023, Tirer and Bruna, 2022] characterized related properties at intermediate layers of the network.
Compared to traditional classification tasks that aim to accurately categorize samples into specific classes, modern SSL algorithms typically minimize loss functions that combine two essential components: one for clustering augmented samples (invariance constraint) and another for preventing representation collapse (regularization constraint). For example, contrastive learning methods [Chen et al., 2020a, Mnih and Kavukcuoglu, 2013, Misra and Maaten, 2020] train representations to be indistinguishable for different augmentations of the same sample and at the same time to distinguish augmentations of different samples. On the other hand, non-contrastive approaches [Bardes et al., 2022, Zbontar et al., 2021, Grill et al., 2020] use regularizers to avoid representation collapse.
Contributions. In this paper, we provide an in-depth analysis of representation learning with SSL through a series of carefully designed experiments. Our investigation sheds light on the clustering process that occurs during training. Specifically, our findings reveal that augmented samples exhibit highly clustered behavior, forming centroids around the mean embedding of augmented samples sharing the same original image. More surprisingly, we observe that clustering also occurs with respect to semantic labels, despite the absence of explicit information about the target task. This demonstrates the capability of SSL to capture and group samples based on semantic similarities.
Our main contributions are summarized as follows:
• Clustering: We investigate the clustering properties of SSL-trained representations. In Figure 2, we show that akin to supervised classification [Papyan et al., 2020], SSL-trained representation functions exhibit a centroid-like geometric structure, where feature embeddings of augmented samples belonging to the same image tend to cluster around their respective means. Interestingly, at later stages of training, a similar trend appears with respect to semantic classes.
• The role of regularization: In Figure 3 (left) we show that the accuracy of extracting (i.e., linear probing) semantic classes from SSL-trained representations continuously improves even after the model accurately clusters the augmented training samples based on their sample identity. As can be seen, the regularization constraint plays a key role in inducing the clustering of data into semantic attributes at the embedding space.
• Impact of randomness: We argue that SSL-trained representation functions are highly correlated with semantic classes. To support this claim, we study how the degree of randomness in the targets affects the ability to learn them from a pretrained representation. As we show in Figure 4, the alignment between the targets and the learned representations substantially improves as the targets become less random and more semantically meaningful.
• Learning hierarchic representations: We study how SSL algorithms learn representations that exhibit hierarchic classes. We demonstrate that throughout the training process, the ability to cluster and distinguish between samples with respect to different levels of semantic targets continually improves at all layers of the model. We consider several levels of hierarchy, such as the sample level (the sample identity), semantic classes (such as different types of animals and vehicles), and high-level classes (e.g., animals, and vehicles).
Furthermore, we analyze the ability of each layer to capture distinct semantic classes. As shown in Figure 5, we observe that the clustering and separation ability of each layer improves as we move deeper into the network. This phenomenon closely resembles the behavior observed in supervised learning [Ben-Shaul and Dekel, 2022, Galanti et al., 2022, Rangamani et al., 2023, Tirer and Bruna, 2022], despite the fact that SSL-trained models are trained without direct access to the semantic targets.
In Section 4, we investigate various clustering properties in the top layer of SSL-trained networks.
In Section 5, we conduct several experiments to understand what kinds of functions are encoded in SSL-trained neural networks and can be extracted from their features. In Section 6, we study how SSL algorithms learn different hierarchies and how this occurs at intermediate layers.
2 Background and Related Work

2.1 Self-Supervised Learning

SSL is a family of techniques that leverages unlabeled data to learn representation functions that can be easily adapted to new downstream tasks. Unlike supervised learning, which relies on labeled data, SSL employs self-defined signals to establish a proxy objective. The model is pre-trained using this proxy objective and then fine-tuned on the supervised downstream task. However, a major challenge of this approach is to prevent 'representation collapse', where the model maps all inputs to the same output [Jing et al., 2022].
Several methods have been proposed to address this challenge. Contrastive learning methods such as SimCLR [Chen et al., 2020a] and its InfoNCE criterion learn representations by maximizing the agreement between positive pairs (augmentations of the same sample) and minimizing the agreement between negative pairs (augmentations of different samples). In contrast, non-contrastive learning methods [Bardes et al., 2022, Zbontar et al., 2021, Grill et al., 2020] replace the dependence on contrasting positive and negative samples by applying certain regularization techniques to prevent representation collapse. For instance, some methods use stop-gradients and extra predictors to avoid collapse [Chen and He, 2021, Grill et al., 2020], while others use additional clustering constraints [Caron et al., 2020].
Variance-Invariance-Covariance Regularization (VICReg). A widely used method for SSL training [Bardes et al., 2022], VICReg generates representations for both an input and its augmented counterpart and aims to optimize two key aspects:
• Invariance loss - The mean-squared Euclidean distance between pairs of embeddings, serving to ensure consistency between the representations of the original and augmented inputs.
• Regularization - Comprising two elements: the variance loss, which promotes increased variance across the batch dimension through a hinge loss restraining the standard deviation, and the covariance loss, which penalizes off-diagonal coefficients of the covariance matrix of the embeddings to foster decorrelation among features.
This leads to the VICReg loss function:

$$\mathcal{L}(f) \;=\; \underbrace{\lambda\, s(Z, Z')}_{\text{Invariance}} \;+\; \underbrace{\mu\,[v(Z) + v(Z')] \;+\; \nu\,[c(Z) + c(Z')]}_{\text{Regularization}}, \tag{1}$$

where $\mathcal{L}(f)$ is minimized over batches of samples $I$; $s(Z, Z')$, $v(Z)$, and $c(Z)$ denote the invariance, variance, and covariance losses, respectively, and $\lambda, \mu, \nu > 0$ are hyperparameters controlling the balance between these loss components.
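For concreteness, a minimal PyTorch sketch of this objective is given below. The batch shape (B, D), the small ε added inside the standard deviation, and the hinge target of 1 follow common reference implementations; they are assumptions of this sketch, not excerpts from our training code.

```python
import torch
import torch.nn.functional as F

def vicreg_loss(z, z_prime, lam=25.0, mu=25.0, nu=1.0, gamma=1.0, eps=1e-4):
    """Sketch of the VICReg objective for two embedding batches of shape (B, D)."""
    B, D = z.shape

    # Invariance: mean-squared distance between paired embeddings.
    invariance = F.mse_loss(z, z_prime)

    def variance(t):
        # Hinge loss keeping each dimension's std above gamma across the batch.
        std = torch.sqrt(t.var(dim=0) + eps)
        return F.relu(gamma - std).mean()

    def covariance(t):
        # Penalize off-diagonal entries of the D x D covariance matrix.
        t = t - t.mean(dim=0)
        cov = (t.T @ t) / (B - 1)
        off_diag = cov - torch.diag(torch.diagonal(cov))
        return off_diag.pow(2).sum() / D

    regularization = (mu * (variance(z) + variance(z_prime))
                      + nu * (covariance(z) + covariance(z_prime)))
    return lam * invariance + regularization
```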
A Simple Framework for Contrastive Learning of Visual Representations (SimCLR). The SimCLR method [Chen et al., 2020a], another popular approach, minimizes the contrastive loss function [Hadsell et al., 2006]. Given two batches of embeddings Z, Z ′ , we can also decompose the objective into an invariance term and a regularization term:
$$\mathcal{L}(f) = -\frac{1}{B}\sum_{i=1}^{B}\left[\underbrace{\operatorname{sim}(Z_i, Z'_i)/\tau}_{\text{Invariance}} \;-\; \underbrace{\log \sum_{j \neq i} \exp\!\big(\operatorname{sim}(Z_i, Z'_j)/\tau\big)}_{\text{Regularization}}\right],$$

where $\operatorname{sim}(a, b) := \frac{a^{\top} b}{\lVert a \rVert \cdot \lVert b \rVert}$ is the cosine similarity between vectors $a$ and $b$, and $\tau > 0$ is a 'temperature' parameter that controls the sharpness of the distribution. Intuitively, minimizing the loss encourages the representations $Z_i$ and $Z'_i$ of the same input sample to be similar (i.e., have a high cosine similarity) while pushing apart the representations $Z_i$ and $Z'_j$ of different samples (i.e., have a low cosine similarity) for all $i, j \in [B]$.
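A simplified PyTorch sketch of this loss follows. As in many practical implementations, the denominator below also includes the positive pair (the sum runs over all j, not only j ≠ i), negatives are drawn only from the second view, and the default τ is an assumption of the sketch.

```python
import torch
import torch.nn.functional as F

def simclr_loss(z, z_prime, tau=0.5):
    """Contrastive (InfoNCE-style) loss for two paired batches of shape (B, D)."""
    z = F.normalize(z, dim=1)           # row-normalize so dot products are cosines
    z_prime = F.normalize(z_prime, dim=1)
    logits = z @ z_prime.T / tau        # (B, B): entry (i, j) = sim(Z_i, Z'_j) / tau
    targets = torch.arange(z.size(0), device=z.device)
    # Cross-entropy pulls the diagonal (positive pairs) together and pushes
    # the off-diagonal (negative) pairs apart.
    return F.cross_entropy(logits, targets)
```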
2.2 Neural Collapse and Clustering
In a recent paper [Papyan et al., 2020], it was demonstrated that deep networks trained for classification tasks exhibit a notable behavior: the top-layer feature embeddings of training samples of the same class tend to concentrate around their respective class means, which are maximally distant from each other. This behavior is generally regarded as desirable, since max-margin classifiers tend to exhibit better generalization guarantees (e.g., [Anthony and Bartlett, 2009, Bartlett et al., 2017, Golowich et al., 2020]) and clustered embedding spaces are useful for few-shot transfer learning (e.g., [Goldblum et al., 2020, Galanti et al., 2022a]).
In this paper, we aim to explore whether similar clustering tendencies occur in SSL-trained representation functions at both the sample level (with respect to augmentations) and at the semantic level. To address this question, we first introduce some key aspects of neural collapse. Specifically, we will focus on class-feature variance collapse and nearest class-center (NCC) separability.
To measure these properties, we use the following metrics. Suppose we have a representation function $f : \mathbb{R}^d \to \mathbb{R}^p$ and a collection of unlabeled datasets $S_1, \dots, S_C \subset \mathbb{R}^d$ (each corresponding to a different class). Feature-variability collapse measures to what extent the samples from the same class are mapped to the same vector. To quantify this property, we use the averaged class-distance normalized variance (CDNV) [Galanti et al., 2022a]

$$\operatorname{Avg}_{i \neq j}\big[V_f(S_i, S_j)\big] \;=\; \operatorname{Avg}_{i \neq j}\!\left[\frac{\operatorname{Var}_f(S_i) + \operatorname{Var}_f(S_j)}{2\,\lVert \mu_f(S_i) - \mu_f(S_j) \rVert^2}\right],$$

where $\mu_f(S)$ and $\operatorname{Var}_f(S)$ are the mean and variance of the uniform distribution $U[\{f(x) \mid x \in S\}]$.
A weaker notion of clustering is the nearest class-center (NCC) separability [Galanti et al., 2022], which measures to what extent the feature embeddings of training samples form a centroid-like geometry. The NCC separability asserts that, during training, the penultimate layer's feature embeddings of training samples can be accurately classified with the nearest class-center (NCC) classifier,

$$h(x) = \arg\min_{c \in [C]} \lVert f(x) - \mu_f(S_c) \rVert.$$
To measure this property, we compute the NCC train and test accuracies, which are simply the accuracy rates of the NCC classifier.
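The following NumPy sketch shows one way to compute both metrics from per-class feature arrays; the function and argument names are illustrative, not taken from our code.

```python
import numpy as np
from itertools import combinations

def cdnv(class_features):
    """Averaged CDNV over a list of per-class feature arrays of shape (n_c, p)."""
    means = [f.mean(axis=0) for f in class_features]
    # Var_f(S): mean squared distance of embeddings to their class mean.
    variances = [np.mean(np.sum((f - m) ** 2, axis=1))
                 for f, m in zip(class_features, means)]
    pairs = combinations(range(len(class_features)), 2)
    return np.mean([(variances[i] + variances[j])
                    / (2 * np.sum((means[i] - means[j]) ** 2))
                    for i, j in pairs])

def ncc_accuracy(train_features, test_features):
    """Accuracy of the NCC classifier h(x) = argmin_c ||f(x) - mu_f(S_c)||."""
    centers = np.stack([f.mean(axis=0) for f in train_features])   # (C, p)
    correct, total = 0, 0
    for c, f in enumerate(test_features):
        dists = np.linalg.norm(f[:, None, :] - centers[None, :, :], axis=2)
        correct += int(np.sum(dists.argmin(axis=1) == c))
        total += len(f)
    return correct / total
```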
A recent paper [Dubois et al., 2022] argued that idealized SSL-trained representations simultaneously cluster data into multiple equivalence classes. This implies that there are various ways to categorize the data that result in high NCC accuracy. However, achieving a very small CDNV for multiple different categorizations is impossible since a zero CDNV would mean that feature embeddings of two samples from the same category are identical. Consequently, an intriguing question arises: with respect to which sets of classes do the learned feature embeddings cluster?
2.3 Reverse Engineering Neural Networks
Reverse engineering of neural networks has recently garnered attention as an approach to explaining how neural networks make predictions and decisions. While substantial work has been dedicated to understanding the functionalities of intermediate layers of neural network classifiers [Elhage et al., 2021, Ben-Shaul and Dekel, 2021, Cohen et al., 2018, Shwartz-Ziv and Tishby, 2017, Shwartz-Ziv, 2022, Rangamani et al., 2023, Arora et al., 2018], characterizing the functionalities of representation functions generated by SSL algorithms remains an open problem. A major source of complexity is the reliance on a pretext task and image augmentations. For instance, to fully understand the success of SSL, it is essential to understand the relationship between the pretext task and the downstream task [Saunshi et al., 2019, von Kügelgen et al., 2021]. It is yet unclear how to properly train representations with SSL, and therefore SSL algorithms are quite complicated and include many different types of losses [Chen et al., 2020a, Zbontar et al., 2021, Bardes et al., 2022] and optimization tricks [Chen et al., 2020a, Chen and He, 2021, He et al., 2020]. Of note, it is not obvious what the differences are between representations learned with contrastive and non-contrastive methods [Park et al., 2023]. In this work, we make new strides toward understanding SSL-trained representations by investigating their clustering properties with respect to various types of classes.
3 Problem Setup
Self-supervised learning (SSL) is frequently used to pretrain models, preparing them for adaptation to new downstream tasks. This raises an essential question: How does SSL training influence the learned representation? Specifically, what underlying mechanisms are at work during this training, and which classes can be learned from these representation functions? To investigate this, we train SSL networks across various settings and analyze their behavior using different techniques. All the training details can be found in Appendix A.
Data and augmentations. Throughout all of the experiments (in the main text) we used the CIFAR-100 [Krizhevsky, 2009] image classification dataset, which contains 100 classes of semantic objects (original classes) that are also divided into 20 superclasses, with each superclass containing 5 classes (for example, the 'fish' superclass contains the 'aquarium fish', 'flatfish', 'ray', 'shark' and 'trout' categories). In order to train our models we used the image augmentation protocol introduced in SimCLR [Chen et al., 2020a]. Each SSL training session is carried out for 1000 epochs, using the SGD optimizer with momentum. Usually, SSL-trained models are tested on a variety of downstream datasets to assess their performance. To simplify our analysis and minimize the influence of potential distribution shifts, we specifically evaluate the models' performance on the CIFAR-100 dataset. Such evaluations are typical in SSL research [Bardes et al., 2022, Grill et al., 2020, Zbontar et al., 2021], which usually trains and evaluates on ILSVRC (ImageNet). Additional evaluations on various datasets (CIFAR-10 [Krizhevsky, 2009] and FOOD101 [Bossard et al., 2014]) can be found in Appendix B.2.
Backbone architecture. In all our experiments, we utilized the RES-L-H architecture as the backbone, coupled with a two-layer multi-layer perceptron (MLP) projection head. The RES-L-H architecture is a variant of the ResNet architecture [He et al., 2016] where the width H remains constant across all L residual blocks as introduced in [Galanti et al., 2022].
Linear probing. To evaluate the effectiveness of extracting a given discrete function (e.g., categories) from a representation function, we employ linear probing. In this approach, we train a linear classifier, also known as a linear probe, on top of the representation, using a set of training samples. To train the linear classifier, we used the Linear Support Vector Classification (LSVC) algorithm from the scikit-learn library [Pedregosa et al., 2011]. The "linear accuracy" is determined by measuring the performance of the trained linear classifier on a validation set.
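A minimal scikit-learn sketch of this procedure, assuming the features have already been extracted from the frozen backbone (the function name is illustrative):

```python
from sklearn.svm import LinearSVC

def linear_probe_accuracy(train_feats, train_labels, val_feats, val_labels):
    """Fit a linear probe (LSVC) on frozen features; return validation accuracy."""
    clf = LinearSVC()
    clf.fit(train_feats, train_labels)
    return clf.score(val_feats, val_labels)
```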
Sample level classification. We aim to evaluate the capacity of a given representation function to correctly classify different augmentations originating from the same image and distinguish them from augmentations of different images. To facilitate this, we constructed a new dataset specifically designed to evaluate sample-level separability.
The training dataset comprises 500 random images sourced from the CIFAR-100 training set. Each image represents a particular class and undergoes 100 different augmentations. As such, the training dataset contains 500 classes with a total of 50000 samples. For the test set, we utilize the same 500 images but apply 20 different augmentations drawn from the same distribution. This results in a test set consisting of 10000 samples. To measure the linear or NCC accuracy rates of a given representation function at the sample level, we first compute a relevant classifier using the training data and subsequently evaluate its accuracy rates on the corresponding test set.
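A sketch of this construction is given below, assuming `augment` is a stochastic transform such as the one described in Appendix A.1; the helper name and signature are illustrative.

```python
import torch

def build_sample_level_split(base_images, augment, n_views):
    """Each base image defines its own class; its views are random augmentations."""
    views, labels = [], []
    for class_id, img in enumerate(base_images):     # 500 CIFAR-100 images
        for _ in range(n_views):                     # 100 train / 20 test views
            views.append(augment(img))
            labels.append(class_id)
    return torch.stack(views), torch.tensor(labels)
```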
4 Unraveling the Clustering Process in Self-Supervised Learning
The clustering process has long played a significant role in aiding the analysis of deep learning models [Alain and Bengio, 2017, Cohen et al., 2018, Papyan et al., 2020]. To gain intuition about SSL training, in Figure 1 we present a UMAP [Sainburg et al., 2021] visualization of the network's embedding space for the training samples, both pre- and post-training, across different hierarchies.
As anticipated, the training process successfully clusters together samples at the per-sample level, mapping different augmentations of the same image (as depicted in the first row). This outcome is unsurprising given that the objective inherently encourages this behavior (through the invariance loss term). Yet, more remarkably, the training process also clusters the original 'semantic classes' from the standard CIFAR-100 dataset, even though the labels were absent during the training process. Interestingly, the high-level hierarchy (the superclasses) is also effectively clustered. This example demonstrates that while our training procedure directly encourages clustering at the sample level, the SSL-trained representations of data are additionally clustered with respect to semantic classes across different hierarchies.
To quantify this clustering process further, we train a RES-10-250 network using VICReg. We measure the NCC train accuracy, which is a clustering measure (as described in Section 2.2), both at the sample level and with respect to the original classes. Notably, the SSL-trained representation exhibits neural collapse at a sample level (with an NCC train accuracy near 1.0), yet the clustering with respect to the semantic classes is also significant (≈ 0.41 on the original targets).
As shown in Figure 2 (left), most of the clustering process with respect to the augmentations (on which the network was directly trained) occurs at the beginning of the training process and then plateaus, while the clustering with respect to semantic classes (which is not directly featured in the training objective) continues to improve throughout the training process.
A key property observed in [Papyan et al., 2020] is that the top-layer embeddings of supervised training samples gradually converge towards a centroid-like structure. In our quest to better understand the clustering properties of SSL-trained representation functions, we investigated whether something similar occurs with SSL. The NCC classifier's performance, being a linear classifier, is bounded by the performance of the best-performing linear classifier. By evaluating the ratio between the accuracy of the NCC classifier and a linear classifier trained on the same data, we can investigate data clustering at various levels of granularity. In Figure 2 (middle), we monitor this ratio for the sample-level classes and for the original targets, normalized with respect to its value at initialization. As SSL training progresses, the gap between NCC accuracy and linear accuracy narrows, indicating that augmented samples progressively cluster more based on their sample identity and semantic attributes. Furthermore, the plot shows that the sample-level ratio is initially higher, suggesting that augmented samples cluster based on their identity until they converge to centroids (the ratio between the NCC accuracy and the linear accuracy is ≥ 0.9 at epoch 100). However, as training continues, the sample-level ratio saturates while the class-level ratio continues to grow and converges to ≈ 0.75. This implies that the augmented samples are initially clustered with respect to their sample identities, and once achieved, they tend to cluster with respect to high-level semantic classes.
Implicit information compression in SSL training. Having examined the clustering process within SSL training, we next turn our attention to analyzing information compression during the learning process. As outlined in Section 2.2, effective compression often yields representations that are both beneficial and practical [Shwartz-Ziv et al., 2018, Shwartz-Ziv and Alemi, 2020]. Nevertheless, it is still largely uncharted territory whether such compression indeed occurs during the course of SSL training.
To shed light on this, we utilize the Mutual Information Neural Estimation (MINE) [Belghazi et al., 2018], a method designed to estimate the mutual information between the input and its corresponding embedded representation during the course of training. This metric serves as an effective gauge of the representation's complexity level, essentially demonstrating how much information (number of bits) it encodes.
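As a rough illustration, the sketch below shows the Donsker-Varadhan lower bound that MINE maximizes over a small statistics network T; in practice T is trained by gradient ascent on this bound, and the network architecture here is an assumption rather than the exact one we used.

```python
import math
import torch
import torch.nn as nn

class MineNet(nn.Module):
    """Statistics network T(x, z) for the MINE estimator (a sketch)."""
    def __init__(self, x_dim, z_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, z):
        return self.net(torch.cat([x, z], dim=1)).squeeze(-1)

def mine_lower_bound(T, x, z):
    """Donsker-Varadhan bound: E_P[T] - log E_{PxP}[exp(T)], maximized over T."""
    joint = T(x, z).mean()
    z_perm = z[torch.randperm(z.size(0))]   # shuffle to emulate the product of marginals
    marginal = torch.logsumexp(T(x, z_perm), dim=0) - math.log(x.size(0))
    return joint - marginal
```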
In Figure 3 (middle) we report the average mutual information computed across 5 different MINE initialization seeds. As demonstrated, the training process manifests significant compression, culminating in highly compact trained representations. This observation aligns seamlessly with our previous findings related to clustering, where we noted that more densely clustered representations correspond to more compressed information.
The role of regularization loss. To gain a deeper understanding of the factors influencing this clustering process, we study the distinct components of the SSL objective function. As we discussed in Section 2.1, the SSL objective function is composed of two terms: invariance and regularization. The invariance term's main function is to enforce similarity among the representations of augmentations of the same sample. In contrast, the regularization term helps to prevent representation collapse.
To investigate the impact of these components on the clustering process, we dissect the objective function into the invariance and regularization terms and observe their behavior during the course of training. As part of this exploration, we trained a RES-5-250 network using VICReg with µ = λ = 25 and ν = 1, as proposed in [Bardes et al., 2022]. Figure 3 (left) presents this comparison, charting the progression of loss terms and the linear test accuracy with respect to the original semantic targets. Contrary to popular belief, the invariance loss does not significantly improve during the training process. Instead, the improvement in loss (and downstream semantic accuracy) is achieved due to minimizing the regularization loss. This observation is consistent with our earlier finding that the per-sampling clustering process (which is driven by the invariance loss) saturates early in the training process.
We conclude that the majority of the SSL training process is geared towards improving the semantic accuracy and clustering of the learned representations, rather than the per-sample classification accuracy and clustering. This is consistent with the observed trend of the regularization loss improving over the course of training and the early saturation of the invariance loss.
In essence, our findings suggest that while self-supervised learning directly targets sample-level clustering, the majority of the training time is spent on orchestrating data clustering according to semantic classes across various hierarchies. This observation underscores the remarkable versatility of SSL methodologies in generating semantically meaningful representations through clustering and sheds light on its underlying mechanisms.
Comparing supervised learning and SSL clustering. As mentioned in Section 4, deep network classifiers tend to cluster training samples into centroids based on their classes. However, the learned function is truly clustering the classes only if this property holds for the test samples, which is expected to occur but to a lesser degree.
An interesting question is to what extent SSL implicitly clusters the samples according to their semantic classes, compared to the clustering induced by supervised learning. In Figure 3 (right), we report the NCC train and test accuracy rates at the end of training for different scenarios: supervised learning with and without augmentations and SSL. For all cases, we used the RES-10-250 architecture.
While the NCC train accuracy of the supervised classifier is 1.0, significantly higher than the NCC train accuracy of the SSL-trained model, the NCC test accuracy of the SSL model is slightly higher than that of the supervised model. This implies that both models exhibit a similar degree of clustering behavior with respect to the semantic classes. Interestingly, training the supervised model with augmentations slightly decreases the NCC train accuracy, yet significantly improves the NCC test accuracy.
5 Exploring Semantic Class Learning and the Impact of Randomness
Semantic classes define relationships between inputs and targets based on inherent patterns in the input. On the other hand, mappings from inputs to random targets lack discernible patterns, resulting in seemingly arbitrary connections between inputs and targets.
In this section, we study the influence of randomness on a model's proficiency in learning desired targets. To do so, we build a series of target systems characterized by varying degrees of randomness and examine their effect on the learned representations. We train a neural network classifier on the same dataset for classification and use its target predictions at different epochs as targets with varying degrees of randomness. At epoch 0, the network is random, generating deterministic yet seemingly arbitrary labels. As training advances, the functions become less random, culminating in targets that align with ground truth targets (considered non-random). We normalize the degree of randomness between 0 (non-random, end of training) and 1 (random, initialization). Utilizing this methodology, we investigate the classes SSL learns during training.
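A sketch of this target-generation procedure is shown below; `build_model` is a hypothetical constructor for the classifier (e.g., a fresh ResNet-18), and the checkpointing scheme is assumed rather than quoted from our code.

```python
import torch

@torch.no_grad()
def targets_with_randomness(checkpoint_paths, images, build_model):
    """Class predictions from classifier checkpoints saved at successive epochs:
    epoch-0 predictions are deterministic but arbitrary (randomness 1), while
    end-of-training predictions approach the ground-truth labels (randomness 0)."""
    target_sets = []
    for path in checkpoint_paths:
        model = build_model()
        model.load_state_dict(torch.load(path))
        model.eval()
        target_sets.append(model(images).argmax(dim=1))
    return target_sets
```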
Initially, our emphasis is on exploring the impact of the SSL training process on the learned targets. We train a RES-5-250 network with VICReg and employ a ResNet-18 to generate targets with different levels of randomness. Figure 4 (left) showcases the linear test accuracy for varying degrees of random targets. Each line corresponds to the accuracy at a different stage of SSL training for different levels of randomness. As we can see, the model captures classes closer to "semantic" ones (lower degrees of randomness) more effectively throughout training while not showing significant performance improvement on highly random targets. For results with additional random target generators, see Appendix B.5.
A key question in deep learning revolves around understanding the functionalities and the impact of intermediate layers on classifying different types of classes. For example, do different layers learn different kinds of classes? We explore this by evaluating the linear test accuracy of different layer representations for various degrees of target randomness at the end of the training. As shown in Figure 4 (middle), the linear test accuracy consistently improves at all layers as randomness decreases, with deeper layers outperforming across all class types and the performance gap broadening for classes closer to semantic ones.
Beyond classification accuracy, a desirable representation exhibits a high degree of clustering with respect to the targets [Papyan et al., 2020, Elsayed et al., 2018, Goldblum et al., 2020, Galanti et al., 2022a]. We assess the quality of clustering using several metrics: NCC accuracy, CDNV, average per-class variance, and average squared distance between class means [Galanti et al., 2022a]. To gauge the improvement in representation over training, we calculate the ratios of these metrics for semantic vs. random targets. Figure 4 (right) displays these ratios, indicating that the representation increasingly clusters the data with respect to semantic targets compared to random ones, as the NCC-accuracy and CDNV ratios increase during training. Interestingly, we see that the decrease in the CDNV, which is the variance divided by the squared distance, is caused solely by the increase in squared distance; the variance ratio stays fairly constant during training. This phenomenon of encouraging large margins between clusters has been shown to improve performance [Elsayed et al., 2018, Cohen et al., 2018].
6 Learning Class Hierarchies and Intermediate Layers
Previous studies have given evidence that in the context of supervised learning, intermediate layers gradually capture features at varying levels of abstraction [Alain and Bengio, 2017, Cohen et al., 2018, Papyan et al., 2020, Galanti et al., 2022, Ben-Shaul and Dekel, 2022]. The initial layers tend to focus on low-level features, while deeper layers capture more abstract features. In Section 4, we showed that SSL implicitly learns representations highly correlated with semantic attributes. Next, we investigate whether the network learns higher-level hierarchical attributes and which layers are better correlated with these attributes.
In order to measure what kinds of targets are associated with the learned representations, we trained a RES-5-250 network using VICReg. We compute the linear test accuracy (normalized by its value at initialization) with respect to three hierarchies: the sample-level, the original 100 classes, and the 20 superclasses. In Figure 2 (right), we plot these quantities computed for the three different sets of classes. We observed that, in contrast to the sample-level classes, the performance with respect to both the original classes and the superclasses significantly increased during training. The complete details are provided in Appendix A.
As a next step, we investigate the behavior of intermediate layers of SSL-trained models and their ability to capture targets of different hierarchies. To this end, we trained a RES-10-250 with VICReg. In Figure 5 (left and middle), we plot the linear test accuracy across all intermediate layers at various stages of training, measured for both the original targets and the superclasses. In Figure 5 (right) we plot the ratios between the superclasses and the original classes.
We draw several conclusions from these results. First, we observe a consistent improvement in clustering as we move from earlier to deeper layers, which becomes more prominent during the course of training. Furthermore, similar to supervised learning scenarios [Tirer and Bruna, 2022, Galanti et al., 2022, Ben-Shaul and Dekel, 2022], we find that the linear accuracy of the network improves in each layer during SSL training. Of note, we find that for the original classes, the final layers are not the optimal ones. Recent studies in SSL [Oquab et al., 2023, Radford et al., 2021] showed that the performance of different algorithms is highly sensitive to the downstream task domain. Our study extends this observation and suggests that different parts of the network may also be more suitable for certain downstream tasks and class hierarchies. In Figure 5 (right), it is evident that, in relative terms, the accuracy on the superclasses improves more than that on the original classes in the deeper layers of the network.
7 Conclusions
Representation functions that cluster data based on semantic classes are generally favored due to their ability to classify classes accurately with limited samples [Goldblum et al., 2020, Galanti et al., 2022a]. In this paper, we conduct a comprehensive empirical exploration of SSL-trained representations and their clustering properties concerning semantic classes. Our findings reveal an intriguing impact of the regularization constraint in SSL algorithms. While regularization is primarily used to prevent representation collapse, it also enhances the alignment between learned representations and semantic classes, even after the augmented training samples have been accurately clustered. We investigate the emergence of different class types during training, including targets with various hierarchies and levels of randomness, and examine their alignment at different intermediate layers.
Collectively, these results provide substantial evidence that SSL algorithms learn representations that effectively cluster based on semantic classes.
Despite the similarities observed between the supervised setting and the SSL setting in our paper, several questions remain unanswered. Foremost among them is the inquiry into why SSL algorithms learn semantic classes. Although we present compelling evidence linking this phenomenon to the regularization constraint, the explanation of how this term manages to cluster data with respect to semantic classes, and the specific types of classes being learned, remains unclear. A deeper understanding of the types of classes to which data clusters, coupled with the connection between neural collapse and transfer learning [Galanti et al., 2022a,b,c], may be helpful in understanding the transferability of SSL-trained representations to downstream tasks.
A Training Details
A.1 Data Augmentation
We follow the image augmentation protocol first introduced in SimCLR [Chen et al., 2020a] and now commonly used by similar approaches based on siamese networks [Caron et al., 2020, Grill et al., 2020, Chen et al., 2020a, Zbontar et al., 2021]. Two random crops are taken from the input image and resized to 32 × 32, followed by random horizontal flip, color jittering of brightness, contrast, saturation and hue, Gaussian blur, and random grayscale. Each crop is normalized in each color channel using the ImageNet mean and standard deviation pixel values. The following operations are performed sequentially to produce each view (a torchvision sketch follows the list):
• Random cropping with an area uniformly sampled with size ratio between 0.08 to 1.0, followed by resizing to size 32 × 32. RandomResizedCrop(32, scale=(0.08, 1.0)) in PyTorch.
• Random horizontal flip with probability 0.5.
• Color jittering of brightness, contrast, saturation and hue, with probability 0.8. ColorJitter(0.4, 0.4, 0.2, 0.1) in PyTorch.
• Grayscale with probability 0.2.
• Gaussian blur with probability 0.5 and kernel size 23.
• Solarization with probability 0.1.
• Color normalization with mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225) (ImageNet standardization).
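A torchvision composition of these operations might look as follows; the solarization threshold and the exact GaussianBlur wrapper are assumptions of this sketch, as the list above does not pin them down.

```python
from torchvision import transforms

ssl_augment = transforms.Compose([
    transforms.RandomResizedCrop(32, scale=(0.08, 1.0)),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomApply([transforms.ColorJitter(0.4, 0.4, 0.2, 0.1)], p=0.8),
    transforms.RandomGrayscale(p=0.2),
    transforms.RandomApply([transforms.GaussianBlur(kernel_size=23)], p=0.5),
    transforms.RandomSolarize(threshold=128, p=0.1),   # threshold is assumed
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.485, 0.456, 0.406),   # ImageNet standardization
                         std=(0.229, 0.224, 0.225)),
])
```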
A.2 Network Architectures
In this section, we describe the network architectures used in our experiments.
SSL backbone. The main architecture used as the SSL backbone is a convolutional residual network, denoted RES-L-H. It consists of a stack of two $2 \times 2$ convolutional layers with stride 2, batch normalization, and ReLU activation, followed by $L$ residual blocks. The $i$th block computes $g_i(x) = \sigma\big(x + B_i^2(C_i^2(\sigma(B_i^1(C_i^1(x)))))\big)$, where $C_i^j$ is a $3 \times 3$ convolutional layer with $H$ channels, stride 1, and padding 1, $B_i^j$ is a batch normalization layer for $j \in \{1, 2\}$, and $\sigma$ is the ReLU activation.
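A PyTorch sketch of this backbone, directly following the description above; whether batch normalization and ReLU follow each of the two strided stem convolutions is an assumption of the sketch.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """One RES-L-H block: x -> ReLU(x + BN(Conv(ReLU(BN(Conv(x))))))."""
    def __init__(self, h):
        super().__init__()
        self.c1 = nn.Conv2d(h, h, kernel_size=3, stride=1, padding=1)
        self.b1 = nn.BatchNorm2d(h)
        self.c2 = nn.Conv2d(h, h, kernel_size=3, stride=1, padding=1)
        self.b2 = nn.BatchNorm2d(h)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(x + self.b2(self.c2(self.act(self.b1(self.c1(x))))))

def res_l_h(l, h, in_channels=3):
    """RES-L-H backbone: two stride-2 stem convolutions followed by L blocks."""
    stem = [nn.Conv2d(in_channels, h, kernel_size=2, stride=2),
            nn.BatchNorm2d(h), nn.ReLU(),
            nn.Conv2d(h, h, kernel_size=2, stride=2),
            nn.BatchNorm2d(h), nn.ReLU()]
    return nn.Sequential(*stem, *(ResidualBlock(h) for _ in range(l)))
```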
Random target functions. Throughout the paper, we use two different random target functions: a ResNet-18 [He et al., 2016] and a vision transformer (ViT) [Dosovitskiy et al., 2021] (Appendix B.5). For the transformer architecture, we used 10 layers, with 8 attention heads and 384 hidden dimensions. Classification was done using a classification token.
A.3 Optimization
SSL. We trained all of our SSL models for 1000 epochs using a batch size of 256. We used the SGD optimizer with a learning rate of 0.002, a momentum value of 0.9, and a weight decay value of 1e-6. Additionally, we used a CosineAnnealing learning rate scheduler [Loshchilov and Hutter, 2017].
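In PyTorch terms, this setup corresponds to the following sketch, where `model` denotes the backbone with its projection head and `train_one_epoch` is a hypothetical training step:

```python
import torch

optimizer = torch.optim.SGD(model.parameters(), lr=0.002,
                            momentum=0.9, weight_decay=1e-6)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=1000)

for epoch in range(1000):
    train_one_epoch(model, optimizer)   # hypothetical SSL training step
    scheduler.step()
```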
Random targets. The ResNet-18 targets were trained using the Cross-Entropy loss, with the Adam optimizer [Kingma and Ba, 2015], using a learning rate of 0.05 and the Cosine Annealing learning rate scheduler. For the ViT training, we used the Adam optimizer with a 0.001 learning rate, weight decay of 5e-5, and a Cosine Annealing learning rate scheduler. We used the model's class predictions as the targets.
A.4 Implementation Details
Our experiments were implemented in PyTorch [Paszke et al., 2019], utilizing the Lightly [Susmelj et al., 2020] library for SSL models and PyTorch Lightning [Falcon et al., 2019] for training. All of our models were trained on V100 GPUs.
B Additional Experiments
Clustering in SSL. In Figure 2 in the main text, we reported the behavior of extracting classes of different hierarchies from SSL-trained models. Here we provide the un-normalized version of those results: Figure 6 (left) shows the un-normalized curves, alongside the normalized plots from the main text in Figure 6 (right).
B.1 Network Architectures
SSL research primarily focuses on utilizing a single ResNet-50 backbone for experimental purposes. However, given the similarities between SSL and supervised learning demonstrated in the paper, it is worth exploring how the choice of backbone architecture affects the clustering of representations with respect to semantic targets. In Figure 7, we present the intermediate-layer clustering for different network architectures for extracting the original classes (left) and the superclasses (right). In Figure 7 (top), we display the linear test accuracy of RES-5-50, RES-5-250, and RES-5-1000 networks at different epochs (20, 40, 100, 400, 1000). As can be seen, the network's width has a significant impact on the results for all intermediate layers and at all training epochs.
In Figure 7 (bottom), we present the linear test accuracy of the RES-5-250 and RES-10-250 networks throughout the training epochs (left) and intermediate layers (right). It is generally seen that in the later epochs of training, the additional layers benefit the linear test performance of the network. However, it is also interesting to note that for the initial layers of the network, the shallow RES-5-250 model gets more clustered representations, both for the original classes and for the superclasses. This hints at the fact that deeper networks may need intermediate-layer losses to encourage clustering in initial layers [Elsayed et al., 2018, Ben-Shaul and Dekel, 2022].
B.2 Experiments with Additional Datasets
CIFAR-10. We run similar experiments using RES-5-250 on the CIFAR-10 dataset. In Figure 8 (top), we report the NCC test accuracy for recovering both the sample-level and the class labels. Similar to the results in the main text, the samples progressively cluster around their augmentation class means early in training. However, during the majority of the training process, the augmented samples tend to cluster with respect to the semantic labels. In Figure 8 (bottom), we report the NCC and linear (from left to right) test accuracies of intermediate layers with respect to the semantic labels. As can be seen, the degree of clustering monotonically improves at all layers during the course of training.
FOOD-101. Similar to CIFAR-10, we experiment on the FOOD101 dataset, consisting of 75,750 train samples and 25,250 test samples across 101 food classes (e.g., 'pizza', 'miso soup', 'waffles'), using VICReg and the RES-5-250 architecture. In Figure 9 (left), we plot the NCC test accuracy for both the sample level and the semantic class level, and in Figure 9 (right) we plot the NCC test accuracy of the intermediate layers with respect to the sample-level labels. The results are similar to those shown in Section 4. Additionally, we present the sample-level clustering in the intermediate layers.
Here, we also see an improvement throughout the layers and the epochs.
B.3 Hyperparameters
In Figure 10, we present the training losses and linear accuracies for three different hyperparameter selections trained using VICReg and a RES-5-250 network. Specifically, we vary the µ (variance regularization) parameter and display the losses and linear accuracy for µ = 5, 25, 100, from left to right, respectively. Both loss terms are normalized by µ. In all cases, we used ν = 1 and λ = 25 by default.
Interestingly, the configuration with µ = 5 exhibits significantly lower performance compared to the default setting (µ = 25). However, the configuration with µ = 100 achieves performance comparable to the default, despite the network having a considerably higher invariance loss term. This observation further supports our claim that the regularization term plays a crucial role in learning semantic features.
B.4 Experiments with Different SSL Algorithms
In our main text, we primarily focus on the VICReg SSL algorithm. However, to ensure the robustness of our findings across different algorithms, we also trained a network using the SimCLR algorithm [Chen et al., 2020a] with our default RES-5-250 architecture. In Figure 11, we present several experiments that compare the network trained with VICReg to the one trained with SimCLR.
Firstly, Figure 11 (top) illustrates the linear test accuracies of intermediate layers for both algorithms throughout the training epochs with respect to the original classes (left) and the superclasses (right). The darkness of the lines indicates the progression of epochs, with lighter shades representing later epochs. Despite the significant differences between the algorithms, particularly SimCLR's contrastive nature, we observe similar clustering behavior across different layers and epochs, with VICReg consistently achieving better performance. This finding demonstrates that various SSL algorithms yield comparable clustering characteristics.

Figure 10: The role of the regularization term in SSL training. Each plot depicts the regularization and invariance losses, along with the linear test accuracy, throughout the training process of VICReg with µ = 5, 25, 100, respectively.

Figure 11: SimCLR and VICReg have similar performance. (top) Linear test accuracy at different training epochs, as a function of the intermediate layer, for the original classes and the superclasses, from left to right respectively. (bottom left) Linear test accuracy at different training epochs (from dark to light) with respect to different randomness levels. (bottom right) Linear test accuracy at different intermediate layers, at the end of training, with respect to different randomness levels.

Furthermore, in addition to clustering, we aim to assess the similarity of representations between the two algorithms. In Figure 11 (bottom left), we present the correspondence of the representations with random targets, similar to the approach used in Figure 4 (left), across different training epochs. The darkness of the lines represents the progression of epochs, with lighter shades indicating later epochs. Additionally, in Figure 11 (bottom right), we depict the same correspondence for different intermediate layers, similar to Figure 4 (middle), at the end of SSL training. Across various layers and epochs, we observe similar behavior between the algorithms. However, it is worth noting that VICReg demonstrates slightly higher performance in extracting the random targets.
B.5 Random Target Function Architectures
Neural network architectures possess inherent biases that aid in modeling complex functions. Convolutional networks, for example, implicitly encode biases through their structure, which includes locality and translation invariance. As a result, these networks tend to prioritize local patterns over global features. On the other hand, the ViT architecture takes a different approach by treating images as sequences of patches and utilizing self-attention mechanisms. This design introduces a bias that enables long-range dependencies and global context. An interesting question arises regarding how the choice of the SSL backbone architecture influences the learned representations. To explore this, we examine the alignment of representations with different types of random targets. Specifically, we conduct experiments similar to those illustrated in Figure 4 (left and middle) but employ different random targets based on the ViT architecture [Park et al., 2023].
In Figure 12, we monitor the linear test accuracy of a VICReg-trained RES-5-250 in recovering both the ResNet-18 targets and the ViT targets. Figure 12 (left) presents the linear test accuracy at different training epochs (20, 40, 100, 400, 1000), from dark to light respectively, and Figure 12 (right) shows the linear test accuracy at different intermediate layers (1-5), from dark to light respectively, at the end of SSL training. As can be seen, the results with ViT targets are consistent with the claims in Section 5. In other words, the SSL representations exhibit improved alignment during training epochs and across intermediate layers.
However, it is evident that the linear accuracy with respect to the ResNet-18 targets consistently outperforms the accuracy with respect to ViT targets at all training stages, except for the end of the supervised training phase ("0" randomness). This indicates that during the training process, the SSL-trained model acquires representations that are more aligned with those achieved by training convolutional network architectures. This behavior can be attributed to the implicit biases introduced by the backbone, which share similarities with the ones present in the ResNet-18 architecture.
C Limitations
While our study has yielded significant findings, it is not without its limitations. Firstly, our experiments were conducted on select datasets, which inherently possess their unique characteristics and biases. As such, applying our analysis to different datasets may potentially give different results. This reflects the inherent variability and diversity of real-world data and the possible influence of dataset-specific factors on our findings. Secondly, our analysis primarily focuses on the vision domain. While we believe that our findings have substantial implications in this area, the generalizability of our findings to other domains, such as natural language processing or audio processing, remains unverified.
D Broader Impact
In this paper, our primary objective is to characterize the different properties and aspects of SSL, with the aim of deepening our understanding of the learned representations. SSL has demonstrated its versatility in a wide range of practical downstream applications, including image classification, image segmentation, and various language tasks. While our main focus lies in unraveling the underlying mechanisms of SSL, the insights gained from this research hold the potential to enhance SSL algorithms. Consequently, these improved algorithms could have a significant impact on a diverse set of applications. However, it is crucial to acknowledge the potential risks associated with the misuse of these technologies.
Figure 1: SSL training induces semantic clustering. UMAP visualization of the SSL representations before and after training at different hierarchies. (top) Augmentations of five different samples, each sample colored distinctly. (middle) Samples from five different classes within the standard CIFAR-100 dataset. (bottom) Samples from five different superclasses within the dataset.
Figure 2: SSL algorithms cluster the data with respect to semantic targets. (left) The normalized NCC train accuracy, computed by dividing the accuracy values by their value at initialization. (middle) The ratio between the NCC test accuracy and the linear test accuracy for per-sample and semantic classes, normalized by its value at initialization. (right) The linear test accuracy rates, normalized by their values at initialization. All experiments are conducted on CIFAR-100 with VICReg training.

Figure 3: (left) The regularization and invariance losses together with the original-target linear test accuracy of an SSL-trained model over the course of training. (middle) Compression of the mutual information between the input and the representation during the course of training. (right) SSL training learns clustered representations. The NCC train and test accuracy in supervised (with/without augmentation) and SSL settings. All experiments are conducted on CIFAR-100 with VICReg training.
Figure 4: SSL continuously learns semantic targets over random ones. (left) The linear test accuracy for targets with varying levels of randomness from the last layers at different epochs. (middle) The linear test accuracy for targets with varying levels of randomness for the trained model. (right) The ratios between non-random and random targets for various clustering metrics. All experiments are conducted on CIFAR-100 with VICReg training.

Figure 5: SSL efficiently learns semantic classes throughout intermediate layers. The linear test accuracy of different layers of the model at various epochs. (left) With respect to the 100 original classes. (middle) With respect to the 20 superclasses. (right) The ratio between the superclasses and the original classes. All experiments are conducted on CIFAR-100 with VICReg training.
Supervised learning (Figure 3, right). For the supervised learning setting, we used AutoAugment [Cubuk et al., 2019] with a policy created for CIFAR-10 [Krizhevsky, 2009].
Figure 6: SSL algorithms cluster the data with respect to semantic targets. (right) The results in Figure 2. (left) The un-normalized version of the same results.
Figure 7: The influence of width and depth on learning semantic classes at intermediate layers. (top) Linear test accuracy at different epochs for neural networks of varying widths. (bottom) Linear test accuracy of neural networks with different depths. (left) The performance measured in relation to the original classes. (right) The performance with respect to the superclasses.
Figure 8: SSL algorithms cluster the data with respect to semantic targets. (top) The NCC test accuracy of an SSL-trained network on CIFAR-10, measured at the sample level and original classes (both un-normalized and normalized). (bottom) The NCC test accuracy of the model at different layers and epochs.
Figure 9: SSL algorithms cluster the data with respect to semantic targets and invariance to augmentations at intermediate layers. (left) The normalized NCC test accuracy of a VICReg-trained network on FOOD101 with respect to the sample-level labels and the original class labels. (right) The NCC test accuracy of the model with respect to the sample-level labels at intermediate layers.
Figure 12: The implicit bias of the backbone architecture on the learned representations. (left) Linear test accuracy of an SSL-trained RES-5-250 network for extracting ResNet-18 and ViT random target functions with varying degrees of randomness (x-axis) at different epochs (color-coded from dark to bright). (right) Linear test accuracy of an SSL-trained RES-5-250 network for extracting ResNet-18 and ViT random target functions with varying degrees of randomness (x-axis) at different intermediate layers (color-coded from dark to bright).
References
Guillaume Alain and Yoshua Bengio. Understanding intermediate layers using linear classifier probes, 2017. URL https://openreview.net/forum?id=ryF7rTqgl.

Martin Anthony and Peter L. Bartlett. Neural Network Learning: Theoretical Foundations. Cambridge University Press, USA, 1st edition, 2009. ISBN 052111862X.

Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. Linear algebraic structure of word senses, with applications to polysemy. Transactions of the Association for Computational Linguistics, 6:483-495, 2018. doi: 10.1162/tacl_a_00034. URL https://aclanthology.org/Q18-1034.

Philip Bachman, R Devon Hjelm, and William Buchwalter. Learning representations by maximizing mutual information across views. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper_files/paper/2019/file/ddf354219aac374f1d40b7e760ee5bb7-Paper.pdf.

Randall Balestriero, Mark Ibrahim, Vlad Sobal, Ari Morcos, Shashank Shekhar, Tom Goldstein, Florian Bordes, Adrien Bardes, Gregoire Mialon, Yuandong Tian, Avi Schwarzschild, Andrew Gordon Wilson, Jonas Geiping, Quentin Garrido, Pierre Fernandez, Amir Bar, Hamed Pirsiavash, Yann LeCun, and Micah Goldblum. A cookbook of self-supervised learning, 2023.

Adrien Bardes, Jean Ponce, and Yann LeCun. VICReg: Variance-invariance-covariance regularization for self-supervised learning. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=xm6YD62D1Ub.

Peter L. Bartlett, Dylan J. Foster, and Matus Telgarsky. Spectrally-normalized margin bounds for neural networks. In Advances in Neural Information Processing Systems, volume 30. Curran Associates Inc., 2017.

Mohamed Ishmael Belghazi, Aristide Baratin, Sai Rajeswar, Sherjil Ozair, Yoshua Bengio, Aaron Courville, and R Devon Hjelm. MINE: Mutual information neural estimation. arXiv preprint arXiv:1801.04062, 2018.

Ido Ben-Shaul and Shai Dekel. Sparsity-probe: Analysis tool for deep learning models. ArXiv, abs/2105.06849, 2021.

Ido Ben-Shaul and Shai Dekel. Nearest class-center simplification through intermediate layers. In Proceedings of Topological, Algebraic, and Geometric Learning Workshops 2022, volume 196 of Proceedings of Machine Learning Research, pages 37-47. PMLR, 2022. URL https://proceedings.mlr.press/v196/ben-shaul22a.html.

Lukas Bossard, Matthieu Guillaumin, and Luc Van Gool. Food-101 - mining discriminative components with random forests. In David Fleet, Tomas Pajdla, Bernt Schiele, and Tinne Tuytelaars, editors, Computer Vision - ECCV 2014, pages 446-461. Springer International Publishing, 2014. ISBN 978-3-319-10599-4.

Jane Bromley, Isabelle Guyon, Yann LeCun, Eduard Säckinger, and Roopak Shah. Signature verification using a "siamese" time delay neural network. In J. Cowan, G. Tesauro, and J. Alspector, editors, Advances in Neural Information Processing Systems, volume 6. Morgan-Kaufmann, 1993. URL https://proceedings.neurips.cc/paper_files/paper/1993/file/288cc0ff022877bd3df94bc9360b9c5d-Paper.pdf.

Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin. Unsupervised learning of visual features by contrasting cluster assignments. In Proceedings of Advances in Neural Information Processing Systems (NeurIPS), 2020.

Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning (ICML), 2020a.

Ting Chen, Simon Kornblith, Kevin Swersky, Mohammad Norouzi, and Geoffrey E Hinton. Big self-supervised models are strong semi-supervised learners. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 22243-22255. Curran Associates, Inc., 2020b.

Xinlei Chen and Kaiming He. Exploring simple siamese representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 15750-15758, June 2021.

Gilad Cohen, Guillermo Sapiro, and Raja Giryes. DNN or k-NN: That is the generalize vs. memorize question, 2018. URL https://arxiv.org/abs/1805.06822.

Ekin Dogus Cubuk, Barret Zoph, Dandelion Mané, Vijay Vasudevan, and Quoc V. Le. AutoAugment: Learning augmentation strategies from data. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 113-123, 2019.

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=YicbFdNTTy.

Yann Dubois, Stefano Ermon, Tatsunori B Hashimoto, and Percy S Liang. Improving self-supervised learning by characterizing idealized representations. In Advances in Neural Information Processing Systems, volume 35, pages 11279-11296, 2022. URL https://proceedings.neurips.cc/paper_files/paper/2022/file/494f876fad056843f310ad647274dd99-Paper-Conference.pdf.

Nelson Elhage, Neel Nanda, Catherine Olsson, Tom Henighan, Nicholas Joseph, Ben Mann, Amanda Askell, Yuntao Bai, Anna Chen, Tom Conerly, Nova DasSarma, Dawn Drain, Deep Ganguli, Zac Hatfield-Dodds, Danny Hernandez, Andy Jones, Jackson Kernion, Liane Lovitt, Kamal Ndousse, Dario Amodei, Tom Brown, Jack Clark, Jared Kaplan, Sam McCandlish, and Chris Olah. A mathematical framework for transformer circuits. Transformer Circuits Thread, 2021.

Gamaleldin Elsayed, Dilip Krishnan, Hossein Mobahi, Kevin Regan, and Samy Bengio. Large margin deep networks for classification. In Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018. URL https://proceedings.neurips.cc/paper_files/paper/2018/file/42998cf32d552343bc8e460416382dca-Paper.pdf.

William Falcon et al. PyTorch Lightning. GitHub, 2019. URL https://github.com/PyTorchLightning/pytorch-lightning.

Tomer Galanti, Liane Galanti, and Ido Ben-Shaul. On the implicit bias towards minimal depth of deep neural networks. arXiv, 2022.

Tomer Galanti, András György, and Marcus Hutter. On the role of neural collapse in transfer learning. In International Conference on Learning Representations, 2022a. URL https://openreview.net/forum?id=SwIp410B6aQ.

Tomer Galanti, András György, and Marcus Hutter. Improved generalization bounds for transfer learning via neural collapse. In ICML Workshop on Pre-training: Perspectives, Pitfalls, and Paths Forward, 2022b.

Tomer Galanti, András György, and Marcus Hutter. Generalization bounds for transfer learning with pretrained classifiers, 2022c.

Spyros Gidaris, Andrei Bursuc, Gilles Puy, Nikos Komodakis, Matthieu Cord, and Patrick Perez. OBoW: Online bag-of-visual-words generation for self-supervised learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 6830-6840, June 2021.

Spyros Gidaris, Praveer Singh, and Nikos Komodakis. Unsupervised representation learning by predicting image rotations. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=S1v4N2l0-.

Micah Goldblum, Steven Reich, Liam Fowl, Renkun Ni, Valeriia Cherepanova, and Tom Goldstein. Unraveling meta-learning: Understanding feature representations for few-shot tasks. In Hal Daumé III and Aarti Singh, editors, Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 3607-3616. PMLR, 13-18 Jul 2020.

Noah Golowich, Alexander Rakhlin, and Ohad Shamir. Size-independent sample complexity of neural networks. Information and Inference: A Journal of the IMA, 9(2):473-504, 05 2020. ISSN 2049-8772. doi: 10.1093/imaiai/iaz007. URL https://doi.org/10.1093/imaiai/iaz007.

Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre H. Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Daniel Guo, Mohammad Gheshlaghi Azar, Bilal Piot, Koray Kavukcuoglu, Rémi Munos, and Michal Valko. Bootstrap your own latent: a new approach to self-supervised learning. In Advances in Neural Information Processing Systems. Curran Associates Inc., Red Hook, NY, USA, 2020. ISBN 9781713829546.

R. Hadsell, S. Chopra, and Y. LeCun. Dimensionality reduction by learning an invariant mapping. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), volume 2, pages 1735-1742, 2006. doi: 10.1109/CVPR.2006.100.

X.Y. Han, Vardan Papyan, and David L. Donoho. Neural collapse under MSE loss: Proximity to and dynamics on the central path. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=w1UbdvWH_R3.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770-778, 2016. doi: 10.1109/CVPR.2016.90.

Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
Understanding dimensional collapse in contrastive self-supervised learning. Li Jing, Pascal Vincent, Yann Lecun, Yuandong Tian, International Conference on Learning Representations. Li Jing, Pascal Vincent, Yann LeCun, and Yuandong Tian. Understanding dimensional collapse in contrastive self-supervised learning. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=YevsQ05DEN7.
Adam: A method for stochastic optimization. P Diederik, Jimmy Kingma, Ba, International Conference on Learning Representations. Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015.
Learning multiple layers of features from tiny images. Alex Krizhevsky, Technical ReportAlex Krizhevsky. Learning multiple layers of features from tiny images. In Technical Report, 2009.
Learning representations for automatic colorization. Gustav Larsson, Michael Maire, Gregory Shakhnarovich, 978-3-319-46493-0Computer Vision -ECCV 2016. Bastian Leibe, Jiri Matas, Nicu Sebe, and Max WellingChamSpringer International PublishingGustav Larsson, Michael Maire, and Gregory Shakhnarovich. Learning representations for automatic colorization. In Bastian Leibe, Jiri Matas, Nicu Sebe, and Max Welling, editors, Computer Vision -ECCV 2016, pages 577-593, Cham, 2016. Springer International Publishing. ISBN 978-3-319-46493-0.
SGDR: Stochastic gradient descent with warm restarts. Ilya Loshchilov, Frank Hutter, International Conference on Learning Representations. Ilya Loshchilov and Frank Hutter. SGDR: Stochastic gradient descent with warm restarts. In International Conference on Learning Representations, 2017. URL https://openreview.net/ forum?id=Skq89Scxx.
Self-supervised learning of pretext-invariant representations. Ishan Misra, Laurens Van Der Maaten, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)Ishan Misra and Laurens van der Maaten. Self-supervised learning of pretext-invariant representations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
Self-supervised learning of pretext-invariant representations. Ishan Misra, Laurens Van Der Maaten, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Ishan Misra and Laurens van der Maaten. Self-supervised learning of pretext-invariant representations. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 6706- 6716, 2019.
Learning word embeddings efficiently with noise-contrastive estimation. Andriy Mnih, Koray Kavukcuoglu, Advances in Neural Information Processing Systems. C.J. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.Q. WeinbergerCurran Associates, Inc26Andriy Mnih and Koray Kavukcuoglu. Learning word embeddings efficiently with noise-contrastive estimation. In C.J. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems, volume 26. Curran Associates, Inc., 2013. URL https://proceedings.neurips.cc/paper_files/paper/2013/file/ db2b4182156b2f1f817860ac9f409ad7-Paper.pdf.
Maxime Oquab, Timothée Darcet, Theo Moutakanni, V Huy, Marc Vo, Vasil Szafraniec, Pierre Khalidov, Daniel Fernandez, Francisco Haziza, Alaaeldin Massa, Russell El-Nouby, Po-Yao Howes, Hu Huang, Vasu Xu, Shang-Wen Sharma, Wojciech Li, Mike Galuba, Mido Rabbat, Nicolas Assran, Gabriel Ballas, Ishan Synnaeve, Herve Misra, Jegou, Armand Joulin, and Piotr Bojanowski. Dinov2: Learning robust visual features without supervision. Julien Mairal, Patrick LabatutMaxime Oquab, Timothée Darcet, Theo Moutakanni, Huy V. Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, Russell Howes, Po-Yao Huang, Hu Xu, Vasu Sharma, Shang-Wen Li, Wojciech Galuba, Mike Rabbat, Mido Assran, Nicolas Ballas, Gabriel Synnaeve, Ishan Misra, Herve Jegou, Julien Mairal, Patrick Labatut, Armand Joulin, and Piotr Bojanowski. Dinov2: Learning robust visual features without supervision, 2023.
Convolutional neural networks analyzed via convolutional sparse coding. Yaniv Vardan Papyan, Michael Romano, Elad, J. Mach. Learn. Res. 1852Vardan Papyan, Yaniv Romano, and Michael Elad. Convolutional neural networks analyzed via convolutional sparse coding. J. Mach. Learn. Res., 18:83:1-83:52, 2017.
Prevalence of neural collapse during the terminal phase of deep learning training. X Y Vardan Papyan, David L Han, Donoho, Proceedings of the National Academy of Sciences. 11740Vardan Papyan, X. Y. Han, and David L. Donoho. Prevalence of neural collapse during the terminal phase of deep learning training. Proceedings of the National Academy of Sciences, 117(40): 24652-24663, 2020.
What do selfsupervised vision transformers learn?. Namuk Park, Wonjae Kim, Byeongho Heo, Taekyung Kim, Sangdoo Yun, The Eleventh International Conference on Learning Representations. Namuk Park, Wonjae Kim, Byeongho Heo, Taekyung Kim, and Sangdoo Yun. What do self- supervised vision transformers learn? In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=azCKuYyS74.
Pytorch: An imperative style, high-performance deep learning library. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary Devito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, Soumith Chintala, Advances in Neural Information Processing Systems. Curran Associates, Inc32Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32, pages 8024-8035. Curran Associates, Inc., 2019. URL http://papers.neurips.cc/paper/ 9015-pytorch-an-imperative-style-high-performance-deep-learning-library. pdf.
Scikit-learn: Machine learning in Python. F Pedregosa, G Varoquaux, A Gramfort, V Michel, B Thirion, O Grisel, M Blondel, P Prettenhofer, R Weiss, V Dubourg, J Vanderplas, A Passos, D Cournapeau, M Brucher, M Perrot, E Duchesnay, Journal of Machine Learning Research. 12F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Pretten- hofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830, 2011.
Learning transferable visual models from natural language supervision. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever, Proceedings of the 38th International Conference on Machine Learning. the 38th International Conference on Machine Learning139Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In Proceedings of the 38th International Conference on Machine Learning, volume 139, pages 8748-8763, 2021. URL https://proceedings.mlr.press/v139/radford21a.html.
Feature learning in deep classifiers through intermediate neural collapse. Akshay Rangamani, Marius Lindegaard, Tomer Galanti, Tomaso Poggio, Proceedings of the 40th International Conference on Machine Learning, Proceedings of Machine Learning Research. the 40th International Conference on Machine Learning, Machine Learning ResearchAkshay Rangamani, Marius Lindegaard, Tomer Galanti, and Tomaso Poggio. Feature learning in deep classifiers through intermediate neural collapse. In Proceedings of the 40th International Conference on Machine Learning, Proceedings of Machine Learning Research, 2023.
Parametric umap embeddings for representation and semisupervised learning. Tim Sainburg, Leland Mcinnes, Timothy Q Gentner, Neural Computation. 3311Tim Sainburg, Leland McInnes, and Timothy Q Gentner. Parametric umap embeddings for represen- tation and semisupervised learning. Neural Computation, 33(11):2881-2907, 2021.
A theoretical analysis of contrastive unsupervised representation learning. Nikunj Saunshi, Orestis Plevrakis, Sanjeev Arora, Mikhail Khodak, Hrishikesh Khandeparkar, Proceedings of the 36th International Conference on Machine Learning. the 36th International Conference on Machine Learning97Nikunj Saunshi, Orestis Plevrakis, Sanjeev Arora, Mikhail Khodak, and Hrishikesh Khandeparkar. A theoretical analysis of contrastive unsupervised representation learning. In Proceedings of the 36th International Conference on Machine Learning, volume 97, pages 5628-5637, 2019. URL https://proceedings.mlr.press/v97/saunshi19a.html.
Ravid Shwartz-Ziv, arXiv:2202.06749Information flow in deep neural networks. arXiv preprintRavid Shwartz-Ziv. Information flow in deep neural networks. arXiv preprint arXiv:2202.06749, 2022.
Information in infinite ensembles of infinitely-wide neural networks. Ravid Shwartz, - Ziv, Alexander A Alemi , Symposium on Advances in Approximate Bayesian Inference. PMLRRavid Shwartz-Ziv and Alexander A Alemi. Information in infinite ensembles of infinitely-wide neural networks. In Symposium on Advances in Approximate Bayesian Inference, pages 1-17. PMLR, 2020.
To compress or not to compress-self-supervised learning and information theory: A review. Ravid Shwartz, -Ziv , Yann Lecun, arXiv:2304.09355arXiv preprintRavid Shwartz-Ziv and Yann LeCun. To compress or not to compress-self-supervised learning and information theory: A review. arXiv preprint arXiv:2304.09355, 2023.
Opening the black box of deep neural networks via information. Ravid Shwartz, -Ziv , Naftali Tishby, abs/1703.00810ArXiv. Ravid Shwartz-Ziv and Naftali Tishby. Opening the black box of deep neural networks via information. ArXiv, abs/1703.00810, 2017.
Representation compression and generalization in deep neural networks. Ravid Shwartz-Ziv, Amichai Painsky, Naftali Tishby, Ravid Shwartz-Ziv, Amichai Painsky, and Naftali Tishby. Representation compression and general- ization in deep neural networks, 2018.
What do we maximize in self-supervised learning?. Ravid Shwartz-Ziv, Randall Balestriero, Yann Lecun, arXiv:2207.10081arXiv preprintRavid Shwartz-Ziv, Randall Balestriero, and Yann LeCun. What do we maximize in self-supervised learning? arXiv preprint arXiv:2207.10081, 2022a.
Pre-train your loss: Easy bayesian transfer learning with informative priors. Ravid Shwartz-Ziv, Micah Goldblum, Hossein Souri, Sanyam Kapoor, Chen Zhu, Yann Lecun, Andrew G Wilson, Advances in Neural Information Processing Systems. 35Ravid Shwartz-Ziv, Micah Goldblum, Hossein Souri, Sanyam Kapoor, Chen Zhu, Yann LeCun, and Andrew G Wilson. Pre-train your loss: Easy bayesian transfer learning with informative priors. Advances in Neural Information Processing Systems, 35:27706-27715, 2022b.
An information-theoretic perspective on variance-invariance-covariance regularization. Ravid Shwartz-Ziv, Randall Balestriero, Kenji Kawaguchi, G J Tim, Yann Rudner, Lecun, arXiv:2303.00633arXiv preprintRavid Shwartz-Ziv, Randall Balestriero, Kenji Kawaguchi, Tim GJ Rudner, and Yann LeCun. An information-theoretic perspective on variance-invariance-covariance regularization. arXiv preprint arXiv:2303.00633, 2023.
. Igor Susmelj, Matthias Heller, Philipp Wirth, Jeremy Prescott, Malte Ebner, Lightly. GitHub. 2020Igor Susmelj, Matthias Heller, Philipp Wirth, Jeremy Prescott, and Malte Ebner et al. Lightly. GitHub. Note: https://github.com/lightly-ai/lightly, 2020.
Extended unconstrained features model for exploring deep neural collapse. Tom Tirer, Joan Bruna, PMLR, 2022Proceedings of the 39th International Conference on Machine Learning. the 39th International Conference on Machine Learning162Tom Tirer and Joan Bruna. Extended unconstrained features model for exploring deep neural collapse. In Proceedings of the 39th International Conference on Machine Learning, volume 162, pages 21478-21505. PMLR, 2022. URL https://proceedings.mlr.press/v162/tirer22a. html.
Aaron Van Den Oord, Yazhe Li, Oriol Vinyals, 10.48550/arXiv.1807.03748Representation Learning with Contrastive Predictive Coding. arXiv. Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation Learning with Contrastive Predictive Coding. arXiv, 2018. doi: 10.48550/arXiv.1807.03748.
Aaron Van Den Oord, Yazhe Li, Oriol Vinyals, arXiv:1807.03748Representation learning with contrastive predictive coding. arXiv preprintAaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.
Self-supervised learning with data augmentations provably isolates content from style. Yash Julius Von Kügelgen, Luigi Sharma, Wieland Gresele, Bernhard Brendel, Michel Schölkopf, Francesco Besserve, Locatello, Advances in Neural Information Processing Systems. M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman VaughanCurran Associates, Inc34Julius von Kügelgen, Yash Sharma, Luigi Gresele, Wieland Brendel, Bernhard Schölkopf, Michel Besserve, and Francesco Locatello. Self-supervised learning with data augmentations prov- ably isolates content from style. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages 16451-16467. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/ paper_files/paper/2021/file/8929c70f8d710e412d38da624b21c3c8-Paper.pdf.
Barlow twins: Self-supervised learning via redundancy reduction. Jure Zbontar, Li Jing, Ishan Misra, Yann Lecun, Stéphane Deny, International Conference on Machine Learning. Jure Zbontar, Li Jing, Ishan Misra, Yann LeCun, and Stéphane Deny. Barlow twins: Self-supervised learning via redundancy reduction. In International Conference on Machine Learning, 2021.
| [
"https://github.com/PyTorchLightning/pytorch-",
"https://github.com/lightly-ai/lightly,"
] |
[
"Deep Graph Stream SVDD: Anomaly Detection in Cyber-Physical Systems",
"Deep Graph Stream SVDD: Anomaly Detection in Cyber-Physical Systems"
] | [
"Ehtesamul Azim [email protected] \nDepartment of Computer Science\nUniversity of Central Florida\n32826OrlandoFLUSA\n",
"Dongjie Wang [email protected] \nDepartment of Computer Science\nUniversity of Central Florida\n32826OrlandoFLUSA\n",
"Yanjie Fu [email protected] \nDepartment of Computer Science\nUniversity of Central Florida\n32826OrlandoFLUSA\n"
] | [
"Department of Computer Science\nUniversity of Central Florida\n32826OrlandoFLUSA",
"Department of Computer Science\nUniversity of Central Florida\n32826OrlandoFLUSA",
"Department of Computer Science\nUniversity of Central Florida\n32826OrlandoFLUSA"
] | [] | Our work focuses on anomaly detection in cyber-physical systems. Prior literature has three limitations: (1) Failing to capture long-delayed patterns in system anomalies; (2) Ignoring dynamic changes in sensor connections; (3) The curse of high-dimensional data samples. These limit the detection performance and usefulness of existing works. To address them, we propose a new approach called deep graph stream support vector data description (SVDD) for anomaly detection. Specifically, we first use a transformer to preserve both short and long temporal patterns of monitoring data in temporal embeddings. Then we cluster these embeddings according to sensor type and utilize them to estimate the change in connectivity between various sensors to construct a new weighted graph. The temporal embeddings are mapped to the new graph as node attributes to form weighted attributed graph. We input the graph into a variational graph auto-encoder model to learn final spatio-temporal representation. Finally, we learn a hypersphere that encompasses normal embeddings and predict the system status by calculating the distances between the hypersphere and data samples. Extensive experiments validate the superiority of our model, which improves F1-score by 35.87%, AUC by 19.32%, while being 32 times faster than the best baseline at training and inference. | 10.48550/arxiv.2302.12918 | [
"https://export.arxiv.org/pdf/2302.12918v1.pdf"
] | 257,219,125 | 2302.12918 | e0ae58b1a801e5c968e8d3cddf9d554efdcfa8b6 |
Deep Graph Stream SVDD: Anomaly Detection in Cyber-Physical Systems
Ehtesamul Azim [email protected]
Department of Computer Science
University of Central Florida
32826OrlandoFLUSA
Dongjie Wang [email protected]
Department of Computer Science
University of Central Florida
32826OrlandoFLUSA
Yanjie Fu [email protected]
Department of Computer Science
University of Central Florida
32826OrlandoFLUSA
Deep Graph Stream SVDD: Anomaly Detection in Cyber-Physical Systems
Our work focuses on anomaly detection in cyber-physical systems. Prior literature has three limitations: (1) Failing to capture long-delayed patterns in system anomalies; (2) Ignoring dynamic changes in sensor connections; (3) The curse of high-dimensional data samples. These limit the detection performance and usefulness of existing works. To address them, we propose a new approach called deep graph stream support vector data description (SVDD) for anomaly detection. Specifically, we first use a transformer to preserve both short and long temporal patterns of monitoring data in temporal embeddings. Then we cluster these embeddings according to sensor type and utilize them to estimate the change in connectivity between various sensors to construct a new weighted graph. The temporal embeddings are mapped to the new graph as node attributes to form a weighted attributed graph. We input the graph into a variational graph auto-encoder model to learn the final spatio-temporal representation. Finally, we learn a hypersphere that encompasses normal embeddings and predict the system status by calculating the distances between the hypersphere and data samples. Extensive experiments validate the superiority of our model, which improves F1-score by 35.87% and AUC by 19.32%, while being 32 times faster than the best baseline at training and inference.
Introduction
Cyber-physical systems (CPS) have been deployed everywhere and play a significant role in the real world, including smart grids, robotics systems, water treatment networks, etc. Due to their complex dependencies and relationships, these systems are vulnerable to abnormal system events (e.g., cyberattacks, system exceptions), which can cause catastrophic failures and incur expensive costs. In 2021, hackers infiltrated Florida's water treatment plants and boosted the sodium hydroxide level in the water supply to 100 times the normal level [3]. This may endanger the physical health of all Floridians. To maintain stable and safe CPS, considerable research effort has been devoted to effectively detecting anomalies in such systems using sensor monitoring data [19,16].
Prior literature partially resolves this problem; however, three issues restrict its practicality and detection performance. Issue 1: long-delayed patterns. The malfunctioning effects of abnormal system events often do not manifest immediately. Kravchik et al. employed an LSTM to predict future values based on past values and assessed the system status using prediction errors [5]. However, constrained by the capability of LSTMs, it is hard to capture long-delayed patterns, which may lead to suboptimal detection performance. How can we sufficiently capture such long-delayed patterns? Issue 2: dynamic changes in sensor-sensor influence. Besides long-delayed patterns, the malfunctioning effects may propagate to other sensors. Wang et al. captured such propagation patterns in water treatment networks by integrating the sensor-sensor connectivity graph for cyber-attack detection [17]. However, the sensor-sensor influence may shift as the time series changes due to system failures. Ignoring such dynamics may result in failing to identify propagation patterns and cause poor detection performance. How can we account for such dynamic sensor-sensor influence? Issue 3: high-dimensional data samples. Considering the labeled-data sparsity issue in CPS, existing works focus on unsupervised or semi-supervised settings [17,10]. However, traditional models like One-Class SVM are too shallow to fit high-dimensional data samples, and they incur substantial time costs for feature engineering and model learning. How can we improve the learning efficiency of anomaly detection in high-dimensional scenarios?
To address these issues, we aim to effectively capture spatial-temporal dynamics in high-dimensional sensor monitoring data. In CPS, sensors can be viewed as nodes, and their physical connections resemble a graph. Considering that the monitoring data of each sensor changes over time and that the monitoring data of various sensors influence one another, we model them using a graph stream structure. Based on that, we propose a new framework called Deep Graph Stream Support Vector Data Description (DGS-SVDD). Specifically, to capture long-delayed patterns, we first develop a temporal embedding module based on the transformer [15]. This module is used to extract these patterns from individual sensor monitoring data and embed them in low-dimensional vectors. Then, to comprehend dynamic changes in sensor-sensor connections, we estimate the influence between sensors using the previously learned temporal embeddings of the sensors. The estimated weight matrix is integrated with the sensor-sensor physical connectivity graph to produce an enhanced graph. We map the temporal embeddings to each node in the enhanced graph as its attributes to form a new attributed graph. After that, we input this graph into a variational graph auto-encoder (VGAE) [4] to preserve all information as the final spatial-temporal embeddings. Moreover, to effectively detect anomalies in high-dimensional data, we adopt deep learning to learn the hypersphere that encompasses normal embeddings. The distances between the hypersphere and data samples are calculated and used as the criterion to predict the system status at each time segment. Finally, we conduct extensive experiments on a real-world dataset to validate the superiority of our work. In particular, compared to the best baseline model, DGS-SVDD improves F1-score by 35.87% and AUC by 19.32%, while accelerating model training and inference by 32 times.
Preliminaries
Definitions
Definition 1. Graph Stream. A graph object $G_i$ describes the monitoring values of the cyber-physical system at timestamp $i$. It can be defined as $G_i = (V, E, \mathbf{t}_i)$, where $V$ is the vertex (i.e., sensor) set of size $n$; $E$ is the edge set of size $m$, in which each edge indicates the physical connectivity between two sensors; and $\mathbf{t}_i$ is a list containing the monitoring values of the $n$ sensors at the $i$-th timestamp. A graph stream is a collection of graph objects over the temporal dimension. The graph stream of length $L_x$ at the $t$-th time segment is defined as
$$X_t = [G_i, G_{i+1}, \cdots, G_{i+L_x-1}].$$
Definition 2. Weighted Attributed Graph. The edge set $E$ of each graph object in the graph stream $X_t$ does not change over time; it is a binary edge set that reflects the physical connectivity between sensors. However, the correlations between different sensors may change as system failures happen. To capture such dynamics, we use $\tilde{G}_t = (V, \tilde{E}_t, U_t)$ to denote the weighted attributed graph at the $t$-th time segment. In this graph, $V$ is the same vertex (i.e., sensor) set of size $n$ as in the graph stream; $\tilde{E}_t$ is the weighted edge set, in which each item indicates the weighted influence, calculated from the temporal information, between two sensors; and $U_t$ holds the attributes of each vertex, which are the temporal embeddings of the nodes at the current time segment. Thus, $\tilde{G}_t$ contains the spatial-temporal information of the system.
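For concreteness, the following is a minimal Python sketch of the two containers defined above. It is not from the paper; all class and field names (GraphObject, GraphStream, WeightedAttributedGraph, temporal_matrix) are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class GraphObject:
    """One snapshot G_i = (V, E, t_i): binary physical edges plus n sensor readings."""
    adjacency: np.ndarray  # (n, n) binary connectivity E, shared by all timestamps
    readings: np.ndarray   # (n,) monitoring values t_i at timestamp i

@dataclass
class GraphStream:
    """X_t = [G_i, ..., G_{i+L_x-1}]: L_x consecutive graph objects."""
    objects: List[GraphObject]

    def temporal_matrix(self) -> np.ndarray:
        """Stack the readings into T_t of shape (n, L_x), used by the temporal module."""
        return np.stack([g.readings for g in self.objects], axis=1)

@dataclass
class WeightedAttributedGraph:
    """G~_t = (V, E~_t, U_t): weighted edges plus temporal-embedding node attributes."""
    weighted_adjacency: np.ndarray  # (n, n) real-valued weighted edge set
    node_attributes: np.ndarray     # (n, d_model) temporal embeddings U_t
```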
Problem Statement
Our goal is to detect anomalies in cyber-physical systems at each time segment. Formally, assuming that the graph stream data at the $t$-th segment is $X_t$ and the corresponding system status is $y_t$, we aim to find an outlier detection function that learns the mapping between $X_t$ and $y_t$, denoted by $f(X_t) \rightarrow y_t$. Here, $y_t$ is a binary constant whose value is 1 if the system status is abnormal and 0 otherwise.
Methodology
In this section, we give an overview of our framework and then describe each technical part in detail.

Framework Overview

Figure 1 shows an overview of our framework, named DGS-SVDD. Specifically, we start by feeding the DGS-SVDD model the graph stream data for one time segment. In the model, we first analyze the graph stream data by adopting the transformer-based temporal embedding module to extract temporal dependencies. Then, we use the learnt temporal embedding to estimate the dynamics of sensor-sensor influence and combine it with information about the topological structure of the graph stream data to generate weighted attributed graphs. We then input the graph into the variational graph autoencoder (VGAE)-based spatial embedding module to get the spatial-temporal embeddings. Finally, we estimate the boundary of the embeddings of normal data using deep learning and support vector data description (SVDD), and predict the system status by measuring how far away the embedding sample is from the boundary.
Embedding temporal patterns of the graph stream data
The temporal patterns of sensors may evolve over time if abnormal system events occur. We create a temporal embedding module that uses a transformer in a predictive manner to capture such patterns for accurate anomaly detection. To illustrate the calculation process, we use the graph stream data $X_t$ at the $t$-th time segment as an example. We ignore the topological structure of the graph stream data during the temporal embedding learning process. Thus, we collect the time series data in $X_t$ to form a temporal matrix $T_t = [\mathbf{t}_1, \mathbf{t}_2, \cdots, \mathbf{t}_{L_x}]$, such that $T_t \in \mathbb{R}^{n \times L_x}$, where $n$ is the number of sensors and $L_x$ is the length of the time segment. The temporal embedding module consists of an encoder and a decoder. For the encoder part, we input $T_t$ into it to learn the enhanced temporal embedding $U_t$. Specifically, we first use the multi-head attention mechanism to calculate the attention matrices between $T_t$ and itself, enhancing the temporal patterns among different sensors by information sharing. Since the calculation process in each head is the same, we take $\mathrm{head}_1$ as an example. To obtain the self-attention matrix $\mathrm{Attn}(T_t, T_t)$, we input $T_t$ into $\mathrm{head}_1$, which can be formulated as follows:
$$\mathrm{Attn}(T_t, T_t) = \mathrm{softmax}\!\left(\frac{(T_t W^Q_t)(T_t W^K_t)^\top}{\sqrt{L_x}}\right)(T_t W^V_t) \tag{1}$$

where $W^K_t \in \mathbb{R}^{L_x \times d}$, $W^Q_t \in \mathbb{R}^{L_x \times d}$, and $W^V_t \in \mathbb{R}^{L_x \times d}$ are the weight matrices for the "key", "query", and "value" embeddings, and $\sqrt{L_x}$ is the scaling factor. Assuming that we have $h$ heads, we concatenate the learned attention matrices in order to capture the temporal patterns of the monitoring data from different perspectives. The calculation process can be defined as follows:

$$\tilde{T}_t = \mathrm{Concat}(\mathrm{Attn}^1_t, \mathrm{Attn}^2_t, \cdots, \mathrm{Attn}^h_t)\, W^O_t \tag{2}$$

where $W^O_t \in \mathbb{R}^{hd \times d_{model}}$ is the weight matrix and $\tilde{T}_t \in \mathbb{R}^{n \times d_{model}}$. After that, we input $\tilde{T}_t$ into a fully connected feed-forward network constructed from two linear layers to obtain the enhanced embedding $U_t \in \mathbb{R}^{n \times d_{model}}$. The calculation process can be defined as follows:

$$U_t = \tilde{T}_t + \mathrm{ReLU}(\tilde{T}_t W^1_t + b^1_t)\, W^2_t + b^2_t \tag{3}$$

where $W^1_t$ and $W^2_t$ are weight matrices of shape $\mathbb{R}^{d_{model} \times d_{model}}$, and $b^1_t$ and $b^2_t$ are bias terms of shape $\mathbb{R}^{n \times d_{model}}$.
For the decoder part, we input the learned embedding $U_t$ into a prediction layer to predict the monitoring values of the next time segment. The prediction process can be defined as follows:

$$\check{T}_{t+1} = U_t W^p_t + b^p_t \tag{4}$$

where $\check{T}_{t+1} \in \mathbb{R}^{n \times L_x}$ is the predicted value of the next time segment, $W^p_t \in \mathbb{R}^{d_{model} \times L_x}$ is the weight matrix, and $b^p_t \in \mathbb{R}^{n \times L_x}$ is the bias term. During the optimization process, we minimize the difference between the prediction $\check{T}_{t+1}$ and the real monitoring values $T_{t+1}$. The optimization objective can be defined as follows:

$$\min \sum_{t=1}^{L_x} \|T_{t+1} - \check{T}_{t+1}\|^2 \tag{5}$$

When the model converges, the temporal patterns of the monitoring data have been preserved in the temporal embedding $U_t$.
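As a concrete illustration, here is a minimal PyTorch sketch of Eqs. (1)-(5). It is not the authors' released implementation; in particular, the per-head width $d = d_{model}/h$ and all class and variable names are assumptions made for the example.

```python
import torch
import torch.nn as nn

class TemporalEmbedding(nn.Module):
    """Sketch of Eqs. (1)-(4); assumes per-head width d = d_model // h (not stated in the paper)."""
    def __init__(self, L_x: int, d_model: int, h: int):
        super().__init__()
        d = d_model // h
        self.h, self.d, self.scale = h, d, L_x ** 0.5
        # W^Q, W^K, W^V for all h heads packed together: L_x -> h*d each
        self.W_q = nn.Linear(L_x, h * d, bias=False)
        self.W_k = nn.Linear(L_x, h * d, bias=False)
        self.W_v = nn.Linear(L_x, h * d, bias=False)
        self.W_o = nn.Linear(h * d, d_model, bias=False)           # Eq. (2)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(),
                                 nn.Linear(d_model, d_model))       # Eq. (3)
        self.pred = nn.Linear(d_model, L_x)                         # Eq. (4)

    def forward(self, T_t: torch.Tensor):
        # T_t: (n, L_x), one row of raw monitoring values per sensor
        n = T_t.shape[0]
        q = self.W_q(T_t).view(n, self.h, self.d)
        k = self.W_k(T_t).view(n, self.h, self.d)
        v = self.W_v(T_t).view(n, self.h, self.d)
        # Eq. (1): sensor-to-sensor attention, one (n, n) map per head
        attn = torch.softmax(torch.einsum('nhd,mhd->hnm', q, k) / self.scale, dim=-1)
        heads = torch.einsum('hnm,mhd->nhd', attn, v).reshape(n, -1)
        T_tilde = self.W_o(heads)
        U_t = T_tilde + self.ffn(T_tilde)  # residual feed-forward block, Eq. (3)
        return U_t, self.pred(U_t)         # temporal embedding and next-segment prediction

# Training minimizes ((T_next - T_hat) ** 2).sum() over segments, matching Eq. (5).
```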
Generating dynamic weighted attributed graphs
In CPS, different sensors connect with each other, which forms a sensor-sensor graph. As a result, the malfunctioning effects of abnormal system events may propagate over time following the graph structure. However, the sensor-sensor influence is not static and may vary as the monitoring data changes due to system anomaly events. To capture such dynamics, we build weighted attributed graphs using sensor-type information and the learned temporal embeddings. For simplicity, we take the graph stream data of the $t$-th time segment, $X_t$, as an example to illustrate the calculation process. Specifically, the adjacency matrix of $X_t$ is $A \in \mathbb{R}^{n \times n}$, which reflects the physical connectivity between different sensors: $A[i,j] = 1$ when sensors $i$ and $j$ are directly connected and $A[i,j] = 0$ otherwise. From Section 3.2, we have obtained the temporal embedding $U_t \in \mathbb{R}^{n \times d_{model}}$, each row of which is the temporal embedding of one sensor. We assume that sensors belonging to the same type have similar changing patterns when confronted with system anomaly events. Thus, we capture this characteristic by integrating sensor type information into the adjacency matrix. We calculate each sensor type embedding by averaging the temporal embeddings of the sensors belonging to that type. After that, we construct a type-type similarity matrix $C_t \in \mathbb{R}^{k \times k}$ by calculating the cosine similarity between each pair of sensor types, $k$ being the number of sensor types. Moreover, we construct the similarity matrix $\check{C}_t \in \mathbb{R}^{n \times n}$ by mapping $C_t$ to each element position of $A$. For instance, if sensor 1 belongs to type 2 and sensor 2 belongs to type 3, we update $\check{C}_t[1,2]$ with $C_t[2,3]$. We then introduce the dynamic property into the adjacency matrix $A$ through element-wise multiplication between $A$ and $\check{C}_t$. Each temporal embedding of this time segment is mapped to the weighted graph as node attributes according to the sensor information. The obtained weighted attributed graph $\tilde{G}_t$ contains all the spatial-temporal information of the CPS for the $t$-th time segment, and the topological influence it encodes may change over time.
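The procedure above reduces to a few matrix operations. Below is a hedged NumPy sketch, with a hypothetical function name and the assumption that every sensor type is non-empty; it is not the paper's code.

```python
import numpy as np

def build_weighted_attributed_graph(A: np.ndarray, U_t: np.ndarray,
                                    sensor_type: np.ndarray):
    """Weight the physical adjacency by type-type cosine similarity (Section 3.3).

    A:           (n, n) binary physical connectivity
    U_t:         (n, d_model) temporal embeddings from the transformer module
    sensor_type: (n,) integer type id in [0, k); every type is assumed non-empty
    """
    k = int(sensor_type.max()) + 1
    # type embedding = mean temporal embedding of the sensors of that type
    type_emb = np.stack([U_t[sensor_type == c].mean(axis=0) for c in range(k)])
    unit = type_emb / np.linalg.norm(type_emb, axis=1, keepdims=True)
    C = unit @ unit.T                          # type-type cosine similarity C_t
    C_mapped = C[sensor_type][:, sensor_type]  # broadcast C_t onto sensor pairs
    A_weighted = A * C_mapped                  # element-wise gating by physical edges
    return A_weighted, U_t                     # weighted edges + node attributes
```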
Representation learning for weighted attributed graph
To help the outlier detection model easily comprehend the information in $\tilde{G}_t$, we develop a representation learning module based on the variational graph autoencoder (VGAE). For simplicity, we use $\tilde{G}_t$ to illustrate the representation learning process. For $\tilde{G}_t = (V, \tilde{E}_t, U_t)$, the adjacency matrix is $\tilde{A}_t$, made up of $V$ and $\tilde{E}_t$, and the feature matrix is $U_t$.
Specifically, this module follows the encoder-decoder paradigm. The encoder includes two Graph Convolutional Network (GCN) layers. The first GCN layer takes $U_t$ and $\tilde{A}_t$ as inputs and outputs a lower-dimensional feature matrix $\hat{U}_t$. The calculation process can be represented as follows:
$$\hat{U}_t = \mathrm{ReLU}(\tilde{D}_t^{-1/2} \tilde{A}_t \tilde{D}_t^{-1/2} U_t \tilde{W}_0) \tag{6}$$

where $\tilde{D}_t$ is the diagonal degree matrix of $\tilde{G}_t$ and $\tilde{W}_0$ is the weight matrix of the first GCN layer. The second GCN layer estimates the distribution of the graph embeddings. Assuming that such embeddings conform to the normal distribution $\mathcal{N}(\mu_t, \delta_t)$, we need to estimate the mean $\mu_t$ and variance $\delta_t$ of the distribution. Thus, the encoding process of the second GCN layer can be formulated as follows:

$$\mu_t,\ \log(\delta_t^2) = \mathrm{ReLU}(\tilde{D}_t^{-1/2} \tilde{A}_t \tilde{D}_t^{-1/2} \hat{U}_t \tilde{W}_1) \tag{7}$$

where $\tilde{W}_1$ is the weight matrix of the second GCN layer. Then, we use the reparameterization technique to mimic the sampling operation and obtain the graph embedding $r_t$, which can be represented as follows:

$$r_t = \mu_t + \delta_t \times \epsilon_t \tag{8}$$

where $\epsilon_t$ is a random variable vector sampled from $\mathcal{N}(0, I)$. Here, $\mathcal{N}(0, I)$ represents the high-dimensional standard normal distribution. The decoder part aims to reconstruct the adjacency matrix of the graph using $r_t$, which can be defined as follows:

$$\hat{A}_t = \sigma(r_t r_t^\top) \tag{9}$$

where $\hat{A}_t$ is the reconstructed adjacency matrix and $r_t r_t^\top = \|r_t\|\,\|r_t\|\cos\theta$.
During the optimization process, we aim to minimize two objectives: 1) the divergence between the prior embedding distribution $\mathcal{N}(0, I)$ and the estimated embedding distribution $\mathcal{N}(\mu_t, \delta_t)$; 2) the difference between the adjacency matrix $A_t$ and the reconstructed adjacency matrix $\hat{A}_t$. Thus, the optimization objective function is as follows:

$$\min \sum_{t=1}^{T} \underbrace{\mathrm{KL}[q(r_t \mid U_t, A_t)\,\|\,p(r_t)]}_{\text{KL divergence between } q(\cdot) \text{ and } p(\cdot)} + \underbrace{\|A_t - \hat{A}_t\|^2}_{\text{loss between } A_t \text{ and } \hat{A}_t} \tag{10}$$

where KL refers to the Kullback-Leibler divergence, $q(\cdot \mid \cdot)$ is the estimated embedding distribution, and $p(\cdot)$ is the prior embedding distribution. When the model converges, the graph embedding $r_t \in \mathbb{R}^{n \times d_{emb}}$ contains the spatiotemporal patterns of the monitoring data for the $t$-th time segment.
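A dense-tensor sketch of Eqs. (6)-(10) is given below. It is an assumption-laden illustration rather than the paper's implementation: it follows the standard VGAE formulation (the paper additionally wraps Eq. (7) in a ReLU), and all names and layer sizes are made up for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VGAE(nn.Module):
    """Dense-tensor sketch of Eqs. (6)-(9); a practical model would use sparse ops."""
    def __init__(self, d_model: int, d_hid: int, d_emb: int):
        super().__init__()
        self.W0 = nn.Linear(d_model, d_hid, bias=False)      # first GCN layer, Eq. (6)
        self.W_mu = nn.Linear(d_hid, d_emb, bias=False)      # second GCN layer, Eq. (7)
        self.W_logvar = nn.Linear(d_hid, d_emb, bias=False)

    @staticmethod
    def normalize(A: torch.Tensor) -> torch.Tensor:
        """Symmetric normalization D^{-1/2} A D^{-1/2}."""
        d = A.sum(dim=1).clamp(min=1e-8).pow(-0.5)
        return d.unsqueeze(1) * A * d.unsqueeze(0)

    def forward(self, A, U):
        A_norm = self.normalize(A)
        H = F.relu(A_norm @ self.W0(U))                      # Eq. (6)
        # Eq. (7); the paper also applies a ReLU here, omitted as in standard VGAE
        mu, logvar = self.W_mu(A_norm @ H), self.W_logvar(A_norm @ H)
        r = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization, Eq. (8)
        A_rec = torch.sigmoid(r @ r.T)                       # inner-product decoder, Eq. (9)
        return r, A_rec, mu, logvar

def vgae_loss(A, A_rec, mu, logvar):
    """Eq. (10): adjacency reconstruction error plus KL to the standard normal prior."""
    recon = F.mse_loss(A_rec, A, reduction='sum')
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```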
One-Class Detection with SVDD
Considering the sparsity issue of labeled anomaly data in CPS, anomaly detection is done in an unsupervised setting. Inspired by deep SVDD [14], we aim to learn a hypersphere that encircles most of the normal data, with data samples located beyond it being anomalous. Due to the complex nonlinear relations among the monitoring data, we use deep neural networks to approximate this hypersphere. Specifically, through the above procedure, we collect the spatiotemporal embeddings of all time segments, denoted by $[r_1, r_2, \cdots, r_T]$. We input them into multi-layer neural networks to estimate the non-linear hypersphere. Our goal is to minimize the volume of this data-enclosing hypersphere. The optimization objective can be defined as follows:
$$\min_{\mathcal{W}}\ \underbrace{\frac{1}{n} \sum_{t=1}^{T} \|\phi(r_t; \mathcal{W}) - c\|^2}_{\text{average squared error over all normal training instances (from } T \text{ segments)}} + \underbrace{\frac{\lambda}{2} \|\mathcal{W}\|_F^2}_{\text{regularization term}} \tag{11}$$

where $\mathcal{W}$ is the set of weight matrices of the neural network layers; $\phi(r_t; \mathcal{W})$ maps $r_t$ to the non-linear hidden representation space; $c$ is the predefined hypersphere center; and $\lambda$ is the weight decay regularizer. The first term of the equation aims to find the hypersphere whose points lie closest to the center $c$. The second term reduces the complexity of $\mathcal{W}$, which avoids overfitting. As the model converges, we obtain the network parameters of the trained model, $\mathcal{W}^*$. During the testing stage, given the embedding of a test sample $r_o$, we input it into the well-trained neural network to get its new representation. Then, we calculate the anomaly score of the sample based on the distance between it and the center of the hypersphere. The process can be formulated as follows:
$$s(r_o) = \|\phi(r_o; \mathcal{W}^*) - c\|^2 \tag{12}$$

After that, we compare the score with a predefined threshold to assess the abnormal status of each time segment in the CPS.
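A compact sketch of the one-class training loop (Eq. (11)) and scoring rule (Eq. (12)) might look as follows. The optimizer choice, learning rate, and the use of Adam's weight_decay to realize the Frobenius-norm regularizer are assumptions, not details given in the paper.

```python
import torch
import torch.nn as nn

def train_deep_svdd(phi: nn.Module, embeddings: torch.Tensor, c: torch.Tensor,
                    lam: float = 1e-3, lr: float = 1e-4, epochs: int = 50) -> nn.Module:
    """Shrink the normal spatio-temporal embeddings toward the center c, Eq. (11).

    phi:        small MLP mapping r_t into the hypersphere space
    embeddings: (T, d_emb) tensor of normal training embeddings r_1..r_T
    c:          pre-computed hypersphere center (e.g., mean of the initial outputs)
    """
    # Adam's weight_decay realizes the Frobenius-norm term of Eq. (11)
    opt = torch.optim.Adam(phi.parameters(), lr=lr, weight_decay=lam)
    for _ in range(epochs):
        opt.zero_grad()
        dist = ((phi(embeddings) - c) ** 2).sum(dim=1)
        loss = dist.mean()  # averaged first term of Eq. (11)
        loss.backward()
        opt.step()
    return phi

def anomaly_score(phi: nn.Module, r_o: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
    """Eq. (12): squared distance of a test embedding to the hypersphere center."""
    with torch.no_grad():
        return ((phi(r_o) - c) ** 2).sum(dim=-1)
```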
Experiments
We conduct extensive experiments to validate the efficacy and efficiency of our framework (DGS-SVDD) and the necessity of each technical component.
Experimental Settings
Data Description. We adopt the SWaT dataset [11], from the Singapore University of Technology and Design, in our experiments. This dataset was collected from a water treatment testbed that contains 51 sensors and actuators. The collection process continued for 11 days. The system's status was normal for the first 7 days, and for the final 4 days it was attacked by a cyber-attack model. The statistical information of the SWaT dataset is shown in Table 1. Our goal is to detect attack anomalies as precisely as feasible. We only use the normal data to train our model. After the training phase, we validate the capability of our model by detecting the status of the testing data, which contains both normal and anomalous data.

Evaluation Metrics. We evaluate the model performance in terms of precision, recall, area under the receiver operating characteristic curve (ROC/AUC), and F1-score. We adopt the point-adjust approach to calculate these metrics. In particular, abnormal observations typically occur in succession and form anomaly segments, and an anomaly alert can be triggered at any point inside a true anomaly window. Therefore, if one of the observations in an actual anomaly segment is detected as abnormal, we consider the time points of the entire segment to have been accurately detected.
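The point-adjust protocol can be written in a few lines. The following is an illustrative sketch (the function name and implementation details are assumed), not the paper's evaluation code.

```python
import numpy as np

def point_adjust(y_true: np.ndarray, y_pred: np.ndarray) -> np.ndarray:
    """If any point inside a true anomaly segment is flagged, credit the whole segment."""
    y_adj = y_pred.copy()
    in_seg, start = False, 0
    for i, label in enumerate(np.append(y_true, 0)):  # trailing 0 closes a final segment
        if label == 1 and not in_seg:
            in_seg, start = True, i
        elif label == 0 and in_seg:
            in_seg = False
            if y_adj[start:i].any():                  # at least one hit inside the segment
                y_adj[start:i] = 1
    return y_adj
```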
Baseline Models. To make the comparison objective, we input the spatial-temporal embedding vector $r_t$ into the baseline models instead of the original data.
There are seven baselines in our work (a sketch of how the shallow ones can be run on the learned embeddings is given after this list):
- KNN [12]: calculates the anomaly score of each sample according to the anomaly situation of its K nearest neighbors.
- Isolation-Forest [8]: estimates the average path length (anomaly score) from the root node to the terminating node for isolating a data sample, using a collection of trees.
- LODA [13]: collects a list of weak anomaly detectors to produce a stronger one. LODA can process sequential data flows and is robust to missing data.
- LOF [2]: measures the anomalous status of each sample based on its local density. If the density is low, the sample is abnormal; otherwise, it is normal.
- ABOD [6]: an angle-based outlier detector. If a data sample is located in the same direction as more than K data samples, it is an outlier; otherwise, it is normal.
- OC-SVM [9]: finds a hyperplane that divides normal and abnormal data through kernel functions.
- GANomaly [1]: utilizes an encoder-decoder-encoder architecture. It evaluates the anomaly status of each sample by calculating the difference between the output embeddings of the two encoders.
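As promised above, here is a hedged sketch of running the shallow baselines on the learned embeddings, assuming the PyOD library is available; the paper does not state which implementations or hyper-parameters were used, and GANomaly (a custom GAN architecture) is omitted from this sketch.

```python
from pyod.models.knn import KNN
from pyod.models.iforest import IForest
from pyod.models.lof import LOF
from pyod.models.abod import ABOD
from pyod.models.ocsvm import OCSVM
from pyod.models.loda import LODA
from sklearn.metrics import roc_auc_score

BASELINES = {'KNN': KNN(), 'Isolation-Forest': IForest(), 'LOF': LOF(),
             'ABOD': ABOD(), 'OC-SVM': OCSVM(), 'LODA': LODA()}

def evaluate_baselines(r_train, r_test, y_test):
    """r_train: normal-only embeddings; r_test / y_test: mixed test embeddings and labels."""
    for name, model in BASELINES.items():
        model.fit(r_train)                        # unsupervised fit on normal data only
        scores = model.decision_function(r_test)  # higher score = more anomalous
        print(f'{name}: AUC = {roc_auc_score(y_test, scores):.3f}')
```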
Experimental Results
Overall Performance. Table 2 shows the experimental results on the SWaT dataset, with the best scores highlighted in bold. As can be seen, DGS-SVDD outperforms the other baseline models in the majority of evaluation metrics. Compared with the second-best baseline, DGS-SVDD improves precision by 19%, F1-score by 36%, and AUC by 8%. This observation validates that DGS-SVDD is effective at detecting anomalies accurately. The underlying driver of our model's success is that DGS-SVDD can capture long-delayed temporal patterns and dynamic sensor-sensor influences in CPS. Another interesting observation is that the detection performance of distance-based and angle-based outlier detectors is poor. A possible reason is that these geometrical measurements are vulnerable to high-dimensional data samples.
Ablation Study
To study the individual contribution of each component of DGS-SVDD, we perform ablation studies, the findings of which are summarized in Table 3, where bold indicates the best score. We build four variations of the DGS-SVDD model: 1) we feed unprocessed raw data into SVDD; 2) we only capture temporal patterns; 3) we capture the dynamics of sensor-sensor influence and spatial patterns in CPS; 4) we capture spatial-temporal patterns in CPS but discard the dynamics of sensor-sensor influence. We find that DGS-SVDD outperforms its variants by a significant margin. This observation validates that each technical component of our work is indispensable. Another interesting observation is that removing the temporal embedding module dramatically degrades the detection performance, making the temporal embedding module the most significant component. Results from the final experiment show that capturing the dynamics of sensor-sensor influence clearly boosts model performance.

Robustness Check and Parameter Sensitivity. Figure 2 shows the experimental results for the robustness check and parameter sensitivity analysis. To check the model's robustness, we train DGS-SVDD on different percentages of the training data, from 10% to 100%. From Figure 2(a), we find that DGS-SVDD is stable when confronted with different amounts of training data, although it achieves its best performance when trained on 50% of the training data. In addition, we vary the dimension of the final spatial-temporal embedding to check its impact. From Figures 2(b) and 2(c), we find that DGS-SVDD is barely sensitive to the sliding window length and the dimension of the spatiotemporal embeddings. This observation validates that DGS-SVDD is robust to these dimension parameters. A possible reason is that our representation learning module has sufficiently captured the spatial-temporal patterns of the monitoring data for anomaly detection.
Study of Time Cost
We conduct six-fold cross-validation to evaluate the time costs of different models. Figure 3 illustrates the comparison results. We find that DGS-SVDD can be trained in a time competitive with simple models like OC-SVM or LOF, while outperforming them by a huge margin, as seen from Table 2. This shows that DGS-SVDD effectively learns the representation of each time segment of the graph stream data. Another important observation is that the testing time of DGS-SVDD is consistent with the simpler baselines.
A potential reason is that the network parameters $\mathcal{W}^*$, as discussed in Section 3.5, completely characterize our one-class classifier. This allows fast testing by simply evaluating the network $\phi$ with the learnt parameters $\mathcal{W}^*$.
Related Work
Anomaly Detection in Cyber-Physical Systems. A large body of existing literature has studied the exploitation of temporal and spatial relationships in data streams from CPS to detect anomalous points [5]. For instance, [5,7] adopt a convolutional layer as the first layer of a Convolutional Neural Network to obtain correlations among multiple sensors in a sliding time window. The extracted features are then fed to subsequent layers to generate output scores. [7] proposed a GAN-based framework to capture the spatial-temporal correlation in multidimensional data. Both the generator and the discriminator are utilized to detect anomalies through reconstruction and discrimination errors.
Outlier Detection with Deep SVDD. After being introduced in [14], deep SVDD and its many variants have been used for deep outlier detection. [18] designed a deep structure-preservation SVDD by integrating deep feature extraction with data structure preservation. [20] proposed a Deep SVDD-VAE, where a VAE is used to reconstruct the input sequences while a spherical discriminative boundary is simultaneously learned on the latent representations, based on SVDD. Although these models have been successfully applied to detect anomalies in the domain of computer vision, that domain lacks the temporal and spatial dependencies prevalent in graph stream data generated from CPS.
Conclusion
We propose DGS-SVDD, a structured anomaly detection framework for cyber-physical systems using graph stream data. To this end, we integrate spatiotemporal patterns, modeling of dynamic characteristics, deep representation learning, and one-class detection with SVDD. A transformer-based encoder-decoder architecture is used to preserve the temporal dependencies within a time segment. The temporal embeddings and the predefined connectivity of the CPS are then used to generate weighted attributed graphs, from which the fused spatiotemporal embedding is learned by a spatial embedding module. A deep neural network, integrated with one-class SVDD, is then used to group the normal data points into a hypersphere from the learnt representations. Finally, we conduct extensive experiments on the SWaT dataset to illustrate the superiority of our method, as it delivers 35.87% and 19.32% improvements in F1-score and AUC, respectively. For future work, we wish to integrate a connectivity learning policy into the transformer so that it does not just learn the temporal representation, but also models the dynamic influence among sensors. The code can be publicly accessed at https://github.com/ehtesam3154/dgs svdd.
Fig. 1: An overview of our framework. There are four key components: transformer-based temporal embedding module, weighted attributed graph generator, VGAE-based spatiotemporal embedding module, and SVDD-based outlier detector.
Fig. 2: Experimental results for robustness check and parameter sensitivity.
Fig. 3: Comparison of different models in terms of training and testing time cost.
Table 1: Statistics of SWaT Dataset

Data Type   Feature Number   Total Items   Anomaly Number   Normal/Anomaly
Normal      51               496800        0                -
Anomalous   51               449919        53900            7:1
Table 2: Experimental Results on SWaT dataset

Method             Precision (%)   Recall (%)   F1-score (%)   AUC (%)
OC-SVM             34.11           68.23        45.48          75
Isolation-Forest   35.42           81.67        49.42          80
LOF                15.81           93.88        27.06          63
KNN                15.24           96.77        26.37          61
ABOD               14.2            97.93        24.81          58
GANomaly           42.12           67.87        51.98          68.64
LODA               75.25           38.13        50.61          67.1
DGS-SVDD           94.17           82.33        87.85          87.96
Table 3: Ablation Study of DGS-SVDD (variants numbered as in the ablation text; the original table marks which of the temporal embedding module, weighted attributed graph generator, and VGAE-based spatiotemporal embedding module each variant includes)

Variant                                                     Precision (%)   Recall (%)   F1-score (%)   AUC (%)
1) Raw data fed directly into SVDD                          4.61            12.45        6.74           18.55
2) Temporal embedding module only                           69.98           64.75        67.26          78.14
3) Graph generator + VGAE modules, no temporal module       12.16           99.99        21.68          18.22
4) Temporal + VGAE modules, no dynamic edge weighting       87.79           76.68        81.86          82.45
DGS-SVDD (all components)                                   94.17           82.33        87.75          87.96
References

1. Akcay, S., Atapour-Abarghouei, A., Breckon, T.P.: Ganomaly: Semi-supervised anomaly detection via adversarial training. In: Asian Conference on Computer Vision, pp. 622-637. Springer (2018)
2. Breunig, M.M., Kriegel, H.P., Ng, R.T., Sander, J.: LOF: identifying density-based local outliers. In: Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data, pp. 93-104 (2000)
3. Jenni Bergal: Florida hack exposes danger to water systems (2021), https://www.pewtrusts.org/en/research-and-analysis/blogs/stateline/2021/03/10/florida-hack-exposes-danger-to-water-systems
4. Kipf, T.N., Welling, M.: Variational graph auto-encoders. arXiv preprint arXiv:1611.07308 (2016)
5. Kravchik, M., Shabtai, A.: Detecting cyber attacks in industrial control systems using convolutional neural networks. In: Proceedings of the 2018 Workshop on Cyber-Physical Systems Security and Privacy, pp. 72-83 (2018)
6. Kriegel, H.P., Schubert, M., Zimek, A.: Angle-based outlier detection in high-dimensional data. In: Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 444-452 (2008)
7. Li, D., Chen, D., Jin, B., Shi, L., Goh, J., Ng, S.K.: MAD-GAN: Multivariate anomaly detection for time series data with generative adversarial networks. In: International Conference on Artificial Neural Networks, pp. 703-716. Springer (2019)
8. Liu, F.T., Ting, K.M., Zhou, Z.H.: Isolation forest. In: 2008 Eighth IEEE International Conference on Data Mining, pp. 413-422. IEEE (2008)
9. Manevitz, L.M., Yousef, M.: One-class SVMs for document classification. Journal of Machine Learning Research 2(Dec), 139-154 (2001)
10. Martí, L., Sanchez-Pi, N., Molina, J.M., Garcia, A.C.B.: Anomaly detection based on sensor data in petroleum industry applications. Sensors 15(2), 2774-2797 (2015)
11. Mathur, A.P., Tippenhauer, N.O.: SWaT: A water treatment testbed for research and training on ICS security. In: 2016 International Workshop on Cyber-Physical Systems for Smart Water Networks (CySWater), pp. 31-36. IEEE (2016)
12. Peterson, L.E.: K-nearest neighbor. Scholarpedia 4(2), 1883 (2009)
13. Pevnỳ, T.: Loda: Lightweight on-line detector of anomalies. Machine Learning 102(2), 275-304 (2016)
14. Ruff, L., Vandermeulen, R., Goernitz, N., Deecke, L., Siddiqui, S.A., Binder, A., Müller, E., Kloft, M.: Deep one-class classification. In: International Conference on Machine Learning, pp. 4393-4402. PMLR (2018)
15. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., Polosukhin, I.: Attention is all you need. CoRR abs/1706.03762 (2017), http://arxiv.org/abs/1706.03762
16. Wang, D., Chen, Z., Ni, J., Tong, L., Wang, Z., Fu, Y., Chen, H.: Hierarchical graph neural networks for causal discovery and root cause localization. arXiv preprint arXiv:2302.01987 (2023)
17. Wang, D., Wang, P., Zhou, J., Sun, L., Du, B., Fu, Y.: Defending water treatment networks: Exploiting spatio-temporal effects for cyber attack detection. In: 2020 IEEE International Conference on Data Mining (ICDM), pp. 32-41. IEEE (2020)
18. Zhang, Z., Deng, X.: Anomaly detection using improved deep SVDD model with data structure preservation. Pattern Recognition Letters 148, 1-6 (2021)
19. Zhou, X., Liang, W., Shimizu, S., Ma, J., Jin, Q.: Siamese neural network based few-shot learning for anomaly detection in industrial cyber-physical systems. IEEE Transactions on Industrial Informatics 17(8), 5790-5798 (2020)
20. Zhou, Y., Liang, X., Zhang, W., Zhang, L., Song, X.: VAE-based deep SVDD for anomaly detection. Neurocomputing 453, 131-140 (2021)
| [
"https://github.com/ehtesam3154/dgs"
] |
[
"A Multidatabase ExTRaction PipEline (METRE) for Facile Cross Validation in Critical Care Research",
"A Multidatabase ExTRaction PipEline (METRE) for Facile Cross Validation in Critical Care Research"
] | [
"Wei Liao \nDepartment of Electrical Engineering and Computer Science\nDepartment of Electrical Engineering and Computer Science\nMassachusetts Institute of Technology\nCambridgeUSA\n\nMassachusetts Institute of Technology\n77 Massachusetts Avenue, Room 36-82402139CambridgeMAUSA\n",
"Joel Voldman [email protected]. \nDepartment of Electrical Engineering and Computer Science\nDepartment of Electrical Engineering and Computer Science\nMassachusetts Institute of Technology\nCambridgeUSA\n\nMassachusetts Institute of Technology\n77 Massachusetts Avenue, Room 36-82402139CambridgeMAUSA\n",
"Joel Voldman "
] | [
"Department of Electrical Engineering and Computer Science\nDepartment of Electrical Engineering and Computer Science\nMassachusetts Institute of Technology\nCambridgeUSA",
"Massachusetts Institute of Technology\n77 Massachusetts Avenue, Room 36-82402139CambridgeMAUSA",
"Department of Electrical Engineering and Computer Science\nDepartment of Electrical Engineering and Computer Science\nMassachusetts Institute of Technology\nCambridgeUSA",
"Massachusetts Institute of Technology\n77 Massachusetts Avenue, Room 36-82402139CambridgeMAUSA"
] | [] | Transforming raw EHR data into machine learning model-ready inputs requires considerable effort. One widely used EHR database is Medical Information Mart for Intensive Care (MIMIC).Prior work on MIMIC-III cannot query the updated and improved MIMIC-IV version. Besides, the need to use multicenter datasets further highlights the challenge of EHR data extraction. Therefore, we developed an extraction pipeline that works on both MIMIC-IV and eICU Collaborative Research Database and allows for model cross validation using these 2 databases. Under the default choices, the pipeline extracted 38766 and 126448 ICU records for MIMIC-IV and eICU, respectively. Using the extracted time-dependent variables, we compared the Area Under the Curve (AUC) performance with prior works on clinically relevant tasks such as in-hospital mortality prediction. METRE achieved comparable performance with AUC 0.723-0.888 across all tasks. Additionally, when we evaluated the model directly on MIMIC-IV data using a model trained on eICU, we observed that the AUC change can be as small as +0.019 or -0.015. Our open-source pipeline transforms MIMIC-IV and eICU into structured data frames and allows researchers to perform model training and testing using data collected from different institutions, which is of critical importance for model deployment under clinical contexts. | 10.1016/j.jbi.2023.104356 | [
"https://export.arxiv.org/pdf/2302.13402v1.pdf"
] | 257,219,224 | 2302.13402 | b61ec48b5611fcbe8e46785bb61efbd43e64425a |
A Multidatabase ExTRaction PipEline (METRE) for Facile Cross Validation in Critical Care Research
Wei Liao
Department of Electrical Engineering and Computer Science
Department of Electrical Engineering and Computer Science
Massachusetts Institute of Technology
CambridgeUSA
Massachusetts Institute of Technology
77 Massachusetts Avenue, Room 36-82402139CambridgeMAUSA
Joel Voldman [email protected].
Department of Electrical Engineering and Computer Science
Department of Electrical Engineering and Computer Science
Massachusetts Institute of Technology
CambridgeUSA
Massachusetts Institute of Technology
77 Massachusetts Avenue, Room 36-82402139CambridgeMAUSA
Joel Voldman
A Multidatabase ExTRaction PipEline (METRE) for Facile Cross Validation in Critical Care Research
Corresponding author:EHRMIMIC-IVeICUExtraction pipeline
Transforming raw EHR data into machine learning model-ready inputs requires considerable effort. One widely used EHR database is Medical Information Mart for Intensive Care (MIMIC).Prior work on MIMIC-III cannot query the updated and improved MIMIC-IV version. Besides, the need to use multicenter datasets further highlights the challenge of EHR data extraction. Therefore, we developed an extraction pipeline that works on both MIMIC-IV and eICU Collaborative Research Database and allows for model cross validation using these 2 databases. Under the default choices, the pipeline extracted 38766 and 126448 ICU records for MIMIC-IV and eICU, respectively. Using the extracted time-dependent variables, we compared the Area Under the Curve (AUC) performance with prior works on clinically relevant tasks such as in-hospital mortality prediction. METRE achieved comparable performance with AUC 0.723-0.888 across all tasks. Additionally, when we evaluated the model directly on MIMIC-IV data using a model trained on eICU, we observed that the AUC change can be as small as +0.019 or -0.015. Our open-source pipeline transforms MIMIC-IV and eICU into structured data frames and allows researchers to perform model training and testing using data collected from different institutions, which is of critical importance for model deployment under clinical contexts.
INTRODUCTION
Electronic health record (EHR) data contains rich patient-specific information and holds great promise in developing clinical decision support systems, understanding patient trajectories and ultimately improving the quality of care. To date, in combination with machine learning (ML) methods, EHR data has been leveraged to build prediction systems for diseases such as sepsis [1][2][3][4], acute kidney failure [5,6], rheumatoid arthritis [7], etc., develop interpretable algorithms [8,9], or explore the state space to learn what treatments to avoid in data-constrained settings [10]. EHR data is usually contained within a relational database, and most machine learning algorithms cannot directly operate on the raw data. Transforming raw EHR data into ML model-ready inputs requires substantial effort, which includes selecting what variables to extract, how to organize irregularly recorded time-dependent variables, and how to handle missing and outlier data points. McDermott et al. [11] found that only 21% of the papers in the machine learning for health (MLH) field released their code publicly, and that researchers usually extract task-specific data, which impedes comparison across different studies.
Recently, there has also been an increasing concern on the potential sources of harm throughout the machine learning life cycle [12,13]. For instance, representation bias can occur when samples used during model development underrepresent some part of the population, and the model thus subsequently fails to generalize for a subset of the use population. In addition, machine learning models can learn spurious correlations between the data and the target output. For example, the model can potentially link a specific variable recording pattern in the EHR training data instead of relevant clinical physiology with the outcome [14], which could lead to failure when the algorithm is deployed to another clinical site. Therefore, in the model development phase, it is important for researchers to perform model training or validation using EHR data from different sources. As McDermott et al. point out [11], whereas ~80% of computer vision studies and ~58% of natural language processing studies used multiple datasets to establish their results, only ~23% of ML for health (MLH) papers did this. The need to use multicenter datasets further highlights the challenge of EHR data preprocessing, because EHR data is usually archived differently across institutions, which requires design of different query strategies.
The exciting research in MLH is made feasible by the availability of large collections of EHR datasets. Medical Information Mart for Intensive Care (MIMIC) is a pioneer in ensuring safe, appropriate release of EHR data. MIMIC-III is a large, freely-available database comprising over 40000 deidentified records of patients who stayed in the critical care unit at Boston's Beth Israel Deaconess Medical Center between 2001 and 2012 [15]. MIMIC-Extract [16] is a popular open source pipeline for transforming the raw critical care MIMIC-III database into data structures that are directly usable in common time-series prediction pipelines. It incorporates detailed unit conversion, outlier detection and missingness thresholding and a semantically similar feature aggregation pipeline to facilitate the reproducibility of research results that use MIMIC-III. Carrying on with the success of MIMIC-III, MIMIC-IV was introduced in 2020 with a few major changes to further facilitate usability. MIMIC-IV states the source database of each table (Chartevents, Labevents etc.) while MIMIC-III organizes the data as a whole. MIMIC-IV also contains more contemporary data recorded using the MetaVision system (instead of the CareVue system) from 2008 -2019, which makes prior query code built on CareVue redundant; MIMIC-IV's new structure and its accompanying level of detail require new query strategies.
COP-E-CAT [17] is a preprocessing framework developed for MIMIC-IV that allows users to specify the time window used to aggregate the raw features. Despite the utility and impact of both MIMIC-Extract and COP-E-CAT, they are constrained to operating on MIMIC-III or MIMIC-IV data, and thus do not provide developers with an independent dataset to improve model generalization and reduce bias. FIDDLE [18] is a data extraction pipeline for both MIMIC-III and eICU. The eICU Collaborative Research Database [19] is a multicenter database comprising deidentified health data associated with over 200000 admissions to ICUs across the United States between 2014-2015. FIDDLE incorporates important advances, such as circumventing the process of selecting variables and making use of the individual EHR variable distributions. However, for a given model developed using FIDDLE-MIMIC-III, it is still challenging to perform direct model evaluation using FIDDLE-eICU. In contrast to previous works, METRE advances the field with 2 primary contributions:

1. METRE works with the most recent MIMIC-IV database. We extract a diverse set of time-dependent variables: 92 labs and vitals and 16 intervention variables, as well as 35 time-invariant variables. Our intervention table expands upon those included in MIMIC-Extract (ventilation, vasopressors, fluid bolus therapies) and further includes continuous renal replacement therapy (CRRT) and 3 types of transfusion procedures as well as antibiotics administration, which could be relevant in a variety of clinical tasks. Our default MIMIC-IV cohort is 38766 patient stays who were admitted to the ICU. We also incorporate flexible user-specified inputs into the pipeline.
Users can specify the age range, data missingness threshold, ICU stay length, and specific condition keywords to get their unique cohort.
2. We also developed an eICU database extraction pipeline with each extracted feature mapped onto those extracted from MIMIC-IV. The eICU pipeline was developed using the same outlier removal and missing data imputation strategy as MIMIC-IV. To demonstrate the usability of our pipeline, we performed the following tasks using MIMIC-IV and eICU data separately: 1) hospital mortality prediction, 2) acute respiratory failure (ARF) prediction using 4h or 12h ICU data, 3) shock prediction using 4h or 12h data. Afterwards, for models developed using MIMIC-IV, we performed model testing directly on eICU (or vice versa) without any transfer learning.

METHODS

Pipeline overview

Figure 1 summarizes the extraction steps for both MIMIC-IV and eICU. The extraction pipeline starts with defining the cohort, where users can specify an age range and an ICU length of stay (LOS) range for the records to be extracted. We also provide a few arguments to extract condition-specific cohorts: sepsis_3, ARF, shock, COPD, and CHF. For instance, under sepsis_3, the pipeline will only extract patients that meet the sepsis-3 criteria [20] during their ICU stay. The definition of each condition is in SI Section 8. Based on the identification info of the cohort (subject_id, hadm_id, stay_id in MIMIC-IV and patientunitstayid in eICU), the pipeline proceeds to extract 3 tables, namely Static, Vital and Intervention. The Static table contains information such as patient age, gender, ethnicity, and comorbidities.
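To make the cohort-definition step concrete, the following is a minimal sketch of the kind of SQL such a cohort query can issue against a PostgreSQL build of MIMIC-IV for the default generic cohort (age > 18, 24h < LOS < 240h). The MIMIC-IV schema, table and column names (mimic_icu.icustays, mimic_core.patients, anchor_age, los) are real; the query itself is our illustration, not METRE's exact code.

import pandas as pd
import psycopg2  # assumes MIMIC-IV is loaded into a local PostgreSQL database

# Default generic cohort: adults with 24h < ICU LOS < 240h.
# 'los' in mimic_icu.icustays is recorded in fractional days.
COHORT_SQL = """
SELECT icu.subject_id, icu.hadm_id, icu.stay_id, pat.anchor_age, icu.los
FROM mimic_icu.icustays AS icu
JOIN mimic_core.patients AS pat USING (subject_id)
WHERE pat.anchor_age > 18
  AND icu.los > 1.0      -- LOS > 24h
  AND icu.los < 10.0     -- LOS < 240h
"""

with psycopg2.connect(dbname="mimiciv") as conn:
    cohort = pd.read_sql_query(COHORT_SQL, conn)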
Notably, we also extract the 3-year time window for each patient admission in the MIMIC-IV database, since the evolution of care practices over time and the resultant concept drift can significantly change clinical data [21], which can limit model deployability. The release of the approximate admission window is a new feature of MIMIC-IV (versus MIMIC-III). For eICU, all the patients were discharged between 2014 and 2015.
The Vital table contains information such as blood gas and vital sign measurements. These measurements can be sparse and may contain erroneous values. Therefore, this branch has hourly aggregation, missing data handling and outlier removal before obtaining the final output.
In order to facilitate the flexibility of the pipeline, users can also disable the default outlier removal or data imputation and apply a custom removal process on the raw data.
The Intervention table contains features regarding procedures performed, such as mechanical ventilation and vasoactive medications (norepinephrine, phenylephrine, epinephrine etc.). The intervention features are treated as binary variables, with a series of 1s indicating the start time and end time of the procedure during the stay ( Figure 2B), which follows the practice in MIMIC-Extract [16]. A complete list of variables is in SI Table S4-S6.
Variable query
By default, our pipeline extracts all demographic variables (SI Table S4) from the mimic_icu.icustays, mimic_core.admissions and mimic_core.patients tables for MIMIC-IV, and from the corresponding eICU tables. We extract 17 comorbidities for each ICU stay from mimic_hosp.diagnoses_icd and eicu_crd.diagnosis, respectively. One critical difference between the 2 databases is that MIMIC-IV uses both the International Classification of Diseases 9th Revision (ICD_9) and ICD_10 codes while eICU only uses ICD_9 codes. Even when querying using the same ICD_9 standard, MIMIC-IV stores the code a little differently. For instance, congestive heart failure could be represented by ICD codes 39891, 40201, etc. in MIMIC-IV, while in eICU it is recorded as 398.91, 402.01, etc. We paid extra caution in designing the query code to miss as few comorbidity-related records as possible.

The Vital table variables contain time-varying measurements on blood gas, vital signs, urine chemistry, etc. For MIMIC-IV, these variables were queried from mimic_hosp.labevents, mimic_icu.chartevents and mimic_icu.outputevents. For eICU, the raw results came from eicu_crd.lab, eicu_crd.nursecharting, eicu_crd.intakeoutput, eicu_crd.microlab, eicu_crd.vitalperiodic, and eicu_crd.respiratorycharting. We also made use of existing derived tables from both repositories [22,23] to develop our SQL queries. There are in total 15 variables that were not found in the eICU database; we placed NAN values for these in order to have the same dataframe shape for model cross validation. A list of these variables is in SI Table S7. In order to facilitate infectious disease research utilizing METRE, we extracted variables related to antibiotics administration and microculture for both databases. Culture sites from MIMIC-IV have 57 unique values while the eICU database recorded 20 unique culture sites. These culture sites lack a simple one-to-one correspondence, so we grouped the results into 14 categories based on the semantics. The details on creating concept-mapped microculture-related output variables are in SI Section 4.

Different from the Vital table, where we focus on the numerical values for most variables, the Intervention table queries the start time and the end time of every intervention procedure. Besides ventilation, vasoactive agents, colloid bolus and crystalloid bolus, which are included in MIMIC-Extract [16], we further queried CRRT and 3 different types of transfusion as well as antibiotics administration. Another distinct difference between the MIMIC-IV and eICU databases is that in MIMIC-IV, each measurement/treatment is associated with a unique itemid, while in eICU it is usually directly represented by a unique string. The query strategy designed to obtain intervention-related records from both databases is in SI Section 4.
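The dotted-versus-undotted ICD-9 discrepancy noted in the comorbidity paragraph above is easy to handle with a small normalization step; a sketch (the helper name is ours, not part of METRE):

def normalize_icd9(code: str) -> str:
    # eICU stores ICD-9 codes with a decimal point (e.g., '398.91'),
    # MIMIC-IV without one (e.g., '39891'); stripping the dot lets the
    # same comorbidity lookup table match records from both databases.
    return code.replace(".", "").strip()

assert normalize_icd9("398.91") == normalize_icd9("39891") == "39891"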
Post processing
We performed post-processing on the Vital table and the Intervention table. For the Vital table, after getting the raw entries of each feature, we aggregated the features by hour (Figure 2A), since most of the variables were not frequently recorded. For example, in MIMIC-IV, heart rate was recorded on average every 1.07 hours, while troponin-t was recorded only every 131.46 hours. Users can choose their own time window ranging from 1h to 24h. After this aggregation, certain time points will have missing values. Before we implemented any imputation algorithm, we added a binary indicator column for each numerical variable (1 indicating the value is recorded and 0 indicating the value is imputed), since the recorded values could have higher credibility compared with the imputed values. As noted by Ghassemi et al. [14], learning models without an appropriate model of missingness leads to brittle models when facing changes in measurement practices. Users have the option to discard the indicator columns for a lightweight end result. We then checked for outliers in the numerical variables. We made use of the list in the source code repository of Harutyunyan et al. [24], which was based on clinical experts' knowledge of valid clinical measurement ranges. For any value below outlier_low or above outlier_high, we emptied that cell and set the corresponding indicator cell to 0. Importantly, the same outlier removal criteria were applied to both MIMIC-IV and eICU to prevent introducing bias at this stage. Variable filling information before and after the outlier removal is in SI Table S1-S3. All the ranges used in the outlier removal are in SI Table S9. We also compared the mean and std of each variable between MIMIC-IV and eICU and show the variables with the largest and smallest mean value differences (SI Figure S1, S2).
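The steps just described (hourly aggregation, an explicit missingness indicator per numerical variable, and range-based outlier removal) can be sketched in pandas as follows; the column names and calling convention are illustrative, not METRE's exact code:

import numpy as np
import pandas as pd

def build_vital_table(raw: pd.DataFrame, ranges: dict, freq: str = "1H") -> pd.DataFrame:
    # raw has columns: stay_id, charttime, variable, value
    wide = (raw
            .set_index("charttime")
            .groupby(["stay_id", "variable"])["value"]
            .resample(freq).mean()            # hourly aggregation
            .unstack("variable"))
    for var, (low, high) in ranges.items():
        if var in wide:
            bad = (wide[var] < low) | (wide[var] > high)
            wide.loc[bad, var] = np.nan       # outlier -> treated as missing
    # binary indicator: 1 = recorded, 0 = missing/imputed
    indicators = wide.notna().astype(int).add_suffix("_ind")
    return pd.concat([wide, indicators], axis=1)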
For the Intervention table, based on the start and end time of each variable, we added 1 or 0 indicating whether the treatment was performed in each hour (Figure 2B). There is no special missing value imputation in this table; 0 indicates that the treatment was not performed in that hour. The default pipeline also has a variable reordering step to make sure eICU dataframes have the same column ordering as MIMIC-IV dataframes, which facilitates direct cross validation between these 2 databases.
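A sketch of the start/end-time binarization for one intervention over a stay's hour grid (function name and column choices are ours):

import pandas as pd

def binarize_intervention(hour_index: pd.DatetimeIndex,
                          starttime: pd.Timestamp,
                          endtime: pd.Timestamp) -> pd.Series:
    # 1 for every hour of the stay covered by the procedure, else 0
    mask = (hour_index >= starttime.floor("H")) & (hour_index <= endtime)
    return pd.Series(mask.astype(int), index=hour_index)

hours = pd.date_range("2081-12-01 10:00", periods=6, freq="1H")
print(binarize_intervention(hours,
                            pd.Timestamp("2081-12-01 12:30"),
                            pd.Timestamp("2081-12-01 14:15")))
# -> 0 0 1 1 1 0 (the 12:00, 13:00 and 14:00 hours are covered)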
Before we implemented the default data imputation, we checked the ratio of null values for each ICU stay and for each variable. Users can set an optional missingness threshold, above which a stay or variable is considered not well-documented and is removed.
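The optional missingness filter can be expressed directly on the wide hourly table produced above; a sketch under that assumption:

def drop_poorly_documented(wide, threshold=0.9):
    # fraction of missing cells per ICU stay and per variable
    stay_null = wide.isna().groupby(level="stay_id").mean().mean(axis=1)
    var_null = wide.isna().mean()
    kept_stays = stay_null.index[stay_null <= threshold]
    kept_vars = var_null.index[var_null <= threshold]
    return wide.loc[kept_stays, kept_vars]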
Baseline tasks
In order to demonstrate the utility of METRE, we extracted the eICU and MIMIC-IV data using the default choices and performed a number of clinically relevant prediction tasks using different model architectures.
Tasks:
We incorporated 5 prediction tasks as defined by Tang et al. [18], which are:
1) In-hospital mortality prediction. In this task, 48h of extracted data is used for each stay. ICU stays with LOS < 48h are excluded. The prediction target is binary; the ground truth is from hospital_expire_flag in mimic_core.admissions for MIMIC-IV, while for eICU, eicu_crd.patients records unitdischargestatus ('Alive' or 'Expired') explicitly.
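A sketch of how the binary target can be assembled from the columns named above (admissions and patient_df stand for dataframes read from mimic_core.admissions and the eICU patient table; the variable names are ours):

# MIMIC-IV: hospital_expire_flag is already a 0/1 flag
mimic_labels = admissions.set_index("hadm_id")["hospital_expire_flag"]

# eICU: map the recorded discharge status string to a 0/1 flag
eicu_labels = (patient_df.set_index("patientunitstayid")["unitdischargestatus"]
               .map({"Alive": 0, "Expired": 1}))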
2-3) Acute respiratory failure (ARF), using either 4h (task 2) or 12h (task 3) of data. ARF is identified by the need for respiratory support with positive-pressure mechanical ventilation. For both MIMIC-IV and eICU, ventilation and positive end-expiratory pressure have been queried as variables, so labels can be directly generated for each record.
4-5) Shock, using either 4h (task 4) or 12h (task 5) of data. Shock is identified by receipt of a vasopressor, including norepinephrine, epinephrine, dopamine, vasopressin or phenylephrine. These have also been stored in the extracted Intervention table. For ARF and shock, the onset time is defined as the earliest time when the criteria are met.
Incorporating a gap between the end of the observation window (48h, 4h or 12h) and the onset of the positive target (time of death, ARF onset, shock onset) prevents data leakage and more closely resembles the real use case, where the care team takes possible measures after the model gives an alert. We incorporate a gap hour into METRE, where positive cases are those whose onset time is observed during the ICU stay but outside the observation window and the gap-hour window. The negative cases are defined as no onset during the entire stay. Therefore, each task has a distinct study cohort. We used 6h as the gap hour, but users can set their preferable gap hours (including no gap). We also performed the same series of predictions without the gap hour; the results are in SI Table S12-S14.
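A sketch of the gap-hour logic, where onset_hour is the number of hours from ICU admission to the first time the criteria are met (NaN when there is no onset during the stay; the function is our illustration):

import numpy as np

def gap_hour_label(onset_hour, obs_window=4, gap=6):
    # Return 1 (positive), 0 (negative) or None (stay excluded).
    if np.isnan(onset_hour):
        return 0          # no onset during the entire stay
    if onset_hour >= obs_window + gap:
        return 1          # onset after the observation window plus the gap
    return None           # onset too early: stay is excluded from this task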
Models:
We compared 4 different modeling approaches.
1) Logistic regression (LR) models. For LR models, we used the sklearn linear model library and applied both L1 and L2 penalties [25]. Bayesian optimization [26] was used for tuning the inverse of the regularization strength C and the ElasticNet mixing parameter l1_ratio in order to maximize the average area under the receiver operating characteristic curve (AUC).
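A minimal sketch of this tuning loop using scikit-learn and the bayes_opt package [26]; X_train and y_train (already flattened, as described below) are assumed to exist upstream, and the search bounds are illustrative:

from bayes_opt import BayesianOptimization
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def lr_cv_auc(log_C, l1_ratio):
    clf = LogisticRegression(penalty="elasticnet", solver="saga",
                             C=10 ** log_C, l1_ratio=l1_ratio, max_iter=1000)
    return cross_val_score(clf, X_train, y_train,
                           cv=10, scoring="roc_auc").mean()

opt = BayesianOptimization(f=lr_cv_auc,
                           pbounds={"log_C": (-3, 3), "l1_ratio": (0, 1)},
                           random_state=0)
opt.maximize(init_points=5, n_iter=10)  # 5 random steps + 10 BO iterations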
2) Random forest (RF) models. RF models were built using the sklearn ensemble library [27]. 6 hyperparameters, including the number of trees in the forest, the maximum depth of the tree, the minimum number of samples required to split an internal node, the minimum number of samples required to be at a leaf node, the number of features to consider when looking for the best split, and the number of samples to draw from the train set to train each base estimator, were optimized using Bayesian optimization, with the same goal of maximizing AUC. For both LR and RF models, Bayesian optimization was run for 10 iterations with 5 steps of random exploration, with expected improvement as the acquisition function. Besides, since both LR and RF models require 1D input, we flattened the time-series data before feeding it into the model.
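For reference, a plausible mapping of those 6 hyperparameters onto sklearn's RandomForestClassifier (the mapping and bounds are our reading of the description, not METRE's exact code):

from sklearn.ensemble import RandomForestClassifier

def make_rf(n_estimators, max_depth, min_samples_split,
            min_samples_leaf, max_features, max_samples):
    # the six tuned hyperparameters named in the text
    return RandomForestClassifier(
        n_estimators=int(n_estimators),        # number of trees
        max_depth=int(max_depth),              # maximum tree depth
        min_samples_split=int(min_samples_split),
        min_samples_leaf=int(min_samples_leaf),
        max_features=max_features,             # features per split
        max_samples=max_samples,               # bootstrap sample fraction
        bootstrap=True, n_jobs=-1)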
3) 1-dimensional convolutional neural networks (CNN) [28]. For CNN models, a random search on the convolutional kernel size, layer number, filter number, learning rate and batch size with a budget of 5 was implemented to maximize the AUC.
4) Long short-term memory networks (LSTM) [29]. We used the same random-search budget for the number of features in the LSTM hidden state, the number of recurrent layers, the feature dropout rate, the learning rate and the batch size.
The train-test split is 80:20 and the train set is used in a 10-fold cross validation. The test set AUC and area under the precision-recall curve (AUPRC) performance were reported. Empirical 95% confidence intervals were also computed using 1000 bootstrapped samples of the test set.
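The bootstrap confidence interval can be computed as in the following sketch (y_true and y_score are assumed to be numpy arrays of test-set labels and predicted scores):

import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_ci(y_true, y_score, n_boot=1000, seed=0):
    rng = np.random.default_rng(seed)
    aucs = []
    n = len(y_true)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)            # resample test set with replacement
        if len(np.unique(y_true[idx])) < 2:    # skip degenerate resamples
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(aucs, [2.5, 97.5])  # empirical 95% CI
    return lo, hi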
RESULTS
We first compared METRE with other MIMIC extraction pipelines (Table 1). METRE is the only pipeline that extracts the newest MIMIC database, MIMIC-IV, and eICU in a consistent way, includes a diverse set of EHR variables, and at the same time allows for both generic and condition-specific cohort selection. Notably, FIDDLE makes use of the underlying data distribution and performs post-processing similar to one-hot encoding, which results in the creation of large numbers of synthetic time-series variables. We also acknowledge that using MIMIC-III data has the benefit of abundant references and a consequent in-depth understanding of the database.

Using the default cohorts shown in Table 2, we first demonstrate the utility of METRE on the five tasks mentioned previously (and demonstrated in FIDDLE [18]) using 4 different training architectures (LR, RF, 1D CNN and LSTM). The complete results are in Table 3. We used AUC and AUPRC of the test set as evaluation metrics. For MIMIC data, models trained with data extracted using METRE achieved 5/10 best performance in the tasks performed, demonstrating the utility of the pipeline. One benefit of METRE is the built-in ability to extract from the eICU database in parallel; we used the same criteria to obtain ARF and shock labels for eICU cohorts and then compared different model performance with those listed in FIDDLE. Models trained using data from METRE-eICU also have comparable performance with those developed in FIDDLE, as expected (SI Table S10). Next, we evaluated the AUC and AUPRC on the unseen dataset for models trained on the other dataset; for example, for the hospital mortality prediction model trained on MIMIC-IV, we directly evaluated its performance on the whole eICU data (not only the eICU test set, so the cohort size is larger).

Among all 5 tasks, the ARF prediction task presented an AUC drop as high as 0.322 when the model was trained using eICU and evaluated using MIMIC-IV. We hypothesize that the generation of the ARF label may be dependent on local clinical care practice (e.g., whether the patient was on ventilation or not) and the completeness of the entries in the database. We therefore compared the ratio of people receiving all 16 intervention procedures in SI Table S15 and observed differences between MIMIC-IV and eICU. Model generalization performance has been known to be a challenge in model deployment, especially in clinical contexts, where the incoming data is often generated under different care practices. Therefore, we believe the ability of our pipeline to perform model testing using data collected from different institutions prior to deployment is of critical importance. There are some interesting METRE variables that we did not explore in this work. For instance, in our MIMIC-IV data, we extracted the patient discharge year as a static feature. It has been shown by Nestor et al. [21] that date-agnostic models can overestimate prediction quality and affect future deployment potential. We hope that METRE can spark research exploring this feature and developing models that generalize better over changing health care practices.
Although METRE's key value lies in the flexibility in extracting the data, to demonstrate the effectiveness of the pipeline, we performed 5 clinically relevant prediction tasks with hospital mortality, ARF, shock as the target and developed LR, RF, CNN and LSTM models for each task. With AUC and AUPRC as the metrics, all the models have comparable performance with models developed using MIMIC-Extract and FIDDLE, as expected. However, our pipeline has the additional advantage of enabling facile cross validation between MIMIC-IV and eICU datasets. It is also worth noting that we are not attempting to develop a one-size-fits-all solution that generalizes across all databases; this is an enduring challenge in the EHR field. METRE aims at expediting the data preprocessing stage for researchers who are interested in using both MIMIC-IV and eICU data. We welcome community contributions to METRE to keep developing additional functionality.
CONCLUSION
We developed an open-source cohort selection and pre-processing pipeline METRE to extract multi-variate EHR data. We focused on 2 widely-used EHR databases: MIMIC-IV and eICU.
METRE produces a wide variety of variables including time-series variables such as labs, vital signs, treatments, and interventions, as well as static variables including demographics and comorbidities. Our open-source pipeline transforms MIMIC-IV and eICU into structured data frames and allows researchers to perform model testing using data collected from different institutions, which is of critical importance for model deployment under clinical contexts.
DATA AVAILABILITY
The code used to extract the data and perform training is available here:
https://github.mit.edu/voldman-lab/METRE_MIMIC_eICU
ACKNOWLEDGEMENTS

We thank Dr. Luca Daniel (Massachusetts Institute of Technology), Dr. Tsui-Wei Weng (University of California San Diego), and Ching-Yun Ko (Massachusetts Institute of Technology) for valuable discussions.
FUNDING
The project is funded by MIT Jameel Clinic for Machine Learning in Health and W.L. received MIT EWSC Fellowship while conducting the research.
Figure 1. METRE schematic. (Dashed box is specific for eICU. Output dataframe shape is from MIMIC-IV.)
Figure 2. Post-processing done on time-series variables with one-time entries and variables spanning hours. Notably, we added a binary indicator column for each numerical variable in A) to explicitly model the data missingness. A) The Vital table. The hypothetical patient was admitted to the ICU at 8:00, and 780 bpm is treated as an outlier for the heart rate measurement. B) The Intervention table. The hypothetical patient was admitted to the ICU at 10:00 on 12/01/2081.
Figure 3 compares the MIMIC-IV/eICU cross-validation AUC on all 8 hospital mortality prediction models, which clearly demonstrates the utility that arises from facile multidatabase extraction. Table 4 has a complete comparison of the external dataset validation performance across models and across the 5 different prediction targets.
Figure 3. Cross-validation performance on in-hospital mortality prediction. A) LR, B) RF, C) CNN, D) LSTM models were trained using tables derived from MIMIC-IV and tested using both the MIMIC-IV test set and the whole eICU set. E) LR, F) RF, G) CNN, H) LSTM models were trained using tables derived from eICU and tested using both the eICU test set and the whole MIMIC-IV set. AUC curves with 95% confidence intervals are shown in each panel.
METRE bridges the gap between MIMIC-IV and eICU by creating harmonized outputs that allow for facile cross-validation across the two datasets, which greatly benefits 1) users who want to evaluate their model on a dataset that has not been seen during the development process, and 2) users who want to model larger cohorts; the combined cohort size of MIMIC-IV and eICU in METRE is 165254. METRE was developed to allow for substantial user flexibility. 1) Users can specify their own cohort choices on age, LOS, and record missingness threshold at the beginning of the pipeline. 2) Users can also choose to stop at a few exit points defined in the pipeline. The exit points allow users to get outputs before the default imputation method is applied, before the output is normalized, or before the train-validation-test split is performed. In this way, users can merge their preferred design choices into the pipeline. 3) We only provide a limited set of arguments for condition-specific cohorts. To accommodate other cohorts, users can first locate the stay_id (MIMIC-IV) or the patientunitstayid (eICU) for their specific cohort and use our pipeline under the customid argument to do the rest of the variable query and cleaning work.
[Figure 1 schematic, recovered text: Database (MIMIC-IV/eICU) -> cohort selection (age, ICU length of stay, etc.) -> retrieve static features (gender, ethnicity, comorbidities, etc.); retrieve dynamic features (blood gas, vital signs, etc.); retrieve procedures such as mechanical ventilation -> static branch: numerical variables z-scored, categorical variables one-hot encoded; time-dependent branch (Vital and Intervention): create time window, aggregate data by hour, outlier removal and imputation, semantic and feature-order matching for eICU -> outputs: Static Table 38766 x 35, Vital Table 2697900 x 184, Intervention Table 2697900 x 16. *Output table row size is shown for MIMIC-IV.]
Table 1. Comparison of METRE with prior works.

                                  METRE | MIMIC-Extract [16] | FIDDLE [18] | Gupta et al. [30] | COP-E-CAT [17]
Time-series variable number:      108   | 104 | a few hundred to a few thousand, depending on the task | NA | 43
If used MIMIC, MIMIC-III or IV?:  IV    | III | III | IV | IV
Included medication and ...:      Y     | Y   | Y   | Y  | Y
Table 2 is the demographic and ICU stay summary of the extracted MIMIC-IV and eICU cohorts using the default settings (generic cohort, age > 18, data missingness threshold 0.9, LOS > 24h and LOS < 240h). Users can obtain their custom cohorts by defining the keyword, age constraint, LOS length constraint and by choosing whether to remove poorly populated stays or not. The age, gender, and ethnicity distributions between these 2 cohorts are very similar.
Table 2. Demographic and ICU stay summary of the default MIMIC-IV and eICU cohorts in METRE: MIMIC-IV (N = 38766), eICU (N = 126448).
Table 3. AUC and AUPRC comparison of trained models on the 5 different tasks. Data is from MIMIC. Columns: Hospital Mortality 48h, ARF 4h, ARF 12h, Shock 4h, Shock 12h (AUROC and AUPRC for each).
Table 4. Cross-validation performance (DIFF: AUC/AUPRC difference compared to the original test set). Columns: Hospital Mortality 48h, ARF 4h, ARF 12h, Shock 4h, Shock 12h (AUROC and AUPRC for each); rows include eICU validation (models trained on MIMIC-IV).
R.A. Taylor, J.R. Pare, A.K. Venkatesh, H. Mowafi, E.R. Melnick, W. Fleischman, M.K. Hall, Prediction of In-hospital Mortality in Emergency Department Patients With Sepsis: A Local Big Data-Driven, Machine Learning Approach, Acad. Emerg. Med. 23 (2016) 269-278. https://doi.org/10.1111/acem.12876.
M. Moor, N. Bennet, D. Plecko, M. Horn, B. Rieck, N. Meinshausen, P. Bühlmann, K. Borgwardt, Predicting sepsis in multi-site, multi-national intensive care cohorts using deep learning, (2021). https://arxiv.org/abs/2107.05230v1 (accessed October 31, 2021).
A.J. Masino, M.C. Harris, D. Forsyth, S. Ostapenko, L. Srinivasan, C.P. Bonafide, F. Balamuth, M. Schmatz, R.W. Grundmeier, Machine learning models for early sepsis recognition in the neonatal intensive care unit using readily available electronic health record data, PLOS ONE. 14 (2019) e0212665. https://doi.org/10.1371/journal.pone.0212665.
S.M. Lauritsen, M.E. Kalør, E.L. Kongsgaard, K.M. Lauritsen, M.J. Jørgensen, J. Lange, B. Thiesson, Early detection of sepsis utilizing deep learning on electronic health record event sequences, Artif. Intell. Med. 104 (2020) 101820. https://doi.org/10.1016/j.artmed.2020.101820.
F.P. Wilson, M. Martin, Y. Yamamoto, C. Partridge, E. Moreira, T. Arora, A. Biswas, H. Feldman, A.X. Garg, J.H. Greenberg, M. Hinchcliff, S. Latham, F. Li, H. Lin, S.G. Mansour, D.G. Moledina, P.M. Palevsky, C.R. Parikh, M. Simonov, J. Testani, U. Ugwuowo, Electronic health record alerts for acute kidney injury: multicenter, randomized clinical trial, BMJ. 372 (2021) m4786. https://doi.org/10.1136/bmj.m4786.
N. Tomašev, X. Glorot, J.W. Rae, M. Zielinski, H. Askham, A. Saraiva, A. Mottram, C. Meyer, S. Ravuri, I. Protsyuk, A. Connell, C.O. Hughes, A. Karthikesalingam, J. Cornebise, H. Montgomery, G. Rees, C. Laing, C.R. Baker, K. Peterson, R. Reeves, D. Hassabis, D. King, M. Suleyman, T. Back, C. Nielson, J.R. Ledsam, S. Mohamed, A clinically applicable approach to continuous prediction of future acute kidney injury, Nature. 572 (2019) 116-119. https://doi.org/10.1038/s41586-019-1390-1.
B. Norgeot, B.S. Glicksberg, L. Trupin, D. Lituiev, M. Gianfrancesco, B. Oskotsky, G. Schmajuk, J. Yazdany, A.J. Butte, Assessment of a Deep Learning Model Based on Electronic Health Record Data to Forecast Clinical Outcomes in Patients With Rheumatoid Arthritis, JAMA Netw. Open. 2 (2019) e190606. https://doi.org/10.1001/jamanetworkopen.2019.0606.
W. Ge, J.-W. Huh, Y.R. Park, J.-H. Lee, Y.-H. Kim, A. Turchin, An Interpretable ICU Mortality Prediction Model Based on Logistic Regression and Recurrent Neural Networks with LSTM units, AMIA Annu. Symp. Proc. 2018 (2018) 460-469.
E. Choi, M.T. Bahadori, J.A. Kulas, A. Schuetz, W.F. Stewart, J. Sun, RETAIN: an interpretable predictive model for healthcare using reverse time attention mechanism, in: Proc. 30th Int. Conf. Neural Inf. Process. Syst., Curran Associates Inc., Red Hook, NY, USA, 2016: pp. 3512-3520.
M. Fatemi, T.W. Killian, J. Subramanian, M. Ghassemi, Medical Dead-ends and Learning to Identify High-Risk States and Treatments, in: M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, J.W. Vaughan (Eds.), Adv. Neural Inf. Process. Syst., Curran Associates, Inc., 2021: pp. 4856-4870.
M.B.A. McDermott, S. Wang, N. Marinsek, R. Ranganath, L. Foschini, M. Ghassemi, Reproducibility in machine learning for health research: Still a ways to go, Sci. Transl. Med. 13 (2021) eabb1655. https://doi.org/10.1126/scitranslmed.abb1655.
H. Suresh, J. Guttag, Understanding Potential Sources of Harm throughout the Machine Learning Life Cycle, MIT Case Stud. Soc. Ethical Responsib. Comput. (2021). https://doi.org/10.21428/2c646de5.c16a07bb.
M.A. Gianfrancesco, S. Tamang, J. Yazdany, G. Schmajuk, Potential Biases in Machine Learning Algorithms Using Electronic Health Record Data, JAMA Intern. Med. 178 (2018) 1544-1547. https://doi.org/10.1001/jamainternmed.2018.3763.
M. Ghassemi, T. Naumann, P. Schulam, A.L. Beam, I.Y. Chen, R. Ranganath, A Review of Challenges and Opportunities in Machine Learning for Health, AMIA Summits Transl. Sci. Proc. 2020 (2020) 191-200.
A.E.W. Johnson, T.J. Pollard, L. Shen, L.H. Lehman, M. Feng, M. Ghassemi, B. Moody, P. Szolovits, L. Anthony Celi, R.G. Mark, MIMIC-III, a freely accessible critical care database, Sci. Data. 3 (2016) 160035. https://doi.org/10.1038/sdata.2016.35.
S. Wang, M.B.A. McDermott, G. Chauhan, M.C. Hughes, T. Naumann, M. Ghassemi, MIMIC-Extract: A Data Extraction, Preprocessing, and Representation Pipeline for MIMIC-III, Proc. ACM Conf. Health Inference Learn. (2020) 222-235. https://doi.org/10.1145/3368555.3384469.
A. Mandyam, E.C. Yoo, J. Soules, K. Laudanski, B.E. Engelhardt, COP-E-CAT: cleaning and organization pipeline for EHR computational and analytic tasks, in: Proc. 12th ACM Conf. Bioinforma. Comput. Biol. Health Inform., ACM, Gainesville, Florida, 2021: pp. 1-9. https://doi.org/10.1145/3459930.3469536.
S. Tang, P. Davarmanesh, Y. Song, D. Koutra, M.W. Sjoding, J. Wiens, Democratizing EHR analyses with FIDDLE: a flexible data-driven preprocessing pipeline for structured clinical data, J. Am. Med. Inform. Assoc. 27 (2020) 1921-1934. https://doi.org/10.1093/jamia/ocaa139.
T.J. Pollard, A.E.W. Johnson, J.D. Raffa, L.A. Celi, R.G. Mark, O. Badawi, The eICU Collaborative Research Database, a freely available multi-center database for critical care research, Sci. Data. 5 (2018) 180178. https://doi.org/10.1038/sdata.2018.178.
M. Singer, C.S. Deutschman, C.W. Seymour, M. Shankar-Hari, D. Annane, M. Bauer, R. Bellomo, G.R. Bernard, J.-D. Chiche, C.M. Coopersmith, R.S. Hotchkiss, M.M. Levy, J.C. Marshall, G.S. Martin, S.M. Opal, G.D. Rubenfeld, T. van der Poll, J.-L. Vincent, D.C. Angus, The Third International Consensus Definitions for Sepsis and Septic Shock (Sepsis-3), JAMA. 315 (2016) 801-810. https://doi.org/10.1001/jama.2016.0287.
B. Nestor, M.B.A. McDermott, W. Boag, G. Berner, T. Naumann, M.C. Hughes, A. Goldenberg, M. Ghassemi, Feature Robustness in Non-stationary Health Records: Caveats to Deployable Model Performance in Common Clinical Machine Learning Tasks, in: Proc. 4th Mach. Learn. Healthc. Conf., PMLR, 2019: pp. 381-405. https://proceedings.mlr.press/v106/nestor19a.html (accessed June 3, 2022).
T. Pollard, A. Johnson, O. Badawi, T. Naumann, M. Komorowski, Rincont, J. Raffa, Theonesp, MIT-LCP/eicu-code: eICU-CRD Code Repository v1.0, (2018). https://doi.org/10.5281/ZENODO.1249016.
A.E.W. Johnson, D.J. Stone, L.A. Celi, T.J. Pollard, The MIMIC Code Repository: enabling reproducibility in critical care research, J. Am. Med. Inform. Assoc. 25 (2018) 32-39. https://doi.org/10.1093/jamia/ocx084.
H. Harutyunyan, H. Khachatrian, D.C. Kale, G. Ver Steeg, A. Galstyan, Multitask learning and benchmarking with clinical time series data, Sci. Data. 6 (2019) 96. https://doi.org/10.1038/s41597-019-0103-9.
F. Nogueira, Bayesian Optimization: Open source constrained global optimization tool for Python. https://github.com/fmfn/BayesianOptimization (accessed August 2, 2022).
C. Lea, M.D. Flynn, R. Vidal, A. Reiter, G.D. Hager, Temporal Convolutional Networks for Action Segmentation and Detection, in: 2017 IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), IEEE, Honolulu, HI, 2017: pp. 1003-1012. https://doi.org/10.1109/CVPR.2017.113.
LSTM - PyTorch 1.12 documentation, (n.d.).
M. Gupta, B. Gallamoza, N. Cutrona, P. Dhakal, R. Poulain, R. Beheshti, An Extensive Data Processing Pipeline for MIMIC-IV, (2022). http://arxiv.org/abs/2204.13841 (accessed June 3, 2022).
| [
"https://github.com/fmfn/BayesianOptimization."
] |
[
"Theory of sounds in He -II",
"Theory of sounds in He -II"
] | [
"Christian Fronsdal [email protected] \nDepartment of Physics and Astronomy Unversity of California\nBhaumik Institute\nLos AngelesUSA\n"
] | [
"Department of Physics and Astronomy Unversity of California\nBhaumik Institute\nLos AngelesUSA"
] | [] | A dynamical model for Landau's original approach to superfluid Helium is presented, with two velocities but only one mass density. Second sound is an adiabatic perturbation that involves the temperature and the roton, aka the notoph. The action incorporates all the conservation laws, including the equation of continuity. With only 4 canonical variables it has a higher power of prediction than Landau's later, more complicated model, with its 8 degrees of freedom. The roton is identified with the massless notoph. This theory gives a very satisfactory account of second and fourth sounds.Second sound is an adiabatic oscillation of the temperature and both vector fields, with no net material motion. Fourth sound involves the roton, the temperature and the density.With the experimental confirmation of gravitational waves the relations between Hydrodynamics and Relativity and particle physics have become more clear, and urgent. The appearance of the Newtonian potential in irrotational hydrodynamics comes directly from Einstein's equations for the metric. The density factor ρ is essential; it is time to acknowledge the role that it plays in particle theory.To complete the 2-vector theory we include the massless roton mode. Although this mode too is affected by the mass density, it turns out that the wave function of the unique notoph propagating mode N satisfies the normal massless wave equation N = 0; the roton propagates as a free particle in the bulk of the superfluid without meeting resistance. In this circumstance we may have discovered the mechanism that lies behind the flow of He-II through very thin pores. | 10.1063/10.0016839 | [
"https://export.arxiv.org/pdf/2305.17635v1.pdf"
] | 257,272,832 | 2305.17635 | e7e849a91d4f2d160aab31a08dba95cec1b651ac |
Theory of sounds in He -II
28 May 2023 May 30, 2023
Christian Fronsdal [email protected]
Department of Physics and Astronomy Unversity of California
Bhaumik Institute
Los AngelesUSA
Theory of sounds in He -II
28 May 2023 May 30, 2023
A dynamical model for Landau's original approach to superfluid Helium is presented, with two velocities but only one mass density. Second sound is an adiabatic perturbation that involves the temperature and the roton, aka the notoph. The action incorporates all the conservation laws, including the equation of continuity. With only 4 canonical variables it has a higher power of prediction than Landau's later, more complicated model, with its 8 degrees of freedom. The roton is identified with the massless notoph. This theory gives a very satisfactory account of second and fourth sounds.Second sound is an adiabatic oscillation of the temperature and both vector fields, with no net material motion. Fourth sound involves the roton, the temperature and the density.With the experimental confirmation of gravitational waves the relations between Hydrodynamics and Relativity and particle physics have become more clear, and urgent. The appearance of the Newtonian potential in irrotational hydrodynamics comes directly from Einstein's equations for the metric. The density factor ρ is essential; it is time to acknowledge the role that it plays in particle theory.To complete the 2-vector theory we include the massless roton mode. Although this mode too is affected by the mass density, it turns out that the wave function of the unique notoph propagating mode N satisfies the normal massless wave equation N = 0; the roton propagates as a free particle in the bulk of the superfluid without meeting resistance. In this circumstance we may have discovered the mechanism that lies behind the flow of He-II through very thin pores.
I. Introduction
The classical action for adiabatic hydro-thermo-dynamics of irrotational fluid flows allows for the well known calculation of the speed of sound, understood as an oscillation of the mass density and the velocity potential at fixed, uniform entropy (Laplace 1825) [1]. Some fluids transmit a second type of "sound" that has been interpreted as an oscillation of entropy and temperature at fixed pressure (Tisza 1938) [2]. Experiments have confirmed that the temperature is oscillating (Peshkov 1946) [3] and that the pressure is only weakly involved.
This paper presents an alternative interpretation of second and fourth sounds, within Landau's 2 -flow theory [4] of phonons and rotons, as an adiabatic oscillation of the temperature and the dynamical roton mode, with fixed density and entropy. The theory is an application of adiabatic thermodynamics, formulated as an action principle.
The dynamics of the roton field ($\dot{\vec X}$) was identified with the notoph (Rasetti and Regge 1972) [5], providing the link to Special Relativity and Quantum Theory that is needed in any mature, physical field theory.
A 2-form gauge field $Y$ is related to $\dot{\vec X}$ by $Y_{ij} = \epsilon_{ijk} X^k$. The dynamical roton is the massless field [7]
$$N = \rho(\nabla\cdot\vec X + {\rm const}). \qquad (1.1)$$
The principal new discovery that is reported here is that second sound is an adiabatic oscillation of the temperature and N .
Section II is a brief introduction to the ideas that have led to a dynamical formulation of Landau's theory. The speed of second sound is calculated in Section III and fourth sound is tackled in Section IV.
II. The classical action principles

Hydrodynamics
The essence of classical hydrodynamics is expressed by two equations, the equation of continuity,
$$\dot\rho + \nabla\cdot(\rho\vec v) = 0 \qquad (2.1)$$
and the Bernoulli equation
$$\frac{\partial}{\partial t}\vec v = -\nabla(v^2/2) - \frac{1}{\rho}\nabla p - \nabla\varphi. \qquad (2.2)$$
Here ρ is the (mass) density and p is the pressure. It applies only to irrotational flows, when the velocity takes the form $\vec v = -\nabla\Phi$. The two equations of motion are the Euler-Lagrange equations of a classical action principle. The field ϕ is the Newtonian potential. For some of the most elementary flows another branch of hydrodynamics must be invoked. In a popular, didactic experiment a glass of water is placed on a turntable. After some time the water is seen to be turning with the glass like a solid body, the surface rising towards the edge to form a meniscus. In the theory that is used to explain this phenomenon the velocity is a time derivative, $\dot{\vec X}$, and the 'Bernoulli equation' takes a different form,
$$\nabla(\dot{\vec X}^2/2) - \nabla\varphi - \frac{1}{\rho}\nabla p = 0. \qquad (2.3)$$
This theory, by itself, is not an alternative to the irrotational theory. It does not have an equation of continuity; instead $\dot{\vec X}$ is subject to constraints, as expected of a vector field. The inclusion of the Newtonian potential in this equation is ad hoc: it cannot be justified by an application of General Relativity, and the vector field $\dot{\vec X}$ is not affected by the transformations of the Galilei group. In conclusion, we need both types of vector fields to explain some of the simplest experiments. The study of elementary applications like these is incontrovertible evidence that two kinds of flow are needed in hydrodynamics. A satisfactory description of the water-glass-on-turntable and the whorls seen in the wake of ships was proposed by Onsager (Onsager 1962) [6].
Both theories can be expressed as action principles; the irrotational Lagrangian density is
$$L_1[\rho, \Phi, \varphi] = \rho(\dot\Phi - (\nabla\Phi)^2/2 - \varphi) - W_1[\rho] \qquad (2.4)$$
and Eq. (2.3) - without ϕ - is the Euler-Lagrange equation of
$$L_2[\rho, \vec X] = \rho\,\dot{\vec X}^2/2 - W_2[\rho]. \qquad (2.5)$$
The density factor is traditional in $L_1$, less so in $L_2$; its appearance in both is crucial. Current hydrodynamics results from adding (2.4) and (2.5),
$$L_{\rm Hydro}[\rho, \Phi, \vec X] = L_1[\rho, \Phi] + L_2[\rho, \vec X] + \frac{\kappa}{2}\rho\, d\psi\, dY. \qquad (2.6)$$
The last term will be explained below. The idea of two independent vector fields was already introduced by Landau [4] in his theory of superfluid Helium; his phonon and roton velocity fields are $-\nabla\Phi$ and $\dot{\vec X}$.
The classical theory of ordinary sound is derived from the Lagrangian (2.6),
$$L[\rho, \Phi, \vec X] = \rho(\dot\Phi - K - \varphi) - W[\rho], \qquad (2.7)$$
with the kinetic potential
$$K = (\nabla\Phi)^2/2 - \dot{\vec X}^2/2 - \kappa\,\dot{\vec X}\cdot\nabla\Phi. \qquad (2.8)$$
This Lagrangian is invariant under the transformations of the Galilei group. (The field $\vec X$ is inert, up to a change of gauge.) The flow $\rho(\kappa\dot{\vec X} - \nabla\Phi)$ is identified by the property of being conserved, as expressed by the equation of continuity, derived from the Lagrangian by variation of Φ.
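As a quick check (our sketch): varying (2.7)-(2.8) with respect to Φ gives the Euler-Lagrange equation
$$\dot\rho + \nabla\cdot\big(\rho(\kappa\dot{\vec X} - \nabla\Phi)\big) = 0,$$
which is precisely the equation of continuity for the conserved flow just named.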
As we shall show, this is a suitable action for Landau's phonons and rotons and a wide range of other applications of adiabatic hydro-thermodynamics. In the literature inspired by Landau's work on superfluids one finds that the applications make little use of roton dynamics; instead the field $\dot{\vec X}$ is more or less fixed. It is, therefore, not surprising to find that the theory, in its original, non-relativistic context, is characterized by strong constraints, as has been revealed by completion of the theory (Rasetti and Regge 1972). This is what brings the number of independent variables of hydrodynamics down to just 4.
The completed roton theory is a relativistic gauge theory. Like electrodynamics, it was completed with its development as a quantized gauge theory (Ogievetskij and Polubarinov 1963) [7] and Green, Schwartz and Witten (1987) [9]. Both relativity and quantum theory are needed for the formulation of unitarity. We return to this topic below.
Both L 1 and L 2 are non-relativistic limits of relativistic field theories; the former is a limit of
1 2 ρ(g µν ψ ,µ ψ ,ν − c 2 ) − W [ρ]. (2.9).
The non-relativistic limit includes the Newtonian potential defined by
g 00 = c 2 t + 2ϕ + O(1/c 2 ), ψ = c 2 + Φ + O(1/c 2 ) (2.10)
This is the origin of the Newtonian potential in Eq. (2.4) [9]. Its appearance in (2.5) cannot be justified by General Relativity.
Thermodynamics
The use of a variational principle for (adiabatic) thermodynamics is not often seen in the literature. There follows a résumé showing that the basic equations are the Euler-Lagrange equations of a simple action. This reformulation of adiabatic thermodynamics contains nothing that is unfamiliar. What it does is to set the limits of the applications; 3 it fixes the Hamiltonian, the kinetic potential and the angular momentum, and it puts us in a better position to confront new applications, such as gravitational waves [11] and the speed of second sound.
The equations that define adiabatic thermodynamics of a uniform system at rest are
∂F(T, V)/∂V + P = 0, ∂F(T, V)/∂T + S = 0, (2.11)
where V is the volume, F is the Helmholtz free energy and P is the pressure. We prefer to formulate the theory in terms of densities,
s = ρS, f (T, ρ) = ρF (T, V ).
Following Callen we write the local version of Eq.s (2.11):
ρ ∂f/∂ρ − f = p, ∂f/∂T + s = 0.
Consider the action
A₁[Φ, ρ, T, S, P] = ∫ dt ( ∫_Σ L₁ − ∫_{∂Σ} P ). (2.12)
Here P is the 3-form of pressure on the multifaceted boundary. The Lagrangian density is
L₁ = ρ(Φ̇ − (▽Φ)²/2 − ϕ) − f(T, ρ) − sT. (2.13)
Assume that the specific entropy density S is fixed, constant and uniform. Vary this Lagrangian with respect to local variations of ρ and T, with S, P and (temporarily) the boundary ∂Σ fixed; then the Euler-Lagrange equations are as follows.
Variation of A₁ with respect to Φ gives the equation of continuity, with v = −▽Φ:
ρ̇ + ▽ · (ρ v) = 0. (2.14)
Variation with respect to T gives the adiabatic relation:
∂f/∂T + s = 0; (2.15)
it can be used to eliminate the temperature.
3 It is evidently incompatible with an oscillating entropy.
Theorem. When s = ρS, S fixed, constant and uniform, then
▽ [∂(f + sT)/∂ρ] = (1/ρ) ▽p. (2.16)
Local variation of L₁, Eq. (2.13), by ρ, followed by the elimination of T, leads to the Bernoulli equation in the original form, Eq. (2.2):
▽Φ̇ − ▽(▽Φ)²/2 − ▽ϕ − (1/ρ) ▽p = 0.
There is a proof in Fronsdal (2020) [10].
Finally, variation of the boundary gives
L₁|_{∂Σ} = P. (2.17)
On-shell, on the boundary,
p = ρ ∂f/∂ρ − f = L₁ − ρ ∂L₁/∂ρ = P. (2.18)
The first equality agrees with the first of Eq.s (2.11), but since it is taken to hold in a wider context it may be regarded as a definition; the second is a consequence of the fact that −f is the only term in the Lagrangian density that is not linear in ρ. The last equality confirms the identification of p as the pressure, an extrapolation of P from the boundary to the interior. We shall replace L₁ by L₁ + L₂, as in Eq. (2.6).
Gauge theory
The gauge theory behind the roton field Ẋ was discovered by Rasetti and Regge [5]. It is the theory of a massless 2-form, with components Y_µν. The free action density is ρ dY². For hydrodynamics the complete action density is
L₂[ρ, Y] = √−g [ (c²/12) ρ dY² + (κ/2) ρ dY dψ ]; (2.19)
The 2-form Y_µν is related to X, and ψ is related to Φ by (2.10). The non-relativistic Lagrangian in Eq. (2.12) is derived from L₁ in Eq. (2.9), and L₂ is a limit of L₂[ρ, Y] in (2.19).
1. The field ψ is a scalar field with a vacuum expectation value, ψ = Φ + c²t. The field Φ transforms, together with the velocity −▽Φ, under the Galilei group in the usual way, making L₁ invariant under this group.
2. The components of the 2-form are
Y_ij = ǫ_ijk X_k, Y_0i =: η_i. (2.20)
The vector field η is a gauge field; variation of the action with respect to η gives the constraint (the gauge condition)
▽ ∧ m = 0, m := ρ(Ẋ + κ ▽Φ), (2.21)
with the general solution m = − ▽τ.
A special choice for the gauge parameter τ is required for a massless mode to be recognized. This mode is
N := ρ(▽ · Ẋ + κ). (2.22)
It is the only propagating field of this gauge theory. The free field equation is (Ref.s [11])
□N = 0. (2.23)
Remark. Non-relativistic electrodynamics, the limit in which the speed of light c tends to infinity, makes sense only in the absence of the magnetic field. And non-relativistic hydrodynamics is the regime where N = 0, for this field, like B, enters the Lagrangian and the equations of motion multiplied by c². Consequently, any study of a steady configuration will be one in which N is negligible.
III. Dynamics of first and second sound
First sound
The classical theory of (first) sound propagation rests on the Eulerian theory, with the Lagrangian density L₁ and the two equations of motion (2.1) and (2.2). The speed of propagation is expressed in terms of the adiabatic derivative of the pressure
C₁ = √( dp(ρ, S)/dρ ). (3.1)
This equation is used in the quoted sources (Arp et al [12], Brooks and Donnelly [13], Maynard [14] ) to determine the equation of state. We shall obtain similar formulas for second and fourth sounds.
First sound is an oscillation of the density and the velocity potential, ρ and Φ, with S fixed. The speed of first (ordinary) sound is usually calculated for a plane wave, a first order perturbation of a static configuration with uniform density. The two Euler-Lagrange equations are
ρ̇ − ▽ · (ρ ▽Φ) = 0, Φ̇ − ∂(f + sT)/∂ρ|_T = 0. (3.2)
Eliminating T, or using Eq. (2.16), and differentiating leads, in first order perturbation theory, to
ρ̈ = ρ ▽ · v̇(ρ, S), ▽ · v̇ = (1/ρ) ∂p(ρ, S)/∂ρ|_S ∆ρ (3.3)
and to (3.1). Note: in this case v = −▽Φ, since Ẋ = 0.
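Written out (an intermediate step we supply for clarity): combining the two equations of (3.2), with ▽Φ̇ = (1/ρ)▽p from (2.16), gives
\[
\ddot\rho = \nabla\cdot(\rho\,\nabla\dot\Phi) \approx \Delta p
= \frac{\partial p(\rho,S)}{\partial \rho}\Big|_S \,\Delta\rho,
\]
a wave equation whose speed is the C₁ of (3.1).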
Second sound
Since the notoph is a massless particle, we add a Stefan-Boltzmann term to the internal energy density:
u(T, ρ) → ũ(T, ρ, N) = u + (α/k) T^k N, N := ρ(▽ · Ẋ + κ). (3.4)
Let (T₀, ρ₀, X₀, N₀) be a stationary solution of the Euler-Lagrange equations for the Lagrangian
L = ρ(Φ̇ − K) − f − sT − (α/k) N T^k, K = −β Ẋ²/2 + v²/2, β = 1 + κ²; (3.5)
this is an alternative expression for Eq. (2.6), and ρv = ρ(κẊ − ▽Φ) is the conserved current. Second sound is a first order, adiabatic deformation
(T₀, Ẋ₀) → (T₀ + dT, Ẋ₀ + dẊ),
with N₀ = 0, v₀ = 0 and dp = 0, dρ = 0, dv = 0.
To the experimenter, second sound is excited by a forced oscillation of the temperature at the boundary; it is not registered by a pressure-sensitive microphone, hence dp ≈ 0.
We must review the equations that govern these oscillations.
The relevant part of the internal energy density is
−f − sT − (α/k) N T^k + ρβ Ẋ²/2.
In adiabatic thermodynamics, for any fixed value of S, the theory is an isolated Lagrangian action principle and u is the Hamiltonian density. For any adiabatic variation the integrated internal energy is at a minimum. Variation with respect to T and X, with ρ fixed and uniform, gives the two Euler-Lagrange equations.
1. From variation of T:
dT (∂(f + sT)/∂T)|_{S,ρ,N} = dT [ ρ (∂(F + ST)/∂T)|_{S,ρ} + (α/k) T^k dN/dT ]. (3.6)
In the first order of the perturbation this quantity is zero,
d(ρC_V) − α T^{k−1} dN = 0. (3.7)
Explanation: the internal energy density is ũ, and this has values that are measured and that give the recorded values of C_V; but in the cited papers N does not represent another variable; it is just a function of T, so their ρC_V includes a term −(α/k)T^k (dN/dT). Differentiation with respect to time gives
ρ (∂C_V/∂T) Ṫ − α T^{k−1} Ṅ = 0.
This is valid to first order in perturbation theory if N₀ = 0, as is natural under the circumstances. Under the same conditions,
ρ (∂C_V/∂T) T̈ − α T^{k−1} N̈ = 0. (3.8)
2. From variation of X:
ρβ dẊ · Ẋ − (α/k) T^k dN = 0, or dX · (β dẌ − α T^{k−1} ▽T) = 0. (3.9)
From (3.7) and the divergence of (3.8) it follows that
ρ (∂C_V/∂T)|_p T̈ − α T^{k−1} N̈ = 0, β N̈ − αρ T^{k−1} ∆T = 0 (3.9)
and the speed C₂ is
C₂ = (α/√β) T^{k−1} (∂C_V/∂T|_p)^{−1/2}. (3.10)
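Explicitly (an intermediate step we supply): eliminating N̈ between the two relations above yields a wave equation for the temperature,
\[
\ddot T = \frac{\alpha^2\, T^{\,2(k-1)}}{\beta\,\partial C_V/\partial T|_p}\,\Delta T = C_2^{\,2}\,\Delta T,
\]
from which (3.10) follows.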
The values of ∂C_V/∂T|_p will be taken from experimental data, for 0 < p < 25 MPa.
Direct comparison with experimental values
Arp et al. [12], Brooks and Donnelly [13] and Maynard [14] have collected results from many experiments. Their results for C_V are plotted in Fig. 1, along with our very simple interpolation. 4 The logarithmic singularity was placed on the λ line.
Our interpolation for C_V is, for p = 0 and 0.4 < T < 2.4,
C_V = −ln((2.18 − T)^{5/2} + 10^{−30}) + 3.35 − 4.5 T + 1.6 T². (3.11)
Units are joules/gram. The curves C_V(T) and C₂(T) are shown in Fig. 1. The lowest value of C_V on the interpolation curve is −.033 at T = .8282. The calculation stops at T = .7476, near the point where the experimenters lose their signal (Williams and Rosenbaum 1979) [15], and the curve peaks at T = 2.18046. Only the overall factor, 17.5 in Eq. (3.13), could be adjusted for a best fit. Similar fits were obtained for p = 2 MPa after a small adjustment of the parameters:
C_V = −ln((2.165 − T)^{5/2} + 10^{−30}) + 4.0 − 5.85 T + 2.1 T². (3.12)
The minimum of the interpolation curve is .00389 at T = .700. The curve begins at T = .8287, and peaks at T = 2.17875, at the λ line.
Calculations have verified similar agreement for pressures 5, 10, 15, 20 and 25 MPa.
Given the experimental data for the values of C_V, the theory predicts the speed of second sound to be given, up to a multiplicative constant, by Eq. (3.13). These formulas, with k = 3, give very good fits from the λ line down to T = .8, where C_V has a local minimum and the signal is lost. Fitting the overall constant factor to the experiment we find that, for p = 0 and for p = 2, 5 the final result is
C₂ = (α/√β) T² (∂C_V/∂T)^{−1/2}, .8 < T < T_λ, (3.13)
with α/√β = 17.5 in the units m/s and joules. Here C_V was taken from the tables in terms of joules and the velocities were expressed in terms of m/s.
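As an illustration (our own numerical sketch, not part of the original fits), the prediction (3.13) can be evaluated directly from the interpolation (3.11); the function names and the finite-difference derivative are ours:

    import numpy as np

    # Interpolated C_V at p = 0, Eq. (3.11); valid for 0.4 < T < 2.18 (joules/gram)
    def C_V(T):
        return -np.log((2.18 - T)**2.5 + 1e-30) + 3.35 - 4.5*T + 1.6*T**2

    # Predicted second-sound speed, Eq. (3.13), with alpha/sqrt(beta) = 17.5 (m/s)
    def C2(T, h=1e-5):
        dCV = (C_V(T + h) - C_V(T - h)) / (2*h)   # numerical derivative dC_V/dT
        return 17.5 * T**2 / np.sqrt(dCV)

    print(C2(1.5))   # about 19.7, of the order of the measured 20 m/s near 1.5 K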
Interpretation
In the term (α/k)T^k N that we have included in the internal energy, N is the notoph amplitude. The power k in T^k was left open, to be determined by measurements. The experimental value is k = 3. The factor T³ is proportional to the number of quanta predicted by Planck's theory. The new term in the internal energy density is thus identified as the Stefan-Boltzmann term associated with the notoph.
Why include N T³ instead of aT⁴? The notoph is a new experience and the wisest course is to accept the value provided by the experiments, which fixes the value of k at 3.
We have set the parameter N 0 equal to zero. This is because the experiments were made in vessels of a size such that boundary effects, the origin of capillary effects, are expected to be weak. The effect of varying this parameter away from zero is insignificant.
IV. Fourth sound
Fourth sound is observed in thin films and in containers packed with silicon wafers. The usual interpretation is that the "normal component" remains at rest; we shall assume that v₀ = 0 and ▽Φ₁ = 0, that S is fixed, constant and uniform, while X, ρ and T oscillate together. Of the four equations of motion these three are relevant for the determination of the speed:
1. Equation of continuity:
dρ̇ + κρ d(▽ · Ẋ) = 0.
To zero order in perturbation theory the system is taken to be stationary, and both terms are zero. For simplicity we shall replace X by N as the independent variable. Consider a plane wave perturbation; then to first order the equation reduces to
ρ̇₁ + κρ₁ (▽ · Ẋ₀) + κρ₀ (▽ · Ẋ₁) = 0. (4.1)
It is clearly vital to know something about the zeroth approximation.
In the bulk of the fluid the field Ẋ₀ is stationary and (▽ · Ẋ₀) is expected to vanish; in that case, in first order of perturbation, dρ + κ dN is constant; this will allow us to eliminate the density from the Bernoulli equation.
2. The adiabatic condition:
∂(f + sT)/∂T|_{ρ,N,S} dT = ρ ∂(F + ST)/∂T|_S dT = −(α/3) T³ dN − ∂(f + sT)/∂ρ|_{T,N,S} dρ,
and using Eq. (2.16),
∂(f + sT)/∂ρ|_{T,N,S} = (1/ρ) ∂p/∂ρ,
to get
∂(f + sT)/∂T|_{ρ,N,S} dT = −(α/3) T³ dN − (∂p/∂ρ) dρ, or dC_V = (αT² + κC₁²) dN − ρ₀ dC₁².
3. The time derivatives: the unit of velocity is here 10⁴ cm/sec, and C_V is in joules. In the numerator α = 17.5√β, as in the calculation of C₂, and 1/100 converts α to the new unit of speed. The factor 1/10 in the denominator is valid when C_V is expressed in joules, as taken from the tables [13-15].
(∂C_V/∂T) Ṫ + ρ₀ (∂C₁²/∂T) Ṫ = (αT² + κC₁²) Ṅ.
The Bernoulli equation is
dẌ = α T^{k−1} ▽T. (4.2)
The value of the parameter α/√β was determined in Section III. That leaves the value of κ as the sole free parameter; the value determined by using the earlier value of α is κ = .556 ± .0005.
If instead Eq. (4.4) is used to determine both parameters, then α/√β is bracketed between 17.0 and 18.0.
That gives a unique theory, with no free parameters, that can be used to predict the strength of capillary effects and other properties of He-II.
The analytic interpolations used for C₁ and C₄ were
C₁² = 5.63 + 0.05 (1.2 − x) − 0.77 (x − 1.2)², C₄² = 54000 − 50000 (x − 1.2)².
V. What comes next?
This paper reports another application of a version of Landau's 2-vector theory of superfluids. It should be pointed out that the alternative idea of two densities has found no direct experimental support. The number ρ_s/ρ_n is fixed in terms of ρ, T and p; it is not an independent variable [13]. The need for extra variables, besides a velocity potential, a density and the temperature, was demonstrated at the end of the 17th century, and yet the first viable suggestion in that direction was Landau's idea of two velocities, at first in a very narrow context.
The two 'versions' of hydrodynamics date from the beginning. They have been said to be equivalent, but that is evidently not the case. The rotons are strongly associated with the so-called 'Lagrangian version' of hydrodynamics. This was pointed out in an important paper by Rasetti and Regge, but that paper had repercussions in string theory only [16]. Today we see the roton-notoph identification as the coming-of-age of hydrodynamics, with applications to a large class of fluid phenomena, including capillary action, flight and gravitational waves. The recently confirmed unification of General Relativity with Particle Theory has given new impetus to bringing hydrodynamics into contact with both. We hope that the present paper will stimulate a more unified approach to these important branches of physics, by showing that hydrodynamics can be approached by methods that have been proper to Particle Physics, and profit from it. The approach assumes a precise model of fluids and makes detailed predictions on its own, without special adaptations in each special case.
To end where we began: superfluids still pose challenges. The existence of spin is obvious but details need to be examined. The spectacular ability of He-II to penetrate very fine pores is probably a manifestation of capillary phenomena, related to the properties of the massless notoph, but this needs to be clarified. Finally, the growing importance of notoph = roton makes it urgent to discover how to detect it, in the CMB and in the laboratory.
Computation codes are available on request from the author.
Fig. 1. The lower part shows values of C_V (in joules) determined by measurements. The solid curve is our simple interpolation of the data. This interpolation was used to calculate the speed C₂ of second sound, using Eq. (3.13), shown for k = 3 (units m/s); it is our prediction for the speed of second sound in He-II.
Fig. 2. The relation (4.4). Red line: interpolation by the author of measurements of C₄. Blue line: values of C₄ calculated from experimental values of C₁, C₂ and C_V reported in refs. [13-15].
Fig. 2 summarizes the result for C₄. The red line is the square of fourth sound, interpolated from the experimental data. It is almost covered by the blue line, the value given by Eq. (4.4). The fit was made with only one free parameter, the physical parameter κ of the fluid, for the first time determined experimentally. The vertical coordinate is the square of the speed of fourth sound in units of (10⁴ cm/sec)².
Table I. Contents: I. Introduction. II. The classical action principles: Hydrodynamics; Thermodynamics; Gauge theory; Speeds of sound. III. Dynamics of first and second sounds: First sound; Second sound; Interpretation. IV. Fourth sound. V. What comes next?
1 This theory is what remains of a relativistic theory when the relativistic scalar ψ is expanded as ψ = c²t + Φ + O(1/c²) and g₀₀ = c² + 2φ + O(1/c²), and the other components are Lorentzian. 2 That compressibility of air is what makes flight possible was understood by Leonardo da Vinci in the 15th century.
4 Our interpolation formula was needed in the lowest interval of temperature only; there was no need for the elaborate interpolation used by Arp.
5 In the quoted reviews velocities are given in m/sec, energy densities in joules.
Acknowledgements. I thank Gary Williams for discussions and information, and Joe Rudnick for conversations. I also wish to acknowledge the crucial reference to the paper [5], by Alexander Zheltukhin. I also thank Chair David Saltzberg for support.
Laplace, P.S., Traité de Méchanique, Duprat, Paris (1825).
Tisza, L., "Transport Phenomena in Helium II", Nature 141, 913 (1938).
Peshkov, V., "Second Sound in Helium II", J. of Physics 8, 381-389 (1944).
Landau, L., "Theory of Superfluidity in Helium II", Phys. Rev. 60, 356-358 (1941).
Rasetti, M. and Regge, T., "Quantum vortices and diff(R3)", in Lecture Notes in Physics, Springer-Verlag.
Onsager, L., private conversation (1962).
Ogievetskij, V.I. and Polubarinov, I.V., "The notoph and its possible interactions", Lecture Notes in Physics 173, Springer-Verlag (1962).
Green, M., Schwartz, J. and Witten, E., Superstrings, Princeton U. Press (1987).
Fronsdal, C., "Ideal stars in General Relativity", Gen. Rel. Grav. 39, 1971-2000 (2007).
Callen, H., Thermodynamics, Wiley (1960).
Fronsdal, C., "Hydrodynamic sources for Gravitational Waves" (2020).
Arp, V.D., McCarty, R.D. and Friend, D.F., "Thermophysical Properties of Helium-4 from 0.8 to 1500 K with Pressures to 2000 MPa", Physical and Chemical Properties Division, Chemical Science and Technology Laboratory, National Institute of Standards and Technology, 325 Broadway, Boulder, Colorado 80303-3328 (1998).
Brooks, J.S. and Donnelly, R.J., "The calculated thermodynamic properties of superfluid Helium", J. Phys. Chem. Ref. Data 6, 51 (1977).
Maynard, J., "Determination of the thermodynamics of He II from sound-velocity data", Phys. Rev. B 14, 3868-3891 (1976).
Williams, G.A. and Rosenbaum, R., "Fifth sound in superfluid 4He below 1 K", Phys. Rev. B 20, 4738-4740 (1979).
| [] |
[
"Efficient algorithms for computing rank-revealing factorizations on a GPU",
"Efficient algorithms for computing rank-revealing factorizations on a GPU"
] | [
"Nathan Heavner ",
"Chao Chen ",
"Abinand Gopal ",
"Per-GunnarMartinsson "
] | [] | [] | Standard rank-revealing factorizations such as the singular value decomposition and column pivoted QR factorization are challenging to implement efficiently on a GPU. A major difficulty in this regard is the inability of standard algorithms to cast most operations in terms of the Level-3 BLAS. This paper presents two alternative algorithms for computing a rank-revealing factorization of the form A = UTV * , where U and V are orthogonal and T is trapezoidal (or triangular if A is square). Both algorithms use randomized projection techniques to cast most of the flops in terms of matrix-matrix multiplication, which is exceptionally efficient on the GPU. Numerical experiments illustrate that these algorithms achieve significant acceleration over finely tuned GPU implementations of the SVD while providing low rank approximation errors close to that of the SVD. | 10.1002/nla.2515 | [
"https://export.arxiv.org/pdf/2106.13402v2.pdf"
] | 235,652,065 | 2106.13402 | 7bc50e8a88e03afa4803905c0c489e695b31bda3 |
Efficient algorithms for computing rank-revealing factorizations on a GPU
May 2023
Nathan Heavner
Chao Chen
Abinand Gopal
Per-Gunnar Martinsson
Efficient algorithms for computing rank-revealing factorizations on a GPU
May 2023Randomized numerical linear algebrarank-revealing matrix factorizationparallel algorithm for GPU
Standard rank-revealing factorizations such as the singular value decomposition and column pivoted QR factorization are challenging to implement efficiently on a GPU. A major difficulty in this regard is the inability of standard algorithms to cast most operations in terms of the Level-3 BLAS. This paper presents two alternative algorithms for computing a rank-revealing factorization of the form A = UTV * , where U and V are orthogonal and T is trapezoidal (or triangular if A is square). Both algorithms use randomized projection techniques to cast most of the flops in terms of matrix-matrix multiplication, which is exceptionally efficient on the GPU. Numerical experiments illustrate that these algorithms achieve significant acceleration over finely tuned GPU implementations of the SVD while providing low rank approximation errors close to that of the SVD.
INTRODUCTION
1.1. Rank-revealing factorizations. Given an m × n matrix A with m ≥ n, it is often desirable to compute a factorization of A that uncovers some of its fundamental properties. One such factorization, the rank-revealing UTV factorization, is characterized as follows. We say that a matrix factorization
A = U T V * , m × n m × m m × n n × n
is rank-revealing if U and V are orthogonal matrices, T is an upper trapezoidal matrix 5, and for all k such that 1 ≤ k < n, it is the case that
e_k := ‖A − U(:, 1 : k) T(1 : k, :) V*‖ ≈ inf{‖A − B‖ : B has rank k}, (1)
where B is an arbitrary matrix of the same size as A, the norm is the spectral norm, and U(:, 1 : k) and T(1 : k, :) denote the first k columns and the first k rows of the corresponding matrices, respectively (Matlab notation). This informal definition is a slight generalization of the usual definitions of rank-revealing decompositions that appear in the literature, minor variations of which appear in, e.g. [13,48,10,12]. Rank-revealing factorizations are useful in solving problems such as least squares approximation [12,11,25,35,5,26], rank estimation [10,50,49], subspace tracking [4,48], and low-rank approximation [36,21,34,14,3], among others.
Perhaps the two most commonly known and used rank-revealing factorizations are the singular value decomposition (SVD) and the column pivoted QR decomposition (CPQR). 6 A singular value decomposition provides a theoretically optimal rank-revealing decomposition, in that the error e k in (1) is minimum. The SVD has relatively high computational cost, however. The CPQR is less expensive, and also has the advantage that it builds the factorization incrementally, and can halt once a specified tolerance has been reached. This latter advantage is very valuable when working with matrices that are substantially rank-deficient. The drawback of CPQR is that it is much worse than the SVD at revealing the numerical rank (see, e.g., theoretical error bounds in [19,Section 3.2], and empirical results in Figures 6 and 8). For many practical applications, the error incurred is noticeably worse but usually acceptable. There exist pathological matrices for which CPQR leads to very suboptimal approximation errors [33], and specialized pivoting strategies to remedy it in some situations have been developed [10,31].
A third choice is the rank-revealing UTV factorization (RRUTV) [48,51,38,23,2]. An RRUTV can be thought of as a compromise between the SVD and CPQR that is better at revealing the numerical rank than the CPQR, and faster to compute than the SVD. Traditional algorithms for computing an RRUTV have been deterministic and guarantee revealing the rank of a matrix up to a user-defined tolerance. It is not used as widely as the aforementioned SVD and CPQR, though, except in a few settings such as subspace tracking.
1.2. Challenges of implementing the SVD and the CPQR on a GPU. As focus in high performance computing has shifted towards parallel environments, the use of GPUs to perform scientific computations has gained popularity and success [42,37,7]. The power of the GPU lies in its ability to execute many tasks in parallel extremely efficiently, and software tools have rapidly developed to allow developers to make full use of its capabilities. Algorithm design, however, is just as important. Classical algorithms for computing both the SVD and CPQR, still in use today, were designed with a heavier emphasis on reducing the number of floating point operations (flops) than on running efficiently on parallel systems. Thus, it is difficult for either factorization to maximally leverage the computing power of a GPU.
For CPQR, the limitations of the parallelism are well understood, at least relative to comparable matrix computations. The most popular algorithm for computing a CPQR uses Householder transformations and chooses the pivot columns by selecting the column with the largest norm. We will refer to this algorithm as HQRCP. See Section 2.3 for a brief overview of HQRCP, or, e.g. [8,25] for a thorough description. The process of selecting pivot columns inherently prevents full parallelization. In particular, HQRCP as written originally in [25] uses no higher than Level-2 BLAS. Quintana-Ortí et al. developed HQRCP further in [44], casting about half of the flops in terms of Level-3 BLAS kernels. Additional improvement in this area, though, is difficult to find for this algorithm. Given a sequence of matrix operations, it is well known that an appropriate implementation using Level-3 BLAS, or matrixmatrix, operations will run more efficiently on modern processors than an optimal implementation using Level-2 or Level-1 BLAS [6]. This is largely due to the greater potential for the Level-3 BLAS to make more efficient use of memory caching in the processor.
The situation for the SVD is even more bleak. It is usually computed in two stages. The first is a reduction to bidiagonal form via, e.g. Householder reflectors. Only about half the flops in this computation can be cast in terms of the Level-3 BLAS, similarly (and for similar reasons) to HQRCP. The second stage is the computation of the SVD of the bidiagonal matrix. This is usually done with either an iterative algorithm (a variant of the QR algorithm) or a recursive algorithm (divide-and-conquer) which reverts to the QR algorithm at the base layer. See [54,27,15,30] for details. The recursive option inherently resists parallelization, and the current widely-used implementations of the QR approach are cast in terms of an operation that behaves like a Level-2 BLAS. 7 Another well-known method for computing the SVD is the Jacobi's method [17,27], which can compute the tiny singular values and the corresponding singular vectors much more accurately for some matrices. But it is generally slower than the aforementioned methods.
1.3. Proposed algorithms. In this paper, we present two randomized algorithms for computing an RRUTV. Both algorithms are designed to run efficiently on GPUs in that the majority of their flops are cast in terms of matrixmatrix multiplication. We show through extensive numerical experiments in Section 6 that each reveals rank nearly as well as the SVD but often costs less than HQRCP to compute on a GPU. For matrices with uncertain or high rank, then, these algorithms warrant strong consideration for this computing environment.
The first algorithm POWERURV, discussed in Section 3, was first introduced in the technical report [28]. POWERURV is built on another randomized RRUTV algorithm developed by Demmel et al. in [16], adding better rank revelation at a tolerable increase in computational cost. The algorithm itself is quite simple, capable of description with just a few lines of code. The simplicity of its implementation is a significant asset to developers, and it has just one input parameter, whose effect on the resulting calculation can easily be understood.
The second algorithm, RANDUTV, was first presented in [38]. RANDUTV is a blocked algorithm, meaning it operates largely inside a loop, "processing" multiple columns of the input matrix during each iteration. Specifically, 7 The QR algorithm can be cast in terms of Level-3 BLAS, but for reasons not discussed here, this approach has not yet been adopted in most software. See [55] for details.
for an input matrix A ∈ R m×n with m ≥ n, 8 a block size b is chosen, and the bulk of RANDUTV's work occurs in a loop of s = ⌈n/b⌉ steps. During step i, orthogonal matrices U (i) and V (i) are computed which approximate singular vector subspaces of a relevant block of T (i−1) . Then, T (i) is formed with
T (i) := U (i) * T (i−1) V (i) .
The leading ib columns of T (i) are upper trapezoidal (see Figure 2 for an illustration of the sparsity pattern), so we say that RANDUTV drives A to upper trapezoidal form b columns at a time. After the final step in the loop, we obtain the final T, U, and V factors with
T := T (s) , U := U (1) U (2) · · · U (s) , V := V (1) V (2) · · · V (s) .
See Section 4 for the full algorithm. A major strength of RANDUTV is that it may be adaptively stopped at any point in the computation, for instance when the singular value estimates on the diagonal of the T (i) matrices drop below a certain threshold. If stopped early after k ≤ min(m, n) steps, the algorithm incurs only a cost of O(mnk) for an m × n input matrix. Each matrix U (i) and V (i) is computed using techniques similar to that of the randomized SVD [32], which spends most of its flops in matrix multiplication and therefore makes efficient use of GPU capabilities.
In this paper, we propose several modifications to the original RANDUTV algorithm given in [38]. In particular, we add oversampling and orthonormalization to enhance the accuracy of the rank-revealing properties of the resulting RRUTV factorization. These changes lead to additional computational cost on a CPU as observed in [38]. Here, we introduce an efficient algorithm to minimize the additional cost of oversampling and orthonormalization. The new algorithm takes advantage of the fact that matrix-matrix multiplication is far more efficient on a GPU than unpivoted QR, RANDUTV's other building block.
In summary, we present POWERURV and RANDUTV for computing rank-revealing factorizations on a GPU. Both methods are much faster than the SVD. Compared to HQRCP, they are faster for sufficiently large matrices and much more accurate. As an example, Figure 1 shows the running time of the four methods on two Intel 18-core CPUs and an NVIDIA GPU. Remark 1. In this manuscript, we assume that the input and output matrices reside in CPU main memory in our numerical experiments, and reported compute times include the communication time for transferring data to and from the GPU. The storage complexity of all methods discussed is O(n 2 ) for an n × n matrix. We restrict our attention to the case where all the data used in the computation fits in RAM on the GPU, which somewhat limits the size of matrices that can be handled. For instance, in the numerical experiments reported in Section 6, the largest problem size we could handle involved matrices of size about 30 000 × 30 000. The techniques can be modified to allow larger matrices to be handled and for multiple GPUs to be deployed, but we leave this extension for future work.
1.4. Outline of paper. In Section 2, we survey previous work in rank-revealing factorizations, discussing competing methods as well as previous work in randomized linear algebra that is foundational for the methods presented in this article. Section 3 presents the first algorithmic contribution of this article, POWERURV. In Section 4, we discuss and build on the recently developed RANDUTV algorithm, culminating in a modification of the algorithm RANDUTV BOOSTED with greater potential for low rank estimation. Finally, Section 6 presents numerical experiments which demonstrate the computational efficiency of POWERURV and RANDUTV BOOSTED as well as their effectiveness in low rank estimation.
PRELIMINARIES
2.1. Basic notation. In this manuscript, we write A ∈ R m×n to denote a real-valued matrix with m rows and n columns, and A(i, j) refers to the element in the i-th row and j-th column of A. The indexing notation A(i : j, k : l) is used to reference the submatrix of A consisting of the entries in the i-th through j-th rows of the k-th through l-th columns. σ_i(A) is the i-th singular value of A, and A* is the transpose. The row and column spaces of A are denoted as Row(A) and Col(A), respectively. An orthonormal matrix is a matrix whose columns have unit norm and are pairwise orthogonal, and an orthogonal matrix is a square orthonormal matrix. The default norm ‖·‖ is the spectral norm. If all the entries of a matrix G ∈ R m×n are independent, identically distributed standard Gaussian variables, we call G a standard Gaussian matrix, and we may denote it as G = RANDN(m, n). ǫ_machine denotes the machine epsilon, say, 2.22 × 10^{−16} in IEEE double precision.
2.2. The singular value decomposition (SVD). Given a matrix A ∈ R m×n and r = min(m, n), the (full) SVD of A takes the form
A = U opt Σ V * opt , m × n m × m m × n n × n
where U_opt and V_opt are orthogonal, and Σ is (rectangular) diagonal. The diagonal elements {σ_i}_{i=1}^r of Σ are the singular values of A and satisfy σ₁ ≥ σ₂ ≥ . . . ≥ σ_r ≥ 0. The columns u_i and v_i of U_opt and V_opt are called the left and right singular vectors, respectively, of A. In this article, we write [U_opt, Σ, V_opt] = SVD(A) for computing the (full) SVD decomposition of A. Importantly, the SVD provides theoretically optimal rank-k approximations to A. Specifically, the Eckart-Young-Mirsky Theorem [22,41] states that given the SVD of a matrix A and a fixed k ∈ {1, 2, . . . , r}, we have that
‖A − U_opt(:, 1 : k) Σ(1 : k, 1 : k) V_opt(:, 1 : k)*‖ = inf{‖A − B‖ : B has rank k},
and the relation also holds with respect to the Frobenius norm.
The thin SVD of A takes the form
A = Û_opt Σ̂ V̂*_opt, m × n m × r r × r r × n
where Û_opt and V̂_opt are orthonormal, and Σ̂ is diagonal, containing the singular values.
2.3. The column pivoted QR (CPQR) decomposition. Given a matrix A ∈ R m×n, the (full) CPQR decomposition of A takes the form A = Q R P*, m × n m × m m × n n × n where Q is orthogonal, R is trapezoidal, and P is a permutation matrix. There exists a number of algorithms for choosing the permutation, but a general option, as implemented in LAPACK 9, ensures monotonic decay in magnitude of the diagonal entries of R so that |R(1, 1)| ≥ |R(2, 2)| ≥ . . . ≥ |R(r, r)|. The details of the most popular algorithm for computing such a factorization, called HQRCP hereafter, are not essential to this article, but they may be explored by the reader in, e.g. [10,27,54,49]. In this article, we write [Q, R, P] = HQRCP(A) for computing the CPQR decomposition of A.
It is well known that HQRCP is not guaranteed to be rank-revealing, and it can fail by an exponentially large factor (on, e.g. Kahan matrices) [31]. Such pathological cases are rare, particularly in practice, and HQRCP is so much faster than computing an SVD that HQRCP is used ubiquitously for low rank approximation. A communication avoiding variant based on "tournament pivoting" is given in [18], with a related method for fully pivoted LU described in [29].
2.4. The unpivoted QR decomposition. Given a matrix A ∈ R m×n , the full (unpivoted) QR decomposition of A takes the form
A = Q R, m × n m × m m × n
where Q is orthogonal, and R is trapezoidal. When m ≥ n, the thin (unpivoted) QR decomposition of A takes the form
A = Q̂ R̂, m × n m × n n × n
where Q̂ is orthonormal, and R̂ is upper triangular. Unpivoted QR decompositions have no rank-revealing properties, but in this article we make critical use of the fact that if m ≥ n and A has full column rank, then the columns of Q̂ form an orthonormal basis for Col(A).
The standard algorithm for computing an unpivoted QR factorization relies on Householder reflectors. We shall call this algorithm HQR in this article and write [Q, R] = HQR FULL(A) or [Q̂, R̂] = HQR THIN(A). We refer the reader once again to textbooks such as [27,54,49] for a complete discussion of the algorithm. The lack of pivoting also allows HQR to rely more heavily on the Level-3 BLAS than HQRCP, translating to better performance in parallel environments.
Of particular interest is the fact that the output orthogonal matrix Q of the HQR FULL algorithm can be stored and applied efficiently even when m ≫ n. In HQR FULL, suppose that we have determined n Householder matrices H 1 , H 2 , . . . , H n ∈ R m×m such that H * n H * n−1 · · · H * 1 A = R. We have that H i = I − 2y i y * i , 1 ≤ i ≤ n, where y i ∈ R m×1 is the Householder vector associated with the transformation. Then the matrix Q = H 1 H 2 · · · H n can be represented as
Q = I − YTY * ,
where T ∈ R n×n is upper triangular and Y ∈ R m×n is lower trapezoidal with columns containing the y i [47]. The form I − YTY * of Q is called the compact-WY representation of a product of Householder reflectors.
Remark 2.
Observe that the compact-WY form reduces the storage requirement of Q from O(m 2 ) to O(mn + n 2 ) and that the HQR FULL algorithm requires O(mn 2 ) work. More importantly, this representation of Q allows an efficient application of Q using matrix-matrix multiplications, which is crucial for efficiently building factorizations on a GPU.
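As a small illustration (our sketch, not code from [47]), the product QC for Q = I − YTY* takes three matrix-matrix multiplications and never forms the m × m matrix Q; the same pattern with T replaced by T* applies Q*.

    import numpy as np

    def apply_compact_wy(Y, T, C):
        # Q @ C with Q = I - Y T Y^T, for Y of size m x n and T of size n x n.
        # Cost is O(mnk) for C of size m x k, all of it in GEMMs.
        return C - Y @ (T @ (Y.T @ C))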
If the matrix A is rank-deficient, i.e., rank(A) = k < min(m, n), but the first k columns of A are linearly independent, then the HQR algorithm will detect this situation, as R(k + 1 : m, k + 1 : n) = 0 and R̂(k + 1 : n, k + 1 : n) = 0 in exact arithmetic.
2.5. The randomized range finder. Consider a matrix A ∈ R m×n and a block size b such that 1 ≤ b ≤ rank(A). The randomized range finder algorithm [32] computes a set of b orthonormal vectors that approximately span Row(A) or Col(A). To be precise, suppose we seek to find an orthonormal matrix Q̂ ∈ R n×b such that
(2) ‖A − A Q̂ Q̂*‖ ≈ inf{‖A − B‖ : B has rank b}.
In other words, the columns of Q̂ approximately span the same space as the dominant b right singular vectors of A. This task can be accomplished using randomized projections. An extremely simple way of building Q̂ is the following: 10
1. Generate a standard Gaussian matrix G ∈ R m×b.
2. Compute a "sampling matrix" Y = A* G ∈ R n×b.
3. Build an orthonormal basis of Col(Y) via [Q̂, ∼] = HQR THIN(Y).
This method will yield a reasonably good approximation when the singular values of A decay fast (see theoretical error bounds in, e.g. [32, Section 10]). However, for certain matrices, particularly when the decay in singular values is slow, an improved approximation will be desired. We may improve the approximation provided by Q̂ in two ways:
(i) Oversampling: We may interpret Y as b different random projections onto Row(A). As shown in [32], the approximation of Col(Y) to Row(A) may be improved by gathering a few, say p, extra projections. In practice p = 5 or p = 10 is sufficient, adding very little additional computational cost (O((mn + nb)p) extra work) to the algorithm. Using a small amount of oversampling also improves the expected error bounds and vastly decreases the probability of deviating from those bounds. For example, assuming b ≥ 2 and p ≥ 4 while b + p ≤ min{m, n}, it holds that [32, Corollary 10.9]
‖A − A Q̂ Q̂*‖ ≤ ( 1 + 16 √(1 + b/(p + 1)) ) σ_{b+1} + ( 8 √(b + p) / (p + 1) ) ( Σ_{j>b} σ_j² )^{1/2},
with failure probability at most 3e^{−p}, where σ₁ ≥ σ₂ ≥ . . . are the singular values of A. Thus, this technique makes the use of randomization in the algorithm safe and reliable.
(ii) Power iteration: In the construction of Y, we may replace the matrix A* with (A*A)^q A* for q ∈ N, a matrix with the same column space as A* but whose singular values are (σ_i(A))^{2q+1}. The contributions of singular vectors that correspond to smaller singular values will be diminished using this new matrix, thus improving the approximation of Col(Y) to the desired singular vector subspace. 11 Again, assuming b ≥ 2 and p ≥ 4 while b + p ≤ min{m, n}, it holds that [32, Corollary 10.10]
E ‖A − A Q̂ Q̂*‖ ≤ C^{1/(2q+1)} σ_{b+1},
where the constant C depends on b, p, and min{m, n}. In other words, the power scheme drives the extra constant factor to one exponentially fast. In practice, the above error bound can be too pessimistic, and choosing q to be 0, 1 or 2 gives excellent results. When using a power iteration scheme, numerical linear dependence of the samples becomes a concern, since the information will be lost from singular vectors corresponding to singular values less than ǫ_machine^{1/(2q+1)} σ_max(A).
To stabilize the algorithm, we build Y incrementally, orthonormalizing the columns of the intermediate matrix in between applications of A * and A.
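To make the recipe concrete, here is a minimal NumPy sketch (ours, not code from [32]) of the range finder with oversampling p and q stabilized power iterations; the function name and default values are illustrative only.

    import numpy as np

    def randomized_range_finder(A, b, p=10, q=2, rng=None):
        # Returns an orthonormal n x b matrix whose columns approximately
        # span the dominant b-dimensional subspace of Row(A).
        rng = np.random.default_rng(rng)
        m, n = A.shape
        G = rng.standard_normal((m, b + p))   # b + p random projections
        Y = A.T @ G                           # sample Row(A)
        for _ in range(q):
            Y, _ = np.linalg.qr(Y)            # re-orthonormalize so small singular
            Y = A.T @ (A @ Y)                 # directions are not lost; Y <- (A*A) Y
        Q, _ = np.linalg.qr(Y)                # orthonormal basis of Col(Y)
        return Q[:, :b]                       # discard the p extra columns

Only matrix-matrix products and thin unpivoted QR factorizations appear, matching the building blocks discussed above.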
THE POWERURV ALGORITHM.
Let A ∈ R m×n be a given matrix. Without loss of generality, we assume m ≥ n; otherwise, we may operate on A* instead. The POWERURV algorithm is a slight variation of a randomized algorithm proposed by Demmel et al. [16] for computing a rank-revealing URV factorization of A. The modification we propose makes the method more accurate in revealing the numerical rank, at only a marginal increase in computational time in a GPU environment. 10 According to Theorem 3 in Appendix A, matrix Y has full rank with probability 1. 11 See [46] for bounds on principal angles between the two subspaces.
In addition, POWERURV is remarkably simple to implement and has a close connection to the randomized SVD (RSVD) [32].
3.1. A randomized algorithm for computing a UTV factorization proposed by Demmel, Dumitriu, and Holtz.
In [16], Demmel et al. give a randomized algorithm RURV (randomized URV) for computing a rank-revealing factorization of matrix A. The algorithm can be written quite simply as follows: 13 where U ∈ R m×m is orthogonal and R ∈ R m×n is upper trapezoidal.
1. Generate a standard Gaussian matrix G ∈ R n×n. 2. Build an orthonormal basis of Col(G) via [V, ∼] = HQR FULL(G) 12, where V ∈ R n×n. 3. Compute  = AV ∈ R m×n. 4. Perform the full (unpivoted) QR factorization [U, R] = HQR FULL(Â),
Note that after step (4), we have
AV = UR ⇒ A = URV*,
a URV decomposition of A. (See [16, Sec. 5] for its rank-revealing properties.) A key advantage of this algorithm is its simplicity. Since it relies only on unpivoted QR and matrix multiplication computations, it can easily be shown to be stable (see [16] for the proof). Furthermore, both of these operations are relatively well-suited for parallel computing environments like GPUs. Since they are extremely common building blocks for problems involving matrices, highly optimized implementations for both are readily available. Thus, a highly effective implementation of RURV may be quickly assembled by a non-expert.
Demmel et al. find a use for RURV as a fundamental component of a fast, stable solution to the eigenvalue problem. For this application, RURV is used iteratively, so not much is required of the accuracy of the rank-revealing aspect of the resulting factorization. For other problems such as low-rank approximations, though, the error e k in (1) can be quite large (compared to, e.g., the POWERURV algorithm that we are going to introduce; for interested readers, see [28, Figure 3] for a comparison of errors between RURV and POWERURV). This is because the matrix V does not incorporate any information from the row space of the input A.
3.2. POWERURV: A randomized algorithm enhanced by power iterations. The POWERURV algorithm is inspired by the RURV of Section 3.1 combined with the observation that the optimal matrix V ∈ R n×n to use for rank-revealing purposes (minimizing the error e k in (1)) is the matrix whose columns are the right singular vectors of the input matrix A ∈ R m×n . If such a V were used, then the columns of = AV would be the left singular vectors of A scaled by its singular values. Thus finding the right singular vectors of A yields theoretically optimal low rank approximations. This subproblem is as difficult as computing the SVD of A, though, so we settle for choosing V as an efficiently computed approximation to the right singular vectors of A.
Specifically, we compute an approximation to the row space of A using a variant of the randomized range finder algorithm in Section 2.5. The computation of V consists of three steps:
1. Generate a standard Gaussian matrix G ∈ R n×n . 2. Compute a "sampling matrix" Y = (A * A) q G ∈ R n×n , where q is a small positive integer. 3. Build an orthonormal basis of Col(Y) via [V, ∼] = HQR FULL(Y). 14
The matrix Y can be thought of as a random projection onto the row space of A. Specifically, the columns of Y are random linear combinations of the columns of (A*A)^q. The larger the value of q, the faster the singular values of the sampled matrix (A*A)^q decay. Therefore, choosing q to be large makes V better aligned with the right singular vectors. It also increases the number of matrix multiplications required, but they execute efficiently on a GPU. 12 According to Theorem 2 in Appendix A, G is invertible. 13 According to Corollary 4 in Appendix A, we know that, with probability 1, rank(Â) = rank(A) and the first rank(A) columns in  are linearly independent. 14 According to Theorem 5 and Corollary 4 in Appendix A, we know that, with probability 1, rank(Y) = rank(A), and the first rank(A) columns in Y are linearly independent.
Unfortunately, a naive evaluation of Y = (A * A) q G is sensitive to the effects of roundoff errors. In particular, the columns of Y will lose the information provided by singular vectors with corresponding singular values smaller than ǫ 1/2q machine σ max (A). This problem can be remedied by orthonormalizing the columns of the intermediate matrices in between each application of A and A * , employing 2q − 1 extra unpivoted QR factorizations. The complete instruction set for POWERURV is given in Algorithm 1.
Algorithm 1 [U,R,V] = POWERURV(A, q)
Input: matrix A ∈ R m×n (m ≥ n) and a non-negative integer q (if q = 0, this algorithm becomes the RURV.) Output: A = URV * , where U ∈ R m×m and V ∈ R n×n are orthogonal, and R ∈ R m×n is upper trapezoidal.
1: V = RANDN(n, n)
2: for i = 1 : q do
3:    Ŷ = A V          (Ŷ ∈ R m×n)
4:    [V, ∼] = HQR THIN(Ŷ)          (V ∈ R m×n)
5:    Y = A* V          (Y ∈ R n×n)
6:    [V, ∼] = HQR THIN(Y)          (V ∈ R n×n)
7: end for
8:  = A V          ( ∈ R m×n)
9: [U, R] = HQR FULL(Â)
Remark: According to theorems in Appendix A, we know that, with probability 1, rank(Ŷ) = rank(Y) = rank(Â) = rank(A), and the first rank(A) columns in these matrices are linearly independent, so the HQR algorithm can be applied.
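For reference, here is a minimal NumPy sketch of Algorithm 1 (ours; it assumes m ≥ n and q ≥ 1, and uses reduced QR where Algorithm 1 uses HQR THIN):

    import numpy as np

    def powerurv(A, q, rng=None):
        # Returns U (m x m), R (m x n, upper trapezoidal), V (n x n) with A = U R V^T.
        rng = np.random.default_rng(rng)
        V = rng.standard_normal((A.shape[1],) * 2)
        for _ in range(q):
            V, _ = np.linalg.qr(A @ V)       # orthonormalize A V    (now m x n)
            V, _ = np.linalg.qr(A.T @ V)     # orthonormalize A^T V  (back to n x n)
        U, R = np.linalg.qr(A @ V, mode='complete')
        return U, R, V

For example, U, R, V = powerurv(np.random.standard_normal((1000, 600)), q=2).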
3.3. Relationship with RSVD. The POWERURV algorithm is closely connected to the standard randomized singular value decomposition algorithm (RSVD). The equivalency between RSVD and POWERURV allows for much of the theory for analyzing the RSVD in [32,46] to directly apply to the POWERURV algorithm.
Let A ∈ R m×n with m ≥ n and G ∈ R n×n be a standard Gaussian matrix. Let G rsvd = G(:, 1 : ℓ), where ℓ ≤ min(n, rank(A)) is a positive integer. Let q be a non-negative integer.
POWERURV. Recall that POWERURV has two steps. First, compute the full (unpivoted) QR factorization of (A * A) q G:
(3) (A * A) q G = VS,
where V ∈ R n×n is orthogonal and S ∈ R n×n is upper triangular. Second, compute the full (unpivoted) QR factorization of AV:
(4) AV = UR,
where U ∈ R m×m is orthogonal and R ∈ R m×n is upper trapezoidal.
RSVD. The RSVD builds an approximation to a truncated SVD via the following steps. First, evaluate
(5) Y rsvd = (AA * ) q AG rsvd ∈ R m×ℓ .
The columns of Y rsvd are orthonormalized via a thin (unpivoted) QR factorization 15
(6) Y rsvd = Q rsvd R rsvd ,
where Q rsvd ∈ R m×ℓ is orthonormal, and R rsvd ∈ R ℓ×ℓ is upper triangular and invertible. Next, the thin SVD of Q * rsvd A is computed to obtain (7)
Q * rsvd A = W rsvd Σ rsvd (V rsvd ) * , where W rsvd ∈ R ℓ×ℓ is orthogonal, V rsvd ∈ R n×ℓ is orthonormal and Σ rsvd ∈ R ℓ×ℓ is diagonal with singular values.
We know all the singular values are positive because the columns of Q rsvd lie in Col(A). The final step is to define the m × ℓ matrix (8) U rsvd = Q rsvd W rsvd . 15 It is easy to show that rank((AA * ) q A) = rank(A) ≥ ℓ, so Y rsvd has full rank with probability 1 according to Theorem 3 in Appendix A. The end result is an approximate SVD:
A ≈ U rsvd Σ rsvd V * rsvd .
The key claim in this section is the following theorem:
Theorem 1. Given two matrices A, G, and two integers ℓ, q, as described at the beginning of Section 3.3, it holds that
U(:, 1 : ℓ)U(:, 1 : ℓ) * A = U rsvd U * rsvd A,
where the two matrices U and U rsvd are computed by the POWERURV and the RSVD, respectively, in exact arithmetic.
Proof. We will prove that Col(U(:, 1 : ℓ)) = Col(U rsvd ), which immediately implies that the two projectors U(:, 1 : ℓ)U(:, 1 : ℓ) * and U rsvd U * rsvd are identical. Let us first observe that restricting (3) to the first ℓ columns, we obtain
(9) (A * A) q G rsvd = V(:, 1 : ℓ)S(1 : ℓ, 1 : ℓ).
We can then connect Y rsvd and U(:, 1 : ℓ) via a simple computation
(10) Y rsvd (5) = (AA * ) q AG rsvd = A(A * A) q G rsvd (9) = AV(:, 1 : ℓ)S(1 : ℓ, 1 : ℓ) (4) = U(:, 1 : ℓ)R(1 : ℓ, 1 : ℓ)S(1 : ℓ, 1 : ℓ).
Next we link U rsvd and Y rsvd via
(11) U rsvd (8) = Q rsvd W rsvd (6) = Y rsvd R −1 rsvd W rsvd .
Combining (10) and (11), we find that
(12) U rsvd = U(:, 1 : ℓ)R(1 : ℓ, 1 : ℓ)S(1 : ℓ, 1 : ℓ)R −1 rsvd W rsvd .
We know that, with probability 1, matrices R(1 : ℓ, 1 : ℓ), S(1 : ℓ, 1 : ℓ), R rsvd and W rsvd are invertible, which establishes that Col(U(:, 1 : ℓ)) = Col(U rsvd ).
It is important to note that while there is a close connection between RSVD and POWERURV, the RSVD is able to attain substantially higher overall accuracy than POWERURV. Notice that the RSVD requires 2q + 2 applications of either A or A * , while POWERURV requires only 2q + 1. The RSVD takes advantage of one additional application of A in (7), where matrix W rsvd rearranges the columns inside Q rsvd to make the leading columns much better aligned with the corresponding singular vectors. Another perspective to see it is that the columns of V rsvd end up being a far more accurate basis for Row(A) than the columns of the matrix V resulting from the POWERURV. This is of course expected since for q = 0, the matrix V incorporates no information from A at all.
Remark 3. Theorem 1 extends to the case when ℓ > rank(A). The same proof will apply for a modified ℓ′ = rank(A), and it is easy to see that adding additional columns to the basis matrices will make no difference since in this case
U(:, 1 : ℓ) U(:, 1 : ℓ)* A = U_rsvd U*_rsvd A = A.
Remark 4. The RSVD requires an estimate of the numerical rank in order to compute a partial factorization. By contrast, the other methods discussed in this paper, which include the SVD, the column-pivoted QR (CPQR), and the two new methods (powerURV and randUTV), compute full factorizations without any a priori information about the numerical rank.
[Figure 2: the sparsity pattern of the matrices T^(i) in randUTV; after 0 steps T^(0) := A, after 1 step T^(1) := (U^(1))* T^(0) V^(1), after 2 steps T^(2) := (U^(2))* T^(1) V^(2), after 3 steps T^(3) := (U^(3))* T^(2) V^(3).]
THE RANDUTV ALGORITHM.
In this section, we describe the RANDUTV algorithm. Throughout this section, A ∈ R m×n is the input matrix. Without loss of generality, we assume m ≥ n (if m < n, we may operate on A * instead). A factorization
A = U T V * . m × n m × m m × n n × n
where U and V are orthogonal and T is upper trapezoidal is called a UTV factorization. Note that the SVD and CPQR are technically two examples of UTV factorizations. In the literature, however, a decomposition is generally given the UTV designation only if that is its most specific label; we will continue this practice in the current article. Thus, it is implied that T is upper trapezoidal but not diagonal, and V is orthogonal but not a permutation. The flexibility of the factors of a UTV decomposition allow it to act as a compromise between the SVD and CPQR in that it is not too expensive to compute but can reveal rank quite well. A UTV factorization can also be updated and downdated easily; see [49, Ch. 5, Sec. 4] and [24,43,48] for details.
RANDUTV is a blocked algorithm for computing a rank-revealing UTV decomposition of a matrix. It is tailored to run particularly efficiently on parallel architectures due to the fact that the bulk of its flops are cast in terms of matrix-matrix multiplication. The block size b is a user-defined parameter which in practice is usually a number such as 128 or 256 (multiples of the tile size used in libraries such as cuBLAS). In Section 4.1, we review the original RANDUTV algorithm in [38]. Then, Section 4.2 presents methods of modifying RANDUTV to enhance the rank-revealing properties of the resulting factorization. Finally, in Section 4.3 we combine these methods with a bootstrapping technique to derive an efficient algorithm on a GPU.
4.1. The RANDUTV algorithm for computing a UTV decomposition. The algorithm performs the bulk of its work in a loop that executes s = ⌈n/b⌉ iterations. We start with T (0) := A. In the i-th iteration (i = 1, 2, . . . , s), a matrix T (i) ∈ R m×n is formed by the computation
(13) T (i) := U (i) * T (i−1) V (i) ,
where U (i) ∈ R m×m and V (i) ∈ R n×n are orthogonal matrices chosen to ensure that T (i) satisfies the sparsity (nonzero pattern) and rank-revealing properties of the final factorization. Consider the first s − 1 steps. Similar to other blocked factorization algorithms, the i-th step is meant to "process" a set of b columns of T (i−1) , so that after step i, T (i) satisfies the following sparsity requirements:
• the leading ib columns of T^(i) are upper trapezoidal; that is, T^(i)(1 : ib, 1 : ib) is upper triangular and T^(i)((ib + 1) : m, 1 : ib) = 0 (see Figure 2).
The final step s processes the remaining trailing block directly to obtain U^(s) and V^(s). The sparsity pattern followed by the T^(i) matrices is demonstrated in Figure 2. When the U and V matrices are desired, they can be built with the computations
U := U (1) U (2) · · · U (s) , V := V (1) V (2) · · · V (s) .
In practice, the T (i) , U (i) and V (i) are not stored separately to save memory. Instead, the space for T, U, and V is allocated at the beginning of randUTV, and at iteration i each is updated with
T ← U (i) * TV (i) , V ← VV (i) , U ← UU (i) ,
where U (i) and V (i) may be discarded or overwritten after an iteration completes.
To motivate the process of building U (i) and V (i) , consider the first step of RANDUTV. We begin by initializing T := A ∈ R m×n and creating a helpful partition
T = [ T₁₁ T₁₂ ; T₂₁ T₂₂ ], where T₁₁ is b × b, T₁₂ is b × (n − b), T₂₁ is (m − b) × b, and T₂₂ is (m − b) × (n − b).
The goal is to construct V (1) ∈ R n×n and U (1) ∈ R m×m such that, after the update T ← (U (1) ) * TV (1) ,
(1) T₁₁ is diagonal (with entries that closely approximate the leading b singular values of A),
(2) T₂₁ = 0,
(3) σ_min(T₁₁) ≈ σ_b(A),
(4) σ_max(T₂₂) ≈ σ_{b+1}(A),
(5) T₁₁(k, k) ≈ σ_k(A), k = 1, 2, . . . , b.
Items (1) and (2) in the list are basic requirements for any UTV factorization, and the rest of the items relate to the decomposition's rank-revealing properties.
The key observation is that if V (1) and U (1) were orthogonal matrices whose leading b columns spanned the same subspace as the leading b right and left singular vectors, respectively, of A, items (2)-(4) in the previous list would be satisfied perfectly. (Items (1) and (5) could then be satisfied with an inexpensive post-processing step: compute the SVD of T 11 ∈ R b×b and update V (1) and U (1) accordingly.) However, determining the singular vector subspaces is as difficult as computing a partial SVD of A. We therefore content ourselves with the goal of building V (1) and U (1) such that the spans of the leading b columns are approximations of the subspaces spanned by the leading b right and left singular vectors, respectively, of A. We can achieve this goal efficiently using a variant of the randomized range finder algorithm discussed in Section 2.5. Specifically, we build V (1) as follows:
1. Generate a standard Gaussian matrix G ∈ R^{m×b}.
2. Sample Row(A) by calculating
   Y = (A^{*}A)^{q} A^{*} G ∈ R^{n×b},
   where q is a small non-negative integer.
3. Compute the full (unpivoted) QR factorization of Y to obtain an orthogonal matrix V^{(1)}, i.e., [V^{(1)}, ∼] = HQR_FULL(Y).^16
Once V^{(1)} is built, the path to U^{(1)} becomes clear after one observation. If the leading b columns of V^{(1)} span exactly the same subspace as the leading b right singular vectors of A, then the leading b columns of AV^{(1)} span exactly the same subspace as the leading b left singular vectors of A. Therefore, after computing V^{(1)}, we build U^{(1)} as follows:
1. Perform the matrix multiplication B = AV^{(1)}(:, 1 : b) ∈ R^{m×b}.
2. Compute the full (unpivoted) QR factorization of B to obtain an orthogonal matrix U^{(1)}, i.e., [U^{(1)}, ∼] = HQR_FULL(B).^17
Following the same procedure, we can build V̂^{(i+1)} ∈ R^{(n−ib)×(n−ib)} and Û^{(i+1)} ∈ R^{(m−ib)×(m−ib)} using the bottom right block T^{(i)}((ib + 1) : m, (ib + 1) : n) for i = 1, 2, . . . , s − 2. At the last step, the SVD of the remaining block T^{(s−1)}(((s − 1)b + 1) : m, ((s − 1)b + 1) : n) yields the final pair of transformations. Then
   V^{(i+1)} = diag(I_{ib}, V̂^{(i+1)})   and   U^{(i+1)} = diag(I_{ib}, Û^{(i+1)}),   for i = 0, 1, 2, . . . , s − 1.
Notice that the transformation matrices U^{(i)} and V^{(i)} can be applied to the T^{(i)} matrices efficiently, as discussed in Section 2.4. We describe the basic RANDUTV algorithm, adapted from [40], in Appendix B.
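To make the construction above concrete, the following NumPy sketch performs a single RANDUTV step: it builds V^{(1)} and U^{(1)} as described, forms T = (U^{(1)})^{*} A V^{(1)}, and applies the inexpensive SVD-based post-processing to the leading b × b block. It is a minimal illustration under our own naming (randutv_step), with numpy.linalg.qr standing in for HQR_FULL; it is not the reference implementation.

```python
import numpy as np

def randutv_step(A, b, q=1):
    """One randUTV step (a sketch): build orthogonal V, U so that T = U^T A V
    has a diagonal leading b x b block and zeros below it in the first b columns."""
    m, n = A.shape
    G = np.random.standard_normal((m, b))     # Gaussian sampling matrix
    Y = A.T @ G                               # sample Row(A)
    for _ in range(q):                        # power iteration: Y = (A^T A)^q A^T G
        Y = A.T @ (A @ Y)
    V, _ = np.linalg.qr(Y, mode='complete')   # full unpivoted QR -> V is n x n
    B = A @ V[:, :b]
    U, _ = np.linalg.qr(B, mode='complete')   # full unpivoted QR -> U is m x m
    T = U.T @ A @ V
    # post-processing: small SVD of T11 to diagonalize the leading block
    Us, s, Vst = np.linalg.svd(T[:b, :b])
    U[:, :b] = U[:, :b] @ Us
    V[:, :b] = V[:, :b] @ Vst.T
    T[:b, :] = Us.T @ T[:b, :]
    T[:, :b] = T[:, :b] @ Vst.T
    return U, T, V
```

On exit, T[:b, :b] is diagonal with entries approximating the leading b singular values of A, and T[b:, :b] vanishes to machine precision.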
An important feature of RANDUTV is that if a low-rank approximation of A is needed, the factorization can be halted once a prescribed tolerance has been met. In particular, consider the following partition
A = UTV^{*} = [U_1, U_2] [ T_{11}, T_{12} ; 0, T_{22} ] [V_1, V_2]^{*},
where U_1 and V_1 contain the first k columns of the corresponding matrices and T_{11} is k × k. The rank-k approximation from RANDUTV is
   A_k = U_1 (T_{11} V_1^{*} + T_{12} V_2^{*}),
and the approximation error is
(14)   ‖A − A_k‖ = ‖U_2 T_{22} V_2^{*}‖ = ‖T_{22}‖,
where U_2 and V_2 are orthonormal matrices. Therefore, the factorization can be terminated as soon as ‖T_{22}‖ becomes smaller than a prescribed tolerance. In our blocked algorithm, we can calculate and check
   e_i = ‖T^{(i)}((ib + 1) : m, (ib + 1) : n)‖
at the i-th iteration for i = 0, 1, . . . , s − 1. For applications where errors are measured using the Frobenius norm, a more efficient algorithm is the following. Notice that (14) holds for the Frobenius norm as well. In fact, we have
‖A − A_k‖_F^2 = ‖T_{22}‖_F^2 = ‖T‖_F^2 − ‖T_{11}‖_F^2 − ‖T_{12}‖_F^2 = ‖A‖_F^2 − ‖T_{11}‖_F^2 − ‖T_{12}‖_F^2,
where ‖·‖_F denotes the Frobenius norm. So we only need to pre-compute ‖A‖_F^2 and update it with ‖T_{11}‖_F^2 + ‖T_{12}‖_F^2, involving two small matrices, at every iteration in RANDUTV. Specifically, we have
   e_0 = ‖A‖_F   and   e_i^2 = e_{i−1}^2 − ‖T^{(i)}(((i − 1)b + 1) : ib, ((i − 1)b + 1) : n)‖_F^2
for i = 1, 2, . . . , s − 1. A similar approach for the RSVD was proposed in [56].
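A minimal sketch of this bookkeeping follows; the function name and the assumption that T is the fully computed middle factor U^{*}AV are ours. It tracks e_i without ever touching the trailing block T_{22}.

```python
import numpy as np

def frob_error_tracking(A, T, b, s):
    """Track e_i = ||A - A_{ib}||_F via the Frobenius update above,
    never forming or scanning the trailing block (a sketch)."""
    e2 = np.linalg.norm(A, 'fro') ** 2              # pre-compute ||A||_F^2
    errors = []
    for i in range(1, s):
        r0, r1 = (i - 1) * b, i * b
        # subtract ||T11||_F^2 + ||T12||_F^2 of step i in one slice
        e2 -= np.linalg.norm(T[r0:r1, r0:], 'fro') ** 2
        errors.append(np.sqrt(max(e2, 0.0)))        # e_i = ||T22||_F up to roundoff
    return errors
```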
Remark 5. The RANDUTV can be said to be a "blocked incremental RSVD" in the sense that the first step of the method is identical to the RSVD. In [38, Section 5.3], the authors demonstrate that the low-rank approximation error that results from the single-step RANDUTV factorization is identical to the error produced by the corresponding RSVD.
4.2. Using oversampling in the RANDUTV algorithm. In RANDUTV, the computation of matrix V^{(i)} ∈ R^{n×n} relies on the randomized range finder algorithm (discussed in Section 2.5). Just as the randomized range finder algorithm can use an oversampling parameter p to improve the success probability, we add a similar parameter p to the construction of V^{(i)} in Algorithm 3 to improve the rank-revealing properties of the resulting factorization.
To do so, we first change the size of the random matrix G from m × b to m × (b + p) (we once again consider only the building of V^{(1)} to simplify matrix dimensions). This effectively increases the number of times we sample Row(A) by p, providing a "correction" to the information in the first b samples.
Next, we must modify how we obtain the orthonormal vectors that form V^{(1)}. Recall that the first b columns of V^{(1)} must contain the approximation to the leading b right singular vectors of A. If we merely orthonormalized the columns of Y = (A^{*}A)^{q} A^{*} G ∈ R^{n×(b+p)} with HQR again, the first b columns of V^{(1)} would only contain information from the first b columns of Y. We must therefore choose a method for building V^{(1)} such that its first b columns contain a good approximation of Col(Y). The following approaches use two of the most common rank-revealing factorizations to solve this subproblem:
a) Perform a CPQR on Y to obtain the orthogonal matrix V^{(1)} ∈ R^{n×n}. The additional computational expense of computing HQRCP is relatively inexpensive for thin matrices like Y. However, the approximation provided by V^{(1)}(:, 1 : b) to Col(Y) in this case will be suboptimal, as mentioned in Section 2.3.
b) Perform an SVD on Y to obtain an orthogonal matrix W ∈ R^{n×n}. Then W(:, 1 : b) contains the optimal rank-b approximation of Col(Y). However, this method requires one more step, since V^{(1)} must be represented as a product of Householder vectors in order to compute AV^{(1)} efficiently in the following step of RANDUTV. After computing the SVD, therefore, we must perform a full (unpivoted) QR decomposition on W(:, 1 : b), i.e., [V^{(1)}, ∼] = HQR_FULL(W(:, 1 : b)).^18 This method yields the optimal approximation of Col(Y) given by V^{(1)}(:, 1 : b).
In this article, we use method b) because it provides better approximations. As discussed in Remark 6 below, method b) requires a full (unpivoted) HQR of size n × (b + p), an SVD of size (b + p) × (b + p), and a full (unpivoted) HQR of size n × b. While the SVD is small and therefore quite cheap, the first HQR is an extra expense which is nontrivial when aggregated across every iteration in RANDUTV. This extra cost is addressed and mitigated in Section 4.3.
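The sketch below illustrates method b) in NumPy, under our own naming. The SVD of the thin matrix Y is computed directly here; Remark 6 below describes the economical variant.

```python
import numpy as np

def build_V_oversampled(A, b, p, q=1):
    """Method b) sketch: orthogonal V whose leading b columns optimally
    approximate Col(Y), with Y built from b+p Gaussian samples."""
    m, n = A.shape
    G = np.random.standard_normal((m, b + p))
    Y = A.T @ G
    for _ in range(q):
        Y = A.T @ (A @ Y)                            # Y = (A^T A)^q A^T G
    W, _, _ = np.linalg.svd(Y, full_matrices=False)  # left singular vectors of Y
    V, _ = np.linalg.qr(W[:, :b], mode='complete')   # Householder form; V is n x n
    return V
```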
Remark 6. The SVD of method b) above may look expensive at first glance. However, recall that Y is of size n × (b + p), where n ≫ b + p. For tall, thin matrices like this, the SVD may be inexpensively computed as follows [9]:
1. Compute a full (unpivoted) QR decomposition of Y to obtain an orthogonal matrix Q ∈ R^{n×n} and an upper trapezoidal matrix R ∈ R^{n×(b+p)}, i.e., [Q, R] = HQR_FULL(Y).^19
2. Compute the SVD of the square matrix R(1 : (b + p), :) to obtain an orthogonal matrix Û ∈ R^{(b+p)×(b+p)} with the left singular vectors of R(1 : (b + p), :), i.e., [Û, ∼, ∼] = SVD(R(1 : (b + p), :)).
After these steps, we recognize that the matrix of left singular vectors of matrix Y is
   W = Q · diag(Û, I_{n−(b+p)}) ∈ R^{n×n}.
The costs of the first step dominate the entire procedure, which are O(n(b + p)) storage and O(n(b + p)^2) work, respectively, according to Remark 2.
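A minimal NumPy sketch of this procedure, with names of our choosing:

```python
import numpy as np

def tall_thin_left_singvecs(Y):
    """Left singular vectors of a tall, thin Y (n x k, n >> k), following
    the two steps of Remark 6 (a sketch)."""
    n, k = Y.shape
    Q, R = np.linalg.qr(Y, mode='complete')   # step 1: full QR, O(n k^2) work
    Uhat, s, Vh = np.linalg.svd(R[:k, :])     # step 2: small k x k SVD
    W = Q.copy()
    W[:, :k] = Q[:, :k] @ Uhat                # W = Q * diag(Uhat, I_{n-k})
    return W, s
```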
For randomized subspace iteration techniques like the one used to build V^{(i)}, the stability of the iteration is often a concern. As in the POWERURV algorithm, the information from any singular values less than ε_machine^{1/(2q+1)} σ_{max}(A) will be lost unless an orthonormalization procedure is used during intermediate steps of the computation Y = (A^{*}A)^{q} A^{*} G. For RANDUTV, only b singular vectors are in view in each iteration, so orthonormalization is not required as often as it is for POWERURV. However, if oversampling is used (p > 0), performing one orthonormalization before the final application of A^{*} markedly benefits the approximation of Col(Y) to the desired singular vector subspace. Numerical experiments show that this improvement occurs even when the sampling matrix is not in danger of loss of information from roundoff errors. Instead, we may intuitively consider that using orthonormal columns to sample A^{*} ensures that the "extra" p columns of Y contain information not already accounted for in the first b columns.
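As a sketch, the orthonormalization placement described above can be realized as follows (assuming q ≥ 1; the function name is ours):

```python
import numpy as np

def sample_with_reorth(A, ell, q):
    """Y = (A^T A)^q A^T G with one orthonormalization inserted before the
    final application of A^T (a sketch)."""
    m, n = A.shape
    G = np.random.standard_normal((m, ell))
    Y = A.T @ G
    for _ in range(q - 1):
        Y = A.T @ (A @ Y)
    Z, _ = np.linalg.qr(A @ Y)   # orthonormalize the m x ell intermediate
    return A.T @ Z               # final application of A^T to orthonormal columns
```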
4.3. Reducing the cost of oversampling and orthonormalization. Adding oversampling to RANDUTV, as discussed in Section 4.2, adds noticeable cost to the algorithm. First, it requires a costlier method of building V^{(i)}. Second, it increases the dimension of the random matrix G, increasing the cost of all the operations involving G and, therefore, Y. In this section, we demonstrate that the overhead caused by the latter cost can be essentially eliminated by recycling the "extra" information collected in one step when we execute the next step.
To demonstrate, consider the state of RANDUTV for an input matrix A ∈ R^{m×n} with q = 2, p = b after one step of the main iteration. Let A_q = (A^{*}A)^{q} A^{*} denote the matrix for power iteration. At this point, we have computed and stored the following relevant matrices:
• Y^{(1)} ∈ R^{n×(b+p)}: the matrix containing random samples of Row(A), computed with Y^{(1)} = A_q G^{(1)}, where G^{(1)} ∈ R^{m×(b+p)} is a standard Gaussian matrix.
• W^{(1)} ∈ R^{n×n}: the matrix whose columns are the left singular vectors of Y^{(1)}, computed with [W, ∼, ∼] = SVD(Y); see Section 4.2, method b).
• V^{(1)} ∈ R^{n×n}: the right transformation in the main step of RANDUTV, computed with [V^{(1)}, ∼] = HQR_FULL(W^{(1)}(:, 1 : b)); see Section 4.2, method b).
• U^{(1)} ∈ R^{m×m}: the left transformation in the main step of RANDUTV, computed with [U^{(1)}, ∼] = HQR_FULL(AV^{(1)}(:, 1 : b)); see Section 4.1.
• T^{(1)} ∈ R^{m×n}: the matrix being driven to upper trapezoidal form. At this stage in the algorithm, T^{(1)} = (U^{(1)})^{*} A V^{(1)}.
Finally, consider the partitions
   Y^{(1)} = [Y_1^{(1)}, Y_2^{(1)}],  W^{(1)} = [W_1^{(1)}, W_2^{(1)}],  V^{(1)} = [V_1^{(1)}, V_2^{(1)}],  U^{(1)} = [U_1^{(1)}, U_2^{(1)}],  T^{(1)} = [ T_{11}, T_{12} ; T_{21}, T_{22} ],
where Y_1^{(1)}, W_1^{(1)}, V_1^{(1)} and U_1^{(1)} contain the first b columns of the corresponding matrices, and T_{11} is b × b.
In the second iteration of RANDUTV, the first step is to approximate Row(T_{22}), where
T_{22}^{*} = (V_2^{(1)})^{*} A^{*} U_2^{(1)}.
Next, we make the observation that, just as Col(W^{(1)}(:, 1 : b)) approximates the subspace spanned by the leading b right singular vectors of A, the span of the next p columns of W^{(1)} approximates the subspace spanned by the leading (b + 1)-th through (b + p)-th right singular vectors of A. Thus, the columns of (V_2^{(1)})^{*} W^{(1)}(:, (b + 1) : (b + p)) carry approximations to leading right singular vectors of T_{22}, and they can be reused as "extra" samples of Row(T_{22}) without any further applications of A^{*}. This multiplication involving two small matrix dimensions is much cheaper than carrying out the full power iteration process.
Putting it all together, on every iteration of RANDUTV after the first, we build the sampling matrix Y^{(i)} of b + p columns in two stages. First, we build Y^{(i)}(:, 1 : b) without oversampling or orthonormalization, as in Section 4.1. Second, we build
Y^{(i)}(:, (b + 1) : (b + p)) = (V^{(i−1)}(:, (b + 1) : (n − b(i − 1))))^{*} W^{(i−1)}(:, (b + 1) : (b + p))
by reusing the samples from the last iteration. The complete algorithm, adjusted to improve upon the low-rank estimation accuracies of the original RANDUTV, is given in Algorithm 2.
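The two-stage construction can be sketched as follows for a single subsequent iteration; the argument names and the exact slicing conventions are our assumptions, intended only to illustrate that the recycled columns cost one small GEMM:

```python
import numpy as np

def recycled_samples(T22, V_prev_trailing, W_prev_extra, b, p, q=1):
    """Two-stage construction of the sampling matrix Y^(i) (a sketch).
    T22             -- trailing block of the middle factor (still to be processed)
    V_prev_trailing -- trailing columns of the previous right transform,
                       e.g. V^(1)(:, b+1 : n) when building Y^(2)
    W_prev_extra    -- columns b+1 : b+p of W from the previous iteration"""
    mp, np_ = T22.shape
    G = np.random.standard_normal((mp, b))
    Y = T22.T @ G                               # fresh samples of Row(T22)
    for _ in range(q):
        Y = T22.T @ (T22 @ Y)                   # power iteration on the block
    Y_extra = V_prev_trailing.T @ W_prev_extra  # recycled samples: one small GEMM
    return np.hstack([Y, Y_extra])              # (b+p)-column sampling matrix
```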
5. IMPLEMENTATION DETAILS
As mentioned earlier, POWERURV (Algorithm 1) and RANDUTV (Algorithm 2) mainly consist of Level-3 BLAS routines such as matrix-matrix multiplication (GEMM), which can execute extremely efficiently on modern computer architectures such as GPUs. Two essential features of GPUs from the algorithmic design perspective are the following: (1) the amount of parallelism available is massive (for example, an NVIDIA V100 GPU has 5120 CUDA cores); (2) the cost of data movement is orders of magnitude higher than the cost of computation. As a result, Level-1 or Level-2 BLAS routines do not attain a significant portion of GPUs' peak performance.
Our GPU implementation of the POWERURV algorithm and RANDUTV algorithm uses a mix of routines from the cuBLAS library 20 from NVIDIA and the MAGMA library [52,53,20]. The MAGMA library is a collection of next generation linear algebra routines with GPU acceleration. For the POWERURV algorithm, we use the GEMM routine from cuBLAS and routines related to (unpivoted) QR decomposition from MAGMA. For the RANDUTV algorithm, we mostly use MAGMA routines except for the GEMM routine from cuBLAS to apply the Householder reflectors (see Section 2.4). These choices are mainly guided by empirical performance.
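For orientation, the following NumPy sketch reflects the overall structure of POWERURV as we understand it from [28] (Algorithm 1 is not restated in this section, so the details here are an assumption rather than the reference implementation). It makes visible why the method maps well onto GPUs: the work consists of GEMMs and unpivoted QR factorizations.

```python
import numpy as np

def powerurv(A, q):
    """POWERURV sketch (structure assumed from [28]): V from the unpivoted QR
    of (A^T A)^q G with G Gaussian, then U and T from the unpivoted QR of AV.
    No intermediate re-orthonormalization is shown; see the stability
    discussion in Section 4.2."""
    m, n = A.shape
    Y = np.random.standard_normal((n, n))
    for _ in range(q):
        Y = A.T @ (A @ Y)                           # GEMMs only
    V, _ = np.linalg.qr(Y)                          # unpivoted QR, V is n x n
    U, T = np.linalg.qr(A @ V, mode='complete')     # A = U T V^T
    return U, T, V
```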
Our implementation consists of a sequence of GPU level-3 BLAS calls. The matrix is copied to the GPU at the start, and all computations (with one exception, see below) are done on the GPU with no communication back to main memory until the computation completes.
The exception is that the SVD subroutines in MAGMA do not support any GPU interface that takes an input matrix on the GPU. This is a known issue of the MAGMA library, and we follow the common workaround: copy the input matrix from GPU to CPU and then call a MAGMA SVD subroutine (MAGMA copies the matrix back to GPU and computes its SVD).
Fortunately, the extra cost of data transfer is negligible because the matrices whose SVDs are needed in Algorithm 2 are all very small (of dimensions at most (b + p) × (b + p)).

Algorithm 2 (RANDUTV with oversampling and sample recycling)
Input: matrix A ∈ R^{m×n}, positive integers b and p, and a non-negative integer q.
Output: A = UTV^{*}, where U ∈ R^{m×m} and V ∈ R^{n×n} are orthogonal, and T ∈ R^{m×n} is upper trapezoidal.

6. NUMERICAL RESULTS
In this section, we present numerical experiments to demonstrate the performance of POWERURV (Algorithm 1) and RANDUTV (Algorithm 2). In particular, we compare them to the SVD and the HQRCP in terms of speed and accuracy. Since the SVD is the most accurate method, we use it as the baseline to show the speedup and the accuracy of the other methods. Results of the SVD were obtained using the MAGMA routine MAGMA_DGESDD,^21 where all orthogonal matrices are calculated. Results of the HQRCP were obtained using the MAGMA routine MAGMA_DGEQP3, where the orthogonal matrix Q was calculated.
All experiments were performed on an NVIDIA Tesla V100 graphics card with 32 GB of memory, which is connected to two Intel Xeon Gold 6254 18-core CPUs at 3.10GHz. Our code makes extensive use of the MAGMA (Version 2.5.4) library linked with the Intel MKL library (Version 20.0). It was compiled with the NVIDIA compiler NVCC (Version 11.3.58) on the Linux OS (5.4.0-72-generic.x86 64).
6.1. Computational speed. In this section, we investigate the speed of POWERURV and RANDUTV on the GPU and compare them to highly optimized implementations of the SVD and the HQRCP for the GPU. In Figures 3, 4, and 5, every factorization is computed on a standard Gaussian matrix A ∈ R^{n×n}. All methods discussed here are "direct" methods, whose running time depends only on the size of the input matrix, not on its entries. We compare the times (in seconds) of different algorithms/codes, where the input and output matrices exist on the CPU (time for moving data between CPU and GPU is included).
In each plot, we divide the given time by n^3 to make the asymptotic relationships between the experiments more clear, where n = 3 000, 4 000, 5 000, 6 000, 8 000, 10 000, 12 000, 15 000, 20 000, 30 000. The MAGMA routine MAGMA_DGESDD for computing the SVD ran out of memory when n = 30 000. Raw timing results are given in Table 1 in Appendix C.

FIGURE 3. (Left) computation times for the SVD, HQRCP and POWERURV on the GPU. (Right) speedups of the HQRCP and POWERURV over the SVD.

We observe in Figure 3 that POWERURV with q = 1, 2 power iterations consistently outperforms HQRCP. POWERURV with q = 3 power iterations delivers similar performance to HQRCP for large matrices, but arrives at the asymptotic region much faster. As expected, the SVD is much slower than the other two methods.
FIGURE 4. (Left) computation times for the RANDUTV without oversampling (b = 128) plotted against the computation time for HQRCP and the SVD on the GPU. (Right) speedups of the RANDUTV without oversampling and the HQRCP over the SVD.

We observe in Figure 4 that RANDUTV without oversampling handily outperforms HQRCP. The cost of increasing the parameter q is also quite small due to the high efficiency of matrix multiplication on the GPU.

FIGURE 5. (Left) computation times for the RANDUTV with oversampling (b = 128) plotted against the computation time for HQRCP and the SVD on the GPU. (Right) speedups of the RANDUTV with oversampling and the HQRCP over the SVD.

We observe in Figure 5 that RANDUTV with oversampling still outperforms HQRCP when n ≥ 15 000. In addition, observe that the distance between the lines for q = 2 and q = 1 is less than the distance between the lines for q = 0 and q = 1. This difference is representative of the savings gained with the bootstrapping technique whereby extra samples from one iteration are carried over to the next iteration.
To summarize, our results show that the two newly proposed algorithms (POWERURV and RANDUTV) both achieved clear speedups over the SVD. They are also faster than HQRCP for sufficiently large matrices.
6.2. Approximation error. In this section, we compare the errors in the low-rank approximations produced by SVD, HQRCP, POWERURV, and RANDUTV. Given a matrix A ∈ R^{n×n}, each rank-revealing factorization produces a decomposition A = UTV^{*}, where U ∈ R^{n×n} and V ∈ R^{n×n} are orthogonal, and T ∈ R^{n×n} is upper triangular. Given this factorization, a natural rank-k approximation to A is
(15)   A_k = U(:, 1 : k) T(1 : k, :) V^{*}.
Recall from Section 2.2 that the rank-k approximation produced by the SVD is optimal among all rank-k approximations, so we denote it as A_k^{optimal}. For each of the factorizations that we study, we evaluated the error
(16)   e_k = ‖A − A_k‖
as a function of k, and report the results in Figures 6 and 7. Three different test matrices are considered:
• Fast decay: This matrix is generated by first creating random orthogonal matrices U and V by performing unpivoted QR factorizations on two random matrices with i.i.d. entries drawn according to the standard normal distribution. Then, A_fast is computed with A_fast = UDV^{*}, where D is a diagonal matrix with diagonal entries d_ii = β^{(i−1)/(n−1)}, where β = 10^{−5}. (A sketch of this construction, together with the evaluation of (16), is given after this list.)
• S-shaped decay: This matrix is generated in the same manner as "fast decay," but the diagonal entries are chosen to first hover around 1, then quickly decay before leveling out at 10^{−2}, as shown in Figure 6c.
• BIE: This matrix is the result of discretizing a boundary integral equation (BIE) defined on a smooth closed curve in the plane. To be precise, we discretized the so-called "single layer" operator associated with the Laplace equation using a sixth order quadrature rule designed by Alpert [1]. This operator is well-known to be ill-conditioned, which necessitates the use of a rank-revealing factorization in order to solve the corresponding linear system in as stable a manner as possible.
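The following sketch, under our own naming, generates the "fast decay" matrix and evaluates the error (16) for a given factorization:

```python
import numpy as np

def fast_decay_matrix(n, beta=1e-5):
    """Generate the "fast decay" test matrix: A = U D V^T with random
    orthogonal U, V and d_ii = beta^((i-1)/(n-1)) (a sketch of the text above)."""
    U, _ = np.linalg.qr(np.random.standard_normal((n, n)))
    V, _ = np.linalg.qr(np.random.standard_normal((n, n)))
    d = beta ** (np.arange(n) / (n - 1))
    return (U * d) @ V.T                 # equals U @ diag(d) @ V.T

def error_curve(A, U, T, V, ks):
    """e_k = ||A - U(:,1:k) T(1:k,:) V^*||_2 for each k in ks, cf. (15)-(16)."""
    return [np.linalg.norm(A - U[:, :k] @ (T[:k, :] @ V.T), 2) for k in ks]
```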
Remark 7. Evaluating the error e_k defined in (16) for all k = 1, 2, . . . , n requires as much as O(n^3) work, which in practice took significantly longer than computing the rank-revealing factorizations. Figure 7 shows results corresponding to relatively large matrices of size n = 4 000, which look similar to those in Figure 6.

FIGURE 6. Errors in low-rank approximations for matrices "fast decay", "S-shaped decay", and "BIE" of size n = 400. For the RANDUTV factorizations, the block size was b = 50. The x-axis is the rank of the corresponding approximations.

FIGURE 7. Errors in low-rank approximations for matrices "fast decay", "S-shaped decay", and "BIE" of size n = 4 000. For the RANDUTV factorizations, the block size was b = 128 (as used in the numerical experiments in Section 6.1). The x-axis is the rank of the corresponding approximations.
Remark 8. As a curiosity, let us note that we have empirically observed that RANDUTV does surprisingly well at controlling relative errors. Figure 8 reports the error metric
   ‖A − A_k‖ / ‖A − A_k^{optimal}‖ − 1 = (‖A − A_k‖ − ‖A − A_k^{optimal}‖) / ‖A − A_k^{optimal}‖ ≥ 0
for our three test matrices. We see that while CPQR does not manage to control this error, RANDUTV does a decent job, in particular when the power iteration is incorporated.

FIGURE 8. Relative errors in low-rank approximations for matrices "fast decay", "S-shaped decay", and "BIE" of size n = 400. For the RANDUTV factorizations, the block size was b = 50. The x-axis is the rank of the corresponding approximations.
We make three observations based on the results in Figures 6, 7, and 8. First, POWERURV and RANDUTV are more accurate than HQRCP for a given k. Errors from POWERURV and RANDUTV are substantially smaller in almost all cases studied. Take Figure 6a as an example. The errors from HQRCP (green lines) are far from the minimal errors (black lines) achieved by the SVD, whereas the errors from POWERURV and RANDUTV (red and blue lines) are much closer. It is also obvious in Figure 8 that the normalized errors from HQRCP (green lines) are usually larger (higher) than those from POWERURV and RANDUTV (red and blue lines).
Second, RANDUTV is better than POWERURV when the same number of power iteration steps is used. In a large number of cases, errors from RANDUTV are significantly smaller. Figure 6 shows that the blue lines (RANDUTV) lie between the black lines (SVD) and the red lines (POWERURV), and it is even more obvious in Figure 8 that the blue lines (RANDUTV) are lower than the red lines (POWERURV).
Third, the new oversampling scheme in RANDUTV provides a boost in the accuracy of the low-rank approximations obtained, even when the singular values decay slowly. The accuracy improvement is most pronounced when the singular values decay fast, as shown in Figure 8a. The figure also shows that without oversampling in RANDUTV, the accuracies of rank-k approximations deteriorate when k is approximately a multiple of the block size b. This phenomenon is known [32] and is the reason for incorporating the oversampling scheme in RANDUTV.
Remark 9. The error e_k defined in (16) measures how well condition (b) in Section 1.1 is satisfied. In the interest of space, we do not report analogous metrics for condition (a). Generally speaking, the results for POWERURV and RANDUTV are similar, as reported in [28,38].
7. CONCLUSIONS
The computation of a rank-revealing factorization has important applications in, e.g., low-rank approximation and subspace identification. While the SVD is theoretically optimal, computing the SVD of a matrix requires a significant amount of work that cannot fully leverage modern parallel computing environments such as a GPU. For example, computing the SVD of a 15 000 × 15 000 matrix took more than a minute (79.4 seconds) on an NVIDIA V100 GPU.
We have described two randomized algorithms, POWERURV and RANDUTV, as economical alternatives to the SVD. As we demonstrate through numerical experiments, both methods are much faster than the SVD on a GPU since they are designed to fully leverage parallel, communication-constrained hardware, and they provide close-to-optimal accuracy. The main features of the two new methods, respectively, are that (1) POWERURV has a simple formulation, as shown in Algorithm 1, so that it can be implemented easily on a GPU (or other parallel architectures), and (2) RANDUTV is a blocked incremental method that can be used to compute partial factorizations efficiently.
Compared to the CPQR factorization, which is commonly used as an economical alternative to the SVD for low-rank approximation, the new methods, POWERURV and RANDUTV, are much more accurate and are similar or better in terms of speed on a GPU. Between the two methods, the RANDUTV is more accurate and generally faster, especially when power iteration is used. The accuracy of the RANDUTV can be further improved through the described oversampling technique (Section 4.2), which requires a modest amount of extra computation.
The two proposed methods can be viewed as evolutions of the RSVD. The distinction, however, is that the RSVD and related randomized methods for low-rank approximation [32,36,39] work best when the numerical rank k of an input matrix is much smaller than its dimensions. The key advantage of the two new methods is computational efficiency, in particular on GPUs, and they provide high speed for any rank.
APPENDIX A. RESULTS RELATED TO RANDOM MATRICES

Theorem 2. Let G ∈ R^{m×n} be a standard Gaussian matrix. Then, with probability 1, G has full rank.

Proof. Without loss of generality, assume m ≥ n. It is obvious that
   Pr[ G(1 : n, 1 : n) is rank-deficient ] ≥ Pr[ G is rank-deficient ].
According to [45, Equation (3.2)], the probability that G(1 : n, 1 : n) is singular equals zero. Therefore, the theorem holds.

Theorem 3. Let A ∈ R^{m×n} have rank k ≤ min(m, n) and G ∈ R^{n×ℓ} be a standard Gaussian matrix. Let r = min(k, ℓ). Then, with probability 1, matrix B = AG has rank r and the first r columns of B are linearly independent.

Proof. Let the thin SVD of A ∈ R^{m×n} be A = UΣV^{*}, where U ∈ R^{m×k} is an orthonormal matrix, Σ ∈ R^{k×k} is a diagonal matrix, and V ∈ R^{n×k} is an orthonormal matrix. Therefore, B(:, 1 : r) = UΣV^{*} G(:, 1 : r). Since V^{*} G(:, 1 : r) ∈ R^{k×r} also has the standard Gaussian distribution, it is full rank with probability 1 according to Theorem 2. So it is obvious that B(:, 1 : r) has full rank. On the other hand, we know
   rank(B) ≤ min(rank(A), rank(G)) = r.
Therefore, it holds that rank(B) = r.

Corollary 4. Let A ∈ R^{m×n} have rank k ≤ min(m, n) and G ∈ R^{n×n} be a standard Gaussian matrix. Let [Q, R] = HQR_FULL(G) (G is invertible with probability 1 according to Theorem 2). Then, with probability 1, matrix B = AQ has rank k and the first k columns of B are linearly independent.

Proof. Since Q, a unitary matrix, has full rank, we know rank(B) = rank(A) = k. Let C = BR = AG. We know that C(:, 1 : k) = B(:, 1 : k)R(1 : k, 1 : k) has full rank according to Theorem 3. So it is obvious that B(:, 1 : k) has full rank since R is invertible with probability 1.

Theorem 5. Given a matrix A ∈ R^{m×n}, it holds that, for a positive integer p,
   rank((A^{*}A)^{p}) = rank((AA^{*})^{p}) = rank(A),
and, for a non-negative integer q,
   rank(A(A^{*}A)^{q}) = rank((AA^{*})^{q}A) = rank((A^{*}A)^{q}A^{*}) = rank(A^{*}(AA^{*})^{q}) = rank(A).

Proof. Suppose rank(A) = k. Let the thin SVD of A be A = UΣV^{*}, where U ∈ R^{m×k} and V ∈ R^{n×k} are orthonormal matrices, and Σ ∈ R^{k×k} is a diagonal matrix with the positive singular values. We know that
   (A^{*}A)^{p} = VΣ^{2p}V^{*}   and   (AA^{*})^{p} = UΣ^{2p}U^{*},
and
   A(A^{*}A)^{q} = (AA^{*})^{q}A = UΣ^{2q+1}V^{*}   and   (A^{*}A)^{q}A^{*} = A^{*}(AA^{*})^{q} = VΣ^{2q+1}U^{*}.
So it is obvious that the theorem holds.
APPENDIX B. RANDUTV ALGORITHM ADAPTED FROM [40]

Algorithm 3 [U, T, V] = RANDUTV_BASIC(A, b, q)
Input: matrix A ∈ R^{m×n}, a positive integer b, and a non-negative integer q.
Output: A = UTV^{*}, where U ∈ R^{m×m} and V ∈ R^{n×n} are orthogonal, and T ∈ R^{m×n} is upper trapezoidal.
1: T = A; U = I_m; V = I_n;
2: for i = 1 : min(⌈m/b⌉, ⌈n/b⌉) do
3:   I_1 = 1 : (b(i − 1)); I_2 = (b(i − 1) + 1) : min(bi, m); I_3 = (bi + 1) : m;
4:   J_1 = 1 : (b(i − 1)); J_2 = (b(i − 1) + 1) : min(bi, n); J_3 = (bi + 1) : n;
5:   if (I_3 and J_3 are both nonempty) then
6:     G = RANDN(m − b(i − 1), b)
7:     Y = T([I_2, I_3], [J_2, J_3])^{*} G
8:     for j = 1 : q do
9:       Y = T([I_2, I_3], [J_2, J_3])^{*} (T([I_2, I_3], [J_2, J_3]) Y)
10:    end for
11:    [V^{(i)}, ∼] = HQR_FULL(Y)
12:    T(:, [J_2, J_3]) = T(:, [J_2, J_3]) V^{(i)}
13:    V(:, [J_2, J_3]) = V(:, [J_2, J_3]) V^{(i)}
14:
15:    [U^{(i)}, R] = HQR_FULL(T([I_2, I_3], J_2))
16:    U(:, [I_2, I_3]) = U(:, [I_2, I_3]) U^{(i)}
17:    T([I_2, I_3], J_3) = (U^{(i)})^{*} T([I_2, I_3], J_3)
18:    T(I_3, J_2) = ZEROS(m − bi, b)
19:
20:    [U_small, D_small, V_small] = SVD(R(1 : b, 1 : b))
21:    U(:, I_2) = U(:, I_2) U_small
22:    V(:, J_2) = V(:, J_2) V_small
23:    T(I_2, J_2) = D_small
24:    T(I_2, J_3) = U_small^{*} T(I_2, J_3)
25:    T(I_1, J_2) = T(I_1, J_2) V_small
26:  else
27:    [U_small, D_small, V_small] = SVD(T([I_2, I_3], [J_2, J_3]))
28:    U(:, [I_2, I_3]) = U(:, [I_2, I_3]) U_small
29:    V(:, [J_2, J_3]) = V(:, [J_2, J_3]) V_small
30:    T([I_2, I_3], [J_2, J_3]) = D_small
31:    T(I_1, [J_2, J_3]) = T(I_1, [J_2, J_3]) V_small
32:  end if
33: end for

APPENDIX C. RAW DATA FOR FIGURE 3, 4, AND 5

TABLE 1. Timing results (in seconds) for computing rank-revealing factorizations on a GPU using the SVD, the HQRCP, the POWERURV, and the RANDUTV. The SVD is calculated using the MAGMA routine MAGMA_DGESDD. The HQRCP is calculated using the MAGMA routine MAGMA_DGEQP3, and the orthogonal matrix Q is calculated using the MAGMA routine MAGMA_DORGQR2. The POWERURV and the RANDUTV are given in Algorithm 1 and 2, respectively.

n      | SVD      | HQRCP    | POWERURV q=1 | POWERURV q=2 | POWERURV q=3
3,000  | 1.98e+00 | 9.26e-01 | 5.06e-01     | 6.59e-01     | 9.38e-01
4,000  | 4.51e+00 | 1.58e+00 | 7.80e-01     | 9.79e-01     | 1.42e+00
5,000  | 5.97e+00 | 2.52e+00 | 1.13e+00     | 1.46e+00     | 1.92e+00
6,000  | 9.29e+00 | 3.66e+00 | 1.60e+00     | 2.14e+00     | 2.76e+00
8,000  | 1.85e+01 | 7.42e+00 | 2.68e+00     | 3.87e+00     | 5.29e+00
10,000 | 3.12e+01 | 1.11e+01 | 4.85e+00     | 7.29e+00     | 9.37e+00
12,000 | 4.74e+01 | 1.70e+01 | 7.62e+00     | 1.10e+01     | 1.50e+01
15,000 | 7.94e+01 | 2.84e+01 | 1.29e+01     | 1.94e+01     | 2.51e+01
20,000 | 1.71e+02 | 5.61e+01 | 2.85e+01     | 4.27e+01     | 6.03e+01
30,000 | N/A      | 1.60e+02 | 9.12e+01     | 1.32e+02     | 1.87e+02

n | RANDUTV (p = 0, b = 128): q = 0, q = 1, q = 2 | RANDUTV (p = b = 128): q = 0, q = 1, q = 2
FIGURE 1. Computation time of the SVD, HQRCP, POWERURV, and RANDUTV for an n × n input matrix (SVD ran out of memory for the largest matrix on the GPU). CPU: two Intel Xeon Gold 6254 18-core CPUs at 3.10GHz; GPU: NVIDIA Tesla V100. Results for SVD and HQRCP were obtained through the Intel MKL library on the CPU and the MAGMA library [52,53,20] on the GPU. (Axes: n versus time in seconds on logarithmic scales; legend: SVD, CPQR, powerURV, and randUTV, each on CPU and GPU.)
Footnotes
• If m < n, we may simply factorize the transpose and take the transpose of the result.
• https://www.netlib.org/lapack/lug/node42.html
16. According to Theorem 3 and Theorem 5 in Appendix A, we know that, with probability 1, rank(Y) = rank(A) and the first rank(A) columns in Y are linearly independent.
17. It is easy to show that, with probability 1, rank(B) = rank(A) and the first rank(A) columns in B are linearly independent.
18. We would have detected rank deficiency rank(A) < b with the SVD, and the first rank(A) columns in W must be linearly independent.
19. According to Theorem 3 and Theorem 5 in Appendix A, we know that, with probability 1, rank(Y) = rank(A) and the first rank(A) columns in Y are linearly independent.
20. https://docs.nvidia.com/cuda/cublas/index.html
21. The DGESDD algorithm uses a divide-and-conquer algorithm, which is different from the DGESVD algorithm based on QR iterations. The former is also known to be faster for large matrices; see https://www.netlib.org/lapack/lug/node71.html and Table 1 with CPU timings of both DGESVD and DGESDD in [38].
REFERENCES
[1] Bradley K. Alpert, Hybrid Gauss-trapezoidal quadrature rules, SIAM Journal on Scientific Computing 20 (1999), no. 5, 1551-1584.
[2] Jesse L. Barlow, Modification and maintenance of ULV decompositions, Applied Mathematics and Scientific Computing, Springer, 2002, pp. 31-62.
[3] Mario Bebendorf and Sergej Rjasanow, Adaptive low-rank approximation of collocation matrices, Computing 70 (2003), no. 1, 1-24.
[4] Christian H. Bischof and Gautam M. Shroff, On updating signal subspaces, IEEE Transactions on Signal Processing 40 (1992), no. 1, 96-105.
[5] Ake Bjorck, Numerical methods for least squares problems, vol. 51, SIAM, 1996.
[6] L. Susan Blackford, Antoine Petitet, Roldan Pozo, Karin Remington, R. Clint Whaley, James Demmel, Jack Dongarra, Iain Duff, Sven Hammarling, Greg Henry, et al., An updated set of basic linear algebra subprograms (BLAS), ACM Transactions on Mathematical Software 28 (2002), no. 2, 135-151.
[7] Andre R. Brodtkorb, Christopher Dyken, Trond R. Hagen, Jon M. Hjelmervik, and Olaf O. Storaasli, State-of-the-art in heterogeneous computing, Scientific Programming 18 (2010), no. 1, 1-33.
[8] Peter Businger and Gene H. Golub, Linear least squares solutions by Householder transformations, Numerische Mathematik 7 (1965), no. 3, 269-276.
[9] Tony F. Chan, An improved algorithm for computing the singular value decomposition, ACM Transactions on Mathematical Software (TOMS) 8 (1982), no. 1, 72-83.
[10] Tony F. Chan, Rank revealing QR factorizations, Linear Algebra and its Applications 88 (1987), 67-82.
[11] Tony F. Chan and Per Christian Hansen, Computing truncated singular value decomposition least squares solutions by rank revealing QR-factorizations, SIAM Journal on Scientific and Statistical Computing 11 (1990), no. 3, 519-530.
[12] Tony F. Chan and Per Christian Hansen, Some applications of the rank revealing QR factorization, SIAM Journal on Scientific and Statistical Computing 13 (1992), no. 3, 727-741.
[13] Shivkumar Chandrasekaran and Ilse C. F. Ipsen, On rank-revealing factorisations, SIAM Journal on Matrix Analysis and Applications 15 (1994), no. 2, 592-622.
[14] Kenneth L. Clarkson and David P. Woodruff, Low-rank approximation and regression in input sparsity time, Journal of the ACM (JACM) 63 (2017), no. 6, 54.
[15] Jan J. M. Cuppen, A divide and conquer method for the symmetric tridiagonal eigenproblem, Numerische Mathematik 36 (1980), no. 2, 177-195.
[16] James Demmel, Ioana Dumitriu, and Olga Holtz, Fast linear algebra is stable, Numerische Mathematik 108 (2007), no. 1, 59-91.
[17] James W. Demmel, Applied numerical linear algebra, SIAM, 1997.
[18] James W. Demmel, Laura Grigori, Ming Gu, and Hua Xiang, Communication avoiding rank revealing QR factorization with column pivoting, SIAM Journal on Matrix Analysis and Applications 36 (2015), no. 1, 55-89.
[19] Yijun Dong and Per-Gunnar Martinsson, Simpler is better: a comparative study of randomized algorithms for computing the CUR decomposition, arXiv preprint arXiv:2104.05877 (2021).
[20] Jack Dongarra, Mark Gates, Azzam Haidar, Jakub Kurzak, Piotr Luszczek, Stanimire Tomov, and Ichitaro Yamazaki, Accelerating numerical dense linear algebra calculations with GPUs, Numerical Computations with GPUs (2014), 1-26.
[21] Petros Drineas, Ravi Kannan, and Michael W. Mahoney, Fast Monte Carlo algorithms for matrices. II. Computing a low-rank approximation to a matrix, SIAM J. Comput. 36 (2006), no. 1, 158-183.
[22] Carl Eckart and Gale Young, The approximation of one matrix by another of lower rank, Psychometrika 1 (1936), no. 3, 211-218.
[23] Ricardo D. Fierro and Per Christian Hansen, Low-rank revealing UTV decompositions, Numerical Algorithms 15 (1997), no. 1, 37-55.
[24] Ricardo D. Fierro, Per Christian Hansen, and Peter Søren Kirk Hansen, UTV tools: Matlab templates for rank-revealing UTV decompositions, Numerical Algorithms 20 (1999), no. 2-3, 165.
[25] Gene Golub, Numerical methods for solving linear least squares problems, Numerische Mathematik 7 (1965), no. 3, 206-216.
[26] Gene H. Golub and Charles F. Van Loan, An analysis of the total least squares problem, SIAM Journal on Numerical Analysis 17 (1980), no. 6, 883-893.
[27] Gene H. Golub and Charles F. Van Loan, Matrix computations, third ed., Johns Hopkins Studies in the Mathematical Sciences, Johns Hopkins University Press, Baltimore, MD, 1996.
[28] Abinand Gopal and Per-Gunnar Martinsson, The powerURV algorithm for computing rank-revealing full factorizations, arXiv preprint arXiv:1812.06007 (2018).
[29] Laura Grigori, Sebastien Cayrols, and James W. Demmel, Low rank approximation of a sparse matrix based on LU factorization with column and row tournament pivoting, SIAM Journal on Scientific Computing 40 (2018), no. 2, C181-C209.
[30] Ming Gu and Stanley C. Eisenstat, A divide-and-conquer algorithm for the bidiagonal SVD, SIAM Journal on Matrix Analysis and Applications 16 (1995), no. 1, 79-92.
[31] Ming Gu and Stanley C. Eisenstat, Efficient algorithms for computing a strong rank-revealing QR factorization, SIAM Journal on Scientific Computing 17 (1996), no. 4, 848-869.
[32] Nathan Halko, Per-Gunnar Martinsson, and Joel A. Tropp, Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions, SIAM Review 53 (2011), no. 2, 217-288.
[33] William Kahan, Numerical linear algebra, Canadian Mathematical Bulletin 9 (1966), no. 5, 757-801.
[34] Ivars P. Kirsteins and Donald W. Tufts, Adaptive detection using low rank approximation to a data matrix, IEEE Transactions on Aerospace and Electronic Systems 30 (1994), no. 1, 55-67.
[35] Charles L. Lawson and Richard J. Hanson, Solving least squares problems, vol. 15, SIAM, 1995.
[36] Edo Liberty, Franco Woolfe, Per-Gunnar Martinsson, Vladimir Rokhlin, and Mark Tygert, Randomized algorithms for the low-rank approximation of matrices, Proceedings of the National Academy of Sciences 104 (2007), no. 51, 20167-20172.
[37] David Luebke, CUDA: Scalable parallel programming for high-performance scientific computing, 2008 5th IEEE International Symposium on Biomedical Imaging: From Nano to Macro, IEEE, 2008, pp. 836-838.
[38] Per-Gunnar Martinsson, Gregorio Quintana-Ortí, and Nathan Heavner, randUTV: A blocked randomized algorithm for computing a rank-revealing UTV factorization, ACM Transactions on Mathematical Software (TOMS) 45 (2019), no. 1, 1-26.
[39] Per-Gunnar Martinsson, Vladimir Rokhlin, and Mark Tygert, A randomized algorithm for the decomposition of matrices, Applied and Computational Harmonic Analysis 30 (2011), no. 1, 47-68.
[40] P. G. Martinsson and S. Voronin, A randomized blocked algorithm for efficiently computing rank-revealing factorizations of matrices, SIAM Journal on Scientific Computing 38 (2016), no. 5, S485-S507.
[41] Leon Mirsky, Symmetric gauge functions and unitarily invariant norms, The Quarterly Journal of Mathematics 11 (1960), no. 1, 50-59.
[42] John D. Owens, Mike Houston, David Luebke, Simon Green, John E. Stone, and James C. Phillips, GPU computing, (2008).
[43] Haesun Park and Lars Eldén, Downdating the rank-revealing URV decomposition, SIAM Journal on Matrix Analysis and Applications 16 (1995), no. 1, 138-155.
[44] Gregorio Quintana-Ortí, Xiaobai Sun, and Christian H. Bischof, A BLAS-3 version of the QR factorization with column pivoting, SIAM Journal on Scientific Computing 19 (1998), no. 5, 1486-1494.
[45] Mark Rudelson and Roman Vershynin, Non-asymptotic theory of random matrices: extreme singular values, Proceedings of the International Congress of Mathematicians 2010 (ICM 2010), World Scientific, 2010, pp. 1576-1602.
[46] Arvind K. Saibaba, Randomized subspace iteration: Analysis of canonical angles and unitarily invariant norms, SIAM Journal on Matrix Analysis and Applications 40 (2019), no. 1, 23-48.
[47] Robert Schreiber and Charles Van Loan, A storage-efficient WY representation for products of Householder transformations, SIAM Journal on Scientific and Statistical Computing 10 (1989), no. 1, 53-57.
[48] Gilbert W. Stewart, An updating algorithm for subspace tracking, IEEE Transactions on Signal Processing 40 (1992), no. 6, 1535-1541.
[49] Gilbert W. Stewart, Matrix algorithms: Volume 1: Basic decompositions, vol. 1, SIAM, 1998.
[50] G. W. Stewart, Rank degeneracy, SIAM Journal on Scientific and Statistical Computing 5 (1984), no. 2, 403-413.
[51] G. W. Stewart, UTV decompositions, Pitman Research Notes in Mathematics Series (1994), 225-225.
[52] Stanimire Tomov, Jack Dongarra, and Marc Baboulin, Towards dense linear algebra for hybrid GPU accelerated manycore systems, Parallel Computing 36 (2010), no. 5-6, 232-240.
[53] Stanimire Tomov, Rajib Nath, Hatem Ltaief, and Jack Dongarra, Dense linear algebra solvers for multicore with GPU accelerators, Proc. of the IEEE IPDPS'10 (Atlanta, GA), IEEE Computer Society, April 19-23 2010, pp. 1-8.
[54] Lloyd N. Trefethen and David Bau III, Numerical linear algebra, vol. 50, SIAM, 1997.
[55] Field G. Van Zee, Robert A. Van de Geijn, and Gregorio Quintana-Ortí, Restructuring the tridiagonal and bidiagonal QR algorithms for performance, ACM Transactions on Mathematical Software (TOMS) 40 (2014), no. 3, 1-34.
[56] Wenjian Yu, Yu Gu, and Yaohang Li, Efficient randomized algorithms for the fixed-precision low-rank matrix approximation, SIAM Journal on Matrix Analysis and Applications 39 (2018), no. 3, 1339-1359.
| [] |
[
"The Dynamic Persistence of Economic Shocks *",
"The Dynamic Persistence of Economic Shocks *"
] | [
"Jozef Baruník \nInstitute of Economic Studies\nCharles University\nOpletalova 26110 00PragueCzech Republic\n\nInstitute of Information Theory and Automation\nThe Czech Academy of Sciences\nPod Vodarenskou Vezi 4182 00PragueCzech Republic\n",
"Lukáš Vácha \nInstitute of Economic Studies\nCharles University\nOpletalova 26110 00PragueCzech Republic\n\nInstitute of Information Theory and Automation\nThe Czech Academy of Sciences\nPod Vodarenskou Vezi 4182 00PragueCzech Republic\n"
] | [
"Institute of Economic Studies\nCharles University\nOpletalova 26110 00PragueCzech Republic",
"Institute of Information Theory and Automation\nThe Czech Academy of Sciences\nPod Vodarenskou Vezi 4182 00PragueCzech Republic",
"Institute of Economic Studies\nCharles University\nOpletalova 26110 00PragueCzech Republic",
"Institute of Information Theory and Automation\nThe Czech Academy of Sciences\nPod Vodarenskou Vezi 4182 00PragueCzech Republic"
] | [] | This paper presents a model for smoothly varying heterogeneous persistence of economic data. We argue that such dynamics arise naturally from the dynamic nature of economic shocks with various degree of persistence. The identification of such dynamics from data is done using localised regressions. Empirically, we identify rich persistence structures that change smoothly over time in two important data sets: inflation, which plays a key role in policy formulation, and stock volatility, which is crucial for risk and market analysis. | 10.2139/ssrn.4467110 | [
"https://export.arxiv.org/pdf/2306.01511v1.pdf"
] | 259,063,936 | 2306.01511 | 4d2cebb1c8e42e1de71a5249160eb14a14267a15 |
The Dynamic Persistence of Economic Shocks *
June 5, 2023
Jozef Baruník
Institute of Economic Studies
Charles University
Opletalova 26110 00PragueCzech Republic
Institute of Information Theory and Automation
The Czech Academy of Sciences
Pod Vodarenskou Vezi 4182 00PragueCzech Republic
Lukáš Vácha
Institute of Economic Studies
Charles University
Opletalova 26110 00PragueCzech Republic
Institute of Information Theory and Automation
The Czech Academy of Sciences
Pod Vodarenskou Vezi 4182 00PragueCzech Republic
The Dynamic Persistence of Economic Shocks *
June 5, 2023* We thank Rainer von Sachs, Roman Liesenfeld, Nikolas Hautsch, Christian Hafner, Wolfgang Härdle, Lubos Hanus for invaluable discussions and comments. We acknowledge insightful com-ments from numerous seminar presentations, such as: the Recent Advances in Econometrics (2023, Louvain), the 2021 and 2022 STAT of ML conference in Prague; the 15th International Conference on Computational and Financial Econometrics.. The support from the Czech Science Foundation under the 19-28231X (EXPRO) project is gratefully acknowledged. For estimation of the quan-tities proposed, we provide a package tvPersistence.jl in JULIA. The package is available at https://github.com/barunik/tvPersistence.jl. Disclosure Statement: Jozef Baruník and Lukas Vacha have nothing to disclose. † Corresponding author,persistence heterogeneitywold decompositionlocal stationaritytime-varying parameters JEL: C14C18C22C50
This paper presents a model for smoothly varying heterogeneous persistence of economic data. We argue that such dynamics arise naturally from the dynamic nature of economic shocks with various degree of persistence. The identification of such dynamics from data is done using localised regressions. Empirically, we identify rich persistence structures that change smoothly over time in two important data sets: inflation, which plays a key role in policy formulation, and stock volatility, which is crucial for risk and market analysis.
Introduction
It is well documented that macroeconomic and financial variables have exhibited a very high degree of time variation over the past decades (Primiceri, 2005;Justiniano and Primiceri, 2008;Bekierman and Manner, 2018;Chen et al., 2018) as both stable and uncertain periods associated with different states of an economy were driven by different shocks. At the same time, an increasing number of authors argue that these variables are driven by shocks that influence their future value with heterogeneous levels of persistence (Bandi and Tamoni, 2017;Dew-Becker and Giglio, 2016;Bandi and Tamoni, 2022). 1 A possibly non-linear combination of transitory and persistent responses to shocks will produce time series with heterogeneous persistence structures that remain hidden to the observer using traditional methods. Given this discussion, it is natural to ask to what extent economic data are driven by shocks that are both heterogeneously persistent and dynamic, and how we can infer such rich dynamics from the data.
Inferring time-varying persistence from data on important economic series such as inflation, consumption, economic growth, unemployment rates or various measures of uncertainty has crucial implications for policy making, modelling or forecasting. However, despite the progress made in exploring unit roots (Evans and Savin, 1981;Nelson and Plosser, 1982;Perron, 1991), structural breaks (Perron, 1989), or more complicated long memory or fractionally integrated structures that can exhibit large amounts of time persistence without being non-stationary (Hassler and Wolters, 1995;Baillie et al., 1996), there is still no clear consensus on how to explore such dynamic nature of data. The inability to identify the dependence from the data alone leads to a tendency to rely on assumptions that are difficult, if not impossible, to validate. To better understand and forecast economic time series, we need an approach that can precisely localise the horizons and time periods in which the crucial information 1 We use the term persistence to capture a property of a time series that is closely related to its autocorrelation structure. In particular, the degree of persistence gives us a precise description of how a shock will affect the series. A low degree of persistence indicates the transitory nature of shocks, which force the time series to return to its mean path. In contrast, when shocks push the time series away from the mean path, they are said to be highly persistent. A shock tends to persist for a long time.
occurs.
The aim of this paper is to provide a representation for a non-stationary time series that allows a researcher to identify and explore its rich time-varying heterogeneous persistence structures. We aim to identify localised persistence that will be useful for modelling and forecasting purposes. Our work is closely related to the recent strand of the literature that proposes to represent a covariance stationary time series as a linear combination of orthogonal components carrying information about heterogeneous cycles of alternative lengths (Ortu et al., 2020). While these methods are particularly well suited to studying the heterogeneously persistent structure of a time series, stable as well as uncertain times associated with different states of the economy imply a time-varying nature of responses to shocks that remains hidden when assuming stationary data. Thus, the localisation of persistence structures will open up new avenues for modelling and forecasting. A model that allows the persistence structure to change smoothly over time is essential, since it is unrealistic to assume that the stochastic future of a time series is stable in the long run. At the same time we observe non-stationary behaviour of data even in shorter time periods in a number of cases. Therefore, modelling and forecasting under the assumption of stationarity can be misleading.
Different degrees of persistence in economic variables are natural and can be reconciled with agents' preferences, which differ according to their horizon of interest. Economic theory suggests that the marginal utility of agents' preferences depends on the cyclical components of consumption (Giglio et al., 2015; Bandi and Tamoni, 2017), and the literature documents frequency-specific investor preferences (Dew-Becker and Giglio, 2016; Neuhierl and Varneskov, 2021; Bandi et al., 2021a) and relates them to investment horizons in their risk attitudes (Dew-Becker and Giglio, 2016). Such behaviour can be observed, for example, under myopic loss aversion, where an agent's decision depends on the valuation horizon. Unexpected shocks or news have the capacity to alter such preferences and may therefore generate transitory and persistent fluctuations of different magnitudes. 2 Importantly, not many economic relationships remain constant over decades, years or even months, and the evolution of the economy with unprecedented declines in economic activity, such as the COVID pandemic or the recent severe impact of the Russian war with Ukraine, generates very different persistence structures. Output fluctuations may persist for a long time, but not forever, and will eventually disappear (Cochrane, 1988). This discussion calls for a new framework in which heterogeneity in decision making across horizons, horizon-specific risk aversion and the like are not based on the assumption of stationary data, but are truly time-varying.
To identify time-varying transitory and persistent components of a time series, we propose a time-varying extended Wold decomposition (TV-EWD) that works with localised heterogeneous persistence structures. Assuming stationarity of a small neighbourhood around a given fixed point in time, we allow time variation in the coefficients with the notion of locally stationary processes (Dahlhaus and Polonik, 2009). Our time-varying extended Wold decomposition, which relaxes the stationarity assumption of Ortu et al. (2020), then formalises the idea that a time series is represented by time-varying persistence structures. Our decomposition is informative about the duration of the fluctuation that is most relevant to the variability of the time series at a given point in time, and sheds light on potential economic mechanisms driving the time series under consideration at that point in time.
To the best of our knowledge, we are the first to study the time-varying degree of persistence in time series. 3 While such a decomposition is potentially useful for modelling, as it allows us to better characterise the dependence structures of the data, our results can also be used by the forecasting literature. As noted by Stock and Watson (2017), we have seen very slow progress in the forecasting accuracy of economic time series over the past decades. The first reason is that the information is hidden under the noise and is unevenly distributed over different horizons. Second, economic time series are dynamic and very often non-stationary when we model them over a long period. The analysis proposed in this paper can accurately extract the relevant information and build a more accurate forecasting model.

3 Lima and Xiao (2007) note that a large number of time series show the existence of local or temporary persistence.
The identification of the time-varying persistence structure has a number of advantages over traditional methods based on the Wold decomposition. The traditional Wold decomposition, which underlies the vast majority of contemporary models, gives us only aggregate information about the speed, horizon and intensity of shock persistence. It is a coarse and imprecise description that is insufficient to identify the precise structure of persistence in a given period. To capture the heterogeneity of persistence, it is necessary to consider the duration (propagation) of shocks at different levels of persistence and at different points in time.
In two different empirical examples, we show that the persistence structure found in the data is not only highly heterogeneous, but also time-varying. We have chosen to study two very different datasets that are important to economists: inflation and stock volatility. Both datasets exhibit typical persistence features and are crucial to understand. While it is the properties of aggregate inflation that are ultimately of interest to policymakers, the characteristics and determinants of the behavioural mechanisms underlying price-setting are an important factor in the way inflation behaves over time. The persistence of inflation has direct implications for the conduct of monetary policy. Similarly, stock market volatility is of great interest as one of the key measures of risk and uncertainty. We show that even in periods of very high persistence, we can uncover less persistent sub-periods where the transitory nature of shocks prevails. Our model, which can accurately identify such dynamics within the time-varying persistence structure, is then useful for identifying the dynamics driving the data and leads to improved forecasts.
The remainder of the paper is structured as follows. Section 2 proposes a time-varying extended Wold decomposition based on a locally stationary process, discusses the methodology, forecasting models based on such a decomposition, and estimation. Section 3 examines the time-varying persistence of US inflation and the volatility of major US stocks. Section 4 concludes.
Time variation of time series components with different levels of persistence
The most fundamental justification for time series analysis is Wold's decomposition theorem. It states that any covariance stationary time series can be represented as the sum of a deterministic component and a (possibly infinite-order) moving average of its own past shocks (Wold, 1938; Hamilton, 2020). This is an enormously important fact in the economic literature, useful to macroeconomists when studying impulse response functions, and central to tracing the mechanisms of economic shocks to improve policy analysis.
At the same time, this is only one of the possible representations of a time series, and it is particularly suitable for cases where we can assume stationarity of the model. In other cases, where we cannot assume that the stochastic properties of the data are stable over time, an unconditional approach built on the stationarity assumption may be misleading. It is important to recognise that other representations may capture deeper properties of the series, which may also change smoothly over time. As argued in the introduction, we want to explore properties of time variation as well as properties related to different levels of persistence in the time series.
The latter is made possible by the persistence-based Wold decomposition proposed by Ortu et al. (2020); Bandi et al. (2021b), who show how to decompose stationary time series into the sum of orthogonal components associated with their own levels of persistence. These individual components have a Wold representation defined with respect to the scale-specific shocks with heterogeneous persistence. Here we aim to provide a persistence-based representation for a locally stationary process (Dahlhaus, 1996) and discuss how to decompose a locally stationary process into independent components with different degrees of persistence. With the proposed model we will be able to study the time variation of components with different degrees of persistence.
Locally stationary processes
While stationarity has played an important role in time series analysis for decades, due to the availability of natural linear Gaussian modelling frameworks, many economic relationships are not stationary in the longer run. The state of the economy, as well as the behaviour of agents, is often highly dynamic, and the assumption of time-invariant mechanisms generating the data is often unrealistic. 4 A more general nonstationary process can be one that is locally close to a stationary process at each point in time, but whose properties (covariances, parameters, etc.) gradually change in a nonspecific way over time. The idea that the process can be treated as stationary only over a limited period of time, while still being amenable to estimation, is not new. So-called locally stationary processes were introduced in Dahlhaus (1996).
More formally, assume an economic variable of interest follows a nonstationary process $x_t$ depending on some time-varying parameter model. In this framework, we replace $x_t$ by a triangular array of observations ($x_{t,T};\ t = 1, \ldots, T$), where $T$ is the sample size, and we assume that we observe $x_{t,T}$ at time points $t = 1, \ldots, T$. Such a nonstationary process $x_{t,T}$ can be approximated locally (Dahlhaus, 1996) around each rescaled and fixed time point $u \approx t/T$, $u \in [0,1]$, by a stationary process $x_t(u)$. In other words, under some suitable regularity conditions,
$$|x_{t,T} - x_t(u)| = O_p\left(\left|\frac{t}{T} - u\right| + \frac{1}{T}\right).$$
While stationary approximations vary smoothly over time as $u \mapsto x_t(u)$, locally stationary processes can be interpreted as processes which change their (approximate) stationary properties smoothly over time. The main properties of $x_{t,T}$ are therefore encoded in the stationary approximations, and hence in the estimation we will focus on quantities $\mathbb{E}\left[g(x_t(u), x_{t-1}(u), \ldots)\right]$ with some function $g(\cdot)$ as a natural approximation of $\mathbb{E}\left[g(x_{t,T}, x_{t-1,T}, \ldots)\right]$.
Crucially, a linear locally stationary process $x_{t,T}$ can be represented by a time-varying MA($\infty$)
$$x_{t,T} = \sum_{h=-\infty}^{+\infty} \alpha_{t,T}(h)\, \epsilon_{t-h}, \qquad (1)$$
where the coefficients $\alpha_{t,T}(h)$ can be approximated under certain (smoothness) assumptions (see Assumption 1 in Appendix A) with coefficient functions $\alpha_{t,T}(h) \approx \alpha(t/T, h)$, and the $\epsilon_t$ are independent random variables with $\mathbb{E}\epsilon_t = 0$, $\mathbb{E}\epsilon_s \epsilon_t = 0$ for $s \neq t$, and $\mathbb{E}|\epsilon_t| < \infty$.
The construction with $\alpha_{t,T}(h)$ and $\alpha(t/T, h)$ looks complicated at first glance, but the function $\alpha(u, h)$ is needed for rescaling and to impose smoothness conditions, while the additional use of $\alpha_{t,T}(h)$ makes the class rich enough to cover autoregressive models (see Theorem 2.3 in Dahlhaus (1996)), in which we are interested later. It is then straightforward to construct a stationary approximation (with existing derivative processes)
$$x_t(u) = \sum_{h=-\infty}^{+\infty} \alpha(u, h)\, \epsilon_{t-h}, \qquad (2)$$
where at every fixed point of time $u$ the original process $x_{t,T}$ can be represented as a linear combination of uncorrelated innovations with time-varying impulse response (TV-IRF) functions $\alpha(u, h)$. Note that, so far, the process is assumed to have zero mean $\mu(t/T) = 0$. While this may be unrealistic in a number of applications, we will return to this assumption later in the estimation.
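To make the construction concrete, the following sketch (ours, not the authors' code; the coefficient function phi is a hypothetical choice) simulates a locally stationary TV-AR(1) path $x_{t,T}$ and the stationary approximation $x_t(u_0)$ obtained by freezing the coefficient at $u_0 = 0.5$, illustrating the $O_p(|t/T - u| + 1/T)$ closeness near $t \approx u_0 T$.

```python
# A minimal sketch: a locally stationary TV-AR(1) and its stationary
# approximation at a fixed rescaled time u0. phi is a hypothetical
# smooth coefficient function on [0, 1].
import numpy as np

rng = np.random.default_rng(0)
T = 1000
eps = rng.standard_normal(T)

def phi(u):
    # smooth time-varying AR(1) coefficient
    return 0.5 + 0.3 * np.sin(2 * np.pi * u)

x = np.zeros(T)    # locally stationary path
x_u = np.zeros(T)  # stationary approximation at u0
u0 = 0.5
for t in range(1, T):
    x[t] = phi(t / T) * x[t - 1] + eps[t]
    x_u[t] = phi(u0) * x_u[t - 1] + eps[t]

# near t = u0 * T the two paths are close
mid = slice(int(0.48 * T), int(0.52 * T))
print(np.max(np.abs(x[mid] - x_u[mid])))
```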
Time-Varying Extended Wold Decomposition
Having a representation that allows for time variation of the impulse response function, we further introduce a localised persistence structure. We use the extended Wold decomposition of Ortu et al. (2020), which allows us to decompose the time series into several components with different levels of persistence. Ortu et al. (2020) and Bandi et al. (2021b) show that the decomposition brings substantial benefits in understanding the persistence dynamics of economic time series and improves forecasting performance, as many economic time series exhibit a heterogeneous persistence structure (across horizons, scales). Importantly, we argue that in addition to recovering the heterogeneous persistence structure of a typical economic time series, we need to localise it. Localisation, together with persistence decomposition, can dramatically improve our understanding of dynamic economic behaviour by allowing the persistence structure to change smoothly over time. In turn, models built with such an understanding can bring significant forecasting benefits, as we will show later with empirical examples.
Specifically, we propose a model that uses locally stationary processes to capture the dynamics of heterogeneous persistence. Knowing that we can express a locally stationary process using the TV-MA($\infty$) representation, we can adapt the extended Wold decomposition proposed by Ortu et al. (2020) under alternative assumptions and localise the decomposition. Proposition 1 formalises the main result and defines the time-varying extended Wold decomposition (TV-EWD).
Proposition 1 (Time-Varying Extended Wold Decomposition (TV-EWD)). If $x_{t,T}$ is a zero-mean, locally stationary process in the sense of Assumption 1 in Appendix A that has a representation $x_{t,T} = \sum_{h=-\infty}^{+\infty} \alpha_{t,T}(h)\, \epsilon_{t-h}$, then it can be decomposed as
$$x_{t,T} = \sum_{j=1}^{+\infty} \sum_{k=0}^{+\infty} \beta^{\{j\}}_{t,T}(k)\, \epsilon^{\{j\}}_{t-k2^j}, \qquad (3)$$
where for any $j \in \mathbb{N}$, $k \in \mathbb{N}$,
$$\beta^{\{j\}}_{t,T}(k) = \frac{1}{\sqrt{2^j}} \left[ \sum_{i=0}^{2^{j-1}-1} \alpha_{t,T}(k2^j + i) - \sum_{i=0}^{2^{j-1}-1} \alpha_{t,T}(k2^j + 2^{j-1} + i) \right], \qquad (4)$$
$$\epsilon^{\{j\}}_t = \frac{1}{\sqrt{2^j}} \left( \sum_{i=0}^{2^{j-1}-1} \epsilon_{t-i} - \sum_{i=0}^{2^{j-1}-1} \epsilon_{t-2^{j-1}-i} \right), \qquad (5)$$
where the coefficients $\beta^{\{j\}}_{t,T}(k)$ can be approximated under Assumption 2 in Appendix A with coefficient functions $\beta^{\{j\}}_{t,T}(k) \approx \beta^{\{j\}}(t/T, k)$, the $\epsilon_t$ are independent random variables with $\mathbb{E}\epsilon_t = 0$, $\mathbb{E}\epsilon_s \epsilon_t = 0$ for $s \neq t$, $\mathbb{E}|\epsilon_t| < \infty$, and $\sum_{k=0}^{\infty} \beta^{\{j\}}_{t,T}(k)^2 < \infty$ for all $j$.
Proof. Follows directly from the properties of locally stationary processes (Dahlhaus, 1996) and extended Wold decomposition (Ortu et al., 2020).
Proposition 1 formalises the discussion about a representation of a time series that offers a decomposition into uncorrelated persistence components indexed by scale $j$ that can smoothly change over time. Specifically, it allows us to construct a stationary approximation (with existing derivative processes) to the process $x_{t,T}$ with time-varying uncorrelated persistent components
$$x^{\{j\}}_t(u) = \sum_{k=0}^{+\infty} \beta^{\{j\}}(u, k)\, \epsilon^{\{j\}}_{t-k2^j}, \qquad (6)$$
and always reconstruct the original process as
$$x_t(u) = \sum_{j=1}^{+\infty} x^{\{j\}}_t(u). \qquad (7)$$
In other words, we are able to decompose the time series into uncorrelated components with different levels of persistence at any fixed point in time. Further note that $\epsilon^{\{j\}}_t$ is a localised MA($2^j - 1$) process with respect to the fundamental innovations of $x_{t,T}$, and $\beta^{\{j\}}(u, k)$ is the time-varying multiscale impulse response associated with scale $j$ and time-shift $k2^j$ at a fixed point of time approximated by $u$.
The decomposition hence allows us to explore the time-varying impulse responses at different persistence levels. A scale-specific impulse response provides exact information about how a unit shock to the system propagates over various horizons at a given point in time. For example, in the case of daily data, the first scale, $j = 1$, describes how a unit shock dissipates within 2 days; the second scale, $j = 2$, within 4 days; and so on.
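The scale-specific quantities in Eqs. (4)-(5) can be computed directly once the local Wold coefficients are available. The following minimal sketch (our naming, not the paper's code) implements both formulas with numpy; alpha is assumed to be a sufficiently long array of coefficients $\alpha(u, h)$ at a fixed $u$.

```python
# A minimal sketch of Eqs. (4)-(5). alpha must hold at least K * 2**j
# local Wold coefficients for scale j.
import numpy as np

def scale_coefficients(alpha, j, K):
    """beta^{j}(u, k) for k = 0, ..., K-1, following Eq. (4)."""
    half = 2 ** (j - 1)
    beta = np.zeros(K)
    for k in range(K):
        base = k * 2 ** j
        beta[k] = (alpha[base:base + half].sum()
                   - alpha[base + half:base + 2 * half].sum()) / np.sqrt(2 ** j)
    return beta

def scale_shocks(eps, j):
    """eps^{j}_t following Eq. (5); an MA(2^j - 1) in the innovations eps."""
    half = 2 ** (j - 1)
    out = np.full(eps.shape, np.nan)
    for t in range(2 ** j - 1, len(eps)):
        out[t] = (eps[t - half + 1:t + 1].sum()
                  - eps[t - 2 ** j + 1:t - half + 1].sum()) / np.sqrt(2 ** j)
    return out
```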
Obtaining time-varying persistence structures from data
While the proposed approach identifies a localised, time-varying persistence structure in the time series, the next step is to use it to build a parametric model that can improve forecasts. The first step is to obtain the quantities from the previous section.
In light of the assumptions that underpin the model, we conjecture that an economic variable of interest follows a time-varying parameter autoregressive (TVP-AR) model with $p$ lags
$$x_{t,T} = \phi_0\left(\tfrac{t}{T}\right) + \phi_1\left(\tfrac{t}{T}\right) x_{t-1,T} + \ldots + \phi_p\left(\tfrac{t}{T}\right) x_{t-p,T} + \epsilon_t, \qquad (8)$$
that has the representation given in Proposition 1 and can, under appropriate conditions, be approximated locally by a stationary process $x_{t,T} \approx x_t(u)$ for a given $t/T \approx u$ with $\phi_i\left(\tfrac{t}{T}\right) \approx \phi_i(u)$. To obtain the decomposition, we need to identify the time-varying coefficient estimates $\Phi\left(\tfrac{t}{T}\right) = \left(\phi_1\left(\tfrac{t}{T}\right), \ldots, \phi_p\left(\tfrac{t}{T}\right)\right)'$ on the centered data $\bar{x}_{t,T} = x_{t,T} - \phi_0\left(\tfrac{t}{T}\right)$. Centering is particularly important in datasets that display a clear time trend, while it can be negligible in others; still, since our model assumes a zero-mean process, we need to take this step.
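The local Wold coefficients $\alpha(u, h)$ needed for the decomposition follow from the standard AR-to-MA recursion applied to the TVP-AR coefficients frozen at $u$. A minimal sketch (standard recursion, not paper-specific code; the AR(2) values in the usage line are hypothetical):

```python
# Convert AR coefficients frozen at a fixed u into MA (Wold) coefficients
# via the recursion alpha[h] = sum_i phi_i * alpha[h - i], alpha[0] = 1.
import numpy as np

def ar_to_ma(phi, H):
    """alpha[h] for h = 0, ..., H-1."""
    p = len(phi)
    alpha = np.zeros(H)
    alpha[0] = 1.0
    for h in range(1, H):
        for i in range(1, min(p, h) + 1):
            alpha[h] += phi[i - 1] * alpha[h - i]
    return alpha

print(ar_to_ma([0.6, -0.2], 8))  # hypothetical AR(2) at some u
```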
Local linear estimation
We estimate the coefficient functions $\phi_i\left(\tfrac{t}{T}\right)$ by the local linear method, which has been widely employed in nonparametric regression estimation due to its attractive properties such as efficiency, bias reduction, and adaptation to boundary effects (Fan and Gijbels, 1996). Assuming each $\phi_i\left(\tfrac{t}{T}\right)$ has a continuous second-order derivative in the interval $[0,1]$, it can be approximated around $u$ by a linear function through a first-order Taylor expansion
$$\phi_i\left(\tfrac{t}{T}\right) \approx \phi_i(u) + \phi_i'(u)\left(\tfrac{t}{T} - u\right), \qquad (9)$$
where $\phi_i'(u) = \partial \phi_i(u)/\partial u$ is its first derivative. Based on the local approximation of model (8), minimising the locally weighted sum of squares estimates the parameters $\Phi(u) = \{\phi_1(u), \ldots, \phi_p(u)\}'$:
$$\left(\widehat{\Phi}(u), \widehat{\Phi}'(u)\right) = \underset{(\theta, \theta') \in \mathbb{R}^p \times \mathbb{R}^p}{\operatorname{argmin}}\ \sum_{t=1}^{T} \left[\bar{x}_{t,T} - U_{t,T}^{\top}\theta - \left(\tfrac{t}{T} - u\right) U_{t,T}^{\top}\theta'\right]^2 K_b\left(\tfrac{t}{T} - u\right), \qquad (10)$$
where $U_{t,T} = (x_{t-1,T}, x_{t-2,T}, \ldots, x_{t-p,T})^{\top}$, and $K_b(z) = \frac{1}{b} K(z/b)$ is a kernel function with bandwidth $b = b_T > 0$ satisfying $b \to 0$ and $Tb \to \infty$ as $T \to \infty$. Note that $b$ controls the amount of smoothing used in the local linear estimation. Roughly, we fit a set of weighted local regressions with a window size governed by the bandwidth $b$, discussed below. The estimator has a nice closed-form expression that can be obtained by elementary calculations (Fan and Gijbels, 1996), and the coefficient estimates are asymptotically normally distributed under some regularity conditions. Note we use the centered data $\bar{x}_{t,T} = x_{t,T} - \widehat{\phi}_0\left(\tfrac{t}{T}\right)$ obtained from
$$\left(\widehat{\phi}_0(u), \widehat{\phi}_0'(u)\right) = \underset{(\mu, \mu') \in \mathbb{R}^2}{\operatorname{argmin}}\ \sum_{t=1}^{T} \left[x_{t,T} - \mu - \mu'\left(\tfrac{t}{T} - u\right)\right]^2 K_b\left(\tfrac{t}{T} - u\right). \qquad (11)$$
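A compact implementation of the local linear estimator is sketched below. It uses an Epanechnikov kernel and, unlike the paper's two-step procedure (center via Eq. (11), then estimate Eq. (10)), estimates the intercept and the lag coefficients jointly in one weighted regression; variable names are ours.

```python
# A minimal sketch of a local linear TVP-AR estimator at a fixed u.
import numpy as np

def epanechnikov(z):
    return 0.75 * np.maximum(1.0 - z ** 2, 0.0)

def local_linear_tvar(x, p, u, b):
    """Return (phi_0(u), phi_1(u), ..., phi_p(u)) for a TVP-AR(p)."""
    T = len(x)
    t = np.arange(p, T)
    d = t / T - u
    w = epanechnikov(d / b) / b
    lags = np.column_stack([x[t - i] for i in range(1, p + 1)])
    # regressors: intercept and lags, plus interactions with (t/T - u)
    Z = np.column_stack([np.ones_like(d), lags, d, lags * d[:, None]])
    sw = np.sqrt(w)
    theta, *_ = np.linalg.lstsq(Z * sw[:, None], x[t] * sw, rcond=None)
    return theta[:p + 1]  # level coefficients; derivative terms discarded

rng = np.random.default_rng(1)
x = rng.standard_normal(2000)
print(local_linear_tvar(x, p=2, u=0.5, b=0.1))
```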
As is well known, the local linear estimator is sensitive to the choice of the bandwidth $b$, and thus it is critical to choose an appropriate bandwidth in applications. Here we follow the commonly used cross-validation bandwidth choice for the time series case (Härdle and Vieu, 1992). Finally, after obtaining the time-varying coefficients, we express the time series as a (local) Wold MA representation with innovation process $\epsilon_t$ (see Eq. (2)). Then, we use the result of Proposition 1 to obtain the horizon-specific impulse response coefficients associated with scale (horizon) $j$ and time-shift $k2^j$. The decomposition needs to be truncated at a finite number of scales $J$ and observations $T$. Hence a finite version of the time-varying extended Wold decomposition at a given point of time $u$ is considered:
$$x_{t,T} = \sum_{j=1}^{J} x^{\{j\}}_{t,T} + \pi^{\{J\}}_t = \sum_{j=1}^{J} \sum_{k=0}^{N-1} \beta^{\{j\}}(u, k)\, \epsilon^{\{j\}}_{t-k2^j} + \pi^{\{J\}}_t(u), \qquad (12)$$
where $\epsilon^{\{j\}}_t = \frac{1}{\sqrt{2^j}}\left(\sum_{i=0}^{2^{j-1}-1} \epsilon_{t-i} - \sum_{i=0}^{2^{j-1}-1} \epsilon_{t-2^{j-1}-i}\right)$, and $\pi^{\{J\}}_t$ is a residual component at scale $J$, defined as $\pi^{\{J\}}_t(u) = \sum_{k=0}^{+\infty} \gamma^{\{J\}}_k(u)\, \epsilon^{\{J\}}_{t-k2^J}$ with $\epsilon^{\{J\}}_t = \frac{1}{\sqrt{2^J}} \sum_{i=0}^{2^J - 1} \epsilon_{t-i}$ and $\gamma^{\{J\}}_k(u) = \frac{1}{\sqrt{2^J}} \sum_{i=0}^{2^J - 1} \alpha(u, k2^J + i)$. The estimates of the scale-specific coefficients $\beta^{\{j\}}(u, k)$ are computed as
$$\widehat{\beta}^{\{j\}}(u, k) = \frac{1}{\sqrt{2^j}} \left[ \sum_{i=0}^{2^{j-1}-1} \widehat{\alpha}(u, k2^j + i) - \sum_{i=0}^{2^{j-1}-1} \widehat{\alpha}(u, k2^j + 2^{j-1} + i) \right].$$
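Putting the pieces together, the truncated components $x^{\{j\}}_{t,T}$ of Eq. (12) can be assembled from the scale shocks and coefficients. A minimal sketch reusing scale_coefficients and scale_shocks from above (alpha is assumed long enough, at least $N \cdot 2^J$ entries; the residual $\pi^{\{J\}}_t$ is neglected):

```python
# Assemble the truncated persistence components of Eq. (12) at a fixed u.
import numpy as np

def ewd_components(alpha, eps, J, N):
    comps = []
    for j in range(1, J + 1):
        beta = scale_coefficients(alpha, j, N)  # beta^{j}(u, k), k = 0..N-1
        ej = scale_shocks(eps, j)               # eps^{j}_t
        xj = np.full(eps.shape, np.nan)
        for t in range(N * 2 ** j, len(eps)):
            xj[t] = sum(beta[k] * ej[t - k * 2 ** j] for k in range(N))
        comps.append(xj)
    return comps
```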
Forecasting models with time-varying persistence
One of the key advantages of our model is that it captures a smoothly changing persistence structure that can be exploited in forecasting. In a number of cases, it may be unrealistic to assume that the stochastic structure of a time series is stable over longer periods. Moreover, non-stationarity may also be observed in shorter time series, and forecasting under the assumption of stationarity may be misleading. A common approach to dealing with non-stationarity is to assume a model with smoothly changing trend and variance but a stationary error process (Stărică and Granger, 2005). While a number of authors consider forecasting in the locally stationary setting (Dette and Wu, 2022), our approach extends these models by exploring the smoothly changing persistence structures of the data. Our aim is to determine an $h$-step-ahead predictor for the unobserved $x_{T+h,T}$ from the observed data $x_{1,T}, \ldots, x_{T,T}$. Having $\widehat{\beta}^{\{j\}}(u, k)$, $\widehat{\epsilon}^{\{j\}}_t$ and $\widehat{\phi}_0(u)$, we can decompose the original time series $x_{t,T}$ into a deterministic trend and orthogonal persistence components $x^{\{j\}}_{t,T}$, and estimate the weights $w^{\{j\}}$ that identify the importance of specific horizons in the time series as
$$x_{t,T} = \widehat{\phi}_0(t/T) + \sum_{j=1}^{J} w^{\{j\}} x^{\{j\}}_{t,T} + \eta_{t,T}. \qquad (13)$$
Working with the stationary representation of the process, conditional $h$-step-ahead forecasts can be obtained directly by combining the trend forecast and the weighted forecasts of the scale components, following Ortu et al. (2020):
$$\mathbb{E}_t[x_{T+h,T}] = \mathbb{E}_t[x^{\{0\}}_{T+h,T}] + \sum_{j=1}^{J} w^{\{j\}}\, \mathbb{E}_t[x^{\{j\}}_{T+h,T}], \qquad (14)$$
where the conditional expected value of the trend $\mathbb{E}_t[x^{\{0\}}_{T+h,T}]$ is forecast with a TV-AR(1) model, 5 and for the conditional expectation of the scale components $\mathbb{E}_t[x^{\{j\}}_{T+h,T}]$ we use the forecasting procedure provided by Ortu et al. (2020).

5 The process is forecasted with the local linear estimator in (10), with the Epanechnikov kernel whose width is denoted as the kernel width in the subsequent forecasting exercises.
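A minimal sketch of Eqs. (13)-(14) follows (our naming): the weights $w^{\{j\}}$ are estimated by least squares of the detrended series on its persistence components, and a trend forecast is combined with component forecasts, which are assumed to be supplied, e.g. by the Ortu et al. (2020) procedure.

```python
# Estimate horizon weights and combine forecasts, as in Eqs. (13)-(14).
import numpy as np

def fit_weights(x_detrended, comps):
    stacked = np.vstack(comps)               # shape (J, T)
    mask = ~np.any(np.isnan(stacked), axis=0)
    X = stacked.T[mask]
    w, *_ = np.linalg.lstsq(X, x_detrended[mask], rcond=None)
    return w

def combine_forecast(trend_fc, comp_fcs, w):
    # comp_fcs: array of conditional component forecasts E_t[x^{j}_{T+h}]
    return float(trend_fc + np.dot(w, comp_fcs))
```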
Time-Varying Persistence in Data
The proposed approach is useful for any time series that can be expected to change its persistence structure over time. Here we aim to demonstrate the importance of identifying the persistence structure on two different and important datasets. Both time series are quite different in nature, but share the common feature of a smoothly changing persistence structure.
In the first example, we examine the time-varying persistence structure of inflation, which is one of the most important macroeconomic time series. While it is the properties of aggregate inflation that are ultimately of interest to policymakers, an important factor underlying the behaviour of inflation over time is the characteristics and determinants of the behavioural mechanisms underlying price setting. The persistence of inflation has direct implications for the conduct of monetary policy. While time-varying models are used in the literature (Lansing, 2009) to capture the time variation, a number of authors also consider the decomposition of inflation into transitory and permanent components (Stock and Watson, 2007). Here we build a more flexible model that explores the time-varying persistence structure of inflation and yields a more precise description of its dynamics.
In the second example, we will look at the volatility of stocks. Similar to inflation, stock market volatility is of great interest as one of the key measures of risk and uncertainty. The study of its heterogeneous persistence structure, which evolves dynamically over time, will be useful to a wide audience.
Time-varying persistence in the U.S. inflation
The data we use are the Personal Consumption Expenditures (PCE) price index 6 available on the Federal Reserve Bank of St. Louis website 7 as a proxy for US inflation. Our data contain 781 monthly observations over the period from January 1959 to February 2023, and we look at the logarithmic change in the index.
Inflation is an interesting time series for our analysis because the shocks that drive inflation have varying degrees of persistence and tend to change over time. Inflation is driven by different shocks in stable periods than in turbulent periods such as the COVID-19 crisis. Such a smoothly changing persistence structure remains hidden to the observer using classical time series tools such as impulse response functions. Figure 1 illustrates this using our TV-EWD. Specifically, the plot shows the ratio of $\beta^{\{j\}}(t/T, k)$ to the sum across scales $\sum_j \beta^{\{j\}}(t/T, k)$, with the scales $j$ representing 2, 4, 8, 16 and 32 month persistence of shocks at the first horizon ($k = 1$) of the multiscale impulse response function. That is, we look at the relative importance of the information at horizon $2^j$ in the multiscale impulse response function. At each time period, we identify the persistence structure of the shocks affecting the US inflation series. There are periods where most of the shocks have a transitory duration of 2 or 4 months. For example, transitory shocks of up to two months had the largest share of information in the years 1959-1964, 1968-1970 and 1994-2001. In contrast, the years 1966-1967, 1976-1978 and 2008-2010 were mainly driven by more persistent shocks of up to 8 months. It is also interesting to note that the persistence structure is very different during several crises, which are marked by the NBER recession periods in the plot. During the recession from November 1973 to March 1975, inflation was mainly driven by shocks lasting 16 and 32 months and was therefore very persistent.
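The quantity plotted in Figure 1 can be reproduced from the estimated coefficients as follows. The sketch assumes a user-supplied (hypothetical) function alpha_at(u) returning sufficiently many local Wold coefficients at $u$, for instance ar_to_ma applied to the local linear estimates, and reuses scale_coefficients from above.

```python
# Relative shares beta^{j}(u, 1) / sum_j beta^{j}(u, 1) over rescaled time.
import numpy as np

def persistence_shares(alpha_at, scales=(1, 2, 3, 4, 5), grid=50, k=1):
    shares = np.zeros((grid, len(scales)))
    for g, u in enumerate(np.linspace(0.05, 0.95, grid)):
        alpha = alpha_at(u)
        betas = np.array([scale_coefficients(alpha, j, k + 1)[k]
                          for j in scales])
        shares[g] = betas / betas.sum()
    return shares  # rows: time points u; columns: scales 2^j = 2, ..., 32
```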
We see that the persistence structure of the inflation time series is rich and changes smoothly over time. Next, we explore how this precise identification helps in modelling and forecasting US inflation.
Forecasting Inflation
The exploratory analysis in the previous section shows that the persistence structure of US inflation is rich and varies smoothly over time. We aim to exploit this feature to propose a forecasting model based on the precisely identified time-varying persistence. To evaluate the performance of our model, we use the unconditional AR(3) model, the extended Wold decomposition (EWD) model of Ortu et al. (2020), and two time-varying autoregression models. In this way we will see how persistence decomposition improves on the usual time-varying models, and how time variation improves on persistence decomposition. 8 For TV-EWD estimation and forecasting, we use the procedure described in Section 3, with J = 5 scales and two autoregressive lags.

Table 1: Forecasting performance measured by root mean square error (RMSE) and mean absolute error (MAE) for the Ortu et al. (2020) extended Wold decomposition (EWD) model, the TV-AR(3) and TV-HAR models, and our TV-EWD model. The performance of all models is relative to the benchmark AR(3) for forecasts 1, 2, 6 and 12 months ahead.

We divide the observed time series into two parts:
$$\underbrace{x_{1,T}, \ldots, x_{m,T}}_{\text{in-sample}}, \qquad \underbrace{x_{m+1,T}, \ldots, x_{T,T}}_{\text{out-of-sample}}, \qquad (15)$$
where the in-sample data are used to fit the models; we then compare the out-of-sample predictive performance using the root mean square error (RMSE) and mean absolute error (MAE) loss functions. Using the first 645 observations for the in-sample period, we are left with 136 out-of-sample observations. The forecast horizons considered are $h = 1, 2, 6, 12$ months ahead. The forecast results, expressed as the mean of the loss functions relative to the benchmark AR(3) model, are shown in Table 1.
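The evaluation metrics are straightforward; a minimal sketch of the relative RMSE and MAE used in Table 1 (values below one indicate an improvement over the benchmark):

```python
# Relative RMSE and MAE of model forecast errors against a benchmark.
import numpy as np

def rmse(e):
    return np.sqrt(np.mean(np.asarray(e) ** 2))

def mae(e):
    return np.mean(np.abs(np.asarray(e)))

def relative_losses(errors_model, errors_benchmark):
    return (rmse(errors_model) / rmse(errors_benchmark),
            mae(errors_model) / mae(errors_benchmark))
```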
The results show that the TV-EWD model provides the best forecasting performance at all forecasting horizons. This advantage increases as the forecast horizon lengthens, which suggests that the accurate identification of the rich persistence structure of the inflation time series is important for longer-term forecasting. Interestingly, this advantage is also strong for the non-time-varying EWD model. As this model uses the same persistence levels (horizons) as our TV-EWD, we can conclude that the identification of the persistence structure is more important than the time-varying ability in the case of long-run forecasts. Importantly, the results also show that allowing the persistence structure to vary over time further improves forecasts.
Persistence structure of volatility
The second important time series with a potentially interesting structure for our analysis is volatility. Volatility is one of the key measures in finance as it captures fluctuations in asset prices and hence their risk. We use daily data on volatility 9 for all stocks listed in the S&P 500 index from 5 July 2005 to 31 August 2018 from TickData, and thus we work with 3278 days of the 496 stock returns. Again, we start by illustrating the persistence structure of the data. Since our sample contains 496 stocks, we have chosen to look at the first available (in alphabetical order), which is Agilent Technologies Inc. Note that we have looked at other stocks, and their persistence structure is similarly rich to the one we discuss. Figure 2 plots the average $\beta^{\{j\}}(t/T, k)/\sum_j \beta^{\{j\}}(t/T, k)$ ratio, with the scales $j$ representing 2, 4, 8, 16 and 32 day persistence of shocks, for each year of the sample at the first horizon ($k = 1$) of the multiscale impulse response function. That is, for each year we can see the average contribution of the shocks to the volatility series. The reason we look at averages is that the daily sample contains rich dynamics that are difficult to visualise, and at the same time the aggregate information for one year strongly supports our objective.
Specifically, we can again see some periods that are mostly driven by transitory shocks of up to 4 days, such as 2005, 2010 or 2014, as well as periods that are driven by more persistent shocks, such as 2008, 2011 or 2016-2018. Overall, we can see how rich the persistence dynamics of the volatility series are. While it is important to capture the time variation of the dependence structures in the series, it is also crucial to capture the smoothly changing persistence.
Forecasting volatility
Finally, we use the time-varying persistence structure we identify to build a more accurate forecasting model for volatility. We compare the out-of-sample forecasting performance of our TV-EWD model with the popular heterogeneous autoregressive (HAR) model of Corsi (2009), the extended Wold decomposition (EWD) of Ortu et al. (2020), and two time-varying parameter alternatives, TV-AR(3) and TV-HAR, 10 for the realised volatilities of all S&P 500 constituents available over the sample. This model set benchmarks both the time variation and the persistence structure, so we can see how our model improves the forecasts. We estimate the model parameters on the information set containing the first 1000 observations and save the rest for 1, 5 and 22 day ahead out-of-sample tests. As we are exploring the changing behaviour of the data, we also look at different time periods to see how sample-specific the results are. The richer the localised structure in the data, the larger the gains we expect. Therefore, along with the aggregate results for the entire out-of-sample period from August 2009 to August 2018, we look at two specific periods: August 2009 to August 2012 at the beginning of the out-of-sample period, and August 2016 to August 2017. For TV-EWD estimation and forecasting we use the procedure described in Section 3 with $J = \{5, 5, 7\}$ scales and $\{2, 5, 15\}$ autoregressive lags for $h = \{1, 5, 22\}$ forecasts, respectively. Note that the choice of higher-order lags in the autoregression naturally improves forecasts at longer horizons. The kernel width of 0.2 minimises the mean square error of the forecasts. We compare forecast performance using the MAE (mean absolute error) and RMSE (root mean square error) loss functions relative to the benchmark HAR model.
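For reference, a minimal sketch of the HAR benchmark of Corsi (2009): realized volatility is regressed on its previous daily value and its weekly (5-day) and monthly (22-day) averages, and a one-step forecast is formed from the last observations (longer horizons can be obtained by iterating or direct projection). Names are ours, not the paper's code.

```python
# HAR(1, 5, 22) benchmark: fit by OLS and produce a one-step forecast.
import numpy as np

def har_fit(rv):
    t = np.arange(21, len(rv) - 1)
    d = rv[t]
    w = np.array([rv[s - 4:s + 1].mean() for s in t])
    m = np.array([rv[s - 21:s + 1].mean() for s in t])
    X = np.column_stack([np.ones_like(d), d, w, m])
    beta, *_ = np.linalg.lstsq(X, rv[22:], rcond=None)
    return beta

def har_forecast(rv, beta):
    x = np.array([1.0, rv[-1], rv[-5:].mean(), rv[-22:].mean()])
    return float(x @ beta)
```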
As the results are obtained on a large sample of 496 stocks, we summarise them in Table 2, which reports the median estimates across all stocks, accompanied by box plots showing the errors for all stocks in Figures 4 (MAE) and 3 (RMSE). Focusing on the results in Table 2, we can see that TV-EWD outperforms all other models over all forecast horizons and different samples. This is particularly strong as this result holds for the median of the errors computed for the 496 stocks in the sample, except for the RMSE in the first period, August 2009 to August 2012, where TV-HAR produces consistently better forecasts. Figures 4 (MAE) and 3 (RMSE), which show more granular results for all stocks in box plots, confirm that TV-EWD produces much better forecasts than all other models for most of the stocks considered, across the horizons and samples considered.
Looking more closely at the results, it is important to note that we are comparing several different approaches. First, the popular HAR model captures the unconditional persistence structure with 22 lags in the autoregression, while EWD improves the results at longer horizons by identifying a more precise persistence structure. This result is consistent with the findings of Ortu et al. (2020), although they use only a single time series, whereas our result holds for a large cross-section of stock volatilities.
Second, and more importantly, adding time variation to autoregressive models significantly improves the results, as time-varying parameters capture the dynamics in the data. In particular, TV-HAR significantly improves forecasts. Finally, when the persistence structure is allowed to vary smoothly over time by our TV-EWD model, we document further improvements in forecasts. The ability to appropriately incorporate changing persistence structure in the data gives TV-EWD an advantage especially at longer horizons.
It is also interesting to note that in the much quieter period from August 2016 to August 2017, where we do not find a very heterogeneous persistence structure, the more complex TV-EWD model performs similarly to both the HAR and TV-HAR models in terms of RMSE, although it still has the best results in terms of MAE.
Conclusion
A representation that allows for smoothly changing persistence structures in economic data has been constructed to study the dynamic persistence structures of important macroeconomic and financial data and to improve their forecasting. The model provides valuable information about the fundamental behaviour of the time series, which can be used to construct more precise models and forecasts.
Note that the residual component $\pi^{\{J\}}_t(u)$ is usually negligible, so we do not consider it in the estimation. For more details see Ortu et al. (2020).
Figure 1: Time-varying persistence structure of US inflation. The plot shows the ratios $\beta^{\{j\}}(t/T, 1)/\sum_j \beta^{\{j\}}(t/T, 1)$ on the y-axis, with $j$ corresponding to 2, 4, 8, 16 and 32 month persistence of shocks, over the period from January 1959 until February 2023 on the x-axis. Grey areas mark the NBER recession periods.
Figure 2: Time-varying persistence structure of the realized volatility of Agilent Technologies Inc. The plots show the ratios $\beta^{\{j\}}(t/T, 1)/\sum_j \beta^{\{j\}}(t/T, 1)$, with $j$ corresponding to 2, 4, 8, 16 and 32 day persistence of shocks, over the period from July 2005 until August 2018.
Figure 3: Root mean square error of our TV-EWD model compared to Ortu et al. (2020)'s extended Wold decomposition (EWD), time-varying autoregression (TV-AR) and the time-varying heterogeneous autoregressive model (TV-HAR). All errors are relative to the HAR model of Corsi (2009) over h = 1 (left), h = 5 (middle) and h = 22 (right). The box plots show the RMSE for all 496 S&P 500 companies in the sample, computed on the forecasts for the three different periods represented by the three different colours.
Figure 4: Mean absolute error of our TV-EWD model compared to Ortu et al. (2020)'s extended Wold decomposition (EWD), time-varying autoregression (TV-AR) and the time-varying heterogeneous autoregressive model (TV-HAR). All errors are relative to the HAR model of Corsi (2009) over h = 1 (left), h = 5 (middle) and h = 22 (right). The box plots show the MAE for all 496 S&P 500 companies in the sample, computed on the forecasts for the three different periods represented by the three different colours.
Table 2: Root mean square error (RMSE) and mean absolute error (MAE) of our TV-EWD model compared to Ortu et al. (2020)'s extended Wold decomposition (EWD), time-varying autoregression (TV-AR) and the time-varying heterogeneous autoregressive model (TV-HAR). All errors are relative to the HAR model of Corsi (2009) over h = 1, h = 5 and h = 22. The figures show median values for all 496 S&P 500 companies in the sample, calculated on forecasts for the three different periods. (Columns: RMSE at h = 1, h = 5, h = 22; MAE at h = 1, h = 5, h = 22.)
2 For example, a shock that affects longer horizons may reflect permanent changes in expectations about future price movements. Such a shock may lead to a permanent change in a firm's future dividend payments (Balke and Wohar, 2002). Conversely, a shock that affects shorter horizons may suggest temporary changes in future price movements. For example, suppose the shock is only a change in an upcoming dividend payment. This would likely lead to a very short-term change, reflecting the transitory nature of the news.
4 Exceptions are nonstationary models where persistence is generated by integrated or cointegrated processes.
6 The Personal Consumption Expenditures price index measures the prices that US consumers pay for goods and services. The change in the PCE price index captures inflation or deflation across a wide range of consumer expenditures.
7 https://fred.stlouisfed.org
8 Both TV-HAR and TV-AR(3) use the local linear estimator with kernel width 0.3.
9 Realised volatility is computed as the sum of the squared logarithmic 5-minute returns for each day of the sample.
10 Both TV-HAR and TV-AR(3) use the local linear estimator with kernel width 0.3.
A Appendix: Locally Stationary Processes

Assumption 1 (Locally Stationary Processes, Dahlhaus and Polonik (2009)). The sequence of stochastic processes $x_{t,T}$, $t = 1, \ldots, T$, is called a locally stationary process if $x_{t,T}$ has a representation
$$x_{t,T} = \sum_{h=-\infty}^{+\infty} \alpha_{t,T}(h)\, \epsilon_{t-h},$$
satisfying the following conditions:
$$\sup_{t} |\alpha_{t,T}(h)| \le \frac{K}{l(h)},$$
where $l(h)$ for some $\kappa > 0$ is defined as
$$l(h) = \begin{cases} 1, & |h| \le 1, \\ |h| \log^{1+\kappa} |h|, & |h| > 1, \end{cases}$$
and $K$ is not dependent on $T$, and there exist functions $\alpha(\cdot, h) : [0,1] \to \mathbb{R}$ with
$$\sup_{u} |\alpha(u, h)| \le \frac{K}{l(h)}, \qquad \sup_{h} \sum_{t=1}^{T} \left| \alpha_{t,T}(h) - \alpha\left(\tfrac{t}{T}, h\right) \right| \le K, \qquad V(\alpha(\cdot, h)) \le \frac{K}{l(h)},$$
where $V(\cdot)$ denotes the total variation on $[0,1]$, $\epsilon_t \sim$ iid with $\mathbb{E}\epsilon_t \equiv 0$ and $\mathbb{E}\epsilon_t^2 \equiv 1$. We also assume that all moments of $\epsilon_t$ exist.

Assumption 2. The sequence of stochastic processes $x_{t,T}$, $t = 1, \ldots, T$, is called a locally stationary process if $x_{t,T}$ has a representation
$$x_{t,T} = \sum_{j=1}^{+\infty} \sum_{k=0}^{+\infty} \beta^{\{j\}}_{t,T}(k)\, \epsilon^{\{j\}}_{t-k2^j},$$
satisfying the following conditions for all $j$:
$$\sup_{t} |\beta^{\{j\}}_{t,T}(k)| \le \frac{K}{l(k)},$$
where $l(k)$ for some $\kappa > 0$ is defined as above, $K$ is not dependent on $T$, and there exist functions $\beta^{\{j\}}(\cdot, k) : [0,1] \to \mathbb{R}$ with
$$\sup_{u} |\beta^{\{j\}}(u, k)| \le \frac{K}{l(k)}, \qquad \sup_{k} \sum_{t=1}^{T} \left| \beta^{\{j\}}_{t,T}(k) - \beta^{\{j\}}\left(\tfrac{t}{T}, k\right) \right| \le K, \qquad V(\beta^{\{j\}}(\cdot, k)) \le \frac{K}{l(k)},$$
where $V(\cdot)$ denotes the total variation on $[0,1]$, $\epsilon_t \sim$ iid with $\mathbb{E}\epsilon_t \equiv 0$ and $\mathbb{E}\epsilon_t^2 \equiv 1$. We also assume that all moments of $\epsilon_t$ exist.
References

Baillie, R. T., C.-F. Chung, and M. A. Tieslau (1996). Analysing inflation by the fractionally integrated ARFIMA-GARCH model. Journal of Applied Econometrics 11 (1), 23-40.

Balke, N. S. and M. E. Wohar (2002). Low-frequency movements in stock prices: A state-space decomposition. Review of Economics and Statistics 84 (4), 649-667.

Bandi, F. and A. Tamoni (2017). Business-cycle consumption risk and asset prices. Available at SSRN 2337973.

Bandi, F. M., S. Chaudhuri, A. W. Lo, and A. Tamoni (2021a). Spectral factor models. Journal of Financial Economics forthcoming.

Bandi, F. M., S. E. Chaudhuri, A. W. Lo, and A. Tamoni (2021b). Spectral factor models. Journal of Financial Economics 142 (1), 214-238.

Bandi, F. M. and A. Tamoni (2022). Spectral financial econometrics. Econometric Theory, 1-46.

Bekierman, J. and H. Manner (2018). Forecasting realized variance measures using time-varying coefficient models. International Journal of Forecasting 34 (2), 276-287.

Chen, X. B., J. Gao, D. Li, and P. Silvapulle (2018). Nonparametric estimation and forecasting for time-varying coefficient realized volatility models. Journal of Business & Economic Statistics 36 (1), 88-100.

Cochrane, J. H. (1988). How big is the random walk in GNP? Journal of Political Economy 96 (5), 893-920.

Corsi, F. (2009). A simple approximate long-memory model of realized volatility. Journal of Financial Econometrics 7 (2), 174-196.

Dahlhaus, R. (1996). On the Kullback-Leibler information divergence of locally stationary processes. Stochastic Processes and their Applications 62 (1), 139-168.

Dahlhaus, R. and W. Polonik (2009). Empirical spectral processes for locally stationary time series. Bernoulli 15 (1), 1-39.

Dette, H. and W. Wu (2022). Prediction in locally stationary time series. Journal of Business & Economic Statistics 40 (1), 370-381.

Dew-Becker, I. and S. Giglio (2016). Asset pricing in the frequency domain: theory and empirics. Review of Financial Studies 29 (8), 2029-2068.

Evans, G. and N. E. Savin (1981). Testing for unit roots: 1. Econometrica: Journal of the Econometric Society, 753-779.

Fan, J. and I. Gijbels (1996). Local Polynomial Modelling and Its Applications: Monographs on Statistics and Applied Probability 66, Volume 66. CRC Press.

Giglio, S., M. Maggiori, and J. Stroebel (2015). Very long-run discount rates. The Quarterly Journal of Economics 130 (1), 1-53.

Hamilton, J. D. (2020). Time Series Analysis. Princeton University Press.

Härdle, W. and P. Vieu (1992). Kernel regression smoothing of time series. Journal of Time Series Analysis 13 (3), 209-232.

Hassler, U. and J. Wolters (1995). Long memory in inflation rates: International evidence. Journal of Business & Economic Statistics 13 (1), 37-45.

Justiniano, A. and G. E. Primiceri (2008). The time-varying volatility of macroeconomic fluctuations. American Economic Review 98 (3), 604-641.

Lansing, K. J. (2009). Time-varying US inflation dynamics and the New Keynesian Phillips curve. Review of Economic Dynamics 12 (2), 304-326.

Lima, L. R. and Z. Xiao (2007). Do shocks last forever? Local persistency in economic time series. Journal of Macroeconomics 29 (1), 103-122.

Nelson, C. R. and C. R. Plosser (1982). Trends and random walks in macroeconomic time series: some evidence and implications. Journal of Monetary Economics 10 (2), 139-162.

Neuhierl, A. and R. T. Varneskov (2021). Frequency dependent risk. Journal of Financial Economics 140 (2), 644-675.

Ortu, F., F. Severino, A. Tamoni, and C. Tebaldi (2020). A persistence-based Wold-type decomposition for stationary time series. Quantitative Economics 11 (1), 203-230.

Perron, P. (1989). The great crash, the oil price shock, and the unit root hypothesis. Econometrica: Journal of the Econometric Society, 1361-1401.

Perron, P. (1991). A continuous time approximation to the unstable first-order autoregressive process: The case without an intercept. Econometrica: Journal of the Econometric Society, 211-236.

Primiceri, G. E. (2005). Time varying structural vector autoregressions and monetary policy. The Review of Economic Studies 72 (3), 821-852.

Stărică, C. and C. Granger (2005). Nonstationarities in stock returns. Review of Economics and Statistics 87 (3), 503-522.

Stock, J. H. and M. W. Watson (2007). Why has US inflation become harder to forecast? Journal of Money, Credit and Banking 39, 3-33.

Stock, J. H. and M. W. Watson (2017). Twenty years of time series econometrics in ten pictures. Journal of Economic Perspectives 31 (2), 59-86.

Wold, H. (1938). A Study in the Analysis of Stationary Time Series. Almquist & Wiksells Boktryckeri.
github_urls: [https://github.com/barunik/tvPersistence.jl]

title: Numerical aspects of Casimir energy computation in acoustic scattering
authors: Xiaoshu Sun (Department of Mathematics, University College London, WC1E 6BT, London, UK); Timo Betcke (Department of Mathematics, University College London, WC1E 6BT, London, UK); Alexander Strohmaier (School of Mathematics, University of Leeds, LS2 9JT, Leeds, UK)
abstract: Computing the Casimir force and energy between objects is a classical problem of quantum theory going back to the 1940s. Several different approaches have been developed in the literature, often based on different physical principles. Most notably, a representation of the Casimir energy in terms of determinants of boundary layer operators makes it accessible to a numerical approach. In this paper, we first give an overview of the various methods and discuss the connection to the Krein spectral shift function and computational aspects. We propose variants of Krylov subspace methods for the computation of the Casimir energy for large-scale problems and demonstrate Casimir computations for several complex configurations. This allows for Casimir energy calculation for large-scale practical problems and significantly speeds up the computations in that case.
pdfurls: [https://export.arxiv.org/pdf/2306.01280v1.pdf]
corpusid: 259064023
arxivid: 2306.01280
Numerical aspects of Casimir energy computation in acoustic scattering
Xiaoshu Sun
Department of Mathematics
University College London
WC1E 6BT, London, UK
Timo Betcke
Department of Mathematics
University College London
WC1E 6BT, London, UK
Alexander Strohmaier
School of Mathematics
University of Leeds
LS2 9JT, Leeds, UK
Keywords: Krein spectral shift function; Casimir energy; Krylov subspace; inverse-free generalized eigenvalue problem; Bempp-cl
Computing the Casimir force and energy between objects is a classical problem of quantum theory going back to the 1940s. Several different approaches have been developed in the literature, often based on different physical principles. Most notably, a representation of the Casimir energy in terms of determinants of boundary layer operators makes it accessible to a numerical approach. In this paper, we first give an overview of the various methods and discuss the connection to the Krein spectral shift function and computational aspects. We propose variants of Krylov subspace methods for the computation of the Casimir energy for large-scale problems and demonstrate Casimir computations for several complex configurations. This allows for Casimir energy calculation for large-scale practical problems and significantly speeds up the computations in that case.
Introduction
Casimir interactions are forces between objects such as perfect conductors. Hendrik Casimir predicted and computed this effect in the special case of two planar conductors in 1948 using a divergent formula for the zero point energy and applying regularisation to it [1]. This resulted in the famous formula for the attractive Casimir force per unit area
$$F(a) = -\frac{1}{A}\frac{\partial E}{\partial a} = -\frac{\hbar c\, \pi^2}{240\, a^4},$$
between two perfectly conducting plates, where $A$ is the cross-sectional area of the boundary plates and $E$ is the Casimir energy as computed from a zeta regularised mode sum. The result here is for the electromagnetic field, which differs by a factor of two from the force resulting from a massless scalar field. This force was measured experimentally by Sparnaay about 10 years later [2], and the Casimir effect has since become famous for its intriguing derivation and its counterintuitive nature. In 1996, precision measurements of the Casimir force between extended bodies were conducted by S.K. Lamoreaux [3], confirming the theoretical predictions including corrections for realistic materials. From 2000 to 2008, the Casimir force was measured in various experimental configurations, such as cylinder-cylinder [4], plate-plate [5], sphere-plate [6] and sphere-comb [7].
The presence of the Casimir force has also been quoted as evidence for the zero point energy of the vacuum having direct physical significance. The classical way to compute Casimir forces mimics Casimir's original computation and is based on zeta function regularisation of the vacuum energy. This has been carried out for a number of particular geometric situations (see [8,9,10,11,12] and references therein). The derivations are usually based on special functions and their properties and require explicit knowledge of the spectrum of the Laplace operator. In the 1960s, Lifshitz and collaborators extended and modified this theory to the case of dielectric media [13] and gave derivations based on the stress energy tensor. It has also been realised by quantum field theorists (see e.g. [14,13,15,16,17]), with various degrees of mathematical rigour, that the stress energy approach yields Casimir's formula directly without the need for renormalisation or artificial regularisation. This tensor is defined by comparing the induced vacuum states of the quantum field with boundary conditions and the free theory. Once the renormalised stress energy tensor is mathematically defined, the computation of the Casimir energy density becomes a problem of spectral geometry (see e.g. [18]). The renormalised stress energy tensor and its relation to the Casimir effect can be understood at the level of rigour of axiomatic algebraic quantum field theory. We note, however, that the computation of the local energy density is non-local and requires some knowledge of the spectral resolution of the Laplace operator; the corresponding problem of numerical analysis is therefore extremely hard.
Lifshitz and collaborators also offered an alternative description based on the van der Waals forces between molecules. The plates consist of a collection of atomic-scale electric dipoles randomly oriented in the absence of the external forcing field. Quantum and thermal fluctuations may make the dipoles align spontaneously, resulting in a net electric dipole moment. The dipoles in the opposite plate feel this field across the gap and align as well. The two net electric dipole moments make the two plates attract each other. This approach emphasizes the influence from the materials more than the fluctuations in the empty space between the plates.
Somewhat independently from the spectral approach, determinant formulae based on the van der Waals mechanism were derived by various authors. We note here Renne [19], who gives a determinant formula for the van der Waals force based on microscopic considerations. Various other authors give path-integral derivations based on considerations of surface current fluctuations [20,21,22,23,24,25,26,27,28]. The final formulae proved suitable for numerical schemes and were also very useful for obtaining asymptotic formulae for Casimir forces at large and small separations. The mathematical relation between the various approaches remained unclear, with proofs of equality only available in special cases. A full mathematical justification of the determinant formulae as the trace of an operator describing the Casimir energy was only recently achieved in [29] for the scalar field and [30] for the electromagnetic field. It was also proved recently in [31] that the formulae arising from zeta regularisation, from the stress energy tensor, and from the determinant of the single layer operator all give the same Casimir forces.
We will now describe the precise mathematical setting and review the theory. Let $\Omega \subset \mathbb{R}^d$ be a nonempty bounded open subset with Lipschitz boundary $\partial\Omega$, which is the union of connected open sets $\Omega_j$, for $j = 1, \ldots, N$. We assume that the complement $\mathbb{R}^d \setminus \overline{\Omega}$ of $\Omega$ is connected and the closures of the $\Omega_j$ are pairwise non-intersecting. We denote the $N$ connected components of the boundary $\partial\Omega$ by $\partial\Omega_j$. We will think of the open set $\Omega$ as a collection of objects $\Omega_j$ placed in $\mathbb{R}^d$ and will refer to them as obstacles.
Then, several unbounded self-adjoint operators densely defined in $L^2(\mathbb{R}^d)$ can be defined.

• The operator $\Delta$ is the Laplace operator with Dirichlet boundary conditions on $\partial\Omega$.

• For $j = 1, \ldots, N$, the operator $\Delta_j$ is the Laplace operator with Dirichlet boundary conditions on $\partial\Omega_j$.

• The operator $\Delta_0$ is the "free" Laplace operator on $\mathbb{R}^d$ with domain $H^2(\mathbb{R}^d)$.
These operators contain the dense set $C_0^{\infty}(\mathbb{R}^d \setminus \partial\Omega)$ in their domains. If $f : \mathbb{R} \to \mathbb{R}$ is a polynomially bounded function, this set is also contained in the domain of the operators $f(\Delta^{\frac12})$, $f(\Delta_j^{\frac12})$ and $f(\Delta_0^{\frac12})$, so that the operator
$$D_f = f(\Delta^{\frac12}) - f(\Delta_0^{\frac12}) - \sum_{j=1}^{N}\left[f(\Delta_j^{\frac12}) - f(\Delta_0^{\frac12})\right]$$
is densely defined. It was shown in [29] that under additional analyticity assumptions on $f$ the operator $D_f$ is bounded and extends by continuity to a trace-class operator on $L^2(\mathbb{R}^d)$. These analyticity assumptions are in particular satisfied by $f(k) = (k^2 + m^2)^{\frac{s}{2}}$ for any $s > 0$, $m \ge 0$, and one has
$$\mathrm{Tr}(D_f) = \frac{s}{\pi} \sin\left(\frac{\pi s}{2}\right) \int_m^{\infty} k\, (k^2 + m^2)^{\frac{s}{2}-1}\, \Xi(ik)\, dk, \qquad (1)$$
where the function $\Xi$ is given by
$$\Xi(k) = \log \det\left(V_k \widetilde{V}_k^{-1}\right)$$
and the operators $V_k$ and $\widetilde{V}_k$ are certain boundary layer operators that will be defined later. It was proved in [29] that the above determinant is well-defined in the sense of Fredholm, as the operator $V_k \widetilde{V}_k^{-1}$ near the positive imaginary axis differs from the identity operator by a trace-class operator on the Sobolev space $H^{\frac12}(\partial\Omega)$. We remark here that the paper [29] assumed the boundary to be smooth, and the operator $V_k \widetilde{V}_k^{-1}$ was considered as a map on $L^2(\partial\Omega)$. The main result of the paper also holds for Lipschitz boundaries if $L^2(\partial\Omega)$ is replaced by $H^{\frac12}(\partial\Omega)$. This requires minor modifications of the proof, but we will not discuss this further here, as we are now focusing on computational aspects.
We also recall that by the Birman-Krein formula we have, for any even function $h \in \mathcal{S}(\mathbb{R})$, the equality
$$\mathrm{Tr}\left( h(\Delta^{\frac12}) - h(\Delta_0^{\frac12}) - \sum_{j=1}^{N}\left[h(\Delta_j^{\frac12}) - h(\Delta_0^{\frac12})\right] \right) = \int_0^{\infty} h'(k)\, \xi(k)\, dk, \qquad (2)$$
where
$$\xi(k) = \frac{1}{2\pi i} \log \frac{\det(S(k))}{\det(S_{1,k}) \cdots \det(S_{N,k})}$$
will be called the relative Krein spectral shift function. Here, $S(k)$ is the scattering matrix of $\Delta$ and the $S_{j,k}$ are the scattering matrices of $\Delta_j$ associated to the objects $\Omega_j$. Note here that the class of functions for which this is true can be relaxed to a certain extent, but even the most general version does not allow unbounded functions such as $f(k) = (k^2 + m^2)^{\frac{s}{2}}$ with $s > 0$, $m \ge 0$.
The function $\Xi$ can, however, be related via a Laplace transform to the Fourier transform of the relative spectral shift function (see [32]). Under mild convexity assumptions this can be connected to the Duistermaat-Guillemin trace formula in obstacle scattering theory to give an asymptotic expansion of $\Xi(k)$ in terms of the minimal distance $\delta > 0$ between the obstacles and the linearised Poincaré map of the bouncing ball orbits between the obstacles of that length. One has
$$\Xi(k) = -\sum_j \frac{1}{|\det(I - P_{\gamma_j})|^{1/2}}\,e^{2i\delta k} + o\!\left(e^{-2\delta\,\operatorname{Im}k}\right),$$
where the sum is over the bouncing ball modes of length $2\delta$ and $P_{\gamma_j}$ is the Poincaré map associated with $\gamma_j$, the shortest bouncing ball orbits. The Casimir energy of the configuration $\Omega$ for a massless scalar field would then be given by $\frac{\hbar c}{2}\operatorname{Tr}(D_f)$ in the case $f(k) = k$ and is therefore equal to
$$\zeta = \frac{\hbar c}{2\pi}\int_0^\infty \Xi(ik)\,dk.$$
In this paper, we introduce the numerical framework for computing the Casimir energy based on the evaluation of the log determinant of the integral operators in the acoustic case 1 in Section 2. Afterwards, two efficient methods for computing the integrand of the Casimir energy are presented in Section 3; these enable the efficient treatment of large-scale problems. In Section 4, several examples of computing the Casimir energy between compact objects are shown, and we compare our results with those obtained by other methods. Note that all the tests and examples in this paper were computed with version 0.2.4 of the Bempp-cl library [33]. Finally, Section 5 concludes the paper and discusses future plans.
Numerical methods for computing the Casimir energy in acoustic scattering
In this section, we give details of computing the Casimir energy via boundary integral operator discretisations. Assume $\Omega^- \subset \mathbb{R}^d$, for $d \geq 2$, is the interior open bounded domain that the scatterer occupies, with piecewise smooth Lipschitz boundary $\Gamma$. The exterior domain is denoted by $\Omega^+ = \mathbb{R}^d\setminus\overline{\Omega^-}$. Further, $n$ is the almost everywhere defined exterior unit normal to the surface $\Gamma$, pointing outwards from $\Omega^-$, and $n_x$ is the normal to $\Gamma$ at the point $x \in \Gamma$.
In the scalar case, the Casimir energy can be expressed in terms of a certain single-layer boundary operator, which we will define below. We then present its relationship with the Krein spectral shift function and demonstrate how it can be computed in practice.
The single-layer boundary operator
For the bounded interior domain $\Omega^-$ or the unbounded exterior domain $\Omega^+$, the space of the (locally) square integrable functions is
$$L^2(\Omega^-) := \left\{f : \Omega^- \to \mathbb{C} \;\middle|\; f \text{ is Lebesgue measurable and } \int_{\Omega^-}|f|^2 < \infty\right\},$$
$$L^2_{\mathrm{loc}}(\Omega^+) := \left\{f : \Omega^+ \to \mathbb{C} \;\middle|\; f \text{ is Lebesgue measurable and } \int_K|f|^2 < \infty \text{ for all compact } K \subset \Omega^+\right\}$$
and note that the subscript "loc" can be removed if the domain is bounded (i.e. $L^2_{\mathrm{loc}}(\Omega^-) = L^2(\Omega^-)$). We denote by $H^s_{\mathrm{loc}}(\Omega^\pm)$ the standard Sobolev spaces associated with the Lipschitz domains. In particular, for integers $s \geq 0$, we have
$$H^s_{\mathrm{loc}}(\Omega^\pm) := \left\{f \in L^2_{\mathrm{loc}}(\Omega^\pm) \;\middle|\; D^\alpha f \in L^2_{\mathrm{loc}}(\Omega^\pm) \text{ for all } \alpha \text{ with } |\alpha| \leq s\right\},$$
where $\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_d)$ is a multi-index with $|\alpha| = \alpha_1 + \alpha_2 + \cdots + \alpha_d$, and the derivatives are understood in the weak sense. One also has the Sobolev spaces $H^s(\Gamma)$ on the boundary for any $-\frac{1}{2} \leq s \leq \frac{1}{2}$. For a function $p$ on $\Omega$ that is continuous up to the boundary we have the trace map $\gamma_D^\pm$ defined by
$$\gamma_D^\pm p(x) := \lim_{\Omega^\pm \ni x' \to x \in \Gamma} p(x')$$
that maps the function to its boundary value. This trace map is well-known to extend continuously to a map
$$\gamma_D^\pm : H^1_{\mathrm{loc}}(\Omega^\pm) \to H^{1/2}(\Gamma).$$
For the purposes of this paper it is sufficient to understand $H^{1/2}(\Gamma)$ as the range space of the trace operator on $H^1_{\mathrm{loc}}(\Omega)$. We also need the space $H^{-1/2}(\Gamma)$, which is the dual space of $H^{1/2}(\Gamma)$ with $L^2(\Gamma)$ as pivot space.
We can now define the single-layer boundary operator $V_k : H^{-1/2}(\Gamma) \to H^{1/2}(\Gamma)$ as the continuous extension of the map defined in terms of an integral kernel as follows:
$$(V_k\mu)(x) := \int_\Gamma g_k(x, y)\,\mu(y)\,dS_y,$$
for $\mu \in H^{-1/2}(\Gamma)$ and $x \in \Gamma$.
Here,
$$g_k(x, y) = \begin{cases}\dfrac{i}{4}H_0^{(1)}(k|x - y|), & d = 2,\\[2mm] \dfrac{e^{ik|x-y|}}{4\pi|x-y|}, & d = 3,\end{cases} \tag{3}$$
with $H_0^{(1)}$ a Hankel function of the first kind.
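For reference, the kernel (3) is simple to evaluate numerically. The following sketch (our own illustration, not code from the paper) covers both dimensions and also accepts the purely imaginary wavenumbers $ik$ that appear in the Casimir integrand:

import numpy as np
from scipy.special import hankel1

def green(k, x, y, dim=3):
    # Free-space Helmholtz Green's function g_k(x, y) of Eq. (3).
    r = np.linalg.norm(np.asarray(x, dtype=float) - np.asarray(y, dtype=float))
    if dim == 2:
        return 0.25j * hankel1(0, k * r)
    return np.exp(1j * k * r) / (4.0 * np.pi * r)

# For k = i*kappa the 3D kernel reduces to e^{-kappa*r} / (4 pi r):
print(green(2.0j, [0.0, 0.0, 0.0], [1.0, 0.0, 0.0]))

For $k = i\kappa$ the kernel thus decays like $e^{-\kappa|x-y|}$, which is the exponential decay exploited repeatedly below.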
The formula of the Casimir energy
Before we present the formula of the Casimir energy, let us introduce the following theorem.
Theorem 1. [29] Consider $\Omega$ as a domain assembled from individual objects $\Omega_i$. Let $V_k$ be the single-layer boundary operator defined on the boundary $\partial\Omega = \bigcup_{i=1}^N \partial\Omega_i$, and let $\tilde{V}_k$ be the "diagonal part" of $V_k$ obtained by restricting the integral kernel to the subset
$$\bigcup_{i=1}^N \partial\Omega_i \times \partial\Omega_i \subset \partial\Omega \times \partial\Omega.$$
Then the operator $V_k\tilde{V}_k^{-1} - I$, with $I$ the identity operator, is trace-class and
$$\Xi(k) = \log\det\left(V_k\tilde{V}_k^{-1}\right),$$
where the Fredholm determinant $\det(V_k\tilde{V}_k^{-1})$ is well-defined 2 .
By taking $m = 0$ and $s = 1$ in (1), this gives the formula
$$\operatorname{Tr}\left(\Delta^{1/2} + (N - 1)\Delta_0^{1/2} - \sum_{j=1}^{N}\Delta_j^{1/2}\right) = \frac{1}{\pi}\int_0^\infty \Xi(ik)\,dk. \tag{4}$$
Equation (4) is used to compute the Casimir energy between the objects and the formula is written as
$$\zeta = \frac{\hbar c}{2\pi}\int_0^\infty \Xi(ik)\,dk. \tag{5}$$
This formula is identical to the one proposed by Johnson in [35] who uses a non-rigorous path integral argument for its derivation.
Remark 1.
There is a relation between the relative Krein spectral shift function and the single-layer boundary integral operator: for $k > 0$,
$$-\frac{1}{\pi}\operatorname{Im}\Xi(k) = \frac{i}{2\pi}\left(\Xi(k) - \Xi(-k)\right) = \xi(k).$$
Galerkin discretization and boundary element spaces
In order to compute the integral (5), we need to compute the log determinant of the operators V kṼ −1 k . In this section we discuss Galerkin discretisations to compute this quantity.
Define the triangulation $\mathcal{T}_h$ of the boundary surface $\Gamma$ with triangular surface elements $\tau_l$ and associated nodes $x_i$ such that $\mathcal{T}_h = \bigcup_l \tau_l$, where $h$ is the mesh size, and define the space of continuous piecewise linear functions
$$P_h^1(\Gamma) = \{v_h \in C^0(\Gamma) : v_h|_{\tau_l} \in P^1(\tau_l) \text{ for } \tau_l \in \mathcal{T}_h\},$$
where $P^1(\tau_l)$ denotes the space of polynomials of order less than or equal to 1 on $\tau_l$. We have
$$P_h^1(\Gamma) := \operatorname{span}\{\phi_j\} \subset H^{-\frac{1}{2}}(\Gamma) \quad\text{with}\quad \phi_j(x_i) = \begin{cases}1, & i = j,\\ 0, & i \neq j,\end{cases}$$
the $\phi_j$ being the nodal basis functions.
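In practice these spaces and operators are assembled with the Bempp-cl library. The sketch below follows the names used in its tutorials (our own illustration; since only imaginary wavenumbers $ik$ enter the Casimir integrand, the modified Helmholtz single-layer operator with kernel $e^{-k|x-y|}/(4\pi|x-y|)$ is used, and the dense conversion via as_matrix is an assumption that may need adapting to the library version):

import bempp.api

# Mesh one unit sphere and set up the continuous piecewise linear ("P1") space.
grid = bempp.api.shapes.sphere(r=1.0, origin=(0, 0, 0), h=0.1)
space = bempp.api.function_space(grid, "P", 1)

# Galerkin matrix of the single-layer operator at wavenumber ik with k = 0.8.
k = 0.8
slp = bempp.api.operators.boundary.modified_helmholtz.single_layer(
    space, space, space, k)
V = bempp.api.as_matrix(slp.weak_form())  # dense Galerkin matrix
print(V.shape)

For several obstacles one assembles one such matrix per pair of boundaries, giving the blocks $V_{ij}(ik)$ of (6) below.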
Remark 2.
Since $H^{-1/2}(\Gamma)$ does not require continuity, we could use a space of simple piecewise constant functions. The reason why we choose piecewise linear functions is the size of the arising matrix systems for dense calculations. The computation of the log determinant requires $O(n^3)$ operations, where $n$ is the dimension of our approximation basis. For sphere-like and other similar geometries there are in practice roughly twice as many triangles as nodes in the mesh. Hence, while the assembly cost with piecewise linear functions is higher, the resulting matrix has only half the dimension, resulting in roughly a factor-eight reduction of the computational complexity for the log determinant. A disadvantage is that on geometries with corners or edges the convergence close to these singularities is suboptimal with continuous piecewise linear functions.
Having defined the basis functions $\phi_j$, we can express each entry of the Galerkin discretisation. Assume there are $N$ objects; then the matrix of the operator $V_k$ is an $N$-by-$N$ block matrix, written as
$$\mathbf{V}(k) = \begin{pmatrix} V_{11}(k) & V_{12}(k) & \cdots & V_{1N}(k)\\ V_{21}(k) & V_{22}(k) & \cdots & V_{2N}(k)\\ \vdots & \vdots & \ddots & \vdots\\ V_{N1}(k) & V_{N2}(k) & \cdots & V_{NN}(k) \end{pmatrix} \tag{6}$$
and the matrix $\tilde{\mathbf{V}}(k)$ is the diagonal part of $\mathbf{V}(k)$:
$$\tilde{\mathbf{V}}(k) = \begin{pmatrix} V_{11}(k) & 0 & \cdots & 0\\ 0 & V_{22}(k) & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & V_{NN}(k) \end{pmatrix}. \tag{7}$$
Therefore, the entry in the $m$th row and $n$th column of the block matrix $V_{ij}(k)$ is
$$V_{ij}^{(m,n)}(k) = \left\langle V_{ij}(k)\phi_n^{(j)}, \phi_m^{(i)}\right\rangle = \int_{\Gamma_i}\phi_m^{(i)}(x)\int_{\Gamma_j}g_k(x, y)\,\phi_n^{(j)}(y)\,dS_y\,dS_x, \tag{8}$$
where
$$\phi^{(i)} = \left\{\phi_1^{(i)}, \phi_2^{(i)}, \ldots, \phi_N^{(i)}\right\}$$
is the set of basis functions defined on the ith object and ·, · denotes the standard L 2 (Γ) inner product.
The value of $\Xi(ik) = \log\det(\mathbf{V}(ik)\tilde{\mathbf{V}}(ik)^{-1})$ can now be computed explicitly by evaluating the corresponding log determinants.
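For moderate problem sizes this evaluation can be done densely. The sketch below (our own illustration) takes the assembled blocks $V_{ij}(ik)$ as a nested list of NumPy arrays and uses $\log\det(\mathbf{V}\tilde{\mathbf{V}}^{-1}) = \log\det\mathbf{V} - \log\det\tilde{\mathbf{V}}$, so no inverse is formed; on the imaginary axis the matrices are real symmetric, and positive determinants are assumed:

import numpy as np

def xi_dense(blocks):
    # blocks[i][j] holds the Galerkin matrix V_ij(ik) as a numpy array.
    n = len(blocks)
    V = np.block(blocks)
    Vt = np.block([[blocks[i][j] if i == j else np.zeros_like(blocks[i][j])
                    for j in range(n)] for i in range(n)])
    s1, ld1 = np.linalg.slogdet(V)
    s2, ld2 = np.linalg.slogdet(Vt)
    assert s1 > 0 and s2 > 0   # expected for wavenumbers on the imaginary axis
    return ld1 - ld2           # Xi(ik) = log det V - log det Vtilde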
The function Ξ(ik) has a very favourable decay behaviour for growing k that we can use to limit the number of quadrature points necessary to evaluate the corresponding Casimir integral, namely under certain convexity assumptions on the obstacles it holds that
Ξ(ik) = O(e −2Zk ).
Here, Z is the minimum distance between the obstacles [36, Theorem 4.1].
This result can be justified heuristically, using a simple matrix perturbation argument. Consider a symmetric matrix A partitioned as
$$A = \begin{pmatrix} A_1 & 0\\ 0 & A_2 \end{pmatrix}$$
and a symmetric matrix E partitioned as
$$E = \begin{pmatrix} 0 & E_1^T\\ E_1 & 0 \end{pmatrix}.$$
Then it holds for the $i$th eigenvalue $\lambda_i(A)$ and the $i$th eigenvalue $\lambda_i(A + E)$ that
$$|\lambda_i(A) - \lambda_i(A + E)| \leq \frac{\|E\|^2}{\mathrm{gap}_i},$$
where $\mathrm{gap}_i$ is the distance of $\lambda_i(A)$ to the spectrum of $A_2$ if $\lambda_i(A)$ is an eigenvalue of $A_1$, and to the spectrum of $A_1$ if $\lambda_i(A)$ is an eigenvalue of $A_2$. Details can be found in [37].
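A quick numerical check of this quadratic behaviour on small illustrative matrices (values chosen arbitrarily; this is our own demonstration, not from the paper):

import numpy as np

rng = np.random.default_rng(1)
A1, A2 = np.diag([3.0, 2.0]), np.diag([0.5, 0.1])
A = np.block([[A1, np.zeros((2, 2))], [np.zeros((2, 2)), A2]])
E1 = 1e-3 * rng.standard_normal((2, 2))
E = np.block([[np.zeros((2, 2)), E1.T], [E1, np.zeros((2, 2))]])
# Eigenvalue shifts are of order ||E||^2 / gap_i, i.e. ~1e-6 here, not ~1e-3.
shift = np.abs(np.sort(np.linalg.eigvalsh(A)) - np.sort(np.linalg.eigvalsh(A + E)))
print(shift.max())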
Now assume that we have two different obstacles. Then we have $A_1 = \mathbf{V}_{11}(ik)$, $A_2 = \mathbf{V}_{22}(ik)$, and $E_1 = \mathbf{V}_{21}(ik)$ is the matrix of cross interactions between the two obstacles. For complex wavenumbers $ik$, the Green's function between two obstacles decays exponentially like $e^{-Zk}$, where $Z$ is the minimal distance between them, resulting in a matrix perturbation result of the form
$$|\lambda_i(\mathbf{V}) - \lambda_i(\tilde{\mathbf{V}})| = O(e^{-2Zk})$$
for increasing $k$ (see Figure 1), from which the corresponding perturbation result for the log determinant follows. While the linear algebra argument is useful as a heuristic explanation, it is not as rigorous as the analytical result in [36]. In particular, we want to emphasize that the exponential decay bound with the quadratic factor also holds if the two obstacles are identical, which is not obvious from purely linear algebraic considerations. An example of this is given in Figure 2. In what follows we demonstrate and compare iterative solver approaches based on standard Arnoldi iterations [38, 39] and on the inverse-free Krylov subspace method [40, 41]. We will also discuss an acceleration strategy based on the idea of recycling projection bases from one quadrature point to the next.
Method I: Standard Arnoldi method
The first efficient method for solving our eigenvalue problem $\mathbf{V}(ik)\tilde{\mathbf{V}}(ik)^{-1}x = \lambda x$ is the Arnoldi method [39, Section 6.2]. The idea of this method is to use Arnoldi iterations to construct the Krylov subspace
$$\mathcal{K}_m\!\left(\mathbf{V}(ik)\tilde{\mathbf{V}}(ik)^{-1}, b\right),$$
where $b$ is some initial vector and $m$ is the dimension of this Krylov subspace, and to then compute the eigenvalues of the resulting projected Hessenberg matrix $H_m$ (see [42]). These eigenvalues are good approximations of the extreme eigenvalues of $\mathbf{V}(ik)\tilde{\mathbf{V}}(ik)^{-1}$ [39, Proposition 6.10, Theorem 6.8].
The main cost of this standard Arnoldi method is the computation of the Krylov subspace $\mathcal{K}_m$. In this process, one has to compute the matrix-vector product $\tilde{\mathbf{V}}^{-1}y$ for some vector $y$, which is equivalent to solving the linear system $\tilde{\mathbf{V}}x = y$. This can be implemented efficiently as the matrix $\tilde{\mathbf{V}}$ is block diagonal. Therefore, we only need to compute an LU decomposition of each diagonal block $\mathbf{V}_{jj}$, rather than of the whole system matrix, and apply forward and backward substitution to solve the linear systems $\mathbf{V}_{jj}x_j = y_j$. Note that if all the scatterers are identical, one only needs to assemble one diagonal block and compute one LU decomposition.
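The following sketch illustrates this block-wise solve (our own illustration; the diagonal blocks are placeholder NumPy arrays): each block is factorized once and the factors are reused for every product $\tilde{\mathbf{V}}^{-1}y$ inside the Arnoldi iteration.

import numpy as np
from scipy.linalg import lu_factor, lu_solve

def make_block_solver(diag_blocks):
    # One LU factorization per obstacle; identical scatterers could share one.
    factors = [lu_factor(B) for B in diag_blocks]
    offsets = np.cumsum([0] + [B.shape[0] for B in diag_blocks])

    def solve(y):
        x = np.empty_like(y)
        for f, a, b in zip(factors, offsets[:-1], offsets[1:]):
            x[a:b] = lu_solve(f, y[a:b])   # solve V_jj x_j = y_j
        return x

    return solve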
Method II: Inverse-free Krylov subspace method
An alternative to the standard Arnoldi method is the inverse-free projection method, which is also based on Arnoldi iterations but does not require any matrix inversions. Consider the eigenvalue problem $\mathbf{V}(ik)\tilde{\mathbf{V}}(ik)^{-1}x = \lambda x$; it is equivalent to the following generalized eigenvalue problem:
$$\mathbf{V}(ik)x = \lambda\tilde{\mathbf{V}}(ik)x. \tag{9}$$
An important property of this problem is that, as we are only interested in wavenumbers $ik$ along the imaginary axis, the corresponding matrix $\tilde{\mathbf{V}}(ik)$ is positive definite, and $\mathbf{V}(ik)$ is still symmetric.
In [40, 41], the authors proposed an inverse-free Krylov subspace method for computing a few extreme eigenvalues of a symmetric definite generalized eigenvalue problem. The following algorithm summarizes the method.
Algorithm 1: Inverse-free Krylov subspace method for computing multiple extreme eigenvalues of the generalized eigenvalue problem $Ax = \lambda Bx$.
Input: Symmetric matrix $A \in \mathbb{R}^{n\times n}$, s.p.d. matrix $B \in \mathbb{R}^{n\times n}$, an initial approximation $x$ with $\|x\| = 1$, a given shift $\rho$ and the dimension of the Krylov subspace $m \geq 1$.
Output: A set of approximate eigenvalues of Ax = λBx and associated eigenvectors.
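A compact NumPy/SciPy sketch of one pass of Algorithm 1 (our own illustration; in the KSSF setting $A = \mathbf{V}(ik)$ and $B = \tilde{\mathbf{V}}(ik)$, and the Krylov basis is built by Arnoldi iterations with modified Gram-Schmidt):

import numpy as np
from scipy.linalg import eigh

def inverse_free_step(A, B, x, rho=1.0, m=20):
    C = A - rho * B
    Z = np.zeros((A.shape[0], m))
    Z[:, 0] = x / np.linalg.norm(x)
    for j in range(1, m):                      # Step 1: basis of K_m(C, x)
        w = C @ Z[:, j - 1]
        for i in range(j):
            w -= (Z[:, i] @ w) * Z[:, i]
        nrm = np.linalg.norm(w)
        if nrm < 1e-14:                        # breakdown: subspace is invariant
            Z = Z[:, :j]
            break
        Z[:, j] = w / nrm
    Am, Bm = Z.T @ C @ Z, Z.T @ B @ Z          # Step 2: project A - rho*B and B
    lam, Y = eigh(Am, Bm)                      # Step 3: projected pencil (Am, Bm)
    return lam + rho, Z @ Y                    # Step 4: reverse the shift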
Recycling Krylov subspace based variant
The main cost of the standard Arnoldi method and of the inverse-free method comes from the matrix-vector products (matvecs) in the Arnoldi iterations, where the matrices involved are large and dense as they represent discretized integral operators. In order to reduce the cost of computing a Krylov subspace basis for each wavenumber $ik$, we introduce a subspace-recycling-based method to speed up the computation.
This can be regarded as a variant of these two methods.
This recycling strategy is based on the idea that a Krylov subspace for a previous quadrature point in the KSSF integral will be a good approximation to a Krylov subspace for the current quadrature point. We initially compute a Krylov basis for the wavenumber $ik_1$ associated with the first quadrature point. We then extract several eigenvectors associated with the extremal eigenvalues based on Algorithm 1 and orthogonalize them to obtain an initial approximation basis for the wavenumber $ik_2$. For this wavenumber we project the matrices onto the recycled basis, compute approximate eigenpairs $(\tilde{\lambda}_i, \tilde{x}_i)$ and then extend the subspace basis with the residuals
$$r_i = \mathbf{V}(ik)\tilde{x}_i - \tilde{\lambda}_i\tilde{\mathbf{V}}(ik)\tilde{x}_i.$$
With the extended subspace we recompute the eigenpairs for the second wavenumber's case and extract eigenvectors as starting basis for the third wavenumber, and so on.
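The sketch below outlines one possible realisation of this loop (our own illustration; assemble_V and assemble_Vt are hypothetical assembly callbacks, inverse_free_step is the sketch above, and the pencil eigenvalues are assumed positive since they cluster at 1 on the imaginary axis):

import numpy as np
from scipy.linalg import eigh, qr

def xi_along_quadrature(ks, assemble_V, assemble_Vt, tol=1e-5, m=100):
    values, basis = [], None
    for k in ks:
        V, Vt = assemble_V(k), assemble_Vt(k)
        if basis is None:                      # first quadrature point
            x0 = np.random.default_rng(0).standard_normal(V.shape[0])
            lam, X = inverse_free_step(V, Vt, x0, rho=1.0, m=m)
        else:                                  # recycled basis + residuals
            lam, Y = eigh(basis.T @ V @ basis, basis.T @ Vt @ basis)
            X = basis @ Y
            R = V @ X - (Vt @ X) * lam         # r_i = V x_i - lam_i Vt x_i
            basis, _ = qr(np.hstack([X, R]), mode="economic")
            lam, Y = eigh(basis.T @ V @ basis, basis.T @ Vt @ basis)
            X = basis @ Y
        values.append(np.sum(np.log(lam[np.abs(np.log(lam)) > tol])))
        keep = np.abs(np.log(lam)) > tol       # recycle significant pairs only
        if not keep.any():
            keep[np.argmax(np.abs(np.log(lam)))] = True
        basis, _ = qr(X[:, keep], mode="economic")
    return values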
Numerical results
In this section, we compare the performance of the standard Arnoldi and inverse-free Krylov subspace methods and their recycled variants for computing the log determinant term of $\mathbf{V}(ik)\tilde{\mathbf{V}}(ik)^{-1}$. As the dominant cost of these methods lies in the matrix-vector products (matvecs) with the discretized boundary integral operators, we also compare the number of matvecs between these methods. All the tests are performed on two spheres with equal radii $r_1 = r_2 = 1$. The sphere meshes are refined with mesh size $h = 0.1$, which results in the matrix $\mathbf{V}(ik)$ having size $3192 \times 3192$. Again, the minimum distance between the spheres is denoted by $Z$ and is set to 0.5, 1.5 and 3.0. The number of quadrature points is 20.
We start by comparing the relative error for approximating $\log\det(\mathbf{V}(ik)\tilde{\mathbf{V}}(ik)^{-1})$ using all these methods. The dimension of the Krylov subspace $\mathcal{K}_m$ in all the algorithms is $m = 100$. For the methods with subspace recycling, the number of recycled eigenvectors is not fixed but depends on the number of relevant eigenvalues for each wavenumber. In our experiments, we only recycle eigenvectors whose corresponding eigenvalues have logarithm greater than $10^{-s}$, where $s = 3, 4, 5$ for $Z = 0.5, 1.5, 3.0$, respectively; in this case the estimates of the log determinant match those obtained from direct computations in at least three significant digits. With these settings, the number of recycled eigenvectors becomes smaller and smaller as $k$ gets larger. Figure 4 plots the dimension of the extended subspace for each wavenumber; it is equal to the number of recycled eigenvectors plus the number of residuals $\{r_i\}_i$. Table 1 indicates that with the settings above one obtains at least three significant digits of accuracy, and that the accuracy of the methods with subspace recycling is similar to that of the methods without any recycling. As for the performance of these methods and their variants at larger quadrature points, we cannot always achieve three digits of accuracy. However, this does not affect the estimates of the Casimir energy much, as the corresponding log determinant values are relatively small compared to the others and contribute very little to the Casimir energy. The shift is set to $\rho = 1$ for the inverse-free method.
Recall that the main cost in these algorithms comes from the computation of the Krylov basis and the matrix projections. For large problems, the dominating cost is the involved matrix-vector products with the discretized integral operators. We count the number of matvecs associated with the discretized integral operators $(V_{ij})$ for each algorithm, and the results are summarized in Table 2. In Figure 5, we plot the number of actual matvecs with the discretized matrix blocks $\mathbf{V}_{ij}$, for $i, j = 1, 2, \ldots, N$, in each individual algorithm when computing the Casimir energy between two spheres at different distances $Z$. It shows that the recycling strategy significantly reduces the overall number of matvecs. Although the number of matvecs in the standard Arnoldi method with subspace recycling (light red in Figure 5) is smaller than that of the inverse-free method with subspace recycling (light blue in Figure 5), one has to compute the LU decomposition for each diagonal block in each Arnoldi iteration, which has cubic complexity.
The resulting matvec counts in Table 2 are $(6m-2)N_q$ and $(6m-2) + 12\sum_{i=1}^{N_q-1}s_i$ for the inverse-free method without and with subspace recycling, and $(4m-4)N_q$ and $(4m-4) + 8\sum_{i=1}^{N_q-1}s_i$ for the standard Arnoldi method without and with subspace recycling.
Numerical experiments
In this section, we demonstrate numerical results for computing the Casimir energy between two conducting objects, where the objects are spheres, a sphere-torus pair, Menger sponges, ice crystals and ellipsoids. A reference value of the Casimir energy is computed by the Richardson extrapolation method, which is often used to obtain a higher-order estimate at zero grid spacing.
In the case of spheres we also compare with known asymptotic expansions [24].
4.1. Sphere-sphere and sphere-torus case
Consider two spheres with equal radii $r_1 = r_2 = 1$ and spacing $Z$ (see Figure 6) as the scatterers.
We denote by $\Xi_h$ the value of $\Xi$ computed at the refinement level with mesh size $h$, and by $\Xi_{h=0}$ the higher-order estimate of $\Xi_h$ at zero grid spacing, which is computed by Richardson extrapolation,
$$\Xi_{h=0} \approx \frac{h_{\mathrm{coarse}}^2\,\Xi_{h_{\mathrm{fine}}} - h_{\mathrm{fine}}^2\,\Xi_{h_{\mathrm{coarse}}}}{h_{\mathrm{coarse}}^2 - h_{\mathrm{fine}}^2}, \tag{10}$$
where $h_{\mathrm{fine}}$ and $h_{\mathrm{coarse}}$ are two different mesh sizes with $h_{\mathrm{fine}} < h_{\mathrm{coarse}}$. In this example, we set $h_{\mathrm{coarse}} = 0.1$ and $h_{\mathrm{fine}} = 0.05$.
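In code, the extrapolation (10) is a one-liner (assuming the second-order convergence observed in Figure 7):

def richardson(xi_fine, xi_coarse, h_fine=0.05, h_coarse=0.1):
    # Eq. (10): higher-order estimate at zero grid spacing.
    return (h_coarse**2 * xi_fine - h_fine**2 * xi_coarse) / (h_coarse**2 - h_fine**2)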
We begin by validating the construction of the integrand function $\Xi$ of the Casimir integral (5) by comparing the values of $\Xi_h(ik)$ at different refinement levels with the extrapolated value $\Xi_{h=0}$ for $ik = 0.8i$ (see Figure 7). In the tables of Figure 7, we also provide reference values computed by discretizing the single-layer boundary integral operators in terms of spherical harmonic functions, as suggested by [26]. These are believed to be accurate to within 0.05%.
In Figure 7, one can see that $\Xi_h(ik)$ converges to $\Xi_{h=0}$ as $h$ decreases. The figure is plotted on a log-log scale and the slope of the lines is around 2, indicating that the convergence is quadratic.
The truncation error of the Casimir integral can then be estimated as
$$\int_\kappa^\infty f(k)\,dk \approx \frac{C e^{-2Z\kappa}}{2Z}, \tag{11}$$
where $\kappa$ is the upper bound of the integration. Recall that we changed the variable from $k$ to $y = e^{-k}$ when applying the trapezoidal rule; this upper bound $\kappa$ corresponds to the lower bound of $y$.
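The resulting quadrature can be sketched as follows (our own illustration; Xi is a callable returning $\log\det(\mathbf{V}(ik)\tilde{\mathbf{V}}(ik)^{-1})$ for real $k > 0$, and y_min $= e^{-\kappa}$ encodes the truncation point; $\hbar c$ is dropped, so the normalized energy is returned):

import numpy as np
from scipy.integrate import trapezoid

def casimir_energy(Xi, n_quad=20, y_min=1e-3):
    # Substitute y = e^{-k}, so k = -log y and dk = -dy / y; Eq. (5) becomes
    # (1 / 2 pi) * int_{y_min}^1 Xi(i * (-log y)) / y dy.
    y = np.linspace(y_min, 1.0, n_quad)
    integrand = np.array([Xi(-np.log(v)) for v in y]) / y
    return trapezoid(integrand, y) / (2.0 * np.pi)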
With the upper bound of the integration determined, one can estimate the Casimir energy between two spheres with radii $r_1 = r_2 = 1$ at distance $Z$ via formula (5) at two different refinement levels: $h_{\mathrm{fine}} = 0.05$ ($\dim(\mathbf{V}_{ik}) = 12603$) and $h_{\mathrm{coarse}} = 0.1$ ($\dim(\mathbf{V}_{ik}) = 3192$). Afterwards, the extrapolated result can be computed from these Casimir energy estimates.
According to [24], the Casimir energy between two spheres (with equal radii $r$) at asymptotically large separations can be obtained as a series in the ratio of the centre distance $l = 2r + Z$ to the sphere radius $r$:
$$\zeta_{\mathrm{asy}} = -\frac{\hbar c}{\pi}\frac{1}{l}\sum_{n=0}^{\infty} b_n\left(\frac{r}{l}\right)^{n+2}. \tag{12}$$
Figure 8 shows the comparison between the Casimir energy computed from the asymptotic series (12), the exact value evaluated through Richardson extrapolation, and the reference value $\zeta_{\mathrm{ref}}$ provided in [26, Equation (64)]. Here, we observe that the asymptotic value gradually approaches the exact value as the distance $Z$ increases, since the asymptotic expansion (12) is only valid when the distance between the two spheres is asymptotically large. Now, let us consider the case when the two spheres have different radii $r_1, r_2$ (see Figure 9).
In this case, one can still determine the upper bound of the integration by fitting the integrand curve and taking the error tolerance into account. Afterwards, we compare the extrapolated value of the Casimir energy, computed through Richardson extrapolation, with the asymptotic expansion. Denoting the centre distance by $l = r_1 + r_2 + Z$, the asymptotic series of the Casimir energy between these two spheres is given by
$$\zeta_{\mathrm{asy}} = -\frac{\hbar c}{\pi}\frac{1}{l}\sum_{n=0}^{\infty}\tilde{b}_n(\eta)\left(\frac{r_1}{l}\right)^{n+2}, \tag{13}$$
where the coefficients $\{\tilde{b}_n\}$ depend on the parameter $\eta = r_2/r_1$ and the first six coefficients are
$$\tilde{b}_0 = -\frac{\eta}{4},\quad \tilde{b}_1 = -\frac{\eta + \eta^2}{8},\quad \tilde{b}_2 = -\frac{34(\eta + \eta^3) + 9\eta^2}{48},\quad \tilde{b}_3 = -\frac{2(\eta + \eta^4) + 23(\eta^2 + \eta^3)}{32},$$
$$\tilde{b}_4 = -\frac{8352(\eta + \eta^5) + 1995(\eta^2 + \eta^4) + 38980\eta^3}{5760},\quad \tilde{b}_5 = -\frac{-1344(\eta + \eta^6) + 5478(\eta^2 + \eta^5) + 2357(\eta^3 + \eta^4)}{2304}.$$
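A sketch evaluating the truncated series (13) with the six coefficients above (our own illustration; $\hbar c$ is dropped, so the normalized energy is returned, and the centre distance $l = r_1 + r_2 + Z$ is used in both occurrences of the expansion parameter):

import numpy as np

def zeta_asymptotic(r1, r2, Z):
    eta = r2 / r1
    l = r1 + r2 + Z
    b = [
        -eta / 4,
        -(eta + eta**2) / 8,
        -(34 * (eta + eta**3) + 9 * eta**2) / 48,
        -(2 * (eta + eta**4) + 23 * (eta**2 + eta**3)) / 32,
        -(8352 * (eta + eta**5) + 1995 * (eta**2 + eta**4) + 38980 * eta**3) / 5760,
        -(-1344 * (eta + eta**6) + 5478 * (eta**2 + eta**5)
          + 2357 * (eta**3 + eta**4)) / 2304,
    ]
    # Truncated Eq. (13), normalized by hbar*c.
    return -sum(bn * (r1 / l) ** (n + 2) for n, bn in enumerate(b)) / (np.pi * l)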
In the following experiment, the radii of the spheres shown in Figure 9 are set to $r_1 = 0.5$ and $r_2 = 1$. As in the previous example, the exact value of the Casimir energy is computed through Richardson extrapolation, and the fine and coarse grid sizes are $h_{\mathrm{fine}} = 0.05$ ($\dim(\mathbf{V}_{ik}) = 7893$) and $h_{\mathrm{coarse}} = 0.1$ ($\dim(\mathbf{V}_{ik}) = 2023$), respectively.
In this case, the asymptotic value of the Casimir energy was estimated by the series (13), and the comparison between the extrapolated value and the asymptotic one is shown in Figure 10. Again, one can notice that as the distance between the two spheres increases, the asymptotic value gets close to the extrapolated one. Having validated the numerical framework for computing the Casimir energy, we end this section by computing the negative normalized Casimir energy between a torus and a sphere.
The torus is centered at the origin, the distance from the center of the tube to the center of the torus is $l_1 = 2$, and the radius of the tube is $l_2 = 0.5$; the sphere has radius $r = 1$ and its center always lies on the $z$-axis (see Figure 11 (Right)). From Figure 11 (Left), one can see that when the sphere and the torus share the same center, the negative normalized Casimir energy has the largest magnitude.
Realistic objects case
In this part, the Casimir energy between objects with special shapes, such as Menger sponges, ice crystals and ellipsoids, is computed through the Richardson extrapolation mentioned at the beginning of this section; the values labelled in the following figures are accurate to within three significant digits. Note that the size of the matrix involved in each example is stated in the figures. Figure 12 plots the Menger sponges at different levels (0, 1 and 2); the side length of these sponges is always 1. The Casimir energies between two Menger sponges at the same level are listed in Table 3, with the estimates evaluated using the standard Arnoldi method with subspace recycling and the inverse-free Krylov subspace method with subspace recycling. The dimension of the Krylov subspace is $m = 100$. The recycled eigenvectors have corresponding eigenvalues whose logarithm is larger than $10^{-5}$.
In the next example, the scatterers are ice crystals with the number of branches ranging from 2 to 6 (see Figure 15); the resulting energies are listed in Table 4. The estimates are again evaluated using the standard Arnoldi method with subspace recycling and the inverse-free Krylov subspace method with subspace recycling, with Krylov subspace dimension $m = 100$ and recycled eigenvectors whose corresponding eigenvalues have logarithm larger than $10^{-5}$.
It is not hard to imagine that the Casimir energy changes when the scatterers are rotated while the distance between them is kept unchanged. Therefore, in the last set of examples, we examine how the Casimir energy between identical ellipsoids changes as one of the ellipsoids rotates.
In Figure 18a, the upper ellipsoid is centered at (0, 0, 0) and the lower one at (0, 0, −(0.5 + 0.5 + Z)), where Z is the distance between the two ellipsoids. Without rotation, the Casimir energy between them at different distances Z is plotted in Figure 19a.
To explore how rotation affects the Casimir energy, one can keep one ellipsoid fixed and rotate the other. Figures 18b and 18c describe the cases where one of the ellipsoids rotates around the $z$- and $x$-axis, respectively. As seen in Figure 19b, the Casimir energy changes periodically as one ellipsoid is rotated around the $z$- or $x$-axis by 360 degrees. Now, consider 4 ellipsoids located on the vertices of a regular tetrahedron with edge length $l = 2$ (Figure 20); the principal semi-axes of all these ellipsoids are $r_1 = 0.6$ and $r_2 = 0.3$. Figure 20b and Figure 20c show the rotation of the ellipsoids inwards and outwards by 360 degrees towards the centroid of the tetrahedron, respectively. In order to use the Richardson extrapolation method to estimate the Casimir energy, we evaluate the integral (5) with grid sizes $h_{\mathrm{fine}} = 0.03$ and $h_{\mathrm{coarse}} = 0.05$. Note that since the number of scatterers has increased to four, the matrices $\mathbf{V}_{ik}$ and $\tilde{\mathbf{V}}_{ik}$ become 4-by-4 block and block-diagonal matrices, respectively. Figure 21 shows that the Casimir energy between these four ellipsoids changes periodically with the rotation. The scatterers of the last example are described in Figure 22. These six ellipsoids are located on the vertices of a regular octahedron with edge length $l = 2$ and again the principal semi-axes of all the ellipsoids are $r_1 = 0.6$ and $r_2 = 0.3$ (shown in Figure 22). This time, the ellipsoids rotate inwards and outwards by 360 degrees towards the centroid of the octahedron (Figure 22b and Figure 22c). By looking closely at these two rotation figures, we notice that Figure 22b can be obtained by rotating Figure 22c by 180 degrees. Therefore, the Casimir energies for the inwards and outwards cases are the same. Figure 23 shows how the Casimir energy changes among these six ellipsoids as they rotate.
Conclusion
We have demonstrated in this paper the practical performance and error behaviour of computing the Casimir energy for a number of different configurations, using the log determinant approach. A remaining problem is to speed up this method for large-scale configurations. Here, we compared the performance of different Krylov subspace methods, showing that together with recycling strategies the computational effort for large problems can be reduced significantly.
While we have demonstrated the results in this paper for the acoustic case, the principal techniques also transfer to the electromagnetic case. We aim to report corresponding results in future publications.
Supported by Leverhulme grant RPG-2017-329.
2 The Fredholm determinant is a generalization of the determinant of a finite dimensional matrix to linear operators that differ from the identity operator by a trace-class operator [34, Section 6.5.2]. Since the operator $V_k\tilde{V}_k^{-1} - I$, with $I$ the identity operator, is trace-class in the closed upper half space [29, Theorem 1.7], the determinant $\det(V_k\tilde{V}_k^{-1})$ is well-defined.
Figure 1: Exponential decay of $\Xi(ik)$ for two distinct spheres with radii $r_1 = 0.5$ and $r_2 = 1$ and minimum distance $Z = 1.5$. The red line is the decay bound and the blue line is the actual decay.
This purely linear algebraic consideration is not fully robust, as it ignores the importance of the eigenvalue gap in the perturbation result. But we can heuristically explain the gap as follows. On the continuous level the perturbations $E_1$ and $E_2$ are compact, so the tail end of the spectrum that converges to zero, with small values of $\mathrm{gap}_i$, is little affected by $E$, and the corresponding eigenvalues contribute $\log\frac{\lambda_i(A)}{\lambda_i(A+E)} \approx 0$ to the value of $\Xi$. The relevant eigenvalues are the larger ones, which for distinct obstacles have a sufficiently large value of $\mathrm{gap}_i$.
Figure 2: (Left) Exponential decay of $\Xi(ik)$ for two identical spheres with radius $r_1 = r_2 = 1$ and minimum distance $Z = 1.5$. The red line is the decay bound and the blue line is the actual decay. (Right) The integrand $\Xi(ik)$ after variable transformation to apply a numerical trapezoid rule for its evaluation.
The exponential decay property motivates a simple change of variables through $y = e^{-k}$ in the integrand $\Xi(ik) = \log\det(\mathbf{V}(ik)\tilde{\mathbf{V}}(ik)^{-1})$, which after transformation we can numerically evaluate with a simple trapezoidal rule. Figure 2 (Right) plots the integrand with regard to the new variable $y$.
3. Efficient methods for computing $\log\det(\mathbf{V}(ik)\tilde{\mathbf{V}}(ik)^{-1})$
By Section 2, to compute the Casimir energy it is necessary to evaluate the term $\log\det(\mathbf{V}(ik)\tilde{\mathbf{V}}(ik)^{-1})$ for different values of $k$. In this section, several efficient methods will be introduced to compute this log determinant. The log determinant of the matrix $\mathbf{V}(ik)\tilde{\mathbf{V}}(ik)^{-1}$ is equal to the sum of the logarithms of the eigenvalues of $\mathbf{V}(ik)\tilde{\mathbf{V}}(ik)^{-1}$. Since $\tilde{\mathbf{V}}(ik)$ is a compact perturbation of $\mathbf{V}(ik)$, most of the eigenvalues of the matrix $\mathbf{V}(ik)\tilde{\mathbf{V}}(ik)^{-1}$ are close to 1 (shown in Figure 3) and contribute little to the value of the Casimir energy. Hence, we do not need to compute all eigenvalues but only the extremal ones, making subspace methods such as Krylov solvers attractive for this problem.
Figure 3: The eigenvalues of the matrix $\mathbf{V}(ik)\tilde{\mathbf{V}}(ik)^{-1}$ when $ik = 0.8i$. The scatterers are two spheres with equal radii $r_1 = r_2 = 1$ and the minimal distance between them is $Z = 0.5$. The grid size of the mesh is $h = 0.2$.
1: Construct a basis $Z_m$ for the Krylov subspace $\mathcal{K}_m = \operatorname{span}(x, (A - \rho B)x, \ldots, (A - \rho B)^{m-1}x)$ with dimension $m$
2: Project $A$ and $B$ onto $Z_m$: $A_m = Z_m^T(A - \rho B)Z_m$, $B_m = Z_m^T B Z_m$
3: Compute all the eigenpairs $\{(\tilde{\lambda}_i, x_i)\}_{i=1,\ldots,m}$ of the matrix pencil $(A_m, B_m)$
4: Reverse the shift to obtain $\lambda_i = \tilde{\lambda}_i + \rho$.
Algorithm 1 approximates $m$ eigenvalues close to the shift $\rho$ for the matrix pencil $(A, B)$, where $m$ is the dimension of the Krylov subspace $\mathcal{K}_m$ in Step 1. The question is what shift strategy to use for $\rho$. In numerical experiments it turned out that for the KSSF problem choosing $\rho = 1$ approximates sufficiently well the eigenvalues that make the main contribution to $\log\det(\mathbf{V}(ik_j)\tilde{\mathbf{V}}(ik_j)^{-1})$. Additionally, the main cost of this inverse-free Krylov subspace method is the computation of the Krylov subspace and the projection of the matrices $A$ and $B$; in our case these are large dense matrices representing integral operators.
Figure 4: The dimension of the extended subspace in the inverse-free Krylov subspace method 3 with subspace recycling for each $k$ in $\log\det(\mathbf{V}(ik)\tilde{\mathbf{V}}(ik)^{-1})$, which is equal to the number of recycled eigenvectors plus the number of residuals $\{r_i\}_i$. The recycled eigenvectors have corresponding eigenvalues whose logarithm is larger than $10^{-s}$, where $s = 3, 4, 5$ when $Z = 0.5, 1.5, 3.0$, respectively.
Table 1 reports the relative error for approximating the value of $\log\det(\mathbf{V}(ik)\tilde{\mathbf{V}}(ik)^{-1})$ computed via the inverse-free Krylov subspace method and the standard Arnoldi method, with and without recycling of the subspace. The reference value is computed by a direct dense computation of the log determinant. The wavenumbers $ik$ are chosen to be associated with the first five consecutive quadrature points, whose corresponding log determinant values account for a large proportion of the Casimir integral.
Figure 5: The number of matvecs inside the inverse-free and standard Arnoldi methods with or without subspace recycling when computing the normalized Casimir energy between two spheres with equal radii $R = 1$ and distance $Z$ equal to 0.5, 1.5 and 3.0. The number of quadrature points is $N_q = 20$. The dimension of the Krylov subspace is set as $m = 100$ (Left) and 200 (Right).
Figure 6: Two spheres with equal radii $r_1 = r_2 = 1$; $Z$ is the minimal distance between them. $h_{\mathrm{coarse}} = 0.1$: $\dim(\mathbf{V}(ik)) = 3192$, no. of elements on both grids = 6384; $h_{\mathrm{fine}} = 0.05$: $\dim(\mathbf{V}(ik)) = 12603$, no. of elements on both grids = 25180.
Figure 7: $h$-convergence of $\Xi_h(ik)$ to the extrapolated value $\Xi_{h=0}(ik)$ when $ik = 0.8i$. The provided reference values are accurate to within 0.05%. The scatterers are two spheres with equal radii 1 and the distance between them is set as $Z = 0.5$, 1.5 and 3.0. The relative distance between $\Xi_h$ and $\Xi_{h=0}$ decreases as we refine the mesh. The dashed line shows order 2 convergence. The tables list the values of $\Xi_h(0.8i)$ for $h = 0$, 0.05, 0.1, 0.15 and 0.2, and the provided reference values.
Having shown the validity of the construction of $\Xi$, we are left with determining a proper upper bound for the Casimir integration. The method for determining the upper bound of the integration is inspired by the asymptotic decay behavior of the integrand. By Figure 1 and Figure 2, the integrand value $\log\det(\mathbf{V}(ik)\tilde{\mathbf{V}}(ik)^{-1})$ shares the same trend as $e^{-2Zk}$; this inspires us to fit the function $f(k) = Ce^{-2Zk}$ to the curve of the estimated integrand values. With the coefficient $C$ determined, one can estimate the absolute error of the approximation of the Casimir integral by computing the tail estimate (11).
Figure 8: Left: Negative normalized Casimir energy 4 computed by extrapolation (blue circles) and by the asymptotic series (orange hollow circles) for two spheres with equal radii. The radius is $r_1 = r_2 = 1$ and the distance $Z$ ranges from 0.5 to 3.0. The green hollow squares represent the data of [26]. Right: The table lists all the relevant data values in the figure.
Figure 9: Two spheres with unequal radii $r_1 = 0.5$ and $r_2 = 1$; $Z$ is the minimal distance between them. $h_{\mathrm{coarse}} = 0.1$: $\dim(\mathbf{V}_{ik}) = 2023$, no. of elements on both grids = 4038; $h_{\mathrm{fine}} = 0.05$: $\dim(\mathbf{V}_{ik}) = 7891$, no. of elements on both grids = 15774.
Figure 10: Negative normalized Casimir energy for two spheres with unequal radii $r_1 = 0.5$ and $r_2 = 1$; the distance $Z$ ranges from 0.5 to 3.0. The exact value of the (negative normalized) Casimir energy is written beside each data point, rounded to 4 significant digits.
Figure 11: Negative normalized Casimir energy between a torus and a sphere when the sphere moves along the $z$-axis. The parameters of the torus are $l_1 = 2$, $l_2 = 0.5$ and the radius of the sphere is $r = 1$.
In addition, inside the extrapolation process, when $h_{\mathrm{fine}} = 0.05$ we have $\dim(\mathbf{V}_{ik}) = 5664$, 8510 and 27136, and when $h_{\mathrm{coarse}} = 0.1$ we have $\dim(\mathbf{V}_{ik}) = 1456$, 3092 and 14464 for the level 0, 1 and 2 cases, respectively. By comparing the data points in this table, it is easy to see that the Casimir energy decreases as the number of iterations (the level) increases, since the cross-sectional area gets smaller.
Figure 12: Menger sponges at different levels. The length of each sponge is 1. (a) Level 0: $h_{\mathrm{coarse}} = 0.1$: $\dim(\mathbf{V}_{ik}) = 1456$, no. of elements on both grids = 2904; $h_{\mathrm{fine}} = 0.05$: $\dim(\mathbf{V}_{ik}) = 5664$, no. of elements on both grids = 11120. (b) Level 1: $h_{\mathrm{coarse}} = 0.1$: $\dim(\mathbf{V}_{ik}) = 3092$, no. of elements on both grids = 6216; $h_{\mathrm{fine}} = 0.05$: $\dim(\mathbf{V}_{ik}) = 8510$, no. of elements on both grids = 17052. (c) Level 2: $h_{\mathrm{coarse}} = 0.1$: $\dim(\mathbf{V}_{ik}) = 14464$, no. of elements on both grids = 29568; $h_{\mathrm{fine}} = 0.05$: $\dim(\mathbf{V}_{ik}) = 27136$, no. of elements on both grids = 54912.
Figure 13: The integrand of the Casimir energy between two Menger sponges at level 1 with distance $Z = 0.5$, 1.5 and 3.0.
Figure 14: Menger sponges at level 2: relative distance between the reference value (computed by Richardson extrapolation) and the estimates evaluated from the standard Arnoldi method with subspace recycling (solid red circles) and the inverse-free Krylov subspace method with subspace recycling (solid blue triangles). The dimension of the Krylov subspace is $m = 100$. The recycled eigenvectors have corresponding eigenvalues whose logarithm is larger than $10^{-5}$.
Figure 15: Ice crystals with different numbers of branches. (a) Two branches: $\dim(\mathbf{V}_{ik}) = 8792$, no. of elements on both grids = 17576. (b) Three branches: $\dim(\mathbf{V}_{ik}) = 13104$, no. of elements on both grids = 26200. (c) Four branches: $\dim(\mathbf{V}_{ik}) = 17554$, no. of elements on both grids = 35100. (d) Five branches: $\dim(\mathbf{V}_{ik}) = 21950$, no. of elements on both grids = 43900. (e) Six branches: $\dim(\mathbf{V}_{ik}) = 26262$, no. of elements on both grids = 52556.
Figure 16: The integrand of the Casimir energy between two five-branched ice crystals with distance $Z = 0.5$, 1.5 and 3.0.
Figure 17: Six-branched ice crystals: relative distance between the reference value (computed by Richardson extrapolation) and the estimates evaluated from the standard Arnoldi method with subspace recycling (solid red circles) and the inverse-free Krylov subspace method with subspace recycling (solid blue triangles). The dimension of the Krylov subspace is $m = 100$. The recycled eigenvectors have corresponding eigenvalues whose logarithm is larger than $10^{-5}$.
Figure 18: Two ellipsoids with or without rotation: when $h_{\mathrm{fine}} = 0.05$, $\dim(\mathbf{V}_{ik}) = 5517$; when $h_{\mathrm{coarse}} = 0.1$, $\dim(\mathbf{V}_{ik}) = 1498$. The principal semi-axes of the two ellipsoids are $r_1 = 0.5$ and $r_2 = 1$.
Figure 19: The dependence of the Casimir energy on the rotation angle of one of the ellipsoids. (a) Casimir energy between the two ellipsoids. (b) Casimir energy when one of the ellipsoids rotates.
Figure 20: Four ellipsoids with or without rotations: when $h_{\mathrm{fine}} = 0.03$, $\dim(\mathbf{V}_{ik}) = 11024$; when $h_{\mathrm{coarse}} = 0.05$, $\dim(\mathbf{V}_{ik}) = 4160$. The principal semi-axes of these ellipsoids are $r_1 = 0.6$ and $r_2 = 0.3$ and they are located on the vertices of a regular tetrahedron with edge length $l = 2$.
Figure 21: The dependence of the Casimir energy on the rotation angle of one of the ellipsoids. Inwards towards the centroid (solid blue squares); outwards towards the centroid (solid orange triangles).
Figure 22: Six ellipsoids with or without rotations: when $h_{\mathrm{fine}} = 0.03$, $\dim(\mathbf{V}_{ik}) = 16536$; when $h_{\mathrm{coarse}} = 0.05$, $\dim(\mathbf{V}_{ik}) = 6240$. The principal semi-axes of these ellipsoids are $r_1 = 0.6$ and $r_2 = 0.3$ and they are located on the vertices of a regular octahedron with edge length $l = 2$.
Figure 23: The dependence of the Casimir energy on the rotation angle of one of the ellipsoids.
Table 1: Relative error for approximating the value of $\log\det(\mathbf{V}(ik)\tilde{\mathbf{V}}(ik)^{-1})$ at the wavenumbers associated with the first five consecutive quadrature points via the inverse-free Krylov subspace and standard Arnoldi methods with/without subspace recycling.
Table 2: The number of matvecs associated with the discretized integral operators inside the inverse-free Krylov subspace and standard Arnoldi methods with or without subspace recycling. $N_q$ is the number of wavenumbers/quadrature points, $m$ is the dimension of the Krylov subspace for the first wavenumber (in the recycling case) or for all the wavenumbers (in the non-recycling case), and $s_i$ is the number of recycled eigenvectors from the $i$th wavenumber (in the recycling case).
Table 3: Negative normalized Casimir energy for the two Menger sponges.
Table 4: Negative normalized Casimir energy in the 2- to 6-branched ice crystals' case.

Distance | 2-branches | 3-branches | 4-branches | 5-branches | 6-branches
0.5 | 0.04112 | 0.05989 | 0.07848 | 0.07873 | 0.01128
0.75 | 0.01499 | 0.02184 | 0.02855 | 0.02873 | 0.005017
1.0 | 0.007403 | 0.01080 | 0.01412 | 0.01428 | 0.002965
1.25 | 0.004242 | 0.006198 | 0.008113 | 0.008242 | 0.001985
1.5 | 0.002672 | 0.003905 | 0.005117 | 0.005223 | 0.001427
1.75 | 0.001797 | 0.002624 | 0.003442 | 0.003530 | 0.001074
2.0 | 0.001268 | 0.001849 | 0.002428 | 0.002501 | 0.0008357
2.25 | 0.0009288 | 0.001353 | 0.001776 | 0.001839 | 0.0006664
2.5 | 0.0007007 | 0.001019 | 0.001338 | 0.001391 | 0.0005410
2.75 | 0.0005413 | 0.0007863 | 0.001033 | 0.001078 | 0.0004469
3.0 | 0.0004270 | 0.0006188 | 0.0008134 | 0.0008526 | 0.0003741
1 The mathematical theory and numerical experiments in the Maxwell case have been developed as well and will be reported in another paper.
3 The same figure also applies to the standard Arnoldi method.
4 The negative normalized Casimir energy is $-\zeta/(\hbar c)$, for $\zeta$ defined in (5). Note that in the labels of all the figures, "normalized Casimir energy" means the negative normalized Casimir energy.
[1] H. B. Casimir, On the attraction between two perfectly conducting plates, in: Proc. Kon. Ned. Akad. Wet., Vol. 51, 1948, p. 793.
[2] M. J. Sparnaay, Measurements of attractive forces between flat plates, Physica 24 (6-10) (1958) 751-764.
[3] S. K. Lamoreaux, Demonstration of the Casimir force in the 0.6 to 6 µm range, Physical Review Letters 78 (1) (1997) 5.
[4] T. Ederth, Template-stripped gold surfaces with 0.4-nm rms roughness suitable for force measurements: Application to the Casimir force in the 20-100-nm range, Physical Review A 62 (6) (2000) 062104.
[5] G. Bressi, G. Carugno, R. Onofrio, G. Ruoso, Measurement of the Casimir force between parallel metallic surfaces, Physical Review Letters 88 (4) (2002) 041804.
[6] D. Krause, R. Decca, D. López, E. Fischbach, Experimental investigation of the Casimir force beyond the proximity-force approximation, Physical Review Letters 98 (5) (2007) 050403.
[7] H. B. Chan, Y. Bao, J. Zou, R. Cirelli, F. Klemens, W. Mansfield, C. Pai, Measurement of the Casimir force between a gold sphere and a silicon surface with nanoscale trench arrays, Physical Review Letters 101 (3) (2008) 030401.
[8] M. Bordag, U. Mohideen, V. M. Mostepanenko, New developments in the Casimir effect, Physics Reports 353 (1-3) (2001) 1-205.
[9] M. Bordag, G. L. Klimchitskaya, U. Mohideen, V. M. Mostepanenko, Advances in the Casimir effect, Vol. 145, OUP Oxford, 2009.
[10] E. Elizalde, A. Romeo, Expressions for the zeta-function regularized Casimir energy, Journal of Mathematical Physics 30 (5) (1989) 1133-1139.
[11] E. Elizalde, A. Romeo, Heat-kernel approach to the zeta-function regularization of the Casimir energy for domains with curved boundaries, International Journal of Modern Physics A 5 (09) (1990) 1653-1669.
[12] K. Kirsten, Spectral functions in mathematics and physics, Chapman and Hall/CRC, 2001.
[13] I. E. Dzyaloshinskii, E. M. Lifshitz, L. P. Pitaevskii, The general theory of van der Waals forces, Advances in Physics 10 (38) (1961) 165-209.
[14] L. S. Brown, G. J. Maclay, Vacuum stress between conducting plates: an image solution, Physical Review 184 (5) (1969) 1272.
[15] D. Deutsch, P. Candelas, Boundary effects in quantum field theory, Physical Review D 20 (12) (1979) 3063.
[16] B. S. Kay, Casimir effect in quantum field theory, Physical Review D 20 (12) (1979) 3052.
[17] G. Scharf, W. Wreszinski, On the Casimir effect without cutoff, Foundations of Physics Letters 5 (5) (1992) 479-487.
[18] S. A. Fulling, et al., Vacuum energy as spectral geometry, SIGMA. Symmetry, Integrability and Geometry: Methods and Applications 3 (2007) 094.
[19] M. Renne, Microscopic theory of retarded van der Waals forces between macroscopic dielectric bodies, Physica 56 (1) (1971) 125-137.
[20] G. Bimonte, T. Emig, M. Kardar, M. Krüger, Nonequilibrium fluctuational quantum electrodynamics: Heat radiation, heat transfer, and force, Annual Review of Condensed Matter Physics 8 (2017) 119-143.
[21] T. Emig, N. Graham, R. Jaffe, M. Kardar, Casimir forces between arbitrary compact objects, Physical Review Letters 99 (17) (2007) 170403.
[22] T. Emig, R. Jaffe, M. Kardar, A. Scardicchio, Casimir interaction between a plate and a cylinder, Physical Review Letters 96 (8) (2006) 080403.
[23] T. Emig, R. Jaffe, Casimir forces between arbitrary compact objects, Journal of Physics A: Mathematical and Theoretical 41 (16) (2008) 164001.
[24] T. Emig, N. Graham, R. Jaffe, M. Kardar, Casimir forces between compact objects: The scalar case, Physical Review D 77 (2) (2008) 025005.
[25] O. Kenneth, I. Klich, Opposites attract: A theorem about the Casimir force, Physical Review Letters 97 (16) (2006) 160401.
[26] O. Kenneth, I. Klich, Casimir forces in a T-operator approach, Physical Review B 78 (1) (2008) 014103.
[27] K. A. Milton, J. Wagner, Multiple scattering methods in Casimir calculations, Journal of Physics A: Mathematical and Theoretical 41 (15) (2008) 155402.
[28] S. J. Rahi, T. Emig, N. Graham, R. L. Jaffe, M. Kardar, Scattering theory approach to electrodynamic Casimir forces, Physical Review D 80 (8) (2009) 085021.
[29] F. Hanisch, A. Strohmaier, A. Waters, A relative trace formula for obstacle scattering, Duke Math. J. 171 (11) (2022) 2233-2274. doi:10.1215/00127094-2022-0053.
[30] A. Strohmaier, The classical and quantum photon field for non-compact manifolds with boundary and in possibly inhomogeneous media, Communications in Mathematical Physics 387 (3) (2021) 1441-1489.
[31] Y.-L. Fang, A. Strohmaier, A mathematical analysis of Casimir interactions I: The scalar field, in: Annales Henri Poincaré, Springer, 2021, pp. 1-51.
[32] Y.-L. Fang, A. Strohmaier, Trace singularities in obstacle scattering and the Poisson relation for the relative trace, Ann. Math. Qué. 46 (1) (2022) 55-75. doi:10.1007/s40316-021-00188-0.
[33] M. W. Scroggs, T. Betcke, E. Burman, W. Śmigaj, E. van't Wout, Software frameworks for integral equations in electromagnetic scattering based on Calderón identities, Computers & Mathematics with Applications 74 (11) (2017) 2897-2914.
[34] A. Pietsch, History of Banach spaces and linear operators, Birkhäuser Boston, Inc., Boston, MA, 2007.
[35] M. H. Reid, A. W. Rodriguez, J. White, S. G. Johnson, Efficient computation of Casimir interactions between arbitrary 3D objects, Physical Review Letters 103 (4) (2009) 040401.
[36] Y.-L. Fang, A. Strohmaier, Trace singularities in obstacle scattering and the Poisson relation for the relative trace, Annales mathématiques du Québec 46 (1) (2022) 55-75.
[37] R. Mathias, Quadratic residual bounds for the Hermitian eigenvalue problem, SIAM Journal on Matrix Analysis and Applications 19 (2) (1998) 541-550.
[38] W. E. Arnoldi, The principle of minimized iterations in the solution of the matrix eigenvalue problem, Quarterly of Applied Mathematics 9 (1) (1951) 17-29.
[39] Y. Saad, Numerical methods for large eigenvalue problems, Vol. 66 of Classics in Applied Mathematics, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2011. doi:10.1137/1.9781611970739.ch1.
[40] G. H. Golub, Q. Ye, An inverse free preconditioned Krylov subspace method for symmetric generalized eigenvalue problems, SIAM Journal on Scientific Computing 24 (1) (2002) 312-334.
[41] J. H. Money, Q. Ye, Algorithm 845: EIGIFP: a MATLAB program for solving large symmetric generalized eigenvalue problems, ACM Transactions on Mathematical Software (TOMS) 31 (2) (2005) 270-279.
[42] Y. Saad, Numerical methods for large eigenvalue problems: revised edition, SIAM, 2011.
| [] |
[
"Predicting the Energy Output of Wind Farms Based on Weather Data: Important Variables and their Correlation",
"Predicting the Energy Output of Wind Farms Based on Weather Data: Important Variables and their Correlation"
] | [
"Katya Vladislavleva \nEvolved Analytics Europe BVBA\nVeldstraat, 372110WijnegemBelgium\n",
"Tobias Friedrich \nMax-Planck-Institut für Informatik\nCampus E1.466123SaarbrückenGermany\n",
"Frank Neumann \nSchool of Computer Science\nUniversity of Adelaide\n5005AdelaideSAAustralia\n",
"Markus Wagner \nSchool of Computer Science\nUniversity of Adelaide\n5005AdelaideSAAustralia\n"
] | [
"Evolved Analytics Europe BVBA\nVeldstraat, 372110WijnegemBelgium",
"Max-Planck-Institut für Informatik\nCampus E1.466123SaarbrückenGermany",
"School of Computer Science\nUniversity of Adelaide\n5005AdelaideSAAustralia",
"School of Computer Science\nUniversity of Adelaide\n5005AdelaideSAAustralia"
] | [] | Wind energy plays an increasing role in the supply of energy world-wide. The energy output of a wind farm is highly dependent on the weather condition present at the wind farm. If the output can be predicted more accurately, energy suppliers can coordinate the collaborative production of different energy sources more efficiently to avoid costly overproductions.With this paper, we take a computer science perspective on energy prediction based on weather data and analyze the important parameters as well as their correlation on the energy output. To deal with the interaction of the different parameters we use symbolic regression based on the genetic programming tool DataModeler.Our studies are carried out on publicly available weather and energy data for a wind farm in Australia. We reveal the correlation of the different variables for the energy output. The model obtained for energy prediction gives a very reliable prediction of the energy output for newly given weather data. | 10.1016/j.renene.2012.06.036 | [
"https://arxiv.org/pdf/1109.1922v1.pdf"
] | 2,223,547 | 1109.1922 | 7307cfd99b4e1d2e3d57d38055348347dccb2f1e |
Predicting the Energy Output of Wind Farms Based on Weather Data: Important Variables and their Correlation
Katya Vladislavleva
Evolved Analytics Europe BVBA
Veldstraat, 372110WijnegemBelgium
Tobias Friedrich
Max-Planck-Institut für Informatik
Campus E1.466123SaarbrückenGermany
Frank Neumann
School of Computer Science
University of Adelaide
5005AdelaideSAAustralia
Markus Wagner
School of Computer Science
University of Adelaide
5005AdelaideSAAustralia
Predicting the Energy Output of Wind Farms Based on Weather Data: Important Variables and their Correlation
Wind energy plays an increasing role in the supply of energy world-wide. The energy output of a wind farm is highly dependent on the weather condition present at the wind farm. If the output can be predicted more accurately, energy suppliers can coordinate the collaborative production of different energy sources more efficiently to avoid costly overproductions.With this paper, we take a computer science perspective on energy prediction based on weather data and analyze the important parameters as well as their correlation on the energy output. To deal with the interaction of the different parameters we use symbolic regression based on the genetic programming tool DataModeler.Our studies are carried out on publicly available weather and energy data for a wind farm in Australia. We reveal the correlation of the different variables for the energy output. The model obtained for energy prediction gives a very reliable prediction of the energy output for newly given weather data.
Introduction
Renewable energy such as wind and solar energy plays an increasing role in the supply of energy world-wide. This trend will continue because the global energy demand is increasing and the use of nuclear power and traditional sources of energy such as coal and oil is either considered unsafe or leads to a large amount of CO2 emissions.
Wind energy is a key-player in the field of renewable energy. The capacity of wind energy production was increased drastically during the last years. In Europe for example, the capacity of wind energy production has doubled since 2005. However, the production of wind energy is hard to predict as it relies on the rather unstable weather conditions present at the wind farm. In particular, the wind speed is crucial for energy production based on wind and the wind speed may vary drastically during different periods of time. Energy suppliers are interested in accurate predictions, as they can avoid overproductions by coordinating the collaborative production of traditional power plants and weather dependent energy sources.
Our aim is to map weather data to energy production. We want to show that even data that is publicly available for weather stations close to wind farms can be used to give a good prediction of the energy output. Furthermore, we examine the impact of different weather conditions on the energy output of wind farms. We are, in particular, interested in the correlation of the different components that characterize the weather conditions, such as wind speed, pressure, and temperature. A good overview of the different methods that were recently applied in forecasting of wind power generation can be found in [2]. Statistical approaches use historical data to predict the wind speed on an hourly basis or to predict the energy output directly. On the other hand, short-term prediction is often done based on meteorological data, and learning approaches are applied. Kusiak, Zheng, and Song [8] have shown how wind speed data may be used to predict the power output of a wind farm based on time series prediction modeling. Neural networks are a very popular learning approach for wind power forecasting based on given time series. They provide an implicit model of the function that maps the given weather data to an energy output.
Jursa and Rohrig [3] have used particle swarm optimization and differential evolution to minimize the prediction error of neural networks for short-term wind power forecasting. Kramer and Gieseke [7] used support vector regression for short-term energy forecasts, and kernel methods and neural networks to analyze wind energy time series [6]. These studies are all based on wind data and do not take other weather conditions into account. Furthermore, neural networks have the disadvantage that they give an implicit model of the function predicting the output, and these models are rarely accessible to a human expert. Usually, one is also interested in the function itself and in the impact of the different variables that determine the output. We want to study the impact of different variables on the energy output of the wind farm. Surely, the wind speed available at the wind farm is a crucial parameter. Other parameters that influence the energy output are, for example, air pressure, temperature, and humidity. Our goal is to study the impact and correlation of these parameters with respect to the energy output.
Genetic programming (GP) (see [9] for a detailed presentation) is a type of evolutionary algorithm that can be used to search for functions that map input data to output data. It has been widely used in the field of symbolic regression, and the goal of this paper is to show how it can be used for the important real-world problem of predicting energy outputs of wind farms from weather data. The advantage of this method is that it comes up with an explicit expression mapping weather data to energy output. This expression can be further analyzed to study the impact of the different variables that determine the output. To compute such an expression, we use the tool DataModeler [1], which is the state-of-the-art tool for doing symbolic regression based on genetic programming. We will also use DataModeler to carry out a sensitivity analysis which studies the correlation between the different variables and their impact on the accuracy of the prediction.
We proceed as follows. In Section 2, we give a basic introduction into the field of genetic programming and symbolic regression and describe DataModeler. Section 3 describes our approach of predicting energy output based on weather data, and in Section 4 we report on our experimental results. Finally, we finish with some concluding remarks and topics for future research.
Genetic Programming and DataModeler
Genetic programming [5] is a type of evolutionary algorithm that is used in the field of machine learning. Motivated by the evolution process observed in nature, computer programs are evolved to solve a given task. Such programs are usually encoded as syntax expression trees. Starting with a given set of trees called the population, new trees called the offspring population are created by applying variation operators such as crossover and mutation. Finally, a new parent population is selected out of the previous parents and the offspring, based on how well these trees perform for the given task.
Genetic programming has its main success stories in the field of symbolic regression. Given a set of input-output vectors, the task is to find a function that maps the input to the output as well as possible, while avoiding overfitting. The resulting function is later often used to predict the output for a newly given input. In this case, syntax trees represent functions, and the functions are changed by crossover and mutation to produce new functions. The quality of a syntax tree is determined by how well it maps the given set of inputs to their corresponding outputs.
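For illustration, this evolutionary loop can be sketched in a few lines of Python. This is a minimal sketch of the generic scheme described above, not the implementation of any particular tool; the callables fitness, crossover, and mutate are placeholders for the operators just discussed, and lower fitness is assumed to be better (an error measure).

```python
import random

def evolve(population, fitness, crossover, mutate, generations=100):
    """Minimal GP loop over a population of syntax trees."""
    for _ in range(generations):
        offspring = []
        while len(offspring) < len(population):
            a, b = random.sample(population, 2)
            # breed a child by crossover most of the time, otherwise mutate
            child = crossover(a, b) if random.random() < 0.9 else mutate(a)
            offspring.append(child)
        # next parent population: the best trees among parents and offspring
        population = sorted(population + offspring, key=fitness)[:len(population)]
    return min(population, key=fitness)
```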
The task in symbolic regression can be stated as follows. Given a set of data vectors $(x_{1i}, x_{2i}, \ldots, x_{ki}, y_i) \in \mathbb{R}^{k+1}$, $1 \leq i \leq n$, find a function $f \colon \mathbb{R}^k \to \mathbb{R}$ such that the approximation error, e.g. the root mean square error

$$\sqrt{\frac{1}{n} \sum_{i=1}^{n} \big(y_i - f(x_i)\big)^2} \qquad \text{with } x_i = (x_{1i}, x_{2i}, \ldots, x_{ki}),$$

is minimized.
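In Python, for instance, this error measure reads:

```python
import numpy as np

def rmse(y, y_pred):
    """Root mean square error between observations y_i and predictions f(x_i)."""
    y, y_pred = np.asarray(y, dtype=float), np.asarray(y_pred, dtype=float)
    return np.sqrt(np.mean((y - y_pred) ** 2))
```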
We want to use a tool called DataModeler for our investigations. It is based on genetic programming and designed for solving symbolic regression problems.
DataModeler
Evolved Analytics' DataModeler is a complete data analysis and feature selection environment running under Wolfram Mathematica 8. It offers a platform for data exploration, data-driven model building, model analysis and management, response exploration and variable sensitivity analysis, model-based outlier detection, and data balancing and weighting.
Data-driven modeling in DataModeler happens by symbolic regression via genetic programming. The SymbolicRegression function offers several evolutionary strategies which differ in the applied selection schemes, elitism, reproduction strategies, and fitness evaluation strategies. An advanced user can take full control over symbolic regression and introduce new function primitives, new fitness functions, selection and propagation schemes, etc., by specifying appropriate options in the function call. We, however, used the default settings and the default evolution strategy, called ClassicGP in DataModeler. 1 In the symbolic regression performed here, a population of individuals (syntax trees) evolves over a variable number of generations at the Pareto front in the three-dimensional objective space of model complexity, model error, and model age [4,10].
Model error in the default setting ranges between 0 and 1, with the best value being 0. It is computed as 1 − R², where R is a scaled correlation coefficient: the predicted output is scaled to have the same mean and standard deviation as the observed output.
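A small Python sketch of this error measure follows, assuming NumPy arrays. Note that rescaling the predictions to the observed mean and standard deviation does not change the Pearson correlation itself, so the scaling matters when reporting predicted outputs rather than when computing R:

```python
import numpy as np

def model_error(predicted, observed):
    """1 - R^2, where R is the correlation of predicted and observed output."""
    r = np.corrcoef(predicted, observed)[0, 1]
    return 1.0 - r ** 2

def scale_to_observed(predicted, observed):
    """Rescale predictions to the mean and standard deviation of the observations."""
    z = (predicted - predicted.mean()) / predicted.std()
    return observed.mean() + observed.std() * z
```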
The model complexity is the expressional complexity of models, and it is computed as the total sum of nodes in all subtrees of the given GP tree. The model age is computed as the number of generations that the model has survived in the population. The age of a child individual is computed by incrementing the age of the parent contributing to the root node of the child. We use the age as a secondary optimization objective, as it is used only internally during evolution. At the end of the symbolic regression runs, results are displayed in the two-objective space of user-selected objectives; in our case these objectives are model expressional complexity and 1 − R².
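To make the complexity measure concrete, the following Python sketch computes it for a parse tree encoded as nested tuples; the tuple encoding is our own choice for illustration. For the model of Table 1 it yields 11, as reported there:

```python
def subtree_sizes(tree, acc):
    """Return the size of `tree` and append each subtree's size to `acc`."""
    if isinstance(tree, tuple):  # internal node: (operator, child, child, ...)
        size = 1 + sum(subtree_sizes(child, acc) for child in tree[1:])
    else:                        # leaf: a variable name or a constant
        size = 1
    acc.append(size)
    return size

def complexity(tree):
    """Expressional complexity: total sum of nodes over all subtrees."""
    acc = []
    subtree_sizes(tree, acc)
    return sum(acc)

# Example from Table 1: -25.2334 + 3.21666 * windGust2
model = ("Plus", -25.2334, ("Times", 3.21666, "windGust2"))
assert complexity(model) == 11
```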
The default population size is 300. The default elite set consists of the 50 individuals from the 'old' population closest to the 3-dimensional Pareto front in the objective space. These individuals are copied to the 'new' population of 300 individuals, after which the size of the new population is decreased back down to the necessary 300. This is done by selecting models from Pareto layers until the specified amount is found.
The selection of individuals for propagation happens by means of Pareto tournaments. By default, 30 models are randomly sampled from the current population, and Pareto-optimal individuals from this sample are determined as winners to undergo variation until the necessary number of new individuals is created.

Table 1: Internal regression model representation in DataModeler for the model with the expression −25.2334 + 3.21666 · windGust2 (see also Figure 1):
GPModel[{11, 0.300409}, Σ[−25.2334, Π[3.21666, windGust2]],
 {ModelAge → 1,
  ModelingObjective → {ModelComplexity[#1], 1 − AbsoluteCorrelation[#2, #3]^2}&,
  ModelingObjectiveNames → {Complexity, 1-R^2},
  DataVariables → {year, month, day, hour, minute, temperature, apparentTemperature, dewPoint, relativeHumidity, wetBulbDepression, windSpeed, windGust, windSpeed2, windGust2, pressureQNH, rainSince9am},
  DataVariableRange → {{2010, 2011}, {1, 12}, {1, 31}, {0, 23}, {0, 30}, {4.2, 23.4}, {−14.2, 24.}, {−3.2, 19.1}, {40, 100}, {0., 6.6}, {0, 106}, {0, 130}, {0, 57}, {0, 70}, {987.8, 1037.5}, {0., 50.4}},
  RangeExpansion → None,
  ModelingVariables → {year, month, day, hour, minute, temperature, apparentTemperature, dewPoint, relativeHumidity, wetBulbDepression, windSpeed, windGust, windSpeed2, windGust2, pressureQNH, rainSince9am},
  FunctionPatterns → {Σ[ , ], Π[ , ], D[ , ], S[ , ], P2[ ], SQ[ ], IV[ ], M[ ]},
  StoreModelSet → True, ProjectName → fullDataAllVars,
  TemplateTopLevel → {Σ[ , ]}, TimeConstraint → 2000, IndependentEvolutions → 10}]
Models are coded as parse trees using the GPModel structure, which contains placeholders for information about model quality, the data variables and ranges used to develop the model, and some settings of symbolic regression. For example, the internal GPModel representation of the first Pareto front model from the set of models from Figure 3, with the expression −25.2334 + 3.21666 · windGust2, is presented in Table 1. Note that the first vector inside the GPModel structure represents model quality: model complexity is 11, model error is 0.300409. The parse tree of the same model is plotted in Figure 1.
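The Pareto tournament described above can be sketched as follows; this is a minimal illustration in which each model is scored by a callable objectives returning a tuple of values to be minimized, for example (complexity, error, age):

```python
import random

def dominates(a, b):
    """True if objective vector a is no worse than b everywhere and better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_tournament(population, objectives, size=30):
    """Sample `size` models and return the non-dominated ones as tournament winners."""
    sample = random.sample(population, min(size, len(population)))
    return [m for m in sample
            if not any(dominates(objectives(n), objectives(m))
                       for n in sample if n is not m)]
```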
When the specified execution time threshold of a run (in seconds) is reached, the independent evolution run terminates, and the vector of model objectives in the final population is re-evaluated to contain only model complexity and model error. The set of models can further be analyzed for variable drivers, most frequent variable combinations, behavior of the response, consistency in prediction, accuracy vs. complexity trade-offs, etc.
When the goal is the prediction of the output in an unobserved region of the data space, it is essential to use a 'model ensemble' rather than individual models for this purpose. Because of the built-in niching, complexity control, and independent evolutions used in DataModeler's symbolic regression, the final models are developed to be diverse (with respect to structural complexity, model forms, and residuals), but they all are global models, built to predict the training response in the entire training region. Due to their diversity and high quality, rich sets of final models allow us to select multiple individuals into model ensembles. The prediction of a set of individuals is then computed as a median or a median average of the individual predictions of ensemble members, while the disagreement in the predictions (the standard deviation in this paper) is used to specify the confidence interval of the prediction. When models are extrapolated, the confidence of predictions naturally deteriorates and confidence intervals become wider. This allows, first, a more robust prediction of the response (since over-fitting is further mitigated by choosing models of different accuracy and complexity into an ensemble), and second, it makes the predictions more trustworthy, since predictions are also supplied with confidence intervals.
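In code, the ensemble prediction and its confidence band amount to the following sketch, where the models are assumed to be callables mapping an input matrix to a vector of predictions:

```python
import numpy as np

def ensemble_predict(models, X):
    """Median of member predictions, plus member disagreement as confidence."""
    preds = np.array([m(X) for m in models])  # shape: (n_models, n_samples)
    return np.median(preds, axis=0), preds.std(axis=0)
```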
To select ensembles, we used a built-in function of DataModeler that focuses on the most typical individuals of the model set as well as on individuals that have the least correlated residuals. Because of space constraints, we refer the reader to [1] for further information.
Our Approach
The main goal of this paper is to use public data to check the feasibility of wind energy prediction by using an industrial-strength off-the-shelf non-linear modeling and feature selection tool. In our study, we investigate and predict the energy production of the wind farm Woolnorth in Tasmania, Australia, based on publicly available data. The energy production data is made publicly available by the Australian Energy Market Operator (AEMO) in real time to assist in maintaining the security of the power system. 2 For the creation of our models and the prediction, we associate the wind farm with the Australian weather station ID091245, located at Cape Grim, Tasmania. Its data is available for free for a running observation time window of 72 hours. 3
Data
We collected both the weather and the energy production data for the time window September 2010 till July 2011. The output of the farm is available at a rate of one measurement every five minutes, and the weather data at a rate of one measurement every 30 minutes.
The wind farm's production capacity is split across two sites, which complicated the generation of models. The site "Studland Bay" has a maximum output of 75 MW; "Bluff Point" has a maximum output of 65 MW and is located 50 km south of the first site. The weather station is located at the first site. For wind coming from the west (which is the prevailing wind direction), the difference in location is negligible. But if wind comes from the north, there will be an energy and wind increase right away, plus another energy increase 1-2 hours later (the time delay depends on the actual wind speed). Similarly, if wind comes from the south, there will be an increase in the energy production (although no wind is indicated by the weather station) and then, 1-2 hours later, an energy increase accompanied by a measured wind speed increase.
Data pre-processing
To perform data modeling and variable selection on the collected data, we had to perform data pre-processing to create a table of weather and energy measurements taken at the same time intervals. The energy output of the farm is measured at a rate of 5 minutes, including the time stamps of 0 and 30 minutes of every hour when the weather is measured. Our approach was to correlate weather measurements with the average energy output of the farm reported in the [0, 25] and [30, 55] minute intervals of every hour. Such averaging makes modeling more difficult, but uses all available energy information.
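This alignment step could be expressed, for instance, with pandas; the sketch below assumes both inputs are indexed by timestamp, and the names weather, energy, and avgEnergy are ours rather than part of the original processing:

```python
import pandas as pd

def align(weather: pd.DataFrame, energy: pd.Series) -> pd.DataFrame:
    """Average the 5-minute energy readings in the [0, 25] and [30, 55] minute
    intervals of each hour and join them to the half-hourly weather rows."""
    energy_30min = energy.resample("30min", label="left", closed="left").mean()
    return weather.join(energy_30min.rename("avgEnergy"), how="inner")
```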
The different time scales used in the weather and energy data were automatically converted to one scale using the DateList function of Wolfram Mathematica 8, which is the scientific computing environment in which DataModeler operates.
Because of many missing, erroneous, and duplicate time stamps in the weather data, we obtained 11022 common measurements of weather and averaged energy produced by the farm from October 2010 till June 2011. These samples were used as training data to build regression models. From the 18 variables of the weather data at Cape Grim, we excluded two variables prior to modeling: the Pressure MSL variable had more than 75% missing values, and the Wind Direction variable was non-numeric.
As test data we used 1408 common half-hour measurements of weather and averaged energy in July 2011.
Data Analysis and Model Development
As soon as the weather and energy data from different sources were put into an appropriate input-output form, we were able to apply a standard data-driven modeling approach to them.
A good approach employs iterations between three stages: Data Collection/Reduction, Model Development, and Model Analysis and Variable Selection. In hard problems many iterations are required to identify a subspace of minimal dimensionality where models of appropriate accuracy and complexity trade-offs can be built.
Our problem is challenging for several reasons. First, it is hard to predict the total wind energy output of the farm in the half hour following the moment when the weather is measured, especially when the weather station is several kilometers away from the farm. Second, the public data does not offer any information about the wind farm except for the wind energy output. Third, our training data covers the range of weather conditions observed only between October 2010 and June 2011, while the test data contains data from July, implying that our models must have good generalization capabilities as they will be extrapolated to unseen regions of the data space. And lastly, our challenging goal is to use all 16 publicly available numeric weather characteristics for energy output prediction, while many of them are heavily correlated (see Figure 2).
Multi-collinearity in hard high-dimensional problems is a major hurdle for most regression methods. Symbolic regression via GP is one of the very few methods which does not suffer from multicollinearity and is capable of naturally selecting variables from the correlated subset for final regression models.
Because ensemble-based symbolic regression and a robust variable selection methodology are implemented in DataModeler, we settled for standard model development and variable selection procedures using default settings.
The modeling goals of this study are:
1. to identify the minimal subset of driving weather features that are significantly related to the wind energy output of the wind farm,
2. to let genetic programming express these relationships in the form of explicit input-output regression models, and
3. to select model ensembles for improved generalization capabilities of energy predictions and to analyze the quality of the produced model ensembles using an unseen test set.
Our approach is to achieve these goals using two iterations of symbolic regression modeling. In the first, exploratory stage, we run symbolic regression on the training data to identify driving weather characteristics significantly related to the energy output. In the second, modeling stage, we reduce the training data to the set of selected inputs and run symbolic regression to obtain models and model ensembles for predicting the energy output.
Experimental Results
Experimental setup
The setup of symbolic regression used the default settings of DataModeler except for the number of independent runs, the execution time of each run, and the template operator at the root of the GP trees. We executed 10 independent evolutionary runs of 2000 seconds in both stages. The root node of all GP trees was fixed to a Plus. The primitives for regression models consisted of an extended set of arithmetic operators: {Plus, Minus, Subtract, Divide, Times, Sqrt, Square, Inverse}. The maximum arity of the Plus and Times operators is limited to 5.
Model trees have terminals labelled as variables or constants (random integers or reals), with a maximum allowed model complexity of 1000. The population size is 300; the elite set size is 50. Population individuals are selected for reproduction using Pareto tournaments with a tournament size of 30. Propagation operators are crossover (at rate 0.9), subtree mutation (rate 0.05), and depth-preserving subtree mutation (rate 0.05). At the end of each independent evolution, the population and archive individuals are merged together to produce a final set of models. At each stage of the experiments, the results of all independent evolutions are merged together to produce a super set of solutions (see an example in Figure 3).
For model analysis we applied additional model selection strategies to these super sets of models. We describe the additional model selection strategies, discovered variable drivers, final models, and the quality of predictions in the next section.
Feature selection
The initial set of experiments targets feature selection, using all 16 input variables and all training data from October 2010 till June 2011. In the allowed 2000 seconds, each symbolic regression run completed at most 217 generations.
The 10 independent evolutions generated a super set of 4450 models. We reduced this set to robust models only, by applying interval arithmetic to remove models with a potential for pathologies and unbounded response in the training data range. This generated 2559 unique robust models, and from those we selected the final set M1. This set contained 587 individuals with a model error not exceeding 0.30 and a model complexity not exceeding 350, which lie closest to the Pareto front in the Model Complexity versus Model Error objective space. The set M1 is depicted in Figure 4, with Pareto front individuals indicated in red. The limit of 350 on Model Complexity preserved the best-of-run model (the right-most red dot), but excluded dominated individuals with model complexities up to 600. We used the set M1 to perform variable presence and variable contribution analysis to identify the variable drivers significantly related to energy output. The presence of input variables in models from M1 is visualized in Figure 5 and Figure 6. We can observe from Figure 6 that the six most frequently used variables are (in order of decreasing importance) windGust2, windGust, dewPoint, month, relativeHumidity, and pressureQNH. While we observe that these variables are the most frequently used in a good set of candidate solutions in M1, it is somewhat hard to define a threshold on these presence-based variable importances to select variable drivers. For example, it is unclear whether we should select the top three, top four, or top five inputs.
For a crisper feature selection analysis, we performed a variable contribution analysis using DataModeler to see how much each variable contributes to the relative error of the models in which it is present. The median variable contributions computed using the model set M1 are depicted in Figure 7. The plot clearly demonstrates that the contribution of all variables besides the top three mentioned above (identified using the variable presence analysis) is negligible.
Results of the first stage of experiments suggest that the weather inputs windGust2, windGust, and dewPoint 1) are the most frequently present in M1, 2) have the highest contribution to the relative errors of models in M1, and are sufficient to achieve the accuracy of M1. In other words, these inputs are sufficient to predict the energy output with an accuracy between 70% and 80% R² on the training data.
The high correlation between the windGust and windGust2 variables motivated us to select only one of them for the second round of modeling, together with dewPoint, to generate prediction models. Symbolic regression does not guarantee that only one particular input variable out of a set of correlated inputs will be present in final models. It might be that only one of the two is sufficient to predict the response with the same accuracy, or that both are necessary for success. Our choice was to select windGust2 (as the most frequent variable in the models) together with dewPoint for the second stage of experiments, and to see whether the predictive accuracy of the new models in the new two-dimensional design space would not get worse, compared to the accuracy of the M1 models developed in the original space of 16 dimensions.
Energy output prediction
The second stage of experiments used only the two input variables windGust2 and dewPoint, with all other symbolic regression settings identical to the first stage experiments. As a result, a new set of one- and two-variable models was generated. We again applied a selection procedure to the super set of models by selecting only the 25% of robust models closest to the Pareto front, with a training error of at most 1 − R² = 0.30 and a model complexity of at most 250. The resulting set of 587 simplest models, denoted as M2, is depicted in Figures 8 and 9. Figure 9 is obtained using the VariableContributionTable function of DataModeler, and it exposes the trade-offs between input spaces and prediction accuracy for energy prediction.
We emphasize here that it is the decision and the responsibility of the domain expert to pick an appropriate input space for the energy prediction models. This decision will be guided by the costs and risks associated with different prediction accuracies, and by the time needed to perform measurements of the associated design spaces. The responsibility of a good model development tool is to empower experts with robust information about the trade-offs.
At the last stage of model analysis, we used the CreateModelEnsemble function of DataModeler to select an ensemble of regression models from M2, allowing only models with model complexities not exceeding 150. As can be seen in Figure 8, an increase in model complexity does not provide a sufficient decrease in the training error. Since our goal is to predict energy production in a completely new interval of weather conditions (here: July 2011), we settle for the simplest models to avoid potential over-fitting.
The selected model ensemble consists of six models presented in Table 2. The values of model complexity, training error, and test error 4 for the six models in the ensemble are, respectively, (24, 0.299, 0.426), (42, 0.247, 0.472), (63, 0.209, 0.146), (78, 0.207, 0.149), (121, 0.205, 0.145), and (124, 0.211, 0.145).

Figure 9: Visualization of models in M2, niched per driving variable combination. Note that windGust2 alone is insufficient to predict energy output with the accuracy that is achieved when windGust2 and dewPoint are used. The model error is computed using training data.

The created model ensemble can now be evaluated on the test data. As mentioned in Section 2.1, the ensemble prediction is computed as a median of the predictions of individual ensemble members, while the ensemble confidence is computed as a standard deviation of the individual predictions. We report that the normalized root mean squared error of the ensemble prediction on the test data is RMSE Test = 12.6%. Figure 10 presents the predicted versus observed energy output in July 2011, with whiskers corresponding to ensemble confidence. Note that the confidence intervals for prediction are very high for many test samples. This is normal and should be expected when prediction is evaluated well beyond the training data range. Figure 11 presents the ensemble prediction versus the actual energy production over time in July 2011.
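The exact normalization used for the reported test RMSE is not spelled out above; a common convention, which we assume here for illustration, divides the RMSE by the observed range:

```python
import numpy as np

def nrmse(y, y_pred):
    """Range-normalized RMSE; normalization by the observed range is an assumption."""
    y, y_pred = np.asarray(y, dtype=float), np.asarray(y_pred, dtype=float)
    return np.sqrt(np.mean((y - y_pred) ** 2)) / (y.max() - y.min())
```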
Conclusions
In this study we showed that wind energy output can be predicted from publicly available weather data with an accuracy of, at best, 80% R² on the training range and, at best, 85.5% on the unseen test data. We identified the smallest space of input variables (windGust2 and dewPoint) in which the reported accuracy can be achieved, and provided clear trade-offs of prediction accuracy for decreasing the input space to the windGust2 variable alone. We demonstrated that an off-the-shelf data modeling and variable selection tool can be used, with mostly default settings, to run the symbolic regression experiments as well as the variable importance and variable contribution analyses, ensemble selection, and validation.
We look forward to discussing the results with domain experts and to checking the applicability of the produced models in real life for short-term energy production prediction. We are glad that the presented framework is so simple that it can be used by virtually everybody for predicting wind energy production on a smaller scale: for individual wind mills on private farms or urban buildings, or for small wind farms. For future work, we are planning to further study the possibilities for longer-term wind energy forecasting.
Figure 1: Model tree plot of the individual from Table 1. Model complexity is the sum of nodes in all subtrees of the given tree (11). Model error computed as 1 − R² = 0.30.
Figure 2: Data variables are heavily correlated (blue: positively, red: negatively).
Figure 3: A super set of models generated in the first stage of experiments with 10 independent evolutions using all inputs. Red dots are Pareto front models, which are non-dominated trade-offs in the space of model complexity and model error.
Figure 4: Selected set M1 of 'best' models in all variables and two modeling objectives.
Figure 5: Presence of input variables in the selected set M1.
Figure 6: Presence of input variables in the selected set of models.
Figure 7: Individual contributions of input variables in the selected set of models to the relative training error.
Figure 8: Selected set M2 of 'best' models in up to two-dimensional input space and two modeling objectives.
Figure 10: Ensemble prediction versus observed energy output in July (test data) for the final model ensemble. Whiskers correspond to ensemble disagreement, measured as a standard deviation between predictions of individual ensemble members for any given input sample.
Figure 11: Ensemble prediction versus actual energy output over time in July (test data).
Variable Combination Table (cf. Figure 9):
Combination 1: 83.3% of models (489 models); variables used: dewPoint, windGust2.
Combination 2: 16.7% of models (98 models); variables used: windGust2.
(Each combination is accompanied by a ParetoFrontPlot over the Complexity range 50-250 and the 1 − R² range 0.20-0.30.)
Table 2: Model Ensemble (six models) selected from M2. Constants are rounded to one digit after the comma. All six models are expressions in windGust2 and dewPoint, built from sums, products, squares, and square roots; their (complexity, training error, test error) values are listed in the text above.
1 All models reported in this paper were generated using two calls of SymbolicRegression with only the following arguments: input matrix, response vector, execution time, number of independent evolutions, an option to archive models with a certain prefix-name, and a template specification.
2 Australian Landscape Guardians: AEMO Non-Scheduled Generation Data: www.landscapeguardians.org.au/data/aemo/ (last visited August 31st, 2011)
3 Australian Government, Bureau of Meteorology: weather observations for Cape Grim: www.bom.gov.au/products/IDT60801/IDT60801.94954.shtml (last visited August 31st, 2011)
4 Test error is, of course, evaluated post facto, after the models are selected into the model ensemble.
References

[1] Evolved Analytics LLC. DataModeler 8.0. Evolved Analytics LLC, 2010.
[2] A. M. Foley, P. G. Leahy, A. Marvuglia, and E. J. McKeogh. Current methods and advances in forecasting of wind power generation. Renewable Energy, 37:1-8, 2012.
[3] R. Jursa and K. Rohrig. Short-term wind power forecasting using evolutionary algorithms for the automated specification of artificial intelligence models. International Journal of Forecasting, 24:694-709, 2008.
[4] M. Kotanchek, G. Smits, and E. Vladislavleva. Pursuing the Pareto paradigm: tournaments, algorithm variations & ordinal optimization. In Genetic Programming Theory and Practice IV, volume 5 of Genetic and Evolutionary Computation, chapter 12, pages 167-186. Springer, 2006.
[5] J. R. Koza. Genetic Programming II: Automatic Discovery of Reusable Programs. MIT Press, Cambridge, Massachusetts, 1994.
[6] O. Kramer and F. Gieseke. Analysis of wind energy time series with kernel methods and neural networks. In Seventh International Conference on Natural Computation, 2011. To appear.
[7] O. Kramer and F. Gieseke. Short-term wind energy forecasting using support vector regression. In International Conference on Soft Computing Models in Industrial and Environmental Applications, pages 271-280. Springer, 2011.
[8] A. Kusiak, H. Zheng, and Z. Song. Short-term prediction of wind farm power: A data mining approach. IEEE Transactions on Energy Conversion, 24(1):125-136, 2009.
[9] R. Poli, W. B. Langdon, and N. F. McPhee. A Field Guide to Genetic Programming. lulu.com, 2008.
[10] M. Schmidt and H. Lipson. Age-fitness Pareto optimization. In Genetic Programming Theory and Practice VIII, Genetic and Evolutionary Computation, chapter 8, pages 129-146. Springer, 2010.
Population Fluctuation Promotes Cooperation in Networks

Steve Miller ([email protected]) and Joshua Knowles
University of Manchester, School of Computer Science - Machine Learning & Optimisation Group, UK. Correspondence and requests for materials should be addressed to S.M.

Received: 08 January 2015; accepted: 11 May 2015; published: 10 June 2015. DOI: 10.1038/srep11054
We consider the problem of explaining the emergence and evolution of cooperation in dynamic network-structured populations. Building on seminal work by Poncela et al., which shows how cooperation (in one-shot prisoner's dilemma) is supported in growing populations by an evolutionary preferential attachment (EPA) model, we investigate the effect of fluctuations in the population size. We find that a fluctuating model -based on repeated population growth and truncation -is more robust than Poncela et al.'s in that cooperation flourishes for a wider variety of initial conditions. In terms of both the temptation to defect, and the types of strategies present in the founder network, the fluctuating population is found to lead more securely to cooperation. Further, we find that this model will also support the emergence of cooperation from pre-existing non-cooperative random networks. This model, like Poncela et al.'s, does not require agents to have memory, recognition of other agents, or other cognitive abilities, and so may suggest a more general explanation of the emergence of cooperation in early evolutionary transitions, than mechanisms such as kin selection, direct and indirect reciprocity.
Cooperation among organisms is observed, both within and between species, throughout the natural world. It is necessary for the organization and functioning of societies, from insect to human. Cooperation is also posited as essential to the evolutionary development of complex forms of life from simpler ones, such as the evolutionary transition from prokaryotes to eukaryotes or the development of multicellular organisms 1 . A widespread phenomenon in nature, cooperative behaviour has been studied in detail in a wide variety of situations and lifeforms; in viruses 2 , bacteria 3 , insects 4 , fish 5 , birds 6 , mammals 7 , primates 8 and of course in humans 9 , where the evolution of cooperation has been linked to the development of language 10 .
The riddle of cooperation is how to resolve the tension between the ubiquitous existence of cooperation in the natural world, and the competitive struggle for survival between organisms (or genes or groups), that is an essential ingredient of the Darwinian evolutionary perspective. Based on existing theories [11][12][13] , Nowak 14 describes a framework of enabling mechanisms to address the existence of cooperation under a range of differing scenarios. This framework consists of the following five mechanisms: kin selection, direct and indirect reciprocity, multi-level selection and network reciprocity. These mechanisms have been developed and much studied within the flourishing area of evolutionary game theory, and to a lesser extent in the simulated evolution and artificial life areas (in computer science).
Our interest in this paper is network reciprocity, where the interactions between organisms, in relation to their network structure, offer an explanation of cooperation. This mechanism is important for two reasons. First, whilst cooperation is widespread and found in a broad range of scenarios in the real world, many of the mechanisms that have been proposed to explain it require specific conditions such as familial relationships (for kin selection), the ability to recognise or remember (for direct and indirect reciprocity) and transient competing groups (for multi-level selection) (see Nowak 14 for specific details).
The requirement for such conditions limits the use of each of these mechanisms as a more general explanation for a widespread phenomenon. Secondly, the more specific behavioural or cognitive abilities required by some of these mechanisms preclude their use in explaining the role of cooperation in early evolutionary transitions. Network-based mechanisms which focus on simple agents potentially offer explanations which do not require such abilities. All forms of life exist in some sort of relationship with other individuals, or environments, and as a result can all be considered to exist within networks. Thus the network explanation has ingredients for generality which are lacking in the other models.
It has been shown that networks having heterogeneous structure can demonstrate cooperation in situations where individuals interact with differing degrees 15 , effectively by clustering. Populations studied in this way are represented in the form of a network, with individuals existing at the nodes of the network and connections represented by edges. The degree of an individual node indicates the number of neighbour nodes it is connected to. Heterogeneity refers to the range of the degree distribution within the network. Behaviour in these networks can be investigated using the prisoner's dilemma game which is widely adopted as a metaphor for cooperation.
The majority of studies [15][16][17] investigating cooperation with regards to network structure have focused on static networks and hence consider the behaviour of agents distinct from the networks within which they exist. Specifically in these works, the behaviour of the individuals within the network has no effect on their environment. More recently however, the interaction between behaviour and the development of network structure has been considered in an interesting development which shows promise in understanding evolutionary origins of cooperation. The Evolutionary Preferential Attachment (EPA) model developed by Poncela et al. 18 proposes a fitness-based coevolutionary mechanism where scale-free networks, which are supportive of cooperation, emerge in a way that is influenced by the behaviour of agents connecting to the network.
There is a large body of work devoted to coevolutionary investigations of cooperation (see Perc and Szolnoki 19 for a review), of which a subset focuses on coevolutionary studies within networks [20][21][22][23] . However, the EPA approach of Poncela, which we investigate further in this report, is notable in that it addresses an area that seems to have received very little attention: specifically, it explores how environment affects the behaviour of individuals, simultaneously with how such individuals, in return, affect their developing environment.
In the EPA model, new network nodes connect preferentially to existing network nodes that have higher fitness scores. Accumulated fitness scores arise from agents located on nodes within the network playing one-shot prisoner's dilemma with their neighbours. Strategies are subsequently updated on a probabilistic basis by comparison with the relative fitness of a randomly-selected neighbour. The linking of evolutionary agent behaviours to their environment in this way has been shown to promote cooperation, whilst the use of fitness rather than degree allows for a broader and more natural representation of preferential attachment 24 . Whilst further exploring the role of scale-free network growth with regards to cooperation (which of itself is interesting) the EPA model also implements one-shot rather than iterated prisoner's dilemma and it utilises agents having unconditional (fixed) strategies; hence it potentially presents a very simple minimal model, in the light of reported findings, for the coemergence of cooperation and scale-free networks.
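A minimal sketch of such fitness-based preferential attachment is given below. The exact attachment weighting of the EPA model is defined in the original paper; here we simply assume a probability increasing in accumulated fitness, with a small offset so that zero-fitness nodes remain reachable.

```python
import random

def attach_new_node(network, fitness, m=2):
    """Add one node and wire it to m distinct existing nodes, chosen with
    probability increasing in their accumulated fitness.
    `network` maps integer node ids to sets of neighbour ids."""
    nodes = list(network)
    weights = [1.0 + fitness.get(n, 0.0) for n in nodes]  # offset is an assumption
    targets = set()
    while len(targets) < m:
        targets.add(random.choices(nodes, weights=weights)[0])
    new = max(nodes) + 1
    network[new] = set(targets)
    for t in targets:
        network[t].add(new)
    return new
```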
Our investigations here are driven by two observations regarding the EPA model. First, we note that the model achieves a fixed network structure very early within simulations, from which point onwards agent behaviour has no effect on network structure. Secondly, whilst the EPA model supports the growth of cooperation in networks from a founder population consisting solely of cooperators, it does not achieve similar levels of success for networks grown from defectors. The broader question of how cooperation emerges requires an answer which can generalise to explain emergence from populations that are not assumed cooperative initially; hence, in this work we investigate networks grown from founder populations which may be cooperators or defectors.
We introduce a modification to the EPA model (see Methods for details) which we consider an abstraction common to most, if not all, real populations, that of population size fluctuation. We investigate whether the resulting opportunity for the agents to continually modify the network, leads to increased levels of cooperation in the population. We achieve this fluctuation by truncating the evolved network whenever it reaches the specified maximum size. At truncation, agents are selected for deletion on the basis of fitness. Those least fit are removed from the network. The network is grown and truncated repeatedly until the simulation is ceased. The original EPA model offered a limited period of time for agents to initially affect the structure of their network. Our modification makes this 'window of opportunity' repeatedly available. Whilst a small number of interesting studies have explored the effect on cooperation of deleting network links [25][26][27][28] , or to a much lesser extent, nodes [29][30][31] , the process we have implemented here differs in that it specifically targets individuals (nodes) on the basis of least-fitness. As such it has a very clear analogue in nature, in terms of natural selection.
The question, "How does cooperation emerge?" can be considered from two extreme perspectives, firstly the scenario where cooperation develops within a population from its very earliest origins and secondly in terms of its emergence within a pre-existing non-cooperative network. In reality cooperation may occur anywhere within a spectrum bounded by these two extrema, at different times for different sets of events and circumstances, therefore a network-based mechanism to explain this phenomenon should be able to deal with either extreme and other positions in between. In testing this model, we investigate scenarios where cooperation may develop as a population grows from its founder members and we also apply the model to pre-existing randomly structured networks.
Results
Unless stated otherwise in the text, the general outline of the evolutionary process by which our results were obtained, occurs for one generation as follows:
1. Play prisoner's dilemma: Each agent plays one-shot prisoner's dilemma with all neighbours and achieves a fitness score that is the sum of all the payoffs.
2. Update strategies: Those strategies that result in low scores are replaced on a probabilistic basis by comparison with the strategies of randomly selected neighbours.
3. Grow network: A specified number (we used 10 in all our simulations) of new nodes are added to the network, connecting to m distinct existing nodes via m edges using either EPA or CRA.
4. Remove nodes (only in the case of attrition models): If the network has reached maximum size, it is pruned by a truncation process that removes agents on the basis of least fitness; a sketch of the full loop is given below.

Full details on the specifics of the implementations are provided in the methods section.
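In outline, one generation of the fluctuation model can be sketched as follows. The helper callables (payoffs, imitate, attach, prune) stand for steps 1-4 above and are placeholders of ours; only the assignment of C or D to newcomers with equal probability, the 10 new nodes per generation, and the least-fitness truncation are taken from the description above.

```python
import random

def generation(net, strat, payoffs, imitate, attach, prune,
               n_new=10, n_max=1000, trunc=0.05):
    """One generation: play, imitate, grow, and (if full) truncate."""
    fit = payoffs(net, strat)                      # 1. one-shot PD fitness scores
    imitate(net, strat, fit)                       # 2. probabilistic strategy update
    for _ in range(n_new):                         # 3. growth by preferential attachment
        node = attach(net, fit)
        strat[node] = random.choice("CD")          #    C or D with equal probability
    if len(net) >= n_max:                          # 4. attrition: remove least-fit agents
        victims = sorted(net, key=lambda n: fit.get(n, 0.0))[:int(trunc * len(net))]
        for node in victims:
            prune(net, strat, node)
    return net, strat
```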
Results for networks grown from founder populations. We investigate the effect of population fluctuation on networks grown from founder populations consisting of three nodes.
Low levels of truncation result in increased levels of cooperation. For simulations starting from founder networks consisting solely of cooperators, we achieved similar profiles to those from the EPA model; however, when lower levels of truncation (less than 20%) were used, we were able to demonstrate consistently higher levels of cooperation than the EPA model for values of b (the temptation to defect) greater than 1.6 (see Fig. 1a). The highest levels of cooperation were achieved using as little as 2.5% and 5% truncation. We observed that cooperation does not reduce to the levels seen for EPA until truncation values are reduced to as little as 0.1% (not shown). Whilst large percentage truncations risk deleting some of the higher fitness nodes which give the network its heterogeneous (power-law) degree distribution and hence aid cooperation, small truncation percentages will be focused on low-fitness, low-connectivity nodes, the deletion of which is unlikely to have such a detrimental effect. Also, given small truncation values, truncation events will occur at higher frequencies, thus supplying a steady 'drip-feed' of new nodes which will attach to existing hubs by preferential attachment and hence continually promote a power-law degree distribution within the network. The reason that the EPA model can achieve higher levels of cooperation at b = 1 to 1.6 than the fluctuation model is that, whilst it is possible for the EPA model to be completely overrun by cooperators, in the fluctuation model repeated truncation prevents such a situation from occurring. Defectors are being added to the population after every truncation event. A similar constraint also applies at the other end of the scale, with regards to very low levels of cooperation. So there are limits below 1 and above 0 which are a result of the truncation size and frequency. These limits restrict the range of cooperation values achievable by the fluctuation model.
Cooperation occurs even for populations that are founded entirely with defectors. Our results starting from founder populations consisting solely of defectors show an increase compared with levels of cooperation achieved by the EPA model (see Fig. 1b). Further, we note that final levels of cooperation arising from the fluctuation model for networks founded from cooperator and from defector strategies were almost indistinguishable statistically: we tested the dependence of the final cooperation levels observed as a function of b, the temptation to defect, and the founding strategy type (C or D), using a nonparametric Sign Test 32 (see Table 1).
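The exact two-tailed binomial calculation behind such a Sign Test is simple to reproduce; a sketch:

```python
from math import comb

def sign_test_p(n, k):
    """Two-tailed exact binomial p-value for k successes among n non-tied
    pairs under the null hypothesis that both outcomes are equally likely."""
    tail = sum(comb(n, i) for i in range(min(k, n - k) + 1)) / 2 ** n
    return min(1.0, 2.0 * tail)
```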
Fluctuation using random selection can still improve cooperation for defector-founded populations. As a control to the effect seen in the fluctuation model, we repeated the above simulations, deleting nodes randomly rather than on the basis of lowest fitness. Results are illustrated in Fig. 2. By comparing with Fig. 1, it can be seen that there is a clear difference in outcomes. First, for random deletion (Fig. 2), fractions of cooperators present are reduced compared to least fitness (Fig. 1); although, we note that levels of cooperation achieved are still independent of the founder population strategy (Fig. 2a,b are approximately equivalent for fluctuation model simulations). Secondly, the percentage truncation parameter no longer appears to have any effect on cooperation (all truncation graphs in Fig. 2 look approximately equivalent regardless of % values).
Focusing solely on Fig. 2, we now consider the fluctuation model compared to the EPA model. In the case of networks grown from cooperator-founders (Fig. 2a), EPA demonstrates higher levels of cooperation than the fluctuation simulations. Truncating the network by a method that simply deletes nodes at random is, unsurprisingly, less effective at promoting cooperation than the EPA model, which has been shown to be effective for cooperator-founded networks. In the case of networks grown from defector-founders (Fig. 2b), the fluctuation model still achieves the same results as it did for cooperator-founders (in Fig. 2a). However, in this case, EPA achieves lower levels of performance than the fluctuation model featuring 'random' truncation. As we have mentioned, EPA is generally less effective at promoting cooperation when networks are grown from defector-founders. How is the fluctuation model, with random truncation, still able to promote cooperation (albeit at reduced levels compared to targeted truncation)? The random deletion process will inevitably disrupt the formation of the heterogeneous network by deleting the higher degree nodes that are key to the scale-free structure. However, this disruption will be countered by the preferential process for addition of replacement nodes, which is still fitness-based, i.e. new nodes added will still be preferentially attached to existing nodes of higher fitness. Heterogeneity of degree (which supports cooperation) still arises, but not to the extent seen for the fluctuation model, which targets least fit nodes for deletion. The preferential attachment process explains why the fluctuation model is still able to have a positive (but reduced) effect on levels of cooperation, even when nodes are deleted randomly.

Table 1. Results of a nonparametric Sign Test (using a two-tailed exact binomial calculation), comparing the final level of cooperation observed in networks founded with cooperators and networks founded with defectors. For each level of truncation, the 240 × 2 independent samples were paired by the value of b, the temptation to defect. The column n is the number of non-tied sample pairs. The column k is the number of times the C-founded population had a larger final cooperation level than the D-founded population. With the standard EPA model there is strong statistical evidence that the cooperator- and defector-founded networks differ. For the fluctuation model, the evidence is much less clear. Given that the power of the test is high here, due to the relatively large number of samples used, we can tentatively conclude that there is little or no effect of the type of network founding strategy (cooperator or defector) in those fluctuation models having above 2.5% truncation.
Fluctuation in population size reduces variability within simulation results and increases cooperation. In Fig. 3, we provide sample illustrations of time profiles from the EPA and fluctuation models respectively (starting from cooperator founders, b = 2.2). The EPA model (Fig. 3a) results in high variability between different simulation replicates with some replicates being overrun by defectors (i.e. fraction of cooperators ≈0). The fluctuation model (Fig. 3b) demonstrates far less variability, with clear transitions between two states. Whilst the time at which transition occurs varies, most replicates achieved transitions to a consistent level of cooperation -equivalent to, or greater than the highest level observed from amongst all simulations in a comparable EPA model.
We considered the possibility that the EPA model may simply require longer for convergence and hence ran extended simulations up to 200,000 generations (not shown). We did not see any consistent convergence over later generations: whilst some replicates achieved higher levels of cooperation beyond 2,000 generations, others did not, and some oscillated continually.
Fluctuation results in dramatic increases in cooperation for networks grown from defectors. Figure 4 shows replicate simulations grown from defector founders, using the EPA and fluctuation models respectively. In the EPA model (Fig. 4a), all replicates are overrun by defectors. In the fluctuation model (Fig. 4b), all replicates transition to cooperation.
Ultimately, levels of cooperation achieved are similar for the fluctuation model regardless of whether the founder network is cooperators or defectors. We have however noticed that whilst final outcomes are typically similar for both types of founding strategy, defector-founded simulations tend to result in later times to transitions and greater variation in such times (Figs 3b,4b illustrate this). Generally, for cooperator-founded populations, with b values where cooperation was able to arise, we observed transition of the majority (> 95%) of replicates within our typical simulation period of 2000 generations, with delayed transitions becoming more common given increasing b values. For defector-founded populations, delayed transitions occurred more frequently and to achieve consistent results (> 95% transitioned) required 20,000 generations. Figure 5 illustrates time profile plots, with corresponding network degree distributions, for replicate simulations grown from cooperator founders using the fluctuation model. We see that the fluctuation model enables all replicates to consistently reach an apparent power-law degree distribution, as previously reported for the EPA model 33 . We also observe the same result (not shown) for the fluctuation model operating on networks grown from defector-founded populations. In addition, the replicate data makes clear that, when cooperation arises, variability in transition times (Fig. 5a) does not correspond to variability in degree distribution (Fig. 5b).
The presence of a small number of nodes with degree k = 1 is an artefact of the fluctuation model implementation. The fluctuation model grows the network in the same way as EPA (with each new node extending m = 2 connections), however the truncation component of the fluctuation model can leave residual nodes of degree k = 1 (at low frequencies) due to the deletion of connections from removed nodes. Cooperation has a characteristic degree distribution. Whilst in the majority of cases, the fluctuation model supported a transition of networks to a higher level of cooperation, we observed that as b values increased, the transition was not guaranteed. Figure 6 captures an example of this, for 1 replicate out of 10 (for b = 2.2). The replicate data demonstrates clearly the difference in degree distributions between networks that transition to cooperation and those that do not (the red lines in plots 6a and 6b refer to the same replicate). Results for pre-existing random networks. The following results look at the effect of the fluctuation model when applied to pre-existing random networks.
Fluctuation drives non-cooperative pre-existing networks to cooperation. Figure 7 shows the final levels of cooperation achieved in simulations which started from randomly structured networks. Nodes within these networks were allocated cooperator (defector) strategies according to probability P (1 − P). Simulations were run for 20,000 generations, during which time the majority (> 95%) of replicates transitioned to cooperation (for those simulations using b values where cooperation was seen to emerge). Three pre-existing networks were tested, consisting of i) all cooperators, ii) cooperators and defectors in approximately equal amounts, and iii) all defectors. The curves for these three networks are almost entirely coincident, again illustrating the emergence of cooperation in the fluctuation model, regardless of starting criteria (as seen previously in networks grown from founder populations). A static network (iv), where structural changes were disallowed (i.e. strategy updating only), is shown for comparison and clearly illustrates the contribution of the fluctuation mechanism.
Fluctuation transforms pre-existing network structure from random to scale-free. In Fig. 8 we show the effect of the fluctuation model on degree distribution, for pre-existing random networks initially composed entirely of defectors. Figure 8a, using linear axes, highlights the initial Poisson degree distribution of the pre-existing random network, and Fig. 8b highlights the apparent log-log linearity of the final degree distribution, which is characteristic of a power-law distribution.
Cooperation appears to be permanent. Across several thousand simulations, whilst we have observed failures to transition to cooperation, we have not observed a single instance of widespread reversion to defection once cooperation has been achieved within a population (excluding the small fluctuations visible in asymptotic states; see Figs 3,4,5). It would appear that once cooperation is established by means of this model, it does not collapse.
Discussion
The main findings of our investigations are that:
1. fluctuation of population size leads to an increase in levels of cooperation compared with the EPA model;
2. the levels of cooperation achieved in this way are largely independent of whether the populations were founded from defectors or cooperators;
3. the fluctuation model supports the emergence of cooperation both in networks grown from founder populations and in pre-existing random networks.
The time profile plots we have provided in our results give an indication as to how the fluctuation model is able to reach the increased levels of cooperation. Whilst the EPA model results in a high degree of variability, the fluctuation model produces consistent transition profiles. The EPA model has two interacting dynamic components: preferential attachment and strategy updating. Structural organization within the EPA model, which is driven by the preferential attachment component, occurs until the network reaches its defined size limit. Changes in levels of cooperation continue to occur after this point. Given that the network structure is fixed, these latter changes can only occur as a result of the remaining active component of the EPA model process: strategy updating. Close examination of EPA simulation replicate time profiles (as shown in Fig. 3) reveals an interesting observation (for b values greater than 1.6): at the point where network structure becomes fixed, those replicates having higher levels of cooperation tend to finish with higher levels of cooperation than replicates experiencing lower levels of cooperation at the network fixation point. Whilst we do not yet have a detailed understanding of how cooperative structure develops within our networks, this observation suggests that prior to the network fixation point, some structural precedent is set which gives a probabilistic indication of how a network will profit, in cooperative terms, from strategy updating subsequent to structure fixation.
Based on the work of Poncela et al. 33 , which describes the connection between scale-free network structure and cooperation, a plausible explanation for such a structural precedent is as follows: Whilst new nodes are preferentially attached to a growing network in a way that may generate hubs and hence a scale-free structure, there is no guarantee that the early clusters of nodes appearing in the network will be cooperators (cooperator and defector strategies are assigned to newly added nodes with equal probability). If the first network hubs appearing in the network are largely occupied by cooperators who in turn have cooperative neighbours, then these agents are likely to accumulate high fitness scores. This would potentially set the foundation for cooperation, since such a group is likely to have a hub which would then draw further connections from newcomer nodes and hence promote scale-free structure. On the other hand, if early groupings of cooperators are interspersed with large numbers of defectors, this is likely to result in defectors preying on cooperators and initially accumulating higher fitness values. Strategy updating around such groups would in turn result in the conversion of cooperators present to the (at that point in time) more successful defectors. Eventually large groups of defectors will arise and result in lower fitness values. In a sea of low fitness values, preferential attachment is less able to demonstrate the "rich get richer" effect, and this is more likely to result in random rather than targeted connections for newcomer nodes. After network fixation, strategies can be updated, but the network structure cannot change, and early groupings of defectors in this way are likely to disrupt, impair or sufficiently delay the foundation of the scale-free structure that is needed to support higher levels of cooperation. We anticipate forming a testable hypothesis around this explanation as the basis for a subsequent work on this model.

The fluctuation model effectively allows a network to "go back in time" to fix structure that may have caused such a poor start. The model targets low fitness nodes (and their edges) for deletion. Such nodes are more likely to be defectors surrounded by other defectors (a defector-defector interaction results in a payoff of zero for both parties). Replacement nodes do not inherit the deleted node's connections; they are preferentially attached to higher fitness nodes. In this way, networks that have a poor start are no longer permanently fixed; they have repeated opportunities to address the 'poorest performing' elements of their structure. When viewed in this way, it is no longer surprising that similar levels of cooperation are ultimately achieved regardless of starting strategies. In the same way that this process of continual readjustment allows the network to deal with a less-than-ideal initial structure, it similarly allows the network to deal with less than favourable starting strategies. If such strategies perform poorly, then sooner or later there is a likelihood they will be deleted, and should their replacements also perform poorly, there is a similar likelihood that they too will be deleted.
It is this ability to continually replace poor performing network nodes and connections that supports the fluctuation model's striking ability to convert pre-existing random networks, initially populated entirely by defectors, to highly cooperative networks with a power-law distribution.
The fluctuation model studied in this paper is not intended as an accurate representation of any specific real world process, and probably does not map onto any such process precisely. However, it may be interpreted in several ways as analogues of natural phenomena. We now briefly consider possible interpretations of specific aspects of our model. Firstly, as in the original EPA model, new nodes joining the network could be considered as being "newly born into the network" or as "newcomers from outside the network". In either case, they are positioned by preferential attachment in network 'locations' which may prove advantageous or disadvantageous to them. Secondly, given that the model is one of (Darwinian) evolution, we tend to view strategy updates as equivalent to new population members replacing old, rather than any form of learning or imitation. This may be viewed as a birth-death process. In this situation, the new strategy 'inherits' a set of connections forged by its ancestors along with the advantages or disadvantages that those connections confer. Thirdly, fluctuation as we have implemented it deletes not only the least fit agents, but also the connections established over generations by successive offspring at that network location. The purpose of deleting both agent and network node is to introduce some form of flux into the actual network itself rather than just its constituents' behaviour. This is a different and more disruptive process to that described by strategy updating, perhaps akin to real-world scenarios where external environmental effects may have wider consequences for an entire population than for just specific individuals.
Each of these processes is open to alternative interpretations. However, we suspect based on our results from this work, that it is not necessarily the exact process that is the important issue here, it is merely that, much like most ecological systems in nature, a network continues to be perturbed in some way and is hence unable to achieve a permanently fixed structure: it thus continues to adapt. We anticipate that there may be alternative ways of perturbing a similar model to achieve results akin to those we have demonstrated.
Conclusion
In summary, natural selection acts as a culling process that maintains diversity. In this work we have attempted to generalise the effect of that process across a model of networked individuals, with the crucial ability that individuals can locally affect their network in response to such culls. We have introduced a relatively simple modification to the original EPA model, symbolising an effect elemental to the behaviour of populations in the real world: effectively, some sort of representation of flux in the environment. This modification creates the opportunity for individuals in a population to continue to test adaptation against the selective pressures of the ecosystem. We have shown that such a feature brings about marked increases in levels of cooperation in networks grown from defector-founded populations. We have also shown that this feature results in levels of cooperation which are independent of starting behaviour, and we have shown that the model can bring about cooperation in both growing and pre-existing non-cooperative networks.
It is important that models which seek to explain the origins of cooperation are general and also robust to starting conditions. We believe that our model achieves both of these aims and hence our findings are of value in the search to understand the emergence and the ubiquity of cooperation.
Methods
Our model and simulations are based on those described in 18 , but with the addition of a pruning step which deletes nodes from the network. We here give a full description of the approach for completeness.
Overview of approach. The models consist of a network (i.e. graph) with agents situated at the nodes. Edges between nodes represent interactions between agents. Interactions are behaviours between agents playing the one-shot prisoner's dilemma game. These behaviours are encoded by a 'strategy' variable, associated with each agent, which takes one of two values: cooperate or defect. The game is played in a round-robin fashion, with each agent playing its strategy against all its connected neighbours, in turn. Each agent thus accumulates a fitness score which is the sum of all the individual game payoffs.
Within an evolutionary simulation, starting from a founding population, this process is repeated over generations. The evolutionary process assesses agents at each generation on the basis of their fitness score; fitter agents' strategies remain unchanged; less fit agents are more likely to have strategies displaced by those of fitter neighbours. The evolutionary preferential attachment (EPA) process 18 connects strategy dynamics to network growth: Starting from a small founding population new nodes are added which preferentially connect to fitter agents within the network.
Our adaptation of the EPA model adds a further component which repeatedly truncates the network: Whenever the population reaches a maximum size, a specified percentage of nodes in the network are removed, on the basis of least fitness, after which the network grows again.
Outline of the evolutionary process. As described earlier, the general evolutionary process we implement is as follows:
1. Play prisoner's dilemma
2. Update strategies
3. Grow network
4. Remove nodes (only in the case of attrition models)
In the following, we provide more detail on the specifics of each of the four steps:
Play prisoner's dilemma. We use the single parameter representation of the one-shot prisoner's dilemma as formulated in 16 . In this form (the 'weak' prisoner's dilemma), payoff values for the actions, referred to as T, R, P and S, become b, 1, 0 and 0 (see Fig. 9). The b parameter represents the 'temptation to defect' and is set at a value greater than 1 for the dilemma to exist.
From the accumulated prisoner's dilemma interactions, each agent achieves a fitness score as follows:
f_i = Σ_{j=1}^{k_i} π_{i,j},    (1)

where k_i is the number of neighbours that node i has, j represents a connected neighbour and π_{i,j} represents the payoff achieved by node i from playing prisoner's dilemma with node j.
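To make this concrete, here is a minimal sketch (ours; all names are illustrative) of the payoff matrix and the round-robin fitness accumulation of Eq. (1), assuming a networkx graph and strategies encoded as 1 (cooperate) and 0 (defect):

```python
import networkx as nx

b = 1.8  # temptation to defect (illustrative value)

def payoff(s_i, s_j):
    """Weak prisoner's dilemma payoffs (Fig. 9): T = b, R = 1, P = S = 0."""
    if s_i == 1 and s_j == 1:
        return 1.0   # R: both cooperate
    if s_i == 0 and s_j == 1:
        return b     # T: defect against a cooperator
    return 0.0       # S (cooperate against a defector) or P (both defect)

def fitness(G, strategy):
    """Eq. (1): f_i is the sum of payoffs of node i against its k_i neighbours."""
    return {i: sum(payoff(strategy[i], strategy[j]) for j in G[i]) for i in G}
```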
Update strategies. Each node i selects a neighbour j at random. If the fitness of node i, f i is greater or equal to the neighbour's fitness f j , then i's strategy is unchanged. If the fitness of node i, f i is less than the neighbour's fitness, f j , then i's strategy is replaced by a copy of the neighbour j's strategy, according to a probability proportional to the difference between their fitness values. Thus poor scoring nodes have their strategies displaced by the strategies of more successful neighbours.
Hence, at generation t, if f_i(t) ≥ f_j(t) then i's strategy remains unchanged. If f_i(t) < f_j(t) then i's strategy is replaced by a copy of the neighbour j's strategy with probability

P_{i→j}(t) = (f_j(t) − f_i(t)) / (b · max[k_i(t), k_j(t)]),    (2)

where k_i and k_j are the degrees of node i and its neighbour j respectively. The purpose of the denominator is to normalise the difference between the two nodes: the term b·max[k_i(t), k_j(t)] represents the largest achievable fitness difference between the two nodes given their respective degrees. (The highest payoff value in the prisoner's dilemma is T, equivalent to b in the single-parameter version of the game used here, so the maximum possible score for a node of degree k is k·b. The lowest payoff values, P and S, are both equal to zero, so the minimum possible score is zero. Thus the maximum possible difference between two nodes is simply the maximum possible score of the fitter node.)
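A sketch of this update step (ours; we assume a synchronous update schedule, which the text does not prescribe):

```python
import random

def update_strategies(G, strategy, fit, b):
    """Each node compares itself with one random neighbour and, if less fit,
    copies that neighbour's strategy with probability
    (f_j - f_i) / (b * max(k_i, k_j)), cf. Eq. (2)."""
    new = dict(strategy)   # decisions are based on the old state
    for i in G:
        nbrs = list(G[i])
        if not nbrs:
            continue
        j = random.choice(nbrs)
        if fit[i] < fit[j]:
            p = (fit[j] - fit[i]) / (b * max(G.degree(i), G.degree(j)))
            if random.random() < p:
                new[i] = strategy[j]
    return new
```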
Grow network. New nodes with randomly allocated strategies are added, to achieve a total of 10 at each generation. Each new node uses m edges to connect to existing nodes. In all our simulations, we use m = 2 edges. Duplicate edges and self-edges are not allowed. The probability that an existing node i receives one of the m new edges is as follows:
Π_i(t) = (1 − ε + ε f_i(t)) / Σ_{j=1}^{N(t)} (1 − ε + ε f_j(t)),    (3)

where f_i(t) is the fitness of an existing node i and N(t) is the number of nodes available to connect to at time t in the existing population. Given that in our model each new node extends m = 2 new edges, and multiple edges are not allowed, N is therefore determined without replacement. The parameter ε ∈ [0,1) is used to adjust selection pressure. For all of our simulations ε = 0.99, hence focusing our model on selection occurring directly as a result of the preferential attachment process.
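The attachment step can be sketched accordingly (ours; the loop simply enforces distinct targets, i.e. sampling without replacement):

```python
import random

EPS = 0.99  # selection pressure parameter epsilon

def attach_new_node(G, strategy, fit, m=2):
    """Add one node with m edges, each attached to an existing node i with
    probability proportional to 1 - EPS + EPS * f_i(t), cf. Eq. (3)."""
    weights = {i: 1.0 - EPS + EPS * fit.get(i, 0.0) for i in G}
    new_id = max(G) + 1
    targets = set()
    while len(targets) < min(m, len(weights)):
        pool = [i for i in weights if i not in targets]
        w = [weights[i] for i in pool]
        targets.add(random.choices(pool, weights=w)[0])
    G.add_node(new_id)
    G.add_edges_from((new_id, t) for t in targets)
    strategy[new_id] = random.choice([0, 1])  # strategies allocated at random
```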
Truncate network. On achieving a specified size, the network is truncated according to a percentage X. Truncation is achieved by ranking all nodes in order of current fitness scores from maximum to minimum. The X% least-fit nodes are then deleted from the network. All edges from deleted nodes are removed from the network. Any nodes that become disconnected from the network as a result of this process are also deleted. (Failure to do this would result in small numbers of single, disconnected, non-playing nodes, having static strategies, whose zero fitness values would result in continual isolation from the network.) When there are multiple nodes of equivalent low fitness value, the earliest (oldest) nodes are deleted first. Where X = 0, no truncation occurs and the fluctuation model becomes the EPA model.
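A sketch of the truncation step (ours; breaking fitness ties by node id stands in for 'oldest first', since ids grow with the network here):

```python
def truncate(G, strategy, fit, frac=0.025):
    """Delete the frac least-fit nodes, then drop any nodes left disconnected."""
    n_del = int(round(frac * G.number_of_nodes()))
    victims = sorted(G, key=lambda i: (fit[i], i))[:n_del]
    G.remove_nodes_from(victims)
    isolated = [i for i in G if G.degree(i) == 0]
    G.remove_nodes_from(isolated)
    for i in victims + isolated:
        strategy.pop(i, None)
```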
Investigations of the fluctuation model in networks grown from founder populations. We investigated networks grown from an initial complete network with N 0 = 3 agents at generation t 0 . Founding populations were either entirely cooperators or entirely defectors. We tested a range of different sized truncation values from 0.001 to 50% starting from each of the two founder populations (cooperators or defectors). Networks were grown to a maximum size of N = 1000 nodes with an overall average degree of approximately k = 4. Simulations were run until 2000 generations. The 'fraction of cooperators' values we use are means, averaged over the last 20 generations of each simulation (to compensate for variability that might occur if just using final generation values). Each data point on 'b-profile' plots (Figs 1,2,7) is the mean of 25 simulations. Simulations run for the purposes of illustrating time profiles or degree distributions were limited to 10 replicates in the interests of clarity.
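Putting the previous sketches together, a minimal end-to-end run for a cooperator-founded population could look as follows (ours; it reuses the helper functions sketched above, and the parameter values mirror those just described):

```python
import networkx as nx

N_MAX, GENERATIONS, B, X_FRAC = 1000, 2000, 2.2, 0.025

G = nx.complete_graph(3)          # founding complete network, N0 = 3
strategy = {i: 1 for i in G}      # all-cooperator founders (0 for defectors)

for t in range(GENERATIONS):
    fit = fitness(G, strategy)                         # round-robin payoffs
    strategy = update_strategies(G, strategy, fit, B)  # displace poor strategies
    for _ in range(10):                                # 10 new nodes per generation
        attach_new_node(G, strategy, fit, m=2)
    if G.number_of_nodes() >= N_MAX:                   # at the size limit: truncate
        truncate(G, strategy, fitness(G, strategy), frac=X_FRAC)

print(sum(strategy.values()) / len(strategy))          # final fraction of cooperators
```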
Investigations of the fluctuation model applied to pre-existing random networks. Random networks were generated by randomly connecting edges between a specified number of nodes (i.e. the maximum size of the network). The total number of edges added, N·k/2, was determined on the basis of a random graph of average degree k = 4. Simulation parameters were as described previously for founder population investigations except that, i) we focused on a single truncation value of X = 2.5% and ii) longer run times (e.g. 20,000 generations) were generally required for replicate simulations to stabilise, when looking at pre-existing networks initially populated entirely with defectors.
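For the pre-existing case, the corresponding starting condition can be sketched as (ours; networkx's gnm_random_graph draws a random graph with a fixed number of edges):

```python
import networkx as nx

N, k = 1000, 4
G = nx.gnm_random_graph(N, N * k // 2)   # random graph with N*k/2 edges
strategy = {i: 0 for i in G}             # populated entirely by defectors (P = 0)
```

The fluctuation loop above is then applied unchanged; as described below, the model simply treats this network as one that has already reached the truncation point.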
In applying the fluctuation model to pre-existing networks, the model simply 'sees' a pre-existing network as a 'grown-from-founders' network which has reached the point where it requires truncation. In essence, the fluctuation model is the same when applied to pre-existing networks as when applied to networks grown from founders.
Where parameters were modified from those described in this methods section (e.g. longer simulations), this is made clear in the results.
Figure 1. Effect of truncation size on cooperation. Simulations were run for 2000 generations using the EPA and fluctuation models at a range of b values. The graphs plot the fraction of cooperators in the population against b values. Each point on the graph represents an average of 25 individual results. Each of the individual results is in turn an average of the last 20 generations of a simulation replicate. Plots are shown from fluctuation model simulations using 2.5 to 50% network truncation. An EPA simulation is also shown which does not feature any truncation. (a) illustrates results grown from founder populations of 3 cooperators. (b) illustrates results grown from founder populations of 3 defectors. See Figs 3,4, and accompanying text for more detailed discussion of within-simulation data variability.
Figure 2. Effect of random rather than weighted selection of nodes for network truncation. Simulations were run as described for Fig. 1; however, in the fluctuation model, least-fitness based node deletion was replaced with random deletion. Plots are shown from fluctuation model simulations using 2.5 to 50% network truncation. For reference, an EPA simulation is also shown which does not feature any truncation. (a) illustrates results grown from founder populations of 3 cooperators. (b) illustrates results grown from founder populations of 3 defectors.
Figure 3. Example simulation plots for EPA and fluctuation models starting from cooperator founders. Plots show the individual time profiles for 10 replicates using a b value of 2.2. (a) shows the EPA model. (b) shows the fluctuation model operating with 2.5% truncation. Generation 100 is marked in both figures by a vertical black line. This is the point at which the EPA model reaches a fixed network structure, after which no further nodes are added.
Figure 4. Example simulation plots for EPA and fluctuation models starting from defector founders. Plots show the individual time profiles for 10 replicates using a b value of 2.2. (a) shows the EPA model with network fixation occurring at generation 100. (b) shows the fluctuation model operating with 2.5% truncation. Generation 100 is marked in both figures by a vertical black line. This is the point at which the EPA model reaches a fixed network structure, after which no further nodes are added.
Figure 5. Time profiles and corresponding final degree distributions for networks grown from cooperator founders using the fluctuation model. (a) shows the time profile for a simulation consisting of 10 replicates with a b value of 2.2. The fluctuation model is truncating networks using 2.5% truncation. (b) shows the final degree distributions (at generation 2000) for each of the 10 simulation replicates.
Figure 6. Plot illustrating the difference in degree distribution observed for a replicate that fails to achieve cooperation (fluctuation model). Networks were grown from cooperator founders. Simulation consists of 10 replicates with a b value of 2.2. The fluctuation model is truncating networks using 2.5% truncation. (a) shows the time profile. (b) shows the final degree distributions (at generation 2000) for each of the 10 replicates. The red line in (a) (defectors predominate the population) corresponds to the red line in (b) (steeper exponent than all other replicates).
Figure 7. Effect of fluctuation model on pre-existing random networks. The plot shows temptation to defect plotted against final fraction of cooperators. Each data point represents an average of 25 individual results. Each of the individual results is an average of the last 20 (of 20,000) generations of a simulation replicate. The fluctuation model used 2.5% truncation. The pre-existing networks were in the form of random graphs with each node in the network being populated by cooperators according to probabilities: i) 0, ii) 0.5 and iii) 1. For reference, simulations involving a network that was structurally immutable are also shown in iv. For the immutable network, nodes were populated with cooperators (or defectors) according to a probability of 0.5.
Figure 8. Degree distributions for pre-existing network at start and end of fluctuation model simulations. The plots present aggregate data from fluctuation model simulations of 25 replicates and illustrate the starting and finishing degree distributions, after 20,000 generations. The simulations used a b value of 2.2 and truncation at 2.5%. The starting networks were in the form of random graphs populated entirely by defectors. The same data are represented on linear plots (a) and log-log plots (b) in order to clearly illustrate, respectively, the apparent Poisson initial and power-law final distributions. In the interests of visualising both curves, the linear graph only includes degree values up to k = 20. The error bars shown represent 95% confidence intervals for the data.
Figure 9. Payoff matrix for the weak prisoner's dilemma.
Acknowledgements
This work was supported by funding from the Engineering and Physical Sciences Research Council (Grant reference number EP/I028099/1).

Author Contributions
S.M. conceived and conducted the experiment(s), and analysed the results. J.K. performed statistical analysis. Both authors reviewed the manuscript.

Additional Information
Competing financial interests: The authors declare no competing financial interests.
References
1. Smith, J. M. & Szathmary, E. The Major Transitions in Evolution (Oxford University Press, 1997).
2. Shirogane, Y., Watanabe, S. & Yanagi, Y. Cooperation: another mechanism of viral evolution. Trends in Microbiology 21, 320-324 (2013).
3. Crespi, B. J. The evolution of social behavior in microorganisms. Trends in Ecology & Evolution 16, 178-183 (2001).
4. Wilson, E. O. The Insect Societies (Harvard University Press, 1971).
5. Milinski, M. Tit for tat in sticklebacks and the evolution of cooperation. Nature 325, 433-435 (1987).
6. Stacey, P. B. & Koenig, W. D. Cooperative Breeding in Birds: Long Term Studies of Ecology and Behaviour (Cambridge University Press, 1990).
7. Clutton-Brock, T. H. et al. Cooperation, control, and concession in meerkat groups. Science 291, 478-481 (2001).
8. Mendres, K. A. & de Waal, F. Capuchins do cooperate: the advantage of an intuitive task. Animal Behaviour 60, 523-529 (2000).
9. Axelrod, R. & Hamilton, W. D. The evolution of cooperation. Science 211, 1390-1396 (1981).
10. Nowak, M. A. & Krakauer, D. C. The evolution of language. Proceedings of the National Academy of Sciences 96, 8028-8033 (1999).
11. Hamilton, W. D. The genetical evolution of social behaviour. I, II. Journal of Theoretical Biology 7, 1-52 (1964).
12. Trivers, R. L. The evolution of reciprocal altruism. The Quarterly Review of Biology 46, 35-57 (1971).
13. Grafen, A. Natural selection, kin selection and group selection. In Behavioural Ecology: An Evolutionary Approach, 2nd edn, 62-84 (Blackwell Scientific Publications, Oxford, UK, 1984).
14. Nowak, M. A. Five rules for the evolution of cooperation. Science 314, 1560-1563 (2006).
15. Santos, F. C. & Pacheco, J. M. A new route to the evolution of cooperation. Journal of Evolutionary Biology 19, 726-733 (2006).
16. Nowak, M. A. & May, R. M. Evolutionary games and spatial chaos. Nature 359, 826-829 (1992).
17. Szabó, G. & Fáth, G. Evolutionary games on graphs. Physics Reports 446, 97-216 (2007).
18. Poncela, J., Gómez-Gardeñes, J., Floría, L. M., Sánchez, A. & Moreno, Y. Complex cooperative networks from evolutionary preferential attachment. PLoS ONE 3, e2449 (2008).
19. Perc, M. & Szolnoki, A. Coevolutionary games - a mini review. BioSystems 99, 109-125 (2010).
20. Ebel, H. & Bornholdt, S. Coevolutionary games on networks. Physical Review E 66, 056118 (2002).
21. Pacheco, J. M., Traulsen, A. & Nowak, M. A. Coevolution of strategy and structure in complex networks with dynamical linking. Physical Review Letters 97, 258103 (2006).
22. Szolnoki, A., Perc, M. & Danku, Z. Making new connections towards cooperation in the prisoner's dilemma game. EPL (Europhysics Letters) 84, 50007 (2008).
23. Cardillo, A., Gómez-Gardeñes, J., Vilone, D. & Sánchez, A. Co-evolution of strategies and update rules in the prisoner's dilemma game on complex networks. New Journal of Physics 12, 103034 (2010).
24. Nguyen, K. & Tran, D. A. Fitness-based generative models for power-law networks. In Handbook of Optimization in Complex Networks, 39-53 (Springer, 2012).
25. Zimmermann, M. G. & Eguíluz, V. M. Cooperation, social networks, and the emergence of leadership in a prisoner's dilemma with adaptive local interactions. Physical Review E 72, 056118 (2005).
26. Santos, F. C., Pacheco, J. M. & Lenaerts, T. Cooperation prevails when individuals adjust their social ties. PLoS Computational Biology 2, e140 (2006).
27. Pacheco, J. M., Traulsen, A. & Nowak, M. A. Active linking in evolutionary games. Journal of Theoretical Biology 243, 437-443 (2006).
28. Traulsen, A., Santos, F. C. & Pacheco, J. M. Evolutionary games in self-organizing populations. In Adaptive Networks, 253-267 (Springer, 2009).
29. Perc, M. Evolution of cooperation on scale-free networks subject to error and attack. New Journal of Physics 11, 033027 (2009).
30. Szolnoki, A., Perc, M., Szabó, G. & Stark, H.-U. Impact of aging on the evolution of cooperation in the spatial prisoner's dilemma game. Physical Review E 80, 021901 (2009).
31. Ichinose, G., Tenguishi, Y. & Tanizawa, T. Robustness of cooperation on scale-free networks under continuous topological change. Physical Review E 88, 052808 (2013).
32. Conover, W. J. Practical Nonparametric Statistics, 3rd edn (J. Wiley & Sons, New York, 1999).
33. Poncela, J., Gómez-Gardeñes, J., Floría, L. M. & Moreno, Y. Growing networks driven by the evolutionary prisoners' dilemma game. In Handbook of Optimization in Complex Networks, 115-136 (Springer, 2012).
| [] |
[
"Metriplectic geometry for gravitational subsystems",
"Metriplectic geometry for gravitational subsystems",
"Metriplectic geometry for gravitational subsystems",
"Metriplectic geometry for gravitational subsystems",
"Metriplectic geometry for gravitational subsystems",
"Metriplectic geometry for gravitational subsystems"
] | [
"Viktoria Kabel \nInstitute for Quantum Optics and Quantum Information (IQOQI) Austrian Academy of Sciences Boltzmanngasse 3\n1090ViennaAustria\n\nVienna Center for Quantum Science and Technology (VCQ) Faculty of Physics\nUniversity of Vienna\nBoltzmanngasse 51090ViennaAustria\n",
"Wolfgang Wieland \nInstitute for Quantum Optics and Quantum Information (IQOQI) Austrian Academy of Sciences Boltzmanngasse 3\n1090ViennaAustria\n\nVienna Center for Quantum Science and Technology (VCQ) Faculty of Physics\nUniversity of Vienna\nBoltzmanngasse 51090ViennaAustria\n",
"\n2022\n",
"Viktoria Kabel \nInstitute for Quantum Optics and Quantum Information (IQOQI) Austrian Academy of Sciences Boltzmanngasse 3\n1090ViennaAustria\n\nVienna Center for Quantum Science and Technology (VCQ) Faculty of Physics\nUniversity of Vienna\nBoltzmanngasse 51090ViennaAustria\n",
"Wolfgang Wieland \nInstitute for Quantum Optics and Quantum Information (IQOQI) Austrian Academy of Sciences Boltzmanngasse 3\n1090ViennaAustria\n\nVienna Center for Quantum Science and Technology (VCQ) Faculty of Physics\nUniversity of Vienna\nBoltzmanngasse 51090ViennaAustria\n",
"\n2022\n",
"Viktoria Kabel \nInstitute for Quantum Optics and Quantum Information (IQOQI) Austrian Academy of Sciences Boltzmanngasse 3\n1090ViennaAustria\n\nVienna Center for Quantum Science and Technology (VCQ) Faculty of Physics\nUniversity of Vienna\nBoltzmanngasse 51090ViennaAustria\n",
"Wolfgang Wieland \nInstitute for Quantum Optics and Quantum Information (IQOQI) Austrian Academy of Sciences Boltzmanngasse 3\n1090ViennaAustria\n\nVienna Center for Quantum Science and Technology (VCQ) Faculty of Physics\nUniversity of Vienna\nBoltzmanngasse 51090ViennaAustria\n",
"\n2022\n"
] | [
"Institute for Quantum Optics and Quantum Information (IQOQI) Austrian Academy of Sciences Boltzmanngasse 3\n1090ViennaAustria",
"Vienna Center for Quantum Science and Technology (VCQ) Faculty of Physics\nUniversity of Vienna\nBoltzmanngasse 51090ViennaAustria",
"Institute for Quantum Optics and Quantum Information (IQOQI) Austrian Academy of Sciences Boltzmanngasse 3\n1090ViennaAustria",
"Vienna Center for Quantum Science and Technology (VCQ) Faculty of Physics\nUniversity of Vienna\nBoltzmanngasse 51090ViennaAustria",
"2022",
"Institute for Quantum Optics and Quantum Information (IQOQI) Austrian Academy of Sciences Boltzmanngasse 3\n1090ViennaAustria",
"Vienna Center for Quantum Science and Technology (VCQ) Faculty of Physics\nUniversity of Vienna\nBoltzmanngasse 51090ViennaAustria",
"Institute for Quantum Optics and Quantum Information (IQOQI) Austrian Academy of Sciences Boltzmanngasse 3\n1090ViennaAustria",
"Vienna Center for Quantum Science and Technology (VCQ) Faculty of Physics\nUniversity of Vienna\nBoltzmanngasse 51090ViennaAustria",
"2022",
"Institute for Quantum Optics and Quantum Information (IQOQI) Austrian Academy of Sciences Boltzmanngasse 3\n1090ViennaAustria",
"Vienna Center for Quantum Science and Technology (VCQ) Faculty of Physics\nUniversity of Vienna\nBoltzmanngasse 51090ViennaAustria",
"Institute for Quantum Optics and Quantum Information (IQOQI) Austrian Academy of Sciences Boltzmanngasse 3\n1090ViennaAustria",
"Vienna Center for Quantum Science and Technology (VCQ) Faculty of Physics\nUniversity of Vienna\nBoltzmanngasse 51090ViennaAustria",
"2022"
] | [] | In general relativity, it is difficult to localise observables such as energy, angular momentum, or centre of mass in a bounded region. The difficulty is that there is dissipation. A self-gravitating system, confined by its own gravity to a bounded region, radiates some of the charges away into the environment. At a formal level, dissipation implies that some diffeomorphisms are not Hamiltonian. In fact, there is no Hamiltonian on phase space that would move the region relative to the fields. Recently, an extension of the covariant phase space has been introduced to resolve the issue. On the extended phase space, the Komar charges are Hamiltonian. They are generators of dressed diffeomorphisms. While the construction is sound, the physical significance is unclear. We provide a critical review before developing a geometric approach that takes into account dissipation in a novel way. Our approach is based on metriplectic geometry, a framework used in the description of dissipative systems. Instead of the Poisson bracket, we introduce a Leibniz bracket-a sum of a skew-symmetric and a symmetric bracket. The symmetric term accounts for the loss of charge due to radiation. On the metriplectic space, the charges are Hamiltonian, yet they are not conserved under their own flow. | 10.1103/physrevd.106.064053 | [
"https://export.arxiv.org/pdf/2206.00029v1.pdf"
] | 249,240,432 | 2206.00029 | 465f1979d25091da963eb74e04ddcfe130ed40ac |
Metriplectic geometry for gravitational subsystems
May 2022
Viktoria Kabel
Institute for Quantum Optics and Quantum Information (IQOQI) Austrian Academy of Sciences Boltzmanngasse 3
1090ViennaAustria
Vienna Center for Quantum Science and Technology (VCQ) Faculty of Physics
University of Vienna
Boltzmanngasse 51090ViennaAustria
Wolfgang Wieland
Institute for Quantum Optics and Quantum Information (IQOQI) Austrian Academy of Sciences Boltzmanngasse 3
1090ViennaAustria
Vienna Center for Quantum Science and Technology (VCQ) Faculty of Physics
University of Vienna
Boltzmanngasse 51090ViennaAustria
2022
Metriplectic geometry for gravitational subsystems
May 2022
In general relativity, it is difficult to localise observables such as energy, angular momentum, or centre of mass in a bounded region. The difficulty is that there is dissipation. A self-gravitating system, confined by its own gravity to a bounded region, radiates some of the charges away into the environment. At a formal level, dissipation implies that some diffeomorphisms are not Hamiltonian. In fact, there is no Hamiltonian on phase space that would move the region relative to the fields. Recently, an extension of the covariant phase space has been introduced to resolve the issue. On the extended phase space, the Komar charges are Hamiltonian. They are generators of dressed diffeomorphisms. While the construction is sound, the physical significance is unclear. We provide a critical review before developing a geometric approach that takes into account dissipation in a novel way. Our approach is based on metriplectic geometry, a framework used in the description of dissipative systems. Instead of the Poisson bracket, we introduce a Leibniz bracket-a sum of a skew-symmetric and a symmetric bracket. The symmetric term accounts for the loss of charge due to radiation. On the metriplectic space, the charges are Hamiltonian, yet they are not conserved under their own flow.
The problem considered
Consider a region of space with fixed initial data. What is the total energy contained in the region? General relativity gives no definite answer to this question. There is no unique quasi-local [1][2][3] notion of energy in general relativity. This is due to two features of the theory: first of all, there are no preferred coordinates. If there are no preferred coordinates, there is no preferred notion of time. Time is dual to energy. If there is no preferred clock [4], there is also no preferred notion of energy. The second issue is dissipation. If we insist to restrict ourselves to local observables in a finite region of space, we have to specify what happens at the boundary. Since gravity can not be shielded, there is always dissipation. A local gravitational system will always be open. Gravitational radiation carries away gravitational charge, including mass, energy, angular momentum, centre of mass, and additional soft modes related to gravitational memory [5][6][7], which makes it difficult to characterise gravitational charges on the full non-perturbative phase space of the theory.
That there is no preferred notion of energy or momentum does not mean, of course, that it would be impossible to speak about such important physical concepts in general relativity. One possibility to do so is to introduce material frames of reference {X^μ}, which themselves depend, in a highly non-linear but covariant way 1 , on the metric g_ab and the matter fields ψ_I. The resulting dressed observables, a version of Rovelli's relational observables [8-13], evaluate the kinematical observables at those events in spacetime where the physical frames of reference take a certain value. Such a dressing turns a gauge dependent kinematical observable, such as the metric, into a gauge invariant (Dirac) observable [14,15]. An example for such an observable is the dressed metric itself,
g^μν[g_ab, ψ_I](x_o) = ∫_M dX^0 ∧ ··· ∧ dX^3 δ^(4)(X^μ − x^μ_o) g^ab ∂_a X^μ ∂_b X^ν,    (1)
where the reference frame X^μ itself depends functionally on metric and matter fields, i.e. X^μ ≡ X^μ[g_ab, ψ_I]. Given a material reference frame X^μ[g_ab, ψ_I], we have a natural class of state-dependent vector fields ξ^a = ξ^μ(X[g_ab, ψ_I]) (∂/∂X^μ)^a. On shell, the corresponding Hamiltonian, if it exists, defines a surface charge Q_ξ, which is conjugate to the reference frame, i.e.

{Q_ξ, g_ab} = L_ξ g_ab,   {Q_ξ, ψ_I} = L_ξ ψ_I   ⟹   {Q_ξ, X^μ} = ξ^μ(X),    (2)
where L_ξ is the Lie derivative. That the Hamiltonian is a surface charge is a consequence of Noether's theorem and the diffeomorphism invariance of the action. While this approach is intuitive, it is impractical: for any realistic choice of coordinates {X^μ[g_ab, ψ_I]}, the construction depends on the metric and matter fields in a complicated and highly non-local way [16]. A further difficulty is that no such coordinates are defined globally on the entire state space.
A more practical approach is to take advantage of asymptotic boundary conditions. In an asymptotically flat spacetime, the asymptotic boundary conditions select a specific class of asymptotic (BMS) coordinates [17,18]. Any two members of this class can be mapped into each other via an asymptotic symmetry, generated by an asymptotic BMS vector field ξ^a_BMS. One may then expect that there is a corresponding charge Q_{ξ_BMS} that would generate the asymptotic symmetry as a motion on phase space. This, however, immediately leads to the second problem mentioned above: dissipation. The system is open, because radiation escapes to null infinity, and the charges cannot be conserved under their own flow. Hence, the BMS charges cannot be Hamiltonian, i.e. there is no charge on a two-dimensional cross section of future (past) null infinity that would generate an asymptotic symmetry, see also [19] for a more detailed explanation of this issue on the radiative phase space.
The same issues appear also at finite distance [20][21][22][23][24][25]. A candidate for a quasi-local notion of gravitational energy in a finite region Σ, often mentioned in the literature, is the Komar charge [26]. On the usual covariant phase space [27][28][29], it is not at all obvious what the Hamiltonian vector field of the Komar charges should be. The naive expectation that would identify the Komar charge with the generator of a diffeomorphism is incorrect. It is incorrect, because there is dissipation. The charges cannot be conserved under their own flow. This, at least, is the usual story.
Recently, a different viewpoint appeared on the issue of dissipation and Hamiltonian charges. The basic idea put forward by Ciambelli, Leigh, Pai [30,31], Freidel and collaborators [32,33] and Speranza and Chandrasekaran [34,35] is to add boundary modes and extend the covariant phase space in such a way that the Komar charges become Hamiltonian. The resulting modified Poisson bracket on phase space returns the Barnich-Troessaert bracket [36][37][38] between the charges. These ideas resulted from a wider research programme concerned with gravitational subsystems, quasi-local observables, physical reference frames, deparametrisation, and the meaning of gauge [20-25, 34, 35, 39-41, 41-47].
In the following, we shall give a concise and critical summary (Section 2) of the construction [30][31][32] before developing a more geometric metriplectic approach [48][49][50] in Section 3. In the metriplectic approach, the usual Poisson bracket is replaced by a Leibniz bracket on covariant phase space. This new bracket consists of a symmetric and a skew-symmetric part. The skew-symmetric part defines a Poisson bracket on the extended phase space. The symmetric part captures dissipation. Some of the charge aspect is carried away under the Hamiltonian flow into the environment. For a gravitational system, restricted to a bounded region of space, the Komar charges are canonical with respect to the Leibniz bracket. The charges generate diffeomorphisms of the region relative to the fields inside. They are Hamiltonian, but are not conserved under their own Hamiltonian flow, thus accounting for dissipation in gravitational subsystems.
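Before entering the details, a toy model may help fix the idea of a Leibniz bracket. The following sketch (ours, not taken from the gravitational setting; all names and the damping rate are illustrative) evolves a damped harmonic oscillator with a bracket that is the sum of the usual skew-symmetric Poisson term and a negative semi-definite symmetric term:

```python
import numpy as np

gamma = 0.1  # assumed coupling of the oscillator to its environment

def poisson(dF, dH):
    """Skew-symmetric part: {F, H} = (dF/dq)(dH/dp) - (dF/dp)(dH/dq)."""
    return dF[0] * dH[1] - dF[1] * dH[0]

def dissipative(dF, dH):
    """Symmetric, negative semi-definite part: G(F, H) = -gamma (dF/dp)(dH/dp)."""
    return -gamma * dF[1] * dH[1]

def leibniz(dF, dH):
    """Leibniz bracket (F, H) = {F, H} + G(F, H)."""
    return poisson(dF, dH) + dissipative(dF, dH)

# Hamiltonian H = (q^2 + p^2)/2, so its gradient is dH = (q, p).
z, dt = np.array([1.0, 0.0]), 0.01   # state z = (q, p)
for _ in range(1000):
    dH = z.copy()
    z = z + dt * np.array([leibniz(np.array([1.0, 0.0]), dH),    # dq/dt =  p
                           leibniz(np.array([0.0, 1.0]), dH)])   # dp/dt = -q - gamma*p
print(0.5 * np.dot(z, z))  # energy after t = 10: well below the initial 0.5
```

Here H generates the flow, i.e. it is Hamiltonian with respect to the Leibniz bracket, yet (H, H) = G(H, H) = −γ p² ≤ 0, so it decays under its own flow. This is precisely the structure that the metriplectic extension imposes on the gravitational charges below.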
Dressing and covariant phase space
Extended symplectic structure
The starting point of the original dressed phase space approach due to [31] and [32] is the usual state space of general relativity consisting of solutions to the Einstein equations R_ab[g] − (1/2) g_ab R[g] = 8πG T_ab[g_ab, ψ_I] for a metric g_ab and some matter fields ψ_I on an abstract and differentiable manifold M. The state space F ∋ (g_ab, ψ_I) is then extended by including a gravitational dressing for the diffeomorphism group. A point on the extended state space F_ext ∋ (g_ab, ψ_I, φ) is thus characterised by a solution (g_ab, ψ_I) to Einstein's equations on M and a diffeomorphism φ : M → M, which is purely kinematical. 2 The diffeomorphism, which has now been added to state space, allows us to introduce dressed solutions to Einstein's equations, i.e. (φ*g_ab, φ*ψ_I), where φ* denotes the pull-back.
At first, the construction seems to merely add further redundancy and to run against our basic physical intuition about background invariance. In a generally covariant theory, a diffeomorphism should have no physical meaning whatsoever and (g_ab, ψ_I) ought to represent the same physical state as (φ*g_ab, φ*ψ_I). But this intuition is slightly misleading. It is misleading for two reasons. The first reason is that boundaries break gauge symmetries, turning otherwise redundant gauge directions into physical boundary modes. If φ : M → M is a large diffeomorphism such that φ|_∂M ≠ id, the two states (φ*g_ab, φ*ψ_I) and (g_ab, ψ_I) are no longer gauge equivalent (in the phase space sense of the word). The second reason is that the extended symplectic potential proposed in [31] and [32] has a highly non-trivial dependence on φ. The gravitational dressing φ* enters the extended pre-symplectic current ϑ_ext through two independent terms
ϑ_ext = φ*ϑ + φ*(Y⌟L),    (3)
where L is the Lagrangian, which, in turn, defines the pre-symplectic current 3
∀δ ∈ T F : δ[L] = d ϑ(δ) .(4)
In addition, the extended pre-symplectic current depends on Y a , which is a T M -valued one-form on the extended state space F ext , i.e. a section of the tensor bundle T M ⊗ T * F ext , and behaves like a Maurer-Cartan form (dφ)(φ −1 ) for diffeomorphisms. The one-form Y a on field space can be introduced as follows. Consider an ordinary stateindependent differentiable function on spacetime, say f : M → R. Since (spacetime) vector fields are derivations acting on scalars, the expression Y p [f ] ≡ Y a ∂ a f p must be read as a one-form on field space, i.e. for all p ∈ M : Y p [f ] ∈ T * F ext . This one-form, which will depend linearly on df ∈ T * M , is itself defined by
Y p [f ] := d(f • φ) (φ −1 (p)) ≡ Y a p (∂ a f ) p ,(5)
where the symbol "d" denotes the exterior derivative on field space F ext ∋ (g ab , ψ I , φ). An explicit coordinate expression of Y a with respect to some fixed and fiducial coordinate chart {x µ } in a neighbourhood of p ∈ M is thus given by
Y^a_p = d[x^μ ∘ φ]_{φ^{-1}(p)} (∂/∂x^μ)^a_p.    (6)
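As a quick consistency check (ours, following directly from the definition (5)), evaluate Y^a on the field-space tangent vector δ along which the dressing varies as φ_ε = exp(εξ) ∘ φ, for a fixed spacetime vector field ξ^a, while g_ab and ψ_I are held fixed:

```latex
Y_p[f](\delta)
  = \frac{\mathrm{d}}{\mathrm{d}\varepsilon}\bigg|_{\varepsilon=0}
    \big(f \circ \exp(\varepsilon\xi) \circ \varphi\big)\big(\varphi^{-1}(p)\big)
  = \big(\xi^a \partial_a f\big)(p)
  \qquad\Longrightarrow\qquad
  Y^a(\delta) = \xi^a .
```

This is the precise sense in which Y^a behaves like the Maurer-Cartan form (dφ)(φ^{-1}): evaluated on a field-space tangent vector, it returns the spacetime generator of the corresponding variation of the dressing.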
In the following, let us study this one-form a little more carefully. If we commute the field space derivative with the dressing, we obtain
dφ * = φ * d + φ * L Y .(7)
This equation is obviously true for scalars. The generalisation to arbitrary p-form fields is immediate and is the consequence of two basic observations: the exterior derivative (on spacetime) commutes with the pull-back, i.e. φ * d = dφ * , and the exterior derivative on field space commutes with the exterior derivative on spacetime, i.e. [d, d] = 0. The one-form Y a behaves like a Maurer-Cartan form for the diffeomorphism group [31,43]. If, in fact, δ 1 and δ 2 are two tangent vectors on field space, equation (6) immediately implies
dY a (δ 1 , δ 2 ) = −[Y (δ 1 ), Y (δ 2 )] a .(8)
In other words,
dY a = −Y b V ∇ b Y a ,(9)
where ∇ a is the metric compatible torsionless derivative with respect to g ab , i.e. ∇ a g bc = 0 and
∇ [a ∇ b] f = 0, ∀f : M → R.
For a background invariant theory, it is now always possible to trivially absorb the dressing field φ ∈ Diff(M : M ) back into a redefinition of metric and matter fields. We shall find this redefinition useful, because it will clarify the physical significance of the construction. If, in fact, the theory is background invariant, the symplectic current transforms covariantly under diffeomorphisms. In other words,
∀p ∈ M : φ * ϑ[g ab , ψ I ; dg ab , dψ I ] (p) = ϑ[φ * g ab , φ * ψ I ; φ * dg ab , φ * dψ I ](p).(10)
On the other hand, we also have
φ*dg_ab = d(φ*g_ab) − φ*L_Y g_ab,    (11)
φ*dψ_I = d(φ*ψ_I) − φ*L_Y ψ_I,    (12)
where
L_Y denotes the Lie derivative with respect to Y^a ∈ T*F_ext ⊗ TM, i.e.

L_Y[g_ab] = 2∇_(a Y_b),   L_Y ψ_I = (d/dε)|_{ε=0} (exp(εY))*ψ_I.    (13)
A trivial field redefinition allows us to remove the dressing fields and absorb them back into the definition of metric and matter fields
φ*g_ab −→ g_ab,   φ*ψ_I −→ ψ_I,    (14)
φ^{-1}_* Y^a = d[x^μ ∘ φ] (φ^{-1}_* (∂/∂x^μ))^a = d[x^μ ∘ φ] (∂/∂(x^μ ∘ φ))^a =: X^a.    (15)
So far, we kept the fiducial coordinate system {x^μ} fixed and treated the diffeomorphisms φ : M → M as a new dynamical element of the thus extended state space F_ext. Equation (15) tells us that we could also adopt a different viewpoint. Instead of adding the diffeomorphism to the state space, we could equally well reabsorb the dressing φ* into a redefinition of the coordinates, i.e. φ*x^μ = x^μ ∘ φ −→ x^μ. Adopting this viewpoint amounts to adding the four coordinate scalars x^μ : M → R^4 to the state space. The addition of coordinate functions to phase space seems to run against the very idea of background invariance. To restore formal coordinate invariance, it is useful to introduce the Maurer-Cartan form
X a = d[x µ ] ∂ a µ ,(16)
which sends the coordinate variations δ[x^μ] = δ⌟d[x^μ] back into the tangent space TM. Once again, this one-form on field space behaves like a ghost field for the diffeomorphism group, i.e.
dX^a = −d[x^μ] ∧ d[∂^a_μ] = −d[x^μ] ∧ ∂^b_μ dx^ν_b d[∂^a_ν] = +d[x^μ] ∧ ∂^b_μ d[dx^ν_b] ∂^a_ν = +d[x^μ] ∧ ∂^b_μ ∂_b d[x^ν] ∂^a_ν = X^b ∧ ∂_b X^a.    (17)
In other words,
dX a = X b V ∇ b X a = 1 2 [X, X] a ,(18)
where ∇ a denotes the covariant derivative for the metric g ab .
Let us now proceed to write the extended pre-symplectic potential (3) in terms of the new variables, where the dressing is absorbed into a redefinition of the fields, as done in (14) and (15) above. Taking into account the covariance (10) of the pre-symplectic potential and the commutators (11) and (12), we immediately obtain
ϑ_ext = ϑ − ϑ(L_X) + X⌟L =: ϑ − dq_X,    (19)
where q_X denotes the charge aspect, which is a two-form on spacetime and one-form on field space, i.e. a section of Λ²T*M ⊗ T*F_ext. It is then also useful to introduce the anticommuting 4 Noether charge one-form on field space

Q_X = ∮_{∂Σ} q_X ∈ T*F_kin.    (20)

4 That is, Q_X ∧ Q_X = 0.
To proceed, we also need to introduce the pre-symplectic potential Θ ext on the extended state space, whose exterior derivative defines the pre-symplectic two-form, i.e
Θ_ext = ∫_Σ ϑ_ext,    (21)
Ω_ext = dΘ_ext.    (22)
Note that for any two vector fields δ 1 , δ 2 ∈ T * F ext , we thus have
Ω ext (δ 1 , δ 2 ) = δ 1 Θ ext (δ 2 ) − δ 2 Θ ext (δ 1 ) − Θ ext [δ 1 , δ 2 ] ,(23)
where [·, ·] is the Lie bracket between vector fields on state space.
Noether charges on the extended phase space
The idea, which was developed in [30-34], is to consider dressed diffeomorphisms on the extended field space. A dressed diffeomorphism acts on metric and matter fields as well as on the dressing itself. Given a vector field ξ^a ∈ TM, we thus consider the flow on field space,
g ab −→ exp(εξ) * g ab , ψ I −→ exp(εξ) * ψ I .(24)
This flow is then compensated by a corresponding transformation of the dressing fields
φ −→ exp(−εξ) • φ = φ • exp(−εφ −1 * ξ),(25)
such that the dressed fields φ * g ab and φ * ψ I are trivially invariant under (24) and (25).
Let us now consider what happens to this flow upon the field redefinition
(φ * g ab −→ g ab , φ * ψ I −→ ψ I , φ −1 * ξ a −→ ξ a ).(26)
Notice that this field redefinition has a natural and simultaneous action on the spacetime vector field ξ a ∈ T M , sending ξ a into φ −1 * ξ a . If we start out, in fact, with a field-independent vector field ξ a , i.e. dξ a = 0, the field redefinition (26) maps ξ a into a field-dependent vector field on the extended state space. We shall see below how we are naturally led to consider such field dependent vector fields to render the charges integrable. After the field redefinition, the flow (24) and (25) will only change the dressing fields, whereas its action on the metric and matter fields vanishes trivially. This flow lifts the vector field ξ a ∈ T M into a vector field δ drssd ξ on field space. Upon performing the field redefinition (26), the components of this vector field are given by
δ^drssd_ξ[g_ab] = 0,   δ^drssd_ξ[ψ_I] = 0,   X^a(δ^drssd_ξ) = −ξ^a.    (27)
Let us now identify the conditions necessary to make this vector field δ drssd ξ ∈ T F ext Hamiltonian. To this end, consider the interior product between the extended pre-symplectic two-form, which, upon performing the field redefinition (14) and (15), is given by (19) and (22), and the bivector δ ⊗ δ drssd ξ − δ drssd ξ ⊗ δ. A short calculation, see also [29], gives
Ω_ext(δ, δ^drssd_ξ) = δ[Θ(δ^drssd_ξ)] − δ^drssd_ξ[Θ(δ)] − Θ([δ, δ^drssd_ξ])
                    + δ[Q_ξ] + δ^drssd_ξ[Q_X(δ)] + Q_X([δ, δ^drssd_ξ]),    (28)
where δ ∈ T F ext is a second and linearly independent vector field and
Q_ξ = ∫_Σ (ϑ(L_ξ) − ξ⌟L) = ∮_{∂Σ} q_ξ    (29)
is the Noether charge. Given the definition (27) of the dressed diffeomorphisms δ drssd ξ , the first line of equation (28) vanishes trivially. The second line gives a non-trivial contribution
δ^drssd_ξ[Q_{X(δ)}] = L_{δ^drssd_ξ}[Q_{X(δ)}] = Q_{L_{δ^drssd_ξ}[X(δ)]},    (30)
where L_δ[·] = δ⌟(d ·) + d(δ⌟ ·) is the Lie derivative on the extended field space. Hence,

L_{δ^drssd_ξ}[X^a(δ)] = (L_{δ^drssd_ξ} X^a)(δ) + X^a([δ^drssd_ξ, δ])
                      = (dX^a)(δ^drssd_ξ, δ) − δ[ξ^a] + X^a([δ^drssd_ξ, δ])
                      = −[ξ, X(δ)]^a − δ[ξ^a] + X^a([δ^drssd_ξ, δ]).    (31)
Thus
Ω_ext(δ, δ^drssd_ξ) = δ[Q_ξ] − Q_{δ[ξ] − [X(δ), ξ]}.    (32)
If the second term vanishes, the vector field δ drssd ξ , defined in Equation (27) above, is Hamiltonian. The corresponding Hamiltonian is the Noether charge Q ξ . The second term vanishes for any generic configuration on state space iff
δ[ξ a ] = [X(δ), ξ] a .(33)
This equation is satisfied for a specific class of field-dependent vector fields on spacetime, namely those that depend explicitly on the coordinates {x µ } via their component functions ξ µ (x),
ξ^a = ξ^μ(x) (∂/∂x^μ)^a ≡ ξ^μ(x) ∂^a_μ.    (34)
In fact, such a vector field is field-dependent, because the four coordinate scalars {x µ } have been added to the state space. To see that Equation (33) holds for such vector fields ξ a and field variations δ, notice that
δ[ξ^a] = δ[ξ^μ(x)] ∂^a_μ + ξ^μ(x) δ[∂^a_μ]
       = δ[x^ν] (∂_ν ξ^μ)(x) ∂^a_μ − ξ^μ(x) ∂^b_μ δ[dx^ν_b] ∂^a_ν
       = δ[x^ν] (∂_ν ξ^μ)(x) ∂^a_μ − ξ^μ(x) (∂_μ δ[x^ν])(x) ∂^a_ν = [δ[x], ξ]^a = [X(δ), ξ]^a,    (35)
where [·, ·]^a is the Lie bracket between vector fields on spacetime. Going back to (32), we thus see that the Noether charge (29) is the Hamiltonian generator of the dressed diffeomorphism δ^drssd_ξ on the extended phase space. The resulting charge algebra closes under the Poisson bracket,

{Q_ξ, Q_ξ′} = δ^drssd_ξ[Q_ξ′] = Q_{δ^drssd_ξ[ξ′]} = Q_{[X(δ^drssd_ξ), ξ′]} = −Q_{[ξ, ξ′]}.    (36)
Let us stop here and discuss the physical significance of the approach outlined thus far. On the usual covariant phase space, the Komar charges for radial or time-like diffeomorphisms are not integrable. This is hardly surprising. Diffeomorphisms that move the boundary enlarge the system. They bring new data into the region that was in the exterior before. Since there is new data outside, there is no Hamiltonian for radial or time-like diffeomorphisms on the quasi-local phase space. Otherwise it would be possible to extend in a unique way initial data on a partial Cauchy surface into initial data on the entire Cauchy slice.
Upon performing a trivial field redefinition, we saw that the extended state space [31,32] consists of the ordinary (undressed) fields in the bulk and the coordinate scalars {x^µ : M → R⁴}. The pre-symplectic structure on the extended state space is then carefully tuned in such a way that the conjugate momentum to the coordinates {x^µ} is the pull-back to Σ of the exterior derivative of the Noether charge aspect, i.e. p_µ = φ*_Σ d[q_{∂µ}]. In this way, the pre-symplectic two-form is only changed by a boundary term at ∂Σ. Furthermore, all commutation relations between the new boundary fields and the dynamical fields in the interior vanish (upon the field redefinitions (14) and (15)). Consider, for example, the total momentum charge with respect to the reference frame {x^µ}, i.e. P_µ = ∫_Σ p_µ = Q_{∂µ}. Since the vector field δ^{drssd}_{∂µ}[·] = {Q_{∂µ}, ·} is Hamiltonian with respect to the extended pre-symplectic structure, and since δ^{drssd}_{∂µ} annihilates all dynamical fields in the bulk, see (27), the total momentum P_µ trivially commutes with all bulk degrees of freedom, i.e. {P_µ, g_{ab}} = 0, {P_µ, ψ_I} = 0. The only non-vanishing Poisson bracket between P_µ and the elements of the extended state space is the bracket {P_µ, x^ν} = δ^ν_µ. While this is not a problem per se, it does raise the question of how physically meaningful this extension of the phase space really is. There is no relational change of matter and geometry relative to the hypersurface. In the original construction due to [31], dressed diffeomorphisms transform the fundamental fields, i.e. g_{ab} → φ*g_{ab}, but they also deform the hypersurface, sending Σ into φ^{-1}(Σ). From the perspective of an observer locked to Σ, the net effect is zero. Such dressed diffeomorphisms leave all covariant functionals of the metric at Σ unchanged. Consider, for example, the total three-volume of Σ, i.e. the integral
Vol[g_{ab}, Σ] = ∫_Σ d³x √(det(h_{ij})), h_{ij} = g_{ab} ∂^a_i ∂^b_j, (37)
where {x i : i = 1, 2, 3} are coordinates intrinsic to Σ, ∂ a i ∈ T Σ. Such a functional trivially Poisson commutes with all Noether charges Q ξ under the extended symplectic structure [31,32], i.e. {Q ξ , Vol} = 0, even for those ξ a that are timelike. On the extended phase space, the Noether charge Q ξ does not behave like a physical time translation. A physical Hamiltonian should not preserve the total three-volume. Note that this constitutes an important difference between the dressed phase space approach and deparametrisation via physical reference frames. A material reference frame depends (in a complicated and non-linear manner) on the metric and matter fields and therefore does not commute in general with the dynamical quantities in the bulk, see e.g. (2). To put it simply, what is happening in [31,32] is that the classical phase space is extended by adding new variables x µ and p µ and then carefully choosing a symplectic structure that allows us to identify p µ with the Noether charge aspect, while, at the same time, all the newly added boundary variables (edge modes) trivially commute with all the dynamical fields in the bulk.
Gravitational subsystems and metriplectic geometry
In the following, we propose a different approach. We want to take seriously dissipation and treat the system as open. Hence the Hamiltonian cannot be conserved under its own flow. This can be formalised by replacing the symplectic structure by a metriplectic structure [48]-[50] with a modified bracket, which captures dissipation (see Appendix B for a brief introduction to metriplectic geometry). The metriplectic structure consists of an extended symplectic two-form Ω_ext(·, ·) ∈ T*F_ext ∧ T*F_ext and a symmetric bilinear form, namely a super-metric⁵ G(·, ·) ∈ T*F_ext ⊗_sym T*F_ext, which describes the interaction of the system with its environment. The resulting bilinear is then given by
K(·, ·) = Ω_ext(·, ·) − G(·, ·) ∈ T*F_ext ⊗ T*F_ext. (38)
Given a functional H : F ext → R on the extended state space, i.e. a functional H[g ab , ψ I , x µ ] of the metric g ab , the matter fields ψ I and the four coordinate scalars x µ , we say a vector field X H ∈ T F ext is Hamiltonian with respect to the metriplectic structure provided the following equation is satisfied,
∀δ ∈ T F ext : δ[H] = K(δ, X H ).(39)
The new bracket between any two such functionals H and F on phase space is then given by
(H, F ) = X H [F ].(40)
This bracket clearly satisfies the Leibniz rule in both arguments,
(H₁H₂, F) = H₁(H₂, F) + (H₁, F)H₂, (41)
(H, F₁F₂) = (H, F₁)F₂ + F₁(H, F₂). (42)
If there is dissipation, i.e. G(δ₁, δ₁) ≠ 0, the bracket will pick up a symmetric term such that the Hamiltonian will not be preserved under its own evolution, i.e.

(H, H) = −G(X_H, X_H). (43)
If, in addition, H is the energy of the system, and the super-metric G(·, ·) is positive (negative) semi-definite, the system can only lose (gain) energy.
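The defining relation (39) and the dissipation formula (43) can be made concrete in a finite-dimensional toy model. The following minimal sketch (our illustration, not part of the gravitational construction; the oscillator, the damping constant gamma and all numerical values are arbitrary assumptions) realises K = Ω − G for a damped oscillator on a two-dimensional phase space and exhibits the monotone energy loss implied by (H, H) = −G(X_H, X_H):

import numpy as np

# Toy metriplectic system on phase space z = (q, p): K = Omega - G, with
# Omega the canonical symplectic form and G a positive semi-definite
# super-metric encoding the dissipation (gamma >= 0).
Omega = np.array([[0.0, 1.0],
                  [-1.0, 0.0]])
gamma = 0.1
G = np.array([[0.0, 0.0],
              [0.0, gamma]])
K = Omega - G

def grad_H(z):
    # Gradient of the energy H = (q**2 + p**2) / 2.
    return z

def X_H(z):
    # Hamiltonian vector field defined by delta[H] = K(delta, X_H),
    # i.e. solve K_ij X^j = (dH)_i.
    return np.linalg.solve(K, grad_H(z))

# Integrate dz/dt = X_H(z); the energy can only decrease here,
# since dH/dt = (H, H) = -G(X_H, X_H) <= 0.
z, dt = np.array([1.0, 0.0]), 1.0e-3
for _ in range(20000):
    z = z + dt * X_H(z)
print("H(0) = 0.5, H(t=20) =", 0.5 * float(z @ z))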
To apply metriplectic geometry to a gravitational subsystem in a finite domain Σ, we must identify the skew-symmetric symplectic two-form Ω_ext and the super-metric G(·, ·) ∈ T*F_ext ⊗_sym T*F_ext that render the charges Hamiltonian. Our starting point is the familiar definition of the Noether charge itself, i.e.
Q_ξ = ∫_Σ (ϑ(L_ξ) − ξ ⌟ L), (44)
where ϑ is the ordinary, undressed symplectic current and L denotes the Lagrangian (a four-form on spacetime). In addition, L ξ ∈ T F ext is a vector field on field space, whose components are given by the Lie derivative on the spacetime manifold, i.e.
L ξ [g ab ] = L ξ g ab = 2∇ (a ξ b) , L ξ [ψ I ] = L ξ ψ I , X a (L ξ ) = ξ a ,(45)
where L_ξ is the Lie derivative of tensor fields on spacetime. Notice that this differs from the dressed diffeomorphism δ^{drssd}_ξ, which annihilates all fields in the bulk, see (14) and (27).

5 Superspace is the space of fields.
On shell, 6 the Noether charge (44) is a surface integral,
Q_ξ = ∫_Σ dq_ξ = ∮_{∂Σ} q_ξ. (46)
We now want to identify the metriplectic structure that renders these charges the Hamiltonian generators of the field space vector field (45). Consider first the usual, undressed pre-symplectic two-form in the region Σ, i.e.
Ω = ∫_Σ dϑ ≡ dΘ. (47)
Given a vector field δ on field space, we then have
Ω(δ, L_ξ) = δ[Θ(L_ξ)] − L_ξ[Θ(δ)] − Θ([δ, L_ξ]). (48)
A standard calculation, see e.g. [29], allows us to simplify the second term. First of all, we have
L_ξ[Θ(δ)] = ∫_Σ L_ξ[ϑ[g_{ab}, ψ_I; δg_{ab}, δψ_I]](p)
= ∫_Σ ∫_M [ (L_ξ g_{ab})(q) δϑ[g_{ab}, ψ_I; h_{ab}, χ_I](p)/δg_{ab}(q) + (L_ξ h_{ab})(q) δϑ[g_{ab}, ψ_I; h_{ab}, χ_I](p)/δh_{ab}(q)
+ (L_ξ ψ_I)(q) δϑ[g_{ab}, ψ_I; h_{ab}, χ_I](p)/δψ_I(q) + (L_ξ χ_I)(q) δϑ[g_{ab}, ψ_I; h_{ab}, χ_I](p)/δχ_I(q) ], (49)
where (δg ab , δψ I ) ≡ (h ab , χ I ) is a linearised solution of the field equations around (g ab , ψ I ). The action of the vector field L ξ on the metric perturbation h ab yields
L_ξ[h_{ab}] = [L_ξ, δ]g_{ab} + δ[L_ξ g_{ab}] = [L_ξ, δ]g_{ab} + [δ, L_ξ]g_{ab} + L_ξ[δg_{ab}] = [L_ξ, δ]g_{ab} + L_{δξ} g_{ab} + L_ξ[δg_{ab}]. (50)
In the same way, we also have
L_ξ[χ_I] = [L_ξ, δ]ψ_I + L_{δξ} ψ_I + L_ξ[δψ_I]. (51)
Taking these results back to (49), we obtain
L_ξ[Θ(δ)] = Θ([L_ξ, δ]) + Θ(L_{δξ}) + ∫_Σ L_ξ[ϑ(δ)] = Θ([L_ξ, δ]) + Θ(L_{δξ}) + ∮_{∂Σ} ξ ⌟ ϑ(δ) + ∫_Σ ξ ⌟ δ[L], (52)
where we used Stokes' theorem as well as the definition of the pre-symplectic potential in terms of the Lagrangian, i.e. the on-shell equation δ[L] = d[ϑ(δ)].
Let us now return to (48) above. Using the definition of the Noether charge (29), we obtain the well known result
Ω(δ, L ξ ) = δ Q ξ − Q δ[ξ] − ∂Σ ξ ϑ(δ).
(53) 6 That is provided the field equations are satisfied.
In the following, we shall restrict ourselves to a specific class of state dependent vector fields on the extended state space. The extended state space F ext ∋ (g ab , ψ I , x µ ) contains the coordinate functions x µ : M → R 4 . A vector field, given in terms of its x µ -coordinate representation, must be understood, therefore, as a state-dependent vector field,
ξ^a = ξ^µ(x) (∂/∂x^µ)^a ≡ ξ^µ(x) ∂^a_µ, (54)
δ[ξ^µ] = ∫_M δ[x^ν] δ/δx^ν [ξ^µ(x(p))] = δ[x^ν] (∂_ν ξ^µ)(x(p)). (55)
Note that ξ a depends as a functional on x µ : M → R 4 , but there is no functional dependence on g ab or ψ I . This way, the functional differential dξ a of any such vector field returns the Lie derivative on spacetime with respect to the Maurer-Cartan form X a , i.e. ξ a = X a (L ξ ) and δ[ξ a ] = [X(δ), ξ] a (cf. Equation (35) above). For any such specific state-dependent vector field, we can rewrite Equation (53) as
δ[Q_ξ] = Ω(δ, L_ξ) + Q_{[X(δ), X(L_ξ)]} + ∮_{∂Σ} X(L_ξ) ⌟ ϑ(δ). (56)
Comparing this equation with the definition of Hamiltonian vector fields for a dissipative system, i.e. Equation (39), and demanding that the Lie derivative L ξ ∈ T F ext be the Hamiltonian vector field of the Noether charge Q ξ , we are led to the following definition: a vector field X H ∈ T F kin is Hamiltonian, if there exists a functional H : F ext → R on state space, such that for all vector fields δ ∈ T F ext the following condition is satisfied,
δ[H] = Ω(δ, X_H) + Q_{[X(δ), X(X_H)]} + ∮_{∂Σ} X(X_H) ⌟ ϑ(δ) ≡ K(δ, X_H). (57)
The new bracket between any two such functionals is then given by Equation (40). Moreover, we are now ready to identify the metriplectic structure that renders the charges integrable, i.e.
K(·, ·) = Ω_ext(·, ·) − G(·, ·). (58)
The skew-symmetric part defines the extended symplectic two-form
Ω_ext(δ₁, δ₂) = −Ω_ext(δ₂, δ₁) = Ω(δ₁, δ₂) + Q_{[X(δ₁), X(δ₂)]} + ∮_{∂Σ} X(δ_[1) ⌟ ϑ(δ_2]), (59)
where square brackets around the indices stand for anti-symmetrisation, i.e. (α ∧ β)(δ₁, δ₂) = 2α(δ_[1)β(δ_2]) = α(δ₁)β(δ₂) − (1 ↔ 2) for all α, β ∈ T*F_kin. The symmetric part, on the other hand, determines the super-metric
G(δ₁, δ₂) = −∮_{∂Σ} X(δ_(1) ⌟ ϑ(δ_2)), (60)
where the round brackets around the indices stand for symmetrisation, i.e. (α ⊗ β)(δ_(1, δ_2)) = (1/2)[α(δ₁)β(δ₂) + α(δ₂)β(δ₁)]. Note that the super-metric G(·, ·) is a boundary term. This is consistent with our physical intuition that the interaction of an open system with its environment takes place at the boundary.
Let us briefly summarise. We introduced a new bracket (·, ·) on state space that turns the covariant phase space into a metriplectic manifold. This bracket is a generalisation of the Poisson bracket. It takes into account dissipation and renders the vector field L ξ [·], defined in (45), integrable. The corresponding Hamiltonian is the Noether charge,
(Q ξ , g ab ) = L ξ g ab , (Q ξ , ψ I ) = L ξ ψ I , (Q ξ , x µ ) = ξ µ .(61)
These results are only possible at the expense of changing the bracket. Neither does the new bracket satisfy the Jacobi identity nor is it skew-symmetric. The symmetric part describes dissipation. The skew-symmetric part defines the usual Poisson bracket on the extended phase space.
Let us add a few further observations. We built the Leibniz bracket in such a way that the Noether charge generates the Hamiltonian vector field (45). Given two state dependent vector fields ξ a 1 = ξ µ 1 (x)∂ a µ and ξ a 2 = ξ µ 2 (x)∂ a µ that satisfy Equation (55), we can now also obtain immediately the new bracket between two such charges, i.e.
(Q_{ξ₁}, Q_{ξ₂}) = L_{ξ₁}[Q_{ξ₂}] = ∮_{∂Σ} ξ₁ ⌟ (dq_{ξ₂}) = ∮_{∂Σ} [ξ₁ ⌟ ϑ(L_{ξ₂}) − ξ₁ ⌟ (ξ₂ ⌟ L)]. (62)
In the same way, we obtain the Leibniz bracket of the Noether charge with itself,
(Q_ξ, Q_ξ) = −G(L_ξ, L_ξ) = ∮_{∂Σ} ξ ⌟ ϑ(L_ξ). (63)
If the vector field ξ^a ∈ TM lies tangential to the corner, i.e. ξ^a ∈ T(∂Σ), the charge is conserved under its own Hamiltonian flow. Intuitively, this must be so, because the resulting diffeomorphism maps the corner relative to the metric into itself. Hence, there is no relational change. On the other hand, a generic diffeomorphism that moves the boundary relative to the metric will not preserve its own Hamiltonian if there is flux, i.e. ξ ⌟ ϑ(L_ξ)|_{∂Σ} ≠ 0.
Outlook and Discussion
In this work, we discussed two different approaches towards describing the phase space of a gravitational subsystem localised in a compact region of space: the extended covariant phase space approach due to [31,32] as well as a new geometrical framework based on metriplectic geometry [48][49][50]. The former is focused on obtaining integrable charges for diffeomorphisms, including large diffeomorphisms that change the boundary. To achieve this, the phase space is extended. Embedding fields x µ : M → R 4 are added to phase space and the pre-symplectic structure is modified accordingly. The key result [30][31][32][33][34] is algebraic: On the extended phase space, the Komar charges close under the Poisson bracket. This yields a new Hamiltonian representation of the Lie algebra of vector fields on spacetime. However, this comes at the cost of weakening the physical interpretation of the charges. Upon performing a trivial field redefinition, we saw that the charges commute with all bulk degrees of freedom. The Hamiltonian vector field of the charges only shifts the embedding coordinates at the boundary. Put differently, on the extended phase space [31,32], the Komar charge generates diffeomorphisms of the metric and the matter fields, but such change is always made undone by a deformation of the hypersurface Σ. For an observer locked to Σ, the net effect is zero.
The metriplectic approach provides a new perspective on how to obtain meaningful charges on phase space. Once again, the Komar charges are rendered Hamiltonian, yet the bracket is different. Instead of the Poisson bracket, we now have a Leibniz bracket (·, ·). The resulting Hamiltonian vector field (Q ξ , ·) generates the full non-linear dynamics in the interior of Σ while accounting for the interaction of the system with its environment. This is achieved by replacing the usual presymplectic structure on phase space with the metriplectic structure commonly used in the context of dissipative systems [48][49][50]. The main difference to the extended phase space approach is that the Leibniz bracket will no longer provide a representation of the diffeomorphism group, i.e. there is an anomaly (Q ξ 1 , Q ξ 2 ) = −Q [ξ 1 ,ξ 2 ] . The extra terms account for dissipation and flux.
What both approaches have in common is that they give a rigorous meaning to the Komar charges on phase space. Therefore, they both face the same problem of what the physical interpretation of these charges is. To compute the Komar charge on state space, we need three inputs: a choice of hypersurface Σ, a vector field ξ^a ∈ TM, and a solution to the field equations. This leaves a lot of functional freedom. At finite distance, it is difficult to explain how such charges are connected to physical observables such as energy, momentum, and angular momentum. Given the metric and the Cauchy hypersurface, one is left with infinitely many choices for the vector field ξ^a. It is unclear which ξ^a gives rise to energy, which to momentum, and which to angular momentum. However, this is just a reflection of background invariance. If the theory is background invariant, there is an infinite-dimensional group of gauge symmetries (diffeomorphisms). These infinitely many gauge symmetries give rise to infinitely many charges, hence the vast functional freedom in defining the quasi-local charges. A second potential criticism is that the first derivative of the Komar charge, accounting for flux, does not vanish in Minkowski space. This may seem counterintuitive at first: Minkowski space is empty, and thus no flux is expected. However, for a given choice of vector field ξ^a, the flux of the Komar charge depends not only on radiative data, but also on kinematical data. The kinematical data is the choice of boundary ∂Σ and the choice of vector field ξ^a. To probe the presence of curvature and distinguish purely kinematical flux from physical flux due to gravitational radiation, it seems necessary to add further derivatives, e.g. (Q_ξ, (Q_η, Q_τ)). Future research will be necessary to clarify the physical significance of such nested brackets and their algebraic properties in terms of e.g. the Jacobiator J(ξ, η, τ) = (Q_ξ, (Q_η, Q_τ)) + (Q_η, (Q_τ, Q_ξ)) + (Q_τ, (Q_ξ, Q_η)).
Another important avenue for future research concerns black holes. Black holes have an entropy and there is a notion of energy and temperature. The outside region, connected to asymptotic infinity, defines a dissipative system: Radiation can fall into the black hole, but nothing comes out. The metriplectic approach is tailor-made to study such thermodynamical systems out of equilibrium, to investigate chaos, stability, and dissipation. Entropy production and energy loss are captured by the super-metric G(·, ·) on metriplectic space.
Finally, let us briefly comment on the implications for quantum gravity. In metriplectic geometry, the Liouville theorem is violated. The volume two-form on phase space is no longer conserved under the Hamiltonian flow (Q ξ , ·). An analogous statement should be possible at the quantum level. Evolution should be now governed by a non-unitary dynamics, e.g. a flow-equation consisting of an anti-symmetric commutator, representing the unitary part, and a symmetric Lindbladian describing the radiation.
Appendix A. Notation and conventions
-Index Notation. We use a hybrid notation. p-form indices are often suppressed, but tensor indices are kept. Indices a, b, c, . . . from the first half of the alphabet are abstract indices on tangent space. Indices µ, ν, ρ, . . . from the second half of the Greek alphabet refer to coordinate charts
{x µ : U µ ⊂ M → R 4 }.
-Spacetime. We are considering a spacetime manifold M, with signature (−+++), metric g_{ab} and matter fields ψ_I that satisfy the Einstein equations R_{ab} − (1/2) g_{ab} R = 8πG T_{ab} and the field equations for the matter content. On this manifold, we have several natural derivatives. ∇_a denotes the usual (metric compatible, torsionless) derivative, L_ξ is the Lie derivative for a vector field ξ^a ∈ TM, and "d" denotes the exterior derivative, i.e. (dω)_{a₁...a_{p+1}} = (p+1)∇_{[a₁} ω_{a₂...a_{p+1}]}. If ω is a p-form on M, the Lie derivative satisfies L_ξ ω = d(ξ ⌟ ω) + ξ ⌟ (dω), where (ξ ⌟ ω)(η, . . .) = ω(ξ, η, . . .) = ω_{ab...} ξ^a η^b · · · is the interior product. If applied to a vector field, the Lie derivative acts via the Lie bracket
L ξ η a = [ξ, η] a = ξ b ∇ b η a − η b ∇ b ξ a .
-Field space. Field space F is the state space of the solutions of the field equations. For simplicity, we always go on-shell; otherwise, we would need to constantly carry around terms that are constrained to vanish. As for the differential calculus on F, the following notation is used. Linearised solutions δ[g_{ab}] =: h_{ab}, δ[ψ_I] =: χ_I define tangent vectors δ ∈ TF on field space. If, in fact, (g^{(ε)}_{ab}, ψ^{(ε)}_I) is a smooth one-parameter family of solutions to the field equations, through the point on field space (g_{ab}, ψ_I) = (g^{(0)}_{ab}, ψ^{(0)}_I), the corresponding tangent vector acts as

δ[g_{ab}] = (d/dε)|_{ε=0} g^{(ε)}_{ab}, (A.1)
δ[ψ_I] = (d/dε)|_{ε=0} ψ^{(ε)}_I. (A.2)

To distinguish the differential calculus on field space from the differential calculus on spacetime, we use a double stroke notation wherever necessary: d is the exterior derivative on field space, ∧ denotes the wedge product between differential forms on F, and ⌟ is the interior product. If F : F → R is a differentiable functional on state space, we may thus write,
δ[F] = δ ⌟ dF. (A.3)
If, in addition, δ is a vector field on field space, and Ξ is a p-form on field space, the Lie derivative on state space will satisfy the familiar identities

L_δ[Ξ] = δ ⌟ d[Ξ] + d[δ ⌟ Ξ], (A.4)
L_δ[dΞ] = d[L_δ Ξ]. (A.5)

-Komar charge. For the Einstein-Hilbert action with matter action L_matter[g_{ab}, ψ_I, ∇_a ψ_I], the pre-symplectic potential is given by
Θ(δ) = ∫_Σ ϑ(δ) = (1/16πG) ∫_Σ d³v_a (∇_b h^{ab} − ∇^a h_b{}^b) + ∫_Σ d³v_a (∂L_matter/∂(∇_a ψ_I)) χ_I, (A.6)
where (h_{ab}, χ_I) ≡ (δg_{ab}, δψ_I) solves the linearised field equations and d³v_a is the directed volume element (a tensor-valued p-form). More generally,

d^p v_{a₁...a_{4−p}} = (1/p!) ε_{a₁...a_{4−p} b₁...b_p} ∂^{b₁}_{µ₁} · · · ∂^{b_p}_{µ_p} dx^{µ₁} ∧ · · · ∧ dx^{µ_p}. (A.7)
On shell, the Noether charge is given by the Komar formula
Q_ξ = ∫_Σ (ϑ(L_ξ) − ξ ⌟ L) = −(1/16πG) ∮_{∂Σ} d²v_{ab} ∇^{[a} ξ^{b]}, (A.8)
where L_ξ is the Lie derivative and L = d⁴v [(16πG)^{−1} R[g, ∂g, ∂²g] + L_matter[g, ψ, ∇ψ]] is the total Lagrangian.
Appendix B. Metriplectic space and dissipation
In this section, we briefly review the formalism of metriplectic systems [48]-[50] as an extension of the framework for Hamiltonian systems. To simplify the exposition, we restrict ourselves in this appendix to a finite-dimensional system. The generalisation to field theory is straightforward. For Hamiltonian systems, the phase space is characterised by the equations of motion

(d/dt) f = {H, f}, (B.1)

in terms of the anti-symmetric Poisson bracket

{f, g} = ω^{ij} (∂f/∂z^i)(∂g/∂z^j), (B.2)

where ω^{ij} = −ω^{ji} is the inverse of the symplectic two-form

ω = (1/2) ω_{ij} dz^i ∧ dz^j, dω = 0, ω_{jm} ω^{im} = δ^i_j, (B.3)

and z^i, i = 1 . . . 2N, are coordinates on phase space. Analogously, one can define a metric system through the equations of motion

(d/dt) f = {|S, f|} (B.4)

and a symmetric bracket

{|f, g|} = g^{ij} (∂f/∂z^i)(∂g/∂z^j), (B.5)

with inverse metric tensor g^{ij} = g^{ji} and line element

ds² = g_{ij} dz^i ⊗ dz^j. (B.6)
If one requires, in addition, that g^{ij} is positive-definite, it follows that

(d/dt) S = g^{ij} (∂S/∂z^i)(∂S/∂z^j) ≥ 0, (B.7)
i.e. S only increases over time and finds its interpretation as a form of entropy. Finally, one obtains a metriplectic system by combining the two brackets to form the Leibniz bracket

(f, g) = {f, g} ± {|f, g|}, (B.8)

where the relative sign depends on the conventions and on which thermodynamical potential generates the evolution (e.g. internal energy, free energy, entropy). Note that this bracket lives up to its name and satisfies the Leibniz rule in either argument,

(f, gh) = (f, g)h + g(f, h), (B.9)

for all phase-space functions f, g, h. Based on this bracket, there are different ways to define the equations of motion. On the one hand, one can introduce a generalised free energy F = H − TS to generate the flow of the metriplectic system, in which case the Hamiltonian is conserved and the dissipation is captured by an increase in entropy [48]. Alternatively, one can use the Hamiltonian itself to generate the time evolution through

(d/dt) f = (H, f). (B.10)

In this case, the Hamiltonian is no longer conserved, since (H, H) ≠ 0 due to the symmetric part of the Leibniz bracket, and it captures directly the loss or gain of energy through dissipation, depending on the sign of {|H, H|} [49,50]. Furthermore, just as we can associate Hamiltonian vector fields X_f to each function f on phase space given a symplectic two-form ω through δ[f] = ω(δ, X_f) for all variations δ, we can define Hamiltonian vector fields X_f with respect to the metriplectic structure as

δ[f] = k(δ, X_f), (B.11)

provided the bilinear k(·, ·) = ω(·, ·) ± g(·, ·) is non-degenerate.7 Given the metric and the symplectic two-form, we thus define the Leibniz bracket as (f, g) = X_f[g], in accordance with Eq. (40) of the main text.
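The Leibniz property of the combined bracket is easily verified in low dimensions. The following sketch (our illustration; the choices of ω, g and of the test polynomials are arbitrary) builds the bracket (B.8) with the '+' sign convention and checks the product rule symbolically:

import sympy as sp

q, p = sp.symbols('q p', real=True)
z = (q, p)
omega_inv = sp.Matrix([[0, 1], [-1, 0]])   # omega^{ij} of Eq. (B.2)
g_inv = sp.Matrix([[0, 0], [0, 1]])        # g^{ij} >= 0 of Eq. (B.5)

def bracket(f, h, k):
    # Generic bilinear bracket k^{ij} (df/dz^i)(dh/dz^j).
    return sum(k[i, j] * sp.diff(f, z[i]) * sp.diff(h, z[j])
               for i in range(2) for j in range(2))

def leibniz(f, h):
    # Combined bracket (f, h) of Eq. (B.8), '+' sign convention.
    return bracket(f, h, omega_inv) + bracket(f, h, g_inv)

f, h1, h2 = q**2 * p, p**3, q + p**2
lhs = leibniz(f, h1 * h2)
rhs = leibniz(f, h1) * h2 + h1 * leibniz(f, h2)
print(sp.simplify(lhs - rhs))   # 0: the product rule holds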
The condition is ∀φ ∈ Diff(M : M ), p ∈ M : X µ [g ab , ψ I ] φ(p) = X µ [φ * g ab , φ * ψ I ](p).
This is to say that there are no field equations or gauge conditions that would constrain φ.
In the following, all equations are taken on-shell, i.e. provided the field equations are satisfied.
7 In gauge theories, k(·, ·) will have non-trivial null vectors. A gauge fixing amounts to taking the pull-back to a submanifold, where k(·, ·) is non-degenerate.
Acknowledgements

We acknowledge financial support by the Austrian Science Fund (FWF) through BeyondC (F7103-N48), the Austrian Academy of Sciences (ÖAW) through the project "Quantum Reference Frames for Quantum Fields" (ref. IF 2019 59 QRFQF), and of the ID 61466 grant from the John Templeton Foundation, as part of the "Quantum Information Structure of Spacetime (QISS)" project (qiss.fr).
Quasi-Local Energy-Momentum and Angular Momentum in GR: A Review Article. L B Szabados, Living Rev. Rel. 74L. B. Szabados, "Quasi-Local Energy-Momentum and Angular Momentum in GR: A Review Article," Living Rev. Rel. 7 (2004) 4.
Action and Energy of the Gravitational Field. J Brown, S Lau, J York, arXiv:gr-qc/0010024Annals of Physics. 2972J. Brown, S. Lau, and J. York, "Action and Energy of the Gravitational Field," Annals of Physics 297 (2002), no. 2, 175 -218, arXiv:gr-qc/0010024.
Quasilocal energy and conserved charges derived from the gravitational action. J D Brown, J W York, arXiv:gr-qc/9209012Phys. Rev. D. 47J. D. Brown and J. W. York, "Quasilocal energy and conserved charges derived from the gravitational action," Phys. Rev. D 47 (1993) arXiv:gr-qc/9209012.
Forget time. C Rovelli, arXiv:0903.3832Found. Phys. 41C. Rovelli, "Forget time," Found. Phys. 41 (2011) 1475-1490, arXiv:0903.3832.
Nonlinear nature of gravitation and gravitational-wave experiments. D Christodoulou, Phys. Rev. Lett. 67D. Christodoulou, "Nonlinear nature of gravitation and gravitational-wave experiments," Phys. Rev. Lett. 67 (Sep, 1991) 1486-1489.
Note on the memory effect. J Frauendiener, Class. Quant. Grav. 961639J. Frauendiener, "Note on the memory effect," Class. Quant. Grav. 9 (1992), no. 6, 1639.
Soft Hair on Black Holes. S W Hawking, M J Perry, A Strominger, arXiv:1601.00921Phys. Rev. Lett. 116231301S. W. Hawking, M. J. Perry, and A. Strominger, "Soft Hair on Black Holes," Phys. Rev. Lett. 116 (2016) 231301, arXiv:1601.00921.
What is observable in classical and quantum gravity?. C Rovelli, Classical and Quantum Gravity. 82C. Rovelli, "What is observable in classical and quantum gravity?," Classical and Quantum Gravity 8 (1991), no. 2, 297-316.
Quantum mechanics without time: A model. C Rovelli, Phys. Rev. D. 42C. Rovelli, "Quantum mechanics without time: A model," Phys. Rev. D 42 (Oct, 1990) 2638-2646.
Partial observables. C Rovelli, arXiv:gr-qc/0110035Phys. Rev. D. 65124013C. Rovelli, "Partial observables," Phys. Rev. D 65 (2002) 124013, arXiv:gr-qc/0110035.
Reduced phase space quantization and Dirac observables. T Thiemann, arXiv:gr-qc/0411031Class. Quant. Grav. 23T. Thiemann, "Reduced phase space quantization and Dirac observables," Class. Quant. Grav. 23 (2006) 1163-1180, arXiv:gr-qc/0411031.
Partial and complete observables for canonical general relativity. B Dittrich, arXiv:gr-qc/0507106Class. Quant. Grav. 23B. Dittrich, "Partial and complete observables for canonical general relativity," Class. Quant. Grav. 23 (2006) 6155-6184, arXiv:gr-qc/0507106.
Relational Observables in Gravity: a Review. J Tambornino, SIGMA 8 (2012) 017, arXiv:1109.0740.
Observables, gravitational dressing, and obstructions to locality and subsystems. W Donnelly, S B Giddings, arXiv:1607.01025Phys. Rev. D. 9410W. Donnelly and S. B. Giddings, "Observables, gravitational dressing, and obstructions to locality and subsystems," Phys. Rev. D 94 (2016), no. 10, 104038, arXiv:1607.01025.
Gravitational splitting at first order: Quantum information localization in gravity. W Donnelly, S B Giddings, arXiv:1805.11095Phys. Rev. D. 98886006W. Donnelly and S. B. Giddings, "Gravitational splitting at first order: Quantum information localization in gravity," Phys. Rev. D 98 (2018), no. 8, 086006, arXiv:1805.11095.
Gravitational observables and local symmetries. C G Torre, arXiv:gr-qc/9306030Phys. Rev. D. 48C. G. Torre, "Gravitational observables and local symmetries," Phys. Rev. D 48 (1993) R2373-R2376, arXiv:gr-qc/9306030.
Asymptotic Symmetries in Gravitational Theory. R Sachs, Phys. Rev. 128R. Sachs, "Asymptotic Symmetries in Gravitational Theory," Phys. Rev. 128 (Dec, 1962) 2851-2864.
Gravitational waves in general relativity VIII. Waves in asymptotically flat space-time. R Sachs, Proceedings of the Royal Society London A. 2701340R. Sachs, "Gravitational waves in general relativity VIII. Waves in asymptotically flat space-time," Proceedings of the Royal Society London A 270 (1962), no. 1340, 103-126.
A Ashtekar, Asymptotic Quantization. Bibliopolis, Napoli, 1987. Based on 1984 Naples Lectures. A. Ashtekar, Asymptotic Quantization. Bibliopolis, Napoli, 1987. Based on 1984 Naples Lectures.
Local subsystems in gauge theory and gravity. W Donnelly, L Freidel, arXiv:1601.04744JHEP. 09102W. Donnelly and L. Freidel, "Local subsystems in gauge theory and gravity," JHEP 09 (2016) 102, arXiv:1601.04744.
How is quantum information localized in gravity?. W Donnelly, S B Giddings, arXiv:1706.03104Phys. Rev. D. 96886013W. Donnelly and S. B. Giddings, "How is quantum information localized in gravity?," Phys. Rev. D 96 (2017), no. 8, 086013, arXiv:1706.03104.
Covariant phase space with boundaries. D Harlow, J.-Q Wu, Journal of High Energy Physics. 202010146D. Harlow and J.-Q. Wu, "Covariant phase space with boundaries," Journal of High Energy Physics 2020 (2020), no. 10, 146.
New boundary variables for classical and quantum gravity on a null surface. W Wieland, arXiv:1704.07391Class. Quantum Grav. 34215008W. Wieland, "New boundary variables for classical and quantum gravity on a null surface," Class. Quantum Grav. 34 (2017) 215008, arXiv:1704.07391.
Gravitational SL(2, R) algebra on the light cone. W Wieland, arXiv:2104.05803JHEP. 0757W. Wieland, "Gravitational SL(2, R) algebra on the light cone," JHEP 07 (2021) 057, arXiv:2104.05803.
Null infinity as an open Hamiltonian system. W Wieland, arXiv:2012.01889JHEP. 2195W. Wieland, "Null infinity as an open Hamiltonian system," JHEP 21 (2020) 095, arXiv:2012.01889.
Covariant Conservation Laws in General Relativity. A Komar, Phys. Rev. 113A. Komar, "Covariant Conservation Laws in General Relativity," Phys. Rev. 113 (1959) 934-936.
Local symmetries and constraints. J Lee, R M Wald, J. Math. Phys. 31J. Lee and R. M. Wald, "Local symmetries and constraints," J. Math. Phys. 31 (1990) 725-743.
The Covariant Phase Space Of Asymptotically Flat Gravitational Fields. A Ashtekar, L Bombelli, O Reula, in Mechanics, Analysis and Geometry: 200 Years after Lagrange, M. Francaviglia and D. Holm, eds. North Holland, Amsterdam, 1990.
A General definition of 'conserved quantities' in general relativity and other theories of gravity. R M Wald, A Zoupas, arXiv:gr-qc/9911095Phys. Rev. D. 6184027R. M. Wald and A. Zoupas, "A General definition of 'conserved quantities' in general relativity and other theories of gravity," Phys. Rev. D 61 (2000) 084027, arXiv:gr-qc/9911095.
Isolated surfaces and symmetries of gravity. L Ciambelli, R G Leigh, arXiv:2104.07643Phys. Rev. D. 104446005L. Ciambelli and R. G. Leigh, "Isolated surfaces and symmetries of gravity," Phys. Rev. D 104 (2021), no. 4, 046005, arXiv:2104.07643.
Embeddings and Integrable Charges for Extended Corner Symmetry. L Ciambelli, R G Leigh, P.-C Pai, arXiv:2111.13181Phys. Rev. Lett. 128171302L. Ciambelli, R. G. Leigh, and P.-C. Pai, "Embeddings and Integrable Charges for Extended Corner Symmetry," Phys. Rev. Lett. 128 (Apr, 2022) 171302, arXiv:2111.13181.
A canonical bracket for open gravitational system. L Freidel, arXiv:2111.14747L. Freidel, "A canonical bracket for open gravitational system," arXiv:2111.14747.
Extended corner symmetry, charge bracket and Einstein's equations. L Freidel, R Oliveri, D Pranzetti, S Speziale, arXiv:2104.12881JHEP. 0983L. Freidel, R. Oliveri, D. Pranzetti, and S. Speziale, "Extended corner symmetry, charge bracket and Einstein's equations," JHEP 09 (2021) 083, arXiv:2104.12881.
Local phase space and edge modes for diffeomorphism-invariant theories. A J Speranza, arXiv:1706.05061JHEP. 0221A. J. Speranza, "Local phase space and edge modes for diffeomorphism-invariant theories," JHEP 02 (2018) 021, arXiv:1706.05061.
Anomalies in gravitational charge algebras of null boundaries and black hole entropy. V Chandrasekaran, A J Speranza, arXiv:2009.10739JHEP. 01137V. Chandrasekaran and A. J. Speranza, "Anomalies in gravitational charge algebras of null boundaries and black hole entropy," JHEP 01 (2021) 137, arXiv:2009.10739.
BMS charge algebra. G Barnich, C Troessaert, arXiv:1106.0213JHEP. 12105G. Barnich and C. Troessaert, "BMS charge algebra," JHEP 12 (2011) 105, arXiv:1106.0213.
Aspects of the BMS/CFT correspondence. G Barnich, C Troessaert, arXiv:1001.1541JHEP. 0562G. Barnich and C. Troessaert, "Aspects of the BMS/CFT correspondence," JHEP 05 (2010) 062, arXiv:1001.1541.
Barnich-Troessaert bracket as a Dirac bracket on the covariant phase space. W Wieland, arXiv:2104.08377Class. Quant. Grav. 39225016W. Wieland, "Barnich-Troessaert bracket as a Dirac bracket on the covariant phase space," Class. Quant. Grav. 39 (2022), no. 2, 025016, arXiv:2104.08377.
Why Gauge?. C Rovelli, arXiv:1308.5599Found. Phys. 441C. Rovelli, "Why Gauge?," Found. Phys. 44 (2014), no. 1, 91-104, arXiv:1308.5599.
Gauging the Boundary in Field-space. H Gomes, arXiv:1902.09258Stud. Hist. Phil. Sci. B. 67H. Gomes, "Gauging the Boundary in Field-space," Stud. Hist. Phil. Sci. B 67 (2019) 89-110, arXiv:1902.09258.
Edge modes of gravity. Part I. Corner potentials and charges. L Freidel, M Geiller, D Pranzetti, arXiv:2006.12527JHEP. 1126L. Freidel, M. Geiller, and D. Pranzetti, "Edge modes of gravity. Part I. Corner potentials and charges," JHEP 11 (2020) 026, arXiv:2006.12527.
Fock representation of gravitational boundary modes and the discreteness of the area spectrum. W Wieland, arXiv:1706.00479Ann. Henri Poincaré. 18W. Wieland, "Fock representation of gravitational boundary modes and the discreteness of the area spectrum," Ann. Henri Poincaré 18 (2017) 3695-3717, arXiv:1706.00479.
The observer's ghost: notes on a field space connection. H Gomes, A Riello, arXiv:1608.08226JHEP. 0517H. Gomes and A. Riello, "The observer's ghost: notes on a field space connection," JHEP 05 (2017) 017, arXiv:1608.08226.
The quasilocal degrees of freedom of Yang-Mills theory. H Gomes, A Riello, arXiv:1910.04222SciPost Phys. 106130H. Gomes and A. Riello, "The quasilocal degrees of freedom of Yang-Mills theory," SciPost Phys. 10 (2021), no. 6, 130, arXiv:1910.04222.
Quasi-local holographic dualities in non-perturbative 3d quantum gravity. B Dittrich, C Goeller, E R Livine, A Riello, arXiv:1803.02759Class. Quant. Grav. 3513B. Dittrich, C. Goeller, E. R. Livine, and A. Riello, "Quasi-local holographic dualities in non-perturbative 3d quantum gravity," Class. Quant. Grav. 35 (2018), no. 13, 13LT01, arXiv:1803.02759.
Geometric formulation of the Covariant Phase Space methods with boundaries. J Margalef-Bentabol, E J S Villaseñor, arXiv:2008.01842Phys. Rev. D. 103225011J. Margalef-Bentabol and E. J. S. Villaseñor, "Geometric formulation of the Covariant Phase Space methods with boundaries," Phys. Rev. D 103 (2021), no. 2, 025011, arXiv:2008.01842.
Edge modes as dynamical frames: charges from post-selection in generally covariant theories. S Carrozza, S Eccles, P A Hoehn, arXiv:2205.00913S. Carrozza, S. Eccles, and P. A. Hoehn, "Edge modes as dynamical frames: charges from post-selection in generally covariant theories," arXiv:2205.00913.
A paradigm for joined Hamiltonian and dissipative systems. P J Morrison, Physica D: Nonlinear Phenomena. 181-3P. J. Morrison, "A paradigm for joined Hamiltonian and dissipative systems," Physica D: Nonlinear Phenomena 18 (1986), no. 1-3, 410-419.
The Euler-Poincaré equations and double bracket dissipation. A Bloch, P Krishnaprasad, J E Marsden, T S Ratiu, Communications in Mathematical Physics. 1751A. Bloch, P. Krishnaprasad, J. E. Marsden, and T. S. Ratiu, "The Euler-Poincaré equations and double bracket dissipation," Communications in Mathematical Physics 175 (1996), no. 1, 1-42.
Metriplectic Systems. D Fish, Portland State University, 2005. PhD Dissertation.
| [] |
[
"First order phase transition and corrections to its parameters in the O(N) -model",
"First order phase transition and corrections to its parameters in the O(N) -model"
] | [
"M Bordag ",
"V Skalozub ",
"\nInstitute for Theoretical Physics\nUniversity of Leipzig\nAugustusplatz 10/1104109LeipzigGermany\n",
"\nDniepropetrovsk National University\n49050DniepropetrovskUkraine\n"
] | [
"Institute for Theoretical Physics\nUniversity of Leipzig\nAugustusplatz 10/1104109LeipzigGermany",
"Dniepropetrovsk National University\n49050DniepropetrovskUkraine"
] | [] | The temperature phase transition in the N -component scalar field theory with spontaneous symmetry breaking is investigated using the method combining the second Legendre transform and with the consideration of gap equations in the extrema of the free energy. After resummation of all super daisy graphs an effective expansion parameter, (1/2N ) 1/3 , appears near T c for large N . The perturbation theory in this parameter accounting consistently for the graphs beyond the super daisies is developed. A certain class of such diagrams dominant in 1/N is calculated perturbatively. Corrections to the characteristics of the phase transition due to these contributions are obtained and turn out to be next-to-leading order as compared to the values derived on the super daisy level and do not alter the type of the phase transition which is weakly first-order. In the limit N goes to infinity the phase transition becomes second order. A comparison with other approaches is done. * | 10.1142/9789812702883_0030 | [
"https://arxiv.org/pdf/hep-th/0211260v1.pdf"
] | 1,905,492 | hep-th/0211260 | edad7b261a3577c9eac88be54c0893317c127e25 |
First order phase transition and corrections to its parameters in the O(N) -model
arXiv:hep-th/0211260v1 27 Nov 2002 December 10, 2018
M Bordag
V Skalozub
Institute for Theoretical Physics
University of Leipzig
Augustusplatz 10/1104109LeipzigGermany
Dniepropetrovsk National University
49050DniepropetrovskUkraine
First order phase transition and corrections to its parameters in the O(N) -model
arXiv:hep-th/0211260v1 27 Nov 2002 December 10, 2018
The temperature phase transition in the N -component scalar field theory with spontaneous symmetry breaking is investigated using the method combining the second Legendre transform and with the consideration of gap equations in the extrema of the free energy. After resummation of all super daisy graphs an effective expansion parameter, (1/2N ) 1/3 , appears near T c for large N . The perturbation theory in this parameter accounting consistently for the graphs beyond the super daisies is developed. A certain class of such diagrams dominant in 1/N is calculated perturbatively. Corrections to the characteristics of the phase transition due to these contributions are obtained and turn out to be next-to-leading order as compared to the values derived on the super daisy level and do not alter the type of the phase transition which is weakly first-order. In the limit N goes to infinity the phase transition becomes second order. A comparison with other approaches is done. *
Introduction
Investigations of the temperature phase transition in the N-component scalar field theory (O(N)-model) have a long history and were carried out by either perturbative or non-perturbative methods. This model enters as an important part of unified field theories and serves to supply masses to fermion and gauge fields via the mechanism of spontaneous symmetry breaking. The general belief about the type of the symmetry restoration phase transition is that it is of second order for any values of N (see, for instance, the text books [1]-[3]). This conclusion results mainly from non-perturbative analytic and numeric methods [4]-[8]. In contrast, a first-order phase transition was observed in most perturbative approaches [9]-[14]. An analysis of the sources of this discrepancy was done in different places, in particular, in our previous papers [12]-[14] devoted to the investigation of the phase transition in the O(N)-model in perturbation theory (PT). Therein a new method has been developed which combines the second Legendre transform with consideration of the gap equations in the extrema of the free energy functional. This allows for a considerable simplification of calculations and for analytic results. The main result is the discovery, in the so-called super daisy approximation (SDA), of an effective expansion parameter ǫ = 1/N^{1/3} near the phase transition temperature T_c. All quantities (the particle masses, the temperatures T₊, T₋) are expandable in this parameter. The phase transition was found to be weakly first-order, converting into a second-order one in the limit N → ∞. The existence of this small parameter improves the status of the resummed perturbative approach, making it as reliable as any perturbative calculation in quantum field theory. For comparison we note that formerly there had been two problems with the perturbative calculations near the phase transition. First, unwanted imaginary parts had been observed. Using functional methods we showed in [12,14] that they disappear after resumming the super daisy graphs. The second problem was that near T_c the masses become small (although not zero), compensating the smallness of the coupling constant and hence making the effective expansion parameter of order one.
In the present paper we construct the PT in the effective expansion parameter ǫ = 1/N^{1/3} near T_c for the O(N)-model at large N. It uses as input parameters the values obtained in the SDA. As an application, we calculate corrections to the characteristics of the phase transition near T_c which follow from taking into account all BSDA graphs of order 1/N. Since the masses of particles calculated in the SDA are temperature dependent, we consider in detail the renormalization at finite temperature of the graphs investigated. It will be shown that the counter terms renormalizing these graphs at zero temperature are sufficient to carry out the renormalization at finite temperature.
The paper is organized as follows. In sect.2 we adduce the necessary information on the second Legendre transform and formulate the BSDA PT at temperatures T ∼ T_c. In the next section we calculate the contribution to the free energy of the graphs called "bubble chains", which are of next-next-to-leading order in 1/N. In sect.4 the renormalization is discussed. The corrections to the masses and other parameters near T_c are calculated in sect.5. The last section is devoted to a discussion.
Perturbation theory beyond super daisy approximation
In this section we develop the PT in the effective expansion parameter ǫ = 1/N^{1/3} for the graphs beyond the SDA in the framework of the second Legendre transform. The consideration of this problem is quite general and independent of the specific form of the Lagrangian, so it will be carried out in the condensed notations of Refs. [12]-[14].
The second Legendre transform is introduced by representing the connected Green functions in the form
W = S[0] + (1/2) Tr log β − (1/2) Tr(∆^{−1} β) + W₂, (1)

where S[0] is the tree-level action, β is the full propagator of the scalar N-component field, ∆ is the free-field propagator, and W₂ is the contribution of two-particle-irreducible (2PI) graphs taken out of the connected Green functions and having the functions β(p) on the lines. The symbol "Tr" means summation over discrete Matsubara frequencies and integration over a three-momentum (see for more details Refs. [12]-[14]). The propagator is related to the mass operator by the Schwinger-Dyson equation
β^{−1}(p) = ∆^{−1}(p) − Σ(p), (2)
Σ(p) = 2 δW₂/δβ(p). (3)
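As an aside, the way Eqs. (2) and (3) are used in practice can be illustrated by a toy fixed-point iteration; the sketch below (schematic only, with a hypothetical sigma that merely mimics the shape of a high-temperature tadpole, and arbitrary parameter values) solves a self-consistency condition of the type β^{-1} = ∆^{-1} − Σ[β] for a constant-mass ansatz β^{-1} = p² + M²:

import math

# Schematic gap-equation solver: M**2 = m2 + sigma(M), iterated to a fixed
# point, as one would do after inserting a mass ansatz into Eq. (2).
m2, lam, T = 0.2, 0.5, 3.0   # hypothetical numbers

def sigma(M):
    # Hypothetical stand-in with the shape of a thermal tadpole.
    return lam * (T**2 / 12.0 - T * M / (4.0 * math.pi))

M = 1.0
for _ in range(200):
    M = math.sqrt(max(m2 + sigma(M), 1.0e-12))
print("self-consistent mass M =", round(M, 6))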
The general expressions (1) and (2) will be the starting point for the construction of the BSDA PT. Calculations in the SDA have been carried out already in [13]-[14] and delivered the masses of the fields and the characteristics of the phase transition: the transition temperature T_c and the upper and lower spinodal temperatures T₊ and T₋. These parameters will be used in the new PT as the input parameters (zeroth approximation). Contributions of BSDA diagrams will be calculated perturbatively. First let us write the propagator in the form
β(p) = β 0 (p) + β ′ (p),(4)
where β 0 is derived in the SDA and β ′ is a correction which has to be calculated in the BSDA PT in the small parameter ǫ = 1 N 1/3 for large N. The 2PI part can be presented as
W 2 = W SD + W ′ 2 = W SD [β 0 + β ′ ] + W ′ 2 [β 0 + β ′ ](5)
and, assuming β′ to be small, of order ǫ, we write
W_SD = W_SD[β₀] + (δW_SD[β₀]/δβ₀) β′ + O(ǫ²), (6)
W′₂ = W′₂[β₀] + (δW′₂[β₀]/δβ₀) β′ + O(ǫ²). (7)
In the above formulas the square brackets denote a functional dependence on the propagator, while curly brackets, as usual, mark a parameter dependence. Other functions can be expanded in the same way. For β^{−1} we have
β^{−1} = β₀^{−1} − β₀^{−1} β′ β₀^{−1} (8)
= ∆^{−1} − Σ₀[β₀] − Σ′[β₀], (9)

where Σ′[β₀] = 2 δW′₂[β₀]/δβ₀ is the correction following from the 2PI graphs beyond the SDA, and Σ₀[β₀](p) is the super daisy mass operator. In the high-temperature limit, within the ansatz adopted in Refs. [12]-[14] (β^{−1} = p² + M²), this looks as follows
β^{−1}(p) = p² + M₀² − Σ′[β₀](p), (10)
where M 2 0 is the field mass calculated in the SDA as the solution of the gap equations.
In a similar way, the free energy functional can be presented as
W = W^{(0)} + W′ (11)
with
W^{(0)} = S^{(0)} + (1/2) Tr log β₀ − (1/2) Tr(β₀ ∆^{−1}) + W_SD[β₀] (12)
and
W′ = −(1/2) Tr(β′ ∆^{−1}) + (δW_SD[β₀]/δβ₀) β′ + W′₂[β₀] + (1/2) Tr(β′ β₀^{−1}) + O(ǫ²). (13)
Taking into account that β′ = 2 β₀² δW′₂[β₀]/δβ₀, one finds

(1/2) Tr(β′ β₀^{−1}) = Tr(β₀ δW′₂[β₀]/δβ₀), (14)
and hence
W ′ = W ′ 2 [β 0 ].(15)
Thus, within the representation (4) we obtain for the W functional
W = W^{(0)}_{SD}[β₀] + W′₂[β₀], (16)
where W^{(0)}_{SD}[β₀] is the expression (1) with only the SDA part retained in W₂[β₀], and the particle masses have to be calculated in the SDA. The term W′₂[β₀] corresponds to the 2PI graphs taken with the β₀ propagators on the lines.
From the above consideration it follows that perturbative calculations in the parameter ǫ derived in the SDA within the second Legendre transform can be implemented as a simple procedure in which the masses entering the propagators β are the ones obtained in the SDA. Different types of BSDA diagrams exist. They can be classified as sets of diagrams having the same order in 1/N from the number of components. So, it is reasonable to account for the contribution of each class by summing up all diagrams with the corresponding power of 1/N. A particular example of such a calculation will be given below.
Expansion near T c
In this section we shall calculate the first correction in the effective expansion parameter ǫ = 1/N^{1/3} for large N at T ∼ T_c. Before elaborating on that, let us recall the main results of the SDA which will be used as the PT input. In Ref. [14] it was shown that the masses of the Higgs field M_η and the Goldstone field M_φ near the phase transition temperature are (for large N)
M^{(0)}_η = (λT₊/4π) [1/(2N)^{1/3} − 1/(2N) + ...],
M^{(0)}_φ = (λT₊/2π) [1/(2N)^{2/3} − 1/(2N) + ...], (17)
where λ is the coupling constant; the superscript zero means that in what follows these masses will be chosen as the zeroth approximation, and m is the initial mass in the Lagrangian. The upper spinodal temperature T₊ is close to the transition temperature T_c ∼ m/√λ. We adduce here the masses for the T₊ case because this delivers simple analytic expressions. Results for other temperatures in between, T₋ ≤ T ≤ T₊, are too lengthy to be presented here. Note also that the temperatures T₊, T₋ in the SDA are related as (see Refs. [13], [14])
T₊/T₋ = 1 + (9λ/(16π²)) (1/(2N)^{2/3}) + ..., (18)
and
T₋ = [12N/(λ(N+2))]^{1/2} m.
Hence it is clear that the transition is weakly first-order, transforming into a second-order one in the limit N → ∞.
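For orientation, the SDA input values can be tabulated numerically. The sketch below (our illustration; λ and m are arbitrary sample values, and the square root in T₋ is made explicit, as required by the estimate T_c ∼ m/√λ) evaluates Eqs. (17) and (18):

import numpy as np

lam, m = 0.1, 1.0   # sample values
for N in (4, 10, 100, 1000):
    T_minus = np.sqrt(12.0 * N / (lam * (N + 2))) * m             # see text
    ratio = 1.0 + 9.0 * lam / (16.0 * np.pi**2) / (2 * N)**(2/3)  # Eq. (18)
    T_plus = ratio * T_minus
    M_eta = lam * T_plus / (4 * np.pi) * ((2 * N)**(-1/3) - 1 / (2 * N))  # Eq. (17)
    M_phi = lam * T_plus / (2 * np.pi) * ((2 * N)**(-2/3) - 1 / (2 * N))
    print(f"N={N:5d}  T+/T- - 1 = {ratio - 1:.2e}  "
          f"M_eta = {M_eta:.5f}  M_phi = {M_phi:.6f}")

The masses and the gap between the spinodal temperatures shrink with growing N, in accordance with the weakly first-order character of the transition.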
With these parameters taken into account an arbitrary graph beyond the SDA having n vertexes can be presented as
(λ/N)^n T^C M^{3C−2L} V_n ∼ (λ/N)^n (1/√λ)^C (√λ/N^{2/3})^{3C−2L} V_n = λ (1/N)^{n/3+2} V_n. (19)
Here the notation is introduced: C = L − n + 1 is the number of loops and L is the number of lines. Since we are interested in diagrams with closed loops only, the relation 2n = L holds. The vertex factor V_n comes from combinatorics. The right-hand side of Eq. (19) follows when one rescales the three-dimensional momentum of each loop, p → Mp, and substitutes for M the mass M^{(0)}_φ of Eq. (17), which is the 'worst case' to consider. In this estimate only the static modes (l = 0 Matsubara frequency) of the propagators were accounted for. One may wonder whether this is sufficient at T_c. The positive answer immediately follows if one takes into consideration that, side by side with the three-momentum rescaling, the temperature enters through the ratio T/M ∼ N^{2/3}. Hence it is clear that as N goes to infinity a high-temperature expansion is applicable and the static mode limit is reasonable.
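The exponent in Eq. (19) can be checked mechanically. The sketch below (our illustration) reproduces the counting with sympy, using L = 2n, C = L − n + 1, T ∼ 1/√λ and M ∼ √λ/N^{2/3}, with the vertex factor V_n left aside:

import sympy as sp

lam, N, n = sp.symbols('lambda N n', positive=True)
L = 2 * n                 # lines
C = L - n + 1             # loops
T = 1 / sp.sqrt(lam)      # T_c ~ m / sqrt(lambda), in units of m
M = sp.sqrt(lam) / N**sp.Rational(2, 3)
estimate = lam**n / N**n * T**C * M**(3 * C - 2 * L)
print(sp.powsimp(estimate, force=True))
# prints lambda * N**(-n/3 - 2), i.e. lambda * (1/N)**(n/3 + 2)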
As follows from Eq. (19), λ is not a good expansion parameter near T_c, whereas 1/N^{1/3} is one because it enters with the power of the number of vertexes of the graph. Of course, we have to consider the graphs beyond the SDA. That is, all diagrams with closed loops of one line (tadpoles) have to be excluded because they were already summed up completely at the SDA level. Note also the important factor 1/N² coming from the rescaling of the three-momentum with the mass M^{(0)}_φ. 2) The simple ansatz for the two-particle-irreducible Green function β^{−1} = p² + M² is exact in this case. Here, p is a four-momentum and M is a mass parameter which is determined from the solution of the gap equations. 3) As has been known for many years, T_c is well determined by this approximation and it is not altered when further resummations are achieved. 4) The existence of the expansion parameter ǫ = 1/N^{1/3} near T_c. Since the estimate (19) assumes the rescaling of momenta p → Mp, the same procedure should be fulfilled in perturbative calculations in the parameter ǫ. They are carried out in the following way. First, since λ is not the expansion parameter it can be skipped. Second, only the BSDA diagrams have to be taken into consideration. Moreover, since N is a large number it is convenient to sum up series having different powers in 1/N. Third, the rescaling p → Mp can be done before the actual calculations. In this case one starts with expressions like that in Eq. (19) (for diagrams of the φ-sort, to be considered). When η-particles are included one has to account for the corresponding mass value and rescale the momentum accordingly. Fourth, the temperature is fixed to be T_c. In other words, one has to start with expressions like that in Eq. (19).
To demonstrate the procedure, let us calculate a series of graphs giving the leading in 1/N contribution F′ to the free energy F. The total is then F = F^{(0)} + F′, where F^{(0)} is the result of the SDA.
The 'bubble chains' of the φ-field are the most divergent in N; other diagrams have at least one power of N less. So, below we discuss the φ-bubble chains only. The contribution of these sequences with the mass M^{(0)}_φ and various n is given by the series
D_φ = D^{(2)}_φ + (λ/N²) Σ_{n=3}^{∞} (1/(n 2^n)) (V_n/N^n) Tr_p (6 Σ^{(1)}_φ(p) N^{2/3})^n. (20)
Here, D^{(2)}_φ is the contribution of the "basketball" diagram, and Σ^{(1)}_φ(p) is the one-loop diagram of the type

Σ^{(1)}_φ(p) = Tr_k β^{(0)}_φ(k) β^{(0)}_φ(k + p), (21)
and the power of the parameter λ is written explicitly. Now, let us sum up this series for large fixed N. The vertex combinatorial factor for the diagram with n circles is [13]
V_n = ((N+3)/3) [((N+3)/3)^{n−1} + (2/3)^{n−1} (N − 2)]. (22)
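The large-N dominance invoked below can be checked numerically; in this short sketch (our illustration) the exact V_n of Eq. (22) is compared with its leading power ((N + 3)/3)^n:

from fractions import Fraction

def V(n, N):
    # Vertex combinatorial factor of Eq. (22).
    a = Fraction(N + 3, 3)
    return a * (a**(n - 1) + Fraction(2, 3)**(n - 1) * (N - 2))

N = 1000
for n in (3, 5, 8):
    print(n, float(V(n, N) / Fraction(N + 3, 3)**n))   # ratio tends to 1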
The leading in N term in V_n is ∼ ((N+3)/3)^n, and we have for the series in Eq. (20)
(λ/N²) Σ_{n=3}^{∞} (1/n) Tr_p [(−Σ^{(1)}_φ(p) N^{2/3})^n (1 + 3/N)^n]. (23)
To sum up over n we add and subtract two terms, −Σ^{(1)}_φ(p)N^{2/3}(1 + 3/N) and (1/2)[−Σ^{(1)}_φ(p)N^{2/3}(1 + 3/N)]². We get

D_φ = D^{(2)}_φ − (λ/N²) Tr_p [log(1 + Σ^{(1)}_φ(p)N^{2/3}(1 + 3/N)) − Σ^{(1)}_φ(p)N^{2/3}(1 + 3/N) + (1/2)(−Σ^{(1)}_φ(p)N^{2/3}(1 + 3/N))²]. (24)
To collect all the terms we insert for the first term

D^{(2)}_φ = (1/4) λ ((N² − 1)/N²) Tr_p (−Σ^{(1)}_φ(p))² N^{2/3}. (25)
Then the limit N → ∞ has to be taken. We obtain finally:
D_φ = −(λ/N²) Tr_p log(1 + Σ^{(1)}_φ(p) N^{2/3}) + (λ/N^{4/3}) Tr_p Σ^{(1)}_φ(p) − (λ/(4N^{2/3})) Tr_p (Σ^{(1)}_φ(p))². (26)
Thus, after summing over n, the limit N → ∞ exists and the series is well convergent. Eq. (26) gives the leading contribution to the free energy calculated in the BSDA in the limit N → ∞. This is the final result if we are interested in the leading in 1/N correction.
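The resummation leading from Eq. (23) to Eqs. (24) and (26) rests on the elementary identity Σ_{n≥3} (−x)^n/n = −[log(1 + x) − x + x²/2], which is easily confirmed symbolically (our illustration):

import sympy as sp

x = sp.symbols('x')
closed = -(sp.log(1 + x) - x + x**2 / 2)
truncated = sum((-x)**n / n for n in range(3, 12))
residual = sp.series(closed, x, 0, 12).removeO() - truncated
print(sp.expand(residual))   # 0 up to the truncation order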
To obtain the squared mass M²_φ, one has to sum up the value (M^{(0)}_φ)² calculated in the SDA and −(2/(N−1)) δD_φ/δβ_φ(0), in accordance with the general expression for the mass [13]. By substituting the propagators one is able to find the leading in 1/N^{1/3} corrections to the free energy and the masses at temperatures close to T_c. In this way other quantities can be computed.
The most important observation following from this example is that the limit N → ∞ does not commute with the summation of the infinite series in n. It is seen that the first two terms in Eq. (26) can be neglected as compared to the last one, which is leading in 1/N. As it turns out, this contribution is of order N^{−5/3}, which is smaller than the value (M^{(0)}_φ)² ∼ N^{−4/3} of Eq. (17) determined in the SDA. Note that the correction is positive and the complete field mass is slightly increased. We shall calculate the value of the mass in sect.5 simultaneously with other parameters of the phase transition. Contributions of other next-next-to-leading classes of BSDA diagrams can also be obtained perturbatively.
Renormalization of vertexes and bubble chains
The calculations carried out in the previous section deal with unrenormalized functions. The question may arise whether the graphs, which are series of the Σ₁(p, M_φ) function with the temperature-dependent mass M_φ(T), are renormalized by temperature-independent counter terms, as one expects at finite temperature. We shall consider this question in detail for the non-symmetric vertexes V^{(n)}_{absd}(p) and the graphs S^{(n)}(p) of "bubble chain circles" with n Σ₁(p, M_φ) insertions. These are of interest for the computations in the previous section. We will show that this is the case for both of these objects. They contain, correspondingly, n − 1 and n insertions of
Σ^{(1)}(p, M_φ), where p = p_a + p_b is the momentum incoming into the one-loop vertex Σ₁[β₀](p, M_φ). The functions S^{(n)}(p) are calculated from the vertexes; the one-loop vertex reads

V^{(2)}_{abc₁d₁} = (ρ²/2) (Σ₁/2) (C₁ s_{abc₁d₁} + (2/3) V¹_{abc₁d₁}), (27)
where the notation is introduced: ρ = −6λ/N is the expansion parameter in the O(N)-model, C₁ = ((N+1)/(N−1)) ((N+1)/3 − 2/3) is a combinatorial factor appearing at the symmetric tensor s_{abc₁d₁} = (1/3) δ_{ab} δ_{c₁d₁}, and V¹_{abc₁d₁} = (1/3)(δ_{ab}δ_{c₁d₁} + δ_{ac₁}δ_{bd₁} + δ_{ad₁}δ_{bc₁}) is the tree vertex in the O(N)-model. The subscript 1 in the indexes c₁, d₁ counts the number of loops in the diagram. The diagram with n loops (Σ₁[β₀](p) insertions) has the form

V^{(n+1)}_{abc_nd_n} = (ρ^{n+1}/2) (Σ₁/2)^n (C_n s_{abc_nd_n} + (2/3)^n V¹_{abc_nd_n}), (28)
where now C_n = ((N+1)/(N−1)) (((N+1)/3)^n − (2/3)^n) and the other notations are obvious. At zero temperature, the function Σ₁(p) has a logarithmic divergence, which in dimensional regularization exhibits itself as a pole ∼ 1/ǫ, ǫ = d − 4. We denote the divergent part of Σ₁(p) as D₁. To eliminate this part we introduce into the Lagrangian the counter term C₂ of order ρ²:
C₂ = −ρ² (D₁/2) v^{(2)}_{abc₁d₁}, (29)
where v^{(2)}_{abc₁d₁} = (C₁ s + (2/3) V¹)_{abc₁d₁}. Thus, the renormalized one-loop vertex is

V^{(2)}_{abc₁d₁} = (ρ²/2) (Σ₁^{ren.}/2) v^{(2)}_{abc₁d₁} (30)
with Σ 1 ren. = Σ 1 − D 1 . In order ρ 3 three diagrams contribute. The first comes from the tree vertexes V 1 abcd and contains two Σ 1 insertions. Two other graphs are generated by V 1 abcd and the counter term vertex C 2 . Each of them has one Σ 1 insertion. The sum of these three diagrams is
V^{(3)}_{abc₂d₂} = (ρ³/2) v^{(3)}_{abc₂d₂} [(1/4)((Σ₁^{ren.})² + 2Σ₁^{ren.}D₁ + D₁²) − (1/2)D₁Σ₁^{ren.} − (1/2)D₁²]. (31)
The terms with the products Σ 1 ren. D 1 cancel in the total. To have a finite expression one has to introduce a new counter term vertex
C₃ = (1/4) ρ³ D₁² v^{(3)}_{abc₂d₂}. (32)
It cancels the remaining divergence, which is independent of Σ₁. Thus the renormalized vertex V^{(3)} is given by the first term in the expression (31). This procedure can easily be continued, with the result that in the order ρ^{n+1} one has to introduce a counter term of the form
C n+1 = ρ n+1 ( D 1 2 ) n v (n+1)
abcndn .
The finite vertex calculated with all types of diagrams of the order ρ^{n+1} looks as follows:

V^{(n+1)}_{abc_nd_n} = (1/2) ρ^{n+1} (Σ_1^{ren.}/2)^n v^{(n+1)}_{abc_nd_n}. (34)
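The cancellation behind (31)-(34) is a plain binomial pattern: replacing k of the n insertions Σ_1/2 by the counter-term factor −D_1/2, in all binomial(n, k) ways, leaves exactly (Σ_1^{ren.}/2)^n. The following symbolic check is our illustration only, not part of the original paper:

```python
# Symbolic check of the counter-term pattern of Eqs. (31)-(34): a chain of n
# insertions, each either the full factor S = Sigma_1/2 or the counter-term
# factor -D = -D_1/2, summed over all binomial(n, k) placements, equals
# (S - D)^n = (Sigma_1^ren/2)^n, so only the finite part survives.
import sympy as sp

S, D = sp.symbols('S D')
for n in range(1, 6):
    total = sum(sp.binomial(n, k) * S**(n - k) * (-D)**k for k in range(n + 1))
    assert sp.simplify(total - (S - D)**n) == 0
print("subtractions reproduce (Sigma_1^ren/2)^n for n = 1..5")
```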
Above we considered the s-channel diagram contributions to the vertex V^{(n)}. This is sufficient to study the leading in 1/N correction D_φ (26) of interest. To have a symmetric renormalized vertex one has to add the contributions of the t- and u-channels and multiply the total by 1/3.

Now it is easy to show that the counter terms C_n rendering the vertexes finite at zero temperature are sufficient to renormalize them at finite temperature. Indeed, Σ_1(p, M(T), T) can be divided into two parts: Σ_1(p, M(T), T = 0) = Σ_1(p, M(T)), corresponding to field theory, and Σ_1^T(p, M(T), T), the statistical part. The divergent part D_1 and the counter terms C_n are independent of the mass parameters. So, to obtain the renormalized vertex at finite temperature, V^{(n+1)}_{abc_nd_n}(p, T), one has to sum up the same series of the usual and the counter-term diagrams as at zero temperature and substitute Σ_1(p, M(T), T) = Σ_1(p, M(T)) + Σ_1^T(p, M(T), T); again, the divergent terms of the form [Σ_1(p, M(T), T)]^l (D_1)^{n−l} are canceled. For this procedure to hold it is important that D_1 is logarithmically divergent and independent of the mass. So, the field-theoretical part of V^{(n+1)}_{abc_nd_n}(p, T, M(T)), as well as the statistical part, is renormalized by the same counter terms as the vertex at zero temperature. In the BSDA PT we have to use the mass M^{(0)}_φ(T).

Let us turn to the functions S^{(n)}(p). They can be obtained from the vertexes V^{(n+1)}_{abc_nd_n}(p, T, M(T)) in the following way. One has to contract the initial and the final indexes a, b and c_n, d_n and form two propagators β_0, which after integration over the internal line give the term Σ_1. The combinatorial factor of this diagram is 1/(n+1). We proceed with the function S^{(3)} at zero temperature. In the order ρ³ two diagrams contribute. One includes three ordinary vertexes V^1_{absd},

2S^{(3)}_1(p) = (1/3) ρ³ v^{(3)}_{abab} (Σ_1(p)/2)³, (36)
and the two other diagrams contain one vertex v^1_{abcd} and the counter-term vertex C_2,

2S^{(3)}_2(p) = −(1/3) ρ³ v^1_{abcd} v^{(2)}_{cdab} (Σ_1(p)/2)² (D_1/2). (37)
The contraction of indexes in Eq. (37) gives the factor

v^{(3)}_{abab} = D³_s = ((N+1)/3) [((N+1)/3)² + (2/3)² (N − 2)].

Again, substituting Σ_1 = Σ_1^{ren.} + D_1, one can see that the terms with the products (Σ_1^{ren.})^l (D_1)^{3−l} are canceled in the sum of the expressions (36) and (37). To obtain a finite S^{(3)} we introduce the counter term of order ρ³,

C^{(3)} = (1/3) ρ³ (D_1/2)³ D³_s. (38)
After that the renormalized circle S^{(3)} is

S^{(3)}_{ren.}(p) = (1/3) ρ³ (Σ_1^{ren.}(p)/2)³ D³_s. (39)
Note that since C^{(3)} does not depend on any parameter, it can be omitted, as well as the divergent term in the expression for S^{(3)}. In other words, there is no need to introduce new counter terms into the Lagrangian in order to have a finite S^{(3)}; the counter terms renormalizing the vertex V^{(3)} are sufficient. This procedure can be continued step by step for diagrams with any number of Σ_1 insertions. The renormalized graph S^{(n)} looks as follows:

S^{(n)}_{ren.}(p) = (1/n) ρ^n (Σ_1^{ren.}(p)/2)^n D^n_s, (40)
where the factor coming from the contraction of the v^{(n)}_{abc_{n−1}d_{n−1}} is

D^n_s = ((N+1)/3) [((N+1)/3)^{n−1} + (2/3)^{n−1} (N − 2)]. (41)
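As a small numerical illustration (ours, not from the paper), the contraction factor D^n_s of Eq. (41) can be evaluated directly; the ratio to its large-N leading term ((N+1)/3)^n tends to 1, showing that the Goldstone bubble chains dominate:

```python
# Evaluate the contraction factor D^n_s of Eq. (41) and compare it with its
# large-N leading term ((N+1)/3)^n (the Goldstone bubble-chain contribution).
from fractions import Fraction

def D_s(n, N):
    N = Fraction(N)
    return (N + 1) / 3 * (((N + 1) / 3) ** (n - 1) + Fraction(2, 3) ** (n - 1) * (N - 2))

n = 4
for N in (10, 100, 1000):
    exact = D_s(n, N)
    leading = (Fraction(N + 1) / 3) ** n
    print(N, float(exact / leading))   # ratio tends to 1 as N grows
```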
Now it is a simple task to show that the counter terms renormalizing the vertexes V^{(n)} and the circles S^{(n)} at zero temperature are sufficient to obtain finite S^{(n)}(p, T) when the temperature is switched on. This is based on the property that the temperature-dependent graph Σ_1(p, M, T) can be presented as the sum of the zero-temperature part and the statistical part, independently of the mass term entering. Then it is easy to check that the divergent terms of the form [Σ_1(p, M, T = 0) + Σ_1^T(p, M, T)]^l (D_1)^{n−l} are canceled when all the diagrams of order ρ^n forming the circle S^{(n)}_{ren.}(p, T) are summed up. Thus we have shown that the renormalization counter terms of the leading in 1/N BSDA graphs, calculated at zero temperature and independent of the mass parameter entering Σ_1, renormalize the S^{(n)}(p, T) functions at finite temperature. This gives the possibility to construct a PT based on the solutions of the gap equations. In fact, just the series of Σ_1 functions are of interest at the transition temperature T_c, which has to be considered as a fixed given number. The other parameters of the phase transition can then be found by means of an iteration procedure applied to the gap equations investigated already.
Corrections to the parameters of the phase transition
Having obtained the leading correction to the free energy (26), one is able to find perturbatively the characteristics of the phase transition near T_c. We shall do that in the limit N → ∞. First let us calculate the corrections to the Higgs boson mass, ΔM_η, and the Goldstone boson mass, ΔM_φ, due to the D_φ term (26). The starting point of these calculations is the system of gap equations derived in Ref. [13] (Eqs. (28), (30) and (45)) and Ref. [14] (Eq. (28)). Here we write it in the form where only the term containing D_φ is included:

M²_η/2 = m² − (3λ/N) Σ^{(0)}_η − λ((N−1)/N) Σ^{(0)}_φ,
M²_φ/2 = (λ/N)(Σ^{(0)}_φ − Σ^{(0)}_η) − (1/(N−1)) δD_φ/δβ_φ(0). (42)
Recall that these equations give the mass spectrum in the extrema of the free energy in the phase with broken symmetry. The complete system (Eqs. (45) in Ref. [13]) contains other terms, which are of higher order in 1/N and were omitted. Here m is the mass parameter in the Lagrangian. The functions Σ^{(0)}_η, Σ^{(0)}_φ are the tadpole graphs with the full propagators β_η, β_φ on the lines. For the ansatz adopted in Refs. [13], [14], β^{−1}_{η,φ} = p² + M²_{η,φ}, they have at high temperature the asymptotic expansion

Σ^{(0)}_{η,φ} = Tr β_{η,φ} = T²/12 − M_{η,φ} T/(4π) + ..., (43)
where the dots mark next-next-to-leading terms. Within Eqs. (42)-(43) (without the D_φ term) the masses (17) have been derived in the limit of large N. Now we compute the last term in Eq. (42) for large N. In this case the last term of D_φ in Eq. (26) is dominant, and calculating the functional derivative we find

f(M^{(0)}_φ) = −(2/(N−1)) δD_φ/δβ_φ(0) = (λ/(N−1)) (1/N^{2/3}) Tr β³(M^{(0)}_φ). (44)
The sunset diagram entering the right-hand side can be easily computed to give [13]

Tr β³(M^{(0)}_φ) = (T²/(32π²)) [1 − 2 ln(3M^{(0)}_φ/m)]. (45)

Thus for f(M^{(0)}_φ) we obtain in the large-N limit

f(M^{(0)}_φ) = (λ/N^{5/3}) (T²/(32π²)) [1 − 2 ln(3√(3λ)/(π(2N)^{2/3}))], (46)

where the mass M^{(0)}_φ (17) was inserted and T = T_+ has to be substituted. Here we again turn to the T_+ case to display analytic results.
Since f(M^{(0)}_φ) is small, it can be treated perturbatively when the masses M_η and M_φ are calculated. Let us write them as

M_η = M^{(0)}_η + x,  M_φ = M^{(0)}_φ + y, (47)

assuming x, y to be small. Substituting these into Eqs. (42)-(43) and preserving the terms linear in x, y, we obtain the system
2M^{(0)}_φ y = (λT/(2πN))(x − y) + f(M^{(0)}_φ),
2M^{(0)}_η x = (3λT/(2πN)) x + (λ(N−1)T/(2πN)) y (48)
of linear equations. Its solutions for large N are

x = (1/3) (T_+/(32π)) (1/N^{2/3}) [1 − 2 ln(3√(3λ)/(π(2N)^{2/3}))], (49)
y = (1/2) (2^{2/3}/3^{1/2}) (T_+/(32π)) (1/N) [1 − 2 ln(3√(3λ)/(π(2N)^{2/3}))].
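For a quick numerical feel for (49) (our sketch; the coupling λ and the temperature T_+ below are placeholder values, not numbers from the paper), note that x ∼ N^{−2/3} dominates y ∼ N^{−1} at large N:

```python
# Numerical illustration of the mass corrections (49): x scales as N^(-2/3)
# and y as N^(-1), so x dominates for large N. The coupling lam and T_plus
# are hypothetical placeholder values.
import math

lam, T_plus = 0.1, 1.0
for N in (10**2, 10**4, 10**6):
    log_term = 1 - 2 * math.log(3 * math.sqrt(3 * lam) / (math.pi * (2 * N) ** (2 / 3)))
    x = (1 / 3) * (T_plus / (32 * math.pi)) * N ** (-2 / 3) * log_term
    y = 0.5 * 2 ** (2 / 3) / 3 ** 0.5 * (T_plus / (32 * math.pi)) * (1 / N) * log_term
    print(f"N={N:>8}: x={x:.3e}  y={y:.3e}  x/y={x / y:.3e}")
```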
As one can see, these corrections are positive numbers, smaller than the masses (17) calculated in the SDA, as it should be in a consistent PT. Notice that the correction x is larger than the next-to-leading term in the SDA, ∼ 1/N. So the BSDA graphs deliver the next-to-leading correction. The value y is of the same order as the next-to-leading term in the SDA.
In a similar way the correction to the mass in the restored phase, ΔM_r, can be calculated. In this symmetric case all the components have equal masses, which in the SDA within the representation (43) are the solutions of the gap equation (Eq. (36) in Ref. [13] and Eq. (27) in Ref. [14]):

M²_r = −m² + λ(N+2)T²/(12N) − 2M_r λ(N+2)T/(8πN). (50)
It has a simple analytic solution

M^{(0)}_r = −λ(N+2)T/(8πN) + √[ (λ(N+2)T/(8πN))² − m² + λ(N+2)T²/(12N) ], (51)
where again the upper index zero marks the SDA result. The contribution of the bubble graphs (26) corresponds either to the broken or to the restored phase. The only difference is in the number of contributing field components: in the broken phase this is N − 1 and in the restored phase it is N, but this does not matter for large N. So, the function which has to be substituted into Eq. (50) is the one in Eq. (44) with the replacements N − 1 → N and M^{(0)}_φ → M^{(0)}_r. To calculate the mass correction z, which is assumed to be small, we put M_r = M^{(0)}_r + z into Eq. (50) and find in the limit N → ∞:
z = f(M^{(0)}_r) / (2M^{(0)}_r + λT/(4π)). (52)
Hence the BSDA graphs slightly decrease the lower spinodal temperature. Now we calculate the correction to T_+. To do that, let us turn again to the system (42) with the expressions (43), (46) substituted. From the second equation of the system we find
M_η = M_φ + (2πN/(λT)) (M²_φ − f(M^{(0)}_φ)) (53)
and insert this into the first equation to have

(2πN/(λT))² (M⁴_φ − 2M²_φ f) + M²_φ + (4πN/(λT)) (M³_φ − M_φ f)
  = 2m² − λ(N+2)T²/(6N) + (3λT/(2πN)) [M_φ + (2πN/(λT))(M²_φ − f)] + (λ(N−1)T/(2πN)) M_φ, (54)
where the terms linear in f are retained. This fourth-order algebraic equation can be rewritten in the dimensionless variables µ = (2πN/(λT)) M_φ and g = (2πN/(λT))² f as follows:

F(µ) = 0,  F(µ) = µ⁴ + 2µ³ − 2µ² − (N+2)µ − h − 2g(µ² + µ). (55)
Here the µ-independent function is

h = h^{(0)} − 3g = (2m² − λ(N+2)T²/(6N)) (2πN/(λT))² − 3g, (56)
and, as before, the SDA part is marked by the upper index zero. Recall that this equation determines the Goldstone field masses in the extrema of the free energy. It has two real solutions, corresponding to a minimum and a maximum. The equation (55) has roots for any N and T, but they are too lengthy to be displayed here.
To have simple analytic results and investigate the limit of large N, we proceed as in Ref. [14] and consider the condition for the upper spinodal temperature. In the case T = T_+ the two solutions merge and we have a second equation,
F′(µ) = µ³ + (3/2)µ² − (1/2)µ − (N/4 + 1/2) − g(µ + 1/2) = 0. (57)
It can be easily solved for large N:

µ = (N/4)^{1/3} − 1/2 + ... . (58)
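The asymptotics (58) can be verified numerically; this sketch (ours) solves the cubic F′(µ) = 0 of Eq. (57) with g = 0 and compares the real root with (N/4)^{1/3} − 1/2:

```python
# Check the large-N asymptotics (58): solve F'(mu) = mu^3 + (3/2) mu^2
# - (1/2) mu - (N/4 + 1/2) = 0 (with g = 0) and compare with (N/4)^(1/3) - 1/2.
import numpy as np

for N in (10, 10**3, 10**5):
    roots = np.roots([1.0, 1.5, -0.5, -(N / 4 + 0.5)])
    mu = max(r.real for r in roots if abs(r.imag) < 1e-6)
    asym = (N / 4) ** (1 / 3) - 0.5
    print(f"N={N:>6}: mu={mu:.6f}  (N/4)^(1/3)-1/2={asym:.6f}")
```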
Here the first two terms of the asymptotic expansion are included, which is sufficient for what follows. Notice that since the g-dependent terms are of next-next-to-leading order, they do not affect the main contributions. The function h can be calculated from Eq. (55), and its first two terms are

h^{(0)} = −3 (N/4)^{4/3} − (7/2) (N/4)^{2/3} + ... . (59)
The temperature T_+ computed from Eq. (56) can be rewritten in the form

T_+ = √2 (2πN/λ) [ 2π²(N+2)N/(3λ) + h^{(0)} − 3g ]^{−1/2}, (60)
where the values of h^{(0)}, Eq. (59), and g, Eq. (46), have to be substituted. Again, since g is a small next-next-to-leading term, the solution can be obtained perturbatively. We find

T_+ = T^{(0)}_+ [ 1 + (9λ/(32π²)) (1 − 2 ln(3√(3λ)/(π(2N)^{2/3}))) / N^{5/3} ], (61)
where the value T^{(0)}_+ = T^{(0)}_− (1 + (9λ/(16π²)) (1/(2N)^{2/3})) must be inserted.
From this result it follows that the upper spinodal temperature is slightly increased due to the BSDA contributions. However, this is a next-next-to-leading correction to the T^{(0)}_+ of the SDA.
Thus, we have calculated the main BSDA corrections to the particle masses and to the upper and lower spinodal temperatures in the (1/N)^{1/3} PT. We found that T_− is decreased and T_+ is increased as compared to the SDA results, due to the leading in 1/N BSDA graphs, the bubble chains. So, the strength of the first-order phase transition is slightly increased when this contribution is accounted for. However, this is a next-to-leading effect. In this way we confirm the results on the type of the phase transition obtained already in the SDA [13], [14].
Discussion
As the calculations carried out have shown, the phase transition in the O(N)-model at large finite N is weakly first-order. It becomes second-order in the limit N → ∞. This conclusion was obtained in the SDA and has been confirmed by the perturbative calculations in the consistent BSDA PT in the effective expansion parameter ǫ = (1/N)^{1/3}. This parameter appears near the phase transition temperature T_c at the SDA level. Let us summarize the main steps of the computation procedure applied and the approximations used to derive this.
In Refs. [12]-[14], a new method was proposed: the combination of the second Legendre transform with the consideration of the gap equations in the extrema of a free energy functional. This has simplified calculations considerably and resulted in transparent formulas for many interesting quantities. Within this approach the phase transitions in the O(1)- and O(N)-models were investigated in the super daisy and beyond approximations. To have analytic expressions, a high-temperature expansion was systematically applied and the ansatz β^{−1} = p² + M² for the full propagators has been used. This ansatz is exact for the SDA, which sums up the tadpole graphs completely. Just within these assumptions a first-order phase transition was observed and the effective parameter ǫ has been found. In terms of it all interesting characteristics can be expressed in the limit of large N, and a perturbation theory at T ∼ T_c can be constructed. This solves an old problem on the choice of a zero approximation for perturbative computations near T_c (if such exist). We have shown explicitly here that these are the SDA parameters, taken as the input values for the BSDA resummations. Clearly, this does not change qualitatively the results obtained in the SDA.
Two important questions should be answered in connection with the results presented. The first concerns the relation to other investigations where a second-order phase transition was observed (see, for example, the well-known literature [1]-[8]); in fact, this is the general opinion. The second concerns the non-zero mass (17) of the Goldstone excitations in the broken phase at finite temperature, which was determined in the SDA by solving the gap equations [13], [14].
Concerning the first question, we are able to analyze the papers [4], [5], [15], [16], [17], where analytic calculations have been reported. Partially that was done in Refs. [13], [14]; we repeat it here for completeness. First of all we notice that there are no discrepancies in the limit N → ∞, where all the methods find a second-order phase transition. The discrepancy concerns large finite N.
In Ref. [15] the renormalization group method at finite temperature has been used and a second-order phase transition was observed. Results obtained in this approach are difficult to compare with those found in the case of a standard renormalization at zero temperature. This is because a renormalization at finite temperature replaces some resummations of diagrams which remain basically unspecified. In Refs. [16], [17] a non-perturbative method for calculating the effective potential at finite temperature, an auxiliary-field method, has been developed, and a second-order phase transition was observed in both the one- and the N-component models. This approach seems to us not self-consistent, because it delivers an imaginary part to the effective potential in its minima. This point is crucial for any calculation procedure as a whole. Indeed, the minima of an effective potential describe the physical vacua of a system. An imaginary part signals either a false vacuum or the inconsistency of the calculation procedure used. This is well known beginning from the pioneering work by Dolan and Jackiw [18], who noted the necessity of resummations in order to have a real effective potential. In our method of calculation this requirement is automatically satisfied when the SDA diagrams are resummed in the extrema of the free energy. This consistent approximation is widely used and discussed in different aspects in the literature nowadays [19]-[21].
Let us discuss in more detail the results of Refs. [4], [5], where an interesting method, the method of the average potential, was developed and a second-order phase transition has been observed for any value of N. To be as transparent as possible we consider the equation for the effective potential derived in Ref. [5] (equation (3.13)),

U′(ρ, T) = λ̄ (a² + ρ − b √(U′(ρ, T))), (62)
where the notation is introduced: U′(ρ, T) is the derivative of U(ρ, T) with respect to ρ, ρ = (1/2) φ_a φ_a is the condensate value of the scalar field, and λ̄ is the coupling constant. The parameters a² and b are

a² = (T² − T²_cr) N/24,  T²_cr = 24 m²_R/(λ_R N),  b = NT/(8π). (63)
Here m_R and λ_R are renormalized values, and λ_R = λ̄ (1 + λ̄ N/(32π²) ln(L²/M²))^{−1} (see the cited paper for details).
Considering the temperatures T ∼ T_cr and ρ ≪ T², the authors have neglected U′(ρ, T) as compared to √(U′(ρ, T)) in Eq. (62) and obtained, after integration over ρ (formula (3.16) of Ref. [5]),

U(ρ, T) = (π²/9) ((T² − T²_cr)/T)² ρ + (1/N)(8π²/3) ((T² − T²_cr)/T²) ρ² + (1/N²)(64π²/3) (1/T²) ρ³. (64)
Near T ∼ T_cr this effective potential includes the ρ³ (φ⁶) term and predicts a second-order phase transition. That is the main conclusion. However, this analysis is insufficient to distinguish between a second-order and a weakly-first-order phase transition. Actually, the temperature T_cr determined from the initial condition U′(0, T_cr) = 0 in the former case corresponds to the temperature T_− in the latter one. To determine the type of the phase transition one has to verify whether or not a maximum exists at a finite distance from the origin in the ρ-plane. To check that, the equation (62) has to be integrated exactly, without truncations. This is not a difficult problem if one first solves Eq. (62) with respect to U′(ρ, T),

U′(ρ, T) = λ̄ [ a² + λ̄b²/2 + ρ − b √( λ̄(a² + ρ) + λ̄²b²/4 ) ], (65)
and then integrates to obtain

U(ρ, T) = λ̄ [ ((T² − T²_cr)N/24 + λ̄T²N²/(128π²)) ρ + (1/2) ρ² ]
  − (2/3) λ̄^{3/2} (TN/(8π)) [ (T² − T²_cr)N/24 + λ̄T²N²/(256π²) + ρ ]^{3/2}, (66)
where the values of a and b from Eq. (63) are accounted for. We see that, in contrast to Eq. (64), this potential includes a cubic term, which is responsible for the appearance of a maximum in some temperature interval and hence for a first-order phase transition, as is well known. Obviously, expanding the expression (66) at ρ → 0 and retaining the first three terms, one reproduces the ρ³ term of Eq. (64). This consideration convinces us that there are no discrepancies with the actual results of the average action method: the expression (62) gives the potential (66), predicting a first-order phase transition.
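As a consistency check (our sketch; the parameter values are placeholders), one can verify numerically that the closed form (65) solves the self-consistency equation (62):

```python
# Verify numerically that U'(rho, T) from Eq. (65) solves Eq. (62),
# i.e. u = lam * (a2 + rho - b * sqrt(u)). All parameter values below are
# hypothetical placeholders, not numbers from the paper.
import numpy as np

lam, a2, b = 0.5, -0.02, 1.3        # lambda-bar, a^2 (here T < T_cr), b
rho = np.linspace(0.05, 2.0, 10)    # keep a2 + rho > 0

u = lam * (a2 + lam * b**2 / 2 + rho
           - b * np.sqrt(lam * (a2 + rho) + lam**2 * b**2 / 4))
residual = u - lam * (a2 + rho - b * np.sqrt(u))
print(np.max(np.abs(residual)))     # ~ 1e-16: (65) satisfies (62)
```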
Our final remarks are on the Goldstone theorem at finite temperature. In fact, at T ≠ 0 the Goldstone bosons need not inevitably be massless, as was argued, in particular, in Ref. [22]. Formally, the reason is that at finite temperature Lorentz invariance is broken, and therefore the condition p² = 0 does not imply zero mass of the particle, in contrast to the situation at T = 0 (see Ref. [22] for details). We observed in the SDA that this is the case when the first-order phase transition happens. In the limit N → ∞, corresponding to a second-order phase transition, the Goldstone bosons are massless, as is seen from Eq. (17). The same conclusion follows from the results of Ref. [5] when a second-order phase transition is assumed. Probably this problem requires a separate investigation by means of other methods.
One of the authors (V.S.) thanks the Institute for Theoretical Physics, University of Leipzig, for hospitality while the final part of this work was being done.
1. J. Zinn-Justin, Quantum Field Theory and Critical Phenomena (Clarendon, Oxford, 1996).
2. A. Linde, Particle Physics and Inflationary Cosmology (Harwood Academic, Chur, Switzerland, 1990).
3. J.I. Kapusta, Finite-Temperature Field Theory (Cambridge University Press, Cambridge, England, 1989).
4. N. Tetradis and C. Wetterich, Nucl. Phys. B398, 659 (1993).
5. M. Reuter, N. Tetradis and C. Wetterich, Nucl. Phys. B401, 567 (1993).
6. J. Adams et al., Mod. Phys. Lett. A 10, 2367 (1995).
7. I. Montvay and G. Muenster, Quantum Fields on a Lattice, Cambridge Monographs on Mathematical Physics (Cambridge University Press, Cambridge, England, 1994).
8. Z. Fodor, J. Hein, K. Jansen, A. Jaster and I. Montvay, Nucl. Phys. B439, 147 (1995).
9. K. Takahashi, Z. Phys. C 26, 601 (1985).
10. M.E. Carrington, Phys. Rev. D 45, 2933 (1992).
11. P. Arnold, Phys. Rev. D 46, 2628 (1992).
12. M. Bordag and V. Skalozub, J. Phys. A 34, 461 (2001).
13. M. Bordag and V. Skalozub, Phys. Rev. D 65, 085025 (2002).
14. M. Bordag and V. Skalozub, Phys. Lett. B 533, 189 (2002).
15. P. Elmfors, Z. Phys. C 56, 601 (1992).
16. K. Ogure and J. Sato, Phys. Rev. D 57, 7460 (1998).
17. K. Ogure and J. Sato, Phys. Rev. D 58, 085010 (1998).
18. L. Dolan and R. Jackiw, Phys. Rev. D 9, 3320 (1974).
19. I.T. Drummond, R.R. Horgan, P.V. Landshoff and A. Rebhan, Phys. Lett. B 398, 326 (1997).
20. I.T. Drummond, R.R. Horgan, P.V. Landshoff and A. Rebhan, Nucl. Phys. B 524, 579 (1998).
21. A. Peshier, Phys. Rev. D 63, 105004 (2001).
22. K.L. Kowalski, Phys. Rev. D 35, 3940 (1987).
| [] |
[
"Sparse Matrix Multiplication in the Low-Bandwidth Model",
"Sparse Matrix Multiplication in the Low-Bandwidth Model"
] | [
"Chetan Gupta \nAalto University\nAalto University\nAalto University\nAalto University\n\n",
"Chetan Gupta@aalto Fi · \nAalto University\nAalto University\nAalto University\nAalto University\n\n",
"Juho Hirvonen [email protected] \nAalto University\nAalto University\nAalto University\nAalto University\n\n",
"Janne H Korhonen [email protected] \nAalto University\nAalto University\nAalto University\nAalto University\n\n",
"· Tu Berlin \nAalto University\nAalto University\nAalto University\nAalto University\n\n",
"Jan Studený \nAalto University\nAalto University\nAalto University\nAalto University\n\n",
"Jan Studeny@aalto Fi · \nAalto University\nAalto University\nAalto University\nAalto University\n\n",
"Jukka Suomela [email protected] \nAalto University\nAalto University\nAalto University\nAalto University\n\n"
] | [
"Aalto University\nAalto University\nAalto University\nAalto University\n",
"Aalto University\nAalto University\nAalto University\nAalto University\n",
"Aalto University\nAalto University\nAalto University\nAalto University\n",
"Aalto University\nAalto University\nAalto University\nAalto University\n",
"Aalto University\nAalto University\nAalto University\nAalto University\n",
"Aalto University\nAalto University\nAalto University\nAalto University\n",
"Aalto University\nAalto University\nAalto University\nAalto University\n",
"Aalto University\nAalto University\nAalto University\nAalto University\n"
] | [] | We study matrix multiplication in the low-bandwidth model: There are n computers, and we need to compute the product of two n × n matrices. Initially computer i knows row i of each input matrix. In one communication round each computer can send and receive one O(log n)-bit message. Eventually computer i has to output row i of the product matrix.We seek to understand the complexity of this problem in the uniformly sparse case: each row and column of each input matrix has at most d non-zeros and in the product matrix we only need to know the values of at most d elements in each row or column. This is exactly the setting that we have, e.g., when we apply matrix multiplication for triangle detection in graphs of maximum degree d. We focus on the supported setting: the structure of the matrices is known in advance; only the numerical values of nonzero elements are unknown.There is a trivial algorithm that solves the problem in O(d 2 ) rounds, but for a large d, better algorithms are known to exist; in the moderately dense regime the problem can be solved in O(dn 1/3 ) communication rounds, and for very large d, the dominant solution is the fast matrix multiplication algorithm using O(n 1.158 ) communication rounds (for matrix multiplication over fields and rings supporting fast matrix multiplication).In this work we show that it is possible to overcome quadratic barrier for all values of d: we present an algorithm that solves the problem in O(d 1.907 ) rounds for fields and rings supporting fast matrix multiplication and O(d 1.927 ) rounds for semirings, independent of n. | 10.1145/3490148.3538575 | [
"https://arxiv.org/pdf/2203.01297v3.pdf"
] | 247,218,595 | 2203.01297 | 530c505dba6d5b5fcb3fa17ec6754f278af4fed1 |
Sparse Matrix Multiplication in the Low-Bandwidth Model

Chetan Gupta · Aalto University
Juho Hirvonen · Aalto University
Janne H. Korhonen · TU Berlin
Jan Studený · Aalto University
Jukka Suomela · Aalto University
We study matrix multiplication in the low-bandwidth model: There are n computers, and we need to compute the product of two n × n matrices. Initially computer i knows row i of each input matrix. In one communication round each computer can send and receive one O(log n)-bit message. Eventually computer i has to output row i of the product matrix.We seek to understand the complexity of this problem in the uniformly sparse case: each row and column of each input matrix has at most d non-zeros and in the product matrix we only need to know the values of at most d elements in each row or column. This is exactly the setting that we have, e.g., when we apply matrix multiplication for triangle detection in graphs of maximum degree d. We focus on the supported setting: the structure of the matrices is known in advance; only the numerical values of nonzero elements are unknown.There is a trivial algorithm that solves the problem in O(d 2 ) rounds, but for a large d, better algorithms are known to exist; in the moderately dense regime the problem can be solved in O(dn 1/3 ) communication rounds, and for very large d, the dominant solution is the fast matrix multiplication algorithm using O(n 1.158 ) communication rounds (for matrix multiplication over fields and rings supporting fast matrix multiplication).In this work we show that it is possible to overcome quadratic barrier for all values of d: we present an algorithm that solves the problem in O(d 1.907 ) rounds for fields and rings supporting fast matrix multiplication and O(d 1.927 ) rounds for semirings, independent of n.
Introduction
We study the task of multiplying very large but sparse matrices in distributed and parallel settings: there are n computers, each computer knows one row of each input matrix, and each computer needs to output one row of the product matrix. There are numerous efficient matrix multiplication algorithms for dense and moderately sparse matrices, e.g. [4, 5, 7, 15]; however, these works focus on a high-bandwidth setting, where the problem becomes trivial for very sparse matrices. We instead take a more fine-grained approach and consider uniformly sparse matrices in a low-bandwidth setting. In this regime, the state-of-the-art algorithm for very sparse matrices is a trivial algorithm that takes O(d²) communication rounds; here d is a density parameter we will shortly introduce. In this work we present the first algorithm that breaks the quadratic barrier, achieving a running time of O(d^{1.907}) rounds. We will now introduce the sparse matrix multiplication problem in detail in Section 1.1, and then describe the model of computing we will study in this work in Section 1.2.
Setting: uniformly sparse square matrices
In general, the complexity of matrix multiplication depends on the sizes and shapes of the matrices, as well as the density of each matrix and the density of the product matrix; moreover, density can refer to e.g. the average or maximum number of non-zero elements per row and column. In order to focus on the key challenge-overcoming the quadratic barrier-we will introduce here a very simple version of the sparse matrix multiplication problem, with only two parameters: n and d.
We are given three n × n matrices, A, B, and X̂. Each matrix is uniformly sparse in the following sense: there are at most d non-zero elements in each row, and at most d non-zero elements in each column. Here A and B are our input matrices, and X̂ is an indicator matrix. Our task is to compute the matrix product X = AB, but we only need to report those elements of X that correspond to non-zero elements of X̂. That is, we need to output X_ik = Σ_j A_ij B_jk for each i, k with X̂_ik ≠ 0.
Note that the product X itself does not need to be sparse (indeed, there might be up to Θ(d²) non-zero elements per row and column); it is enough that the set of elements that we care about is sparse.
We will study here both matrix multiplication over rings and matrix multiplication over semirings.
Application: triangle detection and counting. While the problem setting is rather restrictive, it captures precisely e.g. the widely-studied task of triangle detection and counting in bounded-degree graphs (see e.g. [6, 8-10, 12, 14, 17] for prior work related to triangle detection in distributed settings). Assume G is a graph with n nodes, of maximum degree d, represented as an adjacency matrix; note that this matrix is uniformly sparse. We can set A = B = X̂ = G, and compute X = AB in the uniformly sparse setting. Now consider an edge (i, k) in G; by definition we have X̂_ik ≠ 0, and hence we have computed the value of X_ik. We will have X_ik ≠ 0 if and only if there exists a triangle of the form {i, j, k} in the graph G; moreover, X_ik will indicate the number of such triangles, and given X̂ and X, we can easily also calculate the total number of triangles in the graph (keeping in mind that each triangle gets counted exactly 6 times).
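To make the reduction concrete, here is a minimal sequential sketch (ours, not from the paper): with A = B = X̂ = G, a single masked matrix product counts the triangles of a random bounded-degree graph.

```python
# Triangle counting via one masked matrix product: with A = B = X_hat = G,
# the entry X[i, k] (needed only where G[i, k] != 0) counts the common
# neighbours j of i and k; summing over the edges counts each triangle 6 times.
import numpy as np

rng = np.random.default_rng(1)
n, d = 60, 4
G = np.zeros((n, n), dtype=int)          # adjacency matrix, max degree <= d
for _ in range(n * d // 2):
    i, k = rng.integers(n, size=2)
    if i != k and G[i].sum() < d and G[k].sum() < d:
        G[i, k] = G[k, i] = 1

X = (G @ G) * G                          # product masked by the indicator X_hat = G
print("triangles:", X.sum() // 6)
```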
Supported low-bandwidth model
Low-bandwidth model. We seek to solve the uniformly sparse matrix multiplication problem using n parallel computers, in a message-passing setting. Each computer has got its own local memory, and initially computer number i knows row i of each input matrix. Computation proceeds in synchronous communication rounds, and in each round each computer can send one O(log n)-bit message to another computer and receive one such message from another computer (we will assume that the elements of the ring or semiring over which we do matrix multiplication can be encoded in such messages). Eventually, computer number i will have to know row i of the product matrix (or more precisely, those elements of the row that we care about, as indicated by row i of matrixX). We say that an algorithm runs in time T if it stops after T communication rounds (note that we do not restrict local computation here; the focus is on communication, which tends to be the main bottleneck in large-scale computer systems).
Recent literature has often referred to this model as the node-capacitated clique [2] or nodecongested clique. It is also a special case of the classical bulk synchronous parallel model [19], with local computation considered free. In this work we will simply call this model the low-bandwidth model, to highlight the key difference with e.g. the congested clique model [16] which, in essence, is the same model with n times more bandwidth per node per round.
Supported version. To focus on the key issue here, the quadratic barrier, we will study the supported version of the low-bandwidth model: we assume that the structure of the input matrices is known in advance. More precisely, we know in advance uniformly sparse indicator matrices Â, B̂, and X̂, where Â_ij = 0 implies that A_ij = 0, B̂_jk = 0 implies that B_jk = 0, and X̂_ik = 0 implies that we do not need to calculate X_ik. However, the values of A_ij and B_jk are only revealed at run time.
In the triangle counting application, the supported model corresponds to the following setting: We have got a fixed, globally known graph G, with degree at most d. Edges of G are colored either red or blue, and we need to count the number of triangles formed by red edges; however, the information on the colors of the edges is only available at run time. Here we can set Â = B̂ = X̂ = G, and then the input matrix A = B will indicate which edges are red.
We note that, while the supported version is a priori a significant strengthening of the low-bandwidth model, it is known that e.g. the supported version of the CONGEST model is not significantly stronger than baseline CONGEST, and almost all communication complexity lower bounds for CONGEST are also known to apply for supported CONGEST [11].
Contributions and prior work
The high-level question we set out to investigate here is as follows: what is the round complexity of uniformly sparse matrix multiplication in the supported low-bandwidth model, as a function of the parameters n and d? Figure 1 gives an overview of what was known from prior work and what our contribution is; the complexity of the problem lies in the shaded region.
The complexity has to be at least Ω(d) rounds, by a simple information-theoretic argument (in essence, all d units of information held by a node have to be transmitted to someone else); this gives the lower bound marked with (a) in Fig. 1. Above Ω(d) the problem definition is also robust to minor variations (e.g., it does not matter whether element A ij is initially held by node i, node j, or both, as in O(d) additional rounds we can transpose a sparse matrix).
The complexity of dense matrix multiplication over rings in the low-bandwidth model is closely connected to the complexity of matrix multiplication with centralized sequential algorithms: if there is a centralized sequential algorithm that solves matrix multiplication with O(n^ω) element-wise multiplications, there is also an algorithm for the congested clique model that runs in O(n^{1−2/ω}) rounds [5], and this can be simulated in the low-bandwidth model in O(n^{2−2/ω}) rounds. For fields and at least certain rings such as integers, we can plug in the latest bound ω < 2.3728596 [1], arriving at O(n^{1.158}) rounds; this is the upper bound marked with (b) in Fig. 1.

For sparse matrices we can do better by using the algorithm from [7]. This algorithm is applicable for both rings and semirings, and in our model the complexity is O(dn^{1/3}) rounds; this is illustrated with line (c). However, for small values of d we can do much better even with a trivial algorithm where node j sends each B_jk to every node i with Â_ij ≠ 0; then node i can compute X_ik for all k. Here each node sends and receives O(d²) values, and this takes O(d²) rounds (see Lemma 18 for the details). We arrive at the upper bound shown in line (e).

In summary, the problem can be solved in O(d²) rounds, independent of n. However, when d increases, we have got better upper bounds, and for d = n we eventually arrive at O(n^{1.158}) rounds. We prove that the quadratic barrier can be indeed broken; our new upper bound is shown with line (d) in Fig. 1:
Theorem 1.
There is an algorithm that solves uniformly sparse matrix multiplication over fields and rings supporting fast matrix multiplication in O(d 1.907 ) rounds in the supported low-bandwidth model.
It should be noted that the value of the matrix multiplication exponent ω can depend on the ground field or ring we are operating over. The current best bound ω < 2.3728596 [1] holds over any field and at least certain rings such as integers, and Theorem 1 should be understood as tacitly referring to rings for which this holds. More generally, for example Strassen's algorithm [18] giving ω < 2.8074 can be used over any ring, yielding a running time of O(d 1.923 ) rounds using techniques of Theorem 1. We refer interested readers to [7] for details of translating matrix multiplication exponent bounds to congested clique algorithms, and to [3] for more technical discussion on matrix multiplication exponent in general.
We can also break the quadratic barrier for arbitrary semirings, albeit with a slightly worse exponent:
Theorem 2.
There is an algorithm that solves uniformly sparse matrix multiplication over semirings in O(d 1.927 ) rounds in the supported low-bandwidth model.
We see our work primarily as a proof of concept for breaking the barrier; there is nothing particularly magic about the specific exponent 1.907, other than that it demonstrates that values substantially smaller than 2 can be achieved; there is certainly room for improvement in future work, and one can verify that even with ω = 2 we do not get O(d) round complexity. Also, we expect that the techniques that we present here are applicable also beyond the specific case of the supported low-bandwidth model and uniformly sparse matrices.

Open questions

(1) What is the smallest α such that uniformly sparse matrix multiplication can be solved in O(d^α) rounds in the supported low-bandwidth model?

(2) Can we eliminate the assumption that there is a support (i.e., the structure of the matrices is known)?
(3) Can we use fast uniformly sparse matrix multiplication to obtain improvements for the general sparse case, e.g. by emulating the Yuster-Zwick algorithm [20]?
(4) Can the techniques that we introduce here be applied also in the context of the CONGEST model (cf. [8,9,12,14])?
Proof overview and key ideas
Even though the task at hand is about linear algebra, it turns out that it is helpful to structure the algorithm around graph-theoretic concepts.
Nodes, triangles, and graphs
In what follows, it will be convenient to view our input and our set of computers as having a tripartite structure. Let I, J, and K be disjoint sets of size n; we will use these sets to index the matrices A, B, and X so that the elements are A_ij, B_jk, and X_ik for i ∈ I, j ∈ J, and k ∈ K. Likewise, we use the sets I, J, and K to index the indicator matrices Â, B̂, and X̂ given to the computers in advance. We will collectively refer to V = I ∪ J ∪ K as the set of nodes. We emphasize that we have got |V| = 3n, so let us be careful not to confuse |V| and n. Concretely, we interpret this so that we are computing an n × n matrix product using 3n computers: each node v ∈ V is a computer, such that node i ∈ I initially knows A_ij for all j, node j ∈ J initially knows B_jk for all k, and node i ∈ I needs to compute X_ik for all k. In our model of computing we had only n computers, but we can transform our problem instance into the tripartite formalism by having one physical computer simulate 3 virtual computers in a straightforward manner. This simulation only incurs constant-factor overhead in running times, so we will henceforth assume this setting as given.
Now we are ready to introduce the key concept we will use throughout this work:
Definition 3. Let i ∈ I, j ∈ J, and k ∈ K. We say that {i, j, k} is a triangle if Â_ij ≠ 0, B̂_jk ≠ 0, and X̂_ik ≠ 0. We write T̂ for the set of all triangles.
In other words, a triangle {i, j, k} corresponds to a possibly non-zero product A ij B jk included in an output X ik we need to compute.
Let T ⊆ T̂ be a set of triangles. We write G(T) for the graph G(T) = (V, E), where E consists of all edges {u, v} such that {u, v} ⊆ T for some triangle T ∈ T. As the matrices are uniformly sparse, we have the following simple observations:
Observation 4. Each node i ∈ I in G(T̂) is adjacent to at most d nodes of J and at most d nodes of K. A similar observation holds for the nodes of J and K. In particular, the maximum degree of G(T̂) is at most 2d, and hence the maximum degree of G(T) for any T is also at most 2d.
Observation 5. Each node i ∈ I can belong to at most d² triangles, and hence the total number of triangles in T̂ is at most d²n.

Observation 6. There are at most dn edges between J and K in the graph G(T̂).

Note that T̂ only depends on Â, B̂, and X̂, which are known in the supported model; it is independent of the values of A and B.
Processing triangles
We initialize X_ik ← 0; this is a variable held by the computer responsible for node i ∈ I. We say that we have processed a set of triangles T ⊆ T̂ if the current value of X_ik equals the sum of the products A_ij B_jk over all j such that {i, j, k} ∈ T.

By definition, we have solved the problem once we have processed all triangles in T̂. Hence all that we need to do is to show that all triangles can be processed in O(d^{1.907}) rounds.
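A minimal sequential sketch (ours) of these definitions: T̂ is precomputed from the indicator matrices alone, as is possible in the supported model, and processing all triangles of T̂ reproduces the masked product. The random instance below is only illustrative.

```python
# Precompute the triangle set T_hat from the indicator matrices alone
# (supported model), then "process" triangles by accumulating A[i,j]*B[j,k]
# into X[i,k]; processing all of T_hat yields the masked product.
import numpy as np

rng = np.random.default_rng(2)
n, d = 8, 3
A_hat = B_hat = X_hat = (rng.random((n, n)) < d / n).astype(int)  # ~d per row
A = A_hat * rng.integers(1, 5, (n, n))   # values revealed only at run time
B = B_hat * rng.integers(1, 5, (n, n))

T_hat = [(i, j, k) for i in range(n) for j in range(n) for k in range(n)
         if A_hat[i, j] and B_hat[j, k] and X_hat[i, k]]

X = np.zeros((n, n), dtype=int)
for i, j, k in T_hat:                    # processing the triangles of T_hat
    X[i, k] += A[i, j] * B[j, k]

assert (X == (A @ B) * X_hat).all()
```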
Clustering triangles
The following definitions are central in our work; see Fig. 2 for an illustration.

Definition 7. A set U ⊆ V is a cluster if it contains exactly d nodes of I, exactly d nodes of J, and exactly d nodes of K.

If T ⊆ T̂ is a set of triangles and U is a cluster, we write

T[U] = {T ∈ T : T ⊆ U}

for the set of triangles contained in U.
Definition 8. A collection of triangles P ⊆ T̂ is clustered if there are disjoint clusters U_1, ..., U_k ⊆ V such that P = P[U_1] ∪ · · · ∪ P[U_k].
That is, P is clustered if the triangles of P can be partitioned into small node-disjoint tripartite structures. The key observation is this:
Lemma 9.
For matrix multiplication over rings, if P ⊆ T̂ is clustered, then all triangles in P can be processed in O(d^{1.158}) rounds.
Proof. Let U 1 , . . . , U k ⊆ V denote the clusters (as in Definition 8). The task of processing P[U i ] using the computers of U i is equivalent to a dense matrix multiplication problem in a network with 3d nodes. Hence each subset of nodes U i can run the dense matrix multiplication algorithm from [5] in parallel, processing all triangles of P in O(d 1.158 ) rounds.
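The following toy sketch (ours) shows the point of the proof in miniature: because the clusters are node-disjoint and every triangle of P lies in exactly one cluster, each cluster can multiply its own d × d blocks independently, and the block products agree with triangle-by-triangle processing.

```python
# A clustered collection P is processed by letting each cluster run a dense
# multiplication on its own blocks; the clusters are node-disjoint, so these
# block products are independent. Two toy clusters, all entries kept.
import numpy as np

n = 6
rng = np.random.default_rng(3)
A = rng.integers(0, 3, (n, n))
B = rng.integers(0, 3, (n, n))
clusters = [([0, 1, 2],) * 3, ([3, 4, 5],) * 3]   # (I-, J-, K-side nodes)

X_blocks = np.zeros((n, n), dtype=int)
for I0, J0, K0 in clusters:                       # dense product per cluster
    X_blocks[np.ix_(I0, K0)] += A[np.ix_(I0, J0)] @ B[np.ix_(J0, K0)]

X_tri = np.zeros((n, n), dtype=int)               # triangle-by-triangle reference
for I0, J0, K0 in clusters:
    for i in I0:
        for j in J0:
            for k in K0:
                X_tri[i, k] += A[i, j] * B[j, k]
assert (X_blocks == X_tri).all()
```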
By applying the dense matrix multiplication algorithm for semiring from [5], we also get:
Lemma 10. For matrix multiplication over semirings, if P ⊆ T̂ is clustered, then all triangles in P can be processed in O(d^{4/3}) rounds.

Figure 2: A collection of triangles that is clustered, for d = 3; in this example we have two clusters, U_1 and U_2. Note that there can be nodes that are not part of any cluster, but each triangle has to be contained in exactly one cluster.
High-level idea
Now we are ready to present the high-level idea; see Fig. 3 for an illustration. We show that any T̂ can be partitioned into two components, T̂ = T_I ∪ T_II, where T_I has got a nice structure that makes it easy to process, while T_II is small. The more time we put into processing T_I, the smaller we can make T_II, and small sets are fast to process.

Component T_I is large but clustered. To construct T_I, we start with T_1 = T̂. Then we repeatedly choose a clustered subset P_i ⊆ T_i, and let T_{i+1} = T_i \ P_i. Eventually, after some L steps, T_{L+1} will be sufficiently small, and we can stop.
As each set P i is clustered, by Lemma 9 we can process each of them in O(d 1.158 ) rounds, and hence the overall running time will be O(Ld 1.158 ) rounds. We choose a small enough L so that the total running time is bounded by O(d 1.907 ), as needed.
This way we have processed T I = P 1 ∪ · · · ∪ P L . We will leave the remainder T II = T L+1 for the second phase.
For all of this to make sense, the sets P_i must be sufficiently large so that we make rapid progress towards a small remainder T_II. Therefore the key nontrivial part of our proof is a graph-theoretic result that shows that if T is sufficiently large, we can always find a large clustered subset P ⊆ T. To prove this, we first show that if T is sufficiently large, we can find a cluster U_1 that contains many triangles. Then we discard U_1 and all triangles touching it, and repeat the process until T becomes small enough; this way we have iteratively discovered a large clustered subset

P = P[U_1] ∪ · · · ∪ P[U_k] ⊆ T,

together with disjoint clusters U_1 ∪ · · · ∪ U_k ⊆ V. For the details of this analysis we refer to Section 4.

Figure 3: We can process T_I efficiently by applying dense matrix multiplication inside each cluster, and we can process T_II efficiently since it is small.
Component T_II is small. Now we are left with only a relatively small collection of triangles T_II; we chose the parameters so that the total number of triangles in T_II is O(d^{1.814} n). We would now like to process all of T_II in O(d^{1.907}) rounds.
The key challenge here is that the structure of T II can be awkward: even though the average number of triangles per node is low, there might be some nodes that are incident to a large number of triangles. We call nodes that touch too many triangles bad nodes, and triangles that touch bad nodes are bad triangles.
We first show how we can process all good triangles. This is easy: we can, in essence, make use of the trivial brute-force algorithm.
Then we will focus on the bad triangles. The key observation is that there are only a few bad nodes. If we first imagined that each bad node tried to process its own share of bad triangles, a vast majority of the nodes in the network would be idle. Hence each bad node is able to recruit a large number of helper nodes, and with their assistance we can process also all bad triangles sufficiently fast. We refer to Section 5 for the details.
Finding one cluster
Now we will introduce the key technical tool that we will use to construct a decomposition T̂ = T_I ∪ T_II, where T_I is clustered and T_II is small. In this section we will show that given any sufficiently large collection of triangles T ⊆ T̂, we can always find one cluster U ⊆ V that contains many triangles (recall Definition 7). The idea is that we will then later apply this lemma iteratively to construct the clustered sets P_1 ∪ · · · ∪ P_L = T_I.

Lemma 11. Assume that n ≥ d and ε ≥ 0. Let T ⊆ T̂ be a collection of triangles with

|T| ≥ d^{2−ε} n.

Then there exists a cluster U ⊆ V with

|T[U]| ≥ (1/24) d^{3−4ε}.
Before we prove this lemma, it may be helpful to first build some intuition about the claim. When ε = 0, the assumption is that there are d²n triangles in T. Recall that this is also the largest possible number of triangles (Observation 5). One way to construct a collection with that many triangles is to split the 3n nodes of V into n/d clusters, with 3d nodes in each; inside each cluster we can have d³ triangles. But in this construction we can trivially find a cluster U ⊆ V with |T[U]| = d³; indeed, the entire collection of triangles is clustered. Now one could ask if there is some clever way to construct a large collection of triangles that cannot be clustered. What Lemma 11 shows is that the answer is no: as soon as you somehow put d²n triangles into the collection T (while respecting the assumption that the triangles are defined by some uniformly sparse matrices, and hence G(T) is of degree at most d), you cannot avoid creating at least one cluster that contains Ω(d³) triangles. A similar claim then holds also for slightly lower numbers of triangles; e.g., if the total number of triangles is d^{1.99} n, we show that you can still find a cluster with Ω(d^{2.96}) triangles.
Let us now prove the lemma. Our proof is constructive; it will also give a procedure for finding such a cluster.
Proof of Lemma 11. Consider the tripartite graph G(T) defined by the collection T. Let {j, k} be an edge with j ∈ J and k ∈ K. We call the edge {j, k} heavy if there are at least (1/2) d^{1−ε} triangles T with {j, k} ⊆ T. We call a triangle heavy if its J-K edge is heavy.
By Observation 6 there can be at most dn non-heavy J-K edges, and by definition each of them can contribute to at most (1/2) d^{1−ε} triangles. Hence the number of non-heavy triangles is at most (1/2) d^{2−ε} n, and therefore the number of heavy triangles is at least (1/2) d^{2−ε} n. Let T_0 ⊆ T be the set of heavy triangles. For the remainder of the proof we will study the properties of this subset and the tripartite graph G(T_0) defined by it. So from now on, all triangles are heavy, and all edges refer to the edges of G(T_0).
First, we pick a node x ∈ I that touches at least (1/2) d^{2−ε} triangles; such a node has to exist, as on average each node of I touches at least (1/2) d^{2−ε} triangles. Let J_0 be the set of J-corners and K_0 the set of K-corners of these triangles. By Observation 4, we have |J_0| ≤ d and |K_0| ≤ d. We label all nodes i ∈ I by the following values:

- t(i) is the number of triangles of the form i-J_0-K_0,
- y(i) is the number of i-J_0 edges,
- z(i) is the number of i-K_0 edges,
- e(i) = y(i) + z(i) is the total number of edges from i to J_0 ∪ K_0.
By the choice of x, we have got at least (1/2) d^{2−ε} triangles of the form x-J_0-K_0. Therefore there are also at least (1/2) d^{2−ε} edges of the form J_0-K_0, and all of these are heavy, so each of them is contained in at least (1/2) d^{1−ε} triangles. We have

Σ_{i∈I} t(i) ≥ (1/2) d^{2−ε} · (1/2) d^{1−ε} ≥ (1/4) d^{3−2ε}. (1)
Since J_0 and K_0 have at most d nodes, each of degree at most d, there are at most d² edges of the form I-J_0, and at most d² edges of the form I-K_0. Therefore

Σ_{i∈I} y(i) ≤ d²,  Σ_{i∈I} z(i) ≤ d²,  Σ_{i∈I} e(i) ≤ 2d². (2)
For each triangle of the form i-J_0-K_0 there has to be an edge i-J_0 and an edge i-K_0, and hence for each i ∈ I we have got

t(i) ≤ y(i) z(i) ≤ e(i)²/4. (3)
Let I_0 ⊆ I consist of the d nodes with the largest t(i) values (breaking ties arbitrarily), and let I_1 = I \ I_0. Define

T_0 = Σ_{i∈I_0} t(i),  T_1 = Σ_{i∈I_1} t(i),  t_0 = min_{i∈I_0} t(i).
First assume that

T_0 < (1/24) d^{3−4ε}. (4)

Then

t_0 ≤ T_0/d < (1/24) d^{2−4ε}. (5)

By definition, t(i) ≤ t_0 for all i ∈ I_1. By (3) we have got

t(i) ≤ √(t(i)) · e(i)/2 ≤ (√(t_0)/2) · e(i) (6)

for all i ∈ I_1. But from (2) we have

Σ_{i∈I_1} e(i) ≤ Σ_{i∈I} e(i) ≤ 2d². (7)

By putting together (5), (6), and (7), we get

T_1 = Σ_{i∈I_1} t(i) ≤ Σ_{i∈I_1} (√(t_0)/2) e(i) ≤ √(t_0) d² < (1/√24) d^{3−2ε}.
But we also have from (4) that

T_0 < (1/24) d^{3−4ε} ≤ (1/24) d^{3−2ε},

and therefore we get

Σ_{i∈I} t(i) = T_0 + T_1 < (1/24 + 1/√24) d^{3−2ε} < (1/4) d^{3−2ε},

but this contradicts (1). Therefore we must have

T_0 ≥ (1/24) d^{3−4ε}.

Recall that there are exactly T_0 triangles of the form I_0-J_0-K_0, the set I_0 contains by construction exactly d nodes, and J_0 and K_0 contain at most d nodes. Since we had n ≥ d, we can now arbitrarily add nodes from J to J_0 and nodes from K to K_0 so that each of them has size exactly d. Then U = I_0 ∪ J_0 ∪ K_0 is a cluster with |T[U]| ≥ T_0 ≥ (1/24) d^{3−4ε}.
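A sequential sketch (ours) of the constructive search in this proof; node identifiers on the three sides are assumed pairwise distinct, and the final padding of J_0 and K_0 up to size exactly d is omitted:

```python
# Sketch of the cluster search behind Lemma 11: keep the heavy triangles,
# pick a node x in I touching many of them, take the J- and K-corners of
# those triangles as J0, K0, and keep the d nodes of I with the most
# J0-K0 triangles. Input: T as a list of (i, j, k) triples, assumed large
# enough to satisfy the hypothesis of Lemma 11.
from collections import Counter

def find_cluster(T, d, eps):
    jk_count = Counter((j, k) for _, j, k in T)
    heavy = [(i, j, k) for (i, j, k) in T
             if jk_count[(j, k)] >= 0.5 * d ** (1 - eps)]
    per_i = Counter(i for (i, _, _) in heavy)
    x = max(per_i, key=per_i.get)             # node touching many heavy triangles
    J0 = {j for (i, j, _) in heavy if i == x}
    K0 = {k for (i, _, k) in heavy if i == x}
    t = Counter(i for (i, j, k) in heavy if j in J0 and k in K0)
    I0 = {i for i, _ in t.most_common(d)}     # d nodes with the largest t(i)
    return I0, J0, K0                         # pad J0, K0 to size d if needed
```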
Finding many disjoint clusters
In Section 3 we established our key technical tool. We are now ready to start following the high-level plan explained in Section 2.4 and Fig. 3. Recall that the plan is to partition T̂ into two parts, T_I and T_II, where T_I has got a nice clustered structure and the remaining part T_II is small. In this section we will show how to construct and process T_I. In Section 5 we will then see how to process the remaining part T_II.
One clustered set
With the following lemma we can find one large clustered subset P ⊆ T for any sufficiently large collection of triangles T. In Fig. 3, this corresponds to the construction of, say, P 1 .
Lemma 12. Let ε_2 ≥ 0 and δ > 0, and assume that d is sufficiently large. Let T ⊆ T̂ be a collection of triangles with |T| ≥ d^{2−ε_2} n. Then T can be partitioned into disjoint sets

T = P ∪ T′,

where P is clustered and

|P| ≥ (1/144) d^{2−5ε_2−4δ} n.
Proof. We construct P and T′ iteratively as follows. Start with the original collection T. Then we repeat the following steps until there are fewer than d^{2−ε_2−δ} n triangles left in T:

(1) Apply Lemma 11 to T with ε = ε_2 + δ to find a cluster U.

(2) For each triangle T ∈ T[U], add T to P and remove it from T.

(3) For each triangle T ∈ T with T ∩ U ≠ ∅, add T to T′ and remove it from T.

Finally, for each triangle T ∈ T that still remains, we add T to T′ and remove it from T. Now, by construction, P ∪ T′ is a partition of the original collection T. We have also constructed a set P that is almost clustered: if U shares some node v with a cluster that was constructed earlier, then v is not incident to any triangle of T, and hence v can be freely replaced with any other node that we have not used so far. Hence we can easily also ensure that P is clustered, with minor additional post-processing.
We still need to prove that P is large. Each time we apply Lemma 11, we delete at most 3d³ triangles from T: there are 3d nodes in the cluster U, and each is contained in at most d² triangles (Observation 5). On the other hand, the iteration will not stop until we have deleted at least

d^{2−ε_2} n − d^{2−ε_2−δ} n ≥ (1/2) d^{2−ε_2} n

triangles; here we make use of the assumption that d is sufficiently large (in comparison with the constant δ). Therefore we will be able to apply Lemma 11 at least

(1/2) d^{2−ε_2} n / (3d³) = (1/6) d^{−1−ε_2} n

times, and each time we add to P at least

(1/24) d^{3−4ε_2−4δ}

triangles, so we have got

|P| ≥ (1/24) d^{3−4ε_2−4δ} · (1/6) d^{−1−ε_2} n = (1/144) d^{2−5ε_2−4δ} n.
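A sequential sketch (ours) of this peeling loop; the routine find_cluster is assumed to behave like the Lemma 11 sketch above, and node identifiers on the three sides are assumed pairwise distinct:

```python
# Sketch of the peeling loop in the proof of Lemma 12: repeatedly find a
# dense cluster, move the triangles inside it to P, and move every other
# triangle touching the cluster to T_prime; stop once T is small.
def peel_clustered(T, d, eps2, delta, n, find_cluster):
    P, T_prime, rest = [], [], list(T)
    while len(rest) >= d ** (2 - eps2 - delta) * n:
        I0, J0, K0 = find_cluster(rest, d, eps2 + delta)
        U = set(I0) | set(J0) | set(K0)
        P += [t for t in rest if set(t) <= U]                        # inside U
        T_prime += [t for t in rest if (set(t) & U) and not set(t) <= U]
        rest = [t for t in rest if not (set(t) & U)]                 # untouched
    return P, T_prime + rest      # clustered part and residual part
```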
Many clusterings
Next we will apply Lemma 12 repeatedly to construct all clustered sets P 1 , . . . , P L shown in Fig. 3.
Lemma 13. Let 0 ≤ ε_1 < ε_2 and δ > 0, and assume that d is sufficiently large. Let T ⊆ T̂ be a collection of triangles with |T| ≤ d^{2−ε_1} n. Then T can be partitioned into disjoint sets

T = P_1 ∪ ... ∪ P_L ∪ T′,

where each P_i is clustered, the number of layers is

L ≤ 144 d^{5ε_2−ε_1+4δ},

and the number of triangles in the residual part T′ is

|T′| ≤ d^{2−ε_2} n.

Proof. Let T_1 = T. If |T_i| ≤ d^{2−ε_2} n,
we can stop and set L = i − 1 and T′ = T_i. Otherwise we can apply Lemma 12 to partition T_i into the clustered part P_i and the residual part T_{i+1}. By Lemma 12, the number of triangles in each set P_i is at least

|P_i| ≥ (1/144) d^{2−5ε_2−4δ} n,

while the total number of triangles was by assumption

|T| ≤ d^{2−ε_1} n.

Hence we will run out of triangles after at most

L = d^{2−ε_1} n / ((1/144) d^{2−5ε_2−4δ} n) = 144 d^{5ε_2−ε_1+4δ}

iterations.
Simplified version
If we are only interested in breaking the quadratic barrier, we have now got all the ingredients we need to split T̂ into a clustered part T_I and a small part T_II such that both T_I and T_II can be processed fast.
Lemma 14. For matrix multiplication over rings, it is possible to partition T̂ into T_I ∪ T_II such that

(1) T_I can be processed in O(d^{1.858}) rounds in the supported low-bandwidth model, and

(2) T_II contains at most d^{1.9} n triangles.
Proof. We will solve the case of a small d by brute force; hence let us assume that d is sufficiently large. We apply Lemma 13 to T = T̂ with ε_1 = 0, ε_2 = 0.1, and δ = 0.05. We will set T_I = P_1 ∪ · · · ∪ P_L and T_II = T′. The size of T_II is bounded by d^{1.9} n. The number of layers is L = O(d^{0.7}). We can now apply Lemma 9 L times to process P_1, ..., P_L; each step takes O(d^{1.158}) rounds, and the total running time will therefore be O(d^{1.858}) rounds.
We will later in Section 5 see how one could then process T II in O(d 1.95 ) rounds, for a total running time of O(d 1.95 ) rounds. However, we can do slightly better, as we will see next.
Final algorithm for rings
We can improve the parameters of Lemma 14 a bit, and obtain the following result. Here we have chosen the parameters so that the time we take now to process T I will be equal to the time we will eventually need to process T II . Proof. We apply Lemma 13 iteratively with the parameters given in Table 1. Let ε i 1 ,ε i 2 , and δ i be the respective parameters in iteration i. After iteration i, we process all triangles in P 1 ∪ · · · ∪ P L , in the same way as in the proof of Lemma 14. Then we are left with at most d 2−ε i 2 n triangles. We can then set ε i+1 1 = ε i 2 , and repeat. In Table 1, T c is the number of rounds required to process the triangles; the total running time is bounded by O(d 1.907 ), and the number of triangles that are still unprocessed after the final iteration is bounded by d 2−0.187166 n < d 1.814 n.
Final algorithm for semirings
In the proof of Lemma 15 we made use of Lemma 9, which is applicable for matrix multiplication over rings. If we plug in Lemma 10 and use the parameters from Table 2, we get a similar result for semirings: Then all triangles in T can be processed in O(d 2−ε/2 ) rounds in the supported low-bandwidth model, both for matrix multiplication over rings and semirings.
We first show how to handle the uniformly sparse case, where we have a non-trivial bound on the number of triangles touching each node. We then show how reduce the small component case of O(d 2−ε n) triangles in total to the uniformly sparse case with each node touching at most O(d 2−ε/2 ) triangles.
Handling the uniformly sparse case
To handle the uniformly sparse cases, we use a simple brute-force algorithm. Note that setting t = d 2 here gives the trivial O(d 2 )-round upper bound.
Lemma 18. Assume each node i ∈ V is included in at most t triangles in T. Then all triangles in T can be processed in O(t) rounds, both for matrix multiplication over rings and semirings.
Proof. To process all triangles in T, we want, for each triangle {i, j, k} ∈ T, that node j sends the entry B jk to node i. Since node i knows A ij and is responsible for output X ik , this allows node i to accumulate products A ij B jk to X ik .
As each node is included in at most t triangles, this means that each node has at most t messages to send, and at most t messages to receive, and all nodes can compute the sources and destinations of all messages from T. To see that these messages can be delivered in O(t) rounds, consider a graph with node set V × V and edges representing source and destination pairs of the messages. Since this graph has maximum degree t, it has an O(t)-edge coloring (which we can precompute in the supported model). All nodes then send their messages with color k on round k directly to the receiver, which takes O(t) rounds.
From small to uniformly sparse
We now proceed to prove Lemma 17. For purely technical convenience, we assume that T contains at most d 2−ε n triangles-in the more general case of Lemma 17, one can for example split T into constantly many sets of size at most d 2−ε n and run the following algorithm separately for each one.
Setup We say that a node i ∈ V is a bad node if i is included in at least d 2−ε/2 triangles in T, and we say that a triangle is a bad triangle if it includes a bad node. Since every triangle touches at most 3 nodes, the number of bad nodes is at most 3d 2−ε n/d 2−ε/2 = 3n/d ε/2 .
We now want to distribute the responsibility for handling the bad triangles among multiple nodes. The following lemma gives us the formal tool for splitting the set of bad triangles evenly.
Lemma 19. Assume d ≥ 2 and ε < 1/2. There exists a coloring of the bad triangles with d ε/2 /3 colors such that for any color c, each bad node i ∈ V touches at most 6d 2−ε/2 bad triangles of color c.
Proof. We prove the existence of the desired coloring by probabilistic method. We color each triangle uniformly at random with d ε/2 /3 colors. Let A v,c denote the event that there are at least 6d 2−ε/2 triangles of color c touching a bad node v in this coloring; these are the bad events we want to avoid.
Fixing v and c, let X v,c be the random variable counting the number of triangles of color c touching v. This is clearly binomially distributed with the number of samples corresponding to the number of triangles touching v and p = 3/d ε/2 .
Let t v be the number of triangles touching v. We have that
d 2−ε/2 ≤ t v ≤ d 2 ,
and thus
3d 2−ε ≤ E[X v,c ] ≤ 3d 2−ε/2 .
By Chernoff bound, it follows that
Pr[A v,c ] ≤ Pr[X v,c ≥ 2E[X v,c ]] ≤ e −4E[Xv,c]/3 ≤ e −4d 2−ε .
We now want to apply Lovás Local Lemma to show that the probability that none of the events A v,c happen is positive. We first observe that event A v,c is trivially independent of any set of other events A u,d that does not involve any neighbor of v. Since the degree of graph G is at most d, the degree of dependency (see e.g. [13]) for events A v,c is at most d ε/2 (d + 1)/3.
It remains to show that Lovás Local Lemma condition ep(D + 1) ≤ 1 is satisfied, where p is the upper bound for the probability of a single bad event happening, and D is the degree of dependency. Since we assume that d ≥ 2 and ε < 1/2, we have
1/pe = e 4d 2−ε −1 > e 2d 2−ε ≥ 1 + (e − 1)2d 2−ε ≥ 1 + 2d 2−ε > 1 + 2d 1+ε/2 ≥ 1 + (d + 1)d ε/2 ≥ D + 1 .
The claim now follows from Lovás Local Lemma.
Virtual instance. We now construct a new virtual node set
V = I ∪ J ∪ K ,
along with matrices A and B indexed by I , J , and K , and a set of triangles T we need to process from this virtual instance. The goal is that processing all triangles in T allows us to recover the solution to the original instance.
Let χ be a coloring of bad triangles as per Lemma 19, and let C be the set of colors used by χ. We construct the virtual node set I (resp., J and K ) by adding a node i 0 for each non-bad node i ∈ I (resp., j ∈ J and k ∈ K), and nodes v c for bad node v ∈ V and color c ∈ C. Note that the virtual node set V has size at most 2|V |. Finally, for technical convenience, we define a color set c(v) for node v ∈ V as
c(v) = {0} , if v is non-bad, and C , if v is bad.
The matrix A is now defined by setting A icj d = A ij for i ∈ I, j ∈ J and colors c ∈ c(i) and d ∈ c(j). Matrix B is defined analogously. The set of triangles T is constructed as follows. For each non-bad triangle {i, j, k} ∈ T, we add {i 0 , j 0 , k 0 } to T . For each bad triangle T ∈ T with color χ(T ), we add to T a new triangle obtained from T by replacing each bad node v ∈ T with i χ(T) and each non-bad node by v 0 . By construction, each node in V is included in at most 6d 2−ε/2 triangles in T . Moreover, if matrix X represents the result of processing all triangles in T , we can recover the results X of processing triangles in T as X ij = c∈c(i) d∈c(j)
X icj d .(8)
Simulation. We now show that we can construct the virtual instance as defined above from A, B and T, simulate the execution of the algorithm of Lemma 18 in the virtual instance with constant-factor overhead in the round complexity, and recover the output of the original instance from the result. As preprocessing based on the knowledge of T, all nodes perform the following steps locally in an arbitrary and consistent way:
(1) Compute the set of bad nodes and bad triangles.
(2) Compute a coloring χ of bad triangles as per Lemma 19.
(3) For each bad node v ∈ V , assign a set of helper nodes U v = {v c : c ∈ C} ⊆ V of size d ε/2 /2 so that helper node sets are disjoint for distinct bad nodes. Note that this is always possible, as the number of required nodes is at most |V |.
We now simulate the execution of triangle processing on the virtual instance as follows. Each non-bad node v simulates the corresponding node v 0 in the virtual instance, as well as a duplicate of bad node u c if they are assigned as helper node v c . Each bad node simulates their assigned bad node duplicate. To handle the duplication of the inputs and collection of outputs, we use the following simple routing lemma. Proof. For both parts, fix an arbitrary binary tree T on {v} ∪ U rooted at v. For part (a), a single message can be broadcast to all nodes along T in O(log k) rounds, by simply having each node spend 2 rounds sending it to both its children once the node receives the message. For d messages, we observe that the root v can start the sending the next message immediately after it has sent the previous one-in the standard pipelining fashion-and the communication for these messages does not overlap. Thus, the last message can be sent by v in O(d) rounds, and is received by all nodes in O(d + log k) rounds. For part (b), we use the same idea in reverse. For a single index i, all leaf nodes u send s u,i to their parent, alternating between left and right children on even and odd rounds. Subsequently, nodes compute the sum of values they received from their children, and send it their parents. For multiple values, the pipelining argument is identical to part (a). Now the simulation proceeds as follows:
(1) Each bad node v ∈ V sends their input to all nodes in U v in parallel. This takes O(d ε/2 + log d) rounds by Lemma 20(a).
(2) Each node locally computes the rows of A and B for the nodes of V they are responsible for simulating.
(3) Nodes collectively simulate the algorithm of Lemma 19 on the virtual instance formed by A , B and T to process all triangles in the virtual triangle set T . Since all nodes in V touch O(d 2−ε/2 ) triangles, and overhead from simulation is O(1), this takes O(d 2−ε/2 ) rounds.
Figure 1 :
1Complexity of sparse matrix multiplication in the low-bandwidth model: prior work and the new result. bound for matrix multiplication exponent ω < 2.3728596 [1] to arrive at the round complexity of O(n 1.158 ), illustrated with line (b) in the figure. For semirings the analogous result is O(n 4/3 ) rounds.
158 ) or O(n 4/3 ) instead of the trivial bound O(n 2 ). Now the key question is if we can break the quadratic barrier O(d 2 ) for all values of d. For example, could one achieve a bound like O(d 1.5 )? Or, is there any hope of achieving a bound like O(d 1.158 ) or O(d 4/3 ) rounds for all values of d?
For future work, there are four main open question:
Fig. 2for an illustration:Definition 7. A set of nodes U ⊆ V is a cluster if it consists of d nodes from I, d nodes from J, and d nodes from K.
Figure 3 :
3Any set of trianglesT can be decomposed in two components, a clustered component T I and a small component T II .
Lemma 15 .
15For matrix multiplication over rings, it is possible to partitionT into T I ∪ T II such that (1) T I can be processed in O(d 1.907 ) rounds in the supported low-bandwidth model, and (2) T II contains at most d 1.814 n triangles.
Lemma 16 .
16For matrix multiplication over semirings, it is possible to partitionT into T I ∪ T II such that (1) T I can be processed in O(d 1.927 ) rounds in the supported low-bandwidth model, and (2) T II contains at most d 1.854 n triangles.
to show how to process the triangles in the remaining small component T II . Lemma 17. Let d ≥ 2 and ε < 1. Let T ⊆T be a collection of triangles with T = O(d 2−ε n).
Lemma 20 .
20Let v ∈ V be a node and let U ⊆ V be a set of k nodes. The following communication tasks can be performed in O(d + log k) rounds in low-bandwidth model using only communication between nodes in {v} ∪ U :(a) Node v holds d messages of O(log n) bits, and each node in U needs to receive each message held by v.(b) Each node u ∈ U holds d values s u,1 , . . . s u,d , and node v needs to learn u∈U v u,i for i = 1, 2, . . . , d.
For matrix multiplication over rings, if we spend O(d 1.907 ) rounds in the first phase to process T I , we can ensure that T II contains only O(d 1.814 n) triangles, and then it can be also processed in O(d 1.907 ) rounds. Theorem 1 follows. -For matrix multiplication over semirings, if we spend O(d 1.927 ) rounds in the first phase to process T I , we can ensure that T II contains only O(d 1.854 n) triangles, which can be processed in O(d 1.927 ) rounds. Theorem 2 follows.
Table 1 :
1Proof of Lemma 15: parameters for the five iterations of Lemma 13.Iteration
ε 1
ε 2
δ
T c
1
0
0.149775 0.00001 O(d 1.906016 )
2
0.149775 0.179736 0.00001 O(d 1.906044 )
3
0.179736 0.185724 0.00001 O(d 1.906024 )
4
0.185724 0.186926 0.00001 O(d 1.906044 )
5
0.186926 0.187166 0.00001 O(d 1.906044 )
Table 2: Proof of Lemma 16: parameters for the five iterations of Lemma 13.
Iteration
ε 1
ε 2
δ
T c
1
0
0.118537 0.00001 O(d 1.926026 )
2
0.118537 0.142249 0.00001 O(d 1.926050 )
3
0.142249 0.146986 0.00001 O(d 1.926020 )
4
0.146986 0.147937 0.00001 O(d 1.926040 )
5
0.147937 0.148127 0.00001 O(d 1.926040 )
AcknowledgementsWe are grateful to the anonymous reviewers for their helpful feedback on the previous versions of this work. This work was supported in part by the Academy of Finland, Grant 321901.
Lemma 15, we can partitionT into T I and T II such that T I can be processed in O(d 1.907 ) rounds, and T II has at most d 1.814 n triangles. Then we can apply Lemma 17 to T II with ε = 0.186 and hence process also. Each bad node i ∈ I recovers row i of output X according to Eq. (8) by using Lemma 20(b). II in O(d 1.907 ) roundsEach bad node i ∈ I recovers row i of output X according to Eq. (8) by using Lemma 20(b). Lemma 15, we can partitionT into T I and T II such that T I can be processed in O(d 1.907 ) rounds, and T II has at most d 1.814 n triangles. Then we can apply Lemma 17 to T II with ε = 0.186 and hence process also T II in O(d 1.907 ) rounds.
By Lemma 16, we can partitionT into T I and T II such that T I can be processed in O(d 1.927 ) rounds, and T II has at most d 1.854 n triangles. Then we can apply Lemma 17 to T II with ε = 0.146 and hence process also. II in O(d 1.927 ) roundsProof of Theorem 2. By Lemma 16, we can partitionT into T I and T II such that T I can be processed in O(d 1.927 ) rounds, and T II has at most d 1.854 n triangles. Then we can apply Lemma 17 to T II with ε = 0.146 and hence process also T II in O(d 1.927 ) rounds.
A refined laser method and faster matrix multiplication. Josh Alman, Virginia Vassilevska Williams, 10.1137/1.9781611976465.32Proc. ACM-SIAM Symposium on Discrete Algorithms (SODA 2021). ACM-SIAM Symposium on Discrete Algorithms (SODA 2021)Josh Alman and Virginia Vassilevska Williams. A refined laser method and faster matrix multiplication. In Proc. ACM-SIAM Symposium on Discrete Algorithms (SODA 2021), pages 522-539, 2021. doi:10.1137/1.9781611976465.32.
Distributed computation in node-capacitated networks. John Augustine, Mohsen Ghaffari, Robert Gmyr, Kristian Hinnenthal, Christian Scheideler, Fabian Kuhn, Jason Li, 10.1145/3323165.3323195Proc. 31st ACM Symposium on Parallelism in Algorithms and Architectures (SPAA 2019). 31st ACM Symposium on Parallelism in Algorithms and Architectures (SPAA 2019)ACMJohn Augustine, Mohsen Ghaffari, Robert Gmyr, Kristian Hinnenthal, Christian Scheideler, Fabian Kuhn, and Jason Li. Distributed computation in node-capacitated networks. In Proc. 31st ACM Symposium on Parallelism in Algorithms and Architectures (SPAA 2019), page 69-79. ACM, 2019. doi:10.1145/3323165.3323195.
Algebraic complexity theory. Peter Bürgisser, Michael Clausen, Mohammad A Shokrollahi, Peter Bürgisser, Michael Clausen, and Mohammad A Shokrollahi. Algebraic complexity theory. 1997.
Sparse matrix multiplication and triangle listing in the congested clique model. Keren Censor-Hillel, Dean Leitersdorf, Elia Turner, 10.4230/LIPIcs.OPODIS.2018.4Proc. OPODIS. OPODISKeren Censor-Hillel, Dean Leitersdorf, and Elia Turner. Sparse matrix multiplica- tion and triangle listing in the congested clique model. In Proc. OPODIS 2018, 2018. doi:10.4230/LIPIcs.OPODIS.2018.4.
Algebraic methods in the congested clique. Keren Censor-Hillel, Petteri Kaski, Janne H Korhonen, Christoph Lenzen, Ami Paz, Jukka Suomela, 10.1007/s00446-016-0270-2Distributed Comput. 326Keren Censor-Hillel, Petteri Kaski, Janne H. Korhonen, Christoph Lenzen, Ami Paz, and Jukka Suomela. Algebraic methods in the congested clique. Distributed Comput., 32(6): 461-478, 2019. doi:10.1007/s00446-016-0270-2.
On distributed listing of cliques. Keren Censor-Hillel, François Le Gall, Dean Leitersdorf, 10.1145/3382734.3405742Proc. 39th Symposium on Principles of Distributed Computing (PODC 2020). 39th Symposium on Principles of Distributed Computing (PODC 2020)Keren Censor-Hillel, François Le Gall, and Dean Leitersdorf. On distributed listing of cliques. In Proc. 39th Symposium on Principles of Distributed Computing (PODC 2020), page 474-482, 2020. doi:10.1145/3382734.3405742.
Fast approximate shortest paths in the congested clique. Keren Censor-Hillel, Michal Dory, H Janne, Dean Korhonen, Leitersdorf, 10.1007/s00446-020-00380-5Distributed Computing. 346Keren Censor-Hillel, Michal Dory, Janne H Korhonen, and Dean Leitersdorf. Fast approxi- mate shortest paths in the congested clique. Distributed Computing, 34(6):463-487, 2021. doi:10.1007/s00446-020-00380-5.
Improved distributed expander decomposition and nearly optimal triangle enumeration. Yi-Jun Chang, Thatchaphol Saranurak, Proc. 38th ACM Symposium on Principles of Distributed Computing (PODC 2019). 38th ACM Symposium on Principles of Distributed Computing (PODC 2019)Yi-Jun Chang and Thatchaphol Saranurak. Improved distributed expander decomposition and nearly optimal triangle enumeration. In Proc. 38th ACM Symposium on Principles of Distributed Computing (PODC 2019), pages 66-73, 2019.
Near-optimal distributed triangle enumeration via expander decompositions. Yi-Jun Chang, Seth Pettie, Thatchaphol Saranurak, Hengjie Zhang, 10.1145/3446330Journal of the ACM. 6832021Yi-Jun Chang, Seth Pettie, Thatchaphol Saranurak, and Hengjie Zhang. Near-optimal distributed triangle enumeration via expander decompositions. Journal of the ACM, 68(3), 2021. doi:10.1145/3446330.
tri, tri again": Finding triangles and small subgraphs in a distributed setting. Danny Dolev, Christoph Lenzen, Shir Peled, Proc. DISC 2012. DISC 2012Danny Dolev, Christoph Lenzen, and Shir Peled. "tri, tri again": Finding triangles and small subgraphs in a distributed setting. In Proc. DISC 2012, pages 195-209, 2012.
Does preprocessing help under congestion?. Klaus-Tycho Foerster, Janne H Korhonen, Joel Rybicki, Stefan Schmid, 10.1145/3293611.3331581Proc. 38nd ACM Symposium on Principles of Distributed Computing, (PODC 2019). 38nd ACM Symposium on Principles of Distributed Computing, (PODC 2019)Klaus-Tycho Foerster, Janne H. Korhonen, Joel Rybicki, and Stefan Schmid. Does prepro- cessing help under congestion? In Proc. 38nd ACM Symposium on Principles of Distributed Computing, (PODC 2019), pages 259-261, 2019. doi:10.1145/3293611.3331581.
Triangle finding and listing in CONGEST networks. Taisuke Izumi, François Le Gall, Proc. 36th ACM Symposium on Principles of Distributed Computing (PODC 2017). 36th ACM Symposium on Principles of Distributed Computing (PODC 2017)Taisuke Izumi and François Le Gall. Triangle finding and listing in CONGEST networks. In Proc. 36th ACM Symposium on Principles of Distributed Computing (PODC 2017), pages 381-389, 2017.
Extremal combinatorics: with applications in computer science. Stasys Jukna, SpringerStasys Jukna. Extremal combinatorics: with applications in computer science. Springer, 2011.
Deterministic subgraph detection in broadcast CONGEST. H Janne, Joel Korhonen, Rybicki, 10.4230/LIPIcs.OPODIS.2017.4Proc. 21st International Conference on Principles of Distributed Systems. 21st International Conference on Principles of Distributed Systems2017Janne H. Korhonen and Joel Rybicki. Deterministic subgraph detection in broadcast CONGEST. In Proc. 21st International Conference on Principles of Distributed Systems (OPODIS 2017), 2017. doi:10.4230/LIPIcs.OPODIS.2017.4.
Further algebraic algorithms in the congested clique model and applications to graph-theoretic problems. François Le Gall, Proc. DISC 2016. DISC 2016François Le Gall. Further algebraic algorithms in the congested clique model and applications to graph-theoretic problems. In Proc. DISC 2016, pages 57-70, 2016.
Mst construction in o(log log n) communication rounds. Zvi Lotker, Elan Pavlov, Boaz Patt-Shamir, David Peleg, 10.1145/777412.777428Proc. 15th Annual ACM Symposium on Parallel Algorithms and Architectures (SPAA 2003). 15th Annual ACM Symposium on Parallel Algorithms and Architectures (SPAA 2003)ACMZvi Lotker, Elan Pavlov, Boaz Patt-Shamir, and David Peleg. Mst construction in o(log log n) communication rounds. In Proc. 15th Annual ACM Symposium on Parallel Algorithms and Architectures (SPAA 2003), page 94-100. ACM, 2003. doi:10.1145/777412.777428.
On the distributed complexity of large-scale graph computations. Gopal Pandurangan, Peter Robinson, Michele Scquizzato, 10.1145/3460900ACM Transactions on Parallel Computing. 822021Gopal Pandurangan, Peter Robinson, and Michele Scquizzato. On the distributed complexity of large-scale graph computations. ACM Transactions on Parallel Computing, 8(2), 2021. doi:10.1145/3460900.
Gaussian elimination is not optimal. Strassen, Numerische Mathematik. 134Strassen. Gaussian elimination is not optimal. Numerische Mathematik, 13(4), 1969.
A bridging model for parallel computation. Leslie G Valiant, 10.1145/79173.79181Commun. ACM. 338Leslie G. Valiant. A bridging model for parallel computation. Commun. ACM, 33(8): 103-111, 1990. doi:10.1145/79173.79181.
Fast sparse matrix multiplication. Raphael Yuster, Uri Zwick, 10.1145/1077464.1077466ACM Trans. Algorithms. 11Raphael Yuster and Uri Zwick. Fast sparse matrix multiplication. ACM Trans. Algorithms, 1(1), 2005. doi:10.1145/1077464.1077466.
| [] |
[
"Low-lying doubly heavy baryons: Regge relation and mass scaling",
"Low-lying doubly heavy baryons: Regge relation and mass scaling"
] | [
"Yongxin Song \nInstitute of Theoretical Physics\nCollege of Physics and Electronic Engineering\nNorthwest Normal University\n730070LanzhouChina\n",
"Duojie Jia \nInstitute of Theoretical Physics\nCollege of Physics and Electronic Engineering\nNorthwest Normal University\n730070LanzhouChina\n\nLanzhou Center for Theoretical Physics\nLanzhou University\n730000LanzhouChina\n",
"Wenxuan Zhang \nInstitute of Theoretical Physics\nCollege of Physics and Electronic Engineering\nNorthwest Normal University\n730070LanzhouChina\n",
"Atsushi Hosaka \nResearch Center for Nuclear Physics\nOsaka University\n10-1 Mihogaoka567-0047IbarakiOsakaJapan\n"
] | [
"Institute of Theoretical Physics\nCollege of Physics and Electronic Engineering\nNorthwest Normal University\n730070LanzhouChina",
"Institute of Theoretical Physics\nCollege of Physics and Electronic Engineering\nNorthwest Normal University\n730070LanzhouChina",
"Lanzhou Center for Theoretical Physics\nLanzhou University\n730000LanzhouChina",
"Institute of Theoretical Physics\nCollege of Physics and Electronic Engineering\nNorthwest Normal University\n730070LanzhouChina",
"Research Center for Nuclear Physics\nOsaka University\n10-1 Mihogaoka567-0047IbarakiOsakaJapan"
] | [] | In framework of heavy-diquark−light-quark endowed with heavy-pair binding, we explore excited baryons QQ ′ q containing two heavy quarks (QQ ′ = cc, bb, bc) by combining the method of Regge trajectory with the perturbative correction due to the heavy-pair interaction in baryons. Two Regge relations, one linear and the other nonlinear, are constructed semi-classically in the QCD string picture, and a mass scaling relation based on the heavy diquark-heavy antiquark symmetry are employed between the doubly heavy baryons and heavy mesons. We employ the ground-state mass estimates compatible with the observed doubly charmed baryon Ξ cc and spectra of the heavy quarkonia to determine the trajectory parameters and the binding energies of the heavy pair, and thereby compute the low-lying masses of the excited baryons Ξ QQ ′ and Ω QQ ′ up to 2S and 1P excitations of the light quark and heavy diquark. The level spacings of heavy diquark excitations are found to be smaller generally than that of the light-quark excitations, in according with the nature of adiabatic expansion between the heavy and light-quark dynamics. | 10.1140/epjc/s10052-022-11136-9 | [
"https://export.arxiv.org/pdf/2204.00363v3.pdf"
] | 254,247,334 | 2204.00363 | af4f39b967b6805e4fd15e56106f48bfb6d60761 |
Low-lying doubly heavy baryons: Regge relation and mass scaling
1 Jan 2023
Yongxin Song
Institute of Theoretical Physics
College of Physics and Electronic Engineering
Northwest Normal University
730070LanzhouChina
Duojie Jia
Institute of Theoretical Physics
College of Physics and Electronic Engineering
Northwest Normal University
730070LanzhouChina
Lanzhou Center for Theoretical Physics
Lanzhou University
730000LanzhouChina
Wenxuan Zhang
Institute of Theoretical Physics
College of Physics and Electronic Engineering
Northwest Normal University
730070LanzhouChina
Atsushi Hosaka
Research Center for Nuclear Physics
Osaka University
10-1 Mihogaoka567-0047IbarakiOsakaJapan
Low-lying doubly heavy baryons: Regge relation and mass scaling
1 Jan 2023HEP/4464211239Jh1240Yx1240NnRegge trajectoryPhenomenological ModelsQCD PhenomenologyHeavy baryons * Electronic address: jiadj@nwnueducn
In framework of heavy-diquark−light-quark endowed with heavy-pair binding, we explore excited baryons QQ ′ q containing two heavy quarks (QQ ′ = cc, bb, bc) by combining the method of Regge trajectory with the perturbative correction due to the heavy-pair interaction in baryons. Two Regge relations, one linear and the other nonlinear, are constructed semi-classically in the QCD string picture, and a mass scaling relation based on the heavy diquark-heavy antiquark symmetry are employed between the doubly heavy baryons and heavy mesons. We employ the ground-state mass estimates compatible with the observed doubly charmed baryon Ξ cc and spectra of the heavy quarkonia to determine the trajectory parameters and the binding energies of the heavy pair, and thereby compute the low-lying masses of the excited baryons Ξ QQ ′ and Ω QQ ′ up to 2S and 1P excitations of the light quark and heavy diquark. The level spacings of heavy diquark excitations are found to be smaller generally than that of the light-quark excitations, in according with the nature of adiabatic expansion between the heavy and light-quark dynamics.
I. INTRODUCTION
Being systems analogous to hydrogen-like atoms with notable level splitting, by which QED interaction is tested in minute detail, the doubly heavy (DH) baryon provides a unique opportunity to probe the fundamental theory of the strong interaction, quantum chromodynamics (QCD) and models inspired by it [1][2][3][4]. It is expected that the excited DH baryons can form a set of nontrivial levels with various mass splitting and confront the QCD interaction with the experiments in a straightforward way.
In 2017, the LHCb Collaboration at CERN discovered the doubly charmed baryon Ξ ++ cc in the Λ + c K − π + π + mass spectrum [5] and reconfirms it in the decay channel of Ξ + c π + [6], with the measured mass 3621.55 ± 0.23 ± 0.30 MeV [7]. This observation sets up, for the first time, a strength scale of interaction between two heavy quarks and is of help to understand the strong interaction among heavy quarks [2,4,8]. For instance, the mass of the Ξ ++ cc helps to "calibrate" the binding energy between a pair of heavy quark governed by short-distance interaction. Further more, very recent observation of the doubly charmed tetraquark [9] is timely to examine the assumed interquark (QQ) interaction in hadrons and invite conversely systematic study of the DH baryons. DH baryons have been studied over the years using various theoretical methods such as the nonrelativistic (NR) [4,[10][11][12] as well as relativistic [3] quark models, heavy-quark effective theory [13], QCD sum rules [14][15][16], the Feynman-Hellmann theorem [17], mass formula [18], the Skyrmion model [19], the lattice QCD [1,[20][21][22][23][24][25] and the effective field theory [8,26,27]. Till now, the features(masses, spin-parity and decay widths) of the DH baryons in their excited states remain to be explored.
In this work, we introduce an energy term of heavy-pair binding due to the perturbative dynamics into the Regge relations of heavy hadrons to explore excited doubly heavy baryons with the help of the observed data of the light and heavy mesons, and of the Ξ ++ cc as well as the ground-state masses of the DH baryons compatible with the measured data. A nonlinear Regge relation for the heavy diquark excitations in DH baryons, endowed with heavy-pair binding is constructed and applied to compute low-lying mass spectra of the excited DH baryons Ξ QQ ′ and Ω QQ ′ (QQ ′ = cc, bb, bc), combining with the mass scaling that is tested successfully for the experimentally available mass splittings of the heavy baryons. An agreement with other calculations is achieved and a discussion is given for the excitation modes of the DH baryons in the QCD string picture.
We illustrate that the QCD string picture [28], when endowed with the two-body binding energy of the heavy quarks, gives an unified description of the DH baryons in which both aspects of the long-distance confinement and perturbative QCD are included, similar to that of the effective QCD string (EFT) in Ref. [8]. While our approach is semiclassical in nature, it treats quark (diquark) and string dynamically and full relativistically, in contrast with the static(nondynamic) approximation of the heavy quarks in DH baryons in Ref. [8] and of the lattice QCD simulation [1]. This paper is organized as follows. In Sect. II, we apply linear and nonlinear Regge relation to formulate and estimate the spin-independent masses of the excited DH baryons in the 1S1p, 2S1s and 1P1s waves. In Sect. III, we formulate the spin-dependent forces and the ensuing mass splitting of the low-lying excited DH baryons in the scheme of the jj coupling. In Sect. IV, we estimate the spin coupling parameters via the method of mass scaling these parameters and compute the excited spectra of the DH baryons. We end with summary and remarks in Sect. V.
II. SPIN-INDEPENDENT MASS OF EXCITED DH BARYONS
We use heavy-diquark−light-quark picture for the DH baryons qQQ ′ (q = u, d and s) with heavy quarks QQ ′ = cc, bb or bc, in which the heavy pair QQ ′ can be scalar(spin zero) diquark denoted The mass of a DH baryon qQQ ′ consists of the sum of two parts: M =M + ∆M , whereM is the spin-independent mass and ∆M is the mass splitting due to the spin-dependent interaction.
As the diquark QQ ′ is heavy, compared to the light quark q, the heavy-light limit applies to the DH baryons qQQ ′ .
A. Low-lying DH baryons with ground-state diquark
For a DH baryon with heavy diquark QQ ′ in internal S wave, we employ, by analogy with Ref. [29], a linear Regge relation derived from the QCD string model(Appendix A). Basic idea of this derivation is to view the baryon to be a q − QQ ′ system of a massive QCD string with diquark QQ ′ at one end and q at the other. Denoting byM l the mass of a DH baryon with orbital angular momentum l of light quark , the Regge relation takes form [29](Appendix A)
(M l − M QQ ′ ) 2 = πal + m q + M QQ ′ − m 2 bareQQ ′ M QQ ′ 2 ,(1)
where M QQ ′ is the effective mass of the S-wave heavy diquark, m q the effective mass of the light quark q, a stands for the tension of the QCD string [28]. Here, m bareQQ ′ is the bare mass of QQ ′ , given approximately by sum of the bare masses of each heavy quark: m bare QQ ′ = m bareQ + m bareQ ′ .
Using the bare data m bare,c = 1.275 GeV m bare,b = 4.18 GeV, one has numerically, m bare cc = 2.55 GeV, m bare bb = 8.36GeV, m bare bc = 5.455GeV.
We set the values of m q in Eq. (1) to be that in Table I, which was previously determined in
Ref. [29] via matching the measured mass spectra of the excited singly heavy baryons and mesons. To determine M QQ ′ in Eq. (1), we apply Eq. (1) to the ground state (n = 0 = l) to find [29] M (1S1s
) = M QQ ′ + m q + k 2 QQ ′ M QQ ′ ,(3)k QQ ′ ≡ M QQ ′ v QQ ′ = M QQ ′ 1 − m 2 bareQQ ′ M 2 QQ ′ 1/2 ,(4)
which agrees with the mass formula given by heavy quark symmetry [30]. As the experimental data are still lacking except for the doubly charmed baryon Ξ ++ cc , we adopt the masses (Table II) of DH baryons computed by Ref. [3], which successfully predicts M (Ξ cc , 1/2 + ) = 3620 MeV, very close to the measured data 3621.55 MeV of the doubly charmed baryon Ξ ++ cc [5,7]. With the mass parameters m q=n (n = u, d) in Table I
where the subscript cc(bb) stands for the vector diquark {cc}({bb}) for short. The mean mass of the diquark bc isM bc = 5892.3 MeV. It is examimed in Refs. [29,31] for the singly heavy (SH) baryon families(the Λ c /Σ c , the Ξ c /Ξ ′ c , the Λ b /Σ b and the Ξ b /Ξ ′ b ) that the Regge slope of the baryons are almost same for identical flavor constituents, independent of the light-diquark spin(= 0 or 1). The trajectory slope is found to rely crucially on the heavy quark mass M Q , but has little dependence on the diquark spin of qq ′ .
Let us consider first the string tensions of the nonstrange DH baryons Ξ QQ ′ = nQQ ′ . Natively, the string tension a are same for all heavy-light systems, namely, the ratio a D /a B = 1 = a D /a QQ as gluondynamics is flavor-independent, as implied by heavy quark symmetry and heavy quarkdiquark (HQD) symmetry at the leading order [32]. In the real world, the breaking of the HQD symmetry and heavy quark symmetry yields that the ratios a D /a B and a D /a QQ are not unity but depend upon the respective masses (M c , M b ) and (M c , M QQ ). The requirement that they tend to unity suggests that this dimensionless ratio depends on some functional of the respective dimensionless ratio M c /M b and M c /M QQ . For simplicity, we use a power-law of the mass scaling between the string tensions of the hadrons D/B and Ξ QQ ′ :
a D a B = M c M b P ,(7)a D a Ξ QQ ′ = M c M QQ ′ P .(8)
Here, the parameters (a D = a cn , a B = a bn , M c and M b ) are previously evaluated in Ref [29] and listed in Table I. Putting the tensions and heavy-quark masses in Table I into Eq. (7) gives P = 0.185 and thereby predicts, by Eq. (8) a Ξcc = 0.2532 GeV 2 , a Ξ bb = 0.3123 GeV 2 ,
a Ξ bc = 0.2893 GeV 2 = a Ξ ′ bc ,(9)
combining with Eqs. (5) and (6).
Given the tensions in Eqs. (9) and Eq. (10), and the diquark masses in Eqs. (5), (6) and the light quark mass in Table I, one can use Eq. (1) to obtain the spin-averaged masses of the baryon system Ξ QQ ′ in p-wave(l = 1). The results are
where Ξ ′ bc = n[bc] and Ξ bc = n{bc} stand for the bottom-charmed baryons with diquark spin 0 and 1, respectively.
The same procedure applies to the strange DH baryons, the Ω QQ ′ = sQQ ′ , and the associated D s /B s (= cs/bs) mesons, for which the mass scaling, corresponding to Eqs. (7) and (8), has the same form
a Ds a Bs = M c M b P (12) a Ds a Ω QQ ′ = M c M QQ ′ P .(13)
Putting the tensions for the strange heavy mesons and the heavy-quark masses in Table I to Eq.
(12) gives P = 0.202. One can then use Eq. (13) and the heavy-diquark masses in Eqs. (5) as well as (6) to predict a Ωcc = 0.2860 GeV 2 , a Ω bb = 0.3596 GeV 2 ,
a Ω bc = 0.3308 GeV 2 = a Ω ′ bc ,
where Ω ′ bc and Ω bc stand for the DH Ω baryons with bc-diquark spin 0 and 1, respectively. Given the tensions (including a Ξ QQ ′ = a nQQ ′ ) in Eqs. (14) and (15), and the heavy-diquark masses in Eqs. (5) and (6) as well as the strange quark mass m s = 0.328 GeV in Table I, one can use Eq. (1) to find the mean (spin-averaged) masses of the baryon system Ω QQ ′ in p-waves (l = 1).
The results are
where Ω ′ bc = s[bc] stands for the strange baryons Ω bc with scalar diquark [bc] and Ω bc = s{bc} for the Ω bc with axial vector diquark {bc}.
To extend the above analysis to radially excited states, one needs a Regge relation to include the radial excitations. For the heavy mesons, this type of Regge relation is proposed in Ref. [33] in (17), (27), (28), with the masses in Eqs. (11), (16). All masses in GeV. BaryonsM consideration that the trajectory slope ratio between the radial and angular excitations is π : 2 for the heavy mesonsqQ in the heavy quark limit. Extending this ratio to the heavy diquark-quark picture of the DH system q − QQ ′ can be done by viewing the system a QCD string tied to the diquark QQ ′ as one end and to the light quark q at the other. Then, a Regge relation for the radially and angular excitations follows, via utilizing HQD symmetry to replace heavy quark Q there by the diquarkQQ ′ and πal in Eq. (1) by πa l + π 2 n , giving rise to [33] M − M QQ ′ 2 = πa l + π 2 n
+ m q + M QQ ′ − m 2 bare QQ ′ M QQ ′ 2 .(17)
This gives a linear Regge relation applicable to the DH anti-baryonsqQQ ′ , or the DH baryons qQQ ′ . With the masses of m q in Table I wave. The results are listed in Table III. First of all, we consider excited heavy mesons in 1P and 2S waves in QCD string picture. For an excited heavy quarkonia QQ, a rotating-string picture [34] in which a heavy quark at one end and an antiquark at the other with relative orbital momentum L, infers a nonlinear Regge relation
(M 3/2 ∼ L), akin to Eq. (1),M = 2M Q + 3 T 2 L 2 4M Q 1/3 .(18)
On the other hand, a semiclassical WKB analysis of a system of heavy quarkonia QQ in a linear confining potential T |r| leads to a quantization condition for its radial excitations (labeled by N , Appendix B):
M − 2M Q 3/2 = 3πT 4 M Q (N + 2c 0 ) ,(19)
with the constant 2c 0 the quantum defect. Comparing the radial and angular slopes (linear coefficients in N and L, respectively, which is π : √ 12) of the trajectory in RHS of Eq. (19) and in that of Eq. (18), one can combine two trajectories into(Appendix B)
M N,L − 2M Q − B(QQ) N,L 3/2 = 3 √ 3T 2 M Q L + πN √ 12 + 2c 0 ,(20)
where a term of extra energy B(QQ) N,L , named heavy-pair binding energy, enters to represent a corrections to the picture of Regge spectra due to the short-distance interquark forces [35] when two heavy quarks (Q andQ) come close each other. Such a term is ignored in the semiclassical picture of massive QCD string as well as in the WKB analysis (Appendix B). For the ground-state DH hadrons, a similar binding between two heavy quarks in them was considered in Ref. [4,36].
For applications in this work, we rewrite Eq. (20) and extend it to a general form in which QQ ′ can be bc in flavor (by heavy quark symmetry),
M (QQ ′ ) N,L = M Q + M Q ′ + B(QQ ′ ) N,L + 3 T L + πN/ √ 12 + 2c 0 2/3 [2(M Q + M Q ′ )] 1/3 .(21)
Setting N = 0 = L in Eq. (21) gives Table VIII, and the corresponding binding energies for 1S wave in Table IX,
T c 0 (QQ ′ ) = 2(M Q + M Q ′ ) 6 √ 3 M (QQ) 1S − M Q − M Q ′ − B(QQ ′ ) 1S 3/2 .(22)B(cc) 1S = 0.25783 GeV 2 ,B(bb) 1S = 0.56192 GeV 2 , B(bc) 1S = 0.3464 GeV 2 .(24)M (QQ ′ ) N,L = M Q + M Q ′ + ∆B(QQ ′ ) N,L + 3 T QQ ′ (L + πN/ √ 12) + 2c 0 2/3 [2(M Q + M Q ′ )] 1/3 + c 1 ,(25)
with T QQ ′ the tension of string of the subsystem [Q − Q] and c 0 given by Eq. (22). Here, c 1 is an additive constant, defined up to the ground state of the whole DH system.
Since the inverse Regge slope (1/α ′ ) is derived to scale like √ Cα s in Ref. [37], where C is the Casimir operator and equals to 2/3 in a color antitriplet3 c of the pair QQ ′ , and 4/3 in a color
singlet (1 c ) of the pair QQ ′ , one can write(by 1/α ′ ∼ T ) T QQ ′ [3 c ] = 1 √ 2 T QQ ′ [1 c ],(26)
for the heavy pairs with color configuration indicated. So,
√ 2T QQ ′ c 0 = T QQ ′ c 0 .N,L −M 0,0 = ∆B(QQ ′ ) N,L + 3 √ 2 T (L + πN/ √ 12)/2 + T c 0 2/3 [2(M Q + M Q ′ )] 1/3 + C 1 ,(27)
where C 1 is a constant related to c 1 and determined by(setting N = 0 = L), (27) it follows that the baryon mass shift due to diquark excitations becomes
C 1 [qQQ ′ ] = −3 √ 2 (T c 0 ) 2 2(M Q + M Q ′ ) 1/3 , with ∆B(QQ ′ ) 0,0 = 0. From Eq.(∆M ( * ) ) N,L = ∆B(QQ ′ ) N,L + 3 √ 2 T (L + πN/ √ 12)/2 + T c 0 2/3 − (T c 0 ) 2/3 [2(M Q + M Q ′ )] 1/3 ,(28)
with T c 0 given by Eq. (23).
Given the parameters in Eqs. (23), (64) and Tables I, II, IX and the mean masses of the groundstate baryons Ω cc , Ω bb and Ω bc in Ref. [3], one can apply Eqs. (27) and (28) to find the mean masses of all DH baryons (Ξ cc , Ξ bb , Ξ bc , Ω cc , Ω bb , Ω bc ) with diquark excited radially and angularly.
Here, three values (T cc , T bb , T bc ) of the tension T in Eq. (28) are obtained in Eq. (64) in Section IV via matching Eq. (21) to the mean-mass spectra and binding energies of the heavy quarkonia and bc systems. The results for the mean masses of all DH baryons are shown in Table III.
III. SPIN-DEPENDENT INTERACTIONS IN jj COUPLING
In heavy-diquark quark picture of DH baryon in the ground state (1S1s), two heavy quarks form a S-wave color antitriplet 3 c diquark (QQ ′ ), having spin zero (S QQ ′ = 0) when QQ ′ = bc, or spin one (S QQ ′ = 1) when QQ ′ = cc, bc and bb. When Q = Q ′ the diquark QQ must have spin one due to full antisymmetry under exchange of two quarks. The spin S QQ ′ can couple with the spin S q = 1/2 of the light quark q(= u, d and s) to form total spin S tot = 1 ± 1/2 = 1/2, 3/2 if
S QQ ′ = 1 or S tot = 1/2 if S QQ ′ = 0.
For the internal ground state of diquark (S-wave), the spin-dependent interaction between light quark and heavy diquark can generally be given by [38,39]
H SD = a 1 l · S q + a 2 l · S QQ ′ + bS 12 + cS q · S QQ ′ , S 12 = 3S q ·rS QQ ′ ·r − S q · S QQ ′ ,(29)
where the first two terms are spin-orbit forces of the quark q with S q and the diquark QQ ′ with spin S QQ ′ , the third is a tensor force, and the last describes hyperfine splitting. Here, l stands for the relative orbital angular momentum of q with respective to QQ ′ ,r = r/r is the unity vector of position pointing from the center of mass (CM) of the diquark to the light quark q.
For the 1Sns waves of the DH baryons (l = 0), only the last term survives in Eq. (29), in which S QQ ′ · S q has the eigenvalues {−1, 1/2} when S QQ ′ = 1. The mass splitting for the systems q{QQ ′ } becomes (J = 1/2, 3/2),
∆M (q{QQ ′ }) = c(q{QQ ′ }) −1 0 0 1/2 .(30)
For the 1Snp waves of the bottom charmed baryons Ξ bc and Ω bc with zero diquark spin S bc = 0, the spin interaction (29) yields the mass splitting (J = 1/2, 3/2),
∆M (q[QQ ′ ]) = a 1 (q[QQ ′ ]) −1 0 0 1/2 .(31)
For the case of excited(2S or 1P wave) diquark QQ ′ , a correction to Eq. (29) emerges due to the interaction between the diquark and light quark, given by
H dSD = c * (L + S QQ ′ ) · S q ,(32)
where the diquark spin S QQ ′ = 1 (S wave of diquark) or 0 (P wave) when QQ ′ = cc or bb. This correction stems from the interaction of the effective magnetic moments e QQ ′ (L + S QQ ′ )/(2M QQ ′ ) of the excited diquark and the spin magnetic moment e q S q /m q of the light quark q. Here, L is the internal orbital angular moment of the diquark, e QQ ′ and e q stand for the respective charges of diquark and light quark. Consider now the excited (1p) DH baryon in which light quark is excited to 1p wave (l = 1) relative to QQ ′ . Note that coupling S tot = 1/2 to l = 1 gives the states with the total angular momentums J = 1/2, 3/2, while coupling S tot = 3/2 to l = 1 leads to the states with J = 1/2, 3/2 and 5/2. We use then the LS basis 2Stot+1 P J = { 2 P 1/2 , 2 P 3/2 , 4 P 1/2 , 4 P 3/2 , 4 P 5/2 } to label these multiplets in p-wave (J = 1/2, 1/2 ′ , 3/2, 3/2 ′ and 5/2). The two J = 1/2 states are the respective eigenstates of a 2 × 2 matrices M J representing H SD for J = 1/2 and 3/2. In terms of the basis [ 2 P J , 4 P J ], they can be given by the matrix [40,41](see Appendix D also)
M J=1/2 = 1 3 (a 1 − 4a 2 ) √ 2 3 (a 1 − a 2 ) √ 2 3 (a 1 − a 2 ) − 5 3 a 2 + 1 2 a 1 + b 0 √ 2 2 √ 2 2 −1 + c −1 0 0 1 2 ,(33)
in the J = 1/2 subspace,
M J=3/2 = − 2 3 1 4 a 1 − a 2 √ 5 3 (a 1 − a 2 ) √ 5 3 (a 1 − a 2 ) − 2 3 1 2 a 1 + a 2 + b 0 − √ 5 10 − √ 5 10 4 5 + c −1 0 0 1 2 ,(34)
in the J = 3/2 subspace, and by
M J=5/2 = 1 2 a 1 + a 2 − b 5 + c 2 .(35)
for the J = 5/2. One can verify that the spin-weighted sum of these matrixes over J = 1/2, 3/2 and 5/2 is zero:
J (2J + 1)M J = 0,(36)
as it should be for the spin-dependent interaction H SD .
In the heavy quark limit (M Q → ∞), all terms except for the first (= a 1 l · S q ) in Eq. (29) behave as 1/M QQ ′ and are suppressed. Due to heavy quark spin symmetry (S QQ ′ conserved), the total angular momentum of the light quark j = l + S q = J − S QQ ′ is conserved and forms a set of the conserved operators {J, j}, where J is the total angular momentum of the DH hadrons. We use then the jj coupling scheme to label the spin multiplets of the DH baryons, denoted by the basis |J, j (Appendix D), in which the spin of diquark QQ ′ decouples and l · S q becomes diagonal.
As such, the formula for mass splitting ∆M can be obtained by diagonalizing a 1 l · S q and treating other interactions in Eq. (29) perturbatively.
The eigenvalues (two diagonal elements) of l · S q can be obtained to be Table IV.
l · S q = 1 2 [j(j + 1) − l(l + 1) − S q (S q + 1)] = −1(j = 1/2) or 1/2(j = 3/2).(37)(J-j) l · S QQ ′ S 12 S q · S QQ ′ (1/2, 1/2) −4/3 −4/3 1/3 (1/2, 3/2) −5/3 1/3 −5/6 (3/2, 1/2) 2/3 2/3 −5/6 (3/2, 3/2) −2/3 2/15 −1/3 (5/2, 3/2) 1 −1/5 1/2
Given Table III, one can use the lowest perturbation theory to find
∆M (1/2, 1/2) = −a 1 − 4 3 a 2 − 4 3 b + 1 3 c,(38)∆M (1/2, 3/2) = 1 2 a 1 − 5 3 a 2 + 1 3 b − 5 6 c,(39)∆M (3/2, 1/2) = −a 1 + 2 3 a 2 + 2 3 b − 1 6 c,(40)∆M (3/2, 3/2) = 1 2 a 1 − 2 3 a 2 + 2 15 b − 1 3 c,(41)∆M (5/2, 3/2) = 1 2 a 1 + a 2 − 1 5 b + 1 2 c,(42)
which express the baryon mass splitting in p wave in terms of four parameters (a 1 , a 2 , b, c). The mass formula for the 1S1p states is then M (J, j) =M (1S1p) + ∆M (J, j), withM (1S1p) the spin-independent masses given in Eqs. (11) and (16) in section II.
B. The DH baryons with 2S and 1P wave diquark
In the heavy diquark-quark picture in this work, diquark QQ is fundamentally two body system connected by string and can be excited internally. In this subsection, we consider the spin interaction due to the 2S and 1P wave excitations of diquark in DH baryons.
(1) The 2S1s states. In this state, L = 0 and the spin-interaction (32) reduces to H SD (2S) = c * S QQ ′ · S q , in which S QQ ′ · S q has the eigenvalues {−1, 1/2} for the spin of total system J = 1/2 or 3/2, respectively. Note that we occasionally use 2S to stand for 2S1s for short. Here, the excited baryon energy stems from the spin interaction H SD (2S) as well as the string energy shift (∆M ( * ) ) 2S
given in Eq. (28). Thus, the mass splitting for the 2S1s wave baryons q{QQ ′ } becomes(J = 1/2,
3/2), ∆M (q{QQ ′ }) 2S = (∆M ( * ) ) 2S + c * (q{QQ ′ }) −1 0 0 1/2 .(43)
In the case of 2S1s wave DH system q[bc], no mass splitting happens since L = 0 = S [bc] , namely, the baryon mass is simplyM (q[bc]) nS .
(2) The 1P 1s states. For the systems q{cc}, q{bb} and q{bc}, S QQ ′ = 0 and the system spin J = L ⊕ 1/2 takes values J = 1/2 and 3/2. The spin interaction becomes c * L · S q , which equals to c * diag[−1, 1/2]. So, the baryon mass splitting is(J = 1/2, 3/2),
∆M (q{QQ ′ }) 1P = (∆M ( * ) ) 1P + c * −1 0 0 1/2 .(44)
For the systems q[bc], S QQ ′ = 1, and J d = L ⊕ S QQ ′ can take values J d = 0, 1 or 2 so that
J = 1/2, 1/2 ′ , 3/2, 3/2 ′ , 5/2. Labelling the DH baryon systems by |J, J d , the relation J d · S q = [J(J + 1) − J d (J d + 1) − 3/4]/2
yields the mass splitting matrix for the P wave multiplets,
∆M (q[bc]) 1P = (∆M ( * ) ) 1P + c * diag 0, −1, 1 2 , − 3 2 , 1 ,(45)
in the subspace of
{|J, J d }={|1/2, 0 , |1/2, 1 , |3/2, 1 ,|3/2, 2 , |5/2, 2 }.
IV. SPIN COUPLINGS OF THE DH BARYONS VIA MASS SCALING
To evaluate spin coupling parameters in Eq. (38) through Eq. (45) for the excited DH baryons, we utilize, in this section, the relations of mass scaling, which apply successfully to the SH hadrons [29,38]. For this, we list, in Table V, the experimentally matched values of the spin couplings (data before parentheses) for the existing heavy-light systems, such as the D s in Refs. [29,38] and the SH baryons (the Σ Q /Ξ ′ Q /Ω Q , with Q = c, b) in Ref. [31]. [29] and Ref. [31]. Before considering DH baryons, let us first examine the mass scaling of the spin couplings between heavy baryons Qqq ′ and heavy mesons(e.g., the D s = cs). In Refs. [29,38], a relation of mass scaling is explored based on Breit-Fermi like interaction [42,43], and is given by [29](see Eqs.
Hadrons Σ c Σ b Ξ ′ c Ξ ′ b Ω c Ω b m
(22-24))
l qq ′ · S qq ′ : a 1 Qqq ′ = M c M Q · m s m qq ′ · a 1 (D s ) ,(46)l qq ′ · S Q : a 2 Qqq ′ = M c M Q · 1 1 + m qq ′ /M c · a 2 (D s ) ,(47)S 12 : b Qqq ′ = M c M Q · 1 1 + m qq ′ /m s · b (D s ) ,(48)
where l qq ′ and S qq ′ denote the orbital angular momentum of light diquark qq ′ relative to the heavy quark Q and the spin of the light diquark, respectively, m qq ′ is the diquark mass and M Q the heavy-quark mass in the SH hadrons. The factor M c /M Q enters to account for the heavy quark dependence. In Eq. (48) , the extra recoil factor 1/ 1 + m qq ′ /m s enters to take into account the correction due to comparable heaviness between the diquark qq ′ and the strange quark. Note that a similar (recoil) factor, 1/ 1 + m qq ′ /M c , entering Eq. (47), has been confirmed for the charmed and bottom baryons in P-wave and D-wave [29]. For instance, Eq. (47) in Ref. [29](Eq.
(60)) well reproduces the measured masses 6146.2 MeV and 6152.5 MeV of the Λ b (6146) and
the Λ b (6152) observed by LHCb [44]. Similar verifications were demonstrated in Ref. [31] for the excited Ω c discovered by LHCb [45], for which the ss-diquark is comparable with the charm quark in heaviness. (Q = c, b). The data before parentheses are the parameters matched to the measured spectra of the SH baryons shown and the data in parentheses are that computed by the mass scaling Eqs. (46), (47) Putting the parameters in Tables I and the light diquarks in Table V Table VI, we list the obtained results within parentheses so that they are comparable with that (data before parentheses) matched to the measured data in Ref. [31]. Evidently, the mismatch shown is small: ∆a 1 ≤ 2.63 MeV, ∆a 2 ≤ 5.19 MeV and ∆b ≤ 2.80 MeV, and agreement is noticeable .
Consider the DH baryons with S-wave diquark now. Regarding mass scaling between the DH baryons(QQq) and the heavy mesons(Qq), the heavy diquark-antiquark(HDA) duality or symmetry(QQ ↔Q) in Refs. [32,46,47] suggests that two hadrons share the same chromodynamics in the heavy quark limit(M Q → ∞) up to a color factor. In the real world where M Q is finite, this symmetry(duality) breaks and the dynamics degenerates to a similar dynamics which mainly depends on the similarity(asymmetric) parameter M Q /M QQ . As two heavy quarks in diquark QQ ′ moves in relatively smaller region (∝ 1/M Q ), compared to the DH baryon itsef(∝ 1/m q ), the string structure of the q − QQ ′ resembles that of the heavy meson D s , one naturally expects that similar relations of mass scaling apply to the coupling parameters in Eq. (29) for the DH baryons.
Replacing M Q in Eqs. (46)-(48) by the diquark mass M QQ ′ , and the mass of the light diquark mass m qq ′ by that of the light quark, one obtains
a 1 [QQ ′ q] = M c M QQ ′ · m s m q · a 1 (D s ) ,(49)a 2 [QQ ′ q] = M c M QQ ′ · 1 1 + m q /M c · a 2 (D s ) ,(50)b[QQ ′ q] = M c M QQ ′ · 1 1 + m q /m s · b (D s ) .(51)
Since the hyperfine parameter c ∝ |ψ B (0)| 2 /(M QQ ′ m q ) , scales as hadron wavefunction |ψ B (0)| 2
near the origin, it should be small in p-wave. One can write a relation of mass scaling relative to the singly charmed baryon Ω c = css as below:
c[QQ ′ q] = M c M QQ ′ m ss m q c (Ω c ) .(52)
Experimentally, there exists five excited Ω c 's discovered by LHCb [45], with the masses 3000.4 MeV, 3050.2 MeV, 3065.6 MeV, 3095.2 MeV and 3119.2 MeV. Assigning the five states to be in p-wave and matching the jj mass formula to the five measured masses lead to [31,40] c (Ω c ) = 4.07 MeV, m ss = 991 MeV.
Now, one can employ Eqs. (49)- (52) to estimate the four parameters of spin couplings of the DH baryons using the parameters in Table I and the diquark masses in Eqs. (5), (6) and that in Eq. (53). The results are listed collectively in Table VII. As seen in Table, the parameter a 2 are smaller(three times roughly) but comparable to a 1 . The magnitudes of b is as large as a 2 roughly while c is the smallest. This agrees qualitatively with the parameter hierarchy of the spin-couplings [29,31] implied by heavy quark symmetry. With a 1 (D s ) = 89.4 MeV in Table VI [38], one can employ the relations of mass scaling (49) relative to the D s to give
where M [bc] = 5.8918 GeV and other mass parameters are from Table I
M Ω bc ,1/2 − = 7421.2 MeV, M Ω bc ,3/2 − = 7453.9 MeV.
We list all masses of the DH baryons in their 1S1p-wave in Tables X to XVI, by Breit-Fermi like interaction [42,43], can be given by Further application of mass scaling between the DH baryons,
c QQ ′ u 2s = M c M QQ ′ c (D) 2s .(57)c (Ξ cc ) 2s = c (Ξ bb ) M bb M cc , c (Ξ bc ) 2s = c (Ξ bb ) M bb M bc , c (Ω cc ) 2s = c (Ω bb ) M bb M cc , c (Ω bc ) 2s = c (Ω bb ) M bb M bc ,(58)
leads to the following parameter c for the 2s-wave baryons Ξ cc , Ξ bc , Ω cc and Ω cc ,
c(Ξ cc ) 2s = 39.2 MeV, c (Ξ bc ) 2S = 19.1 MeV, c(Ω cc ) 2s = 27.4 MeV, c (Ω bc ) 2s = 13.3 MeV.(59)
With the mean(2s) masses in Table III, one can employ Eq. (30) to compute the 2s-wave(namely, the 1S2s-wave) masses of the DH baryons. The results are listed in Table X through Table XVI, with comparison with other calculations, and alson shown in FIGs 1-8, respectively.
It is of interest to apply the same strategy to the ground states to check if one can reproduce the masses in Table II with the two numbers in curly braces corresponding to J P = 1/2 + and 3/2 + , respectively. Our predictions for the ground-state masses of the most DH baryons (Ω QQ ′ ) are in consistent with that by Ref. [3].
C. Heavy pair binding and effective masses of excited diquarks
We first explore and extract binding energies of heavy pairs in DH baryons and then compute the effective masses of heavy diquarks in its excited states. The later is done via estimating the energy shifts due to diquark excitations.
In QCD string picture, one simple and direct way to extract heavy pair binding is to use the "half" rule for the short-distance interaction of the QQ ′ systems and to subtract from the hadron masses of the QQ ′ systems (short string plus binding energy plus 2M Q ) all involved (heavy and light) masses and string energies of mesons, leaving only the binding energy between Q andQ ′ .
The relation for the QQ ′ systems is
B(QQ ′ ) N,L =M (QQ ′ ) −M (Qn) −M (Qn ′ ) +M (nn ′ ),(60)
in whichM represents the mean(spin-averaged) masses of the respective mesons formed by the quark pairs QQ ′ , Qn and nn ′ (n = u, d). The experimental mean-mass data [48] for these mesons are shown collectively in Table VIII, where the mean (1S) masses of the cc system, for instance,
where we have used the computed 1S-wave mass 154.0 MeV of the pion in Ref. [49], instead of the observed (very light) mass of the physical pion, for which an extra suppression mechanism (Nambu-Goldstone mechanism) enters due to the breaking of chiral symmetry. Note that the Table IX.
For the B c meson, no data in P wave is available. We estimate it via interpolating the P wave masses of the cc and bb systems which are known experimentally. Inspired by atomic spectra in a purely Coulombic potential, we write the binding energy B(QQ ′ ) as a power form of the reduced [36,51],
pair mass µ QQ ′ = M Q M Q ′ /(M Q + M Q ′ )B(QQ ′ ) − B 0 = k[µ QQ ′ ] P = k M Q M Q ′ M Q + M Q ′ P .(62)
where B 0 are a constant while the parameters k = k N,L and P = P N,L depend on the radial and angular quantum numbers of the excited QQ ′ . In the ground(1S1s) state, the binding B(cc) =
as shown in Table IX.
For the heavy quark-antiquark pairs {cc, bb, bc}, three values of string tension T that reproduce, by Eq. (21), the mean masses in Table VIII and the binding energies in Table IX,
Next, we consider binding energy B(QQ ′ ) of heavy quark pair QQ ′ in the color antitriplet(3 c ).
In a baryon qQQ ′ , such a binding enters as a correction to energy of the QCD string connecting Q and Q ′ in short-distance, as indicated by heavy-quarkonia spectra [48]. By color-SU (3) argument and by lattice simulations in Ref. [52], the QQ ′ interaction strength in a color triplet is half of that of the QQ ′ in a color singlet when the interquark(QQ) distance is small. One can then write, as in
Ref. [4],
B(QQ ′ ) = 1 2 B(QQ ′ ),(65)
Using the binding data for the QQ ′ in Table IX, Eq. (65) and Eq. (66) give all binding shifts ∆B(QQ) for the 2S and 1P states, and all binding shifts ∆B(QQ ′ ) for the 2S and 1P states, as listed collectively in Table IX.
With the binding data in Table IX
c*(Ξbb(bbq))_{2S,1P} = [M_c / (E_bb)_{2S,1P}] (m_n/m_q) c(D)_{1S}, (71)
where (E_bb)_{2S} = 9.2637 GeV. Further application of the following scaling relations between the doubly charmed and doubly bottom baryons
c*(Ξcc)_{2S} = c*(Ξbb) E_bb/E_cc, c*(Ξbc)_{2S} = c*(Ξbb) E_bb/E_bc, c*(Ωcc)_{2S} = c*(Ωbb) E_bb/E_cc, c*(Ωbc)_{2S} = c*(Ωbb) E_bb/E_bc. (73)
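These scaling relations are straightforward to check numerically; a minimal sketch (not the authors' code), using the 2S effective diquark masses of Eq. (69) and the bb couplings computed from Eq. (71):

```python
# Effective 2S diquark masses (GeV, Eq. (69)) and the bb couplings (MeV)
E_cc, E_bb, E_bc = 3.2591, 9.2637, 6.2583
c_Xi_bb, c_Om_bb = 21.86, 15.33

print(c_Xi_bb * E_bb / E_cc)  # ~62.1 MeV, cf. c*(Xi_cc)_2S = 62.12 MeV
print(c_Xi_bb * E_bb / E_bc)  # ~32.4 MeV, cf. c*(Xi_bc)_2S = 32.35 MeV
print(c_Om_bb * E_bb / E_cc)  # ~43.6 MeV, cf. c*(Om_cc)_2S = 43.56 MeV
print(c_Om_bb * E_bb / E_bc)  # ~22.7 MeV, cf. c*(Om_bc)_2S = 22.69 MeV
```

The outputs agree with the values quoted below in the text up to rounding of the inputs.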
Given the masses in Table II, one obtains, by Eq. (43), the multiplet masses of the DH baryons Ξ_QQ' and Ω_QQ' in the 2S1s state, listed in Table X through XVI and shown in FIGs. 1-8, respectively.
(2) The 1P1s states. Application of the mass scaling Eq. (71) leads to (E_bb = 9.3211 GeV)
c*(Ξbb)_{1P} = (M_c/E_bb)(140.6 MeV) = 21.72 MeV, c*(Ωbb)_{1P} = (M_c/E_bb)(m_n/m_s)(140.6 MeV)
Given the masses in Table II, the resulting 1P1s masses are listed in Table XI.
...and diquark, which are derived semiclassically from the QCD string model and tested successfully against the observed heavy baryons and mesons. The heavy-pair binding energies are extracted from the excitation energies of heavy diquarks phenomenologically and incorporated into the Regge relation, by which the effective masses of excited diquarks are estimated and the mass scaling is constructed.
(1P1s)1/2⁻ | 7344.3 | − | − | 7212 | − | −
(1P1s)3/2⁻ | 7392.7 | − | − | 7214 | − | −
Our mass analysis suggests (see FIGs. 1-8) that the mean mass-level spacings ∆M of heavy-diquark excitations are generally narrower than those of light-quark excitations. Low-energy QCD interactions are known to be involved in many respects, especially in excited hadrons, and various approaches exist for exploring excited doubly heavy baryons [3,10,12,53-57]. The Regge-trajectory relations employed in this work are established phenomenologically
(iii) While the mass predictions for excited DH baryons in this work are by no means rigorous, our estimation of the baryon spin-multiplet splittings is general, in that the mass scaling employed is based on the chromodynamic similarity between heavy baryons and heavy mesons. The parameterized spin-dependent interactions in Eq. (29) and Eq. (32) are built generally according to the Lorentz structure and the tensor nature of the interaction [41,42].
Producing and measuring the DH baryons in e⁺e⁻, pp, or pp̄ collisions requires the simultaneous production of two heavy quark-antiquark pairs and subsequent searches among the final states to which DH baryons decay. A heavy quark Q from one pair needs to coalesce with a heavy quark Q' from the other pair, forming a color-antitriplet heavy diquark QQ'. The heavy diquark QQ' then needs to pick up a light quark q to finally hadronize as a DH baryon QQ'q.
To search for experimental signals of the DH baryons, it is useful to examine the baryon's decay modes that are easier to detect. One way is to examine strong decay processes like Ξ_QQ' → Ξ_Q π and Ξ_QQ' → Ξ_Q ρ (ρ → ππ), which are explored in [62,63]. The other way is to check the decay modes with two leptons in their final states, e.g., the semi-leptonic decays. We list the estimated lifetimes (except for Ξ_cc^++ and Ξ_cc^+, for which we use the experimental values) of the DH baryons and some notable branching fractions (Br, all from the most recent computation [63]) in the semi-leptonic decay channel:
Ξ_cc^++ = ccu: τ = 256 ± 27 fs [48], Br(Ξ_cc^++ → Ξ_c^+ ℓν_ℓ) = 4.99%, Br(Ξ_cc^++ → Ξ'_c^+ ℓν_ℓ) = 5.98%;
Ξ_cc^+ = ccd: τ < 33 fs [48], Br(Ξ_cc^+ → Ξ_c^0 ℓν_ℓ) = 1.65%, Br(Ξ_cc^+ → Ξ'_c^0 ℓν_ℓ) = 1.98%;
Ξ_bc^+ = bcu: τ = 244 fs [4], Br(Ξ_bc^+ → Ξ_b^0 ℓν_ℓ) = 2.3%, Br(Ξ_bc^+ → Ξ_cc^++ ℓν_ℓ) = 1.58%;
Ξ_bc^0 = bcd: τ = 93 fs [4], Br(Ξ_bc^0 → Ξ_b^− ℓν_ℓ) = 0.868%, Br(Ξ_bc^0 → Ξ_cc^+ ℓν_ℓ) = 0.603%;
Ξ_bb^0 = bbu: τ = 370 fs [4], Br(Ξ_bb^0 → Ξ_bc^+ ℓν_ℓ) = 2.59%, Br(Ξ_bb^0 → Ξ'_bc^+ ℓν_ℓ) = 1.15%;
Ξ_bb^− = bbd: τ = 370 fs [4], Br(Ξ_bb^− → Ξ_bc^0 ℓν_ℓ) = 2.62%;
Ω_bc^0 = bcs: τ = 220 fs [62], Br(Ω_bc^0 → Ω_b^− ℓν_ℓ) = 6.03%;
Ω_bb^− = bbs: τ = 800 fs [62], Br(Ω_bb^− → Ω_bc^0 ℓν_ℓ) = 4.81%.
More details of the branching fractions in other channels can be found in Refs. [63,64] and references therein. We hope the upcoming experiments (and data analyses) at LHC, Belle II and CEPC can test our mass computations of the DH baryons in this work.
E = Σ_{i=QQ',q} [ m_{bare,i}/√(1−(ωr_i)²) + (a/ω) ∫₀^{ωr_i} du/√(1−u²) ], (A1)
l = Σ_{i=QQ',q} [ m_i ωr_i²/√(1−(ωr_i)²) + (a/ω²) ∫₀^{ωr_i} u² du/√(1−u²) ], (A2)
where m_{bare QQ'} and m_{bare q} are the respective bare masses of the heavy diquark QQ' and the light quark q, ωr_i = v_i is the velocity of the string end tied to quark i = QQ', q, and a stands for the tension of the QCD string. We define the effective (dynamical) masses of the heavy diquark and the light quark in the CM frame of the baryon by
M_{QQ'} = m_{bare QQ'}/√(1−v²_{QQ'}), m_q = m_{bare q}/√(1−v²_q). (A3)
Integrating Eq. (A1) and Eq. (A2) gives
E = M_{QQ'} + m_q + (a/ω)[arcsin(v_{QQ'}) + arcsin(v_q)], (A4)
l = (1/ω)(M_{QQ'}v²_{QQ'} + m_q v²_q) + (a/2ω²) Σ_{i=QQ',q} [arcsin(v_i) − v_i√(1−v_i²)]. (A5)
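For the reader's convenience, the two elementary integrals used in passing from Eqs. (A1)-(A2) to Eqs. (A4)-(A5) are the standard results

∫₀^v du/√(1−u²) = arcsin v,  ∫₀^v u² du/√(1−u²) = (1/2)[arcsin v − v√(1−v²)],

with v = ωr_i; the second follows from the first by differentiating under the integral sign or by direct differentiation of the right-hand side.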
The boundary condition of the string at the end tied to the heavy diquark gives
a/ω = m_{bare QQ'} v_{QQ'}/(1−v²_{QQ'}) = M_{QQ'} v_{QQ'}/√(1−v²_{QQ'}). (A6)
As the diquark QQ' is very heavy and moves nonrelativistically in hadrons, v_{QQ'} is small in the heavy-quark limit (m_{bare QQ'} → ∞). A series expansion of Eq. (A6) in v_{QQ'} gives
a/ω ≃ M_{QQ'} v_{QQ'} + (1/2) M_{QQ'} v³_{QQ'} = P_{QQ'} + P³_{QQ'}/(2M²_{QQ'}). (A7)
From Eq. (A3) one has v_q = √(1 − (m_{bare q}/m_q)²).
Assuming q to move relativistically (v_q → 1), i.e., m_{bare q}/m_q ≪ 1, one finds
arcsin(v_q) = arcsin√(1 − (m_{bare q}/m_q)²) ≃ π/2 − m_{bare q}/m_q, (A8)
arcsin(v_{QQ'}) = v_{QQ'} + (1/6)v³_{QQ'} + O(v⁵_{QQ'}), (A9)
v_{QQ'}√(1−v²_{QQ'}) = v_{QQ'} − (1/2)v³_{QQ'} + O(v⁵_{QQ'}). (A10)
Substituting the above relations into Eqs. (A4) and (A5) yields
E ≃ M_{QQ'} + m_q + (a/ω)[π/2 − m_{bare q}/m_q + v_{QQ'} + (1/6)v³_{QQ'}], (A11)
ωl ≃ M_{QQ'}v²_{QQ'} + m_q + (a/ω)(π/4 − m_{bare q}/m_q) + (a/3ω)v³_{QQ'}. (A12)
Using Eq. (A7) and eliminating ω, Eqs. (A11) and (A12) combine to give, when the tiny term m_{bare q}/m_q is ignored,
(E − M_{QQ'})² = πal + (m_q + P²_{QQ'}/M_{QQ'})² − 2m_{bare q}P_{QQ'}, (A13)
where P_{QQ'} ≡ M_{QQ'}v_{QQ'} ≃ M_{QQ'}[1 − m²_{bare QQ'}/M²_{QQ'}]^{1/2}. Taking the small bare-mass limit (m_{bare q} → 0), Eq. (A13) leads to Eq. (1), where P²_{QQ'}/M_{QQ'} = M_{QQ'} − m²_{bare QQ'}/M_{QQ'}.
(2) Improved (nonlinear) Regge relation
Rewriting it in the typical form of a standard trajectory, (M − const.)^{3/2} ∼ quantum numbers, for the heavy-quarkonia system, the Regge relation Eq. (18) of Ref. [34] becomes
(M̄ − 2M_Q)^{3/2} = (3√3/2) √(M_Q T) L. (B5)
Comparing the radial and angular slopes (the linear coefficients in N and L) on the RHS of Eq. (19) and Eq. (B5), whose ratio is π : √12, one can combine the two equations into one unified form:
(M̄_{N,L} − 2M_Q)^{3/2} = (3√(3T)/2) √M_Q (L + πN/√12) + 2c₀. (B6)
In the derivation [34] of Eq. (18), and in that of Eq. (19) shown in Appendix Eq. (B1), a linear confining interaction between Q and Q̄ is assumed, with the short-distance force between them ignored. As the short-distance force is required for the low-lying quarkonia and violates the typical linear trajectory in Eq. (B6), one way out is to assume that the short-distance attraction contributes an extra term to Eq. (B5) and Eq. (19). Normally, such a term is negative: when the two quarks (Q and Q̄) are heavy enough to stay close to each other, they both experience an attractive Coulomb-like force of single-gluon exchange, as the observed heavy-quarkonia spectra [48] of the cascade type indicate. For each interaction term of Eq. (29), one can evaluate its matrix elements by explicit construction of states with a given J₃ as linear combinations of the baryon states |S_{QQ'3}, S_{q3}, l₃⟩ in the LS coupling, where S_{q3} + S_{QQ'3} + l₃ = J₃. Due to the rotational invariance of the matrix elements, it suffices to use a single J₃ for each term. Then, one can use
l · S_i = (1/2)[l₊S_{i−} + l₋S_{i+}] + l₃S_{i3}, (D1)
to find their elements by applying l · S_i (i = q, QQ') to the third components of the angular momenta. For the projected states of the baryon with a given J₃, they can be expressed in the LS coupling.
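As a concrete illustration (added for the reader, not an extra result of the paper), the diagonal values that l · S_q takes on the j_q eigenstates follow from the standard coupled-angular-momentum identity; for a p-wave light quark (l = 1, S_q = 1/2),

⟨l · S_q⟩ = (1/2)[j(j+1) − l(l+1) − S_q(S_q+1)] = −1 for j = 1/2, and +1/2 for j = 3/2.

The same identity with l replaced by S_{QQ'} applies to the hyperfine term S_{QQ'} · S_q, which is the origin of the {−1, 1/2} factors multiplying the couplings c in the 1s-wave mass formulas of Section VI.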
A scalar (spin-0) diquark is denoted by [QQ'] and an axial-vector (spin-1) diquark by {QQ'}. The charm diquark cc and the bottom diquark bb can only form axial-vector diquarks, {cc} or {bb}, while the bottom-charm diquark bc can form both, {bc} or [bc]. We use the notation N_D L_D n l to label the quantum numbers of the DH baryons, with N_D the principal quantum number of the diquark, L_D its orbital momentum (denoted by a capital letter), n the principal quantum number of the light-quark excitation, and l its orbital momentum (denoted by a lowercase letter).
M̄(Ξcc, 1p) = 4.0813 GeV, M̄(Ξbb, 1p) = 10.5577 GeV, M̄(Ξ'bc, 1p) = 7.3258 GeV, M̄(Ξbc, 1p)
M̄(Ωcc, 1p) = 4.1895 GeV, M̄(Ωbb, 1p) = 10.6795 GeV, M̄(Ω'bc, 1p) = 7.4430 GeV, M̄(Ωbc, 1p)
and the inputs in Eqs. (2), (5), (6), (9), (10), (14) and (15), one obtains, by Eq. (17), the mean (spin-averaged) masses of the low-lying DH baryons with 1S2s
B. The DH baryons with 1P- and 2S-wave diquarks
Consider excited DH baryons qQQ' with the diquark QQ' in internal 1P and 2S waves. The diquark cc or bb has to be in a spin singlet (antisymmetric) in the internal 1P wave or in a spin triplet in the internal 2S wave, while bc can be in either the internal 1P or the internal 2S wave. As treated before, one can decompose the excited baryon mass as M = M̄_{N,L} + (∆M)_{N,L}, where M̄_{N,L} is the spin-independent mass and (∆M)_{N,L} is the mass shift (splitting) due to the spin interactions in the DH baryon qQQ'.
Substitution of the quark masses (M_c = 1.44 GeV, M_b = 4.48 GeV) into Eq. (22) gives (in GeV²)
T c₀(cc) = 0.0689 GeV², T c₀(bb) = 0.4363 GeV², T c₀(bc) = 0.2136 GeV², (23)
in which we have used the observed data of Section VI, M̄(cc̄)_{1S} = 3.06865 GeV, M̄(bb̄)_{1S} = 9.4449 GeV, M̄(bc̄)_{1S} = 6.32025 GeV, for the ground-state heavy quarkonia and the B_c meson in
Accounting for the excitation energy of the baryons, (∆M^(*))_{N,L} = M̄^(*)_{N,L} − M̄_{0,0}, as the energy shift of the subsystem [Q − Q'] relative to its ground state, one can write this excitation energy due to the diquark excitation (T_{QQ'} = T given in Eq. (22)) as
For the bc diquark, for which QQ' = [bc] or {bc}, the spin S_{[bc]} = 0 in the internal S wave and S_{[bc]} = 1 in the internal P wave, by the symmetry of the baryon states q[bc] and q{bc}. Also, S_{{bc}} = 1 in the internal S wave and S_{{bc}} = 0 in the internal P wave. For the baryon q[bc], J_d = L ⊕ S_{QQ'} = {0} in the S wave of the diquark or {0, 1, 2} in the P wave of the diquark, while for the baryon q{bc}, J_d = L ⊕ S_{QQ'} = {1} in the 2S and 1P waves of the diquark.
A. The 1p-wave DH baryons with S-wave diquark
Thus, one can compute the mass splitting ∆M(J, j) = ⟨J, j|H_SD|J, j⟩ in the jj coupling via three steps. First, one solves the eigenfunctions (the LS bases) |S_{QQ'3}, S_{q3}, l₃⟩ of l · S_q for its eigenvalues j = 1/2 and 3/2 in the LS coupling; second, one transforms the obtained LS bases into |J, j⟩ in the jj coupling (Appendix D); finally, one evaluates the diagonal elements of the spin interaction Eq. (29) in the new bases |J, j⟩. The results for ∆M(J, j) are listed in Table IV.
According to Eqs. (46)-(48), one can estimate the parameters a₁, a₂ and b for the heavy baryons Σ_Q, the Ξ'_Q and the Ω_c. In
a₁(D_s) = 21.84 MeV,
Putting the above parameters into Eq. (31), and adding M̄_L(Ξbc, Ωbc) from Eqs. (11) and (16), one obtains the P-wave masses M̄_L(Ξbc, Ωbc) + ∆M(Ξbc, Ωbc) of the baryons Ξbc and Ωbc (with scalar diquark bc and J^P = 1/2⁻, 3/2⁻):
M(Ξbc, 1/2⁻) = 7294.7 MeV, M(Ξbc, 3/2⁻) = 7341.4 MeV,
in comparison with other predictions cited. These masses are also shown in FIGs. 1-8 for the respective states of the DH baryons. Our computation suggests that the 1S1p states of the DH baryons have masses increasing with the baryon spin J, from the lowest 1/2⁻ to the highest 5/2⁻.
B. The 2s- and 1s-wave baryons with 1S-wave diquark
We first consider the DH baryons qQQ' with the excited diquark QQ' in internal 2S waves and then examine the DH baryons in the ground state. As two spin states of the D mesons are established experimentally in the 1s and 2s waves, we shall use the mass scaling relative to the D mesons. In the 2s wave (n = 1), the two D mesons are the D₀(2550)⁰ with mass 2549 ± 19 MeV and the D₁*(2600)^{±,0} with mass 2627 ± 10 MeV [48]. The mass scaling relative to the D mesons, inspired
It follows (c(D)_{2s} = the mass splitting of the heavy mesons) that
c(Ξbb(bbu)) = (M_c/M_bb)[M(D(2s), 1⁻) − M(D(2s), 0⁻)] = 12.6 MeV,
c(Ωbb(bbs)) = (M_c/M_bb)(m_u/m_s)[M(D(2s), 1⁻) − M(D(2s), 0⁻)] = 8.8 MeV.
For the 1s-wave Ξcc, the c value can be scaled to that of the ground-state D mesons, with mass difference m(D*(2010)⁺) − m(D⁺) = 140.6 MeV [48], via the mass scaling c(Ξcc)_{1s} = c(D) ...; for Ωcc, the relevant meson for the scaling is the ground-state D_s meson, with mass difference m(D*±_s) − m(D±_s) = 143.8 MeV [48], and the c value is then c(Ωcc)_{1s} = c(D_s) ... Given M̄(qQQ')_{1s} = M_{QQ'} + m_q + M_{QQ'}v²_{QQ'} by Eq. (3), with v²_{QQ'} ≡ 1 − m²_{bare QQ'}/M²_{QQ'}, and Eq. (30), one can use the c values above to give
(1/2, 3/2)⁺: M(Ξcc)_{1s} = 3691.7 + 70.66{−1, 1/2} = {3620.8, 3726.8} MeV,
(1/2, 3/2)⁺: M(Ωcc)_{1s} = 3789.5 + 72.3{−1, 1/2} = {3717.2, 3825.7} MeV,
where the mean masses (3691.7 MeV and 3789.5 MeV) in the 1s wave (v²_cc = 0.208) are obtained for the systems q{cc}, and the two numbers in curly braces correspond to the respective states with J^P = 1/2⁺ and 3/2⁺. A similar calculation applied to the 1s-wave Ξbb and Ωbb gives c(Ξbb)_{1s} = c(D) ..., c(Ωbb)_{1s} = c(D_s) ..., and
(1/2, 3/2)⁺: M(Ξbb)_{1s} = 10226 + 22.7{−1, 1/2} = {10203.3, 10237.4} MeV,
(1/2, 3/2)⁺: M(Ωbb)_{1s} = 10324 + 23.2{−1, 1/2} = {10301.8, 10335.6} MeV,
where M̄(q{bb})_{1s} = M_bb + m_q + M_bb v²_bb = 10226.0 (10324.0) MeV has been used for q = n (or s). Finally, for the DH systems q(bc) there are three ground states, Ξbc = n{bc} with J^P = 1/2⁺ and 3/2⁺ and Ξ'bc = n[bc] with J^P = 1/2⁺, and three ground states, Ωbc = s{bc} with J^P = 1/2⁺ and 3/2⁺ and Ω'bc = s[bc] with J^P = 1/2⁺. Noticing that v²_bc = 0.1428 (0.1429) for the diquark {bc} ([bc]), one finds, by Eq. (3), the mean masses
M̄(n{bc})_{1s} = M_{{bc}} + m_n + M_{{bc}}v²_bc = 6964.3 MeV, M̄(s{bc})_{1s} = M_{{bc}} + m_s + M_{{bc}}v²_bc = 7062.3 MeV,
M̄(n[bc])_{1s} = 6963.0 MeV, M̄(s[bc])_{1s} = 7061.0 MeV,
and the c parameters c(Ξbc)_{1s} = c(D) ..., c(Ωbc)_{1s} = c(D_s) ... The ground-state masses of the baryons Ξbc and Ωbc are then
(1/2, 3/2)⁺: M(Ξbc)_{1s} = 6964.3 + 33.8{−1, 1/2} = {6930.5, 6981.2} MeV,
(1/2, 3/2)⁺: M(Ωbc)_{1s} = 7062.3 + 34.6{−1, 1/2} = {7027.7, 7079.6} MeV,
1/2⁺: M(Ξ'bc)_{1s} = 6963.0 MeV, M(Ω'bc)_{1s} = 7061.0 MeV,
are
M̄(cc̄)_{1S} = [3M(J/ψ) + M(η_c)]/4 = 3068.65 MeV, M̄(cn̄)_{1S} = [3M(D*) + M(D)]/4 = 1973.23 MeV, M̄(nn̄')_{1S} = [3M(ρ) + 154]/4 = 619.98 MeV.
Given the data in Table VIII, the binding energy of a heavy quark-antiquark pair is calculable by Eq. (60). In the case of QQ̄' = cc̄, Eq. (60) gives
B(cc̄)_{1S} = M̄(cc̄)_{1S} − 2M̄(cn̄)_{1S} + M̄(nn̄')_{1S} = 3068.65 − 2(1973.23) + 619.98 = −257.83 MeV.
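This arithmetic is simple to verify; a minimal sketch (not the authors' code) evaluating Eq. (60) with the 1S-wave entries of Table VIII (all in MeV):

```python
# 1S-wave mean masses from Table VIII (MeV)
Mbar = {"cc": 3068.65, "bb": 9444.9, "cn": 1973.23, "bn": 5313.40, "nn": 619.98}

# Eq. (60): B = M(QQbar') - M(Q nbar) - M(Q' nbar') + M(n nbar')
B_cc_1S = Mbar["cc"] - 2 * Mbar["cn"] + Mbar["nn"]
B_bb_1S = Mbar["bb"] - 2 * Mbar["bn"] + Mbar["nn"]
print(B_cc_1S, B_bb_1S)  # -257.83, -561.92 -> the 1S entries of Table IX
```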
For the pair bc̄, only the 1S- and 2S-wave mesons (the B_c) are available for M̄(bc̄) experimentally, and they give, by Eq. (60),
B(bc̄)_{1S} = M̄(bc̄)_{1S} − M̄(bn̄)_{1S} − M̄(cn̄)_{1S} + M̄(nn̄)_{1S} = 6320.3 − 5313.4 − 1973.2 + 619.98 = −346.4 MeV,
in which M̄(bc̄)_{1S} is estimated from the measured M(B_c, 0⁻)_{1S} = 6274.5 MeV and the mass splitting ∆M(B_c)_{1S} = 61 MeV between the 1⁻ and 0⁻ states predicted by Ref. [50]:
M̄(bc̄)_{1S} = (1/4)[6274.5 + 3 × (6274.5 + ∆M(B_c)_{1S})] = 6320.3 MeV.
For the 2S-wave bc̄, one can use the measured 2S-wave mass M(B_c, 0⁻)_{2S} = 6871.2 MeV of the B_c meson and the mass splitting ∆M(B_c)_{2S} = 40 MeV predicted by Ref. [50] to get
M̄(bc̄)_{2S} = (1/4)[6871.2 + 3 × (6871.2 + ∆M(B_c)_{2S})] = 6901.2 MeV,
and thereby to find
B(bc̄)_{2S} = 6901.2 − 5917.0 − 2607.5 + 1423.7 = −199.6 MeV,
where M̄(cn̄)_{2S} = 2607.5 MeV [48] and M̄(bn̄)_{2S} = 5917.0 MeV is taken from the predicted mean mass [33] of the B mesons in the 2S wave. Similarly, one finds B(cc̄)_{2S} = −117.3 MeV and B(bb̄)_{2S} = −392.98 MeV, as shown in
Table IX. These correspond, by Eq. (62), to the parameters {B₀ = 62.066 MeV, P_{0,0} = 0.58805, k_{0,0} = −388.342}. When QQ̄' is excited to the 1P wave, substitution of the measured mean masses of the charmonium and bottomonium in Table VIII into Eq. (60) leads to
B(cc̄)_{1P} = −100.38 MeV, B(bb̄)_{1P} = −328.81 MeV.
One can then interpolate these two binding energies via Eq. (62) to predict (for the bc̄ system) P_{0,1} = 0.77365, k_{0,1} = −0.209448, and further to give, by Eq. (62),
B(bc̄)_{1P} = B₀ + [B(cc̄)_{1P} − B₀](µ_{bc}/µ_{cc})^{P_{0,1}} = −161.78 MeV. (63)
are given by {T(cc̄), T(bb̄), T(bc̄)} = {0.21891, 0.43367, 0.34278} GeV². (64)
with the QQ̄' in a color singlet (1_c). Relative to the ground state (1S1s), the shift ∆B_{N,L} ≡ B(QQ')_{N,L} − B(QQ')_{0,0} of the binding energy is, by Eq. (65),
∆B(QQ')_{N,L} = (1/2)[B(QQ̄')_{N,L} − B(QQ̄')_{0,0}]. (66)
, and the values of T in Eq. (64), Eq. (28) gives rise to the mean-mass shifts of the DH baryons due to the (2S and 1P) diquark excitations relative to their ground states (1S1s):
{∆M^(*)(Ξcc), ∆M^(*)(Ξbb), ∆M^(*)(Ξbc)}_{2S} = {393.56, 347.02, 365.97} MeV, (67)
{∆M^(*)(Ξcc), ∆M^(*)(Ξbb), ∆M^(*)(Ξbc)}_{1P} = {430.87, 404.39, 412.27} MeV, (68)
and the same values for the Ω_cc', Ω_bb' and Ω_bc'. This enables us to define the effective masses E_{QQ'} of a heavy diquark in its excited state via the energy shift due to the diquark excitation,
(E_{QQ'})_{N,L} = M_{QQ'} + (∆M^(*))_{N,L},
which gives explicitly, with the use of Eq. (5), Eq. (6), Eq. (67) and Eq. (68),
(E_cc, E_bb, E_bc)_{2S} = {3.2591, 9.2637, 6.2583} GeV, (69)
(E_cc, E_bb, E_bc)_{1P} = {3.2964, 9.3211, 6.3046} GeV. (70)
D. Spin couplings due to diquark excitations
Let us consider the 2S1s and 1P1s states of the DH baryons. For the spin couplings of the baryon multiplets, we utilize the mass scaling relative to the ground-state D mesons (the D± with mass 1869.66 ± 0.05 MeV and the D*(2010)± with mass 2010.26 ± 0.05 MeV [48]),
in which c(D)_{1S} = M(D*±) − M(D±) = 140.6 MeV (corresponding to the contact term) for the D mesons, as in Ref. [29]. (1) The 2S1s states. Using the data in Eq. (67) and Eq. (68), Eq. (71) leads to
c*(Ξbb(bbn))_{2S} = (M_c/E_bb)(140.6 MeV) = 21.86 MeV,
c*(Ωbb(bbs))_{2S} = (M_c/E_bb)(m_n/m_s)(140.6 MeV) = 15.33 MeV,
gives
c*(Ξcc)_{2S} = 62.12 MeV, c*(Ξbc)_{2S} = 32.35 MeV, c*(Ωcc)_{2S} = 43.56 MeV, c*(Ωbc)_{2S} = 22.69 MeV.
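The effective diquark masses entering these couplings can be cross-checked quickly; a sketch (not the authors' code) combining the ground-state diquark masses of Eqs. (5)-(6) with the excitation shifts of Eqs. (67)-(68), where the bc entry assumes the axial-vector value M_{{bc}}:

```python
# Diquark masses (GeV, Eqs. (5)-(6)) and excitation shifts (GeV, Eqs. (67)-(68));
# (E_QQ')_{N,L} = M_QQ' + (dM*)_{N,L}
M = {"cc": 2.8655, "bb": 8.9167, "bc": 5.8923}
dM_2S = {"cc": 0.39356, "bb": 0.34702, "bc": 0.36597}
dM_1P = {"cc": 0.43087, "bb": 0.40439, "bc": 0.41227}

for q in M:
    print(q, round(M[q] + dM_2S[q], 4), round(M[q] + dM_1P[q], 4))
# cc: 3.2591, 3.2964 | bb: 9.2637, 9.3211 | bc: 6.2583, 6.3046, cf. Eqs. (69)-(70)
```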
and the mean masses of the ground-state baryons Ωcc, Ωbb and Ωbc in Ref. [3], ∆M^(*) in Eq. (67) and the values of c* in Eqs. (72) and (73), one gets, by Eq.
c*(Ξcc)_{1P} = c*(Ξbb)_{1P} (9.3211/3.2964) = 61.42 MeV, c*(Ξbc)_{1P} = c*(Ξbb)_{1P} (9.3211/6.3046) = 32.31 MeV,
c*(Ωcc)_{1P} = c*(Ωbb)_{1P} (9.3211/3.2964) = 43.07 MeV, c*(Ωbc)_{1P} = c*(Ωbb)_{1P} (9.3211/6.3046)
and the mean masses of the ground-state baryons Ωcc, Ωbb and Ωbc in Ref. [3], ∆M^(*) in Eq. (68) and the c* values in Eqs. (75) and (76), one can apply Eq. (44) and Eq. (45) to obtain the mean masses M̄_{1P} + (∆M^(*))_{1P} and the multiplet masses of the 1P1s-wave DH baryons. The results are listed in Table X through XVI and shown in FIGs. 1-8, respectively, for the 2S1s and 1P1s states of the DH baryons.
We collect all the obtained masses of the DH baryons, computed by summing the spin-averaged masses (M̄) of Section III and the mass splittings ∆M of Section IV, in Tables X through XVI. The ground-state masses obtained in Section VI are also listed there.
FIG. 1: Mass spectrum (in MeV) of the Ξcc baryons, corresponding to Table X. The horizontal dashed line shows the Λ_c D threshold.
...heavy hadrons, to explore the excited doubly heavy baryons Ξ_QQ' and Ω_QQ' in the heavy-diquark-light-quark picture. The low-lying masses of the excited doubly heavy baryons are computed up to the 2S and 1P waves of the light quark and the heavy diquark internally, and compared to other calculations. Two Regge trajectories, one linear and the other nonlinear and endowed with an extra binding of the heavy quark pair, are employed to describe the respective excitations of the light quark
TABLE XIII: Mass spectra of Ωcc baryons (in MeV). Our results are obtained by the respective sum of
FIG. 2: Mass spectrum (in MeV) of the Ξbb baryons, corresponding to Table XI.
FIG. 3: Mass spectrum (in MeV) of the Ξbc baryons, corresponding to Table XII. The horizontal dashed line shows the Λ_b D threshold.
FIG. 4: Mass spectrum (in GeV) of the Ξ'bc baryons, corresponding to Table XVI. The horizontal dashed line shows the Λ_b D threshold.
...for the diquark excitations than for the excitations of light quarks, which remains to be explained in further explorations. We hope oncoming experiments like LHCb and Belle will examine our predictions for the excited DH resonances addressed. The binding function Eq. (62), inspired by atomic spectra, mimics the nonrelativistic spectrum of a system of a heavy quark and antiquark bound by the Coulomb-like force of single-gluon exchange. In a purely Coulombic potential V(r) = −(4/3)α_s/r, the energy levels E_n = −[(4/3)α_s]² µ/(2n²) (n = n_r + L + 1) depend linearly on the reduced mass µ of the system. In a quarkonium or the heavy
FIG. 5: Mass spectrum (in MeV) of the Ωcc baryons, corresponding to Table XIII. The horizontal dashed line shows the Λ_c D_s threshold.
...meson B_c, Eq. (62) holds qualitatively, as B₀ may also depend weakly on the quantum numbers (N, L) and on the effective masses of the quarks [35,36,51]. So the prediction in Eq. (63) for the 1P-wave binding B(bc̄), and the ensuing 1P-wave mass calculations of the baryons Ξbc and Ωbc, are approximate (uncertain within roughly 10 MeV).
FIG. 6: Mass spectrum (in MeV) of the Ωbb baryons, corresponding to Table XIV.
FIG. 7: Mass spectrum (in MeV) of the Ωbc baryons, corresponding to Table XV. The horizontal dashed line shows the Λ_b D_s threshold.
FIG. 8: Mass spectrum (in MeV) of the Ω'bc baryons, corresponding to Table XVI. The horizontal dashed line shows the Λ_b D_s threshold.
...and rooted in the underlying interactions of QCD. Some remarks and discussions are in order:
(i) For the DH baryons with a 1S-wave diquark, the Regge trajectory stems from the excitation of the light quark q, which moves relativistically and is away from the heavy mass center of the diquark most of the time. The short-distance interaction and binding energy between the heavy diquark and the light quark q are small and ignorable, especially for excited states. In this case, the linear Regge relation applies as in Eq. (1), without a binding term.
(ii) When the diquark of a DH baryon is excited, an improved (nonlinear) Regge relation entails a correction due to the extra heavy-pair binding, as in Eqs. (21) and (25), since the heavy pairs in hadrons sit deep in an attractive Coulomb-like potential. This is supported by the drop (by roughly one half or one third) of the extracted binding of the 2S- and 1P-wave baryons relative to the ground state, as indicated by Table IX.
ACKNOWLEDGMENTS
Y. S. thanks W.-N. Liu for useful discussions. D. J. thanks Xiang Liu and Xiong-Fei Wang for useful discussions. This work is supported by the National Natural Science Foundation of China under Grant No. 12165017.
Appendix A
For the orbital excitations of a DH baryon q(QQ') with an S-wave diquark QQ', the classical energy and orbital angular momentum of the rotating QCD string read
force between Q and Q̄ in the color singlet (1_c) would provide an extra (negative) energy B(QQ̄)_{N,L} to the spectrum of the QQ̄ system, and to deduct B(QQ̄)_{N,L} (named the binding energy) from the hadron mass M̄_{N,L} in Eq. (B6), so that the LHS of Eq. (B6) becomes the remaining string energy solely: (M̄_{N,L} − 2M_Q)^{3/2} → (M̄_{N,L} − B(QQ̄)_{N,L} − 2M_Q)^{3/2}. In doing so, the arguments of the classical string picture [34] and those in Eq. (1) above remain valid, and the formula Eq. (B6) remains formally intact up to the replacement M̄_{N,L} → M̄_{N,L} − B(QQ̄)_{N,L}. This gives rise to Eq. (20). The binding energy B(QQ̄)_{N,L} (in the excited state |N, L⟩ of the system) depends on the quantum numbers (N, L) of the system considered [36] and thereby violates the linearity of the Regge relations
In the QCD string picture, one can view a DH baryon qQQ' with excited diquark QQ' as a string system [Q − Q]q, consisting of a (heavy) subsystem of massive string [Q − Q] (with one heavy quark Q at each end) and a light subsystem of a light quark q and the string connected to it. In the semiclassical approximation, one can assume the light subsystem (q and the string attached to it) to be in a stationary state while the heavy subsystem [Q − Q] is excited to an excited state (denoted by |N, L⟩, say). As such, the excitation of the heavy subsystem [Q − Q] in the color antitriplet (3̄_c), with string tension T_QQ, resembles the excitation of a heavy quarkonium QQ̄ in the color singlet (1_c), with string tension T, up to a color (strength) factor of the string interaction, which is commonly taken to be one half. Based on this similarity, one can write the excitation energy of the heavy subsystem, by analogy with Eq. (21), as
M(QQ)_{N,L} − 2M_Q = ∆B(QQ)_{N,L} + 3√(T_QQ) ... (L + πN/√12) + ..., (C1)
where the first term ∆B(QQ)_{N,L} = B(QQ)_{N,L} − B(QQ)_{0,0} on the RHS accounts for the short-distance contribution due to the heavy-quark binding and the second term for the excited string energy of the heavy subsystem in the |N, L⟩ state; c₀ is given by Eq. (22). We add an additive constant c₁ since M̄(QQ) is defined up to the ground state of the whole DH system containing QQ. Extending Eq. (C1) to the diquark case QQ = bc, one obtains Eq. (25) generally, by heavy-quark symmetry.
Appendix D
Given the eigenvalues j = 1/2 and 3/2, one solves the bases (eigenfunctions) |S_{QQ'3}, S_{q3}, l₃⟩ of l · S_q in the LS coupling. The mass formula ∆M = ⟨H_SD⟩ for a DH baryon qQQ' with an S-wave diquark QQ' can be obtained by diagonalizing the dominant interaction a₁ l · S_q and adding the diagonal elements of the other perturbative spin interactions in Eq. (29). This can be done by evaluating the matrix elements of H_SD in the LS coupling and then changing the bases |S_{QQ'3}, S_{q3}, l₃⟩ to the new bases |J, j⟩ in the jj coupling, to find the mass formula ∆M = ⟨H_SD⟩.
TABLE I: The effective quark masses (in GeV) and the string tensions a (in GeV²) of the D/D_s and B/B_s mesons in Ref. [29].
Parameters | M_c | M_b | m_n | m_s | a(cn̄) | a(bn̄) | a(cs̄) | a(bs̄)
Input | 1.44 | 4.48 | 0.23 | 0.328 | 0.223 | 0.275 | 0.249 | 0.313
and the average masses M̄(1S1s) in Table II, one can solve Eq. (3) to extract the heavy-diquark masses, with the results
M_cc = 2865.5 MeV, M_bb = 8916.7 MeV, (5)
M_[bc] = 5891.8 MeV, M_{bc} = 5892.3 MeV, (6)
TABLE II: Ground-state masses (GeV) and their spin averages M̄_{l=0} of the doubly heavy baryons in Ref. [3].
State | J^P | Baryon | Content | Mass | M̄_{l=0}
1²S_{1/2} | 1/2⁺ | Ξcc | n{cc} | 3.620 | 3.6913
1⁴S_{3/2} | 3/2⁺ | Ξ*cc | n{cc} | 3.727 |
1²S_{1/2} | 1/2⁺ | Ξbb | n{bb} | 10.202 | 10.2253
1⁴S_{3/2} | 3/2⁺ | Ξ*bb | n{bb} | 10.237 |
1²S_{1/2} | 1/2⁺ | Ξbc | n{bc} | 6.933 | 6.9643
1⁴S_{3/2} | 3/2⁺ | Ξ*bc | n{bc} | 6.980 |
1²S_{1/2} | 1/2⁺ | Ξ'bc | n[bc] | 6.963 | 6.963
TABLE III: Spin-averaged (mean) masses of the excited DH baryons Ξ_QQ' and Ω_QQ' predicted by Eqs.
Now we consider the DH baryons qQQ' with an excited diquark. In the string picture, one can view the baryon as a Y-shaped string system [Q − Q'] − q, in which a subsystem of massive (rotating or vibrating) string [Q − Q'] (tied to one heavy quark at each end) is connected via a static string to a light quark q at the third end. For a 1S- or 1P-wave diquark in a DH baryon, labelled by the quantum numbers N and L, respectively, the Regge relation similar to Eq. (21) takes the form (Appendix C)
TABLE IV: The matrix elements of the mass-splitting operators in the p-wave doubly-heavy-baryon states in the jj coupling.
TABLE V: Effective masses (in GeV) of the light diquarks in the singly heavy baryons shown. Data are from Ref.
qq | 0.745 | 0.745 | 0.872 | 0.872 | 0.991 | 0.991
A. The 1p-wave baryons with 1S-wave diquark
TABLE VI: Spin-coupling parameters (in MeV) of the heavy meson D_s and the SH baryons Σ_Q, Ξ'_Q and Ω_Q ... and (48).
Hadrons | a₁ | a₂ | b
D_s | 89.36 | 40.7 | 65.6
Σ_c | 39.96(39.34) | 21.75(26.82) | 20.70(20.05)
Σ_b | 12.99(12.65) | 6.42(8.62) | 6.45(6.45)
Ξ'_c | 32.89(33.62) | 20.16(25.35) | 16.50(17.93)
Ξ'_b | 9.37(10.81) | 6.29(8.15) | 5.76(5.76)
Ω_c | 26.96(29.59) | 25.76(24.11) | 13.51(16.31)
Ω_b | 8.98(9.51) | 4.11(7.75) | 7.61(5.24)
TABLE VII: Spin-coupling parameters (in MeV) in the spin interaction (29) of the DH baryons Ξ_QQ' and Ω_QQ' (Q, Q' = c, b) in p wave.
DH baryons | a₁ | a₂ | b | c
Ξ_cc | 64.05 | 17.64 | 19.38 | 8.81
Ξ_bb | 20.58 | 5.67 | 6.23 | 2.83
Ξ_{bc} | 31.14 | 8.58 | 9.42 | 4.29
Ω_cc | 44.91 | 16.66 | 16.48 | 6.18
Ω_bb | 14.43 | 5.35 | 5.30 | 1.99
Ω_{bc} | 21.84 | 8.10 | 8.01 | 3.01
TABLE VIII: Mean masses (MeV) of the heavy quarkonia, B_c mesons, B mesons, D mesons and the light unflavored mesons. The data come from spin-averaging the measured masses of these mesons [48].
Mesons | cc̄ | bb̄ | bc̄ | cn̄ | bn̄ | nn̄
M̄(1S) | 3068.65 | 9444.9 | 6320.25 | 1973.23 | 5313.40 | 619.98
M̄(2S) | 3627.50 | 10017.3 | 6901.20 | 2627.00 | 5917.00 | 1423.75
M̄(1P) | 3525.26 | 9899.7 | 6765.32 | 2435.72 | 5737.17 | 1245.79
Note that the Nambu-Goldstone mechanism for the heavy quarkonia is ignorable and not comparable with that of the physical pion. With the 1S-wave data of the respective bottomonium, the B meson and M̄(nn̄')_{1S} = 619.98 MeV in Table VIII, one can similarly estimate B(bb̄) via Eq. (60). The results are listed in Table IX.
TABLE IX: The binding energies B(QQ̄'), B(QQ') and their shifts ∆B relative to the ground state, computed from Eq. (60), Eq. (65) and Eq. (66), respectively. All items are in MeV.
Binding energy | 1S | 2S | 1P | ∆B(2S) | ∆B(1P)
B(cc̄) | -257.83 | -117.30 | -100.38 | 140.53 | 157.45
B(bb̄) | -561.92 | -392.98 | -328.81 | 168.94 | 233.11
B(bc̄) | -346.40 | -199.55 | -161.78 | 146.85 | 184.32
B(cc) | -128.92 | -58.65 | -50.19 | 70.27 | 78.73
B(bb) | -280.96 | -196.49 | -164.41 | 84.47 | 116.55
B(bc) | -173.20 | -99.78 | -80.89 | 73.43 | 92.32
The ground-state values B(cc̄) = −257.83 MeV, B(bb̄) = −561.92 MeV and B(bc̄) = −346.40 MeV in Table IX correspond, by Eq. (62),
TABLE X: Mass spectra of the baryon Ξcc (in MeV). Our results are obtained by the respective sum of the mean mass of the Ξcc in Table III, the binding-energy shift (which vanishes in the 1S1s wave) in Table IX and its splittings in Section III with the couplings given in Section IV.
(N L n_q l) J^P | This work | [3] | [53] | [10] | [8] | [55] | [1] | [57] | [12]
(1S1p)1/2⁻ | 3970.3 |
TABLE XII: Mass spectra (MeV) of Ξbc baryons. Our results are obtained by the respective sum of the mean mass of the Ξbc in Table III, the binding-energy shift (which vanishes in the 1S1s wave) in Table IX and its splittings in Section III with the couplings given in Section IV.
State (N L n_q l) J^P | This work | [54] | [56] | [57] | [58] | [12]
(1S1p)1/2⁻ | 7273.2 | − | 7156 | 7397 | 6842 | 7206
(1S1p)1/2'⁻ | 7327.7 | − | 7161 | 7390 | 6847 | 7231
(1S1p)3/2⁻ | 7307.0 | − | 7144 | 7392 | 6831 | 7208
(1S1p)3/2'⁻ | 7336.6 | − | 7150 | 7394 | 6837 | 7229
(1S1p)5/2⁻ | 7351.3 | − | 7171 | 7399 | 6856 | 7272
(1S2s)1/2⁺ | 7478.6 | 7478 | 7240 | 7634 | 6919 | −
(1S2s)3/2⁺ | 7507.2 | 7495 | 7263 | 7676 | 6939 | −
(2S1s)1/2⁺ | 7297.9 | − | − | 7321 | − | 7044
(2S1s)3/2⁺ | 7346.4 | − | − | 7353 | − | 7386
TABLE XIV: Mass spectra of Ωbb baryons (in MeV). Our results are obtained by the respective sum of
TABLE XVI: Mass spectra of Ξ'bc/Ω'bc baryons (in MeV). Our results are obtained by the respective sum of the mean mass of the Ξ'bc/Ω'bc in Table III, the binding-energy shift (which vanishes in the 1S1s wave) in Table IX and its splittings in Section III with the couplings given in Section IV.
(N_d L n_q l) J^P | This work (Ξ'bc) | [57] | This work (Ω'bc)
(1S1p)1/2⁻ | 7294.7 | 7388 | 7421.2
(1S1p)3/2⁻ | 7341.4 | 7390 | 7453.9
(1S2s)1/2⁺ | 7497.0 | 7645 | 7624.2
(2S1s)1/2⁺ | 7329.0 | 7333 | 7528.3
(1P1s)1/2⁻ | 7375.3 | 7230 | 7528.3
(1P1s)1/2'⁻ | 7343.0 | 7199 | 7505.6
(1P1s)3/2⁻ | 7391.4 | 7228 | 7539.6
(1P1s)3/2'⁻ | 7326.8 | 7201 | 7494.3
(1P1s)5/2⁻ | 7407.6 | 7265 | 7550.9
...the mean mass of the Ωcc in Table III, the binding-energy shift (which vanishes in the 1S1s wave) in Table IX, and its splittings in Section III with the couplings given in Section IV.
(1) Quantization condition for heavy quarkonia QQ̄
Consider a heavy quarkonium QQ̄ (at a distance r) in a linear confining potential T|r|, for which the system Hamiltonian is H_QQ̄ = 2√(p² + M_Q²) + T|r|. Here we have ignored the short-distance interaction. In the center-of-mass frame, the heavy quark Q moves equivalently in a confining potential Tx, with x = r/2, and the Hamiltonian for Q as a half system of the quarkonium QQ̄ becomes
H_Q = √(p² + M_Q²) + Tx. (B1)
Using a semiclassical WKB analysis of Eq. (B1) for the radial excitations (l_q = 0), one has the WKB quantization condition
2 ∫_{x₋}^{x₊} p(x) dx = π(N + c₀), (B2)
with x_∓ the classical turning points, N the radial quantum number, and c₀ a constant. Here E(Q) is the semiclassical value of H_Q, and the factor 2 before the integral in Eq. (B2) arises from the underlying spinor nature of a quark, whose wave function returns to its original value only after a double cycle of its journey in position space [33]. Assuming Q to be in the S wave (moving radially), integration of Eq. (B2) gives Eq. (B3). The above result is only for the half system. Transforming to the whole system of the quarkonium by the mapping E(Q) → E/2 and N → N/2 [61], Eq. (B3) gives Eq. (B4) (B → B̄ = 2M_Q/E).
This gives the required baryon states in the heavy-diquark limit, by which the diagonal matrix elements of l · S_{QQ'}, S₁₂/2 and S_{QQ'} · S_q can be obtained. The detailed results are collected in Table IV.
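As a quick consistency check (a sketch under the assumption that the quantization condition takes the reconstructed form above; it is not part of the original derivation), take the ultrarelativistic limit M_Q → 0, where p(x) = E(Q) − Tx and the turning point is x₊ = E(Q)/T:

2 ∫₀^{E(Q)/T} [E(Q) − Tx] dx = E(Q)²/T = π(N + c₀)  ⟹  E(Q)² = πT(N + c₀),

i.e., the familiar linear-in-N Regge trajectory of a light (massless) quark, the regime encoded by the linear relation Eq. (1) of the main text.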
[1] M. Padmanath, R. G. Edwards, N. Mathur, and M. Peardon, Phys. Rev. D91 (2015) 094502.
[2] T. Mehen, Implications of heavy quark-diquark symmetry for excited doubly heavy baryons and tetraquarks, Phys. Rev. D96 (2017) 094028.
[3] D. Ebert, R. N. Faustov, V. O. Galkin and A. P. Martynenko, Mass spectra of doubly heavy baryons in the relativistic quark model, Phys. Rev. D66 (2002) 014008.
[4] M. Karliner and J. L. Rosner, Baryons with two heavy quarks: masses, production, decays, and detection, Phys. Rev. D90 (2014) 094007.
[5] R. Aaij et al. [LHCb collaboration], Observation of the doubly charmed baryon Ξ++cc, Phys. Rev. Lett. 119 (2017) 112001.
[6] R. Aaij et al. [LHCb collaboration], First observation of the doubly charmed baryon decay Ξ++cc → Ξ+c π+, Phys. Rev. Lett. 121 (2018) 162002.
[7] R. Aaij et al. [LHCb collaboration], Precision measurement of the Ξ++cc mass, JHEP 02 (2020) 049.
[8] J. Soto and J. T. Castellà, Effective QCD string and doubly heavy baryons, Phys. Rev. D104 (2021) 074027.
[9] R. Aaij et al. [LHCb collaboration], Observation of an exotic narrow doubly charmed tetraquark, Nature Physics 18 (2022) 751; arXiv:2109.01038.
[10] S. S. Gershtein, V. V. Kiselev, A. K. Likhoded and A. I. Onishchenko, Spectroscopy of doubly heavy baryons, Phys. Rev. D62 (2000) 054021.
[11] V. V. Kiselev, A. K. Likhoded, O. N. Pakhomova and V. A. Saleev, Mass spectra of doubly heavy Ω_QQ' baryons, Phys. Rev. D66 (2002) 034030.
[12] W. Roberts and M. Pervin, Heavy baryons in a quark model, Int. J. Mod. Phys. A23 (2008) 2817; arXiv:0711.2492 [nucl-th].
[13] J. G. Korner, M. Kramer, and D. Pirjol, Prog. Part. Nucl. Phys. 33 (1994) 787.
[14] E. Bagan, M. Chabab, and S. Narison, Phys. Lett. B306 (1993) 350.
[15] J.-R. Zhang and M.-Q. Huang, Phys. Rev. D78 (2008) 094007.
[16] Z.-G. Wang, Eur. Phys. J. A45 (2010) 267.
[17] R. Roncaglia, D. B. Lichtenberg, and E. Predazzi, Phys. Rev. D52 (1995) 1722.
[18] L. Burakovsky, J. T. Goldman, and L. P. Horwitz, Phys. Rev. D56 (1997) 7124.
[19] M. Rho, D. O. Riska, and N. N. Scoccola, Phys. Lett. B251 (1990) 597.
[20] N. Mathur, R. Lewis, and R. M. Woloshyn, Phys. Rev. D66 (2002) 014502.
[21] R. Lewis, N. Mathur, and R. M. Woloshyn, Phys. Rev. D64 (2001) 094509.
[22] J. M. Flynn, F. Mescia, and A. S. B. Tariq (UKQCD Collaboration), J. High Energy Phys. 07 (2003) 066.
[23] L. Liu, H.-W. Lin, K. Orginos, and A. Walker-Loud, Phys. Rev. D81 (2010) 094505.
[24] C. Alexandrou, J. Carbonell, D. Christaras, V. Drach, M. Gravina, and M. Papinutto, Phys. Rev. D86 (2012) 114501.
[25] Y. Namekawa et al. (PACS-CS Collaboration), Phys. Rev. D87 (2013) 094512.
[26] G. Perez-Nadal and J. Soto, Phys. Rev. D79 (2009) 114002.
[27] J. Soto and J. T. Castellà, Phys. Rev. D102 (2020) 014013.
[28] Y. Nambu, Strings, monopoles and gauge fields, Phys. Rev. D10 (1974) 4262.
[29] Duojie Jia, Wen-Nian Liu and A. Hosaka, Regge behaviors in orbitally excited spectroscopy of charmed and bottom baryons, Phys. Rev. D101 (2020) 034016; arXiv:1907.04958 [hep-ph].
[30] A. V. Manohar and M. B. Wise, Heavy Quark Physics, Cambridge University Press, U.K. (2000), p. 45.
[31] Duojie Jia, Ji-Hai Pan and C.-Q. Pang, A mixing coupling scheme for spectra of singly heavy baryons with spin-1 diquarks in P-waves, Eur. Phys. J. C81 (2021) 434; arXiv:2007.01545 [hep-ph].
[32] J. Hu and T. Mehen, Phys. Rev. D73 (2006) 054003.
[33] Duojie Jia and W.-C. Dong, Regge-like spectra of excited singly heavy mesons, Eur. Phys. J. Plus 134 (2019) 123; arXiv:1811.04214 [hep-ph].
[34] T. J. Burns, F. Piccinini, A. D. Polosa, and C. Sabelli, The 2−+ assignment for the X(3872), Phys. Rev. D82 (2010) 074003.
[35] A. De Rujula, H. Georgi, and S. L. Glashow, Hadron masses in a gauge theory, Phys. Rev. D12 (1975) 147.
[36] M. Karliner and J. L. Rosner, Scaling of P-wave excitation energies in heavy-quark systems, Phys. Rev. D98 (2018) 074026.
[37] K. Johnson and C. B. Thorn, Stringlike solutions of the bag model, Phys. Rev. D13 (1976) 1934.
[38] M. Karliner and J. L. Rosner, Prospects for observing the lowest-lying odd-parity Σc and Σb baryons, Phys. Rev. D92 (2015) 074026; arXiv:1506.01702 [hep-ph].
[39] D. Ebert, R. N. Faustov and V. O. Galkin, Spectroscopy and Regge trajectories of heavy baryons in the relativistic quark-diquark picture, Phys. Rev. D84 (2011) 014025; arXiv:1105.0583.
[40] M. Karliner and J. L. Rosner, Very narrow excited Ωc baryons, Phys. Rev. D95 (2017) 114012.
[41] L. D. Landau and E. M. Lifshitz, Quantum Mechanics (Non-relativistic Theory), 3rd ed., Pergamon Press, Oxford, 1977.
[42] S. N. Mukherjee, R. Nag, S. Sanyal et al., Quark potential approach to baryons and mesons, Phys. Rep. 231 (1993) 201, and references therein.
[43] G. S. Bali, QCD forces and heavy quark bound states, Phys. Rep. 343 (2001) 1.
[44] R. Aaij et al. [LHCb collaboration], Observation of new resonances in the Λ0b π+π− system, Phys. Rev. Lett. 123 (2019) 152001; arXiv:1907.13598 [hep-ex].
[45] R. Aaij et al. [LHCb Collaboration], Observation of five new narrow Ω0c states decaying to Ξ+c K−, Phys. Rev. Lett. 118 (2017) 182001; arXiv:1703.04639 [hep-ex].
[46] M. J. Savage and M. B. Wise, Spectrum of baryons with two heavy quarks, Phys. Lett. B248 (1990) 177.
[47] N. Brambilla, A. Vairo and T. Rosch, Effective field theory Lagrangians for baryons with two and three heavy quarks, Phys. Rev. D72 (2005) 034021; arXiv:hep-ph/0506065.
[48] P. A. Zyla et al. [Particle Data Group], Review of Particle Physics, Prog. Theor. Exp. Phys. 2020 (2020) 083C01.
[49] D. Ebert, R. N. Faustov, and V. O. Galkin, Mass spectra and Regge trajectories of light mesons in the relativistic quark model, Phys. Rev. D79 (2009) 114029.
[50] D. Ebert, R. N. Faustov and V. O. Galkin, Spectroscopy and Regge trajectories of heavy quarkonia and Bc mesons, Eur. Phys. J. C71 (2011) 1825; arXiv:1111.0454 [hep-ph].
[51] W.-X. Zhang, H. Xu and Duojie Jia, Masses and magnetic moments of hadrons with one and two open heavy quarks: heavy baryons and tetraquarks, Phys. Rev. D104 (2021) 114011.
[52] J. Najjar and G. Bali, Static-static-light baryonic potentials, PoS LAT2009 (2009) 089; arXiv:0910.2824 [hep-lat].
[53] Q. F. Lü, K. L. Wang, L. Y. Xiao and X. H. Zhong, Mass spectra and radiative transitions of doubly heavy baryons in a relativized quark model, Phys. Rev. D96 (2017) 114006; arXiv:1708.04468 [hep-ph].
[54] F. Giannuzzi, Doubly heavy baryons in a Salpeter model with AdS/QCD inspired potential, Phys. Rev. D79 (2009) 094002; arXiv:0902.4624 [hep-ph].
[55] T. Yoshida, E. Hiyama, A. Hosaka, M. Oka and K. Sadato, Spectrum of heavy baryons in the quark model, Phys. Rev. D92 (2015) 114029; arXiv:1510.01067 [hep-ph].
[56] Z. Shah and A. K. Rai, Excited state mass spectra of doubly heavy Ξ baryons, Eur. Phys. J. C77 (2017) 129; arXiv:1702.02726 [hep-ph].
[57] B. Eakins and W. Roberts, Symmetries and systematics of doubly heavy hadrons, Int. J. Mod. Phys. A27 (2012) 1250039; arXiv:1201.4885 [nucl-th].
[58] N. Mohajery, N. Salehi and H. Hassanabadi, A new model for calculating the ground and excited states masses spectra of doubly heavy Ξ baryons, Adv. High Energy Phys. 2018 (2018) 1326438.
[59] Z. Shah, K. Thakkar and A. K. Rai, Excited state mass spectra of doubly heavy baryons Ωcc, Ωbb and Ωbc, Eur. Phys. J. C76 (2016) 530; arXiv:1609.03030 [hep-ph].
[60] N. Salehi, Spectroscopy of the Ωcc, the Ωbb and the Ωbc baryons in the hypercentral constituent quark model via the Ansatz method, Acta Phys. Polon. B50 (2019) 735.
[61] J. Sonnenschein and D. Weissman, Rotating strings confronting PDG mesons, JHEP 08 (2014) 013.
[62] V. V. Kiselev and A. K. Likhoded, Baryons with two heavy quarks, Phys. Usp. 45 (2002) 455.
[63] A. G. Gerasimov and A. V. Luchinsky, Weak decays of doubly heavy baryons: decays to a system of π mesons, Phys. Rev. D100 (2019) 073015.
[64] W. Wang, Fu-Sheng Yu and Z.-X. Zhao, Weak decays of doubly heavy baryons: the 1/2 → 1/2 case, Eur. Phys. J. C77 (2017) 781.
It is written as ∫_{x₋}^{x₊} p(x) dx = πN in the cited literature.
"https://export.arxiv.org/pdf/2301.05145v3.pdf"
] | 255,749,293 | 2301.05145 | 52d96048b415b8592eab3870a4384f41ef140fbe |
Analyzing Resource Utilization in an HPC System: A Case Study of NERSC's Perlmutter
Jie Li [email protected]
Texas Tech University
79409LubbockTXUSA
Berkeley Lab
94720BerkeleyCAUSA
University of California
94720BerkeleyCAUSA
Keywords: HPC, Large-scale Characterization, Resource Utilization, GPU Utilization, Memory System, Disaggregated Memory
Jie Li 1[0000−0002−5311−3012], George Michelogiannakis 2[0000−0003−3743−6054], Brandon Cook 2[0000−0002−4203−4079], Dulanya Cooray 3[0009−0000−1727−6298], and Yong Chen 1[0000−0002−9961−9051]

Abstract. Resource demands of HPC applications vary significantly. However, it is common for HPC systems to primarily assign resources on a per-node basis to prevent interference from co-located workloads. This gap between the coarse-grained resource allocation and the varying resource demands can lead to HPC resources being not fully utilized. In this study, we analyze the resource usage and application behavior of NERSC's Perlmutter, a state-of-the-art open-science HPC system with both CPU-only and GPU-accelerated nodes. Our one-month usage analysis reveals that CPUs are commonly not fully utilized, especially for GPU-enabled jobs. Also, around 64% of both CPU and GPU-enabled jobs used 50% or less of the available host memory capacity. Additionally, about 50% of GPU-enabled jobs used up to 25% of the GPU memory, and the memory capacity was not fully utilized in some ways for all jobs. While our study comes early in Perlmutter's lifetime and thus policies and application workload may change, it provides valuable insights on performance characterization and application behavior, and motivates systems with more fine-grain resource allocation.
Introduction
In the past decade, High-Performance Computing (HPC) systems shifted from traditional clusters of CPU-only nodes to clusters of more heterogeneous nodes, where accelerators such as GPUs, FPGAs, and 3D-stacked memories have been introduced to increase compute capability [7]. Meanwhile, the collection of open-science HPC workloads is particularly diverse and recently increased its focus on machine learning and deep learning [4]. Heterogeneous hardware combined with diverse workloads that have a wide range of resource requirements makes it difficult to achieve efficient resource management. Inefficient resource management threatens to not fully utilize expensive resources that can rapidly increase capital and operating costs. Previous studies have shown that the resources of HPC systems are often not fully utilized, especially memory [10,17,20].
NERSC's Perlmutter also adopts a heterogeneous design to bolster performance, where CPU-only nodes and GPU-accelerated nodes together provide a three to four times performance improvement over Cori [12,13], making Perlmutter rank 8th in the Top500 list as of December 2022. However, Perlmutter serves a diverse set of workloads from fusion energy, material science, climate research, physics, computer science, and many other science domains [11]. In addition, it is useful to gain insight into how well users are adapting to Perlmutter's heterogeneous architecture.
Consequently, it is desirable to understand how system resources in Perlmutter are used today. The results of such an analysis can help us evaluate current system configurations and policies, provide feedback to users and programmers, offer recommendations for future systems, and motivate research in new architectures and systems. In this work, we focus on understanding CPU utilization, GPU utilization, and memory capacity utilization (including CPU host memory and GPU memory) on Perlmutter. These resources are expensive, consume significant power, and largely dictate application performance.
In summary, our contributions are as follows:
- We conduct a thorough utilization study of CPUs, GPUs, and memory capacity in Perlmutter, a top-8 state-of-the-art HPC system that contains both CPU-only and GPU-accelerated nodes. We discover that both CPU-only and GPU-enabled jobs usually do not fully utilize key resources.
- We find that host memory capacity is largely not fully utilized for memory-balanced jobs, while memory-imbalanced jobs have significant temporal and/or spatial memory requirements.
- We show a positive correlation between job node hours, maximum memory usage, as well as temporal and spatial factors.
- Our findings motivate future research such as resource disaggregation, job scheduling that allows job co-allocation, and research that mitigates potential drawbacks from co-locating jobs.
Related Work
Many previous works have utilized job logs and correlated them with system logs to analyze job behavior in HPC systems [3,5,9,16,26]. For example, Zheng et al. correlated the Reliability, Availability, and Serviceability (RAS) logs with job logs to identify job failure and interruption characteristics [26]. Other works utilize performance monitoring infrastructure to characterize application and system performance in HPC [6,8,10,18,19,23,24]. In particular, the paper presented by Ji et al. analyzed various applications' memory usage in terms of object access patterns [6]. Patel et al. collected storage system data and performed a correlative analysis of the I/O behavior of large-scale applications [18]. The resource utilization analysis of the Titan system [24] summarized the CPU and GPU time, memory, and I/O utilization across a five-year period. Peng et al. focused on the memory subsystem and studied the temporal and spatial memory usage in two production HPC systems at LLNL [19]. Michelogiannakis et al. [10] performed a detailed analysis of key metrics sampled in NERSC's Cori to quantify the potential of resource disaggregation in HPC. System analysis provides insights into resource utilization and therefore drives research on predicting and improving system performance [2,17,20,25]. Xie et al. developed a predictive model for file system performance on the Titan supercomputer [25]. Desh [2], proposed by Das et al., is a framework that builds a deep learning model based on system logs to predict node failures. Panwar et al. performed a large-scale study of system-level memory utilization in HPC and proposed exploiting unused memory via novel architecture support for the OS [17]. Peng et al. performed a memory utilization analysis of HPC clusters and explored using disaggregated memory to support memory-intensive applications [20].
Background
System Overview
NERSC's latest system, Perlmutter [13], contains both CPU-only nodes and GPU-accelerated nodes with CPUs. Perlmutter has 1,536 GPU-accelerated nodes (12 racks, 128 GPU nodes per rack) and 3,072 CPU-only nodes (12 racks, 256 CPU nodes per rack). These nodes are connected through HPE/Cray's Slingshot Ethernet-based high performance network. Each GPU-accelerated node features four NVIDIA A100 Tensor Core GPUs and one AMD "Milan" CPU. The memory subsystem in each GPU node includes 40 GB of HBM2 per GPU and 256 GB of host DRAM. Each CPU-only node features two AMD "Milan" CPUs with 512 GB of memory. Perlmutter currently uses SLURM version 21.08.8 for resource management and job scheduling. Most users submit jobs to the regular queue that has no maximum number of nodes and a maximum allowable duration of 12 hours.
Figure 1: Data are collected from CPU-only and GPU nodes, aggregated by aggregation nodes, stored in CSV files, and then processed using Python's parquet library after being joined with job-level data provided by SLURM.

The workload served by the NERSC systems includes applications from a diverse range of science domains, such as fusion energy, material science, climate research, physics, computer science, and more [11]. Over the 45-year history of the NERSC HPC facility and 12 generations of systems with diverse architectures, the traditional HPC workloads evolved very slowly despite the substantial underlying system-architecture evolution [10]. However, the number of deep learning and machine learning workloads across different science disciplines has grown significantly in the past few years [22]. Furthermore, in our sampling time, Perlmutter was operating in parallel with Cori. Thus, the NERSC workload was divided among the two machines, and Perlmutter's workload may change once Cori retires. Therefore, while our study is useful to (i) find the gap between resource provider and resource user and (ii) extract insights early in Perlmutter's lifetime to guide future policies and procurement, as in any HPC system the workload may change in the future. Still, our methodology can be reused in the future and on different systems.
Data Collection
NERSC collects system-wide monitoring data through the Lightweight Distributed Metric Service (LDMS) [1] and NVIDIA's Data Center GPU Manager (DCGM) [14]. LDMS is deployed on both CPU-only and GPU nodes; it samples node-level metrics either from a subset of hardware performance counters or from operating system data, such as memory usage, I/O operations, etc. DCGM is dedicated to collecting GPU-specific metrics, including GPU utilization, GPU memory utilization, NVLink traffic, etc. The sampling interval of both LDMS and DCGM is set by the system at 10 seconds. The monitoring data are aggregated into CSV files, from which we build the processing pipeline for our analysis shown in Figure 1. As a last step, we merge the job metadata from SLURM (job ID, job step, allocated nodes, start time, end time, etc.) with the node-level monitoring metrics; the output of our flow is a set of parquet files. Due to the large volume of data, we sample Perlmutter only from November 1 to December 1 of 2022. The system's monitoring infrastructure is still under deployment, and some important traces, such as memory bandwidth, are not available at this time. A duration of one month is typically representative for an open-science HPC system [10], which we separately confirmed by sampling other periods. However, Perlmutter's workload may shift after the retirement of Cori as well as with the introduction of policies such as allowing jobs to share nodes in a limited fashion. Still, a similar extensive study on Cori [10], which allows node sharing, reached similar resource usage conclusions to ours. Therefore, we anticipate that the key insights from our study of Perlmutter will remain valid, and we consider that studies conducted in the early stages of a system's lifetime hold significant value.
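The core of this pipeline is a join between the SLURM job metadata and the node-level time series. The sketch below illustrates that step under stated assumptions; the column names (node, timestamp, job_id, nodelist, start_time, end_time) are hypothetical stand-ins for the actual CSV schemas, and nodelists are assumed to be already expanded to comma-separated hostnames.

```python
import pandas as pd

# Node-level samples from LDMS/DCGM (10 s cadence): one row per node per timestamp.
metrics = pd.read_csv("ldms_dcgm_samples.csv", parse_dates=["timestamp"])
# Job metadata exported from SLURM sacct: job ID, allocated nodes, start/end times.
jobs = pd.read_csv("slurm_jobs.csv", parse_dates=["start_time", "end_time"])

# Expand each job's nodelist so there is one (job_id, node) row per allocated node.
jobs["node"] = jobs["nodelist"].str.split(",")
jobs = jobs.explode("node")

# Attach a job_id to every sample taken on an allocated node
# within the job's [start_time, end_time] window.
merged = metrics.merge(jobs, on="node")
merged = merged[(merged["timestamp"] >= merged["start_time"]) &
                (merged["timestamp"] <= merged["end_time"])]

# Persist as parquet for the downstream analysis.
merged.to_parquet("perlmutter_job_metrics.parquet", index=False)
```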
We measure CPU utilization from cpu_id (CPU idle time among all cores in a node, expressed as a percentage) reported from vmstat through LDMS [1]; we then calculate CPU utilization (as a percentage) as 100 − cpu_id. GPU utilization (as a percentage) is read directly from DCGM reports [15]. Memory capacity utilization encompasses both the memory used by userspace applications and by the operating system. We use fb_free (free framebuffer memory) from DCGM to calculate GPU HBM2 utilization and mem_free (the amount of idle memory) from LDMS to calculate host DRAM capacity utilization. Memory capacity utilization (as a percentage) is calculated as

\[ MemUtil = \frac{MemTotal - MemFree}{MemTotal} \times 100, \]

where MemTotal, as described above, is 512 GB for CPU nodes, 256 GB for the host memory of GPU nodes, and 40 GB for each GPU HBM2, and MemFree is the unused memory of a node, which essentially shows how much more memory the job could have used.
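Concretely, these derived metrics reduce to a few arithmetic transformations of the raw counters. The sketch below assumes the merged table from the previous step, a hypothetical node_type column, and free-memory counters already converted to bytes; it is an illustration, not the exact pipeline code.

```python
GB = 1024**3

# CPU utilization (%): vmstat reports idle time, so utilization is the complement.
merged["cpu_util"] = 100.0 - merged["cpu_id"]

# Host DRAM utilization (%): MemUtil = (MemTotal - MemFree) / MemTotal * 100,
# with MemTotal = 512 GB on CPU nodes and 256 GB on GPU nodes.
mem_total = merged["node_type"].map({"cpu": 512 * GB, "gpu": 256 * GB})
merged["mem_util"] = (mem_total - merged["mem_free"]) / mem_total * 100.0

# GPU HBM2 utilization (%): each A100 has 40 GB; fb_free is the free framebuffer memory.
merged["hbm2_util"] = (40 * GB - merged["fb_free"]) / (40 * GB) * 100.0
```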
In order to understand the temporal and spatial imbalance of resource usage among jobs, we use the equations proposed in [19] to calculate the temporal imbalance factor (RI_temporal) and the spatial imbalance factor (RI_spatial). These factors quantify the imbalance in resource usage over time and across nodes, respectively. For a job that requests N nodes and runs for time T, with utilization U_{n,t} of resource r on node n at time t, the temporal imbalance factor is defined as:
\[ RI_{\mathrm{temporal}}(r) = \max_{1 \le n \le N}\left(1 - \frac{\sum_{t=0}^{T} U_{n,t}}{\sum_{t=0}^{T} \max_{0 \le t \le T}(U_{n,t})}\right) \tag{1} \]
Similarly, the spatial imbalance factor is defined as:
\[ RI_{\mathrm{spatial}}(r) = 1 - \frac{\sum_{n=1}^{N} \max_{0 \le t \le T}(U_{n,t})}{\sum_{n=1}^{N} \max_{0 \le t \le T,\, 1 \le n \le N}(U_{n,t})} \tag{2} \]
Both RI_temporal and RI_spatial are bounded within the range [0, 1]. Ideally, a job fully uses all resources on all allocated nodes across its lifetime, corresponding to a spatial and temporal factor of 0. A larger factor value indicates more variation in resource utilization over time or across nodes, i.e., more temporal/spatial imbalance. We exclude jobs with a runtime of less than 1 hour from our subsequent analysis, as such jobs are likely for testing or debugging purposes. Furthermore, since our sampling interval is 10 seconds, it is difficult to accurately capture peaks that last less than 10 seconds; as a result, we concentrate on analyzing the behavior of sustained workloads. Table 1 summarizes job-level statistics, in which each job's resource usage is represented by its maximum resource usage among all allocated nodes throughout its runtime.
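Equations (1) and (2) translate directly into a few array reductions. A minimal numpy sketch, assuming U is a 2-D array holding one resource's utilization for a single job, with shape (N nodes, T samples):

```python
import numpy as np

def ri_temporal(U):
    """Temporal imbalance factor, Eq. (1); U has shape (N nodes, T samples)."""
    U = np.asarray(U, dtype=float)
    peaks = U.max(axis=1)                    # per-node peak over the job's runtime
    # Sum_t U / Sum_t max reduces to mean/max; guard nodes whose peak is 0.
    ratios = np.divide(U.mean(axis=1), peaks,
                       out=np.ones_like(peaks), where=peaks > 0)
    return float(np.max(1.0 - ratios))       # the worst node sets the job's factor

def ri_spatial(U):
    """Spatial imbalance factor, Eq. (2); U has shape (N nodes, T samples)."""
    U = np.asarray(U, dtype=float)
    peaks = U.max(axis=1)                    # per-node peak over the job's runtime
    global_peak = peaks.max()                # peak across all nodes and times
    if global_peak == 0:
        return 0.0
    return float(1.0 - peaks.sum() / (len(peaks) * global_peak))
```

Both functions return 0 for a perfectly balanced job and approach 1 as the imbalance grows, matching the bounds stated above.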
Analysis Methods
To distill meaningful insights from our dataset we use Cumulative Distribution Functions (CDFs), Probability Density Functions (PDFs), and Pearson correlation coefficients. The CDF gives the probability that the variable takes a value less than or equal to x, for all values of x; the PDF gives the relative likelihood (density) of the variable at x. To evaluate the resource utilization of jobs, we analyze the maximum resource usage that occurred during each job's entire runtime, and we factor in the job's impact on the system by weighting the job's data points by the number of nodes allocated and the duration of the job. We then calculate the CDF and PDF of job-level metrics using these weighted data points. The Pearson correlation coefficient, a statistical tool for identifying potential relationships between two variables, is used to investigate the correlation between two characteristics. The correlation factor, or Pearson's r, ranges from −1.0 to 1.0; a positive value indicates a positive correlation, zero indicates no correlation, and a negative value indicates a negative correlation.
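Because each job contributes its node-hours as weight, the CDFs below are weighted empirical distributions rather than plain percentile curves. A small sketch of how such a weighted CDF can be evaluated, assuming hypothetical per-job arrays max_cpu_util and node_hours:

```python
import numpy as np

def weighted_cdf(values, weights):
    """Empirical CDF in which each job contributes weight proportional to its node-hours."""
    values, weights = np.asarray(values), np.asarray(weights)
    order = np.argsort(values)
    x = values[order]
    cdf = np.cumsum(weights[order]) / weights.sum()
    return x, cdf

# Example: fraction of CPU node-hours whose jobs peak at <= 50% CPU utilization.
x, cdf = weighted_cdf(max_cpu_util, node_hours)
print(np.interp(50.0, x, cdf))
```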
Results
In this section, we start with an overview of the job characteristics, including their size, duration, and the applications they represent. Then we use CDF and PDF plots to investigate the resource usage pattern across jobs, followed by the characterization of the temporal and spatial variability of jobs. Lastly, we assess the correlation between the different resource types assigned to each job.
Workloads Overview
We divide jobs into six groups by the number of allocated nodes and calculate the percentage of each group relative to the total number of jobs; the details are shown in Table 2. As shown, 68.10% of CPU jobs and 65.89% of GPU jobs request only one node, while large jobs that allocate more than 128 nodes account for only 0.40% and 0.30% of jobs on CPU and GPU nodes, respectively. Also, 40.90% of CPU jobs and 59.86% of GPU jobs execute for less than three hours (as aforementioned, jobs with less than one hour of runtime are discarded from the dataset). We also observe that about 88.86% of CPU jobs and 96.21% of GPU jobs execute for less than 12 hours, and only a few CPU jobs and no GPU jobs exceed 48 hours. This is largely a result of policy, since Perlmutter's regular queue allows a maximum of 12 hours; however, jobs using a special reservation can exceed this limit [13].

Next, we analyze the job names obtained from SLURM's sacct and estimate the corresponding applications through empirical analysis. Although this approach has limitations, such as the inability to identify jobs with undescriptive names such as "python" or "exec", it still offers useful information. Figure 2 shows that most node-hours on both CPU-only and GPU-accelerated nodes are consumed by a few recurring applications. The top four CPU-only applications account for 50% of node-hours, with ATLAS alone accounting for over a quarter. Over 600 CPU applications make up only 22% of the node-hours, using less than 2% each (not labeled on the pie chart). On GPU-accelerated nodes, the top 11 applications consume 75% of node-hours, while the other 400+ applications make up the remaining 25%. The top six GPU applications account for 58% of node-hours, with usage roughly evenly divided.

We further classify system workloads into three groups according to their maximum host memory capacity utilization. In particular, jobs using less than 25% of the total host memory capacity are categorized as low intensity, jobs that use 25-50% are considered moderate intensity, and those exceeding 50% are classified as high intensity [19]. Node-hours and the number of jobs can also be decomposed into these three categories, where node-hours is calculated by multiplying the total number of allocated nodes by the runtime (duration) of each job.
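This decomposition amounts to a cut on peak host-DRAM utilization followed by a grouped sum. A minimal pandas sketch, assuming a per-job table with hypothetical columns mem_max_util, nodes, and hours:

```python
import pandas as pd

# Per-job table; mem_max_util is the job's peak host-DRAM utilization (%).
jobs["node_hours"] = jobs["nodes"] * jobs["hours"]
jobs["mem_intensity"] = pd.cut(jobs["mem_max_util"], bins=[0, 25, 50, 100],
                               labels=["low", "moderate", "high"])

shares = jobs.groupby("mem_intensity", observed=True).agg(
    n_jobs=("node_hours", "count"),
    node_hours=("node_hours", "sum"),
)
shares["job_pct"] = 100 * shares["n_jobs"] / shares["n_jobs"].sum()
shares["node_hour_pct"] = 100 * shares["node_hours"] / shares["node_hours"].sum()
print(shares.round(1))
```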
As shown in Figure 3a, about 63% of jobs on CPU-only nodes have low memory capacity intensity. Although moderate and high memory intensity jobs make up 37% of the total CPU jobs, they consume about 54% of the total node-hours. This indicates that moderate and high memory intensity jobs are likely to use more nodes and/or run for a longer time. This observation holds true for GPU nodes, on which the 37% of memory-intensive jobs compose 58% of the total node-hours. In addition, we observe that even though the percentage of high memory intensity jobs on GPU nodes (17%) is less than that on CPU nodes (26%), the corresponding percentages of node-hours are close, indicating that high memory intensity GPU jobs consume more nodes and/or run for a longer time than high memory intensity CPU jobs. Observation: The analysis shows that on both CPU and GPU nodes, around two-thirds of jobs occupy only one node. GPU jobs have a higher proportion of short-lived jobs that run for less than three hours compared to CPU jobs. Additionally, jobs rarely allocate more than 128 nodes, which suggests that the majority of jobs can be accommodated within a single rack in the Perlmutter system. Furthermore, the analysis indicates that jobs that are intensive in host memory tend to consume more node-hours, despite representing a relatively small proportion of total jobs.
Resource Utilization
This subsection analyzes resource usage among jobs and compares the characteristics of CPU-only jobs and GPU-enabled jobs. We consider the maximum resource usage of a job across all allocated nodes and throughout its entire runtime to represent its resource utilization, because the maximum utilization must be accounted for when scheduling a job in a system. As jobs with larger sizes and longer durations have a greater impact on system resource utilization, and the system architecture is optimized for node-hours, we weight each job's data points by its node-hours.

CPU Utilization

Figure 4 shows the distribution of the maximum CPU utilization of CPU jobs and GPU jobs weighted by node-hours. As shown, 40.2% of CPU node-hours have at most 50% CPU utilization, and about 28.7% of CPU node-hours have a maximum CPU utilization of 50-55%. In addition, 24.4% of jobs reach over 95% CPU utilization, creating a spike at the end of the CDF line. Over one-third of CPU jobs utilize only up to 50% of the available CPU resources, which could potentially be attributed to Simultaneous Multi-Threading (SMT) in the Milan architecture. While SMT can benefit specific types of workloads, such as communication-bound or I/O-bound parallel applications, it may not improve performance for all applications and may even reduce it in some cases [21]. Consequently, users may choose to disable SMT, leaving half of the logical cores unused during runtime. Additionally, certain applications are not designed to use SMT at all, resulting in a reported utilization of only 50% in our analysis even with 100% compute core utilization.
In contrast to CPU jobs, GPU-enabled jobs exhibit a distinct distribution of CPU usage, with the majority of jobs concentrated in the 0-5% bin and only a small fraction of jobs utilizing the CPUs in full. We also observe that node-hours with high utilization of both CPU and GPU resources are rare, with only 2.47% of node-hours utilizing over 90% of these resources (not depicted). This is because the CPUs in GPU nodes are primarily tasked with data preprocessing, data retrieval, and loading computed data, while the bulk of the computational load is offloaded to the GPUs. Therefore, the utilization of the CPUs in GPU-enabled jobs is comparatively low, as their primary function is to support and facilitate the GPUs' heavy computational tasks.
Host DRAM Utilization

We plot the CDF and PDF of the maximum host memory utilization of job node-hours in Figure 5. To help visualize the distribution of memory usage, the red vertical lines on the X axis indicate the 25% and 50% thresholds that we previously used to classify jobs into three memory intensity groups. A considerable fraction of the jobs on both CPU and GPU nodes use between 5% and 25% of host memory capacity: 47.4% of all CPU jobs and 43.3% of all GPU jobs fall within this range. The distribution of memory utilization, like that of CPU utilization, displays spikes at the end of the CDF lines due to a small percentage of jobs (12.8% for CPU and 9.5% for GPU, respectively) that fully exhaust host memory capacity.
Our results indicate that a significant proportion of both CPU and GPU jobs, 64.3% and 62.8% respectively, use less than 50% of the available memory capacity. As a reminder, the available host memory capacity is 512 GB in CPU nodes and 256 GB in GPU nodes. While memory capacity is also not fully utilized in Cori [10], the higher memory capacity per node in Perlmutter exacerbates the challenge of fully utilizing the available memory capacity.
GPU Resources
The GPU utilization metric in DCGM indicates the percentage of time that GPU kernels are active during the sampling period, and it is reported per GPU instead of per node. Therefore, we analyze GPU utilization in terms of GPU-hours instead of node-hours. The left subfigure of Figure 6 displays the CDF plot of maximum GPU utilization, indicating that 50% of GPU jobs achieve a maximum GPU utilization of up to 67%, while 38.45% of GPU jobs reach a maximum GPU utilization of over 95%. To assess the idle time of GPUs allocated to jobs, we separate GPU utilization of zero from the other ranges in the PDF histogram plot. As shown in the green bar, approximately 15% of GPU-hours are fully idle.
Similarly, we measure the maximum GPU HBM2 capacity utilization for each allocated GPU during the runtime of each job. As shown in the right subfigure of Figure 6, HBM2 utilization is nearly evenly distributed from 0% to 100%, resulting in an almost linear CDF line. The green bar in the PDF plot shows that 10.6% of jobs use no HBM2 capacity, which is lower than the percentage of GPU idleness (15%). This finding is intriguing, as it indicates that even though some allocated GPUs are idle, their corresponding GPU memory is still utilized, possibly by other GPUs or for other purposes.
The GPU resources' idleness can be attributed to the current configuration of GPU-accelerated nodes, which are not allowed to be shared by jobs at the same time. As a result, each user has exclusive access to four GPUs per node, even if they require fewer resources. Sharing nodes may be enabled in the future, potentially leading to more efficient use of GPU resources.
Observation: After analyzing CPU and host DRAM utilization, we find that GPU node-hours consume fewer CPU and host memory resources than CPU node-hours, likely because the computation is offloaded to GPUs. Although most GPU-hours reach high GPU utilization rates, 15% of them have fully idle GPUs, and 10.6% of GPU-hours do not utilize HBM2 capacity, due to current configurations that do not allow jobs to share GPU nodes. Allowing GPU sharing could alleviate the idleness of GPU resources and increase their average utilization.

Figure 7: Temporal patterns illustrated with the memory capacity utilization metrics of randomly selected jobs in Perlmutter, one representative job for each of the three categories. Each color represents the memory capacity utilization (%) of each node assigned to the job over the job's runtime. The area plots at the bottom show the normalized metrics for the node that has the maximum temporal factor among nodes allocated to the job; the percentage of the blank area corresponds to the value of RI_temporal of a job. A larger blank area indicates more temporal imbalance.
Temporal Characteristics
Memory capacity utilization can become temporally imbalanced when a job does not utilize memory capacity evenly over time. Temporal imbalance is particularly common in applications that consist of phases requiring different memory capacities: a job may require significant amounts of memory during some phases while utilizing much less during others. We classify jobs into three patterns by the RI_temporal value of host DRAM utilization: constant, dynamic, and sporadic [19]. Jobs with RI_temporal lower than 0.2 are classified in the constant pattern, where memory utilization does not change significantly over time. Jobs with RI_temporal between 0.2 and 0.6 are in the dynamic pattern, where jobs have frequent and considerable memory utilization changes. The sporadic pattern is defined by RI_temporal larger than 0.6; in this pattern, jobs have infrequent, sporadic spikes of memory capacity usage that are higher than during the rest of their runtime. (A minimal threshold classifier is sketched after this paragraph group.)

Figure 7 illustrates the three memory utilization patterns constructed from our monitoring data. Each color in the scatter plot represents a different node allocated to the job. The constant pattern job shows a nearly constant memory capacity utilization of about 80% across all allocated nodes for its entire runtime, so the bottom area plot is almost fully covered. The dynamic pattern job also exhibits similar behavior across its allocated nodes, but due to variations over time, the shaded area has several bumps and dips, increasing the blank area. For the sporadic pattern job, the memory utilization readings of all nodes share the same temporal pattern, with sporadic spikes and low memory capacity usage between spikes; the blank area occupies most of the plot, indicating poor temporal balance.

The CDFs and PDFs of the host memory temporal imbalance factor of CPU jobs and GPU jobs are illustrated in Figure 8, in which two vertical red lines separate the jobs into the three temporal patterns. Overall, both CPU jobs and GPU jobs have good temporal balance: 55.3% of CPU jobs and 74.3% of GPU jobs belong to the constant pattern, i.e., their RI_temporal values are below 0.2. Jobs on CPU nodes have a higher percentage of dynamic patterns: 35.9% of CPU jobs have RI_temporal values between 0.2 and 0.6, while 24.9% of GPU jobs are in the dynamic pattern. On GPU nodes, we observe only very few jobs (0.8%) in the sporadic pattern, meaning that cases of host DRAM with severe temporal imbalance are rare.
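The pattern labels are simple threshold cuts on RI_temporal; a one-function sketch, assuming RI values computed as in the earlier sketch:

```python
def temporal_pattern(ri: float) -> str:
    """Classify a job by its host-DRAM RI_temporal (thresholds 0.2 and 0.6)."""
    if ri < 0.2:
        return "constant"
    elif ri <= 0.6:
        return "dynamic"
    return "sporadic"
```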
We further analyze the memory capacity utilization distribution of jobs in each temporal pattern; the results are shown in Figure 9a. We extract the maximum, the minimum, and the difference between the maximum and minimum memory capacity used by jobs in each category and present the distributions in box plots. The minimum memory used for all categories on the same node types is similar: about 25 GB and 19 GB on CPU and GPU nodes, respectively. 75% of jobs in the constant category on CPU nodes use less than 86 GB, while 75% of constant-category jobs on GPU nodes use less than 56 GB. As 55.3% of CPU jobs and 74.3% of GPU jobs are in the constant category, 41.5% of CPU jobs and 55.7% of GPU jobs leave 426 GB and 200 GB of the available capacity unused, respectively. The maximum memory used in the constant pattern is 150 GB on CPU nodes and 94 GB on GPU nodes, neither of which exceeds half of the memory capacity. Jobs using high memory capacity are only observed in the dynamic and sporadic patterns, where 75% of sporadic jobs use up to 429 GB on CPU nodes and 189 GB on GPU nodes, respectively.
Observation: Our analysis suggests that GPU nodes exhibit a greater proportion of jobs with temporal balance in host DRAM usage compared to CPU nodes. While over half of both CPU and GPU jobs fall under the category of temporal constant jobs, jobs with temporal imbalance, characterized by dynamic and sporadic patterns, generally require higher maximum memory capacity compared to constant pattern jobs. Furthermore, the distribution of host memory capacity usage among jobs with different temporal patterns reveals that memory capacity is not fully utilized for constant pattern jobs, whereas dynamic and sporadic pattern jobs may achieve high memory capacity utilization at some point during their runtime.
Spatial Characteristics
The job scheduler and resource manager of current HPC systems do not consider the varying resource requirements of individual tasks within a job, leading to spatial imbalances in resource utilization across nodes. One common type of spatial imbalance occurs when a job requires a significant amount of memory on a small number of nodes, while the other nodes use relatively little memory. The spatial imbalance of memory capacity quantifies this uneven usage of memory capacity across the nodes allocated to a job.
To characterize the spatial imbalance of jobs, we use Equation 2, presented in Section 3.2, to calculate the spatial factor RI_spatial of memory capacity usage for each job. Similar to the temporal factor, RI_spatial falls in the range [0, 1], and larger values represent higher spatial imbalance. Jobs are classified into one of three spatial patterns: (i) the convergent pattern, with RI_spatial less than 0.2; (ii) the scattered pattern, with RI_spatial between 0.2 and 0.6; and (iii) the deviational pattern, with RI_spatial larger than 0.6. As shown in the examples in Figure 10, a job that exhibits a convergent pattern has similar or identical memory capacity usage among all of its assigned nodes. A job with a scattered pattern shows diverse memory usage and different peak memory usage among its nodes. A spatial deviational pattern job has a similar memory usage pattern in most of its nodes but has one or several nodes that deviate from the rest. It is worth noting that low spatial imbalance does not imply low temporal imbalance: the spatial convergent pattern job shown in the example has several spikes in memory usage and is therefore a temporal sporadic pattern.

Figure 11: CDFs and PDFs of the spatial factor of host memory capacity utilization of jobs. (a) CPU jobs. (b) GPU jobs. The larger the value of the spatial factor, the more spatial imbalance.
We present the CDFs and PDFs of the job-wise host memory capacity spatial factor in Figure 11. Overall, 83.5% of CPU jobs and 88.9% of GPU jobs are in the convergent pattern, and very few jobs are in the deviational pattern. Because jobs that allocate a single node always have a spatial imbalance factor of zero, if we include single-node jobs, the overall memory spatial balance is even better: 94.7% for CPU jobs and 96.2% for GPU jobs.
We combine the host memory spatial pattern with the host memory capacity usage behavior of each job and plot the distribution of memory capacity utilization by spatial pattern; the results are shown in Figure 9b. Similar to the distribution over temporal patterns, we use the maximum, minimum, and difference of job memory usage to evaluate the memory utilization imbalance. Spatial convergent jobs have relatively low memory usage. As shown in the green box plots, 75% of spatial convergent jobs (upper quartile) use less than 254 GB on CPU nodes and 95 GB on GPU nodes. Given that spatial convergent jobs account for over 94% of total jobs, over 70% of jobs leave 258 GB and 161 GB of memory capacity unused on CPU and GPU nodes, respectively. Memory imbalance, i.e., the difference between the maximum and minimum memory capacity usage of a job (red box plots), is also lowest in convergent pattern jobs. For spatial scattered jobs on CPU nodes, even though they are a small portion of the total jobs, the memory difference spans a large range: from 115 GB at the 25th percentile to 426 GB at the 75th percentile. Spatial deviational CPU jobs have a shorter span in memory imbalance compared to GPU jobs; it ranges only from 286 GB to 350 GB at the lower and upper quartiles, respectively.

Figure 12: Correlation of job node-hours, maximum memory capacity used, temporal, and spatial factors. (a) CPU jobs. (b) GPU jobs.
Observation: Our analysis shows that a significant number of CPU and GPU jobs on Perlmutter have a convergent pattern of spatial balance for host memory capacity usage across allocated nodes. Even after eliminating single-node jobs, the proportion of jobs with a convergent spatial pattern remains high, suggesting that Perlmutter's jobs generally have good spatial balance. However, jobs with scattered and deviational spatial patterns, albeit fewer in number, tend to consume more memory capacity in some allocated nodes, leading to uneven memory capacity utilization across nodes and some nodes exhibiting low memory capacity utilization.
Correlations
We conduct an analysis of the relationships between various job characteristics on Perlmutter, including job size and duration (measured as node-hours), maximum CPU and host memory capacity utilization, and the temporal and spatial factors. The results are presented as a correlation matrix in Figure 12. Our findings show that for both CPU and GPU nodes, job node-hours are positively correlated with the spatial imbalance factor (ri_spatial), suggesting that larger jobs with longer runtimes are more likely to experience spatial imbalance. Maximum CPU utilization is strongly positively correlated with host memory capacity utilization and the temporal factor in CPU jobs, while the correlation is weak in GPU jobs. Moreover, the temporal imbalance factor (ri_temporal) is positively correlated with maximum memory capacity utilization (mem_max), with correlation coefficients (r-values) of 0.75 for CPU jobs and 0.59 for GPU jobs. These strong positive correlations suggest that jobs requiring a significant amount of memory are more likely to experience temporal memory imbalance, which is consistent with our previous observations. Finally, we find a slight positive correlation (r-value of 0.16 for CPU jobs and 0.29 for GPU jobs) between the spatial and temporal imbalance factors, indicating that spatially imbalanced jobs are also more likely to experience temporal imbalance.
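The matrix itself is a direct application of pairwise Pearson correlation over the per-job features. A sketch, assuming a per-job table with the hypothetical column names used above:

```python
# Pairwise Pearson r in [-1, 1] for every pair of per-job features.
features = jobs[["node_hours", "cpu_max", "mem_max", "ri_temporal", "ri_spatial"]]
corr = features.corr(method="pearson")
print(corr.round(2))
```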
Discussion and Conclusion
In light of the increasing demands of HPC and the varied resource requirements of open-science workloads, there is a risk of not fully utilizing expensive resources. To better understand this issue, we conducted a comprehensive analysis of memory, CPU, and GPU utilization in NERSC's Perlmutter. Our analysis spanned one month and yielded important insights. Specifically, we found that only a quarter of CPU node-hours achieved high CPU utilization, and CPUs on GPU-accelerated nodes typically ran at only 0-5% utilization. While a significant proportion of GPU-hours demonstrated high GPU utilization (over 95%), more than 15% of GPU-hours had idle GPUs. In addition, both CPU host memory and GPU HBM2 were not fully utilized for the majority of node-hours. Interestingly, jobs with temporal balance consistently did not fully utilize memory capacity, while those with temporal imbalance had varying idle memory capacity over time. Finally, we observed that jobs with spatial imbalance did not have high memory capacity utilization on all allocated nodes.

Insufficient resource utilization can be attributed to various application characteristics, and similar issues have been observed in other HPC systems. Although simultaneous multi-threading can potentially improve CPU utilization and mitigate stalls resulting from cache misses, it may not be suitable for all applications. GPUs, being a new compute resource for NERSC users, may currently be underutilized because users and applications are still adapting to the new system, and the current configurations are not yet optimized to support GPU node sharing.

It is also important to note that in most systems, various parameters such as memory bandwidth and capacity are interdependent. For instance, the number and type of memory modules significantly impact memory bandwidth and capacity. Therefore, when designing a system, it may be challenging to fully utilize every parameter while optimizing others. This may result in some resources being underutilized in order to improve the overall performance of the system; not fully utilizing system resources can thus be an intentional trade-off in the design of HPC systems.
Our study provides valuable insights for system operators to understand and monitor resource utilization patterns in HPC workloads. However, the scope of our analysis was limited by the availability of monitoring data, which did not include information on network and memory bandwidth as well as file system statistics. Despite this limitation, our findings can help system operators identify areas where resources are not fully utilized and optimize system configuration.
Our analysis also reveals several opportunities for future research. For instance, given that 64% of jobs use only half or less of the on-node host DRAM capacity, it is worth exploring the possibility of disaggregating the host memory and using a remote memory pool. This remote pool can be local to a rack, group of racks, or the entire system. Our job size analysis indicates that most jobs can be accommodated within the compute resources provided by a single rack, suggesting that rack-level disaggregation can fulfill the requirements of most Perlmutter jobs if they are placed in a single rack. Furthermore, a disaggregated system could consider temporal and spatial characteristics when scheduling jobs since high memory utilization is often observed in memory-unbalanced jobs. Such jobs can be given priority for using disaggregated memory.
Another promising area for improving resource utilization is to reevaluate node sharing for specific applications with compatible temporal and spatial characteristics. One of the main challenges in job co-allocation is the potential for shared resources, such as memory, to become saturated at high core counts and significantly degrade job performance. However, our analysis reveals that both CPU and memory resources are not fully utilized, indicating that there may be room for co-allocation without negatively impacting performance. The observation that memory-balanced jobs typically consume relatively low memory capacity suggests that it may be possible to co-locate jobs with memory-balanced jobs to reduce the probability of contention for memory capacity. By optimizing resource allocation and reducing the likelihood of resource contention, these approaches can help maximize system efficiency and performance.
Figure 2: Decomposition of node-hours by applications. Infrequent applications are not labeled. (a) CPU-only nodes. (b) GPU-accelerated nodes.

Figure 3: Node-hours and job counts by host memory capacity intensity (utilization). (a) CPU-only jobs. (b) GPU-accelerated jobs.

Figure 4: Maximum CPU utilization of CPU node-hours (left) and GPU node-hours (right).

Figure 5: Maximum host memory capacity utilization of CPU node-hours (left) and GPU node-hours (right).

Figure 6: Maximum GPU (left) and HBM2 capacity (right) utilization of GPU-hours.

Figure 8: CDFs and PDFs of the temporal factor of host memory capacity utilization across nodes. The larger the value of the temporal factor, the more temporal imbalance.

Figure 9: Host DRAM distribution by temporal and spatial categories. (a) Temporal categories. (b) Spatial categories. The left portion of each subfigure represents CPU jobs and the right portion GPU jobs.

Figure 10: Spatial patterns illustrated with the memory capacity utilization metrics of randomly selected jobs in Perlmutter, one representative job for each of the three categories. Each color represents the memory utilization (%) of a different node allocated to each job.
Table 1: Perlmutter measured data summary. Each job's resource utilization is represented by its peak usage.

                       |  Statistics of all jobs           |  Statistics of jobs ≥ 1h
Metric                 | Median | Mean  | Max   | Std Dev  | Median | Mean  | Max   | Std Dev
CPU Jobs (21.75% of CPU jobs ≥ 1h)
Allocated nodes        | 1      | 6.51  | 1713  | 37.83    | 1      | 4.84  | 1477  | 25.43
Job duration (hours)   | 0.16   | 1.40  | 90.09 | 3.21     | 4.19   | 5.825 | 90.09 | 4.73
CPU util (%)           | 35.0   | 39.98 | 100.0 | 34.60    | 51.0   | 56.68 | 100.0 | 35.89
DRAM util (%)          | 13.29  | 22.79 | 98.62 | 23.65    | 18.61  | 33.69 | 98.62 | 30.88
GPU Jobs (23.42% of GPU jobs ≥ 1h)
Allocated nodes        | 1      | 4.66  | 1024  | 27.71    | 1      | 5.88  | 512   | 23.33
Job duration (hours)   | 0.30   | 1.14  | 13.76 | 2.42     | 2.2    | 4.12  | 13.76 | 3.67
Host CPU util (%)      | 4.0    | 19.60 | 100.0 | 23.53    | 4.0    | 18.00 | 100.0 | 24.81
Host DRAM util (%)     | 17.57  | 29.76 | 98.29 | 12.51    | 18.04  | 28.24 | 98.29 | 20.94
GPU util (%)           | 96.0   | 71.08 | 100.0 | 40.07    | 100.0  | 83.73 | 100.0 | 30.45
GPU HBM2 util (%)      | 16.28  | 34.07 | 100.0 | 37.49    | 18.88  | 40.23 | 100.0 | 36.33
Table 2: Job size and duration. Jobs shorter than one hour are excluded.

Job Size (Nodes)          | 1      | (1, 4] | (4, 16] | (16, 64] | (64, 128] | > 128
CPU Jobs (Total: 21706)   | 14783  | 2486   | 3738    | 550      | 62        | 87
Percentage (%)            | 68.10  | 11.45  | 17.22   | 2.54     | 0.29      | 0.40
GPU Jobs (Total: 24217)   | 15924  | 5358   | 1837    | 706      | 318       | 74
Percentage (%)            | 65.89  | 22.04  | 7.56    | 2.90     | 1.31      | 0.30

Job Duration (Hours)      | [1, 3] | (3, 6] | (6, 12] | (12, 24] | (24, 48]  | > 48
CPU Jobs (Total: 21706)   | 8879   | 4109   | 6300    | 2393     | 15        | 10
Percentage (%)            | 40.90  | 18.94  | 29.02   | 11.02    | 0.07      | 0.05
GPU Jobs (Total: 24217)   | 14495  | 3888   | 4916    | 918      | 0         | 0
Percentage (%)            | 59.86  | 16.05  | 20.30   | 3.79     | 0         | 0
Acknowledgment. We would like to express our gratitude to the anonymous reviewers for their insightful comments and suggestions. We also thank Brian Austin, Nick Wright, Richard Gerber, Katie Antypas, and the rest of the NERSC team for their feedback. This research used resources of the National Energy Research Scientific Computing Center (NERSC).
References

1. Agelastos, A., Allan, B., Brandt, J., Cassella, P., Enos, J., Fullop, J., Gentile, A., Monk, S., Naksinehaboon, N., Ogden, J., et al.: The lightweight distributed metric service: a scalable infrastructure for continuous monitoring of large scale computing systems and applications. In: SC'14: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis. pp. 154-165. IEEE (2014)
2. Das, A., Mueller, F., Siegel, C., Vishnu, A.: Desh: deep learning for system health prediction of lead times to failure in HPC. In: Proceedings of the 27th International Symposium on High-Performance Parallel and Distributed Computing. pp. 40-51 (2018)
3. Di, S., Gupta, R., Snir, M., Pershey, E., Cappello, F.: LogAider: A tool for mining potential correlations of HPC log events. In: 2017 17th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID). pp. 442-451. IEEE (2017)
4. Gil, Y., Greaves, M., Hendler, J., Hirsh, H.: Amplify scientific discovery with artificial intelligence. Science 346(6206), 171-172 (2014)
5. Gupta, S., Patel, T., Engelmann, C., Tiwari, D.: Failures in large scale systems: long-term measurement, analysis, and implications. In: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis. pp. 1-12 (2017)
6. Ji, X., Wang, C., El-Sayed, N., Ma, X., Kim, Y., Vazhkudai, S.S., Xue, W., Sanchez, D.: Understanding object-level memory access patterns across the spectrum. In: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis. pp. 1-12 (2017)
7. Kindratenko, V., Trancoso, P.: Trends in high-performance computing. Computing in Science & Engineering 13(3), 92-95 (2011)
8. Li, J., Ali, G., Nguyen, N., Hass, J., Sill, A., Dang, T., Chen, Y.: MonSTer: an out-of-the-box monitoring tool for high performance computing systems. In: 2020 IEEE International Conference on Cluster Computing (CLUSTER). pp. 119-129. IEEE (2020)
9. Madireddy, S., Balaprakash, P., Carns, P., Latham, R., Ross, R., Snyder, S., Wild, S.M.: Analysis and correlation of application I/O performance and system-wide I/O activity. In: 2017 International Conference on Networking, Architecture, and Storage (NAS). pp. 1-10. IEEE (2017)
10. Michelogiannakis, G., Klenk, B., Cook, B., Teh, M.Y., Glick, M., Dennison, L., Bergman, K., Shalf, J.: A case for intra-rack resource disaggregation in HPC. ACM Transactions on Architecture and Code Optimization (TACO) 19(2), 1-26 (2022)
11. NERSC: NERSC-10 Workload Analysis (Data from 2018) (2018), https://portal.nersc.gov/project/m888/nersc10/workload/N10_Workload_Analysis.latest.pdf
12. NERSC: Cori (2022), https://www.nersc.gov/systems/cori/
13. NERSC: Perlmutter (2022), https://www.nersc.gov/systems/perlmutter/
14. NVIDIA: NVIDIA DCGM (2022), https://developer.nvidia.com/dcgm
15. NVIDIA: NVIDIA DCGM Exporter (2022), https://github.com/NVIDIA/dcgm-exporter/blob/main/etc/dcp-metrics-included.csv
16. Oliner, A., Stearley, J.: What supercomputers say: A study of five system logs. In: 37th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN'07). pp. 575-584. IEEE (2007)
17. Panwar, G., Zhang, D., Pang, Y., Dahshan, M., DeBardeleben, N., Ravindran, B., Jian, X.: Quantifying memory underutilization in HPC systems and using it to improve performance via architecture support. In: Proceedings of the 52nd Annual IEEE/ACM International Symposium on Microarchitecture. pp. 821-835 (2019)
18. Patel, T., Byna, S., Lockwood, G.K., Tiwari, D.: Revisiting I/O behavior in large-scale storage systems: The expected and the unexpected. In: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis. pp. 1-13 (2019)
19. Peng, I., Karlin, I., Gokhale, M., Shoga, K., Legendre, M., Gamblin, T.: A holistic view of memory utilization on HPC systems: Current and future trends. In: The International Symposium on Memory Systems. pp. 1-11 (2021)
20. Peng, I., Pearce, R., Gokhale, M.: On the memory underutilization: Exploring disaggregated memory on HPC systems. In: 2020 IEEE 32nd International Symposium on Computer Architecture and High Performance Computing (SBAC-PAD). pp. 183-190. IEEE (2020)
21. Tau Leng, R.A., Hsieh, J., Mashayekhi, V., Rooholamini, R.: An empirical study of hyper-threading in high performance computing clusters. Linux HPC Revolution 45 (2002)
22. Thomas, R., Stephey, L., Greiner, A., Cook, B.: Monitoring scientific python usage on a supercomputer (2021)
23. Turner, A., McIntosh-Smith, S.: A survey of application memory usage on a national supercomputer: an analysis of memory requirements on ARCHER. In: International Workshop on Performance Modeling, Benchmarking and Simulation of High Performance Computer Systems. pp. 250-260. Springer (2018)
24. Wang, F., Oral, S., Sen, S., Imam, N.: Learning from five-year resource-utilization data of Titan system. In: 2019 IEEE International Conference on Cluster Computing (CLUSTER). pp. 1-6. IEEE (2019)
25. Xie, B., Huang, Y., Chase, J.S., Choi, J.Y., Klasky, S., Lofstead, J., Oral, S.: Predicting output performance of a petascale supercomputer. In: Proceedings of the 26th International Symposium on High-Performance Parallel and Distributed Computing. pp. 181-192 (2017)
26. Zheng, Z., Yu, L., Tang, W., Lan, Z., Gupta, R., Desai, N., Coghlan, S., Buettner, D.: Co-analysis of RAS log and job log on Blue Gene/P. In: 2011 IEEE International Parallel & Distributed Processing Symposium. pp. 840-851. IEEE (2011)
| [
"https://github.com/NVIDIA/"
] |
[
"The McDonald Accelerating Stars Survey (MASS): White Dwarf Companions Accelerating the Sun-like Stars 12 Psc and HD 159062",
"The McDonald Accelerating Stars Survey (MASS): White Dwarf Companions Accelerating the Sun-like Stars 12 Psc and HD 159062"
] | [
"Brendan P Bowler \nDepartment of Astronomy\nThe University of Texas at Austin\n78712AustinTXUSA\n",
"William D Cochran \nCenter for Planetary Systems Habitability and McDonald Observatory\nThe University of Texas at Austin\n78712AustinTXUSA\n",
"Michael Endl \nDepartment of Astronomy\nMcDonald Observatory\nThe University of Texas at Austin\n78712AustinTXUSA\n",
"Kyle Franson \nDepartment of Astronomy\nThe University of Texas at Austin\n78712AustinTXUSA\n",
"Timothy D Brandt \nDepartment of Physics\nUniversity of California\n93106Santa Barbara, Santa BarbaraCAUSA\n",
"Trent J Dupuy \nGemini Observatory\nNorthern Operations Center\n670 N. Aohoku Place96720HiloHIUSA\n",
"Phillip J Macqueen \nDepartment of Astronomy\nMcDonald Observatory\nThe University of Texas at Austin\n78712AustinTXUSA\n",
"Kaitlin M Kratter \nDepartment of Astronomy\nUniversity of Arizona\n85721TucsonAZUSA\n",
"Dimitri Mawet \nDepartment of Astronomy\nCalifornia Institute of Technology\n91125PasadenaCAUSA\n",
"Garreth Ruane \nDepartment of Astronomy\nCalifornia Institute of Technology\n91125PasadenaCAUSA\n"
] | [
"Department of Astronomy\nThe University of Texas at Austin\n78712AustinTXUSA",
"Center for Planetary Systems Habitability and McDonald Observatory\nThe University of Texas at Austin\n78712AustinTXUSA",
"Department of Astronomy\nMcDonald Observatory\nThe University of Texas at Austin\n78712AustinTXUSA",
"Department of Astronomy\nThe University of Texas at Austin\n78712AustinTXUSA",
"Department of Physics\nUniversity of California\n93106Santa Barbara, Santa BarbaraCAUSA",
"Gemini Observatory\nNorthern Operations Center\n670 N. Aohoku Place96720HiloHIUSA",
"Department of Astronomy\nMcDonald Observatory\nThe University of Texas at Austin\n78712AustinTXUSA",
"Department of Astronomy\nUniversity of Arizona\n85721TucsonAZUSA",
"Department of Astronomy\nCalifornia Institute of Technology\n91125PasadenaCAUSA",
"Department of Astronomy\nCalifornia Institute of Technology\n91125PasadenaCAUSA"
] | [] | We present the discovery of a white dwarf companion to the G1 V star 12 Psc found as part of a Keck adaptive optics imaging survey of long-term accelerating stars from the McDonald Observatory Planet Search Program. Twenty years of precise radial-velocity monitoring of 12 Psc with the Tull Spectrograph at the Harlan J. Smith telescope reveals a moderate radial acceleration (≈10 m s −1 yr −1 ), which together with relative astrometry from Keck/NIRC2 and the astrometric acceleration between Hipparcos and Gaia DR2 yields a dynamical mass of M B = 0.605 +0.021 −0.022 M for 12 Psc B, a semi-major axis of 40 +2 −4 AU, and an eccentricity of 0.84±0.08. We also report an updated orbit fit of the white dwarf companion to the metal-poor (but barium-rich) G9 V dwarf HD 159062 based on new radial velocity observations from the High-Resolution Spectrograph at the Hobby-Eberly Telescope and astrometry from Keck/NIRC2. A joint fit of the available relative astrometry, radial velocities, and tangential astrometric acceleration yields a dynamical mass of M B = 0.609 +0.010 −0.011 M for HD 159062 B, a semi-major axis of 60 +5 −7 AU, and preference for circular orbits (e<0.42 at 95% confidence). 12 Psc B and HD 159062 B join a small list of resolved "Sirius-like" benchmark white dwarfs with precise dynamical mass measurements which serve as valuable tests of white dwarf mass-radius cooling models and probes of AGB wind accretion onto their main-sequence companions. | 10.3847/1538-3881/abd243 | [
"https://arxiv.org/pdf/2012.04847v1.pdf"
] | 228,063,985 | 2012.04847 | 4df1fb92a3cc682db2d4d31f7ac890a97f3948c3 |
The McDonald Accelerating Stars Survey (MASS): White Dwarf Companions Accelerating the Sun-like Stars 12 Psc and HD 159062
December 10, 2020
Brendan P Bowler
Department of Astronomy
The University of Texas at Austin
78712AustinTXUSA
William D Cochran
Center for Planetary Systems Habitability and McDonald Observatory
The University of Texas at Austin
78712AustinTXUSA
Michael Endl
Department of Astronomy
McDonald Observatory
The University of Texas at Austin
78712AustinTXUSA
Kyle Franson
Department of Astronomy
The University of Texas at Austin
78712AustinTXUSA
Timothy D Brandt
Department of Physics
University of California
93106Santa Barbara, Santa BarbaraCAUSA
Trent J Dupuy
Gemini Observatory
Northern Operations Center
670 N. Aohoku Place96720HiloHIUSA
Phillip J Macqueen
Department of Astronomy
McDonald Observatory
The University of Texas at Austin
78712AustinTXUSA
Kaitlin M Kratter
Department of Astronomy
University of Arizona
85721TucsonAZUSA
Dimitri Mawet
Department of Astronomy
California Institute of Technology
91125PasadenaCAUSA
Garreth Ruane
Department of Astronomy
California Institute of Technology
91125PasadenaCAUSA
The McDonald Accelerating Stars Survey (MASS): White Dwarf Companions Accelerating the Sun-like Stars 12 Psc and HD 159062
December 10, 2020. Draft version, typeset using LaTeX twocolumn style in AASTeX62.
Keywords: White dwarf stars, direct imaging, binary stars, astrometric binary stars, radial velocity, orbit determination
We present the discovery of a white dwarf companion to the G1 V star 12 Psc found as part of a Keck adaptive optics imaging survey of long-term accelerating stars from the McDonald Observatory Planet Search Program. Twenty years of precise radial-velocity monitoring of 12 Psc with the Tull Spectrograph at the Harlan J. Smith telescope reveals a moderate radial acceleration (≈10 m s −1 yr −1 ), which together with relative astrometry from Keck/NIRC2 and the astrometric acceleration between Hipparcos and Gaia DR2 yields a dynamical mass of M B = 0.605 +0.021 −0.022 M for 12 Psc B, a semi-major axis of 40 +2 −4 AU, and an eccentricity of 0.84±0.08. We also report an updated orbit fit of the white dwarf companion to the metal-poor (but barium-rich) G9 V dwarf HD 159062 based on new radial velocity observations from the High-Resolution Spectrograph at the Hobby-Eberly Telescope and astrometry from Keck/NIRC2. A joint fit of the available relative astrometry, radial velocities, and tangential astrometric acceleration yields a dynamical mass of M B = 0.609 +0.010 −0.011 M for HD 159062 B, a semi-major axis of 60 +5 −7 AU, and preference for circular orbits (e<0.42 at 95% confidence). 12 Psc B and HD 159062 B join a small list of resolved "Sirius-like" benchmark white dwarfs with precise dynamical mass measurements which serve as valuable tests of white dwarf mass-radius cooling models and probes of AGB wind accretion onto their main-sequence companions.
INTRODUCTION
Dynamical masses represent anchor points of stellar astronomy. Direct mass measurements are important to calibrate models of stellar and substellar evolution, especially during phases in which physical properties change significantly with time, for example throughout the pre-main sequence; along the evolved subgiant and giant branches; and as white dwarfs, brown dwarfs, and giant planets cool and fade over time (e.g., Hillenbrand & White 2004; Simon et al. 2019; Konopacky et al. 2010; Bond et al. 2017a; Parsons et al. 2017; Dupuy & Liu 2017; Snellen & Brown 2018; Brandt et al. 2019a). Masses are traditionally determined with absolute astrometry of visual binaries or radial-velocity (RV) monitoring of either eclipsing or visual binaries. Other approaches include modeling Keplerian rotation of resolved protoplanetary disks, gravitational lensing, and transit-timing variations in the case of close-in planets (see Serenelli et al. 2020 for a recent review).
It is especially challenging to measure dynamical masses of non-transiting binaries when one component is faint, as is the case of white dwarf, brown dwarf, or giant planet companions to stars. With high-contrast adaptive optics (AO) imaging, there is a pragmatic trade-off between separation and contrast: short period companions reveal their orbits on faster timescales but are more challenging to detect, whereas more distant companions are easier to image but orbit more slowly. Similarly, RV precision, stellar activity, and time baseline of the observations compete when measuring a radial acceleration induced on the star by the companion. The optimal region in which radial reflex accelerations can be measured and faint companions can be imaged with current facilities is ∼5-100 AU. Most of the known benchmark white dwarf, brown dwarf, and giant planet companions fall in this range of orbital distances (see, e.g., Table 2 of Bowler 2016).
One of the most successful means of identifying these faint "degenerate" companions (whose pressure support predominantly originates from electron degeneracy) with direct imaging has been by using radial accelerations on their host stars, which can act as "dynamical beacons" that betray the presence of a distant companion. Long-baseline RV surveys are especially well-suited for this task, such as the California Planet Survey (Howard et al. 2010), Lick-Carnegie Exoplanet Survey (Butler et al. 2017), McDonald Observatory Planet Search (Cochran & Hatzes 1993), Lick Planet Search (Fischer et al. 2014), Anglo-Australian Planet Search (Tinney et al. 2001), and CORALIE survey for extrasolar planets (Queloz et al. 2000). With baselines spanning several decades and sample sizes of thousands of targets, these programs have facilitated the discovery and characterization of a growing list of substellar companions (HR 7672 B, Liu et al. 2002; HD 19467 B, Crepp et al. 2014; HD 4747 B, Crepp et al. 2016; HD 4113 C, Cheetham et al. 2018; Gl 758 B, Thalmann et al. 2009; HD 13724 B, Rickman et al. 2020; HD 72946 B, Maire et al. 2020; HD 19467 B, Maire et al. 2020; Gl 229 B, Nakajima et al. 1995; Brandt et al. 2019b) and white dwarf companions (Gl 86 B, Els et al. 2001, Mugrauer & Neuhäuser 2005; HD 8049 B, Zurlo et al. 2013; HD 114174 B, Crepp et al. 2013; HD 11112 B, Rodigas et al. 2016; HD 169889 B, Crepp et al. 2018; HD 159062 B, Hirsch et al. 2019) with direct imaging. Only a handful of these degenerate companions have dynamically measured masses, although recent efforts to determine astrometric accelerations on their host stars using Hipparcos and Gaia are increasing these numbers (e.g., Calissendorff & Janson 2018; Brandt et al. 2019a; Dupuy et al. 2019).
To find new benchmark companions and measure their dynamical masses, we launched the McDonald Accelerating Stars Survey (MASS), a high-contrast imaging program targeting stars with radial accelerations based on RV planet search programs at McDonald Observatory. The McDonald Observatory Planet Search began in 1987 at the 2.7-m Harlan J. Smith Telescope and is among the oldest radial velocity planet surveys (Cochran & Hatzes 1993). The most recent phase of the survey using the Tull Spectrograph commenced in 1998 and continues today. In addition to discoveries of giant planets spanning orbital periods of a few days to over ten years (e.g., Cochran et al. 1997; Hatzes et al. 2003; Robertson et al. 2012; Endl et al. 2016), many shallow long-term accelerations have been identified over the past three decades. Accelerating stars in our sample also draw from a planet search around 145 metal-poor stars using the 9.2-m Hobby-Eberly Telescope's High-Resolution Spectrograph (HRS). This program operated from 2008 to 2013 and, like the McDonald Observatory Planet Search, identified both planets and longer-term radial accelerations (Cochran & Endl 2008).
In Bowler et al. (2018) we presented an updated orbit and mass measurement of the late-T dwarf Gl 758 B as part of this program based on new imaging data and RVs from McDonald Observatory, Keck Observatory, and the Automated Planet Finder. The mass of Gl 758 B was subsequently refined in Brandt et al. (2019a) by taking into account the proper motion difference between Hipparcos and Gaia. Here we present the discovery and dynamical mass measurement of a faint white dwarf companion to the Sun-like star 12 Psc based on a long-term RV trend of its host star from the McDonald Observatory Planet Search. In addition, we present an updated orbit and mass measurement of HD 159062 B, a white dwarf companion to an accelerating G9 V star recently discovered by Hirsch et al. (2019) and independently identified in our program using radial velocities from HRS. These objects join only a handful of other resolved white dwarf companions with dynamical mass measurements.
This paper is organized as follows. In Section 2 we provide an overview of the properties of 12 Psc and HD 159062. Section 3 describes the RV and imaging observations of these systems from McDonald Observatory and Keck Observatory. The orbit fits and dynamical mass measurements for both companions are detailed in Section 4. Finally, we discuss the implications of the mass measurements for the evolutionary history of the system in Section 5.
2. OVERVIEW OF 12 PSC AND HD 159062
12 Psc (=HD 221146, HIP 115951) is a bright (V = 6.9 mag) G1 V dwarf (Gray et al. 2006) located at a parallactic distance of 36.23 ± 0.06 pc (Gaia Collaboration et al. 2018). Spectroscopy and isochrone fitting imply a slightly more massive, older, and more metal-rich analog of the Sun. For example, Soto & Jenkins (2018) find an age of 5.3 +1.1/−1.0 Gyr, a mass of 1.11 ± 0.05 M⊙, a metallicity of [Fe/H] = 0.13 ± 0.10 dex, and an effective temperature of 5950 ± 50 K. This is in good agreement with other recent analyses from Tsantaki et al. (2013), Marsden et al. (2014), and Aguilera-Gómez et al. (2018). The old age is bolstered by the low activity level, with log R′_HK values ranging from −5.06 dex to −4.86 dex (e.g., Isaacson & Fischer 2010; Murgas et al. 2013; Saikia et al. 2018). A summary of the physical, photometric, and kinematic properties of 12 Psc can be found in Table 1.

HD 159062 is an old, metal-poor, main-sequence G9 V star (Gray et al. 2003) located at a distance of 21.7 pc (Gaia Collaboration et al. 2018). Hirsch et al. (2019) derive a mass of 0.76 ± 0.03 M⊙ using spectroscopically derived physical properties and stellar isochrones. A wide range of ages have been determined for HD 159062 in the literature: Isaacson & Fischer (2010) and Hirsch et al. (2019) find activity-based ages of ≈6 Gyr and ≈7 Gyr using R′_HK values, while typical isochrone-based ages range from 9.2 ± 3.5 Gyr from Luck (2017) to 13.0 +1.4/−2.4 Gyr from Brewer et al. (2016). Brewer & Carney (2006) measure a metallicity of [Fe/H] = −0.50 dex and find that HD 159062 has an 88% probability of belonging to the thick disk based on its kinematics. HD 159062 also exhibits an enhancement of α-capture elements such as [Mg/Fe], [Si/Fe], and [Ca/Fe], further supporting membership in the thick disk. The low metallicity, enhanced [α/Fe] abundances, and thick-disk kinematics point to an older age for this system.

Brewer & Carney (2006) also note that HD 159062 exhibits substantially enhanced s-process elements and suggest this could have been caused by mass transfer from an evolved AGB companion. This scenario is supported by Fuhrmann et al. (2017), who find an anomalously high barium abundance and conclude that HD 159062 may harbor a white dwarf companion. This prediction was realized with the discovery of HD 159062 B by Hirsch et al. (2019) using a long-baseline RV trend from Keck/HIRES and follow-up adaptive optics imaging. They determine a dynamical mass of 0.65 +0.12/−0.04 M⊙ for HD 159062 B, which was refined to 0.617 +0.013/−0.012 M⊙ by Brandt et al. (submitted) after adding in the astrometric acceleration induced on the host star using Hipparcos and Gaia (see Section 4.4).

3. OBSERVATIONS

3.1. Radial Velocities

3.1.1. Harlan J. Smith Telescope/Tull Spectrograph Radial Velocities of 12 Psc

50 RV measurements of 12 Psc were obtained with the Tull Coudé spectrograph (Tull et al. 1995) at McDonald Observatory's 2.7-m Harlan J. Smith telescope between 2001 and 2020. All observations used the 1.″2 slit, resulting in a resolving power of R ≡ λ/∆λ ≈ 60,000. A temperature-stabilized gas cell containing molecular iodine vapor (I2) is mounted in the light path before the slit entrance, enabling precise RV measurements with respect to an iodine-free template following the description in Endl et al. (2000). RVs are subsequently corrected for Earth's barycentric motion as well as the small secular acceleration of 12 Psc (0.000582 m s^−1 yr^−1). The time of observation is corrected to the barycentric dynamical time as observed at the solar system barycenter. Observations starting in 2009 take into account the flux-weighted barycentric correction of each observation using an exposure meter. The RVs are shown in Figure 1 and are listed in Table 2. The median measurement uncertainty is 5.1 m s^−1. 12 Psc shows a constant acceleration away from the Sun with no obvious signs of curvature, indicating that a companion orbits this star with a period substantially longer than the time baseline of the observations (P ≳ 20 yr). A linear fit to the McDonald RVs gives a radial acceleration of dv_r/dt = 10.60 ± 0.13 m s^−1 yr^−1.
12 Psc was also observed with the Hamilton Spectrograph at Lick Observatory as part of the Lick Planet Search (Fischer et al. 2014) between 1998 and 2012. RVs were obtained with a typical precision of 4.4 m s^−1. A linear fit to the Lick RVs gives a slope of dv_r/dt = 9.78 ± 0.15 m s^−1 yr^−1 (Figure 1). This is slightly shallower than the slope from the McDonald RVs (significant at the 4σ level), which may indicate a modest change in acceleration between the mean epochs of the two datasets (2004.0 for Lick and 2009.7 for McDonald).
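As an illustration of this step, the acceleration can be recovered with a simple weighted least-squares line fit to an RV time series. The sketch below is ours, not the survey pipeline (a production fit may also include offsets and jitter terms); the example data are the first four Tull epochs from Table 2.

```python
import numpy as np

def rv_trend(t_bjd, rv, rv_err):
    """Weighted least-squares fit of rv = v0 + (dv_r/dt) * t.
    Returns the slope and its formal 1-sigma error in m/s/yr."""
    t = (np.asarray(t_bjd) - np.mean(t_bjd)) / 365.25   # years, mean-subtracted
    w = 1.0 / np.asarray(rv_err)**2
    A = np.vstack([np.ones_like(t), t]).T               # design matrix
    cov = np.linalg.inv(A.T @ (w[:, None] * A))         # parameter covariance
    v0, slope = cov @ (A.T @ (w * np.asarray(rv)))
    return slope, np.sqrt(cov[1, 1])

# First four Tull epochs of 12 Psc from Table 2 (the full 50-epoch series
# gives dv_r/dt = 10.60 +/- 0.13 m/s/yr):
t = [2452115.95277, 2452145.89854, 2452219.66771, 2452473.90774]
rv = [-97.23, -98.83, -86.14, -59.28]
err = [3.69, 4.20, 5.03, 4.44]
print(rv_trend(t, rv, err))
```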
3.1.2. Hobby-Eberly Telescope/High-Resolution Spectrograph Radial Velocities of HD 159062

HD 159062 was monitored with HRS at the Hobby-Eberly Telescope (HET) between 2008 and 2014. HRS is a fiber-fed echelle spectrograph located in the basement of the HET (Tull 1998), and is passively (rather than actively) thermally and mechanically stabilized. A temperature-controlled I2 cell is mounted in front of the entrance slit and is used as a reference for RV measurements. 64 high-SNR spectra were acquired with the HET's flexible queue scheduling system (Shetrone et al. 2007) at a resolving power of R ≈ 60,000 between 4110 Å and 7875 Å. Relative RVs are measured in spectral chunks following the procedure described in Cochran et al. (2003) and Cochran et al. (2004). The median RV uncertainty is 3.4 m s^−1.
The HRS RVs of HD 159062 are shown in Figure 2. We find an acceleration of dv_r/dt = −14.1 ± 0.3 m s^−1 yr^−1, which is similar to the slope of −13.30 ± 0.12 m s^−1 yr^−1 found by Hirsch et al. (2019) based on 45 Keck/HIRES RVs spanning 2003 to 2019. A slight curvature is seen in the HIRES data; this changing acceleration is not evident in our HRS data, most likely owing to our shorter time baseline. A list of our HRS RVs can be found in Table 3.
3.2. Keck/NIRC2 Adaptive Optics Imaging
We imaged 12 Psc and HD 159062 with the NIRC2 infrared camera in its narrow configuration (9.971 mas pix^−1 plate scale; Service et al. 2016) using natural guide star adaptive optics at Keck Observatory (Wizinowich et al. 2000; Wizinowich 2013). 12 Psc was initially targeted as part of this program on 2017 October 10 UT, with subsequent observations on 2018 December 24 UT and 2019 July 07 UT. HD 159062 was observed on 2017 October 10 UT and 2019 July 07 UT. For each observation, the star was centered behind the partly transparent 600 mas diameter coronagraph to avoid saturation when reading out the full 10.″2 × 10.″2 array. Most sequences consist of five coronagraphic ("reconnaissance") images with the H- or KS-band filters to search for readily identifiable long-period stellar or substellar companions. 12 Psc B and HD 159062 B were immediately evident in the raw frames, although it was not clear whether they were faint white dwarfs, brown dwarfs, or background stars at the time of discovery. We also acquired deeper sequences in pupil-tracking mode (or angular differential imaging; Marois et al. 2006) in July 2019 to search for additional companions at smaller separations. These observations consisted of forty 30-second frames; the total field rotation was 11.6° for 12 Psc and 16.4° for HD 159062. On the October 2017 and July 2019 nights we also acquired short unsaturated images immediately following the coronagraphic images to photometrically calibrate the deeper frames. A summary of our observations can be found in Table 4.
We searched the Keck Observatory Archive and found that 12 Psc was also imaged on two separate occasions with NIRC2, in September 2004 and July 2005 (PI: M. Liu), in the J and Kp bands, respectively. Both sequences consist of coronagraphic images similar to our observations, with the host star centered behind the occulting spot. 12 Psc B is visible in both frames, offering a 15-year astrometric baseline to test for common proper motion and measure orbital motion.
Basic data reduction was carried out in the same fashion for all images: flat fielding using dome flats, correction for bad pixels and cosmic rays, and correction for geometric field distortion using the solutions derived by Yelda et al. (2010) for observations taken before April 2015 (when the adaptive optics system was realigned) and Service et al. (2016) for images taken after that date. For both distortion solutions, the direction of celestial north was calibrated by tying NIRC2 observations of globular clusters to distortion-corrected stellar positions obtained with the Hubble Space Telescope. The resulting precision in the north orientation is ≈0.001–0.002°, much less than the typical measurement uncertainties for AO-based relative astrometry.
Images in each coronagraph sequence are aligned by fitting a 2D elliptical Gaussian to the host star, which is visible behind the partly transparent mask, and then shifting each image with sub-pixel precision to a common position. We attempted several approaches to PSF subtraction for each sequence in order to increase the SNR of the companion: subtraction of a scaled median image of the sequence; a conservative implementation of the Locally Optimized Combination of Images (LOCI; Lafrenière et al. 2007); an aggressive form of LOCI with a more restrictive angular tolerance parameter; LOCI using 100 images selected in an automated fashion from a NIRC2 reference PSF library comprising >2×10^3 registered coronagraph frames; and optional masking of the companion during PSF subtraction. Details of these methods and the NIRC2 PSF library are described in Bowler et al. (2015a) and Bowler et al. (2015b). We attempted PSF subtraction for the 2004 observations of 12 Psc and the 2017 observations of HD 159062, but strong systematics were present in the residuals. The final processed images we adopt for this study are shown in Figures 3 and 4. Table 4 lists the observations and the adopted PSF subtraction method.
In general, astrometry and relative photometry of point sources measured directly from processed (PSF-subtracted) images can be biased as a result of self-subtraction and non-uniform field of view rotation. These effects can be especially severe for longer angular differential imaging datasets (e.g., Marois et al. 2010). When possible we use the negative PSF injection approach described in Bowler et al. (2018) to mitigate these biases. This entails adding a PSF with a negative amplitude close to the position of the point source in the raw images, running PSF subtraction at that position, and measuring the RMS of the residuals in a circular aperture. This process is then iteratively repeated by varying the astrometry (ρ and θ) at the sub-pixel level and the flux ratio (simply the amplitude of the negative PSF) using the amoeba algorithm (Nelder & Mead 1965; Press et al. 2007) until the resulting RMS is minimized. This method requires a PSF model; if unsaturated frames of the host star are taken close in time to the ADI sequence (to avoid changes in atmospheric conditions and AO correction), this approach can be used to reliably measure the contrast between the host star and the faint point source. When no unsaturated frames are available, any PSF can be used to measure astrometry (but not relative photometry).
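A minimal sketch of this forward-modeling loop is shown below, with scipy's Nelder-Mead minimizer playing the role of the amoeba algorithm. The names frames, psf, pos, and psf_subtract are hypothetical placeholders for the registered raw images, PSF model, companion position, and PSF-subtraction pipeline; this is an illustration of the technique, not our actual reduction code.

```python
import numpy as np
from scipy.ndimage import shift
from scipy.optimize import minimize

def residual_rms(params, frames, psf, pos, psf_subtract):
    """Negative PSF injection objective: subtract a shifted, scaled copy of
    the PSF model from every registered raw frame, run PSF subtraction on
    the injected sequence, and return the residual RMS in a small circular
    aperture at the companion position (pos = (y, x) in pixels)."""
    dy, dx, amp = params
    injected = [f - amp * shift(psf, (dy, dx)) for f in frames]
    resid = psf_subtract(injected)              # user-supplied pipeline -> 2D image
    yy, xx = np.indices(resid.shape)
    ap = (yy - pos[0])**2 + (xx - pos[1])**2 < 5.0**2   # 5-pixel-radius aperture
    return float(np.std(resid[ap]))

# Downhill-simplex ("amoeba") refinement of the astrometry and flux ratio;
# x0 holds initial guesses for (dy, dx, amplitude):
# best = minimize(residual_rms, x0=[dy0, dx0, amp0],
#                 args=(frames, psf, pos, psf_subtract), method="Nelder-Mead")
```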
We use this negative PSF injection approach to measure astrometry for the ADI datasets of 12 Psc and HD 159062 taken in July 2019. In both cases, images of 12 Psc without the coronagraph mask are used as the PSF model since unsaturated images of HD 159062 were not taken in July 2019. We also use this approach to measure relative photometry and astrometry of 12 Psc for the October 2017 dataset because unsaturated frames were acquired. Uncertainties in the relative photometry are estimated using the mean of the final ten iterations of the amoeba downhill simplex algorithm as the negative PSF separation, P.A., and amplitude settle in at values that minimize the RMS at the position of the companion. For all other observations, relative astrometry is directly computed from the unprocessed images. Total astrometric uncertainties are derived following Bowler et al. (2018) and incorporate random measurement uncertainties; systematic errors from the distortion solution; uncertainty in the plate scale and north angle; and shear (PSF blurring) caused by field rotation within an exposure. Because the coronagraph mask may introduce an additional uncalibrated source of systematic uncertainty (e.g., Konopacky et al. 2016; Bowler et al. 2018), we conservatively adopt 5 mas and 0.1° as the floors for the separation and position angle uncertainties, respectively. Final values for 12 Psc B and HD 159062 B can be found in Table 4.

4. RESULTS

4.1. Common Proper Motion

Figure 5 shows the expected relative motion of a stationary background source due to the proper and projected parallactic motion of 12 Psc. 12 Psc B is clearly comoving and exhibits significant orbital motion in separation and P.A. A linear fit of the astrometry as a function of time gives a slope of −8.6 mas yr^−1 in separation and −0.30° yr^−1 in P.A. Relative astrometry of HD 159062 B from Hirsch et al. (2019) and our new observations are shown in Figure 6. HD 159062 B is moving away from its host at a rate of 13.8 mas yr^−1, and the rate of change in P.A. is 0.47° yr^−1.

4.2. The Nature of 12 Psc B

Based solely on its brightness (H = 15.87 ± 0.16 mag; KS = 15.93 ± 0.2 mag; M_H = 13.08 ± 0.16 mag; M_KS = 13.1 ± 0.2 mag), 12 Psc B could be either a brown dwarf or a white dwarf companion. If it is a brown dwarf, its absolute magnitude would imply a spectral type near the L/T transition and an H−KS color of ≈0.6 mag (Dupuy & Liu 2012). The measured color of 12 Psc B (H−KS = −0.1 ± 0.3 mag) is significantly bluer than this, although the photometric uncertainties are large. To estimate the expected mass of the companion assuming 12 Psc B is a brown dwarf, we use a KS-band bolometric correction from Filippazzo et al. (2015) to infer a luminosity of log L_bol/L⊙ = −4.61 ± 0.10 dex. Based on the age of the host star (5.3 ± 1.1 Gyr; Soto & Jenkins 2018), substellar evolutionary models imply a mass near the hydrogen-burning limit (77.8 ± 1.7 M_Jup using the Burrows et al. 1997 models and 69 ± 3 M_Jup using the Saumon & Marley 2008 "hybrid" models).

The slope of the RV curve and the projected separation of the companion provide direct information about the companion mass, enabling us to readily test whether this brown dwarf hypothesis for 12 Psc B is compatible with the measured radial acceleration. Following Torres (1999), the mass of a companion accelerating its host star can be derived by taking the time derivative of the radial velocity equation of a Keplerian orbit. The companion mass (M_B) is related to the system distance (d), the projected separation of the companion (ρ), the instantaneous slope of the radial acceleration (dv_r/dt), and the orbital elements (eccentricity e, argument of periastron ω, inclination i, and the Keplerian angles related to the true orbital phase, namely the true anomaly f and eccentric anomaly E) as follows:
M_B/M⊙ = 5.341 × 10^−6 (d/pc)^2 (ρ/″)^2 (dv_r/dt / m s^−1 yr^−1)
         × [ (1 − e)(1 + cos E)/(1 − e cos E)
           × sin(f + ω)(1 − sin^2(f + ω) sin^2 i)/(1 + cos f)
           × sin i ]^−1.   (1)
Using only the measured RV slope, the companion mass distribution can be constrained with reasonable assumptions about the distributions of (a priori unknown) orbital elements and orbital phase angles projected on the plane of the sky. Here we adopt a uniform distribution for the argument of periastron from 0–2π, uniform eccentricities between 0–1, and an isotropic distribution of inclination angles projected on the sky (equivalent to a uniform distribution in cos i). To calculate the orbital phase, a mean anomaly is randomly drawn from 0–2π, the corresponding eccentric anomaly is iteratively solved for using the Newton-Raphson method, and a true anomaly is computed using tan(f/2) = √((1 + e)/(1 − e)) tan(E/2). Repeating this process with Monte Carlo draws over a range of projected separations results in a joint probability distribution between the companion mass and separation based on the measured radial acceleration (and conditioned on our assumptions about the distribution of orbital elements). Results for 12 Psc are shown in Figure 7.

Figure 7. Joint constraints on the mass and projected separation of a companion to 12 Psc based only on the measured RV trend (left) and using both the RV trend and the astrometric acceleration (right). The hydrogen-burning limit (HBL; ≈75 M_Jup) and deuterium-burning limit (≈13 M_Jup) are labeled. Based on the strength of the radial acceleration, the companion would be a brown dwarf or giant planet at close separations of ≲20 AU; more massive stellar or white dwarf companions are required on wider orbits. When the HGCA acceleration is included, the range of possible masses and separations is more limited. The measured separation and dynamical mass of 12 Psc B (from Section 4.5) is shown with the star.

The measured slope of dv_r/dt = 10.60 ± 0.13 m s^−1 yr^−1 implies that if the companion causing the acceleration were a brown dwarf or giant planet, it would have to be located within ≲0.″6 (≲20 AU). Assuming the RV trend originates entirely from the imaged companion we identify at 1.″6, the minimum mass of 12 Psc B is 0.49 M⊙. This immediately rules out the brown dwarf scenario. It also rules out any possibility that the companion could be a low-mass star, because the faintest absolute magnitude this mass threshold corresponds to is M_KS ≈ 5.7 mag following the empirical calibrations from Mann et al. (2019). This is about 10 magnitudes brighter than the observed absolute magnitude, implying that 12 Psc B must be a white dwarf.
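The Monte Carlo constraint described above can be sketched in a few lines. The implementation below is our own illustrative version (not the exact code behind Figure 7): it draws orbital elements from the stated priors, solves Kepler's equation with Newton-Raphson, and evaluates Equation 1; the smallest returned masses approach the Torres (1999) floor of (3√3/2) times the prefactor, about 0.49 M⊙ for the 12 Psc inputs.

```python
import numpy as np

def solve_kepler(M, e, n_iter=50):
    """Newton-Raphson solution of Kepler's equation M = E - e sin E (vectorized)."""
    E = M + e * np.sin(M)                      # decent starting guess
    for _ in range(n_iter):
        E -= (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
    return E

def mc_masses(d_pc, rho_arcsec, dvr_dt, n=200_000, seed=0):
    """Monte Carlo companion masses (Msun) from Equation (1):
    omega ~ U(0, 2pi), e ~ U(0, 1), isotropic inclination, mean anomaly ~ U(0, 2pi)."""
    rng = np.random.default_rng(seed)
    w = rng.uniform(0.0, 2.0 * np.pi, n)
    e = rng.uniform(0.0, 1.0, n)
    inc = np.arccos(rng.uniform(0.0, 1.0, n))  # uniform in cos i
    E = solve_kepler(rng.uniform(0.0, 2.0 * np.pi, n), e)
    f = 2.0 * np.arctan(np.sqrt((1.0 + e) / (1.0 - e)) * np.tan(E / 2.0))
    orb = ((1.0 - e) * (1.0 + np.cos(E)) / (1.0 - e * np.cos(E))
           * np.sin(f + w) * (1.0 - np.sin(f + w)**2 * np.sin(inc)**2)
           / (1.0 + np.cos(f)) * np.sin(inc))
    pre = 5.341e-6 * d_pc**2 * rho_arcsec**2 * dvr_dt
    return pre / np.abs(orb)                   # |...| keeps masses positive

m = mc_masses(36.23, 1.6, 10.60)
print(np.min(m))   # -> (3*sqrt(3)/2) * prefactor, about 0.49 Msun
```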
4.3. The Nature of HD 159062 B
Following the same procedure as for 12 Psc, we show the joint mass-separation distribution for HD 159062 in Figure 8 based on our measured RV trend of dv_r/dt = −14.1 ± 0.3 m s^−1 yr^−1. This radial acceleration is only consistent with a brown dwarf or giant planet if the companion is located at a separation of ≲0.″9 (≲20 AU); beyond this it must be a low-mass star or a white dwarf. At a separation of 2.″7, the minimum mass implied by the radial acceleration is 0.67 M⊙. However, Equation 1 assumes that the projected separation ρ and slope dv_r/dt are measured simultaneously. The mean epoch of our RV measurements is 2011.5, which is significantly earlier than our imaging observations. The closest astrometric epoch to that date from Hirsch et al. (2019) is 2012.481, at which point HD 159062 B was located at 2.″594 ± 0.″014. In Section 4.1 we found that HD 159062 B is moving away from its host at a rate of 13.8 mas yr^−1. If we correct for that average motion over the course of one year and assume a separation of 2.″580 ± 0.″014 in mid-2011, the minimum mass becomes 0.612 ± 0.015 M⊙, which takes into account uncertainties in the RV trend, distance, and projected separation. Hirsch et al. (2019) and Brandt et al. (submitted) analyzed observations of HD 159062 B in detail and unambiguously demonstrated that this companion is a white dwarf. Brandt et al. (submitted) find a dynamical mass of 0.617 +0.013/−0.012 M⊙ for HD 159062 B, in good agreement with our inferred minimum mass from the RV slope.
4.4. Hipparcos-Gaia Accelerations
Brandt (2018) carried out a cross calibration between the Hipparcos and Gaia astrometric datasets which resulted in the Hipparcos-Gaia Catalog of Accelerations (HGCA). Linking these catalogs to a common reference frame (that of Gaia DR2) provides a way to correct for local sky-dependent systematics present in the Hipparcos astrometry. As a result, measurements of astrometric accelerations between these two missions (separated by ≈25 years) can be inferred through changes in proper motion. The HGCA has been an especially valuable tool to measure dynamical masses of long-period substellar companions by combining absolute accelerations with relative astrometry and radial velocities (Brandt et al. 2019a; Dupuy et al. 2019; Brandt et al. 2019b; Franson et al., in prep.).
Both 12 Psc and HD 159062 have significant astrometric accelerations in HGCA (Figure 9). Three proper motions in R.A. and Dec. are available in this catalog: the proper motion in Hipparcos with a mean epoch of 1991.25, the proper motion in Gaia with a mean epoch of 2015.5, and the scaled positional difference between Hipparcos and Gaia. The latter measurement is the most precise as a result of the long baseline between the two missions.
We compute astrometric accelerations in R.A. (dµ_α/dt) and Dec. (dµ_δ/dt) using the proper motion from Gaia and the inferred proper motion from the scaled Hipparcos-Gaia positional difference following Brandt et al. (2019a): dµ_α/dt = 2∆µ_α,Gaia−HG/(t_α,Gaia − t_α,Hip) and dµ_δ/dt = 2∆µ_δ,Gaia−HG/(t_δ,Gaia − t_δ,Hip). Here t_α,Gaia and t_δ,Gaia are the Gaia astrometric epochs corresponding to the proper motion measurements in R.A. and Dec., and t_α,Hip and t_δ,Hip are the corresponding epochs for Hipparcos. The total acceleration is then computed as dµ_αδ/dt = √((dµ_α/dt)^2 + (dµ_δ/dt)^2). Brandt et al. (2019a) presented a simple relationship between the mass of a companion (M_B), its instantaneous projected separation (ρ), and both the radial (dv_r/dt) and astrometric (dµ_αδ/dt) accelerations induced on its host star. In convenient units this can be expressed as
M_B/M_Jup = 5.599 × 10^−3 (d/pc)^2 (ρ/″)^2 [ (dµ_αδ/dt)^2 + (dv_r/dt)^2 ]^(3/2) (dµ_αδ/dt)^−2,   (2)

with both accelerations in units of m s^−1 yr^−1.
The numerical coefficient becomes 5.342 × 10^−6 when M_B is in units of M⊙. Equation 2 is valid when all three measurements (ρ, dv_r/dt, and dµ_αδ/dt) are obtained simultaneously. This is rarely the case in practice, but the relation offers a convenient approximation of the dynamical mass as long as the orbit has not evolved substantially, as is the case for long-period companions.
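For concreteness, the two steps above, converting the HGCA proper-motion differences into a tangential acceleration and evaluating Equation 2, can be sketched as follows. This is an illustration under our own conventions; it uses the standard 4.74047 km s^−1 per arcsec yr^−1 of proper motion at 1 pc.

```python
import numpy as np

def hgca_accel(dmu_ra, dmu_dec, t_gaia_ra, t_gaia_dec, t_hip_ra, t_hip_dec, d_pc):
    """Total astrometric acceleration (m/s/yr) from the Gaia minus
    Hipparcos-to-Gaia proper-motion differences (mas/yr) and the
    per-coordinate astrometric epochs (yr)."""
    ddot_ra = 2.0 * dmu_ra / (t_gaia_ra - t_hip_ra)      # mas/yr^2
    ddot_dec = 2.0 * dmu_dec / (t_gaia_dec - t_hip_dec)  # mas/yr^2
    ddot = np.hypot(ddot_ra, ddot_dec)                   # total, mas/yr^2
    return ddot * 4.74047 * d_pc                         # 1 mas/yr at 1 pc = 4.74047 m/s

def mass_eq2(d_pc, rho_arcsec, dmu_dt, dvr_dt):
    """Companion mass in Msun from Equation (2); accelerations in m/s/yr."""
    return (5.342e-6 * d_pc**2 * rho_arcsec**2
            * (dmu_dt**2 + dvr_dt**2)**1.5 / dmu_dt**2)

# 12 Psc, using the HGCA values in Table 1: recovers ~23 m/s/yr and ~0.62 Msun.
a = hgca_accel(0.78, 1.42, 2015.563, 2015.669, 1991.348, 1991.277, 36.23)
print(a, mass_eq2(36.23, 1.7, a, 10.60))
```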
The HGCA kinematics for 12 Psc are listed in Table 1 and displayed in Figure 9. 12 Psc shows a significant change in proper motion between the Hipparcos-Gaia scaled positional difference and Gaia measurements: ∆µ_α,Gaia−HG = 0.78 ± 0.12 mas yr^−1 and ∆µ_δ,Gaia−HG = 1.42 ± 0.09 mas yr^−1. This translates into an astrometric acceleration of dµ_αδ/dt = 22.9 ± 1.4 m s^−1 yr^−1, about twice as large as its radial acceleration. The inferred constraints on the companion mass and separation resulting from these radial and astrometric accelerations are shown in Figure 7. The projected separation of 12 Psc B ranges from 1.″723 in 2004 to 1.″592 in 2019. Using a projected separation of ρ = 1.″7, which is closer to the mid-points of the radial and astrometric accelerations, Equation 2 implies a mass of about 0.622 ± 0.018 M⊙. This is typical of white dwarf masses (e.g., Kepler et al. 2007).
HD 159062 also shows substantial changes in proper motion in the HGCA (Figure 9): ∆µ_α,Gaia−HG = −2.66 ± 0.12 mas yr^−1 and ∆µ_δ,Gaia−HG = 1.35 ± 0.11 mas yr^−1. This translates into an astrometric acceleration of dµ_αδ/dt = 25.0 ± 1.0 m s^−1 yr^−1. Figure 8 shows constraints combining this with the RV trend. At a separation of 2.″580 ± 0.″014, the implied mass of HD 159062 B is 0.632 ± 0.014 M⊙, in good agreement with the dynamical mass. Below we carry out a full orbit fit of 12 Psc B and HD 159062 B using relative astrometry, radial velocities, and absolute astrometry from the HGCA.
4.5. Orbit and Dynamical Mass of 12 Psc B
The orbit and dynamical mass of 12 Psc B are determined using the efficient orbit fitting package orvara (Brandt et al., submitted), which jointly fits Keplerian orbits to radial velocities, relative astrometry of resolved companions, and absolute astrometry from the HGCA. The code relies on a Bayesian framework with emcee, an affine-invariant implementation of Markov chain Monte Carlo (Foreman-Mackey et al. 2013), to sample posterior distributions of orbital elements, physical parameters of the host and companion, and nuisance parameters such as the stellar parallax, instrument-dependent RV offsets, and RV jitter. We use 100 walkers with 10^5 steps for our orbit fit of the 12 Psc system.
Our priors are chosen to ensure the observations drive the resulting posteriors. We adopt log-flat priors for the semi-major axis, companion mass, and RV jitter; a sin i distribution for inclination; and linearly uniform priors for all other fitted parameters (√e sin ω, √e cos ω, longitude of ascending node, and mean longitude at a reference epoch). A Gaussian prior with a mean of 1.1 M⊙ and a standard deviation of 0.2 M⊙ is chosen for the primary mass based on independent estimates from the literature (e.g., 1.11 ± 0.05 M⊙, Soto & Jenkins 2018; 1.079 ± 0.012 M⊙, Tsantaki et al. 2013; 1.12 ± 0.08 M⊙, Mints & Hekker 2017).

Results of the orbit fit for 12 Psc B are shown in Figures 10 and 11. The orbit of 12 Psc B has a high eccentricity of 0.84 ± 0.08, an orbital period of 193 +25/−38 yr, and a semi-major axis of 39.5 +2.8/−3.5 AU. The dynamical mass of 12 Psc B is 0.605 +0.021/−0.022 M⊙, which is similar to our mass estimate in Section 4.4 using simplifying assumptions. A summary of prior and posterior distributions for relevant fitted parameters can be found in Table 5.
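The prior choices can be made concrete with a short sketch that draws initial parameter vectors from these distributions. The bounds below are our own illustrative assumptions (orvara's internal parameterization and limits may differ):

```python
import numpy as np

def draw_from_priors(n_walkers, rng=np.random.default_rng(1)):
    """Draw orbit-fit starting points from the Section 4.5 priors: log-flat
    in a, M2, and jitter; p(i) proportional to sin i; uniform otherwise;
    Gaussian in M1. All ranges are illustrative choices."""
    u = rng.uniform
    logu = lambda lo, hi: np.exp(u(np.log(lo), np.log(hi), n_walkers))
    return {
        "M1":      rng.normal(1.1, 0.2, n_walkers),   # Msun, Gaussian prior
        "M2":      logu(0.01, 2.0),                   # Msun, log-flat
        "a":       logu(1.0, 300.0),                  # AU, log-flat
        "jitter":  logu(0.1, 30.0),                   # m/s, log-flat
        "sesinw":  u(-1.0, 1.0, n_walkers),           # sqrt(e) sin(omega)
        "secosw":  u(-1.0, 1.0, n_walkers),           # sqrt(e) cos(omega)
        "inc":     np.arccos(u(-1.0, 1.0, n_walkers)),# p(i) ~ sin i on (0, pi)
        "Omega":   u(-np.pi, np.pi, n_walkers),       # ascending node
        "lam_ref": u(-np.pi, np.pi, n_walkers),       # mean longitude at ref. epoch
    }

print({k: v[:2] for k, v in draw_from_priors(100).items()})
```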
4.6. Orbit and Dynamical Mass of HD 159062 B
We fit a Keplerian orbit using orvara jointly to our HRS RVs, our new NIRC2 astrometry, and the HGCA acceleration for HD 159062, together with HIRES RVs and relative astrometry from Hirsch et al. (2019). The same priors we used for 12 Psc B in Section 4.5 are adopted for HD 159062 B except for the host star mass. For this we use a Gaussian prior with a mean of 0.8 M⊙ and a standard deviation of 0.2 M⊙, which captures the typical range of mass estimates for HD 159062 from the literature (e.g., 0.76 ± 0.03 M⊙, Hirsch et al. 2019; 0.78 ± 0.03 M⊙, Mints & Hekker 2017; 0.87 M⊙, Luck 2017). Results of the orbit fit are shown in Figures 12 and 13, and a summary of the posterior distributions is listed in Table 6. The dynamical mass of HD 159062 B is 0.609 +0.010/−0.011 M⊙, which happens to be very similar to the mass we found for 12 Psc B. HD 159062 B orbits with a semi-major axis of 60 +5/−7 AU, a period of 390 ± 70 yr, and a low eccentricity that peaks at e = 0.0 and is below e = 0.42 with 95% confidence. These values are consistent with, but more precise than, those found by Hirsch et al. (2019) and Brandt et al. (submitted).
Our dynamical mass of 0.609 +0.010/−0.011 M⊙ is somewhat lower than the value of 0.65 +0.12/−0.04 M⊙ from Hirsch et al. (2019) and 0.617 +0.013/−0.012 M⊙ from Brandt et al. (submitted). Because the mass of white dwarf remnants scales strongly with progenitor mass, a lower final mass implies a substantially lower initial mass (and longer main-sequence lifetime) compared to the 2.4 M⊙ progenitor mass found by Hirsch et al. (2019). The implications of a lower progenitor mass are discussed in more detail below.

Figure 10. Keplerian orbit fit to the relative astrometry of 12 Psc B (top and lower left panels), the astrometric acceleration of the host star from the HGCA (middle panels), and RVs of 12 Psc (lower right panel). 50 random orbits drawn from the MCMC chains are shown in gray and color coded by their χ^2 values; darker gray indicates a lower χ^2 value and a better fit. The best-fit orbit is shown in black. In the lower right panel, blue squares are Lick RVs and orange circles are from the Tull Spectrograph.
5. DISCUSSION AND CONCLUSIONS
Stars with masses ≲8 M⊙ evolve to become white dwarfs on timescales ranging from a few tens of Myr for high-mass stars near the threshold for core-collapse supernovae (e.g., Ekstrom et al. 2012; Burrows 2013) to ∼10^4 Gyr for the lowest-mass stars near the hydrogen-burning limit (Laughlin et al. 1997). Given the 13.8 Gyr age of the universe, the lowest-mass stars that can have evolved in isolation to become white dwarfs have masses of ≈0.9–1 M⊙. White dwarf masses generally increase with progenitor mass, and the corresponding minimum white dwarf mass that can result from isolated evolution of such a star at solar metallicity is 0.56 M⊙ (Cummings et al. 2018). Most white dwarfs should therefore have masses above this value, and indeed the peak of the white dwarf mass function in the solar neighborhood is ≈0.6 M⊙ (e.g., Liebert et al. 2005; Kepler et al. 2007). The majority of these have hydrogen atmospheres with DA classifications (Kepler et al. 2007).
The initial-to-final mass relation connects a progenitor star's mass to the final mass of the resulting white dwarf remnant. These relations are sensitive to metallicity and the physics of AGB evolution, including shell burning, dredge-up events, and mass loss (e.g., Marigo & Girardi 2007). Cummings et al. (2018) provide a semi-empirical calibration of the white dwarf initial-to-final mass relation spanning initial masses of ≈0.9–7 M⊙, or final masses between ≈0.5–1.2 M⊙. Using this relation, our dynamical mass measurements for the white dwarfs 12 Psc B and HD 159062 B imply nearly identical initial progenitor masses of 1.5 ± 0.6 M⊙ for both companions. The large uncertainties reflect the significant scatter in the empirically calibrated initial-to-final mass relation, which is sparsely populated for initial masses ≲2.5 M⊙. Theoretical stellar evolutionary models also exhibit substantial dispersion in the initial-to-final mass relation, with initial masses predicted to be between ≈1.3–2.2 M⊙ at solar metallicity for the final masses we measure for 12 Psc B and HD 159062 B (e.g., Marigo & Girardi 2007; Choi et al. 2016). The 12 Psc system was therefore initially a ∼1.5 M⊙ + 1.1 M⊙ binary, and the HD 159062 system was initially a ∼1.5 M⊙ + 0.8 M⊙ binary. The more massive components then underwent standard evolution through the giant and AGB phases. At this point, after about 2.9 Gyr of evolution, their radii expanded to ∼450 R⊙ (∼2.1 AU) before they shed ≈60% of their initial mass to become white dwarfs (Paxton et al. 2010; Choi et al. 2016; Dotter 2016; Cummings et al. 2018). Adopting an age of 5.3 ± 1.1 Gyr for 12 Psc from Soto & Jenkins (2018), the most likely companion progenitor mass of ∼1.5 M⊙ implies that the cooling age of 12 Psc B is ∼2–3 Gyr. A higher (lower) progenitor mass would result in a longer (shorter) cooling age. The age of HD 159062 is somewhat more uncertain, but for a system age of ∼9–13 Gyr, a main-sequence lifetime of ≈3 Gyr for the progenitor of HD 159062 B implies a cooling time of ∼6–10 Gyr for the white dwarf. This is consistent with the cooling age of 8.2 +0.3/−0.5 Gyr derived by Hirsch et al. (2019).
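As a rough consistency check, inverting a linear segment of the initial-to-final mass relation reproduces the quoted progenitor masses. The coefficients below (M_f ≈ 0.080 M_i + 0.489 for the low-mass MIST-based branch) are our approximate reading of Cummings et al. (2018) and should be treated as indicative only:

```python
def progenitor_mass(m_wd, slope=0.080, intercept=0.489):
    """Invert a linear IFMR segment, M_final = slope * M_initial + intercept.
    Coefficients approximate the low-mass (Mi ~ 0.8-2.8 Msun) MIST-based
    branch of Cummings et al. (2018); assumed values, not an exact fit."""
    return (m_wd - intercept) / slope

print(progenitor_mass(0.605), progenitor_mass(0.609))  # both ~1.5 Msun
```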
Despite both binaries having broadly similar physical characteristics, it is interesting to note the differences in the orbits of these companions. 12 Psc B has a high eccentricity of e = 0.84 ± 0.08 and a semi-major axis of 40 +2/−4 AU, which takes it to a periastron distance of 6.4 AU (with a 68% credible interval spanning 2.7–10.2 AU). On the other hand, HD 159062 B has a low eccentricity most consistent with a circular orbit (e < 0.42 at 95% confidence), a semi-major axis of 60 +5/−7 AU, and a periastron distance of 56 +8/−7 AU. Given the orbital properties of 12 Psc B, tidal interactions during the AGB phase should have been important for this system. Without other mechanisms to increase eccentricities, these interactions tend to damp eccentricities and reduce orbital periods (e.g., Saladino & Pols 2019). Bonačić Marinović et al. (2008) highlight Sirius as an example of a binary that began as a ∼2.1+5.5 M⊙ pair and should have circularized, yet Sirius B now orbits with an eccentricity of e = 0.59 and an orbital period of 50 yr. Assuming a mass ratio of q = M1/M2 ∼ 1.4 for the unevolved 12 Psc system, the Roche lobe of the 12 Psc B progenitor would have been ≈2.6 AU at periastron following the approximation for the Roche lobe effective radius from Eggleton (1983). This is comparable to the size of 12 Psc B during the AGB phase (∼2.1 AU). Mass transfer via Roche lobe overflow may therefore have occurred in this system, and tidal interactions would have been important. Like the Sirius system, the high eccentricity of 12 Psc B is therefore surprising and lends support to an eccentricity pumping mechanism, perhaps through enhanced mass loss at periastron, which may counteract tidal circularization (Bonačić Marinović et al. 2008).
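The Eggleton (1983) approximation used here is a single closed-form expression, sketched below; q is the donor-to-companion mass ratio, and the ≈2.6 AU figure follows from the ∼6.4 AU periastron distance:

```python
import numpy as np

def eggleton_roche_lobe(q, a_au):
    """Eggleton (1983) effective Roche-lobe radius:
    R_L / a = 0.49 q^(2/3) / (0.6 q^(2/3) + ln(1 + q^(1/3))),
    accurate to ~1% for all mass ratios q = M_donor / M_companion."""
    q23 = q ** (2.0 / 3.0)
    return a_au * 0.49 * q23 / (0.6 * q23 + np.log(1.0 + q ** (1.0 / 3.0)))

# 12 Psc B progenitor (q ~ 1.5/1.1) at the ~6.4 AU periastron distance:
print(eggleton_roche_lobe(1.5 / 1.1, 6.4))  # ~2.6 AU
```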
Wide companions will evolve as if in isolation, whereas short-period systems will evolve through one of several channels as detached, semi-detached, or contact binaries. 12 Psc B and HD 159062 B occupy an intermediate regime at several tens of AU where direct mass transfer via Roche lobe overflow may not have occurred, but wind accretion could have been important as a source of chemical enrichment of the unevolved companion as each progenitor underwent mass loss. This is especially true for HD 159062, which shows enhanced abundances of barium and other s-process elements, a signpost of prior accretion from an AGB companion (e.g., McClure et al. 1980; Escorza et al. 2019). For example, Fuhrmann et al. (2017) found a barium abundance of [Ba/Fe] = +0.40 ± 0.01 dex, and Reddy et al. (2006) found [Y/Fe] = +0.37 dex, [Ce/Fe] = +0.10 dex, and [Nd/Fe] = 0.39 dex. This led Fuhrmann et al. (2017) to predict that HD 159062 harbors a white dwarf companion, which was later confirmed with the discovery of HD 159062 B by Hirsch et al. (2019). On the other hand, 12 Psc shows no signs of barium enrichment or significant enrichment from other s-process elements: Delgado Mena et al. (2017) found abundances of [Ba/Fe] = −0.03 ± 0.02 dex, [Sr/Fe] = +0.12 ± 0.03 dex, [Y/Fe] = +0.09 ± 0.04 dex, [Zr/Fe] = +0.01 ± 0.08 dex, [Ce/Fe] = −0.07 ± 0.04 dex, and [Nd/Fe] = −0.09 ± 0.03 dex. This lack of enrichment is surprising when compared to HD 159062: both have white dwarf companions with similar masses and presumably similar evolutionary pathways for their progenitors, but 12 Psc B is on a highly eccentric orbit that brings it much closer to its unevolved main-sequence companion at periastron (≈6 AU for 12 Psc B versus ≈56 AU for HD 159062 B). Given that most barium stars have companions with orbital periods ≲10^4 days (e.g., McClure et al. 1980), one would naturally expect 12 Psc to be even more enriched than HD 159062 because of more efficient accretion at periastron. This raises two open questions for the 12 Psc system: Why was 12 Psc B not tidally circularized during the AGB phase? Why is 12 Psc unenriched in barium and s-process elements?
The chemical peculiarities of some barium stars have been attributed to winds from former AGB companions (now white dwarfs) on orbits out to several thousand AU (De Mello & da Silva 1997). It remains unclear why some stars with white dwarf companions at moderate separations appear to have normal abundances while others show various patterns of enrichment. The answer may involve convection and dissipation of material from the host star, the amount of mass lost from the AGB companion, or perhaps a third evolved (and now engulfed) companion in the system.

12 Psc B and HD 159062 B join a small but growing list of directly imaged white dwarf companions orbiting main-sequence stars with measured dynamical masses. To our knowledge, only seven systems with resolved white dwarf companions and precise dynamical mass measurements are known (including 12 Psc B and HD 159062 B; see the compilation in Table 7). These "Sirius-like" benchmark systems are valuable because they can be directly characterized with photometry and spectroscopy (yielding an effective temperature, bolometric luminosity, radius, and spectral classification), and the total system age and progenitor metallicity can be determined from the host star. These combined with a mass measurement provide fundamental tests of white dwarf mass-radius relations and cooling models (e.g., Bond et al. 2017b; Bond et al. 2017a; Serenelli et al. 2020). Follow-up spectroscopy and multi-wavelength photometry of 12 Psc B and HD 159062 B are needed to better characterize these companions and carry out robust tests of cooling models.

The authors are grateful to Michael Liu for the early NIRC2 imaging observations of 12 Psc in 2004 and 2005, as well as Keith Hawkins for helpful discussions about this system. We thank Diane Paulson, Rob Wittenmyer, Erik Brugamyer, Caroline Caldwell, Paul Robertson, Kevin Gullikson, and Marshall Johnson for contributing to the Tull observations of 12 Psc presented in this study. This work was supported by a NASA Keck PI Data Award, administered by the NASA Exoplanet Science Institute. Data presented herein were obtained at the W. M. Keck Observatory from telescope time allocated to the National Aeronautics and Space Administration through the agency's scientific partnership with the California Institute of Technology and the University of California. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. This research has made use of the Keck Observatory Archive (KOA), which is operated by the W. M. Keck Observatory and the NASA Exoplanet Science Institute (NExScI), under contract with the National Aeronautics and Space Administration. B.P.B. acknowledges support from the National Science Foundation grant AST-1909209. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain.
Figure 1. Radial velocities of 12 Psc from McDonald Observatory (top) and Lick Observatory (bottom; Fischer et al. 2014) spanning a total baseline of over twenty years (1998 to 2020). A strong radial acceleration is evident in both datasets. A linear fit gives a slope of 10.60 ± 0.13 m s^−1 yr^−1 for the Tull RVs and 9.78 ± 0.15 m s^−1 yr^−1 for the Lick RVs.
Figure 2. Radial velocities of HD 159062 from HET/HRS (top) and Keck/HIRES (bottom; Hirsch et al. 2019). Linear fits to the HRS and HIRES datasets give accelerations of −14.1 ± 0.3 m s^−1 yr^−1 and −13.34 ± 0.05 m s^−1 yr^−1, respectively.
Figure 3. NIRC2 adaptive optics images of 12 Psc B between September 2004 and July 2019. PSF subtraction has been carried out for all observations except the 2004 J-band dataset (see Table 4 for details). The position of the host star behind the 600 mas diameter coronagraph (masked out in these images) is marked with an "×". North is up and East is to the left.
Figure 4. NIRC2 adaptive optics H-band observations of HD 159062 and its white dwarf companion. No additional point sources at smaller separations are evident in the post-processed July 2019 ADI sequence.
Figure 5. Relative astrometry of 12 Psc B. Based on the first imaging epoch in 2004, the separation (top panel) and position angle (bottom panel) of a stationary source would follow the background track shown in black as a result of the proper and parallactic motion of the host star. 12 Psc B is clearly comoving and shows significant orbital motion.
Figure 6. Same as Figure 5 but for the white dwarf HD 159062 B. Blue circles show astrometry from Hirsch et al. (2019). Our new observations in 2017 (triangle) and 2019 (square) follow a similar trend of increasing separation and P.A. over time.
Figure 8. Same as Figure 7 but for HD 159062. The strength of the radial and astrometric accelerations together imply that the companion has a mass above the hydrogen-burning limit at separations beyond about 20 AU. The measured separation and dynamical mass of HD 159062 B are in good agreement with this prediction.
Figure 9. Proper motion measurements from the HGCA in R.A. and Dec. for 12 Psc (top) and HD 159062 (bottom). Three measurements are available: the "instantaneous" proper motion from Hipparcos, a similar measurement from Gaia DR2, and the scaled positional difference between the two missions (see Brandt 2018 for details). Both 12 Psc and HD 159062 show clear changes in proper motion from 1991 to 2015; the slope of these proper motion measurements represents the astrometric acceleration.
Figure 11. Corner plot showing joint posterior maps of various parameters and their marginalized probability density functions. The observations of 12 Psc B and its host star favor a high-eccentricity orbit and a semi-major axis of ≈40 AU. The dynamical mass of the white dwarf companion 12 Psc B is 0.605 +0.021/−0.022 M⊙.
Figure 12. Keplerian orbit fit for HD 159062 B from relative astrometry (top), astrometric acceleration from the HGCA (middle panels), and radial velocities (bottom right). See Figure 10 for details. In the lower right panel, our RVs from HRS are shown as orange circles while HIRES RVs from Hirsch et al. (2019) are shown as blue squares.
Figure 13. Same as Figure 11 but for HD 159062 B. The dynamical mass of the white dwarf companion HD 159062 B is 0.609 +0.010/−0.011 M⊙.
Facilities: Smith (Tull Spectrograph), HET (HRS), Keck:II (NIRC2)
Table 1. Properties of 12 Psc

Parameter | Value | Reference
Physical Properties
α2000.0 | 23:29:30.31 | ···
δ2000.0 | −01:02:09.1 | ···
π (mas) | 27.60 ± 0.05 | 1
Distance (pc) | 36.23 ± 0.06 | 1
SpT | G1 V | 2
Mass (M⊙) | 1.11 ± 0.05 | 3
Age (Gyr) | 5.3 ± 1.1 | 3
Teff (K) | 5950 ± 50 | 3
log(Lbol/L⊙) | 0.358 ± 0.08 | 4
log g (dex) [cgs] | 4.34 ± 0.3 | 3
R (R⊙) | 1.32 ± 0.03 | 3
[Fe/H] (dex) | +0.13 ± 0.10 | 3
v sin i (km s^−1) | 2.3 ± 0.2 | 3
log R′_HK | −5.07 ± 0.01 | 4
Proj. Sep. (″) | 1.6 | 5
Proj. Sep. (AU) | 58 | 5
dv_r/dt (m s^−1 yr^−1) | 10.60 ± 0.13 | 5
Photometry
V (mag) | 6.92 ± 0.04 | 6
Gaia G (mag) | 6.7203 ± 0.0003 | 1
J (mag) | 5.77 ± 0.01 | 7
H (mag) | 5.49 ± 0.03 | 7
KS (mag) | 5.40 ± 0.01 | 7
HGCA Kinematics (a)
µ_α,Hip (mas yr^−1) | −11.97 ± 1.09 | 8
µ_α,Hip Epoch (yr) | 1991.348 | 8
µ_δ,Hip (mas yr^−1) | −28.50 ± 0.84 | 8
µ_δ,Hip Epoch (yr) | 1991.277 | 8
µ_α,HG (mas yr^−1) | −11.32 ± 0.03 | 8
µ_δ,HG (mas yr^−1) | −25.66 ± 0.02 | 8
µ_α,Gaia (mas yr^−1) | −10.54 ± 0.12 | 8
µ_α,Gaia Epoch (yr) | 2015.563 | 8
µ_δ,Gaia (mas yr^−1) | −24.24 ± 0.09 | 8
µ_δ,Gaia Epoch (yr) | 2015.669 | 8
∆µ_α,Gaia−HG (mas yr^−1) | 0.78 ± 0.12 | 8
∆µ_δ,Gaia−HG (mas yr^−1) | 1.42 ± 0.09 | 8
dµ_αδ/dt (m s^−1 yr^−1) | 22.9 ± 1.4 | 8

References: (1) Gaia Collaboration et al. (2018); (2) Gray et al. (2006); (3) Soto & Jenkins (2018); (4) Marsden et al. (2014); (5) This work; (6) Richmond et al. (2000); (7) Cutri et al. (2003); (8) Brandt (2018).
(a) Hipparcos-Gaia Catalog of Accelerations (Brandt 2018). Proper motions in R.A. include a factor of cos δ.
Table 2. Tull Spectrograph Relative Radial Velocities of 12 Psc

Date (BJD) | RV (m s^−1) | σ_RV (m s^−1)
2452115.95277 | −97.23 | 3.69
2452145.89854 | −98.83 | 4.20
2452219.66771 | −86.14 | 5.03
2452473.90774 | −59.28 | 4.44
2452494.83891 | −75.32 | 6.33
2452540.79863 | −65.29 | 5.13
2452598.71506 | −52.06 | 4.50
2452896.84418 | −48.27 | 5.57
2452931.83102 | −73.12 | 4.71
···

Note: Table 2 is published in its entirety in the machine-readable format. A portion is shown here for guidance regarding its form and content.
Table 3. HRS Relative Radial Velocities of HD 159062

Date (BJD) | RV (m s^−1) | σ_RV (m s^−1)
2454698.72895 | 34.67 | 3.77
2454715.68103 | 45.29 | 3.29
2454726.62674 | 45.12 | 3.95
2454728.63730 | 35.47 | 3.13
2454873.01630 | 42.84 | 2.98
2454889.98057 | 39.49 | 3.39
2454918.88920 | 47.88 | 3.21
2455059.74200 | 19.49 | 3.33
2455100.63545 | 16.31 | 3.76
2455261.94792 | 15.50 | 2.62
···

Note: Table 3 is published in its entirety in the machine-readable format. A portion is shown here for guidance regarding its form and content.
Table 4. Keck/NIRC2 Adaptive Optics Imaging

UT Date (Y-M-D) | Epoch (UT) | N × Coadds × t_exp (s) | Filter | Sep. (″) | P.A. (°) | Contrast (∆ mag) | PSF Sub. (a)

12 Psc B
2004 Sep 08 | 2004.688 | 7 × 1 × 30 | J+cor600 | 1.723 ± 0.005 | 28.63 ± 0.10 | ··· | ···
2005 Jul 15 | 2005.536 | 12 × 2 × 15 | Kp+cor600 | 1.720 ± 0.005 | 28.5 ± 0.2 | ··· | 1
2017 Oct 10 | 2017.773 | 5 × 6 × 5 | H+cor600 | 1.623 ± 0.005 | 25.10 ± 0.14 | 10.38 ± 0.16 | 2, 3, 4
2018 Dec 24 | 2018.978 | 5 × 6 × 5 | H+cor600 | 1.600 ± 0.005 | 24.33 ± 0.12 | ··· | 2, 3, 4
2018 Dec 24 | 2018.978 | 5 × 6 × 5 | KS+cor600 | 1.604 ± 0.005 | 24.26 ± 0.13 | ··· | 2, 3, 4
2019 Jul 07 | 2019.514 | 40 × 3 × 10 | KS+cor600 | 1.592 ± 0.005 | 24.4 ± 0.2 | 10.53 ± 0.01 | 2

HD 159062 B
2017 Oct 10 | 2017.773 | 5 × 5 × 3 | H+cor600 | 2.663 ± 0.005 | 301.34 ± 0.11 | ··· | ···
2019 Jul 07 | 2019.513 | 40 × 10 × 3 | H+cor600 | 2.702 ± 0.005 | 301.9 ± 0.4 | ··· | 2, 3

(a) PSF subtraction method: (1) scaled median subtraction; (2) "conservative LOCI" with parameters W = 5, NA = 300, g = 1, Nδ = 1.5, and dr = 2; (3) 100 additional images used from the PSF reference library; (4) companion masked during PSF subtraction. See Bowler et al. (2015a) and Bowler et al. (2015b) for additional details.
Table 5. 12 Psc B Orbit Fit Results

Parameter | Prior | Best Fit | Median | MAP (a) | 68.3% CI | 95.4% CI

Fitted Parameters
M1 (M⊙) | N(1.1, 0.2) | 1.10 | 1.10 | 1.07 | (0.91, 1.31) | (0.70, 1.51)
M2 (M⊙) | 1/M2 | 0.594 | 0.605 | 0.594 | (0.583, 0.625) | (0.564, 0.648)
a (AU) | 1/a | 38.8 | 39.5 | 37.5 | (36.0, 42.3) | (35.4, 57.7)
√e sin ω | U(−1, 1) | 0.70 | 0.65 | 0.68 | (0.30, 0.74) | (0.29, 0.96)
√e cos ω | U(−1, 1) | −0.63 | −0.20 | 0.78 | (−0.68, 0.65) | (−0.67, 0.81)
i (°) | sin i | 140 | 132 | 123 | (118, 151) | (108, 157)
Ω (°) | U(−180, 180) | −4.77 | −42.2 | −142 | (−145, −7.2) | (−145, 13.3)
λ_ref (°) (b) | U(−180, 180) | 17.9 | −23.3 | 34.5 | (−125, 42.4) | (−163, 45.9)

Derived Parameters
e | ··· | 0.89 | 0.84 | 0.89 | (0.76, 0.92) | (0.27, 0.93)
ω (°) | ··· | 132 | 104 | 25.5 | (33.9, 138) | (22.0, 138)
P (yr) | ··· | 186 | 193 | 175 | (155, 218) | (145, 348)
τ (yr) (c) | ··· | 2069 | 2074 | 2070 | (2060, 2080) | (2060, 2180)
dp (AU) | ··· | 4.5 | 6.4 | 3.5 | (2.7, 10.2) | (2.4, 40.2)
da (AU) | ··· | 73.2 | 71.2 | 68.5 | (66.9, 73.9) | (66.2, 85.1)

(a) Maximum a posteriori probability.
(b) Mean longitude at the reference epoch, 2455197.5 JD.
(c) Time of periastron, 2455197.5 JD − P(λ_ref − ω)/(2π).
Table 6. HD 159062 B Orbit Fit Results

Parameter | Prior | Best Fit | Median | MAP (a) | 68.3% CI | 95.4% CI

Fitted Parameters
M1 (M⊙) | N(0.8, 0.2) | 0.896 | 0.799 | 0.770 | (0.62, 0.97) | (0.46, 1.16)
M2 (M⊙) | 1/M2 | 0.597 | 0.609 | 0.610 | (0.599, 0.619) | (0.588, 0.630)
a (AU) | 1/a | 62.3 | 59.9 | 62.5 | (52.7, 65.0) | (42.7, 71.0)
√e sin ω | U(−1, 1) | 0.021 | −0.14 | −0.20 | (−0.30, −0.02) | (−0.35, 0.18)
√e cos ω | U(−1, 1) | 0.11 | 0.18 | 0.28 | (−0.07, 0.48) | (−0.36, 0.64)
i (°) | sin i | 64.6 | 61.9 | 62.3 | (59.5, 64.9) | (54.1, 67.1)
Ω (°) | U(−180, 180) | 133 | 134 | 134 | (132, 140) | (130, 141)
λ_ref (°) (b) | U(−180, 180) | 146 | 147 | 148 | (141, 155) | (125, 161)

Derived Parameters
e | ··· | 0.013 | 0.092 | 0.010 | (0.00, 0.15) | (0.00, 0.40)
ω (°) | ··· | 10.7 | 139 | 156 | (103, 180) | (8.21, 180)
P (yr) | ··· | 402 | 387 | 407 | (314, 457) | (230, 533)
τ (yr) (c) | ··· | 1858 | 2000 | 2025 | (1950, 2050) | (1840, 2050)
dp (AU) | ··· | 61 | 56 | 63 | (49, 64) | (28, 64)
da (AU) | ··· | 63 | 64 | 64 | (62, 65) | (62, 78)

(a) Maximum a posteriori probability.
(b) Mean longitude at the reference epoch, 2455197.5 JD.
(c) Time of periastron, 2455197.5 JD − P(λ_ref − ω)/(2π).
Table 7. Resolved "Sirius-Like" White Dwarf Companions with Dynamical Mass Measurements

Name | Dynamical Mass (M⊙) | WD SpT | Host Star SpT | Proj. Sep. (a) (″) | Proj. Sep. (a) (AU) | System Age (Gyr) | WD Cooling Age (Gyr) | Ref.
40 Eri B | 0.573 ± 0.018 | DA2.9 | M4.5+K0 | 8.3 | 35 | ≈1.8 | ≈0.122 | 1, 2, 3
Procyon B | 0.592 ± 0.006 | DQZ | F5 IV-V | 3.8 | 15 | ∼2.7 | 1.37 ± 0.04 | 4
Gl 86 B | 0.596 ± 0.010 | DQ6 | K0 V | 2.4 | 22 | ∼2.5 | 1.25 ± 0.05 | 5, 6
12 Psc B | 0.605 +0.021/−0.022 | ··· | G1 V | 1.6 | 40 | 5.3 ± 1.1 | ∼2–3 | 7
HD 159062 B | 0.609 +0.010/−0.011 | ··· | G9 V | 2.7 | 60 | ∼9–13 | 8 +3/−5 | 7, 8
Stein 2051 B (b) | 0.675 ± 0.051 | DC | M4 | 10.1 | 56 | 1.9–3.6 | 1.9 ± 0.4 | 9
Sirius B | 1.018 ± 0.011 | DA2 | A1 V | 10.7 | 20 | 0.288 ± 0.010 | ≈0.126 | 10

Note: Entries in this table are limited to white dwarf companions with precise dynamical mass constraints (σ_M/M < 10%).
(a) Most recently reported projected separation.
(b) The mass of Stein 2051 B was measured via gravitational deflection.
References: (1) Gianninas et al. (2011); (2) Mason et al. (2017); (3) Bond et al. (2017a); (4) Bond et al. (2015); (5) Farihi et al. (2013); (6) Brandt et al. (2019a); (7) This work; (8) Hirsch et al. (2019); (9) Sahu et al. (2017); (10) Bond et al. (2017b).
Note that the measured RVs are determined with respect to a dense set of iodine absorption lines rather than an absolute reference, such as a stable RV standard. The zero point is therefore arbitrary.
Note that position angles in this work correspond to the angle from celestial north through east at the epoch of observation, not for J2000.
Note that the parameter values for orbit fits quoted in this study represent the median of the parameter posterior distributions and the 68.3% credible interval, although we also list the best-fit values and the maximum a posteriori probabilities in Table 5.
REFERENCES

Aguilera-Gómez, C., Ramírez, I., & Chanamé, J. 2018, A&A, 614, A55
Bonačić Marinović, A. A., Glebbeek, E., & Pols, O. R. 2008, A&A, 480, 797
Bond, H. E., Bergeron, P., & Bédard, A. 2017a, ApJ, 848, 16
Bond, H. E., Gilliland, R. L., Schaefer, G. H., et al. 2015, ApJ, 813, 106
Bond, H. E., Schaefer, G. H., Gilliland, R. L., et al. 2017b, ApJ, 840, 70
Bowler, B. P. 2016, PASP, 128, 102001
Bowler, B. P., Liu, M. C., Shkolnik, E. L., & Tamura, M. 2015a, ApJS, 216, 7
Bowler, B. P., Shkolnik, E. L., Liu, M. C., et al. 2015b, ApJ, 806, 62
Bowler, B. P., Dupuy, T. J., Endl, M., et al. 2018, AJ, 155, 159
Brandt, T. D. 2018, ApJS, 239, 31
Brandt, T. D., Dupuy, T. J., & Bowler, B. P. 2019a, AJ, 158, 140
Brandt, T. D., Dupuy, T. J., Bowler, B. P., et al. 2019b, arXiv:1910.01652
Brewer, J. M., Fischer, D. A., Valenti, J. A., & Piskunov, N. 2016, ApJS, 225, 32
Brewer, M.-M., & Carney, B. W. 2006, AJ, 131, 431
Burrows, A. 2013, RvMP, 85, 245
Burrows, A., Marley, M., Hubbard, W. B., et al. 1997, ApJ, 491, 856
Butler, R. P., Vogt, S. S., Laughlin, G., et al. 2017, AJ, 153, 208
Calissendorff, P., & Janson, M. 2018, A&A, 615, A149
Cheetham, A., Ségransan, D., Peretti, S., et al. 2018, A&A, 614, A16
Choi, J., Dotter, A., Conroy, C., et al. 2016, ApJ, 823, 102
Cochran, W. D., & Endl, M. 2008, Phys. Scr., T130, 014006
Cochran, W. D., & Hatzes, A. P. 1993, in ASP Conf. Ser. 36, Planets Around Pulsars, ed. J. A. Phillips, S. E. Thorsett, & S. R. Kulkarni (San Francisco, CA: ASP), 267
Cochran, W. D., Hatzes, A. P., Butler, R. P., & Marcy, G. M. 1997, ApJ, 483, 457
Cochran, W. D., Tull, R. G., MacQueen, P. J., Paulson, D. B., & Endl, M. 2003, in ASP Conf. Ser. 294, Scientific Frontiers in Research on Extrasolar Planets, ed. D. Deming & S. Seager (San Francisco, CA: ASP), 561
Cochran, W. D., Endl, M., McArthur, B., et al. 2004, ApJ, 611, L133
Crepp, J. R., Gonzales, E. J., Bechter, E. B., et al. 2016, ApJ, 831, 136
Crepp, J. R., Johnson, J. A., Howard, A. W., et al. 2014, ApJ, 781, 29
—. 2013, ApJ, 774, 1
Crepp, J. R., Gonzales, E. J., Bowler, B. P., et al. 2018, ApJ, 864, 42
Cummings, J. D., Kalirai, J. S., Tremblay, P.-E., Ramirez-Ruiz, E., & Choi, J. 2018, ApJ, 866, 21
Cutri, R. M., Skrutskie, M. F., Van Dyk, S., et al. 2003, The 2MASS All-Sky Catalog of Point Sources (Univ. of Massachusetts and IPAC/California Institute of Technology)
De Mello, G. F. P., & da Silva, L. 1997, ApJ, 476, L89
Delgado Mena, E., Tsantaki, M., Adibekyan, V. Z., et al. 2017, A&A, 606, A94
Dotter, A. 2016, ApJS, 222, 8
Dupuy, T. J., Brandt, T. D., Kratter, K. M., & Bowler, B. P. 2019, ApJL, 871, L4
Dupuy, T. J., & Liu, M. C. 2012, ApJS, 201, 19
—. 2017, ApJS, 231, 15
Eggleton, P. P. 1983, ApJ, 268, 368
Ekstrom, S., Georgy, C., Eggenberger, P., et al. 2012, A&A, 537, A146
Els, S. G., Sterzik, M. F., Marchis, F., et al. 2001, A&A, 370, L1
Endl, M., Kürster, M., & Els, S. 2000, A&A, 362, 585
Endl, M., Brugamyer, E. J., Cochran, W. D., et al. 2016, ApJ, 818, 34
Escorza, A., Karinkuzhi, D., Jorissen, A., et al. 2019, A&A, 626, A128
Farihi, J., Bond, H. E., Dufour, P., et al. 2013, MNRAS, 430, 652
Filippazzo, J. C., Rice, E. L., Faherty, J., et al. 2015, ApJ, 810, 158
Fischer, D. A., Marcy, G. W., & Spronck, J. F. P. 2014, ApJS, 210, 5
Foreman-Mackey, D., Hogg, D. W., Lang, D., & Goodman, J. 2013, PASP, 125, 306
Fuhrmann, K., Chini, R., Kaderhandt, L., Chen, Z., & Lachaume, R. 2017, MNRAS, 471, 3768
Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2018, A&A, 616, A1
Gianninas, A., Bergeron, P., & Ruiz, M. T. 2011, ApJ, 743, 138
Gray, R. O., Corbally, C. J., Garrison, R. F., et al. 2006, AJ, 132, 161
Gray, R. O., Corbally, C. J., Garrison, R. F., McFadden, M. T., & Robinson, P. E. 2003, AJ, 126, 2048
Hatzes, A. P., Cochran, W. D., Endl, M., et al. 2003, ApJ, 599, 1383
Hillenbrand, L. A., & White, R. J. 2004, ApJ, 604, 741
Hirsch, L. A., Ciardi, D. R., Howard, A. W., et al. 2019, ApJ, 878, 50
Howard, A. W., Johnson, J. A., Marcy, G. W., et al. 2010, ApJ, 721, 1467
Isaacson, H., & Fischer, D. 2010, ApJ, 725, 875
Kepler, S. O., Kleinman, S. J., Nitta, A., et al. 2007, MNRAS, 375, 1315
Konopacky, Q. M., Ghez, A. M., Barman, T. S., et al. 2010, ApJ, 711, 1087
Konopacky, Q. M., Marois, C., Macintosh, B. A., et al. 2016, AJ, 152, 28
Lafrenière, D., Marois, C., Doyon, R., Nadeau, D., & Artigau, É. 2007, ApJ, 660, 770
Laughlin, G., Bodenheimer, P., & Adams, F. C. 1997, ApJ, 482, 420
Liebert, J., Bergeron, P., & Holberg, J. B. 2005, ApJS, 156, 47
Liu, M. C., Fischer, D. A., Graham, J. R., et al. 2002, ApJ, 571, 519
Luck, R. E. 2017, AJ, 153, 21
Maire, A. L., Molaverdikhani, K., Desidera, S., et al. 2020, A&A, 639, A47
Mann, A. W., Dupuy, T., Kraus, A. L., et al. 2019, ApJ, 871, 63
Marigo, P., & Girardi, L. 2007, A&A, 469, 239
Marois, C., Lafrenière, D., Doyon, R., Macintosh, B., & Nadeau, D. 2006, ApJ, 641, 556
Marois, C., Macintosh, B., & Véran, J.-P. 2010, Proc. SPIE, 7736, 77361J
Marsden, S. C., Petit, P., Jeffers, S. V., et al. 2014, MNRAS, 444, 3517
Mason, B. D., Hartkopf, W. I., & Miles, K. N. 2017, AJ, 154, 200
McClure, R. D., Fletcher, J. M., & Nemec, J. M. 1980, ApJ, 238, L35
Mints, A., & Hekker, S. 2017, A&A, 604, A108
Mugrauer, M., & Neuhäuser, R. 2005, MNRAS, 361, L15
Murgas, F., Jenkins, J. S., Rojo, P., Jones, H. R. A., & Pinfield, D. J. 2013, A&A, 552, A27
Nakajima, T., Oppenheimer, B. R., Kulkarni, S. R., et al. 1995, Nature, 378, 463
Nelder, J. A., & Mead, R. 1965, The Computer Journal, 7, 308
Parsons, S. G., Gänsicke, B. T., Marsh, T. R., et al. 2017, MNRAS, 470, 4473
Paxton, B., Bildsten, L., Dotter, A., et al. 2010, ApJS, 192, 3
Press, W. H., Teukolsky, S. A., Vetterling, W. T., & Flannery, B. P. 2007, Numerical Recipes: The Art of Scientific Computing (3rd ed.)
Queloz, D., Mayor, M., Weber, L., et al. 2000, A&A, 354, 99
Reddy, B. E., Lambert, D. L., & Allende Prieto, C. 2006, MNRAS, 367, 1329
Richmond, M. W., Droege, T. F., Gombert, G., et al. 2000, PASP, 112, 397
Rickman, E. L., Ségransan, D., Hagelberg, J., et al. 2020, A&A, 635, A203
Robertson, P., Endl, M., Cochran, W. D., et al. 2012, ApJ, 749, 39
Rodigas, T. J., Bergeron, P., Simon, A., et al. 2016, ApJ, 831, 177
Sahu, K. C., Anderson, J., Casertano, S., et al. 2017, Science, 356, 1046
Saikia, S. B., Marvin, C. J., Jeffers, S. V., et al. 2018, A&A, 616, A108
. M I Saladino, O R Pols, A&A. 629103Saladino, M. I., & Pols, O. R. 2019, A&A, 629, A103
. D Saumon, M S Marley, ApJ. 6891327Saumon, D., & Marley, M. S. 2008, ApJ, 689, 1327
. A Serenelli, A Weiss, C Aerts, arXiv:2006.10868arXivSerenelli, A., Weiss, A., Aerts, C., et al. 2020, arXiv, arXiv:2006.10868
. M Service, J R Lu, R Campbell, PASP. 12895004Service, M., Lu, J. R., Campbell, R., et al. 2016, PASP, 128, 095004
. M Shetrone, M E Cornell, J R Fowler, PASP. 119556Shetrone, M., Cornell, M. E., Fowler, J. R., et al. 2007, PASP, 119, 556
. M Simon, S Guilloteau, T L Beck, ApJ. 88442Simon, M., Guilloteau, S., Beck, T. L., et al. 2019, ApJ, 884, 42
. I A G Snellen, A G Brown, Nature Astronomy. 2883Snellen, I. A. G., & Brown, A. G. A. 2018, Nature Astronomy, 2, 883
. M G Soto, Jenkins, J. S. 61576Soto, M. G., & Jenkins, J. S. 2018, 615, A76
. C Thalmann, J Carson, M Janson, ApJ. 707123Thalmann, C., Carson, J., Janson, M., et al. 2009, ApJ, 707, L123
. C G Tinney, R P Butler, G W Marcy, ApJ. 551507Tinney, C. G., Butler, R. P., Marcy, G. W., et al. 2001, ApJ, 551, 507
. G Torres, PASP. 111169Torres, G. 1999, PASP, 111, 169
. M Tsantaki, S G Sousa, V Z Adibekyan, 555150Tsantaki, M., Sousa, S. G., Adibekyan, V. Z., et al. 2013, 555, A150
R G Tull, Proc. SPIE. SPIE3355387Tull, R. G. 1998, Proc. SPIE, 3355, 387
. R G Tull, P J Macqueen, C Sneden, PASP. 107251Tull, R. G., MacQueen, P. J., & Sneden, C. 1995, PASP, 107, 251
. P Wizinowich, PASP. 125798Wizinowich, P. 2013, PASP, 125, 798
. P Wizinowich, D S Acton, C Shelton, PASP. 112315Wizinowich, P., Acton, D. S., Shelton, C., et al. 2000, PASP, 112, 315
. S Yelda, J R Lu, A M Ghez, ApJ. 725331Yelda, S., Lu, J. R., Ghez, A. M., et al. 2010, ApJ, 725, 331
. A Zurlo, A Vigan, J Hagelberg, 55421Zurlo, A., Vigan, A., Hagelberg, J., et al. 2013, 554, A21
| [] |
[
"Measurement and correction of variations in interstellar dispersion in high-precision pulsar timing",
"Measurement and correction of variations in interstellar dispersion in high-precision pulsar timing"
] | [
"M J Keith \nTelescope National Facility\nCSIRO Astronomy & Space Science\nP.O. Box 761710EppingNSWAustralia, Australia\n",
"W Coles \nElectrical and Computer Engineering\nUniversity of California at San Diego\nLa JollaCAU.S.A\n",
"R M Shannon \nTelescope National Facility\nCSIRO Astronomy & Space Science\nP.O. Box 761710EppingNSWAustralia, Australia\n",
"G B Hobbs \nTelescope National Facility\nCSIRO Astronomy & Space Science\nP.O. Box 761710EppingNSWAustralia, Australia\n",
"R N Manchester \nTelescope National Facility\nCSIRO Astronomy & Space Science\nP.O. Box 761710EppingNSWAustralia, Australia\n",
"M Bailes \nCentre for Astrophysics and Supercomputing\nSwinburne University of Technology\nP.O. Box 2183122HawthornVICAustralia\n",
"N D R Bhat \nCentre for Astrophysics and Supercomputing\nSwinburne University of Technology\nP.O. Box 2183122HawthornVICAustralia\n\nInternational Centre for Radio Astronomy Research\nCurtin University\n6102BentleyWAAustralia\n",
"S Burke-Spolaor \nNASA Jet Propulsion Laboratory\nCalifornia Institute of Technology\n4800 Oak Grove Drive91109PasadenaCAUSA\n",
"D J Champion \nMax-Planck-Institut für Radioastronomie\nAuf dem Hügel 6953121BonnGermany\n",
"A Chaudhary \nTelescope National Facility\nCSIRO Astronomy & Space Science\nP.O. Box 761710EppingNSWAustralia, Australia\n",
"A W Hotan \nTelescope National Facility\nCSIRO Astronomy & Space Science\nP.O. Box 761710EppingNSWAustralia, Australia\n",
"J Khoo \nTelescope National Facility\nCSIRO Astronomy & Space Science\nP.O. Box 761710EppingNSWAustralia, Australia\n",
"J Kocz \nCentre for Astrophysics and Supercomputing\nSwinburne University of Technology\nP.O. Box 2183122HawthornVICAustralia\n\nHarvard-Smithsonian Centre for Astrophysics\n60 Garden Street02138CambridgeMAU.S.A\n",
"S Os Lowski \nTelescope National Facility\nCSIRO Astronomy & Space Science\nP.O. Box 761710EppingNSWAustralia, Australia\n\nCentre for Astrophysics and Supercomputing\nSwinburne University of Technology\nP.O. Box 2183122HawthornVICAustralia\n",
"V Ravi \nTelescope National Facility\nCSIRO Astronomy & Space Science\nP.O. Box 761710EppingNSWAustralia, Australia\n\nSchool of Physics\nUniversity of Melbourne\nVic 3010Australia\n",
"J E Reynolds \nTelescope National Facility\nCSIRO Astronomy & Space Science\nP.O. Box 761710EppingNSWAustralia, Australia\n",
"J Sarkissian \nTelescope National Facility\nCSIRO Astronomy & Space Science\nP.O. Box 761710EppingNSWAustralia, Australia\n",
"W Van Straten \nCentre for Astrophysics and Supercomputing\nSwinburne University of Technology\nP.O. Box 2183122HawthornVICAustralia\n",
"D R B Yardley \nTelescope National Facility\nCSIRO Astronomy & Space Science\nP.O. Box 761710EppingNSWAustralia, Australia\n\nSydney Institute for Astronomy\nSchool of Physics A29\nThe University of Sydney\n2006NSWAustralia\n"
] | [
"Telescope National Facility\nCSIRO Astronomy & Space Science\nP.O. Box 761710EppingNSWAustralia, Australia",
"Electrical and Computer Engineering\nUniversity of California at San Diego\nLa JollaCAU.S.A",
"Telescope National Facility\nCSIRO Astronomy & Space Science\nP.O. Box 761710EppingNSWAustralia, Australia",
"Telescope National Facility\nCSIRO Astronomy & Space Science\nP.O. Box 761710EppingNSWAustralia, Australia",
"Telescope National Facility\nCSIRO Astronomy & Space Science\nP.O. Box 761710EppingNSWAustralia, Australia",
"Centre for Astrophysics and Supercomputing\nSwinburne University of Technology\nP.O. Box 2183122HawthornVICAustralia",
"Centre for Astrophysics and Supercomputing\nSwinburne University of Technology\nP.O. Box 2183122HawthornVICAustralia",
"International Centre for Radio Astronomy Research\nCurtin University\n6102BentleyWAAustralia",
"NASA Jet Propulsion Laboratory\nCalifornia Institute of Technology\n4800 Oak Grove Drive91109PasadenaCAUSA",
"Max-Planck-Institut für Radioastronomie\nAuf dem Hügel 6953121BonnGermany",
"Telescope National Facility\nCSIRO Astronomy & Space Science\nP.O. Box 761710EppingNSWAustralia, Australia",
"Telescope National Facility\nCSIRO Astronomy & Space Science\nP.O. Box 761710EppingNSWAustralia, Australia",
"Telescope National Facility\nCSIRO Astronomy & Space Science\nP.O. Box 761710EppingNSWAustralia, Australia",
"Centre for Astrophysics and Supercomputing\nSwinburne University of Technology\nP.O. Box 2183122HawthornVICAustralia",
"Harvard-Smithsonian Centre for Astrophysics\n60 Garden Street02138CambridgeMAU.S.A",
"Telescope National Facility\nCSIRO Astronomy & Space Science\nP.O. Box 761710EppingNSWAustralia, Australia",
"Centre for Astrophysics and Supercomputing\nSwinburne University of Technology\nP.O. Box 2183122HawthornVICAustralia",
"Telescope National Facility\nCSIRO Astronomy & Space Science\nP.O. Box 761710EppingNSWAustralia, Australia",
"School of Physics\nUniversity of Melbourne\nVic 3010Australia",
"Telescope National Facility\nCSIRO Astronomy & Space Science\nP.O. Box 761710EppingNSWAustralia, Australia",
"Telescope National Facility\nCSIRO Astronomy & Space Science\nP.O. Box 761710EppingNSWAustralia, Australia",
"Centre for Astrophysics and Supercomputing\nSwinburne University of Technology\nP.O. Box 2183122HawthornVICAustralia",
"Telescope National Facility\nCSIRO Astronomy & Space Science\nP.O. Box 761710EppingNSWAustralia, Australia",
"Sydney Institute for Astronomy\nSchool of Physics A29\nThe University of Sydney\n2006NSWAustralia"
] | [
"Mon. Not. R. Astron. Soc"
] | Signals from radio pulsars show a wavelength-dependent delay due to dispersion in the interstellar plasma. At a typical observing wavelength, this delay can vary by tens of microseconds on five-year time scales, far in excess of signals of interest to pulsar timing arrays, such as that induced by a gravitational-wave background. Measurement of these delay variations is not only crucial for the detection of such signals, but also provides an unparallelled measurement of the turbulent interstellar plasma at au scales.In this paper we demonstrate that without consideration of wavelengthindependent red-noise, 'simple' algorithms to correct for interstellar dispersion can attenuate signals of interest to pulsar timing arrays. We present a robust method for this correction, which we validate through simulations, and apply it to observations from the Parkes Pulsar Timing Array. Correction for dispersion variations comes at a cost of increased band-limited white noise. We discuss scheduling to minimise this additional noise, and factors, such as scintillation, that can exacerbate the problem.Comparison with scintillation measurements confirms previous results that the spectral exponent of electron density variations in the interstellar medium often appears steeper than expected. We also find a discrete change in dispersion measure of PSR J1603−7202 of ∼ 2×10 −3 cm −3 pc for about 250 days. We speculate that this has a similar origin to the 'extreme scattering events' seen in other sources. In addition, we find that four pulsars show a wavelength-dependent annual variation, indicating a persistent gradient of electron density on an au spatial scale, which has not been reported previously. | 10.1093/mnras/sts486 | [
"https://arxiv.org/pdf/1211.5887v1.pdf"
] | 43,035,952 | 1211.5887 | e7931795c24562572d6f3468a4d24786a015aa17 |
Measurement and correction of variations in interstellar dispersion in high-precision pulsar timing
May 2014
M J Keith
Telescope National Facility
CSIRO Astronomy & Space Science
P.O. Box 761710EppingNSWAustralia, Australia
W Coles
Electrical and Computer Engineering
University of California at San Diego
La JollaCAU.S.A
R M Shannon
Telescope National Facility
CSIRO Astronomy & Space Science
P.O. Box 761710EppingNSWAustralia, Australia
G B Hobbs
Telescope National Facility
CSIRO Astronomy & Space Science
P.O. Box 761710EppingNSWAustralia, Australia
R N Manchester
Telescope National Facility
CSIRO Astronomy & Space Science
P.O. Box 761710EppingNSWAustralia, Australia
M Bailes
Centre for Astrophysics and Supercomputing
Swinburne University of Technology
P.O. Box 2183122HawthornVICAustralia
N D R Bhat
Centre for Astrophysics and Supercomputing
Swinburne University of Technology
P.O. Box 2183122HawthornVICAustralia
International Centre for Radio Astronomy Research
Curtin University
6102BentleyWAAustralia
S Burke-Spolaor
NASA Jet Propulsion Laboratory
California Institute of Technology
4800 Oak Grove Drive91109PasadenaCAUSA
D J Champion
Max-Planck-Institut für Radioastronomie
Auf dem Hügel 6953121BonnGermany
A Chaudhary
Telescope National Facility
CSIRO Astronomy & Space Science
P.O. Box 761710EppingNSWAustralia, Australia
A W Hotan
Telescope National Facility
CSIRO Astronomy & Space Science
P.O. Box 761710EppingNSWAustralia, Australia
J Khoo
Telescope National Facility
CSIRO Astronomy & Space Science
P.O. Box 761710EppingNSWAustralia, Australia
J Kocz
Centre for Astrophysics and Supercomputing
Swinburne University of Technology
P.O. Box 2183122HawthornVICAustralia
Harvard-Smithsonian Centre for Astrophysics
60 Garden Street02138CambridgeMAU.S.A
S Os Lowski
Telescope National Facility
CSIRO Astronomy & Space Science
P.O. Box 761710EppingNSWAustralia, Australia
Centre for Astrophysics and Supercomputing
Swinburne University of Technology
P.O. Box 2183122HawthornVICAustralia
V Ravi
Telescope National Facility
CSIRO Astronomy & Space Science
P.O. Box 761710EppingNSWAustralia, Australia
School of Physics
University of Melbourne
Vic 3010Australia
J E Reynolds
Telescope National Facility
CSIRO Astronomy & Space Science
P.O. Box 761710EppingNSWAustralia, Australia
J Sarkissian
Telescope National Facility
CSIRO Astronomy & Space Science
P.O. Box 761710EppingNSWAustralia, Australia
W Van Straten
Centre for Astrophysics and Supercomputing
Swinburne University of Technology
P.O. Box 2183122HawthornVICAustralia
D R B Yardley
Telescope National Facility
CSIRO Astronomy & Space Science
P.O. Box 761710EppingNSWAustralia, Australia
Sydney Institute for Astronomy
School of Physics A29
The University of Sydney
2006NSWAustralia
Measurement and correction of variations in interstellar dispersion in high-precision pulsar timing
Mon. Not. R. Astron. Soc
May 2014 (MN LaTeX style file v2.2). Keywords: pulsars: general - ISM: structure - methods: data analysis
Signals from radio pulsars show a wavelength-dependent delay due to dispersion in the interstellar plasma. At a typical observing wavelength, this delay can vary by tens of microseconds on five-year time scales, far in excess of signals of interest to pulsar timing arrays, such as that induced by a gravitational-wave background. Measurement of these delay variations is not only crucial for the detection of such signals, but also provides an unparallelled measurement of the turbulent interstellar plasma at au scales.In this paper we demonstrate that without consideration of wavelengthindependent red-noise, 'simple' algorithms to correct for interstellar dispersion can attenuate signals of interest to pulsar timing arrays. We present a robust method for this correction, which we validate through simulations, and apply it to observations from the Parkes Pulsar Timing Array. Correction for dispersion variations comes at a cost of increased band-limited white noise. We discuss scheduling to minimise this additional noise, and factors, such as scintillation, that can exacerbate the problem.Comparison with scintillation measurements confirms previous results that the spectral exponent of electron density variations in the interstellar medium often appears steeper than expected. We also find a discrete change in dispersion measure of PSR J1603−7202 of ∼ 2×10 −3 cm −3 pc for about 250 days. We speculate that this has a similar origin to the 'extreme scattering events' seen in other sources. In addition, we find that four pulsars show a wavelength-dependent annual variation, indicating a persistent gradient of electron density on an au spatial scale, which has not been reported previously.
software (Hobbs et al. 2006). Since the timing model is always incomplete at some level, we always see some level of post-fit residuals, which are typically a combination of 'white' noise, due to the uncertainty of the ToA measurement, and 'red' (i.e., time-correlated) signal. For the majority of known pulsars the dominant red signal is caused by the intrinsic instability of the pulsar, and is termed 'timing noise' (e.g., Hobbs et al. 2010). However, the subset of millisecond pulsars are stable enough that other red signals are potentially measurable (Verbiest et al. 2009). Pulsar timing array projects, such as the Parkes Pulsar Timing Array (PPTA; Manchester et al. 2012), aim to use millisecond pulsars to detect red signals such as errors in the atomic time standard, errors in the Solar System ephemeris (Champion et al. 2010), or the effect of gravitational waves (Yardley et al. 2010, 2011; van Haasteren et al. 2011). Each of these signals can be distinguished by its spatial correlation, i.e., how pulsars in different directions on the sky are affected. However, at typical observing wavelengths and time-spans, the variation of the dispersive delay due to turbulence in the ionised interstellar medium (ISM) dominates such signals (You et al. 2007). Fortunately for pulsar timing experiments, these delays can be measured and corrected using observations at multiple wavelengths.
The dispersive group delay is given by
$$ t_{\rm DM} = \frac{\lambda^2 e^2}{2\pi m_{\rm e} c^3} \int_{\rm path} n_{\rm e}(l)\, dl, \qquad (1) $$
where λ is the barycentric radio wavelength¹. The path integral of electron density is the time-variable quantity. In pulsar experiments this is termed the 'dispersion measure', DM, and is given in units of cm⁻³ pc. In principle, the instantaneous DM can be computed from the difference of two arrival times from simultaneous observations at different wavelengths, or more generally by fitting to any number of observations at more than one wavelength. The question of estimation and correction of DM(t) has previously been considered by You et al. (2007). They chose a 'best' pair of wavelengths from those available and estimated the DM at every group of observations. These observation groups were selected by hand, as was the choice of wavelengths. Regardless of how the analysis is done, the estimated DM always contains white noise from differencing two observations, and correcting the group delay always adds that white noise to the arrival times. However, the DM(t) variations are red, so they only need to be corrected at frequencies below the 'corner frequency' at which the power spectrum of the DM-caused fluctuations in group delay is equal to the power spectrum of the white noise in the DM(t) estimate. To minimise the additional white noise, You et al. (2007) smoothed the DM(t) estimates over a time Ts to create a low-pass filter which cuts off the DM variations, and the associated white noise, at frequencies above the corner frequency. In this way, they avoided adding white noise at high frequencies where the DM correction was unnecessary. Of course the added 'white' noise is no longer white: it is white below the corner frequency, and zero above it.
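For back-of-the-envelope work, Equation (1) reduces to the standard dispersion constant e²/(2π m_e c³) ≈ 4.149 × 10³ s MHz² cm³ pc⁻¹. A minimal Python sketch (the example DM value is illustrative, not a PPTA measurement):

```python
# Sketch: evaluate Eq. (1) numerically with the standard dispersion
# constant K = e^2 / (2 pi m_e c^3) in convenient units.
K_MHZ2_S = 4.148808e3  # s MHz^2 cm^3 pc^-1

def dm_delay(dm, freq_mhz):
    """Dispersive group delay in seconds for DM in cm^-3 pc."""
    return K_MHZ2_S * dm / freq_mhz**2

# A DM change of 1e-3 cm^-3 pc, typical of the variations discussed here,
# corresponds at 1400 MHz ('20 cm') to a delay change of ~2.1 microseconds:
print(dm_delay(1e-3, 1400.0) * 1e6, "us")
```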
Here we update this algorithm in two ways. We use all the observed wavelengths to estimate DM(t) and we integrate the smoothing into the estimation algorithm automatically. Thus, the algorithm can easily be put in a data 'pipeline'. We show the results of applying this new algorithm to the PPTA data set, which is now about twice as long as when it was analysed by You et al. (2007). Additionally, we demonstrate that our algorithm is unbiased in the presence of wavelength-independent red signals, e.g., from timing noise, clock error, or gravitational waves; and we show that failure to include wavelength-independent red signals in the estimation algorithm will significantly reduce their estimated amplitude.
2 THEORY OF DISPERSION REMOVAL
We assume that an observed timing residual is given by t_OBS = t_CM + t_DM (λ/λ_REF)², where t_CM is the common-mode, i.e., wavelength-independent, delay and t_DM is the dispersive delay at some reference wavelength λ_REF. Then with observations at two wavelengths we can solve for both t_CM and t_DM:

$$ \tilde{t}_{\rm DM} = (t_{\rm OBS,1} - t_{\rm OBS,2})\, \lambda_{\rm REF}^2 / (\lambda_1^2 - \lambda_2^2), \qquad (2) $$
$$ \tilde{t}_{\rm CM} = (t_{\rm OBS,2}\, \lambda_1^2 - t_{\rm OBS,1}\, \lambda_2^2) / (\lambda_1^2 - \lambda_2^2). \qquad (3) $$
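Equations (2) and (3) are easy to verify numerically; a minimal sketch with made-up delays and wavelengths (not PPTA values):

```python
import numpy as np

# Inject a common-mode delay and a dispersive delay, then recover them
# from two simultaneous residuals using Eqs (2)-(3).
lam1, lam2, lam_ref = 0.20, 0.40, 0.20   # wavelengths in metres
t_cm, t_dm = 150e-9, 400e-9              # injected delays (s), t_dm at lam_ref

t_obs1 = t_cm + t_dm * (lam1 / lam_ref)**2
t_obs2 = t_cm + t_dm * (lam2 / lam_ref)**2

t_dm_est = (t_obs1 - t_obs2) * lam_ref**2 / (lam1**2 - lam2**2)
t_cm_est = (t_obs2 * lam1**2 - t_obs1 * lam2**2) / (lam1**2 - lam2**2)

assert np.allclose([t_dm_est, t_cm_est], [t_dm, t_cm])
```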
In a pulsar timing array, t_CM would represent a signal of interest, such as a clock error, an ephemeris error, or the effect of a gravitational wave. The dispersive component t_DM would be of interest as a measure of the turbulence in the ISM, but is a noise component for other purposes. It is important to note that t̃_DM is independent of t_CM, so one can estimate and correct for the effects of dispersion regardless of any common-mode signal present. In particular, common-mode red signals do not cause any error in t̃_DM.
If more than two wavelengths are observed, solving for t_CM and t_DM becomes a weighted least-squares problem, and the standard deviation of the independent white noise on each observation is needed to determine the weighting factors. For wavelength i, we denote the white noise by t_W,i and its standard deviation by σ_i, so the observed timing residual is modelled as

$$ t_{{\rm OBS},i} = t_{\rm CM} + t_{\rm DM} (\lambda_i/\lambda_{\rm REF})^2 + t_{{\rm W},i}. \qquad (4) $$
The weighted least-squares solutions, which are minimum-variance unbiased estimators, are

$$ \tilde{t}_{\rm DM} = \lambda_{\rm REF}^2 \left[ \sum_i \frac{1}{\sigma_i^2} \sum_i \frac{t_{{\rm OBS},i}\,\lambda_i^2}{\sigma_i^2} - \sum_i \frac{\lambda_i^2}{\sigma_i^2} \sum_i \frac{t_{{\rm OBS},i}}{\sigma_i^2} \right] \Big/ \Delta, \qquad (5) $$
$$ \tilde{t}_{\rm CM} = \left[ \sum_i \frac{\lambda_i^4}{\sigma_i^2} \sum_i \frac{t_{{\rm OBS},i}}{\sigma_i^2} - \sum_i \frac{\lambda_i^2}{\sigma_i^2} \sum_i \frac{t_{{\rm OBS},i}\,\lambda_i^2}{\sigma_i^2} \right] \Big/ \Delta. \qquad (6) $$
Here Δ is the determinant of the system of equations,

$$ \Delta = \sum_i \frac{1}{\sigma_i^2} \sum_i \frac{\lambda_i^4}{\sigma_i^2} - \left( \sum_i \frac{\lambda_i^2}{\sigma_i^2} \right)^2. $$
If one were to model only the dispersive term t_DM, the weighted least-squares solution would become

$$ \tilde{t}_{\rm DM} = \lambda_{\rm REF}^2 \sum_i \frac{t_{{\rm OBS},i}\,\lambda_i^2}{\sigma_i^2} \Big/ \sum_i \frac{\lambda_i^4}{\sigma_i^2}. \qquad (7) $$
However, if a common-mode signal is present, this solution is biased. The expected value is

$$ \left\langle \tilde{t}_{\rm DM} \right\rangle = t_{\rm DM} + t_{\rm CM}\, \lambda_{\rm REF}^2 \sum_i \frac{\lambda_i^2}{\sigma_i^2} \Big/ \sum_i \frac{\lambda_i^4}{\sigma_i^2}. \qquad (8) $$
Some of the 'signal' t_CM is absorbed into t̃_DM, reducing the effective signal-to-noise ratio and degrading the estimate of DM. We will demonstrate this bias using simulations in Section 4. It is important to note that the dispersion estimation and correction process is linear: the estimators t̃_DM and t̃_CM are linear combinations of the residuals. The corrected residuals t_OBS,cor,i = t_OBS,i − (λ_i/λ_REF)² t̃_DM are also linear combinations of the residuals. We can easily compute the white noise in any of these quantities from the white noise in the residuals. For example, we can collect terms in Equations (5) and (6), obtaining t̃_DM = Σ_i a_i t_OBS,i and t̃_CM = Σ_i b_i t_OBS,i, where
$$ a_i = \lambda_{\rm REF}^2 \left[ \frac{\lambda_i^2}{\sigma_i^2} \sum_j \frac{1}{\sigma_j^2} - \frac{1}{\sigma_i^2} \sum_j \frac{\lambda_j^2}{\sigma_j^2} \right] \Big/ \Delta, \qquad (9) $$
$$ b_i = \left[ \frac{1}{\sigma_i^2} \sum_j \frac{\lambda_j^4}{\sigma_j^2} - \frac{\lambda_i^2}{\sigma_i^2} \sum_j \frac{\lambda_j^2}{\sigma_j^2} \right] \Big/ \Delta. \qquad (10) $$
Then, the white-noise variances of the estimators can be written as σ²_TDM = Σ_i a_i² σ_i² and σ²_TCM = Σ_i b_i² σ_i². The actual PPTA observations are not simultaneous at all frequencies, so we cannot normally apply Equations (5) and (6) directly. We discuss how the least-squares solutions for t̃_DM and t̃_CM can be obtained by including them in the timing model in the next section. However, it is useful to have an analytical estimate of the power spectral density of the white noise that one can expect in these estimators and in the corrected residuals. At each wavelength λ_i we have a series of N_i error estimates σ_ij. The variance of the weighted mean is σ²_mi = 1/Σ_j (1/σ²_ij). This is the same as if we had a different number N of observations at this wavelength, each of variance σ² = σ²_mi N. Thus, for planning purposes we can compute σ_mi for each wavelength and conceptually resample each wavelength with an arbitrary number (N) of samples. Equations (5), (6), (9), and (10) are invariant under scaling of all σ_i by the same factor, so one can obtain the coefficients a_i and b_i using σ_mi in place of σ_i; the actual number (N_i) of samples need not enter the equations.
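The estimators and their variances are straightforward to implement. The sketch below encodes Equations (9) and (10), checks their defining properties, and evaluates the bias factor of Equation (8); the wavelengths, uncertainties and function names are illustrative assumptions, not PPTA values:

```python
import numpy as np

def dm_cm_coefficients(lams, sigmas, lam_ref=0.20):
    """Coefficients of Eqs (9)-(10): t_DM = sum(a_i * t_OBS_i),
    t_CM = sum(b_i * t_OBS_i)."""
    w = 1.0 / sigmas**2
    s0, s2, s4 = w.sum(), (w * lams**2).sum(), (w * lams**4).sum()
    delta = s0 * s4 - s2**2
    a = lam_ref**2 * (w * lams**2 * s0 - w * s2) / delta
    b = (w * s4 - w * lams**2 * s2) / delta
    return a, b

lams = np.array([0.10, 0.20, 0.50])        # PPTA-like bands (m)
sig = np.array([200e-9, 120e-9, 600e-9])   # illustrative ToA errors (s)
a, b = dm_cm_coefficients(lams, sig)

# White-noise variances of the estimators (text below Eqs 9-10):
var_tdm = (a**2 * sig**2).sum()
var_tcm = (b**2 * sig**2).sum()

# Sanity checks: a_i rejects a common-mode offset and has unit response
# to a lambda^2 signal; b_i does the reverse.
assert np.isclose(a.sum(), 0.0)
assert np.isclose((a * (lams / 0.20)**2).sum(), 1.0)
assert np.isclose(b.sum(), 1.0)
assert np.isclose((b * (lams / 0.20)**2).sum(), 0.0)

# Bias factor of Eq. (8) if the common mode is ignored (Eq. 7):
w = 1.0 / sig**2
bias = 0.20**2 * (w * lams**2).sum() / (w * lams**4).sum()
print(var_tdm, var_tcm, bias)
```

The asserts confirm that a_i responds only to the dispersive (λ²) component and b_i only to the wavelength-independent component, which is why common-mode red signals cannot leak into t̃_DM when both are fitted.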
If one had a series of N samples over a time span T_OBS, each with variance σ², the spectral density of the white noise would be P_w = 2 T_OBS σ²/N = 2 T_OBS σ²_m. We can extend this to a weighted white-noise spectral density using the variance of the weighted mean. So the power spectral densities P_w,i play the same role as σ_i² in Equations (5), (6), (9) and (10). The coefficients {a_i} and {b_i} are functions of λ_i and P_w,i. Then we find P_w,TDM = Σ_i a_i² P_w,i and P_w,TCM = Σ_i b_i² P_w,i. Perhaps the most important property of these estimators is that P_w,TCM is less than or equal to the white-noise spectrum of the corrected residuals, P_w,cor,i, in any band.
Equality occurs when there are only two wavelengths. The values of Pw,i, Pw,cor,i, Pw,TDM and Pw,TCM are given for the PPTA pulsars in Table 1. Here Pw,TDM is given at the reference wavelength of 20 cm.
The situation is further complicated by red noise which depends on wavelength, but not as λ². For example, diffractive angular scattering causes variations in the group delay, which scale as the scattered pulse width, i.e. approximately as λ⁴ (Rickett 1977). Clearly such noise will enter the DM correction process. It can have the unfortunate effect that scattering variations, which are stronger at long wavelengths, enter the short-wavelength corrected residuals even though they are negligible in the original short-wavelength data. This will be discussed in more detail in Section 6.
3 DISPERSION CORRECTION TECHNIQUE
Rather than solving for t_CM and t_DM for every group of observations, or re-sampling observations at each wavelength to a common rate, it is more practical to include parametrised functions for t_CM(t) and DM(t) in the timing model used to obtain the timing residuals. To provide a simple and direct parametrisation we use piece-wise linear models defined by fixed samples t_CM(t_j) and DM(t_j) for j = 1, ..., N_s. It is also necessary to introduce some constraints into the least-squares fitting to prevent covariance with other model parameters. For example, the values of DM(t_j) are naturally covariant with the mean dispersion measure parameter, DM_0, which is central to the timing model. To eliminate this covariance, we implement the linear equality constraint $\sum_{j=1}^{N_s} {\rm DM}(t_j) = 0$. Additionally, the series t_CM(t_j) is covariant with the entire timing model; however, in practice the sampling interval is such that it responds very little to any orbital parameters (in the case of binary systems). We constrain t_CM(t_j) to have no response to a quadratic polynomial, or to position, proper motion, and parallax. These constraints are implemented as part of the least-squares fit in Tempo2, as described in Appendix A.

The choice of sampling interval, T_s, is essentially the same as in You et al. (2007). The process of fitting a piece-wise linear function is equivalent to smoothing the DM(t) time series with a triangle function of base 2T_s. This is a low-pass filter with transfer function H_tri(f) = (sin(πfT_s)/πfT_s)². We adjust T_s such that the pass band approximately corresponds to the corner frequency f_c at which the power spectrum of the DM delays, P_TDM, exceeds that of the white noise, P_w,TDM. Note that this corner frequency is independent of the reference wavelength at which t_DM is defined.
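A sketch of this smoothing picture, assuming the simple rule of thumb T_s ≈ 1/(2 f_c) for matching the filter to the corner frequency; the spectral levels here are invented for illustration:

```python
import numpy as np

# The piece-wise linear fit acts like triangle smoothing of base 2*Ts,
# i.e. a low-pass filter with H(f) = sinc^2(f*Ts).
def h_tri(f, ts):
    return np.sinc(f * ts)**2   # np.sinc(x) = sin(pi x)/(pi x)

# Choose Ts so the filter cuts off near the corner frequency f_c where a
# model P_TDM(f) = A * f**(-8/3) crosses the white-noise level P_w.
A, P_w = 4e-30, 1e-30                 # yr^3, with f in yr^-1 (illustrative)
f_c = (A / P_w)**(3.0 / 8.0)          # solve A * f_c**(-8/3) = P_w
T_s = 1.0 / (2.0 * f_c)               # yr

f = np.logspace(-1, 1, 5)             # yr^-1
print(f_c, T_s, h_tri(f, T_s))
```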
To determine this corner frequency we need an estimate of the power spectrum of t_DM, so the process is inherently iterative. We can obtain a first estimate of P_TDM(f) from the diffractive time scale, τ_diff, at the reference wavelength. For signals in the regime of strong scattering, which includes all PPTA observations, τ_diff is the time scale of the diffractive intensity scintillations. For the PPTA pulsars, τ_diff is usually of the order of minutes and can be estimated from a dynamic spectrum taken during normal observations (see e.g. Cordes et al. 1990).

Table 1. The estimated power spectral density before (P_w) and after (P_w,cor) correction of the white noise for each PPTA pulsar at each of the three wavelengths, and the expected white-noise power spectral density in the 'common-mode' signal (P_w,TCM) and in t_DM at 20 cm (P_w,TDM), all expressed relative to the power spectral density of the uncorrected 20-cm residuals. Also shown is the effect of optimising the observing time, expressed as the ratio of P_w,TCM estimated for optimal observing and P_w,TCM with the current observing strategy (α = 0.5), and α_opt, the optimal fraction of time spent using the dual 10- and 50-cm observing system.

Rather than directly computing P_TDM, it is attractive to begin with the structure function, which is a more convenient statistic for turbulent scattering processes and is more stable when only a short duration is available. The structure function of t_DM is given by
$$ D_{\rm TDM}(\tau) = \left\langle \left( t_{\rm DM}(t) - t_{\rm DM}(t+\tau) \right)^2 \right\rangle = \left( \frac{\lambda}{2\pi c} \right)^2 D_\phi(\tau), \qquad (11) $$
where D_φ(τ) is the phase structure function. If we assume that the electron density power spectrum has an exponent of −11/3, i.e., Kolmogorov turbulence, then D_φ(τ) = (τ/τ_diff)^{5/3} (Foster & Cordes 1990). The structure function D_TDM(τ) can therefore be estimated from τ_diff, or directly from t_DM(t) once known.
As described in Appendix B we can use the structure function at any time lag τ to obtain a model power spectrum using
$$ P_{\rm TDM}(f) \simeq 0.0112\, D_{\rm TDM}(\tau)\, \tau^{-5/3}\, ({\rm spy})^{-1/3}\, f^{-8/3}. \qquad (12) $$
The term (spy) is the number of seconds per year. Here D_TDM is in s², τ is in s, f is in yr⁻¹ and P_TDM is in yr³. The spectrum of the white noise can be estimated from the ToA measurement uncertainties as discussed in Section 2. However, there are often contributions to the white noise that are not reflected in the measurement uncertainties, and so we prefer to estimate P_w directly from the power spectrum of the residuals.
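The τ_diff → D_TDM → P_TDM → f_c chain can be written compactly; a sketch under the Kolmogorov assumption, with illustrative input numbers (not the Table 2 values):

```python
import numpy as np

SPY = 365.25 * 86400.0   # seconds per year
C = 299792458.0          # m/s

tau_diff = 1000.0        # diffractive time scale at the reference band (s)
lam_ref = 0.20           # reference wavelength (m)

def d_tdm(tau_s):
    """Structure function of t_DM in s^2, from Eq. (11) with the
    Kolmogorov form D_phi(tau) = (tau/tau_diff)**(5/3)."""
    return (lam_ref / (2.0 * np.pi * C))**2 * (tau_s / tau_diff)**(5.0 / 3.0)

def p_tdm(f_per_yr, tau_s=1000 * 86400.0):
    """Model spectrum of Eq. (12); P in yr^3, f in yr^-1.  The result is
    independent of the lag tau_s used, because D_TDM ~ tau**(5/3)."""
    return (0.0112 * d_tdm(tau_s) * tau_s**(-5.0 / 3.0)
            * SPY**(-1.0 / 3.0) * f_per_yr**(-8.0 / 3.0))

P_w = 1e-30                                   # white-noise level (yr^3)
f_c = (p_tdm(1.0) / P_w)**(3.0 / 8.0)         # corner frequency (yr^-1)
print(d_tdm(1000 * 86400.0), f_c, 1.0 / (2.0 * f_c))
```

With these inputs the corner frequency comes out near 1-2 yr⁻¹, giving a sampling interval of a few tenths of a year, comparable in scale to the T_s values tabulated for the PPTA pulsars.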
4 TEST ON SIMULATED OBSERVATIONS
When dealing with real data sets it is not trivial to show that the DM-corrected residuals are 'improved' over simply taking residuals from the best wavelength (You et al. 2007). This is because much of the variation in DM is absorbed into the fit for the pulsar period and period derivative. Therefore the root-mean-square deviation (RMS) of the residuals at a single wavelength may not decrease significantly even though the RMS of the DM(t) variations that were removed is large. To demonstrate that the proposed procedure can estimate and remove the dispersion, and that it is necessary to include the common mode in the process, we perform two sets of simulations.
The observing parameters, i.e., T_OBS, N_i, σ_ij, D_DM(τ), of both simulations are based on the observations of PSR J1909−3744 in the PPTA 'DR1' data set. We find it useful to demonstrate the performance of the DM correction process in the frequency domain, but it is difficult to estimate power spectra of red processes if they are irregularly sampled. Therefore we first use simulations of regularly sampled observations with observing parameters similar to those of PSR J1909−3744 to demonstrate the performance of the DM correction algorithm. Then we simulate the actual irregularly sampled observations of PSR J1909−3744 to show that the ultimate performance of the algorithm is the same as in the regularly sampled case.
4.1 Regular sampling, equal errors
We will compare the power spectra produced after fitting for DM(t) with and without simultaneously fitting for a common-mode signal. To generate the simulated data sets, we first generate idealised ToAs that have zero residual from the given timing model. Then we add zero-mean stochastic perturbations to the ideal ToAs to simulate the three components of the model: (1) independent white noise, corresponding to measurement error; (2) wavelength-independent red noise, corresponding to the common mode; (3) wavelength-dependent red noise representing DM(t).
We simulate the measurement uncertainty with a white Gaussian process, chosen to match the high-frequency power spectral density of the observed residuals. The simulated P_w is 2.2 × 10⁻³⁰, 4.3 × 10⁻³⁰ and 2.6 × 10⁻²⁹ yr³ at 10, 20 and 50 cm respectively. For the common mode we choose a Gaussian process with a spectrum chosen to match a common model of the incoherent gravitational-wave background (GWB), i.e. P_GWB(f) = (A²_GWB/12π²) f^{−13/3} (Jenet et al. 2006; Hobbs et al. 2009). For the DM we use a Gaussian process with a power spectrum P_DM(f) = A_DM f^{−8/3}, where A_DM is chosen to match the observed DM fluctuations in PSR J1909−3744 shown in Figures 5 and 6, and the spectral exponent is chosen to match that expected for Kolmogorov turbulence (Foster & Cordes 1990). The levels of P_TDM and P_GWB are similar so that the same sample intervals can be used for both DM(t_i) and t_CM(t_i), but this is not necessary in general and will not always be desirable.
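Power-law red-noise realisations of this kind can be generated in many ways; one common frequency-domain recipe is sketched below. This is an assumption about method (the text does not specify how its realisations were produced), and the GWB amplitude is illustrative:

```python
import numpy as np

def powerlaw_noise(n, dt_yr, amp, slope, rng):
    """Real Gaussian process with approximately one-sided PSD
    P(f) = amp * f**slope (f in yr^-1), synthesised by shaping white
    noise in the frequency domain.  DC is zeroed; the Nyquist bin is
    treated loosely, which is fine for a sketch."""
    f = np.fft.rfftfreq(n, d=dt_yr)
    psd = np.zeros_like(f)
    psd[1:] = amp * f[1:]**slope
    scale = np.sqrt(psd * n / (4.0 * dt_yr))   # PSD -> FFT amplitude
    spec = scale * (rng.standard_normal(f.size)
                    + 1j * rng.standard_normal(f.size))
    return np.fft.irfft(spec, n=n)

rng = np.random.default_rng(1)
n, dt = 512, 14.0 / 365.25           # fortnightly sampling for ~20 yr
gwb = powerlaw_noise(n, dt, amp=(1e-15)**2 / (12 * np.pi**2),
                     slope=-13.0 / 3.0, rng=rng)          # common mode
dm20 = powerlaw_noise(n, dt, amp=1e-30, slope=-8.0 / 3.0, rng=rng)
# Residual in band i: white noise + gwb + dm20 * (lam_i / 0.20)**2
```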
For both algorithms, we estimate the pre-and postcorrection power spectra of the 20-cm residuals in four noise regimes: Pw; Pw +PDM; Pw +PGWB; and Pw +PDM +PGWB. In order to minimise the statistical estimation error, we average together 1000 independent realisations of the spectra for each algorithm. We note that although the averaged power spectra suggest that the input red noise signals are large, the noise on a single realisation is such that the red signals are at the limit of detection. To illustrate this, the 90% confidence limits for both the 1000 spectrum average and for a single realisation, are shown on the power spectra in Figures 1 and 2.
We show the effect of using the interpolated model for DM(t), but not fitting for the common-mode signal tCM(t), in Figure 1. This algorithm is well behaved when the GWB is not present, as shown in the two lower panels. In this case the DM correction algorithm removes the effect of the DM variations if they are present and increases the white noise below the corner frequency by the expected amount. Importantly, when the model GWB is included, i.e., in the two top panels, a significant amount of the low-frequency GWB spectrum is absorbed into the DM correction. This is independent of whether or not DM variations are actually present because the DM correction process is linear.
We show the full algorithm developed for this paper, using interpolated models for both DM(t) and the common-mode signal t_CM(t), in Figure 2. One can see that the algorithm removes the DM if it is present, regardless of whether the GWB is present. It does not remove any part of the GWB spectrum. When the GWB is not present, as shown in the two lower panels, the algorithm remains well behaved. As expected, it increases the white noise below the corner frequency by a larger factor than in the previous case. This is the 'cost' of not absorbing some of the GWB signal into the DM correction. Although it has a higher variance than in the previous case, our DM(t) is the lowest-variance unbiased estimator of the DM variations in the presence of wavelength-independent red noise. This increase in white noise is unavoidable if we are to retain the signal from a GWB, or indeed any of the signals of interest to PTAs.
The power spectra presented in Figure 2 demonstrate that the algorithm is working as expected, in particular that it does not remove power from any wavelength-independent signals present in the data. We note, however, two limitations in these simulations: the regular sampling and equal errors are not typical of observations, nor have we shown that the wavelength-independent signal in the post-fit data is correlated with the input signal (since our power spectrum technique discards any phase information). These limitations will be addressed in the next section.
4.2 Irregular sampling, variable error bars
In order to test the algorithm in the case of realistic sampling and error bars, we repeated the simulations using the actual sampling and error bars for pulsar J1909−3744 from the PPTA. We use the same simulated spectral levels for the GW and DM as in the previous section. The results are also an average of 1000 realisations.
As a direct measure of performance in the estimating DM(t), we compute the difference between the DM estimated from the fit to the residuals, DMest(t), and the DM input in the simulation, DMin(t). To better compare with the timing residuals, we convert this error in the DM into the error in tDM(t) at 20 cm using Equation (1). Note that, although the residuals were sampled irregularly, the original DMin(t) was sampled uniformly on a much finer grid. Furthermore, the estimated DMest(t) is a well defined function that can also be sampled uniformly. Thus it is easy to compute the average power spectrum of this error in tDM(t) as is shown in Figure 3. We also plot the spectrum of the initial white noise, and the spectrum of the white noise after correction. If the algorithm is working correctly the white noise after correction should exactly equal the error in tDM(t) plus the white noise before correction, so we have over plotted the sum of these spectra and find that they are identical.
The spectrum of the error in t_DM(t) shows the expected behaviour below the corner frequency. Above the corner frequency (where the correction is zero), it falls exactly like the spectrum of t_DM(t) itself, i.e., as f^{−8/3}. By comparing the right and left panels one can see that the DM correction is independent of the GWB.
We can also demonstrate that the model GWB signal is preserved after DM correction, by cross-correlating the input model GWB with the post-correction residuals. If the GWB signal is preserved this cross-correlation should equal the auto-correlation of the input GWB signal. We show the auto-correlation of the input and four different cases of the cross-correlation of the output in Figure 4. The cross-correlations are for two bands (20 and 50 cm, shown solid and dashed respectively), and for two different fitting algorithms (with and without t_CM(t), shown heavy and light respectively). Again it can be seen that, without fitting for the common mode t_CM(t), a significant portion of the GWB is lost. In fact, it is apparent from the large negative correlation at 50 cm that the 'lost' power is actually transferred from the 20-cm residuals to those at 50 cm. Although it may be possible to recover this power post-fit, it is not clear how to do this when the GWB and DM signals are unknown. Finally, we note that when the common mode is used, the 50-cm residuals preserve the GWB just as well as the 20-cm residuals, even though they carry the majority of the DM(t) variation.
4.3 The robustness of the estimator
The proposed DM correction process is only optimal if the assumptions made in the analysis are satisfied. The primary assumptions are: (1) that there is an unmodelled common-mode signal in the data; (2) that the residuals can be modelled as a set of samples t_OBS,i(t_j) = t_CM(t_j) + t_DM(t_j)(λ_i/λ_REF)² + t_W,i(t_j); (3) that the variances of the samples t_W,i(t_j) are known. If Assumption 1 does not hold and we fit for t_CM(t_j), then our method would be sub-optimal. However, in any pulsar timing experiment we must first assume that there is a common-mode signal present. If t_CM(t_j) is weak or nonexistent then we will have a very low corner frequency and effectively we will not fit for t_CM(t_j). So this assumption is tested in every case.
Assumption 2 will fail if there are wavelength-dependent terms which do not behave like λ², for example the scattering effect, which behaves more like λ⁴. If these terms are present they will corrupt the DM estimate, and some scattering effects from longer wavelengths may leak into the shorter wavelengths through the correction process. However, the correction process will not remove any common-mode signal, so signals of interest to PTAs will survive the DM correction unchanged. Assumption 3 will not always be true a priori. Recent analyses of single pulses from bright MSPs have shown that pulse-to-pulse variations contribute significant white noise in excess of that expected from the formal ToA measurement uncertainty (Os lowski et al. 2011; Shannon & Cordes 2012). Indeed, many pulsars appear to show some form of additional white noise which is currently unexplained but could be caused by the pulsar, the interstellar medium, or the observing system (see e.g. Cordes & Downs 1985; Hotan et al. 2005). In any case, we cannot safely assume that the uncertainties of the timing residuals σ_ij accurately reflect the white-noise level. If the σ_ij are incorrect, our fit parameters, t̃_CM(t) and t̃_DM(t), will no longer be minimum-variance estimators; however, they will remain unbiased. This means that DM estimation will be unbiased and the DM correction will not remove any GWB (or other common-mode signal), although the correction process may add more white noise than optimal. It should be noted that if all the σ_ij were changed by the same factor, our DM correction would be unchanged. Fortunately, the actual white noise is relatively easy to estimate from the observations because there are more degrees of freedom in the white noise than in the red noise, so in practice we use the estimated white noise rather than the formal measurement uncertainties σ_ij.

Table 2. Scintillation and dispersion properties for the 20 PPTA pulsars, at a reference wavelength of 20 cm. The scintillation bandwidth (ν_0) and time scale (τ_diff) are averaged over a large number of PPTA observations, except for values in parentheses, which are taken from You et al. (2007). D_1000 is the value of the structure function at 1000 days and T_s is the optimal sampling interval for t_DM(t). Columns: Source, ν_0 (MHz), τ_diff (s), D_1000 (µs²), T_s (yr).
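The scale-invariance claim above follows directly from Equations (9)-(10) and is easy to check numerically; a minimal sketch with illustrative values:

```python
import numpy as np

def a_coeffs(lams, sig, lam_ref=0.20):
    # Coefficients of Eq. (9): t_DM = sum_i a_i * t_OBS,i
    w = 1.0 / sig**2
    s0, s2, s4 = w.sum(), (w * lams**2).sum(), (w * lams**4).sum()
    delta = s0 * s4 - s2**2
    return lam_ref**2 * (w * lams**2 * s0 - w * s2) / delta

lams = np.array([0.10, 0.20, 0.50])
sig = np.array([200e-9, 120e-9, 600e-9])
# Rescaling every sigma by the same factor leaves a_i, and hence the DM
# correction, unchanged:
assert np.allclose(a_coeffs(lams, sig), a_coeffs(lams, 3.7 * sig))
```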
5 APPLICATION TO PPTA OBSERVATIONS
We have applied the new DM correction technique to the PPTA data set. Observations of the PPTA are made in three wavelength bands: '10 cm' (∼ 3100 MHz); '20 cm' (∼ 1400 MHz); and '50 cm' (∼ 700 MHz).
The 10-cm and 20-cm bands have been constant over the entire time span; however, the long-wavelength receiver was switched from a centre frequency of 685 MHz to 732 MHz around MJD 55030 to avoid RFI associated with digital TV transmissions. To allow for changes in the intrinsic pulse profile between these different wavelength bands, we fit for two arbitrary delays between one wavelength band and each of the other bands. However, we did not allow an arbitrary delay between 685 and 732 MHz because the pulse shape does not change significantly in that range. We began our analysis by using the procedure described in Section 3 to compute pilot estimates of DM(t) and t_CM(t) for each of the 20 pulsars, using a sampling interval T_s = 0.25 yr. Figure 5 shows the DM(t) derived from this pilot analysis. Our results are consistent with the measurements made by You et al. (2007) for the ∼ 500 days of overlapping data, which is expected since they are derived from the same observations.
5.1 Determining the sampling interval
As discussed in Section 3, we can use the diffractive time scale τ_diff to predict the magnitude of the DM variations in a given pulsar. This value can be computed directly from observations; however, it is always quite variable on a day-to-day time scale (see Section 6 for discussion), and for a few pulsars τ_diff approaches the duration of the observations, so it can be hard to measure. Nevertheless, we have obtained an estimate of the average τ_diff from the dynamic spectra for each pulsar, and this is given in Table 2. We have not provided an error estimate on the average τ_diff because the variation usually exceeds a factor of two, so the values tabulated are very rough estimates.
We also computed the structure function D_TDM directly from the t_DM values. These structure functions, scaled to delay in µs² at 20 cm, and those estimated from τ_diff, are shown in Figure 6. The value D_TDM(1000 days) is given in Table 2.
For each pulsar we also make an estimate of the white noise power directly from the power spectrum of the residuals. The estimates of Pw at each wavelength are given in Table 1.
We then use the D_TDM estimates and Equation (12) to generate a model power spectrum P_TDM(f) at the reference wavelength (20 cm) for each pulsar. These assume a Kolmogorov spectral exponent. From these model spectra and the corresponding P_w,TDM, tabulated in Table 1, we determine the corner frequency and the corresponding sample interval T_s for DM for each pulsar. As we do not have any a priori knowledge of t_CM for the PPTA pulsars, we choose the same sample interval for t_CM as for t_DM.
5.2 Results
The measured DM(t) sampled at the optimal interval T_s is overlaid on the plot of the pilot analysis with T_s = 0.25 yr in Figure 5. It is not clear that there are measurable variations in DM in PSRs J1022+1001, J2124−3358 or J2145−0750, but one can see that there are statistically significant changes with time for the other pulsars. In general, the 'optimally sampled' time series (dashed line) follows the DM trend with less scatter. However, there are some significant DM fluctuations that are not well modelled by the smoother time series. In particular, we do not model the significant annual variations observed in PSR J0613−0200, and we must add a step change to account for the 250-day increase in DM observed in PSR J1603−7202 (these features are discussed more fully in Section 8). These variations do not follow the Kolmogorov model that was used to derive the optimal sampling rate, and therefore we must use a shorter T_s so that we can track these rapid variations. These results illustrate the importance of making a pilot analysis before deciding on the sample interval. The ISM is an inhomogeneous turbulent process and an individual realisation may not behave much like the statistical average. The DM(t) for PSR J1909−3744 is also instructive. It is remarkably linear over the entire observation interval. This linearity would not be reflected in the timing residuals at a single wavelength because a quadratic polynomial is removed in fitting the timing model. It can only be seen by comparing the residuals at different wavelengths. Such linear behaviour implies a quadratic structure function and a power spectrum steeper than Kolmogorov.

Figure 6. As in Figure 5, error bars are derived by simulation of white noise. The solid lines show the extrapolation from the scintillation time scale τ_diff assuming a Kolmogorov spectrum; dashed lines mark the region occupied by 68% of simulated data sets having Kolmogorov noise with the same amplitude. These lines indicate the uncertainty in the measured structure functions resulting from the finite length of the data sets. The dotted lines show a Kolmogorov spectrum with the amplitude set to match the real data at a lag of 1000 days.

Table 3. Impact of the DM corrections on the timing parameters, as determined in the 20-cm band. For each pulsar we present the change in ν and ν̇ due to the DM correction, relative to the measurement uncertainty, and the ratio of the RMS of the residuals before (Σ_pre) and after (Σ_post) DM correction. Also included is the ratio of the power spectral density before (P_pre) and after (P_post) correction, averaged below f_c. The final column indicates whether we believe that the DM corrections have 'improved' the data set for the purpose of detecting common-mode red signals.
5.3 Performance of DM correction
The simplest and most widely used metric for the quality of timing residuals is the RMS of the residuals. Thus a natural measure of the performance of DM correction would be the ratio of the RMS of the 20-cm residuals before and after DM correction. This ratio is provided in Table 3. However, for most of these pulsars the RMS is dominated by the white noise and so does not change appreciably after DM correction. Furthermore, much of the effect of DM(t) variations is absorbed by fitting for the pulse frequency and its derivative. Thus the ratio of the RMS before and after DM correction is not a very sensitive performance measure. As noted by You et al. (2007), the DM correction has a significant effect on the pulsar spin parameters, which can give an indication of the magnitude of the DM correction. Table 3 lists the change in ν and ν̇, as a factor of the measurement uncertainty, caused by applying the DM correction. However, there are systematic uncertainties in the estimation of the intrinsic values of ν and ν̇ that may be greater than the error induced by DM variations. Judging the significance of the DM corrections depends on the intended use of the data set. Since a major goal of the PPTA is to search for common-mode red signals, we choose to consider the impact of the DM corrections on the low-frequency noise. In principle, the DM correction should reduce the noise at frequencies below f_c, and therefore we have estimated the ratio of the pre- and post-correction power spectra of the 20-cm residuals, averaged over all frequencies below f_c. We caution that the spectral estimates are highly uncertain, and for many pulsars we average very few spectral channels, so the error is non-Gaussian. Therefore, we present these ratios in Table 3 as an estimated 68% uncertainty range, determined assuming the spectral estimates are χ²-distributed with mean and variance equal to the measured mean power spectral density.
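Such a 68% range can be approximated with a small Monte Carlo under the stated χ² assumption. The sketch below is an illustration, not the exact procedure used in the text; in particular it treats the pre- and post-correction estimates as independent, which is a simplification:

```python
import numpy as np

rng = np.random.default_rng(0)

def ratio_interval(mean_pre, mean_post, n_chan, n_sim=100000):
    """68% interval for the post/pre PSD ratio when each mean is an
    average of n_chan periodogram estimates, each ~ chi^2 with 2 dof."""
    pre = mean_pre * rng.chisquare(2 * n_chan, n_sim) / (2 * n_chan)
    post = mean_post * rng.chisquare(2 * n_chan, n_sim) / (2 * n_chan)
    return np.percentile(post / pre, [16, 84])

# Example: a factor-of-two apparent reduction averaged over 3 channels
print(ratio_interval(1.0, 0.5, n_chan=3))
```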
There are 9 pulsars for which the DM correction appears to significantly reduce the low frequency noise, and therefore increases the signal-to-noise ratio for any commonmode signal in the data. These pulsars are listed with a 'Y' in Table 3. There are 10 pulsars for which the change in low frequency power is smaller than the uncertainty in the spectral estimation and so it is not clear if the DM correction should be performed. Table 3 indicates these pulsars with a 'y' or 'n', with the former indicating that we believe that the DM correction is likely to improve the residuals. However, the DM correction fails to 'improve' PSR J1643−1224 under any metric, even though we measure considerable DM variations (see Figure 5). As discussed in Section 6, we believe that this is due to variations in scattering delay entering the DM correction and adding considerable excess noise to the corrected residuals.
6 SCATTERING AND DM CORRECTION
The most important effect of the ISM on pulsar timing is the group delay caused by the dispersive plasma along the line of sight. However, small-scale fluctuations in the ISM also cause angular scattering by a diffractive process. This scattering causes a time delay t₀ ≈ 0.5 θ₀² L/c, where θ₀ is the RMS of the scattering angle and L is the distance to the pulsar. This can be significant, particularly at longer wavelengths, because it varies much faster with λ than does the dispersive delay: approximately as λ⁴. In homogeneous turbulence one would expect this parameter to be relatively constant with time. If so, the delay can be absorbed into the pulsar profile and it will have little effect on pulsar timing. However, if the turbulence is inhomogeneous the scattering delay may vary with time and could become a significant noise source for pulsar timing. We can study this effect using the PPTA pulsar PSR J1939+2134. Although this pulsar is unusual in some respects, the scattering is a property of the ISM, not the pulsar, and the ISM in the direction of PSR J1939+2134 can be assumed to be typical of the ISM in general. PSR J1939+2134 is a very strong source and the observing parameters used for the PPTA are well-suited to studying its interstellar scattering. The time delay, t₀, can be estimated from the bandwidth of the diffractive scintillations, ν₀, in a dynamic spectrum using the relationship t₀ = 1/(2πν₀). In fact it is extremely variable, as can be seen in Figure 7. The RMS of t₀ (52 ns at 20 cm) is about 28% of the mean. We can expect this to increase by a factor of (1400 MHz/700 MHz)⁴ = 16 at 50 cm. Thus in the 50-cm ToAs there will be delays with RMS variations of ∼ 830 ns, which do not fit the dispersive λ² behaviour. This will appear in the estimate of t_DM at 20 cm, attenuated by a factor of ((1400 MHz/700 MHz)² − 1) = 3 (Equation 2). Therefore the DM correction will bring scattering noise from the 50-cm band to the 20-cm band with RMS variation ∼ 270 ns, 5.3 times larger than the scattering noise intrinsic to the 20-cm observations. This analysis is corroborated by the structure function of DM for this pulsar shown in Figure 6, which shows a flattening to about 1 µs² at small time lags. This implies a white process with RMS variations of about 500 ns, consistent with that expected from scattering.
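The leakage arithmetic above generalises to any pair of bands; a minimal sketch using only the numbers and scalings stated in the text (the function name is mine):

```python
# White scattering noise t0 at the short wavelength scales to the long
# wavelength as lambda^4, then enters the corrected short-wavelength
# residuals attenuated by (f_short/f_long)^2 - 1 (from Eq. 2).
def leaked_scatter_rms(rms_short, f_short_mhz, f_long_mhz):
    ratio2 = (f_short_mhz / f_long_mhz)**2
    rms_long = rms_short * ratio2**2        # t0 ~ lambda^4
    return rms_long / (ratio2 - 1.0)

# PSR J1939+2134: 52 ns RMS of t0 at 1400 MHz, corrected with 700 MHz data
print(leaked_scatter_rms(52e-9, 1400.0, 700.0))  # ~2.8e-7 s, cf. ~270 ns
```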
We have correlated the variations in t₀ with the 20-cm residuals before correction and find an 18% positive correlation. This is consistent with the presence of 52 ns of completely correlated noise due to t₀, added to a ToA measurement uncertainty of the order of 200 ns. PSR J1939+2134 is known to show ToA variations that are correlated with the intensity scintillations (Cognard et al. 1995) but are much stronger than expected for homogeneous turbulence. Thus we are confident that the observed variation in t₀ is showing up in the 20-cm residuals. We expect that contribution to increase in the DM-corrected residuals to about 300 ns. However, this is very difficult to measure directly because the DM correction is smoothed and the 50-cm observations are not simultaneous with the 20-cm observations. This effect increases rapidly with longer wavelength. If we had used 80-cm observations for DM correction, the RMS at 80 cm would have been ∼ 10 µs and this would have been reduced by a factor of 12 to an RMS of 800 ns in the corrected 20-cm residuals. Clearly, the use of low-frequency antennas such as the GMRT (Joshi & Ramakrishna 2006) or LOFAR (Stappers et al. 2011) for correcting DM fluctuations in PTAs will have to be limited to weakly scattered pulsars. This is an important consideration, but it should be noted that the four PPTA pulsars that provide the best timing are all scattered much less than J1939+2134: all could be DM corrected with 80-cm observations or even with longer wavelengths. On the other hand, there are four PPTA pulsars that are scattered 20 to 80 times more strongly than J1939+2134, and even correction with 50-cm data causes serious increases in the white noise.
An extreme example is PSR J1643−1224. Under the above assumption, the expected white noise (2.0 µs) due to scattering at 20 cm exceeds the radiometer noise (0.63 µs). The white scattering noise at 50 cm is much larger (32 µs) and about a third of this makes its way into the DM-corrected residuals at 20 cm. This is also corroborated by the structure function for this pulsar in Figure 6, which shows a flattening to about 10 µs² at small lags. This implies a white process with RMS variation of ∼ 3 µs, which is consistent with that expected from scattering. Indeed, this pulsar is the only pulsar with significant DM variations for which the DM correction increases the noise in the 20-cm residuals under all metrics. It is also important to note that observing this source at the same frequencies with a more sensitive telescope will not improve the signal-to-noise ratio, because the noise, both before and after DM correction, is dominated by scattering. However, using a more sensitive telescope could improve matters by putting more weight on observations at 10 cm, where scattering is negligible.
Finally, however, we note that the usefulness of long-wavelength observations would be greatly improved if one could measure and correct for the variation in scattering delays. This may be possible using a technique such as cyclic spectroscopy; however, this has only been done in ideal circumstances, with a signal-to-noise ratio such that individual pulses are detectable (Demorest 2011). It is still unclear whether such techniques can be generalised to other observations, or whether they can be used to accurately determine the unscattered ToA.
SCHEDULING FOR DM CORRECTION
If there were no DM variation, one would spend all the observing time at the wavelength for which the pulsar has the greatest ToA precision (see Manchester et al. 2012 for a discussion of the choice of observing wavelength). The reality is, of course, that we need to spend some of the time observing at other wavelengths to correct for DM variations. In this section we present a strategy for choosing the observing time at each wavelength, attempting to optimise the signal-to-noise ratio of the common-mode signal, tCM. We take the PPTA observations at Parkes as our example, but this work can easily be generalised to any telescope.
At Parkes it is possible to observe at wavelengths of 10 and 50 cm simultaneously because the ratio of the wavelengths is so high that the shorter-wavelength feed can be located co-axially inside the longer-wavelength feed. However, the 20-cm receiver does not overlap with either 10 or 50 cm and so must be operated separately.
As noted earlier we can write tCM = Σi bi to,i, so the variance of tCM is given by σ^2_TCM = Σi bi^2 σi^2. However, the solution is not linear in the σi^2 terms because they also appear in the bi coefficients. At present, observing time at Parkes is roughly equal between the two receivers, so we use the existing power spectral densities as the reference. We will assume that the total observing time is unity and that the time devoted to 10 and 50 cm, which are observed simultaneously, is 0 ≤ α ≤ 1.
The variance of any to,i is inversely proportional to the corresponding observing time. We use to,20 as the reference because it usually has the smallest ToA uncertainty in PPTA observations. Therefore, we define σ^2_20 = 1/(1 − α) as the reference. Then we assume that, with equal observing time, σ10 = x σ20 and σ50 = y σ20, so as scheduled we would have σ^2_10 = x^2/α and σ^2_50 = y^2/α. We can then determine the increase in white noise caused by correcting for dispersion as a function of α, the time devoted to 10 and 50 cm. The results are shown for all the PPTA pulsars in Table 1. One can see that all the pulsars are different and the optimal strategies range from α ≈ 0.2 to α ≈ 1.0 (i.e. 100% of the time spent using the dual 10/50-cm system). For the four 'best' pulsars, PSRs J0437−4715, J1713+0747, J1744−1134, and J1909−3744, the optimal strategy has α > 0.7.
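As an illustration, the α scan can be implemented in a few lines. The sketch below evaluates σ^2_TCM(α) using the bi coefficients of the weighted least-squares solution (Equation 10); the error ratios x and y here are placeholders and would be replaced by the measured per-pulsar values:

```python
import numpy as np

lam = np.array([0.10, 0.20, 0.50])   # band wavelengths in metres

def band_variances(alpha, x, y):
    """ToA variances versus the fraction alpha of time on the dual
    10/50-cm system; x, y are sigma_10/sigma_20 and sigma_50/sigma_20
    for equal observing time."""
    return np.array([x**2 / alpha, 1.0 / (1.0 - alpha), y**2 / alpha])

def var_tcm(alpha, x, y):
    """sigma^2_TCM = sum_i b_i^2 sigma_i^2, with b_i from Equation (10)."""
    s2 = band_variances(alpha, x, y)
    w = 1.0 / s2
    s0, m2, m4 = w.sum(), (lam**2 * w).sum(), (lam**4 * w).sum()
    delta = s0 * m4 - m2**2
    b = (w * m4 - lam**2 * w * m2) / delta   # note: sum(b) = 1, unbiased
    return (b**2 * s2).sum()

x, y = 2.0, 3.0   # hypothetical error ratios; pulsar-specific in practice
alphas = np.linspace(0.05, 0.95, 181)
best = alphas[np.argmin([var_tcm(a, x, y) for a in alphas])]
print(f"optimal alpha ~ {best:.2f}")
```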
This suggests that a useful improvement to PTA performance could come from deploying broadband receivers, so that correction for DM(t) can be done with a single observation. This also has the benefit of reducing the difficulties of aligning pulsar profiles measured with different receivers at different times, and would therefore allow for more accurate measurement of DM variations.
THE INTERSTELLAR MEDIUM
The PPTA DM(t) observations provide an interesting picture of the ionised ISM on au spatial scales. The overall picture can be seen in Figures 5 and 6. In Figure 5 it is apparent that 17 of the 20 PPTA pulsars have measurable DM(t) variations. In Figure 6 it can be seen that 13 of these 17 show power-law structure functions, as expected in the ensemble average. Of these, eight are roughly consistent with an extrapolation from the diffractive scales at the Kolmogorov spectral exponent, over an average dynamic range of 4.8 decades. However, five are considerably higher than predicted by a Kolmogorov extrapolation. They may be locally Kolmogorov, i.e. an inner scale may occur somewhere between the diffractive scale and the directly measured scales of 100 to 2000 days, but establishing this would require a detailed analysis of the apparent scintillation velocity, which is beyond the scope of this paper. Two of these five pulsars, J1045−4509 and J1909−3744, were already known to be inconsistent with a Kolmogorov spectral exponent (You et al. 2007), and it is clear, with the additional data that are now available, that J1024−0719, J1643−1224 and J1730−2304 should be added to this list. When the spatial power spectrum of a stochastic process is steeper than the Kolmogorov power-law, it can be expected to be dominated by linear gradients and to show an almost quadratic structure function. Indeed, inspection of Figure 5 shows that the five steep-spectrum pulsars all show a strong linear gradient in DM(t).
The time series DM(t) shown in Figure 5 often show behaviour that does not look like a homogeneous stochastic process. For example, PSR J1603−7202 shows a large increase for ∼250 days around MJD 54000 and J0613−0200 shows a clear annual modulation. The increase in DM for J1603−7202 suggests that a blob of plasma moved through the line of sight. If we assume the blob is halfway between the pulsar and the Earth, the line of sight would have moved by about 0.5 au in this time, and if the blob were spherical it would need a density of ∼200 cm^-3. This value is high, but comparable to other density estimates for au-scale structure based on 'extreme scattering events' (Fiedler et al. 1987; Cognard et al. 1993).
We computed the power spectra of DM(t) for all the pulsars to see if the annual modulation that is clear by eye in PSR J0613−0200 is present in any of the other pulsars. For four pulsars we find a significant (>5σ) detection of an annual periodicity: PSRs J0613−0200, J1045−4509, J1643−1224 and J1939+2134.
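The text does not specify the spectral estimator used; as an indicative approach (an assumption, not the paper's pipeline), an annual periodicity in an irregularly sampled DM(t) series can be tested with a Lomb-Scargle periodogram, removing the linear trend first so that the strong DM gradients seen above do not leak power into the periodogram:

```python
import numpy as np
from astropy.timeseries import LombScargle

def annual_periodogram(t_mjd, dm, dm_err):
    """Lomb-Scargle power of DM(t) around the annual frequency and the
    false-alarm probability of the highest peak."""
    # Detrend: persistent DM gradients would otherwise dominate.
    dm = dm - np.polyval(np.polyfit(t_mjd, dm, 1), t_mjd)
    ls = LombScargle(t_mjd, dm, dm_err)
    freq = np.linspace(0.2, 5.0, 2000) / 365.25   # cycles per day
    power = ls.power(freq)
    return freq, power, ls.false_alarm_probability(power.max())
```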
The most likely explanation for the annual variation in DM(t) is the annual shift in the line of sight to the pulsar resulting from the orbital motion of the Earth. The trajectories of the lines of sight to three example PPTA pulsars are shown in Figure 8. The relatively low proper motions and large parallaxes of the PPTA pulsars mean that the trajectories of the lines of sight to many of the PPTA pulsars show pronounced ripples. However, unless the trajectory is a tight spiral, the annual modulation will only be significant if there is a persistent gradient in the diffractive phase screen.
The presence of persistent phase gradients and annual modulation in J1045−4509 and J1643−1224 is not surprising because the ISM associated with each of these pulsars has a steeper than Kolmogorov power spectrum. Indeed, the measured DM(t) for these pulsars do show a very linear trend, which is in itself evidence for a persistent phase gradient. The other steep-spectrum pulsars, J1024−0719, J1730−2304 and J1909−3744, have higher proper motions, which reduce the amplitude of the annual modulation relative to the long-term trend in DM(t). We note that the spectral analyses for PSRs J1024−0719 and J1909−3744 suggest annual periodicities, and it may be possible to make a significant detection by combining the PPTA data with other data sets.
PSR J1939+2134 does not show a steep spectrum; however, its proper motion is very low compared to its parallax, and therefore the trajectory spirals through the ISM, reducing the requirement for a smooth phase screen. The annual modulation of J0613−0200 may be somewhat different, since it does not have a steep spectrum and, although the proper motion is small, the trajectory does not spiral (see Figure 8). This suggests that for J0613−0200 the turbulence could be anisotropic, with the slope of the gradient aligned with the direction of the proper motion. Anisotropic structures are believed to be quite common in the ISM (Cordes et al. 2006; Brisken et al. 2010). However, one can imagine various other ways in which this could occur, particularly in an inhomogeneous random process, and inhomogeneous turbulence on an au spatial scale is also believed to be common in the ISM (Stinebring et al. 2001; Cordes et al. 2006; Brisken et al. 2010).
Persistent spatial gradients will cause a refractive shift in the apparent position of the pulsar, and because of dispersion the refraction angle will be wavelength dependent. This refractive shift appears in the timing residuals as an annual sine wave whose amplitude scales as λ^2. When the DM(t) is corrected this sine wave disappears and the inferred position becomes the same at all wavelengths. These position shifts are of order 10^-4 (λ/20 cm)^2 arcseconds for all four pulsars.
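For scale, a persistent position offset δθ produces an annual timing signature of amplitude (1 au/c) δθ for a pulsar in the ecliptic plane (an upper bound; the amplitude is reduced at higher ecliptic latitudes). A quick check of the numbers quoted above:

```python
import math

AU_LIGHT_S = 499.005                 # light-travel time for 1 au, seconds
ARCSEC = math.pi / (180.0 * 3600.0)  # radians per arcsecond

dtheta = 1e-4 * ARCSEC               # refractive position shift at 20 cm
amp = AU_LIGHT_S * dtheta            # peak annual residual, ecliptic plane
print(f"annual timing amplitude ~ {amp * 1e9:.0f} ns")   # ~240 ns
```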
Note that the trajectories of the lines of sight shown in Figure 8 may appear quite non-sinusoidal, but the annual modulation caused by the Earth's orbital motion in a linear phase gradient will be exactly a sine wave superimposed on a linear slope due to proper motion. This will not generate any higher harmonics unless the structure shows significant nonlinearity on an au scale. We do not see second harmonics of the annual period, which suggests that the spatial structure must be quite linear on an au scale.
Annual variations in DM are also observed in pulsars for which the line of sight passes close to the Sun, because of free electrons in the solar wind (Ord et al. 2007; You et al. 2012). In the PPTA, a simple symmetric model of the solar wind is used to remove this effect, but this is negligible for most pulsars. For the three pulsars where it is not negligible, the effect of the solar wind persists only for a few days at the time when the line of sight passes closest to the Sun. Neither the magnitude, phase nor shape of the variations seen in our sample can be explained by an error in the model of the solar wind. Changes in ionospheric free-electron content can be ruled out for similar reasons.
In summary, the ISM observations are, roughly speaking, consistent with our present understanding of the ISM. However, the data will clearly support a more detailed analysis, including spectral modelling over a time-scale range in excess of 10^5, from the diffractive scale to the duration of the observations. It may also be possible to make a two-dimensional spatial model of the electron density variations for some of the 20 PPTA pulsars. Although this would be useful for studying the ISM and for improving the DM correction, such detailed modelling is beyond the scope of this paper.
CONCLUSIONS
We find that it is necessary to approach the problem of estimating and correcting for DM(t) variations iteratively, beginning with a pilot analysis for each pulsar and refining that analysis as the properties of that pulsar and the associated ISM become clearer. Each pulsar is different and the ISM in the line of sight to each pulsar is different. The optimal analysis must be tailored to the conditions appropriate for each pulsar and according to the application under consideration.
We sample the DM(t) just often enough that the variations in DM are captured with the minimum amount of additional white noise. Likewise, we must also sample the common-mode signal tCM(t) at the appropriate rate. In this way we can correct for the DM variations at frequencies where it is necessary, and we can include tCM(t) at frequencies where it is necessary, but not fit for either at frequencies where the signal is dominated by white noise.
By including the common-mode signal in the analysis we preserve the wavelength-independent signals of interest for pulsar timing arrays and we improve the estimates of the pulsar period and period derivative. Without estimating the common mode, a significant fraction of the wavelength-independent signals, such as errors in the terrestrial clocks, errors in the planetary ephemeris, and the effects of gravitational waves from cosmic sources, would have been absorbed into the DM correction and lost.
We have applied this technique to the PPTA data set, which significantly improves its sensitivity for the detection of low-frequency signals. The estimated DM(t) also provides an unparalleled measure of the au-scale structure of the interstellar plasma. In particular, it confirms earlier suggestions that the fluctuations often have a steeper than Kolmogorov spectrum, which implies that an improved physical understanding of the turbulence will be necessary. We also find that persistent phase gradients over au scales are relatively common and are large enough to cause significant errors in the apparent positions of pulsars unless DM corrections are applied.
ACKNOWLEDGEMENTS
This work has been carried out as part of the Parkes Pulsar Timing Array project. GH is the recipient of an Australian Research Council QEII Fellowship (project DP0878388). The PPTA project was initiated with support from RNM's Federation Fellowship (FF0348478). The Parkes radio telescope is part of the Australia Telescope, which is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO.
APPENDIX A: CONSTRAINED LEAST SQUARES FITTING IN TEMPO2
The least squares problem of fitting the timing model to the residuals can be written in matrix form as
R = M P + E. (A1)
Here R is a column vector of the timing residuals, P is a column vector of fit parameters, including DM(tj) and tCM(tj) as well as the other timing model parameters. M is a matrix describing the timing model and E is a column vector of errors. The least-squares algorithm solves for P, matching MP to R with a typical accuracy of E.
The sampled time series DM(tj) and tCM(tj) are covariant with the timing model, so they must be constrained to eliminate that covariance or the least squares solution will fail to converge on a unique solution. These constraints have the form of linear equations of DM(tj) and tCM(tj), such as:
Σj DM(tj) = 0;  Σj tCM(tj) = 0;  Σj tj tCM(tj) = 0;  Σj tj^2 tCM(tj) = 0;  Σj sin(ωtj) tCM(tj) = 0;  Σj cos(ωtj) tCM(tj) = 0;  etc.

Augmented with these equations, the least-squares problem becomes
( R )   ( M )       ( E )
( C ) = ( B ) P  +  ( ǫ ),
where B is a matrix describing the constraints and ǫ is a column vector of weights for the constraints. In our case C = 0, though it need not be in general. The least-squares solution will then find a vector P that matches both MP to R, with a typical accuracy of E, and also BP to C, with a typical accuracy of ǫ. By making ǫ very small we can enforce the constraints with high accuracy. This scheme has been called 'the method of weights' (Golub & van Loan 1996). If the uncertainties in the estimates of DM(tj) and tCM(tj) are not expected to be equal, for instance if the different observing wavelengths are irregularly sampled and the ToA uncertainties are variable across sampling windows, then it can be advantageous to use weighted constraints. The constraints then take the form Σj Wj DM(tj) = 0, and we need to estimate the uncertainties of the parameters to obtain the optimal weights. These uncertainties can be determined from the least-squares solution in which the timing residuals are described purely by Equation (4). This problem is linear, and the covariance matrix of the parameters can be written in closed form without even solving for the parameters. The diagonal elements of the covariance matrix are the variances of the parameters, and the weights, Wj, are the inverses of the square roots of the corresponding variances.
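A minimal numpy sketch of the method of weights (illustrative only, not the Tempo2 implementation): the constraint rows B are appended to the whitened design matrix with a very large weight 1/ǫ, which enforces BP ≈ 0 to high accuracy:

```python
import numpy as np

def constrained_lsq(M, R, sigma, B, eps=1e-10):
    """Weighted least squares for R = M P + E, subject to B P = 0,
    with the constraints appended as rows of weight 1/eps."""
    A = np.vstack([M / sigma[:, None],   # whiten the data equations
                   B / eps])             # heavily weighted constraint rows
    y = np.concatenate([R / sigma, np.zeros(B.shape[0])])
    P, *_ = np.linalg.lstsq(A, y, rcond=None)
    return P
```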
APPENDIX B: RELATION BETWEEN THE STRUCTURE FUNCTION AND POWER-SPECTRAL DENSITY
The structure function D(τ) of a time series y(t) is well defined if y(t) has stationary differences:

D(τ) = ⟨(y(t) − y(t + τ))^2⟩. (B1)
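For a uniformly sampled series, (B1) can be estimated directly from the data (a simple sketch; real residuals are irregularly sampled, so in practice the squared differences are binned by lag):

```python
import numpy as np

def structure_function(y, max_lag):
    """Empirical D(tau) = <(y(t) - y(t + tau))^2> for integer lags."""
    return np.array([np.mean((y[lag:] - y[:-lag]) ** 2)
                     for lag in range(1, max_lag + 1)])
```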
If y(t) is wide-sense stationary, D(τ) can be written in terms of the autocovariance C(τ) by expanding the square:

D(τ) = ⟨y(t)^2⟩ + ⟨y(t + τ)^2⟩ − 2⟨y(t) y(t + τ)⟩ = 2(C(0) − C(τ)). (B2)
If y(t) is real-valued then, by the Wiener-Khinchin theorem,

C(τ) = ∫_0^∞ cos(2πf τ) P(f) df, (B3)
where P(f) is the one-sided power spectral density of y(t). Thus we can write the structure function in terms of the power spectral density as

D(τ) = ∫_0^∞ 2(1 − cos(2πf τ)) P(f) df. (B4)
It should be noted that this expression for D(τ) is valid if D(τ) exists; it is not necessary that C(τ) exist. For the case of a power law, P(f) = A f^−α, we can change variables using x = f τ and obtain

D(τ) = τ^(α−1) A ∫_0^∞ 2(1 − cos(2πx)) x^−α dx. (B5)
The integral (Int) above converges if 1 < α < 3, yielding
Int = 2^α π^(α−1) sin(−απ/2) Γ(1 − α), (B6)
where Γ is the Gamma function. Thus for Kolmogorov turbulence, with exponent α = 8/3, we have Int ≃ 89.344 and the power spectrum can be written

P(f) = D(τ) τ^(1−α) f^−α / Int ≃ 0.0112 D(τ) τ^(−5/3) f^(−8/3). (B7)

Evaluating this with D(τ) in s^2, τ in s, f in yr^−1 and P(f) in yr^3 introduces the factor (spy)^(−1/3) that appears in Equation (12).
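Both the closed form (B6) and the coefficient in Equation (12) are easy to verify numerically; a small scipy sketch (integrating period by period to tame the oscillatory integrand, then closing with the analytic tail of the non-oscillatory part):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

alpha = 8.0 / 3.0   # Kolmogorov exponent

# Closed form, Equation (B6)
closed = 2**alpha * np.pi**(alpha - 1) * np.sin(-alpha * np.pi / 2) \
         * gamma(1 - alpha)

# Direct check of Equation (B5): integrate one oscillation period at a
# time, then add the analytic tail of the 2*x^-alpha term beyond x = 500.
f = lambda x: 2.0 * (1.0 - np.cos(2.0 * np.pi * x)) * x ** -alpha
num = quad(f, 0.0, 1.0)[0] + sum(quad(f, k, k + 1.0)[0] for k in range(1, 500))
num += 2.0 * 500.0 ** (1.0 - alpha) / (alpha - 1.0)

print(closed, num)    # both ~ 89.344
print(1.0 / closed)   # ~ 0.0112, the coefficient of Equation (12)
```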
In this paper we demonstrate that without consideration of wavelengthindependent red-noise, 'simple' algorithms to correct for interstellar dispersion can attenuate signals of interest to pulsar timing arrays. We present a robust method for this correction, which we validate through simulations, and apply it to observations from the Parkes Pulsar Timing Array. Correction for dispersion variations comes at a cost of increased band-limited white noise. We discuss scheduling to minimise this additional noise, and factors, such as scintillation, that can exacerbate the problem.
Comparison with scintillation measurements confirms previous results that the spectral exponent of electron density variations in the interstellar medium often appears steeper than expected. We also find a discrete change in dispersion measure of PSR J1603−7202 of ∼ 2×10 −3 cm −3 pc for about 250 days. We speculate that this has a similar origin to the 'extreme scattering events' seen in other sources. In addition, we find that four pulsars show a wavelength-dependent annual variation, indicating a persistent gradient of electron density on an au spatial scale, which has not been reported previously.
Key words: pulsars: general -ISM: structure -methods: data analysis
INTRODUCTION
The fundamental datum of a pulsar timing experiment is the time of arrival (ToA) of a pulse at an observatory. In practise, the ToA is referred to the solar-system barycen-⋆ Email: [email protected] tre in a standard time frame (e.g., barycentric coordinate time). This barycentric arrival time can be predicted using a 'timing model' for the pulsar. The difference between the barycentric ToAs and the arrival times predicted by the timing model are termed residuals. The timing model can be refined using a least-squares fitting procedure to minimise the residuals, as performed by, e.g., the Tempo2 software (?). Since the timing model is always incomplete at some level, we always see some level of post-fit residuals, which are typically a combination of 'white' noise due to the uncertainty in the ToA measurement and 'red' (i.e., time-correlated) signal. For the majority of known pulsars the dominant red signal is caused by the intrinsic instability of the pulsar, and termed 'timing noise' (e.g., ?). However, the subset of millisecond pulsars are stable enough that other red signals are potentially measurable (?). Pulsar timing array projects, such as the Parkes Pulsar Timing Array (PPTA; ?), aim to use millisecond pulsars to detect red signals such as: errors in the atomic time standard (?); errors in the Solar System ephemeris (?); or the effect of gravitational waves (???). Each of these signals can be distinguished by the spatial correlation, i.e., how pulsars in different directions on the sky are affected. However, at typical observing wavelengths and time-spans, the variation of the dispersive delay due to turbulence in the ionised interstellar medium (ISM) dominates such signals (?). Fortunately for pulsar timing experiments, these delays can be measured and corrected using observations at multiple wavelengths.
The dispersive group delay is given by
tDM = λ 2 e 2 2πmec 3 path ne(l)dl ,(1)
where λ is the barycentric radio wavelength 1 . The path integral of electron density is the time-variable quantity. In pulsar experiments this is termed 'dispersion measure', DM, and given in units of cm −3 pc. In principle, the instantaneous DM can be computed from the difference of two arrival times from simultaneous observations at different wavelengths, or more generally by fitting to any number of observations at more than one wavelength. The question of estimation and correction of DM(t) has previously been considered by ?. They chose a 'best' pair of wavelengths from those available and estimated the DM at every group of observations. These observation groups were selected by hand, as was the choice of wavelengths. Regardless of how the analysis is done, the estimated DM always contains white noise from differencing two observations, and correcting the group delay always adds that white noise to the arrival times. However the DM(t) variations are red, so they only need to be corrected at frequencies below the 'corner frequency' at which the power spectrum of the DM-caused fluctuations in group delay is equal to the power spectrum of the white noise in the DM(t) estimate. To minimise the additional white noise, they smoothed the DM(t) estimates over a time Ts to create a low-pass filter which cuts off the DM variations, and the associated white noise, at frequencies above the corner frequency. In this way, they avoided adding white noise at high frequencies where the DM-correction was unnecessary. Of course the added 'white' noise is no longer white; it is white below the corner frequency, but zero above the corner frequency.
Here we update this algorithm in two ways. We use all the observed wavelengths to estimate DM(t) and we integrate the smoothing into the estimation algorithm auto-matically. Thus, the algorithm can easily be put in a data 'pipeline'. We show the results of applying this new algorithm to the PPTA data set, which is now about twice as long as when it was analysed by ?. Additionally, we demonstrate that our algorithm is unbiased in the presence of wavelength-independent red signals, e.g., from timing noise, clock error, or gravitational waves; and we show that failure to include wavelength-independent red signals in the estimation algorithm will significantly reduce their estimated amplitude.
THEORY OF DISPERSION REMOVAL
We assume that an observed timing residual is given by tOBS = tCM + tDM(λ/λREF) 2 where tCM is the commonmode, i.e., wavelength-independent delay and tDM is the dispersive delay at some reference wavelength λREF. Then with observations at two wavelengths we can solve for both tCM and tDM.t
DM = (tOBS,1 − tOBS,2)λ 2 REF /(λ 2 1 − λ 2 2 ),(2)tCM = (tOBS,2λ 2 1 − tOBS,1λ 2 2 )/(λ 2 1 − λ 2 2 ).(3)
In a pulsar timing array, tCM would represent a signal of interest, such as a clock error, an ephemeris error, or the effect of a gravitational wave. The dispersive component tDM would be of interest as a measure of the turbulence in the ISM, but is a noise component for other purposes. It is important to note thattDM is independent of tCM so one can estimate and correct for the effects of dispersion regardless of any common-mode signal present. In particular, commonmode red signals do not cause any error intDM.
If more than two wavelengths are observed, solving for tCM and tDM becomes a weighted least-squares problem, and the standard deviation of the independent white noise on each observation is needed to determine the weighting factors. For wavelength i, we will denote the white noise by tW,i and its standard deviation by σi so the observed timing residual is modelled as
tOBS,i = tCM + tDM(λi/λREF) 2 + tW,i.(4)
The weighted least-squares solutions, which are minimum variance unbiased estimators, arẽ
tDM = λ 2 REF i 1/σ 2 i i tOBS,iλ 2 i /σ 2 i − i λ 2 i /σ 2 i i tOBS,i/σ 2 i /∆ (5) tCM = i λ 4 i /σ 2 i i tOBS,i/σ 2 i − i λ 2 i /σ 2 i i tOBS,iλ 2 i /σ 2 i /∆.(6)
Here ∆ is the determinant of the system of equations,
∆ = i 1/σ 2 i i λ 4 i /σ 2 i − i λ 2 i /σ 2 i 2 .
If one were to model only the dispersive term tDM, the weighted least-squares solution would becomẽ
tDM = λ 2 REF i tOBS,iλ 2 i /σ 2 i i λ 4 i /σ 2 i .(7)
Correction of DM variations in pulsar timing 3
However if a common-mode signal is present, this solution is biased. The expected value is
t DM = tDM + tCMλ 2 REF i λ 2 i /σ 2 i i λ 4 i /σ 2 i .(8)
Some of the 'signal' tCM is absorbed intotDM reducing the effective signal-to-noise ratio and degrading the estimate of DM. We will demonstrate this bias using simulations in Section 4. It is important to note that the dispersion estimation and correction process is linear -the estimatorstDM and tCM are linear combinations of the residuals. The corrected residuals tOBS,cor,i = tOBS,i − (λi/λREF) 2t DM, are also linear combinations of the residuals. We can easily compute the white noise in any of these quantities from the white noise in the residuals. For example, we can collect terms in Equations (5) and (6) obtainingtDM = i aitOBS,i and tCM = i bitOBS,i, where
ai = λ 2 REF λ 2 i /σ 2 i j 1/σ 2 j − 1/σ 2 i j λ 2 j /σ 2 j /∆ (9) bi = 1/σ 2 i j λ 4 j /σ 2 j − λ 2 i /σ 2 i j λ 2 j /σ 2 j /∆.(10)
Then, the white noise variances of the estimators can be written as σ 2 TDM = i a 2 i σ 2 i and σ 2 TCM = i b 2 i σ 2 i . The actual PPTA observations are not simultaneous at all frequencies, so we cannot normally apply Equations (5) and (6) directly (?). We discuss how the least squares solutions fortDM andtCM can be obtained by including them in the timing model in the next section. However it is useful to have an analytical estimate of the power spectral density of the white noise that one can expect in these estimators and in the corrected residuals. At each wavelength λi we have a series of Ni error estimates σij . The variance of the weighted mean is σ 2 mi = 1/ j 1/σ 2 ij . This is the same as if we had a different number N of observations at this wavelength each of variance σ 2 = σ 2 mi N . Thus, for planning purposes we can compute σmi for each wavelength and conceptually resample each wavelength with an arbitrary number (N ) of samples. Equations (5), (6), (9), and (10) are invariant under scaling of all σi by the same factor so one can obtain the coefficients ai and bi using σmi in place of σi so the actual number (Ni) of samples need not enter the equations.
If one had a series of N samples over a time span of TOBS each with variance σ 2 , the spectral density of the white noise would be Pw = 2TOBS σ 2 /N = 2TOBS σ 2 m . We can extend this to a weighted white noise spectral density using the variance of the weighted mean. So the power spectral densities Pw,i play the same role as σ 2 i in Equations (5), (6), (9) and (10). The coefficients {ai} and {bi} are functions of λi and Pw,i. Then we find Pw,TDM = i a 2 i Pw,i and Pw,TCM = i b 2 i Pw,i. Perhaps the most important property of these estimators is that Pw,TCM is less than or equal to the white noise spectrum of the corrected residuals Pw,cor,i in any band. Equality occurs when there are only two wavelengths. The values of Pw,i, Pw,cor,i, Pw,TDM and Pw,TCM are given for the PPTA pulsars in Table 1. Here Pw,TDM is given at the reference wavelength of 20 cm.
The situation is further complicated by red noise which depends on wavelength, but not as λ 2 . For example, diffrac-tive angular scattering causes variations in the group delay, which scale as the scattered pulse width, i.e. approximately as λ 4 (?). Clearly such noise will enter the DM correction process. It can have the unfortunate effect that scattering variations, which are stronger at long wavelengths, enter the short wavelength corrected residuals even though they are negligible in the original short wavelength data. This will be discussed in more detail in Section 6.
DISPERSION CORRECTION TECHNIQUE
Rather than solving for tCM and tDM for every group of observations, or re-sampling observations at each wavelength to a common rate, it is more practical to include parametrised functions for tCM(t) and DM(t) in the timing model used to obtain the timing residuals. To provide a simple and direct parametrisation we use piece-wise linear models defined by fixed samples tCM(tj) and DM(tj) for j = 1,...,Ns.
It is also required to introduce some constraints into the least-squares fitting to prevent covariance with other model parameters. For example, the values of DM(tj ) are naturally covariant with the mean dispersion measure parameter, DM0, which is central to the timing model. To eliminate this covariance, we implement the linear equality constraint that i=1 DM(tj) = 0. Additionally, the series tCM(tj) is covariant with the entire timing model, however in practise the sampling interval is such that it responds very little to any orbital parameters (in the case of binary systems). We constrain tCM(tj) to have no response to a quadratic polynomial, or to position, proper motion, and parallax. These constraints are implemented as part of the least-squares fit in Tempo2, as described in Appendix A The choice of sampling interval, Ts is essentially the same as in ?. The process of fitting to a piece-wise linear function is equivalent to smoothing the DM(t) time series with a triangle function of base 2Ts. This is a low pass filter with transfer function Htri(f ) = (sin(πf Ts)/πf Ts) 2 . We adjust Ts such that the pass band approximately corresponds to the corner frequency fc at which the power spectrum of the DM delays, PTDM, exceeds that of the white noise, Pw,TDM. Note that this corner frequency is independent of reference wavelength at which tDM is defined.
To determine this corner frequency we need an estimate of the power spectrum of tDM, so the process is inherently iterative. We can obtain a first estimate of PTDM(f ) from the diffractive time scale, τ diff , at the reference wavelength. For signals in the regime of strong scattering, which includes all PPTA observations, τ diff is the time scale of the diffractive intensity scintillations. For the PPTA pulsars, τ diff is usually of the order of minutes and can be estimated from a dynamic spectrum taken during normal observations (see e.g. ?).
Rather than directly compute PTDM, it is attractive to begin with the structure function, which is a more convenient statistic for turbulent scattering processes and is more stable when only a short duration is available. The structure function of tDM is given by
DTDM(τ ) = (tDM(t) − tDM(t + τ )) 2 = (λ/2πc) 2 D φ (τ ),(11)
where D φ (τ ) is the phase structure function. If we assume Table 1. The estimated power spectral density before (Pw) and after (Pw,cor) correction of the white noise for each PPTA pulsar at each of the three wavelengths, and the expected white noise power spectral density in the 'common mode' signal (P w,TCM ) and in t DM at 20 cm (P w,TDM ), all expressed relative to the power spectral density of the uncorrected 20-cm residuals. Also shown is the effect of optimising the observing time, expressed as the ratio of P w,TCM estimated for optimal observing and P w,TCM with the current observing strategy (α = 0.5), and αopt the optimal fraction of time spent using the dual 10-and 50cm-cm observing system. that the electron density power spectrum has an exponent of -11/3, i.e., Kolmogorov turbulence, then D φ (τ ) = (τ /τ diff ) 5/3 (?). The structure function DTDM(τ ) can therefore be estimated from τ diff , or directly from the tDM(t) once known.
As described in Appendix B we can use the structure function at any time lag τ to obtain a model power spectrum using
PTDM(f ) ≃ 0.0112 DTDM(τ )τ −5/3 (spy) −1/3 f −8/3(12)
The term (spy) is the number of seconds per year. Here DTDM is in s 2 , τ in s, f is in yr −1 and PTDM is in yr 3 . The spectrum of the white noise can be estimated from the ToA measurement uncertainties as discussed in section 2. However, often there are contributions to the white noise that are not reflected in the measurement uncertainties and so we prefer to estimate Pw directly from the power spectrum of the residuals.
TEST ON SIMULATED OBSERVATIONS
When dealing with real data sets it is not trivial to show that the DM-corrected residuals are 'improved' over simply taking residuals from the best wavelength (?). This is because much of the variations in DM are absorbed into the fit for the pulsar period and period derivative. Therefore the rootmean-square deviation (RMS) of the residuals from a single wavelength may not decrease significantly even though the RMS of the DM(t) variations that were removed is large. To demonstrate that the proposed procedure can estimate and remove the dispersion, and that it is necessary to include the common-mode in the process, we perform two sets of simulations.
The observing parameters, i.e., T obs , Ni, σij, DDM(τ ), of both simulations are based on the observations of PSR J1909−3744 in the PPTA 'DR1' data set (?). We find it useful to demonstrate the performance of the DM correction process in the frequency domain, but it is difficult to estimate power spectra of red processes if they are irregularly sampled. Therefore we first use simulations of regularly sampled observations with observing parameters similar to those of PSR J1909−3744 to demonstrate the performance of the DM correction algorithm. Then we will simulate the actual irregularly sampled observations of PSR J1909−3744 to show that the ultimate performance of the algorithm is the same as in the regularly sampled case.
Regular sampling, equal errors
We will compare the power spectra produced after fitting for DM(t) with and without simultaneously fitting for a common-mode signal. To generate the simulated data sets, we first generate idealised ToAs that have zero residual from the given timing model. Then we add zero-mean stochastic perturbations to the ideal ToAs to simulate the three components of the model: (1) independent white noise, corresponding to measurement error; (2) wavelength independent red noise, corresponding to the common-mode; (3) wavelength dependent red noise representing DM(t).
We simulate the measurement uncertainty with a white Gaussian process, chosen to match the high frequency power spectral density of the observed residuals. The simulated Pw is 2.2 × 10 −30 , 4.3 × 10 −30 and 2.6 × 10 −29 yr 3 at 10, 20 and 50cm cm respectively. For the common mode we choose a Gaussian process with a spectrum chosen to match a common model of the incoherent gravitational wave background (GWB), i.e. PGWB(f ) = (A 2 GWB /12π 2 )f −13/3 (??). For the DM we use a Gaussian process with a power spectrum PDM(f ) = ADMf −8/3 , where ADM is chosen to match the observed DM fluctuations in PSR J1909−3744 shown in Figures 5 and 6, and the spectral exponent is chosen to match that expected for Kolmogorov turbulence (?). The levels of PTDM and PGWB are similar so that the same sample intervals can be used for both DM(ti) and tCM(ti), but this is not necessary in general and will not always be desirable.
For both algorithms, we estimate the pre-and postcorrection power spectra of the 20-cm residuals in four noise regimes: Pw; Pw +PDM; Pw +PGWB; and Pw +PDM +PGWB. In order to minimise the statistical estimation error, we average together 1000 independent realisations of the spectra for each algorithm. We note that although the averaged power spectra suggest that the input red noise signals are large, the noise on a single realisation is such that the red signals are at the limit of detection. To illustrate this, the 90% confidence limits for both the 1000 spectrum average and for a single realisation, are shown on the power spectra in Figures 1 and 2.
We show the effect of using the interpolated model for DM(t), but not fitting for the common-mode signal tCM(t), in Figure 1. This algorithm is well behaved when the GWB is not present, as shown in the two lower panels. In this case the DM correction algorithm removes the effect of the DM variations if they are present and increases the white noise below the corner frequency by the expected amount. Importantly, when the model GWB is included, i.e., in the two top panels, a significant amount of the low-frequency GWB spectrum is absorbed into the DM correction. This is independent of whether or not DM variations are actually present because the DM correction process is linear.
We show the full algorithm developed for this paper, using interpolated models for both DM(t) and the commonmode signal tCM(t), in Figure 2. One can see that the algorithm removes the DM if it is present, regardless of whether the GWB is present. It does not remove any part of the GWB spectrum. When the GWB is not present, as shown in the two lower panels, the algorithm remains well behaved. As expected, it increases the white noise below the corner frequency by a larger factor than in the previous case. This is the 'cost' of not absorbing some of the GWB signal into the DM correction. Although it has a higher variance than for the previous case, our DM(t) is the lowest variance unbiased estimator of the DM variations in the presence of wavelength-independent red noise. This increase in white noise is unavoidable if we are to retain the signal from a GWB, or indeed any of the signals of interest in PTAs.
The power spectra presented in Figure 2 demonstrate that the algorithm is working as expected, in particular that it does not remove power from any wavelength-independent signals present in the data. We note, however, two limitations in these simulations: the regular sampling and equal errors are not typical of observations, nor have we shown that the wavelength-independent signal in the post-fit data is correlated with the input signal (since our power spectrum technique discards any phase information). These limitations will be addressed in the next section.
Irregular sampling, Variable error bars
In order to test the algorithm in the case of realistic sampling and error bars, we repeated the simulations using the actual sampling and error bars for pulsar J1909−3744 from the PPTA. We use the same simulated spectral levels for the GW and DM as in the previous section. The results are also an average of 1000 realisations.
As a direct measure of performance in the estimating DM(t), we compute the difference between the DM estimated from the fit to the residuals, DMest(t), and the DM input in the simulation, DMin(t). To better compare with the timing residuals, we convert this error in the DM into the error in tDM(t) at 20 cm using Equation (1). Note that, although the residuals were sampled irregularly, the original DMin(t) was sampled uniformly on a much finer grid. Furthermore, the estimated DMest(t) is a well defined function that can also be sampled uniformly. Thus it is easy to compute the average power spectrum of this error in tDM(t) as is shown in Figure 3. We also plot the spectrum of the initial white noise, and the spectrum of the white noise after correction. If the algorithm is working correctly the white noise after correction should exactly equal the error in tDM(t) plus the white noise before correction, so we have over plotted the sum of these spectra and find that they are identical.
The spectrum of the error in tDM(t) shows the expected behaviour below the corner frequency. Above the corner frequency (where the correction is zero), it falls exactly like the spectrum of tDM(t) itself, i.e., as f −8/3 . By comparing the right and left panels one can see that the DM correction is independent of the GWB.
We can also demonstrate that the model GWB signal is preserved after DM correction, by cross-correlation of the input model GWB with the post-correction residuals. If the GWB signal is preserved this cross-correlation should equal the auto-correlation of the input GWB signal. We show the auto-correlation of the input and four different cases of the cross-correlation of the output in Figure 4. The cross-correlations are for two bands (20 and 50cm cm shown solid and dashed respectively), and for two different fitting algorithms (with and without tCM(t) shown heavy and light respectively). Again it can be seen that, without fitting for the common-mode tCM(t), a significant portion of the GWB is lost. In fact, it is apparent from the large negative correlation at 50cm cm that the 'lost' power is actually transferred from the 20-cm residuals to those at 50cm cm. Although it may be possible to recover this power post-fit, it is not clear how to do this when the GWB and DM signals are unknown. Finally, we note that when the common mode is used, the 50cm-cm residuals preserve the GWB just as well as the 20cm residuals, even though they carry the majority of the DM(t) variation.
The Robustness of the Estimator
The proposed DM correction process is only optimal if the assumptions made in the analysis are satisfied. The primary assumptions are: (1) that there is an unmodelled common-mode signal in the data; (2) that the residuals can be modelled as a set of samples toi(tj) = tCM(tj) + tDM(tj)(λi/λREF) 2 + twi(tj); (3) the variances of the samples twi(tj) are known. If Assumption 1 does not hold and we fit for tCM(tj), then our method would be sub-optimal. However in any pulsar timing experiment, we must first assume that there is a common-mode signal present. If tCM(tj) is weak or nonexistent then we will have a very low corner frequency and effectively we will not fit for tCM (tj). So this assumption is tested in every case.
Assumption 2 will fail if there are wavelength dependent terms which do not behave like λ 2 , for example the scattering effect which behaves more like λ 4 . If these terms are present they will corrupt the DM estimate and some scattering effects from longer wavelengths may leak into the shorter wavelengths due to the correction process. However the correction process will not remove any common-mode signal, so signals of interest to PTAs will survive the DM correction unchanged. Assumption 3 will not always be true a priori. Recent analysis of single pulses from bright MSPs have shown that pulse-to-pulse variations contribute significant white noise in excess of that expected from the formal ToA measurement uncertainty (??). Indeed, many pulsars appear to show some form of additional white noise which is currently unexplained but could be caused by the pulsar, the interstellar medium, or the observing system (see e.g. ??). In any case, we cannot safely assume that the uncertainties of the timing residuals σij accurately reflect the white noise level. If the σij are incorrect, our fit parameters,tc(t) andt d (t), will no Table 2. Scintillation and dispersion properties for the 20 PPTA pulsars, at a reference wavelength of 20 cm. The scintillation bandwidth (ν 0 ) and time scale (τ diff ) are averaged over a large number of PPTA observations except for values in parenthesis which are are taken from ?. D 1000 is the value of the structure function at 1000 days and Ts is the optimal sampling interval for t DM (t). longer be minimum variance estimators; however, they will remain unbiased. This means that DM estimation will be unbiased and the DM correction will not remove any GWB (or other common-mode signal) although the correction process may add more white noise than optimal. It should be noted that if all the σij were changed by the same factor our DM correction would be unchanged. Fortunately the actual white noise is relatively easy to estimate from the observations because there are more degrees of freedom in the white noise than in the red noise, so in practise we use the estimated white noise rather than the formal measurement uncertainties σij .
Source ν 0 τ diff D 1000 Ts (MHz) (s) (µs 2 )(yr)
APPLICATION TO PPTA OBSERVATIONS
We have applied the new DM correction technique to the PPTA data set (?). Observations of the PPTA are made in three wavelength bands: '10 cm' (∼ 3100 MHz); '20 cm' (∼ 1400 MHz); and '50cm cm' (∼ 700 MHz). The 10-cm and 20-cm bands have been constant over the entire time span, however the long wavelength receiver was switched from a centre frequency of 685 MHz to 732 MHz around MJD 55030 to avoid RFI associated with digital TV transmissions. To allow for changes in the intrinsic pulse profile between these different wavelength bands, we fit for two arbitrary delays between one wavelength band and each of the other bands. However we did not allow an arbitrary delay between 685 and 732 MHz because the pulse shape does not change significantly in that range. We began our analysis by using the procedure described in Section 3 to compute pilot estimations of DM(t) and tCM(t) for each of the 20 pulsars, using a sampling interval Ts = 0.25 yr. Figure 5 shows the DM(t) derived from the above. Our results are consistent with the measurements made by ? for the ∼ 500 days of overlapping data, which is expected since they are derived from the same observations.
Determining the sampling interval
As discussed in Section 3, we can use the diffractive time scale τ diff to predict the magnitude of the DM variations in a given pulsar. This value can be computed directly from observations, however it is always quite variable on a day to day time scale (see Section 6 for discussion), and for a few pulsars τ diff approaches the duration of the observations, so it can be hard to measure. Nevertheless we have obtained an estimate of the average τ diff from the dynamic spectra for each pulsar, and this is given in Table 2. We have not provided an error estimate on the average τ diff because the variation usually exceeds a factor of two so the values tabulated are very rough estimates.
We also computed the structure function DTDM directly from the tDM values. These structure functions, scaled to delay in µs 2 at 20 cm, and those estimated from τ diff , are shown in Figure 6. The value DTDM(1000 days) is given in Table 2.
For each pulsar we also make an estimate of the white noise power directly from the power spectrum of the residuals. The estimates of Pw at each wavelength are given in Table 1.
We then use the DTDM estimates and Equation (12) to generate a model power spectrum PTDM(f ) at the reference wavelength (20 cm) for each pulsar. These assume a Kolmogorov spectral exponent. From these model spectra and the corresponding Pw,T DM , tabulated in Table 1, we determine the corner frequency and the corresponding sample interval Ts for DM for each pulsar. As we do not have any a priori knowledge of tCM for the PPTA pulsars we choose the same sample interval for tCM as for tDM.
Results
The measured DM(t) sampled at the optimal interval Ts is overlaid on the plot of the pilot analysis with Ts = 0.25 yr on Figure 5. It is not clear that there are measurable variations in DM in PSRs J1022+1001, J2124−3358 or J2145−0750, but one can see that there are statistically significant changes with time for the other pulsars. In general, the 'optimally sampled' time series (dashed line) follows the DM trend with less scatter. However, there are some significant DM fluctuations that are not well modelled by the smoother time-series. In particular we do not model the significant annual variations observed in PSR J0613−0200, and we must add a step change to account for the 250 day increase in DM observed in PSR J1603−7202 (these features are discussed more fully in Section 8). These variations do not follow the Kolmogorov model that was used to derive the optimal sampling rate, and therefore we must use a shorter Ts so we can track these rapid variations. These results illustrate the importance of making a pilot analysis before deciding on the sample interval. The ISM is an inhomogeneous turbulent process and an individual realisation may Table 3. Impact of the DM corrections on the timing parameters, as determined in the 20-cm band. For each pulsar we present the change in ν andν due to the DM correction, relative to the measurement uncertainty, and the ratio of the RMS of the residuals before (Σpre) and after (Σpost) DM correction. Also included is the ratio of the power spectral density before (Ppre) and after (Ppost) correction, averaged below fc. The final column indicates if we believe that the DM corrections have 'improved' the data set for the purpose of detecting common-mode red signals. not behave much like the statistical average. The DM(t) for PSR J1909−3744 is also instructive. It is remarkably linear over the entire observation interval. This linearity would not be reflected in the timing residuals at a single wavelength because a quadratic polynomial is removed in fitting the timing model. It can only be seen by comparing the residuals at different wavelengths. Such linear behaviour implies a quadratic structure function and a power spectrum steeper than Kolmogorov.
PSR
Performance of DM Correction
The simplest and most widely used metric for the quality of timing residuals is the RMS of the residuals. Thus a natural measure of the performance of DM correction would be the ratio of the RMS of the 20-cm residuals before and after DM correction. This ratio is provided in Table 3. However, for most of these pulsars, the RMS is dominated by the white noise and so does not change appreciably after DM correction. Furthermore much of the effect of DM(t) variations is absorbed by fitting for the pulse frequency and its derivative. Thus the ratio of the RMS before and after DM correction is not a very sensitive performance measure. As noted by ?, the DM correction has a significant effect on the pulsar spin parameters, which can give an indication of the magnitude of the DM correction. Table 3 lists the change in ν andν, as a factor of the measurement uncertainty, caused by applying the DM correction. However, there are systematic uncertainties in the estimation of the intrinsic values of ν andν that may be greater than the error induced by DM variations.
Judging the significance of the DM corrections depends on the intended use of the data set. Since a major goal of the PPTA is to search for common-mode red signals, we choose to consider the impact of the DM corrections on the low frequency noise. In principal, the DM correction should reduce the noise at frequencies below fc, and therefore we have estimated the ratio of the pre-and post-correction power spectrum of the 20-cm residuals, averaged over all frequencies below fc. We caution that the spectral estimates are highly uncertain, and for many pulsars we average very few spectral channels so the error is non-Gaussian. Therefore, we present these ratios in Table 3 as an estimated 66% uncertainty range, determined assuming the spectral estimates are χ 2 -distributed with mean and variance equal to the measured mean power spectral density.
There are 9 pulsars for which the DM correction appears to significantly reduce the low frequency noise, and therefore increases the signal-to-noise ratio for any commonmode signal in the data. These pulsars are listed with a 'Y' in Table 3. There are 10 pulsars for which the change in low frequency power is smaller than the uncertainty in the spectral estimation and so it is not clear if the DM correction should be performed. Table 3 indicates these pulsars with a 'y' or 'n', with the former indicating that we believe that the DM correction is likely to improve the residuals. However, the DM correction fails to 'improve' PSR J1643−1224 under any metric, even though we measure considerable DM variations (see Figure 5). As discussed in Section 6, we believe that this is due to variations in scattering delay entering the DM correction and adding considerable excess noise to the corrected residuals.
SCATTERING AND DM CORRECTION
The most important effect of the ISM on pulsar timing is the group delay caused by the dispersive plasma along the line of sight. However small scale fluctuations in the ISM also cause angular scattering by a diffractive process. This scattering causes a time delay t0 ≈ 0.5θ 2 0 L/c, where θ0 is the RMS of the scattering angle and L is the distance to the pulsar. This can be significant, particularly at longer wavelengths, because it varies much faster with λ than does the dispersive delay -approximately as λ 4 . In homogeneous turbulence one would expect this parameter to be relatively constant with time. If so, the delay can be absorbed into the pulsar profile and it will have little effect on pulsar timing. However if the turbulence is inhomogeneous the scattering delay may vary with time and could become a significant noise source for pulsar timing. We can study this effect using the PPTA pulsar PSR J1939+2134. Although this pulsar is unusual in some respects, the scattering is a property of the ISM, not the pulsar, and the ISM in the direction of PSR J1939+2134 can be assumed to be typical of the ISM in general. PSR J1939+2134 is a very strong source and the observing parameters used for the PPTA are wellsuited to studying its interstellar scattering. The time delay, t0, can be estimated from the bandwidth of the diffractive scintillations, ν0, in a dynamic spectrum using the relationship t0 = 1/2πν0. In fact it is extremely variable, as can be seen in Figure 7. The RMS of t0 (52 ns at 20 cm) is about 28% of the mean. We can expect this to increase by a factor of (1400 MHz/700 MHz) 4 = 16 at 50cm cm. Thus in the 50cm-cm ToAs there will be delays with RMS variations of ∼ 830 ns, which do not fit the dispersive λ 2 behaviour. This will appear in the estimate of tDM at 20 cm, attenuated by a factor of ((1400 MHz/700 MHz) 2 − 1) = 3 (Equation 2). Therefore the DM correction will bring scattering noise from the 50-cm band to the 20-cm band with RMS variation ∼ 270 ns, 5.3 times larger than the scattering noise intrinsic to the 20-cm observations. This analysis is corroborated by the structure function of DM for this pulsar shown in Figure ??, which shows a flattening to about 1 µs 2 at small time lags. This implies a white process with RMS variations of about 500 ns, consistant with that expected from scattering.
We have correlated the variations in t0 with the 20-cm residuals before correction and find 18% positive correlation. This is consistent with the presence of 52 ns of completely correlated noise due to t0 added to the ToA measurement uncertainty of the order of 200 ns. PSR J1939+2134 is known to show ToA variations that are correlated with the intensity scintillations (?) but are much stronger than expected for homogeneous turbulence (?). Thus we are confident that the observed variation in t0 is showing up in the 20-cm residuals. We expect that contribution to increase in the DM corrected residuals to about 300 ns. However this is very difficult to measure directly because the DM correction is smoothed and the 50cm-cm observations are not simultaneous with the 20-cm observations. This effect increases rapidly with longer wavelength. If we had used 80-cm observations for DM correction, the RMS at 80 cm would have been ∼ 10 µs and this would have been reduced by a factor of 12 to an RMS of 800 ns in the corrected 20 cm residuals. Clearly use of low frequency antennas such as GMRT (?) or LOFAR (?) for correcting DM fluctuations in PTAs will have to be limited to weakly scattered pulsars. This is an important consideration, but it should be noted that the four PPTA pulsars that provide the best timing are all scattered much less than J1939+2134 -all could be DM corrected with 80-cm observations or even with longer wavelengths. On the other hand there are four PPTA pulsars that are scattered 20 to 80 times more strongly than J1939+2134 and even correction with 50cm-cm data causes serious increases in the white noise.
An extreme example is PSR J1643−1224. Under the above assumption, the expected white noise (2.0 µs) due to scattering at 20 cm exceeds the radiometer noise (0.63 µs). The white scattering noise at 50cm cm is much larger (32 µs) and about a third of this makes its way into the DMcorrected residuals at 20 cm. This is also corroborated by the structure function for this pulsar in Figure ??, which shows a flattening to about 10 µs 2 at small lags. This implies a white process with RMS variation of ∼ 3 µs which is consistant with that expceted from scattering. Indeed, this pulsar is the only pulsar with significant DM variations for which the DM correction increases the noise in the 20-cm residuals under all metrics. It is also important to note that observing this source at the same frequencies with a more sensitive telescope will not improve the signal to noise ratio, because the noise, both before and after DM-correction, is dominated by scattering. However using a more sensitive telescope could improve matters by putting more weight on observations at 10 cm, where scattering is negligible.
Finally however, we note that the usefulness of long wavelength observations would be greatly improved if one could measure and correct for the variation in scattering delays. This may be possible using a technique such as cyclic spectroscopy, however this has only been done in ideal circumstances and with signal-to-noise ratio such that individual pulses are detectable (?). It is still unclear if such techniques can be generalised to other observations, or if this can be used to accurately determine the unscattered ToA.
SCHEDULING FOR DM CORRECTION
If there were no DM variations, one would spend all the observing time at the wavelength for which the pulsar has the greatest ToA precision (see ? for a discussion of the choice of observing wavelength). The reality is, of course, that we need to spend some of the time observing at other wavelengths to correct for DM variations. In this section we present a strategy for choosing the observing time at each wavelength, attempting to optimise the signal-to-noise ratio of the common-mode signal, $t_{\rm CM}$. We take the PPTA observations at Parkes as our example, but this work can easily be generalised to any telescope.
At Parkes it is possible to observe at wavelengths of 10 and 50 cm simultaneously because the ratio of the wavelengths is so high that the shorter-wavelength feed can be located co-axially inside the longer-wavelength feed. However, the 20-cm receiver does not overlap with either 10 or 50 cm and so must be operated separately.
As noted earlier we can write $\hat{t}_{\rm CM} = \sum_i b_i t_{{\rm o},i}$, so the variance of $t_{\rm CM}$ is given by $\sigma^2_{\rm TCM} = \sum_i b_i^2 \sigma_i^2$. However, the solution is not linear in the $\sigma_i^2$ terms because they also appear in the $b_i$ coefficients. At present, observing time at Parkes is roughly equal between the two receivers, so we use the existing power spectral densities as the reference. We will assume that the total observing time is unity and the time devoted to 10 and 50 cm, which are observed simultaneously, is $0 \leq \alpha \leq 1$.
The variance of any $t_{{\rm o},i}$ is inversely proportional to the corresponding observing time. We use $t_{{\rm o},20}$ as the reference because it usually has the smallest ToA uncertainty in PPTA observations. Therefore, we define $\sigma^2_{20} = 1/(1-\alpha)$ as the reference. Then we assume that, with equal observing time, $\sigma_{10} = x\sigma_{20}$ and $\sigma_{50} = y\sigma_{20}$, so as scheduled we would have $\sigma^2_{10} = x^2/\alpha$ and $\sigma^2_{50} = y^2/\alpha$. We can then determine the increase in white noise caused by correcting for dispersion as a function of $\alpha$, the fraction of time devoted to 10 and 50 cm. The results are shown for all the PPTA pulsars in Table 1. One can see that all the pulsars are different and the optimal strategies range from $\alpha \approx 0.2$ to $\alpha \approx 1.0$ (i.e., 100% of time spent using the dual 10/50-cm system). For the four 'best' pulsars, PSRs J0437−4715, J1713+0747, J1744−1134, and J1909−3744, the optimal strategy has $\alpha > 0.7$.
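The scan over $\alpha$ can be reproduced with a few lines of code. The sketch below assumes, as a simplification, that $t_{\rm CM}$ and a dispersive ($\lambda^2$) term are estimated per epoch by weighted least squares over the three bands, so that $\sigma_{\rm TCM}$ follows from the covariance of that fit; the ratios x and y below are placeholders rather than values from Table 1:

```python
import numpy as np

# Wavelengths of the three Parkes bands (cm); dispersion scales as lambda^2.
lams = np.array([10.0, 20.0, 50.0])

def sigma_tcm(alpha, x, y):
    """RMS of the common-mode estimate when a fraction `alpha` of time
    goes to the coaxial 10/50-cm system and (1 - alpha) to 20 cm."""
    var = np.array([x**2 / alpha,         # 10 cm
                    1.0 / (1.0 - alpha),  # 20 cm (reference)
                    y**2 / alpha])        # 50 cm
    # Weighted least squares for [t_CM, dispersive term] per epoch:
    # t_oi = t_CM + k * lambda_i^2.
    A = np.column_stack([np.ones(3), lams**2])
    W = np.diag(1.0 / var)
    cov = np.linalg.inv(A.T @ W @ A)
    return np.sqrt(cov[0, 0])             # uncertainty of t_CM

# Hypothetical per-band noise ratios (placeholders, not Table 1 values).
x, y = 2.0, 3.0
alphas = np.linspace(0.05, 0.95, 91)
best = min(alphas, key=lambda a: sigma_tcm(a, x, y))
print(f"optimal alpha ~ {best:.2f}")
```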
This suggests that a useful improvement to PTA performance could come from deploying broadband receivers, so that correction for DM(t) can be done with a single observation. This also has the benefit of reducing the difficulties of aligning pulsar profiles measured with different receivers at different times, and would therefore allow for more accurate measurement of DM variations.
THE INTERSTELLAR MEDIUM
The PPTA DM(t) observations provide an interesting picture of the ionised ISM on au spatial scales. The overall picture can be seen in Figures 5 and 6. In Figure 5 it is apparent that 17 of the 20 PPTA pulsars have measurable DM(t) variations. In Figure 6 it can be seen that 13 of these 17 show power-law structure functions, as expected in the ensemble average. Of these, 8 are roughly consistent with an extrapolation from the diffractive scales at the Kolmogorov spectral exponent, an average dynamic range of 4.8 decades. However, 5 are considerably higher than is predicted by a Kolmogorov extrapolation. They may be locally Kolmogorov, i.e., an inner scale may occur somewhere between the diffractive scale and the directly measured scales of 100 to 2000 days, but establishing this would require a detailed analysis of the apparent scintillation velocity, which is beyond the scope of this paper. Two of these five pulsars, J1045−4509 and J1909−3744, were already known to be inconsistent with a Kolmogorov spectral exponent (?), and it is clear, with the additional data that are now available, that J1024−0719, J1643−1224 and J1730−2304 should be added to this list. When the spatial power spectrum of a stochastic process is steeper than the Kolmogorov power law, it can be expected to be dominated by linear gradients and to show an almost quadratic structure function. Indeed, inspection of Figure 5 shows that the 5 steep-spectrum pulsars all show a strong linear gradient in DM(t).
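For readers who wish to reproduce structure functions like those in Figure 6, a minimal binned estimator of $D(\tau)$ for an irregularly sampled DM(t) series is sketched below; the white-noise bias correction applied to the real data (derived by simulation) is not included here:

```python
import numpy as np

def structure_function(t, y, bins):
    """Binned estimate of D(tau) = <(y(t) - y(t+tau))^2> for an
    irregularly sampled series (t, y). `bins` are lag-bin edges (days).
    Note: uncorrelated measurement errors add a constant 2*sigma_err^2
    to D(tau); subtract it for an unbiased estimate."""
    dt = np.abs(t[:, None] - t[None, :])
    dy2 = (y[:, None] - y[None, :]) ** 2
    iu = np.triu_indices(len(t), k=1)          # count each pair once
    lag, d2 = dt[iu], dy2[iu]
    idx = np.digitize(lag, bins) - 1
    D = np.array([d2[idx == k].mean() if np.any(idx == k) else np.nan
                  for k in range(len(bins) - 1)])
    centres = 0.5 * (bins[:-1] + bins[1:])
    return centres, D
```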
It is interesting that two of the five steep-spectrum pulsars, J1730−2304 and J1909−3744, show structure functions which are clearly steeper than Kolmogorov in the observed range and are thus converging towards a Kolmogorov spectrum at smaller time lags. The three other steep-spectrum pulsars do not show this behaviour in their structure functions, for two reasons: (1) J1024−0719 appears to flatten, apparently due to a white-noise contribution which is comparable with the error bars and is probably due to underestimation of the bias correction for the measurement errors; (2) J1045−4509 and J1643−1224 are highly scattered and are showing the effect of a scattering contribution at small lags. The scattering contribution to J1730−2304 and J1909−3744 is negligible.
The time series DM(t) shown in Figure 5 often show behaviour that does not look like a homogeneous stochastic process. For example, PSR J1603−7202 shows a large increase for ∼250 days around MJD 54000 and J0613−0200 shows a clear annual modulation. The increase in DM for J1603−7202 suggests that a blob of plasma moved through the line of sight. If we assume the blob is halfway between the pulsar and the Earth, the line of sight would have moved by about 0.5 au in this time, and if the blob were spherical it would need a density of $\sim$200 cm$^{-3}$. This value is high, but comparable to other density estimates for au-scale structure based on 'extreme scattering events' (Fiedler et al. 1987; Cognard et al. 1993).
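The density estimate is a one-line conversion from the DM excess through the blob to an electron density. In the sketch below, the ΔDM value is a hypothetical round number chosen only to illustrate the conversion (the actual excess is read off Figure 5, which is not reproduced here):

```python
AU_IN_PC = 1.0 / 206264.8     # 1 au in parsec
delta_dm = 5e-4               # HYPOTHETICAL DM excess (pc cm^-3), for illustration
blob_size = 0.5 * AU_IN_PC    # path length through the blob (pc)
n_e = delta_dm / blob_size    # electron density, since DM = n_e * path length
print(f"n_e ~ {n_e:.0f} cm^-3")   # ~ 200 cm^-3
```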
We computed the power spectra of DM(t) for all the pulsars to see if the annual modulation that is clear by eye in PSR J0613−0200 is present in any of the other pulsars. For four pulsars we find a significant ($>5\sigma$) detection of an annual periodicity: PSRs J0613−0200, J1045−4509, J1643−1224 and J1939+2134.
The most likely explanation for the annual variation in DM(t) is the annual shift in the line of sight to the pulsar resulting from the orbital motion of the Earth. The trajectories of the lines of sight to three example PPTA pulsars are shown in Figure 8. The relatively low proper motion and large parallax of the PPTA pulsars mean that the trajectories of the lines of sight to many of the PPTA pulsars show pronounced ripples. However, unless the trajectory is a tight spiral, the annual modulation will only be significant if there is a persistent gradient in the diffractive phase screen.
The presence of persistent phase gradients and annual modulation in J1045−4509 and J1643−1224 is not surprising because the ISM associated with each of these pulsars has a steeper than Kolmogorov power spectrum. Indeed, the measured DM(t) for these pulsars do show a very linear trend, which is in itself evidence for a persistent phase gradient. The other steep-spectrum pulsars, J1024−0719, J1730−2304 and J1909−3744, have higher proper motions, which reduces the amplitude of the annual modulation relative to the long-term trend in DM(t). We note that the spectral analyses for PSRs J1024−0719 and J1909−3744 suggest annual periodicities, and it may be possible to make a significant detection by combining the PPTA data with other data sets.
PSR J1939+2134 does not show a steep spectrum; however, its proper motion is very low compared to its parallax, and therefore the trajectory spirals through the ISM, reducing the requirement for a smooth phase screen. The annual modulation of J0613−0200 may be somewhat different, since it does not have a steep spectrum and, although the proper motion is small, the trajectory does not spiral (see Figure 8). This suggests that for J0613−0200 the turbulence could be anisotropic, with the slope of the gradient aligned with the direction of the proper motion. Anisotropic structures are believed to be quite common in the ISM (??). However, one can imagine various other ways in which this could occur, particularly in an inhomogeneous random process, and inhomogeneous turbulence on an au spatial scale is also believed to be common in the ISM (???).
Persistent spatial gradients will cause a refractive shift in the apparent position of the pulsar, and because of dispersion the refraction angle will be wavelength dependent. This refractive shift appears in the timing residuals as an annual sine wave which changes in amplitude as $\lambda^2$. When the DM(t) is corrected this sine wave disappears and the inferred position becomes the same at all wavelengths. These position shifts are of order $10^{-4}(\lambda/20\,{\rm cm})^2$ arcseconds for all four pulsars.
Note that the trajectory of the lines of sight shown on Figure 8 may appear quite non-sinusoidal, but the annual modulation caused by the Earth's orbital motion in a linear phase gradient will be exactly a sine wave superimposed on a linear slope due to proper motion. This will not generate any higher harmonics unless the structure shows significant nonlinearity on an au scale. We do not see second harmonics of the annual period, which suggests that the spatial structure must be quite linear on an au scale.
Annual variations in DM are also observed in pulsars for which the line of sight passes close to the Sun, because of free electrons in the solar wind (Ord et al. 2007; You et al. 2012). In the PPTA, a simple symmetric model of the solar wind is used to remove this effect, which is negligible for most pulsars. For the three pulsars where it is not negligible, the effect of the solar wind persists only for a few days at the time when the line of sight passes closest to the Sun. Neither the magnitude, phase nor shape of the variations seen in our sample can be explained by an error in the model of the solar wind. Changes in ionospheric free-electron content can be ruled out for similar reasons.
In summary, the ISM observations are, roughly speaking, consistent with our present understanding of the ISM. However, the data will clearly support a more detailed analysis, including spectral modelling over a time-scale range in excess of $10^5$, from the diffractive scale to the duration of the observations. It may also be possible to make a two-dimensional spatial model of the electron density variations for some of the 20 PPTA pulsars, although such detailed modelling is far beyond the scope of this paper. Preliminary attempts to model the DM variations in PSR J0613−0200, assuming that the DM can be approximated as a linear gradient during the observation period, suggest that such modelling may be useful both for studying the ISM and for improving the DM correction.
CONCLUSIONS
We find that it is necessary to approach the problem of estimating and correcting for DM(t) variations iteratively, beginning with a pilot analysis for each pulsar and refining that analysis as the properties of that pulsar and the associated ISM become clearer. Each pulsar is different and the ISM in the line of sight to each pulsar is different. The optimal analysis must be tailored to the conditions appropriate for each pulsar and according to the application under consideration.
We sample the DM(t) just often enough that the variations in DM are captured with the minimum amount of additional white noise. Likewise, we must also sample the common-mode signal $t_{\rm CM}(t)$ at the appropriate rate. In this way we can correct for the DM variations at frequencies where it is necessary, and we can include $t_{\rm CM}(t)$ at frequencies where it is necessary, but not fit for either at frequencies where the signal is dominated by white noise. By including the common-mode signal in the analysis we preserve the wavelength-independent signals of interest for pulsar timing arrays and we improve the estimate of the pulsar period and period derivative. Without estimating the common mode, a significant fraction of wavelength-independent signals, such as errors in the terrestrial clocks, errors in the planetary ephemeris, and the effects of gravitational waves from cosmic sources, would have been absorbed into the DM correction and lost.
We have applied this technique to the PPTA data set, which significantly improves its sensitivity for the detection of low-frequency signals. The estimated DM(t) also provides an unparalleled measure of the au-scale structure of the interstellar plasma. In particular, it confirms earlier suggestions that the fluctuations often have a steeper than Kolmogorov spectrum, which implies that an improved physical understanding of the turbulence will be necessary. We also find that persistent phase gradients over au scales are relatively common and are large enough to cause significant errors in the apparent positions of pulsars unless DM corrections are applied.
APPENDIX A: CONSTRAINED LEAST SQUARES FITTING IN TEMPO2
The least squares problem of fitting the timing model to the residuals can be written in matrix form as
$R = MP + E$.  (A1)
Here $R$ is a column vector of the timing residuals, $P$ is a column vector of fit parameters, including DM$(t_j)$ and $t_{\rm CM}(t_j)$ as well as the other timing-model parameters, $M$ is a matrix describing the timing model, and $E$ is a column vector of errors. The least-squares algorithm solves for $P$, matching $MP$ to $R$ with a typical accuracy of $E$. The sampled time series DM$(t_j)$ and $t_{\rm CM}(t_j)$ are covariant with the timing model, so they must be constrained to eliminate that covariance or the least-squares solution will fail to converge on a unique solution. These constraints have the form of linear equations in DM$(t_j)$ and $t_{\rm CM}(t_j)$, such as:
$\sum_j {\rm DM}(t_j) = 0$; $\sum_j t_{\rm CM}(t_j) = 0$; $\sum_j t_j\, t_{\rm CM}(t_j) = 0$; $\sum_j t_j^2\, t_{\rm CM}(t_j) = 0$; $\sum_j \sin(\omega t_j)\, t_{\rm CM}(t_j) = 0$; $\sum_j \cos(\omega t_j)\, t_{\rm CM}(t_j) = 0$; etc. Augmented with these equations, the least-squares problem becomes
$\begin{pmatrix} R \\ C \end{pmatrix} = \begin{pmatrix} M \\ B \end{pmatrix} P + \begin{pmatrix} E \\ \epsilon \end{pmatrix}$,
where $B$ is a matrix describing the constraints and $\epsilon$ is a column vector of weights for the constraints. In our case $C = 0$, though it need not be in general. The least-squares solution will then find a vector $P$ that matches both $MP$ to $R$, with a typical accuracy of $E$, and $BP$ to $C$, with a typical accuracy of $\epsilon$. By making $\epsilon$ very small we can enforce the constraints with high accuracy. This scheme has been called 'the method of weights' (Golub & van Loan 1996). If the uncertainties in the estimates of DM$(t_j)$ and $t_{\rm CM}(t_j)$ are not expected to be equal, for instance if the different observing wavelengths are irregularly sampled and the ToA uncertainties are variable across sampling windows, then it can be advantageous to use weighted constraints. The constraints then take the form $\sum_j W_j\, {\rm DM}(t_j) = 0$, and we need to estimate the uncertainties of the parameters to obtain the optimal weights. These uncertainties can be determined from the least-squares solution in which the timing residuals are described purely by Equation (4). This problem is linear, and the covariance matrix of the parameters can be written in closed form without even solving for the parameters. The diagonal elements of the covariance matrix are the variances of the parameters, and the weights, $W_j$, are the inverses of the square roots of the corresponding variances.
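A minimal NumPy sketch of the method of weights is given below: the constraint rows $B$ are appended to the design matrix with weight $1/\epsilon \gg 1$ and the augmented system is solved by ordinary least squares. The matrix shapes and the single mean-zero constraint are illustrative, not the actual TEMPO2 design matrix:

```python
import numpy as np

def constrained_lsq(M, R, B, C, eps=1e-8):
    """Solve R ~ M P subject to B P ~ C, enforced via the method of
    weights: constraint rows are given weight 1/eps >> 1."""
    A = np.vstack([M, B / eps])
    b = np.concatenate([R, C / eps])
    P, *_ = np.linalg.lstsq(A, b, rcond=None)
    return P

# Toy example: fit some parameters while constraining their mean to zero.
rng = np.random.default_rng(0)
M = rng.normal(size=(50, 5))      # toy design matrix (50 ToAs, 5 parameters)
R = rng.normal(size=50)           # toy residuals
B = np.ones((1, 5))               # constraint: sum of parameters = 0
C = np.zeros(1)
P = constrained_lsq(M, R, B, C)
print(B @ P)                      # ~ 0: the constraint is satisfied
```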
APPENDIX B: RELATION BETWEEN THE STRUCTURE FUNCTION AND POWER-SPECTRAL DENSITY
The structure function $D(\tau)$ of a time series $y(t)$ is well defined if $y(t)$ has stationary differences:
$D(\tau) = \langle (y(t) - y(t+\tau))^2 \rangle$.  (B1)
If $y(t)$ is wide-sense stationary, $D(\tau)$ can be written in terms of the autocovariance $C(\tau)$ by expansion of the square:
$D(\tau) = \langle y(t)^2 \rangle + \langle y(t+\tau)^2 \rangle - 2\langle y(t)\,y(t+\tau) \rangle = 2(C(0) - C(\tau))$.  (B2)
If $y(t)$ is real valued then, by the Wiener-Khinchin theorem,
$C(\tau) = \int_0^\infty \cos(2\pi f \tau)\, P(f)\, df$,  (B3)
where $P(f)$ is the one-sided power spectral density of $y(t)$.
Thus we can write the structure function in terms of the power spectral density as
$D(\tau) = \int_0^\infty 2(1 - \cos(2\pi f \tau))\, P(f)\, df$.  (B4)
It should be noted that this expression for $D(\tau)$ is valid if $D(\tau)$ exists; it is not necessary that $C(\tau)$ exist. For the case of a power law, $P(f) = A f^{-\alpha}$, we can change variables using $x = f\tau$ and obtain
$D(\tau) = \tau^{\alpha-1} A \int_0^\infty 2(1 - \cos(2\pi x))\, x^{-\alpha}\, dx$.  (B5)
The integral above, denoted Int, converges if $1 < \alpha < 3$, yielding
$\mathrm{Int} = 2^{\alpha} \pi^{\alpha-1} \sin(-\alpha\pi/2)\, \Gamma(1-\alpha)$,  (B6)
where $\Gamma$ is the Gamma function. Thus for Kolmogorov turbulence, with exponent $\alpha = 8/3$, we have $\mathrm{Int} \simeq 89.344$ and the power spectrum can be written
$P(f) \simeq 0.0112\, D(\tau)\, \tau^{-5/3} f^{-8/3}$.  (B7)
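The closed form (B6) and the prefactor in (B7) can be verified numerically; in the sketch below, $1/\mathrm{Int}$ for $\alpha = 8/3$ reproduces the constant 0.0112:

```python
import numpy as np
from scipy.special import gamma

alpha = 8.0 / 3.0   # Kolmogorov exponent

# Closed form of the integral in (B5), Equation (B6):
Int = 2**alpha * np.pi**(alpha - 1) * np.sin(-alpha * np.pi / 2) * gamma(1 - alpha)
print(Int)          # ~ 89.344

# Inverting D(tau) = A * tau^(alpha-1) * Int gives the prefactor in (B7):
print(1.0 / Int)    # ~ 0.0112
```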
Figure 1. Average power spectra of pre- and post-correction timing residuals, in the 20-cm band, with four combinations of signals. The solid line shows the pre-correction spectrum and the dashed line shows the post-correction spectrum. For the cases where variations in DM are included in the simulation, the pre-correction spectrum without DM variations is shown with a dotted line. Here the fitting uses the interpolated DM(t) fitting routine, without fitting a common-mode signal. The vertical bars on the left of each panel show the 90% spectral estimation uncertainty for a single realisation (left-most bar) and the average of 1000 realisations (right bar).

Figure 2. As for Figure 1, except the fitting uses the interpolated DM(t) fitting routine in addition to the wavelength-independent signal, C(t).

Figure 3. Average power spectra of the error in DM(t) after fitting to simulations with realistic sampling and uncertainties. The simulations contained white noise, DM variations and, in the left panel, a model GWB. The solid black line shows the power spectrum of ${\rm DM}_{\rm est}(t) - {\rm DM}_{\rm in}(t)$. The dotted line is the power spectrum of the white noise only. The dashed line is the post-correction power spectrum of the residuals, after subtracting the model GWB signal if present. The crosses mark the sum of the black line and the dotted line.

Figure 4. Cross-correlation of post-correction residuals with the input model GWB, for the simulations representing PSR J1909−3744. Solid lines show data from the 20-cm band and dashed lines show data from the 50-cm band. The correction was computed with and without fitting for the common mode, indicated by thick and thin lines respectively. The autocorrelation of the input GWB is plotted as a dotted line, but it is completely obscured by the heavy solid line for the cross-correlation in the 20-cm band.

Figure 6. Structure functions of dispersive delay at 20 cm. The square markers indicate the structure function as measured directly from the DM time series in Figure 5; error bars are derived by simulation of white noise. The solid lines show the extrapolation from the scintillation time scale $\tau_{\rm diff}$ assuming a Kolmogorov spectrum; dashed lines mark the region occupied by 66% of simulated data sets having Kolmogorov noise with the same amplitude. The dotted lines show a Kolmogorov spectrum with the amplitude set to match the real data at a lag of 1000 days.

Figure 7. Diffractive scattering delay, $t_0$, measured from the scintillation bandwidth, $\nu_0$, in observations of PSR J1939+2134 at a wavelength of 20 cm. The error bars are derived from the fit for $\nu_0$ and so are roughly proportional to $t_0$.

Figure 8. Trajectories through the ISM of the lines of sight to PSRs J0613−0200 (dashed line), J1643−1224 (solid black line) and J1909−3744 (grey line). It was assumed that the scattering takes place half way between the pulsar and the Earth and the motion of the plasma was neglected. The trajectories are marked with a cross at the DM sampling interval of 0.25 yr.
To avoid confusion, in this paper we will use wavelength for the radio wavelength and frequency to describe the fluctuation of time variable processes.
ACKNOWLEDGEMENTS

This work has been carried out as part of the Parkes Pulsar Timing Array project. GH is the recipient of an Australian Research Council QEII Fellowship (project DP0878388). The PPTA project was initiated with support from RNM's Federation Fellowship (FF0348478). The Parkes radio telescope is part of the Australia Telescope, which is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO.
REFERENCES

Brisken W. F., Macquart J.-P., Gao J. J., Rickett B. J., Coles W. A., Deller A. T., Tingay S. J., West C. J., 2010, ApJ, 708, 232
Champion D. J. et al., 2010, ApJ, 720, L201
Cognard I., Bourgois G., Lestrade J.-F., Biraud F., Aubry D., Darchy B., Drouhin J.-P., 1993, Nature, 366, 320
Cognard I., Bourgois G., Lestrade J.-F., Biraud F., Aubry D., Darchy B., Drouhin J.-P., 1995, A&A, 296, 169
Coles W. A., Rickett B. J., Gao J. J., Hobbs G., Verbiest J. P. W., 2010, ApJ, 717, 1206
Cordes J. M., Downs G. S., 1985, ApJS, 59, 343
Cordes J. M., Rickett B. J., Stinebring D. R., Coles W. A., 2006, ApJ, 637, 346
Cordes J. M., Wolszczan A., Dewey R. J., Blaskiewicz M., Stinebring D. R., 1990, ApJ, 349, 245
Demorest P. B., 2011, MNRAS, 416, 2821
Fiedler R. L., Dennison B., Johnston K. J., Hewish A., 1987, Nature, 326, 675
Foster R. S., Cordes J. M., 1990, ApJ, 364, 123
Golub G. H., van Loan C. F., 1996, Matrix Computations. Johns Hopkins University Press
Hobbs G. et al., 2012, MNRAS, in press (arXiv:1208.3560)
Hobbs G. et al., 2009, MNRAS, 394, 1945
Hobbs G., Lyne A. G., Kramer M., 2010, MNRAS, 402, 1027
Hobbs G. B., Edwards R. T., Manchester R. N., 2006, MNRAS, 369, 655
Hotan A. W., Bailes M., Ord S. M., 2005, ApJ, 624, 906
Jenet F. A. et al., 2006, ApJ, 653, 1571
Joshi B. C., Ramakrishna S., 2006, Bulletin of the Astronomical Society of India, 34, 401
Manchester R. N. et al., 2012, Proc. Astr. Soc. Aust., submitted
Ord S. M., Johnston S., Sarkissian J., 2007, Solar Physics, 245, 109
Osłowski S., van Straten W., Hobbs G. B., Bailes M., Demorest P., 2011, MNRAS, 418, 1258
Rickett B. J., 1977, Ann. Rev. Astr. Ap., 15, 479
Shannon R. M., Cordes J. M., 2012, ApJ, in press (arXiv:1210.7021)
Stappers B. W. et al., 2011, A&A, 530, A80
Stinebring D. R., McLaughlin M. A., Cordes J. M., Becker K. M., Goodman J. E. E., Kramer M. A., Sheckard J. L., Smith C. T., 2001, ApJ, 549, L97
van Haasteren R. et al., 2011, MNRAS, 414, 3117
Verbiest J. P. W. et al., 2009, MNRAS, 400, 951
Yardley D. R. B. et al., 2011, MNRAS, 414, 1777
Yardley D. R. B. et al., 2010, MNRAS, 407, 669
You X. P., Coles W. A., Hobbs G. B., Manchester R. N., 2012, MNRAS, 422, 1160
You X. P. et al., 2007, MNRAS, 378, 493
Measurement and correction of variations in interstellar dispersion in high-precision pulsar timing

M. J. Keith 1, W. Coles 2, R. M. Shannon 1, G. B. Hobbs 1, R. N. Manchester 1, M. Bailes 3, N. D. R. Bhat 3,4, S. Burke-Spolaor 5, D. J. Champion 6, A. Chaudhary 1, A. W. Hotan 1, J. Khoo 1, J. Kocz 3,7, S. Osłowski 3,1, V. Ravi 8,1, J. E. Reynolds 1, J. Sarkissian 1, W. van Straten 3, D. R. B. Yardley 9,1

1 Australia Telescope National Facility, CSIRO Astronomy & Space Science, P.O. Box 76, Epping, NSW 1710, Australia
2 Electrical and Computer Engineering, University of California at San Diego, La Jolla, CA, U.S.A.
3 Centre for Astrophysics and Supercomputing, Swinburne University of Technology, P.O. Box 218, Hawthorn, VIC 3122, Australia
4 International Centre for Radio Astronomy Research, Curtin University, Bentley, WA 6102, Australia
5 NASA Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109, USA
6 Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, 53121 Bonn, Germany
7 Harvard-Smithsonian Centre for Astrophysics, 60 Garden Street, Cambridge, MA 02138, U.S.A.
8 School of Physics, University of Melbourne, Vic 3010, Australia
Working in Pairs: Understanding the Effects of Worker Interactions in Crowdwork
Chien-Ju Ho
Ming Yin
Abstract
Crowdsourcing has gained popularity as a tool to harness human brain power to help solve problems that are difficult for computers. Previous work in crowdsourcing often assumes that workers complete crowdwork independently. In this paper, we relax the independence assumption of crowdwork and explore how introducing direct, synchronous, and free-style interactions between workers would affect crowdwork. In particular, motivated by the concept of peer instruction in educational settings, we study the effects of peer communication in crowdsourcing environments. In the crowdsourcing setting with peer communication, pairs of workers are asked to complete the same task together by first generating their initial answers to the task independently and then freely discussing the task with each other and updating their answers after the discussion. We experimentally examine the effects of peer communication in crowdwork on various common types of tasks on crowdsourcing platforms, including image labeling, optical character recognition (OCR), audio transcription, and nutrition analysis. Our experiment results show that work quality is significantly improved in tasks with peer communication compared to tasks where workers complete the work independently. However, participating in tasks with peer communication has limited effects on workers' independent performance in tasks of the same type in the future.
Introduction
Crowdsourcing is a paradigm for utilizing human intelligence to help solve problems that computers alone cannot yet solve. In recent years, crowdsourcing has gained increasing popularity as the Internet makes it easy to engage the crowd to work together. On a typical crowdsourcing platform like Amazon Mechanical Turk (MTurk), task requesters may post "microtasks" that workers can complete independently in a few minutes in exchange for a small amount of payment. A microtask might involve labeling an image, transcribing an audio clip, or determining whether a website is offensive. Much of the practice and research in crowdsourcing has made this independence assumption and has focused on designing effective aggregation methods [35,7,40] or incentive mechanisms [30,16] to improve the quality of crowdwork.
More recently, researchers have started to explore the possibility of removing this independence assumption and enabling worker collaboration in crowdsourcing. One typical approach is to design workflows that coordinate crowd workers for solving complex tasks. Specifically, a workflow involves decomposing a complex task into multiple simple microtasks, and workers are then asked to work on different microtasks. Since decomposed microtasks may depend on each other (e.g., the output of one task may be used as the input for another), workers are implicitly interacting with one another and are not working independently. Along this line, there has been a great amount of research demonstrating that relaxing the worker independence assumption could enable us to go beyond microtasks and solve various complex tasks using crowdsourcing [4,24,25,36].
Another line of research has demonstrated that even when workers are working on the same microtask, enabling some form of structured interactions between workers could be beneficial as well. In particular, Drapeau et al. [13] and Chang et al. [6] have shown that, in labeling tasks, if workers are presented with alternative answers and the associated arguments, which are generated by other workers working on the same tasks, they can provide labels with higher accuracy. These results, again, imply that including worker interactions could have positive impacts on crowdwork.
In both these lines of research, however, interactions between workers are indirect and constrained by the particular format of information exchange that is pre-defined by requesters (e.g., the input-output handoffs in workflows, the elicitation of arguments for workers' answers). Such form of worker interactions can be context-specific and may not be easily adapted to different contexts. For example, it is unclear whether presenting alternative answers and arguments would still improve worker performance for tasks other than labeling, where it can be hard for workers to provide a simple justification for their answers.
Naturally, one may ask what if we can introduce direct, synchronous and free-style worker interactions in crowdwork? We refer to this alternative type of worker interactions as peer communication, and in this paper, we focus on understanding the effects of peer communication in crowdwork when a pair of workers are working on the same microtask. In particular, inspired by the concept of peer instruction in educational settings [8], we operationalize peer communication as a procedure where a pair of workers working on the same task are asked to first provide an independent answer each, then freely discuss the task, and finally provide an updated answer after the discussion. We ask the following two questions to understand the effects of peer communication:
• Whether and why peer communication improves the quality of crowdwork?
Empirical studies on the effects of peer instruction suggest that students are more likely to provide correct answers to test questions after discussing with their peers [8]. Moreover, previous work in crowdsourcing also demonstrates that indirect worker interactions (e.g., showing workers the arguments from other workers) [13,6] improve the quality of crowdwork for labeling tasks. We are thus interested in exploring whether peer communication, a more general form of worker interaction, could also have positive impacts on the quality of crowdwork for a more diverse set of tasks.
• Can peer communication be used to train workers so that they can achieve better independent performance on the same type of tasks in the future? It is observed that students learning with peer instruction obtain higher grades when they (independently) take the post-tests at the end of the semester [8]. Moreover, previous work in crowdsourcing also shows that some types of indirect worker interactions (e.g., asking workers to review or verify the results of other workers in the same type of task) could enhance workers' independent performance on similar tasks in the future [12,42]. We are thus interested in examining whether peer communication could also be an effective approach to train workers.
We design and conduct experiments on Amazon Mechanical Turk to answer these questions. In our first set of experiments, we examine the effects of peer communication with three of the most commonly seen tasks in crowdsourcing markets: image labeling, optical character recognition, and audio transcription. Experiment results show that workers in tasks with peer communication perform significantly better than workers who work independently. The results are robust and consistently observed for all three types of tasks. By looking into the logs of worker discussions, we find that most workers are engaged in constructive conversations and exchange information that their peers might not notice or do not know. This observation reinforces our belief that consistent quality improvements can be obtained through introducing peer communication in crowdwork. However, unlike in the educational setting, workers who have completed tasks with peer communication do not produce independent work of higher quality on the same type of tasks in the future.
We then conduct a second set of experiments with nutrition analysis tasks to examine the effects of peer communication in training workers for future tasks in more depth. The experiment results suggest that workers' independent performance in future tasks only improves when the future tasks share concepts related to the training tasks (i.e., the tasks where peer communication happens), and when workers are given expert feedback after peer communication. Moreover, such improvement is likely caused by the expert feedback rather than the peer communication procedure. In other words, we find that peer communication, per se, may have limited effectiveness in training workers towards better independent performance, at least for microtasks in crowdsourcing.
Our current study focuses on one-to-one communication between workers on microtasks. We believe our results provide implications for the potential benefits of introducing direct interactions among multiple workers in complex and more general tasks, and we hope more experimental research will be conducted in the future to carefully understand the effects of peer communication in various crowdsourcing contexts.
Related Work
A major line of research in crowdsourcing is to design effective quality control methods. Most of the work in this line has made the assumption that workers independently complete the tasks. One theme in the quality control literature is the development of statistical inference and probabilistic modeling methods for the purpose of aggregating workers' answers. Assuming a batch of noisy inputs, the EM algorithm [11] can be adopted to learn the skill levels of workers and obtain estimates of the best answer [35,7,20,40,10]. There have also been extensions that consider task assignments in the context of these probabilistic models of workers [21,22,15]. Another theme is to design incentive mechanisms to motivate workers to contribute high-quality work. Incentives that researchers have studied include monetary payments [30,17,41,16] and intrinsic motivation [28,37,39]. In addition, gamification [1], badges [2], and virtual points [18] have also been explored to steer workers' behavior.
The goal of this work is to explore the effects of worker interactions in crowdsourcing environments. Researchers have explored implicit worker interactions through the design of workflows that coordinate multiple workers. In a workflow, a task is decomposed into multiple microtasks, which often depend on each other, e.g., the output of one microtask serves as the input for another microtask. As a result, workers are implicitly interacting with each other. For example, Little et al. [29] propose the Improve-and-Vote workflow, in which some workers work on improving the current answer while other workers vote on whether the updated answers are better than the original ones. Dai et al. [9] apply partially observable Markov decision processes (POMDPs) to better set the parameters in these workflows (e.g., how many workers should be involved in voting). More complicated workflows have also been proposed to solve complex tasks using a team of crowdworkers [33,24,25,36]. These workflow-based approaches enable crowdsourcing to solve not only microtasks but also more complex tasks. However, worker interactions in this approach are often implicit and constrained (e.g., through the input-output handoffs). We are interested in studying the effects of direct and free-style communication between workers in crowdwork.
Our work focuses on worker interactions when workers work on the same microtask and is related to that of Drapeau et al. [13] and Chang et al. [6]. Drapeau et al. [13] propose an Assess-Justify-Reconsider workflow for labeling tasks: given a labeling task, workers first assess the task and give their answers independently; workers are then asked to come up with arguments to justify their answers; finally, workers are presented with arguments from a different answer and are then asked to reconsider their answers. They show that applying this workflow greatly improves the quality of answers generated by crowd workers. Chang et al. [6] also propose a similar Vote-Explain-Categorize workflow with an additional goal of collecting useful arguments as the labeling guidelines for future workers. Both these studies have relaxed the independence assumption and enabled worker interactions through presenting the arguments from another worker. However, they focus only on classification tasks (e.g., answering whether there is a cat in the image), and the worker interactions are limited to presenting arguments from another worker. In this work, we are interested in enabling more general form of interactions (i.e., direct, synchronous, and free-style communications) for more diverse types of tasks. In particular, in addition to classification tasks, we have explored worker interactions on optical character recognition and audio transcription. It is not trivial how the above two workflows can be applied in these tasks, as workers might not know how to generate arguments for these tasks without interacting with fellow workers in real time.
Regarding the role of worker interactions in "training" workers, previous research [42,12] suggests that, for complex tasks, introducing limited forms of implicit worker interactions, e.g., providing (expert or peer) feedback to workers after they complete the tasks or asking workers to review or verify the work produced by other workers, could improve workers' performance in the future. In this work, we focus on examining whether direct, synchronous, and free-style communication (instead of one-directional feedback or reviewing) can be an effective training method to improve workers' independent performance in microtasks.
Niculae and Danescu-Niculescu-Mizil [32] have explored whether interactions can help improve workers' output. They designed an online game in which online players can discuss together to identify the location where a given photo was taken. However, their focus is on applying natural language processing techniques to predict whether a discussion would be constructive based on analyzing users' chat logs. Their techniques could be useful and interesting to apply in our setting.
This work adopts the techniques from peer instruction, which is a widely adopted interactive learning approach in many institutions and disciplines [8,14,26,34,31], and has been empirically shown to better engage students and also help students achieve better learning performance. We will provide more details on the concept of peer instruction in the next section.
Peer Communication in Crowdwork
In this section, we give a brief introduction to the concept of peer instruction. We then describe our approach of peer communication, which adapts peer instruction to crowdsourcing environments.
Peer Instruction in Educational Settings
Peer instruction is an interactive learning method developed by Eric Mazur which aims at engaging students for more effective learning during classes. Unlike traditional teaching methods, which are typically centered around the instructor, who conveys knowledge to students in a one-sided manner through pure lectures, peer instruction creates a student-centered learning environment where students can instruct and learn from each other. More specifically, peer instruction involves students first learning outside of class by completing pre-class readings and then learning in class by engaging in a conceptual question-answering process. Figure 1 summarizes the in-class questioning procedure of peer instruction. The procedure starts with the instructor proposing to students a question that is related to one concept in the pre-class readings. Students are then asked to reflect on the question, formulate answers on their own, and report their answers to the instructor. Next, students can discuss the question with their fellow students. During the discussion, students are encouraged to articulate the underlying reasoning of their answers and convince their peers that their answers are correct. After the discussion, each student reports to the instructor a final, updated answer, which may or may not be different from her initial answer before the discussion. Lastly, after reviewing students' final responses, the instructor decides to either provide more explanation on the concept associated with the current question or move on to the next concept.
The peer instruction method has been widely adopted in a large number of institutions and disciplines [8,14,26]. Intuitively, peer instruction may improve learning as students become active knowledge producers instead of passive knowledge consumers. Compared to instructors, students might be able to provide explanations that are better understood by other students as they share similar backgrounds. Empirical observations for deploying peer instruction confirm that it successfully improves students' learning performance [8]. In particular, students are more likely to provide correct answers to the conceptual question after discussing peers than they do before the discussion. Moreover, in post-tests where students independently answer a set of test questions after the end of the semester, students who participate in courses taught with peer instruction perform significantly better than students who don't. These empirical evidences suggest that peer instruction helps students understand not only the current question but also the underlying concepts, which eventually help them obtain better independent performance in future tests.
Peer Communication: Adapting Peer Instruction to Crowdwork
We propose to study the effects of peer communication in crowdwork, applying the idea of peer instruction as a principled approach to structure the direct interactions among crowd workers. In particular, given a particular microtask, we consider its requester as the "instructor," all workers working on it as "students," and the task per se as the "conceptual question" proposed by the instructor. Hence, a natural way to adapt peer instruction to crowdsourcing would be to ask each pair of workers working on the same task to first complete the task independently, then discuss the task with each other, and finally submit their updated answers.
The success of peer instruction in improving students' learning performance in the educational domain implies the possibility of using such strategy to enhance the quality of work in crowdsourcing, both on tasks where peer communication takes place and on future tasks of the same type. However, it is unclear whether the empirical findings on the effects of peer instruction in educational settings can be directly generalized to the crowdsourcing domain. For example, while conceptual questions in educational settings typically involve problems that require specialized knowledge or domain expertise, crowdwork is often composed of "microtasks" that only ask for simple skills or basic intelligence. Moreover, in peer instruction, the instructor can provide additional explanations to clarify confusions students might have during the discussion. However, in peer communication, requesters often do not know the ground truth of the tasks and are not able to provide additional feedback after workers submit their tasks.
Therefore, in this work, we aim to examine the effects of peer communication in crowdsourcing, and in particular, whether peer communication has positive effects on the quality of crowdwork. More specifically, based on the empirical evidence on the effectiveness of peer instruction in education as well as the positive effects of indirect worker interactions demonstrated in previous research, we postulate two hypotheses on the effects of peer communication in crowdwork:
• Hypothesis 1 (H1): Workers can produce higher work quality in tasks with peer communication than that in tasks where they work independently.
• Hypothesis 2 (H2): After peer communication, workers are able to produce independent work of higher quality on the same type of tasks in the future.
Task:        1            2            3            4            5            6
Treatment 1: Independent  Independent  Independent  Independent  Discussion   Discussion
Treatment 2: Discussion   Discussion   Independent  Independent  Independent  Independent
             [--- Session 1 ---]       [--- Session 2 ---]       [--- Session 3 ---]
             (examines Hypothesis 1)   (examines Hypothesis 2)   (balances task types)

Figure 2: The two treatments used in our experiments. This design enables us to examine Hypothesis 1 (through comparing work quality in Session 1) and Hypothesis 2 (through comparing work quality in Session 2), while not creating significant differences between the two treatments (through adding Session 3 to make the two treatments contain an equal number of independent tasks and discussion tasks).
We conduct our first set of experiments on Amazon Mechanical Turk (MTurk) with three types of microtasks that commonly appear on crowdsourcing platforms, including image labeling, optical character recognition (OCR), and audio transcription.
Independent Tasks vs. Discussion Tasks
As previously stated in our hypotheses, we are interested in understanding whether allowing workers to work in pairs and directly communicate with each other about the same tasks would lead to work of better quality compared to that when workers complete the work independently, both on tasks where peer communication happens (H1) and on future tasks of the same type after peer communication takes places (H2). To do so, in our experiments, we consider both tasks with peer communication and tasks without peer communication:
• Independent tasks (tasks without peer communication). In an independent task, workers are instructed to complete the task on their own.
• Discussion tasks (tasks with peer communication). Workers in a discussion task are guided to communicate with other workers to complete the task together, following a process adapted from the peer instruction procedure as we have discussed in Section 2.2. In particular, each worker is paired with another "co-worker" on a discussion task. Both workers in the pair are first asked to work on the task independently and submit their independent answers. Then, the pair of workers enter a chat room, where they can see each other's independent answer to the task, and they are given two minutes to discuss the task freely. Workers are instructed to explain to each other why they believe their answers are correct. After the discussion, both workers get the opportunity to update their answers and submit their final answers.
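For concreteness, the sketch below outlines the server-side flow of a discussion task as three phases; the class and method names are illustrative rather than the code used in our experiments:

```python
from enum import Enum, auto

class Phase(Enum):
    INDEPENDENT = auto()   # both workers answer on their own
    DISCUSSION = auto()    # 2-minute chat, initial answers visible
    FINAL = auto()         # each worker submits an updated answer
    DONE = auto()

class DiscussionTask:
    """Illustrative flow of one discussion task for a pair of workers."""
    def __init__(self, pair, chat_seconds=120):
        self.pair = pair
        self.chat_seconds = chat_seconds
        self.initial, self.final = {}, {}
        self.phase = Phase.INDEPENDENT

    def submit_initial(self, worker, answer):
        assert self.phase is Phase.INDEPENDENT
        self.initial[worker] = answer
        if len(self.initial) == len(self.pair):   # both have submitted
            self.phase = Phase.DISCUSSION         # open the chat room

    def end_discussion(self):                     # called when the timer expires
        assert self.phase is Phase.DISCUSSION
        self.phase = Phase.FINAL

    def submit_final(self, worker, answer):
        assert self.phase is Phase.FINAL
        self.final[worker] = answer               # may equal the initial answer
        if len(self.final) == len(self.pair):
            self.phase = Phase.DONE
```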
Treatments
We conduct randomized experiments to examine our hypotheses regarding the effects of peer communication on the quality of crowdwork. The most straightforward experimental design would include two treatments, where workers in one treatment are asked to work on a sequence of independent tasks while workers in the other treatment complete a sequence of discussion tasks.
However, since the structure of independent tasks and discussion tasks is fundamentally different (discussion tasks naturally require more time and effort from workers but can also be more interesting to them), it is possible for us to observe significant self-selection biases in the experiments (i.e., workers may self-select into the treatment in which they can complete tasks faster or which they find more enjoyable) if we adopt such a design.
To overcome the drawback of this simple design, we design our experimental treatments in a way that each treatment consists of the same number of independent tasks and discussion tasks, such that neither treatment appears to be obviously more time-consuming or enjoyable. Figure 2 illustrates the two treatments used in our experiments. In particular, we bundle 6 tasks in each HIT. When a worker accepts our HIT, she is told that there are 4 independent tasks and 2 discussion tasks in the HIT. There are two treatments in our experiments: in Treatment 1, workers are asked to complete 4 independent tasks followed by 2 discussion tasks, while workers in Treatment 2 first work on 2 discussion tasks and then complete 4 independent tasks. Importantly, we do not tell workers the ordering of the 6 tasks, which helps us minimize self-selection biases, as the two treatments look the same to workers. We refer to the first, middle, and last two tasks in the sequence as Sessions 1, 2, and 3 of the HIT, respectively.
Given the way we design the treatments, we can examine H1 by comparing the work quality produced in Session 1 (i.e., the first two tasks of the HIT) between the two treatments. Intuitively, observing higher work quality in Session 1 of Treatment 2 would imply that peer communication can enhance work quality above the level of independent worker performance. Similarly, we can test H2 by comparing the work quality in Session 2 (i.e., the middle two tasks of the HIT) between the two treatments. H2 is supported if the work quality in Session 2 of Treatment 2 is also higher than that of Treatment 1, which would suggest that after communicating with peers, workers are able to produce higher quality in their independent work for the same type of tasks. Finally, Session 3 (i.e., the last two tasks of the HIT) is used to ensure that the two treatments require similar amounts of work from workers.
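As an illustration of how H1 could be tested on the labeling tasks, the sketch below runs a two-proportion z-test on Session-1 accuracy; the counts are placeholders rather than results from our experiments, and for the free-response OCR and transcription tasks one would instead compare error rates or edit distances:

```python
import math

def two_proportion_ztest(k1, n1, k2, n2):
    """z-test for H0: p1 = p2, given k correct answers out of n tasks."""
    p1, p2 = k1 / n1, k2 / n2
    p = (k1 + k2) / (n1 + n2)                        # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the normal CDF, Phi(x) = 0.5*(1 + erf(x/sqrt(2)))
    pval = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, pval

# Placeholder counts: Session-1 correct labels in Treatment 2 (discussion)
# vs. Treatment 1 (independent).
z, p = two_proportion_ztest(k1=330, n1=380, k2=290, n2=380)
print(f"z = {z:.2f}, p = {p:.4f}")
```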
Experimental Tasks
We conduct our experiments on three types of tasks: image labeling, optical character recognition (OCR), and audio transcription. These tasks are all very common types of tasks on crowdsourcing platforms, hence experimental results on these tasks allow us to understand how peer communication affects the quality of crowdwork for various kinds of typical tasks.
• Image labeling. In each image labeling task, we present one image to the worker and ask her to identify whether the dog in the image is a Siberian Husky or a Malamute. Dog images we use are collected from the Stanford Dogs dataset [23]. Since the task can be difficult for workers who are not familiar with dog species, we provide workers with a table summarizing the characteristics of each dog species, as shown in Figure 3. Workers can get access to this table at any time while working on the HIT.
• Optical character recognition (OCR). For the OCR task, workers are asked to transcribe vehicles' license plate numbers from photos. The photos are taken from the dataset provided by Shah and Zhou [38], and some examples are shown in Figure 4.
• Audio transcription. For the audio transcription task, workers are asked to transcribe an audio clip which contains approximately 5 seconds of speech. The audio clips are collected from VoxForge.
Unlike in the image labeling task, we do not provide additional instructions for the OCR and audio transcription tasks. Indeed, for some types of crowdwork, it is difficult for requesters to provide detailed instructions. However, the existence of detailed task instructions may influence the effectiveness of peer communication (e.g., workers in the image labeling tasks can simply discuss with their co-workers whether each distinguishing feature covered in the instruction is present in the dog image). Thus, examining the effect of peer communication on work quality for different types of tasks, where detailed instruction may or may not be possible, helps us understand whether this effect depends on particular elements of task design.
Experimental Procedure
Introducing direct communication between pairs of workers on the same tasks requires us to synchronize the work pace of pairs of workers, which is quite challenging, as discussed in previous research on real-time crowdsourcing [4,3]. We address this challenge by dynamically matching workers and sending pairs of workers to simultaneously start working on the same sequence of tasks.
In particular, when a worker arrives at our HIT, we first check whether there is another worker in our HIT who does not have a co-worker yet; if so, she is matched to that worker and assigned to the same treatment and task sequence as that worker. Otherwise, the worker is randomly assigned to one of the two treatments as well as a random sequence of tasks, and she is asked to wait for a co-worker to join the HIT for a maximum of 3 minutes. We prompt the worker with a beep sound if another worker indeed arrives at our HIT during this 3-minute waiting period. Once we successfully match a pair of workers, both of them are automatically redirected to the first task in the HIT, and they can start working on the HIT simultaneously. In the case where no other worker arrives at our HIT within 3 minutes, we ask the worker to decide whether she is willing to complete all tasks in the HIT on her own (we then drop her data from the analysis but still pay her accordingly) or to keep waiting for another 3 minutes and receive a compensation of 5 cents for waiting. A simplified sketch of this matching logic is shown below.
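The sketch below is our simplified, single-threaded reconstruction of the matching logic described above; it is not the authors' actual server code, and names such as `waiting_pool` and `Worker` are hypothetical.

```python
import time
from dataclasses import dataclass, field

MAX_WAIT_SECONDS = 180  # the 3-minute waiting period

@dataclass
class Worker:
    worker_id: str
    treatment: int = 0
    tasks: list = field(default_factory=list)
    wait_deadline: float = 0.0

waiting_pool: list = []  # workers who do not have a co-worker yet

def on_worker_arrival(worker, assign_random_condition, beep, redirect_pair):
    if waiting_pool:
        partner = waiting_pool.pop(0)
        # The newcomer inherits the waiting worker's treatment and task order.
        worker.treatment, worker.tasks = partner.treatment, partner.tasks
        beep(partner)                   # prompt the waiting worker
        redirect_pair(partner, worker)  # both start task 1 simultaneously
    else:
        worker.treatment, worker.tasks = assign_random_condition()
        worker.wait_deadline = time.time() + MAX_WAIT_SECONDS
        waiting_pool.append(worker)

def on_wait_timeout(worker, wants_to_keep_waiting):
    # After 3 minutes: continue alone (data dropped from the analysis but
    # still paid) or wait another 3 minutes for a 5-cent compensation.
    if wants_to_keep_waiting(worker):
        worker.wait_deadline = time.time() + MAX_WAIT_SECONDS
    else:
        waiting_pool.remove(worker)
```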
For all types of tasks, we provide a base payment of 60 cents for the HIT. In addition to the base payment, workers are provided with the opportunity to earn performance-based bonuses; that is, workers can earn a bonus of 10 cents in a task if the final answer they submit for that task is correct. Our experiment HITs are open to U.S. workers only, and each worker is only allowed to take one HIT for each type of task.
Experimental Results
For the image labeling, OCR, and audio transcription tasks, we obtain data from 388, 382, and 250 workers through our experiments, respectively⁴. We then examine Hypotheses 1 and 2 separately for each type of task by analyzing experimental data collected from Sessions 1 and 2 of the HIT, respectively. It is important to note that in the experimental design phase, we decided not to include data collected from Session 3 of the HIT in our formal analyses. This is because workers in Session 3 of the two treatments differ from each other both in terms of whether they have communicated with other workers about the work in previous tasks and whether they can communicate with other workers in the current tasks, making it difficult to draw any causal conclusions on the effect of peer communication. However, as we will mention below, analyzing the data collected in Session 3 leads to observations that are consistent with our findings.
Work Quality Metrics
We evaluate work quality using the notion of error. Specifically, in the image labeling task, since workers can only submit binary labels (i.e., Siberian Husky or Malamute), the error is defined as the binary classification error: if a worker provides a correct label, the error is 0; otherwise, the error is 1. For the OCR and audio transcription tasks, since workers' answers and the ground truth answers are both strings, we define "error" as the edit distance between the worker's answer and the ground truth, divided by the number of characters in the ground truth. Naturally, in all types of tasks, a lower rate of error implies higher work quality.

Figure 5: Examine whether workers produce higher work quality in tasks with peer communication than in tasks without peer communication. In the "Independent" group, we calculate workers' average error rate in Session 1 of Treatment 1 (see Figure 2). In the "Peer Communication (Before Discussion)" group, we calculate workers' average error rate in Session 1 of Treatment 2, before they communicate with co-workers about the work (i.e., for their independent answers). In the "Peer Communication (After Discussion)" group, we calculate workers' average error rate in Session 1 of Treatment 2, after they communicate with co-workers about the work (i.e., for their final answers). Error bars indicate the mean ± one standard error.
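As a concrete illustration of these metrics, the following self-contained Python sketch (ours, not the authors' analysis code) computes the binary classification error and the normalized edit distance described above; it assumes a non-empty ground-truth string.

```python
def edit_distance(a, b):
    """Standard Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def error(answer, truth, task_type):
    if task_type == "image_labeling":
        return 0 if answer == truth else 1   # binary classification error
    # OCR / audio transcription: edit distance normalized by truth length
    return edit_distance(answer, truth) / len(truth)

print(error("Husky", "Malamute", "image_labeling"))  # 1
print(error("ABC123", "ABC128", "ocr"))              # 1/6 ≈ 0.167
```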
Work Quality Improves in Tasks with Peer Communication
We start by examining Hypothesis 1, comparing the work quality produced in Session 1 of the two treatments for each type of task. In Figure 5, we plot the average error rate for workers' final answers in Session 1 of Treatments 1 and 2 using white and black bars, respectively. Visually, it is clear that for all three types of tasks, the work quality is higher after workers communicate with others about the work than when workers complete the work on their own. We further conduct two-sample t-tests to check the statistical significance of the differences; the p-values for the image labeling, OCR, and audio transcription tasks are 2.42 × 10⁻⁴, 5.02 × 10⁻³, and 1.95 × 10⁻¹¹, respectively, suggesting that the improvement in work quality is statistically significant. Our experimental results thus support Hypothesis 1.
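For readers who wish to reproduce this style of analysis on their own data, a two-sample t-test on per-worker error rates can be run with SciPy as follows; the arrays hold placeholder values, not the actual experimental data.

```python
import numpy as np
from scipy import stats

# Hypothetical per-worker error rates in Session 1 (placeholders):
errors_treatment1 = np.array([0.5, 1.0, 0.0, 0.5, 1.0, 0.5])  # independent
errors_treatment2 = np.array([0.0, 0.5, 0.0, 0.0, 0.5, 0.0])  # peer communication

t_stat, p_value = stats.ttest_ind(errors_treatment1, errors_treatment2)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```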
Our consistent observations on the effectiveness of peer communication in enhancing the quality of crowdwork for various types of tasks indicate that enabling direct, synchronous, and free-style communication between pairs of workers who work on the same tasks might be a simple method for improving worker performance that can be easily adapted to different contexts. To further highlight the advantage of peer communication, we apply majority voting to aggregate the labels obtained during Session 1 of the image labeling tasks⁵, and the results are presented in Figure 6. The X-axis represents the number of workers from whom we elicit labels for each image, and the Y-axis represents the prediction error (averaged across all images) of the aggregate label decided by the majority voting rule. As we can see in the figure, the aggregation error using labels obtained from tasks with peer communication is substantially lower than the aggregation error using labels from independent work. Moreover, in independent tasks, a majority of workers provide incorrect labels for approximately 20% of the images (therefore, the prediction error converges to near 20%), while in tasks with peer communication, this aggregated error reduces to only around 10%. These results reaffirm the superior quality of data collected through tasks with peer communication.
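The aggregation procedure itself is straightforward; the following sketch (with toy data standing in for the collected labels) shows one way to estimate the majority-vote error as a function of the number of labels drawn per image.

```python
import random
from collections import Counter

def majority_vote(labels):
    """Return the most common label; break ties uniformly at random."""
    counts = Counter(labels)
    top = max(counts.values())
    return random.choice([l for l, c in counts.items() if c == top])

def aggregation_error(labels_per_image, truths, k, trials=1000):
    """Average error of majority vote over k randomly drawn labels per image."""
    err = 0.0
    for _ in range(trials):
        for labels, truth in zip(labels_per_image, truths):
            votes = random.sample(labels, k)  # requires k <= len(labels)
            err += majority_vote(votes) != truth
    return err / (trials * len(truths))

# Toy data (placeholders): per-image label pools and ground truths.
labels_per_image = [["H", "H", "M", "H", "M"], ["M", "M", "M", "H", "H"]]
truths = ["H", "M"]
for k in (1, 3, 5):
    print(k, aggregation_error(labels_per_image, truths, k))
```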
A natural question one may ask, then, is why work quality improves in tasks with peer communication. One possible contributing factor is social pressure; that is, workers may put in more effort, and thus produce higher work quality, simply because they know that they are working with a co-worker on the same task and are going to discuss the task with the co-worker. Another possibility is that constructive conversations between workers enable effective knowledge sharing and lead to the improvement in work quality. To better understand the roles of these two factors in influencing work quality in tasks with peer communication, we conduct a few additional analyses.
The impacts of social pressure. First, we look into whether workers behave differently when working on their independent answers in tasks with peer communication than when working on tasks without peer communication. Intuitively, if workers are affected by social pressure in tasks with peer communication, they may spend more time on the tasks and possibly produce work of higher quality even at the stage when they are asked to work on the tasks on their own, before communicating with their co-workers. Table 1 summarizes the amount of time workers spend on tasks in Session 1 of Treatment 1, and in Session 1 of Treatment 2 when they work on their independent answers. We find that, overall, knowing of the existence of a co-worker who works on the same task makes workers spend more time on the task on their own, though the differences are not always significant. In addition, we plot the average error rate for workers' independent answers in Session 1 of Treatment 2 as gray bars in Figure 5. Comparing the white and gray bars in Figure 5, we find that workers only improve the quality of their independent answers significantly in the audio transcription tasks when they know of the existence of a co-worker (p = 0.010). Together, these results imply that workers in tasks with peer communication might be affected by social pressure to some degree, but social pressure is likely not the major cause of the work quality improvement in tasks with peer communication.
The impacts of constructive conversations. Next, we examine whether the conversations between co-workers, by themselves, help workers in tasks with peer communication improve their work quality. We thus compare the quality of workers' independent answers before discussion (gray bars in Figure 5) and their final answers after discussion (black bars in Figure 5) in Session 1 of Treatment 2. We find that workers in tasks with peer communication submit final answers of higher quality after discussion than their independent answers before discussion. We further conduct paired t-tests on workers' error rates before and after discussion for tasks in Session 1 of Treatment 2, and the test results show that the difference is statistically significant for all three types of tasks (p = 5.09 × 10⁻⁴, 1.51 × 10⁻⁹, and 3.62 × 10⁻⁸ for the image labeling, OCR, and audio transcription tasks, respectively). In fact, we reach the same conclusion if we conduct a similar analysis of the work quality produced before and after discussions in Session 3 (i.e., the last two tasks) of Treatment 1. That is to say, communication between co-workers about the same piece of work consistently leads to a significant improvement in work quality.
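Because this before/after comparison is within-worker, the appropriate test is the paired variant of the t-test; a minimal sketch (again with placeholder data rather than the experimental observations) is:

```python
import numpy as np
from scipy import stats

# Hypothetical before/after-discussion error rates for the same workers
# (paired observations; placeholder values).
before = np.array([0.50, 1.00, 0.25, 0.40, 0.75])
after  = np.array([0.25, 0.50, 0.25, 0.20, 0.50])

t_stat, p_value = stats.ttest_rel(before, after)
print(f"paired t = {t_stat:.3f}, p = {p_value:.4f}")
```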
To gain some insight into what workers communicated with each other during the discussions, we show a few representative examples of chat logs in Figure 7. We find that workers are mostly engaged in constructive conversations in the discussions. In particular, workers not only try to explain to each other the reasons why they arrived at their independent answers and deliberate on whose answer is more convincing (as shown in the example for the image labeling tasks), but they also try to work on the tasks jointly (as shown in the example for the OCR tasks). Throughout the conversations, workers communicate their confidence in their answers (e.g., "I'm not sure after that...") as well as their strategies for solving the tasks (e.g., "he pronounces 'was' with a v-sound instead of the w-sound"). Note that much of the discussion shown in Figure 7 could hardly take place without allowing workers to directly interact and exchange information with each other in real time, which underscores the value of direct, synchronous, free-style interactions between workers in crowdwork.
To briefly summarize, we have consistently found that enabling peer communication among pairs of workers can enhance work quality for various types of tasks above the level of independent worker performance. This can be partly attributed to the social pressure introduced by the peer communication process, but is mostly due to the constructive conversations between workers about the work. These results indicate that introducing peer communication in crowdwork can be a simple, generalizable approach to enhancing work quality.
Effects of Peer Communication on Work Quality in Future Tasks
We now move on to examine our Hypothesis 2: compared to workers who have never been involved in peer communication, do workers who have participated in tasks with peer communication continue to produce work of higher quality in future tasks of the same type, even if they need to complete those tasks on their own? In other words, is there any "spillover effect" of peer communication on the quality of crowdwork, such that peer communication can be used as a "training" method to enhance workers' independent work quality in the future?
To answer this question, we compare the work quality produced in Session 2 (i.e., the middle two independent tasks) of the two treatments for all three types of tasks, and results are shown in Figure 8. As we can see in the figure, there are no significant differences in work quality between treatments, indicating that after participating in tasks with peer communication, workers are not able to maintain a higher level of quality when they complete tasks of the same type on their own in the future. Therefore, our observations in Session 2 of the two treatments do not support Hypothesis 2. To fully understand whether and when Hypothesis 2 can be supported, we continue to conduct a set of follow-up experiments, which we will describe in detail in the next section.
Experiment 2: When Does Peer Communication Affect Quality of Independent Work in Future Tasks?
The results of our previous experiment do not support Hypothesis 2; i.e., after participating in tasks with peer communication, workers do not produce work of higher quality in tasks of the same type when working independently. This is in contrast with the empirical findings on peer instruction in educational settings, even though the procedure of peer communication is adapted from peer instruction. We conjecture that two factors may have contributed to this observed difference.
First, for peer instruction, concepts covered in the post-tests (e.g., when students answer the test questions on their own after the instruction ends) are often the same as concepts discussed during the peer instruction process in class. Therefore, knowledge learned from the peer instruction process can be directly transferred to post-tests. This is not necessarily true for peer communication in crowdwork. For example, when workers are asked to complete a sequence of tasks to identify Siberian Huskies and Malamutes, it is possible that the distinguishing feature for the dog in one task is its eyes while the distinguishing feature for the dog in another task is its size, so the knowledge workers may have gained in tasks with peer communication is not always useful for future tasks that are somewhat unrelated.
In addition, as we have discussed in Section 2.2, compared to the standard peer instruction procedure, we remove the last step (see Figure 1), in which the requester provides expert feedback to workers after reviewing workers' final answers in the peer communication process⁶, due to the low availability of expert feedback. It is thus possible that workers' quality improvement in future independent work can only be obtained when additional expert feedback is provided after peer communication.
Therefore, in this section, we conduct an additional set of experiments to examine whether these two factors affect the effectiveness of peer communication as a tool for training workers, and thus seek a better understanding of whether and when peer communication can affect workers' independent performance in the future.
Experimental Tasks
In this study, we use nutrition analysis tasks, provided in the work by Burgermaster et al. [5], in our experiments. In each nutrition analysis task, we present a pair of photographs of mixed-ingredient meals to workers. Workers are asked to identify which meal in the pair contains a higher amount of a specific macronutrient (i.e., carbohydrate, fat, protein, etc.). To help workers figure out the main ingredients of the meals in each photograph, we also attach a textual description to each photograph. Figure 9a shows an example of a nutrition analysis task.
Figure 9: Example of a nutrition analysis task and the expert explanation associated with it. (a) Example of a nutrition analysis task. (b) Expert feedback for the above nutrition analysis task.
We choose the nutrition analysis tasks for two reasons. First, each nutrition analysis task is associated with a "topic," which is the key concept underlying the task. For example, the topic of the task shown in Figure 9a is that nuts (contained in peanut butter) are important sources of protein. Knowing the topic of each task, we can place tasks of the same topic in sequence and examine whether, after participating in tasks with peer communication, workers improve their independent work quality on related tasks (i.e., tasks that share the same underlying concept). Second, we have access to expert explanations for each nutrition analysis task (see Figure 9b for an example), which allows us to test whether peer communication has to be combined with expert feedback to influence workers' independent performance in future tasks.
We would like to note that the underlying concepts and expert feedback are often hard to obtain in crowdsourcing tasks, since requesters do not know the ground truth. The purpose of this follow-up study is to provide better insight into the conditions under which peer communication can be an effective tool for training workers.
Experimental Design
We explore whether peer communication can be used to train workers when combined with the two factors we discuss above: whether tasks are conceptually similar (i.e., whether future tasks are related to tasks where peer communication is enabled), and whether expert feedback is provided at the end of peer communication.
In particular, we aim to answer whether peer communication is effective in training workers when (a) tasks are conceptually similar but no expert feedback is given at the end of peer communication, (b) tasks are not conceptually similar but expert feedback is given at the end of peer communication, or (c) tasks are conceptually similar and expert feedback is given at the end of peer communication. For both the second and the third questions, if the answer is positive, a natural follow-up question is whether the improvement in independent work quality on future tasks should be attributed to the expert feedback or to the peer communication procedure.
Corresponding to these three questions, we design three sets of experiments. All the experiments share the same structure as the experiments we have designed in the previous section. That is, we include two treatments in each experiment, where Treatment 1 contains 4 independent nutrition analysis tasks followed by 2 discussion tasks and Treatment 2 contains 2 discussion tasks followed by 4 independent tasks. We highlight the differences in the design of these three experiments in the following.
Experiment 2a
Unlike in the experiments of Section 3, tasks within a HIT in this experiment are not randomly selected. Instead, for both treatments in this experiment, tasks in Session 1 are randomly picked, while tasks in Session 2 are selected from those that share the same topics as the tasks in Session 1 (a sketch of this selection procedure is given below). This experiment is designed to examine whether peer communication can lead to better independent work quality on related tasks in the future. Naturally, if workers achieve better performance in Session 2 of Treatment 2 compared to Session 2 of Treatment 1, we may conclude that workers can improve their independent work quality after participating in tasks with peer communication, but only for related tasks that share similar concepts with the tasks they have discussed with other workers.
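The following minimal sketch illustrates the topic-matched task selection. The data structures and the toy pool are hypothetical; the actual task database from [5] is not reproduced here, and the sketch assumes each topic has enough tasks to supply both sessions.

```python
import random

def build_topic_matched_hit(task_pool, n=2):
    """Pick n random Session 1 tasks, then one same-topic match for each."""
    session1 = random.sample(task_pool, n)
    used = list(session1)
    session2 = []
    for task in session1:
        # Pick an unused task that shares this task's topic.
        candidates = [t for t in task_pool
                      if t["topic"] == task["topic"] and t not in used]
        match = random.choice(candidates)
        session2.append(match)
        used.append(match)
    return session1, session2

# Toy pool: four tasks per topic (placeholder topics).
pool = [{"id": 4 * i + j, "topic": topic}
        for i, topic in enumerate(["nuts/protein", "dairy/fat", "grains/carbs"])
        for j in range(4)]
s1, s2 = build_topic_matched_hit(pool)
print([t["topic"] for t in s1], [t["topic"] for t in s2])
```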
Experiment 2b
The main difference between this experiment and the experiments of Section 3 is the presence of expert feedback. Specifically, for all discussion tasks in this experiment, after workers submit their final answers, we display extra information to workers, which includes feedback on whether the worker's final answer is correct and an expert explanation of why the worker's answer is correct or wrong (see Figure 9b for an example). Workers are asked to spend at least 15 seconds reading this information before they can proceed to the next page in the HIT. Note that in this experiment, the tasks included in a HIT are randomly selected. Comparing workers' performance in Session 2 of the two treatments in this experiment thus informs us whether adding expert feedback at the end of the peer communication procedure leads to improvement in independent work quality on future tasks, which may or may not be related to the tasks for which peer communication is enabled.
Experiment 2c
Our final experiment is the same as Experiment 2b, except for one small difference: tasks included in Session 2 have the same topics as tasks in Session 1. This experiment then allows us to examine whether workers' independent work quality improves after they participate in tasks with peer communication, when expert feedback is combined with peer communication and future tasks are related to the tasks with peer communication.
Our experiments are open to U.S. workers only, and each worker is allowed to participate in only one experiment.
Experimental Results
In total, 386, 432, and 334 workers participated in Experiments 2a, 2b, and 2c, respectively. Figure 10 shows the comparison of work quality between the two treatments for all three experiments.
First, we notice that in all three experiments, there are significant differences in work quality between the two treatments for tasks in Session 1, which reaffirms our finding that workers significantly improve their performance in tasks with peer communication compared to when they work on the tasks by themselves (p-values for two-sample t-tests on workers' error rates in Session 1 are 4.779 × 10⁻⁴, 0.005, and 0.007 for Experiments 2a, 2b, and 2c, respectively).
Furthermore, for tasks in Session 2, we find that workers do not exhibit much difference in work quality between the two treatments in Experiment 2a or 2b (p-values for two-sample t-tests on workers' error rates in Session 2 are 0.844 and 0.384 for Experiments 2a and 2b, respectively), but there is a significant difference in work quality in Session 2 between the two treatments in Experiment 2c (p = 0.019). Together with our findings in Section 3.5.3, these results imply that simply enabling direct communication between pairs of workers who work on the same microtasks does not help workers improve their independent work quality in future tasks, regardless of whether those future tasks share related concepts with the tasks they have discussed with co-workers. In addition, simply providing expert feedback after the peer communication procedure cannot enhance workers' future independent performance on randomly selected tasks of the same type, either. Nevertheless, it appears that peer communication, when combined with expert feedback, can lead to improved independent work quality on future tasks that are conceptually related to the tasks where peer communication takes place.
One may wonder why peer communication, by itself, can hardly influence workers' independent work quality in future related tasks (i.e., the results of Experiment 2a), but can influence workers' independent work quality when combined with expert feedback (i.e., the results of Experiment 2c). We offer two explanations. First, by looking into the chat logs, we find that while workers often discover the underlying concept of a task through the peer communication procedure, their understanding of that concept is often context-specific and not always generalizable to a different context. For example, given two tasks that share the topic "nuts are important sources of protein," a pair of workers might have successfully concluded in one task, through discussion, that the peanut butter in one meal has more protein than the banana in the other meal. However, when they are asked to complete the related task on their own, they face a choice between a meal with peanut butter and a meal with cream cheese, for which the knowledge about peanut butter that they learned previously is not entirely applicable. In other words, workers may lack the ability to extract generalizable, transferable knowledge from concrete tasks during a short period of peer communication (e.g., within 2 microtasks).
Perhaps more importantly, we argue that the improvement in workers' independent work quality after participating in tasks with peer communication and expert feedback is largely due to the expert feedback, rather than the peer communication procedure. To show this, we conduct a follow-up experiment with the same two-treatment design as Experiment 2c, except that we provide expert feedback on the first two independent tasks of Treatment 1. In this way, by comparing the work quality produced in Session 2 of the two treatments, we can determine whether the peer communication procedure provides any additional boost to workers' independent performance beyond the improvement brought about by expert feedback. Our experimental results on 494 workers give a negative answer: on average, workers' error rates in Session 2 of Treatments 1 and 2 are 22.7% and 27.3%, respectively, and the difference is not statistically significant (p = 0.093).
Overall, our examination of the effects of peer communication on workers' independent work quality in future tasks suggests a limited impact. In other words, peer communication may not be a very effective approach to "train" workers towards a higher level of independent performance, at least when workers work on microtasks for a relatively short period of time.
Discussion
In this paper, we have studied the effects of direct interactions between workers in crowdwork. In particular, we have explored whether introducing peer communication in tasks can enhance work quality in those tasks as well as improve workers' independent work performance in future tasks of the same type. Our results indicate a robust improvement in work quality when pairs of workers can directly communicate with each other, and this improvement is consistently observed across different types of tasks. On the other hand, we also find that allowing workers to communicate with each other in some tasks may have limited impact on improving workers' independent work performance in tasks of the same type in the future.
Design Implications
Our consistent observations on the improvement of work quality in tasks with peer communication suggest an alternative way of organizing microtasks in crowdwork: instead of having workers solve microtasks independently, practitioners may consider systematically organizing crowd workers to work in pairs and enabling direct, synchronous, and free-style interactions between pairs of workers to enhance the quality of crowdwork. In some sense, our results suggest the promise and potential benefits of "working in pairs" as a new baseline approach to organizing crowdwork. On the other hand, introducing peer communication in crowdwork also creates the complexity of synchronizing the work pace of different workers. Thus, practitioners may need to carefully weigh the trade-off between the quality improvement brought about by peer communication and the extra cost of synchronization before implementing a specific way to organize their crowdwork.
It is worthwhile to mention that while our experimental results show the advantage of introducing peer communication in crowdwork for many different types of tasks, we cannot rule out the possibility that for some specific types of tasks, peer communication may not be helpful or may even be harmful. Previous studies have reported phenomena such as groupthink [19], where communication may actually hurt individual performance. Therefore, more experimental research is needed to thoroughly understand the relationship between task properties and whether peer communication would be helpful for those tasks.
Limitations
While our results are overall robust and consistent, our specific experimental design and choice of tasks imply a few limitations. First, our experiments only span a short period of time (i.e., six microtasks), and workers can only communicate with each other in two microtasks. This short period of interaction could be a bottleneck for workers to really learn the underlying concepts or knowledge that is key to improving their independent performance. Indeed, in educational settings, students are often involved in a course that spans hours or even months, so their improved learning performance in courses with peer instruction could be attributed to repeated exposure to the peer instruction process. In this sense, our observation that peer communication is not an effective tool for training workers could simply be due to these short interactions. Thus, exploring the long-term impacts of peer communication on crowdwork is an interesting future direction.
Moreover, our current experiments focus exclusively on microtasks, so it is unclear whether the results observed in this study generalize to more complex tasks. In particular, much previous work has shown that implicit worker interactions, in the form of workers receiving feedback from other workers or reviewing other workers' output, can be an effective method for training workers towards better independent performance. We conjecture that our conclusion that peer communication is not a very effective training method is somewhat limited by the nature of microtasks, and examining the effectiveness of peer communication for more complex tasks is a direction that warrants further study.
Future Work
In addition to the interesting future directions we have discussed above, there are a few more topics that we are particularly interested in exploring.
First, our current study focuses on peer communication between pairs of workers in crowdwork. Can our approach generalize to interactions involving more than two workers? How should we deal with additional complexities, such as social loafing [27], when more than two workers are involved in the communication? It would be interesting to explore the roles of communication in multi-worker collaborations for crowdwork.
Second, in this study, we focused on implementing the component of worker interactions. However, in peer instruction in education, instructor intervention has a big impact on student learning. It is natural to ask: can we further improve the quality of crowdwork through requester intervention? For example, if most workers already agree on an answer, there is a good chance the answer is correct, and the requester can therefore intervene and skip the discussion phase to improve efficiency. In general, can the requester further improve the quality of work by dynamically intervening in the peer communication process, e.g., by deciding whether a discussion is needed or even modifying the pairing of workers based on previous discussions?
Conclusion
In this paper, we have explored how introducing peer communication (direct, synchronous, free-style interactions between workers) affects crowdwork. In particular, we adopt the workflow of peer instruction in educational settings and examine the effects of one-to-one interactions between pairs of workers working on the same microtasks. Experiments on Amazon Mechanical Turk demonstrate that adopting peer communication significantly increases the quality of crowdwork over the level of independent worker performance, and this performance improvement is robust across different types of tasks. On the other hand, we find that participating in tasks with peer communication leads to improvement in workers' independent work on future tasks of the same type only if expert feedback is provided at the end of the peer communication procedure and the future tasks are conceptually related to the tasks where peer communication takes place. However, the improvement is likely caused by the expert feedback rather than by peer communication. Overall, these results suggest that peer communication, by itself, may not be an effective method to train workers towards better performance, at least for typical microtasks on crowdsourcing platforms.
Figure 1: Questioning procedure of peer instruction.

Figure 3: The instruction of the image labeling task.

Figure 4: Examples of photos used in the OCR task.

Figure 6: The aggregation error for the image labeling task after using majority voting for aggregation.

Figure 7: Examples of chat logs.

Figure 8: Examine whether the work quality in future tasks of the same type increases after workers participate in tasks with peer communication. In the "No Peer Communication" group, we calculate workers' average error rate in Session 2 of Treatment 1. In the "After Peer Communication" group, we calculate workers' average error rate in Session 2 of Treatment 2. Error bars indicate the mean ± one standard error.

Figure 10: Comparison of workers' average error rates in the three sets of experiments in which we examine whether workers improve the quality of independent work after they participate in tasks with peer communication.
Footnotes

1. In practice, if most of the students answer the question correctly on their own, the instructor can decide to skip the discussion phase and move on to the next concept.
2. HIT stands for Human Intelligence Task, and it refers to one unit of job on MTurk that a worker can accept to work on.
3. http://www.voxforge.org
4. We targeted recruiting around 200 workers for each treatment, leading to about 400 workers for each experiment. However, we encountered difficulties reaching the targeted number of workers for the audio transcription tasks, probably because workers consider the payment not high enough for audio transcription tasks (we fix the payment magnitude to be the same across the three types of tasks).
5. Since there is no straightforward way to aggregate workers' answers in the other two types of tasks, we only perform the aggregation for the image labeling tasks.
6. This step would be equivalent to the instructor reviewing students' final responses and providing more explanation as needed in peer instruction.
Acknowledgments

We thank all the crowd workers who participated in the experiments to make this work possible.
References

[1] Luis von Ahn. Games with a purpose. Computer, 39(6):92-94, June 2006.
[2] Ashton Anderson, Daniel Huttenlocher, Jon Kleinberg, and Jure Leskovec. Steering user behavior with badges. In Proceedings of the 22nd International Conference on World Wide Web (WWW), 2013.
[3] Michael S. Bernstein, Joel Brandt, Robert C. Miller, and David R. Karger. Crowds in two seconds: Enabling realtime crowd-powered interfaces. In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology (UIST), 2011.
[4] Jeffrey P. Bigham, Chandrika Jayant, Hanjie Ji, Greg Little, Andrew Miller, Robert C. Miller, Robin Miller, Aubrey Tatarowicz, Brandyn White, Samual White, and Tom Yeh. VizWiz: Nearly real-time answers to visual questions. In Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology (UIST), 2010.
[5] Marissa Burgermaster, Krzysztof Z. Gajos, Patricia Davidson, and Lena Mamykina. The role of explanations in casual observational learning about nutrition. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI), pages 4097-4145. ACM, 2017.
[6] Joseph Chee Chang, Saleema Amershi, and Ece Kamar. Revolt: Collaborative crowdsourcing for labeling machine learning datasets. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI), 2017.
[7] S. R. Cholleti, S. A. Goldman, A. Blum, D. G. Politte, and S. Don. Veritas: Combining expert opinions without labeled data. In Proceedings of the 20th IEEE International Conference on Tools with Artificial Intelligence, 2008.
[8] Catherine Crouch and Eric Mazur. Peer instruction: Ten years of experience and results. American Journal of Physics, 69(9):970-977, September 2001.
[9] Peng Dai, Christopher H. Lin, Mausam, and Daniel S. Weld. POMDP-based control of workflows for crowdsourcing. Artificial Intelligence, 202(1):52-85, September 2013.
[10] A. P. Dawid and A. M. Skene. Maximum likelihood estimation of observer error-rates using the EM algorithm. Applied Statistics, 28:20-28, 1979.
[11] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society: Series B, 39:1-38, 1977.
[12] Shayan Doroudi, Ece Kamar, Emma Brunskill, and Eric Horvitz. Toward a learning science for complex crowdsourcing tasks. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI), 2016.
[13] Ryan Drapeau, Lydia B. Chilton, Jonathan Bragg, and Daniel S. Weld. MicroTalk: Using argumentation to improve crowdsourcing accuracy. In Fourth AAAI Conference on Human Computation and Crowdsourcing (HCOMP), 2016.
[14] Adam P. Fagen, Catherine H. Crouch, and Eric Mazur. Peer instruction: Results from a range of classrooms. The Physics Teacher, 40(4):206-209, 2002.
[15] C. Ho, S. Jabbari, and J. W. Vaughan. Adaptive task assignment for crowdsourced classification. In The 30th International Conference on Machine Learning (ICML), 2013.
[16] Chien-Ju Ho, Aleksandrs Slivkins, Siddharth Suri, and Jennifer Wortman Vaughan. Incentivizing high quality crowdwork. In Proceedings of the 24th International Conference on World Wide Web (WWW), 2015.
[17] John Joseph Horton and Lydia B. Chilton. The labor economics of paid crowdsourcing. In Proceedings of the 11th ACM Conference on Electronic Commerce (EC), 2010.
[18] S. Jain, Y. Chen, and D. C. Parkes. Designing incentives for online question and answer forums. In Proceedings of the 10th ACM Conference on Electronic Commerce (EC), 2009.
[19] I. L. Janis. Groupthink: Psychological Studies of Policy Decisions and Fiascoes. Houghton Mifflin, 1982.
[20] R. Jin and Z. Ghahramani. Learning with multiple labels. In Advances in Neural Information Processing Systems (NIPS), 2003.
[21] D. R. Karger, S. Oh, and D. Shah. Iterative learning for reliable crowdsourcing systems. In The 25th Annual Conference on Neural Information Processing Systems (NIPS), 2011.
[22] D. R. Karger, S. Oh, and D. Shah. Budget-optimal crowdsourcing using low-rank matrix approximations. In Proceedings of the 49th Annual Allerton Conference on Communication, Control, and Computing, 2011.
[23] Aditya Khosla, Nityananda Jayadevaprakash, Bangpeng Yao, and Li Fei-Fei. Novel dataset for fine-grained image categorization. In First Workshop on Fine-Grained Visual Categorization, IEEE Conference on Computer Vision and Pattern Recognition, Colorado Springs, CO, June 2011.
[24] Aniket Kittur, Boris Smus, Susheel Khamkar, and Robert E. Kraut. CrowdForge: Crowdsourcing complex work. In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology (UIST), 2011.
[25] Anand Kulkarni, Matthew Can, and Björn Hartmann. Collaboratively crowdsourcing workflows with Turkomatic. In Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work (CSCW), 2012.
[26] Nathaniel Lasry, Eric Mazur, and Jessica Watkins. Peer instruction: From Harvard to the two-year college. American Journal of Physics, 76(11):1066-1069, 2008.
[27] Bibb Latané, Kipling Williams, and Stephen Harkins. Many hands make light the work: The causes and consequences of social loafing. Journal of Personality and Social Psychology, 37(6):822-832, 1979.
[28] Edith Law, Ming Yin, Joslin Goh, Kevin Chen, Michael A. Terry, and Krzysztof Z. Gajos. Curiosity killed the cat, but makes crowdwork better. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI), 2016.
[29] Greg Little, Lydia B. Chilton, Max Goldman, and Robert C. Miller. TurKit: Human computation algorithms on Mechanical Turk. In Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology (UIST), 2010.
[30] Winter Mason and Duncan Watts. Financial incentives and the "performance of crowds". In Proceedings of the 1st Human Computation Workshop (HCOMP), 2009.
[31] Eric Mazur. Peer instruction. In Peer Instruction, pages 9-19. Springer, 2017.
[32] Vlad Niculae and Cristian Danescu-Niculescu-Mizil. Conversational markers of constructive discussions. In Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2016.
[33] Jon Noronha, Eric Hysen, Haoqi Zhang, and Krzysztof Z. Gajos. PlateMate: Crowdsourcing nutritional analysis from food photographs. In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology (UIST), 2011.
[34] Leo Porter, Cynthia Bailey Lee, Beth Simon, and Daniel Zingaro. Peer instruction: Do students really learn from peer discussion in computing? In Proceedings of the Seventh International Workshop on Computing Education Research, 2011.
[35] V. Raykar, S. Yu, L. Zhao, G. Valadez, C. Florin, L. Bogoni, and L. Moy. Learning from crowds. Journal of Machine Learning Research, 11:1297-1322, 2010.
[36] Daniela Retelny, Sébastien Robaszkiewicz, Alexandra To, Walter S. Lasecki, Jay Patel, Negar Rahmati, Tulsee Doshi, Melissa Valentine, and Michael S. Bernstein. Expert crowdsourcing with flash teams. In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology (UIST), 2014.
[37] Jakob Rogstadius, Vassilis Kostakos, Aniket Kittur, Boris Smus, Jim Laredo, and Maja Vukovic. An assessment of intrinsic and extrinsic motivation on task performance in crowdsourcing markets. In 5th International AAAI Conference on Weblogs and Social Media (ICWSM).
[38] Nihar Bhadresh Shah and Denny Zhou. Double or nothing: Multiplicative incentive mechanisms for crowdsourcing. In The 29th Annual Conference on Neural Information Processing Systems (NIPS), 2015.
[39] Aaron D. Shaw, John J. Horton, and Daniel L. Chen. Designing incentives for inexpert human raters. In Proceedings of the ACM 2011 Conference on Computer Supported Cooperative Work (CSCW), 2011.
[40] J. Whitehill, P. Ruvolo, T. Wu, J. Bergsma, and J. Movellan. Whose vote should count more: Optimal integration of labels from labelers of unknown expertise. In Advances in Neural Information Processing Systems (NIPS), 2009.
[41] Ming Yin, Yiling Chen, and Yu-An Sun. The effects of performance-contingent financial incentives in online labor markets. In Proceedings of the 27th AAAI Conference on Artificial Intelligence (AAAI), 2013.
[42] Haiyi Zhu, Steven P. Dow, Robert E. Kraut, and Aniket Kittur. Reviewing versus doing: Learning and performance in crowd assessment. In Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing (CSCW), 2014.
| [] |
[
"Mathematical Physics Trading Locality for Time: Certifiable Randomness from Low-Depth Circuits",
"Mathematical Physics Trading Locality for Time: Certifiable Randomness from Low-Depth Circuits"
] | [
"Matthew Coudron [email protected] \nNational Institute of Standards and Technology/QuICS\nUniversity of Maryland\nCollege ParkUSA\n",
"Jalex Stark \nUniversity of California Berkeley\nBerkeleyUSA\n",
"Thomas Vidick [email protected] \nCalifornia Institute of Technology\nPasadenaUSA\n"
] | [
"National Institute of Standards and Technology/QuICS\nUniversity of Maryland\nCollege ParkUSA",
"University of California Berkeley\nBerkeleyUSA",
"California Institute of Technology\nPasadenaUSA"
] | [
"Commun. Math. Phys"
] | The generation of certifiable randomness is the most fundamental informationtheoretic task that meaningfully separates quantum devices from their classical counterparts. We propose a protocol for exponential certified randomness expansion using a single quantum device. The protocol calls for the device to implement a simple quantum circuit of constant depth on a 2D lattice of qubits. The output of the circuit can be verified classically in linear time, and is guaranteed to contain a polynomial number of certified random bits assuming that the device used to generate the output operated using a (classical or quantum) circuit of sub-logarithmic depth. This assumption contrasts with the locality assumption used for randomness certification based on Bell inequality violation and more recent proposals for randomness certification based on computational assumptions. Furthermore, to demonstrate randomness generation it is sufficient for a device to sample from the ideal output distribution within constant statistical distance. Our procedure is inspired by recent work of Bravyi et al. (Science 362(6412):308-311, 2018), who introduced a relational problem that can be solved by a constant-depth quantum circuit, but provably cannot be solved by any classical circuit of sub-logarithmic depth. We develop the discovery of Bravyi et al. into a framework for robust randomness expansion. Our results lead to a new proposal for a demonstrated quantum advantage that has some advantages compared to existing proposals. First, our proposal does not rest on any complexity-theoretic conjectures, but relies on the physical assumption that the adversarial device being tested implements a circuit of sub-logarithmic depth. Second, success on our task can be easily verified in classical linear time. Finally, our task is more noise-tolerant than most other existing proposals that can only tolerate multiplicative error, or require additional conjectures from complexity theory; in contrast, we are able to allow a small constant additive error in total variation distance between the sampled and ideal distributions. | 10.1007/s00220-021-03963-w | null | 53,588,712 | 1810.04233 | 03d79694fa1471146be3b8679873fb36279ff328 |
Mathematical Physics Trading Locality for Time: Certifiable Randomness from Low-Depth Circuits
2021
Matthew Coudron [email protected]
National Institute of Standards and Technology/QuICS
University of Maryland
College ParkUSA
Jalex Stark
University of California Berkeley
BerkeleyUSA
Thomas Vidick [email protected]
California Institute of Technology
PasadenaUSA
Mathematical Physics Trading Locality for Time: Certifiable Randomness from Low-Depth Circuits
Commun. Math. Phys. 382 (2021). DOI 10.1007/s00220-021-03963-w. Received: 23 January 2019 / Accepted: 15 January 2021 / Published online: 9 February 2021. © The Author(s) 2021.
The generation of certifiable randomness is the most fundamental information-theoretic task that meaningfully separates quantum devices from their classical counterparts. We propose a protocol for exponential certified randomness expansion using a single quantum device. The protocol calls for the device to implement a simple quantum circuit of constant depth on a 2D lattice of qubits. The output of the circuit can be verified classically in linear time, and is guaranteed to contain a polynomial number of certified random bits assuming that the device used to generate the output operated using a (classical or quantum) circuit of sub-logarithmic depth. This assumption contrasts with the locality assumption used for randomness certification based on Bell inequality violation and more recent proposals for randomness certification based on computational assumptions. Furthermore, to demonstrate randomness generation it is sufficient for a device to sample from the ideal output distribution within constant statistical distance. Our procedure is inspired by recent work of Bravyi et al. (Science 362(6412):308-311, 2018), who introduced a relational problem that can be solved by a constant-depth quantum circuit, but provably cannot be solved by any classical circuit of sub-logarithmic depth. We develop the discovery of Bravyi et al. into a framework for robust randomness expansion. Our results lead to a new proposal for a demonstrated quantum advantage that has some advantages compared to existing proposals. First, our proposal does not rest on any complexity-theoretic conjectures, but relies on the physical assumption that the adversarial device being tested implements a circuit of sub-logarithmic depth. Second, success on our task can be easily verified in classical linear time. Finally, our task is more noise-tolerant than most other existing proposals that can only tolerate multiplicative error, or require additional conjectures from complexity theory; in contrast, we are able to allow a small constant additive error in total variation distance between the sampled and ideal distributions.
Introduction
A fundamental point of departure between quantum mechanics and classical theory is that the former is non-deterministic: quantum mechanics, through the Born rule, posits the existence of experiments that generate intrinsic randomness. This observation leads to the simplest and most successful "test of quantumness" to have been designed and implemented: the Bell test [Bel64]. Far beyond its role as a test of the foundations of quantum mechanics, the Bell test has become a fundamental building block in quantum information, from protocols for quantum cryptography (e.g. device-independent quantum key distribution [Eke91,VV14]) to complexity theory (e.g. delegated quantum computation [RUV13], multiprover interactive proof systems [CHTW04]) and much more [BCP+14]. Yet, while a loophole-free implementation of a Bell test has been demonstrated [HBD+15,GVW+15,SMSC+15] it remains a challenging experimental feat, which unfortunately leaves its promising applications wanting (here "loophole-free" refers to a stringent set of experimental standards which ensure that all required assumptions have been verified "beyond reasonable doubt"). The increasingly powerful quantum devices that are being experimentally realized tend to be single-chip, and do not have the ability to implement loophole-free Bell tests. The task of devising convincing "tests of quantumness" for such devices is challenging.
Until recently the only proposal for such tests was the design of so-called "quantum supremacy experiments" [HM17], which specify classical sampling tasks that can in principle be implemented by a mid-scale quantum device, but cannot be simulated by any efficient classical randomized algorithm (under somewhat standard computational assumptions [AC17,HM17]). These proposals share a number of well-recognized limitations. Firstly, while the sampling part of the process can be done efficiently on a quantum computer, verifying that the quantum computer is sampling from a hard distribution requires a computational effort which scales exponentially in the number of qubits. Secondly, their experimental realization is hindered by a generally poor tolerance to errors in the implementation, which is compounded by the necessity to implement circuits with relatively large (say, at least √N for an N × N grid) depth. Combined with the resort to complexity-theoretic assumptions for which there is little guidance in terms of concrete parameter settings (see however [DHKLP18]), this has led to an ongoing race in efficient simulations [CZX+18,HNS18,MFIB18]. Indeed, the proposals operate in a limited computational regime, requiring a machine with, say, at least 50 qubits (to prevent direct classical simulation) but at most 70 qubits (so that verification can be performed in a reasonable amount of time), leaving open the question of what to do with a device with more than, say, 100 qubits. At a more conceptual level, the proposals are based on computational tasks that appear arbitrary (such as the implementation of a random quantum circuit from a certain class). In particular, they do not lead to any further characterization of the successful device, that could be used to e.g. build a secure delegation protocol or even simply certify a simple property such as the preparation of a specific quantum state or the implementation of a certain class of measurements.
We propose a different kind of experiment, or "test of quantumness", for large but noisy quantum devices, that is inspired by recent work of Bravyi et al. on the power of low-depth quantum circuits [BGK18]. Our test is applicable in a regime where the device has a large number of qubits, but may only have the ability to implement circuits of low (constant) depth, due e.g. to a limited gate fidelity. We argue that the test overcomes the main limitations outlined above: it generates useful outcomes (certifiably random bits), it is easily verifiable (in classical linear time), and it is robust to a small amount of error (it is sufficient to generate outputs within constant statistical distance from the ideal distribution). The test does not require any assumption from complexity theory, but instead considers a novel physical assumption (introduced in [BGK18]): that the device implements a circuit whose depth is at most a small constant times the logarithm of the number of qubits. Intuitively, this assumption trades off locality (as required by the Bell test) for time (as measured by circuit depth). It is particularly well-suited to quantum devices for which the number of qubits can be made quite large, but the gate fidelity remains low, limiting the depth of a circuit that can be implemented. Informally, we show the following.

Theorem 1.1 (informal). For every integer N there is a family of relations R_N, a family of efficiently sampleable input distributions D_N, and a constant-depth quantum circuit on an N × N grid of qubits that, on input x ∼ D_N, returns an output y such that (x, y) ∈ R_N with probability 1; moreover, any circuit of sub-logarithmic depth whose outputs satisfy the relation with sufficiently large probability on x ∼ D_N must produce outputs that contain a polynomial number of bits of min-entropy.

We refer to Theorem 6.7 for a precise statement. In particular, the output entropy is quantified using the quantum conditional min-entropy, conditioned on the inputs to the circuit and quantum side information that may be correlated with the initial state of the circuit. In this sense it is guaranteed that outputs generated by the circuit contain "genuine" randomness, that is independent from the circuit inputs and uncorrelated from an eavesdropper that may hold a quantum state entangled with the state of any ancilla qubits on which the circuit operates. Thus our construction achieves both randomness certification (outputs of the circuit have entropy as long as a simple test is passed with sufficiently large probability) and expansion (outputs of the circuit have entropy even conditioned on the inputs).
Theorem 6.7 provides a "test of quantumness" in the sense that given a family of circuits C_N that map N² bits to N² bits and satisfy (x, C_N(x)) ∈ R_N with sufficiently large probability when x ∼ D_N, it must be that either the circuit is non-classical, or it has at least logarithmic depth. The requirement for succeeding in our test is more robust than most existing proposals, which require the output distribution to be multiplicatively close to the target distribution. In contrast, in our case it is sufficient to hit a certain target (the relation R, which itself is very permissive) an inverse polynomial fraction of the time! The downside is that the condition that the circuit has small depth may be difficult to certify in practice; we discuss the use of timing assumptions below.
Aside from the application to randomness expansion, Theorem 1.1 strengthens the main result of Bravyi et al. [BGK18] in multiple ways. In the first version of their work that was publicized [BGK17] Bravyi et al. provide a relation such that for any classical circuit of sufficiently low depth, there exists an input such that the circuit must return an output that satisfies the relation with probability bounded away from 1. In contrast, we point out the existence of an efficiently sampleable distribution on inputs such that, for any classical low-depth circuit, we know that on average over the choice of an input the circuit returns an output that satisfies the relation with at most small probability. This improvement follows using a simple extension of the arguments in [BGK17], and in fact a similar improvement was independently derived by Bravyi et al. in the final version of their paper [BGK18]. In addition to this improvement, which is key to the practical relevance of the scheme, we address the following question left open in [BGK18]: how small can one drive the maximum success probability of any classical low-depth circuit (i.e. the "soundness")?
Theorem 1.2 (informal). There is a constant c > 0, a family of relations R, and a family of input distributions D_N, sampleable in classical randomized polynomial time, such that for any classical circuit C_N of sub-logarithmic depth, the probability that (x, C_N(x)) ∈ R for x ∼ D_N is O(exp(−N^c)).
Note that the strengthened statement on the possibility for low-depth classical circuits to generate outputs that satisfy the right relation comes at the cost that in Theorem 1.2 it is no longer the case that it is possible to sample from D using poly-logarithmically many bits. Nevertheless, it remains possible to sample from D in classical randomized polynomial time, which is crucial for an experimental demonstration. In this case the improvement in the soundness guarantee is significant, since it allows the test provided by the relation R to be used as a means to distinguish relatively noisy quantum circuits of depth 3, where by "noisy" we mean circuits that would satisfy (x, D(x)) ∈ R with probability that is e.g. inverse-polynomial, instead of 1 for a perfect implementation, from classical circuits of logarithmic depth, which may achieve (x, D(x)) ∈ R with probability that is exponentially small at best.

Discussion. We comment on the depth assumption that underlies our results, and their potential for a practical demonstration of a quantum advantage (a.k.a. "quantum supremacy experiment"). The quantum circuit required for a successful implementation of our task is relatively straightforward to implement. It can be realized in three phases. A first, offline phase initializes EPR pairs (or three-qubit GHZ states) between nearest-neighbor qubits on a 2D grid. In a second phase, each qubit is provided an input, according to which either the qubit should be measured according to a single-qubit Pauli observable, or the qubit and one of its neighbors should be measured in the Bell basis. Finally, in the third phase the measurement outcomes are aggregated and verified using a simple classical linear-time computation.
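Schematically, and only as a sketch of the protocol flow (the helper names device.prepare_entangled_layer, device.measure_sites and verify are hypothetical placeholders we introduce for illustration, not an API defined in the paper), the three phases look as follows:

def run_test(device, sample_inputs, verify):
    device.prepare_entangled_layer()   # phase 1 (offline): nearest-neighbor EPR / GHZ states
    x = sample_inputs()                # one classical input per qubit of the grid
    a = device.measure_sites(x)        # phase 2: single-qubit Pauli or two-qubit Bell measurements
    return verify(x, a)                # phase 3: classical linear-time check of the relation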
In order to demonstrate a quantum advantage, the crucial requirement is that the second phase should be implemented using a procedure that is "certified" to have low depth. Since this is a physical assumption, it can never be rigorously proven. Nevertheless, it is possible to imagine experiments under which the assumption would hold "beyond reasonable doubt". We describe two such experiments.
In a first scenario, the verification of the depth constraint could be based on a calculation that takes into account state-of-the-art clock speeds. The fastest classical processors operate at speeds of order 1 GHz, so that for an integer N, a circuit of depth d = log(N) takes time of order 10^{-9} log(N) seconds to implement. In contrast, current gate times for, say, ion-trap quantum computers are of order 100 nanoseconds [SBT+18], meaning that the quantum circuit realizing our task could be implemented in time roughly 10^{-7} seconds. To observe a quantum advantage it is thus necessary to ensure log(N) ≳ 10², leading to an impractical circuit size. However, a reasonable factor-10 improvement in the gate time for quantum gates could enable a demonstration based on a grid of order 2^{10} × 2^{10} qubits. Although far beyond current capabilities, the number is not beyond reach. Keeping in mind the extreme simplicity of the task to be implemented, it is not unreasonable to hope that such circuits may exist within 5-10 years.
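Spelling out the arithmetic behind these numbers (taking the figures quoted above at face value):

$10^{-9}\,\mathrm{s} \cdot \log N \ \ge\ 10^{-7}\,\mathrm{s} \ \Longrightarrow\ \log N \gtrsim 10^{2},$

whereas a tenfold improvement in quantum gate times (total quantum running time ≈ 10^{-8} s) relaxes this to log N ≳ 10, i.e. a grid of about 2^{10} × 2^{10} qubits.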
In the previous scenario we allowed both the classical and quantum procedures solving our task to do so in a highly localized, single-chip fashion. The distributed nature of the task lends itself well to another type of implementation, that would be more demanding for a classical adversarial behavior, and may thus lead to a more practical demonstration of quantum advantage. Consider a network of constant-qubit devices arranged in an N × N grid, such that devices may be separated by large (say, kilometric) distances. In the first, offline phase the devices use nearest-neighbor quantum communication channels to distribute EPR pairs. In the second phase, each device receives a classical input, performs a simple local measurement, and returns a classical output (no communication is required). Our result implies that, to even approximately reproduce the output distribution implemented by this procedure, a classical network would need to operate in at least Ω(log N) rounds, where in each round a device can communicate with a constant number of devices located at arbitrary locations in the network (the network need not be 2D: at each step, a device is allowed to broadcast arbitrarily but can only receive information from a constant number of devices, whose identity must be fixed ahead of time). Taking into account inevitable latency delays incurred in any such network, this second scenario suggests that our task may lead to an interesting test for a future quantum internet [WEH18].
Finally we comment on the fidelity requirement for the gates of a quantum circuit implementing our task. Even though the circuit is only of constant depth, it is important that, along a typical path of length O(N) between two qubits in the N × N grid, none of the gates leads to an error. This means that per-gate fidelity is required to be of order 1 − O(1/N). For N of order 2^{10}, as suggested in the first scenario described above, such fidelities are within reach. We also note that by changing the architecture of the circuit from a 2D grid to a 3D grid it may be possible to leverage existing protocols for entanglement distribution using noisy resources [RBH05]. Unfortunately, this comes with the drawback of a challenging 3D architecture for which there is no current implementation.

Proof idea. Our starting point is the key observation, made by Bravyi et al. [BGK18], that a sub-logarithmic depth circuit made of gates with constant fan-in has a form of implied locality, where the "forward lightcone" of most input vertices only includes a vanishing fraction of output vertices. In particular, two randomly chosen input locations are unlikely to have overlapping lightcones. If the input to the circuit is non-trivial in those two locations only, then the outputs in each input location's forward lightcone are obtained by a computation that depends on that input only. In other words, we have a reduction from classical, low-depth circuits to two-party local computation that exactly preserves properties of the output. While the same lightcone argument holds true for a quantum circuit, the quantum circuit has the ability of distributing entanglement across any two locations in depth 2, by executing a sequence of entanglement swapping procedures in parallel. Thus the same reduction maps a quantum, low-depth circuit to a two-party local computation, where the parties may perform their local computation on a shared entangled state. Since there are well-known separations between the kinds of distributions that can be generated by performing local operations on an entangled state, as opposed to no entanglement at all (this is precisely the scope of Bell inequalities), Bravyi et al. have obtained a separation between the power of low-depth classical and quantum circuits.
We build on this argument in the following way. Our first contribution is to boost the argument in [BGK18] from a worst-case to a "high probability" statement. Instead of showing that (i) for every classical circuit, there is some choice of input on which the classical circuit will fail, and (ii) there is a quantum circuit that succeeds on every input, we show that there exists a suitable distribution on inputs such that (i) any classical circuit fails with high probability given an input from the distribution, and (ii) there is a quantum circuit that succeeds with high probability (in fact, probability 1) on the distribution. Second, we observe that the construction in [BGK18] imposes constraints not only on classical low-depth circuits, but also on quantum low-depth circuits; this observation enables the reduction to nonlocal games hinted at above. Finally, we amplify the argument to show how a polynomial number of Bell experiments can be simultaneously "planted" into the input to the circuit. This allows us to perform a reduction to a nonlocal game in which there is a large number of players divided into pairs which each perform their own distinct Bell experiment. By adapting techniques from the area of randomness expansion from nonlocal games [AFDF+18] we are then able to conclude that any sub-logarithmic-depth circuit, classical or quantum, that succeeds on our input distribution, must generate large amounts of entropy. Moreover, this guarantee holds even if the circuit only correctly computes a sufficiently large but constant fraction of outputs for the games.

Related work. Two recent works investigate the question of certified randomness generation outside of the traditional framework of Bell inequalities. In [BCM+18] randomness is guaranteed based on the computational assumption that the device does not have sufficient power to break the security of post-quantum cryptography. The main advantages of this proposal are that the assumption is a standard cryptographic assumption, and that verification is very efficient. A drawback is the interactive nature of the protocol, where only a fraction of a bit of randomness is extracted in each round. In [Aar18], Aaronson announced a randomness certification proposal based on the Boson Sampling task. The main advantage of the proposal is that it can potentially be implemented on a device with fewer than 100 qubits. Drawbacks are the difficulty of verification, that scales exponentially, and the resort to somewhat non-standard complexity conjectures, for which there is little evidence of practical hardness (e.g. it may not be clear how to set parameters for the scheme so that an adversarial attack would require time 2^{80}). In comparison, we would say that an advantage of our proposal is its simplicity to implement (on an axis different from Aaronson's: we require many more qubits, but a much simpler circuit, of constant depth and with classically controlled Clifford gates only), its robustness to errors, and its ease of verification. A possible drawback is the physical assumption of bounded depth, that may or may not be reasonable depending on the scenario (in contrast to cryptographic or even complexity-theoretic assumptions, that operate at a higher level of generality).
Two other works obtained concurrently and independently from ours establish directly related, but strictly incomparable, results. In [Gal18] Le Gall obtains an average-case hardness result that is very similar to our Theorem 1.2, with a concrete constant c = 1/2 that is likely better than the one that we achieve here. Le Gall's proof is based on an ingenious construction using the framework of graph states; although some aspects are similar in spirit to ours (such as the use of parallel repetition to amplify the soundness guarantees) the proof rests on rather different intuition. In independent work, Bene Watts et al.
[BWKST19] extend the results of [BGK18] to obtain a result analogous to our Theorem 1.2, with a strengthened soundness property which holds even against so-called AC^0 circuits. AC^0 circuits are still required to have constant depth but may contain AND and OR gates of arbitrary fan-in (instead of constant fan-in for [BGK18] and our results). Their proof applies to the same relation as [BGK18] but uses more involved techniques from classical complexity theory to obtain the strengthened lower bound. Neither of these results obtains an application to randomness expansion as in our Theorem 1.1.
Preliminaries
Notation.
Finite-dimensional Hilbert spaces are designated using calligraphic letters, such as H. A register A, B, R represents a physical subsystem, whose associated Hilbert space is denoted H_A, H_B, etc. We say a register is classical if there is a fixed basis |i⟩ in which the only allowed states of the subsystem are states of the form Σ_i p_i |i⟩⟨i|, which are diagonal in the designated basis. We write Id_R for the identity operator on H_R. A POVM {M_a} on H is a collection of positive semidefinite operators on H such that Σ_a M_a = Id. For X a linear operator on H, we write Tr(X) for the trace and ‖X‖_1 = Tr√(X†X) for the Schatten-1 norm. For an integer d ≥ 1 an observable over Z_d is a unitary operator A such that A^d = Id.
For ω = e^{2iπ/d} and taking addition modulo d we write

X = Σ_{i=0}^{d−1} |i+1⟩⟨i|   and   Z = Σ_{i=0}^{d−1} ω^i |i⟩⟨i|

for the generalized qudit Pauli X and Z operators, which are observables acting on H = C^d. Given an integer d ≥ 1 and a tuple s ∈ Z_d^2, we write σ_s = X^{s_0} Z^{s_1} for a one-qudit Pauli acting on C^d. Given an integer n ≥ 1 and a string r ∈ (Z_d^2)^n, we write σ_r = ⊗_i σ_{r_i} for an n-qudit Pauli acting on (C^d)^{⊗n}.
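As a quick sanity check of these conventions, the following numpy sketch (our own illustration, not code from the paper) builds the generalized Paulis for small d and verifies the commutation relation ZX = ωXZ implied by the definitions:

import numpy as np

def pauli_XZ(d):
    # Generalized qudit Paulis: X|i> = |i+1 mod d>, Z|i> = w^i |i>, w = exp(2*pi*1j/d).
    w = np.exp(2j * np.pi / d)
    X = np.roll(np.eye(d), 1, axis=0)    # shift matrix: X[i+1 mod d, i] = 1
    Z = np.diag(w ** np.arange(d))
    return X, Z

d = 3
X, Z = pauli_XZ(d)
w = np.exp(2j * np.pi / d)
assert np.allclose(Z @ X, w * X @ Z)                      # defining commutation relation
assert np.allclose(np.linalg.matrix_power(X, d), np.eye(d))  # X^d = Id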
Nonlocal games.
We consider two types of games: multiplayer nonlocal games, and circuit games. Circuit games are nonstandard, and we introduce them in Sect. 5. Nonlocal games are defined as follows.
Definition 2.1. (Nonlocal game) Let ℓ ≥ 1 be an integer. An ℓ-player nonlocal game G consists of finite question and answer sets X = X_1 × ··· × X_ℓ and A = A_1 × ··· × A_ℓ respectively, a distribution π on X, and a family of coefficients V(a_1,...,a_ℓ | x_1,...,x_ℓ) ∈ [0, 1], for (x_1,...,x_ℓ) ∈ X and (a_1,...,a_ℓ) ∈ A. We call an element x ∈ X in the support of π a query, and for i ∈ {1,...,ℓ} the ith entry x_i of x a question to the ith player. We refer to the function V(·|·) as the win condition for the game, and for any query x, to a tuple a such that V(a|x) = 1 as a valid (or winning) tuple of answers (to query x). When players return valid answers we say that they win the game.

Definition 2.2. (Strategy) Let ℓ ≥ 1 be an integer, and G an ℓ-player nonlocal game. An ℓ-player strategy τ = (ρ, {M^{a_i}_{x_i}}) for G consists of an ℓ-partite density matrix ρ ∈ H_1 ⊗ ··· ⊗ H_ℓ, and for each i ∈ {1,...,ℓ} a collection of measurement operators {M^{a_i}_{x_i}}_{a_i ∈ A_i} on H_i indexed by x_i ∈ X_i and with outcomes a_i ∈ A_i.
Definition 2.3. (Game value) Let G be an ℓ-player nonlocal game, and τ = (ρ, {M^{a_i}_{x_i}}) a strategy for the players in G. The value of τ in G is

ω*_τ(G) = Σ_{x_1,...,x_ℓ} π(x_1,...,x_ℓ) Σ_{a_1,...,a_ℓ} V(a_1,...,a_ℓ | x_1,...,x_ℓ) Tr((M^{a_1}_{x_1} ⊗ ··· ⊗ M^{a_ℓ}_{x_ℓ}) ρ).

A strategy τ is called perfect if ω*_τ(G) = 1. The entangled value (or simply value) of G, ω*(G), is defined as the supremum over all strategies τ of ω*_τ(G).

To compare strategies we first introduce a notion of distance between measurements, with respect to an underlying state. (This is a standard definition in the area of self-testing.)

Definition 2.4. (Distance between measurements) Let ρ be a density matrix on H and {M^a}, {N^a} two families of measurement operators on H. The distance between the two families with respect to ρ is

d_ρ(M, N) = ( Σ_a ‖(M^a − N^a) √ρ‖_2^2 )^{1/2}. (1)

Definition 2.5. (Closeness of strategies) Let τ = (ρ, {M^{a_i}_{x_i}}) and τ̃ = (ρ, {M̃^{a_i}_{x_i}}) be two strategies defined on the same state ρ. We say that τ and τ̃ are ε-close if for each i ∈ {1,...,ℓ}, on average over the question x_i, the families {M^{a_i}_{x_i}}_{a_i} and {M̃^{a_i}_{x_i}}_{a_i} are within distance ε of each other with respect to ρ, in the sense of (1).

Definition 2.6. (Isometric strategies) Let τ = (ρ, {M^{a_i}_{x_i}}) and τ' be strategies for G. We say that τ' is ε-isometric to τ if there exist isometries V_i : H_i → H'_i for each i ∈ {1,...,ℓ} such that τ' is ε-close to the strategy τ̃ = (ρ̃, {M̃^{a_i}_{x_i}}), where ρ̃ = (V_1 ⊗ ··· ⊗ V_ℓ) ρ (V_1 ⊗ ··· ⊗ V_ℓ)† and for all i ∈ {1,...,ℓ}, x_i ∈ X_i and a_i ∈ A_i, M̃^{a_i}_{x_i} = V_i M^{a_i}_{x_i} V_i†.
Definition 2.7. We say that a game G is robustly rigid if the following holds. There is a continuous function f : R_+ → R_+ such that f(0) = 0 and a strategy τ for G such that for any δ ≥ 0, any strategy τ' with value at least ω*_τ(G) − δ is f(δ)-isometric to τ.
We refer to f as the robustness of the game.
Note that for a game to be robustly rigid it is necessary that there exists a unique strategy τ such that ω*_τ(G) = ω*(G), up to isometry.
Circuits.
We refer to [NC02] for an introduction to the quantum circuit model. We consider layered circuits over an arbitrary gate set. The choice of a specific gate set may affect the depth of a circuit; for concreteness, the reader may consider the standard gate set {X, Z, H, T, CNOT}, where here X, Z are the Pauli observables over C^2,

$H = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \qquad T = \begin{pmatrix} 1 & 0 \\ 0 & e^{i\pi/4} \end{pmatrix},$

and CNOT is the controlled-NOT gate. In general, gates in the gate set used to specify the circuit may have arbitrary fan-out, but are restricted to fan-in at most K, where K ≥ 2 is a parameter that is considered a constant (in contrast to the depth D of the circuit, that is allowed to grow with the number of input wires to the circuit). Note that if C is a quantum circuit, "fan-in" is the same as locality, i.e. the number of qubits that a gate acts on nontrivially. In particular, for quantum circuits bounded fan-in automatically implies bounded fan-out. It is convenient to generalize the usual notion of Boolean circuit to allow circuits that act on inputs taken from a larger domain, e.g. C : Σ^n → Σ^m, where Σ is a finite alphabet. Similar to the fan-in, whenever using the O(·) notation we consider the cardinality of Σ a constant. A circuit of depth D and fan-in K over Σ can be converted in a straightforward way into a circuit of depth D and fan-in K · log_2 |Σ| over {0, 1}. For the case of quantum circuits, allowing a non-Boolean Σ amounts to considering a circuit that operates on d-dimensional qudits, for d = |Σ|, instead of 2-dimensional qubits.
Entropies.
Given a bipartite density matrix ρ_AB we write H(A|B)_ρ, or simply H(A|B) when ρ_AB is clear from context, for the conditional von Neumann entropy,

H(A|B) = H(AB) − H(B),   with H(X)_σ = −Tr(σ ln σ)

for any density σ on H_X. We recall the definition of (smooth) min-entropy.
Definition 2.8. (Min-entropy) Let ρ_XE be a density matrix on two registers X and E, such that the register X is classical. The min-entropy of X conditioned on E is defined via the following optimization problem over the space Pos(H_E) of positive semidefinite operators on H_E:

H_min(X|E)_ρ = max{ λ ≥ 0 : ∃ σ_E ∈ Pos(H_E), Tr(σ_E) ≤ 1, s.t. 2^{−λ} Id_X ⊗ σ_E ≥ ρ_XE }.

When the state ρ with respect to which the entropy is measured is clear from context we simply write H_min(X|E) for H_min(X|E)_ρ. For ε ≥ 0 the ε-smooth min-entropy of X conditioned on E is defined as

H^ε_min(X|E)_ρ = max_{σ_XE ∈ B(ρ_XE, ε)} H_min(X|E)_σ,

where B(ρ_XE, ε) is the ball of radius ε around ρ_XE, taken with respect to the purified distance.
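For intuition, when X is classical the min-entropy has a standard operational interpretation (a well-known fact from the literature, due to König, Renner, and Schaffner, stated here for orientation rather than taken from the text): it is determined by the optimal probability of guessing X given the quantum side information E,

$H_{\min}(X|E)_\rho = -\log P_{\mathrm{guess}}(X|E)_\rho, \qquad P_{\mathrm{guess}}(X|E)_\rho = \max_{\{E_x\}} \sum_x p_x\, \mathrm{Tr}\big(E_x \rho^x_E\big),$

where the maximum is over POVMs {E_x} on H_E and ρ_XE = Σ_x p_x |x⟩⟨x| ⊗ ρ^x_E. Thus a lower bound on the min-entropy directly bounds the predictability of X for any adversary holding E.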
The following theorem justifies the use of the smooth min-entropy as the appropriate notion of entropy for randomness extraction.

Theorem 2.9. For all integers n, m ≥ 1 and ε > 0 there exist an integer d and an efficiently computable function Ext : {0,1}^n × {0,1}^d → {0,1}^m such that for any density matrix ρ_XE = Σ_x |x⟩⟨x|_X ⊗ ρ^x_E such that the register X is an n-bit classical register and H_min(X|E) ≥ 2m, letting

ρ_ZYE = 2^{−d} Σ_{x,y} |Ext(x,y)⟩⟨Ext(x,y)|_Z ⊗ |y⟩⟨y|_Y ⊗ ρ^x_E,

it holds that

‖ρ_ZYE − U_m ⊗ U_d ⊗ ρ_E‖_1 ≤ ε,

where for an integer ℓ ≥ 1, U_ℓ = 2^{−ℓ} Id is the totally mixed state on ℓ qubits and ρ_E = Σ_x ρ^x_E.
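For illustration only, here is one standard way such a seeded extractor can be instantiated in practice: two-universal (Toeplitz) hashing, which is known from the literature to remain secure against quantum side information via the leftover hash lemma. This particular construction and its seed length are assumptions of this sketch, not necessarily the function intended by the theorem above.

import numpy as np

def toeplitz_extract(x_bits, seed_bits, m):
    # Two-universal Toeplitz hashing: compresses n raw bits to m output bits
    # using a seed of n + m - 1 bits; the matrix is constant along diagonals.
    n = len(x_bits)
    assert len(seed_bits) == n + m - 1
    T = np.empty((m, n), dtype=int)
    for i in range(m):
        for j in range(n):
            T[i, j] = seed_bits[i - j + n - 1]
    return (T @ np.asarray(x_bits, dtype=int)) % 2

rng = np.random.default_rng(0)
raw = rng.integers(0, 2, size=64)             # bits with (assumed) min-entropy
seed = rng.integers(0, 2, size=64 + 16 - 1)   # short uniform seed
out = toeplitz_extract(raw, seed, 16)         # 16 nearly uniform output bits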
Stabilizer Games
In this section we introduce a restricted class of nonlocal games that we will be concerned with throughout the paper. We call the games stabilizer games. They have the property that the game always has a perfect quantum strategy τ = (ρ, {M_{x_i}}) that uses an entangled state ρ = |ψ⟩⟨ψ| that is a graph state, on which the players make measurements that are specified by tensor products of Pauli observables. It is important for our results that there is a perfect strategy such that the entangled state can be prepared by a quantum circuit of low depth (in fact, constant depth) starting from a |0⟩ state. It will also be convenient that the same perfect strategy only requires the measurement of Pauli operators, and that the win condition in the game is a linear function of the players' answers.

We proceed with a formal definition. The games we consider have ℓ players. In the intended strategy for the players, each player j ∈ {1,...,ℓ} holds k_j qudits, measures m_j commuting Pauli observables over Z_d (depending on its question), and reports the m_j outcomes.
Definition 3.1. (Stabilizer game) An (ℓ, k, m) stabilizer game G = (X_j, {w_x, b_x}) is an ℓ-player nonlocal game defined from the following data:
• a number of players ℓ,
• a parameter d for the dimension of the qudits (in the honest strategy),
• for j ∈ {1,...,ℓ}, a parameter k_j for the number of qudits held by the jth player (ibid),
• for j ∈ {1,...,ℓ}, a parameter m_j for the number of simultaneous measurements made by the jth player (ibid),
• for j ∈ {1,...,ℓ}, a set X_j, each element of which is identified with the label x ∈ (Z_d^2)^{k_j} of a k_j-qudit Pauli,
• a distribution π on queries x ∈ Π_{j=1}^ℓ X_j^{m_j}, such that any (x_1,...,x_ℓ) in the support of π is such that for each j, x_j designates an m_j-tuple of commuting k_j-qudit Pauli observables,
• for each query x in the support of π, a vector w_x ∈ Π_{j=1}^ℓ (Z_d)^{m_j} and a coefficient b_x ∈ Z_d that are used to specify the win condition in the game.

To play, the verifier samples a question x_j ∈ X_j^{m_j} for each player. Each player responds with a string a_j ∈ Z_d^{m_j}. Let x = (x_1,...,x_ℓ) and a = (a_1,...,a_ℓ). The players win if

w_x · a = b_x, (2)

where the inner product of vectors in Π_{j=1}^ℓ (Z_d)^{m_j} is taken modulo d. Using the notation from Definition 2.1, V(a|x) = 1 if w_x · a = b_x, and 0 otherwise.
In a stabilizer game each player j is tasked with reporting m_j values in Z_d. It is then natural to use a representation of strategies in terms of observables over Z_d. We adapt Definition 2.2 as follows.

Definition 3.2. Let G = (X_j, {w_x, b_x}) be a stabilizer game. A strategy τ = (ρ, {M_{x_j}}) for G is specified by an ℓ-partite density matrix ρ and, for each j ∈ {1,...,ℓ} and x_j = (x_{j,1},...,x_{j,m_j}) ∈ X_j^{m_j}, a family of m_j-tuples of commuting observables M_{x_j} = (M_{x_j,1},...,M_{x_j,m_j}) over Z_d.

Note that in the definition, for s ∈ {1,...,m_j} the observable M_{x_j,s} may depend on the whole m_j-tuple x_j, and not only on x_{j,s}.
We introduce a notion of "honest strategy" in a stabilizer game.

Definition 3.3. (Honest strategy) Let G = (X_j, {w_x, b_x}) be a stabilizer game. An honest strategy in G is a strategy in which the state ρ is a (Σ_j k_j)-qudit ℓ-partite pure state |ψ⟩ such that the jth player holds k_j qudits, and the player's observables M_{x_j} = (M_{x_j,1},...,M_{x_j,m_j}) associated with question x_j = (x_{j,1},...,x_{j,m_j}) ∈ X_j^{m_j} are precisely the m_j commuting Pauli observables specified by x_j. We say that the strategy has depth d if the state |ψ⟩ can be prepared by a quantum circuit of depth at most d starting from the |0⟩ state.
3.1. Pauli observables. Recall the notation σ_r, where r ∈ (Z_d^2)^k, introduced in Sect. 2.1 to designate an arbitrary k-qudit Pauli observable.
Definition 3.4. (Correction value) Let q, r ∈ (Z_d^2)^k. The correction value cor_r(q) ∈ Z_d is defined such that

ω^{cor_r(q)} Id = [σ_q, σ_r], (3)

where the brackets denote the group commutator, [P, Q] = PQP^{−1}Q^{−1}.

(The above definition takes advantage of the fact that the group commutator of Pauli matrices is always a scalar multiple of the identity matrix.) By abuse of notation, we will also sometimes write cor as a vector-valued function, where if r = (r_1,...,r_i) and q = (q_1,...,q_i) then cor_r(q) is defined as (cor_{r_1}(q_1),...,cor_{r_i}(q_i)).
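Concretely, for one-qudit Paulis σ_s = X^{s_0} Z^{s_1} the relation ZX = ωXZ gives [σ_q, σ_r] = ω^{q_1 r_0 − q_0 r_1} Id, so the correction value is a symplectic form on the label strings. The following short sketch (our own illustration, using this derived formula rather than code from the paper) evaluates it coordinate-wise, in the spirit of the locality lemma below:

def cor(r, q, d):
    # Correction value cor_r(q) for n-qudit Paulis labeled by pairs (s0, s1),
    # computed factor by factor via the symplectic form q1*r0 - q0*r1 (mod d).
    return sum(q1 * r0 - q0 * r1 for (r0, r1), (q0, q1) in zip(r, q)) % d

# Qubit example (d = 2): q labels X (x) Z and r labels Z (x) Z; since X and Z
# anticommute, [sigma_q, sigma_r] = -Id, i.e. cor_r(q) = 1.
assert cor([(0, 1), (0, 1)], [(1, 0), (0, 1)], 2) == 1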
The following lemma shows that the function cor can be computed locally.

Lemma 3.5. Let q, r ∈ (Z_d^2)^k. For i ∈ {1,...,k} let r|_i ∈ (Z_d^2)^k denote the string that agrees with r in its ith entry and is zero elsewhere, and similarly for q|_i. Then

Σ_i cor_{r|_i}(q|_i) = cor_r(q). (4)

Proof. First, notice that

cor_{r|_i}(q|_i) = cor_{r|_i}(q). (5)

To see this, recall that cor is computed as the phase of the group commutator of σ_{r|_i} and σ_q. We can evaluate this group commutator one tensor factor at a time. In all tensor factors other than i, the commutator will be trivial since the r operator is the identity. Therefore, the commutator does not change if we also set the q operator to the identity. Next, we need that for any fixed q, the map r ↦ cor_r(q) is an additive homomorphism. In other words,

cor_{r+r'}(q) = cor_r(q) + cor_{r'}(q). (6)

To see this, we apply Lemma 3.6 with A = σ_q, B = σ_r, C = σ_{r'}. The lemma follows by combining equations (5) and (6) with the observation that r = Σ_i r|_i.

Lemma 3.6. Let A, B, C be unitaries such that [A, C] commutes with B. Then [A, BC] = [A, B][A, C].

Proof. Write [A, BC] as A(BC)A^{−1}(BC)^{−1}. Note that by definition, AB = [A, B]BA. Then we have

[A, BC] = A(BC)A^{−1}(BC)^{−1} = ABCA^{−1}C^{−1}B^{−1} = [A, B]BACA^{−1}C^{−1}B^{−1} = [A, B]B[A, C]B^{−1} = [A, B][A, C],

where the last step follows from the commutation of B and [A, C].
Rotated and stretched stabilizer games.
In this section we define stretched stabilizer games which formalize the notion of distributing a stabilizer game out over long "paths". One property of stretched games is that players on far ends of the paths have outputs which require correction according to a function of the outputs along the intermediate points in the paths. We introduce a notion of rotated stabilizer game that captures this scenario by allowing the players to report an additional "rotation string".
Definition 3.7. (Rotated stabilizer game) Given a stabilizer game G = (X_j, {w_x, b_x}), the rotated stabilizer game associated with G, denoted G^R, is defined as follows. For each j ∈ {1,...,ℓ} and question x_j ∈ X_j^{m_j}, the jth player reports an answer a_j ∈ Z_d^{m_j} together with a rotation string r_j ∈ (Z_d^2)^{k_j}. The win condition (2) is replaced by the condition

w_x · (a − cor_r(x)) = b_x, (7)

where r = (r_1,...,r_ℓ).
Observe that if r is the 0 vector then for any q, cor_r(q) = 0, so the win condition for the rotated game G^R reduces to the win condition for G. Therefore any strategy for G implies a strategy for G^R with the same success probability. More generally, it is possible to define a strategy in G^R by having the players conjugate their observables in G by an arbitrary Pauli observable (the same for all observables), and report as rotation string the string that specifies the observable used for conjugation.
Using Lemma 3.5 it follows that there is a reduction in the other direction as well. Given a strategy for G^R, one obtains a strategy for G by replacing the answer (a_i, r_i) from the ith player in G^R by the answer

a_i − cor_{r_i}(q_i), (8)

where q_i denotes the question to the ith player.

Lemma 3.8. Let G be a stabilizer game with a robust honest strategy based on the state |ψ⟩ and with robustness f. Then for any strategy τ' = (ρ', {M'_{x_j}}) that has value w' = ω*_{τ'}(G^R) in G^R, there is a strategy in G that is a coarse-graining of (ρ', {M'_{x_j}}) according to (8), and that has value w' in G. In particular, up to local isometries the state ρ' is within distance f(1 − w') of |ψ⟩⟨ψ|.
We introduce a notion of "stretched" rotated game, that will be useful when we relate circuit games to stablizer games.
Definition 3.9. (Stretched stabilizer game) Let G = (X j , {w x , b x })
be a stabilizer game, and = ( 1 , . . . , ) an -tuple of finite sets, such that for j ∈ {1, . . . , }, j has k j designated elements (u j,1 , . . . , u j,k j ). Each element of j is used to index one out of | j | qudits that are supposed to be held by the jth player. To G and we associate a "stretched" game G S as follows. In G S the parameter k j is replaced by k j = | j |. For any k j -qudit Pauli observable asked to player j in G, there is a k j -qubit Pauli observable in G S such that the observable acts as the identity on the additional (k j − k j ) qubits. The win condition in G S is the same as in G.
Given a stabilizer game G and sets = ( 1 , . . . , ), we write G S,R = (G S ) R for the rotated stretched stabilizer game associated with G and .
Repeated games.
For an integer r ≥ 1 we consider the game that is obtained by repeating a stabilizer game r times in parallel, with r independent sets of players (that may share a joint entangled state).
Definition 3.10. Let G be an (ℓ, k, m) stabilizer game, and r ≥ 1 an integer. The r-fold repetition of G is the (rℓ, k, m) stabilizer game G^r that is obtained by executing G independently in parallel with r groups of players. More formally, the input distribution π^r in G^r is the direct product of r copies of the input distribution π in G, and the win condition in G^r is the AND of the win conditions in each copy of G.

For purposes of randomness expansion, in Sect. 6 we consider repeated games for which the input distribution π^r is not exactly the direct product of r copies of π, but a derandomized version of it. Similarly, to achieve better robustness, instead of the AND of the winning conditions we may consider a win condition that is satisfied as soon as sufficiently many of the win conditions for the subgames are satisfied. These modifications are explained in Sect. 6.1.
The Magic Square game. For concreteness we give two examples of stabilizer games, the Mermin-Peres Magic Square game [Mer90b] and the Mermin GHZ game [Mer90a]. The former is given for illustration; the latter will be used towards randomness expansion in Sect. 6.

Definition 3.11. (Magic Square game) Consider the following 3 × 3 matrix, where each entry is labeled by a two-qubit Pauli observable:

$\begin{pmatrix} X \otimes I & I \otimes X & X \otimes X \\ I \otimes Z & Z \otimes I & Z \otimes Z \\ X \otimes Z & Z \otimes X & (XZ) \otimes (ZX) \end{pmatrix} \qquad (9)$

The Magic Square game is a (2, 2, 2) stabilizer game over 2-dimensional qubits defined as follows. The sets

X_1 = X_2 = { (X⊗I, I⊗X), (I⊗Z, Z⊗I), (X⊗Z, Z⊗X), (X⊗I, I⊗Z), (I⊗X, Z⊗I), (X⊗X, Z⊗Z) }

each contain 6 pairs of two-qubit Pauli observables, each pair corresponding to either a row or column of (9). The distribution π is uniform. For any query x = (x_1, x_2) each player reports two bits associated with the two observables it was asked about. We can associate a third bit to the third observable in the corresponding row or column by taking the parity of the first two bits, except for the case of the third column, where we take the parity plus 1. The constraint w_x · a = b_x expresses the constraint that, whenever the questions x_1, x_2 are associated with a row and column that intersect in an entry of the square, the outcomes associated with the intersection should match.
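The algebra behind these constraints is easy to verify directly: the three observables in each row or column pairwise commute, every row and the first two columns multiply to +Id, and the third column multiplies to −Id (hence the "parity plus 1"). An illustrative numpy check (our own, not from the paper):

import itertools
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
M = [[np.kron(X, I2), np.kron(I2, X), np.kron(X, X)],
     [np.kron(I2, Z), np.kron(Z, I2), np.kron(Z, Z)],
     [np.kron(X, Z),  np.kron(Z, X),  np.kron(X @ Z, Z @ X)]]

lines = M + [list(col) for col in zip(*M)]       # three rows, then three columns
for line in lines:
    for A, B in itertools.combinations(line, 2):
        assert np.allclose(A @ B, B @ A)         # observables within a line commute
for k in range(3):
    assert np.allclose(M[k][0] @ M[k][1] @ M[k][2], np.eye(4))   # rows: +Id
    expected = -np.eye(4) if k == 2 else np.eye(4)
    assert np.allclose(M[0][k] @ M[1][k] @ M[2][k], expected)    # third column: -Id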
Definition 3.12. (Honest strategy in the Magic Square game) In the honest strategy, the two players share two EPR pairs. Upon reception of a question that indicates two commuting two-qubit Pauli observables, the player measures both observables on her qubits and reports the two outcomes.
The following robustness result is shown in [WBMS16].
Theorem 3.13. The Magic Square game is robustly rigid, with respect to the honest strategy and with robustness f(δ) = O(√δ).
Next we recall the Mermin GHZ game.
Definition 3.14. (GHZ game) The game GHZ is a (3, 1, 1) stabilizer game over 2-dimensional qubits defined as follows. The sets X_1 = X_2 = X_3 = {0, 1}. The distribution π is uniform over the set {(0,0,0), (0,1,1), (1,0,1), (1,1,0)}. For all queries x the vector w_x = (1, 1, 1). For x = (0,0,0), b_x = 0, and for all other x, b_x = 1.

It is well-known that there is an honest strategy based on making Pauli measurements on a GHZ state |ψ_GHZ⟩ = (1/√2)(|000⟩ + |111⟩) (which can be prepared in depth 3) and that succeeds with probability 1 in the game GHZ.
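The honest strategy can be checked numerically. Interpreting question bit 0 as an X measurement and 1 as a Y measurement (the standard assignment for this game; the labeling is our assumption here), the GHZ state is a +1 eigenstate of X⊗X⊗X and a −1 eigenstate of the three observables with two Y factors, so the parity of the three ±1 outcomes is deterministic and matches b_x:

import numpy as np

X = np.array([[0., 1.], [1., 0.]])
Y = np.array([[0., -1j], [1j, 0.]])
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)            # (|000> + |111>) / sqrt(2)

def obs(bits):
    # question bit 0 -> measure X, question bit 1 -> measure Y
    ops = [X if b == 0 else Y for b in bits]
    return np.kron(np.kron(ops[0], ops[1]), ops[2])

for q, sign in [((0, 0, 0), +1), ((0, 1, 1), -1), ((1, 0, 1), -1), ((1, 1, 0), -1)]:
    assert np.allclose(obs(q) @ ghz, sign * ghz)   # eigenvalue fixes the parity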
Lightcone Arguments for Low-Depth Circuits
Let N ≥ 1 be an integer. We write grid_N for the set {1,...,N}², which we often identify with the "grid graph" of degree 4: the graph on this vertex set with an edge between (i, j) and (i ± 1, j), and between (i, j) and (i, j ± 1), with addition taken modulo N. (As a matter of notation we often identify a graph with its vertex set.)

For an integer 0 ≤ L ≤ N and u ∈ grid_N we write Box_L(u), or Box(u) when L is implicit, for the set Box_L(u) = {u} + {−L,...,L}² ⊆ grid_N (with addition again taken mod N). In other words, Box_L(u) is the closed ball of radius L around u in the L∞ metric.
Lightcones.
Recall that the circuits that we consider are defined over an arbitrary gate set with bounded fan-in K. Given a circuit C, we introduce the natural notion of a circuit graph, with vertices at the gates and edges along the wires. We typically consider circuits that are spatially local on a 2D grid, in which case we identify the input and output sets of the graph with a grid, i.e. I = O = grid_N for some integer N ≥ 1. Note that the circuit graph of a circuit with fan-in K has in-degree bounded by K, but has no a priori bound on the out-degree.

Definition 4.2. Let C be a circuit. For a vertex v in the circuit graph define its backward lightcone L_b(v) as the set of input vertices u for which there exists a path in the circuit graph from u to v. For an input vertex u define the forward lightcone of u, L_f(u), as the set of output vertices v such that u ∈ L_b(v).
The following lemma is established in Section 4.2 of [BGK18] during the proof of their Theorem 2. We include the short proof for completeness.
Lemma 4.3 ([BGK18]). Let C be a circuit that has depth D and maximum fan-in K. Then the following hold:
• All backward lightcones are small. That is, for every vertex v of the circuit graph, |L_b(v)| ≤ K^D.
• Most forward lightcones are small. That is, for any μ ∈ (0, 1),

Pr_v[ |L_f(v)| ≥ μ^{−1} K^D ] ≤ μ, (10)

where the probability is taken over the choice of a uniformly random input vertex v ∈ I.
Proof. Every path in the circuit graph has length at most D. Each vertex has in-degree at most K. Then for any fixed vertex v, there are at most K^D distinct paths through the circuit graph ending at v. Therefore, |L_b(v)| ≤ K^D.

Now consider the directed graph with an edge from u to v if u ∈ L_b(v). The in-degree of vertex v is equal to |L_b(v)| while the out-degree of u is |L_f(u)|. Each vertex has an in-degree of at most K^D, so there are at most nK^D edges in the graph, where n is the number of output wires of the circuit. Fix μ ∈ (0, 1). By Markov's inequality, at most μn vertices may have out-degree at least μ^{−1}K^D.
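Both lightcones are straightforward to compute from a circuit's wiring, which is useful when checking the counting argument on concrete circuit descriptions. A minimal sketch (our own illustration, assuming the circuit is given as a topologically ordered list of gates):

from collections import defaultdict

def lightcones(gates, inputs, outputs):
    # gates: list of (wire, predecessor_wires) in topological order, fan-in <= K.
    # back[w] is the set of input vertices from which w is reachable (L_b);
    # transposing the reachability relation yields the forward lightcones (L_f).
    back = {u: {u} for u in inputs}
    for wire, preds in gates:
        back[wire] = set().union(*(back[p] for p in preds))
    fwd = defaultdict(set)
    for v in outputs:
        for u in back[v]:
            fwd[u].add(v)        # u in L_b(v)  iff  v in L_f(u)
    return back, fwd

gates = [("g1", ("a", "b")), ("g2", ("b", "c")), ("o1", ("g1", "g2"))]
back, fwd = lightcones(gates, inputs=["a", "b", "c"], outputs=["o1"])
assert back["o1"] == {"a", "b", "c"} and fwd["b"] == {"o1"}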
Input patterns.
We introduce a method to "plant" queries to the players in a stabilizer game into the input to a circuit. The main definition we need is of an input pattern, that specifies locations for each players' question, as well as paths between these locations. These paths, or "stars", will be useful in the design of a quantum circuit that implements the players' strategy as a low-depth quantum circuit; this is explained in Sect. 5.1. Fig. 1. We say that a subset of grid N is a box if it is equal to Box L (u) for some integer L and vertex u. A star is a collection of disjoint boxes together with a collection of disjoint paths such that • each path has its endpoints on the boundaries of boxes, and • contracting each box to a single vertex, and each path to a single edge, results in a a star graph, i.e. a graph that has vertices of degree one, one vertex of degree , and no other vertices.
Definition 4.4. (Star) See
We use the term central box to refer to the unique box which contains one endpoint of every path. If b 0 is the central box, we may say that the star is centered at b 0 . By abuse of notation, we often write to refer to the set of vertices contained in the paths and boxes of . The paths may be extended inside each box to connect the vertices u 1 , u 2 , u 3 to g 1 , g 2 , g 3 in an arbitrary way. Such connections will be used in Sect. 5.1 to define low-depth measurements which distribute a three-qubit state at sites g 1 , g 2 , g 3 among the qubits u 1 , u 2 , u 3 . On the right, we show the contraction of the star to a star graph. The paths are contracted to single edges (shown by thick lines) and the boxes are contracted to single vertices (shown by filled-in circles)
Definition 4.5. (Input pattern) An input pattern is a collection P = {(u^{(i)}, Γ^{(i)})} of pairs, each consisting of an ℓ-tuple u^{(i)} of input vertices and a subset Γ^{(i)} ⊆ grid_N, such that
• each Γ^{(i)} is a star,
• the vertices of u^{(i)} are contained in distinct noncentral boxes of Γ^{(i)}.
For a vertex u, we write Box(u) for the box that contains u.

Remark 4.6. We often write patterns as P = {(u^{(i)}, Γ^{(i)})} without specifying the range of the index i, which we generally leave implicit. When we write "a pair (u^{(i)}, Γ^{(i)})", without the use of the curly brackets, we mean a pair, for some arbitrary but fixed index i.

The following definition captures exactly the amount of information that we need to remember about a given circuit C in order to talk about the spread of correlations within C: we will forget everything about the circuit except some information about its lightcones.
Definition 4.7. (Circuit specification) A circuit specification S on grid_N is a triple S = (L_f, bad_in, bad_out) such that for all u ∈ grid_N, L_f(u) ⊆ grid_N is a set called the forward lightcone associated with u, and bad_in, bad_out ⊆ grid_N are sets called the bad input set and bad output set respectively.

Definition 4.8. For integers B, R_in, R_out we say that a circuit specification S = (L_f, bad_in, bad_out) on grid_N is (B, R_in, R_out)-bounded if the following hold: |bad_in| ≤ R_in, |bad_out| ≤ R_out, and for every u ∈ grid_N \ bad_in it holds that |L_f(u)| ≤ B.
Given a fixed circuit specification, the following definition captures the conditions that are required for an input pattern so that the circuit game associated with that input pattern can be reduced to a nonlocal game (the reduction is explained in Sect. 5).
The intuition to keep in mind for the definition is as follows: each player in the nonlocal game receives her input from one of the input locations and puts her output along the paths of the star. Each player also puts some outputs inside their box of the star. In order for it to be possible to implement the strategy locally, we must have the outputs of each player be causally independent of the inputs of the other players. We ensure this by checking that the forward lightcone of one player's input misses the locations of the other players' outputs.
Definition 4.9. (Causality-respecting patterns) Let S = (L_f, bad_in, bad_out) be a circuit specification. Let P = {(u^{(i)}, Γ^{(i)})} be an input pattern. We say that a pair (u^{(i)}, Γ^{(i)}) is individually-S-causal with respect to P if the following hold: [4]
(a) For each k, the forward lightcone of u^{(i)}_k misses Γ^{(i)}, except possibly near u^{(i)}_k. More precisely, L_f(u^{(i)}_k) ∩ Γ^{(i)} ⊆ Box(u^{(i)}_k).
(b) For all (u^{(j)}, Γ^{(j)}) ∈ P (with j ≠ i) and for all k, the forward lightcone of u^{(j)}_k misses Γ^{(i)} entirely, i.e. L_f(u^{(j)}_k) ∩ Γ^{(i)} = ∅.
(c) Γ^{(i)} misses bad_out, i.e. Γ^{(i)} ∩ bad_out = ∅.

Furthermore, we say that a pair (u, Γ) is S-valid if the following condition holds:
(d) Every input location lies outside of bad_in, i.e. u^{(i)}_k ∉ bad_in for all k and i.

We say that an input pattern P is S-causal if every (u^{(i)}, Γ^{(i)}) ∈ P is individually-S-causal and S-valid with respect to P.
Finally, we introduce a distribution on input patterns so that for any circuit specification S that is (B, R_in, R_out)-bounded for sufficiently small parameters B, R_in, and R_out, a sample from the distribution gives an S-causal pattern with high probability (see the following subsections).

Definition 4.10. (Distribution on input patterns) Let N, L, r be integers such that r(2L+1)² ≤ N². The distribution D^{(r)}(N, L) on input patterns is sampled as follows. Partition grid_N into r disjoint squares S^{(1)},...,S^{(r)}, each of side length M = N/√r.[5] Within each square S^{(i)}, choose ℓ + 1 disjoint boxes b_0, b_1,...,b_ℓ of side length 2L + 1; for any choice of boxes b_0, b_1,...,b_ℓ within a square, fix a collection stars(b_0,...,b_ℓ) of L/ℓ pairwise disjoint stars such that each star in the collection is centered at b_0 and has its noncentral boxes among b_1,...,b_ℓ. Choose Γ^{(i)} uniformly at random in stars(b_0,...,b_ℓ), and choose each vertex u^{(i)}_k independently and uniformly at random in the box b_k. Return the input pattern P = {(u^{(i)}, Γ^{(i)})}_{1≤i≤r}.

[4] Recall that we identify a star with the union of the vertex sets of its paths and boxes.
[5] It does not matter where these squares are located, as long as they do not overlap.
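To make the sampling procedure concrete, here is a heavily hedged Python sketch of the structure just described; place_boxes and choose_star are hypothetical helpers standing in for the fixed box placements and star collections, which the definition above only partially pins down:

import random

def sample_pattern(N, L, r, ell, place_boxes, choose_star):
    # Sketch of D^(r)(N, L): one (inputs, star) pair per disjoint square.
    M = int(N / r ** 0.5)                  # side length of each square
    pattern = []
    for i in range(r):
        boxes = place_boxes(M, L, ell + 1)            # b_0, ..., b_ell, disjoint, side 2L+1
        star = choose_star(boxes)                     # drawn from stars(b_0, ..., b_ell)
        u = [random.choice(list(b)) for b in boxes]   # one uniform vertex per box
        pattern.append((u, star))
    return pattern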
Single-input patterns.
We would like to show that patterns in the support of D^{(r)} are "very nearly" S-causal for most S, in the sense that removing only an exponentially small fraction of inputs yields an S-causal pattern. To warm up, we argue that for any (B, R_in, R_out)-bounded circuit specification S, an input pattern sampled from the distribution D^{(1)} introduced in Definition 4.10 is S-causal with high probability. We use this single-input analysis later to show that in a many-input pattern, most of the inputs are individually-S-causal.

In this subsection only, we use M instead of N to denote the grid size. We do this because the distribution D^{(r)}(N, L) can be (informally) thought of as the direct product of r copies of D^{(1)}(M, L), and the former is of greater interest to us.
Lemma 4.11. Let S = (L_f, bad_in, bad_out) be a (B, R_in, R_out)-bounded circuit specification on grid_M, and let P be a single-pair input pattern sampled from D^{(1)}(M, L). Then the probability that P is not individually-S-causal is O(L²(R_out + B)/M² + (R_out + B)/L). Moreover, the probability that P is not S-valid is O(R_in/M²). Overall, the probability that P is not S-causal is at most

O( L²(R_out + B)/M² + (R_out + B)/L + R_in/M² ), (11)

where the O notation hides factors polynomial in ℓ.

Proof. We check all conditions in Definition 4.9. Since P contains only one (input, star) pair, condition (b) (which restricts the interactions between pairs) is satisfied automatically. Similarly, since the u_j are chosen independently, for any u ≠ u' ∈ {u_0, u_1,...,u_ℓ} the probability that L_f(u) ∩ Box(u') ≠ ∅ is at most 16L²B/M² = O(L²B/M²).

Finally we check condition (d). Any u_j is chosen independently among (2L+1)² ≥ M²/8 = Ω(M²) possibilities, so the probability that u_j ∈ bad_in is at most 8R_in/M² = O(R_in/M²); we conclude by the union bound, and absorb the parameter ℓ in the O(·).
Arbitrary-input patterns. We extend the argument from the previous section to the case where the input pattern contains more than one input.

Lemma 4.12. Let S = (L_f, bad_in, ∅) be a (B, R_in, 0)-bounded circuit specification on grid_N, and let P be an input pattern sampled from D^{(r)}(N, L). Then the probability that P is not S-causal is at most O(r²B(r(L² + R_in)/N² + 1/L)).

Proof. Let P = {(u^{(i)}, Γ^{(i)})} be an input pattern chosen according to D^{(r)}(N, L). For i ∈ {1,...,r} we let P^{(i)} be the single-pair pattern {(u^{(i)}, Γ^{(i)})}. Let X_i be the indicator variable that the pair (u^{(i)}, Γ^{(i)}) is not individually-S-causal with respect to P. Let Y_i be the indicator that (u^{(i)}, Γ^{(i)}) is not S-valid. To see that P is S-causal, it suffices to check that each pair is S-valid and individually-S-causal with respect to P. This is true if and only if Σ_i X_i = 0 and Σ_i Y_i = 0. We first bound the latter event.
Claim 4.13. It holds that

Pr[ Σ_j Y_j ≠ 0 ] = O( r²R_in/N² ). (12)
Proof. Applying the second bound in Lemma 4.11 and using that M = N/√r, it follows that for any i ∈ {1,...,r},

Pr(Y_i ≠ 0) = Pr( P^{(i)} is not S-valid ) ≤ 8R_in/M² = O( rR_in/N² ).

The claim follows by a union bound over the r patterns P^{(i)}.
Next we turn to the X_i.

Claim 4.14. It holds that

Pr[ Σ_i X_i ≠ 0 ] = O( r²B(rL²/N² + 1/L) ) + Σ_i Pr[ Σ_{j≠i} Y_j ≠ 0 ]. (13)
Proof. For any i ∈ {1,...,r} let

bad^{(i)}_out = ∪_{k≠i} ∪_j L_f(u^{(k)}_j),

and define a specification S^{(i)} = (L_f, bad_in, bad^{(i)}_out). With these definitions, it follows that

Pr(X_i = 0) = Pr( P^{(i)} is individually-S-causal ) ≥ Pr( P^{(i)} is individually-S^{(i)}-causal ). (14)
Indeed, condition (c) of being individually-S^{(i)}-causal (see Definition 4.9) implies all conditions of being individually-S-causal for S = (L_f, bad_in, ∅).

In the event that P^{(j)} is S-valid for all j ≠ i (that is, when Σ_{j≠i} Y_j = 0) we know that |L_f(u^{(k)}_j)| ≤ B for each (j, k) ∈ {1,...,ℓ} × ({1,...,r} \ {i}), and thus that |bad^{(i)}_out| ≤ ℓrB = O(rB). Using that the marginal distribution of a single pair (u^{(i)}, Γ^{(i)}) from P is equal to D^{(1)}(M, L) (when seen as a distribution on the square S^{(i)} associated with (u^{(i)}, Γ^{(i)})), it follows from the bound in Lemma 4.11 that for any i ∈ {1,...,r},

Pr[ X_i ≠ 0 | Σ_{j≠i} Y_j = 0 ] = O( rB(rL²/N² + 1/L) ). (15)
Applying the union bound,

Pr[ Σ_i X_i ≠ 0 ] ≤ Σ_i Pr(X_i ≠ 0)
 ≤ Σ_i ( Pr[ X_i ≠ 0 | Σ_{j≠i} Y_j = 0 ] + Pr[ Σ_{j≠i} Y_j ≠ 0 ] )
 ≤ O( r²B(rL²/N² + 1/L) ) + Σ_i Pr[ Σ_{j≠i} Y_j ≠ 0 ],

where the last line follows from (15).
To conclude the proof of the lemma we write

Pr[ P is not S-causal ] = Pr[ Σ_i (X_i + Y_i) ≠ 0 ]
 ≤ Pr[ Σ_i X_i ≠ 0 ] + Pr[ Σ_j Y_j ≠ 0 ]
 ≤ O( r²B(rL²/N² + 1/L) ) + Σ_i Pr[ Σ_{j≠i} Y_j ≠ 0 ] + O( r²R_in/N² )
 ≤ O( r²B(rL²/N² + 1/L) ) + O( r³R_in/N² ) + O( r²R_in/N² )
 = O( r²B(r(L² + R_in)/N² + 1/L) ),

where the third line uses (13) and (12) and the fourth uses (12).

The previous lemma shows that a random input pattern P is S-causal with high probability. In this case we can define a game from P so that in the game, a shallow circuit with specification S can be simulated by a set of spacelike-separated players. This simulation is perfect when P is exactly S-causal. More generally, a weaker simulation argument still applies if only a small constant fraction of inputs in P are not S-causal. The next lemma shows that this condition can be guaranteed to hold with much higher probability, exponentially close to 1 rather than inverse-polynomially close. This bound will be used in the proof of Theorem 1.2.

Lemma 4.15. Let S = (L_f, bad_in, ∅) be a (B, R_in, 0)-bounded circuit specification on grid_N, and let P be an input pattern sampled from D^{(r)}(N, L). Define

P_VAL = { (u, Γ) ∈ P | (u, Γ) is S-valid },
P_CAUS = { (u, Γ) ∈ P | (u, Γ) is individually-S-causal with respect to P_VAL }.

Then there exist universal constants C, C' > 0 such that if p = C'ℓrB(rL²/N² + ℓ/L) then for any t > 0,

Pr[ |P_CAUS| ≥ r(1 − p) − 2t ] ≥ 1 − 2 exp(−t²/8r), (16)
Pr[ |P_VAL| ≥ r(1 − CrR_in/N²) − t ] ≥ 1 − 2 exp(−2t²/r). (17)

For later convenience we note that (16) and (17) can be combined by a union bound to obtain

Pr[ |P_VAL ∩ P_CAUS| ≥ r(1 − CrR_in/N² − 2p) − 3t ] ≥ 1 − 4 exp(−t²/8r). (18)
Proof. The proof relies on concentration arguments to bound the probabilities in (16) and (17). The second bound, (17), is easier to show, because it can be expressed as a bound on a sum of independent random variables. The following claim establishes the bound.
Claim 4.16. There is a universal constant C > 0 such that for any t > 0,

Pr[ |P_VAL| ≥ r(1 − CrR_in/N²) − t ] ≥ 1 − 2 exp(−2t²/r).
Proof. For i ∈ {1,...,r} let V_i be the indicator variable for the event that (u^{(i)}, Γ^{(i)}) is S-valid. Since in D^{(r)}(N, L) the (u^{(i)}, Γ^{(i)}) are chosen independently within the disjoint squares S^{(i)}, it follows from the definition of S-valid that the V_i are independent. Using the bound shown in Lemma 4.11 it follows that for any i ∈ {1,...,r},

E[V_i] ≥ 1 − C R_in/M², (19)

for some constant C > 0 and where M = N/√r. We conclude by applying Hoeffding's inequality:

Pr[ |P_VAL| ≤ r(1 − CR_in/M²) − t ] = Pr[ Σ_{i=1}^r V_i ≤ r(1 − CR_in/M²) − t ]
 ≤ Pr[ | Σ_{i=1}^r V_i − Σ_{i=1}^r E[V_i] | ≥ t ]
 ≤ 2 exp(−2t²/r).
The proof of the remaining bound (16) is made a little delicate by the fact that the condition that a pair (u, Γ) ∈ P is individually-S-causal is a global condition, so that |P_CAUS| is not directly expressible as a sum of independent random variables. To get around this, we first make a few definitions.

Let P = {(u^{(i)}, Γ^{(i)})} be an input pattern chosen at random according to the distribution D^{(r)}(N, L). For i ∈ {1,...,r} define

bad^{<i}_out = ∪_{k<i : (u^{(k)}, Γ^{(k)}) is S-valid} ∪_j L_f(u^{(k)}_j),
bad^{>i}_out = ∪_{k>i : (u^{(k)}, Γ^{(k)}) is S-valid} ∪_j L_f(u^{(k)}_j),

and bad^{(i)}_out = bad^{<i}_out ∪ bad^{>i}_out. From the assumption that the circuit specification S is (B, R_in, 0)-bounded, and since bad^{(i)}_out is defined as a union of the lightcones of only the valid pairs (u^{(k)}, Γ^{(k)}), it follows that for all i, |bad^{(i)}_out|, |bad^{<i}_out| and |bad^{>i}_out| are each at most ℓrB.
Recall that in D^{(r)}(N, L) each u^{(i)} is chosen within a square S^{(i)} of side length M = N/√r. We identify S^{(i)} with grid^{(i)}_M, and introduce a single-pair pattern P^{(i)} = {(u^{(i)}, Γ^{(i)})} that we think of as a pattern on grid^{(i)}_M. We further define a specification S^{(i)} = (L_f, bad_in, bad^{(i)}_out) on grid^{(i)}_M. Let X_i be the indicator variable for the event that the pair (u^{(i)}, Γ^{(i)}) is not individually-S^{(i)}-causal with respect to P_VAL. The following claim relates the X_i to P_CAUS.
Claim 4.17. It holds that |P_CAUS| ≥ r − Σ_{i=1}^r X_i.

Proof. Note that whenever P^{(i)} is individually-S^{(i)}-causal with respect to P_VAL, it is also individually-S-causal with respect to P_VAL. This follows by noting that, for the given definition of bad^{(i)}_out, condition (c) of being individually-S^{(i)}-causal with respect to P implies conditions (b) and (c) of being individually-S-causal with respect to P_VAL. Therefore, Σ_i X_i is an upper bound on the number of pairs P^{(i)} which are not individually-S-causal with respect to P_VAL.
The previous claim reduces our task to showing a high-probability upper bound on $\sum_i X_i$. The random variables $X_i$ are dependent. To obtain a bound, we apply a martingale argument to two related sequences of random variables, defined as follows. First introduce specifications $S^{(<i)} = (L_f, \mathrm{bad}_{in}, \mathrm{bad}^{<i}_{out})$ and $S^{(>r-i)} = (L_f, \mathrm{bad}_{in}, \mathrm{bad}^{>(r-i)}_{out})$ on $grid^{(i)}_M$, and let $Y_i$ (resp. $Z_i$) be the indicator variable for the event that $(u^{(i)}, \ell^{(i)})$ is not individually-$S^{(<i)}$-causal (resp. individually-$S^{(>r-i)}$-causal) with respect to $P^{(i)}$. The next claim relates $Y_i$ and $Z_{r-i}$ to $X_i$.

Claim 4.18. For each $i \in \{1, \ldots, r\}$, it holds that $X_i = Y_i \vee Z_{r-i}$.
Proof. The claim follows by noting that, in Definition 4.9, the conditions (a) for $X_i$, $Y_i$ and $Z_{r-i}$ are equivalent. Condition (b) for $X_i$ is equivalent to the event that condition (c) holds for both $Y_i$ and $Z_{r-i}$. Finally, condition (c) for $X_i$ is vacuous, and condition (b) for both $Y_i$ and $Z_{r-i}$ is vacuous.
The following claim almost finishes the proof.

Claim 4.19. There is a universal constant $C' > 0$ such that if $p = C' r B(r L^2/N^2 + \ell/L)$ then for any $t > 0$,
$$\Pr\Big[\sum_{i=1}^{r} Y_i \geq t + rp\Big] \leq \exp\Big(\frac{-t^2}{2r}\Big), \qquad \Pr\Big[\sum_{i=1}^{r} Z_i \geq t + rp\Big] \leq \exp\Big(\frac{-t^2}{2r}\Big).$$
Proof. The proof is based on a martingale tail bound. For any $i \in \{1, \ldots, r\}$, let $Y_{<i} = \{Y_k \,|\, k < i\}$ and $Z_{>i} = \{Z_k \,|\, k > i\}$. Note that $Y_i$ (resp. $Z_i$) depends only on the underlying circuit $C$ together with the selection of pairs in squares $S^{(j)}$ for $j \leq i$ (resp. $j \geq i$). It follows from the bound shown in Lemma 4.11 that
$$\mathbb{E}[Y_i \,|\, Y_{<i}] = O\big(r B(r L^2/N^2 + \ell/L)\big), \tag{20}$$
and similarly
$$\mathbb{E}[Z_i \,|\, Z_{>i}] = O\big(r B(r L^2/N^2 + \ell/L)\big). \tag{21}$$
Let $p$ denote the maximum of the bounds on the right-hand side of (20) and (21), and assume $p \leq 1$. For any $n \in \{1, \ldots, r\}$ define $\bar{Y}_n = \sum_{i=1}^{n} Y_i - np$ and $\bar{Z}_{r-n} = \sum_{i=r-n+1}^{r} Z_i - np$. Then
$$\mathbb{E}[\bar{Y}_n \,|\, Y_{<n}] = \mathbb{E}[Y_n - p + \bar{Y}_{n-1} \,|\, Y_{<n}] \leq \bar{Y}_{n-1},$$
and
$$\mathbb{E}[\bar{Z}_{r-n} \,|\, Z_{>r-n+1}] = \mathbb{E}[Z_{r-n+1} - p + \bar{Z}_{r-(n-1)} \,|\, Z_{>r-n+1}] \leq \bar{Z}_{r-(n-1)}.$$
Additionally, it always holds that
$$|\bar{Y}_n - \bar{Y}_{n-1}| \leq |Y_n - p| \leq \max(1-p,\, p) \leq 1,$$
where the last inequality follows from the assumption that $p \leq 1$. Similarly,
$$|\bar{Z}_{r-n} - \bar{Z}_{r-(n-1)}| \leq |Z_{r-n+1} - p| \leq \max(1-p,\, p) \leq 1.$$
Thus both $\bar{Y}_n$ and $\bar{Z}_{r-n}$ form supermartingale sequences for increasing $n$. Defining $\bar{Y}_0 = \bar{Z}_r = 0$ and applying Azuma's inequality gives
$$\Pr\big(\bar{Y}_n - \bar{Y}_0 \geq t\big) = \Pr\big(\bar{Y}_n \geq t\big) \leq \exp\Big(\frac{-t^2}{2n \max(1-p, p)^2}\Big) \leq \exp\Big(\frac{-t^2}{2n}\Big),$$
and
$$\Pr\big(\bar{Z}_{r-n} - \bar{Z}_r \geq t\big) = \Pr\big(\bar{Z}_{r-n} \geq t\big) \leq \exp\Big(\frac{-t^2}{2n \max(1-p, p)^2}\Big) \leq \exp\Big(\frac{-t^2}{2n}\Big).$$
Setting n = r proves the claim.
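For intuition, here is a small Monte Carlo sketch (ours; i.i.d. indicators are the simplest special case satisfying $\mathbb{E}[Y_i \,|\, Y_{<i}] \leq p$, and all numerical values are arbitrary) comparing an empirical tail of $\sum_i Y_i - rp$ with the Azuma bound $\exp(-t^2/2r)$.

```python
import numpy as np

# Monte Carlo sketch of the Azuma step: if E[Y_i | past] <= p, then
# Pr[sum_i Y_i - r p >= t] <= exp(-t^2 / 2r). We use i.i.d. Bernoulli(p)
# indicators, the simplest such case; r, p, t are arbitrary illustrative values.
rng = np.random.default_rng(0)
r, p, t, trials = 400, 0.05, 40.0, 20_000

Y = rng.random((trials, r)) < p
exceed = ((Y.sum(axis=1) - r * p) >= t).mean()
print(exceed, np.exp(-t**2 / (2 * r)))   # empirical tail vs. Azuma bound
```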
Using Claim 4.18 it follows that $\sum_{i=1}^{r} X_i \leq \sum_{i=1}^{r} Y_i + \sum_{i=1}^{r} Z_i$. Therefore, for any $t > 0$,
$$\Pr\Big[\sum_{i=1}^{r} X_i \geq 2t + 2rp\Big] \leq \Pr\Big[\sum_{i=1}^{r} (Y_i + Z_i) \geq 2t + 2rp\Big] \leq \Pr\Big[\sum_{i=1}^{r} Y_i \geq t + rp\Big] + \Pr\Big[\sum_{i=1}^{r} Z_i \geq t + rp\Big] \leq 2\exp\Big(\frac{-t^2}{2r}\Big),$$
where the last inequality follows from Claim 4.19. Replacing $t$ with $t/2$ and using Claim 4.17 proves (16).
Derandomization. Lemma 4.12 states that if a pattern is chosen according to the distribution $D^{(r)}(N, L)$, then it is $S$-causal with probability close to 1, regardless of the choice of $S$. The following lemma shows that it is possible to partially derandomize the distribution, at little loss in the success probability.
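To make the underlying distribution of Definition 4.10 concrete, here is a simplified Python sampler (ours, for illustration only): it draws the boxes and input locations of a pattern and omits the explicit construction of the vertex-disjoint star paths, which Definition 4.10 fixes in advance.

```python
import random

# Illustrative sampler for D^(r)(N, L): sample boxes and input vertices only;
# the star paths connecting them are omitted. Assumes r is a perfect square.
def sample_pattern(N, L, r, ell):
    M = N // int(r ** 0.5)            # side length of each of the r squares
    T = M // (2 * L + 1)              # boxes per side within each square
    pattern = []
    for i in range(r):
        sq_row, sq_col = divmod(i, int(r ** 0.5))
        boxes = random.sample([(br, bc) for br in range(T) for bc in range(T)],
                              ell + 1)                 # ell + 1 distinct boxes
        center, others = boxes[0], boxes[1:]
        u = []
        for (br, bc) in others:
            # uniform vertex inside the box, in global grid coordinates
            vr = sq_row * M + br * (2 * L + 1) + random.randrange(2 * L + 1)
            vc = sq_col * M + bc * (2 * L + 1) + random.randrange(2 * L + 1)
            u.append((vr, vc))
        pattern.append((tuple(u), center))             # star paths omitted
    return pattern

print(sample_pattern(N=1024, L=8, r=16, ell=3)[0])
```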
Circuit Games

Let $N \geq 1$ be an integer grid size. Let $r \geq 1$ be an integer number of repetitions. Let $G$ be an $(\ell, k, m)$ stabilizer game. In this section we design a circuit game $G' = G_{G,N,r}$ associated with $(G, N, r)$ in a way that the circuit game has similar completeness and soundness properties as $G$ (more precisely, as a rotated, stretched game obtained from $G$, using the stars in an input pattern $P$ associated with $(G, N, r)$ that is provided as input to the circuit to define the lengths of the stretches; see Sect. 3.2 for the definition of rotated and stretched games).
We first give a general definition that specifies what we mean by a "circuit game".
Definition 5.1. (Circuit game) Given input and output sets I, O respectively, a circuit game is a relation R ⊆ I × O, together with a probability distribution π on I. We say that a circuit C wins the circuit game (R, π) with probability p if, on average over an input x ∈ I sampled according to π , the circuit returns an output y ∈ O such that (x, y) ∈ R with probability p.
To specify the relation associated with the circuit game that we construct from $G$, it is convenient to first introduce a quantum circuit that succeeds in the game with certainty. This is done in Sect. 5.1. In Sect. 5.2 we give the definition of the circuit game.
Definition and completeness.
Informally, the game $G_{G,N,r}$ is obtained by "planting $r$ copies of $G$ in the grid $grid_N$". Let $P$ be an input pattern associated with $(G, N, r)$ (see Definition 4.5). Let $k' = \max_j k_j$ be the maximum number of qudits used by a player in the honest strategy for $G$, $|\psi\rangle$ the $(k'\ell)$-qudit state used in the strategy (padded if needed), and $D$ the depth of a circuit that prepares $|\psi\rangle$ from $|0\rangle$. Assume that $D \geq 2$. We describe a depth-$(D+1)$ quantum circuit $C_{ideal}$ that takes an input from
$$I = \big\{P : \text{input pattern for } G\big\} \times \big\{(x^{(i)}_1, \ldots, x^{(i)}_\ell)_{1 \leq i \leq r} : r\text{-tuple of queries in } G\big\}, \tag{23}$$
and returns a string in the output set
$$O = \prod_{i=1}^{r} \prod_{j=1}^{\ell} \big(\mathbb{Z}_d^{2k'}\big)^{\ell^{(i)}_j} \times \big(\mathbb{Z}_d^{m_j}\big)^{u^{(i)}_j}. \tag{24}$$
In (24) we have used the vertices in $\ell^{(i)}_j$ and $u^{(i)}_j$ to label indices of the elements of $O$. Note that these are always distinct. In Sect. 5.1.2 we show how to modify the format for the input and the output in a way that the circuit can be made geometrically local on a 2D grid.
The computation performed by the circuit $C_{ideal}$ proceeds in three stages:
• In the first stage, the circuit initializes a lattice of qudits as follows. Each vertex in $grid_N$ is associated with $4k'$ qudits, organized in 4 groups of $k'$ that we call the "left", "right", "top" and "bottom" groups associated with that vertex. Each of these groups is initialized in a maximally entangled state with the group from the neighboring grid vertex that is closest to it, i.e. the "top" group at vertex $(i, j)$ is associated with the "bottom" group at vertex $(i, j+1)$, etc. In addition, for each center location $g^{(i)}$ of a star $\ell^{(i)}$ in $P$, the circuit creates the state $|\psi\rangle$ on the $(k'\ell)$ qudits associated with the vertices $(g^{(i)}_1, \ldots, g^{(i)}_\ell)$; for each vertex $g^{(i)}_j$, a group of qudits is used that is not connected to the next vertex in the path $\ell^{(i)}_j$. (This replaces the creation of the maximally entangled state, for that group of qudits.) This step can be implemented in depth $\max(2, D)$.
• In the second stage, the circuit implements an entanglement transfer protocol as described in Sect. 5.1.1, using each of the simple paths that form a star $\ell^{(i)}$ from $P$ to route the qudits of $|\psi\rangle$. The measurement outcomes in $(\mathbb{Z}_d)^{2k'}$ from the teleportation measurements obtained at each vertex in $\ell^{(i)}$ are recorded at the location at which they are obtained, and will eventually form part of the output of the circuit. This step can be completed in depth 1.
• In the last stage, the circuit implements the honest quantum strategy for the game $G$, using the locations $u^{(i)}_j$ indicated in $P$ to specify the $k'$ qudits to be used by the $j$th player (the group used is the one closest to the endpoint of the path $\ell^{(i)}_j$), and $x^{(i)}_j$ as the player's question. The outcomes obtained are returned as part of the output. This step can be completed in depth 1, and can be executed in parallel with the previous step.
The following lemma states that outputs generated by this circuit satisfy the win condition for an associated rotated, stretched game.

Lemma 5.2. Let $G$ be an $(\ell, k, m)$ stabilizer game, $1 \leq r \leq N$, and $P$ an input pattern associated with $(G, N, r)$. Let $x^{(1)}, \ldots, x^{(r)}$ be an arbitrary tuple of $r$ queries for $G$. For all $i \in \{1, \ldots, r\}$ and $j \in \{1, \ldots, \ell\}$ let $(r^{(i)}_j, a^{(i)}_j) \in (\mathbb{Z}_d^{2k'})^{\ell^{(i)}_j} \times \mathbb{Z}_d^{m_j}$ be the outputs generated by an execution of $C_{ideal}$ on input $P$ and $(x^{(1)}, \ldots, x^{(r)})$. Then for any $i \in \{1, \ldots, r\}$, $\{(r^{(i)}_j, a^{(i)}_j)\}_{j \in \{1, \ldots, \ell\}}$ is a valid $\ell$-tuple of answers for the players in the rotated stretched game $G^{S,R}_{\ell^{(i)}}$, on query $x^{(i)}$.
Proof. The lemma follows from the definition of $C_{ideal}$, the properties of the entanglement transfer protocol stated in Lemma 5.6, and the definition of the rotated, stretched game.
Entanglement transfer.
We introduce a simple procedure for routing entanglement along a path, such that nearest neighbors on the path have been initialized in a maximally entangled state. This is a standard calculation; for completeness we include the details.
Lemma 5.3 (Entanglement transfer I). Let $d \geq 2$ and $|EPR_d\rangle = \frac{1}{\sqrt{d}}\sum_i |ii\rangle$ a maximally entangled state on $d$-dimensional qudits. Let $a, b \in \mathbb{Z}_d$ and
$$|\psi\rangle_{ABCD} = \big(X^a_A \otimes Z^b_B\big)|EPR_d\rangle_{AB} \otimes |EPR_d\rangle_{CD}$$
a maximally entangled state on four qudits. Then upon measuring the qudits in registers $B$ and $C$ in the Bell basis $\big\{(X^x \otimes Z^y)|EPR_d\rangle : x, y \in \mathbb{Z}_d\big\}$, the post-measurement state is equal (up to global phase) to
$$\big(X^{a+x}_A \otimes Z^{b-y}_D\big)|EPR_d\rangle_{AD} \otimes \big(X^x_B \otimes Z^y_C\big)|EPR_d\rangle_{BC}. \tag{25}$$
Proof. We evaluate the post-measurement state by computing the result of applying the measurement projector onto the state $(X^x \otimes Z^y)|EPR_d\rangle$. Let $\omega = e^{2\pi i/d}$. Then
$$\begin{aligned}
&\big(X^x \otimes Z^y\big)|EPR_d\rangle\langle EPR_d|_{BC}\big(X^{-x} \otimes Z^{-y}\big)\,\big(X^a_A \otimes Z^b_B\big)|EPR_d\rangle_{AB} \otimes |EPR_d\rangle_{CD} \\
&\quad= \big(X^x_B \otimes Z^y_C\big)\Big(\frac{1}{d}\sum_{k,l}|kk\rangle\langle ll|\Big)\big(X^{-x}_B \otimes Z^{-y}_C\big)\big(X^a_A \otimes Z^b_B\big)\,\frac{1}{d}\sum_{i,j}|i\rangle|i\rangle|j\rangle|j\rangle \\
&\quad= \big(X^x_B \otimes Z^y_C\big)\Big(\frac{1}{d}\sum_{k,l}|kk\rangle\langle ll|\Big)\big(X^{-x}_B \otimes Z^{-y}_C\big)\,\frac{1}{d}\sum_{i,j}\omega^{bi}|i+a\rangle|i\rangle|j\rangle|j\rangle \\
&\quad= \big(X^x_B \otimes Z^y_C\big)\Big(\frac{1}{d}\sum_{k,l}|kk\rangle\langle ll|\Big)\,\frac{1}{d}\sum_{i,j}\omega^{bi - yj}|i+a\rangle|i-x\rangle|j\rangle|j\rangle \\
&\quad= \big(X^x_B \otimes Z^y_C\big)\,\frac{1}{d^2}\sum_{i,j,k}\omega^{bi - yj}\,\delta_{i-x,\,j}\,|i+a\rangle|k\rangle|k\rangle|j\rangle \\
&\quad= \big(X^x_B \otimes Z^y_C\big)\,\frac{1}{d^2}\sum_{j,k}\omega^{b(j+x) - yj}\,|j+x+a\rangle|k\rangle|k\rangle|j\rangle \\
&\quad= \omega^{bx}\,\big(X^{a+x}_A \otimes Z^{b-y}_D\big)|EPR_d\rangle_{AD} \otimes \big(X^x_B \otimes Z^y_C\big)|EPR_d\rangle_{BC}.
\end{aligned}$$
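The identity can also be checked numerically. The following self-contained Python sketch (ours, not from the original text) verifies Lemma 5.3 for qubits ($d = 2$) over all choices of $a, b, x, y$, comparing the projected state with the right-hand side of (25) up to global phase.

```python
import numpy as np
from itertools import product

# Numerical check of the entanglement-transfer identity (25) for d = 2.
d = 2
I2 = np.eye(d, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
epr = np.eye(d).reshape(d * d).astype(complex) / np.sqrt(d)  # (|00> + |11>)/sqrt(2)

def P(a, b):
    """The Pauli operator X^a Z^b (exponents mod d)."""
    return np.linalg.matrix_power(X, a % d) @ np.linalg.matrix_power(Z, b % d)

for a, b, x, y in product(range(d), repeat=4):
    # Initial state on registers (A,B,C,D): (X^a_A Z^b_B)|EPR>_{AB} (x) |EPR>_{CD}
    psi = np.kron(np.kron(P(a, 0), P(0, b)) @ epr, epr)
    # Project the middle pair (B,C) onto the Bell state (X^x (x) Z^y)|EPR>
    bell = np.kron(P(x, 0), P(0, y)) @ epr
    post = np.kron(I2, np.kron(np.outer(bell, bell.conj()), I2)) @ psi
    post /= np.linalg.norm(post)
    # Right-hand side of (25), built on ordering (A,D,B,C), permuted to (A,B,C,D)
    rhs = np.kron(np.kron(P(a + x, 0), P(0, b - y)) @ epr, bell)
    rhs = rhs.reshape(2, 2, 2, 2).transpose(0, 2, 3, 1).reshape(16)
    assert abs(abs(np.vdot(post, rhs)) - 1) < 1e-10, (a, b, x, y)

print("Lemma 5.3 verified for all a, b, x, y in Z_2")
```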
Lemma 5.4 (Entanglement transfer II). Let $n \geq 1$, and $L_1, \ldots, L_n, R_1, \ldots, R_n$ qudit registers such that each $L_i$ is maximally entangled with $R_i$. Suppose one performs $(n-1)$ Bell basis measurements on the qudit pairs $(R_1, L_2), \ldots, (R_{n-1}, L_n)$, so that the post-measurement state of the $i$th pair is $(X^{x_i} \otimes Z^{z_i})|EPR_d\rangle$. Let $x = \sum_i x_i$ and $z = \sum_i z_i$. Then the post-measurement state of the remaining pair $(L_1, R_n)$ is $(X^x \otimes Z^{-z})|EPR_d\rangle$.

Proof. The $(n-1)$ Bell basis measurements commute, so we can think of them as being performed in sequence, performing the measurement on $(R_k, L_{k+1})$ at the $k$th step. Using Lemma 5.3 and induction, one can check that after the $k$th measurement, the qudits $(L_1, R_{k+1})$ are in post-measurement state $\big(X^{\sum_{i=1}^{k} x_i} \otimes Z^{-\sum_{i=1}^{k} z_i}\big)|EPR_d\rangle$.
Suppose that we have prepared a state $|\varphi\rangle$ of $k$ qudits in one part of the grid and we would like to teleport it to another part of the grid. We show how to design a depth-2 circuit that accomplishes the teleportation.

Definition 5.5 (Low-depth state teleportation protocol). Let $N \geq 1$ and $A$ and $B$ be ordered lists of vertices in $grid_N$ with $|A| = |B| = k$. Pick $\{\ell_j\}_{1 \leq j \leq k}$ a set of $k$ vertex-disjoint, even-length paths on the grid, each with one endpoint in $A$ and one endpoint in $B$. For any $j \in \{1, \ldots, k\}$ denote the vertices of $\ell_j$ as $v_0, \ldots, v_l$, with endpoints $v_0 \in A$, $v_l \in B$. From this set of paths define a depth-2 measurement circuit as follows. In the first layer, to each of the "odd-even" qudit pairs $(v_1, v_2), (v_3, v_4), \ldots, (v_{l-1}, v_l)$ apply a gate taking the two-qudit state $|00\rangle$ to $|EPR_d\rangle$. In the second layer, measure each of the "even-odd" qudit pairs $(v_0, v_1), (v_2, v_3), \ldots, (v_{l-2}, v_{l-1})$ in the Bell basis, and return the string of measurement outcomes.
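A tiny Python sketch (ours, illustrative only) of the pairing pattern may help: given the vertices of one even-length path, it returns the pairs entangled in layer 1 and the pairs measured in layer 2.

```python
# Two-layer schedule of Definition 5.5 for a single even-length path
# v_0, ..., v_l: layer 1 entangles the "odd-even" pairs, layer 2 measures
# the "even-odd" pairs in the Bell basis.
def teleport_layers(path):
    l = len(path) - 1
    assert l % 2 == 0, "the path must have even length"
    entangle = [(path[i], path[i + 1]) for i in range(1, l, 2)]  # (v1,v2), (v3,v4), ...
    measure = [(path[i], path[i + 1]) for i in range(0, l, 2)]   # (v0,v1), (v2,v3), ...
    return entangle, measure

ent, meas = teleport_layers(["v0", "v1", "v2", "v3", "v4"])
print(ent)   # [('v1', 'v2'), ('v3', 'v4')]
print(meas)  # [('v0', 'v1'), ('v2', 'v3')]
```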
We now prove that, up to local corrections, and under a technical condition on $|\varphi\rangle$, this teleportation circuit indeed brings $|\varphi\rangle$ from region $A$ on the grid to region $B$.

Lemma 5.6. Suppose that a grid $grid_N$ of qudits is initialized in a state that is in tensor product between the qudits of $A$ and $grid_N \setminus A$ and equals $|\psi\rangle = |\varphi\rangle_A \otimes |0\rangle_{grid_N \setminus A}$. Furthermore suppose that $|\varphi\rangle$ is locally maximally mixed, in the sense that the marginal density matrix of any individual qudit in $A$ is the maximally mixed state.
Suppose we execute the low-depth teleportation protocol described in Definition 5.5. For $j \in \{1, \ldots, k\}$ and each measured pair $i$ along the $j$th path of the teleportation circuit, let $\big(X^{a^j_i} \otimes Z^{b^j_i}\big)|EPR_d\rangle$ be its post-measurement state.

Then there is a unitary operator $P$ such that the post-measurement state $|\psi'\rangle$ of the grid is $|\psi'\rangle = (P|\varphi\rangle)_B \otimes |aux\rangle_{grid_N \setminus B}$. Furthermore, $P$ is a tensor product of $k$ Pauli operators such that $P$ acts on the $j$th qudit of $B$ as $X^{a_j} Z^{-b_j}$, where $a_j = \sum_i a^j_i$ and $b_j$ is defined similarly.
Proof. We analyze the circuit one path at a time. Fix $i \in \{1, \ldots, k\}$. Let $x_i \in A$ be one endpoint of the path $\ell_i$ and $y_i$ the other. Suppose that $|\varphi\rangle$ is any locally maximally mixed state on a set $C$ of qudits with $x_i \in C$. Let $C' = C \setminus \{x_i\}$. Then there is a unitary $V : C' \to C_1 \otimes C_2$ such that
$$(I \otimes V)|\varphi\rangle_{x_i C'} = |EPR_d\rangle_{x_i C_1} \otimes |\varphi'\rangle_{C_2}. \tag{26}$$
Now suppose we apply $V$ and then apply the entangling gates and measurements along $\ell_i$. By Lemma 5.4, the state of the qudits $y_i$ and $C_1$ is
$$\big(X^{a_i} \otimes Z^{-b_i}\big)|EPR_d\rangle_{y_i C_1} = \big(X^{a_i} Z^{-b_i} \otimes I\big)|EPR_d\rangle_{y_i C_1}. \tag{27}$$
The equality in (27) can be verified by noticing that $Z \otimes Z^{\dagger}$ stabilizes $|EPR_d\rangle$. Applying $V^{\dagger}$ gives
$$(I \otimes V^{\dagger})\big(X^{a_i} Z^{-b_i} \otimes I\big)|EPR_d\rangle_{y_i C_1} \otimes |\varphi'\rangle_{C_2} = \big(X^{a_i} Z^{-b_i} \otimes I\big)|\varphi\rangle_{y_i C'}. \tag{28}$$
Notice that the operation along the path commutes with $V$. Therefore, applying $V$, applying that operation, and then applying $V^{\dagger}$ is equivalent to just applying that operation. Notice that the resulting state continues to be locally maximally mixed. Therefore, we can apply the above repeatedly until all of the path circuits have been applied. Then the final state is as desired.
Geometric locality.
We explain how to modify the input and output sets $I$ and $O$ specified in (23) and (24) respectively in a way that both input and output are Boolean strings of the same length that can be organized in a 2-dimensional pattern, and such that the circuit described in Sect. 5.1 can be implemented in the same depth $(D+1)$ using only geometrically local gates.
Recall that the input to the circuit consists of an input pattern $P = \{(u^{(i)}, \ell^{(i)})\}_{1 \leq i \leq r}$ together with an $r$-tuple of queries $\{(x^{(i)}_1, \ldots, x^{(i)}_\ell)\}_{1 \leq i \leq r}$ in $G$. Recall also that we think of the circuit as being organized on an $N \times N$ grid of vertices, such that each vertex contains 4 groups of $k'$ qudits, each group facing one of the vertex's nearest neighbors on the grid. We index the input and output sets by grid vertices, with each vertex associated with an element taken from a constant-size alphabet $\Theta$ that is defined in (29) below.
Each star $\ell^{(i)}$ specifies $\ell$ paths $\ell^{(i)}_j$ from the central box to the noncentral boxes. Assign to one point in each noncentral box the label $u^{(i)}_j$; also assign to $\ell$ points inside the central box the labels $g^{(i)}_j$, and extend the paths $\ell^{(i)}_j$ so that their endpoints are $u^{(i)}_j$ and $g^{(i)}_j$. We naturally distribute each question $x^{(i)}_j$ at the grid vertex indicated by $u^{(i)}_j$. For each edge $(v, w)$ in a path, at vertex $v$ (resp. $w$) we include a symbol that indicates that a teleportation measurement is to be performed on the group of $k'$ qudits nearest to vertex $w$ (resp. $v$). For any grid vertex $v$, the output of the circuit at vertex $v$ is either an answer in $G$, $a^{(i)}_j \in (\mathbb{Z}_d)^{m_j}$, or a teleportation measurement outcome, which is an element of $(\mathbb{Z}_d)^{2k'}$. A question $x^{(i)}_j$ is an element of $(\mathbb{Z}_d^{k})^{m_j}$. This leads us to consider the input and output sets
$$I = O = \Theta^{grid_N}, \quad \text{where } \Theta = \{0, 1\}^{2mk' \log d}, \tag{29}$$
where $m = \max_j m_j$ and we fix an arbitrary embedding of the natural input and output alphabets in $\Theta$. Note that not all input strings are used; since we consider the parameters $d, m, k'$ to be constants (depending only on the type of stabilizer game chosen), the cardinality of the alphabet is constant.
Circuit game definition.
Having specified the ideal (or "honest") quantum circuit that we have in mind, we are ready to give a formal definition of the circuit game associated with $r$ copies of a stabilizer game $G$. Recall the definition of the distribution $D^{(r)}$ on input patterns given in Definition 4.10. (For clarity, we omit the arguments $N, L$, for which we will eventually make an appropriate choice.)

Definition 5.7. Let $G$ be an $(\ell, k, m)$ stabilizer game and $1 \leq r \leq N$ integer. The circuit game $G_{G,N,r}$ is a game on the input and output sets defined in (29). The input distribution $\pi$ is obtained by independently sampling an input pattern $P$ according to $D^{(r)}$ and a tuple of $r$ independent queries $(x^{(1)}, \ldots, x^{(r)})$ for $G$, and encoding them as an element of $I$ as described in Sect. 5.1.2. The relation $R \subseteq I \times O$ is defined as the support of the output distribution of the circuit described in Sect. 5.1, when it is provided an input in the support of $\pi$.
Skipping ahead, we note that in Sect. 6.2 we consider a slight variation of the circuit game $G_{G,N,r}$ from Definition 5.7, where the $r$ query tuples to $G$ are no longer independent and the win condition is relaxed to allow failure in some of the game instances. These modifications allow us to obtain a circuit game whose inputs can be sampled using few random bits (polylogarithmic in $N$), and that can be won with high probability by a circuit whose gates are subject to a limited amount of noise.
Soundness.
We describe a reduction from circuit strategies (i.e. circuits with constant fan-in and bounded depth) in the circuit game introduced in Definition 5.7 to strategies for the players in the game G. The next lemma refers to the notions of lightcone, input pattern, and circuit specification introduced in Sects. 4.1 and 4.2, and of rotated, stretched and repeated game introduced in Sects. 3.2 and 3.3.
Lemma 5.8 (Circuit locality implies local simulation). Let $G' = G_{G,N,r}$ be a circuit game as in Definition 5.7. Let $P = \{(u^{(i)}, \ell^{(i)})\}_{1 \leq i \leq r}$ be an input pattern in the support of the input distribution for $G'$. Let $C$ be a circuit with fan-in $K$ and depth $D$ that wins with probability $p$ in $G_{G,N,r}$, for some $0 \leq p \leq 1$, conditioned on the input pattern being $P$.

Let $\eta > 0$. Let $L_f$ be the lightcone function obtained from the circuit graph, and $S = (L_f, \mathrm{bad}_{in}, \emptyset)$ an associated circuit specification. Assume that $P$ is $S$-causal and that a fraction at least $1 - \delta$ of all input pairs $(u, \ell) \in P$ are $S$-valid, for some $\delta \in [0, 1]$.

Then there exists an $(r\ell)$-player strategy in the $r$-repeated rotated stretched game $\tilde{G} = ((G^r)^S_{\tilde\ell})^R$, for some $\tilde\ell$ depending on $C$ that is defined in the proof, such that with probability at least $p$ the strategy succeeds in a fraction at least $1 - \delta$ of the game instances.
Proof. An input to the circuit $C$ consists of two parts: the pattern $P = \{(u^{(i)}, \ell^{(i)})\}_{1 \leq i \leq r}$, and the queries $\{(x^{(i)}_1, \ldots, x^{(i)}_\ell)\}_{1 \leq i \leq r}$, which are embedded in the input to the circuit as described in Sect. 5.1.2. By assumption a fraction at most $\delta$ of the $(u^{(i)}, \ell^{(i)})$ are not $S$-valid. For the remainder of the argument we ignore those pairs (equivalently, relabel $r$ to $(1-\delta)r$). When designing a strategy for the players in the game, the players associated to ignored pairs ignore their question and return a random answer.

The assumption that $P$ is $S$-causal implies that for each $i \in \{1, \ldots, r\}$ each of the vertices $v \in \ell^{(i)}$ has a backwards lightcone that includes at most one of the input locations $u^{(i)}_j$. Moreover, all vertices in $\ell^{(i)} \cap \mathrm{Box}(u^{(i)}_j)$ have a backwards lightcone that includes no other input location than $u^{(i)}_j$. We define an $(r\ell)$-player strategy in the rotated, stretched game $\tilde{G}$. For each $i \in \{1, \ldots, r\}$, partition $\ell^{(i)}$ into sets $\tilde\ell^{(i)}_j$, $j \in \{1, \ldots, \ell\}$, such that the only input location in the backwards lightcone of any vertex in $\tilde\ell^{(i)}_j$ is $u^{(i)}_j$. (Note that the lightcones may intersect at other grid vertices.) Recall that by definition each wire of $C$ is associated with the space $\mathbb{C}^d$, where $d = |\Theta|$ with $\Theta$ the input alphabet for the circuit game. We now describe an unambiguous way to generate a density matrix $\sigma$ on a tensor power of $\mathbb{C}^d$, together with an assignment of each qudit of $\sigma$ to a player $(i, j)$, using the circuit $C$ and the fixed input pattern $P$.
We define $\sigma$ as the output of the circuit $C$ when certain inputs have been fixed and certain wires have been traced out. For all input grid vertices that are not an input location $u^{(i)}_j$, hard-wire the input to $|0\rangle$. Execute the circuit until a vertex $v$ of the circuit graph that is in the forward lightcone of an input vertex associated with a location $u^{(i)}_j$ has to be considered. Since no input has been hard-wired for that vertex, the circuit cannot proceed. There are two cases:
• If the forward lightcone of $v$ intersects the forward lightcones of two different input locations, then no vertex in the forward lightcone of $v$ can lie on any of the stars $\ell^{(i')}$ (as otherwise that vertex would be a vertex of a star whose backwards lightcone contains two distinct input locations). In that case, trace out the vertex.
• In all other cases, vertex $v$ is in the forward lightcone of a single input location $u^{(i)}_j$. In this case, give the circuit wire associated with that vertex (in the state that it currently is) to player $(i, j)$.
Finally, split the unassigned vertices on any path $\ell^{(i)}_j$ in an arbitrary way among the players; the set $\tilde\ell^{(i)}_j$ is defined as the set of vertices from the star $\ell^{(i)}$ assigned to player $(i, j)$. All remaining unassigned vertices are traced out.
This procedure specifies the state $\sigma$ shared by the players (see Fig. 2 for an illustration). It remains to define their observables. Once the game starts, each player reads its input $x^{(i)}_j$ at location $u^{(i)}_j$ (encoded as an input to the circuit game, as specified in Sect. 5.1.2), and proceeds to complete the execution of the circuit on the qudits that it holds. If a gate has an output wire that points to a qudit that is not in the player's possession, the player measures the qudit and ignores the outcome. Finally, the player measures all qudits at the locations $\tilde\ell^{(i)}_j$ in the computational basis, and returns them as its answer (decoded as an answer in $G$, as specified in Sect. 5.1.2).

The fact that this strategy for the players has the same success probability in $\tilde{G}$ as the circuit $C$ in the circuit game $G_{G,N,r}$ follows from the fact that the success criterion in $G_{G,N,r}$ only involves output vertices that lie along the stars $\ell^{(i)}$, and it can be verified that the joint operations performed by the players in the above-defined strategy correctly compute the reduced density matrix computed by the circuit on all those output vertices. Finally, using Lemma 5.6 it can be verified that the success criterion in the circuit game matches the win condition for the rotated stretched game.
Remark 5.9. The proof of Lemma 5.8 establishes a stronger statement than claimed in the lemma, which will be useful later. Specifically, the reduction from a circuit to a strategy for the players in $\tilde{G}$ constructed in the proof applies whenever the input pattern $P$ chosen in the circuit game is $S$-causal, for $S$ the circuit specification derived from $C$. Moreover, whenever this is the case the reduction yields a strategy for the players that exactly reproduces the (suitably decoded) output distribution of the circuit, on any choice of queries $x^{(i)}$.
We end this section with the proof of Theorem 1.2, which specifies a circuit game for which there is a very large separation between the optimal winning probabilities of classical low-depth and quantum circuits.

Theorem 1.2 (Exponential soundness, restated). There exist universal constants $c, c' > 0$, a family of distributions $\{D_N\}_{N \in \mathbb{N}}$ such that for every $N \geq 1$, $D_N$ is a distribution on $\{0,1\}^{N^2}$, and a family of efficiently verifiable relations $\{R_N\}_{N \in \mathbb{N}}$ such that for every $N \geq 1$, $R_N \subseteq \{0,1\}^{N^2} \times \{0,1\}^{N^2}$, such that the following holds:
• (Completeness) There exists a family of depth-3 geometrically local (in 2D) quantum circuits $\{C_N\}_{N \in \mathbb{N}}$ such that for any $N \geq 1$ and any input $x$ in the support of $D_N$ it holds that $(x, C_N(x)) \in R_N$ with probability 1.
• (Soundness) For any family of classical circuits $\{C_N\}_{N \in \mathbb{N}}$ such that for every $N \geq 1$, $C_N$ has constant fan-in and depth at most $c \log N$, the probability that $(x, C_N(x)) \in R_N$ for $x \sim D_N$ is $O(\exp(-N^{c'}))$.
Proof. Let $G_N = G_{G,N,r}$ be the circuit game introduced in Definition 5.7, where the stabilizer game $G$ is instantiated as the GHZ game from Definition 3.14, and let $\pi$ and $P$ be the input distribution and input pattern introduced in Definition 5.7 respectively.
Completeness: By Lemma 5.2 the circuit $C_{ideal}$, as defined from the game $G_N$ at the beginning of Sect. 5.1, wins the game $G_N$ with probability at least $(\omega^*(G))^r$, where $\omega^*(G)$ is the optimal entangled winning probability for the stabilizer game $G$. Since $\omega^*(G) = 1$ it follows that $C_{ideal}$ succeeds at $G_N$ with probability 1. As noted after Definition 3.14, the shared state $|\psi_{GHZ}\rangle$ in the optimal strategy for the GHZ game can be prepared starting from $|000\rangle$ by a circuit of depth 3. It follows by definition that $C_{ideal}$ has depth at most 4. This establishes the completeness part of the corollary.
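For concreteness, a standard depth-3 preparation of the GHZ state from $|000\rangle$ can be checked in a few lines of Python (ours, for illustration; the state $|\psi_{GHZ}\rangle$ used in the paper's honest strategy may differ from this one by local unitaries).

```python
import numpy as np

# Depth-3 preparation of |GHZ> = (|000> + |111>)/sqrt(2) from |000>,
# one gate layer per line, with gates local on a line of three qubits.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])

state = np.zeros(8); state[0] = 1.0                 # |000>
state = np.kron(H, np.kron(I2, I2)) @ state         # layer 1: H on qubit 0
state = np.kron(CNOT, I2) @ state                   # layer 2: CNOT(0 -> 1)
state = np.kron(I2, CNOT) @ state                   # layer 3: CNOT(1 -> 2)
print(np.round(state, 3))                           # 1/sqrt(2) on |000> and |111>
```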
Soundness: Fix an $\eta \in (0, \frac{1}{7})$. Let $C$ be a classical circuit with fan-in $K$ and depth $D \leq c \log N$ (for $c$ a sufficiently small constant depending on $\eta$) that wins with probability $p_{win}$ in $G_{G,N,r}$. We show that if $p_{win}$ is too large, then there is a good classical strategy for the players in the game $GHZ^{\lceil r/2 \rceil}$, the $\lceil r/2 \rceil$-repeated GHZ game defined in Definition 3.10.

For an input pattern $P$ in the support of $D^{(r)}$, say that $P$ has a large $S$-causal subpattern if there exists a subpattern $P' \subseteq P$ such that $P'$ is $S$-causal and $|P'| \geq \lceil r/2 \rceil$. Define $q$ as
$$q = 1 - \Pr\big(P \text{ has a large } S\text{-causal subpattern}\big).$$
We first show an upper bound on $q$. For any pattern $P$ let $P_{core} = P_{VAL} \cap P_{CAUS}$, where as in Lemma 4.15,
$$P_{VAL} = \big\{(u, \ell) \in P \,\big|\, (u, \ell) \text{ is } S\text{-valid}\big\}, \qquad P_{CAUS} = \big\{(u, \ell) \in P \,\big|\, (u, \ell) \text{ is individually-}S\text{-causal with respect to } P_{VAL}\big\}.$$
By construction $P_{core}$ is $S$-causal. To prove an upper bound on $q$ it suffices to place a lower bound on the probability that $|P_{core}| \geq \lceil r/2 \rceil$. Recall that the classical circuit $C$ has fan-in $K$, depth $D \leq c \log N$, and circuit specification $S = (L_f, \mathrm{bad}_{in}, \emptyset)$. By Lemma 4.3, we may choose $c$ as a function of $\eta$ such that $S$ is $(B, R_{in}, 0)$-bounded, where $B = O(N^{\eta})$ and $R_{in} = O(N^{2-\eta})$. By (18) we get
$$\Pr\big[\,|P_{core}| \geq r\big(1 - C r R_{in}/N^2 - 2p - 3t/r\big)\big] \geq 1 - 4\exp\big(-t^2/8r\big), \tag{31}$$
where $p = C' r B(r L^2/N^2 + \ell/L)$. Here $\ell = 3$, and the other parameters depend on $N$.

To set parameters, first recall that $B = O(N^{\eta})$ and $R_{in} = O(N^{2-\eta})$. Set $t = r/10$, $r = \Theta(N^{\eta})$, and $L$ such that $L = O(N^{1-3\eta})$ and $L = \Omega(N^{4\eta})$, which is possible as long as $\eta < 1/7$. Then $p = o(1)$ and $r R_{in}/N^2 = O(1)$. By choosing the constants appropriately, we can ensure that
$$1 - C r R_{in}/N^2 - 2p - 3t/r > 1/2.$$
With this choice of parameters, (31) implies that $|P_{core}| \geq \lceil r/2 \rceil$ with probability at least $1 - 4\exp(-C'' r)$, for some constant $C'' > 0$. To conclude, note that whenever $P$ contains an $S$-causal subpattern $P' \subseteq P$ such that $|P'| \geq \lceil r/2 \rceil$, it follows from Lemma 5.8 and Remark 5.9 that the circuit $C$ implies a strategy for the $(3\lceil r/2 \rceil)$ classical players in the repeated game $GHZ^{\lceil r/2 \rceil}$. Using that the maximum success probability of classical players in GHZ is $3/4$ and that the classical value multiplies under repetition (since the players are distinct), it follows that the implied strategy has success probability at most $(3/4)^{\lceil r/2 \rceil}$. Combining the two bounds, $p_{win} \leq 4\exp(-C'' r) + (3/4)^{\lceil r/2 \rceil} = O(\exp(-N^{c'}))$ for a suitable constant $c' > 0$ depending on $\eta$, which establishes the soundness claim.
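The exponent bookkeeping in this parameter setting can be sanity-checked mechanically. The short Python sketch below (ours) treats each $O(\cdot)/\Theta(\cdot)$ quantity as an exact power of $N$ and tracks exponents only; the particular value of $\eta$ is an arbitrary choice in $(0, 1/7)$.

```python
# Exponent bookkeeping: each quantity is treated as N^e exactly (constants
# suppressed); ell is a constant (ell = 3). Any eta in (0, 1/7) works.
eta = 0.1
B, R_in = eta, 2 - eta            # B = O(N^eta), R_in = O(N^(2-eta))
r, L = eta, 1 - 3 * eta           # r = Theta(N^eta), L = O(N^(1-3*eta))

term1 = r + B + (r + 2 * L - 2)   # exponent of r*B * (r*L^2 / N^2)
term2 = r + B - L                 # exponent of r*B * (ell / L)
print("p = o(1):", max(term1, term2) < 0)
print("r*R_in/N^2 = O(1):", r + R_in - 2 <= 0)
print("L = Omega(N^(4*eta)):", L >= 4 * eta)
```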
Randomness Generation
In this section we give the construction of a circuit game such that any low-depth circuit that succeeds in the game with non-negligible probability must generate outputs that have large min-entropy, even conditioned on the inputs and on side information that may be correlated with the initial state of the circuit. The main idea for the construction is to embed a large number of copies of a simple stabilizer game $G$ in a circuit game, as described in Sect. 5. (We use the Mermin 3-player GHZ game [Mer90a], though a similar reduction could be performed starting from any stabilizer game whose quantum value is larger than its classical value.) Using the reduction from Lemma 5.8, it follows from Remark 5.9 that the output distribution of any circuit that wins with non-negligible probability in the circuit game can be deterministically mapped to a strategy for a (stretched, rotated) variant of the $r$-fold parallel repetition of $G$, with $r$ independent groups of players. This reduction implies that, to bound the output entropy of the circuit, it suffices to place a lower bound on the output entropy of any strategy in the parallel repeated game.

To accomplish this last step we employ the framework based on the Entropy Accumulation Theorem (EAT) [DFR16] introduced in [AFDF+18], including the improvements from [DF18]. This framework allows one to place a linear (in the number of repetitions) lower bound on the amount of min-entropy generated in the sequential repetition of a nonlocal game, using a lower bound on the function that measures the von Neumann entropy generated in a single instantiation of the game as a function of the success probability. Our setting of parallel repetition is more constrained (thus in principle easier to analyze) than the sequential setting, but the results from [AFDF+18, DF18] still give the best rates for both settings.
In Sect. 6.1 we start by establishing a bound on the single-round randomness for the three-player GHZ game that takes the form required to apply the framework from [AFDF+18]. In Sect. 6.2 we combine this bound with the reduction from circuit games to nonlocal games shown in Sect. 5 to deduce a family of randomness-generating circuit games.
6.1. Randomness generation from the GHZ game. We briefly recall the formalism from [AFDF+18], tailored to our setting (in particular, we focus on processes specified by quantum strategies in a nonlocal game, instead of the arbitrary quantum channels considered in [AFDF+18]). The main definition that is needed is that of a min-tradeoff function, which specifies a lower bound on the amount of randomness generated in a single execution of a nonlocal game, as a function of the players' probability of winning the game. We give the definition for stabilizer games, as introduced in Definition 3.1.

Definition 6.1 (Min-tradeoff function). Let $G$ be an $(\ell, k, m)$ stabilizer game. Fix a set of measurements $\{M^x_j\}$ for the players in the game. For any $\omega \in [0, 1]$, let $\Sigma(\omega)$ denote the set of states $\rho_{P_1 \cdots P_\ell R}$ such that when the players' state is initialized to $\rho$ (with player $i$ being given register $P_i$), the players' strategy wins the game with probability at least $\omega$.$^6$ Then a real affine function $f$ on $[0, 1]$ is called an (affine) min-tradeoff function for $G$ and $\{M^x_j\}$ if it satisfies
$$f(\omega) \leq \min_{\rho \in \Sigma(\omega)} H(A|QR)_{M(\rho)},$$
where the entropy is evaluated on the post-measurement state $M(\rho)$ obtained after application of the players' measurements, and $Q$ and $A$ are random variables that represent inputs (distributed according to $\pi$) and outputs for the players in $G$. If $f$ is a min-tradeoff function for a game $G$ and every possible set of measurements $\{M^x_j\}$ for the players, then we simply say that $f$ is a min-tradeoff function for $G$.
To illustrate the definition we apply results from [WBA18] to give a min-tradeoff function for the Mermin GHZ game introduced in Definition 3.14. It is well-known (and easily verified) that the best classical strategy for this game succeeds with probability $\frac{3}{4}$. This in particular implies that any strategy that wins with probability strictly larger than $\frac{3}{4}$ cannot be deterministic, hence generates randomness. The bound shown in [WBA18], stated below as Lemma 6.2, provides a tight lower bound on the conditional entropy present in the outputs of any strategy that succeeds with sufficiently large probability.$^7$
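The classical value $3/4$ can be verified by exhaustive search. The Python sketch below (ours) assumes the standard Mermin conventions: questions are the four even-weight triples and the players win iff the XOR of their answer bits equals the OR of the question bits; if Definition 3.14 uses a different but equivalent formulation, the same count applies.

```python
from itertools import product

# Brute-force verification that deterministic classical strategies win the
# Mermin GHZ game with probability at most 3/4 (standard conventions assumed).
questions = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
fns = list(product([0, 1], repeat=2))   # (f(0), f(1)): the 4 one-bit functions

best = 0
for f1, f2, f3 in product(fns, repeat=3):
    wins = sum((f1[x1] ^ f2[x2] ^ f3[x3]) == (x1 | x2 | x3)
               for (x1, x2, x3) in questions)
    best = max(best, wins)
print(best / len(questions))  # 0.75
```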
Lemma 6.2 [WBA18]. Let $\tau = (\rho, \{M^x_j\})$ be a strategy with success probability $\omega \geq \frac{7}{8}$ in the game GHZ, where $\rho$ is a density matrix on the provers' registers $P_1 \cdots P_\ell$ and an arbitrary auxiliary register $R$. Then
$$H(A_1 A_2 | RQ) \geq f_{GHZ}(\omega) = -\log\Big(\frac{5}{4} - \omega + \sqrt{3}\,\sqrt{\big(\omega - \tfrac{1}{2}\big)(1 - \omega)}\Big), \tag{32}$$
where $Q$ is a random variable that denotes the query to the players, and $A_1, A_2$ are random variables that denote the answers $a_1, a_2 \in \mathbb{Z}_2$ of the first two players.

Note that for $\omega = 1$, the bound (32) gives 2 bits of entropy, which is clearly optimal. If $\omega = 1 - \varepsilon$ for small $\varepsilon$, the bound degrades as $2 - O(\sqrt{\varepsilon})$.
Following the framework from [AFDF+18], the bound provided in Lemma 6.2 already implies a linear lower bound on the entropy generated by the sequential repetition of the GHZ game. For our purposes it will be convenient to have a bound on the entropy generated by the stretched, rotated variant of the GHZ game, as introduced in Sect. 3.2.

Corollary 6.3 (Min-tradeoff function for GHZ). Let $\frac{7}{8} < p_s < 1$. Let $\ell$ be a tuple of sets as in Definition 3.9. Then the function $g_{p_s} : [0, 1] \to \mathbb{R}$ defined by
$$g_{p_s}(q) = f_{GHZ}(p_s) + (q - p_s)\,\frac{d f_{GHZ}}{d\omega}(p_s)$$
is a min-tradeoff function for the rotated stretched game $GHZ^{S,R}_{\ell}$.
Proof. First we observe that the bound (32) from Lemma 6.2 applies equally to the rotated stretched game $GHZ^{S,R}_{\ell}$, for any fixed $\ell$. Indeed, fix a strategy $(\rho, \{M^x_j\})$ in $GHZ^{S,R}_{\ell}$. Using Lemma 3.8 we obtain a strategy $\tau = (\rho, \{(M')^x_j\})$ for GHZ that has the same success probability. Furthermore, in the coarse-graining of the strategy the answers $A_1, A_2, A_3 \in \mathbb{Z}_2$ in $\tau$ are a deterministic function of the answers $A'_1, A'_2, A'_3 \in \mathbb{Z}_2 \times (\mathbb{Z}_2^2)^{|\ell_1|}$ in $GHZ^{S,R}_{\ell}$, where the second factor is for the stretched rotation string. Therefore the same bound on randomness generation that applies to GHZ applies to $GHZ^{S,R}_{\ell}$ (as long as all outputs in the game are included, which is the case for the definition of a min-tradeoff function).
To conclude, note that the right-hand side of (32) is a convex function of ω, hence it is at least its tangent at any point.
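The bound (32) and the tangent construction of Corollary 6.3 are easy to evaluate numerically. The Python sketch below (ours) takes logarithms base 2, so that entropies are measured in bits, and computes the tangent slope by a numerical derivative rather than in closed form.

```python
import numpy as np

# Evaluate f_GHZ from (32) and the affine min-tradeoff function g_{p_s} of
# Corollary 6.3 (the tangent of f_GHZ at p_s). Logs are base 2 (bits).
def f_ghz(w):
    # valid for 7/8 <= w <= 1 (Lemma 6.2)
    return -np.log2(5/4 - w + np.sqrt(3) * np.sqrt((w - 1/2) * (1 - w)))

def g(q, p_s, eps=1e-6):
    slope = (f_ghz(p_s + eps) - f_ghz(p_s - eps)) / (2 * eps)  # numerical tangent
    return f_ghz(p_s) + (q - p_s) * slope

print(f_ghz(1.0))          # 2.0: two bits of entropy at winning probability 1
print(f_ghz(7/8))          # ~0.415 bits at the threshold omega = 7/8
print(g(0.95, p_s=0.99))   # the tangent lies below f_ghz, by convexity
```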
Recall that our goal is to generate a large amount of randomness by requiring a circuit to play multiple copies of the game GHZ in parallel. Towards this, we introduce a partially derandomized variant of the repeated game, where the inputs are chosen according to a very biased distribution in order to save on the randomness required to generate them. The resulting game is a direct analogue of the protocol for randomness expansion from the CHSH game given in [DF18].
Definition 6.4. Let $r \geq 1$ be an integer. Let $p, \gamma \in [0, 1]$. Let $\ell$ be a tuple of sets as in Definition 3.9. Let $GHZ^{S,R}_{r,\ell,p,\gamma}$ denote the $r$-fold repetition, as in Definition 3.10, of the rotated, stretched game $GHZ^{S,R}_{\ell}$ with the following two modifications:
• The $r$ queries, instead of being sampled independently, are chosen according to the following distribution: first, select a subset $S \subseteq \{1, \ldots, r\}$ by including each element independently with probability $p$; second, select queries $x^{(1)}, \ldots, x^{(r)} \in X$ such that $x^{(i)}$ is sampled as in GHZ when $i \in S$, and $x^{(i)} \leftarrow \bar{x}$ for $i \notin S$, where $\bar{x}$ is an arbitrary, but fixed, query in GHZ;
• It is only required that the win condition is satisfied for a fraction at least $(1 - \gamma)$ of the tuples of answers $a^{(i)}$ such that $i \in S$ (there is no requirement for $i \notin S$).

Corollary 6.3 shows that any sufficiently successful strategy in the (rotated, stretched) game GHZ must generate outputs that contain a constant amount of entropy. It is therefore natural to expect that a strategy for the repeated game from Definition 6.4 should generate a linear (in the number of repetitions) amount of entropy. The difficulty is to obtain a bound on the entropy generated, conditioned on having produced outputs that satisfy the win condition of the game, but without placing an implicit assumption on the intrinsic winning probability of the strategy (which would be difficult to estimate). Moreover, the fact that the strategy involves all players simultaneously measuring parts of the same entangled state introduces correlations that prevent a direct treatment using techniques appropriate for the simpler case of i.i.d. (identically and independently distributed) outputs.

The Entropy Accumulation Theorem, as applied in [AFDF+18], is designed specifically to address these difficulties, and indeed guarantees that the repeated game introduced in Definition 6.4 generates a linear amount of min-entropy. Here we use the improved results from [DF18],$^8$ which give good bounds even when the "test probability" $p$ from Definition 6.4 is very small.

Lemma 6.5. Let $r \geq 1$, $p, \gamma \in [0, 1]$, and $\varepsilon > 0$. Let $(\rho, \{M^x_j\})$, where $\rho$ is a density matrix on the players' private registers together with an ancilla register $E$, be a strategy for the $(3r)$ players in $GHZ^{S,R}_{r,\ell,p,\gamma}$ that succeeds with probability at least $\varepsilon$. Let $Q = (Q^{(1)}, \ldots, Q^{(r)})$ (resp. $A = (A^{(1)}, \ldots, A^{(r)})$) be random variables associated with the players' queries (resp. answers) in each copy of GHZ; note that each $Q^{(i)}$ (resp. $A^{(i)}$) is itself a 3-tuple. Let $\rho^s_{AQE}$ denote the state of $AQE$ conditioned on the players succeeding in the game (the players' private registers are traced out). Then the lower bound (33) holds. Note that the lower bound provided in (33) is non-trivial as soon as $p = \Omega(\log N/r)$ and $\varepsilon$ is at least inverse-polynomially large (smaller $\varepsilon$ is also possible, but requires a larger $p$).

Proof. The proof is identical to the proof of [DF18, Theorem 6.1], which applies to the CHSH game. The only change needed is to use the min-tradeoff function from Corollary 6.3 instead of the min-tradeoff function $g^*$ for the CHSH game used in [DF18]. The bound (33) then follows from the bound stated in [DF18, Theorem 6.1].
6.2. Randomness generation from low-depth circuits.
Using the technique for embedding a stabilizer game into a circuit game described in Sect. 5, we can leverage the randomness generation results from the previous section to obtain a family of circuit games for certified randomness expansion. First we define the circuit games that we consider by introducing a randomness-efficient, noise-tolerant modification of the game from Definition 5.7. Even though we will eventually instantiate the definition with $G = GHZ$, we give the definition for a general stabilizer game.

Definition 6.6. Let $G$ be an $(\ell, k, m)$ stabilizer game and $N \geq 1$ an integer. Let $L, B, R_{in}, r$ be parameters that satisfy the constraints (22) for some constant $\eta > 0$. Let $p, \gamma \in [0, 1]$. The circuit game $G_{G,N,r,p,\gamma}$ is defined as the game $G_{G,N,r}$ with the following modifications:
1. The input pattern $P$ is sampled according to the distribution $\tilde{D}^{(r)}$ from Lemma 4.20;
2. The tuple of queries $(x^{(1)}, \ldots, x^{(r)})$ for $G$ is sampled by first selecting a subset $S \subseteq \{1, \ldots, r\}$ by including each element independently with probability $p$, and second selecting queries $x^{(1)}, \ldots, x^{(r)} \in X$ such that $x^{(i)}$ is sampled as in $G$ when $i \in S$, and $x^{(i)} \leftarrow \bar{x}$ for $i \notin S$, where $\bar{x}$ is an arbitrary, but fixed, query in $G$;
3. It is only required that the win condition is satisfied for a fraction at least $(1 - \gamma)$ of the tuples of answers $a^{(i)}$ such that $i \in S$ (there is no requirement for $i \notin S$).
As shown in Lemma 4.20, it is possible to sample an input pattern from $\tilde{D}^{(r)}$ using $O(\log^2 N)$ random bits. In addition, it is possible to sample inputs as in Definition 6.6 using $O(H(p)r)$ random bits, where $H(\cdot)$ is the binary entropy function. So, if $p = \Theta(\log N/r)$, then it is possible to sample inputs in $G_{G,N,r,p,\gamma}$ using $O(\log^2 N)$ random bits. (To this count, one may add $O(\log^3 N)$ random bits, sufficient to extract near-uniform random bits from the output of the circuit by using a seed-efficient randomness extractor [DPVR12].) The following theorem places a lower bound on the amount of randomness generated by a circuit that succeeds with non-negligible probability in the circuit game from Definition 6.6, when the stabilizer game $G$ is instantiated as the GHZ game from Definition 3.14. Together with the preceding comments, the theorem establishes the randomness certification part of Theorem 1.1 (the soundness part follows immediately, since classical circuits cannot increase the entropy present in their input distribution).
Theorem 6.7. Let $r, p, \gamma, N$ be as in Definition 6.6, for some $\eta > 0$. Let $\eta' > 0$. Let $D$ be such that $D \leq c \log N$, for some sufficiently small constant $c > 0$ depending on $\eta, \eta'$. Let $C$ be a (classical or quantum) circuit with gates of constant fan-in and depth at most $D$. Assume that $C$ succeeds in the game $G_{GHZ,N,r,p,\gamma}$ with probability at least $\delta$, for some $\delta = \Omega(N^{-\eta'})$. Suppose the circuit is executed on its input, described by a random variable $I$, as well as an auxiliary state $|\psi\rangle_{CE}$ such that the circuit acts on register $C$, and the register $E$ is available to an adversary. Let $O$ be a random variable that represents the circuit output, $O = C(I)$. Let $\rho^s_{OIE}$ denote the state of the inputs, outputs, and side information, conditioned on the circuit winning in the circuit game. Then there exists an $\eta'' > 0$ such that for any $\varepsilon = \Omega(N^{-\eta''})$, it holds that
$$H^{\varepsilon}_{\min}(O|IE)_{\rho^s} \geq \big(\kappa - f(\gamma)\big)\, r - O\Big(\sqrt{\tfrac{r}{p}}\,\Big(\ln \tfrac{1}{\varepsilon \operatorname{Tr}(\rho^s)}\Big)^{1/2}\Big), \tag{34}$$
where the implicit constant depends on $\eta, \eta', \eta''$.
Proof. Let $S$ be the circuit specification obtained from the circuit $C$. It follows from Lemma 4.3 that by choosing the constant $c$ small enough, we force $S$ to be $(B, R_{in}, 0)$-bounded where $R_{in} = O(N^{2-\eta})$ and $B = O(N^{\eta})$. With this choice of parameters, it follows from Lemma 4.20 and Lemma 4.12 that a pattern $P$ sampled from $\tilde{D}^{(r)}$ is $S$-causal with probability at least $1 - O(N^{-c'})$, for some $c' > 0$. Since $C$ succeeds in $G_{GHZ,N,r,p,\gamma}$ with probability $\delta$, it follows that conditioned on success of $C$, the pattern $P$ chosen as part of the input is $S$-causal with probability at least $1 - O(N^{-c'}\delta^{-1})$. Let $\delta' > 0$ and let $P$ be an $S$-causal pattern on which $C$ succeeds with probability at least $\delta'$ when the pattern $P$ is fixed. For any such pattern, Lemma 5.8 together with Remark 5.9 implies that the circuit's behavior can be simulated by a strategy for the players in the stretched rotated game $GHZ^{S,R}_{r,\ell,p,\gamma}$, for some collection of sets $\ell$ that is determined from $S$. Using Lemma 6.5 it follows that, conditioned on the input $I$ to the circuit being of the form $(P, x)$ for some query $x \in X$, the lower bound (33) holds.

If $P$ is such that $C$ succeeds with probability less than $\delta'$, then there is no bound on the min-entropy. However, the probability that this happens, conditioned on winning, is at most $\delta'/\delta$. (To see this, apply Bayes' rule directly.) Choosing $\delta' = \sqrt{\delta}$ we get an $\eta''$, depending on $\eta'$, such that (34) holds.
Theorem 1.1. There exist universal constants $c, d > 0$, a family of distributions $\{D_N\}_{N \in \mathbb{N}}$ such that for every $N \geq 1$, $D_N$ is an efficiently sampleable distribution on $\{0,1\}^{N^2}$, and a family of efficiently verifiable relations $\{R_N\}_{N \in \mathbb{N}}$ such that for every $N \geq 1$, $R_N \subseteq \{0,1\}^{N^2} \times \{0,1\}^{N^2}$, such that the following holds:
• (Completeness) There exists a family of depth-3 geometrically local (in 2D) quantum circuits $\{C_N\}_{N \in \mathbb{N}}$ such that for any $N \geq 1$ and any input $x$ in the support of $D_N$ it holds that $(x, C_N(x)) \in R_N$ with probability 1.
• (Soundness) For any family of classical circuits $\{C_N\}_{N \in \mathbb{N}}$ such that for every $N \geq 1$, $C_N$ has constant fan-in and depth at most $c \log N$, the probability that $(x, C_N(x)) \in R_N$ for $x \sim D_N$ is $O(N^{-1/5})$.
• (Randomness certification) It is possible to efficiently generate a sample from $D_N$ using $O(\log^2 N)$ uniformly random bits. Moreover, for any family $\{C_N\}_{N \in \mathbb{N}}$ of (classical or quantum) circuits with gates of constant fan-in and such that $C_N$ has depth at most $c \log N$ and satisfies
$$\Pr_{x \sim D_N}\big[(x, C_N(x)) \in R_N\big] = \Omega(N^{-1/5}),$$
it holds that the distribution of $C_N(x)$ for $x \sim D_N$ has $\Omega(N^d)$ bits of (smooth) min-entropy.
Definition 2.4 (State-dependent distance). Let $\rho$ be a density matrix on $\mathcal{H}$ and let $M = \{M_a\}_a$, $N = \{N_a\}_a$ be two POVMs on $\mathcal{H}$ that have the same set of outcomes. The state-dependent distance between $M$ and $N$ is $d_\rho(M, N) = \sum_a \operatorname{Tr}\big((M_a - N_a) \ldots$
strategies for an $\ell$-player nonlocal game $G$. We say that $\tau$ is $\varepsilon$-close to $\tilde\tau$ if and only if $\|\rho - \tilde\rho\|_1 \leq \varepsilon$ and for all $i \in \{1, \ldots, \ell\}$ it holds that $\mathbb{E}_x\, d_\rho(M^{x_i}, \tilde{M}^{x_i}) \leq \varepsilon$, where the expectation is over $x = (x_1, \ldots, x_\ell)$ drawn from $\pi$. Definition 2.6 (Isometric strategies). Let $\tau = (\rho, \{M^{a_i}_{x_i}\})$ and $\tau' = (\rho', \{(M')^{a_i}_{x_i}\})$ be strategies for an $\ell$-player nonlocal game $G$, and $\varepsilon > 0$. We say that $\tau$ is $\varepsilon$-isometric to $\tau'$ if and only if there exist isometries $\ldots$
Theorem 2.9 [DPVR12]. For any integers $n, m$ and for any $\varepsilon > 0$ there exists a $d = O(\log^2(n/\varepsilon) \cdot \log m)$ and an efficient classical procedure $\mathrm{Ext}$ $\ldots$
Lemma 3.5 (Cor can be computed locally). For a string $s$, let $s|_i$ denote the string which is equal to $s_i$ in the $i$th position and $0$ everywhere else. Then for any $q$ and $r$, $\ldots$
Lemma 3.6. (Commutators) Suppose B commutes with [A, C]. Then [A, B][A, C] = [A, BC].
Definition 3.7. (Rotated stabilizer game) Given a stabilizer game G
Definition 4.1. Let $C$ be a circuit. The circuit graph associated with $C$ is a directed graph on vertex set $V = I \cup U \cup O$. Here $I$ contains one vertex for each input wire, $O$ contains one vertex for each output wire, and $U$ contains one vertex for each gate. There is an edge from $u$ to $v$ if the output of $u$ is an input of $v$. In particular, all vertices of $I$ are sources (have indegree 0) and all vertices of $O$ are sinks (have outdegree 0). We call vertices in $I$ input vertices and vertices in $O$ output vertices.
Fig. 1. A star centered at box $b_0$, connecting $\ell = 3$ boxes $b_1, b_2, b_3$ in $grid_N$ (grid edges are not shown on the picture).
Definition 4.5 (Input pattern). Let $G$ be an $(\ell, k, m)$ stabilizer game. Let $1 \leq r \leq N$ be integer. An input pattern associated with $(G, N, r)$ is a tuple $P = \{(u^{(i)}, \ell^{(i)})\}_{1 \leq i \leq r}$ such that
• each $u^{(i)} = (u^{(i)}_1, \ldots, u^{(i)}_\ell)$ is an $\ell$-tuple of vertices of $grid_N$, which we refer to as input locations, $\ldots$
Definition 4.10 (Random input patterns). Let $L, N \geq 1$, $\ell \geq 1$, and $1 \leq r \leq N$ be integer such that $3L\sqrt{\ell+1} \leq M = N/\sqrt{r}$. Divide $grid_N$ into $r$ disjoint squares $S^{(1)}, \ldots, S^{(r)}$ of side length $M$ each. Partition each square into $T = \lfloor \frac{M}{2L+1} \rfloor^2$ boxes of side length $(2L+1)$, in an arbitrary way. For each possible choice of $(\ell+1)$ distinct boxes $b_0, b_1, \ldots, b_\ell$, fix a set $\mathrm{stars}(b_0, \ldots, b_\ell)$ of stars such that
• each star has $b_0$ as its central box and $b_1, \ldots, b_\ell$ as its other boxes,
• the total length of the paths in any star is at most $2\ell M$, and
• the paths of the distinct stars are vertex-disjoint.
Consider the following distribution $D^{(r)}(N, L)$ on input patterns on $grid_N$. For each $i \in \{1, \ldots, r\}$ select $b^{(i)}_0, \ldots, b^{(i)}_\ell$ uniformly at random among the $T$ boxes that partition the $i$th square $S^{(i)}$ and, for $j \in \{1, \ldots, \ell\}$, a vertex $u^{(i)}_j$ uniformly at random within the $j$th selected box. Finally, select a star $\ell^{(i)} \in \mathrm{stars}(b^{(i)}_0, b^{(i)}_1, \ldots, b^{(i)}_\ell)$ uniformly at random.
Lemma 4.11. Let $M \geq 1$, $1 \leq B, L \leq M/4$ and $0 \leq R_{in}, R_{out} \leq M^2$ be integer. Let $S = (L_f, \mathrm{bad}_{in}, \mathrm{bad}_{out})$ be a circuit specification for $grid_M$ that is $(B, R_{in}, R_{out})$-bounded. Let $P = \{(u, \ell)\}$ be drawn from the distribution $D^{(1)}$ introduced in Definition 4.10. Then the probability that the unique pair $(u, \ell)$ in $P$ is not individually-$S$-causal with respect to $P$ is $O\big(L^2(R_{out} + B)/M^2 + \ell(R_{out} + \ell B)/L\big)$.

Now we check conditions (a) and (c). Call a box bad if it intersects $\mathrm{bad}_{out}$. Under $D^{(1)}$ there are $\lfloor M/(2L+1) \rfloor^2 \geq \frac{1}{4}(M/2L)^2$ possible box locations. By a union bound, the probability that any box is bad is at most $16 L^2 R_{out}/M^2 = O(L^2 R_{out}/M^2)$. There are at least $L/\ell$ possible choices for the paths of $\ell$. Since all such choices are disjoint, again by a union bound the probability that $\ell \cap X \neq \emptyset$ for some subset $X$ is at most $\ell |X|/L$. Letting $X = \mathrm{bad}_{out} \cup \bigcup_i L_f(u_i)$, so that $|X| \leq R_{out} + \ell B$, we see that the probability of violating condition (a) or condition (c) is at most $O\big(\ell(R_{out} + \ell B)/L\big)$.
Lemma 4.12 (Random input patterns are usually causal). Let $N \geq 1$, $1 \leq r \leq N$ and $1 \leq B, R_{in}, L \leq N/4$ be integer. Let $S = (L_f, \mathrm{bad}_{in}, \emptyset)$ be a circuit specification for $grid_N$ that is $(B, R_{in}, 0)$-bounded. Then the probability that an input pattern $P$ chosen according to $D^{(r)}(N, L)$ (as defined in Definition 4.10) is not $S$-causal is at most $O\big(r^2 B\big(r(L^2 + R_{in})/N^2 + 1/L\big)\big)$.
Lemma 4.15 (Random input patterns are mostly causal with high probability). Let $N \geq 1$, $1 \leq r \leq N$ and $1 \leq B, R_{in}, L \leq N/4$ be integer. Let $S = (L_f, \mathrm{bad}_{in}, \emptyset)$ be a circuit specification for $grid_N$ that is $(B, R_{in}, 0)$-bounded. Consider an input pattern $P$ chosen according to $D^{(r)}(N, L)$. Let
$$P_{VAL} = \big\{(u, \ell) \in P \,\big|\, (u, \ell) \text{ is } S\text{-valid}\big\}, \qquad P_{CAUS} = \big\{(u, \ell) \in P \,\big|\, (u, \ell) \text{ is individually-}S\text{-causal with respect to } P_{VAL}\big\}.$$
Lemma 4.20. Let $N \geq 1$ be an integer and $\eta > 0$. Let $L, B, R_{in}, r$ be integer such that
$$B = O(N^{\eta}), \quad L = O(N^{4/7+\eta}), \quad r = O(N^{2/7-2\eta}). \tag{22}$$
Then using $O(\log^2 N)$ uniformly random bits it is possible to sample from a distribution $\tilde{D}^{(r)}$ on input patterns $P$ for $grid_N$ such that for any circuit specification $S$ that is $(B, R_{in}, 0)$-bounded, $P$ is $S$-causal with probability $1 - O(N^{-\eta})$, and a random $(u, \ell) \in P$ fails to be valid with probability $O(R_{in}/N^2 + N^{-\eta})$.

Proof. The choice of parameters made in the lemma is such that $r^3 B L^2/N^2 = O(N^{-\eta})$ and $r^2 B/L = O(N^{-\eta})$, so Lemma 4.12 gives that $P$ sampled according to $D^{(r)}$ is $S$-causal with probability $1 - O(N^{-\eta})$, and a random $(u, \ell) \in P$ fails to be valid with probability $O(R_{in}/N^2)$. A pattern in the support of $D^{(r)}(N, L)$ can be specified using $O(r \log N)$ uniformly random bits: for each $i \in \{1, \ldots, r\}$, there are $O(\log N)$ random bits to specify the locations of the $u^{(i)}_j$, and $O(\log N)$ additional bits to specify the star that connects the $u^{(i)}_j$. (Recall we fixed a small set of possible stars when we defined the distribution.) Given any choice of such random bits, and a fixed circuit specification, by Savitch's theorem it is possible to decide whether the pattern is $S$-causal in $O(\log^2 N)$ space, given read-only access to the circuit graph determining $S$. This allows us to apply the INW pseudorandom generator for small-space computations [INW94] with a seed of $O(\log^2 N)$ bits to obtain the claimed result, with the additional error $O(N^{-\eta})$ being due to the pseudorandom generator.
. Also assign to points inside the central box the labels g(i) j . Extend the paths (i) j so that their endpoints are u (i) j and g (i)
j
. Moreover, all vertices in (i) ∩ Box(u(i) j ) have a backwards lightcone that includes no other input location than u (i) j . We define an (r )-player strategy in the rotated, stretched game G . For each i ∈ {1, . . . , r }, partition (i) into sets (i) j , j ∈ {1, . . . , }, such that the only input location in the backwards lightcone of any vertex in (i) j is u (i) j . Note that the lightcones may intersect at other grid vertices.
Fig. 2. Illustration of the construction in the proof of Lemma 5.8. A circuit with two input locations $u_1, u_2$ (case $r = 1$ and $\ell = 2$). The star has two simple paths linking the center vertices $g_i$ with $u_i$, for $i = 1, 2$. The forward lightcones of $u_1$ and $u_2$ do not intersect at any vertex of the star. The output vertices along the star are partitioned into $\tilde\ell_1$ and $\tilde\ell_2$ in such a way that all vertices within the lightcone of an input location $u_i$ are associated with the index $i$.
in $G$. The following lemma summarizes this observation in terms of rigidity of the rotated game. Recall the definition of a robustly rigid game in Definition 2.7.

Lemma 3.8 (Rotation preserves rigidity). Suppose that a stabilizer game $G$ is robustly rigid (see Definition 2.7). Let $\tau = (|\psi\rangle\langle\psi|, \{M^x_j\})$ be a rigid strategy and $f$ the robustness. Then the rotated stabilizer game $G^R$ is rigid in the following sense. For any strategy $\ldots$
In fact, even less is needed; see the description of the protocol.
The definition of the purified distance is not important for us, and we defer to [Tom15] for a precise definition.
Here by "coarse-graining" we mean the strategy that is implied by requiring each player to compute the update (8) locally; Lemma 3.5 shows that this can always be done.
We may without loss of generality assume that the dimension of $R$ is no more than the sum of the dimensions of the players' private registers $P_j$, themselves fixed by the measurements $\{M^x_j\}$. Therefore, the set $\Sigma(\omega)$ can be taken to be a compact set.
See (18) in [WBA18]. The authors prove a stronger bound, which applies to the min-entropy and extends to all success probabilities in the range $[3/4, 1]$. We only state the weaker bound that will be sufficient for us. To see how the bound stated in Lemma 6.2 is obtained from (18) in [WBA18], use the replacement $\omega = \frac{1}{2} + \frac{M}{8}$.
The results in [AFDF+18, DF18] apply to a much more general scenario, and in particular allow one to prove bounds on the entropy generated by an arbitrary sequential process, as long as it satisfies a certain Markov condition. Our setting, which considers parallel repetition, is easier, and trivially satisfies the required Markov condition.
Acknowledgements. The authors thank Adam Bouland for helpful discussions and members of the Caltech theory reading group (Matthew Weidner, Andrea Coladangelo, Jenish Mehta, Chinmay Nirkhe, Rohit Gurjar, Spencer Gordon) for posing some of the questions answered in this work. We thank Isaac Kim, Jean-Francois Le Gall, and Robin Kothari for useful discussions following the initial announcement of our results. Thomas Vidick is supported by NSF CAREER Grant CCF-1553477, AFOSR YIP award number FA9550-16-1-0495, MURI Grant FA9550-18-1-0161, a CIFAR Azrieli Global Scholar award, and the Institute for Quantum Information and Matter, an NSF Physics Frontiers Center (NSF Grant PHY-1733907). Jalex Stark is supported by NSF CAREER Grant CCF-1553477, ARO Grant W911NF-12-1-0541, and NSF Grant CCF-1410022. Matthew Coudron is supported by Canada's NSERC and the Canadian Institute for Advanced Research (CIFAR), and through funding provided to IQC by the Government of Canada and the Province of Ontario.

Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Publisher's Note. Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Aar18. Aaronson, S.: Certified randomness from quantum supremacy. Personal communication (2018)
Aaronson, S., Chen, L.: Complexity-theoretic foundations of quantum supremacy experiments. In: Proceedings of the 32nd Computational Complexity Conference, p. 22. Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik (2017)
Arnon-Friedman, R., Dupuis, F., Fawzi, O., Renner, R., Vidick, T.: Practical device-independent quantum cryptography via entropy accumulation. Nat. Commun. 9(1), 459 (2018)
BCM+18. Brakerski, Z., Christiano, P., Mahadev, U., Vazirani, U., Vidick, T.: Certifiable randomness from a single quantum device (2018). arXiv preprint arXiv:1804.00640
BCP+14. Brunner, N., Cavalcanti, D., Pironio, S., Scarani, V., Wehner, S.: Bell nonlocality. Rev. Mod. Phys. 86(2), 419 (2014)
Bel64. Bell, J.S.: On the Einstein-Podolsky-Rosen paradox. Physics 1, 195-200 (1964)
Bravyi, S., Gosset, D., Koenig, R.: Quantum advantage with shallow circuits (2017). arXiv preprint arXiv:1704.00690
BGK18. Bravyi, S., Gosset, D., Koenig, R.: Quantum advantage with shallow circuits. Science 362(6412), 308-311 (2018)
BWKST19. Bene Watts, A., Kothari, R., Schaeffer, L., Tal, A.: Exponential separation between shallow quantum circuits and unbounded fan-in shallow classical circuits. In: Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing (2019)
Cleve, R., Høyer, P., Toner, B., Watrous, J.: Consequences and limits of nonlocal strategies. In: Proceedings of the 19th IEEE Conference on Computational Complexity (CCC'04), pp. 236-249. IEEE Computer Society (2004)
Chen, Z.-Y., Zhou, Q., Xue, C., Yang, X., Guo, G.-C., Guo, G.-P.: 64-qubit quantum circuit simulation. Sci. Bull. 63(15), 964-971 (2018)
Dupuis, F., Fawzi, O.: Entropy accumulation with improved second-order. Technical report (2018). arXiv:1805.11652
DFR16. Dupuis, F., Fawzi, O., Renner, R.: Entropy accumulation (2016). arXiv preprint arXiv:1607.01796
DHKLP18. Dalzell, A.M., Harrow, A.W., Koh, D.E., La Placa, R.L.: How many qubits are needed for quantum computational supremacy? (2018). arXiv preprint arXiv:1805.05224
DPVR12. De, A., Portmann, C., Vidick, T., Renner, R.: Trevisan's extractor in the presence of quantum side information. SIAM J. Comput. 41(4), 915-940 (2012)
Eke91. Ekert, A.K.: Quantum cryptography based on Bell's theorem. Phys. Rev. Lett. 67, 661-663 (1991)
Gall, F.L.: Average-case quantum advantage with shallow circuits (2018). arXiv preprint arXiv:1810.12792
GVW+15. Giustina, M., Versteegh, M.A., Wengerowsky, S., Handsteiner, J., Hochrainer, A., Phelan, K., Steinlechner, F., Kofler, J., Larsson, J.-Å., Abellán, C., et al.: Significant-loophole-free test of Bell's theorem with entangled photons. Phys. Rev. Lett. 115(25), 250401 (2015)
Hensen, B., Bernien, H., Dréau, A., Reiserer, A., Kalb, N., Blok, M., Ruitenberg, J., Vermeulen, R., Schouten, R., Abellán, C., et al.: Experimental loophole-free violation of a Bell inequality using entangled electron spins separated by 1.3 km (2015). arXiv preprint arXiv:1508.05949
HM17. Harrow, A.W., Montanaro, A.: Quantum computational supremacy. Nature 549(7671), 203 (2017)
Huang, C., Newman, M., Szegedy, M.: Explicit lower bounds on strong quantum simulation (2018). arXiv preprint arXiv:1804.10368
INW94. Impagliazzo, R., Nisan, N., Wigderson, A.: Pseudorandomness for network algorithms. In: Proceedings of the Twenty-Sixth Annual ACM Symposium on Theory of Computing, pp. 356-364. ACM (1994)
Mer90a. Mermin, N.D.: Extreme quantum entanglement in a superposition of macroscopically distinct states. Phys. Rev. Lett. 65(15), 1838 (1990)
Mermin, N.D.: Simple unified form for the major no-hidden-variables theorems. Phys. Rev. Lett. 65(27), 3373 (1990)
Markov, I.L., Fatima, A., Isakov, S.V., Boixo, S.: Quantum supremacy is both closer and farther than it appears (2018). arXiv preprint arXiv:1807.10749
NC02. Nielsen, M.A., Chuang, I.: Quantum Computation and Quantum Information (2002)
Raussendorf, R., Bravyi, S., Harrington, J.: Long-range quantum entanglement in noisy cluster states. Phys. Rev. A 71(6), 062313 (2005)
Reichardt, B., Unger, F., Vazirani, U.: A classical leash for a quantum system: Command of quantum systems via rigidity of CHSH games. Nature 496(7446), 456-460 (2013)
Schäfer, V., Ballance, C., Thirumalai, K., Stephenson, L., Ballance, T., Steane, A., Lucas, D.: Fast quantum logic gates with trapped-ion qubits. Nature 555(7694), 75 (2018)
Shalm, L.K., Meyer-Scott, E., Christensen, B.G., Bierhorst, P., Wayne, M.A., Stevens, M.J., Gerrits, T., Glancy, S., Hamel, D.R., Allman, M.S., et al.: Strong loophole-free test of local realism. Phys. Rev. Lett. 115(25), 250402 (2015)
Tomamichel, M.: Quantum Information Processing with Finite Resources: Mathematical Foundations, vol. 5. Springer, Berlin (2015)
Vazirani, U., Vidick, T.: Fully device-independent quantum key distribution. Phys. Rev. Lett. 113(14), 140501 (2014)
WBA18. Woodhead, E., Bourdoncle, B., Acín, A.: Randomness versus nonlocality in the Mermin-Bell experiment with three parties (2018). arXiv preprint arXiv:1804.09733
WBMS16. Wu, X., Bancal, J.-D., McKague, M., Scarani, V.: Device-independent parallel self-testing of two singlets. Phys. Rev. A 93(6), 062121 (2016)
Wehner, S., Elkouss, D., Hanson, R.: Quantum internet: a vision for the road ahead. Science 362(6412), eaam9288 (2018)
| [] |
[
"Automatically Summarizing Evidence from Clinical Trials: A Prototype Highlighting Current Challenges",
"Automatically Summarizing Evidence from Clinical Trials: A Prototype Highlighting Current Challenges"
] | [
"Sanjana Ramprasad [email protected] ",
"Denis Jered Mcinerney [email protected] ",
"Iain J Marshall [email protected] ",
"Byron C Wallace [email protected] ",
"\nNortheastern University\nNortheastern University\nCollege London\n",
"\nNortheastern University\n\n"
] | [
"Northeastern University\nNortheastern University\nCollege London",
"Northeastern University\n"
] | [
"Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics System Demonstrations"
] | We present TrialsSummarizer, a system that aims to automatically summarize evidence presented in the set of randomized controlled trials most relevant to a given query. Building on prior work (Marshall et al., 2020), the system retrieves trial publications matching a query specifying a combination of condition, intervention(s), and outcome(s), and ranks these according to sample size and estimated study quality. The top-k such studies are passed through a neural multi-document summarization system, yielding a synopsis of these trials. We consider two architectures: A standard sequence-to-sequence model based on BART (Lewis et al., 2019), and a multi-headed architecture intended to provide greater transparency to end-users. Both models produce fluent and relevant summaries of evidence retrieved for queries, but their tendency to introduce unsupported statements render them inappropriate for use in this domain at present. The proposed architecture may help users verify outputs allowing users to trace generated tokens back to inputs. The demonstration video is available at: https://vimeo.com/735605060 The prototype, source code, and model weights are available at: https://sanjanaramprasad.github.io/trials-summarizer/. | 10.48550/arxiv.2303.05392 | [
"https://www.aclanthology.org/2023.eacl-demo.27.pdf"
] | 257,427,641 | 2303.05392 | 1bcee4d1d502742a3fb7a5c25b52e346a7f91cb4 |
Automatically Summarizing Evidence from Clinical Trials: A Prototype Highlighting Current Challenges
May 2-4, 2023
Sanjana Ramprasad [email protected]
Denis Jered Mcinerney [email protected]
Iain J Marshall [email protected]
Byron C Wallace [email protected]
Northeastern University
Northeastern University
College London
Northeastern University
Automatically Summarizing Evidence from Clinical Trials: A Prototype Highlighting Current Challenges
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics System Demonstrations
the 17th Conference of the European Chapter of the Association for Computational Linguistics System DemonstrationsMay 2-4, 2023
We present TrialsSummarizer, a system that aims to automatically summarize evidence presented in the set of randomized controlled trials most relevant to a given query. Building on prior work (Marshall et al., 2020), the system retrieves trial publications matching a query specifying a combination of condition, intervention(s), and outcome(s), and ranks these according to sample size and estimated study quality. The top-k such studies are passed through a neural multi-document summarization system, yielding a synopsis of these trials. We consider two architectures: A standard sequence-to-sequence model based on BART (Lewis et al., 2019), and a multi-headed architecture intended to provide greater transparency to end-users. Both models produce fluent and relevant summaries of evidence retrieved for queries, but their tendency to introduce unsupported statements renders them inappropriate for use in this domain at present. The proposed architecture may help users verify outputs by allowing users to trace generated tokens back to inputs. The demonstration video is available at: https://vimeo.com/735605060 The prototype, source code, and model weights are available at: https://sanjanaramprasad.github.io/trials-summarizer/.
Introduction
Patient treatment decisions would ideally be informed by all available relevant evidence. However, realizing this aim of evidence-based care has become increasingly difficult as the medical literature (already vast) has continued to rapidly expand (Bastian et al., 2010). Well over 100 new RCT reports are now published every day (Marshall et al., 2021). Language technologies, specifically automatic summarization methods, have the potential to provide concise overviews of all evidence relevant to a given clinical question, providing a kind of systematic review on demand (Wang et al., 2022; DeYoung et al., 2021; Wallace et al., 2021).
We describe a demonstration system, TrialsSummarizer, which combines retrieval over clinical trials literature with a summarization model to provide narrative overviews of current published evidence relevant to clinical questions. Figure 1 shows an illustrative query run in our system and the resultant output. A system capable of producing accurate summaries of the medical evidence on any given topic could dramatically improve the ability of caregivers to consult the whole of the evidence base to inform care.
However, current neural summarization systems are prone to inserting inaccuracies into outputs (Kryscinski et al., 2020; Maynez et al., 2020; Pagnoni et al., 2021; Ladhak et al., 2021; Choubey et al., 2021). This has been shown specifically to be a problem in the context of medical literature summarization (Wallace et al., 2021; Otmakhova et al., 2022), where there is a heightened need for factual accuracy. A system that produces plausible but often misleading summaries of comparative treatment efficacy is useless without an efficient means for users to assess the validity of outputs.
Motivated by this need for transparency when summarizing clinical trials, we implement a summarization architecture and interface designed to permit interactions that might instill trust in outputs. Specifically, the model associates each token in a generated summary with a particular source "aspect" extracted from inputs. This in turn allows one to trace output text back to (snippets of) inputs, permitting a form of verification. The architecture also provides functionality to "in-fill" pre-defined template summaries, providing a compromise between the control afforded by templates and the flexibility of abstractive summarization. We realize this functionality in our system demonstration.
Related Work
Figure 1: An example query (regarding use of statins to reduce risk of stroke) and output summary provided by the system. In this example, the summary accurately reflects the evidence, but this is not always the case.

The (lack of) factuality of neural summarization systems is an active area of research (Chen et al., 2021; Cao et al., 2020; Dong et al., 2020; Liu et al., 2020; Goyal and Durrett, 2021; Kryscinski et al., 2020; Xie et al., 2021). This demo paper considers this issue in the context of a specific domain and application. We also explored controllability to permit interaction, in part via templates. This follows prior work on hybrid template/neural summarization (Hua and Wang, 2020; Mishra et al., 2020; Wiseman et al., 2018).
We also note that this work draws upon prior work on visualizing summarization system outputs (Vig et al., 2021; Strobelt et al., 2018; Tenney et al., 2020) and biomedical literature summarization (Plaza and Carrillo-de Albornoz, 2013; Demner-Fushman and Lin, 2006; Mollá, 2010; Sarker et al., 2017; Wallace et al., 2021). However, to our knowledge this is the first working prototype to attempt to generate (draft) evidence reviews that are both interpretable and editable on demand.
System Overview
Our interface is built on top of Trialstreamer (Marshall et al., 2020), an automated system that identifies new reports of randomized controlled trials (RCTs) in humans and then extracts and stores salient information from these in a database of all published trial information. Our system works by identifying RCT reports relevant to a given query using a straightforward retrieval technique (Section 3.1), and then passing the top-k of these through a multi-document summarization model (Section 3.2). For the latter component we consider both a standard sequence-to-sequence approach and an aspect-structured architecture (Section 3.3) intended to provide greater transparency.
Retrieving Articles
Trialstreamer (Marshall et al., 2020; Nye et al., 2020) monitors research databases, specifically PubMed (https://pubmed.ncbi.nlm.nih.gov/) and the World Health Organization International Clinical Trials Registry Platform, to automatically identify newly published reports of RCTs in humans using a previously validated classifier (Marshall et al., 2018).
Articles describing RCTs are then passed through a suite of machine learning models which extract key elements from trial reports, including: sample sizes; descriptions of trial populations, interventions, and outcomes; key results; and the reliability of the evidence reported (via an approximate risk of bias score; Higgins et al. 2019). This extracted (semi-)structured information is stored in the Trialstreamer relational database.
Extracted free-text snippets describing study populations, interventions, and outcomes (PICO elements) are also mapped onto MeSH terms² using a re-implementation of MetaMap Lite (Demner-Fushman et al., 2017).
To facilitate search, users can enter MeSH terms for a subset of populations, interventions, and outcomes, which are used to search for matches over the articles and their corresponding extracted key data in the database. Matched studies are then ranked by a score that is a function of sample size s and risk-of-bias score rob: score = s/rob; that is, we prioritize retrieval of large, high-quality trial reports.
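As a concrete illustration of this ranking rule, the sketch below orders hypothetical retrieved records by score = s/rob and keeps the top-k. The TrialRecord fields are assumptions for illustration, not the actual Trialstreamer schema.

```python
from dataclasses import dataclass

@dataclass
class TrialRecord:
    pmid: str
    sample_size: int      # extracted sample size, s
    risk_of_bias: float   # estimated risk-of-bias score, rob (> 0; lower is better)

def rank_trials(trials, k=5):
    """Return the top-k trials by score = s / rob (large, high-quality first)."""
    scored = sorted(trials, key=lambda t: t.sample_size / t.risk_of_bias, reverse=True)
    return scored[:k]

# The top-k ranked records are what gets passed to the summarizer.
matches = [TrialRecord("123", 1200, 2.0), TrialRecord("456", 300, 1.0),
           TrialRecord("789", 5000, 4.0)]
top_k = rank_trials(matches, k=5)
```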
The novelty on offer in this system demonstration is the inclusion of a summarization component, which consumes the top-k retrieved trials (we use k=5 here) and outputs a narrative summary of this evidence in the style of a systematic review abstract (Wallace et al., 2021). By combining this summarization module with the Trialstreamer database, we can provide real-time summarization of all trials that match a given query (Figure 1).
Summarizing Trials
We consider two realizations of the summarization module. We train both models on a dataset introduced in prior work which comprises collections of RCT reports (PICO elements extracted from abstracts) as inputs and Authors' Conclusions sections of systematic review abstracts authored by members of the Cochrane Collaboration as targets (Wallace et al., 2021) (see Section 4). As a first model, we adopt BART (Lewis et al., 2019) with a Longformer (Beltagy et al., 2020) encoder to accommodate the somewhat lengthy multi-document inputs. As inputs to the model we concatenate spans extracted from individual trials containing salient information, including populations, interventions, outcomes, and "punchlines." The latter refers to extracted snippets which report the main results or findings, e.g., "There was a significant increase in mortality ..."; see Lehman et al. (2019) for more details. We enclose these spans in special tags, e.g., <population>Participants were diabetics ... </population>. As additional supervision we run the same extraction models over the targets and demarcate these using the same set of tags.
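Below is a minimal sketch of this linearization step. The dictionary field names, the exact tag set, and the <study> separator token are illustrative assumptions rather than the system's actual schema.

```python
def linearize(trials):
    """Build a tagged multi-document input string.

    trials: list of dicts holding the extracted 'population', 'interventions',
    'outcomes' and 'punchline' spans for each RCT abstract.
    """
    parts = []
    for t in trials:
        parts.append(
            f"<population>{t['population']}</population>"
            f"<interventions>{t['interventions']}</interventions>"
            f"<outcomes>{t['outcomes']}</outcomes>"
            f"<punchline>{t['punchline']}</punchline>"
        )
    # "<study>" is an assumed separator marking article boundaries.
    return " <study> ".join(parts)
```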
An issue with standard sequence-to-sequence models for this task is that they provide no natural means to assess the provenance of tokens in outputs, which makes it difficult to verify the trustworthiness of generated summaries. Next we discuss an alternative architecture which is intended to provide greater transparency and controllability.
Proposed Aspect Structured Architecture to Increase Transparency
We adopt a multi-headed architecture similar to that of Goyal et al. (2021), which explicitly generates tokens corresponding to the respective aspects (Figure 2). We assume inputs are segmented into texts corresponding to a set of K fields or aspects. Here these are descriptions of trial populations, interventions, and outcomes, and "punchline" snippets reporting the main study findings. We denote the inputs for each of the K aspects by $\{x^{a_1}, \ldots, x^{a_K}\}$, where $x^{a_k}$ denotes the text for aspect k extracted from input x. Given that this is a multi-document setting (each input consists of multiple articles), $x^{a_k}$ is formed by concatenating aspect texts across all documents using special tokens to delineate individual articles. We encode aspect texts separately to obtain aspect-specific embeddings $x^{a_k}_{\text{enc}}$. We pass these (respectively) to aspect-specific decoders and a shared language model head to obtain vocabulary distributions $\hat{o}^{a_k}_t$. All model parameters are shared save for the last two decoder layers, which comprise aspect-specific parameters. Importantly, the representation for a given aspect is based only on the text associated with this aspect ($x^{a_k}$).
We model the final output as a mixture over the respective aspect distributions:

$$o_t = \sum_{k=1}^{K} z^{a_k}_t \,\hat{o}^{a_k}_t .$$
Mixture weights $z_t = (z^{a_1}_t, \ldots, z^{a_K}_t)$ encode a soft selection over aspects for timestep t and are obtained as a dot product between each penultimate decoder representation $y^{a_k}_t$ (prior to the language model head) and a learnable parameter $W_z \in \mathbb{R}^D$. The K logits $\tilde{z}^{a_k}_t$ are then normalized via a Softmax before being multiplied with the aspect-specific vocabulary distributions $\hat{o}^{a_k}_t$.

Tracing outputs to inputs. This architecture permits one to inspect the mixture weights associated with individual tokens in a generated summary, which suggests which aspect (most) influenced the output. Further inspection of the corresponding snippets from studies for this aspect may facilitate verification of outputs, and/or help to resolve errors and where they may have been introduced.

Figure 3: Template generation. To in-fill, we force generation from a specific head and monitor the model's mixture distribution to decide when to stop.
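One decoding step of the mixture computation described above can be sketched as follows. The tensor shapes and helper signature are assumptions for illustration, not the released implementation.

```python
import torch
import torch.nn.functional as F

def mixture_step(aspect_logits, aspect_hidden, W_z):
    """One decoding step of the multi-headed model (sketch).

    aspect_logits: list of K tensors, each of shape (vocab,), holding the
        per-aspect vocabulary logits from the shared LM head.
    aspect_hidden: tensor of shape (K, D), the penultimate decoder states y_t^{a_k}.
    W_z: tensor of shape (D,), the learnable aspect-scoring parameter.
    """
    o_hat = torch.stack([F.softmax(l, dim=-1) for l in aspect_logits])  # (K, vocab)
    z = F.softmax(aspect_hidden @ W_z, dim=0)                           # (K,) mixture weights
    o_t = (z.unsqueeze(-1) * o_hat).sum(dim=0)                          # final vocab distribution
    return o_t, z  # z also identifies which aspect "owns" the emitted token
```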
Controlled generation. Neural summarization models often struggle to appropriately synthesize conflicting evidence and arrive at the correct overall determination concerning a particular intervention's effectiveness. While imperfect, summarization models may nonetheless be useful by providing a means to rapidly draft synopses of the evidence, which can then be edited. The multi-headed architecture naturally permits template in-filling, because one can explicitly draw tokens from heads corresponding to aspects of interest; a sketch of this in-filling loop follows. In our demo, we allow users to toggle between different templates which correspond to different conclusions regarding the overall effectiveness of the intervention in question. (It would be simple to extend this to allow users to specify their own templates to be in-filled.)
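A minimal sketch of the in-filling loop, which is described in more detail in the next paragraph. Here `model.step` and its `force_head` argument are hypothetical helpers standing in for forced decoding from one aspect head while the mixture weights are still computed.

```python
def infill(template_parts, aspects, model, tokenizer, max_len=50):
    """In-fill a template: for each blank, decode greedily from the designated
    aspect head, stopping once the monitored mixture distribution z_t shifts
    to a different aspect (or max_len is reached)."""
    text = ""
    for prefix, aspect in zip(template_parts, aspects):
        text += prefix
        for _ in range(max_len):
            o_t, z_t = model.step(text, force_head=aspect)  # assumed helper
            if int(z_t.argmax()) != aspect:                 # mixture moved on: blank is done
                break
            text += tokenizer.decode(int(o_t.argmax()))
    return text
```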
To in-fill templates, we use the template text preceding a blank as context and then generate text from the language head corresponding to the designated aspect. To determine span length dynamically, we monitor the mixture distribution and stop when it shifts to another aspect (Figure 3).

User Interface

Figure 5 shows the interface we have built integrating the multi-headed architecture. Highlighted aspects in the summary provide a means of interpreting the source of output tokens by indicating the aspects that informed their production. One can in turn inspect the snippets associated with these aspects, which may help to identify unsupported content in the generated summary. To this end, when users click on a token, we display the subset of the input that most informed its production.
We provide additional context by displaying overviews (i.e., "punchlines") communicating the main findings of the trials. Because standard sequence-to-sequence models do not provide a mechanism to associate output tokens with input aspects, we display all aspects (and punchlines) for all trials alongside the summary for this model.
Capitalizing on the aforementioned in-filling abilities of our model, we also provide pre-defined templates for each possible "direction" of aggregate findings (significant vs. no effect). We discuss the interface along with examples in Section 5.
Dataset and Training Details
We aim to map collections of titles and abstracts describing RCTs that address the same clinical question to abstractive summaries that synthesize the evidence presented in them. We train all models on an RCT summarization dataset (Wallace et al., 2021), extracting clinically salient elements, i.e., our aspects, from each of the (unstructured) inputs as a pre-processing step using existing models (Marshall et al., 2020).
Training. We use the Huggingface Transformers library (Wolf et al., 2020) to implement both models. We initialize both models from bart-base (Lewis et al., 2019). We fine-tune the models with a batch size of 2 for 3 epochs, using the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 3e-5.
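A sketch of this configuration using the standard Hugging Face seq2seq API is below; `train_data` stands in for the (elided) tokenized dataset, and the sketch initializes from the public facebook/bart-base checkpoint.

```python
from transformers import (BartForConditionalGeneration, BartTokenizer,
                          Seq2SeqTrainer, Seq2SeqTrainingArguments)

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

args = Seq2SeqTrainingArguments(
    output_dir="trials-summarizer",
    per_device_train_batch_size=2,   # batch size 2, as reported
    num_train_epochs=3,              # 3 epochs
    learning_rate=3e-5,              # Adam with lr 3e-5 (HF default optimizer)
)
trainer = Seq2SeqTrainer(model=model, args=args,
                         train_dataset=train_data,  # assumed tokenized dataset
                         tokenizer=tokenizer)
trainer.train()
```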
Inference. We use beam search with a beam size of 3, and set the minimum and maximum lengths of generated text to 10 and 300 tokens, respectively.
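Concretely, decoding with these settings might look as follows, reusing `model` and `tokenizer` from the training sketch above; `linearized_input` is an aspect-tagged input string as illustrated earlier.

```python
inputs = tokenizer(linearized_input, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, num_beams=3, min_length=10, max_length=300)
summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(summary)
```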
Case Study: Verification and Controllability
To demonstrate the potential usefulness of the interface (and the architecture which enables it), we walk through two case studies. We highlight the type of interpretability for verification that our proposed approach provides, and also demonstrate how controllable summarization might be useful. The queries used in these case studies were designed, and the investigation performed, by co-author IJM, a medical doctor with substantial experience in evidence-based medicine.
We also compare the models and report automatic scores for ROUGE and factuality in Appendix A, finding that the two models perform comparably.
Model Interpretability
As an example to highlight the potential of the proposed architecture and interface to permit verification, we consider a query regarding the effect of Oseltamivir as an intervention for patients infected with influenza. The standard architecture produces a summary of the most relevant RCTs for this query, shown in Figure 4. This comprises two claims: (1) the intervention has been shown to reduce the risk of adverse events among adults and children, and (2) there is no consensus as to the most effective dosage. One can inspect the inputs to attempt to verify these. Doing so, we find that reported results do tend to indicate a reduced risk of adverse events and that adolescents and adults were included in some of these studies, indicating that the first claim is accurate. The second claim is harder to verify on inspection; no such uncertainty regarding dosage is explicitly communicated in the inputs. Verifying these claims using the standard seq2seq architecture is onerous because the abstractive nature of such models makes it difficult to trace parts of the output back to inputs. Therefore, verification requires reading through entire inputs to verify different aspects. The multi-headed architecture allows us to provide an interactive interface intended to permit easier verification. In particular, associating each output token with a particular aspect provides a natural mechanism for one to inspect the snippets of the inputs that might support the generated text. Figure 5 illustrates this for the aforementioned Oseltamivir and flu example. Here we show how the "effective" token in the output can be clicked on to reveal the aspect that influenced its production (Figure 2), in this case tracing back to the extracted "punchlines" conveying main study findings. This readily reveals that the claim is supported. Similarly, we can verify the claim about the population being individuals at risk of complications by tracing back to the population snippets upon which this output was conditioned.
Controllability. As mentioned above, another potential benefit of the proposed architecture is the ability to "in-fill" templates to imbue neural generative models with controllability. In particular, given that the overall (aggregate) treatment efficacy is of primary importance in this context, we pre-define templates which convey an effect direction. The idea is that if, upon verification, one finds that the model came to the wrong aggregate effect direction, they can use a pre-defined template corresponding to the correct direction to generate a more accurate summary on demand.
We show an example of a summary generated by the structured model in the top part of Figure 6. By using the interpretability features for verification discussed above, we find that the model inaccurately communicates that the intervention Chloroquine is effective for treating COVID-19. However, with the interactive interface we are able to immediately generate a new summary featuring the corrected synthesis result (direction), as depicted in the bottom of Figure 6, without the need for manual drafting.
We provide additional case studies in Appendix B.
Conclusions
We have described TrialsSummarizer, a prototype system for automatically summarizing RCTs relevant to a given query. Neural summarization models produce summaries that are readable and (mostly) relevant, but their tendency to introduce unsupported or incorrect information into outputs means they are not yet ready for use in this domain.
We implemented a multi-headed architecture intended to provide greater transparency, and provided qualitative examples highlighting its potential to permit faster verification and controllable generation. Future work is needed to test the utility of this functionality in a user trial, and to inform new architectures that would further increase the accuracy and transparency of models for summarizing biomedical evidence.
Limitations and Ethical Issues
Limitations. This work has several limitations. First, as stated above, while the prospect of automatic summarization of biomedical evidence is tantalizing, existing models are not yet fit for the task due to their tendency to introduce factual errors. Our working prototype serves in part to highlight this and to motivate work toward resolving issues of reliability and trustworthiness.
In this demo paper we have also attempted to make some progress in mitigating such issues by way of the proposed structured summarization model and accompanying interface, and provided qualitative examples highlighting its potential; however, a formal user study should be conducted to assess its utility. Such a study is complicated by the difficulty of the task: evaluating the factuality of automatic summaries requires deep domain expertise and considerable time to read through the constituent inputs and determine the veracity of a generated summary.
Another limitation of this work is that we have made some ad-hoc design decisions in our current prototype system. For example, at present we (arbitrarily) pass only the top-5 articles retrieved for a given query (based on trial sample size and estimated reliability) through the summarization system. Future work might address this by considering better-motivated methods to select which, and how many, studies ought to be included.
Ethics. Accurate summaries of the biomedical evidence have the potential to ultimately improve patient care by supporting the practice of evidence-based medicine. However, at present such models bring inherent risks. In particular, one may be tempted to blindly trust model outputs; given the limitations of current summarization technologies, this would be ill-advised. Our prototype demonstration system is designed in part to highlight existing challenges that must be solved in this space before any model might actually be adopted (and beyond this, we emphasize the need for verification of outputs, which has been the focus of the present effort). In the interface we indicate with a hard-to-miss warning message that this system should only be used for research purposes and that these summaries are unreliable and not to be trusted.

A Automatic Evaluation

We report ROUGE scores with respect to the target (manually composed) Cochrane summaries, for both the development and test sets. We report scores for the standard BART model along with our proposed multi-headed model intended to aid verifiability and controllability. The models perform about comparably with respect to this metric, as can be seen in Table 1. However, ROUGE measures are based on (exact) n-gram overlap and cannot measure the factuality of generated texts. Measuring factuality is in general an open problem, and evaluating the factual accuracy of biomedical reviews in particular is further complicated by the complexity of the domain and texts. Prior work has, however, proposed automated measures for this specific task (Wallace et al., 2021; DeYoung et al., 2021). These metrics are based on models which infer the reported directionality of the findings, e.g., whether or not a summary indicates that the treatment being described was effective. More specifically, we make binary predictions regarding whether generated and reference summaries report significant results (or not) and then calculate the F1 score of the former with respect to the latter.
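A sketch of this directionality metric follows; `predict_significant` stands in for an assumed pretrained significance classifier (e.g., in the spirit of the evidence-inference models cited above).

```python
from sklearn.metrics import f1_score

def directionality_f1(generated, references, predict_significant):
    """F1 of generated-summary direction labels against reference labels.

    predict_significant: callable str -> {0, 1}; an assumed classifier that
    decides whether a summary reports a significant effect.
    """
    y_pred = [predict_significant(s) for s in generated]
    y_true = [predict_significant(s) for s in references]
    return f1_score(y_true, y_pred)
```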
B Additional Case Studies
In this section we highlight a few more use cases that demonstrate the need for interpretability and controllability.
Interpretability. We first highlight a set of examples where verifying model-generated summaries is difficult without an interface explicitly designed to provide interpretability. In Figure 7(a) we show an example where the model accurately synthesizes the effect of using Mirtazapine for patients with depression. However, the summary also includes a statement asserting the need for adequate, well-designed trials. Because this statement is generic and does not discuss any of the PICO elements, it is unclear which element was responsible for its generation. A user would therefore need to review all (raw) input texts.
In the case of Figure 7(b), the model-generated summary has two contradictory sentences: the first indicates a reduction in hospital admissions and deaths among COVID-19 patients when Ivermectin was used, while the second claims there is insufficient evidence for the same. Without interpretability capabilities it is not possible to debug the output and verify whether the same set of elements was responsible for the contradictory statements.
The example in Figure 7(c) shows a case where the model first accurately synthesizes study findings on the effect of glucosamine in combination with chondroitin sulfate on knee pain. However, the following statement discusses the relative effects of the two. Again, it is not intuitive which element led to the generation of this statement, and verification requires careful review of all the text across all elements.
Controllability. We next highlight examples where one can use the template in-filling capabilities afforded by our model to correct summaries that would otherwise be inaccurate. While the interpretability features may permit efficient verification, models still struggle to consistently generate factually accurate summaries. We showcase instances where one can arrive at more accurate summaries quickly via this controllability.
In the example shown in Figure 8, the default summary synthesizes the effect accurately. However, the model summary discusses short-term and long-term benefits, generated from the punchlines of the studies. Reading through the extracted punchlines, we find that the studies indicate issues upon withdrawal but do not necessarily provide information on long-term use of the medication. In-filling templates constrains the output and can be used to produce more accurate summaries while still taking some advantage of the flexibility afforded by generation. For instance, in this case we can see that the edited summary induced using the template is more accurate.
Similarly, in Figure 9, when the multi-headed model is queried for the effect of Glucosamine on osteoarthritis of the knee, we observe that the model on its own produces a summary conveying an incorrect aggregate effect of the studies. We can verify this by inspecting the elements responsible for the generation, as discussed above. We then arrive at a more accurate summary using the template shown.
The example in Figure 10 is an interesting mistake made by the model. Because outcomes can present the same information in either a positive or a negative direction (e.g., weight loss vs. weight gain), the model has to accurately infer the effect direction of all studies. In this case, the model generates a summary with the right effect but frames weight loss as an undesirable effect. Here again we select a template and let the model in-fill it, yielding a more accurate summary.

Figure 9: The summary on top shows the default summary generated by the multi-headed model when queried for the effect of Glucosamine on osteoarthritis of the knee. The bottom summary shows the edited summary using a pre-defined template.

Figure 10: The summary on top shows the default summary generated by the multi-headed model when queried for the effect of Semaglutide on obese patients. The bottom summary shows the edited summary using a pre-defined template.
Figure 2: Our proposed structured summarization approach entails synthesizing individual aspects (automatically extracted in a pre-processing step), and conditionally generating text about each of these.
Figure 4: Example output and interface using a standard BART (Lewis et al., 2019) model.
Figure 5: Qualitative example where the structured summarization model (and associated interface) permits token-level verification of the summary generated regarding the use of oseltamivir on influenza-infected patients. This approach readily indicates support for the claim that it is "effective" (top; yellow) and for the description of the population as individuals at risk of "complications" (bottom; purple).
Figure 6: Inaccurate summary generated by the structured model regarding the effect of Chloroquine on patients with COVID-19 (top). Template-controlled summary using the structured model (bottom).
Figure 7: a) BART-generated summary when queried about the use of Mirtazapine to treat depression; b) BART-generated summary when queried about the use of Ivermectin to treat COVID-19.
Figure 8: The summary on top shows the default summary generated by the multi-headed model when queried for the effect of Mirtazapine on depression. The bottom summary shows the controlled summary using a pre-defined template.
[Figure 3 diagram: the template "The results support the use of <intervention> as an effective treatment..." is in-filled (e.g., "The results support the use of dietary sodium restriction in patients with..."); generation is forced from the intervention decoder over the shared encoder, and the monitored aspect mixture distribution (Pop., Int., Out., Punch.) signals when the in-filled span ends.]
Prafulla Kumar Choubey, Jesse Vig, Wenhao Liu, and Nazneen Fatema Rajani. 2021. MoFE: Mixture of factual experts for controlling hallucinations in abstractive summarization. arXiv preprint arXiv:2110.07166.
Dina Demner-Fushman and Jimmy Lin. 2006. Answer extraction, semantic clustering, and extractive summarization for clinical question answering. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 841-848.
Dina Demner-Fushman, Willie J Rogers, and Alan R Aronson. 2017. MetaMap Lite: an evaluation of a new Java implementation of MetaMap. Journal of the American Medical Informatics Association, 24(4):841-844.
Jay DeYoung, Iz Beltagy, Madeleine van Zuylen, Bailey Kuehl, and Lucy Lu Wang. 2021. MS2: Multi-document summarization of medical studies. arXiv preprint arXiv:2104.06486.
Yue Dong, Shuohang Wang, Zhe Gan, Yu Cheng, Jackie Chi Kit Cheung, and Jingjing Liu. 2020. Multi-fact correction in abstractive text summarization. arXiv preprint arXiv:2010.02443.
Tanya Goyal and Greg Durrett. 2021. Annotating and modeling fine-grained factuality in summarization. arXiv preprint arXiv:2104.04302.
Tanya Goyal, Nazneen Fatema Rajani, Wenhao Liu, and Wojciech Kryściński. 2021. HydraSum: Disentangling stylistic features in text summarization using multi-decoder models. arXiv preprint arXiv:2110.04400.
Julian PT Higgins, Jelena Savović, Matthew J Page, Roy G Elbers, and Jonathan AC Sterne. 2019. Assessing risk of bias in a randomized trial. Cochrane handbook for systematic reviews of interventions, pages 205-228.
Xinyu Hua and Lu Wang. 2020. PAIR: Planning and iterative refinement in pre-trained transformers for long text generation. arXiv preprint arXiv:2010.02301.
Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Wojciech Kryscinski, Bryan McCann, Caiming Xiong, and Richard Socher. 2020. Evaluating the factual consistency of abstractive text summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9332-9346, Online. Association for Computational Linguistics.
Faisal Ladhak, Esin Durmus, He He, Claire Cardie, and Kathleen McKeown. 2021. Faithful or extractive? On mitigating the faithfulness-abstractiveness trade-off in abstractive summarization. arXiv preprint arXiv:2108.13684.
Eric Lehman, Jay DeYoung, Regina Barzilay, and Byron C Wallace. 2019. Inferring which medical treatments work from reports of clinical trials. arXiv preprint arXiv:1904.01606.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.
Zhenghao Liu, Chenyan Xiong, Zhuyun Dai, Si Sun, Maosong Sun, and Zhiyuan Liu. 2020. Adapting open domain fact extraction and verification to COVID-FACT through in-domain language modeling. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 2395-2400.
Iain J Marshall, Anna Noel-Storr, Joël Kuiper, James Thomas, and Byron C Wallace. 2018. Machine learning for identifying randomized controlled trials: an evaluation and practitioner's guide. Research synthesis methods, 9(4):602-614.
Iain J Marshall, Benjamin Nye, Joël Kuiper, Anna Noel-Storr, Rachel Marshall, Rory Maclean, Frank Soboczenski, Ani Nenkova, James Thomas, and Byron C Wallace. 2020. Trialstreamer: A living, automatically updated database of clinical trial reports. Journal of the American Medical Informatics Association, 27(12):1903-1912.
Iain James Marshall, Veline L'Esperance, Rachel Marshall, James Thomas, Anna Noel-Storr, Frank Soboczenski, Benjamin Nye, Ani Nenkova, and Byron C Wallace. 2021. State of the evidence: a survey of global disparities in clinical trials. BMJ Global Health, 6(1):e004145.
Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan McDonald. 2020. On faithfulness and factuality in abstractive summarization. arXiv preprint arXiv:2005.00661.
Abhijit Mishra, Md Faisal Mahbub Chowdhury, Sagar Manohar, Dan Gutfreund, and Karthik Sankaranarayanan. 2020. Template controllable keywords-to-text generation. arXiv preprint arXiv:2011.03722.
Diego Mollá. 2010. A corpus for evidence based medicine summarisation. In Proceedings of the Australasian Language Technology Association Workshop, pages 76-80.
Benjamin E Nye, Ani Nenkova, Iain J Marshall, and Byron C Wallace. 2020. Trialstreamer: mapping and browsing medical evidence in real-time. In Proceedings of the conference. Association for Computational Linguistics. North American Chapter. Meeting, volume 2020, page 63. NIH Public Access.
Yulia Otmakhova, Karin Verspoor, Timothy Baldwin, and Jey Han Lau. 2022. The patient is more dead than alive: exploring the current state of the multi-document summarisation of the biomedical literature. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5098-5111, Dublin, Ireland. Association for Computational Linguistics.
Artidoro Pagnoni, Vidhisha Balachandran, and Yulia Tsvetkov. 2021. Understanding factuality in abstractive summarization with FRANK: A benchmark for factuality metrics. arXiv preprint arXiv:2104.13346.
Laura Plaza and Jorge Carrillo-de Albornoz. 2013. Evaluating the use of different positional strategies for sentence selection in biomedical literature summarization. BMC bioinformatics, 14(1):1-11.
Abeed Sarker, Diego Molla, and Cecile Paris. 2017. Automated text summarisation and evidence-based medicine: A survey of two domains.
Hendrik Strobelt, Sebastian Gehrmann, Michael Behrisch, Adam Perer, Hanspeter Pfister, and Alexander M Rush. 2018. Seq2Seq-Vis: A visual debugging tool for sequence-to-sequence models. IEEE transactions on visualization and computer graphics, 25(1):353-363.
Ian Tenney, James Wexler, Jasmijn Bastings, Tolga Bolukbasi, Andy Coenen, Sebastian Gehrmann, Ellen Jiang, Mahima Pushkarna, Carey Radebaugh, Emily Reif, et al. 2020. The Language Interpretability Tool: Extensible, interactive visualizations and analysis for NLP models. arXiv preprint arXiv:2008.05122.
Table 1: ROUGE scores achieved by the standard BART model and our proposed multi-headed architecture on the dev and test sets.

Model      | ROUGE-L (dev) | ROUGE-L (test)
BART       | 20.4          | 19.7
Multi-head | 19.9          | 19.3

Table 2: Directionality scores for the standard BART model and our proposed multi-headed architecture on the dev and test sets.

Model      | Direc (dev) | Direc (test)
BART       | 49.6        | 51.8
Multi-head | 49.3        | 52.7
² MeSH, short for Medical Subject Headings, is a controlled vocabulary maintained by the National Library of Medicine (NLM).
Acknowledgements

This work was supported in part by the National Institutes of Health (NIH) under award R01LM012086, and by the National Science Foundation (NSF) awards 1901117 and 2211954. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH or the NSF.
H Bastian, P Glasziou, and I Chalmers. 2010. Seventy-five trials and eleven systematic reviews a day: how will we ever keep up? PLoS medicine, 7(9).
Iz Beltagy, Matthew E Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150.
Meng Cao, Yue Dong, Jiapeng Wu, and Jackie Chi Kit Cheung. 2020. Factual error correction for abstractive summarization models. arXiv preprint arXiv:2010.08712.
Sihao Chen, Fan Zhang, Kazoo Sone, and Dan Roth. 2021. Improving faithfulness in abstractive summarization with contrast candidate generation and selection. arXiv preprint arXiv:2104.09061.
Jesse Vig, Wojciech Kryściński, Karan Goel, and Nazneen Fatema Rajani. 2021. SummVis: Interactive visual analysis of models, data, and evaluation for text summarization. arXiv preprint arXiv:2104.07605.
Byron C Wallace, Sayantan Saha, Frank Soboczenski, and Iain J Marshall. 2021. Generating (factual?) narrative summaries of RCTs: Experiments with neural multi-document summarization. In AMIA Annual Symposium Proceedings, volume 2021, page 605. American Medical Informatics Association.
Lucy Lu Wang, Jay DeYoung, and Byron Wallace. 2022. Overview of MSLR2022: A shared task on multi-document summarization for literature reviews. In Proceedings of the Third Workshop on Scholarly Document Processing, pages 175-180, Gyeongju, Republic of Korea. Association for Computational Linguistics.
Sam Wiseman, Stuart M Shieber, and Alexander M Rush. 2018. Learning neural templates for text generation. arXiv preprint arXiv:1808.10122.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 conference on empirical methods in natural language processing: system demonstrations, pages 38-45.
Yuexiang Xie, Fei Sun, Yang Deng, Yaliang Li, and Bolin Ding. 2021. Factual consistency evaluation for text summarization via counterfactual estimation. arXiv preprint arXiv:2108.13134.
Sen Zhang, Jianwei Niu, and Chuyuan Wei. 2021. Fine-grained factual consistency assessment for abstractive summarization models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 107-116.
| [] |
[
"THRESHOLD DYNAMICS OF SAIRS EPIDEMIC MODEL WITH SEMI-MARKOV SWITCHING",
"THRESHOLD DYNAMICS OF SAIRS EPIDEMIC MODEL WITH SEMI-MARKOV SWITCHING",
"THRESHOLD DYNAMICS OF SAIRS EPIDEMIC MODEL WITH SEMI-MARKOV SWITCHING",
"THRESHOLD DYNAMICS OF SAIRS EPIDEMIC MODEL WITH SEMI-MARKOV SWITCHING"
] | [
"Stefania Ottaviano ",
"Stefania Ottaviano "
] | We study the threshold dynamics of a stochastic SAIRS-type model with vaccination, where the role of asymptomatic and symptomatic infectious individuals is explicitly considered in the epidemic dynamics. In the model, the values of the disease transmission rate may switch between different levels under the effect of a semi-Markov process. We provide sufficient conditions ensuring almost sure epidemic extinction and persistence in time mean. In the case of disease persistence, we investigate the omega-limit set of the system and give sufficient conditions for the existence and uniqueness of an invariant probability measure. | 10.1002/mma.9122 | [
"https://export.arxiv.org/pdf/2110.15435v2.pdf"
] | 240,288,521 | 2110.15435 | 96ea3da60f2647496046ef61abeb67d62bdad7e0 |
THRESHOLD DYNAMICS OF SAIRS EPIDEMIC MODEL WITH SEMI-MARKOV SWITCHING
Stefania Ottaviano
THRESHOLD DYNAMICS OF SAIRS EPIDEMIC MODEL WITH SEMI-MARKOV SWITCHING
arXiv:2110.15435v2 [math.PR] 6 Dec 2021
We study the threshold dynamics of a stochastic SAIRS-type model with vaccination, where the role of asymptomatic and symptomatic infectious individuals is explicitly considered in the epidemic dynamics. In the model, the values of the disease transmission rate may switch between different levels under the effect of a semi-Markov process. We provide sufficient conditions ensuring almost sure epidemic extinction and persistence in time mean. In the case of disease persistence, we investigate the omega-limit set of the system and give sufficient conditions for the existence and uniqueness of an invariant probability measure.
Introduction
Starting with the research of Kermack and McKendrick [13], in the last century a huge number of mathematical epidemic models has been formulated, analysed and applied to a variety of infectious diseases, especially during the recent Covid-19 pandemic.
Once an infectious disease has developed, the main goal is to contain its spread. Several control strategies may be applied, such as detection and isolation of infectious individuals, lockdowns, or vaccination. However, the detection of infectious individuals is far from easy, since they may not show symptoms. The presence of asymptomatic cases allows a wide circulation of a virus in the population, since they often remain unidentified and presumably have more contacts than symptomatic cases. The contribution of the so-called "silent spreaders" to the infection transmission dynamics is relevant for various communicable diseases, such as Covid-19, influenza, cholera and shigella [12,27,19,23,1,22,20]; hence, asymptomatic cases should be considered in such mathematical epidemic models.
The containment of the disease via multiple lockdowns or isolation processes affects the transmission of the disease through the population. Moreover, in real biological systems, some parameters of the model are usually influenced by random switching of the external environmental regime. For example, the disease transmission rate in some epidemic models is influenced by random meteorological factors linked to the survival of many bacteria and viruses [24,25]. Thus, the transmission rate, both as the ability of an infectious individual to transmit infection and as an expression of the contact rate between individuals, can be subject to random fluctuations. Hence, the choice of fixed deterministic parameters in models is unlikely to be realistic. In epidemiology, many authors have considered random switching systems (also called hybrid systems), whose distinctive feature is the coexistence of continuous dynamics and discrete events (random jumps at points in time). In particular, many works consider regime switching of external environments following a homogeneous continuous-time Markov chain [7,8,14,9,28,21]. The Markov property facilitates the mathematical analysis, although it can be a limitation, as the sojourn time in each environment is exponentially distributed, which yields constant transition rates between different regimes. However, in reality, the transition rates are usually time-varying; hence, in each environmental state the conditional holding time distribution may be non-exponential. For example, as shown in [24,25] and reported in [15], the dry spell (consisting of consecutive days with daily rain amount below some given threshold) length distribution is better modeled by a Pearson type III, gamma, or Weibull distribution. In this work, in order to include random influences on the transmission parameters and overcome the drawback of the Markov setting, we use a semi-Markov process to describe environmental random changes. Semi-Markov switching systems are an emerging topic from both theoretical and practical viewpoints, capable of capturing inherent uncertainty and randomness in the environment in many applied fields, ranging from epidemiology to DNA analysis, financial engineering, and wireless communications [30]. Compared to the more common Markov switching systems, they characterize a broader range of phenomena but bring more difficulties to their stability analysis and control. Recently, a semi-Markov switching model has been used to analyze the coexistence and competitiveness of species in ecosystems [16]. In epidemiology, to the best of our knowledge, there are only very few semi-Markov switching models [15,29,4], and none of these considers the role of asymptomatic individuals in the disease dynamics. Thus, in this paper, we want to fill this gap and improve our understanding of these types of hybrid systems.
Precisely, we study a SAIRS-type model with vaccination, where the total population N is partitioned into four compartments, namely S, A, I, R, which represent the fractions of Susceptible, Asymptomatic infected, symptomatic Infected and Recovered individuals, respectively, such that $N = S + A + I + R$. The infection can be transmitted to a susceptible through a contact with either an asymptomatic infectious individual, at rate $\beta_A$, or a symptomatic individual, at rate $\beta_I$. Once infected, all susceptible individuals enter an asymptomatic state, indicating a delay between infection and symptom onset, if symptoms occur. Indeed, we include in the asymptomatic class both individuals who will never develop symptoms and pre-symptomatic individuals who will eventually become symptomatic. From the asymptomatic compartment, an individual can either progress to the class of symptomatic infectious I, at rate $\alpha$, or recover without ever developing symptoms, at rate $\delta_A$. An infected individual with symptoms can recover at rate $\delta_I$. We assume that recovered individuals do not obtain long-lasting immunity and can return to the susceptible state after an average time $1/\gamma$. We also assume that a proportion $\nu$ of susceptible individuals receive a dose of vaccine which grants them temporary immunity. We do not add a compartment for vaccinated individuals, not distinguishing vaccine-induced immunity from the natural immunity acquired after recovery from the virus. We consider the vital dynamics of the entire population and, for simplicity, we assume that the birth and death rates are the same, equal to $\mu$; we do not distinguish between natural deaths and disease-related deaths [20]. Moreover, we assume that the environmental regimes (or states) influence the infection transmission rates $\beta_A$ and $\beta_I$, and that these may switch under the action of a semi-Markov process. Accordingly, the values of $\beta_A$ and $\beta_I$ switch between different levels depending on the state in which the process is. The paper is organized as follows. In Section 2, we provide some basic concepts of semi-Markov processes and introduce the SAIRS model under study. We show the existence of a unique global positive solution, and find a positive invariant set for the system. In Section 3, we investigate the threshold dynamics of the model. Precisely, we first consider the case in which $\beta_A(r) = \beta_I(r) =: \beta(r)$ and $\delta_A(r) = \delta_I(r) =: \delta(r)$, and find the basic reproduction number $R_0$ for our stochastic epidemic model driven by the semi-Markov process. We show that $R_0$ is a threshold value, meaning that its position with respect to one determines almost sure disease extinction or persistence in time mean. Then, we investigate the case $\beta_A(r) \neq \beta_I(r)$ or $\delta_A(r) \neq \delta_I(r)$. First, we find two different sufficient conditions for almost sure extinction, which are interchangeable, meaning that it is sufficient that one of the two is verified to ensure extinction. Afterwards, we find a sufficient condition for the almost sure persistence in time mean of the system. Thus, we have two non-adjacent regions, depending on the model parameters, one where the system goes to extinction almost surely, and the other where it is persistent. In Section 5, as well as in Section 6, for simplicity, we restrict the analysis to the case of a semi-Markov process with two states. Under the disease persistence condition, we investigate the omega-limit set of the system.
The introduction of the backward recurrence time process, which keeps track of the time elapsed since the latest switch, makes the considered stochastic system a piecewise deterministic Markov process [15]. Thus, in Section 6, we prove the existence of a unique invariant probability measure by utilizing an exclusion principle in [26] and the property of positive Harris recurrence. Finally, in Section 7, we validate our analytical results via numerical simulations and show the relevant role of the mean sojourn time in each environmental regime in the extinction or persistence of the disease.
2. Model description and basic concepts
2.1. The Semi-Markov process. Let $\{r(t),\, t \ge 0\}$ be a semi-Markov process taking values in the state space $\mathcal{M} = \{1, \ldots, M\}$, whose elements denote the states of the external environments influencing the values of the transmission rates of the model. Let $0 = \tau_0 < \tau_1 < \ldots < \tau_n < \ldots$ be the jump times, and $\sigma_1 = \tau_1 - \tau_0$, $\sigma_2 = \tau_2 - \tau_1, \ldots, \sigma_n = \tau_n - \tau_{n-1}, \ldots$ be the time intervals between two consecutive jumps. Let $(p_{i,j})_{M \times M}$ denote the transition probability matrix and $F_i(t)$, $t \in [0, \infty)$, $i = 1, \ldots, M$, the conditional holding time distributions of the semi-Markov process; then
$$\mathbb{P}\big(r(\tau_{n+1}) = j,\ \sigma_{n+1} \le t \,\big|\, r(\tau_n) = i\big) = p_{i,j}\, F_i(t),$$
and the embedded chain $\{X_n := r(\tau_n),\, n = 0, 1, \ldots\}$ of $\{r(t),\, t \ge 0\}$ is Markov with one-step transition probabilities $(p_{i,j})_{M \times M}$. Moreover, let $f_i(t)$ denote the density function of the conditional holding time distribution $F_i(t)$; then for any $t \in [0, \infty)$ and $i, j \in \mathcal{M}$, we define
q i,j ptq :" p i,j f i ptq 1´F i ptq ě 0 @i ‰ j, q i,i ptq :"´ÿ jPM,j‰i q i,j ptq @i P M.
We adopt the same assumptions as in [15], which will be valid throughout the paper.
Assumptions (H1):
(i) The transition matrix $(p_{i,j})_{M \times M}$ is irreducible with $p_{i,i} = 0$, $i \in \mathcal{M}$;
(ii) for each $i \in \mathcal{M}$, $F_i(\cdot)$ has a continuous and bounded density $f_i(\cdot)$, and $f_i(t) > 0$ for all $t \in (0, \infty)$;
(iii) for each $i \in \mathcal{M}$, there exists a constant $\varepsilon_i > 0$ such that
$$\frac{f_i(t)}{1 - F_i(t)} \ge \varepsilon_i$$
for all $t \in [0, \infty)$.
In [15], the authors provide a list of probability distributions satisfying assumption (H1), and show that the constraints in (H1) are very weak. Specifically, they consider the phase-type distribution (PH-distribution) of a nonnegative random variable, and prove that this PH-distribution, or at least an approximation of it, satisfies the conditions in (H1). Thus, they conclude that essentially the conditional holding time distribution of the semi-Markov process can be any distribution on $[0, \infty)$.
Remark 1. Let us note that in the case of exponential (memoryless) sojourn time distributions, the semi-Markov process $\{r(t),\, t \ge 0\}$ degenerates into a continuous-time Markov chain. That is, if $F_i(t) = 1 - e^{-q_i t}$ for some $q_i > 0$, $i \in \mathcal{M}$, then
$$\frac{f_i(t)}{1 - F_i(t)} = q_i$$
for all $t \in [0, \infty)$, from which
$$q_{i,j} := q_{i,j}(t) = \begin{cases} q_i\, p_{i,j}, & \text{if } i \ne j, \\ -q_i, & \text{if } i = j, \end{cases}$$
where $q_{i,j}$ is the transition rate from state $i$ to state $j$, and $q_{i,j} \ge 0$ if $i \ne j$, while $-q_i = q_{i,i} = -\sum_{j \ne i} q_{i,j}$.
Thus, the matrix $Q = (q_{i,j})_{M \times M}$ generates the Markov chain $\{r(t),\, t \ge 0\}$, i.e.,
$$\mathbb{P}\{r(t + \Delta t) = j \,|\, r(t) = i\} = \begin{cases} q_{i,j}\,\Delta t + o(\Delta t), & \text{if } i \ne j, \\ 1 + q_{i,j}\,\Delta t + o(\Delta t), & \text{if } i = j, \end{cases}$$
where $\Delta t > 0$ represents a small time increment. From the assumptions (H1) it follows that the matrix $Q$ is irreducible. Under this condition, the Markov chain has a unique stationary positive probability distribution $\pi = (\pi_1, \ldots, \pi_M)^T$, which can be determined by solving the linear equation $\pi^T Q = 0$, subject to $\sum_{r=1}^{M} \pi_r = 1$ and $\pi_r > 0$ for all $r \in \mathcal{M}$. Let us introduce the process
$$\eta(t) = t - \sup\{u < t : r(u) \ne r(t)\},$$
which represents the amount of time the process $\{r(t),\, t \ge 0\}$ has spent in the current state since the last jump; it is also called the backward recurrence time process. The pair $\{(\eta(t), r(t)),\, t \ge 0\}$ satisfies the Markov property [17]; moreover, it is strong Markov [10, Chapter 6].
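To make this mechanism concrete, the following sketch (our own illustration, not code from the paper) simulates a trajectory of $r(t)$: the embedded chain is advanced with the transition matrix $(p_{i,j})$, and the sojourn in each state is drawn from a gamma conditional holding distribution, as used later in Section 7. All numerical values are placeholders.

```python
# Sketch (not from the paper): sampling a trajectory of the semi-Markov
# process r(t). The embedded chain moves with transition matrix (p_ij);
# the sojourn in state i is drawn from a gamma holding law F_i
# (shape k_i, rate theta_i, hence mean k_i / theta_i).
import numpy as np

rng = np.random.default_rng(0)

def simulate_semi_markov(P, shapes, rates, r0, t_end):
    """Return jump times tau_n and visited states r(tau_n) up to t_end."""
    times, states = [0.0], [r0]
    t, r = 0.0, r0
    while t < t_end:
        t += rng.gamma(shapes[r], 1.0 / rates[r])   # numpy uses scale = 1/rate
        r = int(rng.choice(len(P), p=P[r]))          # embedded-chain step
        times.append(t)
        states.append(r)
    return np.array(times), np.array(states)

# two environments with p_12 = p_21 = 1, as in the two-state setting
P = np.array([[0.0, 1.0], [1.0, 0.0]])
times, states = simulate_semi_markov(P, shapes=[6, 12], rates=[0.8, 0.8],
                                     r0=0, t_end=200.0)
print(times[:5], states[:5])
```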
2.2. Model description.
Let us consider a SAIRS model with vaccination, as in [20].
dSptq dt " µ´ˆβ A Aptq`β I Iptq˙Sptq´pµ`νqSptq`γRptq, dAptq dt "ˆβ A Aptq`β I Iptq˙Sptq´pα`δ A`µ qAptq, dIptq dt " αAptq´pδ I`µ qIptq, dRptq dt " δ A Aptq`δ I Iptq`νSptq´pγ`µqRptq.(1)
Let us now incorporate the impact of the external random environments into system (1). We assume that the external random environment is described by a semi-Markov process. We only consider the environmental influence on the disease transmission rates, since they may be more sensitive to environmental fluctuations than the other parameters of model (1). Thus, the average value of the transmission rates may switch between different levels with the switching of the environmental regimes. As a result, the deterministic SAIRS model (1) becomes a random dynamical system with semi-Markov switching of the form
dSptq dt " µ´ˆβ A prptqqAptq`β I prptqqIptq˙Sptq´pµ`νqSptq`γRptq, dAptq dt "ˆβ A prptqqAptq`β I prptqqIptq˙Sptq´pα`δ A`µ qAptq, dIptq dt " αAptq´pδ I`µ qIptq, dRptq dt " δ A Aptq`δ I Iptq`νSptq´pγ`µqRptq,(2)
Let us introduceβ :" pβ A , β I q. If the initial conditions of the driving process tpηptq, rptqq, t ě 0u are ηp0q " 0 and rp0q " r 0 , then system (2) starts from the initial condition pSp0q, Ap0q, Ip0q, Rp0qq and follows (1) withβ "βpr 0 q until the first jump time τ 1 , with conditional holding distribution F r0 p¨q. Then, the environmental regime switches instantaneously from state r 0 to state r 1 ; thus, the process restarts from the state r 1 and the system evolves accordingly to (1) withβ "βpr 1 q and distribution F r1 p¨q until the next jump time τ 2 . The system will evolve in the similar way as long as the semi-Markov process jumps. This yields a continuous and piecewise smooth trajectory in R 4 . Let us note that the solution process txptq " pSptq, Aptq, Iptq, Rptqq, t ě 0u that records the position of the switching trajectory of (2) is not Markov. However, by means of additional components, tpxptq, ηptq, rptqq, t ě 0u is a homogeneous Markov process.
In this paper, unless otherwise specified, let $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t \ge 0}, \mathbb{P})$ be a complete probability space with a filtration $\{\mathcal{F}_t\}_{t \ge 0}$ satisfying the usual conditions (i.e., it is right continuous and $\mathcal{F}_0$ contains all $\mathbb{P}$-null sets).
Since $S + A + I + R = 1$, system (2) is equivalent to the following three-dimensional dynamical system:
$$\begin{aligned}
\frac{dS(t)}{dt} &= \mu - \big(\beta_A(r(t)) A(t) + \beta_I(r(t)) I(t)\big) S(t) - (\mu + \nu + \gamma) S(t) + \gamma\big(1 - A(t) - I(t)\big), \\
\frac{dA(t)}{dt} &= \big(\beta_A(r(t)) A(t) + \beta_I(r(t)) I(t)\big) S(t) - (\alpha + \delta_A + \mu) A(t), \\
\frac{dI(t)}{dt} &= \alpha A(t) - (\delta_I + \mu) I(t),
\end{aligned} \tag{3}$$
with initial condition $(S(0), A(0), I(0))$ belonging to the set
$$\Gamma = \{(S, A, I) \in \mathbb{R}^3_+ \,|\, S + A + I \le 1\},$$
where $\mathbb{R}^3_+$ is the non-negative orthant of $\mathbb{R}^3$, and initial state $r(0) \in \mathcal{M}$.
System (3) can be written in matrix notation as
$$\frac{dx(t)}{dt} = g(x(t), r(t)), \tag{4}$$
where $x(t) = (S(t), A(t), I(t))$ and $g = (g_1, g_2, g_3)$ is defined according to (3). In the following, for any initial value $x(0)$, we denote by $x(t, \omega, x(0)) = (S(t, \omega, x(0)), A(t, \omega, x(0)), I(t, \omega, x(0)))$ the solution of (4) at time $t$ starting from $x(0)$, or simply by $x(t)$ if there is no ambiguity, and by $x_r(t)$ the solution of the subsystem $r$.
In the following theorem we ensure that the solution does not explode in any finite time, by proving a somewhat stronger property, namely that $\Gamma$ is a positively invariant domain for (3). Theorem 1. For any initial value $(x(0), \eta(0), r(0)) \in \Gamma \times \mathbb{R}_+ \times \mathcal{M}$, and for any choice of the system parameters $\beta_A(\cdot)$, $\beta_I(\cdot)$, there exists a unique solution $x(t, \omega, x(0))$ to system (3) on $t \ge 0$. Moreover, for every $\omega \in \Omega$ the solution remains in $\Gamma$ for all $t > 0$.
Proof. Let 0 " τ 0 ă τ 1 ă τ 2 ă . . . , ă τ n ă . . . be the jump times of the semi-Markov chain rptq, and let rp0q " r 0 P M be the starting state. Thus, rptq " r 0 on rτ 0 , τ 1 q. The subsystem for t P rτ 0 , τ 1 q has the following form:
dxptq dt " gpxptq, r 0 q,
and, for [20, Thm 1], its solution xptq P Γ, for t P rτ 0 , τ 1 q and, by continuity for t " τ 1 , as well. Thus, xpτ 1 q P Γ and by considering rpτ 1 q " r 1 , the subsystem for t P rτ 1 , τ 2 q becomes dxptq dt " gpxptq, r 1 q.
Again, xptq P Γ, on t P rτ 1 , τ 2 q and, by continuity for t " τ 2 , as well. Repeating this process continuously, we obtain the claim.
As the switching concerns only the infection rates $\beta_A$ and $\beta_I$, all the subsystems of (3) share the same disease-free equilibrium (DFE)
$$x_0 = (S^0, A^0, I^0) = \left(\frac{\mu + \gamma}{\mu + \nu + \gamma},\, 0,\, 0\right).$$
Now, we report results related to the stability analysis of each deterministic subsystem of (3) corresponding to a state $r$, $r = 1, \ldots, M$. The proofs of the following results can be found in [20], where the global stability of the deterministic model (1) is investigated.
Lemma 2. The basic reproduction number $\mathcal{R}_0$ of the subsystem of (3) corresponding to the state $r$ is given by
$$\mathcal{R}_0(r) = \left(\beta_A(r) + \frac{\alpha\, \beta_I(r)}{\delta_I + \mu}\right) \frac{\gamma + \mu}{(\alpha + \delta_A + \mu)(\nu + \gamma + \mu)}, \quad r = 1, \ldots, M. \tag{5}$$
Now, let us define
$$(F - V)(r) = \begin{pmatrix} \beta_A(r) S^0 - (\alpha + \delta_A + \mu) & \beta_I(r) S^0 \\ \alpha & -(\delta_I + \mu) \end{pmatrix}, \tag{6}$$
where $F$ and $V$ are the matrices in equations (6) and (7) defined in [20].
Lemma 3. Let us fix $r \in \mathcal{M}$. The matrix $(F - V)(r)$ related to the subsystem $r$ of (3) has a real spectrum. Moreover, if $\rho(F V^{-1}(r)) < 1$, all the eigenvalues of $(F - V)(r)$ are negative.
Theorem 4. Let us fix $r \in \mathcal{M}$. The disease-free equilibrium $x_0$ is globally asymptotically stable for the subsystem $r$ of (3) if $\mathcal{R}_0(r) < 1$.
Lemma 5. Let us fix $r \in \mathcal{M}$. The endemic equilibrium $x_r^* = (S^*(r), A^*(r), I^*(r))$ exists and is unique in $\Gamma$ for the subsystem $r$ of (3) if $\mathcal{R}_0(r) > 1$. Moreover, $x_r^*$ is locally asymptotically stable in $\mathring{\Gamma}$.
Theorem 6. Let us fix $r \in \mathcal{M}$ and assume that $\beta_A(r) = \beta_I(r) =: \beta(r)$ and $\delta_A = \delta_I =: \delta$. The endemic equilibrium $x_r^* = (S^*(r), A^*(r), I^*(r))$ is globally asymptotically stable in $\mathring{\Gamma}$ for the subsystem $r$ of (3) if $\mathcal{R}_0(r) > 1$.
Theorem 7. Let us fix $r \in \mathcal{M}$, and consider $\beta_A(r) \ne \beta_I(r)$ or $\delta_A \ne \delta_I$. Assume that $\mathcal{R}_0(r) > 1$ and $\beta_A(r) < \delta_I$. Then, the endemic equilibrium $x_r^*$ is globally asymptotically stable in $\mathring{\Gamma}$ for the subsystem $r$ of (2).
3. Threshold dynamics of the model
By the assumptions (H1), the embedded Markov chain $\{X_n,\, n \in \mathbb{N}\}$ associated to the semi-Markov process $\{r(t),\, t \ge 0\}$ has a unique stationary positive probability distribution $\pi = (\pi_1, \ldots, \pi_M)$. Let
$$m_i = \int_0^{\infty} [1 - F_i(u)]\, du$$
be the mean sojourn time of $\{r(t),\, t \ge 0\}$ in state $i$. Then, by the Ergodic theorem [6, Thm 2, p. 244], we have that for any bounded measurable function $f : (E, \mathcal{E}) \to (\mathbb{R}_+, \mathcal{B}(\mathbb{R}_+))$,
$$\lim_{t \to \infty} \frac{1}{t} \int_0^t f(r(s))\, ds = \frac{\sum_{r \in \mathcal{M}} f(r)\, \pi_r m_r}{\sum_{r \in \mathcal{M}} \pi_r m_r} \quad \text{a.s.} \tag{7}$$
Hereafter, we denote q β :" max rPM tβ A prq, β I prqu.
3.1. β A prq " β I prq :" βprq, r " 1, . . . , M , δ A " δ I :" δ.
Theorem 8. Let us assume β A prq " β I prq :" βprq, δ A " δ I :" δ in each subsystem r " 1, . . . , M . If ÿ rPM π r m rˆβ prq γ`µ ν`γ`µ´p δ`µq˙ă 0,
then the solution of system (3) with any initial value pxp0q, ηp0q, rp0qq P ΓˆR`ˆM satisfies
lim tÑ`8 Sptq " γ`µ ν`γ`µ ": S 0 a.s.,(8)lim tÑ`8 Aptq " 0 a.s.,(9)
lim tÑ`8
Iptq " 0 a.s.
Proof. We know that for all $\omega \in \Omega$ it holds
$$\frac{dS(\omega, t)}{dt} \le \mu + \gamma - (\mu + \nu + \gamma) S(\omega, t).$$
For different choices of the sample point $\omega \in \Omega$, the sample path $S(\omega, t)$ may converge at different speeds with respect to time $t$. Thus, for any $\omega \in \Omega$ and any constant $\varepsilon > 0$, by the comparison theorem there exists $T(\omega, \varepsilon) > 0$ such that for all $t > T$
$$S(\omega, t) \le S^0 + \varepsilon, \quad \text{hence} \quad \limsup_{t \to \infty} S(t) \le S^0 \quad \text{a.s.} \tag{11}$$
Based on this consideration, we shall prove assertions (9) and (10). We have that for all $\omega \in \Omega$ and $t > T$
$$\frac{d \ln(I(t) + A(t))}{dt} = \beta(r(t)) S(t) - (\delta + \mu) \le \beta(r(t))(S^0 + \varepsilon) - (\delta + \mu).$$
This implies that
$$\ln(I(t) + A(t)) \le \ln(I(T) + A(T)) + \int_T^t \big(\beta(r(u))(S^0 + \varepsilon) - (\delta + \mu)\big)\, du,$$
from which, by the ergodic result (7) for semi-Markov processes, we get
$$\limsup_{t \to \infty} \frac{\ln(I(t) + A(t))}{t} \le \limsup_{t \to \infty} \frac{1}{t} \int_T^t \big(\beta(r(u))(S^0 + \varepsilon) - (\delta + \mu)\big)\, du = \frac{1}{\sum_{r \in \mathcal{M}} \pi_r m_r} \sum_{r \in \mathcal{M}} \pi_r m_r \big(\beta(r)(S^0 + \varepsilon) - (\delta + \mu)\big) \quad \text{a.s.}$$
If $\sum_{r \in \mathcal{M}} \pi_r m_r (\beta(r) S^0 - (\delta + \mu)) < 0$, then for sufficiently small $\varepsilon > 0$ we have $\sum_{r \in \mathcal{M}} \pi_r m_r (\beta(r)(S^0 + \varepsilon) - (\delta + \mu)) < 0$, and consequently
$$\lim_{t \to \infty} A(t) = 0 \quad \text{and} \quad \lim_{t \to \infty} I(t) = 0 \quad \text{a.s.} \tag{12}$$
Now, we shall prove assertion (8). Let $\tilde{\Omega} = \{\omega \in \Omega : \lim_{t \to \infty} A(t) = 0\} \cap \{\omega \in \Omega : \lim_{t \to \infty} I(t) = 0\}$. Then, from (12), $\mathbb{P}(\tilde{\Omega}) = 1$, and for any $\omega \in \tilde{\Omega}$ and any constant $\varepsilon > 0$ there exists $T_1(\omega, \varepsilon) > 0$ such that for all $t > T_1$
$$A(\omega, t) < \varepsilon, \qquad I(\omega, t) < \varepsilon.$$
Thus, we have for all $\omega \in \tilde{\Omega}$ and $t > T_1$
$$\frac{dS(\omega, t)}{dt} \ge \mu - \varepsilon \beta(r) S(\omega, t) - (\mu + \nu + \gamma) S(\omega, t) + \gamma(1 - 2\varepsilon) \ge \mu - \varepsilon \check{\beta} S(\omega, t) - (\mu + \nu + \gamma) S(\omega, t) + \gamma(1 - 2\varepsilon).$$
Following the same arguments as in the proof of Theorem 4, we can assert that
$$\liminf_{t \to \infty} S(\omega, t) \ge S^0, \quad \omega \in \tilde{\Omega}.$$
Recalling that $\mathbb{P}(\tilde{\Omega}) = 1$, we have $\liminf_{t \to \infty} S(t) \ge S^0$ a.s., which combined with (11) gives $\lim_{t \to \infty} S(t) = S^0$ a.s.
Thus, under the condition of Theorem 8, we can say that any positive solution of system (3) converges exponentially to the disease-free state $x_0 = (S^0, 0, 0)$ almost surely.
Based on the definition in [2,15], the basic reproduction number for our model (3) with $\beta_A(r) = \beta_I(r) := \beta(r)$, $\delta_A = \delta_I := \delta$, in the semi-Markov random environment, can be written from Theorem 8 as
$$\mathcal{R}_0 = \frac{\sum_{r \in \mathcal{M}} \pi_r m_r\, \beta(r) S^0}{\sum_{r \in \mathcal{M}} \pi_r m_r\, (\delta + \mu)}. \tag{13}$$
We note that we would have arrived at the same result if we had followed the same arguments as in [15], which are based on the theory of the basic reproduction number in random environments developed in [2].
Remark 2. It is easy to see that in the case of Markov switching, that is, for exponential holding time distributions in each regime, the basic reproduction number for our model (3) with $\beta_A(r) = \beta_I(r) := \beta(r)$, $\delta_A = \delta_I := \delta$ is
$$\mathcal{R}_0 = \frac{\sum_{r \in \mathcal{M}} \pi_r\, \beta(r) S^0}{\sum_{r \in \mathcal{M}} \pi_r\, (\delta + \mu)} = \frac{\sum_{r \in \mathcal{M}} \pi_r\, \beta(r) S^0}{\delta + \mu}.$$
Proposition 9. From (13), the following equivalences hold:
(i) $\mathcal{R}_0 < 1$ if and only if $\sum_{r \in \mathcal{M}} \pi_r m_r (\beta(r) S^0 - (\delta + \mu)) < 0$;
(ii) $\mathcal{R}_0 > 1$ if and only if $\sum_{r \in \mathcal{M}} \pi_r m_r (\beta(r) S^0 - (\delta + \mu)) > 0$.
The proof is immediate, so it is omitted.
3.2. $\beta_A(r) \ne \beta_I(r)$, $r = 1, \ldots, M$, or $\delta_A \ne \delta_I$. Let us define $\check{\beta}(r) = \max\{\beta_A(r), \beta_I(r)\}$ and $\hat{\delta} = \min\{\delta_A, \delta_I\}$.
Theorem 10. Let $\beta_A(r) \ne \beta_I(r)$ or $\delta_A \ne \delta_I$ in each subsystem $r = 1, \ldots, M$. If
$$\sum_{r \in \mathcal{M}} \pi_r m_r \big(\check{\beta}(r) S^0 - (\hat{\delta} + \mu)\big) < 0, \tag{14}$$
then the solution of system (3) with any initial value $(x(0), \eta(0), r(0)) \in \Gamma \times \mathbb{R}_+ \times \mathcal{M}$ satisfies
$$\lim_{t \to \infty} S(t) = \frac{\gamma + \mu}{\nu + \gamma + \mu} =: S^0 \quad \text{a.s.}, \tag{15}$$
$$\lim_{t \to \infty} A(t) = 0 \quad \text{a.s.}, \tag{16}$$
$$\lim_{t \to \infty} I(t) = 0 \quad \text{a.s.} \tag{17}$$
Proof. Let us prove assertions (16) and (17). By using (11), we have that for all $\omega \in \Omega$ and $t > T$
$$\frac{d \ln(I(t) + A(t))}{dt} \le \check{\beta}(r(t))(S^0 + \varepsilon) - (\hat{\delta} + \mu).$$
By the same arguments as in Theorem 8, we obtain that
$$\limsup_{t \to \infty} \frac{\ln(I(t) + A(t))}{t} \le \frac{1}{\sum_{r \in \mathcal{M}} \pi_r m_r} \sum_{r \in \mathcal{M}} \pi_r m_r \big(\check{\beta}(r)(S^0 + \varepsilon) - (\hat{\delta} + \mu)\big) \quad \text{a.s.}$$
Thus, if $\sum_{r \in \mathcal{M}} \pi_r m_r (\check{\beta}(r) S^0 - (\hat{\delta} + \mu)) < 0$, then for sufficiently small $\varepsilon > 0$ we have $\sum_{r \in \mathcal{M}} \pi_r m_r (\check{\beta}(r)(S^0 + \varepsilon) - (\hat{\delta} + \mu)) < 0$, and consequently
$$\lim_{t \to \infty} A(t) = 0 \quad \text{and} \quad \lim_{t \to \infty} I(t) = 0 \quad \text{a.s.}$$
To prove assertion (15), we follow the same steps as in Theorem 8, considering that $\beta_A(r)$ and $\beta_I(r)$ are less than or equal to $\check{\beta}$.
With a different proof we can find another sufficient condition for the extinction of the disease.
Theorem 11. Let $\beta_A(r) \ne \beta_I(r)$ or $\delta_A \ne \delta_I$ in each subsystem $r = 1, \ldots, M$, and let $B(r) = (F - V)(r)$ as in (6). If
$$\sum_{r \in \mathcal{M}} \pi_r m_r\, \lambda_1\big(B(r) + B(r)^T\big) < 0, \tag{18}$$
where λ 1 is the maximum eigenvalue, then the solution of system (3) with any initial value pxp0q, ηp0q, rp0qq P ΓˆR`ˆM satisfies
lim tÑ`8 Sptq " γ`µ ν`γ`µ a.s.,(19)
lim tÑ`8
Aptq " 0 a.s.,
lim tÑ`8
Iptq " 0 a.s.
Proof. By following the same arguments as in the proof of Theorem 8, we know that (11) holds. Thus, for any $\omega \in \Omega$ and any constant $\varepsilon > 0$, there exists $T(\omega, \varepsilon) > 0$ such that for all $t > T$
$$\frac{dA(t)}{dt} \le \big(\beta_A(r(t)) A(t) + \beta_I(r(t)) I(t)\big)(S^0 + \varepsilon) - (\alpha + \delta_A + \mu) A(t), \qquad \frac{dI(t)}{dt} = \alpha A(t) - (\delta_I + \mu) I(t).$$
We shall now prove assertions (20) and (21) by considering the comparison system
$$\begin{aligned}
\frac{dw_1(t)}{dt} &= \big(\beta_A(r(t)) w_1(t) + \beta_I(r(t)) w_2(t)\big)(S^0 + \varepsilon) - (\alpha + \delta_A + \mu) w_1(t), \\
\frac{dw_2(t)}{dt} &= \alpha w_1(t) - (\delta_I + \mu) w_2(t), \\
w_1(T) &= A(T), \qquad w_2(T) = I(T).
\end{aligned}$$
Let $w(t) = (w_1(t), w_2(t))^T$ and consider the function $V(w(t)) = \ln \|w(t)\|_2$. Moreover, let $B_\varepsilon(r) = (F_\varepsilon - V_\varepsilon)(r)$ be the matrix in (6) computed at $x_0(\varepsilon) = (S^0 + \varepsilon, 0, 0)$. Then, we have
$$\frac{dV(w(t))}{dt} = \frac{1}{\|w(t)\|_2^2} \langle w(t), \dot{w}(t) \rangle = \frac{w(t)^T}{\|w(t)\|_2}\, B_\varepsilon(r)\, \frac{w(t)}{\|w(t)\|_2} = \frac{w(t)^T}{\|w(t)\|_2}\, \frac{B_\varepsilon(r) + B_\varepsilon(r)^T}{2}\, \frac{w(t)}{\|w(t)\|_2} \le \lambda_1\left(\frac{B_\varepsilon(r) + B_\varepsilon(r)^T}{2}\right).$$
By the same arguments as in Theorem 8, invoking (7), assertions (20) and (21) follow, and consequently (19).
Remark 3. In the case of Markov switching, condition (18) becomes
$$\sum_{r \in \mathcal{M}} \pi_r\, \lambda_1\big(B(r) + B(r)^T\big) < 0.$$
Remark 4. Let us fix $r \in \mathcal{M}$. Let us consider condition (14) and $\mathcal{R}_0(r)$ in (5). It is easy to see that
$$\check{\beta}(r) S^0 - (\hat{\delta} + \mu) < 0 \;\Rightarrow\; \mathcal{R}_0(r) < 1.$$
Now, let us consider condition (18). We have that
$$\lambda_1\big(B(r) + B(r)^T\big) = \beta_A(r) S^0 - (\alpha + \delta_A + \mu) - (\delta_I + \mu) + \sqrt{\big(\beta_A(r) S^0 - (\alpha + \delta_A) + \delta_I\big)^2 + \big(\beta_I(r) S^0 + \alpha\big)^2}.$$
From this and from (5), it is easy to see that
$$\lambda_1\big(B(r) + B(r)^T\big) < 0 \;\Rightarrow\; \mathcal{R}_0(r) < 1,$$
and that if $\beta_I(r) S^0 = \alpha$, it holds
$$\lambda_1\big(B(r) + B(r)^T\big) < 0 \;\Leftrightarrow\; \mathcal{R}_0(r) < 1.$$
In Section 7, we will compare the two conditions (14) and (18) numerically, by showing a case in which condition (14) is satisfied but (18) is not, and another case in which the converse occurs. Thus, it is sufficient that one of the two conditions is verified to ensure almost sure extinction.
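The following sketch (ours, not from the paper) evaluates the two sums appearing in (14) and (18) for the Case 2 parameters of Section 7; it reproduces the situation just described, where (14) fails while (18) holds.

```python
# Sketch: comparing the extinction conditions. (14) uses
# check_beta(r) = max(beta_A, beta_I) and hat_delta = min(delta_A, delta_I);
# (18) uses the top eigenvalue of B(r) + B(r)^T with B(r) = (F - V)(r) at
# the DFE. Parameter values follow Case 2 of Section 7.
import numpy as np

mu, nu, gamma, alpha = 1 / (60 * 365), 0.01, 0.01, 0.5
dA, dI = 0.3, 0.4
S0 = (gamma + mu) / (mu + nu + gamma)
betaA, betaI = np.array([0.55, 0.68]), np.array([0.5, 0.58])

def lam1(bA, bI):
    B = np.array([[bA * S0 - (alpha + dA + mu), bI * S0],
                  [alpha, -(dI + mu)]])
    return np.linalg.eigvalsh(B + B.T).max()

pi, m = np.array([0.5, 0.5]), np.array([0.9, 2.5]) / 0.8
w = pi * m
cond14 = (w * (np.maximum(betaA, betaI) * S0 - (min(dA, dI) + mu))).sum()
cond18 = (w * np.array([lam1(*ab) for ab in zip(betaA, betaI)])).sum()
print(cond14, cond18)   # here cond14 > 0 but cond18 < 0, so (18) applies
```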
4. Persistence
In this section, we investigate the persistence in time mean of the disease. Let us remark that $I + A$ denotes the fraction of individuals that may infect the susceptible population.
Definition 12. We say that system (3) is almost surely persistent in the time mean if
$$\liminf_{t \to \infty} \frac{1}{t} \int_0^t \big(I(u) + A(u)\big)\, du > 0 \quad \text{a.s.}$$
Theorem 13. Let us assume $\beta_A(r) = \beta_I(r) := \beta(r)$, $\delta_A = \delta_I := \delta$ in each subsystem $r = 1, \ldots, M$. If $\mathcal{R}_0 > 1$, then for any initial value $(x(0), \eta(0), r(0)) \in \mathring{\Gamma} \times \mathbb{R}_+ \times \mathcal{M}$, the following statements hold with probability 1:
$$\liminf_{t \to \infty} \frac{1}{t} \int_0^t S(u)\, du \ge \frac{\mu}{\mu + \nu + \check{\beta}}, \tag{22}$$
$$\liminf_{t \to \infty} \frac{1}{t} \int_0^t \big(I(u) + A(u)\big)\, du \ge \frac{\mu + \nu + \gamma}{\check{\beta}(\check{\beta} + \gamma)}\, \frac{\sum_{r \in \mathcal{M}} \pi_r m_r \big(\beta(r) S^0 - (\delta + \mu)\big)}{\sum_{r \in \mathcal{M}} \pi_r m_r}. \tag{23}$$
Proof. For ease of notation, we will omit the dependence on $\omega$, and on $t$ where unnecessary. Since $A + I \le 1$, from the first equation of system (2) we have
$$\frac{dS(t)}{dt} \ge \mu - (\mu + \nu) S - \beta(r)(A + I) S \ge \mu - [(\mu + \nu) + \beta(r)] S;$$
integrating this inequality and dividing both sides by $t$, we obtain
$$\frac{\mu + \nu}{t} \int_0^t S(u)\, du + \frac{1}{t} \int_0^t \beta(r(u)) S(u)\, du \ge \mu - \frac{S(t) - S(0)}{t}. \tag{24}$$
Then, for all $\omega \in \Omega$ it holds
$$\lim_{t \to \infty} \frac{S(t) - S(0)}{t} = 0 \quad \text{a.s.} \tag{25}$$
From (24), it follows that
$$\liminf_{t \to \infty} \frac{1}{t} \int_0^t S(u)\, du \ge \frac{\mu}{\mu + \nu + \check{\beta}} \quad \text{a.s.},$$
and assertion (22) is proved. Next, we will prove assertion (23). By summing the second and third equations of (3), we have that
d lnpIptq`Aptqq dt " βprqS´δ´µ,
from which, integrating both sides
lnpIptq`Aptqq " lnpIp0q`Ap0qq`ż t 0 βprpuqqSpuqdu´ż t 0 pδ`µqdu " lnpIp0q`Ap0qq`ż t 0 βprpuqq γ`µ ν`γ`µ du´ż t 0 βprpuqqˆγ`µ ν`γ`µ´S puq˙du´ż t 0 pδ`µqdu ě lnpIp0q`Ap0qq`ż t 0 βprpuqq γ`µ ν`γ`µ du´q β ż t 0ˆγ`µ ν`γ`µ´S puq˙du´ż t 0 pδ`µqdu.(26)
Now, we have that
$$\frac{dS(t)}{dt} = \mu - (\mu + \nu) S - \beta(r)(A + I) S + \gamma - \gamma S - \gamma(I + A) \ge (\nu + \gamma + \mu)\left(\frac{\gamma + \mu}{\nu + \gamma + \mu} - S\right) - (\beta(r) + \gamma)(I + A) \ge (\nu + \gamma + \mu)\left(\frac{\gamma + \mu}{\nu + \gamma + \mu} - S\right) - (\check{\beta} + \gamma)(I + A), \tag{27}$$
from which, integrating both sides,
$$(\nu + \gamma + \mu) \int_0^t \left(\frac{\gamma + \mu}{\nu + \gamma + \mu} - S(u)\right) du \le S(t) - S(0) + (\check{\beta} + \gamma) \int_0^t \big(I(u) + A(u)\big)\, du. \tag{28}$$
Combining (26) with (28), we obtain
$$\ln(I(t) + A(t)) \ge \ln(I(0) + A(0)) + \int_0^t \left(\beta(r(u))\, \frac{\gamma + \mu}{\nu + \gamma + \mu} - (\delta + \mu)\right) du - \frac{\check{\beta}}{\nu + \gamma + \mu} \left[S(t) - S(0) + (\check{\beta} + \gamma) \int_0^t \big(I(u) + A(u)\big)\, du\right]. \tag{29}$$
For all $\omega \in \Omega$, (25) holds; moreover, it is easy to see that $\limsup_{t \to \infty} \ln(I(t) + A(t))/t \le 0$. Thus, from (29) and the ergodic result (7), we get
$$\liminf_{t \to \infty} \frac{1}{t} \int_0^t \big(I(u) + A(u)\big)\, du \ge \frac{\nu + \gamma + \mu}{\check{\beta}(\check{\beta} + \gamma)}\, \frac{\sum_{r \in \mathcal{M}} \pi_r m_r \big(\beta(r) S^0 - (\delta + \mu)\big)}{\sum_{r \in \mathcal{M}} \pi_r m_r} \quad \text{a.s.},$$
which is assertion (23).
Thus, by (ii) of Proposition 9, we conclude that the disease is persistent in time mean with probability 1.
Corollary 14. Let us assume $\beta_A(r) = \beta_I(r) := \beta(r)$, $\delta_A = \delta_I := \delta$ in each subsystem $r = 1, \ldots, M$. If $\mathcal{R}_0 > 1$, then for any initial value $(x(0), \eta(0), r(0)) \in \mathring{\Gamma} \times \mathbb{R}_+ \times \mathcal{M}$, the following statements hold with probability 1:
$$\liminf_{t \to \infty} \frac{1}{t} \int_0^t I(u)\, du \ge \frac{\alpha}{\alpha + \delta + \mu}\, \frac{\mu + \nu + \gamma}{\check{\beta}(\check{\beta} + \gamma)}\, \frac{\sum_{r \in \mathcal{M}} \pi_r m_r \big(\beta(r) S^0 - (\delta + \mu)\big)}{\sum_{r \in \mathcal{M}} \pi_r m_r}, \tag{30}$$
and
$$\liminf_{t \to \infty} \frac{1}{t} \int_0^t A(u)\, du \ge \frac{\delta + \mu}{\alpha + \delta + \mu}\, \frac{\mu + \nu + \gamma}{\check{\beta}(\check{\beta} + \gamma)}\, \frac{\sum_{r \in \mathcal{M}} \pi_r m_r \big(\beta(r) S^0 - (\delta + \mu)\big)}{\sum_{r \in \mathcal{M}} \pi_r m_r}. \tag{31}$$
Proof. Integrating the third equation of system (3) and dividing both sides by $t$, we have
$$(\alpha + \delta + \mu)\, \frac{1}{t} \int_0^t I(u)\, du \ge \frac{\alpha}{t} \int_0^t \big(I(u) + A(u)\big)\, du - \frac{I(t) - I(0)}{t},$$
and thus (30) follows from (23). Analogously, integrating the third equation of system (3) and dividing both sides by $t$, it is easy to see that (31) holds.
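For illustration (ours, not from the paper), the lower bounds of Theorem 13 and Corollary 14 can be evaluated directly from the model parameters; the values below use the Case 3 a) setting of Section 7.

```python
# Sketch: numerical evaluation of the persistence bounds (23), (30), (31)
# in the case beta_A = beta_I =: beta, delta_A = delta_I =: delta.
import numpy as np

mu, nu, gamma, alpha, delta = 1 / (60 * 365), 0.01, 0.02, 0.5, 0.07
beta = np.array([0.05, 0.9])
pi, m = np.array([0.5, 0.5]), np.array([4, 15]) / 0.8
S0 = (gamma + mu) / (mu + nu + gamma)
cb = beta.max()                                         # check_beta
drift = (pi * m * (beta * S0 - (delta + mu))).sum() / (pi * m).sum()
IA = (mu + nu + gamma) / (cb * (cb + gamma)) * drift    # bound (23)
print(IA,
      alpha / (alpha + delta + mu) * IA,                # bound (30) on I
      (delta + mu) / (alpha + delta + mu) * IA)         # bound (31) on A
```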
For the next result, we need to define $\hat{\beta}(r) := \min\{\beta_A(r), \beta_I(r)\}$, $\check{\hat{\beta}} := \max_{r \in \mathcal{M}} \hat{\beta}(r)$, and $\check{\delta} := \max\{\delta_A, \delta_I\}$.
Theorem 15. Let us assume $\beta_A(r) \ne \beta_I(r)$ or $\delta_A \ne \delta_I$ in each subsystem $r = 1, \ldots, M$. If
$$\sum_{r \in \mathcal{M}} \pi_r m_r \big(\hat{\beta}(r) S^0 - (\check{\delta} + \mu)\big) > 0, \tag{32}$$
then for any initial value $(x(0), \eta(0), r(0)) \in \mathring{\Gamma} \times \mathbb{R}_+ \times \mathcal{M}$, the following statements hold with probability 1:
$$\liminf_{t \to \infty} \frac{1}{t} \int_0^t S(u)\, du \ge \frac{\mu}{\mu + \nu + \check{\beta}}, \tag{33}$$
$$\liminf_{t \to \infty} \frac{1}{t} \int_0^t \big(I(u) + A(u)\big)\, du \ge \frac{\mu + \nu + \gamma}{\check{\beta}(\check{\beta} + \gamma)}\, \frac{\sum_{r \in \mathcal{M}} \pi_r m_r \big(\hat{\beta}(r) S^0 - (\check{\delta} + \mu)\big)}{\sum_{r \in \mathcal{M}} \pi_r m_r}. \tag{34}$$
Proof. Assertion (33) can be proved in the same way as assertion (22) in Theorem 13, by observing that $(\beta_A(r) A + \beta_I(r) I) S \le \check{\beta}(A + I) S$. Let us prove assertion (34). By summing the second and third equations of (3), we have that
$$\frac{d \ln(I(t) + A(t))}{dt} = \frac{1}{I + A}\Big[\big(\beta_A(r) A + \beta_I(r) I\big) S - (\delta_A + \mu) A - (\delta_I + \mu) I\Big] \ge \hat{\beta}(r) S - (\check{\delta} + \mu).$$
Following similar arguments as in (26), we obtain that
$$\ln(I(t) + A(t)) \ge \ln(I(0) + A(0)) + \int_0^t \hat{\beta}(r(u))\, \frac{\gamma + \mu}{\nu + \gamma + \mu}\, du - \check{\beta} \int_0^t \left(\frac{\gamma + \mu}{\nu + \gamma + \mu} - S(u)\right) du - \int_0^t (\check{\delta} + \mu)\, du. \tag{35}$$
Now, by the same steps as in (27), we obtain
$$(\nu + \gamma + \mu) \int_0^t \left(\frac{\gamma + \mu}{\nu + \gamma + \mu} - S(u)\right) du \le S(t) - S(0) + (\check{\beta} + \gamma) \int_0^t \big(I(u) + A(u)\big)\, du. \tag{36}$$
By combining (35) and (36), we have
$$\ln(I(t) + A(t)) \ge \ln(I(0) + A(0)) + \int_0^t \left(\hat{\beta}(r(u))\, \frac{\gamma + \mu}{\nu + \gamma + \mu} - (\check{\delta} + \mu)\right) du - \frac{\check{\beta}}{\nu + \gamma + \mu} \left[S(t) - S(0) + (\check{\beta} + \gamma) \int_0^t \big(I(u) + A(u)\big)\, du\right].$$
Finally, with the same arguments as in Theorem 13, we obtain (34).
Remark 5. Let us fix $r \in \mathcal{M}$, and consider condition (32) and $\mathcal{R}_0(r)$ in (5). It is easy to see that
$$\hat{\beta}(r) S^0 - (\check{\delta} + \mu) > 0 \;\Rightarrow\; \mathcal{R}_0(r) > 1.$$
Corollary 16. Let us assume $\beta_A(r) \ne \beta_I(r)$ or $\delta_A \ne \delta_I$ in each subsystem $r = 1, \ldots, M$. If
$$\sum_{r \in \mathcal{M}} \pi_r m_r \big(\hat{\beta}(r) S^0 - (\check{\delta} + \mu)\big) > 0,$$
then for any initial value $(x(0), \eta(0), r(0)) \in \mathring{\Gamma} \times \mathbb{R}_+ \times \mathcal{M}$, the following statements hold with probability 1:
$$\liminf_{t \to \infty} \frac{1}{t} \int_0^t I(u)\, du \ge \frac{\alpha}{\alpha + \delta_I + \mu}\, \frac{\mu + \nu + \gamma}{\check{\beta}(\check{\beta} + \gamma)}\, \frac{\sum_{r \in \mathcal{M}} \pi_r m_r \big(\hat{\beta}(r) S^0 - (\check{\delta} + \mu)\big)}{\sum_{r \in \mathcal{M}} \pi_r m_r},$$
and
$$\liminf_{t \to \infty} \frac{1}{t} \int_0^t A(u)\, du \ge \frac{\delta_I + \mu}{\alpha + \delta_I + \mu}\, \frac{\mu + \nu + \gamma}{\check{\beta}(\check{\beta} + \gamma)}\, \frac{\sum_{r \in \mathcal{M}} \pi_r m_r \big(\hat{\beta}(r) S^0 - (\check{\delta} + \mu)\big)}{\sum_{r \in \mathcal{M}} \pi_r m_r}.$$
The proof is analogous to that of Corollary 14, by invoking (34).
Remark 6. Let β A prq " β I prq :" βprq, δ A " δ I :" δ, for all r " 1, . . . , M . From the almost surely persistence in time mean proved in Theorem 13, the following weak persistence follows: if R 0 ą 1, then for any initial value pxp0q, ηp0q, rp0qq PΓˆR`ˆM, it holds lim sup tÑ8 pIptq`Aptqq ě µ`ν`γ q βp q β`γq ř rPM π r m r pβprqS 0´p δ`µqq ř rPM π r m r a.s.
Let β A prq ‰ β I prq or δ A ‰ δ I , for all r " 1, . . . , M . From Theorem 15 follows: if ř rPM π r m r pβprqS 0ṕ q δ`µqq ą 0 then for any initial value pxp0q, ηp0q, rp0qq PΓˆR`ˆM, it holds lim sup tÑ8 pIptq`Aptqq ě µ`ν`γ q βp q β`γq ř rPM π r m r pβprqS 0´p q δ`µqq ř rPM π r m r , a.s.
In the case β A prq " β I prq :" βprq, δ A " δ I :" δ, the position of the value R 0 (13) with respect to one determines the extinction or the persistence of the disease, that is R 0 is a threshold value. Thus, from Theorems (8) and (13), we obtain the following corollary:
Corollary 17. Let us assume β A prq " β I prq :" βprq, δ A " δ I :" δ, and consider R 0 in (13). Then,the solution of system (3) has the property that (i) If R 0 ă 1, for any initial value pxp0q, ηp0q, rp0qq P ΓˆR`ˆM, the fraction of asymptomatic and infected individuals Aptq and Iptq, respectively, tends to zero exponentially almost surely, that is the disease dies out with probability one; (ii) If R 0 ą 1, for any initial value pxp0q, ηp0q, rp0qq PΓˆR`ˆM, the disease will be almost surely persistent in time mean.
Remark 7. Let us assume $\beta_A(r) \ne \beta_I(r)$ or $\delta_A \ne \delta_I$, fix $r \in \mathcal{M}$, and consider condition (32) and $\mathcal{R}_0(r)$ in (5). It is easy to see that
$$\hat{\beta}(r) S^0 - (\check{\delta} + \mu) > 0 \;\Rightarrow\; \mathcal{R}_0(r) > 1.$$
In the case $\beta_A(r) \ne \beta_I(r)$ or $\delta_A \ne \delta_I$, we find two regions, one where the system goes to extinction almost surely, and the other where it is stochastically persistent in time mean. These two regions are not adjacent, as there is a gap between them; thus, we do not have a threshold value separating them.
In the following Section 5, we investigate the omega-limit set of the system. The introduction of the backward recurrence time process makes the considered stochastic system a piecewise deterministic Markov process [15]. Thus, in Section 6, we prove the existence of a unique invariant probability measure for this process. Let us note that in the two subsequent sections, to obtain our results, we mainly follow the approaches in [15] and, like them, for simplicity, we restrict the analysis to a semi-Markov process with state space $\mathcal{M} = \{1, 2\}$. Hence, the external environmental conditions can switch randomly between two states, for example favorable and adverse weather conditions for the disease spread, or lockdown and less stringent distancing measures, considering that the disease transmission rate is also a function of the contact rate.
5. Omega-limit set
Let us assume in this section and in the subsequent one that $\mathcal{M} = \{1, 2\}$. Let us define the omega-limit set of the trajectory starting from an initial value $x(0) \in \Gamma$ as
$$\Omega(x(0), \omega) = \bigcap_{T > 0} \overline{\bigcup_{t > T} x(t, \omega, x(0))}. \tag{37}$$
We use the notation $\Omega$ for the limit set (37) in place of the usual $\omega$ of deterministic dynamical systems, to avoid conflict with the element $\omega$ of the probability sample space. In this section, it will be shown that under appropriate conditions $\Omega(x(0), \omega)$ is deterministic, i.e., it is constant almost surely and independent of the initial value $x(0)$. Let us consider the following assumption:
(H2) For some $r \in \mathcal{M}$, there exists a unique and globally asymptotically stable endemic equilibrium $x_r^* = (S_r^*, A_r^*, I_r^*)$ for the corresponding subsystem of (3) in the state $r$.
Let us note that when $\beta_A(r) = \beta_I(r) := \beta(r)$ and $\delta_A = \delta_I := \delta$, in the case of persistence in time mean, by (ii) of Proposition 9 the condition $\mathcal{R}_0 > 1$ implies that there exists at least one state $r$ such that $\pi_r m_r (\beta(r) S^0 - (\delta + \mu)) > 0$, i.e., $\mathcal{R}_0(r) > 1$. By Theorem 6, $x_r^*$ is then globally asymptotically stable in $\mathring{\Gamma}$. Thus, if $\mathcal{R}_0 > 1$, condition (H2) is satisfied. When $\beta_A(r) \ne \beta_I(r)$ or $\delta_A \ne \delta_I$, by Theorem 15, if (32) holds, then there exists at least one state $r$ such that $\hat{\beta}(r) S^0 - (\check{\delta} + \mu) > 0$; from Remark 7 this implies $\mathcal{R}_0(r) > 1$. By Theorem 7, we have the global asymptotic stability of $x_r^*$ if $\mathcal{R}_0(r) > 1$ under the additional condition $\beta_A(r) < \delta_I$. However, it is easy to see that if this last condition holds, $\hat{\beta}(r) S^0 - (\check{\delta} + \mu) > 0$ cannot hold. Thus, if we need (H2), we can suppose that it holds for a state in which $\hat{\beta}(r) S^0 - (\check{\delta} + \mu) > 0$ is not verified; indeed, we recall that this last condition is only sufficient for $\mathcal{R}_0(r) > 1$, not necessary. Now, let us recall some notions on the Lie algebra of vector fields [3,11] that we need for the next results. Let $w(y)$ and $z(y)$ be two vector fields on $\mathbb{R}^3$. The Lie bracket $[w, z]$ is also a vector field, given by
$$[w, z]_j(y) = \sum_{k=1}^3 \left(w_k\, \frac{\partial z_j}{\partial y_k}(y) - z_k\, \frac{\partial w_j}{\partial y_k}(y)\right), \quad j = 1, 2, 3.$$
Assumption (H3): A point $x = (S, A, I) \in \mathbb{R}^3$ is said to satisfy the Lie bracket condition if the vectors $u_1(x)$, $u_2(x)$, $[u_i, u_j](x)_{i,j \in \mathcal{M}}$, $[u_i, [u_j, u_k]](x)_{i,j,k \in \mathcal{M}}, \ldots$, span the space $\mathbb{R}^3$, where for each $r \in \mathcal{M}$,
$$u_r(x) = \begin{pmatrix} \mu - \big(\beta_A(r) A + \beta_I(r) I\big) S - (\mu + \nu + \gamma) S + \gamma(1 - A - I) \\ \big(\beta_A(r) A + \beta_I(r) I\big) S - (\alpha + \delta_A + \mu) A \\ \alpha A - (\delta_I + \mu) I \end{pmatrix}. \tag{38}$$
Theorem 18. Suppose that system (3) is persistent in time mean and hypothesis (H2) holds. Let us denote by $\xi_t^r(x(0))$ the solution of system (3) in the state $r$ with initial value $x(0) \in \mathring{\Gamma}$, and let
$$\Psi = \left\{(S, A, I) = \xi_{t_k}^{e_k} \circ \cdots \circ \xi_{t_1}^{e_1}(x_1^*) : t_1, \ldots, t_k \ge 0 \text{ and } e_1, \ldots, e_k \in \mathcal{M},\ k \in \mathbb{N}\right\}.$$
Without loss of generality, we can assume that condition (H2) holds for $r = 1$. Then the following statements are valid:
(a) the closure $\overline{\Psi}$ is a subset of the omega-limit set $\Omega(x(0), \omega)$ with probability one;
(b) if there exists a point $x^* := (S^*, A^*, I^*) \in \Psi$ satisfying condition (H3), then $\Psi$ absorbs all positive solutions, that is, for any initial value $x(0) \in \mathring{\Gamma}$, the value $\hat{T}(\omega) = \inf\{t > 0 : x(s, \omega, x(0)) \in \Psi,\ \forall s > t\}$ is finite outside a $\mathbb{P}$-null set. Consequently, $\overline{\Psi}$ is the omega-limit set $\Omega(x(0), \omega)$ for any $x(0) \in \mathring{\Gamma}$ with probability one.
The proof of Theorem 18 follows by arguments similar to those of [15, Thm 9], and thus we omit it.
6. Invariant probability measure
In this section, we will prove the existence of an invariant probability measure for the homogeneous Markov process $\{(x(t), \eta(t), r(t)),\, t \ge 0\}$ on the state space $\mathcal{H} = \mathring{\Gamma} \times \mathbb{R}_+ \times \mathcal{M}$.
Following [15], we introduce a metric $\rho(\cdot, \cdot)$ on the state space $\mathcal{H}$:
$$\rho\big((x_1, s_1, i), (x_2, s_2, j)\big) = \sqrt{|x_1 - x_2|^2 + |s_1 - s_2|^2} + o(i, j), \tag{39}$$
where
$$o(i, j) = \begin{cases} 0, & \text{if } i = j, \\ 1, & \text{if } i \ne j. \end{cases}$$
Hence, $(\mathcal{H}, \rho(\cdot, \cdot), \mathcal{B}(\mathcal{H}))$ is a complete separable metric space, where $\mathcal{B}(\mathcal{H})$ is the Borel $\sigma$-algebra on $\mathcal{H}$.
To ensure the existence of an invariant probability measure on $\mathcal{H}$, we use the following exclusion principle.
Lemma 19 (see [26]). Let $\Phi = \{\Phi_t,\, t \ge 0\}$ be a Feller process with state space $(X, \mathcal{B}(X))$. Then either
a) there exists an invariant probability measure on $X$, or
b) for any compact set $C \subset X$,
$$\lim_{t \to \infty} \sup_{\kappa} \frac{1}{t} \int_0^t \left(\int_X P(u, x, C)\, \kappa(dx)\right) du = 0,$$
where the supremum is taken over all initial distributions $\kappa$ on the state space $X$, $x \in X$ is the initial condition for the process $\Phi_t$, and $P(t, x, C) = \mathbb{P}_x(\Phi_t \in C)$ is the transition probability function.
To prove the Feller property of the process $\{(x(t), \eta(t), r(t)),\, t \ge 0\}$, we need the following lemma.
Lemma 20. Let $\rho(\cdot, \cdot)$ be the metric defined in (39). Then, for any $T > 0$ and $\varepsilon > 0$, we have
$$\mathbb{P}\left\{\max_{0 \le t \le T} \rho\big((x(t, x_1), \eta(t), r(t)),\, (x(t, x_2), \eta(t), r(t))\big) \ge \varepsilon\right\} \to 0 \tag{40}$$
as $|x_1 - x_2| \to 0$, where $(x_1, \eta(0), r(0)), (x_2, \eta(0), r(0)) \in \mathcal{H}$ denote any two given initial values of the process $\{(x(t), \eta(t), r(t)),\, t \ge 0\}$.
Proof. By considering (38), it is easy to see that
$$d\big(x(t, x_1) - x(t, x_2)\big) = \big(u_{r(t)}(x(t, x_1)) - u_{r(t)}(x(t, x_2))\big)\, dt. \tag{41}$$
Applying the Itô formula to the function $|x(t, x_1) - x(t, x_2)|^2$, we have
$$\mathbb{E}\,|x(t, x_1) - x(t, x_2)|^2 = |x_1 - x_2|^2 + 2\, \mathbb{E}\left[\int_0^t \big\langle x(s, x_1) - x(s, x_2),\, u_{r(s)}(x(s, x_1)) - u_{r(s)}(x(s, x_2))\big\rangle\, ds\right]. \tag{42}$$
For ease of notation, let us define
$$G(s, x_k) = \beta_A(r(s))\, A(s, x_k) + \beta_I(r(s))\, I(s, x_k), \quad k = 1, 2.$$
Now, we have that
$$\begin{aligned}
\big\langle x(s, x_1) - x(s, x_2),\, u_{r(s)}(x(s, x_1)) - u_{r(s)}(x(s, x_2))\big\rangle
&= -\big(S(s, x_1) - S(s, x_2)\big)\big(G(s, x_1) S(s, x_1) - G(s, x_2) S(s, x_2)\big) - (\mu + \nu + \gamma)\big(S(s, x_1) - S(s, x_2)\big)^2 \\
&\quad - \gamma\big(S(s, x_1) - S(s, x_2)\big)\big(A(s, x_1) - A(s, x_2)\big) - \gamma\big(S(s, x_1) - S(s, x_2)\big)\big(I(s, x_1) - I(s, x_2)\big) \\
&\quad + \big(A(s, x_1) - A(s, x_2)\big)\big(G(s, x_1) S(s, x_1) - G(s, x_2) S(s, x_2)\big) - (\alpha + \delta_A + \mu)\big(A(s, x_1) - A(s, x_2)\big)^2 \\
&\quad + \alpha\big(A(s, x_1) - A(s, x_2)\big)\big(I(s, x_1) - I(s, x_2)\big) - (\delta_I + \mu)\big(I(s, x_1) - I(s, x_2)\big)^2.
\end{aligned}$$
Since
$$\gamma\big(S(s, x_1) - S(s, x_2)\big)\big(A(s, x_1) - A(s, x_2)\big) \le \frac{\gamma}{2}\big(S(s, x_1) - S(s, x_2)\big)^2 + \frac{\gamma}{2}\big(A(s, x_1) - A(s, x_2)\big)^2,$$
$$\gamma\big(S(s, x_1) - S(s, x_2)\big)\big(I(s, x_1) - I(s, x_2)\big) \le \frac{\gamma}{2}\big(S(s, x_1) - S(s, x_2)\big)^2 + \frac{\gamma}{2}\big(I(s, x_1) - I(s, x_2)\big)^2,$$
and
$$\alpha\big(A(s, x_1) - A(s, x_2)\big)\big(I(s, x_1) - I(s, x_2)\big) \le \frac{\alpha}{2}\big(A(s, x_1) - A(s, x_2)\big)^2 + \frac{\alpha}{2}\big(I(s, x_1) - I(s, x_2)\big)^2,$$
we can write
$$\begin{aligned}
\big\langle x(s, x_1) - x(s, x_2),\, u_{r(s)}(x(s, x_1)) - u_{r(s)}(x(s, x_2))\big\rangle
&\le (\mu + \nu + 2\gamma)\big(S(s, x_1) - S(s, x_2)\big)^2 + \left(\frac{\gamma}{2} + \frac{3\alpha}{2} + \delta_A + \mu\right)\big(A(s, x_1) - A(s, x_2)\big)^2 \\
&\quad + \left(\delta_I + \mu + \frac{\gamma}{2} + \frac{\alpha}{2}\right)\big(I(s, x_1) - I(s, x_2)\big)^2 \\
&\quad + \big|\big(S(s, x_1) - S(s, x_2)\big)\big(G(s, x_1) S(s, x_1) - G(s, x_2) S(s, x_2)\big)\big| \\
&\quad + \big|\big(A(s, x_1) - A(s, x_2)\big)\big(G(s, x_1) S(s, x_1) - G(s, x_2) S(s, x_2)\big)\big|. \tag{43}
\end{aligned}$$
Now,
$$\begin{aligned}
\big|\big(S(s, x_1) - S(s, x_2)\big)\big(G(s, x_1) S(s, x_1) - G(s, x_2) S(s, x_2)\big)\big|
&\le \check{\beta}\,\big|S(s, x_1) - S(s, x_2)\big|\,\Big|\big(S(s, x_1) - S(s, x_2)\big)\big(A(s, x_1) + I(s, x_1)\big) \\
&\qquad + S(s, x_2)\big(A(s, x_1) + I(s, x_1) - (A(s, x_2) + I(s, x_2))\big)\Big| \\
&\le \check{\beta}\big(S(s, x_1) - S(s, x_2)\big)^2 + \frac{\check{\beta}}{2}\Big[\big(S(s, x_1) - S(s, x_2)\big)^2 + \big(A(s, x_1) - A(s, x_2)\big)^2\Big] \\
&\qquad + \frac{\check{\beta}}{2}\Big[\big(S(s, x_1) - S(s, x_2)\big)^2 + \big(I(s, x_1) - I(s, x_2)\big)^2\Big], \tag{44}
\end{aligned}$$
and, similarly,
$$\begin{aligned}
\big|\big(A(s, x_1) - A(s, x_2)\big)\big(G(s, x_1) S(s, x_1) - G(s, x_2) S(s, x_2)\big)\big|
&\le \frac{\check{\beta}}{2}\Big[\big(A(s, x_1) - A(s, x_2)\big)^2 + \big(S(s, x_1) - S(s, x_2)\big)^2\Big] + \check{\beta}\big(A(s, x_1) - A(s, x_2)\big)^2 \\
&\qquad + \frac{\check{\beta}}{2}\Big[\big(A(s, x_1) - A(s, x_2)\big)^2 + \big(I(s, x_1) - I(s, x_2)\big)^2\Big]. \tag{45}
\end{aligned}$$
Now, substituting (44) and (45) into (43) yields
$$\big\langle x(s, x_1) - x(s, x_2),\, u_{r(s)}(x(s, x_1)) - u_{r(s)}(x(s, x_2))\big\rangle \le K\, |x(s, x_1) - x(s, x_2)|^2, \tag{46}$$
where $K = \max\{K_1, K_2, K_3\}$, with
$$K_1 = \frac{5\check{\beta}}{2} + (\mu + \nu + 2\gamma), \qquad K_2 = \frac{5\check{\beta}}{2} + \left(\frac{\gamma}{2} + \frac{3\alpha}{2} + \delta_A + \mu\right), \qquad K_3 = \check{\beta} + \delta_I + \mu + \frac{\gamma}{2} + \frac{\alpha}{2}.$$
From (42) and (46), following similar steps as in [15, Lemma 14], we have that
$$\int_0^T \mathbb{E}\, |x(s, x_1) - x(s, x_2)|^2\, ds \le |x_1 - x_2|^2 \int_0^T \exp(2Ks)\, ds \to 0 \tag{47}$$
as $|x_1 - x_2| \to 0$. Moreover, from (41), it follows that
$$\max_{0 \le t \le T} |x(t, x_1) - x(t, x_2)| \le |x_1 - x_2| + \int_0^T \big|u_{r(s)}(x(s, x_1)) - u_{r(s)}(x(s, x_2))\big|\, ds. \tag{48}$$
For any ε ą 0, from Markov inequality and similar steps as in [15,Lemma 14], we obtain The proof requires the claim of Lemma 20 and it is analogous to that of [15,Lemma 15], thus we omit it.
P # ż T 0ˇu rpsq pxps, x 1 qq´u rpsq pxps, x 2 qˇˇds ě ε + ď T ε 2 E « ż T 0 |u rpsq pxps, x 1 qq´u rpsq pxps, x 2 q| 2 ds ff(49)
At this point, we can prove the existence of an invariant probability measure by using Lemma 19.
Theorem 22. Suppose that system (3) is persistent in time mean. Then the Markov process $\{(x(t), \eta(t), r(t)),\, t \ge 0\}$ has an invariant probability measure $\kappa^*$ on the state space $\mathcal{H}$.
Proof. Let us consider the process $\{(x(t), \eta(t), r(t)),\, t \ge 0\}$ on the larger state space
$$\tilde{\mathcal{H}} = \tilde{\Gamma} \times \mathbb{R}_+ \times \mathcal{M},$$
where $\tilde{\Gamma} = \Gamma \setminus \{(S, A, I) : A + I = 0\}$. By Lemmas 19 and 21, we can prove the existence of an invariant probability measure $\kappa^*$ for the Markov process $h(t) = \{(x(t), \eta(t), r(t)),\, t \ge 0\}$ on $\tilde{\mathcal{H}}$, provided that a compact subset $C \subset \tilde{\mathcal{H}}$ exists such that
$$\liminf_{t \to \infty} \frac{1}{t} \int_0^t \left(\int_{\tilde{\mathcal{H}}} P(u, h, C)\, \kappa(dh)\right) du = \liminf_{t \to \infty} \frac{1}{t} \int_0^t P(u, h_0, C)\, du > 0 \tag{51}$$
for some initial distribution $\kappa = \delta_{h_0}$ with $h_0 \in \tilde{\mathcal{H}}$, where $\delta_{\cdot}$ is the Dirac measure. Once the existence of $\kappa^*$ is proved, we can easily see that $\kappa^*$ is also an invariant probability measure of $\{(x(t), \eta(t), r(t)),\, t \ge 0\}$ on $\mathcal{H}$. Indeed, it is easy to prove that for any initial value $(x(0), \eta(0), r(0)) \in \mathring{\Gamma} \times \mathbb{R}_+ \times \mathcal{M}$, the solution $x(t)$ of system (3) does not reach the boundary $\partial \Gamma$ when the system is persistent in time mean. Consequently, $\kappa^*(\partial \Gamma \times \mathbb{R}_+ \times \mathcal{M}) = 0$, which implies that $\kappa^*$ is also an invariant probability measure on $\mathcal{H}$. Thus, to complete the proof we just have to find a compact subset $C \subset \tilde{\mathcal{H}}$ satisfying (51). Hereafter, in the proof, we assume without loss of generality that $\eta(0) = 0$. Since system (3) is persistent in time mean, there exists a constant $\iota > 0$ such that $\liminf_{t \to \infty} \frac{1}{t} \int_0^t (I(u) + A(u))\, du \ge \iota$ a.s.
Following similar steps as in [15, Theorem 16], we obtain
$$\liminf_{t \to \infty} \frac{1}{t} \int_0^t \mathbb{P}\left\{I(u) + A(u) \ge \frac{\iota}{2}\right\} du \ge \frac{\iota}{2}. \tag{52}$$
Now, we have to verify that $\mathbb{P}\{\eta(u) > K\} < \varepsilon$ holds for any $u \in \mathbb{R}_+$, where $\varepsilon$ is a positive constant such that $\varepsilon < \iota/4$ and $K > 0$ is a sufficiently large constant such that $\max_{i \in \mathcal{M}}\{1 - F_i(K)\} < \varepsilon/2$; this can be done following similar steps as in [15, Theorem 16]. This implies that
$$\frac{1}{t} \int_0^t \mathbb{P}\left\{I(u) + A(u) \ge \frac{\iota}{2}\right\} du \le \varepsilon + \frac{1}{t} \int_0^t \mathbb{P}\left\{I(u) + A(u) \ge \frac{\iota}{2},\ \eta(u) \le K\right\} du.$$
Combining this with (52), we have
$$\liminf_{t \to \infty} \frac{1}{t} \int_0^t \mathbb{P}\left\{I(u) + A(u) \ge \frac{\iota}{2},\ \eta(u) \le K\right\} du \ge \frac{\iota}{4}.$$
Let us consider the compact set $C = D \times [0, K] \times \mathcal{M} \subset \tilde{\mathcal{H}}$, where
$$D = \left\{(S, A, I) \in \Gamma : 0 \le S + A + I \le 1,\ I + A \ge \frac{\iota}{2}\right\}.$$
Then, it follows that
$$\liminf_{t \to \infty} \frac{1}{t} \int_0^t P(u, h_0, C)\, du = \liminf_{t \to \infty} \frac{1}{t} \int_0^t \mathbb{P}\big\{x(u, \omega, x(0)) \in D,\ \eta(u) \le K\big\}\, du = \liminf_{t \to \infty} \frac{1}{t} \int_0^t \mathbb{P}\left\{I(u) + A(u) \ge \frac{\iota}{2},\ \eta(u) \le K\right\} du \ge \frac{\iota}{4}.$$
Since $\iota > 0$, we arrive at our claim.
Now, we show that the invariant probability measure $\kappa^*$ is unique by using the properties of Harris recurrence and positive Harris recurrence (for the standard definitions see, e.g., [18,15]).
Proposition 23. Suppose that system (3) is persistent in time mean and that assumptions (H2)-(H3) hold. Then the Markov process $\{(x(t), \eta(t), r(t)),\, t \ge 0\}$ is positive Harris recurrent. Thus, the invariant probability measure $\kappa^*$ of $\{(x(t), \eta(t), r(t)),\, t \ge 0\}$ on the state space $\mathcal{H}$ is unique.
Proof. The Harris recurrence of the process $\{(x(t), \eta(t), r(t)),\, t \ge 0\}$ follows by steps similar to those in [15, Lemma 20]. Thus, from Theorem 22, we can conclude that the process is positive Harris recurrent. Therefore, the invariant probability measure $\kappa^*$ of $\{(x(t), \eta(t), r(t)),\, t \ge 0\}$ on $\mathcal{H}$ is unique [5,18].
7. Numerical experiments
In this section, we provide some numerical investigations to verify the theoretical results. We assume that the conditional holding time distribution $F_i(\cdot)$ of the semi-Markov process in state $i$ is a gamma distribution $\Gamma(k_i, \theta_i)$, $i = 1, \ldots, M$, whose probability density function is
$$f_i(t; k_i, \theta_i) = \frac{\theta_i^{k_i}\, t^{k_i - 1}\, e^{-\theta_i t}}{\Gamma(k_i)}$$
for $t > 0$, with cumulative distribution function
$$F_i(t; k_i, \theta_i) = \int_0^t f_i(u; k_i, \theta_i)\, du,$$
where $k_i > 0$, $\theta_i > 0$, and $\Gamma(\cdot)$ is the complete gamma function. Let us note that if $k_i = 1$, the gamma distribution becomes the exponential distribution with parameter $\theta_i$. We discuss the almost sure extinction and the persistence in time mean of the system in the following cases.
Figure 1. Dynamical behaviour of system (2) and the semi-Markov process $r(t)$. The parameter values are: $\beta_A(1) = 0.004$, $\beta_I(1) = 0.008$, $\beta_A(2) = 0.97$, $\beta_I(2) = 0.99$, $\delta_A = 0.105$, $\delta_I = 0.1$, $\mu = 1/(60 \cdot 365)$, $\alpha = 0.3$, $\gamma = 0.03$, and $\nu = 0.01$. The parameters of $F_1$ and $F_2$ are, respectively: a) $k_1 = 6$, $\theta_1 = 0.8$ and $k_2 = 12$, $\theta_2 = 0.8$; b) $k_1 = 15$, $\theta_1 = 0.8$ and $k_2 = 2$, $\theta_2 = 0.8$.
Figure 2. Dynamical behaviour of system (2) and the semi-Markov process $r(t)$. The parameter values are: $\beta_A(1) = 0.55$, $\beta_I(1) = 0.5$, $\beta_A(2) = 0.68$, $\beta_I(2) = 0.58$, $\delta_A = 0.3$, $\delta_I = 0.4$, $\mu = 1/(60 \cdot 365)$, $\nu = 0.01$, $\alpha = 0.5$, $\gamma = 0.01$. The parameters of $F_1$ and $F_2$ are $k_1 = 0.9$, $\theta_1 = 0.8$ and $k_2 = 2.5$, $\theta_2 = 0.8$, respectively.
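A small sanity check (ours, not from the paper): with the rate parametrization used here, the mean sojourn time in state $i$ is $m_i = k_i/\theta_i$, which the following sketch verifies against Monte Carlo samples.

```python
# Sketch: analytic vs empirical mean of the gamma sojourn times used in
# the experiments (numpy's gamma takes a scale parameter, i.e. 1/theta).
import numpy as np

rng = np.random.default_rng(7)
for k, theta in [(6, 0.8), (12, 0.8), (15, 0.8), (2, 0.8)]:
    draws = rng.gamma(k, 1 / theta, size=200_000)
    print(k, theta, k / theta, draws.mean())
```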
Case 1: $\beta_A(r) \ne \beta_I(r)$, $\delta_A(r) \ne \delta_I(r)$. We consider two states: $r_1$, in which the epidemic dies out, and $r_2$, in which it persists. In Fig. 1 a), we consider $F_1$, $F_2$ with parameters $k_1 = 6$, $\theta_1 = 0.8$ and $k_2 = 12$, $\theta_2 = 0.8$, respectively. Consequently, the respective mean sojourn times are $m_1 = 7.5$ and $m_2 = 15$. The other parameters are: $\beta_A(1) = 0.004$, $\beta_I(1) = 0.008$, $\beta_A(2) = 0.97$, $\beta_I(2) = 0.99$, $\delta_A = 0.105$, $\delta_I = 0.1$, $\mu = 1/(60 \cdot 365)$, $\alpha = 0.3$, $\gamma = 0.03$, and $\nu = 0.01$. We have that
$$\sum_{r \in \mathcal{M}} \pi_r m_r \big(\hat{\beta}(r) S^0 - (\check{\delta} + \mu)\big) = 4.283 > 0,$$
thus the whole system is stochastically persistent in time mean. In Fig. 1 b), we consider $F_1$, $F_2$ with parameters $k_1 = 15$, $\theta_1 = 0.8$ and $k_2 = 2$, $\theta_2 = 0.8$, respectively; we have $m_1 = 18.75$, $m_2 = 2.5$. The other parameters are the same as in a). In this case,
$$\sum_{r \in \mathcal{M}} \pi_r m_r \big(\check{\beta}(r) S^0 - (\hat{\delta} + \mu)\big) = -0.0782 < 0,$$
and the epidemic will go extinct almost surely in the long run. Thus, we can see the relevant role played by the mean sojourn times: in the two figures the parameters are the same, but in Fig. 1 a) the mean sojourn time in the persistent state is higher than that in the extinction state, while in Fig. 1 b) the opposite occurs.
Case 2: $\beta_A(r) \ne \beta_I(r)$, $\delta_A(r) \ne \delta_I(r)$. Here, we want to compare the two sufficient conditions (14) and (18) for almost sure extinction. In Fig. 2, we consider again two states: $r_1$, in which the epidemic dies out, and $r_2$, in which it persists. The parameters are: $\beta_A(1) = 0.55$, $\beta_I(1) = 0.5$, $\beta_A(2) = 0.68$, $\beta_I(2) = 0.58$, $\delta_A = 0.3$, $\delta_I = 0.4$, $\mu = 1/(60 \cdot 365)$, $\nu = 0.01$, $\alpha = 0.5$, $\gamma = 0.01$. Let us consider $F_1$ and $F_2$ with $k_1 = 0.9$, $\theta_1 = 0.8$ and $k_2 = 2.5$, $\theta_2 = 0.8$, respectively. Hence, $m_1 = 1.125$ and $m_2 = 3.125$. We have
$$\sum_{r \in \mathcal{M}} \pi_r m_r \big(\check{\beta}(r) S^0 - (\hat{\delta} + \mu)\big) = 0.05 > 0 \qquad \text{and} \qquad \sum_{r \in \mathcal{M}} \pi_r m_r\, \lambda_1\big(B(r) + B(r)^T\big) = -0.1959 < 0.$$
Thus, in this case condition (14) is not satisfied but (18) is. Vice versa, in Fig. 1 b) we have the opposite situation: the sum in condition (14) is less than zero, while that in (18) is equal to 1.0084, hence greater than zero. By Theorems 10 and 11, we have almost sure extinction in both cases, as we can also see from Figs. 2 and 1 b).
Figure 3. Dynamical behaviour of system (2) and the semi-Markov process $r(t)$. The parameter values are: $\beta_A(1) = \beta_I(1) = 0.05$, $\beta_A(2) = \beta_I(2) = 0.9$, $\delta_A = \delta_I = 0.07$, $\mu = 1/(60 \cdot 365)$, $\nu = 0.01$, $\alpha = 0.5$, $\gamma = 0.02$. The parameters of $F_1$ and $F_2$ are, respectively: a) $k_1 = 4$, $\theta_1 = 0.8$ and $k_2 = 15$, $\theta_2 = 0.8$; b) $k_1 = 15$, $\theta_1 = 0.8$ and $k_2 = 3$, $\theta_2 = 0.8$; c) $k_1 = 18$, $\theta_1 = 0.8$ and $k_2 = 1$, $\theta_2 = 0.8$.
Case 3: β A prq " β I prq :" βprq, δ A prq " δ I prq :" δprq. We consider two states: r 1 , in which the epidemic dies out, and r 2 in which it persists. The parameters are: β A p1q " β I p1q " 0.05, β A p2q " β I p2q " 0.9, δ A " δ I " 0.07, µ " 1{p60˚365q, ν " 0.01, α " 0.5, γ " 0.02. In Fig.3 a), we have F 1 , F 2 with parameters k 1 " 4, θ 1 " 0.8 and k 2 " 15, θ 2 " 0.8, respectively. Consequently, m 1 " 5 and m 2 " 18.75, and ÿ rPM π r m rˆβ prq γ`µ ν`γ`µ´p δ`µq˙" 4.8809 ą 0.
Thus, the whole system is persistent in time mean. In Fig.3 b), F 1 and F 2 have parameters k 1 " 15, θ 1 " 0.8 and k 2 " 3, θ 2 " 0.8. Thus, m 1 " 18.75 and m 2 " 3.75, and ÿ rPM π r m rˆβ prq γ`µ ν`γ`µ´p δ`µq˙" 0.6506 ą 0.
Here, we stay on average longer in the state where the epidemic will go extinct, with respect to the case a). However, although the system is persistent in time mean, the threshold value lowers a lot and the fraction of infectious symptomatic and asymptomatic individuals have the time to decays in some time windows, and the susceptible to increases. In Fig.3 c) F 1 and F 2 have parameters k 1 " 18, θ 1 " 0.8 and k 2 " 1, θ 2 " 0.8, respectively. Thus, m 1 " 22.5 and m 2 " 1.25, and ÿ rPM π r m rˆβ prq γ`µ ν`γ`µ´p δ`µq˙"´0.0812 ă 0.
Thus, in this case, with the same parameters as in cases a) and b), the disease will go extinct almost surely in the long run, stressing again the relevance of the mean sojourn times in stemming the epidemic.
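The three threshold sums of Case 3 can be reproduced with a few lines (our sketch below, not from the paper); only the mean sojourn times change between the settings a)-c), and the sign of the sum flips accordingly.

```python
# Sketch: reproducing the Case 3 threshold sums; a positive sum means
# persistence in time mean, a negative one almost sure extinction.
import numpy as np

mu, nu, gamma, delta = 1 / (60 * 365), 0.01, 0.02, 0.07
beta, pi = np.array([0.05, 0.9]), np.array([0.5, 0.5])
S0 = (gamma + mu) / (nu + gamma + mu)
for label, k in zip("abc", ([4, 15], [15, 3], [18, 1])):
    m = np.array(k) / 0.8
    print(label, (pi * m * (beta * S0 - (delta + mu))).sum())
# expected signs: a) ~ 4.88 > 0, b) ~ 0.65 > 0, c) ~ -0.08 < 0
```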
8. Conclusion
We have analyzed a SAIRS-type epidemic model with vaccination under semi-Markov switching. In this model, the role of the asymptomatic individuals in the epidemic dynamics and the random environment that possibly influences the disease transmission parameters are considered. Under the assumption that asymptomatic and symptomatic infectious individuals have the same transmission and recovery rates, we have found the value of the basic reproduction number $\mathcal{R}_0$ for our stochastic model. We have shown that if $\mathcal{R}_0 < 1$ the disease goes extinct almost surely, while if $\mathcal{R}_0 > 1$ the system is persistent in time mean. Then, we have analyzed the model without restrictions, that is, the transmission and recovery rates of the asymptomatic and symptomatic individuals can possibly differ. In this case, we have found two different sufficient conditions for the almost sure extinction of the disease and a sufficient condition for the persistence in time mean. However, the two regions of extinction and persistence are not adjacent: there is a gap between them, and thus we do not have a threshold value dividing them. In the case of disease persistence, restricting the analysis to two environmental states, under the Lie bracket conditions we have investigated the omega-limit set of the system. Moreover, we have proved the existence of a unique invariant probability measure for the Markov process obtained by introducing the backward recurrence process that keeps track of the time elapsed since the latest switch. Finally, we have provided numerical simulations to validate our analytical results and to investigate the role of the mean sojourn time in the random environments.
Acknowledgments
This research was supported by the University of Trento in the frame "SBI-COVID - Squashing the business interruption curve while flattening pandemic curve (grant 40900013)".
[1] S. Ansumali, S. Kaushal, A. Kumar, M. K. Prakash, and M. Vidyasagar. Modelling a pandemic with asymptomatic patients, impact of lockdown and herd immunity, with applications to SARS-CoV-2. Annual Reviews in Control, 2020.
[2] N. Bacaër and M. Khaladi. On the basic reproduction number in a random environment. Journal of Mathematical Biology, 67(6):1729-1739, 2013.
[3] M. Benaïm, S. Le Borgne, F. Malrieu, and P.-A. Zitt. Qualitative properties of certain piecewise deterministic Markov processes. Annales de l'IHP Probabilités et Statistiques, 51(3):1040-1075, 2015.
[4] X. Cao, Z. Jin, G. Liu, and M. Y. Li. On the basic reproduction number in semi-Markov switching networks. Journal of Biological Dynamics, 15(1):73-85, 2021.
[5] R. K. Getoor. Transience and recurrence of Markov processes. In Séminaire de Probabilités XIV 1978/79, pages 397-409. Springer, 1980.
[6] I. Gikhman and A. V. Skorokhod. The Theory of Stochastic Processes II. Springer Science & Business Media, 2004.
[7] A. Gray, D. Greenhalgh, X. Mao, and J. Pan. The SIS epidemic model with Markovian switching. Journal of Mathematical Analysis and Applications, 394(2):496-516, 2012.
[8] D. Greenhalgh, Y. Liang, and X. Mao. Modelling the effect of telegraph noise in the SIRS epidemic model using Markovian switching. Physica A: Statistical Mechanics and its Applications, 462:684-704, 2016.
[9] Z. Han and J. Zhao. Stochastic SIRS model under regime switching. Nonlinear Analysis: Real World Applications, 14(1):352-364, 2013.
[10] Z. Hou, J. A. Filar, and A. Chen. Markov Processes and Controlled Markov Chains. Springer Science & Business Media, 2013.
[11] V. Jurdjevic. Geometric Control Theory. Cambridge University Press, 1997.
[12] J. T. Kemper. The effects of asymptomatic attacks on the spread of infectious disease: a deterministic model. Bulletin of Mathematical Biology, 40(6):707-718, 1978.
[13] W. O. Kermack and A. G. McKendrick. Contributions to the mathematical theory of epidemics-I. Bulletin of Mathematical Biology, 53:33-55, 1991.
[14] D. Li and S. Liu. Threshold dynamics and ergodicity of an SIRS epidemic model with Markovian switching. Journal of Differential Equations, 263(12):8873-8915, 2017.
[15] D. Li, S. Liu, and J.-A. Cui. Threshold dynamics and ergodicity of an SIRS epidemic model with semi-Markov switching. Journal of Differential Equations, 266(7):3973-4017, 2019.
[16] D. Li and H. Wan. Coexistence and exclusion of competitive Kolmogorov systems with semi-Markovian switching. Discrete & Continuous Dynamical Systems, 41(9):4145, 2021.
[17] N. Limnios and G. Oprisan. Semi-Markov Processes and Reliability. Springer Science & Business Media, 2001.
[18] S. P. Meyn and R. L. Tweedie. Stability of Markovian processes III: Foster-Lyapunov criteria for continuous-time processes. Advances in Applied Probability, 25(3):518-548, 1993.
[19] E. J. Nelson, J. B. Harris, J. G. Morris, S. B. Calderwood, and A. Camilli. Cholera transmission: the host, pathogen and bacteriophage dynamic. Nature Reviews Microbiology, 7(10):693-702, 2009.
[20] S. Ottaviano, M. Sensi, and S. Sottile. Global stability of SAIRS epidemic models. arXiv preprint arXiv:2109.05122, 2021.
[21] S. Ottaviano and S. Bonaccorsi. A stochastic differential equation SIS model on network under Markovian switching. arXiv preprint arXiv:2011.10454, 2020.
[22] M. Peirlinck, K. Linka, F. Sahli Costabal, J. Bhattacharya, E. Bendavid, J. P. A. Ioannidis, and E. Kuhl. Visualizing the invisible: The effect of asymptomatic transmission on the outbreak dynamics of COVID-19. Computer Methods in Applied Mechanics and Engineering, 372:113410, 2020.
[23] M. Robinson and N. I. Stilianakis. A model for the emergence of drug resistance in the presence of asymptomatic infections. Mathematical Biosciences, 243(2):163-177, 2013.
[24] C. Serra, M. D. Martínez, X. Lana, and A. Burgueño. European dry spell length distributions, years 1951-2000. Theoretical and Applied Climatology, 114(3-4):531-551, 2013.
[25] M. J. Small and D. J. Morgan. The relationship between a continuous-time renewal model and a discrete Markov chain model of precipitation occurrence. Water Resources Research, 22(10):1422-1430, 1986.
[26] L. Stettner. On the existence and uniqueness of invariant measure for continuous time Markov processes. Technical report, Brown University, Providence, RI, Lefschetz Center for Dynamical Systems, 1986.
[27] N. I. Stilianakis, A. S. Perelson, and F. G. Hayden. Emergence of drug resistance during an influenza epidemic: insights from a mathematical model. Journal of Infectious Diseases, 177(4):863-873, 1998.
[28] F. Wang and Z. Liu. Dynamical behavior of stochastic SIRS model with two different incidence rates and Markovian switching. Advances in Difference Equations, 2019(1):322, 2019.
[29] X. Zhao, T. Feng, L. Wang, and Z. Qiu. Threshold dynamics and sensitivity analysis of a stochastic semi-Markov switched SIRS epidemic model with nonlinear incidence and vaccination. Discrete & Continuous Dynamical Systems-B, 2020.
[30] G. Zong, W. Qi, and Y. Shi. Advances on modeling and control of semi-Markovian switching systems: A survey. Journal of the Franklin Institute, 2021.
[
"Cross-Fusion Rule for Personalized Federated Learning",
"Cross-Fusion Rule for Personalized Federated Learning"
] | [
"Wangzhuo Yang ",
"Bo Chen ",
"Yijun Shen ",
"Jiong Liu ",
"Yu Li "
] | [] | [] | Data scarcity and heterogeneity pose significant performance challenges for personalized federated learning, and these challenges are mainly reflected in overfitting and low precision in existing methods. To overcome these challenges, a multi-layer multi-fusion strategy framework is proposed in this paper, i.e., the server adopts the network layer parameters of each client upload model as the basic unit of fusion for information-sharing calculation. Then, a new fusion strategy combining personalized and generic is purposefully proposed, and the network layer number fusion threshold of each fusion strategy is designed according to the network layer function. Under this mechanism, the L2-Norm negative exponential similarity metric is employed to calculate the fusion weights of the corresponding feature extraction layer parameters for each client, thus improving the efficiency of heterogeneous data personalized collaboration. Meanwhile, the federated global optimal model approximation fusion strategy is adopted in the network full-connect layer, and this generic fusion strategy alleviates the overfitting introduced by forceful personalized. Finally, the experimental results show that the proposed method is superior to the state-of-the-art methods.Index Terms-personalized federated learning, layer-based, cross-fusion rule, multi-layer multi-fusion strategy, heterogeneous data. | 10.2139/ssrn.4372952 | [
"https://export.arxiv.org/pdf/2302.02531v1.pdf"
] | 256,615,797 | 2302.02531 | 8b27874480882bd15c9c47bedf84f48426d8d704 |
Cross-Fusion Rule for Personalized Federated Learning
6 Feb 2023
Wangzhuo Yang
Bo Chen
Yijun Shen
Jiong Liu
Yu Li
Cross-Fusion Rule for Personalized Federated Learning
6 Feb 2023
Data scarcity and heterogeneity pose significant performance challenges for personalized federated learning, and these challenges are mainly reflected in overfitting and low precision in existing methods. To overcome these challenges, a multi-layer multi-fusion strategy framework is proposed in this paper, i.e., the server adopts the network layer parameters of each client upload model as the basic unit of fusion for information-sharing calculation. Then, a new fusion strategy combining personalized and generic is purposefully proposed, and the network layer number fusion threshold of each fusion strategy is designed according to the network layer function. Under this mechanism, the L2-Norm negative exponential similarity metric is employed to calculate the fusion weights of the corresponding feature extraction layer parameters for each client, thus improving the efficiency of heterogeneous data personalized collaboration. Meanwhile, the federated global optimal model approximation fusion strategy is adopted in the network full-connect layer, and this generic fusion strategy alleviates the overfitting introduced by forceful personalized. Finally, the experimental results show that the proposed method is superior to the state-of-the-art methods.Index Terms-personalized federated learning, layer-based, cross-fusion rule, multi-layer multi-fusion strategy, heterogeneous data.
I. INTRODUCTION
With the increasing emphasis on data privacy, federated learning, which requires no sharing of private data, has attracted growing scholarly attention [1], [2]. Because its non-shared data design protects user privacy, federated learning is commonly applied in the financial, medical, industrial, and energy domains [3]-[7]. While a significant amount of work has focused on the optimization aspects of federated learning, model overfitting and data heterogeneity remain key challenges [8]-[12]. Heterogeneous, non-independently and identically distributed data make the model obtained by each client one-sided, so that it cannot represent the whole sample space. In this context, [13] argues that data heterogeneity exacerbates model overfitting. Therefore, designing reliable algorithms for the client-side data heterogeneity problem is at the heart of federated learning research.
To address this problem, a global model re-parameterization idea was suggested in [14], which uses the fused model as a constraint target for each client, so that the client model is trained to approximate that target. However, this method does not consider the type of collaboration, so similar clients cannot effectively use each other's model information to improve performance. Against this background, several personalized federated learning (PFL) methods have been proposed [15]-[20], which are dedicated to optimizing the collaboration strategy between client models. Specifically, a strategy similar to [14] is used in [19], with the improvement of replacing the global model as the penalty target with a private model for each client. Further, the pFedMe method proposed in [20] uses the Moreau envelope as a constraint term to enable personalized deployment of the client model, where a bi-level strategy is adopted to compute a conditionally convergent server model. Based on the strategy of adding additional terms, these improved PFL methods compensate for the weakness in personalized data processing of [8], which is caused by pursuing the global optimum through a weighted average of client models. Unfortunately, the lack of processing strategies for data heterogeneity during server fusion leads to a general problem of low precision in these methods. Accordingly, it is necessary to design a fine-grained fusion policy to calculate personalized weights for each client. To this end, a method named personalized federated few-shot learning was developed [21]; its core idea is to construct a personalized feature space per client, in which feature similarity is used as a metric to determine the fusion weight of each client. Furthermore, the fused model is utilized as the penalty term of the corresponding client to improve the personalization capability. In [22], a novel client-personalized weight calculation strategy is developed on the server side, i.e., the weighting factors are designed as negative exponential mappings of the distances between the models. Then, the personalized model of each client, obtained by this weighting, is used as a penalty factor in the optimization objective to improve model performance. Similarly, a weighting strategy with a different structure is proposed in [23], which uses the first-order extreme points of the model loss function to approximate the optimal weights. However, the sophisticated personalized fusion rules in these PFL methods introduce a risk of overfitting while solving the data heterogeneity problem. To mitigate overfitting in federated learning, a Gaussian-process-based PFL method was proposed in [24], which represents the model better owing to its Bayesian nature. Analogously, a PFL method based on Bayesian neural networks was proposed in [9], where the penalty term is the KL distance between the assumed distribution and the posterior distribution of the model parameters. Notice that both the kernel function selection in the Gaussian process and the assumed parameter distribution in the Bayesian neural network depend on a large amount of supporting data, which makes these two methods unsuitable for scenarios with small amounts of heterogeneous data. Therefore, overfitting on heterogeneous data with few samples remains a problem that each client needs to address.
To meet the data heterogeneity requirements of different clients, global efficiency and local model personality are jointly considered in [25]. This inspires us to design the global model as a generic term to alleviate the overfitting problem and the personalized model as a penalty term to meet the personalized needs of heterogeneous data. Meanwhile, the idea of fusion based on neural network layers was suggested in [26], and a layer-based federated learning method was subsequently developed in [27], which requires a portion of the raw data on the server to train the fusion weights. Although this requirement defeats the original data-preservation purpose of federated learning, it further motivates us to integrate the generic and personalized terms into one model, with the network layer treated as the basic fusion unit. Moreover, a deep neural network is viewed in [28] as consisting of two parts, shallow and deep, which are noted to be generic and specialized, respectively; this further inspires the design of a function-based, layer-wise fusion strategy in this paper.
Motivated by the above analysis, we study the data heterogeneity and overfitting problems of federated learning systems. Unlike traditional PFL, which is dedicated to the design of personalized terms, this work focuses on both personalized and generic terms. The main contributions of this paper are as follows.
1) For PFL systems, a novel layer-based personalized federated fusion rule, different from the pseudo-federated structure in [27], is proposed by combining different fusion policies at different network layers in each communication epoch. A personalized fusion framework with multi-layer multi-fusion strategies is then presented, together with a rule that determines the fusion threshold, i.e., the number of network layers assigned to each fusion strategy, based on the network layer function.
2) Based on the multi-layer multi-fusion framework, a strategy that cross-fuses personalized and generic terms is implemented in this paper. Using the negative exponential distance mapping of the L2-norm similarity metric, the rule for calculating fusion weights between clients is improved to achieve personalized fusion of the feature extraction layers. On the other side, a generic federated global optimal model approximation fusion strategy is applied to the fully-connected layers of the network to alleviate client overfitting. It should be stressed that the personalized and generic terms in this paper refer to the processing rules of different layers during model fusion, which enhance model performance through their respective properties.
3) The layer function-based fusion threshold rule is applied within the multi-layer multi-fusion strategy framework to improve personalized federated learning performance. Extensive experiments on three benchmark datasets show that the proposed personalized federated learning method based on the cross-fusion rule (pFedCFR) outperforms state-of-the-art (SOTA) PFL methods [19], [20], [22] and generic federated learning strategies [8], [14].
II. PROBLEM FORMULATION
Consider a federated learning system with N clients described by the following structure:

Client: c_1, ..., c_n, ..., c_N
Parameter: ν_1, ..., ν_n, ..., ν_N
Dataset: D_1, ..., D_n, ..., D_N    (1)

where the structure of the network model M is the same for all clients, so the size of the corresponding model parameter ν_n is the same for each c_n, and D_n is a private, non-independently and identically distributed training dataset of each c_n. For each c_n, the best performance of M(ν_n) on D_n is illustrated by ν*_n. Specifically, each client c_n individually represents the loss of model parameter ν*_n on the training dataset D_n through a cost function {F_n(ν_n): R^d → R, ν_n ∈ R^d}. Thus, through the collaboration of the clients, the goal of personalized federated learning is defined as follows:
min_V G(V) := Σ_{n=1}^{N} F_n(ν_n) + P(V)    (2)
where V denotes the collection of client parameters, i.e., V = [ν_1, ..., ν_n, ..., ν_N], G(V) is the global optimization objective, and P(V) is the penalty term applied to each client. The loss term F_n(ν_n) of each c_n in (2) is calculated on its personalized training dataset. Meanwhile, the parameter ν_n is updated and transmitted to the server. Based on the parameter ν_n, the basic unit of collaboration information is given by
ν_n = [ν_{n,0}, ···, ν_{n,l}, ···, ν_{n,L}]^T    (3)

where ν_{n,l} denotes the model parameters of the lth layer and L is the depth of the model; the collaboration scheme will be designed in Section III. Subsequently, with data heterogeneity and model overfitting in mind, the cross-fusion strategy structure of each client is given by
RULE: fusion_p ⇒ fusion_g    (4)
where the symbol ⇒ indicates that the cross-fusion rule consists of two serial fusion rules; the personalized fusion rule fusion_p and the generic fusion rule fusion_g will be designed in Section III. Consequently, the issues to be addressed in this paper are described as follows.
1) The first aim is to design a layer-based personalized federated fusion structure for (3) such that the collaboration information ν_{n,l} is more granular and the key information interactions are independent of each other.
2) Under the cross-fusion strategy (4), the second aim is to design the personalized fusion rule fusion_p and the generic fusion rule fusion_g such that the raw-data feature extraction layers [ν_{n,0} : ν_{n,l}], l < L, have strong personalization capability while the remaining layers have generalization capability.
Remark 1: It follows from (2) that the penalty term P(V) directly affects the optimization objective of the proposed cross-fusion rule. Through the collaboration between the model parameters in V, the penalty terms P(V) are computed, improving the performance of c_n on the heterogeneous dataset D_n. Notice that the core of PFL is the collaborative strategy among the clients, while the penalty term aims at deep optimization of ν_n in [14], [16], [20], [22]; this motivates us to design fusion rules that account for both data heterogeneity and model overfitting for each client. Moreover, when designing the fusion rules in this paper, each client's layer parameters are treated as the basic unit, which also leads us to focus on the effect of using different fusion strategies at the same layer.
Remark 2: It is known from (4) that two serial fusion rules run on the server. Combined with the layer-based fusion structure in (3), the fusion rules fusion_p and fusion_g are designed to handle the information collaboration of different layer parameters between clients, respectively. It should be noted that fusion_p is dedicated to the personalization of heterogeneous data, while fusion_g aims to solve the overfitting problem. In this case, the fused ν_n also appears in the penalty term with the same shape.
Notations: Since the server needs to calculate the weights of each local model in federated learning, "fusion" is considered more appropriate than "aggregation" in this paper. The superscript "T" represents the transpose, while Diag(·) denotes extracting the elements on the diagonal of a matrix and forming them into a column vector. The symbol "→" indicates a point-to-point connection. λ, µ and α_t are hyperparameters greater than 0.
III. PROPOSED METHOD
In this section, a layer-based client collaboration idea is developed to enhance fine-grained processing capability, based on which a personalized fusion framework with multi-layer multi-fusion strategies is designed. Then, a rule for determining the threshold on the number of network layers under each fusion strategy is designed. Furthermore, a cross-fusion rule is developed to tackle the problems of data heterogeneity and overfitting in the PFL setting. For visualization, the overall framework of the proposed pFedCFR is shown in Fig. 1.
A. Layer-Based Structure
Conventional methods achieve collaboration between clients by weighted fusion of whole models, but they ignore the specificity of network-layer functionality and its varying role in different models. To solve this problem, a layer-based fusion structure is developed in this paper, using neural network layers as the basic fusion unit. Let ω_l = [ω_{1,l}, ω_{2,l}, ···, ω_{N,l}]^T and W_n = [ω_1, ω_2, ···, ω_L], where ω_{n,l} denotes the weight of the lth layer in c_n, whose specific value is given by the fusion rule, and the subscript n in W_n indicates that the current computation concerns c_n. Then, combining with (3), the fusion result for each epoch of c_n is expressed as
ν′_n = Diag(V · W_n)
     = Diag( [ ν_{1,1} ν_{2,1} ··· ν_{N,1} ]   [ ω_{1,1} ω_{1,2} ··· ω_{1,L} ]
             [ ν_{1,2} ν_{2,2} ··· ν_{N,2} ] · [ ω_{2,1} ω_{2,2} ··· ω_{2,L} ]
             [    ⋮       ⋮    ⋱     ⋮    ]   [    ⋮       ⋮    ⋱     ⋮    ]
             [ ν_{1,L} ν_{2,L} ··· ν_{N,L} ]   [ ω_{N,1} ω_{N,2} ··· ω_{N,L} ] )
     = [ν′_{n,1}, ν′_{n,2}, ···, ν′_{n,L}]^T    (5)

where ν′_{n,l} denotes the updated layer parameters, whose value is ω_{1,l}·ν_{1,l} + ··· + ω_{N,l}·ν_{N,l}.
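To make the layer-wise fusion of Eq. (5) concrete, the following sketch computes ν′_n one layer at a time instead of forming the full matrix product and discarding the off-diagonal entries; the function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def fuse_layers(layer_params, weights):
    """Layer-based fusion of Eq. (5), computed per layer.

    layer_params: list of length L; entry l is an array of shape (N, d_l)
                  stacking nu_{1,l}, ..., nu_{N,l} over all N clients.
    weights:      array of shape (N, L); column l holds omega_{1,l}..omega_{N,l}.
    Returns [nu'_{n,1}, ..., nu'_{n,L}] as a list of arrays.
    """
    fused = []
    for l, params_l in enumerate(layer_params):
        # nu'_{n,l} = omega_{1,l} nu_{1,l} + ... + omega_{N,l} nu_{N,l}
        fused.append(weights[:, l] @ params_l)
    return fused
```

Computing each layer separately is equivalent to Diag(V · W_n) but avoids the L × L off-diagonal products that the Diag(·) operator would throw away.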
In the existing PFL algorithms [14], [22], the weights ω^G_{n,l} applied to ν_n take the same value for all layers, while each ω_{n,l} in (5) is more flexible and diverse. Denoting the fusion strategy by fusionRule, the weights of the two approaches can be expressed as

ω^G_{n,1} = ··· = ω^G_{n,L} = fusionRule(ν_n; ν_1, ν_2, ···, ν_N)
ω_{n,l} = fusionRule(ν_{n,l}; ν_{1,l}, ν_{2,l}, ···, ν_{N,l})    (6)
Notice that ω_{n,l} is obtained by computing only the lth-layer network parameters, which implies that multiple fusion rules can coexist in a federated learning algorithm that uses layers as fusion units. Following this idea, a multi-layer multi-fusion strategy structure is developed as

fusionRule_1(ν_{1,1}, ν_{2,1}, ···, ν_{N,1})
fusionRule_2(ν_{1,2}, ν_{2,2}, ···, ν_{N,2})
⋮
fusionRule_L(ν_{1,L}, ν_{2,L}, ···, ν_{N,L})    (7)
where fusionRule_l is the fusion rule adopted at the lth layer. Remark 3: It can be seen from (6) that ω_{n,l} is co-determined by [ν_{1,l}, ν_{2,l}, ···, ν_{N,l}], which is more refined and targeted than ω^G_{n,l} when one considers that each layer of the network affects the model differently. Since the optimization target differs under different fusion strategies, the fusion rules fusionRule_l in (7) can be the same or different. Meanwhile, considering that the functions of the network layers vary across the model, a method for determining the fusion threshold based on the network layer function is developed next.
B. Function-Based Fusion Threshold Rule
Given the variability of functions among network layers in a deep learning framework and the different focus of each fusion strategy, a rule to determine the threshold of the network layers under each fusion strategy, based on the layer functions, is developed in this subsection. Following the multi-layer multi-fusion strategy structure of (7), it is assumed that the network layers 1 to l_1, l_1 + 1 to l_2, ···, l_n + 1 to L each share the same function, where 1 < l_1 < l_2 < ··· < l_n < L. Then, the function-based fusion threshold rule with multiple fusion strategies is proposed to be

fusionRule_1 = ··· = fusionRule_{l_1},  r_1 = [1, l_1]
fusionRule_{l_1+1} = ··· = fusionRule_{l_2},  r_2 = [l_1 + 1, l_2]
⋮
fusionRule_{l_n+1} = ··· = fusionRule_L,  r_{n+1} = [l_n + 1, L]    (8)

where r_i (0 < i < n + 1) is the threshold of each fusion strategy. With this mechanism, fusion strategies that match the characteristics of the network layers can be selected in a targeted way to improve the performance of the federated learning model.
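A minimal sketch of the threshold rule (8): given the boundary layers [l_1, ..., l_n], each layer index is mapped to the fusion rule of the range it falls in. The function name is hypothetical.

```python
def rule_index_for_layer(l, boundaries):
    """Return which fusion rule applies to layer l under Eq. (8).

    boundaries: sorted list [l_1, l_2, ..., l_n] with 1 < l_1 < ... < l_n < L.
    Layers 1..l_1 use rule 0, layers l_1+1..l_2 use rule 1, and so on.
    """
    for i, b in enumerate(boundaries):
        if l <= b:
            return i
    return len(boundaries)  # layers l_n+1..L fall in the last range
```

For instance, with boundaries=[56] for a 64-layer ResNet-18 (the pFedCFR setting of Section IV-C), layers 1-56 are routed to the personalized rule and layers 57-64 to the generic rule.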
C. Cross-Fusion Rule
Based on the multi-layer multi-fusion strategy framework, the combination of two fusion strategies, a forceful personalized one and a generic one, is presented as follows. Concretely, to solve the problems of data heterogeneity and model overfitting, a pFedCFR structure is proposed in this section, which improves the generic strategy in [14] and the personalized strategy in [22], respectively. Following the strategy in (4), fusion_p is designed as a personalized fusion rule for the raw-data feature extraction layers, while fusion_g is a fusion rule for the generic fully-connected decision layers; the result of fusion_p is set as the input of fusion_g in the forward propagation. In this case, the optimization problem in (2) is rewritten as

arg min_{ν_n} L(ν_n) := F_n(ν_n) + Σ_{l=1}^{L} P_l(ν_{n,l})    (9)
The specific details of the two fusion rules are given below. 1) Personalized Fusion Rule: As shown in Fig. 1, a thread is opened on the server for each client, which can read all shared [ν^t_{1,l}, ν^t_{2,l}, ···, ν^t_{N,l}]. Then, according to the message passing mechanism in [22], the layer-based personalized fusion rule fusion_p in (4) is given by

ν^t_{n,l} = ω_{n,1,l} ν^{t-1}_{1,l} + ··· + ω_{n,m,l} ν^{t-1}_{m,l} + ··· + ω_{n,N,l} ν^{t-1}_{N,l}
          = (1 − α_t Σ_{m≠n}^{N} A′(‖ν^{t-1}_{n,l} − ν^{t-1}_{m,l}‖²)) · ν^{t-1}_{n,l} + α_t Σ_{m≠n}^{N} A′(‖ν^{t-1}_{n,l} − ν^{t-1}_{m,l}‖²) · ν^{t-1}_{m,l}    (10)
where ω_{n,1,l}, ···, ω_{n,N,l} are the collaboration weights of the clients at the lth layer, A(x) = 1 − e^{−x/σ}, A′ is the derivative of A(x), and σ is a hyperparameter. It can be seen from (10) that the more similar the lth-layer parameter ν^{t-1}_{n,l} of c_n is to ν^{t-1}_{m,l} of c_m, the greater their mutual weighted influence. Thus, the collaboration of clients with similar raw feature spaces is enhanced without exposing private data. Conversely, the fusion weights ω_{n,m,l} are inversely related to large layer-parameter distances, i.e., a large ‖ν^{t-1}_{n,l} − ν^{t-1}_{m,l}‖² yields little collaborative information interaction between c_n and c_m. Accordingly, the personalized weight in Fig. 1 is

ζ_{n,m} = α_t A′(‖ν^{t-1}_{n,l} − ν^{t-1}_{m,l}‖²), where n ≠ m.
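The following sketch implements Eq. (10) for one layer. Since A(x) = 1 − e^{−x/σ}, its derivative is A′(x) = e^{−x/σ}/σ, which is what the code evaluates; the default α_t and σ are the values used in the experiments of Section IV, and the function name is illustrative.

```python
import numpy as np

def personalized_fusion(params_l, alpha_t=1e4, sigma=1e6):
    """Personalized fusion of one layer, Eq. (10).

    params_l: array of shape (N, d) whose row m holds nu^{t-1}_{m,l}.
    Returns an array of the same shape whose row n is nu^t_{n,l}.
    """
    # Pairwise squared L2 distances ||nu_{n,l} - nu_{m,l}||^2, shape (N, N).
    sq_dist = ((params_l[:, None, :] - params_l[None, :, :]) ** 2).sum(-1)
    # A(x) = 1 - exp(-x / sigma)  =>  A'(x) = exp(-x / sigma) / sigma.
    zeta = alpha_t * np.exp(-sq_dist / sigma) / sigma   # zeta_{n,m}
    np.fill_diagonal(zeta, 0.0)                         # only m != n contribute
    self_w = 1.0 - zeta.sum(axis=1)                     # weight on own parameters
    return self_w[:, None] * params_l + zeta @ params_l
```

Row n of the result is exactly (1 − Σ_{m≠n} ζ_{n,m}) ν^{t-1}_{n,l} + Σ_{m≠n} ζ_{n,m} ν^{t-1}_{m,l}, so similar clients exchange more information while distant ones barely interact.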
Following (10), the penalty term P_l(ν_{n,l}) in (9) is proposed to be

P_l(ν_{n,l}) = (λ / 2α_t) ‖ν_{n,l} − ν^t_{n,l}‖²    (11)

Through this term, ν_{n,l} is forced to approximate ν^t_{n,l}, thus achieving the personalization requirements of the optimization objective L(ν_n).
2) Generic Fusion Rule: What cannot be ignored is the overfitting introduced by the above fusion_p while it solves the personalization problem of heterogeneous data. To address this, a generic fusion rule fusion_g is developed, dedicated to the information collaboration of the generic fully-connected layers. As ν_{global,l} in Fig. 1 shows, the designed fusion_g differs structurally from fusion_p in that all clients share ν_{global,l} at layer l. According to the fusion idea in [14], the layer-based generic fusion strategy is proposed to be

ν_{global,l} = (1/N) Σ_{n=1}^{N} ν_{n,l}    (12)
where ν_{global,l} is obtained by averaging the sum of all client parameters at the lth layer. Under this operation the impact of each client is equivalent, so the fusion result exhibits good generalizability. Subsequently, it follows from (12) that the penalty term in this layer can be calculated by

P_l(ν_{n,l}) = (µ/2) ‖ν_{n,l} − ν^t_{global,l}‖²    (13)
Similarly, ν_{n,l} is forced to approximate ν^t_{global,l} in this layer. Then, following (4), the optimization objective decomposed over the layers is denoted as

F_n(ν_n) = [f_{n,1}(ν_{n,1}) → f_{n,2}(ν_{n,2}) → ··· → f_{n,L}(ν_{n,L})] = [f_{n,l}(ν_{n,l}) →]_{l=1}^{L}    (14)

where f_{n,l} denotes the computation of c_n at layer l.
It follows from (11), (13) and (14) that the loss function L(ν_n) in (9) can be expressed as

L(ν_n) = [f_{n,l}(ν_{n,l}) →]_{l=1}^{L} + Σ_{i=1}^{r} (λ / 2α_t) ‖ν_{n,i} − ν^t_{n,i}‖² + Σ_{i=r+1}^{L} (µ/2) ‖ν_{n,i} − ν^t_{global,i}‖²    (15)

where r is the number of layers handled by fusion_p. This yields the optimization objective of each client in pFedCFR.
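A hedged PyTorch sketch of the client objective (15): the usual forward task loss plus the two per-layer proximal penalties, assuming the server-provided targets ν^t_{n,i} and ν^t_{global,i} are available per layer. All names and the argument layout are hypothetical; the defaults follow the hyperparameters reported in Section IV.

```python
import torch

def pfedcfr_loss(task_loss, layers, pers_targets, glob_targets,
                 r, lam=1.0, alpha_t=1e4, mu=1e-3):
    """Client objective of Eq. (15).

    task_loss:    scalar tensor, [f_{n,l}(nu_{n,l}) ->]_{l=1}^{L}.
    layers:       list of L parameter tensors [nu_{n,1}, ..., nu_{n,L}].
    pers_targets: personalized targets nu^t_{n,i} for layers i <= r.
    glob_targets: shared targets nu^t_{global,i} for layers i > r.
    """
    loss = task_loss
    for i, p in enumerate(layers, start=1):
        if i <= r:
            # Personalized proximal term, Eq. (11).
            loss = loss + lam / (2 * alpha_t) * (p - pers_targets[i - 1]).pow(2).sum()
        else:
            # Generic proximal term, Eq. (13).
            loss = loss + mu / 2 * (p - glob_targets[i - r - 1]).pow(2).sum()
    return loss
```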
where r is the layer number of the adopted f usion p . Here, the optimization objective of each client in pFedCFR is obtained. Remark 4: To determine the network layers' fusion threshold r in Algorithm 1, the fusion rule of the feature extraction layer and the full-connect layer in the model is focused on in this paper. Noting that the message passing mechanism in (10) enhances the influence between similar feature layers, the designed personalized fusion rule is based on the original data feature extraction layer, which can effectively solve the model collaboration problem of similar datasets in data heterogeneity. Meanwhile, the generic fusion rule designed in (12) argues that the contributions of all models are equivalent. The idea is then applied to the fully connected layer, which means that the overfitting caused by the exclusion of non-similarity layers in personalized fusion rules can be mitigated.
IV. EXPERIMENTAL RESULTS
In this section, three illustrative instances are given to demonstrate the superiority of the developed pFedCFR over SOTA PFL methods.
Algorithm 1: pFedCFR
1 Notation: N clients, each holding a private dataset; the hyperparameters µ, α, λ, the model depth L, the personalized network-layer fusion threshold r, the number of communication rounds T and the learning rate η are preset.
2 client: Randomly initialize the model parameters ν_n.
3 for t = 1, 2, ···, T do
4   client:
5     Each client optimizes ν^t_n by minimizing the loss function L(ν^t_n) in (15), and then sends the obtained ν^t_n to the server.
6   server:
7     for l = 1, 2, ···, L do
8       If the current network layer index is no larger than r, then ν^{t+1}_{n,l} is obtained using the personalized fusion rule (10).
9       Otherwise, ν^{t+1}_{n,l} is calculated through the generic fusion rule (12).
10      Recombine ν^{t+1}_n ← [ν^{t+1}_{n,1}, ν^{t+1}_{n,2}, ···, ν^{t+1}_{n,L}].
11    end
12    Distribute each recombined ν^{t+1}_n to the corresponding c_n.
13 end
14 Output: [ν^T_1, ···, ν^T_N]
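The server-side fusion step of Algorithm 1 can be sketched as follows, reusing the personalized_fusion helper sketched in Section III-C; the data layout (a nested list of flattened per-layer vectors) and function names are assumptions made for illustration.

```python
import numpy as np

def server_round(client_params, r, alpha_t=1e4, sigma=1e6):
    """One server-side fusion round of Algorithm 1 (illustrative sketch).

    client_params: list over clients; client_params[n][l] is the flattened
                   parameter vector of layer l+1 uploaded by client c_n.
    Returns the recombined parameters nu^{t+1}_n for every client.
    """
    N, L = len(client_params), len(client_params[0])
    new_params = [[None] * L for _ in range(N)]
    for l in range(L):
        stacked = np.stack([client_params[n][l] for n in range(N)])  # (N, d_l)
        if l + 1 <= r:
            fused = personalized_fusion(stacked, alpha_t, sigma)      # Eq. (10)
        else:
            # Eq. (12): every client receives the same averaged layer.
            fused = np.repeat(np.mean(stacked, axis=0, keepdims=True), N, 0)
        for n in range(N):
            new_params[n][l] = fused[n]
    return new_params
```

The per-layer branch on r is exactly the cross-fusion idea: each layer carries either a client-specific fused target or a shared global one, and the recombined vectors are then distributed back to the clients.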
A. Experimental Setup
Note that the software/hardware configuration of the system in the experiments is as follows. The programs are executed using the PyTorch 1.9 framework, running on a server with Ubuntu 20.04.3 LTS, 512 GB of memory, an NVIDIA 3080 GPU and an Intel Core i7-10700 CPU @ 2.90 GHz.
1) Dataset Description: Three public benchmark datasets were used in the experiments, they are MNIST [29], FMNIST [30] and CIFAR-10 [31], and the specific statistical properties are shown in TABLE I. Moreover, each dataset was preprocessed with normalization before segmentation and training.
Owing to limited computational resources, each dataset is divided among 20 clients in this paper. Meanwhile, considering that samples are non-independently and identically distributed in practice, each client is allocated training samples from only a subset of the labels, and the sample sizes of the clients vary widely. Specifically, using the heterogeneous data construction rules in [19], we first assign the corresponding labels to each client, then divide the number of samples using a strategy that combines a lognormal distribution with a random factor, and finally obtain the partition of all client samples.
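A heuristic sketch of the sample-count split just described; the exact distribution parameters and the form of the random factor are not specified in the text, so the values below are assumptions for illustration only.

```python
import numpy as np

def split_sample_counts(n_total, n_clients, sigma=0.5, rng=None):
    """Non-IID sample-count split in the spirit of Sec. IV-A (assumed details).

    Draws a lognormal share for each client, perturbs it with a random
    factor, and normalizes so the counts sum to n_total.
    """
    rng = np.random.default_rng() if rng is None else rng
    shares = rng.lognormal(mean=0.0, sigma=sigma, size=n_clients)
    shares *= rng.uniform(0.5, 1.5, size=n_clients)    # random factor
    counts = np.floor(shares / shares.sum() * n_total).astype(int)
    counts[0] += n_total - counts.sum()                # absorb rounding remainder
    return counts
```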
2) Compared Methods and Hyperparameters:
To fully and comprehensively show the superiority of the proposed method, pFedCFR is experimentally compared with five mainstream methods in this paper. It should be pointed out that the hyperparameter settings of these methods are adopted from their original proposals. The specific details are as follows.
1) FedAvg is one of the most representative federated learning methods [8]; its fusion strategy is to average the model parameters uploaded by the clients. The learning rate is η = 0.005. 2) FedProx in [14] addresses the data heterogeneity and convergence problems of client model updates by adding a global approximation penalty term, where the penalty coefficient is µ = 0.001 and the learning rate is η = 0.005.
3) The goal of Ditto in [19] is to train the optimal private model for each client to meet the personalization needs of heterogeneous data, where the local step l_λ = 1 and the learning rate η = 0.005. 4) pFedMe introduces the idea of personalization [20], transforming the optimization problem into a bi-level problem that decouples the client personalization loss from the global loss. The penalty coefficient λ and the number of personalized training steps K are set to 15 and 5, respectively; in addition, the global tuning parameter β = 1 and the personalized learning rate η = 0.005. 5) FedAMP proposes a personalized fusion strategy that attracts similar models and repels dissimilar ones by introducing a message passing mechanism [22]. Since the method is sensitive to hyperparameters, the settings in the experiments strictly follow the authors' suggestions, i.e., penalty coefficient λ = 1, convergence coefficient α_k = 10^4, weighting hyperparameter σ = 10^6, and learning rate η = 0.005. The significant parameters of the proposed pFedCFR are configured as follows. According to experimental validation and the convergence analysis in [22], the convergence coefficient α_t and the personalized penalty coefficient λ in (11) are set to 10^4 and 1, respectively; the hyperparameter σ in A(x) is set to 10^6, and the generic penalty coefficient µ in (13) is set to 0.001. Moreover, a deep neural network (DNN) with 2 fully-connected layers is selected as the training network for the MNIST and Fashion-MNIST datasets. On the other hand, a convolutional neural network (CNN), consisting of 2 convolutional layers and 2 fully-connected layers, and the residual neural network (ResNet-18) of [32] are both selected as client model frameworks for the CIFAR-10 dataset. The number of clients is N = 20, the local update step is 10, the number of global communication rounds is T = 100, and the learning rate is η = 0.005. Finally, the core parameter, the personalized fusion threshold r in Algorithm 1, is set to 1 and 2 in the DNN and CNN, respectively.
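For reference, the pFedCFR settings stated above can be collected in a single configuration object; the values are taken from the text (including r = 56 for ResNet-18 from Section IV-C), while the dictionary layout and key names are purely illustrative.

```python
# Hyperparameters of pFedCFR as reported in Sec. IV; layout is illustrative.
PFEDCFR_CONFIG = {
    "num_clients": 20,          # N
    "local_steps": 10,
    "rounds": 100,              # T, global communication rounds
    "lr": 0.005,                # eta
    "alpha_t": 1e4,             # convergence coefficient in Eq. (10)
    "lambda_pers": 1.0,         # personalized penalty coefficient, Eq. (11)
    "sigma": 1e6,               # hyperparameter of A(x)
    "mu": 1e-3,                 # generic penalty coefficient, Eq. (13)
    "fusion_threshold_r": {"DNN": 1, "CNN": 2, "ResNet-18": 56},
}
```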
B. Results on Heterogeneous Data
The proposed pFedCFR is experimentally validated on several heterogeneous datasets; the performance comparison with various typical approaches is reported in TABLE II, and the results confirm that the proposed method is effective. As can be seen from the table, pFedCFR outperforms the other fusion models in prediction accuracy, especially in the CIFAR-10 (CNN) setting, where its accuracy of 0.7809 improves by about 0.015 over the second-ranked FedAMP. At the same time, FedAvg and FedProx, which follow global optimization ideas, are far inferior in accuracy to pFedMe and FedAMP with personalized fusion strategies, which indicates that the PFL fusion strategies studied in this paper are urgently needed for non-independently and identically distributed heterogeneous data. Furthermore, among the personalized fusion strategies, FedAMP, based on the message passing mechanism, provides better personalization, as reflected in its classification accuracy on all datasets. It is worth noting that the personalized rule of FedAMP and the generic fusion rule of FedProx are both improved to the layer level in pFedCFR, and the above comparison results provide solid evidence for the value of these improvements.
To illustrate that pFedCFR alleviates the overfitting defect of strong personalization rules, three experiments comparing loss and accuracy under the same heterogeneous configuration are conducted. As shown in Fig. 2, the global federated learning methods underperform as the number of communication rounds increases, while the PFL methods enter an overfitting state; the red rectangular boxes give a detailed comparison of the methods. In particular, FedAMP consistently shows a decreasing loss without an increase in test accuracy after about 40 rounds. This implies that while the message passing strategy enhances feature collaboration between similar clients, it reduces model generality, which leads to overfitting. Combining the test accuracy and loss comparisons of the three benchmark experiments shows that the performance of Ditto needs improvement, although no overfitting effect is observed. Interestingly, the experiments show that the proposed pFedCFR yields a significant performance improvement and alleviates the overfitting phenomenon mentioned above. Moreover, the test accuracy and stability of pFedCFR outperform those of pFedMe in all experiments.
C. Cross-Fusion Operations in Different Layers
According to the core idea of cross-fusion in (15), the proposed pFedCFR with layers as the basic fusion unit contains both personalized and generalized fusion strategies. To show the effects of the fusion strategies employed at different network layers on the performance of the algorithms. In this section, three experiments with Resnet-18 are compared to verify the effectiveness of the feature extraction layer using a personalized fusion strategy and the decision layer using a generic fusion strategy.
TABLE III reports the test accuracy of pFedCFR as the personalized fusion threshold r in the range {20, 30, 40, 50, 56, 62}, where the full model depth L in Resnet-18 is 64, and l ≤ r personalized fusion strategy is adopted, while l > r the generic fusion strategy is utilized. Since the first 56 layers of the model in Resnet-18 are convolutional feature extraction layers, while the remaining are full-connected decision calculation layers. The result in Ditto [19] 0.9530 ± 0.0060 0.9498 ± 0.0082 0.7308 ± 0.0100 pFedMe [20] 0.9363 ± 0.0160 0.9333 ± 0.0020 0.7693 ± 0.0010 FedAMP [22] 0.9541 ± 0.0004 0.9535 ± 0.0006 0.7656 ± 0.0004 TABLE III illustrates that the best performance of pFedCFR is when r = 56, which is consistent with the expectation of personalized fusion threshold selection in this paper. It also indicates that as r decreases, more personalized fusion network layers are replaced by generic fusion, which consequently causes a decrease in test accuracy.
The experimental results show that accurate selection of the personalized fusion threshold is important for pFedCFR. To depict the effect of different thresholds on the training process in detail, information from 50 communication rounds was recorded, as shown in Fig. 3. Clearly, during training on MNIST and Fashion-MNIST, the model performance with r = 56 is generally ahead of the other thresholds. Although there is a crossover between r = 56 and r = 50 in the CIFAR-10 experiment, the former can be observed to be superior overall. Combined with the analysis in Section IV-B, overfitting is the main reason for the performance degradation at r = 62. Once the base model is determined, these results help us quickly determine the personalized fusion threshold of pFedCFR.
V. CONCLUSIONS
In this paper, a new PFL method called pFedCFR has been developed for the data heterogeneity problem among multiple clients. The designed multi-layer multi-fusion strategy framework based on layer functions effectively alleviates the low performance caused by the single fusion policy of existing federated learning. Then, a fusion strategy combining personalization and generalization is designed to mitigate the overfitting caused by forceful personalization mechanisms. Extensive experiments demonstrate the effectiveness of the proposed method.
On the other hand, two critical issues remain in the PFL algorithm: i) although the overfitting phenomenon is alleviated, it still exists under forceful personalized fusion rules; ii) only two layer functions, the feature extraction layer and the decision layer, are considered. Therefore, we will work on a more detailed and generic fusion strategy based on pFedCFR.
W. Z. Yang, B. Chen, Y. J. Shen, J. Liu and L. Yu are with the Department of Automation, Zhejiang University of Technology, Hangzhou 310023, China, and also with the Institute of Cyberspace Security, Zhejiang University of Technology, Hangzhou 310023, China (Correspondence email: [email protected]).
Fig. 1. Mechanistic framework of pFedCFR. Each round of iteration proceeds as follows. 1) Upload: each client uploads its trained model to the server, and the collaboration of client model parameters is implemented on the server with layers as the basic unit. 2) Fusion: there are two types of fusion: fusion_p, represented by the blue dashed arrows, in which each client obtains its personalized network parameters through weighting factors, and fusion_g, represented by the red and green solid arrows, in which all clients share one layer of network parameters. 3) Allocation: the network parameters of the personalized layers ν_{n,1} and the generic layer ν_{global,l} are restructured, and the restructured parameters ν′_n are distributed to the corresponding client c_n.
Fig. 2. Comparison of test accuracy and loss between different federated learning methods within 100 communication rounds.
Fig. 3. Accuracy comparison of the proposed pFedCFR method with different personalized fusion thresholds on three benchmark datasets, where the depth of the ResNet-18 used in the experiments is 64.
TABLE I: DATASET CHARACTERISTIC

Dataset     | MNIST     | FASHION-MNIST | CIFAR-10
Items       | 70000     | 70000         | 60000
Class       | 10        | 10            | 10
Dimension   | (28,28)   | (28,28)       | (3,32,32)
Train/Test  | (6:1)     | (6:1)         | (5:1)
Intro       | numerical | clothes       | animals and vehicles
TABLE II: THE RESULTS OF PREDICTION ACCURACY COMPARISON OF PFEDCFR WITH SEVERAL SOTA METHODS UNDER DIFFERENT DATA SETS AND MULTIPLE NETWORK MODELS. THE BOLDED FONT INDICATES THE BEST PREDICTION PERFORMANCE OF THE METHOD.

Method        | MNIST (DNN)     | Fashion-MNIST (DNN) | CIFAR-10 (CNN)
Global:
FedAvg [8]    | 0.8502 ± 0.0008 | 0.7034 ± 0.0014     | 0.6081 ± 0.0027
FedProx [14]  | 0.8180 ± 0.0002 | 0.7059 ± 0.0024     | 0.6073 ± 0.0019
Personalized:
Ditto [19]    | 0.9530 ± 0.0060 | 0.9498 ± 0.0082     | 0.7308 ± 0.0100
pFedMe [20]   | 0.9363 ± 0.0160 | 0.9333 ± 0.0020     | 0.7693 ± 0.0010
FedAMP [22]   | 0.9541 ± 0.0004 | 0.9535 ± 0.0006     | 0.7656 ± 0.0004
TABLE III: THE TEST ACCURACY COMPARISON OF PFEDCFR IN DEEP LEARNING MODEL RESNET-18 WITH DIFFERENT PERSONALIZED FUSION THRESHOLD.

r  | MNIST           | Fashion-MNIST   | CIFAR-10
62 | 0.9555 ± 0.0017 | 0.9515 ± 0.0017 | 0.7211 ± 0.0015
56 | 0.9614 ± 0.0004 | 0.9581 ± 0.0033 | 0.7273 ± 0.0021
50 | 0.9522 ± 0.0032 | 0.9524 ± 0.0023 | 0.7268 ± 0.0027
40 | 0.9344 ± 0.0019 | 0.9353 ± 0.0035 | 0.7068 ± 0.0065
30 | 0.9266 ± 0.0014 | 0.9273 ± 0.0040 | 0.6285 ± 0.0018
20 | 0.9212 ± 0.0039 | 0.9179 ± 0.0023 | 0.5836 ± 0.0045
Federated machine learning: Concept and applications. Q Yang, Y Liu, T Chen, Y Tong, ACM Trans. Intell. Syst. Technol. 10Q. Yang, Y. Liu, T. Chen, and Y. Tong, "Federated machine learning: Concept and applications," ACM Trans. Intell. Syst. Technol., vol. 10, jan 2019.
A survey on federated learning. C Zhang, Y Xie, H Bai, B Yu, W Li, Y Gao, Knowledge-Based Systems. 216106775C. Zhang, Y. Xie, H. Bai, B. Yu, W. Li, and Y. Gao, "A survey on federated learning," Knowledge-Based Systems, vol. 216, p. 106775, 2021.
Efficient and privacy-enhanced federated learning for industrial artificial intelligence. M Hao, H Li, X Luo, G Xu, H Yang, S Liu, IEEE transactions on industrial informatics. 1610M. Hao, H. Li, X. Luo, G. Xu, H. Yang, and S. Liu, "Efficient and privacy-enhanced federated learning for industrial artificial intelligence," IEEE transactions on industrial informatics, vol. 16, no. 10, pp. 6532- 6542, 2020.
Fedstack: Personalized activity monitoring using stacked federated learning. T Shaik, X Tao, N Higgins, R Gururajan, Y Li, X Zhou, U R Acharya, Knowledge-Based Systems. 257109929T. Shaik, X. Tao, N. Higgins, R. Gururajan, Y. Li, X. Zhou, and U. R. Acharya, "Fedstack: Personalized activity monitoring using stacked federated learning," Knowledge-Based Systems, vol. 257, p. 109929, 2022.
Incentive mechanisms for federated learning: From economic and game theoretic perspective. X Tu, K Zhu, N C Luong, D Niyato, Y Zhang, J Li, IEEE transactions on cognitive communications and networking. 83X. Tu, K. Zhu, N. C. Luong, D. Niyato, Y. Zhang, and J. Li, "Incentive mechanisms for federated learning: From economic and game theoretic perspective," IEEE transactions on cognitive communications and net- working, vol. 8, no. 3, pp. 1-1, 2022.
Privacy-preserving federated learning for residential short-term load forecasting. J D Fernandez, S P Menci, C M Lee, A Rieger, G Fridgen, Applied energy. 326119915J. D. Fernandez, S. P. Menci, C. M. Lee, A. Rieger, and G. Fridgen, "Privacy-preserving federated learning for residential short-term load forecasting," Applied energy, vol. 326, p. 119915, 2022.
Federated learning for machinery fault diagnosis with dynamic validation and self-supervision. W Zhang, X Li, H Ma, Z Luo, X Li, Knowledge-Based Systems. 213106679W. Zhang, X. Li, H. Ma, Z. Luo, and X. Li, "Federated learning for machinery fault diagnosis with dynamic validation and self-supervision," Knowledge-Based Systems, vol. 213, p. 106679, 2021.
Communication-efficient learning of deep networks from decentralized data. H B Mcmahan, E Moore, D Ramage, S Hampson, B A Arcas, 2017 20th International Conference on Artificial Intelligence and Statistics. H. B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, "Communication-efficient learning of deep networks from decentralized data," in 2017 20th International Conference on Artificial Intelligence and Statistics, pp. 1273-1282, 2017.
Personalized federated learning via variational bayesian inference. X Zhang, Y Li, W Li, K Guo, Y Shao, 39th International Conference on Machine Learning (ICML). 2022X. Zhang, Y. Li, W. Li, K. Guo, and Y. Shao, "Personalized federated learning via variational bayesian inference," in 39th International Con- ference on Machine Learning (ICML), 2022.
Robust and communication-efficient federated learning from non-i.i.d. data. F Sattler, S Wiedemann, K.-R Müller, W Samek, IEEE Transactions on Neural Networks and Learning Systems. 319F. Sattler, S. Wiedemann, K.-R. Müller, and W. Samek, "Robust and communication-efficient federated learning from non-i.i.d. data," IEEE Transactions on Neural Networks and Learning Systems, vol. 31, no. 9, pp. 3400-3413, 2020.
Federated learning with taskonomy for non-iid data. H Jamali-Rad, M Abdizadeh, A Singh, IEEE Transactions on Neural Networks and Learning Systems. H. Jamali-Rad, M. Abdizadeh, and A. Singh, "Federated learning with taskonomy for non-iid data," IEEE Transactions on Neural Networks and Learning Systems, pp. 1-12, 2022.
Graph-regularized federated learning with shareable side information. Y Zhang, S Wei, S Liu, Y Wang, Y Xu, Y Li, X Shang, Knowledge-Based Systems. 257109960Y. Zhang, S. Wei, S. Liu, Y. Wang, Y. Xu, Y. Li, and X. Shang, "Graph-regularized federated learning with shareable side information," Knowledge-Based Systems, vol. 257, p. 109960, 2022.
Tackling overfitting in boosting for noisy healthcare data. Y Park, J C Ho, IEEE transactions on knowledge and data engineering. 337Y. Park and J. C. Ho, "Tackling overfitting in boosting for noisy healthcare data," IEEE transactions on knowledge and data engineering, vol. 33, no. 7, pp. 2995-3006, 2021.
Federated optimization in heterogeneous networks. T Li, A K Sahu, M Zaheer, M Sanjabi, A Talwalkar, V Smith, Proceedings of Machine Learning and Systems. Machine Learning and Systems2T. Li, A. K. Sahu, M. Zaheer, M. Sanjabi, A. Talwalkar, and V. Smith, "Federated optimization in heterogeneous networks," in Proceedings of Machine Learning and Systems, vol. 2, pp. 429-450, 2020.
Towards personalized federated learning. A Z Tan, H Yu, L Cui, Q Yang, IEEE transaction on neural networks and learning systems. A. Z. Tan, H. Yu, L. Cui, and Q. Yang, "Towards personalized federated learning," IEEE transaction on neural networks and learning systems, vol. PP, pp. 1-17, 2022.
Lower bounds and optimal algorithms for personalized federated learning. F Hanzely, S Hanzely, S Horváth, P Richtarik, Advances in Neural Information Processing Systems. 33F. Hanzely, S. Hanzely, S. Horváth, and P. Richtarik, "Lower bounds and optimal algorithms for personalized federated learning," in Advances in Neural Information Processing Systems, vol. 33, pp. 2304-2315, 2020.
Personalized federated learning with theoretical guarantees: A model-agnostic meta-learning approach. A Fallah, A Mokhtari, A Ozdaglar, Advances in Neural Information Processing Systems. 33A. Fallah, A. Mokhtari, and A. Ozdaglar, "Personalized federated learn- ing with theoretical guarantees: A model-agnostic meta-learning ap- proach," in Advances in Neural Information Processing Systems, vol. 33, pp. 3557-3568, 2020.
Multi-task federated learning for personalised deep neural networks in edge computing. J Mills, J Hu, G Min, IEEE transactions on parallel and distributed systems. 33J. Mills, J. Hu, and G. Min, "Multi-task federated learning for person- alised deep neural networks in edge computing," IEEE transactions on parallel and distributed systems, vol. 33, no. 3, pp. 630-641, 2022.
Ditto: Fair and robust federated learning through personalization. T Li, S Hu, A Beirami, V Smith, International Conference on Machine Learning (ICML). 2021T. Li, S. Hu, A. Beirami, and V. Smith, "Ditto: Fair and robust federated learning through personalization," in International Conference on Machine Learning (ICML), 2021.
Personalized federated learning with moreau envelopes. C T Dinh, N Tran, J Nguyen, Advances in Neural Information Processing Systems. 33C. T. Dinh, N. Tran, and J. Nguyen, "Personalized federated learning with moreau envelopes," in Advances in Neural Information Processing Systems, vol. 33, pp. 21394-21405, 2020.
Personalized federated few-shot learning. Y Zhao, G Yu, J Wang, C Domeniconi, M Guo, X Zhang, L Cui, IEEE transaction on neural networks and learning systems. Y. Zhao, G. Yu, J. Wang, C. Domeniconi, M. Guo, X. Zhang, and L. Cui, "Personalized federated few-shot learning," IEEE transaction on neural networks and learning systems, vol. PP, pp. 1-11, 2022.
Personalized cross-silo federated learning on non-iid data. Y Huang, L Chu, Z Zhou, L Wang, J Liu, J Pei, Y Zhang, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence35Y. Huang, L. Chu, Z. Zhou, L. Wang, J. Liu, J. Pei, and Y. Zhang, "Personalized cross-silo federated learning on non-iid data," Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 7865-7873, May 2021.
Personalized federated learning with first order model optimization. M Zhang, K Sapra, S Fidler, S Yeung, J M Alvarez, International Conference on Learning Representations. M. Zhang, K. Sapra, S. Fidler, S. Yeung, and J. M. Alvarez, "Personalized federated learning with first order model optimization," in International Conference on Learning Representations, 2021.
Personalized federated learning with gaussian processes. I Achituve, A Shamsian, A Navon, G Chechik, E Fetaya, Advances in Neural Information Processing Systems. 34I. Achituve, A. Shamsian, A. Navon, G. Chechik, and E. Fetaya, "Personalized federated learning with gaussian processes," in Advances in Neural Information Processing Systems, vol. 34, pp. 8392-8406, 2021.
Federated block coordinate descent scheme for learning global and personalized models. R Wu, A Scaglione, H.-T Wai, N Karakoc, K Hreinsson, W.-K Ma, Proceedings of the AAAI Conference on Artificial Intelligence. 35. R. Wu, A. Scaglione, H.-T. Wai, N. Karakoc, K. Hreinsson, and W.-K. Ma, "Federated block coordinate descent scheme for learning global and personalized models," Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35(12), pp. 10355-10362, 2021.
Memf: Multi-levelattention embedding and multi-layer-feature fusion model for person reidentification. J Sun, Y Li, H Chen, B Zhang, J Zhu, Pattern recognition. 116107937J. Sun, Y. Li, H. Chen, B. Zhang, and J. Zhu, "Memf: Multi-level- attention embedding and multi-layer-feature fusion model for person re- identification," Pattern recognition, vol. 116, p. 107937, 2021.
Fedmcsa: Personalized federated learning via model components self-attention. Q Guo, S Qi, S Qi, D Wu, Q Li, in arXivQ. Guo, S. Qi, S. Qi, D. Wu, and Q. Li, "Fedmcsa: Personalized federated learning via model components self-attention," in arXiv, 2022.
Communication-efficient federated deep learning with layerwise asynchronous model update and temporally weighted aggregation. Y Chen, X Sun, Y Jin, IEEE Transactions on Neural Networks and Learning Systems. 3110Y. Chen, X. Sun, and Y. Jin, "Communication-efficient federated deep learning with layerwise asynchronous model update and temporally weighted aggregation," IEEE Transactions on Neural Networks and Learning Systems, vol. 31, no. 10, pp. 4229-4238, 2020.
Gradient-based learning applied to document recognition. Y Lecun, L Bottou, Y Bengio, P Haffner, Proceedings of the IEEE. 8611Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, 1998.
Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. H Xiao, K Rasul, R Vollgraf, H. Xiao, K. Rasul, and R. Vollgraf, "Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms," 2017.
Learning multiple layers of features from tiny images. A Krizhevsky, A. Krizhevsky, "Learning multiple layers of features from tiny images," 2009.
Deep residual learning for image recognition. K He, X Zhang, S Ren, J Sun, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770-778, 2016.
| [] |
[
"ΛCDM predictions for galaxy protoclusters I: The relation between galaxies, protoclusters and quasars at z ∼ 6",
"ΛCDM predictions for galaxy protoclusters I: The relation between galaxies, protoclusters and quasars at z ∼ 6"
] | [
"Roderik A Overzier 1⋆ \nMax-Planck-Institüt für Astrophysik\nKarl-Schwarzschild-Str. 1D-85748GarchingGermany\n",
"Qi Guo \nMax-Planck-Institüt für Astrophysik\nKarl-Schwarzschild-Str. 1D-85748GarchingGermany\n",
"Guinevere Kauffmann \nMax-Planck-Institüt für Astrophysik\nKarl-Schwarzschild-Str. 1D-85748GarchingGermany\n",
"Gabriella De Lucia \nMax-Planck-Institüt für Astrophysik\nKarl-Schwarzschild-Str. 1D-85748GarchingGermany\n",
"Rychard Bouwens \nAstronomy Department\nUniversity of California\n95064Santa CruzCAUSA\n",
"Gerard Lemson \nAstronomisches Rechen-Institut\nZentrum für Astronomie\nUniversität Heidelberg\nMoenchhofstr. 12-1469120HeidelbergGermany\n\nMax-Planck Institut für extraterrestrische Physik\nGiessenbach Str\n85748GarchingGermany\n"
] | [
"Max-Planck-Institüt für Astrophysik\nKarl-Schwarzschild-Str. 1D-85748GarchingGermany",
"Max-Planck-Institüt für Astrophysik\nKarl-Schwarzschild-Str. 1D-85748GarchingGermany",
"Max-Planck-Institüt für Astrophysik\nKarl-Schwarzschild-Str. 1D-85748GarchingGermany",
"Max-Planck-Institüt für Astrophysik\nKarl-Schwarzschild-Str. 1D-85748GarchingGermany",
"Astronomy Department\nUniversity of California\n95064Santa CruzCAUSA",
"Astronomisches Rechen-Institut\nZentrum für Astronomie\nUniversität Heidelberg\nMoenchhofstr. 12-1469120HeidelbergGermany",
"Max-Planck Institut für extraterrestrische Physik\nGiessenbach Str\n85748GarchingGermany"
] | [
"Mon. Not. R. Astron. Soc"
Motivated by recent observational studies of the environment of z ∼ 6 QSOs, we have used the Millennium Run (MR) simulations to construct a very large (∼ 4° × 4°) mock redshift survey of star-forming galaxies at z ∼ 6. We use this simulated survey to study the relation between density enhancements in the distribution of i775-dropouts and Lyα emitters, and their relation to the most massive halos and protocluster regions at z ∼ 6. Our simulation predicts significant variations in surface density across the sky with some voids and filaments extending over scales of 1°, much larger than probed by current surveys. Approximately one third of all z ∼ 6 halos hosting i-dropouts brighter than z=26.5 mag (≈ M*_{UV,z=6}) become part of z = 0 galaxy clusters. i-dropouts associated with protocluster regions are found in regions where the surface density is enhanced on scales ranging from a few to several tens of arcminutes on the sky. We analyze two structures of i-dropouts and Lyα emitters observed with the Subaru Telescope and show that these structures must be the seeds of massive clusters in formation. In striking contrast, six z ∼ 6 QSO fields observed with HST show no significant enhancements in their i775-dropout number counts. With the present data, we cannot rule out the QSOs being hosted by the most massive halos. However, neither can we confirm this widely used assumption. We conclude by giving detailed recommendations for the interpretation and planning of observations by current and future ground- and space-based instruments that will shed new light on questions related to the large-scale structure at z ∼ 6.
"https://arxiv.org/pdf/0810.2566v2.pdf"
] | 17,393,316 | 0810.2566 | d30cc60efa33d76cbd72cf6cf54024911bd25c9b |
ΛCDM predictions for galaxy protoclusters I: The relation between galaxies, protoclusters and quasars at z ∼ 6
2008
Roderik A Overzier 1⋆
Max-Planck-Institüt für Astrophysik
Karl-Schwarzschild-Str. 1D-85748GarchingGermany
Qi Guo
Max-Planck-Institüt für Astrophysik
Karl-Schwarzschild-Str. 1D-85748GarchingGermany
Guinevere Kauffmann
Max-Planck-Institüt für Astrophysik
Karl-Schwarzschild-Str. 1D-85748GarchingGermany
Gabriella De Lucia
Max-Planck-Institüt für Astrophysik
Karl-Schwarzschild-Str. 1D-85748GarchingGermany
Rychard Bouwens
Astronomy Department
University of California
95064Santa CruzCAUSA
Gerard Lemson
Astronomisches Rechen-Institut
Zentrum für Astronomie
Universität Heidelberg
Moenchhofstr. 12-1469120HeidelbergGermany
Max-Planck Institut für extraterrestrische Physik
Giessenbach Str
85748GarchingGermany
ΛCDM predictions for galaxy protoclusters I: The relation between galaxies, protoclusters and quasars at z ∼ 6
Mon. Not. R. Astron. Soc
000, 2008. Printed 5 December 2008 (MN LaTeX style file v2.2). Keywords: cosmology: observations - early universe - large-scale structure of universe - theory - galaxies: high-redshift - galaxies: clusters: general - galaxies: starburst
Motivated by recent observational studies of the environment of z ∼ 6 QSOs, we have used the Millennium Run (MR) simulations to construct a very large (∼ 4° × 4°) mock redshift survey of star-forming galaxies at z ∼ 6. We use this simulated survey to study the relation between density enhancements in the distribution of i775-dropouts and Lyα emitters, and their relation to the most massive halos and protocluster regions at z ∼ 6. Our simulation predicts significant variations in surface density across the sky with some voids and filaments extending over scales of 1°, much larger than probed by current surveys. Approximately one third of all z ∼ 6 halos hosting i-dropouts brighter than z=26.5 mag (≈ M*_{UV,z=6}) become part of z = 0 galaxy clusters. i-dropouts associated with protocluster regions are found in regions where the surface density is enhanced on scales ranging from a few to several tens of arcminutes on the sky. We analyze two structures of i-dropouts and Lyα emitters observed with the Subaru Telescope and show that these structures must be the seeds of massive clusters in formation. In striking contrast, six z ∼ 6 QSO fields observed with HST show no significant enhancements in their i775-dropout number counts. With the present data, we cannot rule out the QSOs being hosted by the most massive halos. However, neither can we confirm this widely used assumption. We conclude by giving detailed recommendations for the interpretation and planning of observations by current and future ground- and space-based instruments that will shed new light on questions related to the large-scale structure at z ∼ 6.
INTRODUCTION
During the first decade of the third Millennium we have begun to put observational constraints on the status quo of galaxy formation at roughly one billion years after the Big Bang (e.g. Stanway et al. 2003; Yan & Windhorst 2004a; Bouwens et al. 2003, 2004a, 2006; Dickinson et al. 2004; Malhotra et al. 2005; Shimasaku et al. 2005; Ouchi et al. 2005; Overzier et al. 2006). Statistical samples of star-forming galaxies at z = 6 - either selected on the basis of their large (i-z) color due to the Lyman break redshifted to z ∼ 6 (i-dropouts), or on the basis of the large equivalent width of Lyα emission (Lyα emitters) - suggest that they are analogous to the population of Lyman break galaxies (LBGs) found at z ∼ 3 − 5 (e.g. Bouwens et al. 2007). A small subset of the i775-dropouts has been found to be surprisingly massive or old (Dow-Hygelund et al. 2005; Yan et al. 2006; Eyles et al. 2007). The slope of the UV luminosity function at z = 6 is very steep and implies that low luminosity objects contributed significantly to reionizing the Universe (Yan & Windhorst 2004b; Bouwens et al. 2007; Khochfar et al. 2007; Overzier et al. 2008a). Cosmological hydrodynamic simulations are being used to reproduce the abundances as well as the spectral energy distributions of z = 6 galaxies. Exactly how these objects are connected to local galaxies remains a highly active area of research (e.g. Davé et al. 2006; Gayler Harford & Gnedin 2006; Nagamine et al. 2006, 2008; Night et al. 2006; Finlator et al. 2007; Robertson et al. 2007).
⋆ E-mail: [email protected] (RAO)
The discovery of highly luminous quasi-stellar objects (QSOs) at z ∼ 6 (e.g. Fan et al. 2001, 2003, 2004, 2006a; Goto 2006; Venemans et al. 2007) is of equal importance in our understanding of the formation of the first massive black holes and galaxies. Gunn & Peterson (1965) absorption troughs in their spectra demarcate the end of the epoch of reionization (e.g. Fan et al. 2001; White et al. 2003; Walter et al. 2004; Fan et al. 2006b). Assuming that high redshift QSOs are radiating near the Eddington limit, they contain supermassive black holes (SMBHs) of mass ∼ 10^9 M⊙ (e.g. Willott et al. 2003; Barth et al. 2003; Vestergaard 2004; Jiang et al. 2007; Kurk et al. 2007). The spectral properties of most z ∼ 6 QSOs in the rest-frame UV, optical, IR and X-ray are similar to those at low redshift, suggesting that massive, and highly chemically enriched galaxies were vigorously forming stars and SMBHs less than one billion years after the Big Bang (e.g. Bertoldi et al. 2003; Maiolino et al. 2005; Jiang et al. 2006; Wang et al. 2007).
Hierarchical formation models and simulations can reproduce the existence of such massive objects at early times (e.g. Haiman & Loeb 2001; Springel et al. 2005a; Begelman et al. 2006; Li et al. 2007; Narayanan et al. 2007), provided however that they are situated in extremely massive halos. Large-scale gravitational clustering is a powerful method for estimating halo masses of quasars at low redshifts, but cannot be applied to z ∼ 6 QSOs because there are too few systems known. Their extremely low space density determined from the Sloan Digital Sky Survey (SDSS) of ∼1 Gpc^{-3} (comoving) implies a (maximum) halo mass of M_halo ∼ 10^13 M⊙ (Fan et al. 2001; Li et al. 2007). A similar halo mass is obtained when extrapolating from the (z = 0) relationship between black hole mass and bulge mass of Magorrian et al. (1998), and using ΩM/Ωbar ∼ 10 (Fan et al. 2001). Because the descendants of the most massive halos at z ∼ 6 may evolve into halos of > 10^15 M⊙ at z = 0 in a ΛCDM model (e.g. Springel et al. 2005a; Suwa et al. 2006; Li et al. 2007, but see De Lucia & Blaizot (2007), Trenti et al. (2008) and Sect. 5 of this paper), it is believed that the QSOs trace highly biased regions that may give birth to the most massive present-day galaxy clusters. If this is true, the small-scale environment of z ∼ 6 QSOs may be expected to show a significant enhancement in the number of small, faint galaxies. These galaxies may either merge with the QSO host galaxy, or may form the first stars and black holes of other (proto-)cluster galaxies.
Observations carried out with the Advanced Camera for Surveys (ACS) on the Hubble Space Telescope (HST) allowed a rough measurement of the two-dimensional overdensities of faint i775-dropouts detected towards the QSOs J0836+0054 at z = 5.8 (Zheng et al. 2006) and J1030+0524 at z = 6.28 (Stiavelli et al. 2005). Recently Kim et al. (2008) presented results from a sample of 5 QSO fields, finding some to be overdense and some to be underdense with respect to the HST/ACS Great Observatories Origins Deep Survey (GOODS). Priddey et al. (2007) find enhancements in the number counts of sub-mm galaxies. Substantial overdensities of i-dropouts and Lyα emitters have also been found in non-QSO fields (e.g. Shimasaku et al. 2003; Ouchi et al. 2005; Ota et al. 2008), suggesting that massive structures do not always harbour a QSO, which may be explained by invoking a QSO duty-cycle. At z ∼ 2 − 5, significant excesses of star-forming galaxies have been found near QSOs (e.g. Djorgovski et al. 2003; Kashikawa et al. 2007), radio galaxies (e.g. Miley et al. 2004; Venemans et al. 2007; Overzier et al. 2008b), and in random fields (Steidel et al. 1998, 2005). Although the physical interpretation of the measurements is uncertain, these structures are believed to be associated with the formation of clusters of galaxies.
The idea of verifying the presence of massive structures at high redshift through the clustering of small galaxies around them has recently been explored by, e.g., Muñoz & Loeb (2008a) using the excursion set formalism of halo growth (Zentner 2007). However, the direct comparison between models or simulations and observations remains difficult, mainly because of complicated observational selection effects. This is especially true at high redshift. In order to investigate how the wide variety of galaxy overdensities found in surveys at z ≃ 2 − 6 are related to cluster formation, we have carried out an analysis of the progenitors of galaxy clusters in a set of cosmological N-body simulations. Our results will be presented in a series of papers. In Paper I, we use the Millennium Run simulations (Springel et al. 2005a) to simulate a large mock survey of galaxies at z ∼ 6 to derive predictions for the properties of the progenitors of massive galaxy clusters, paying particular attention to the details of observational selection effects. We will try to answer the following questions:
(i) Where do we find the present-day descendants of the i-dropouts?
(ii) What are the typical structures traced by i-dropouts and Lyα emitters in current surveys, and how do they relate to protoclusters?
(iii) How do we unify the (lack of excess) number counts observed in QSO fields with the notion that QSOs are hosted by the most massive halos at z ∼ 6?
The structure of the present paper is as follows. We describe the simulations, and construction of our mock i-dropout survey in Section 2. Using these simulations, we proceed to address the main questions outlined above in Sections 3-5. We conclude the paper with a discussion (Section 6), an overview of recommendations for future observations (Section 7), and a short summary (Section 8) of the main results.
SIMULATIONS
Simulation description
We use the semi-analytic galaxy catalogues that are based on the Millennium Run (MR) dark matter simulation of Springel et al. (2005a). The simulations and the semi-analytic modeling have been described extensively elsewhere, and we refer the reader to those works for more information (e.g. Kauffmann et al. 1999; Springel et al. 2005a; Croton et al. 2006; Lemson & Springel 2006; De Lucia et al. 2004; De Lucia & Blaizot 2007, and references therein).
The dark matter simulation was performed with the cosmological simulation code GADGET-2 (Springel 2005b), and consisted of 2160^3 particles of mass 8.6 × 10^8 h^−1 M⊙ in a periodic box of 500 h^−1 Mpc on a side. The simulations followed the gravitational growth as traced by these particles from z = 127 to z = 0 in a ΛCDM cosmology (Ω_m = 0.25, Ω_Λ = 0.75, h = 0.73, n = 1, σ_8 = 0.9), consistent with the WMAP year 1 data (Spergel et al. 2003). The results were stored at 64 epochs ("snapshots"), which were used to construct a detailed halo merger tree during post-processing, by identifying all resolved dark matter halos and subhalos, and linking all progenitors and descendants of each halo. Galaxies were modeled by applying semi-analytic prescriptions of galaxy formation to the stored halo merger trees. The techniques and recipes include gas cooling, star formation, reionization heating, supernova feedback, black hole growth, and "radio-mode" feedback from galaxies with a static hot gas atmosphere, and are described in Croton et al. (2006). The photometric properties of galaxies are then modeled using stellar population synthesis models, including a simple dust model. Here we use the updated models 'delucia2006a' of De Lucia & Blaizot (2007) that have been made publicly available through an advanced database structure on the MR website (http://www.mpa-garching.mpg.de/millennium/; Lemson & Virgo Consortium 2006).
Construction of a large mock survey at z ∼ 6
We used the discrete MR snapshots to create a large, continuous mock survey of i775-dropout galaxies at z ∼ 6. The general principle of transforming a series of discrete snapshots into a mock pencil beam survey entails placing a virtual observer somewhere in the simulation box at z = 0 and carving out all galaxies as they would be observed in a pencil beam survey along that observer's line of sight. This technique has been described in great detail in Blaizot et al. (2005) and Kitzbichler & White (2007). In general, one starts with the snapshot i = 63 at z = 0 and records the positions, velocities and physical properties of all galaxies within the cone out to a comoving distance corresponding to that of the next snapshot. For the next segment of the cone, one then uses the properties as recorded in snapshot i = 62, and so on. The procedure relies on the reasonable assumption that the large-scale structure (positions and velocities of galaxies) evolves relatively slowly between snapshots. By replicating the simulation box along the lightcone and limiting the opening angle of the cone, one can in principle construct unique lightcones out to very high redshift without crossing any region in the simulation box more than once. The method is straightforward when done in comoving coordinates in a flat cosmology using simple Euclidean geometry (Kitzbichler & White 2007).
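To make the bookkeeping concrete, the following Python fragment sketches how a comoving distance along the cone can be mapped onto the snapshot whose epoch best matches it. This is a minimal illustration and not the authors' pipeline; the snapshot redshifts and a flat-ΛCDM distance integral are the only inputs, and all variable names are ours.

```python
# Minimal sketch: assigning lightcone segments to discrete snapshots,
# in the spirit of Kitzbichler & White (2007). Illustrative only.
import numpy as np
from scipy.integrate import quad

H0, Om, OL, c = 73.0, 0.25, 0.75, 299792.458  # MR cosmology; c in km/s

def comoving_distance(z):
    """Comoving distance in Mpc for a flat LCDM cosmology."""
    integrand = lambda zp: 1.0 / np.sqrt(Om * (1 + zp) ** 3 + OL)
    return (c / H0) * quad(integrand, 0.0, z)[0]

snapshot_z = np.array([5.7, 6.2, 6.7])          # the three MR snapshots used here
d_snap = np.array([comoving_distance(z) for z in snapshot_z])
# Segment boundaries: midpoints between consecutive snapshot distances
bounds = 0.5 * (d_snap[:-1] + d_snap[1:])

def snapshot_for_distance(d_los):
    """Pick the snapshot whose epoch best matches a comoving distance."""
    return int(np.searchsorted(bounds, d_los))

# Example: a galaxy 8500 Mpc down the cone is drawn from this snapshot
i = snapshot_for_distance(8500.0)
print("use snapshot at z =", snapshot_z[i])
```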
Because the comoving distances or redshifts of galaxies recorded at a particular snapshot do not correspond exactly to their effective position along the lightcone, we need to correct their magnitudes by interpolating over redshift as follows:
M_cor[z(d)] = M(z_i) + (dM/dz) [z(d) − z_i] ,    (1)
where M_cor[z(d)] is the observer-frame absolute magnitude at the observed redshift z(d) (including peculiar velocities along the line of sight), M(z_i) is the magnitude at the redshift z_i corresponding to the i-th snapshot, and dM/dz is the first-order derivative of the observer-frame absolute magnitude. The latter quantity is calculated for each galaxy by placing it at the neighbouring snapshots, and ensures that the K-correction is taken into account (Blaizot et al. 2005). Finally, we apply the mean attenuation of the intergalactic medium using Madau (1995) and calculate the observer-frame apparent magnitudes in each filter.
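As an illustration of Eq. (1), the fragment below applies the first-order correction with a finite-difference dM/dz obtained from the neighbouring snapshots. The function name and the numbers in the example are ours; the actual pipeline evaluates dM/dz per galaxy and per filter.

```python
# Illustrative implementation of Eq. (1); a sketch, not the authors' code.
def corrected_magnitude(M_i, z_i, M_prev, z_prev, M_next, z_next, z_eff):
    """First-order interpolation of M over redshift (includes the K-correction
    implicitly, since M is the observer-frame absolute magnitude)."""
    dM_dz = (M_next - M_prev) / (z_next - z_prev)  # finite-difference slope
    return M_i + dM_dz * (z_eff - z_i)

# Example with made-up numbers: a galaxy from the z_i = 6.2 snapshot
# placed at an effective lightcone redshift of 6.05.
M_cor = corrected_magnitude(M_i=-20.3, z_i=6.2,
                            M_prev=-20.5, z_prev=5.7,
                            M_next=-20.1, z_next=6.7,
                            z_eff=6.05)
print(round(M_cor, 3))
```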
In this paper, we use the fact that the selection of z ∼ 6 galaxies through the i-dropout technique is largely free of contamination from objects at lower (and higher) redshift, provided that the observations are deep enough. Because the size of the MR simulation box (500 h^−1 Mpc) corresponds to the comoving line-of-sight distance between z ≈ 5.6 and z ≈ 7.3 (the typical redshift range of i-dropout surveys), we can use the three simulation snapshots centered at z = 5.7, z = 6.2 and z = 6.7 to create a mock survey spanning this volume, while safely neglecting objects at other redshifts.
We extracted galaxies from the MR database by selecting the Z-axis of the simulation box to lie along the line-of-sight of our mock field. In order to compare with the deepest current surveys, we calculated the apparent magnitudes in the HST/ACS V606, i775 and z850 filters and the 2MASS J, H and KS filters. We derived observed redshifts from the comoving distance along the line of sight (including the effect of peculiar velocities), applied the K-corrections and IGM absorption, and calculated the apparent magnitudes in each band. Fig. 1 shows the spatial X-coordinate versus the redshift of objects in the simulated lightcone. Fig. 2 shows the entire simulated volume projected along the Z- (redshift) axis. These figures show that there exists significant filamentary and strongly clustered substructure at z ≈ 6, both parallel and perpendicular to the line-of-sight.
Our final mock survey has a comoving volume of ∼ 0.3 Gpc^3, and spans an area of 4.4° × 4.4° when projected onto the sky. It contains ∼ 1.6 × 10^5 galaxies at z = 5.6 − 7.3 with z850 ≲ 27.5 mag (corresponding to an absolute magnitude of M_UV,AB ≃ −19.2 mag, about one mag below M*_UV,z=6). For comparison and future reference, we list the main i-dropout surveys together with their areal coverage and detection limit in Table 1.
Colour-colour selection
In the left panel of Fig. 3 we show the V606 − z850 vs. i775 − z850 colour-colour diagram for all objects satisfying z850 ≲ 27.0. The i-dropouts populate a region in colour-colour space that is effectively isolated from lower redshift objects using a simple colour cut of i775 − z850 ≳ 1.3 − 1.5. Note that although our simulated survey only contains objects at z > 5.6, it has been shown (Stanway et al. 2003; Dickinson et al. 2004; Bouwens et al. 2004a, 2006) that this colour cut is an efficient selection criterion for isolating starburst galaxies at z ∼ 6 with blue z850 − J colours (see right panel of Fig. 3). For reference, we have overplotted colour tracks for a 100 Myr old, continuous star formation model as a function of redshift; different colour curves show results for different amounts of reddening by dust. As can be seen, these simple models span the region of colour-colour space occupied by the MR galaxies. At z < 6, galaxies occupy a tight sequence in the plane. At z > 6, objects fan out because the V606 − z850 colour changes strongly as a function of redshift, while the i775 − z850 colour is more sensitive to both age and dust reddening. Because of the possibility of intrinsically red interlopers at z ∼ 1 − 3, the additional requirement of a non-detection in V606, or a very red V606 − z850 ≳ 3 colour, if available, is often included in the selection. Because the selection based on i775 − z850 ≳ 1.3 introduces a small bias against objects having strong Lyα emission at z ≲ 6 (Malhotra et al. 2005; Stanway et al. 2007), we have statistically included the effect of Lyα on our sample selection by randomly assigning Lyα with a rest-frame equivalent width of 30 Å to 25% of the galaxies in our volume, and recalculating the i775 − z850 colours; a schematic version of this selection is sketched below. The inclusion of Lyα leads to a reduction in the number of objects selected of ∼ 3% (see also Bouwens et al. 2006).
3 The current paper uses magnitudes and colours defined in the HST/ACS V606 i775 z850 filter system in order to compare with the deepest surveys available in the literature. Other works based on ground-based data commonly use the SDSS-based r′i′z′ filter set, but the differences in colours are minimal.
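The selection logic can be summarized in a few lines of Python. This is a schematic sketch only: the 0.2 mag colour shift used to mimic a 30 Å Lyα line entering the i775 band is a placeholder of our own (not a fitted value), and the idealized V606 > 30 "non-detection" stands in for the actual signal-to-noise criteria.

```python
# Schematic i-dropout selection with a statistical Lyman-alpha treatment.
# Assumptions (ours): a flat 0.2 mag colour perturbation, an idealized
# V606 non-detection, and a simple z < 6.1 test for the line landing in i775.
import numpy as np
rng = np.random.default_rng(1)

def select_idropouts(i775, z850, V606, zlim=26.5):
    """i-dropout cut: i775 - z850 > 1.3, detected in z850, undetected in V606."""
    return (z850 < zlim) & (i775 - z850 > 1.3) & (V606 > 30.0)

def perturb_with_lya(i775, z, frac=0.25, dcol=0.2):
    """Assign Lyalpha to a random 25% of galaxies and make i775 - z850 bluer
    when the line falls inside the i775 bandpass (placeholder shift)."""
    has_lya = rng.random(i775.size) < frac
    in_i775 = z < 6.1
    return i775 - dcol * (has_lya & in_i775)

# Toy usage
i775 = np.array([27.2, 26.8]); z850 = np.array([25.6, 25.9])
V606 = np.array([31.0, 31.0]); z = np.array([5.9, 6.3])
print(select_idropouts(perturb_with_lya(i775, z), z850, V606))
```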
i-dropout number densities
In Table 2 we list the surface densities of i775-dropouts selected in the MR mock survey as a function of limiting z850-magnitude and field size. For comparison, we calculated the surface densities for regions having areas comparable to some of the main i775-dropout surveys: the SDF (876 arcmin^2), two GOODS fields (320 arcmin^2), a single GOODS field (160 arcmin^2), and a HUDF-sized field (11.2 arcmin^2). The errors in Table 2 indicate the ±1σ deviation measured among a large number of similarly sized fields selected from the mock survey, and can be taken as an estimate of the influence of (projected) large-scale structure on the number counts (usually referred to as "cosmic variance" or "field-to-field variations"); a sketch of this procedure is given below. At faint magnitudes, the strongest observational constraints on the i775-dropout density come from the HST surveys. Our values for a GOODS-sized survey are 105, 55 and 82% of the values given by the most recent estimates of B07 for limiting z850 magnitudes of 27.5, 26.5 and 26.0 mag, respectively, and consistent within the expected cosmic variance allowed by our mock survey. Because the total area surveyed by B06 is about 200× smaller than our mock survey, we also compare our results to the much larger SDF sample of Ota et al. (2008). At z850 = 26.5 mag the number densities of ∼0.18 arcmin^−2 derived from both the real and mock surveys are perfectly consistent.
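The field-to-field scatter quoted in Table 2 can be estimated with a simple Monte Carlo over subfields. In the sketch below the galaxy positions are drawn at random purely to make the fragment self-contained; the actual estimate uses the clustered positions from the mock survey, which is what makes the measured scatter exceed the Poisson expectation.

```python
# Sketch of the "cosmic variance" estimate: tile the mock survey with many
# GOODS-sized subfields and measure the scatter in the counts.
import numpy as np
rng = np.random.default_rng(2)

survey_arcmin = 4.4 * 60.0                          # mock survey side, arcmin
# Placeholder positions (uniform); real positions come from the mock survey.
ra, dec = rng.uniform(0, survey_arcmin, size=(2, 160000))

def subfield_counts(ra, dec, side_arcmin, n_fields, rng):
    counts = []
    for _ in range(n_fields):
        x0 = rng.uniform(0, survey_arcmin - side_arcmin)
        y0 = rng.uniform(0, survey_arcmin - side_arcmin)
        inside = ((ra > x0) & (ra < x0 + side_arcmin) &
                  (dec > y0) & (dec < y0 + side_arcmin))
        counts.append(inside.sum())
    return np.asarray(counts)

# One GOODS field is ~160 arcmin^2, i.e. a square of side ~12.6 arcmin.
c = subfield_counts(ra, dec, side_arcmin=np.sqrt(160.0), n_fields=1000, rng=rng)
print("mean, sigma:", c.mean(), c.std())  # sigma includes Poisson + clustering
```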
Last, we note that in order to achieve agreement between the observed and simulated number counts at z ∼ 6, we did not require any tweaks to either the cosmology (e.g., see Wang et al. 2008, for the effect of different WMAP cosmologies) or the dust model used (see Kitzbichler & White 2007; Guo & White 2008, for alternative dust models better tuned to high redshift galaxies). This issue may be further investigated in a future paper.
Redshift distribution
In Fig. 4 we show the redshift distribution of the full mock survey (thick solid line), along with various subsamples selected according to different i775 − z850 colour cuts that we will refer to later in this paper. The standard selection of i775 − z850 > 1.3 results in a distribution that peaks at z ≈ 5.8. We have also indicated the expected scatter resulting from cosmic variance on the scale of GOODS-sized fields (error bars are 1σ). Some, mostly ground-based, studies make use of a more stringent cut of i775 − z850 > 1.5 to reduce the chance of foreground interlopers (dashed histogram). Other works have used colour cuts of i775 − z850 < 2 (blue histogram) and i775 − z850 > 2 (red histogram) in order to try to extract subsamples at z ≲ 6 and z ≳ 6, respectively. As can be seen in Fig. 4, such cuts are indeed successful at broadly separating sources from the two redshift ranges, although the separation is not perfectly clean due to the mixed effects of age, dust and redshift on the i775 − z850 colour. For reference, we have also indicated the model redshift distribution from B06 (thin solid line). This redshift distribution was derived for a much fainter sample of z850 ≲ 29 mag, which explains in part the discrepancy in the counts at z ≳ 6.2. Evolution across the redshift range will furthermore skew the actual redshift distribution toward lower values (see discussion in Muñoz & Loeb 2008b). This is not included in the B06 model, and its effect is only marginally taken into account in the MR mock survey, due to the relatively sparse snapshot sampling across the redshift range. Unfortunately, the exact shape of the redshift distribution is currently not well constrained by spectroscopic samples (Malhotra et al. 2005). A more detailed analysis is beyond the scope of this paper, and we conclude by noting that the results presented below are largely independent of the exact shape of the distribution.
Physical properties of i-dropouts
Although a detailed study of the successes and failures of the semi-analytic modeling of galaxies at z ∼ 6 is not the purpose of our investigation, we believe it will be instructive for the reader if we at least summarize the main physical properties of the model galaxies in our mock survey. Unless stated otherwise, throughout this paper we will limit our investigations to i-dropout samples having a limiting magnitude of z850 = 26.5 mag, comparable to M*_UV at z = 6 (see Bouwens et al. 2007). This magnitude typically corresponds to model galaxies situated in dark matter halos of at least 100 dark matter particles (∼ 10^11 M⊙ h^−1). This ensures that the evolution of those halos and their galaxies has been traced for some time prior to the snapshot from which the galaxy was selected. In this way, we ensure that the physical quantities derived from the semi-analytic model are relatively stable against snapshot-to-snapshot fluctuations. A magnitude limit of z850 = 26.5 mag also conveniently corresponds to the typical depth that can be achieved in deep ground-based surveys or relatively shallow HST-based surveys.
In Fig. 5 we plot the cumulative distributions of the stellar mass (top left), SFR (top right), and stellar age (bottom left) of the i775-dropouts in the mock survey. The median stellar mass is ∼ 5 × 10^9 M⊙ h^−1, and about 30% of the galaxies have a stellar mass greater than 10^10 M⊙. The median SFR and age are ∼ 30 M⊙ yr^−1 and ∼160 Myr, respectively, with extrema of ∼ 500 M⊙ yr^−1 and ∼400 Myr. These results are in general agreement with several studies based on modeling the stellar populations of limited samples of i775-dropouts and Lyα emitters for which deep observations with HST and Spitzer exist. Yan et al. (2006) have analyzed a statistically robust sample and find stellar masses ranging from ∼ 1 × 10^8 M⊙ for IRAC-undetected sources to ∼ 7 × 10^10 M⊙ for the brightest 3.6 µm sources, and ages ranging from <40 to 400 Myr (see also Dow-Hygelund et al. 2005; Eyles et al. 2007; Lai et al. 2007, for additional comparison data). We also point out that the maximum stellar mass of ∼ 7 × 10^10 M⊙ found in our mock survey (see top left panel) is comparable to that of the most massive i-dropouts found, and that "supermassive" galaxies having masses in excess of 10^11 M⊙ are absent in both the simulations and the observations (McLure et al. 2006). Last, in the bottom right panel we show the distribution of the masses of the halos hosting the model i-dropouts. The median halo mass is ∼ 3 × 10^11 M⊙. Our results are in the range of values reported by Overzier et al. (2006) and McLure et al. (2008) based on the angular correlation function of large i775-dropout samples, but we note that halo masses are currently not well constrained by the observations.
THE RELATION BETWEEN I-DROPOUTS AND (PROTO-)CLUSTERS
In this Section we study the relation between local overdensities in the i-dropout distribution at z ∼ 6 and the sites of cluster formation. Throughout this paper, a galaxy cluster is defined as all galaxies belonging to a bound dark matter halo having a dark matter mass of M_tophat ≥ 10^14 h^−1 M⊙ at z = 0. In the MR we find 2,832 unique halos, or galaxy clusters, fulfilling this condition, 21 of which can be considered supermassive (M_tophat ≥ 10^15 h^−1 M⊙). Furthermore, a proto-cluster galaxy is defined as a galaxy at z ∼ 6 that will end up in a galaxy cluster at z = 0. Note that these are trivial definitions given the database structure of the Millennium Run simulations, in which galaxies and halos at any given redshift can be related to their progenitors and descendants at another redshift (Lemson & Virgo Consortium 2006).
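The descendant bookkeeping amounts to walking the merger tree from z ∼ 6 to z = 0. A minimal sketch follows, with a toy dictionary standing in for the MR database tables; all names are ours.

```python
# Hedged sketch of the descendant book-keeping: follow 'descendant'
# pointers from a z ~ 6 halo to its z = 0 root. The dict-based tree is a
# stand-in for the actual MR merger-tree tables.
def z0_descendant(halo_id, descendant_of):
    """Follow descendant pointers until the z = 0 root (None) is reached."""
    while descendant_of.get(halo_id) is not None:
        halo_id = descendant_of[halo_id]
    return halo_id

# Toy tree: halos 1 and 2 at z ~ 6 merge into 10, which grows into 100 (z = 0).
descendant_of = {1: 10, 2: 10, 10: 100, 100: None}
cluster_mass_z0 = {100: 3e14}        # M_sun/h, illustrative value

# A z ~ 6 galaxy is a "proto-cluster galaxy" if its z = 0 host is a cluster.
is_protocluster = cluster_mass_z0[z0_descendant(1, descendant_of)] >= 1e14
print(is_protocluster)               # True
```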
Properties of the z = 0 descendants of i-dropouts
In Fig. 6 we plot the distribution of number densities of the central halos that host the z = 0 descendants of the i-dropouts in our mock survey as a function of halo mass. The median halo mass hosting the i-dropout descendants at z = 0 is 3 × 10^13 M⊙ h^−1 (dotted line). For comparison we indicate the mass distribution of all halos at z = 0 (dashed line). The plot shows that the fraction of halos that can be traced back to a halo hosting an i-dropout at z ∼ 6 is a strong function of the halo mass at z = 0: 45% of all cluster-sized halos at z = 0 (indicated by the hatched region) are related to the descendants of halos hosting i-dropouts in our mock survey, and 77% of all clusters at z = 0 having a mass of M > 7 × 10^14 M⊙ h^−1 can be traced back to at least one progenitor halo at z ∼ 6 hosting an i-dropout. This implies that the first seeds of galaxy clusters are already present at z ∼ 6. In addition, many i-dropout galaxies and their halos may merge and end up in the same descendant structures at z = 0, which was not accounted
for in our calculation above, where we only counted unique halos at z = 0. In fact, about ∼34% (∼2%) of all i-dropouts (z850 ≤ 26.5) in the mock survey will end up in clusters of mass > 1 × 10^14 (> 7 × 10^14) M⊙ h^−1 at z = 0. This implies that roughly one third of all galaxies one observes in a typical i-dropout survey can be considered "proto-cluster" galaxies. The plot further shows that the majority of halos hosting i-dropouts at z ∼ 6 will evolve into halos that are more typical of the group environment. This is similar to the situation found for Lyman break or dropout galaxies at lower redshifts (Ouchi et al. 2004).
In Fig. 7 we plot the stellar mass distribution of those z = 0 galaxies that host the descendants of the i-dropouts. The present-day descendants are found in galaxies having a wide range of stellar masses (M* ≃ 10^9−12 M⊙), but the distribution is skewed towards the most massive galaxies in the MR simulations. The median stellar mass of the descendants is ∼ 10^11 M⊙ (dotted line in Fig. 7).
Detecting proto-clusters at z ∼ 6
We will now focus on to what extent local overdensities in the i-dropout distribution at z ≈ 6 may trace the progenitor seeds of the richest clusters of galaxies in the present-day Universe. In Fig. 8 we plot the sky distribution of the i-dropouts in our 4.4° × 4.4° MR mock survey (large and small circles). Large circles indicate those i-dropouts identified as proto-cluster galaxies. We have plotted contours of i-dropout surface density, δΣ,5′ ≡ (Σ5′ − Σ̄5′)/Σ̄5′, where Σ5′ and Σ̄5′ are the local and mean surface density measured in circular cells of 5′ radius. Negative contours representing underdense regions are indicated by blue lines, while positive contours representing overdense regions are indicated by red lines. The green dashed lines indicate the mean density. The distribution of proto-cluster galaxies (large circles) correlates strongly with positive enhancements in the local i-dropout density distribution, indicating that these are the sites of formation of some of the first clusters. In Fig. 9 we plot the frequency distribution of the i-dropouts shown in Fig. 8, based on a counts-in-cells analysis of 20,000 randomly placed ACS-sized fields of 3.4′ × 3.4′ (solid histograms); a sketch of this analysis is given below. On average, one expects to find about 2 i-dropouts in a random ACS pointing down to a depth of z850 = 26.5, but the distribution is skewed with respect to a pure Poissonian distribution, as expected due to the effects of gravitational clustering. The Poissonian expectation for a mean of 2 i-dropouts is indicated by a thin line for comparison. The panel on the right shows a zoomed-in view to give a better sense of the small fraction of pointings having large numbers of i-dropouts. Also in Fig. 9 we have indicated the counts histogram derived from a similar analysis performed on i-dropouts extracted from the GOODS survey using the samples of B06. The GOODS result is indicated by the dotted histogram, showing that it lies much closer to the Poisson expectation than the MR mock survey. This is of course expected, as our mock survey covers an area over 200× larger than GOODS and includes a much wider range of environments. To illustrate that the (small) fraction of pointings with the largest number of objects is largely due to the presence of regions associated with proto-clusters, we effectively "disrupt" all proto-clusters by randomizing the positions of all proto-cluster galaxies and repeating the counts-in-cells calculation. The result is shown by the dashed histograms in Fig. 9. The excess counts have largely disappeared, indicating that they were indeed due to the proto-clusters. The counts still show a small excess over the Poissonian distribution due to the overall angular clustering of the i-dropout population.
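The counts-in-cells measurement itself is straightforward; here is a sketch, assuming galaxy positions in arcmin within the mock field (names are ours).

```python
# Counts-in-cells sketch: drop ACS-sized (3.4' x 3.4') cells at random
# positions and record the i-dropout count in each cell.
import numpy as np
rng = np.random.default_rng(3)

def counts_in_cells(x, y, field_arcmin, cell=3.4, n_cells=20000, rng=rng):
    counts = np.empty(n_cells, dtype=int)
    for k in range(n_cells):
        x0 = rng.uniform(0, field_arcmin - cell)
        y0 = rng.uniform(0, field_arcmin - cell)
        counts[k] = np.sum((x >= x0) & (x < x0 + cell) &
                           (y >= y0) & (y < y0 + cell))
    return counts

# "Disrupting" the proto-clusters amounts to randomizing the positions of
# the proto-cluster members and re-running counts_in_cells.
```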
We can use our counts-in-cells analysis to predict the cumulative probability, P(>δ), of randomly finding an i-dropout overdensity equal to or larger than δΣ,ACS; the results are shown in Fig. 10. The four panels correspond to the subsamples defined using the four different i775 − z850 colour cuts (see §2.5 and Fig. 4). Panel insets show the full probability range for reference. The figure shows that the probability of finding, for example, cells having a surface overdensity of i-dropouts of ≥ 3 is about half a percent for the i775 − z850 > 1.3 samples (top left panel, solid line). The other panels show the dependence of P(>δ) on i-dropout samples selected using different colour cuts. As the relative contribution from fore- and background galaxies changes, the density contrast between real, physical overdensities on small scales and the "field" is increased.
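Given the cell counts, P(>δ) follows directly; a short continuation of the sketch above:

```python
# From cell counts to the cumulative probability P(>= delta).
import numpy as np

def p_greater_delta(counts, delta_grid):
    mean = counts.mean()
    delta = counts / mean - 1.0              # surface overdensity per cell
    return np.array([(delta >= d).mean() for d in delta_grid])

# e.g. p_greater_delta(counts, [1, 2, 3]) gives the fraction of cells with
# delta_Sigma,ACS >= 1, 2 and 3, respectively.
```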
The results presented in Fig. 10 provide us with a powerful way to interpret many observational findings. Specifically, overdensities of i-dropouts have been interpreted, at least qualitatively, as evidence for large-scale structure associated with proto-clusters. Although Fig. 10 tells us the likelihood of finding a given overdensity, this is not by itself sufficient to answer the question of whether that overdensity is related to a proto-cluster, due to a combination of several effects. First, because we are mainly working with photometrically selected samples consisting of galaxies spanning about one unit of redshift, projection effects are bound to give rise to a range of surface densities. Second, the number counts may show significant variations as a function of position and environment resulting from the large-scale structure. The uncertainties in the cosmic variance can be reduced by observing fields that are larger than the typical scale length of the large-scale structures, but this is often not achieved in typical observations at z ∼ 6. Third, surface overdensities that are related to genuine overdensities in physical coordinates are not necessarily due to proto-clusters, as we have shown that the descendants of i-dropouts can be found in a wide range of environments at z = 0, galaxy groups being the most common (see Fig. 6). We have separated the contribution of these effects to P(>δ) from that due to proto-clusters by calculating the fraction of actual proto-cluster i-dropouts in each cell of given overdensity δ.
The results are also shown in Fig. 10, where dashed histograms indicate the combined probability of finding a cell of overdensity δ consisting of more than 25 (blue lines), 50 (green lines) and 75% (red lines) proto-cluster galaxies. The results show that while, for example, P(δ ≥ 2.5) is about 1%, the chance that at least 50% of the galaxies in such cells are proto-cluster galaxies is only half that, about 0.5% (see top left panel in Fig. 10). The figure goes on to show that the fraction of proto-cluster galaxies increases significantly as the overdensity increases, indicating that the largest (and rarest) overdensities in the i-dropout distribution are related to the richest proto-cluster regions. This is further illustrated in Fig. 11, in which we plot the average and scatter of the fraction of proto-cluster galaxies as a function of δ. Although the fraction rises as the overdensity increases, there is a very large scatter. At δ ≈ 4 the average fraction of proto-cluster galaxies is about 0.5, but varies significantly between 0.25 and 0.75 (1σ).
It will be virtually impossible to estimate an accurate cluster mass at z = 0 from a measured surface overdensity at z ∼ 6. Although there is a correlation between cluster mass at z = 0 and i-dropout overdensity at z ∼ 6, the scatter is significant. Many of the most massive (M > 10^15 M⊙) clusters have very small associated overdensities, while the progenitors of fairly low mass clusters (M ∼ 10^14 M⊙) can be found associated with regions of relatively large overdensities. However, the largest overdensities are consistently associated with the progenitors of M ∼ 5 × 10^14 − 1 × 10^15 M⊙ clusters.
Some examples
Although the above sections yield useful statistical results, it is interesting to look at the detailed angular and redshift distributions of the i775-dropouts in a few of the overdense regions. In Fig. 12 we show 16 30′×30′ regions having overdensities ranging from δΣ,ACS ∼ 8 (bottom left panel) to ∼ 3 (top right panel). In each panel we indicate the relative size of an ACS pointing (red square), and the redshift, overdensity and present-day mass of the most massive proto-clusters are given in the top left and right corners. Field galaxies are drawn as open circles, while proto-cluster galaxies are drawn as filled circles. Galaxies belonging to the same proto-cluster are drawn in the same colour. While some regions contain relatively compact proto-clusters with numerous members inside the 3.4′×3.4′ ACS field-of-view (e.g. panels #0, 1 and 8), other regions may contain very few or highly dispersed galaxies. Also, many regions contain several overlapping proto-clusters, as the selection function is sensitive to structures from a relatively wide range in redshift inside the 30′×30′ regions plotted. Although the angular separation between galaxies belonging to the same proto-cluster is typically smaller than ∼10′ or 25 Mpc (comoving), Fig. 13 shows that the overdensities of regions centered on the proto-clusters are significantly positive out to much larger radii of between 10 and 30′, indicating that the proto-clusters form inside very large filaments of up to 100 Mpc in size that contribute significantly to the overall (field) number counts in the proto-cluster regions. In Fig. 14 we plot the redshift coordinate against one of the angular coordinates, using the same regions and colour codings as in Fig. 12. Proto-clusters are significantly more clumped in redshift space compared to field galaxies, due to flattening of the velocity field associated with the collapse of large structures. In each panel, a red dashed line marks z = 5.9, which roughly corresponds to, respectively, the upper and lower redshift of samples selected by placing a cut at i775 − z850 ≤ 2 and i775 − z850 ≥ 2 (see the redshift selection functions in Fig. 4). Such colour cuts may help reduce the contribution from field galaxies by about 50%, depending on the redshift one is interested in. We also mark the typical redshift range of ∆z ≈ 0.1 probed by narrowband filters centered on the redshift of each proto-cluster using blue dotted lines. As we will show in more detail in §4.2 below, such narrowband selections offer one of the most promising methods for finding and studying the earliest collapsing structures at high redshift, because of the significant increase in contrast between cluster and field regions.
However, such surveys are time-consuming and only probe the part of the galaxy population that is bright in the Lyα line.
COMPARISON WITH OBSERVATIONS FROM THE LITERATURE
Our mock survey of i-dropouts constructed from the MR, due to its large effective volume, spans a wide range of environments and is therefore ideal for making detailed comparisons with observational studies of the large-scale structure at z ∼ 6. In the following subsections, we will make such comparisons with two studies of candidate proto-clusters of i-dropouts and Lyα emitters found in the SDF and SXDF.
The candidate proto-cluster of Ota et al. (2008)
When analysing the sky distribution of i-dropouts in the 876 arcmin^2 Subaru Deep Field, Ota et al. (2008) (henceforward O08) discovered a large surface overdensity, presumed to be a proto-cluster at z ∼ 6. The magnitude of the overdensity was quantified as the excess of i-dropouts in a circle of 20 Mpc comoving radius. The region had δΣ,20Mpc = 0.63 with 3σ significance. Furthermore, this region also contained the highest density contrast measured in an 8 Mpc comoving radius, δΣ,8Mpc = 3.6 (5σ), compared to other regions of the SDF. By relating the total overdensity in dark matter to the measured overdensity in galaxies through an estimate of the galaxy bias parameter, the authors estimated a mass for the proto-cluster region of ∼ 1 × 10^15 M⊙. We use our mock survey to select i-dropouts with i775 − z850 > 1.5 and z850 < 26.5, similar to O08. The resulting surface density of 0.16 arcmin^−2 is in very good agreement with the value of 0.18 arcmin^−2 found by O08. In Fig. 15 we plot the sky distribution of our sample, and connect regions of constant (positive) density δΣ,20Mpc. Next we selected all regions that had δΣ,20Mpc ≥ 0.63. These regions are indicated by the large red circles in Fig. 15. We find ∼30 (non-overlapping) regions in our entire mock survey having δΣ,20Mpc = 0.6 − 2.0 at 2 − 7σ significance, relative to the mean dropout density of Σ̄20Mpc ≈ 32. Analogous to Fig. 8, we have marked all i-dropouts associated with proto-clusters with large symbols. It can be seen clearly that the proto-cluster galaxies are found almost exclusively inside the regions of enhanced local surface density indicated by the contour lines, while the large void regions are virtually depleted of proto-cluster galaxies. Although the 30 regions of highest overdensity, selected to be similar to the region found by O08, coincide with the highest peaks in the global density distribution across the field, it is interesting to point out that in some cases the regions contain very few actual proto-cluster galaxies, e.g., the regions at (RA,DEC)=(10,150) and (80,220) in Fig. 15. We therefore introduce a proto-cluster "purity" parameter, Rpc, defined as the ratio of the number of galaxies in a (projected) region that belong to proto-clusters to the total number of galaxies in that region; a minimal implementation is sketched below. We find Rpc,20Mpc ≈ 16−50%. The purest or richest proto-clusters are found in regions having a wide range in overdensities, e.g., the region at (175,225) with δΣ,20Mpc = 2.2 and Rpc,20Mpc = 50%, and the region at (200,40) with δΣ,20Mpc = 0.9 and Rpc,20Mpc = 40%. Following O08 we also calculate the maximum overdensity in each region using cells of 8 Mpc radius. We find δΣ,8Mpc = 1.1 − 3.5 with 2 − 6σ significance. These sub-regions are indicated in Fig. 15 using smaller circles. Interestingly, there is a very wide range in proto-cluster purity of Rpc,8Mpc ≈ 0−80%. The largest overdensity in Fig. 15 at (175,225) corresponds to the region giving birth to the most massive cluster. By z = 0, this region has grown into a "supercluster" region containing numerous clusters, two of which have M > 10^15 M⊙.
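The purity parameter is a one-line statistic; a minimal sketch, with the boolean membership flags assumed to come from the merger-tree bookkeeping of Sect. 3:

```python
# Proto-cluster "purity" R_pc: the fraction of galaxies in a projected
# region that belong to z = 0 clusters. Both boolean arrays are assumed
# inputs (region membership and proto-cluster membership).
import numpy as np

def purity(in_region, is_protocluster):
    n = np.sum(in_region)
    return np.sum(in_region & is_protocluster) / n if n > 0 else np.nan
```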
We conclude that local overdensities in the distribution of i-dropouts on scales of ∼10−50 comoving Mpc, similar to the one found by O08, indeed trace the seeds of massive clusters. Because our mock survey is about 80× larger than the SDF, we expect that one would encounter such proto-cluster regions in about one in three (2.7) SDF-sized fields on average. However, the fraction of actual proto-cluster galaxies is in the range 16−50% (0−80% for 8 Mpc radius regions). This implies that while one can indeed find the overdense regions where clusters are likely to form, there is no way of verifying which galaxies are part of the proto-cluster and which are not, at least not when using photometrically selected samples. These results are consistent with our earlier finding that there is a large scatter in the relation between the measured surface overdensity and both cluster "purity" and the mass of its descendant cluster at z = 0 (Sect. 3.2).

Figure 15. The sky distribution of i-dropouts selected using criteria matched to those of Ota et al. (2008). Grey solid lines are surface density contours of δΣ,20Mpc = 0, +0.2, +0.4, +0.6, +0.8 and +1.0. Large red dashed circles mark overdense regions of δΣ,20Mpc > 0.63, corresponding to similar overdensities as that associated with the candidate z ∼ 6 proto-cluster region found by Ota et al. (2008) in the Subaru Deep Field. Small red circles inside each region mark a subregion having the largest overdensity δΣ,8Mpc measured in an 8 Mpc co-moving radius (projected) cell (see text for further details).
The Lyα-selected proto-cluster of Ouchi et al. (2005)
The addition of velocity information gives studies of Lyα samples a powerful edge over purely photometrically selected i775-dropout samples. As explained by Monaco et al. (2005, and references therein), peculiar velocity fields are influenced by the large-scale structure: streaming motions can shift the overall distribution in redshift, while the dispersion can both increase and decrease as a result of velocity gradients. Galaxies located in different structures that are not physically bound will have higher velocity dispersions, while galaxies that are in the process of falling together to form non-linear structures such as filaments, sheets (or "pancakes") and proto-clusters will have lower velocity dispersions.
Using deep narrow-band imaging observations of the SXDF, Ouchi et al. (2005) (O05) were able to select candidate Lyα galaxies at z ≃ 5.7 ± 0.05. Follow-up spectroscopy of the candidates in one region that was found to be significantly overdense (δ ≈ 3) on a scale of 8 Mpc (comoving) radius resulted in the discovery of two groups ('A' and 'B') of Lyα emitting galaxies, each having a very narrow velocity dispersion of ∼200 km s^−1. The three-dimensional density contrast is of the order of ∼100, comparable
to that of present-day clusters, and the space density of such protocluster regions is roughly consistent with that of massive clusters (see O05).
In order to study the velocity fields of collapsing structures and carry out a direct comparison with O05, we construct a simple Lyα survey from our mock sample as follows. First, we construct a (Gaussian) redshift selection function centred on z = 5.8 with a standard deviation of 0.04. As it is not known what causes some galaxies to be bright in Lyα and others not, our simulations do not include a physical prescription for Lyα as such. However, empirical results suggest that Lyα emitters are mostly young, relatively dust-free objects and a subset of the i775-dropout population. The fraction of galaxies with high equivalent-width Lyα is about 40%, and this fraction is found to be roughly constant as a function of the rest-frame UV continuum magnitude. Therefore, we scale our selection function so that it has a peak selection efficiency of 40%. Next, we apply this selection function to the i775-dropouts from the mock survey to create a sample with a redshift distribution similar to that expected from a narrowband Lyα survey. Finally, we tune the limiting z850 magnitude until we find a number density similar to that reported by O05. By setting z850 < 26.9 mag we obtain the desired number density of ∼0.1 arcmin^−2. The mock Lyα field is shown in Fig. 16.

Figure 16. The mock Lyα field (compare Fig. 2 of Ouchi et al. 2005). The black dashed line marks the average field density. Small circles indicate field galaxies. Large circles indicate proto-cluster galaxies.
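The construction of the mock Lyα sample reduces to a probabilistic cut; here is a sketch under the stated assumptions (Gaussian selection function, 40% peak efficiency, tuned magnitude limit), with all names ours.

```python
# Sketch of the mock Lyalpha survey: a Gaussian redshift selection
# function with 40% peak efficiency applied to the i-dropouts, plus the
# tuned magnitude limit that reproduces ~0.1 arcmin^-2.
import numpy as np
rng = np.random.default_rng(4)

def lya_selected(z, z850, z0=5.8, sigma=0.04, peak=0.40, maglim=26.9):
    p_select = peak * np.exp(-0.5 * ((z - z0) / sigma) ** 2)
    return (z850 < maglim) & (rng.random(z.size) < p_select)
```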
In the top left panel of Fig. 17 we plot the overdensities measured in randomly drawn regions of 8 Mpc (comoving) radius against the proto-cluster purity parameter, analogous to Fig. 11. Although the median purity of a sample increases with overdensity (dashed line), the scatter indicated by the points is very large, even for overdensities as large as δ ≈ 3 found by O05 (marked by the shaded region in the top panel of Fig. 17). To guide the eye, we have plotted regions of purity >0.5 as red points, and regions having purity >0.5 and δ > 3 as blue points in all panels of Fig. 17. Next, we calculate the velocity dispersion, σ_v,bi, from the peculiar velocities of the galaxies in each region using the bi-weight estimator of Beers et al. (1990), which is robust for relatively small numbers of objects (N ≃ 10 − 50), and plot the result against δ and cluster purity in the top right and bottom left panels of Fig. 17, respectively.

Figure 17. The correlations between surface overdensity, cluster purity and velocity dispersion for Lyα galaxies selected from the mock Lyα survey shown in Fig. 16, using randomly drawn cells of 8 Mpc (comoving) radius. Dashed lines indicate the median trends. Red points highlight regions of purity R > 0.5. Blue points highlight regions of R > 0.5 and δΣ > 3. Shaded areas mark the values obtained by Ouchi et al. (2005) for a proto-cluster of Lyα galaxies in the SDF. See text for details.
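For completeness, here is a sketch of the biweight scale estimator as given by Beers, Flynn & Gebhardt (1990), with the conventional tuning constant c = 9; the function name is ours.

```python
# Biweight scale (Beers, Flynn & Gebhardt 1990): a robust dispersion
# estimator suited to small samples (N ~ 10-50).
import numpy as np

def biweight_scale(v, c=9.0):
    """v: peculiar velocities in km/s; returns sigma_v,bi."""
    v = np.asarray(v, dtype=float)
    M = np.median(v)
    mad = np.median(np.abs(v - M))        # median absolute deviation
    u = (v - M) / (c * mad)
    w = np.abs(u) < 1.0                   # reject distant outliers
    num = np.sum((v[w] - M) ** 2 * (1.0 - u[w] ** 2) ** 4)
    den = np.sum((1.0 - u[w] ** 2) * (1.0 - 5.0 * u[w] ** 2))
    return np.sqrt(v.size * num) / np.abs(den)
```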
Although gravitational clumping of galaxies in redshift space causes the velocity dispersions to be considerably lower than the velocity width allowed by the bandpass of the narrowband filter (⟨σ_v,bi⟩ ≃ 1000 km s^−1, compared to σ_NB ≈ 1800 km s^−1 for σ_NB,z = 0.04), the velocity dispersion is not a decreasing function of the overdensity (at least not up to δ ≈ 3 − 4) and the scatter is significant. This can be explained by the fact that proto-cluster regions are rare, and even regions that are relatively overdense in angular space still contain many galaxies that are not contained within a single bound structure. A much stronger correlation is found between dispersion and cluster purity (see bottom left panel of Fig. 17). Although the scatter in dispersion is large for regions with a purity of ≳ 0.5, the smallest dispersions are associated with some of the richest proto-cluster regions. This can be understood because the "purest" structures represent the bound inner cores of future clusters at z = 0. The velocity dispersions are low because these systems do not contain many field galaxies that act to inflate the velocity dispersion measurements. Therefore, the velocity dispersion correlates much more strongly with the proto-cluster purity than with the surface overdensity. The overdensity parameter helps, however, in reducing some of the ambiguity in the cluster richness at small dispersions (compare black and blue points at small σ_v,bi in the bottom left panel). The shaded regions in Fig. 17 indicate the range of measurements of O05, implying that their structure has the characteristics of Lyα galaxies falling into a proto-cluster at z ∼ 6.
WHERE IS THE LARGE-SCALE STRUCTURE ASSOCIATED WITH Z ∼ 6 QSOS?
For reasons explained in the Introduction, it is generally assumed that the luminous QSOs at z ∼ 6 inhabit the most massive dark matter halos in the early Universe. The HST/ACS, with its deep and wide-field imaging capabilities in the i775 and z850 bands, has made it possible to test one possible implication of this by searching for small neighbouring galaxies tracing these massive halos. In this Section, we will first investigate what new constraints we can put on the masses of the host halos based on the observed neighbour statistics. Muñoz & Loeb (2008a) have addressed the same problem based on the excursion set formalism. Our analysis is based on the semi-analytic models incorporated in the MR simulation, which we believe is likely to provide a more realistic description of galaxy properties at z ∼ 6. We will use the simulations to evaluate what we can say about the most likely environment of the QSOs and whether they are associated with proto-clusters. We finish the Section by presenting some clear examples from the simulations that would signal a massive overdensity in future observations. Several searches for companion galaxies in the vicinity of z ∼ 6 QSOs have been carried out to date. In Table 1 we list the main surveys, covering in total 6 QSOs spanning the redshift range 5.8 < z < 6.4. We have used the results given in Stiavelli et al. (2005), Zheng et al. (2006) and Kim et al. (2008) to calculate the surface overdensities associated with each of the QSO fields listed in Table 1. Only two QSOs were found to be associated with positive overdensities to a limiting magnitude of z850 = 26.5: J0836+0054 (z = 5.82) and J1030+0524 (z = 6.28) both had δΣ,ACS ≈ 1, although evidence suggests that the overdensity could be as high as ≈ 2 − 3 when taking into account subclustering within the ACS field or sources selected using different S/N or colour cuts (see Stiavelli et al. 2005; Zheng et al. 2006; Ajiki et al. 2006; Kim et al. 2008, for details). The remaining four QSO fields (J1306+0356 at z = 5.99, J1630+4012 at z = 6.05, J1048+4637 at z = 6.23, and J1148+5251 at z = 6.43) were all consistent with having no excess counts, with δΣ,ACS spanning the range from about −1 to +0.5, relatively independent of the method of selection (Kim et al. 2008). Focusing on the two overdense QSO fields, Fig. 10 tells us that overdensities of δΣ,ACS ≥ 1 are fairly common, occurring at a rate of about 17% in our 4° × 4° simulation. The probability of finding a random field with δΣ ≈ 2 − 3 is about 5 to 1%. It is evident that none of the six quasar fields have highly significant overdensities. The case for overdensities near the QSOs would strengthen if all fields showed a systematically higher, even if moderate, surface density. However, when considering the sample as a whole, the surface densities of i-dropouts near z ∼ 6 QSOs are fairly average, given that four of the QSO fields have lower or similar number counts compared to the field. With the exception perhaps of the field towards the highest redshift QSO J1148+5251, which lies at a redshift where the i-dropout selection is particularly inefficient (see Fig. 4), the lack of evidence for substantial (surface) overdensities in the QSO fields is puzzling.
In Fig. 18 we have plotted the number of i775-dropouts encountered in cubic regions of 20 × 20 × 20 h^−1 Mpc against the mass of the most massive dark matter halo found in each region; a sketch of this statistic is given below. Panels on the left and on the right are for limiting magnitudes of z850 = 27.5 and 26.5 mag, respectively. Because the most massive halos are so rare, here we have used the full MR snapshots at z = 5.7 (top panels) and z = 6.2 (bottom panels) rather than the lightcone, in order to improve the statistics. There is a systematic increase in the number of neighbours with increasing maximum halo mass. However, the scatter is very large: for example, focusing on the neighbour count prediction for z = 5.7 and z850 < 26.5 (top right panel), we see that the number of neighbours of a halo of 10^12 h^−1 M⊙ can be anywhere between 0 and 20, and some of the most massive halos of 10^13 h^−1 M⊙ have a relatively low number of counts compared to some of the halos of significantly lower mass that are far more numerous. However, for a given z850 < 26.5 neighbour count (in a 20 × 20 × 20 h^−1 Mpc region) of ≳ 5, the halo mass is always above ∼ 10^11.5 h^−1 M⊙, and if one were to observe ≳ 25 i775-dropout counts one could conclude that the field must contain a supermassive halo of ≳ 10^12.5 h^−1 M⊙. Thus, in principle, one can only estimate a lower limit on the maximum halo mass as a function of the neighbour counts. The left panel shows that the scatter is much reduced if we are able to count galaxies to a limiting z850-band magnitude of 27.5 instead of 26.5, simply because the Poisson noise is greatly reduced.

Figure 18. Panels show the number of neighbours (i775-dropouts) in cubic regions of (20 h^−1)^3 Mpc^3 versus the mass of the most massive halo found in each of those regions. Top and bottom panels are for snapshots at z = 5.7 and z = 6.2, respectively. Left and right panels are for neighbour counts down to limiting magnitudes of z850 = 27.5 (left) and z850 = 26.5 mag, respectively. There is a wide dispersion in the number of neighbours, even for the most massive halos at z ∼ 6. The highest numbers of neighbours are exclusively associated with the massive end of the halo mass function, allowing one to derive a lower limit on the mass of the most massive halo for a given number of neighbour counts. The scatter in the number of neighbours versus the mass of the most massive halo reduces significantly when going to fainter magnitudes. The small squares in the panels on the right correspond to the three richest regions (in terms of z850 < 26.5 mag dropouts) that are shown in close-up in Fig. 19.
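The neighbour-count statistic of Fig. 18 can be computed by gridding the box; here is a sketch, with the input position and mass arrays assumed (names are ours).

```python
# Sketch: per (20 h^-1 Mpc)^3 subvolume, count bright dropouts and record
# the most massive halo. Inputs are assumed: (N, 3) comoving positions in
# h^-1 Mpc and a matching (N,) array of halo masses.
import numpy as np

def neighbour_stats(pos_gal, pos_halo, m_halo, box=500.0, cell=20.0):
    nbins = int(box / cell)
    idx_g = tuple((pos_gal.T / cell).astype(int) % nbins)
    idx_h = tuple((pos_halo.T / cell).astype(int) % nbins)
    counts = np.zeros((nbins,) * 3, dtype=int)
    np.add.at(counts, idx_g, 1)             # dropouts per subvolume
    mmax = np.zeros((nbins,) * 3)
    np.maximum.at(mmax, idx_h, m_halo)      # most massive halo per subvolume
    return counts.ravel(), mmax.ravel()     # scatter one against the other

# Toy usage with random inputs
rng = np.random.default_rng(5)
pg = rng.uniform(0, 500, (1000, 3)); ph = rng.uniform(0, 500, (200, 3))
c, m = neighbour_stats(pg, ph, rng.lognormal(27, 1, 200))
```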
We can therefore conclude that the relatively average number of counts observed in the QSO fields is not inconsistent with the QSOs being hosted by very massive halos. However, one could make an equally strong case that they are, in fact, not. If we translate the results of Fig. 18 to the QSO fields, which cover a smaller projected area of (∼ 5 h^−1)^2 Mpc^2, and we add back in the average counts from the fore- and background provided by our lightcone data, we estimate that for QSOs at z ≈ 5.7 we require an overdensity of δΣ,ACS ∼ 4 in order to be able to put a lower limit on the QSO host mass of ∼ 10^12 M⊙, while δΣ,ACS ∼ 1 is consistent with ∼ 10^11 M⊙. At z = 6.2, we would require δΣ,ACS ≳ 2 for M ≳ 10^12 M⊙ and δΣ,ACS ≳ 1 for M ≳ 10^11.5 M⊙. Comparison with the relatively low surface overdensities observed thus suggests that the halo mass is unconstrained by the current data. Nonetheless, we can at least conclude quite firmly that the QSOs are in far less rich environments (in terms of galaxy neighbours) compared to many rich regions found both in the simulations and in some of the deep field surveys described in the previous Section. In order to get a feel for what the QSO fields might have looked like if they were in highly overdense regions, we present some close-up views in Fig. 19 of the three richest regions of z850 < 26.5 mag i775-dropouts, as marked by the small squares in Fig. 18. The central position corresponding to that of the most massive halo in each region is indicated by the green square. Large and small dots correspond to dropout galaxies having z850 < 26.5 and < 27.5 mag, respectively. For reference, we use blue circles to indicate galaxies that have been identified as part of a proto-cluster structure. The scale bar at the top left in each panel corresponds to the size of an ACS/WFC pointing used to observe z ∼ 6 QSO fields. We make a number of interesting observations. First, using the current observational limits on depth (z850 = 26.5 mag) and field size (3.4′, see scale bar) imposed by the ACS observations of QSOs, it would actually be quite easy to miss some of these structures, as they typically span a larger region of 2−3 ACS fields in diameter. Going to much fainter magnitudes would help considerably, but this is at present unfeasible. Note, also, that in three of the panels presented here, the galaxy associated with the massive central halo does not pass our magnitude limits. It is missed due to dust obscuration associated with very high star formation rates inside these halos, implying that such objects will be missed by large-area UV searches as well (unless, of course, they also host a luminous, unobscured QSO). Finally, we investigate the level of mutual correspondence between the most massive halos selected at z = 6 and z = 0. In Fig. 19 we already saw that the richest regions are associated with a very large number of galaxies that will become part of a cluster when evolved to z = 0. In the top row of Fig. 20 we show the mass of the most massive progenitors at z = 5.7 (left), z = 6.2 (middle) and z = 6.7 (right) of halos selected at z = 0 (see also Trenti et al. 2008). The dotted line indicates the threshold corresponding to massive galaxy clusters at z = 0. Although the progenitor mass increases systematically with increasing local mass, the dispersion in the mass of the most massive z ∼ 6 progenitors is about or over one order of magnitude, and this is true even for the most massive clusters. As explained in detail by Trenti et al.
(2008), this observation leads to an interesting complication when using the refinement technique often used to simulate the most massive regions in the early Universe by resimulating at high redshift the most massive region identified at z = 0 in a coarse grid simulation. In the bottom panels of Fig. 20 we show the inverse relation, between the most massive halos selected at z ∼ 6 and their most massive descendants at z = 0. From this it becomes clear that even though the most massive z ∼ 6 halos (e.g. those associated with QSOs) are most likely to end up in present-day clusters, some evolve into only modest local structures more compatible with, e.g., galaxy groups. This implies that the present-day descendants of some of the first, massive galaxies and supermassive black holes must be sought in sub-cluster environments.

Figure 19. Close-up views of three (20 h^−1)^3 Mpc^3 regions that were found to be highly overdense in z850 < 26.5 mag i775-dropouts, as marked by the squares in Fig. 18. The top row of panels corresponds to the three richest regions found at z = 5.7, while the bottom row corresponds to those at z = 6.2. The position of the most massive halo in each region is indicated by a green square. Large and small dots correspond to dropout galaxies having z850 < 26.5 and < 27.5 mag, respectively. Galaxies that have been identified as part of a proto-cluster structure are indicated by blue circles. The scale bar at the top left in each panel corresponds to the size of an ACS/WFC pointing used to observe z ∼ 6 QSO fields. Note that the galaxy corresponding to the most massive halo, as indicated by the green square, is not always detected in our i775-dropout survey, due to dust obscuration associated with very high star formation rates.
DISCUSSION
Although our findings of the previous Section show that the apparent lack of excess neighbour counts near z ∼ 6 QSOs is not inconsistent with them being hosted by supermassive dark matter halos as suggested by their low co-moving abundance and large inferred black hole mass, it is interesting to note that none of the QSO fields have densities that would place them amongst the richest structures in the z ∼ 6 Universe. This leads to an intriguing question: where is the large-scale structure associated with QSOs?
One possibility that has been discussed (e.g. Kim et al. 2008) is that while the dark matter density near the QSOs is significantly higher compared to other fields, the strong ionizing radiation from the QSO may prohibit the condensation of gas, thereby suppressing galaxy formation. Although it is not clear exactly how important such feedback processes are, we have found that proto-clusters in the MR form inside density enhancements that can extend up to many tens of Mpc in size. Although we do not currently know whether the z ∼ 6 QSOs might be associated with overdensities on scales larger than a few arcminutes as probed by the ACS, it is unlikely that the QSO ionization field will suppress the formation of galaxies on such large scales (Wyithe et al. 2005). An alternative, perhaps more likely, explanation for the deficit of i775-dropouts near QSOs is that the dark matter halo mass of the QSOs is being greatly overestimated. Willott et al. (2005) suggest that the steepness of the halo mass function for rare high redshift halos on the one hand, combined with the sizeable intrinsic scatter in the correlation between black hole mass and stellar velocity dispersion or halo mass at low redshift on the other, makes it much more probable that a 10^9 M⊙ black hole is found in relatively low mass halos than in a very rare halo of extremely high mass. Depending on the exact value of the scatter, the typical mass of a halo hosting a z ∼ 6 QSO may be reduced by ∼ 0.5 − 1.5 in log M_halo without breaking the low redshift M−σ_v relation. The net result is that QSOs occur in some subset of halos found in substantially less dense environments, which may explain the observations. This notion seems to be confirmed by the low dynamical mass of ∼ 5 × 10^10 M⊙ estimated for the inner few kpc region of SDSS J1148+5251 at z = 6.43 based on its CO line emission (Walter et al. 2004). This is in complete contradiction with the ∼ 10^12 M⊙ stellar mass bulges and ∼ 10^13 M⊙ mass halos derived based on other arguments. If true, models should then explain why the number density of such QSOs is as observed. On the other hand, recent theoretical work by Dijkstra et al. (2008) suggests that in order to facilitate the formation of a supermassive (∼ 10^9 M⊙) black hole by z ∼ 6 in the first place, it may be required to have a rare pair of dark matter halos (∼ 10^13 M⊙) in which the intense UV radiation from one halo prevents fragmentation of the gas in the other, so that the gas collapses directly into a supermassive black hole. This would constrain the QSOs to lie in even richer environments.

Figure 20. The correspondence between the most massive halos selected at z = 0 and z = 6 (see also Trenti et al. 2008). In the top row of panels we plot the mass of the most massive progenitor of halos selected at z = 0, for snapshots at z = 5.7 (left), z = 6.2 (middle) and z = 6.7 (right). In the bottom row of panels we plot the mass of the most massive z = 0 descendant for halos selected at z = 5.7 (left), z = 6.2 (middle) and z = 6.7 (right). In all panels the dotted line indicates the mass corresponding to the threshold we use to define clusters at z = 0 (M ≥ 10^14 h^−1 M⊙). The dispersion in the mass of the most massive z ∼ 6 progenitors of z = 0 clusters is over an order of magnitude. Conversely, the most massive halos present at z ∼ 6 are not necessarily the most massive halos at z = 0, and a minority do not even pass the threshold imposed for qualifying as a z = 0 cluster.
RECOMMENDATIONS FOR FUTURE OBSERVATIONS
The predicted large-scale distributions of i775-dropouts and Lyα emitters shown in, e.g., Figs. 8, 15 and 16 show evidence for variations in the large-scale structure on scales of up to ∼1−2°, far larger than currently probed by deep HST or large-area ground-based surveys. A full appreciation of such structures could be important for a range of topics, including studies of the luminosity function at z ∼ 6 and comparisons between ΛCDM predictions and the observed gravitational clustering on very large scales. The total area probed by our simulation is a good match to a survey of ≃20 deg² targeting i775-dropouts and Lyα emitters at z ∼ 6 planned with the forthcoming Subaru/Hyper Suprime-Cam (first light expected 2013; M. Ouchi, private communication, 2008).
We found that the i775-dropouts associated with proto-clusters are almost exclusively found in regions with positive density enhancements. A proper understanding of such dense regions may also be very important for studies of the epoch of reionization. Simulations suggest that even though the total number of ionizing photons is much larger in very large proto-cluster regions covering several tens of comoving Mpc than in the field (e.g. Ciardi et al. 2003; but see Iliev et al. 2006), they may still be the last to fully reionize, because the recombination rates are also much higher. If regions associated with QSOs or other structures were to contain significant patches of neutral hydrogen, this may affect both the observed number densities and the clustering of LBGs or Lyα emitters relative to our assumed mean attenuation (McQuinn et al. 2007). However, since our work mostly focuses on z ≈ 6, when reionization is believed to be largely completed, this may not be such an issue compared to surveys that probe earlier times at z ≳ 7 (e.g. Kashikawa et al. 2006; McQuinn et al. 2007).
Our evaluation of the possible structures associated with QSOs leads to several suggestions for future observations. While it is unlikely that the Wide Field Camera 3 (WFC3), to be installed onboard HST in early 2009, will provide better constraints than HST/ACS due to its relatively small field-of-view, we have shown that either by surveying a larger area of ∼10′ × 10′ or by going ∼1 mag deeper in z850, one significantly reduces the shot noise in the neighbour counts, allowing more reliable overdensities and (lower) limits on the halo masses to be estimated. A single pointing with ACS would require ∼15−20 orbits in z850 to reach a 5σ point-source sensitivity for a z850 = 27.5 mag object at z ∼ 6. Given the typical sizes of the overdense regions shown in Fig. 19, a better approach would perhaps be to expand the area of the current QSO fields by several more ACS pointings at their present depth of z850 = 26.5 mag for about an equal amount of observing time. However, this may be achieved from the ground as well using the much more efficient wide-field detector systems. Although this has been attempted by Willott et al. (2005), targeting three of the QSO fields, we note that their achieved depth of z850 = 25.5 mag was probably much too shallow to find any overdensities even if they are there. We would like to stress that it is extremely important that foreground contamination is reduced as much as possible, for example by combining the observations with a deep exposure in the V606 band. This is currently not available for the QSO fields, making it very hard to calculate the exact magnitude of any excess counts present. While a depth of z850 = 27.5 mag seems out of reach for a statistical sample with HST, narrow-band Lyα surveys targeting the typically UV-faint Lyα emitters from the ground would be a very efficient alternative. Although a significant fraction of sources lacking Lyα may be lost compared to dropout surveys, such surveys have the clear additional advantage of redshift information. Most Lyα surveys are carried out in the atmospheric transmission windows that correspond to redshifted Lyα either at z ≈ 5.7 or z ≈ 6.6, for which efficient narrow-band filters exist. We therefore suggest that the experiment is most likely to succeed around QSOs at z ≈ 5.7 rather than the QSOs at z ≃ 5.8−6.4 targeted so far. It is, however, possible to use combinations of, e.g., the z ≈ 5.7 narrow-band filter with medium or broad band filters at ∼9000 Å to place stronger constraints on the photometric redshifts of i775-dropouts in QSO fields (e.g., see Ajiki et al. 2006).
In the next decade, JWST will allow for some intriguing further possibilities that may provide definite answers. Using the target 0.7−0.9 µm sensitivity of the Near-Infrared Camera (NIRCam) on JWST, we could reach point sources at 10σ as faint as z850 = 28.5 mag in a 10,000 s exposure, or we could map a large ∼10′ × 10′ region around QSOs to a depth of z850 = 27.5 mag within a few hours. The Near-Infrared Spectrograph (NIRSpec) will allow >100 simultaneous spectra to confirm the redshifts of very faint line or continuum objects over a >9 arcmin² field of view.

Table 1. Overview of i-dropout surveys.
SUMMARY
Figure 1. Redshift versus the (co-moving) X-coordinate for all objects within a slice of width ΔY = 250 h^−1 Mpc along the Y-axis.

Figure 2. The simulation box showing the positions in co-moving coordinates of all objects identified as i775-dropout galaxies to z850 = 27.0 mag.

Figure 3. Colour-colour diagrams of the MR mock i775-dropout survey. To guide the eye we have indicated tracks showing the colours of a 100 Myr old continuous starburst model from Bruzual & Charlot (2003) for different amounts of reddening in E(B − V) of 0.0 (blue), 0.2 (green), and 0.4 (red). Redshifts are indicated along the zero-reddening track. Only objects at z > 5.6 are included in the simulations, as i775-dropout surveys have been demonstrated to have very little contamination (see text for details).

Figure 4. Redshift histograms derived from the MR mock i-dropout survey at the depth of z850 = 27.5 mag using the selection criteria i775 − z850 > 1.3 (thick solid line; error bars indicate the 1σ scatter expected among GOODS-sized fields), i775 − z850 > 1.5 (dashed line), i775 − z850 < 2.0 (blue line), and i775 − z850 > 2.0 (red line). The thin solid line indicates the model redshift distribution from B06 based on the HUDF.

Figure 5. Physical properties of i-dropouts in the MR mock survey satisfying z850 ≲ 26.5 mag. We plot the cumulative fractions of galaxies with stellar masses, star formation rates, stellar ages and halo masses greater than a given value. Top left: distribution of stellar masses; the median stellar mass is ∼4 × 10^9 M⊙ h^−1 (dotted line). Top right: distribution of SFRs; the median SFR is ∼30 M⊙ yr^−1 (dotted line). Bottom left: distribution of mass-weighted ages; the median age is ∼160 Myr (dotted line). Bottom right: distribution of halo masses; the median halo mass is ∼2 × 10^11 M⊙ h^−1 (dotted line).

Figure 6. Number density versus halo mass of the z = 0 dark matter halos hosting descendants of i-dropouts at z ∼ 6 to a limiting depth of z850 ≲ 26.5 mag. The median i-dropout descendant halo mass is a few times 10^13 M⊙ (dotted line). The halo mass function of all MR halos at z = 0 is indicated by the dashed line. The mass range occupied by the halos associated with galaxy clusters is indicated by the hatched region.

Figure 7. Number density versus stellar mass of the galaxies at z = 0 that have at least one i-dropout progenitor at z ∼ 6. The median descendant mass is ∼10^11 M⊙ (dotted line). The distribution of stellar mass of all MR z = 0 galaxies is indicated for comparison (dashed line).

Figure 8. Projected distribution on the sky of the z ∼ 6 i-dropouts selected from the MR mock survey according to the criteria i775 − z850 > 1.3 and z850 ≲ 26.5 mag (small and large points).

Figure 9. Counts-in-cells frequency distribution of the i-dropouts shown in Fig. 8, based on 20,000 randomly placed ACS-sized fields of 3.4′ × 3.4′. The panel on the right shows a zoomed-in view to give a better sense of the small fraction of pointings having large numbers of i-dropouts. In both panels, the thick solid line indicates the frequency distribution of the full MR mock survey. The dashed line indicates how the distribution changes if we "disrupt" all proto-cluster regions of Fig. 8 by randomizing the positions of the galaxies marked as proto-cluster galaxies. The dotted line indicates the frequency distribution of a large sample of i-dropouts selected from the GOODS survey by B06 using identical selection criteria. Thin solid lines indicate the Poisson distribution for a mean of 2 i-dropouts per pointing.

Figure 10. Panels show the cumulative probability distributions of finding regions having a surface overdensity > δΣ,ACS of i-dropouts for the four samples extracted from the MR mock survey based on colour cuts of i775 − z850 > 1.3 (top-left), i775 − z850 > 1.5 (top-right), 1.3 < i775 − z850 < 2.0 (bottom-left), and i775 − z850 > 2.0 (bottom-right). The inset plots show the full probability distributions. Dashed, coloured lines indicate the joint probability of finding cells having an overdensity > δΣ,ACS and those cells consisting of at least 25% (blue), 50% (green) and 75% proto-cluster galaxies.

Figure 12. Panels show the angular distribution of i775-dropouts in 30′ × 30′ areas centered on each of 16 protoclusters associated with overdensities δΣ,ACS ≳ 3. Field galaxies are drawn as open circles, and cluster galaxies as filled circles that are colour coded according to their cluster membership. The ACS field-of-view is indicated by a red square. Numbers near the top of each panel indicate the ID, redshift, overdensity and cluster mass (at z = 0) of the protocluster in the center of each panel.

Figure 13. Lines show overdensity as a function of radius for each of the protocluster regions shown in Fig. 12.

Figure 14. Panels show redshift versus one of the angular coordinates of i775-dropouts for each of the protocluster regions shown in Fig. 12.

Figure 16. Mock Lyα survey at z ≃ 5.8 ± 0.05 constructed from the MR mock i775-dropout sample. Grey solid lines are surface density contours of δΣ,20Mpc = −0.25 to 3.25 with a step increase of 0.5.
The rest-frame absolute magnitude at 1350 Å is defined as M_1350Å ≃ m_z − 5 log10(d_L/10 pc) + 2.5 log10(1 + z).

For reference: a z850 magnitude of ≃26.5 mag for an unattenuated galaxy at z ≃ 6 would correspond to a SFR of ≃7 M⊙ yr^−1, under the widely used assumption of a 0.1−125 M⊙ Salpeter initial mass function and the conversion factor between SFR and the rest-frame 1500 Å UV luminosity of 8.0 × 10^27 erg s^−1 Hz^−1 / (M⊙ yr^−1) as given by Madau et al. (1998).

The 'tophat' mass, M_tophat, is the mass within the radius at which the halo has an overdensity corresponding to the value at virialisation in the top-hat collapse model (see White 2001).

The significance of the overdensity in this field is less than originally stated by Zheng et al. (2006) as a result of underestimating the contamination rate when a V606 image is not available to reject lower-redshift interlopers.
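The two conversions above can be checked numerically. The short Python sketch below is illustrative only: the luminosity distance to z ≃ 6 is hard-coded to an assumed value of ≈1.8 × 10^29 cm (≈58 Gpc) rather than derived from a specific cosmology.

    import math

    def sfr_from_zmag(m_ab, z=6.0, d_L_cm=1.8e29):
        """UV magnitude to SFR via the Madau et al. (1998) conversion factor.

        d_L_cm is an assumed luminosity distance to z ~ 6 (~58 Gpc); the (1 + z)
        factor converts the observed flux density to the rest frame.
        """
        f_nu = 10.0 ** (-(m_ab + 48.6) / 2.5)             # AB flux density [erg/s/cm^2/Hz]
        L_nu = 4.0 * math.pi * d_L_cm ** 2 * f_nu / (1.0 + z)
        return L_nu / 8.0e27                              # [M_sun/yr]

    print(f"{sfr_from_zmag(26.5):.1f} M_sun/yr")          # ~7 M_sun/yr, as in the footnote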
The main findings of our investigation can be summarized as follows.
• We have used the N-body plus semi-analytic modeling of De Lucia & Blaizot (2007) to construct the largest (4° × 4°) mock galaxy redshift survey of star-forming galaxies at z ∼ 6 to date. We extracted large samples of i775-dropouts and Lyα emitters from the simulated survey, and showed that the main observational (colours, number densities, redshift distribution) and physical properties (M*, SFR, age, M_halo) are in fair agreement with the data as far as they are constrained by various surveys.
• The present-day descendants of i775-dropouts (brighter than M*_UV,z=6) are typically found in group environments at z = 0 (halo masses of a few times 10^13 M⊙). About one third of all i775-dropouts end up in halos corresponding to clusters, implying that the contribution of "proto-cluster galaxies" in typical i775-dropout surveys is significant.
• The projected sky distribution shows significant variations in the local surface density on scales of up to 1°, indicating that the largest surveys to date do not yet probe the full range of scales predicted by our ΛCDM models. This may be important for studies of the luminosity function, galaxy clustering, and the epoch of reionization.
• We present counts-in-cells frequency distributions of the number of objects expected per 3.4′ × 3.4′ HST/ACS field of view, finding good agreement with the GOODS field statistics. The largest positive deviations are due to structures associated with the seeds of massive clusters of galaxies ("protoclusters"). To guide the interpretation of current and future HST/ACS observations, we give the probabilities of randomly finding regions of a given surface overdensity depending on the presence or absence of a protocluster.
• We give detailed examples of the structure of proto-cluster regions. Although the typical separation between protocluster galaxies does not reach beyond ∼10′ (25 Mpc comoving), they sit in overdensities that extend up to 30′ in radius, indicating that the protoclusters predominantly form deep inside the largest filamentary structures. These regions are very similar to two proto-clusters of i775-dropouts or Lyα emitters found in the SDF (Ota et al. 2008) and SXDF fields.
• We have made a detailed comparison between the number counts predicted by our simulation and those measured in fields observed with HST/ACS towards luminous z ∼ 6 QSOs from SDSS, concluding that the observed fields are not particularly overdense in neighbour counts. We demonstrate that this does not rule out that the QSOs are in the most massive halos at z ∼ 6, although we can also not confirm it. We discuss the possible reasons and implications of this intriguing result (see the Discussion in Section 6).
• We give detailed recommendations for follow-up observations using current and future instruments that can be used to better constrain the halo masses of z ∼ 6 QSOs and the variations in the large-scale structure as probed by i775-dropouts and Lyα emitters (see Section 7).

ACKNOWLEDGMENTS
Many colleagues have contributed to this work. We thank Tom Abel, Jérémy Blaizot, Bernadetta Ciardi, Soyoung Kim, Sangeeta Malhotra, James Rhoads, Massimo Stiavelli, and Bram Venemans for their time and suggestions. We are grateful to Masami Ouchi for a careful reading of our manuscript and insightful comments. We owe great gratitude to Volker Springel and Simon White and their colleagues at MPA responsible for the Millennium Run Project. The Millennium Simulation databases used in this paper and the web application providing online access to them were constructed as part of the activities of the German Astrophysical Virtual Observatory. RAO acknowledges the support and hospitality of the Aspen Center for Physics where part of this research was carried out.

Table note (a): observed surface densities from Bouwens et al. (2007) and Ota et al. (2008).
Table note (a, Kim et al. 2008): the numbers between parentheses correspond to the significance and the diameter of a circular aperture.
Ajiki, M., et al. 2006, PASJ, 58, 113
Barth, A. J., Martini, P., Nelson, C. H., & Ho, L. C. 2003, ApJL, 594, L95
Beers, T. C., Flynn, K., & Gebhardt, K. 1990, AJ, 100, 32
Bertoldi, F., et al. 2003, A&A, 409, L47
Begelman, M. C., Volonteri, M., & Rees, M. J. 2006, MNRAS, 370, 289
Blaizot, J., Wadadekar, Y., Guiderdoni, B., Colombi, S. T., Bertin, E., Bouchet, F. R., Devriendt, J. E. G., & Hatton, S. 2005, MNRAS, 360, 159
Bouwens, R. J., et al. 2003, ApJ, 595, 589
Bouwens, R. J., et al. 2004a, ApJL, 606, L25
Bouwens, R. J., Illingworth, G. D., Blakeslee, J. P., & Franx, M. 2006, ApJ, 653, 53
Bouwens, R. J., Illingworth, G. D., Franx, M., & Ford, H. 2007, ApJ, 670, 928
Bouwens, R. J., Illingworth, G. D., Blakeslee, J. P., Broadhurst, T. J., & Franx, M. 2004b, ApJL, 611, L1
Bruzual, G., & Charlot, S. 2003, MNRAS, 344, 1000
Calzetti, D. 2001, PASP, 113, 1449
Ciardi, B., Stoehr, F., & White, S. D. M. 2003, MNRAS, 343, 1101
Croton, D. J., et al. 2006, MNRAS, 365, 11
Davé, R., Finlator, K., & Oppenheimer, B. D. 2006, MNRAS, 370, 273
De Lucia, G., Kauffmann, G., & White, S. D. M. 2004, MNRAS, 349, 1101
De Lucia, G., & Blaizot, J. 2007, MNRAS, 375, 2
Dickinson, M., et al. 2004, ApJL, 600, L99
Djorgovski, S. G., Stern, D., Mahabal, A. A., & Brunner, R. 2003, ApJ, 596, 67
Dow-Hygelund, C. C., et al. 2005, ApJL, 630, L137
Dijkstra, M., et al. 2008, MNRAS, submitted (arXiv:0810.0014)
Eyles, L. P., Bunker, A. J., Ellis, R. S., Lacy, M., Stanway, E. R., Stark, D. P., & Chiu, K. 2007, MNRAS, 374, 910
Fan, X., et al. 2003, AJ, 125, 1649
Fan, X., et al. 2006b, AJ, 132, 117
Fan, X., et al. 2006a, AJ, 131, 1203
Fan, X., et al. 2004, AJ, 128, 515
Fan, X., et al. 2001, AJ, 122, 2833
Finlator, K., Davé, R., & Oppenheimer, B. D. 2007, MNRAS, 376, 1861
Gayler Harford, A., & Gnedin, N. Y. 2006, submitted to ApJ (astro-ph/0610057)
Goto, T. 2006, MNRAS, 371, 769
Gunn, J. E., & Peterson, B. A. 1965, ApJ, 142, 1633
Guo, Q., & White, S. D. M. 2008, arXiv:0809.4259
Haiman, Z., & Loeb, A. 2001, ApJ, 552, 459
Iliev, I. T., Mellema, G., Pen, U.-L., Merz, H., Shapiro, P. R., & Alvarez, M. A. 2006, MNRAS, 369, 1625
Jiang, L., Fan, X., Vestergaard, M., Kurk, J. D., Walter, F., Kelly, B. C., & Strauss, M. A. 2007, AJ, 134, 1150
Jiang, L., et al. 2006, AJ, 132, 2127
Kauffmann, G., Colberg, J. M., Diaferio, A., & White, S. D. M. 1999, MNRAS, 303, 188
Kashikawa, N., et al. 2004, PASJ, 56, 1011
Kashikawa, N., et al. 2006, ApJ, 648, 7
Kashikawa, N., Kitayama, T., Doi, M., Misawa, T., Komiyama, Y., & Ota, K. 2007, ApJ, 663, 765
Khochfar, S., Silk, J., Windhorst, R. A., & Ryan, R. E., Jr. 2007, ApJL, 668, L115
Kitzbichler, M. G., & White, S. D. M. 2007, MNRAS, 376, 2
Kim, S., et al. 2008, ArXiv e-prints, arXiv:0805.1412
Kurk, J. D., et al. 2007, ArXiv e-prints, arXiv:0707.1662
Lai, K., Huang, J.-S., Fazio, G., Cowie, L. L., Hu, E. M., & Kakazu, Y. 2007, ApJ, 655, 704
Lemson, G., & the Virgo Consortium 2006, e-print (arXiv:astro-ph/0608019)
Lemson, G., & Springel, V. 2006, Astronomical Data Analysis Software and Systems XV, 351, 212
Li, Y., et al. 2007, ApJ, 665, 187
Madau, P. 1995, ApJ, 441, 18
Madau, P., Pozzetti, L., & Dickinson, M. 1998, ApJ, 498, 106
Malhotra, S., et al. 2005, ApJ, 626, 666
Maiolino, R., et al. 2005, A&A, 440, L51
Magorrian, J., et al. 1998, AJ, 115, 2285
McLure, R. J., Cirasuolo, M., Dunlop, J. S., Foucaud, S., & Almaini, O. 2008, ArXiv e-prints, arXiv:0805.1335
McLure, R. J., et al. 2006, MNRAS, 372, 357
McQuinn, M., Hernquist, L., Zaldarriaga, M., & Dutta, S. 2007, MNRAS, 381, 75
Miley, G. K., et al. 2004, Nature, 427, 47
Monaco, P., Møller, P., Fynbo, J. P. U., Weidinger, M., Ledoux, C., & Theuns, T. 2005, A&A, 440, 799
Muñoz, J. A., & Loeb, A. 2008a, MNRAS, 385, 2175
Muñoz, J. A., & Loeb, A. 2008b, MNRAS, 386, 2323
Narayanan, D., et al. 2007, ArXiv e-prints, arXiv:0707.3141
Nagamine, K., Cen, R., Furlanetto, S. R., Hernquist, L., Night, C., Ostriker, J. P., & Ouchi, M. 2006, New Astronomy Review, 50, 29
Nagamine, K., Ouchi, M., Springel, V., & Hernquist, L. 2008, ApJ, submitted (arXiv:0802.0228)
Night, C., Nagamine, K., Springel, V., & Hernquist, L. 2006, MNRAS, 366, 705
Oesch, P. A., et al. 2007, ApJ, 671, 1212
Ota, K., Kashikawa, N., Malkan, M. A., Iye, M., Nakajima, T., Nagao, T., Shimasaku, K., & Gandhi, P. 2008, ArXiv e-prints, arXiv:0804.3448
Ota, K., Kashikawa, N., Nakajima, T., & Iye, M. 2005, Journal of Korean Astronomical Society, 38, 179
Ouchi, M., et al. 2004, ApJ, 611, 685
Ouchi, M., et al. 2005, ApJL, 620, L1
Overzier, R. A., Bouwens, R. J., Illingworth, G. D., & Franx, M. 2006, ApJL, 648, L5
Overzier, R. A., et al. 2008a, ApJ, 677, 37
Overzier, R. A., et al. 2008b, ApJ, 673, 143
Priddey, R. S., Ivison, R. J., & Isaak, K. G. 2007, ArXiv e-prints, arXiv:0709.0610
Robertson, B., Li, Y., Cox, T. J., Hernquist, L., & Hopkins, P. F. 2007, ApJ, 667, 60
Shimasaku, K., et al. 2003, ApJL, 586, L111
Shimasaku, K., Ouchi, M., Furusawa, H., Yoshida, M., Kashikawa, N., & Okamura, S. 2005, PASJ, 57, 447
Springel, V. 2005b, MNRAS, 364, 1105
Spergel, D. N., et al. 2003, ApJS, 148, 175
Springel, V., et al. 2005a, Nature, 435, 629
Stanway, E. R., Bunker, A. J., & McMahon, R. G. 2003, MNRAS, 342, 439
Stiavelli, M., et al. 2005, ApJL, 622, L1
Steidel, C. C., Adelberger, K. L., Shapley, A. E., Erb, D. K., Reddy, N. A., & Pettini, M. 2005, ApJ, 626, 44
Steidel, C. C., Adelberger, K. L., Dickinson, M., Giavalisco, M., Pettini, M., & Kellogg, M. 1998, ApJ, 492, 428
Stanway, E. R., et al. 2007, MNRAS, 376, 727
Suwa, T., Habe, A., & Yoshikawa, K. 2006, ApJL, 646, L5
Trenti, M., Santos, M. R., & Stiavelli, M. 2008, ArXiv e-prints, arXiv:0807.3352
Venemans, B. P., McMahon, R. G., Warren, S. J., Gonzalez-Solares, E. A., Hewett, P. C., Mortlock, D. J., Dye, S., & Sharp, R. G. 2007, MNRAS, 376, L76
Vestergaard, M. 2004, ApJ, 601, 676
Venemans, B. P., et al. 2007, A&A, 461, 823
Volonteri, M., & Rees, M. J. 2006, ApJ, 650, 669
Walter, F., Carilli, C., Bertoldi, F., Menten, K., Cox, P., Lo, K. Y., Fan, X., & Strauss, M. A. 2004, ApJL, 615, L17
Wang, R., et al. 2007, AJ, 134, 617
Wang, J., De Lucia, G., Kitzbichler, M. G., & White, S. D. M. 2008, MNRAS, 384, 1301
White, M. 2001, A&A, 367, 27
White, R. L., Becker, R. H., Fan, X., & Strauss, M. A. 2003, AJ, 126, 1
Willott, C. J., Percival, W. J., McLure, R. J., Crampton, D., Hutchings, J. B., Jarvis, M. J., Sawicki, M., & Simard, L. 2005, ApJ, 626, 657
Willott, C. J., McLure, R. J., & Jarvis, M. J. 2003, ApJL, 587, L15
Wyithe, J. S. B., Loeb, A., & Carilli, C. 2005, ApJ, 628, 575
Yan, H., Dickinson, M., Giavalisco, M., Stern, D., Eisenhardt, P. R. M., & Ferguson, H. C. 2006, ApJ, 651, 24
Yan, H., & Windhorst, R. A. 2004a, ApJL, 612, L93
Yan, H., & Windhorst, R. A. 2004b, ApJL, 600, L1
Zentner, A. R. 2007, International Journal of Modern Physics D, 16, 763
Zheng, W., et al. 2006, ApJ, 640, 574
| [] |
[
"Multi-level Adaptation for Automatic Landing with Engine Failure under Turbulent Weather",
"Multi-level Adaptation for Automatic Landing with Engine Failure under Turbulent Weather",
"Multi-level Adaptation for Automatic Landing with Engine Failure under Turbulent Weather",
"Multi-level Adaptation for Automatic Landing with Engine Failure under Turbulent Weather"
] | [
"Haotian Gu \nStevens Institute of Technology\n07030HobokenNew Jersey\n",
"Hamidreza Jafarnejadsani \nStevens Institute of Technology\n07030HobokenNew Jersey\n",
"Haotian Gu \nStevens Institute of Technology\n07030HobokenNew Jersey\n",
"Hamidreza Jafarnejadsani \nStevens Institute of Technology\n07030HobokenNew Jersey\n"
] | [
"Stevens Institute of Technology\n07030HobokenNew Jersey",
"Stevens Institute of Technology\n07030HobokenNew Jersey",
"Stevens Institute of Technology\n07030HobokenNew Jersey",
"Stevens Institute of Technology\n07030HobokenNew Jersey"
] | [] | This paper addresses efficient feasibility evaluation of possible emergency landing sites, online navigation, and path following for automatic landing under engine-out failure subject to turbulent weather. The proposed Multi-level Adaptive Safety Control framework enables unmanned aerial vehicles (UAVs) under large uncertainties to perform safety maneuvers traditionally reserved for human pilots with sufficient experience. In this framework, a simplified flight model is first used for time-efficient feasibility evaluation of a set of landing sites and trajectory generation. Then, an online path following controller is employed to track the selected landing trajectory. We used a high-fidelity simulation environment for a fixed-wing aircraft to test and validate the proposed approach under various weather uncertainties. For the case of emergency landing due to engine failure under severe weather conditions, the simulation results show that the proposed automatic landing framework is robust to uncertainties and adaptable at different landing stages while being computationally inexpensive for planning and tracking tasks. | 10.2514/6.2023-0697 | [
"https://export.arxiv.org/pdf/2209.04132v1.pdf"
] | 252,185,312 | 2209.04132 | 2ebdb3d5f1326f5bf1bb6e971f6a4f5e02da391c |
Multi-level Adaptation for Automatic Landing with Engine Failure under Turbulent Weather
Haotian Gu
Stevens Institute of Technology
07030HobokenNew Jersey
Hamidreza Jafarnejadsani
Stevens Institute of Technology
07030HobokenNew Jersey
Multi-level Adaptation for Automatic Landing with Engine Failure under Turbulent Weather
This paper addresses efficient feasibility evaluation of possible emergency landing sites, online navigation, and path following for automatic landing under engine-out failure subject to turbulent weather. The proposed Multi-level Adaptive Safety Control framework enables unmanned aerial vehicles (UAVs) under large uncertainties to perform safety maneuvers traditionally reserved for human pilots with sufficient experience. In this framework, a simplified flight model is first used for time-efficient feasibility evaluation of a set of landing sites and trajectory generation. Then, an online path following controller is employed to track the selected landing trajectory. We used a high-fidelity simulation environment for a fixed-wing aircraft to test and validate the proposed approach under various weather uncertainties. For the case of emergency landing due to engine failure under severe weather conditions, the simulation results show that the proposed automatic landing framework is robust to uncertainties and adaptable at different landing stages while being computationally inexpensive for planning and tracking tasks.
I. Introduction
Unmanned aerial vehicle (UAV) technology, which is moving towards fully autonomous flight, requires operation under uncertainties due to dynamic environments, interaction with humans, system faults, and even malicious cyber attacks. Ensuring security and safety is the first step toward making solutions based on such systems certifiable and scalable. In this paper, we introduce an autopilot framework called "Multi-level Adaptive Safety Control" (MASC) for the resilient control of autonomous UAVs under large uncertainties and employ it for engine-out automatic landing under severe weather conditions.
A. MASC Architecture
In 2009, an Airbus A320 passenger plane (US Airways flight 1549) lost both engines minutes after take-off from LaGuardia airport in New York City due to severe bird strikes [1]. Captain Sullenberger safely landed the plane in the nearby Hudson River. Inspired by this story, we aim to equip UAVs with the capability of human pilots to determine if the current mission is still possible after a severe system failure. If not, the mission is re-planned so that it can be accomplished using the remaining capabilities. This is achieved by the proposed autopilot framework, MASC, which is capable of performing safe maneuvers that are traditionally reserved for human pilots. From a mission control architecture perspective, we aim to replace the traditional top-down, one-way adaptation, that starts with mission planning and cascades down to trajectory generation, tracking, and finally stabilizing controller, with an integrated top-down and bottom-up architecture that allows for two-way adaptation between planning and control to improve the stability and robustness of the system. To this end, we build the MASC framework upon the Simplex fault-tolerant architecture [2][3][4][5][6], which is recognized as a useful approach for the protection of cyber-physical systems against various software failures. By integrating the MASC framework with the Simplex architecture, we aim to enable cyber-physical systems to handle large uncertainties originating from the physical world. The MASC framework, shown in Figure 1, consists of the following components:
• Normal Mode Controller: equipped with complex functionalities to operate the system under normal conditions.
• Safe Mode Controller with Multi-level Adaptation: a simple and verified controller that ensures safe and stable operation of the system with limited levels of performance and reduced functionalities. The control architecture consists of three levels: i) offline landing trajectory prediction; ii) mission feasibility evaluation and re-planning; and iii) online trajectory generation and path-following control.
• Monitoring and Capability Auditing: uses a model considering the cyber-physical nature of autonomous systems for estimation and fault detection. The model identifies the remaining capabilities of the system, and its decision logic triggers a switch from Normal Mode to Safe Mode.

Under the engine-out flight scenario with turbulent weather, the proposed architecture adapts the mission to the new constraints by i) auditing the remaining capability of the crippled aircraft and providing feedback to the other layers; ii) updating the flight envelope; iii) evaluating the feasibility of potential reachable destinations and selecting a low-risk one; and iv) planning the flight path online and then employing a robust autopilot controller to track the path while staying within the flight envelope of the crippled UAV. Feedback provided from the lower layers to the higher layers, such as the mission planner, allows for the interaction of the MASC modules and a two-way adaptation between planning and control.
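As a minimal illustration of the Simplex-style arbitration between the two controllers, consider the sketch below. The monitor thresholds and controller interfaces are hypothetical placeholders, not the implementation used in this work.

    # Minimal sketch of Simplex-style mode arbitration (illustrative only).
    # `normal_ctrl`, `safe_ctrl`, and the thresholds are hypothetical placeholders.

    class SimplexSupervisor:
        def __init__(self, normal_ctrl, safe_ctrl, rpm_min=500.0, accel_max=9.0):
            self.normal_ctrl = normal_ctrl   # complex controller for nominal flight
            self.safe_ctrl = safe_ctrl       # simple, verified safe-mode controller
            self.rpm_min = rpm_min           # engine-RPM threshold for failure detection
            self.accel_max = accel_max       # acceleration anomaly threshold [m/s^2]
            self.safe_mode = False

        def monitor(self, sensors):
            """Capability auditing: flag engine failure from RPM and accelerometer."""
            if sensors["engine_rpm"] < self.rpm_min or abs(sensors["accel_x"]) > self.accel_max:
                self.safe_mode = True        # latching switch: stay in safe mode

        def command(self, state, sensors):
            self.monitor(sensors)
            ctrl = self.safe_ctrl if self.safe_mode else self.normal_ctrl
            return ctrl(state)

    # Usage with trivial placeholder controllers:
    sup = SimplexSupervisor(normal_ctrl=lambda s: {"throttle": 0.6},
                            safe_ctrl=lambda s: {"throttle": 0.0, "glide": True})
    cmd = sup.command(state={}, sensors={"engine_rpm": 0.0, "accel_x": 1.2})
    print(cmd)  # safe-mode command once the RPM threshold is violated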
Emergency landing due to an engine failure under severe weather conditions is challenging even for an experienced pilot. Our proposed approach, referred to as the MASC framework, provides the autopilot with the agility required to compensate for uncertainties through adaptations in planning and control. The framework can be employed on dependable computing architectures for safety control design. We utilize a computationally efficient trajectory generation and tracking control approach, which can be computed with low latency on the low-cost real-time embedded platforms onboard most UAVs. The framework also allows for the evaluation of reachable areas for an emergency landing. We tested the framework in a high-fidelity flight simulation environment for validation and verification, achieving successful landings under a wide range of initial conditions (i.e., altitude, distance, and orientation relative to the landing site), windy weather, and turbulence without re-tuning the autopilot parameters.
B. Related Work: Engine-Out Emergency Landing
UAVs play a significant role in a wide range of industries, including defense, transportation, and agriculture, to name a few [7][8][9]. Engine failure is one of the most hazardous situations for UAVs [10, 11]. While engine failures are not common in passenger aircraft, an engine-out accident is more probable for a low-cost commercial UAV. The safety risks are even higher if a UAV crashes over a populated area, endangering people and infrastructure on the ground. Numerous approaches for planning and control have been proposed in the literature to mitigate the risks due to engine failure. In [12], an adaptive flight planner (AFP) is presented for landing an engine-out aircraft. For the loss-of-thrust case, the AFP performs the two main flight-planning tasks required to get a crippled aircraft safely on the ground: i) select a landing site and ii) construct a post-failure trajectory that can safely reach that landing site. An adaptive trajectory generation scheme with a presumed best glide ratio and bank angle for turns is proposed in [13]. Additionally, trajectory planning based on the flight envelope and motion primitives is proposed in [14] for damaged aircraft. A reachable set for auto-landing is calculated using optimal control theory in [15]. Most of the related studies do not address emergency landing under additional weather uncertainties, and their simulation results are mainly based on simplified models of the aircraft and environment.
The rapidly-exploring random tree (RRT) method is a popular sampling-based path planning algorithm designed to efficiently search nonconvex, high-dimensional spaces by randomly building a space-filling tree. The algorithm creates a search tree containing a set of nodes and a set of connecting path edges. In [16], a path planning scheme is developed based on the optimal sampling-based RRT algorithm to generate a landing trajectory in real time, and its performance is examined for simulated engine failures occurring in mountainous terrain. However, RRT-based algorithms are computationally demanding when planning smooth, large-scale paths [17], and the demand increases with the dimension of the searched state space. Another motion planning approach for emergency landing is based on the artificial potential field (APF) method [18] and greedy search in the space of motion primitives. In the APF method [19], the UAV's path is calculated from the resultant potential field from the initial point to the target point. However, the conventional APF [20] may become trapped in a local minimum when the attractive and repulsive forces reach a balance, meaning that the UAV stops moving towards the target.

This paper is organized as follows. The components of the Multi-level Adaptive Safety Control (MASC) framework are presented in Section II. In particular, monitoring and capability auditing is discussed in Section II.A, the low-level safety controller is presented in Section II.B, and the mission adaptation is described in Section II.C. Section III describes the high-fidelity software-in-the-loop (SITL) simulation environment for a fixed-wing aircraft and presents the simulation results. Finally, Section IV concludes the paper.
II. Multi-level Adaptive Safety Control (MASC)
This section presents the components of the Multi-Level Adaptive Safety Control (MASC) * framework.
A. Monitoring and Capability Auditing
The monitoring and capability auditing module has a set of stored expected models D = {D_1, . . . , D_M}, where each triple

D_i = { A_i, B_i, Θ_i }    (1)

represents the plant matrices (A_i, B_i) and the uncertainty set Θ_i. In particular, each model is represented as

D_i :  ẋ(t) = A_i x(t) + B_i ( u(t) + σ_i(x(t), t) ),   y(t) = C x(t),   x(t_0) = x_0,    (2)

where x(t) ∈ R^n is the state vector, y(t) ∈ R^m is the available output measurement, and C is the output matrix. The term σ_i ∈ Θ_i, ∀(x, t) ∈ R^n × [0, ∞), represents unknown system uncertainties and disturbances subject to a local Lipschitz continuity assumption. The control input u(t) is generated by the robust low-level controller that stabilizes the model D_i with guaranteed robustness margins for a priori given bounds on the uncertainties.
Remark 1
It is worth mentioning that D is a set of nominal/representative fault models. The models do not need to be perfectly accurate, and any modeling error is expressed as σ_i ∈ Θ_i. Given a nominal model, the safe-mode controller will deal with any model mismatch or external disturbance.
Monitoring and capability auditing is an integral part of the MASC framework (shown in Figure 1), which performs the task of fault detection and isolation (FDI), i.e., noticing the existence of a fault and further identifying the fault model. Since the autopilot is a cyber-physical system, the effect of a fault is always reflected in the physical world regardless of the location of the faulty elements. Leveraging measurements of the physical state, we employ a model-based FDI approach from the control literature [21, 22]. Identifying the new model is critical for stabilizing the UAV, and it should be prioritized computationally, while the mission re-planning algorithm can take longer to converge to a feasible trajectory.
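As a concrete illustration of model-based residual generation over the stored model set D, consider the following sketch. The discrete-time models and the noise-free measurement are hypothetical; the point is only to show how the model D_i with the smallest one-step prediction residual can be selected.

    import numpy as np

    # Hypothetical discrete-time model set {(A_i, B_i)}: nominal and engine-out.
    models = {
        "nominal":    (np.array([[1.0, 0.1], [0.0, 0.98]]), np.array([[0.0], [0.1]])),
        "engine_out": (np.array([[1.0, 0.1], [0.0, 0.95]]), np.array([[0.0], [0.0]])),
    }

    def best_model(x_prev, u_prev, x_now):
        """Pick the model whose one-step prediction best explains the measurement."""
        residuals = {}
        for name, (A, B) in models.items():
            x_pred = A @ x_prev + B @ u_prev
            residuals[name] = float(np.linalg.norm(x_now - x_pred))
        return min(residuals, key=residuals.get), residuals

    # Example: the state evolves with zero control effectiveness (engine out).
    x_prev = np.array([10.0, 5.0])
    u_prev = np.array([1.0])
    x_now = models["engine_out"][0] @ x_prev   # simulated measurement
    name, res = best_model(x_prev, u_prev, x_now)
    print(name, res)                           # 'engine_out' has the smaller residual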
In the particular case of engine malfunction, monitoring sensors such as the engine's RPM indicator and the onboard accelerometer provides sufficient information for the monitoring and capability auditing module to detect the engine failure. Then, the module activates the planning and control task specifically designed for emergency landing within the Safe Control Mode module. Having identified the faults and updated the system model D_i, another critical task of the monitoring and capability auditing module is to determine the safe flight envelope for the mission re-planning. Specifically, for protecting the flight envelope during the emergency landing, it is crucial to maintain the forward airspeed around the optimal gliding speed V_opt and the corresponding best glide slope γ_opt recommended by the aircraft manufacturer. The optimal speed ensures maximum gliding distance without stalling the aircraft. In addition, box constraints of the form

V_min ≤ V ≤ V_max,  |θ| ≤ θ_max,  |φ| ≤ φ_max,  |p| ≤ p_max,  |q| ≤ q_max,  |r| ≤ r_max,    (3)

are considered for motion planning, where V, θ, and φ are the forward airspeed, pitch angle, and roll angle, respectively, and p, q, and r are the roll, pitch, and yaw rates, respectively.
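As a concrete illustration, an envelope check of this form can be implemented in a few lines; the numerical bounds below are assumed placeholder values, not the certified limits of any particular airframe.

    # Illustrative flight-envelope check for the box constraints in Eq. (3).
    # All bounds below are placeholder values, not certified airframe limits.
    ENVELOPE = {
        "V":     (25.0, 45.0),   # forward airspeed [m/s], kept near V_opt
        "theta": (-0.35, 0.35),  # pitch angle [rad]
        "phi":   (-0.52, 0.52),  # roll angle [rad]
        "p":     (-1.0, 1.0),    # roll rate [rad/s]
        "q":     (-1.0, 1.0),    # pitch rate [rad/s]
        "r":     (-0.5, 0.5),    # yaw rate [rad/s]
    }

    def in_envelope(state):
        """Return True if every state variable satisfies its box constraint."""
        return all(lo <= state[k] <= hi for k, (lo, hi) in ENVELOPE.items())

    print(in_envelope({"V": 30.0, "theta": 0.1, "phi": -0.2,
                       "p": 0.0, "q": 0.0, "r": 0.0}))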
B. Safe Mode Path-Following Controller
The path-following controller modifies the commands to the low-level longitudinal and lateral controllers so that the aircraft follows the reference path. Monitoring and Capability Auditing computes the mission feasibility given the states and the model of the damaged UAV. Mitigating large uncertainty requires mission adaptation and selection of a new trajectory that is still feasible given the remaining capabilities. For large uncertainties outside the design bounds, i.e., σ_i ∉ Θ_i in (2), the control inputs can saturate and drive the system to unsafe states. Modification of the reference command r_d[k] based on the updated objectives is another layer of defense for maintaining safety by satisfying flight-envelope constraints. Therefore, we consider a control structure that consists of a path-following controller, where the generated reference commands to the low-level controller are limited by saturation bounds to maintain the closed-loop system within the operational safety envelope.
Let the reference command be constrained to a convex polytope as a safe operational region, defined by the set
R = { r_d ∈ R^q | ||Λ r_d||_∞ ≤ 1 },    (4)

where Λ = diag{ r_max,1^−1, ..., r_max,q^−1 }, and the positive constants r_max,i are the saturation bounds on the reference commands. Then, the weighted reference command is bounded by ||Λ r_d[k]||_∞ ≤ 1, k ∈ Z≥0.
In this paper, the reference command r_d[k], k ∈ Z≥0, generated by the path-following control law, is given by

r_d[k] = (1 − λ)^−1 sat{ (1/(1 − λ)) K_z ( x_md[k] − x_d[k] ) },    (5)

where sat{·} denotes the saturation function, K_z ∈ R^{q×n} is the state-feedback gain, and λ ∈ (0, 1) is a constant. Also, x_md[k] is the desired trajectory variable generated by the mission planner and x_d[k] is the actual state of the aircraft. In the case of an emergency glide landing, the desired heading angle and flight path angle are the variables generated by the mission planner. The path-following controller in (5) is equivalent to a PI controller subject to a saturation function. This control law ensures that the roll and pitch commands stay within the safe flight envelope during the glide landing, reducing the possibility of a stall.
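A minimal discrete-time sketch of the reference law (5) is given below. The gain, λ, and the element-wise saturation bounds are placeholder values, and the code mirrors the equation as reconstructed above rather than any released implementation.

    import numpy as np

    def sat(v, bound):
        """Element-wise saturation to [-bound, bound]."""
        return np.clip(v, -bound, bound)

    def reference_command(x_md, x_d, K_z, lam, r_max):
        """Saturated PI-like reference law sketched after Eq. (5)."""
        e = K_z @ (x_md - x_d)                   # state feedback on the tracking error
        return (1.0 / (1.0 - lam)) * sat(e / (1.0 - lam), r_max)

    # Placeholder numbers: 2 reference channels, 4 states.
    K_z = 0.1 * np.ones((2, 4))
    r_d = reference_command(x_md=np.array([100.0, 0.0, 0.2, 0.0]),
                            x_d=np.array([90.0, 0.0, 0.1, 0.0]),
                            K_z=K_z, lam=0.2, r_max=np.array([0.5, 0.3]))
    print(r_d)   # commands are bounded by (1 - lam)^-1 * r_max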
C. Mission Adaptation
In the MASC framework, we present a mission planner in the Safe Control Mode that generates the landing trajectory to a safe landing site. The planner also evaluates mission feasibility considering the damage to the aircraft and environmental constraints. In the case of a fault/failure detection, once the capability auditing provides the new/altered model D_i, two concurrent steps are taken: i) MASC initiates the feasibility evaluation and landing trajectory estimation, and ii) the autopilot controller is activated based on its ability to stabilize the system around a pre-calculated reference command h_d[·]. The first step ensures that the system does not violate its stability/safety bounds while the mission is being re-planned. Once the mission is re-planned, it is fed into the path-following controller, which accordingly alters the reference command h_d[·] provided to the low-level altitude controller to execute the new mission. For the second step, re-planning the mission, it is crucial to compute the reachable area, defined as all the spatial points the aircraft is capable of reaching given the dynamic constraints, the potential and kinetic energy, and the available fuel if the engine is still partially working. To this end, given the information and new constraints provided by the monitoring and capability auditing module, a set of candidate locations is initially considered. This initial set may include nearby airports and empty lands. Leveraging the updated model of the UAV, MASC evaluates the feasibility of a safe landing in the candidate areas and identifies the most likely safe location for an emergency landing. Any violation of the safety constraints (for instance, (3) and (4)) during the evaluation process rules out a candidate area as a safe reachable area.
The feasibility evaluation for landing site selection is summarized in Algorithm 1, and the landing trajectory planning is summarized in Algorithm 2. The feasibility evaluation can be viewed as an offline trajectory planning from the initial state of the engine-out aircraft to a few possible landing coordinates. For the offline trajectory generation, a simplified and reduced model of the UAV dynamics and autopilot is used to estimate the trajectory and travel time to each landing site. Consequently, the more desirable landing coordinate is selected and passed to the online trajectory planner for execution. The block diagram for the feasibility evaluation of the landing trajectories using a simplified UAV model is shown in Figure 2. Both online and offline planning use the carrot-chasing guidance logic for trajectory following [23]. The carrot-chasing method uses a pseudo-target moving along the desired flight path and generates the desired heading angle towards that reference point [24]; it is robust to disturbances [25].
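For concreteness, a minimal carrot-chasing update for a straight reference path can be written as follows; the variable names mirror the Phase I notation introduced below, and the look-ahead distance δ (`delta`) is a tuning parameter.

    import math

    def carrot_heading(p_start, p_end, p_uav, delta):
        """Carrot-chasing guidance for a straight path from p_start to p_end.

        Returns the desired heading toward a pseudo-target placed a look-ahead
        distance `delta` beyond the UAV's projection onto the path.
        """
        theta = math.atan2(p_end[1] - p_start[1], p_end[0] - p_start[0])    # path direction
        theta_u = math.atan2(p_uav[1] - p_start[1], p_uav[0] - p_start[0])
        r_hg = math.hypot(p_uav[0] - p_start[0], p_uav[1] - p_start[1])
        r_u = r_hg * math.cos(theta - theta_u)                              # along-track distance
        x_t = p_start[0] + (r_u + delta) * math.cos(theta)                  # carrot point
        y_t = p_start[1] + (r_u + delta) * math.sin(theta)
        return math.atan2(y_t - p_uav[1], x_t - p_uav[0])

    print(carrot_heading((0.0, 0.0), (1000.0, 0.0), (100.0, 50.0), delta=50.0))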
Fig. 2 The block diagram for evaluating the feasibility of the landing trajectories using a simplified UAV model in MATLAB
The landing mission is divided into three phases: Phase I: Cruising, Phase II: Loitering, and Phase III: Approach, as illustrated in Figure 3. In Algorithm 2, the emergency glide landing procedure starts by cruising towards the loitering center with coordinates (x_l, y_l) near the landing site. After the aircraft is close enough to the loitering center, i.e., sqrt((x_l − x)^2 + (y_l − y)^2) < R_l, the mission enters the loitering phase. While loitering, the aircraft loses altitude in a spiral trajectory. When the cut-off altitude h_c is reached, i.e., h < h_c, the mission enters the approach phase. Notice that the mission is only allowed to progress in one direction, similar to a directed acyclic graph (DAG), and the first and second phases may be skipped altogether depending on the initial states of the aircraft, as illustrated in Figure 3. The variables defined for the three flight phases in Algorithm 2 are described in the following.
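The one-directional phase progression can be encoded as a tiny state machine, sketched below with placeholder thresholds.

    def next_phase(phase, x, y, h, x_l, y_l, R_l=300.0, h_c=150.0):
        """Advance the landing phase (cruise -> loiter -> approach), never backward."""
        dist_to_loiter = ((x_l - x) ** 2 + (y_l - y) ** 2) ** 0.5
        if phase == "cruise" and dist_to_loiter < R_l:
            phase = "loiter"
        if phase in ("cruise", "loiter") and h < h_c:
            phase = "approach"               # phases may be skipped, like a DAG
        return phase

    print(next_phase("cruise", 100.0, 0.0, 120.0, 150.0, 0.0))  # -> 'approach'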
Phase I: During the cruising phase, the aircraft cruises towards the loitering circle. The desired heading angle is calculated as given in Algorithm 2. The line-of-sight (LOS) angle θ is the angle in the Euclidean plane, measured in radians from the positive x-axis, of the ray connecting the initial engine-malfunction position (x_i, y_i) to the loiter center (x_ipl, y_ipl). R_hg is the Euclidean distance between the current coordinates and the start of the reference path, and R_u is the distance between the start of the reference path and the projection of the current position onto the straight reference path. The airspeed should remain close to the best gliding speed V_opt while the UAV cruises to the landing site.
Phase II: In the loitering phase, the aircraft loiters near the landing site in a spiral trajectory to lose any excessive altitude before the final approach. In Algorithm 2, O = (x_l, y_l) denotes the global coordinates of the loiter center. The heading ψ gradually approaches the chord tangent angle of the loiter circle as the flight path converges to the reference circle. In addition, λ is the look-ahead angle. With an increase in λ, the flight path of the UAV converges to the reference trajectory more quickly, reducing the cross-track error [26].
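A minimal sketch of the loiter-phase guidance follows: the pseudo-target is placed on the reference circle a look-ahead angle λ ahead of the UAV's angular position. The numerical values are illustrative only.

    import math

    def loiter_heading(center, radius, p_uav, lam):
        """Carrot-chasing guidance on a loiter circle of given center and radius.

        `lam` is the look-ahead angle [rad]; larger values pull the UAV onto the
        circle faster, reducing the cross-track error.
        """
        theta_l = math.atan2(p_uav[1] - center[1], p_uav[0] - center[0])
        x_t = center[0] + radius * math.cos(theta_l + lam)   # pseudo-target on the circle
        y_t = center[1] + radius * math.sin(theta_l + lam)
        return math.atan2(y_t - p_uav[1], x_t - p_uav[0])

    print(loiter_heading((0.0, 0.0), 200.0, (350.0, 0.0), lam=0.4))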
Phase III: In the approach phase, the aircraft aligns itself with the runway and tracks the desired flight path angle for the final approach. The trajectory-point renewal scheme is the same as in Phase I; the difference is that θ now denotes the angle in the Euclidean plane, measured in radians from the positive x-axis, of the ray connecting the start of the approach line (x_u, y_u) to the landing coordinates (x_f, y_f). If the ground distance of the aircraft to the landing site is larger than a constant d_c, i.e., sqrt((x − x_f)^2 + (y − y_f)^2) > d_c, we have

x_u = x_l + D_l cos(ψ_f − π),  y_u = y_l + D_l sin(ψ_f − π),    (6)

where D_l is the loiter diameter and ψ_f is the heading angle of the runway. The reference line then connects the start point (x_u, y_u) and the landing coordinates (x_f, y_f) in the Euclidean plane.
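Assuming the reconstruction of (6) above, the start point of the final-approach line can be computed in one step; the coordinates and runway heading below are illustrative values.

    import math

    def approach_start(loiter_center, D_l, psi_f):
        """Start point (x_u, y_u) of the final-approach line per Eq. (6):
        one loiter diameter from the loiter center, opposite the runway heading."""
        x_u = loiter_center[0] + D_l * math.cos(psi_f - math.pi)
        y_u = loiter_center[1] + D_l * math.sin(psi_f - math.pi)
        return x_u, y_u

    print(approach_start((500.0, -200.0), D_l=400.0, psi_f=math.radians(24.18)))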
III. Software-in-the-loop Simulation Study
This section presents a software-in-the-loop (SITL) simulation scheme to evaluate and validate the proposed MASC † framework for online path planning and navigation in the emergency landing case. The SITL architecture for the MASC autopilot, shown in Figure 4, has three primary components: i) the high-fidelity physical simulation environment (X-Plane); ii) the MASC autopilot (MATLAB/Simulink), consisting of a nonlinear-logic-based mission planner and a proportional heading-angle regulation scheme; and iii) a user datagram protocol (UDP) interface. The X-Plane program has been certified by the Federal Aviation Administration (FAA) as simulation software for pilot training. The aircraft model used in this simulation is the Cessna 172SP, as shown in Figure 1.
Fig. 4 The block diagram for implementing the MASC framework in the X-Plane high-fidelity flight simulation environment
A. UDP Receiver and Sender Interface
X-Plane adopts the User Datagram Protocol (UDP) to communicate with third-party software and external processes. Unlike the Transmission Control Protocol (TCP), UDP does not guarantee that data packets arrive completely and in order; in exchange, it achieves high-speed data traffic through efficient use of bandwidth, which is an advantageous characteristic for the real-time simulation under consideration. Correspondingly, MATLAB/Simulink supports UDP communication through the DSP System Toolbox, which can query an application over UDP to send real-time data from the Simulink model to the corresponding channel in X-Plane. Also, the UDP object allows performing byte-type and datagram-type communication using a UDP socket on the local host. For the implementation, we design a subscriber and a publisher to guarantee real-time data transfer between X-Plane and MATLAB/Simulink. For the subscriber, we use two Simulink blocks: an embedded MATLAB function and a Byte Unpack block. For the publisher, we use a Byte Pack block and an encoder, both linked via a bus module in Simulink.

Algorithm 2: Algorithm for emergency landing using the nonlinear guidance logic [23].
Input: initial engine-out coordinates, reachable landing coordinates, and runway direction.
Output: desired heading angle ψ_des.
Procedure:
if (engine malfunction == true) then
  if sqrt((x_l − x)^2 + (y_l − y)^2) > R_l and h > h_c then   // Phase I: Cruising
    1: Initialize: P_i = (x_i, y_i), P_ipl = (x_ipl, y_ipl), P_new = (x_new, y_new), ψ_des, δ;
    2: R_hg = ||P_i − P_new||;
    3: θ = atan2(y_ipl − y_i, x_ipl − x_i);
    4: θ_u = atan2(y_new − y_i, x_new − x_i);
    5: Δθ = θ − θ_u;
    6: R_u = sqrt(R_hg^2 − (R_hg sin(Δθ))^2);
    7: (x_t, y_t) = (x_i + (R_u + δ) cos(θ), y_i + (R_u + δ) sin(θ));
    8: ψ_des = atan2(y_t − y_new, x_t − x_new);
  else if sqrt((x_l − x)^2 + (y_l − y)^2) ≤ R_l and h > h_c then   // Phase II: Loitering
    1: Initialize: O = (x_l, y_l), P = (x, y), ψ_des, R_c, λ;
    2: d = ||P − O|| − R_c;
    3: θ_l = atan2(y − y_l, x − x_l);
    4: (x_t, y_t) = (x_l + R_c cos(θ_l + λ), y_l + R_c sin(θ_l + λ));
    5: ψ_des = atan2(y_t − y, x_t − x);
  else   // Phase III: Approach
    1: Initialize: P_u = (x_u, y_u), P_f = (x_f, y_f), P_new = (x_new, y_new), ψ_des, δ;
    2: R_hg = ||P_u − P_new||;
    3: θ = atan2(y_f − y_u, x_f − x_u);
    4: θ_u = atan2(y_new − y_u, x_new − x_u);
    5: Δθ = θ − θ_u;
    6: R_u = sqrt(R_hg^2 − (R_hg sin(Δθ))^2);
    7: (x_t, y_t) = (x_u + (R_u + δ) cos(θ), y_u + (R_u + δ) sin(θ));
    8: ψ_des = atan2(y_t − y_new, x_t − x_new);
  end if
else
  Fly in Normal Mode;
end if
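Outside of Simulink, the same byte-pack/unpack exchange can be mimicked with standard-library Python. The port number and packet layout below are made-up placeholders, not X-Plane's actual data-reference format.

    import socket
    import struct

    # Hypothetical packet: five little-endian doubles (x, y, h, psi, V).
    PACKET_FMT = "<5d"
    SIM_ADDR = ("127.0.0.1", 49000)   # placeholder host/port

    def send_state(sock, x, y, h, psi, V):
        """Pack the state into bytes (cf. Simulink's Byte Pack) and send via UDP."""
        sock.sendto(struct.pack(PACKET_FMT, x, y, h, psi, V), SIM_ADDR)

    def recv_state(sock):
        """Receive one datagram and unpack it (cf. Simulink's Byte Unpack)."""
        data, _ = sock.recvfrom(1024)
        return struct.unpack(PACKET_FMT, data[:struct.calcsize(PACKET_FMT)])

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_state(sock, 21822.0, -9751.8, 140.0, 0.42, 32.0)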
B. Engine-Out Landing under Clear Weather
The simulation results for the automatic landing process under clear weather are shown in Figure 5, illustrating three different viewpoints of the real-time landing trajectory. In this emergency landing simulation, we randomly initiate the process from five different positions given in Table 1. The final landing coordinates are set to North 21822 m, East −9751.8 m, Height 140 m, with a runway heading direction of 24.18°. As the results show, MASC can plan a path online during engine failure and safely navigate the aircraft to the configured landing site from various initial conditions. Also, the aircraft's airspeed remains well above the stall speed, which is around 30 m/s. Throughout these simulations, the weather condition is set to Clear with no wind, i.e., the best weather condition in X-Plane®.
C. Engine-Out Landing under Windy and Turbulent Weather
To further demonstrate the robustness of the MASC framework under large wind and turbulence uncertainties, we simulated the emergency landing task under the different severe weather settings in the X-Plane program listed in Table 2. We varied the parameters Wind Direction, Wind Speed, Turbulence, Gust Speed Increase, and Total Wind Shear to different levels in the high-fidelity simulation environment. Figure 6 shows the real-time trajectories for landing in the different weather conditions. The initial and final coordinates of the aircraft are set to those of the first trial in Table 1. We note that the initial coordinates of the engine-out aircraft are slightly different but relatively close to each other in these simulations because of how the test runs are initialized. As our results suggest, MASC navigates the aircraft to the configured landing site in each test run under severe weather conditions. Therefore, our approach can robustly plan a landing trajectory and safely navigate the aircraft to a landing site under large wind and turbulence uncertainties.

Table 2 The weather settings in the X-Plane program
Trial | Wind Direction | Wind Speed | Turbulence | Gust Speed Increase | Total Wind Shear
1 | 20 | 14 | 10 | 10 | 10
2 | 0 | 3 | 8 | 14 | 8
3 | 4 | 5 | 10 | 8 | 14
4 | 14 | 7 | 12 | 22 | 8
5 | 27 | 12 | 4 | 9 | 4
D. Feasibility Evaluation for Landing Site Selection
In the MASC framework, the mission adaptation starts with offline trajectory planning for reachability and feasibility evaluation of candidate landing sites. The offline planning architecture, presented in Figure 2, computes feasible emergency landing trajectories for each nearby landing site and selects the best landing coordinates. In our implementation, the control laws and the simplified UAV model are implemented in MATLAB/Simulink in accelerator mode. The feasible landing site selection consists of three tasks: offline path planning, landing time estimation, and optimal landing site selection. We build a mathematical model to obtain the discretized reference trajectory points used to calculate the landing path and the expected landing time for each landing site. To avoid unnecessary calculations, we opt for a low density of reference trajectory points (coarser trajectories). The simulations show that the approach is sufficiently fast and computationally inexpensive: the computation time for each trajectory is 3.083 s on average (for a path that takes about 10 min to fly). We note, however, that the computation time depends on the computing hardware used to run these simulations. In this simulation, our goal is to select the landing site with the shortest landing time among the four candidate sites listed in Table 3. Figure 7 shows the trajectories leading to each of the landing sites, labeled 1, 2, 3, and 4, and the corresponding landing times are summarized in Table 3. We note that landing sites 1, 2, and 4 are reachable, but landing site 3 is infeasible, i.e., the engine-out aircraft crashes before reaching it. The planner therefore selects landing site 1, which has the shortest predicted landing time of about 10.08 min. It takes 650 s to land at landing site 1 in the SITL simulation, which means the offline planning method is accurate enough to predict the actual time needed for the emergency landing process.
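The offline feasibility check can be approximated by a point-mass glide model: compare the altitude budget against the required range for each candidate site and rank the feasible ones by estimated time. Everything below (glide numbers, site list) is illustrative, not the model used in this work.

    import math

    V_OPT = 32.0        # assumed best-glide airspeed [m/s]
    GLIDE_RATIO = 9.0   # assumed horizontal distance per unit altitude lost

    def evaluate_site(p0, h0, site):
        """Return (feasible, est_time_s) for a straight-line glide to `site`."""
        dist = math.hypot(site[0] - p0[0], site[1] - p0[1])
        max_range = GLIDE_RATIO * h0            # reachable ground distance from altitude h0
        feasible = dist <= max_range
        est_time = dist / V_OPT if feasible else float("inf")
        return feasible, est_time

    # Rank hypothetical candidate sites by estimated landing time.
    p0, h0 = (0.0, 0.0), 1200.0
    sites = {1: (6000.0, 2000.0), 2: (9000.0, -1000.0),
             3: (15000.0, 4000.0), 4: (4000.0, -7000.0)}
    results = sorted((evaluate_site(p0, h0, s) + (k,) for k, s in sites.items()),
                     key=lambda r: r[1])        # sort by estimated time
    for feasible, t, k in results:
        print(f"site {k}: feasible={feasible}, est. time={t:.0f} s")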
E. Comparison between the Offline and Online Trajectories
We use a simplified UAV model for offline trajectory generation, and therefore we expect some discrepancies between the predicted trajectory and the actual flight path of the UAV in the high-fidelity simulation. In the final part of our simulation study, we compare the online and offline planned paths to observe possible discrepancies. To do so, we chose landing site 1 for both the SITL simulation and the offline path planning test. The trajectory planned offline largely coincides with the online one, as shown in Figure 8. Because the aircraft in the SITL simulation needs fewer turns to match the heading direction of the reference straight line than assumed in the offline planning, the diameter of the path planned offline at the start of Phase I does not entirely match the SITL trajectory. The landing time in the SITL simulation is 6.8 min, while the estimated landing time from offline path planning in MATLAB/Simulink is 6 min. However, these differences do not considerably impact the main outcomes, which are predicting the feasibility of the selected path and estimating the landing time.
IV. Conclusion
This paper proposes a robust and lightweight navigation and control architecture that enables UAVs to perform an automatic emergency landing under severe weather conditions. A multi-level adaptation approach in mission planning, path tracking, and stabilizing control was presented within this framework. The proposed framework can also evaluate the feasibility of landing at nearby landing sites and select the optimal one. The effectiveness of the approach was verified in a high-fidelity simulation environment by successfully landing a fixed-wing aircraft under engine failure for a wide range of initial aircraft states and weather uncertainties. In the future, we plan to address other challenges such as vision-based landing and obstacle avoidance when flying over a complex urban area. Rudder control has not been used in this paper; we plan to incorporate rudder control to track a desired angle of sideslip and provide the fixed-wing aircraft with more agility and maneuvering performance.
V. Acknowledgement
This research is partially supported by the National Science Foundation (award no. 2137753).
Fig. 1 (left) Multi-level Adaptive Safety Control (MASC) framework; (right) simulation studies using the X-Plane® program
Fig. 3 Landing mission phase transition diagram. Phase I: Cruising, Phase II: Loitering, and Phase III: Approach.
Fig. 4 Guidance-law pseudocode (partially garbled in extraction). In the loitering branch, the bearing from the loiter center (x_l, y_l) to the aircraft (x, y) is computed as theta_l = atan2(y − y_l, x − x_l); a target point on the loiter circle of radius R_c is (x_t, y_t) = (x_l + R_c cos(theta + theta_l), y_l + R_c sin(theta + theta_l)); and the desired heading is psi_des = atan2(y_t − y, x_t − x). In the approach branch, the height-to-go h_g and the along-track and cross-track vectors relative to the reference line toward the new landing coordinates are formed.
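The loitering branch of this guidance law can be sketched as follows; the variable names (loiter center, radius R_c, lead angle along the circle) are reconstructions from the atan2 fragments above, not the authors' exact formulation.

import math

def loiter_desired_heading(x, y, x_l, y_l, R_c, theta):
    """Desired heading while loitering: aim at a point on the loiter circle
    of radius R_c around (x_l, y_l), offset by angle theta ahead of the
    aircraft's current bearing from the circle center."""
    theta_l = math.atan2(y - y_l, x - x_l)          # bearing center -> aircraft
    x_t = x_l + R_c * math.cos(theta + theta_l)     # target point on circle
    y_t = y_l + R_c * math.sin(theta + theta_l)
    return math.atan2(y_t - y, x_t - x)             # heading aircraft -> target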
Fig. 5 Automatic engine-out landing under clear weather from different initial positions (simulation video link)
Fig. 6 Automatic engine-out landing under different windy and turbulent weather settings in the X-Plane program (simulation video link)
Fig. 7 Trajectories generated for four different landing sites
Fig. 8 Comparison of online and offline trajectories
Algorithm 1: Feasibility evaluation for candidate landing areas.
Input: initial engine-out coordinates; coordinates of the backup landing areas.
Output: predicted landing trajectory and estimated landing time.
Procedure for feasibility evaluation of landing areas:
if engine malfunction == true then
    the MASC framework initiates the feasibility-evaluation process
    (landing-area feasibility evaluation and landing trajectory/time prediction):
    1: feed in the global coordinates of the engine-out point and of the landing areas;
    2: run offline path planning in acceleration mode;
    3: obtain the predicted flight trajectories and estimated landing times;
    4: determine the most suitable landing site;
    do
        initiate MASC (online navigation)
    while (tracking toward the determined optimal landing coordinates)
else
    conduct the normal flight mode
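Putting the pieces together, one iteration of this procedure might look like the following sketch (reusing the hypothetical select_landing_site helper from the earlier snippet; navigate_online and landed are caller-supplied hooks into the MASC online layer):

def on_engine_malfunction(candidate_sites, navigate_online, landed):
    """Sketch of Algorithm 1: evaluate candidate sites offline, then hand
    the chosen coordinates to the online navigation layer."""
    best_site = select_landing_site(candidate_sites)  # offline feasibility pass
    if best_site is None:
        return None          # no reachable site; other contingencies apply
    while not landed():
        navigate_online(best_site)   # online path tracking toward the site
    return best_site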
Table 1 The initial position coordinates of the engine-out aircraft in the simulation trials

Trial | North (m) | East (m) | Height (m) | Heading (deg)
1 | 13163 | −7164.9 | 3000 | 78.5
2 | 13353 | −14380 | 4000 | 110.3
3 | 23429 | −6675.6 | 5000 | 85.6
4 | 20719 | −11652 | 3000 | 256.74
5 | 21323 | −11021 | 2000 | 69.594
v_{i,min} < v_i < v_{i,max},  i = 1, ..., 6,  (3)
(box constraints on six state variables; the original variable symbols were lost in extraction)

* Open source code for the MASC framework on GitHub: https://github.com/SASLabStevens/MASC-Architecture.git
| [
"https://github.com/SASLabStevens/MASC-Architecture.git)",
"https://github.com/SASLabStevens/MASC-Architecture.git)"
] |
[
"A spintronic analog of the Landauer residual resistivity dipole on the surface of a disordered topological insulator",
"A spintronic analog of the Landauer residual resistivity dipole on the surface of a disordered topological insulator"
] | [
"Raisa Fabiha \nDepartment of Electrical and Computer Engineering\nVirginia Commonwealth University\n23284RichmondVirginiaUnited States\n",
"Supriyo Bandyopadhyay \nDepartment of Electrical and Computer Engineering\nVirginia Commonwealth University\n23284RichmondVirginiaUnited States\n"
] | [
"Department of Electrical and Computer Engineering\nVirginia Commonwealth University\n23284RichmondVirginiaUnited States",
"Department of Electrical and Computer Engineering\nVirginia Commonwealth University\n23284RichmondVirginiaUnited States"
] | [] | The Landauer "residual resistivity dipole" is a well-known concept in electron transport through a disordered medium. It is formed when a defect/scatterer reflects an impinging electron causing negative charges to build up on one side of the scatterer and positive charges on the other. This results in the formation of a microscopic electric dipole that affects the resistivity of the medium. Here, we show that an equivalent entity forms in spin polarized electron transport through the surface of a disordered topological insulator (TI). When electrons reflect from a scatterer on the TI surface, a spin imbalance forms around the scatterer, resulting in a spin current that flows either in the same or the opposite direction as the injected spin current and hence either increases or decreases the spin resistivity. It also destroys spin-momentum locking and produces a magnetic field around the scatterer. The latter will cause transiting spins to precess as they pass the scatterer, thereby radiating electromagnetic waves and implementing an oscillator. If an alternating current is passed through the TI instead of a static current, the magnetic field will oscillate with the frequency of the current and radiate electromagnetic waves of the same frequency, thus making the scatterer act as a miniature antenna. | 10.1088/1361-648x/aca19b | [
"https://arxiv.org/pdf/2204.10927v1.pdf"
] | 248,376,986 | 2204.10927 | 8caff9efc8060c6ddbac899cefbf35538b88943d |
A spintronic analog of the Landauer residual resistivity dipole on the surface of a disordered topological insulator
Raisa Fabiha
Department of Electrical and Computer Engineering
Virginia Commonwealth University
23284RichmondVirginiaUnited States
Supriyo Bandyopadhyay
Department of Electrical and Computer Engineering
Virginia Commonwealth University
23284RichmondVirginiaUnited States
A spintronic analog of the Landauer residual resistivity dipole on the surface of a disordered topological insulator
The Landauer "residual resistivity dipole" is a well-known concept in electron transport through a disordered medium. It is formed when a defect/scatterer reflects an impinging electron causing negative charges to build up on one side of the scatterer and positive charges on the other. This results in the formation of a microscopic electric dipole that affects the resistivity of the medium. Here, we show that an equivalent entity forms in spin polarized electron transport through the surface of a disordered topological insulator (TI). When electrons reflect from a scatterer on the TI surface, a spin imbalance forms around the scatterer, resulting in a spin current that flows either in the same or the opposite direction as the injected spin current and hence either increases or decreases the spin resistivity. It also destroys spin-momentum locking and produces a magnetic field around the scatterer. The latter will cause transiting spins to precess as they pass the scatterer, thereby radiating electromagnetic waves and implementing an oscillator. If an alternating current is passed through the TI instead of a static current, the magnetic field will oscillate with the frequency of the current and radiate electromagnetic waves of the same frequency, thus making the scatterer act as a miniature antenna.
INTRODUCTION
The Landauer residual resistivity dipole (LRRD) is a familiar concept in microscopic charge transport and has important consequences for electromigration [1,2]. The basic idea behind the LRRD is illustrated in Fig. 1. A moving electron in a charge current (sometimes referred to as an "electron wind") encounters a scatterer on the way and is reflected with some probability, causing negative charges to accumulate on the impinging side and deplete on the opposite side. This charge imbalance causes an electric dipole to form around the scatterer. It is natural to ask if an equivalent magnetic entity can exist in spin polarized electron transport. We show that a similar phenomenon can occur on the surface of a disordered topological insulator (TI) through which a spin polarized current is flowing. As the impinging spins reflect from a static scatterer, with or without a spin flip, the spin polarizations on both sides of the scatterer vary spatially owing to interference between the incident and reflected waves, causing a spin imbalance to form between the two sides. This then causes a spin current to flow either in the same or in the opposite direction of the injected current, depending on the sign of the imbalance. These scattering-induced spin currents obviously aid or oppose the injected spin current, thereby increasing or decreasing the "spin resistivity". This is depicted schematically in Fig. 2 imbalance also forms a local magnetic field around the scatterer, which can cause transiting spins to precess and radiate electromagnetic waves, making the scatterer act as a source of radiation. Here, we analyze this phenomenon.
THEORY
For the sake of simplicity, we will consider a line defect on the surface of the TI (perpendicular to the direction of current flow). It can be created by implanting a row of magnetic impurities using ion implantation. A magnetic impurity can reflect a spin with or without spin flip. Fig. 3 shows such a system. In the figure we depict the case when the impinging spin reflects and transmits without a spin flip, but in the ensuing theory, we account for reflection/transmission both with and without any spin flip.
To make the mathematics simple, we will consider a line defect of zero spatial width on the surface of a topological insulator (TI). The Pauli equation for the spinor wave function of an electron on the TI surface containing the line defect is [3]

$\left[-\frac{\hbar^2}{2m^*}\left(\frac{d^2}{dx^2}+\frac{d^2}{dy^2}\right) + (\lambda\,\mathbb{1} + \lambda'\sigma_f)\,\delta(x) - i\hbar v_0\left(\sigma_x\frac{d}{dy} - \sigma_y\frac{d}{dx}\right)\right]\psi(x,y) = E\,\psi(x,y),$  (1)

where the line defect is viewed as a one-dimensional delta-scatterer that has a spin-independent part of strength $\lambda$ and a spin-dependent part of strength $\lambda'$. The quantity $v_0$ is the Fermi velocity and $\sigma_f$ is the spin-flip matrix.
Integrating both sides of this equation from $x = -\epsilon$ to $x = +\epsilon$ and letting $\epsilon \to 0$, we get [4]

$\frac{d\psi(x,y)}{dx}\Big|_{x=0^+} - \frac{d\psi(x,y)}{dx}\Big|_{x=0^-} = \frac{2m^*}{\hbar^2}\,(\lambda\,\mathbb{1} + \lambda'\sigma_f)\,\psi(0,y).$  (2)
In deriving the above equation, we made use of the continuity of the wave function at x = 0.
Note that since the Hamiltonian in Equation (1) is invariant in the coordinate y, the y-component of the wave vector, $k_y$, is a good quantum number. However, the x-components of the wave vectors in the two eigenspinor states on the TI surface are different for any given energy [5], and these two wave vectors will be denoted as $k_x^\pm$.
We can write the wave functions in the two spin eigenstates on a pristine TI surface (without any defect) as

$\psi_\pm(x,y) = e^{i(k_x^\pm x + k_y y)}\,\Phi_\pm,$  (3)

where the eigenspinors are given by [5]

$\Phi_\pm = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ \pm i\,(k_x^\pm \pm i k_y)/k_\pm \end{pmatrix},$  (4)

and $k_\pm = \sqrt{(k_x^\pm)^2 + k_y^2}$.
We will next consider the situation where an electron in the $\Phi_+$ eigenstate is incident on the line defect at x = 0. The other case, when the incident electron is in the $\Phi_-$ eigenstate, is similar and is omitted here for the sake of brevity. Let the reflection amplitude for reflecting without a spin flip be r, with a spin flip be r′, and the transmission amplitudes without and with a spin flip be t and t′, respectively. The wave function to the left of the line defect is

$\psi_L(x,y) = e^{i(k_x^+ x + k_y y)}\,\Phi_+ + r\,e^{i(-k_x^+ x + k_y y)}\,\Phi_+ + r'\,e^{i(-k_x^- x + k_y y)}\,\Phi_-,$

while to the right, it is

$\psi_R(x,y) = t\,e^{i(k_x^+ x + k_y y)}\,\Phi_+ + t'\,e^{i(k_x^- x + k_y y)}\,\Phi_-$

[see Fig. 3].
Enforcing the continuity of the wave function at x = 0, we get

$(1 + r)\,\Phi_+ + r'\,\Phi_- = t\,\Phi_+ + t'\,\Phi_-.$  (5)
Next, using Equation (3) in (2), we obtain

$i k_x^+ (t - 1 + r)\,\Phi_+ + i k_x^- (t' + r')\,\Phi_- = \frac{2m^*}{\hbar^2}\,(\lambda\,\mathbb{1} + \lambda'\sigma_f)\left[(1 + r)\,\Phi_+ + r'\,\Phi_-\right].$  (6)
With the aid of Equation (4), Equation (5) can be written in matrix form as

$[A]\begin{pmatrix} t \\ t' \end{pmatrix} = \begin{pmatrix} r \\ r' \end{pmatrix} + C,$  (7)

where the 2×2 matrix [A] and the column vector C are assembled from the components of the eigenspinors $\Phi_+$ and $\Phi_-$. Note that the matrix [A] is not unitary, since $k_x^+ \neq k_x^-$. From Equation (7), we get

$\begin{pmatrix} t \\ t' \end{pmatrix} = [A]^{-1}\left[\begin{pmatrix} r \\ r' \end{pmatrix} + C\right].$  (8)

Then from Equation (6), we obtain, in matrix form,

$[B]\begin{pmatrix} r \\ r' \end{pmatrix} = [D]\begin{pmatrix} t \\ t' \end{pmatrix} + K,$  (9)-(10)

where the 2×2 matrices [B] and [D] and the column vector K collect the kinematic factors $ik_x^\pm$ and the scattering strengths $(2m^*/\hbar^2)(\lambda\,\mathbb{1} + \lambda'\sigma_f)$ projected onto the eigenspinors. Using Equation (8) in Equation (10), we obtain a solution for the reflection amplitudes as

$\begin{pmatrix} r \\ r' \end{pmatrix} = \left([B] - [D][A]^{-1}\right)^{-1}\left(K + [D][A]^{-1}C\right).$  (11)

Finally, using Equation (11) in Equation (8), we get the solution for the transmission amplitudes:

$\begin{pmatrix} t \\ t' \end{pmatrix} = [A]^{-1}\left([B]-[D][A]^{-1}\right)^{-1}K + [A]^{-1}\left[\left([B]-[D][A]^{-1}\right)^{-1}[D][A]^{-1} + I\right]C,$  (12)

where I is the 2×2 identity matrix.
The wave function on the left of the line defect is $\psi_L(x,y)$ given above (see Fig. 3),  (13)

whereas on the right it is $\psi_R(x,y)$.  (14)

Therefore, the x-, y- and z-components of the spin on the left and on the right of the line defect are

$S_i^L(x) = \psi_L^\dagger\,\sigma_i\,\psi_L, \quad i = x, y, z,$  (15)

$S_i^R(x) = \psi_R^\dagger\,\sigma_i\,\psi_R, \quad i = x, y, z,$  (16)

where $\sigma_i$ are the Pauli matrices.
RESULTS
The energy dispersion relation on the surface of a topological insulator (without any defect) is given by [5]

$E = \frac{\hbar^2 (k_x^2 + k_y^2)}{2m^*} \pm \hbar v_0 \sqrt{k_x^2 + k_y^2}.$  (17)

For the sake of simplicity, we will consider the case $k_y = 0$, in which case the x-components of the wave vectors in the two spin eigenstates are related to the energy E as

$k_x^\pm = \frac{\mp m^* v_0 + \sqrt{(m^* v_0)^2 + 2 m^* E}}{\hbar}.$  (18)
Note that when $k_y = 0$, the incident spin is completely y-polarized because of spin-momentum locking on a TI surface.
We use Equation (18) in Equations (11) and (12) to find the transmission and reflection probabilities as functions of the electron energy E for $k_y = 0$. They are plotted in Fig. 4. We have verified that the current continuity condition is always satisfied at every energy, i.e.

$|t(E)|^2 + |r(E)|^2 + \frac{k_x^-}{k_x^+}\left[|t'(E)|^2 + |r'(E)|^2\right] = 1.$  (19)
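The $k_y = 0$ scattering problem can be set up numerically as a small linear system, as sketched below. This is an illustration under assumptions, not the authors' code: $\hbar$ is set to 1, the parameter values are placeholders, $\sigma_f$ is taken to be $\sigma_x$, and the eigenspinors are obtained by numerical diagonalization (so their overall phases, and hence the phases of r, r′, t, t′, are convention dependent).

import numpy as np

# Illustrative parameters (hbar = 1); lam, lam_p stand for lambda, lambda'.
hbar, m_star, v0 = 1.0, 1.0, 1.0
lam, lam_p = 0.5, 0.2
sig_x = np.array([[0, 1], [1, 0]], dtype=complex)
sig_y = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def kx_pm(E):
    """Eq. (18): x-wavevectors of the two spin branches at k_y = 0."""
    root = np.sqrt((m_star * v0) ** 2 + 2.0 * m_star * E)
    return (-m_star * v0 + root) / hbar, (m_star * v0 + root) / hbar

def spinor(kx, band):
    """Eigenspinor of H(kx) = (hbar*kx)^2/(2m*) - hbar*v0*kx*sigma_y."""
    H = (hbar * kx) ** 2 / (2 * m_star) * I2 - hbar * v0 * kx * sig_y
    _, vecs = np.linalg.eigh(H)           # eigenvalues sorted ascending
    return vecs[:, 1] if band == '+' else vecs[:, 0]

def scattering_amplitudes(E):
    """Solve continuity, Eq. (5), and the derivative jump, Eq. (2), for the
    amplitudes (r, r', t, t') at k_y = 0."""
    k_p, k_m = kx_pm(E)
    chi_in = spinor(k_p, '+')                              # incident
    chi_rp, chi_rm = spinor(-k_p, '+'), spinor(-k_m, '-')  # reflected
    chi_tp, chi_tm = chi_in, spinor(k_m, '-')              # transmitted
    M = (2 * m_star / hbar ** 2) * (lam * I2 + lam_p * sig_x)  # assumes sigma_f = sigma_x
    A = np.zeros((4, 4), dtype=complex)
    b = np.zeros(4, dtype=complex)
    # Rows 0-1: psi_L(0) = psi_R(0).
    A[:2, 0], A[:2, 1], A[:2, 2], A[:2, 3] = chi_rp, chi_rm, -chi_tp, -chi_tm
    b[:2] = -chi_in
    # Rows 2-3: psi_R'(0) - psi_L'(0) = M psi(0).
    A[2:, 0], A[2:, 1] = 1j * k_p * chi_rp, 1j * k_m * chi_rm
    A[2:, 2] = 1j * k_p * chi_tp - M @ chi_tp
    A[2:, 3] = 1j * k_m * chi_tm - M @ chi_tm
    b[2:] = 1j * k_p * chi_in
    r, rp, t, tp = np.linalg.solve(A, b)
    return r, rp, t, tp, (k_p, k_m)

r, rp, t, tp, _ = scattering_amplitudes(E=0.1)
# With this dispersion the two branches have equal group speeds at fixed E,
# so conservation reduces to the outgoing probabilities summing to one
# (cf. Eq. (19)).
print(abs(r) ** 2 + abs(rp) ** 2 + abs(t) ** 2 + abs(tp) ** 2)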
For $k_y = 0$, the eigenspinors reduce to $\Phi_\pm = (1/\sqrt{2})(1, \pm i)^T$, and Equations (13) and (14) become superpositions of the phase factors $e^{\pm i k_x^\pm x}$ weighted by the amplitudes r, r′, t and t′. We use these relations in Equations (15) and (16) to find the spin components as functions of the distance from the scatterer on both sides, extending up to 25 nm, for an energy E = 100 meV; they are plotted in Fig. 5. We see from Fig. 5 that the spin components to the left and right of the line defect are not the same at any given distance from the scatterer. Since the incident spin is y-polarized, the y-component of the transmitted spin (and hence the y-component on the right-hand side) is spatially invariant, while the other components oscillate in space in the manner of a spin helix owing to interference between the incident and reflected spins. Clearly, spin-momentum locking is destroyed. We define spatially averaged spin components on the two sides of the scatterer as

$\bar{S}_i^L = \frac{1}{W}\int_{-W}^{0} S_i^L(x)\,dx, \quad \bar{S}_i^R = \frac{1}{W}\int_{0}^{W} S_i^R(x)\,dx, \quad i = x, y, z,$

and then list them in Table 1 in arbitrary units for an arbitrary value of W = 25 nm. Table 1 shows that there is a net spin imbalance between the two sides of the line defect as we move away from the defect by any arbitrary distance on both sides, causing the formation of a local magnetization and an associated magnetic field. The spin imbalance will also cause a spin current to flow, which can aid or oppose the injected spin current depending on the sign of the imbalance, thereby decreasing or increasing the spin resistivity. It is therefore a spintronic analog of the LRRD.
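Continuing the earlier numerical sketch, the position-dependent spin components of Eqs. (15)-(16) and the window averages listed in Table 1 can be evaluated directly (distances here are in the same dimensionless units as the placeholder parameters):

sig_z = np.array([[1, 0], [0, -1]], dtype=complex)

E = 0.1
k_p, k_m = kx_pm(E)
chi_in, chi_rp, chi_rm = spinor(k_p, '+'), spinor(-k_p, '+'), spinor(-k_m, '-')
r, rp, t, tp, _ = scattering_amplitudes(E)

def psi_left(x):
    """psi_L(x) at k_y = 0 built from the solved amplitudes, Eq. (13)."""
    return (np.exp(1j * k_p * x) * chi_in
            + r * np.exp(-1j * k_p * x) * chi_rp
            + rp * np.exp(-1j * k_m * x) * chi_rm)

def spin_components(psi):
    """S_i = psi^dagger sigma_i psi, as in Eqs. (15)-(16)."""
    return np.array([np.vdot(psi, s @ psi).real for s in (sig_x, sig_y, sig_z)])

# Window-averaged spin on the left: Sbar_i^L = (1/W) * integral_{-W}^{0} S_i^L dx.
W = 25.0
xs = np.linspace(-W, 0.0, 2001)
S_L = np.array([spin_components(psi_left(x)) for x in xs])
Sbar_L = np.trapz(S_L, xs, axis=0) / W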
In Fig. 6, we plot the angular separation between the spin polarizations on the two sides of the scatterer as a function of the distance |x| from the scatterer. This quantity $\theta(x)$ is defined through

$\cos\theta(x) = \frac{S_x^L(-x)\,S_x^R(x) + S_y^L(-x)\,S_y^R(x) + S_z^L(-x)\,S_z^R(x)}{\sqrt{\sum_i \left[S_i^L(-x)\right]^2}\,\sqrt{\sum_i \left[S_i^R(x)\right]^2}}.$

FIG. 6. Angular separation between the spin polarizations on two sides of the scatterer as a function of the distance |x| from the scatterer.
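The same arrays give the angular separation directly; a small helper (the clipping guards against round-off pushing the cosine outside [−1, 1]):

def angular_separation(s_left, s_right):
    """Angle (radians) between the spin vectors at mirror points -|x|, +|x|."""
    c = np.dot(s_left, s_right) / (np.linalg.norm(s_left) * np.linalg.norm(s_right))
    return np.arccos(np.clip(c, -1.0, 1.0))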
Because the spin components oscillate in space, the angular separation oscillates as well, and the maximum angular separation exceeds 120° for this case.
CONCLUSION
In this work, we have shown the existence of a spin imbalance around a line defect on the surface of a currentcarrying topological insulator, reminiscent of the LRRD that causes a charge imbalance. Its existence can be verified experimentally with magnetic force microscopy.
The magnetic field resulting from the spin imbalance will make spins transiting through the defect precess and radiate electromagnetic waves [6][7][8][9], thereby making the defect act as an electromagnetic oscillator or radiator, provided the damping is relatively small. The sample will be a broadband oscillator since the precession frequencies will be different at different scattering sites. Finally, if instead of a static current, we use an alternating current, then the local magnetic fields will oscillate in time with the same frequency as the injected current. That too can radiate electromagnetic waves, making the line defect act as a miniature antenna [10][11][12][13]. These radiations can be detected with suitable detectors.
FIG. 1. Electrons in a current reflect from a defect, causing negative charges to pile up on the impinging side and positive charges on the other side. This charge imbalance gives rise to a microscopic electric dipole around the impurity, known as the Landauer residual resistivity dipole.
FIG. 2. Formation of a residual magnetic dipole around a scatterer on the surface of a topological insulator due to reflection of spins. (a) Reflection without spin flip, and (b) reflection with spin flip. Here we have assumed that the transmission occurs without a spin flip.
FIG. 3. Reflection of a spin-polarized current from a line defect on the surface of a topological insulator.
FIG. 4. Transmission and reflection probabilities as functions of the electron energy E for k_y = 0.

FIG. 5. Spin components to the left and right of the line defect, as functions of the distance from the scatterer with k_y = 0.
Table 1: Average spin over a fixed distance W on the two sides of the scatterer with W = 25 nm (arbitrary units).

$\bar{S}_x^L$ | $\bar{S}_x^R$ | $\bar{S}_y^L$ | $\bar{S}_y^R$ | $\bar{S}_z^L$ | $\bar{S}_z^R$
−0.011 | −0.003 | 0.517 | 0.459 | 0.021 | −0.027
[1] R. Landauer, "Residual resistivity dipoles," Zeitschrift für Physik B Condensed Matter 21, 247-254 (1975).
[2] W. Zwerger, L. Bönig, and K. Schönhammer, "Exact scattering theory for the Landauer residual resistivity dipole," Phys. Rev. B 43, 6434-6439 (1991).
[3] W. Jung et al., "Warping effects in the band and angular momentum structures of the topological insulator Bi2Te3," Phys. Rev. B 84, 245435 (2011).
[4] M. Cahay and S. Bandyopadhyay, Problem Solving in Quantum Mechanics (Wiley, Chichester, UK, 2017).
[5] S. Shee, R. Fabiha, M. Cahay, and S. Bandyopadhyay, "Reflection and refraction of a spin at the edge of a quasi-two-dimensional semiconductor layer (quantum well) and a topological insulator," Magnetism (in press).
[6] S. Bandyopadhyay and M. Cahay, Introduction to Spintronics, 2nd ed. (CRC Press, Boca Raton, 2015).
[7] D. J. Griffiths, Introduction to Electrodynamics, 4th ed. (Pearson, London, 2013).
[8] R. Rungsawang et al., "Terahertz radiation from magnetic excitations in diluted magnetic semiconductors," Phys. Rev. Lett. 110, 177203 (2013).
[9] O. Dzyapko, V. E. Demidov, S. O. Demokritov, G. A. Melkov, and V. L. Safonov, "Monochromatic microwave radiation from the system of strongly excited magnons," Appl. Phys. Lett. 92, 162510 (2008).
[10] J. L. Drobitch, A. De, K. Dutta, P. K. Pal, A. Adhikari, A. Barman, and S. Bandyopadhyay, "Extreme sub-wavelength magneto-elastic electromagnetic antenna implemented with multiferroic nanomagnets," Adv. Mater. Technol. 5, 2000316 (2020).
[11] J. D. Schneider et al., "Experimental demonstration and operating principles of a multiferroic antenna," J. Appl. Phys. 126, 224104 (2019).
[12] S. Prasad M N, Y. Huang, and Y. E. Wang, "Going beyond Chu-Harrington limit: ULF radiation with a spinning magnet array," Proc. 32nd URSI GASS, Montreal, August 2017.
[13] R. Fabiha, J. Lundquist, S. Majumder, E. Topsakal, A. Barman, and S. Bandyopadhyay, "Spin wave electromagnetic nano-antenna enabled by tripartite phonon-magnon-photon coupling," Adv. Sci., 2104644 (2022).
| [] |
[
"Rethinking Optimization with Differentiable Simulation from a Global Perspective",
"Rethinking Optimization with Differentiable Simulation from a Global Perspective"
] | [
"Rika Antonova \nStanford University\n\n",
"Jingyun Yang \nStanford University\n\n",
"Krishna Murthy Jatavallabhula \nMassachusetts Institute of Technology\n\n",
"Jeannette Bohg \nStanford University\n\n"
] | [
"Stanford University\n",
"Stanford University\n",
"Massachusetts Institute of Technology\n",
"Stanford University\n"
] | [] | Differentiable simulation is a promising toolkit for fast gradient-based policy optimization and system identification. However, existing approaches to differentiable simulation have largely tackled scenarios where obtaining smooth gradients has been relatively easy, such as systems with mostly smooth dynamics. In this work, we study the challenges that differentiable simulation presents when it is not feasible to expect that a single descent reaches a global optimum, which is often a problem in contact-rich scenarios. We analyze the optimization landscapes of diverse scenarios that contain both rigid bodies and deformable objects. In dynamic environments with highly deformable objects and fluids, differentiable simulators produce rugged landscapes with nonetheless useful gradients in some parts of the space. We propose a method that combines Bayesian optimization with semi-local 'leaps' to obtain a global search method that can use gradients effectively, while also maintaining robust performance in regions with noisy gradients. We show that our approach outperforms several gradient-based and gradient-free baselines on an extensive set of experiments in simulation, and also validate the method using experiments with a real robot and deformables. Videos and supplementary materials are available at https://tinyurl.com/globdiff. | 10.48550/arxiv.2207.00167 | [
"https://arxiv.org/pdf/2207.00167v1.pdf"
] | 250,243,678 | 2207.00167 | b5e644c764e4fabab692b822a0773bedd9d16eaa |
Rethinking Optimization with Differentiable Simulation from a Global Perspective
Rika Antonova
Stanford University
Jingyun Yang
Stanford University
Krishna Murthy Jatavallabhula
Massachusetts Institute of Technology
Jeannette Bohg
Stanford University
Rethinking Optimization with Differentiable Simulation from a Global Perspective
Differentiable simulation; Global optimization; Deformable objects
Differentiable simulation is a promising toolkit for fast gradient-based policy optimization and system identification. However, existing approaches to differentiable simulation have largely tackled scenarios where obtaining smooth gradients has been relatively easy, such as systems with mostly smooth dynamics. In this work, we study the challenges that differentiable simulation presents when it is not feasible to expect that a single descent reaches a global optimum, which is often a problem in contact-rich scenarios. We analyze the optimization landscapes of diverse scenarios that contain both rigid bodies and deformable objects. In dynamic environments with highly deformable objects and fluids, differentiable simulators produce rugged landscapes with nonetheless useful gradients in some parts of the space. We propose a method that combines Bayesian optimization with semi-local 'leaps' to obtain a global search method that can use gradients effectively, while also maintaining robust performance in regions with noisy gradients. We show that our approach outperforms several gradient-based and gradient-free baselines on an extensive set of experiments in simulation, and also validate the method using experiments with a real robot and deformables. Videos and supplementary materials are available at https://tinyurl.com/globdiff.
Introduction
Physics simulation is indispensable for robot learning: it has been widely used to generate synthetic training data, explore learning of complex sensorimotor policies, and also help anticipate the performance of various learning methods before deploying on real robots. An increasing volume of recent work attempts to invert physics engines: given simulation outputs (e.g. trajectories), infer the input parameters (e.g. physical properties of the scene, robot controls) that best explain the outputs [1,2,3,4,5]. Differentiable physics engines aim to offer a direct and efficient way to invert simulation, and could enable fast gradient-based policy optimization and system identification. Works that we survey in the background section show a number of recent successes. However, to highlight the most promising prospects of differentiable simulation, works in this space focus primarily on scenarios where obtaining smooth gradients is relatively easy.
With more research efforts gearing up to develop (and use) differentiable physics engines, it is now crucial that we analyze the limits of these systems thoroughly. Surprisingly, efforts to investigate the differentiability of these simulators are few and far between. One prior work [6] has highlighted a few fundamental limitations of differentiable simulation in the presence of rigid contacts, in low-dimensional systems. In this work, we first investigate the quality of gradients by visualizing loss landscapes through differentiable simulators for several robotic manipulation tasks of interest. For this analysis we create several new challenging environments, and also use environments from prior work.
Our main focus is to understand whether the limitations of rigid contacts are also prevalent across deformable object simulation. We show that in many cases obtaining well-behaved gradients is feasible, but challenge the assumption that differentiable simulators provide easy landscapes in sufficiently interesting scenarios. We analyze scenarios with flexible deformable objects (cloth), plastics (clay), and fluids. Our visualizations of optimization landscapes uncover numerous local optima and plateaus, showing the need for extending gradient-based methods with global search. We propose a method that combines global search using Bayesian optimization with a semi-local search. Our semi-local strategy allows to make progress on parts of the landscape that are intractable for gradient descent alone. In our experiments we show visual, quantitative and qualitative analysis of the optimization problems arising from simulations with deformables in a variety of scenarios. We also validate our proposed approach on a real robot interacting with cloth, where our aim is to identify the properties of the cloth that make the motion of the simulated cloth match the real one.
Background
Differentiable Simulation: The earliest differentiable simulators [7,8] focused solely on rigid-body dynamics, often operating only over a small number of predefined object primitives. A number of approaches subsequently focused on arbitrary rigid body shapes [9,10], articulations [11,12,13], accurate contacts [14,15,16], scalability [17], speed [18,19], and multiphysics [1]. Differentiable simulation has also been explored in the context of deformable objects [20], cloth [21], fluids [22], robotic cutting [23] and other scientific and engineering phenomena [24,25,26]. While several approaches sought to achieve physically accurate forward simulations and their gradients, they did not rigorously study the impact of their loss landscapes on inference. Most modern simulators are plagued by two fundamental issues. First, they rely on gradient descent as an inference mechanism, which makes the optimization dependent on a good initial guess. Second, the loss landscape for contact-rich scenarios is laden with discontinuities and spurious local optima. This is evident even in the most basic case of a single bouncing ball as shown in [19]. In this work, we also verify that the issue is present in the more recent frameworks, such as Nimble [14] and Warp [27].
Global Search Methods: Global optimization tackles the problem of finding the global optimum in a given search space. In our case, this could constitute finding the optimal parameters for a controller (e.g. target position, velocity, force, torque) or physical parameters of a simulator to make the behavior of simulated objects match reality. Global optimization includes several broad families of methods: 1) space covering methods that systematically visit all parts of the space; 2) clustering methods that use cluster analysis to decide which areas of the space could be promising; 3) evolutionary methods that start from a broad set of candidates and evolve them towards exploring the promising regions; 4) Bayesian optimization (BO) that uses non-parametric approaches to keep track of a global model of the cost function and its uncertainty on the whole search space. The survey in [28] gives further details. Random search is also considered a global search method, and is one of the few methods that guarantees eventually finding a global optimum. While data efficiency can be a challenge, it is often a surprisingly robust baseline. Space covering and clustering methods face significant challenges when trying to scale to high dimensions. The recent focus in global optimization has been on evolutionary methods and BO, which can be successful on search spaces with thousands of dimensions. Hence, in this work we compare methods based on random search, evolutionary approaches and BO.
CMA-ES:
One of the most versatile evolutionary methods is Covariance Matrix Adaptation - Evolution Strategies (CMA-ES) [29]. It samples a randomized 'generation' of points at each iteration from a multivariate Gaussian distribution. To evolve this distribution, it computes the new mean from a subset of points with the lowest cost from the previous generation. It also uses these best-performing points to update the covariance matrix. The next generation of points is then sampled from the distribution with the updated mean and covariance. CMA-ES succeeds in a wide variety of applications [30], and has competitive performance on global optimization benchmarks [31]. This method does not make any restrictive assumptions about the search space and can be used 'as-is', i.e., without tuning. However, this method is technically not fully global: while it can overcome shallow local optima, it can get stuck in deeper local optima.
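As a concrete illustration of the sampling loop described above, here is a minimal ask/tell loop using the cma Python package (the quadratic objective is a placeholder):

import cma

def objective(x):
    return sum(xi ** 2 for xi in x)  # placeholder cost function

# 10-dimensional search starting at the origin with initial step size 0.5.
es = cma.CMAEvolutionStrategy(10 * [0.0], 0.5)
while not es.stop():
    candidates = es.ask()                                    # sample a generation
    es.tell(candidates, [objective(x) for x in candidates])  # update mean/covariance
best_x, best_cost = es.result.xbest, es.result.fbest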
Bayesian optimization (BO): BO views the problem of global search as seeking a point x* to minimize a given cost function f(x): f(x*) = min_x f(x). At each trial, BO optimizes an auxiliary acquisition function to select the next promising x to evaluate. f is frequently modeled with a Gaussian process (GP): f(x) ~ GP(m(x), k(x_i, x_j)). Modeling f with a GP allows computing the posterior mean f̂(x) and uncertainty (variance) Var[f(x)] for each candidate point x. Hence, the acquisition function can select points to balance a high mean (exploitation) with high uncertainty (exploration). The kernel function k(·, ·) encodes similarity between inputs. If k(x_i, x_j) is large for inputs x_i, x_j, then f(x_i) strongly influences f(x_j). One of the most widely used kernel functions is the Squared Exponential (SE): k_SE(r ≡ |x_i − x_j|) = σ_k² exp(−½ r^T diag(ℓ)^(−2) r), where σ_k² and ℓ are the signal variance and a vector of length scales, respectively. σ_k² and ℓ are hyperparameters and are optimized automatically by maximizing the marginal data likelihood. See [32] for further details.
BO-Leap: A Method for Global Search on Rugged Landscapes
We propose an approach for global search that can benefit from gradient-based descents and employs a semi-local strategy to make progress on rough optimization landscapes. To explore the loss landscape globally, we use Bayesian optimization (BO). BO models uncertainty over the loss and reduces uncertainty globally by exploring unseen regions. It also ensures to return to low-loss regions to further improve within the promising areas of the search space. BO uses an acquisition function to compute the most promising candidate to evaluate next. We treat each candidate as a starting point for a semi-local search. As we show in our experiments, the straightforward strategy of using gradient descent from each of these starting points does not ensure strong performance in scenarios with noisy gradients. Hence, we propose a hybrid descent strategy that combines gradient-free search with gradient-based descents. For this, we collect a small population of local samples and compute a sampling distribution based on CMA-ES. Instead of directly using the resulting mean and covariance to sample the next population (as CMA-ES would), we use gradient descent to evolve the distribution mean, then use this updated mean when sampling the next population. We outline our method, BO with semi-local leaps (BO-Leap), in Algorithm 1. We also visualize the algorithm in Figure 1. We start by initializing a global model for the loss with a Gaussian process (GP) prior. Using the BO acquisition function, we sample a vector x_1 of simulation or control parameters to evaluate. We then run a semi-local search from this starting point. For this, we initialize the population distribution N(μ_1 = x_1, σ_1² C_1) and sample K local candidates. Next, we update the distribution in a gradient-free way similar to the CMA-ES strategy. We then start a gradient descent from the updated mean s_1. The gradient descent runs for at most J steps and is halted if the loss stagnates or increases for more than three steps. The gradient is clipped to avoid leaving the search space boundaries. When the semi-local search reaches a given number of steps (e.g. 100 in our experiments), we update the BO posterior and let BO pick a new starting point x_2 globally. We add all the points that the semi-local search encounters to the set S_n that is used to compute the posterior for each BO trial.
Obtaining a noticeable improvement by incorporating gradients into a strategy based on CMA-ES is not trivial. One hybrid approach that seems conceptually sound proposes to shift the mean of each CMA-ES population by taking a step in the direction of the gradient [33]. In our preliminary experiments, taking a single gradient step was insufficient to significantly improve the performance of CMA-ES: the method from [33] performed worse than gradient-free CMA-ES. In contrast, our semi-local search strategy allows gradient-based descents to take large leaps on the parts of the landscape where gradients are relatively smooth. To prevent being misled by unstable gradients and to avoid wasting computation when stuck on a plateau, our method monitors the quality of the gradient-based evolution of the mean and terminates unpromising descents early. BO-Leap operates on three levels: global, semi-local and local (gradient descent), which allows it to tackle loss landscapes that are challenging due to various aspects: local optima, non-smooth losses, and noisy gradients. In the next section, we show that BO-Leap has strong empirical performance in contact-rich scenarios with highly deformable objects, plastic materials and liquid.

Algorithm 1: BO with semi-local leaps (BO-Leap); steps 12-15, the gradient-based evolution of the mean s_1, ..., s_J from the K_best candidates, follow the prose description above.
 6: x_n ← argmin_x LCB(x | GP(S_n))                        // get next simulation (or control) parameter vector
 7: μ_1 ← x_n; σ_1 ← 1.0; C_1 ← I (identity); K = 10       // initialize local population distribution
 8: for i = 1..local_steps do
 9:   x_k ~ N(μ_i, σ_i² C_i), k = 1..K                     // sample K population candidates
10:   l_k ← Sim(x_k), k = 1..K                             // compute losses l_k with x_k as sim/control parameters
11:   K_best ← sort(l_k)                                   // get candidates with the lowest loss
16:   break if l_j stagnates for more than 3 steps
17:   μ_{i+1} ← s_J; σ_{i+1}, C_{i+1} ← Eqs. 14-17 from [29]   // update local population distribution
18:   S_{n+1} = S_n ∪ {(s_j, l_j)}_{j=1..J} ∪ {(x_k, l_k)}_{k=1..K}   // update data for GP posterior
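A compact Python sketch of one semi-local leap, building on the GP/LCB helpers above for the outer BO loop: sim_loss and sim_grad stand in for a differentiable simulator's loss and gradient, and the CMA-ES covariance update is elided to a comment, so this illustrates the structure rather than the authors' implementation.

def semi_local_leap(x_start, sim_loss, sim_grad, steps=10, K=10, J=20, lr=0.1):
    """One semi-local search from a BO-proposed start point x_start."""
    rng = np.random.default_rng(0)
    mu, sigma, C = x_start.copy(), 1.0, np.eye(len(x_start))
    visited = []
    for _ in range(steps):
        # Sample a small local population and keep the best half.
        X = rng.multivariate_normal(mu, sigma ** 2 * C, size=K)
        losses = np.array([sim_loss(x) for x in X])
        visited += list(zip(X, losses))
        best = X[np.argsort(losses)[: K // 2]]
        s = best.mean(axis=0)            # CMA-ES-style mean update
        # Evolve the mean by clipped gradient descent; stop on stagnation.
        prev, stall = np.inf, 0
        for _ in range(J):
            l = sim_loss(s)
            visited.append((s.copy(), l))
            stall = stall + 1 if l >= prev else 0
            if stall > 3:
                break
            prev = l
            s = s - lr * np.clip(sim_grad(s), -1.0, 1.0)
        mu = s
        # (sigma and C would be updated here via Eqs. 14-17 of CMA-ES [29].)
    return visited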
A Suite of Differentiable Simulation Scenarios
3-Link Cartpole: an extension of the classic Cartpole with more challenging dynamics. Here, a 3-link pole needs to reach the blue target with its tip. Cart velocity and joint torques are optimized for each of the 100 steps of the episode, yielding a 400-dimensional optimization problem.
Pinball and Bounce: a ball launches and bounces off colliders as in a pinball game. We optimize orientations of n colliders to route the ball from the top to the blue target at the bottom. Pinball helps analyze effects of increasing the number of contacts, implemented in Nimble [14]. We also created a simplified Bounce scenario to study effects of a single collision in Warp [27], which produced well-behaved gradients in prior work [23].
To go beyond rigid objects, mesh-based simulations can model highly deformable objects, such as cloth. Particle-based simulators can model interactions with granular matter and liquids, plastic deformation with objects permanently elongating, buckling, bending, or twisting under stress. We created several mesh-based and particle-based environments that involve deformables using Diff-Taichi [19] and compared the quality of the loss landscapes and gradient fields they yield. We also included environments from the PlasticineLab [2] that focus on plastic deformation.
Fluid: a particle-based fluid simulation that also involves two rigid objects (a spoon and a sugar cube) interacting in fluid. The objective is to scoop the sugar cube out of the liquid. We optimize forward-back and up-down velocity of the spoon, and let it change fives times per episode, yielding a 10-dimensional optimization problem.
Assembly, Pinch, RollingPin, Rope, Table, Torus, TripleMove, Writer: scenarios based on PlasticineLab [2] with particle-based simulations of plastic deformation. We optimize 3D velocities of anchors or pins, allowing them to change five times per episode. This yields a 90D problem for TripleMove, 30D for Assembly, 15D for the rest.
Swing: a basic cloth swinging scenario to study dynamic tasks with deformables. We give options to optimize the speed of the anchor that swings the cloth, the cloth width & length, and stiffness of cloth that is partitioned into n×m patches. Varying n, m lets us experiment with effects of increasing dimensionality of the optimization problem.
Flip: a scenario with highly dynamic motion and rigid-deformable object collisions. The goal is to flip a pancake by moving the pan. We optimize pan motion: n waypoints for left-right, up-down position and pan tilt. The pancake is modeled using a massspring model with small stiffness to avoid large forces from dynamic movement.
We built this suite of environments to be representative of a wide range of possible robot manipulation scenarios, regardless of whether differentiable simulators produce sensible gradients in them. We found that some of these environments yield high-quality gradients, while others do not. In Section 5.1, we show that Cartpole, Fluid, and the eight PlasticineLab environments produce wellbehaved gradients and that our method can outperform competing baselines in these scenarios. Then, in Section 5.3, we show that differentiable simulators can produce incorrect gradients in highly dynamic and contact-rich environments like Pinball, Swing, and Flip, which makes gradient-based methods (including our method) less effective in these environments.
Experiments and Analysis of Optimization Landscapes
In this section, we present visual analysis of optimization landscapes and gradients, as well as comparison experiments, in various environments. We use Rand (random search) and CMA-ES as our gradient-free baselines; we use RandDescents, an algorithm that runs multiple gradient descents from randomly sampled initial parameter values, and BO as our gradient-based baselines.
Simulation Experiments and Analysis
We first analyze the use of gradients with rigid objects. The left side of Figure 2 shows experiments on the 1-Link Cartpole, where gradient-free CMA-ES performs similarly to the gradient-based algorithms (RandDescents, BO), while BO-Leap obtains a significantly lower loss. The right side shows results for the 3-Link Cartpole, which has much more complex dynamics. Here, all gradient-based algorithms show a large improvement over gradient-free ones. CMA-ES does better than a completely random search (Rand), but fails to lift the pole tip to the target. Next, we study a variety of scenarios that use particle-based simulation. Particle simulators have much larger computational and memory requirements than mesh-based options. Gradient computations further increase resource requirements. For the differentiability to be warranted, the benefits of gradients have to be significant. We show that gradient-based methods can indeed have large benefits. Figure 3 visualizes the Fluid scenario: the left side shows a 2D slice of the loss landscape. It has many valleys with shallow local optima and appears more difficult than many test landscapes designed to challenge global optimization methods. The middle of Figure 3 shows directions of gradients produced by the differentiable simulator. The right side shows results for gradient-based and gradient-free methods. CMA-ES gets stuck in a local optimum: the spoon fails to lift the sugar cube in most runs, as shown in the next-to-last column. In contrast, BO-Leap successfully lifts the sugar cube in most runs, outperforming all baselines. BO is designed to balance allocating trials to global exploration while still reserving enough trials to return to the well-explored regions that look promising. In this scenario such a global optimization strategy proves to be beneficial. Furthermore, BO-Leap also handles the rough parts of the landscape more effectively than the baseline BO.
In the next set of experiments, we analyze eight environments from PlasticineLab [2]. Figure 4 shows a Rope scenario, where the objective is to wrap it around a rigid cylindrical pole. The loss landscape is smooth in most dimensions (the left plot shows an example 2D slice). However, higher dimensionality makes the overall problem challenging. The middle plot confirms that gradients are correct in most parts, but also shows a large plateau where even the gradient-based approaches are likely to get stuck. The right side shows evaluation results: CMA-ES and BO fail to wrap the rope fully around the pole, while BO-Leap succeeds in pulling the ends on the back side of the pole correctly. This environment also shows that simply using gradients with random restarts is not sufficient even on smooth landscapes: RandDescents has poor performance that does not improve significantly over gradient-free random search. BO-Leap outperforms all other methods and successfully completes the task of wrapping the rope fully around the pole (light blue region shows the target shape). Gradientfree CMA-ES cannot reliably get the optimal behavior. While BO-Leap uses gradients effectively, this scenario shows that simply using gradients with random restarts is not sufficient: RandDescents has poor performance. Figure 5 shows experiments with the other seven PlasticineLab tasks. The top shows the RollingPin task, where the objective is to spread the dark-blue dough using a thin rigid cylindrical white pin (the light-blue region shows the target shape). The loss landscape is smoother than that of the Rope task, but has even larger plateaus and flat gradients in most regions, making the problem challenging despite a smooth loss. BO-Leap outperforms CMA-ES and gradient-based methods on this task as well. Example frames on the right show that CMA-ES thins out the dough too much (large gaps appear in the middle, revealing the light-blue target shape underneath). BO and BO-Leap keep the central portion of the dough more uniformly spread. The bottom plots show results for the additional six PlasticineLab tasks. BO-Leap outperforms gradient-based methods in all tasks, and achieves significantly lower loss than CMA-ES in Assembly, Torus, and TripleMove tasks (see supplementary materials for further details).
Validation in Real Robot Setup
To validate the proposed method using real data, we consider the task of identifying the properties of a simulated deformable object to make its motion match the real motion. In this scenario, a Gen3 (7DoF) Kinova robot manipulates a small deformable object by lifting it up from the table surface. The object is tracked by two RealSense D435 depth cameras, one placed overhead, the other on the side. The objective is to optimize the size (width & length), mass, and friction of the deformable object, as well as the stiffness of each of the 8×8 = 64 patches of the object, which yields a 68D optimization problem. The loss penalizes the distance of the simulated corner vertices to the positions of the corners marked on the real object.
Figure 6: Results for optimizing a simulated object to match the motion of a real paper (left) and cloth (right), which are lifted by a real robot. We visualize the midway alignment showing that our method finds the size, mass, friction and stiffness (for each of the 64 patches) to bring simulated object behavior close to the real one.
Figure 6 shows results of experiments with two objects: a stiff but flexible paper (left) and a highly flexible cloth (right). In both cases, BO-Leap is able to find physical simulation parameters that produce the best alignment of the simulated and real object. This offers a validation of the method on real data, and shows that it can tackle 'real2sim' problems to automatically bring the behavior of simulated objects closer to reality. This is valuable for highly deformable objects, since manual tuning is intractable in high dimensions.
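A minimal sketch of this corner-alignment loss, under our own assumptions about data shapes (the paper does not give code):

import torch

def corner_alignment_loss(sim_corners, real_corners):
    # Mean distance between simulated corner vertices and corners marked on
    # the real object. Both tensors are assumed to have shape (T, 4, 3):
    # T frames, 4 corners, xyz positions; the exact weighting is our guess.
    return (sim_corners - real_corners).norm(dim=-1).mean()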
Limitations
Modeling the global loss posterior with BO can be computationally expensive. We use BoTorch [35] for BO on GPU, which can scale to high dimensions. We focused on results within a budget of 1K optimization steps. If a much larger budget is allowed, then more tests would be needed to validate that BoTorch (or other frameworks) can scale well in terms of compute and memory resources. The biggest challenges of optimization with differentiable simulators arise from the quality of the gradients, which can be insufficient for gradient-based algorithms, including our method, to benefit. In Figure 7, we show three environments where gradients produced by differentiable simulators are of poor quality. In the Pinball environment, gradients with respect to collider orientations are computable only if a collision (with the pinball) has occurred to begin with. In addition to collision-induced discontinuities, the absence of gradients results in plateaus, affecting gradient-based optimizers. Even in a state-of-the-art simulator, Warp [27], with relaxed contact models, a simple Bounce task induces gradient discontinuities (see supplement for details). In Swing and Flip tasks, while the dynamics appear realistic, differentiable simulators yield gradients with incorrect (often opposite) directions. This is a common pitfall for practitioners who use differentiable simulators without assessing loss landscapes first.
Conclusion
Our analysis shows that differentiable simulation of contact-rich manipulation scenarios results in loss landscapes that are difficult for simple gradient-based optimizers. To overcome this, we proposed a hybrid approach that combines local (gradient-based) optimization with global search, and demonstrated success on rugged loss landscapes, focusing on cases with deformables. We believe our analyses and tools provide critical feedback to differentiable simulator designers and users alike, to take differentiable simulators a step closer to real-world robot learning applications.
A Additional Environment Descriptions and Details
In this section, we list details of the environments covered in the paper. For each environment we describe the simulation framework, physics model, contact type, parameter information, loss configuration, as well as landscape and gradient characteristics.
A.1 Rigid Body Environments
A.1.1 3-link Cartpole
A cart carries a triple inverted pendulum where each link has length 1m. Links farther away from the cart are lighter than the link attached to the cart (for easier control). The goal is to move the cart and actuate the joints so that the tip of the pendulum is as close as possible to a preset goal location.
• Simulation Framework or Physics Model: Nimble [14]
• Types of Contacts: none (there is no self-contact between different parts of the cartpole)
• Parameter Dimensionality: 400
• Parameter Description: at each of the 200 timesteps, the parameters specify the cart velocity (1 dimension) and the torques of the 3 joints.
• Loss: L2 distance from final tip position to target position.
• Landscape and Gradient Characteristics: landscape is very smooth. Gradient quality is good.
A.1.2 Pinball
On a vertical platform 8m wide and 10m tall, a ball is dropped onto a grid of n_h × n_w spinning colliders, each attached to the platform by one revolute joint. The goal is to guide the ball to a goal position at the end of the episode by adjusting the orientation of each collider.
• Simulation Framework or Physics Model: Nimble [14]
• Types of Contacts: rigid (pinball collides with colliders and walls)
• Parameter Dimensionality: n_h × n_w (in this work, we consider two setups: n_h = 1, n_w = 2 and n_h = 4, n_w = 4)
• Parameter Description: rotation angle of each spinning collider in the n_h × n_w grid.
• Loss: L2 distance from final pinball position to target pinball position near the bottom-right of the platform.
• Landscape and Gradient Characteristics: landscape has large flat regions as well as discontinuities. Gradients are zero in flat regions and not useful at locations of discontinuities.
A.2 Deformable Object Environments
A.2.1 Fluid
A ladle with 2 DOF is manipulated to scoop a sugar cube from a tank of transparent syrup with width 0.4m, depth 0.4m, and height 0.2m. The goal is to scoop the cube to as high a position as possible while staying close to the ladle and the central vertical axis of the tank. This environment is made with DiffTaichi. The dynamics of the syrup and the sugar cube are modeled with MLS-MPM [36], a state-of-the-art particle-based fluid simulation method.
• Simulation Framework or Physics Model: DiffTaichi [19] with MLS-MPM [36] as physics model for the fluid and the sugar cube
• Types of Contacts: collision between rigid objects and fluid (such as fluid particles bouncing back off container walls)
• Parameter Dimensionality: 10
• Parameter Description: an episode splits into 5 equal-length segments. In each segment, 2 parameter values control the horizontal (forward-backward) and vertical (up-down) speed of the ladle. Note that although the ladle might not directly make contact with the cube, the ladle can push the liquid particles, which can then push the cube away from the ladle.
• Loss:
max(0, y − y_w) + 3·√((x − x_s)² + (z − z_s)²) + √((x − x_c)² + (z − z_c)²),
where (x, y, z) denotes the final sugar cube position, (x_s, y_s, z_s) denotes the final ladle body center position, (x_c, z_c) marks the tank's central vertical axis, and y_w = 0.2 denotes the height of the container. In this and all following DiffTaichi environments, the x-axis points rightward, the y-axis points upward, and the z-axis points to the front. (A code sketch of this loss appears after this list.)
• Landscape and Gradient Characteristics: landscape is rugged and has many local minima. Gradient quality is good.
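A minimal sketch of the Fluid loss above, assuming the square-root reading of the reconstructed formula; the (x_c, z_c) default values are placeholders for the tank's central axis:

import math

def fluid_loss(cube_pos, ladle_pos, x_c=0.0, z_c=0.0, y_w=0.2):
    # cube_pos = (x, y, z) of the sugar cube and ladle_pos = (x_s, y_s, z_s)
    # of the ladle body center, both at the final frame.
    x, y, z = cube_pos
    x_s, y_s, z_s = ladle_pos
    height_term = max(0.0, y - y_w)             # as printed in the text
    to_ladle = math.hypot(x - x_s, z - z_s)     # horizontal distance to the ladle
    to_axis = math.hypot(x - x_c, z - z_c)      # distance to the tank center axis
    return height_term + 3.0 * to_ladle + to_axis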
A.2.2 PlasticineLab Environments
We consider 8 different environments derived from PlasticineLab [2]: Assembly, Pinch, RollingPin, Rope, Table, Torus, TripleMove, and Writer. These environments involve 1-3 anchors or a pin manipulating one or several pieces of deformable objects. The goal of all environments is to make the final deformable object configuration close to a target shape. When we adapt the environments for our purpose, we only modify the format of optimizable parameters and leave other aspects of the environments such as dynamics, episode length, and loss formulation unchanged. As we already described the high-level objectives of several PlasticineLab environments we analyzed in detail in the main paper, we direct readers to the original paper [2] for additional details of each environment.
• Simulation Framework or Physics Model: PlasticineLab is based on DiffTaichi [19]; its environments use MLS-MPM [36] to model interactions between rigid and deformable objects; rigid bodies are modeled using signed distance fields (SDF)
• Types of Contacts: collision between rigid and deformable objects
• Parameter Dimensionality: 90 (TripleMove), 30 (Assembly, Rope), 15 (Pinch, RollingPin, Table, Writer)
• Parameter Description: an episode splits into 5 equal-length segments. In each segment, the 3D velocities of each anchor or pin are controlled by optimizable parameters. In TripleMove, there are six anchors to be controlled, so the optimizable parameter has a total of [5 segments × 6 anchors × 3 velocity values = 90] dimensions. In Assembly and Rope, there are two anchors to be controlled, so the parameter has [5 segments × 2 anchors × 3 velocity values = 30] dimensions. In Pinch, Table, and Writer, there is one anchor, so the parameter has [5 segments × 1 anchor × 3 velocity values = 15] dimensions. In RollingPin, instead of controlling the 3D velocity of the pin in each segment, the environment uses 3 values to control the left-right, up-down, and top-down tilting angle of the pin, leading to [5 segments × 1 anchor × 3 control parameters = 15] dimensions.
• Loss: PlasticineLab uses a 3-part loss that encourages the anchors or pins to be closer to the deformable objects while penalizing the distance between the final deformable object shape and the target shape. The only modification we made to the original loss function is increasing the weight of the loss component that encourages the manipulators to get close to the target shape. For more details of the original loss function, please refer to Section 3.1 of the original paper [2].
• Landscape and Gradient Characteristics: the landscape is smooth with local minima, while the high dimensionality makes the problem challenging; gradient quality is good.
A.2.3 Swing
Two anchors grasp the two corners of a 20 × 20cm piece of cloth and swing it onto the floor. The goal is to make the final cloth configuration as close as possible to a goal configuration.
• Simulation Framework or Physics Model: DiffTaichi with a mass-spring model as the cloth simulation technique; to handle contact, we update the velocities of cloth vertices when contacts occur so that the updated velocity is perpendicular to the normal of the contact surface
• Types of Contacts: collision between rigid (floor) and deformable objects (cloth)
• Parameter Dimensionality: 16 if stiffness is optimized; 3 if initial speed is optimized
• Parameter Description: we have two different parameter setups in this task. In the first setup, we split the cloth into 4 × 4 = 16 cloth patches and fix the swinging motion. The stiffness values of the 16 cloth patches are optimized. In the second setup, we fix the stiffness of the cloth and optimize the initial 3D velocity of the cloth.
• Loss: we have three different loss formulations for this task. Formulation 'single' optimizes the distance between the center of mass of the cloth at the final frame and a goal position. Formulation 'corner' optimizes the average distance between the four corners of the cloth and their corresponding goal positions. Formulation 'mesh' optimizes the average distance between the final vertex positions of the cloth and a target cloth mesh (a code sketch of the three variants appears after this list).
• Landscape and Gradient Characteristics: landscape is rugged. Gradients are noisy in large areas of the parameter space.
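A sketch of the three Swing loss variants described above; tensor shapes and corner indices are illustrative assumptions, not the environment's exact implementation:

import torch

def swing_loss(final_verts, goal, kind="corner", corner_idx=(0, 3, 12, 15)):
    # final_verts: (N, 3) cloth vertex positions at the final frame.
    # goal: a point ('single'), four points ('corner'), or an (N, 3) mesh ('mesh').
    if kind == "single":
        return (final_verts.mean(dim=0) - goal).norm()
    if kind == "corner":
        idx = torch.tensor(corner_idx)
        return (final_verts[idx] - goal).norm(dim=-1).mean()
    return (final_verts - goal).norm(dim=-1).mean()      # 'mesh'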
A.2.4 Flip
A pancake is placed in a pan with 20cm radius and smooth edges; the pan moves and tilts with 3 DOF. The goal is to manipulate the pan so that the pancake is flipped at the end of the episode.
• Simulation Framework or Physics Model: DiffTaichi with a mass-spring model for simulating the pancake; compared to the Swing task, the stiffness value in this task is smaller to make sure collision forces are not too large during the dynamic movement of the pancake; to handle contact, we update the velocities of pancake vertices when contacts occur so that the updated velocity is perpendicular to the normal of the contact surface
• Types of Contacts: collision between rigid (floor) and deformable objects (pancake)
• Parameter Dimensionality: 15
• Parameter Description: an episode is split into 5 equal-length segments. The parameters set the x (left-right) and y (up-down) positions as well as the tilt of the pan at the end of each segment. This leads to [5 segments × 3 control parameters = 15] dimensions; the position and angular velocity of the pan are linearly interpolated within each segment (see the interpolation sketch after this list).
• Loss: average L2 distance between final position of the four pancake corners (the four pancake vertices with highest and lowest x and y values at the start of the episode) and four corresponding target positions.
• Landscape and Gradient Characteristics: landscape is extremely rugged. Gradients are noisy in large areas of the parameter space.
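A sketch of the segment-wise control scheme used above; the 50-step horizon, the zero initial pose, and the array shapes are our assumptions:

import numpy as np

def interpolate_controls(params, horizon=50, n_segments=5, n_ctrl=3):
    # params holds n_segments * n_ctrl values: the pan pose targets at the
    # end of each segment; poses are linearly interpolated within segments.
    targets = np.asarray(params).reshape(n_segments, n_ctrl)
    knots = np.vstack([np.zeros((1, n_ctrl)), targets])   # start from the initial pose
    seg_len = horizon // n_segments
    controls = []
    for i in range(n_segments):
        for t in range(seg_len):
            a = (t + 1) / seg_len
            controls.append((1 - a) * knots[i] + a * knots[i + 1])
    return np.asarray(controls)                           # (horizon, n_ctrl)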
B Additional Analysis
B.1 Analysis of Challenges with Gradients for Rigid Contacts
In this section, we analyze the quality of gradients through a differentiable physics engine, observing the nature of discontinuities induced by contact. We define the Bounce task, where the goal is to steer a bouncing (red) ball with known friction and elasticity parameters to a target position (green). We seek a policy that imparts an initial 3D velocity v_init to the ball such that, at the end of the simulation time t_max, the center of mass of the ball reaches a pre-specified target position. Importantly, the policy must shoot the ball onto the ground plane and, upon a bounce, reach the target location (this is achieved by restricting the cone of velocities to contain a vertically downward component). This enforces at least one discontinuity in the forward simulation.
We use the relaxed contact model from Warp [27], and compute the gradients for a wide range of initial velocities (−10 to 10 m/s along both the X and Y directions). Notice that in the X direction (horizontal speeds) the gradients are smoother, as changes to the X components of the velocity only push the ball further from (or closer to) the goal, and have little impact on the discontinuities (bounces). However, the Y-axis components of the velocities (vertical speeds) tend to have a significant impact on the location and nature of the discontinuities, and therefore induce a larger number of local optima.
Figure B.1: A simple rigid body Bounce task implemented using Warp [27]. The goal is to impart an initial velocity to the red ball so that it reaches the target location (green) at the end of 2 seconds. The ball moves in 3D: X left-right, Y down-up, Z in-out of the image plane. We restrict our attention to the Y direction and observe that discontinuities caused by the rigid contacts remain, even in Warp - a recent framework with semi-implicit Euler integration and advanced relaxed-contact and stiffness models.
While prior work (e.g., DiffTaichi [19]) has extensively analyzed gradients through similar contact scenarios, it leverages the conceptually simple (but numerically unstable) Euler integrator and perfectly elastic collisions. Warp [27], on the other hand, uses both a symplectic time integrator and a contact model that includes both friction and elasticity parameters. Our Bounce experiments confirm that significant challenges with computing gradients through rigid contacts remain, even in these more recent and advanced differentiable simulation frameworks.
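The discontinuity pattern is easy to reproduce without a full physics engine. The toy point-mass bounce below is our own illustration (not Warp's contact model): the terminal horizontal position is smooth in the horizontal launch speed but only piecewise smooth in the vertical one.

import numpy as np

def bounce_final_x(vx, vy, e=0.8, dt=1e-3, t_max=2.0, g=9.8, y0=1.0):
    # Toy 2D point mass with a restitution-e velocity flip at the ground;
    # the constants (restitution, timestep, drop height) are illustrative.
    x, y = 0.0, y0
    for _ in range(int(t_max / dt)):
        vy -= g * dt                 # semi-implicit Euler: velocity first
        x += vx * dt
        y += vy * dt
        if y < 0.0:                  # contact event: reflect vertical velocity
            y = 0.0
            vy = -e * vy
    return x

# Sweeping the vertical speed exposes the kinks described above: the loss
# |x(t_max) - x_goal| is smooth in vx but only piecewise smooth in vy.
losses = [abs(bounce_final_x(2.0, vy) - 3.0) for vy in np.linspace(-10.0, 10.0, 201)]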
B.2 Landscape Gallery
In this section, we present more landscape and gradient plots to provide more insight into the differentiable simulation environments we presented in the paper. Apart from this section, we also present animated landscape plots in the supplementary video.
Pinball Below, we show two landscape and gradient plots, one plotting the landscape of the Pinball 2D environment with two colliders, and the other plotting a 2D slice for the Pinball 16D environment with a grid of 4-by-4 colliders. In the right plot, the x and y axes correspond to the rotation of the center two colliders at the bottom of the collider grid. From the plots, we see that the rugged landscapes occur in different variations of the Pinball task. Fluid In the main paper, we presented Fluid as an environment where the landscape is rugged and showed one 2D slice of the loss landscape of the 10D environment. Here, we show two more slices of the landscape (the middle and right plots below). In the plots, we see that the optimization landscape is similarly rugged in these dimensions. Assembly Assembly is an environment in PlasticineLab where two anchors need to pick up a soft purple ball on the left side of the scene and place it on a yellow stand on the right side. The landscape and gradients plotted below show that these environments have smooth landscapes with local minima and good quality gradients. TripleMove In the TripleMove environment, six anchors manipulate three blocks to move them to three corresponding goal positions. It can be seen in the plots below that the landscape generated by this environment is pretty rugged. Writer Below we show landscape and gradient plots of the Writer environment. We observe that in this environment, the landscape is smoother than that of the previous environments, but there are still a number of local minima at different areas of the landscape (see the left plot and the right plot). Flip In the paper, we showed that the Flip environment has rugged landscapes with suboptimal gradient quality. Below we show additional landscape and gradient plots of the Flip environment to confirm this. As seen in the plots, the landscape is very rugged, and gradients are pointing to rather random directions.
C Details for Method Implementation and Compute Resources
For implementing CMA-ES, we used a fast and lightweight library provided by [37].
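For reference, a minimal usage sketch of that library's ask/tell interface; the simulate callback and the budget split are placeholders, not the paper's exact configuration:

import numpy as np
from cmaes import CMA

def run_cmaes(simulate, dim, budget=1000, sigma=0.5):
    # simulate(x) is assumed to run one episode and return its loss.
    optimizer = CMA(mean=np.zeros(dim), sigma=sigma)
    best, evals = np.inf, 0
    while evals < budget:
        solutions = []
        for _ in range(optimizer.population_size):
            x = optimizer.ask()
            loss = simulate(x)
            evals += 1
            best = min(best, loss)
            solutions.append((x, loss))
        optimizer.tell(solutions)     # update mean and covariance
    return best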
For implementing Bayesian optimization (BO), we used the BoTorch library [35], which supports various versions of Gaussian processes (exact & approximate) and various BO acquisition functions. For all experiments described in the main paper, we used the Lower Confidence Bound (LCB) acquisition function with default parameters (i.e. exploration coefficient α = 1.0). In our previous experience, LCB had similar performance to other commonly used acquisition functions (such as Expected Improvement), and LCB has the advantage of being very easy to implement and interpret. See Section IV in [32] for more information. BoTorch implements automatic hyperparameter optimization based on maximizing the marginal likelihood (see [32], Section V-A). We used this for all our BO-based experiments.
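For concreteness, a minimal sketch of one BO iteration with these BoTorch components. SingleTaskGP, UpperConfidenceBound, and optimize_acqf are BoTorch's; the wrapper itself, the negation of the loss, and the optimizer settings are our assumptions (BoTorch maximizes, so UCB with beta = 1.0 on the negated loss plays the role of LCB with alpha = 1.0 on the loss):

import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from botorch.acquisition import UpperConfidenceBound
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

def bo_suggest(train_x, train_loss, bounds):
    # train_x: (n, d) evaluated parameters; train_loss: (n,) episode losses;
    # bounds: (2, d) tensor of box bounds on the parameters.
    model = SingleTaskGP(train_x, -train_loss.unsqueeze(-1))   # model -loss
    fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))
    acq = UpperConfidenceBound(model, beta=1.0)
    candidate, _ = optimize_acqf(acq, bounds=bounds, q=1,
                                 num_restarts=10, raw_samples=256)
    return candidate.squeeze(0)     # next parameter vector to simulate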
Figure 1: A conceptual illustration of BO-Leap.
Algorithm 1 (fragment): the GP posterior over losses is computed as GP(m(·), k(·,·)) | S_n using Eq. 2.25-26 from [34] (line 6). The local-descent start point is the mean of the best CMA-ES samples, (1/|K_best|) Σ_{k∈K_best} x_k (line 12); then for j = 1..J the simulator returns the loss and gradient, l_j, ∇Sim|_{s_j} ← Sim(s_j), and a gradient step s_j ← s_{j−1} − α ∇Sim|_{s_j} is taken (lines 13-16).
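A compact rendering of this descent phase in code (our own sketch of the fragment; the sim interface, step count, and learning rate are illustrative):

import numpy as np

def descend_from_mean(sim, elite_xs, n_steps=10, lr=1e-2):
    # Start from the mean of the best CMA-ES samples, then take a few
    # gradient steps through the differentiable simulator.
    # sim(s) is assumed to return (loss, grad).
    s = np.mean(np.asarray(elite_xs), axis=0)
    trace = []
    for _ in range(n_steps):
        loss, grad = sim(s)
        trace.append((s.copy(), loss))
        s = s - lr * grad
    return trace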
Figure 2: Left: results for 1-Link Cartpole. Right: results for 3-Link Cartpole. For all bar charts in this paper, the vertical axis shows the mean and 95% confidence interval of the best loss value after 1,000 optimization steps. Each optimization step runs one simulation episode, computes the loss, and propagates the gradient wrt. optimization parameters if the algorithm needs it.
On the top-right corner of Figure 2, we show qualitative results from a CMA-ES run (top) and from a BO-Leap run (bottom), where BO-Leap brings the tip close to the target.
Figure 3: A scenario with scooping up a sugar cube from fluid. The left side shows a 2D slice of the 10D optimization landscape and the corresponding gradients. To make gradient directions visible, we normalize the magnitude of gradients in all gradient plots in this paper; the arrows point towards the direction of negated gradients (i.e. the direction gradient descent updates take). The right plot shows quantitative evaluation of optimization methods. We visualize qualitative results for CMA-ES and BO-Leap on the right side.
Figure 4: Results for the Rope environment.
Figure 5: Top row: analysis and results for RollingPin task. Bottom rows: results for the other six PlasticineLab tasks. Plots show mean performance over 10 runs of each method per task (see supplemental for further details). Environment illustrations below the six bar charts at the bottom of this figure are borrowed from [2].
Figure 7: Visual insights into the challenges of obtaining well-behaved gradients for cases with rigid contacts in Pinball, and for deformables in the presence of contacts and highly dynamic tasks, such as Swing & Flip.
Figure A.1: Illustration of the 3-link Cartpole environment.
Figure A.2: Illustration of the Pinball environment with 16 colliders.
Figure A.3: Illustration of the Fluid environment. The upper and lower rows are renderings of the same episode in the same environment. The upper row uses the built-in realtime rendering engine in DiffTaichi, while the lower row uses Blender, which is slower but higher quality.
Figure A.4: Illustration of environments derived from PlasticineLab. From top to bottom, we show Assembly, Pinch, RollingPin, Rope, Table, Torus, TripleMove, and Writer.
Figure A.5: Illustration of the Swing environment. The upper and lower rows are renderings of the same episode in the same environment. The upper row uses the built-in realtime rendering engine in DiffTaichi, while the lower row uses Blender, which is slower but has higher quality.
Figure A.6: Illustration of the Flip environment. The upper and lower rows are renderings of the same episode in the same environment. The upper row uses the built-in realtime rendering engine in DiffTaichi, while the lower row uses Blender, which is slower but has higher quality.
Figure B.2: Landscape (top) and gradient (bottom) plots for Pinball environment. Left column - Pinball 2D dimensions 0 and 1. Right column - Pinball 16D dimensions 13 and 14.
Figure B.3: Landscape (top) and gradient (bottom) plots for Fluid environment with 10D parameters. Left column - dimensions 0 and 1. Middle column - dimensions 6 and 8. Right column - dimensions 8 and 9.
Figure B.6: Landscape (top) and gradient (bottom) plots for TripleMove environment with 90D parameters. Left column - dimensions 0 and 1. Middle column - dimensions 1 and 2. Right column - dimensions 3 and 5.
Figure B.7: Landscape (top) and gradient (bottom) plots for Writer environment with 15D parameters. Left column - dimensions 0 and 3. Middle column - dimensions 1 and 4. Right column - dimensions 2 and 5.
Figure B.8: Landscape (top) and gradient (bottom) plots for Flip environment. Left column - dimensions 0 and 1. Middle column - dimensions 0 and 2. Right column - dimensions 1 and 2.
Figure B.4: Landscape (top) and gradient (bottom) plots for Assembly environment with 30D parameters. Left column - dimensions 0 and 1 (left anchor speed X and Y for 0 <= t < 10). Middle column - dimensions 1 and 2 (left anchor speed Y and Z for 0 <= t < 10). Right column - dimensions 4 and 5 (right anchor speed Y and Z for 0 <= t < 10).
Table In the Table environment, an anchor pushes one leg of a table so it points outward. The visualizations below reveal that small changes in the action can result in very different loss values.
Figure B.5: Landscape (top) and gradient (bottom) plots for Table environment with 30D parameters. Left column - dimensions 0 and 2. Middle column - dimensions 2 and 5. Right column - dimensions 12 and 13.
Anchor speed X for 0 <= t < 10 (dim 0)
1.00
0.75
0.50
0.25
0.00
0.25
0.50
0.75
1.00
Anchor speed Z for 0 <= t < 10 (dim 2)
14.5
19.0
23.5
28.0
32.5
37.0
41.5
46.0
50.5
55.0
Loss
1.00 0.75 0.50 0.25 0.00 0.25 0.50 0.75 1.00
Anchor speed Z for 0 <= t < 10 (dim 2)
1.00
0.75
0.50
0.25
0.00
0.25
0.50
0.75
1.00
Anchor speed Z for 10 <= t < 20 (dim 5)
14.4
22.4
30.4
38.4
46.4
54.4
62.4
70.4
78.4
86.4
Loss
1.00 0.75 0.50 0.25 0.00 0.25 0.50 0.75 1.00
Anchor speed X for 40 <= t < 50 (dim 12)
1.00
0.75
0.50
0.25
0.00
0.25
0.50
0.75
1.00
Anchor speed Z for 40 <= t < 50 (dim 14)
14.34
15.00
15.66
16.32
16.98
17.64
18.30
18.96
19.62
20.28
Loss
We experimented with various versions of Gaussian process (GP) models, including exact GP and sparse variational GP versions that are provided in BOTorch. BOTorch uses the lower-level GPyTorch library [38] for GP implementations. We found that exact GPs performed best, and used these for all BO experiments reported in the main paper. In future work, it would be interesting to experiment with other GP implementations that could support posteriors with a much larger number of points.
We used NVIDIA Tesla T4 GPUs and 32 cores of an Intel Xeon 2.3GHz CPU for our experiments. The computational requirements of each simulation environment differ widely. For example, our environments based on Warp, Nimble and mesh-based DiffTaichi were fastest, requiring only a few minutes for 1,000 episodes (including gradient computations). Particle-based DiffTaichi environments (Fluid and the PlasticineLab environments) required significantly more time (e.g. Fluid took ≈ 2 hours for 1,000 episodes including gradient computations).
K M Jatavallabhula, M Macklin, F Golemo, V Voleti, L Petrini, M Weiss, B Considine, J Parent-Levesque, K Xie, K Erleben, L Paull, F Shkurti, D Nowrouzezahrai, S Fidler, gradSim: Differentiable simulation for system identification and visuomotor control. K. M. Jatavallabhula, M. Macklin, F. Golemo, V. Voleti, L. Petrini, M. Weiss, B. Considine, J. Parent-Levesque, K. Xie, K. Erleben, L. Paull, F. Shkurti, D. Nowrouzezahrai, and S. Fidler. gradSim: Differentiable simulation for system identification and visuomotor control. 2021.
PlasticineLab: A soft-body manipulation benchmark with differentiable physics. Z Huang, Y Hu, T Du, S Zhou, H Su, J B Tenenbaum, C Gan, Z. Huang, Y. Hu, T. Du, S. Zhou, H. Su, J. B. Tenenbaum, and C. Gan. PlasticineLab: A soft-body manipulation benchmark with differentiable physics. 2021.
X Lin, Z Huang, Y Li, J B Tenenbaum, D Held, C Gan, Diffskill, arXiv:2203.17275Skill abstraction from differentiable physics for deformable object manipulations with tools. arXiv preprintX. Lin, Z. Huang, Y. Li, J. B. Tenenbaum, D. Held, and C. Gan. Diffskill: Skill abstraction from differentiable physics for deformable object manipulations with tools. arXiv preprint arXiv:2203.17275, 2022.
DiffCloud: Real-to-Sim from Point Clouds with Differentiable Simulation and Rendering of Deformable Objects. P Sundaresan, R Antonova, J Bohg, arXiv:2204.03139arXiv preprintP. Sundaresan, R. Antonova, and J. Bohg. DiffCloud: Real-to-Sim from Point Clouds with Differentiable Simulation and Rendering of Deformable Objects. arXiv preprint arXiv:2204.03139, 2022.
Risp: Rendering-invariant state predictor with differentiable simulation and rendering for cross-domain parameter estimation. P Ma, T Du, J B Tenenbaum, W Matusik, C Gan, arXiv:2205.05678arXiv preprintP. Ma, T. Du, J. B. Tenenbaum, W. Matusik, and C. Gan. Risp: Rendering-invariant state predictor with differentiable simulation and rendering for cross-domain parameter estimation. arXiv preprint arXiv:2205.05678, 2022.
H J T Suh, M Simchowitz, K Zhang, R Tedrake, arXiv:2202.00817Do differentiable simulators give better policy gradients. arXiv preprintH. J. T. Suh, M. Simchowitz, K. Zhang, and R. Tedrake. Do differentiable simulators give better policy gradients? arXiv preprint arXiv:2202.00817, 2022.
A differentiable physics engine for deep learning in robotics. J Degrave, M Hermans, J Dambre, Frontiers in neurorobotics. 6J. Degrave, M. Hermans, J. Dambre, et al. A differentiable physics engine for deep learning in robotics. Frontiers in neurorobotics, page 6, 2019.
End-to-end differentiable physics for learning and control. F De Avila Belbute-Peres, K Smith, K Allen, J Tenenbaum, J Z Kolter, Advances in Neural Information Processing Systems (NeurIPS). S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. GarnettCurran Associates, IncF. de Avila Belbute-Peres, K. Smith, K. Allen, J. Tenenbaum, and J. Z. Kolter. End-to-end dif- ferentiable physics for learning and control. In S. Bengio, H. Wallach, H. Larochelle, K. Grau- man, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems (NeurIPS). Curran Associates, Inc., 2018.
Identifying mechanical models of unknown objects with differentiable physics simulations. C Song, A Boularias, PMLRProceedings of the 2nd Conference on Learning for Dynamics and Control. the 2nd Conference on Learning for Dynamics and Control120C. Song and A. Boularias. Identifying mechanical models of unknown objects with differ- entiable physics simulations. In Proceedings of the 2nd Conference on Learning for Dynam- ics and Control, volume 120 of Proceedings of Machine Learning Research, pages 749-760. PMLR, 10-11 Jun 2020.
Learning to slide unknown objects with differentiable physics simulations. C Song, A Boularias, arXiv:2005.05456arXiv preprintC. Song and A. Boularias. Learning to slide unknown objects with differentiable physics simulations. arXiv preprint arXiv:2005.05456, 2020.
Efficient differentiable simulation of articulated bodies. Y.-L Qiao, J Liang, V Koltun, M C Lin, International Conference on Machine Learning. PMLRY.-L. Qiao, J. Liang, V. Koltun, and M. C. Lin. Efficient differentiable simulation of articulated bodies. In International Conference on Machine Learning, pages 8661-8671. PMLR, 2021.
Differentiable physics models for realworld offline model-based reinforcement learning. M Lutter, J Silberbauer, J Watson, J Peters, 2021 IEEE International Conference on Robotics and Automation (ICRA). IEEEM. Lutter, J. Silberbauer, J. Watson, and J. Peters. Differentiable physics models for real- world offline model-based reinforcement learning. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 4163-4170. IEEE, 2021.
Pods: Policy optimization via differentiable simulation. M A Z Mora, M P Peychev, S Ha, M Vechev, S Coros, International Conference on Machine Learning. PMLRM. A. Z. Mora, M. P. Peychev, S. Ha, M. Vechev, and S. Coros. Pods: Policy optimization via differentiable simulation. In International Conference on Machine Learning, pages 7805- 7817. PMLR, 2021.
Fast and feature-complete differentiable physics engine for articulated rigid bodies with contact constraints. K Werling, D Omens, J Lee, I Exarchos, C K Liu, Robotics: Science and Systems. K. Werling, D. Omens, J. Lee, I. Exarchos, and C. K. Liu. Fast and feature-complete differen- tiable physics engine for articulated rigid bodies with contact constraints. In Robotics: Science and Systems, 2021.
Add: Analytically differentiable dynamics for multi-body systems with frictional contact. M Geilinger, D Hahn, J Zehnder, M Bächer, B Thomaszewski, S Coros, ACM Transactions on Graphics (TOG). 396M. Geilinger, D. Hahn, J. Zehnder, M. Bächer, B. Thomaszewski, and S. Coros. Add: Analyt- ically differentiable dynamics for multi-body systems with frictional contact. ACM Transac- tions on Graphics (TOG), 39(6):1-15, 2020.
A H Taylor, S Le Cleac'h, Z Kolter, M Schwager, Z Manchester, arXiv:2203.00806Dojo: A differentiable simulator for robotics. arXiv preprintA. H. Taylor, S. Le Cleac'h, Z. Kolter, M. Schwager, and Z. Manchester. Dojo: A differentiable simulator for robotics. arXiv preprint arXiv:2203.00806, 2022.
Scalable differentiable physics for learning and control. Y.-L Qiao, J Liang, V Koltun, M Lin, International Conference on Machine Learning. PMLRY.-L. Qiao, J. Liang, V. Koltun, and M. Lin. Scalable differentiable physics for learning and control. In International Conference on Machine Learning, pages 7847-7856. PMLR, 2020.
Brax -a differentiable physics engine for large scale rigid body simulation. C D Freeman, E Frey, A Raichuk, S Girgin, I Mordatch, O Bachem, Conference on Neural Information Processing Systems (NeurIPS) Datasets and Benchmarks Track. C. D. Freeman, E. Frey, A. Raichuk, S. Girgin, I. Mordatch, and O. Bachem. Brax -a dif- ferentiable physics engine for large scale rigid body simulation. In Conference on Neural Information Processing Systems (NeurIPS) Datasets and Benchmarks Track, 2021.
Difftaichi: Differentiable programming for physical simulation. Y Hu, L Anderson, T.-M Li, Q Sun, N Carr, J Ragan-Kelley, F Durand, Y. Hu, L. Anderson, T.-M. Li, Q. Sun, N. Carr, J. Ragan-Kelley, and F. Durand. Difftaichi: Differentiable programming for physical simulation. 2020.
Chainqueen: A real-time differentiable physical simulator for soft robotics. Y Hu, J Liu, A Spielberg, J B Tenenbaum, W T Freeman, J Wu, D Rus, W Matusik, 2019 International conference on robotics and automation (ICRA). IEEEY. Hu, J. Liu, A. Spielberg, J. B. Tenenbaum, W. T. Freeman, J. Wu, D. Rus, and W. Ma- tusik. Chainqueen: A real-time differentiable physical simulator for soft robotics. In 2019 International conference on robotics and automation (ICRA), pages 6265-6271. IEEE, 2019.
Differentiable cloth simulation for inverse problems. J Liang, M Lin, V Koltun, Advances in Neural Information Processing Systems. 32J. Liang, M. Lin, and V. Koltun. Differentiable cloth simulation for inverse problems. Advances in Neural Information Processing Systems, 32, 2019.
Differentiable fluid simulations for deep learning. N Thuerey, APS Division of Fluid Dynamics Meeting Abstracts. N. Thuerey. Differentiable fluid simulations for deep learning. In APS Division of Fluid Dynamics Meeting Abstracts, pages H17-006, 2019.
DiSECt: A Differentiable Simulation Engine for Autonomous Robotic Cutting. E Heiden, M Macklin, Y S Narang, D Fox, A Garg, F Ramos, 10.15607/RSS.2021.XVII.067Proceedings of Robotics: Science and Systems, Virtual. Robotics: Science and Systems, VirtualE. Heiden, M. Macklin, Y. S. Narang, D. Fox, A. Garg, and F. Ramos. DiSECt: A Differ- entiable Simulation Engine for Autonomous Robotic Cutting. In Proceedings of Robotics: Science and Systems, Virtual, July 2021. doi:10.15607/RSS.2021.XVII.067.
Differentiable molecular simulations for control and learning. W Wang, S Axelrod, R Gómez-Bombarelli, W. Wang, S. Axelrod, and R. Gómez-Bombarelli. Differentiable molecular simulations for control and learning. 2020.
Diffaqua: A differentiable computational design pipeline for soft underwater swimmers with shape interpolation. P Ma, T Du, J Z Zhang, K Wu, A Spielberg, R K Katzschmann, W Matusik, ACM Transactions on Graphics (TOG). 404132P. Ma, T. Du, J. Z. Zhang, K. Wu, A. Spielberg, R. K. Katzschmann, and W. Matusik. Diffaqua: A differentiable computational design pipeline for soft underwater swimmers with shape inter- polation. ACM Transactions on Graphics (TOG), 40(4):132, 2021.
Jax m.d. a framework for differentiable physics. S S Schoenholz, E D Cubuk, Advances in Neural Information Processing Systems. Curran Associates, Inc33S. S. Schoenholz and E. D. Cubuk. Jax m.d. a framework for differentiable physics. In Ad- vances in Neural Information Processing Systems, volume 33. Curran Associates, Inc., 2020.
Warp: A high-performance python framework for gpu simulation and graphics. M Macklin, NVIDIA GPU Technology Conference (GTC). M. Macklin. Warp: A high-performance python framework for gpu simulation and graphics. https://github.com/nvidia/warp, March 2022. NVIDIA GPU Technology Conference (GTC).
global) optimization: Historical notes and recent developments. M Locatelli, F Schoen, EURO Journal on Computational Optimization. 9100012M. Locatelli and F. Schoen. (global) optimization: Historical notes and recent developments. EURO Journal on Computational Optimization, 9:100012, 2021.
Completely derandomized self-adaptation in evolution strategies. N Hansen, A Ostermeier, Evolutionary computation. 92N. Hansen and A. Ostermeier. Completely derandomized self-adaptation in evolution strate- gies. Evolutionary computation, 9(2):159-195, 2001.
The cma evolution strategy: a comparing review. Towards a new evolutionary computation. N Hansen, N. Hansen. The cma evolution strategy: a comparing review. Towards a new evolutionary computation, pages 75-102, 2006.
Comparing results of 31 algorithms from the black-box optimization benchmarking bbob-2009. N Hansen, A Auger, R Ros, S Finck, P Pošík, Proceedings of the 12th annual conference companion on Genetic and evolutionary computation. the 12th annual conference companion on Genetic and evolutionary computationN. Hansen, A. Auger, R. Ros, S. Finck, and P. Pošík. Comparing results of 31 algorithms from the black-box optimization benchmarking bbob-2009. In Proceedings of the 12th annual conference companion on Genetic and evolutionary computation, pages 1689-1696, 2010.
Taking the Human Out of the Loop: A Review of Bayesian Optimization. B Shahriari, K Swersky, Z Wang, R P Adams, N De Freitas, Proceedings of the IEEE. 1041B. Shahriari, K. Swersky, Z. Wang, R. P. Adams, and N. de Freitas. Taking the Human Out of the Loop: A Review of Bayesian Optimization. Proceedings of the IEEE, 104(1):148-175, 2016.
Combining evolution strategy and gradient descent method for discriminative learning of bayesian classifiers. X Chen, X Liu, Y Jia, Proceedings of the 11th Annual conference on Genetic and evolutionary computation. the 11th Annual conference on Genetic and evolutionary computationX. Chen, X. Liu, and Y. Jia. Combining evolution strategy and gradient descent method for discriminative learning of bayesian classifiers. In Proceedings of the 11th Annual conference on Genetic and evolutionary computation, pages 507-514, 2009.
Gaussian processes for machine learning. C K Williams, C E Rasmussen, MIT press2Cambridge, MAC. K. Williams and C. E. Rasmussen. Gaussian processes for machine learning, volume 2. MIT press Cambridge, MA, 2006.
BoTorch: A Framework for Efficient Monte-Carlo Bayesian Optimization. M Balandat, B Karrer, D R Jiang, S Daulton, B Letham, A G Wilson, E Bakshy, Advances in Neural Information Processing Systems. 33M. Balandat, B. Karrer, D. R. Jiang, S. Daulton, B. Letham, A. G. Wilson, and E. Bakshy. BoTorch: A Framework for Efficient Monte-Carlo Bayesian Optimization. In Advances in Neural Information Processing Systems 33, 2020.
A moving least squares material point method with displacement discontinuity and two-way rigid body coupling. Y Hu, Y Fang, Z Ge, Z Qu, Y Zhu, A Pradhana, C Jiang, ACM Transactions on Graphics. 374150Y. Hu, Y. Fang, Z. Ge, Z. Qu, Y. Zhu, A. Pradhana, and C. Jiang. A moving least squares material point method with displacement discontinuity and two-way rigid body coupling. ACM Transactions on Graphics, 37(4):150, 2018.
A Lightweight Covariance Matrix Adaptation Evolution Strategy (CMA-ES) Implementation. M. Shibata and H. Imamura. URL https://github.com/CyberAgentAILab/cmaes.
GPyTorch: Blackbox Matrix-matrix Gaussian Process Inference with GPU Acceleration. J Gardner, G Pleiss, K Q Weinberger, D Bindel, A G Wilson, Advances in neural information processing systems. 31J. Gardner, G. Pleiss, K. Q. Weinberger, D. Bindel, and A. G. Wilson. GPyTorch: Black- box Matrix-matrix Gaussian Process Inference with GPU Acceleration. Advances in neural information processing systems, 31, 2018.
| [
"https://github.com/nvidia/warp,",
"https://github.com/CyberAgentAILab/cmaes."
] |
[
"Admissibility of retarded diagonal systems with one-dimensional input space",
"Admissibility of retarded diagonal systems with one-dimensional input space"
] | [
"Rafał Kapica ",
"Jonathan R Partington ",
"Radosław Zawiski "
] | [] | [] | We investigate infinite-time admissibility of a control operator B in a Hilbert space statedelayed dynamical system setting of the formż(t) = Az(t)is also diagonal and u ∈ L 2 (0, ∞; C). Our approach is based on the Laplace embedding between L 2 and the Hardy space H 2 (C + ). The results are expressed in terms of the eigenvalues of A and A 1 and the sequence representing the control operator. | 10.1007/s00498-023-00345-6 | [
"https://export.arxiv.org/pdf/2207.00662v2.pdf"
] | 250,264,503 | 2207.00662 | 45e379540a8abd7af5c40f90df4d4d7ec0af5203 |
Admissibility of retarded diagonal systems with one-dimensional input space
Rafał Kapica
Jonathan R Partington
Radosław Zawiski
Admissibility of retarded diagonal systems with one-dimensional input space
admissibility; state delay; infinite-dimensional diagonal system. 2020 Subject Classification: 34K30, 34K35, 47D06, 93C23
We investigate infinite-time admissibility of a control operator B in a Hilbert space state-delayed dynamical system setting of the form $\dot{z}(t) = Az(t) + A_1 z(t-\tau) + Bu(t)$, where $A$ generates a diagonal semigroup, $A_1$ is also diagonal and $u\in L^2(0,\infty;\mathbb{C})$. Our approach is based on the Laplace embedding between $L^2$ and the Hardy space $H^2(\mathbb{C}_+)$. The results are expressed in terms of the eigenvalues of $A$ and $A_1$ and the sequence representing the control operator.
Introduction
State-delayed differential equations arise in many areas of applied mathematics, reflecting the fact that in the real world there is an inherent input-output delay in every physical system. Among sources of delay we have the spatial character of the system in relation to signal propagation, measurement processing or hatching time in biological systems, to name a few. Whenever the delay has a considerable influence on the outcome of the process it has to be incorporated into the process's mathematical model. Hence, an understanding of a state-delayed system, even in the linear case, plays a crucial role in the analysis and control of dynamical systems, particularly when asymptotic behaviour is concerned.
In order to cover as large a class of dynamical systems as possible, our analysis uses an abstract description. Hence the retarded state-delayed dynamical system we are interested in has an abstract representation given by
$$\dot{z}(t) = Az(t) + A_1 z(t-\tau) + Bu(t), \qquad z(0) = x,\quad z_0 = f, \tag{1}$$
where the state space $X$ is a Hilbert space, $A : D(A)\subset X\to X$ is a closed, densely defined generator of a $C_0$-semigroup $(T(t))_{t\ge 0}$ on $X$, $A_1\in\mathcal{L}(X)$ and $0<\tau<\infty$ is a fixed delay (some discussions of the difficulties inherent in taking $A_1$ unbounded appear in Subsection 4.2). The input function is $u\in L^2(0,\infty;\mathbb{C})$, $B$ is the control operator, the pair $x\in D(A)$ and $f\in L^2(-\tau,0;X)$ forms the initial condition. We also assume that $X$ possesses a sequence of normalized eigenvectors $(\phi_k)_{k\in\mathbb{N}}$ forming a Riesz basis, with associated eigenvalues $(\lambda_k)_{k\in\mathbb{N}}$.
We analyse (1) from the perspective of infinite-time admissibility which, roughly speaking, asserts whether a solution z of (1) follows a required type of trajectory. A more detailed description of admissibility requires an introduction of pivot duality and some related norm inequalities. For that reason we postpone it until Subsection 2.2, where all these elements are already introduced for the setting within which we analyse (1).
With regard to previous admissibility results, necessary and sufficient conditions for infinite-time admissibility of $B$ in the undelayed case of (1), under the assumption of a diagonal generator $(A, D(A))$, were analysed using Carleson measures, e.g. in [13,14,28]. Those results were extended to normal semigroups [29], then generalized to the case when $u\in L^2(0,\infty;t^{\alpha}dt)$ for $\alpha\in(-1,0)$ in [31] and further to the case $u\in L^2(0,\infty;w(t)\,dt)$ in [16,17]. For a thorough presentation of admissibility results, not restricted to diagonal systems, for the undelayed case we refer the reader to [15] and the rich list of references therein.
For the delayed case, in contrast to the undelayed one, a different setting is required. Some of the first studies in such a setting are [12] and [8], and these form a basis for [5]. In this article we follow the latter in developing a setting for admissibility analysis. We also build on [24], where a similar setting was used to present admissibility results for a simplified version of (1), that is, with a diagonal generator $(A, D(A))$ with the delay in its argument (see the Examples section below).
In fact, as the system analysed in [24] is a special case of (1), the results presented here contain those of [24]. The most important drawback of the results in [24] is that the conditions leading to sufficiency for infinite-time admissibility there imply also that the semigroup generator is bounded. Thus, to obtain results for unbounded generators one is forced to go through so-called reciprocal systems. The results presented below are free from such a limitation and can be applied to unbounded diagonal generators directly, as shown in the Examples section.
This paper is organised as follows. Section 2 defines the notation and provides preliminary results. These include a general delayed equation setting, which is applied later to the problem of our interest and the problem of infinite-time admissibility. Section 3 shows how the general setting looks for a particular case of retarded diagonal case. It then shows a component-wise analysis of infinite-time admissibility and provides results for the complete system. Section 4 gives examples.
Preliminaries
In this paper we use standard Sobolev spaces. For any $\alpha\in\mathbb{R}$ we denote the following half-planes
$$\mathbb{C}_{\overleftarrow{\alpha}} := \{s\in\mathbb{C} : \operatorname{Re}s<\alpha\}, \qquad \mathbb{C}_{\overrightarrow{\alpha}} := \{s\in\mathbb{C} : \operatorname{Re}s>\alpha\},$$
with a simplification for two special cases, namely $\mathbb{C}_- := \mathbb{C}_{\overleftarrow{0}}$ and $\mathbb{C}_+ := \mathbb{C}_{\overrightarrow{0}}$. We make use of the Hardy space $H^2(\mathbb{C}_+)$ that consists of all analytic functions $f : \mathbb{C}_+\to\mathbb{C}$ for which
$$\sup_{\alpha>0}\int_{-\infty}^{\infty}|f(\alpha+i\omega)|^2\,d\omega < \infty. \tag{2}$$
If $f\in H^2(\mathbb{C}_+)$ then for a.e. $\omega\in\mathbb{R}$ the limit
$$f^*(i\omega) = \lim_{\alpha\downarrow 0} f(\alpha+i\omega) \tag{3}$$
exists and defines a function $f^*\in L^2(i\mathbb{R})$ called the boundary trace of $f$. Using boundary traces, $H^2(\mathbb{C}_+)$ is made into a Hilbert space with the inner product defined as
$$\langle f,g\rangle_{H^2(\mathbb{C}_+)} := \langle f^*,g^*\rangle_{L^2(i\mathbb{R})} := \frac{1}{2\pi}\int_{-\infty}^{+\infty} f^*(i\omega)\,\overline{g^*(i\omega)}\,d\omega \qquad \forall f,g\in H^2(\mathbb{C}_+). \tag{4}$$
For more information about Hardy spaces see [23], [11] or [22]. We also make use of the Paley-Wiener Theorem (see [25, Chapter 19]).
Proposition 1. The Laplace transform $\mathcal{L} : L^2(0,\infty;Y)\to H^2(\mathbb{C}_+;Y)$ is an isometric isomorphism.
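For instance (an illustrative check), take $Y=\mathbb{C}$ and $f(t)=e^{-t}$. Then $\|f\|^2_{L^2(0,\infty)} = \int_0^\infty e^{-2t}\,dt = \frac{1}{2}$, while $(\mathcal{L}f)(s) = \frac{1}{s+1}$ and, by (4), $\|\mathcal{L}f\|^2_{H^2(\mathbb{C}_+)} = \frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{d\omega}{1+\omega^2} = \frac{1}{2}$, so the two norms indeed agree, as Proposition 1 asserts.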
The delayed equation setting
We follow a general setting for a state-delayed system from [5, Chapter 3.1], described for a diagonal case also in [24]. And so, to include the influence of the delay we extend the state space of (1). To that end consider a trajectory of (1) given by z :
[−τ, ∞) → X.
For each $t\ge 0$ we call $z_t : [-\tau,0]\to X$, $z_t(\sigma) := z(t+\sigma)$ a history segment with respect to $t\ge 0$. With history segments we consider a so-called history function of $z$ denoted by $h_z : [0,\infty)\to L^2(-\tau,0;X)$, $h_z(t) := z_t$. In [5, Lemma 3.4] we find the following
Proposition 2. Let $1\le p<\infty$ and $z\in W^{1,p}_{loc}(-\tau,\infty;X)$. Then the history function $h_z : t\mapsto z_t$ of $z$ is continuously differentiable from $\mathbb{R}_+$ into $L^p(-\tau,0;X)$ with derivative $\frac{\partial}{\partial t}h_z(t) = \frac{\partial}{\partial\sigma}z_t$.
To remain in the Hilbert space setting we limit ourselves to p = 2 and take
$$\mathcal{X} := X\times L^2(-\tau,0;X) \tag{5}$$
as the aforementioned state space extension with an inner product
$$\Big\langle \tbinom{x}{f}, \tbinom{y}{g}\Big\rangle_{\mathcal{X}} := \langle x,y\rangle_X + \langle f,g\rangle_{L^2(-\tau,0;X)}. \tag{6}$$
Then $(\mathcal{X}, \|\cdot\|_{\mathcal{X}})$ becomes a Hilbert space with the norm $\big\|\tbinom{x}{f}\big\|^2_{\mathcal{X}} = \|x\|^2_X + \|f\|^2_{L^2}$.
We assume that a linear and bounded delay operator $\Psi : W^{1,2}(-\tau,0;X)\to X$ acts on history segments $z_t$ and thus consider (1) in the form
$$\dot{z}(t) = Az(t) + \Psi z_t + Bu(t), \qquad z(0) = x,\quad z_0 = f, \tag{7}$$
where the pair $x\in D(A)$ and $f\in L^2(-\tau,0;X)$ forms an initial condition. A particular choice of $\Psi$ can be found in (21) below. Due to Proposition 2, system (7) may be written as an abstract Cauchy problem
$$\dot{v}(t) = \mathcal{A}v(t) + \mathcal{B}u(t), \qquad v(0) = \tbinom{x}{f}, \tag{8}$$
where
$v : [0,\infty)\ni t\mapsto \tbinom{z(t)}{z_t}\in\mathcal{X}$ and $\mathcal{A}$ is a linear operator on $D(\mathcal{A})\subset\mathcal{X}$, where
$$D(\mathcal{A}) := \Big\{\tbinom{x}{f}\in D(A)\times W^{1,2}(-\tau,0;X) : f(0) = x\Big\}, \tag{9}$$
$$\mathcal{A} := \begin{pmatrix} A & \Psi\\ 0 & \frac{d}{d\sigma}\end{pmatrix}, \tag{10}$$
and the control operator is $\mathcal{B} = \tbinom{B}{0}$. The operator $(\mathcal{A}, D(\mathcal{A}))$ is closed and densely defined on $\mathcal{X}$ [5, Lemma 3.6]. Note that up to this moment we do not need to know more about $\Psi$.
Concerning the resolvent of $(\mathcal{A}, D(\mathcal{A}))$, let
$$A_0 := \frac{d}{d\sigma}, \qquad D(A_0) = \{z\in W^{1,2}(-\tau,0;X) : z(0) = 0\},$$
and for $s\in\mathbb{C}$ set $\varepsilon_s(\sigma) := e^{s\sigma}$ and $\Psi_s\in\mathcal{L}(X)$, $\Psi_s x := \Psi(\varepsilon_s x)$ (see [5]). Moreover, for $s\in\rho(\mathcal{A})$ the resolvent $R(s,\mathcal{A})$ is given by
$$R(s,\mathcal{A}) = \begin{pmatrix} R(s, A+\Psi_s) & R(s, A+\Psi_s)\,\Psi R(s, A_0)\\ \varepsilon_s R(s, A+\Psi_s) & \big(\varepsilon_s R(s, A+\Psi_s)\Psi + I\big) R(s, A_0)\end{pmatrix}. \tag{11}$$
In the sequel we make use of Sobolev towers, also known as duality with a pivot (see [26, Chapter 2] or [9, Chapter II.5]). To this end we have
Definition 4. Let $\beta\in\rho(A)$ and denote $(X_1, \|\cdot\|_1) := (D(A), \|\cdot\|_1)$ with $\|x\|_1 := \|(\beta I - A)x\|$ $(x\in D(A))$. Similarly, we set $\|x\|_{-1} := \|(\beta I - A)^{-1}x\|$ $(x\in X)$. Then the space $(X_{-1}, \|\cdot\|_{-1})$ denotes the completion of $X$ under the norm $\|\cdot\|_{-1}$. For $t\ge 0$ we define $T_{-1}(t)$ as the continuous extension of $T(t)$ to the space $(X_{-1}, \|\cdot\|_{-1})$.
The adjoint generator plays an important role in the pivot duality setting. Thus we take
Definition 5. Let $(A, D(A))$ be closed and densely defined on $X$. The domain of its adjoint is
$$D(A^*) := \{y\in X : D(A)\ni x\mapsto \langle Ax, y\rangle \text{ is a bounded linear functional}\}. \tag{12}$$
Since $D(A)$ is dense in $X$ the functional in (12) has a unique bounded extension to $X$. By the Riesz representation theorem there exists a unique $w\in X$ such that $\langle Ax, y\rangle = \langle x, w\rangle$. Then we define $A^*y := w$ so that
$$\langle Ax, y\rangle = \langle x, A^*y\rangle \qquad \forall x\in D(A)\ \forall y\in D(A^*). \tag{13}$$
We have the following (see [26, Prop. 2.10.2]).
Proposition 6. With the notation of Definition 4 let $(A^*, D(A^*))$ be the adjoint of $(A, D(A))$. Then $\beta\in\rho(A^*)$, $(X_1^d, \|\cdot\|_1^d) := (D(A^*), \|\cdot\|_1^d)$ with $\|x\|_1^d := \|(\beta I - A^*)x\|$ $(x\in D(A^*))$ is a Hilbert space and $X_{-1}$ is the dual of $X_1^d$ with respect to the pivot space $X$, that is $X_{-1} = (D(A^*))'$.
Much of our reasoning is justified by the following proposition, which we include here for the reader's convenience (for more details see [9, Chapter II.5] or [26, Chapter 2.10]).
Proposition 7. With the notation of Definition 4 we have the following:
(i) The spaces $(X_1, \|\cdot\|_1)$ and $(X_{-1}, \|\cdot\|_{-1})$ are independent of the choice of $\beta\in\rho(A)$.
(ii) $(T_1(t))_{t\ge0}$ is a $C_0$-semigroup on the Banach space $(X_1, \|\cdot\|_1)$ and we have $\|T_1(t)\|_1 = \|T(t)\|$ for all $t\ge0$.
(iii) $(T_{-1}(t))_{t\ge0}$ is a $C_0$-semigroup on the Banach space $(X_{-1}, \|\cdot\|_{-1})$ and $\|T_{-1}(t)\|_{-1} = \|T(t)\|$ for all $t\ge0$.
In the sequel, we denote the restriction (extension) of $T(t)$ described in Definition 4 by the same symbol $T(t)$, since this is unlikely to lead to confusion.
In the sequel we also use the following result by Miyadera and Voigt [9, Corollaries III.3.15 and 3.16], which gives sufficient conditions for a perturbed generator to remain the generator of a $C_0$-semigroup.
Proposition 8. Let $(A, D(A))$ be the generator of a strongly continuous semigroup $(T(t))_{t\ge0}$ on a Banach space $X$ and let $P\in\mathcal{L}(X_1, X)$ be a perturbation which satisfies
$$\int_0^{t_0}\|PT(r)x\|\,dr \le q\|x\| \qquad \forall x\in D(A) \tag{14}$$
for some $t_0>0$ and $0\le q<1$. Then the sum $A+P$ with domain $D(A+P) := D(A)$ generates a strongly continuous semigroup $(S(t))_{t\ge0}$ on $X$. Moreover, for all $t\ge0$ the $C_0$-semigroup $(S(t))_{t\ge0}$ satisfies
$$S(t)x = T(t)x + \int_0^t S(s)PT(t-s)x\,ds \qquad \forall x\in D(A). \tag{15}$$
The admissibility problem
The basic object in the formulation of the admissibility problem is a linear system and its mild solution
$$\dot{x}(t) = Ax(t) + Bu(t); \qquad x(t) = T(t)x_0 + \int_0^t T(t-s)Bu(s)\,ds, \tag{16}$$
where $x : [0,\infty)\to X$, $u\in V$, where $V$ is a normed space of measurable functions from $[0,\infty)$ to $U$ and $B$ is a control operator; $x_0\in X$ is an initial state. In many practical examples the control operator $B$ is unbounded, hence (16) is viewed on an extrapolation space $X_{-1}\supset X$ where $B\in\mathcal{L}(U, X_{-1})$. Introduction of $X_{-1}$, however, comes at the price of the physical interpretation of the solution. To be more precise, a dynamical system expressed by (16) describes a physical system where one can assign a physical meaning to $X$, with the use of which the modelling is performed. That is not always true for $X_{-1}$. We would then like to study those control operators $B$ for which the (mild) solution is a continuous $X$-valued function that carries a physical meaning. In a rigorous way, to ensure that the state $x(t)$ lies in $X$ it is sufficient that
$$\int_0^t T_{-1}(t-s)Bu(s)\,ds \in X \quad \text{for all inputs } u\in V.$$
Definition 9. Let $B\in\mathcal{L}(U, X_{-1})$ and $t\ge0$. The forcing operator $\Phi_t\in\mathcal{L}(V, X_{-1})$ is given by
$$\Phi_t(u) := \int_0^t T(t-\sigma)Bu(\sigma)\,d\sigma. \tag{17}$$
Put differently, we have
Definition 10. The control operator $B\in\mathcal{L}(U, X_{-1})$ is called
(i) finite-time admissible for $(T(t))_{t\ge0}$ on a Hilbert space $X$ if for each $t>0$ there is a constant $K_t$ such that
$$\|\Phi_t(u)\|_X \le K_t\|u\|_V \qquad \forall u\in V; \tag{18}$$
(ii) infinite-time admissible for $(T(t))_{t\ge0}$ if there is a constant $K\ge0$ such that
$$\|\Phi_t\|_{\mathcal{L}(V,X)} \le K \qquad \forall t\ge0. \tag{19}$$
For infinite-time admissibility it is convenient to define a different version of the forcing operator, namely $\Phi_\infty : L^2(0,\infty;U)\to X_{-1}$,
$$\Phi_\infty(u) := \int_0^\infty T(t)Bu(t)\,dt. \tag{20}$$
The infinite-time admissibility of $B$ then follows from the boundedness of $\Phi_\infty$ in (20) taken as an operator from $L^2(0,\infty;U)$ to $X$. For a more detailed discussion concerning infinite-time admissibility see also [15] and [26] with references therein.
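For instance (an illustrative computation), let $X = U = \mathbb{C}$, $T(t) = e^{\lambda t}$ with $\operatorname{Re}\lambda < 0$ and $B = b\in\mathbb{C}$. Then $\Phi_\infty(u) = b\int_0^\infty e^{\lambda t}u(t)\,dt$ and, by the Cauchy-Schwarz inequality, $|\Phi_\infty(u)| \le |b|\,\|e^{\lambda\,\cdot}\|_{L^2}\|u\|_{L^2} = |b|\,(2|\operatorname{Re}\lambda|)^{-1/2}\|u\|_{L^2}$, with equality for $u(t) = e^{\bar{\lambda}t}$; hence $B$ is infinite-time admissible with $K = |b|\,(2|\operatorname{Re}\lambda|)^{-1/2}$.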
The setting of retarded diagonal systems
We begin with the general setting of the previous section expressed by (8) with elements defined there. Then, consecutively specifying these elements, we reach a description of the concrete case of a retarded diagonal system. Let the delay operator $\Psi$ be a point evaluation, i.e. define $\Psi\in\mathcal{L}(W^{1,2}(-\tau,0;X), X)$ as
$$\Psi(f) := A_1 f(-\tau), \tag{21}$$
where boundedness of $\Psi$ results from the continuous embedding of $W^{1,2}(-\tau,0;X)$ in $C([-\tau,0], X)$.
With the delay operator given by (21) we are in a position to describe pivot duality for $\mathcal{X}$ given by (5) with $(\mathcal{A}, D(\mathcal{A}))$ given by (10)-(9) and with $\mathcal{B} = \tbinom{B}{0}$. Then, using the pivot duality, we consider (8) on the completion space $\mathcal{X}_{-1}$ where the control operator $\mathcal{B}\in\mathcal{L}(U, \mathcal{X}_{-1})$. To write explicitly all the elements of the pivot duality setting we need to determine the adjoint operator $(\mathcal{A}^*, D(\mathcal{A}^*))$ (see Proposition 6).
Proposition 11. Let $X$, $(A, D(A))$ and $A_1$ be as in (1) and $(\mathcal{A}, D(\mathcal{A}))$ be defined by (10)-(9) with $\Psi$ given by (21). Then $(\mathcal{A}^*, D(\mathcal{A}^*))$, the adjoint of $(\mathcal{A}, D(\mathcal{A}))$, is given by
$$D(\mathcal{A}^*) = \Big\{\tbinom{y}{g}\in D(A^*)\times W^{1,2}(-\tau,0;X) : A_1^*y = g(-\tau)\Big\}, \tag{22}$$
$$\mathcal{A}^*\tbinom{y}{g} = \tbinom{A^*y + g(0)}{-\frac{d}{d\sigma}g}, \tag{23}$$
where $(A^*, D(A^*))$ is the adjoint of $(A, D(A))$ and $A_1^*$ is the adjoint of $A_1$.
Proof. Let $F$ be the set defined as the right-hand side of (22). To show that $D(\mathcal{A}^*)\subset F$ we adapt the approach from [19]. Let $v = \tbinom{f(0)}{f}\in D(\mathcal{A})$, $w = \tbinom{y}{g}\in D(\mathcal{A}^*)$ and let $\mathcal{A}^*w = \tbinom{(\mathcal{A}^*w)_0}{(\mathcal{A}^*w)_1}$. By (10), (9) and the adjoint Definition 5 we get
$$\langle\mathcal{A}^*w, v\rangle_{\mathcal{X}} = \langle(\mathcal{A}^*w)_0, f(0)\rangle_X + \int_{-\tau}^{0}\langle(\mathcal{A}^*w)_1(\sigma), f(\sigma)\rangle_X\,d\sigma = \langle y, Af(0)\rangle_X + \langle y, A_1 f(-\tau)\rangle_X + \int_{-\tau}^{0}\Big\langle g(\sigma), \frac{d}{d\sigma}f\Big\rangle_X d\sigma, \tag{24}$$
and boundedness of the above for every $v\in D(\mathcal{A})$ implies that $y\in D(A^*)$. Observe also that
$$\begin{aligned}
\int_{-\tau}^{0}\langle(\mathcal{A}^*w)_1(\sigma), f(\sigma)\rangle_X\,d\sigma
&= \int_{-\tau}^{0}\Big\langle(\mathcal{A}^*w)_1(\sigma),\ f(0)-\int_{\sigma}^{0}\frac{d}{d\xi}f(\xi)\,d\xi\Big\rangle_X d\sigma\\
&= \int_{-\tau}^{0}\langle(\mathcal{A}^*w)_1(\sigma), f(0)\rangle_X\,d\sigma - \int_{-\tau}^{0}\Big\langle(\mathcal{A}^*w)_1(\sigma), \int_{\sigma}^{0}\frac{d}{d\xi}f(\xi)\,d\xi\Big\rangle_X d\sigma\\
&= \int_{-\tau}^{0}\langle(\mathcal{A}^*w)_1(\sigma), f(0)\rangle_X\,d\sigma - \int_{-\tau}^{0}\int_{-\tau}^{\xi}\Big\langle(\mathcal{A}^*w)_1(\sigma), \frac{d}{d\xi}f(\xi)\Big\rangle_X d\sigma\,d\xi\\
&= \Big\langle\int_{-\tau}^{0}(\mathcal{A}^*w)_1(\sigma)\,d\sigma,\ f(0)\Big\rangle_X - \int_{-\tau}^{0}\Big\langle\int_{-\tau}^{\xi}(\mathcal{A}^*w)_1(\sigma)\,d\sigma,\ \frac{d}{d\xi}f(\xi)\Big\rangle_X d\xi.
\end{aligned}\tag{25}$$
Putting the result of (25) into (24) and rearranging gives that for every $v\in D(\mathcal{A})$
$$\Big\langle(\mathcal{A}^*w)_0 + \int_{-\tau}^{0}(\mathcal{A}^*w)_1(\sigma)\,d\sigma - A^*y - A_1^*y,\ f(0)\Big\rangle_X = \int_{-\tau}^{0}\Big\langle\int_{-\tau}^{\sigma}(\mathcal{A}^*w)_1(\xi)\,d\xi - A_1^*y + g(\sigma),\ \frac{d}{d\sigma}f(\sigma)\Big\rangle_X d\sigma, \tag{26}$$
where we used the fact that $f(-\tau) = f(0) - \int_{-\tau}^{0}\frac{d}{d\sigma}f(\sigma)\,d\sigma$.
As for every constant $f : [-\tau,0]\to D(A)$ we have $\tbinom{f(0)}{f}\in D(\mathcal{A})$, it follows that
$$(\mathcal{A}^*w)_0 = A^*y + A_1^*y - \int_{-\tau}^{0}(\mathcal{A}^*w)_1(\sigma)\,d\sigma, \tag{27}$$
and then
$$g(\sigma) = A_1^*y - \int_{-\tau}^{\sigma}(\mathcal{A}^*w)_1(\xi)\,d\xi \qquad \forall\sigma\in[-\tau,0]. \tag{28}$$
Equation (28) shows that $w = \tbinom{y}{g}\in D(\mathcal{A}^*)$ implies $g\in W^{1,2}(-\tau,0;X)$. Taking the limits gives
$$g(-\tau) = A_1^*y \tag{29}$$
and
$$(\mathcal{A}^*w)_0 = A^*y + g(0). \tag{30}$$
Differentiating (28) with respect to $\sigma$ we also have
$$(\mathcal{A}^*w)_1 = -\frac{d}{d\sigma}g. \tag{31}$$
To show that $D(\mathcal{A}^*)\supset F$ let $w = \tbinom{y}{g}\in F$ and $v = \tbinom{x}{f} = \tbinom{f(0)}{f}\in D(\mathcal{A})$.
By (13) we need to show that $\langle\mathcal{A}^*w, v\rangle_{\mathcal{X}} = \langle w, \mathcal{A}v\rangle_{\mathcal{X}}$, where $\mathcal{A}^*w$ we take as given by (23). We have
$$\begin{aligned}
\langle\mathcal{A}^*w, v\rangle_{\mathcal{X}} &= \Big\langle\tbinom{A^*y+g(0)}{-\frac{d}{d\sigma}g}, \tbinom{f(0)}{f}\Big\rangle_{\mathcal{X}} = \langle A^*y+g(0), f(0)\rangle_X + \int_{-\tau}^{0}\Big\langle-\frac{d}{d\sigma}g, f\Big\rangle_X d\sigma\\
&= \langle A^*y, f(0)\rangle_X + \langle g(0), f(0)\rangle_X + \big[\langle-g, f\rangle_X\big]_{-\tau}^{0} - \int_{-\tau}^{0}\Big\langle-g, \frac{d}{d\sigma}f\Big\rangle_X d\sigma\\
&= \langle A^*y, f(0)\rangle_X + \langle g(-\tau), f(-\tau)\rangle_X + \Big\langle g, \frac{d}{d\sigma}f\Big\rangle_{L^2}\\
&= \langle A^*y, f(0)\rangle_X + \langle A_1^*y, f(-\tau)\rangle_X + \Big\langle g, \frac{d}{d\sigma}f\Big\rangle_{L^2}\\
&= \langle y, Af(0)\rangle_X + \langle y, A_1 f(-\tau)\rangle_X + \Big\langle g, \frac{d}{d\sigma}f\Big\rangle_{L^2} = \langle w, \mathcal{A}v\rangle_{\mathcal{X}}.
\end{aligned}$$
Denoting by \((D(\mathcal{A}^*))'\) the dual to \(D(\mathcal{A}^*)\) with respect to the pivot space \(\mathcal{X}\), by Proposition 6 we have
\[ \mathcal{X}_{-1} = (D(\mathcal{A}^*))'. \tag{32} \]
System (8) represents an abstract Cauchy problem, which is well-posed if and only if \((\mathcal{A}, D(\mathcal{A}))\) generates a C_0-semigroup on \(\mathcal{X}\). To show that this is the case we use a perturbation approach. We represent
\[ \mathcal{A} = \mathcal{A}_0 + \mathcal{A}_{\Psi}, \qquad \text{where} \qquad \mathcal{A}_0 := \begin{pmatrix} A & 0 \\ 0 & \frac{d}{d\sigma} \end{pmatrix}, \tag{33} \]
with domain \(D(\mathcal{A}_0) = D(\mathcal{A})\) and
\[ \mathcal{A}_{\Psi} := \begin{pmatrix} 0 & \Psi \\ 0 & 0 \end{pmatrix}, \tag{34} \]
where \(\mathcal{A}_{\Psi} \in L\big(X \times W^{1,2}(-\tau,0;X), \mathcal{X}\big)\). The following proposition [5, Theorem 3.25] gives a necessary and sufficient condition for the unperturbed part \((\mathcal{A}_0, D(\mathcal{A}_0))\) to generate a C_0-semigroup on \(\mathcal{X}\).
Proposition 12. Let X be a Banach space. The following are equivalent:
(i) The operator (A, D(A)) generates a strongly continuous semigroup \((T(t))_{t\ge 0}\) on X.
(ii) The operator \((\mathcal{A}_0, D(\mathcal{A}_0))\) generates a strongly continuous semigroup \((\mathcal{T}_0(t))_{t\ge 0}\) on \(X \times L^p(-\tau,0;X)\) for all 1 ≤ p < ∞.
(iii) The operator \((\mathcal{A}_0, D(\mathcal{A}_0))\) generates a strongly continuous semigroup \((\mathcal{T}_0(t))_{t\ge 0}\) on \(X \times L^p(-\tau,0;X)\) for one 1 ≤ p < ∞.
The C_0-semigroup \((\mathcal{T}_0(t))_{t\ge 0}\) is given by
\[ \mathcal{T}_0(t) := \begin{pmatrix} T(t) & 0 \\ S_t & S_0(t) \end{pmatrix} \qquad \forall t \ge 0, \tag{35} \]
where \((S_0(t))_{t\ge 0}\) is the nilpotent left shift C_0-semigroup on \(L^p(-\tau,0;X)\),
\[ \big(S_0(t)f\big)(s) := \begin{cases} f(s+t) & \text{if } s+t \in [-\tau,0], \\ 0 & \text{else,} \end{cases} \tag{36} \]
and \(S_t : X \to L^p(-\tau,0;X)\),
\[ (S_t x)(s) := \begin{cases} T(s+t)x & \text{if } -t < s \le 0, \\ 0 & \text{if } -\tau \le s \le -t. \end{cases} \tag{37} \]
Proposition 8 now provides a sufficient condition on the perturbation \((\mathcal{A}_{\Psi}, D(\mathcal{A}))\) such that \(\mathcal{A} = \mathcal{A}_0 + \mathcal{A}_{\Psi}\) is a generator, as given by the following

Proposition 13. The operator \((\mathcal{A}, D(\mathcal{A}))\) generates a C_0-semigroup \((\mathcal{T}(t))_{t\ge 0}\) on \(\mathcal{X}\).
Proof. We use Proposition 8 with \((\mathcal{A}, D(\mathcal{A}))\) given by (10)-(9) and represented as the sum of (33) and (34) with Ψ given by (21). Thus a sufficient condition for \((\mathcal{A}, D(\mathcal{A}))\) to be a generator of a strongly continuous semigroup \((\mathcal{T}(t))_{t\ge 0}\) on \(\mathcal{X}\) is that the perturbation \(\mathcal{A}_{\Psi} \in L\big(X\times W^{1,2}(-\tau,0;X), \mathcal{X}\big)\) given by (34) satisfies
\[ \int_0^{t_0} \| \mathcal{A}_{\Psi} \mathcal{T}_0(r) v \|_{\mathcal{X}} \, dr \le q \|v\|_{\mathcal{X}} \qquad \forall v \in D(\mathcal{A}_0) \]
for some t_0 > 0 and 0 ≤ q < 1. Let \(\tbinom{x}{f} \in D(\mathcal{A}_0)\) and let 0 < t < 1. Then, using the notation of Proposition 12 and defining \(M := \max\{ \sup_{s\in[0,1]} \|T(s)\|, 1 \}\), we have
\[
\begin{aligned}
\int_0^{t} \| \mathcal{A}_{\Psi} \mathcal{T}_0(r) v \|_{\mathcal{X}} \, dr &= \int_0^{t} \| \Psi(S_r x + S_0(r)f) \|_X \, dr = \int_0^{t} \| A_1 (S_r x)(-\tau) + A_1 \big(S_0(r)f\big)(-\tau) \|_X \, dr \\
&\le \|A_1\| \int_0^{t} \| T(-\tau+r)x \|_X \, dr + \|A_1\| \int_0^{t} \| f(-\tau+r) \|_X \, dr \\
&= \|A_1\| \int_{-\tau}^{-\tau+t} \| T(s)x \|_X \, ds + \|A_1\| \int_{-\tau}^{-\tau+t} \| f(s) \|_X \, ds \\
&\le t M \|A_1\| \|x\|_X + \|A_1\| \Big( \int_{-\tau}^{-\tau+t} \| f(s) \|_X^2 \, ds \Big)^{\frac12} \Big( \int_{-\tau}^{-\tau+t} 1^2 \, ds \Big)^{\frac12} \\
&\le t M \|A_1\| \|x\|_X + t^{\frac12} \|A_1\| \|f\|_{L^2} \le t^{\frac12} M \|A_1\| \big( \|x\|_X + \|f\|_{L^2} \big) \le (2t)^{\frac12} M \|A_1\| \|v\|_{\mathcal{X}},
\end{aligned}
\]
where we used Hölder's inequality and the fact that
\[ \big( \|x\|_X + \|f\|_{L^2(-\tau,0;X)} \big)^2 \le 2\big( \|x\|_X^2 + \|f\|_{L^2(-\tau,0;X)}^2 \big), \qquad \text{with} \quad \|v\|_{\mathcal{X}} = \big( \|x\|_X^2 + \|f\|_{L^2(-\tau,0;X)}^2 \big)^{\frac12}. \]
Setting now t_0 small enough so that \(q := (2t_0)^{\frac12} M \|A_1\| < 1\) we arrive at our conclusion.
Remark 14. The operator Ψ defined in (21) is a special case of a much wider class of operators that satisfy (14) and thus (A, D(A)) in (10) remains a generator of a strongly continuous semigroup (T (t)) t≥0 on X . For the proof of this general case see [5, Section 3.3.3].
We obtained the results in Proposition 11 and Proposition 13 only by specifying a particular type of delay operator in the general setting of Section 2. Let us now specify the state space as X := l² with the standard orthonormal basis \((e_k)_{k\in\mathbb{N}}\); (A, D(A)) is a diagonal generator of a C_0-semigroup \((T(t))_{t\ge 0}\) on X with a sequence of eigenvalues \((\lambda_k)_{k\in\mathbb{N}} \subset \mathbb{C}\) such that
\[ \sup_{k\in\mathbb{N}} \operatorname{Re}\lambda_k < \infty, \tag{38} \]
and A_1 ∈ L(X) is a diagonal operator with a sequence of eigenvalues \((\gamma_k)_{k\in\mathbb{N}} \subset \mathbb{C}\). In other words, we introduce a finite-time state delay into the standard setting for diagonal systems [26, Chapter 2.6]. Hence, the C_0-semigroup generator (A, D(A)) is given by
\[ D(A) = \Big\{ z \in l^2(\mathbb{C}) : \sum_{k\in\mathbb{N}} (1+|\lambda_k|^2)|z_k|^2 < \infty \Big\}, \qquad (Az)_k = \lambda_k z_k. \tag{39} \]
Making use of the pivot duality, as the space X_1 we take (D(A), ‖·‖_gr), where the graph norm ‖·‖_gr is equivalent to
\[ \|z\|_1^2 = \sum_{k\in\mathbb{N}} (1+|\lambda_k|^2)|z_k|^2. \]
The adjoint generator (A*, D(A*)) has the form
\[ D(A^*) = D(A), \qquad (A^* z)_k = \overline{\lambda_k}\, z_k. \tag{40} \]
The space X_{−1} consists of all sequences \(z = (z_k)_{k\in\mathbb{N}} \subset \mathbb{C}\) for which
\[ \sum_{k\in\mathbb{N}} \frac{|z_k|^2}{1+|\lambda_k|^2} < \infty, \tag{41} \]
and the square root of the above series gives an equivalent norm on X_{−1}. By Proposition 6 the space X_{−1} can be written as (D(A*))'. Note also that the operator B ∈ L(C, X_{−1}) is represented by a sequence \((b_k)_{k\in\mathbb{N}} \subset \mathbb{C}\), as L(C, X_{−1}) can be identified with X_{−1}.
This completes the description of the setting for a diagonal retarded system. From now on we consider system (1) reformulated as (7), and its Cauchy problem representation (8), with the diagonal elements described in this section.
Analysis of a single component
Let us now focus on the k-th component of (1), namely
\[ \begin{cases} \dot z_k(t) = \lambda_k z_k(t) + \gamma_k z_k(t-\tau) + b_k u(t), \\ z_k(0) = x_k, \qquad z_k^0 = f_k, \end{cases} \tag{42} \]
where \(\lambda_k, \gamma_k, b_k, x_k \in \mathbb{C}\), \(f_k := \langle f, l_k\rangle_{L^2(-\tau,0;X)}\, l_k\) with \(l_k\) being the k-th component of an orthonormal basis in \(L^2(-\tau,0;X)\) (see [4, Chapter 3.5, p. 138] for a description of such bases). Here \(b_k\) is the k-th component of B.
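For intuition, the scalar component (42) can be integrated numerically. The following minimal explicit-Euler sketch with a history buffer is our own illustration (not part of the paper); all parameter values and the choice of history segment and input are arbitrary test data.

```python
import numpy as np

lam, gam, b, tau, dt = -1.0, -0.3, 1.0, 1.0, 1e-3   # assumed test data
n_tau = int(tau / dt)
T = 10.0
n = int(T / dt)

z = np.zeros(n + 1, dtype=complex)
hist = lambda t: 0.5            # history segment f on [-tau, 0)
u = lambda t: np.sin(t)         # control signal
z[0] = hist(0.0)                # compatibility condition z(0) = f(0)

for i in range(n):
    # delayed value: from the trajectory once available, else from the history segment
    z_delay = z[i - n_tau] if i >= n_tau else hist(i * dt - tau)
    z[i + 1] = z[i] + dt * (lam * z[i] + gam * z_delay + b * u(i * dt))
print(abs(z[-1]))
```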
For clarity of notation, until the end of this subsection we drop the subscript k and rewrite (42) in the form
\[ \begin{cases} \dot z(t) = \lambda z(t) + \Psi z_t + b u(t), \\ z(0) = x, \qquad z^0 = f, \end{cases} \tag{43} \]
where the delay operator Ψ ∈ L(W^{1,2}(−τ,0;C), C) is given by
\[ \Psi(f) = \gamma f(-\tau) \qquad \forall f \in W^{1,2}(-\tau,0;\mathbb{C}). \tag{44} \]
The setting for the k-th component now includes the extended state space
\[ \mathcal{X} := \mathbb{C} \times L^2(-\tau,0;\mathbb{C}) \tag{45} \]
with the inner product
\[ \Big\langle \tbinom{x}{f}, \tbinom{y}{g} \Big\rangle_{\mathcal{X}} := x\bar y + \langle f, g\rangle_{L^2(-\tau,0;\mathbb{C})} \qquad \forall \tbinom{x}{f}, \tbinom{y}{g} \in \mathcal{X}. \tag{46} \]
The Cauchy problem for the k-th component is
\[ \dot v(t) = \mathcal{A} v(t) + \mathcal{B} u(t), \qquad v(0) = \tbinom{x}{f}, \tag{47} \]
where \(v : [0,\infty) \ni t \mapsto \tbinom{z(t)}{z_t} \in \mathcal{X}\) and \(\mathcal{A}\) is an operator on \(D(\mathcal{A}) \subset \mathcal{X}\) defined as
\[ D(\mathcal{A}) := \Big\{ \tbinom{x}{f} \in \mathbb{C} \times W^{1,2}(-\tau,0;\mathbb{C}) : f(0) = x \Big\}, \tag{48} \]
\[ \mathcal{A} := \begin{pmatrix} \lambda & \Psi \\ 0 & \frac{d}{d\sigma} \end{pmatrix}, \tag{49} \]
and \(\mathcal{B} := \tbinom{b}{0} \in L(\mathbb{C}, \mathcal{X}_{-1})\). By Proposition 6 and Proposition 11, for the k-th component we have
\[ \mathcal{X}_{-1} = (D(\mathcal{A}^*))', \tag{50} \]
where
\[ D(\mathcal{A}^*) = \Big\{ \tbinom{y}{g} \in \mathbb{C} \times W^{1,2}(-\tau,0;\mathbb{C}) : \bar\gamma\, y = g(-\tau) \Big\}, \tag{51} \]
\[ \mathcal{A}^* \tbinom{y}{g} = \tbinom{\bar\lambda y + g(0)}{-\frac{d}{d\sigma} g}, \tag{52} \]
and \((D(\mathcal{A}^*))'\) is the dual to \(D(\mathcal{A}^*)\) with respect to the pivot space \(\mathcal{X}\) in (45). As the proof is essentially the same, we only state a k-th component version of Proposition 13, namely

Proposition 15. The operator \((\mathcal{A}, D(\mathcal{A}))\) given by (48)-(49) generates a strongly continuous semigroup \((\mathcal{T}(t))_{t\ge 0}\) on \(\mathcal{X}\) given by (45).

Now that we know that the k-th component Cauchy problem (47) is well-posed, we can formally write its \(\mathcal{X}_{-1}\)-valued mild solution as
\[ v(t) = \mathcal{T}(t)v(0) + \int_0^t \mathcal{T}(t-s)\mathcal{B}u(s)\, ds, \tag{53} \]
where the control operator is \(\mathcal{B} = \tbinom{b}{0} \in L(\mathbb{C}, \mathcal{X}_{-1})\) and \(\mathcal{T}(t) \in L(\mathcal{X}_{-1})\) is the extension of the C_0-semigroup generated by \((\mathcal{A}, D(\mathcal{A}))\) in (48)-(49).
The following, being a corollary of Proposition 3, gives the form of the k-th component resolvent \(R(s,\mathcal{A})\).

Proposition 16. For s ∈ C and for all 1 ≤ p < ∞ there is \(s \in \rho(\mathcal{A})\) if and only if \(s \in \rho(\lambda + \Psi_s)\). Moreover, for \(s \in \rho(\mathcal{A})\) the resolvent \(R(s,\mathcal{A})\) is given by
\[ R(s,\mathcal{A}) = \begin{pmatrix} R(s, \lambda+\Psi_s) & R(s,\lambda+\Psi_s)\,\Psi R(s, A_0) \\ \varepsilon_s R(s,\lambda+\Psi_s) & \big(\varepsilon_s R(s,\lambda+\Psi_s)\Psi + I\big) R(s, A_0) \end{pmatrix}, \tag{55} \]
where \(R(s, \lambda+\Psi_s) \in L(\mathbb{C})\),
\[ R(s, \lambda+\Psi_s) = \frac{1}{s - \lambda - \gamma e^{-s\tau}} \qquad \forall s \in \mathbb{C}_{\to |\lambda|+|\gamma|} \tag{56} \]
(here \(\mathbb{C}_{\to r}\) denotes the open right half-plane \(\{s \in \mathbb{C} : \operatorname{Re} s > r\}\)), and \(R(s, A_0) \in L(L^2(-\tau,0;\mathbb{C}))\),
\[ \big(R(s, A_0)f\big)(r) = \int_r^0 e^{s(r-t)} f(t)\, dt, \qquad r \in [-\tau,0], \quad \forall s \in \mathbb{C}_{\to |\lambda|+|\gamma|}. \tag{57} \]
Proof. The proof runs along the lines of [24, Proposition 3.3], with the necessary adjustments for the forms of the diagonal operators involved.

By Proposition 16 the resolvent component \(R(s, \lambda+\Psi_s)\) is analytic in \(\mathbb{C}_{\to |\lambda|+|\gamma|}\). To ensure analyticity of \(R(s, \lambda+\Psi_s)\) in \(\mathbb{C}_+\), as required to apply the \(H^2(\mathbb{C}_+)\)-based approach, we introduce the following sets.
Remark 17. We take the principal argument of λ to be Arg λ ∈ (−π, π].
Let \(D_r \subset \mathbb{C}\) be an open disc centred at 0 with radius r > 0. We shall require the following subset of the complex plane, depending on τ > 0 and \(a \in (-\infty, \frac{1}{\tau}]\) and shown in Fig. 1, namely:

• for a < 0:
\[ \Lambda_{\tau,a} := \Big\{ \eta \in \mathbb{C} \setminus D_{|a|} : \operatorname{Re}\eta + a < 0,\; |\eta| < |\eta_\pi|,\; |\operatorname{Arg}\eta| > \tau\sqrt{|\eta|^2-a^2} + \arctan\Big( -\tfrac{1}{a}\sqrt{|\eta|^2-a^2} \Big) \Big\} \cup D_{|a|}, \tag{58} \]
where \(\eta_\pi\) is such that
\[ \sqrt{|\eta_\pi|^2-a^2}\,\tau + \arctan\Big( -\tfrac{1}{a}\sqrt{|\eta_\pi|^2-a^2} \Big) = \pi; \]
• for a = 0:
\[ \Lambda_{\tau,0} := \Big\{ \eta \in \mathbb{C} \setminus \{0\} : \operatorname{Re}\eta < 0,\; |\eta| < \frac{\pi}{2\tau},\; |\operatorname{Arg}\eta| > \tau|\eta| + \frac{\pi}{2} \Big\}; \tag{59} \]
• for \(0 < a \le \frac{1}{\tau}\):
\[ \Lambda_{\tau,a} := \Big\{ \eta \in \mathbb{C} : \operatorname{Re}\eta + a < 0,\; |\eta| < |\eta_\pi|,\; |\operatorname{Arg}\eta| > \tau\sqrt{|\eta|^2-a^2} + \arctan\Big( -\tfrac{1}{a}\sqrt{|\eta|^2-a^2} \Big) + \pi \Big\}, \tag{60} \]
where \(\eta_\pi\) is such that \(|\eta_\pi| > a\) and
\[ \sqrt{|\eta_\pi|^2-a^2}\,\tau + \arctan\Big( -\tfrac{1}{a}\sqrt{|\eta_\pi|^2-a^2} \Big) = 0. \]
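For orientation, the a = 0 region (59) admits a particularly simple membership test. The following sketch is our own illustration (not part of the paper) and directly transcribes the three conditions in (59); the sample points are arbitrary.

```python
# Minimal membership test for Lambda_{tau,0} as defined in (59).
import cmath

def in_Lambda_tau_0(eta: complex, tau: float) -> bool:
    if eta == 0:
        return False
    return (eta.real < 0
            and abs(eta) < cmath.pi / (2 * tau)
            and abs(cmath.phase(eta)) > tau * abs(eta) + cmath.pi / 2)

print(in_Lambda_tau_0(-1.0 + 0.0j, 1.0))   # True: well inside the region
print(in_Lambda_tau_0(-0.1 + 1.4j, 1.0))   # False: the argument condition fails
```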
The analyticity of \(R(s, \lambda+\Psi_s)\) in \(\mathbb{C}_+\) follows now from the following [18]

Proposition 18. Let τ > 0 and let λ, γ, η ∈ C be such that \(\lambda = a + ib \in \mathbb{C}_{\leftarrow \frac{1}{\tau}}\) (i.e., \(a \le \frac{1}{\tau}\)). Then:
(i) every solution of the equation \(s - a - \eta e^{-s\tau} = 0\) belongs to \(\mathbb{C}_-\) if and only if \(\eta \in \Lambda_{\tau,a}\);
(ii) every solution of
\[ s - \lambda - \gamma e^{-s\tau} = 0 \tag{61} \]
and of its version with conjugate coefficients
\[ s - \bar\lambda - \bar\gamma e^{-s\tau} = 0 \tag{62} \]
belongs to \(\mathbb{C}_-\) if and only if \(\gamma e^{-ib\tau} \in \Lambda_{\tau,a}\).

In relation to the form of \(R(s, \lambda+\Psi_s)\) consider the following technical result based on [27], originally stated for real coefficients, which for complex ones becomes

Lemma 19. Let τ > 0 and λ, γ ∈ C be such that λ = a + ib with \(a \le \frac{1}{\tau}\), b ∈ R and \(\gamma e^{-ib\tau} \in \Lambda_{\tau,a}\). Then
\[ J := \frac{1}{2\pi}\int_{-\infty}^{\infty} \frac{d\omega}{|i\omega - \lambda - \gamma e^{-i\omega\tau}|^2} = \begin{cases} J_a, & |\gamma| < |a|, \\ J_e, & |\gamma| = |a|, \\ J_\gamma, & |\gamma| > |a|, \end{cases} \tag{63} \]
where
\[ J_a := \frac{1}{2\sqrt{a^2-|\gamma|^2}} \cdot \frac{ e^{\sqrt{a^2-|\gamma|^2}\,\tau}\big(a - \sqrt{a^2-|\gamma|^2}\big) + e^{-\sqrt{a^2-|\gamma|^2}\,\tau}\big(-a - \sqrt{a^2-|\gamma|^2}\big) }{ 2\operatorname{Re}(\gamma e^{-ib\tau}) + e^{\sqrt{a^2-|\gamma|^2}\,\tau}\big(a - \sqrt{a^2-|\gamma|^2}\big) - e^{-\sqrt{a^2-|\gamma|^2}\,\tau}\big(-a - \sqrt{a^2-|\gamma|^2}\big) }, \tag{64} \]
\[ J_e := \frac{a\tau - 1}{2\big(\operatorname{Re}(\gamma e^{-ib\tau}) + a\big)}, \tag{65} \]
and
\[ J_\gamma := \frac{1}{2\sqrt{|\gamma|^2-a^2}} \cdot \frac{ a\sin\!\big(\sqrt{|\gamma|^2-a^2}\,\tau\big) - \sqrt{|\gamma|^2-a^2}\,\cos\!\big(\sqrt{|\gamma|^2-a^2}\,\tau\big) }{ \operatorname{Re}(\gamma e^{-ib\tau}) + a\cos\!\big(\sqrt{|\gamma|^2-a^2}\,\tau\big) + \sqrt{|\gamma|^2-a^2}\,\sin\!\big(\sqrt{|\gamma|^2-a^2}\,\tau\big) }. \tag{66} \]
The proof of Lemma 19 is rather technical and is therefore deferred to the Appendix. We easily obtain

Corollary 20. Let τ > 0, λ = 0 and \(\gamma \in \Lambda_{\tau,0}\). Then
\[ J_0 := \frac{1}{2\pi}\int_{-\infty}^{\infty} \frac{d\omega}{|i\omega - \gamma e^{-i\omega\tau}|^2} = \frac{-\cos(|\gamma|\tau)}{2\big(\operatorname{Re}(\gamma) + |\gamma|\sin(|\gamma|\tau)\big)}. \tag{67} \]
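As a sanity check of the closed form (67), one can compare it with a direct numerical quadrature of the frequency integral. The following sketch is ours, not part of the paper, and assumes SciPy is available; the test values of γ are arbitrary points of \(\Lambda_{\tau,0}\).

```python
import numpy as np
from scipy.integrate import quad

def J0_closed_form(gamma: complex, tau: float) -> float:
    """Closed form (67): J_0 = -cos(|g| tau) / (2 (Re g + |g| sin(|g| tau)))."""
    g = abs(gamma)
    return -np.cos(g * tau) / (2.0 * (gamma.real + g * np.sin(g * tau)))

def J0_quadrature(gamma: complex, tau: float) -> float:
    """Direct evaluation of (1/2pi) * int |i w - gamma e^{-i w tau}|^{-2} dw."""
    integrand = lambda w: 1.0 / abs(1j * w - gamma * np.exp(-1j * w * tau)) ** 2
    val, _ = quad(integrand, -np.inf, np.inf, limit=400)
    return val / (2.0 * np.pi)

if __name__ == "__main__":
    tau = 1.0
    for gamma in [-1.0 + 0.0j, -0.5 + 0.3j, -0.8 + 0.2j]:
        # each gamma lies in Lambda_{tau,0}: Re < 0, |gamma| < pi/(2 tau), |Arg| > tau|gamma| + pi/2
        assert gamma.real < 0 and abs(gamma) < np.pi / (2 * tau)
        assert abs(np.angle(gamma)) > tau * abs(gamma) + np.pi / 2
        print(gamma, J0_closed_form(gamma, tau), J0_quadrature(gamma, tau))
```

The two printed values should agree to quadrature accuracy for every admissible γ.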
Referring to (20) and the mild solution of the k-th component (53), the infinite-time forcing operator \(\Phi^{\infty} \in L(L^2(0,\infty;\mathbb{C}), \mathcal{X}_{-1})\) is given by
\[ \Phi^{\infty}(u) := \int_0^{\infty} \mathcal{T}(t)\mathcal{B} u(t)\, dt, \tag{68} \]
where
\[ \mathcal{T}(t)\mathcal{B} = \begin{pmatrix} T_{11}(t) & T_{12}(t) \\ T_{21}(t) & T_{22}(t) \end{pmatrix} \tbinom{b}{0} = \tbinom{b\,T_{11}(t)}{b\,T_{21}(t)}. \]
Hence the forcing operator (68) becomes
\[ \Phi^{\infty}(u) = \binom{\int_0^{\infty} T_{11}(t)\, b\, u(t)\, dt}{\int_0^{\infty} T_{21}(t)\, b\, u(t)\, dt} \in \mathcal{X}_{-1} = (D(\mathcal{A}^*))'. \tag{69} \]
We can formally represent a similar product with the resolvent \(R(s,\mathcal{A})\) from (55), namely
\[ R(s,\mathcal{A})\mathcal{B} = \begin{pmatrix} R_{11}(s) & R_{12}(s) \\ R_{21}(s) & R_{22}(s) \end{pmatrix} \tbinom{b}{0} = \frac{b}{s-\lambda-\gamma e^{-s\tau}} \tbinom{1}{\varepsilon_s}, \tag{70} \]
where the correspondence of the sub-indices with the elements of (55) is the obvious one and will be used from now on to shorten the notation. The connection between the C_0-semigroup \(\mathcal{T}(t)\) and the resolvent \(R(s,\mathcal{A})\) is given by the Laplace transform, whenever the integral converges, and
\[ R(s,\mathcal{A})\mathcal{B} = \int_0^{\infty} e^{-sr}\, \mathcal{T}(r)\mathcal{B}\, dr = b \binom{\mathcal{L}(T_{11})(s)}{\mathcal{L}(T_{21})(s)} \in L(\mathbb{C}, \mathcal{X}_{-1}). \tag{71} \]
Theorem 21. Suppose that for a given delay τ > 0 there is \(\lambda = a + i\beta \in \mathbb{C}_{\leftarrow \frac{1}{\tau}}\) (i.e., \(a \le \frac{1}{\tau}\)) and \(\gamma e^{-i\beta\tau} \in \Lambda_{\tau,a}\). Then the control operator \(\mathcal{B} = \tbinom{b}{0}\) for the system (47) is infinite-time admissible, and for every \(u \in L^2(0,\infty;\mathbb{C})\)
\[ \| \Phi^{\infty}(u) \|_{\mathcal{X}}^2 \le (1+\tau)\,|b|^2 J\, \|u\|_{L^2(0,\infty;\mathbb{C})}^2, \]
where J is given by (63).
Proof.
1. Let the standard inner product on \(L^2(0,\infty;\mathbb{C})\) be given by \(\langle f, g\rangle_{L^2(0,\infty;\mathbb{C})} = \int_0^{\infty} f(t)\overline{g(t)}\, dt\) for every \(f, g \in L^2(0,\infty;\mathbb{C})\). Using (69) and (50) we may write for the first component of \(\Phi^{\infty}(u)\)
\[ \int_0^{\infty} T_{11}(t)\, b\, u(t)\, dt = b\, \langle T_{11}, \bar u\rangle_{L^2(0,\infty;\mathbb{C})}, \tag{72} \]
assuming that \(T_{11} \in L^2(0,\infty;\mathbb{C})\). This assumption is equivalent, due to Theorem 1, to \(\mathcal{L}(T_{11}) \in H^2(\mathbb{C}_+)\), and the last inclusion holds. Indeed, using (70) and (71) we see that \(\mathcal{L}(T_{11})(s) = R_{11}(s) = \frac{1}{s-\lambda-\gamma e^{-s\tau}}\). The assumptions on λ and γ give that \(R_{11}\) is analytic in \(\mathbb{C}_+\). The boundary trace \(R_{11}^* = \mathcal{L}(T_{11})^*\) is given a.e. as
\[ \mathcal{L}(T_{11})^*(i\omega) = \frac{1}{i\omega - \lambda - \gamma e^{-i\omega\tau}}. \]
Lemma 19 now gives that \(\mathcal{L}(T_{11})^* \in L^2(i\mathbb{R})\) and thus, by (4), \(R_{11} \in H^2(\mathbb{C}_+)\).
2. Again by Theorem 1 and the definition of the inner product on \(H^2(\mathbb{C}_+)\) in (4) we have
\[ b\, \langle T_{11}, \bar u\rangle_{L^2(0,\infty;\mathbb{C})} = \frac{b}{2\pi} \int_{-\infty}^{+\infty} \frac{1}{i\omega - \lambda - \gamma e^{-i\omega\tau}}\; \overline{\mathcal{L}(\bar u)^*(i\omega)}\, d\omega. \]
The Cauchy-Schwarz inequality now gives
\[ |b| \left| \frac{1}{2\pi} \int_{-\infty}^{+\infty} \frac{\overline{\mathcal{L}(\bar u)^*(i\omega)}}{i\omega - \lambda - \gamma e^{-i\omega\tau}}\, d\omega \right| \le |b| \left( \frac{1}{2\pi} \int_{-\infty}^{+\infty} \left| \frac{1}{i\omega - \lambda - \gamma e^{-i\omega\tau}} \right|^2 d\omega \right)^{\frac12} \left( \frac{1}{2\pi} \int_{-\infty}^{+\infty} \big| \mathcal{L}(\bar u)^*(i\omega) \big|^2\, d\omega \right)^{\frac12} = |b|\, J^{\frac12}\, \|u\|_{L^2(0,\infty;\mathbb{C})}, \]
with J given by (63). Combining this result with point 1 we obtain
\[ \left| \int_0^{\infty} T_{11}(t)\, b\, u(t)\, dt \right|^2 \le |b|^2 J\, \|u\|_{L^2(0,\infty;\mathbb{C})}^2. \tag{73} \]
3. Consider now the second element of the forcing operator (69), namely
\[ \int_0^{\infty} T_{21}(t)\, b\, u(t)\, dt \in W, \]
where we denote by W the second component of \(\mathcal{X}_{-1} = (D(\mathcal{A}^*))'\). If we assume that \(T_{21} \in L^2(0,\infty;W)\) then, using the vector-valued version of Theorem 1, this is equivalent to \(\mathcal{L}(T_{21}) \in H^2(\mathbb{C}_+, W)\), and the last inclusion holds. Indeed, to show it notice that \(R_{21} = \varepsilon_s R_{11}\), where \(\varepsilon_s(\sigma) := e^{s\sigma}\), \(\sigma \in [-\tau,0]\), is, as a function of s, analytic everywhere for every value of σ, and follow exactly the reasoning in point 1.
4. We introduce an auxiliary function \(\varphi : [0,\infty) \to \mathbb{C}\). For that purpose fix \(T_{21} \in L^2(0,\infty;W)\) and \(x_0 \in W\) and define \(\varphi(t) := \langle T_{21}(t), x_0\rangle_W\). Then \(\varphi \in L^2(0,\infty;\mathbb{C})\), as the Cauchy-Schwarz inequality gives
\[ \int_0^{\infty} |\langle T_{21}(t), x_0\rangle_W|^2\, dt \le \int_0^{\infty} \|T_{21}(t)\|_W^2\, dt\; \|x_0\|_W^2 < \infty. \]
5. Consider now the following:
\[ b \int_0^{\infty} \varphi(t) u(t)\, dt = b \int_0^{\infty} \langle T_{21}(t), x_0\rangle_W\, u(t)\, dt = b \Big\langle \int_0^{\infty} T_{21}(t) u(t)\, dt,\, x_0 \Big\rangle_W. \]
We also have
\[ b \int_0^{\infty} \varphi(t) u(t)\, dt = b\, \langle \varphi, \bar u\rangle_{L^2(0,\infty;\mathbb{C})} = b\, \langle \mathcal{L}(\varphi)^*, \mathcal{L}(\bar u)^*\rangle_{L^2(i\mathbb{R})}. \]
To obtain the boundary trace \(\mathcal{L}(\varphi)^*\) notice that
\[ \mathcal{L}(\varphi)(s) = \int_0^{\infty} e^{-sr} \langle T_{21}(r), x_0\rangle_W\, dr = \Big\langle \int_0^{\infty} e^{-sr}\, T_{21}(r)\, dr,\, x_0 \Big\rangle_W = \langle \mathcal{L}(T_{21})(s), x_0\rangle_W = \langle R_{21}(s), x_0\rangle_W. \]
Using now (70) yields the result
\[ \mathcal{L}(\varphi)^*(i\omega) = \langle R_{21}^*(i\omega), x_0\rangle_W = \Big\langle \frac{\varepsilon_{i\omega}}{i\omega - \lambda - \gamma e^{-i\omega\tau}},\, x_0 \Big\rangle_W. \]
Finally we obtain
\[ \Big\langle \int_0^{\infty} T_{21}(t) u(t)\, dt,\, x_0 \Big\rangle_W = \Big\langle \frac{1}{2\pi} \int_{-\infty}^{+\infty} R_{21}^*(i\omega)\, \overline{\mathcal{L}(\bar u)^*(i\omega)}\, d\omega,\, x_0 \Big\rangle_W \]
and
\[ \int_0^{\infty} T_{21}(t) u(t)\, dt = \frac{1}{2\pi} \int_{-\infty}^{+\infty} R_{21}^*(i\omega)\, \overline{\mathcal{L}(\bar u)^*(i\omega)}\, d\omega \in W. \tag{74} \]
6. By the definition of the norm on \(L^2(-\tau,0;\mathbb{C})\) we have
\[ \| R_{21}^*(i\omega) \|_{L^2(-\tau,0;\mathbb{C})}^2 = \int_{-\tau}^0 \left| \frac{e^{i\omega t}}{i\omega - \lambda - \gamma e^{-i\omega\tau}} \right|^2 dt = \frac{1}{|i\omega - \lambda - \gamma e^{-i\omega\tau}|^2} \int_{-\tau}^0 \big| e^{i\omega t} \big|^2\, dt = \frac{\tau}{|i\omega - \lambda - \gamma e^{-i\omega\tau}|^2}. \]
The Cauchy-Schwarz inequality gives
\[
\begin{aligned}
|b| \left\| \frac{1}{2\pi} \int_{-\infty}^{+\infty} R_{21}^*(i\omega)\, \overline{\mathcal{L}(\bar u)^*(i\omega)}\, d\omega \right\|_{L^2(-\tau,0;\mathbb{C})} &\le |b|\, \frac{1}{2\pi} \int_{-\infty}^{+\infty} \| R_{21}^*(i\omega) \|_{L^2(-\tau,0;\mathbb{C})}\, \big| \mathcal{L}(\bar u)^*(i\omega) \big|\, d\omega \\
&= |b|\, \frac{1}{2\pi} \int_{-\infty}^{+\infty} \frac{\tau^{\frac12}}{|i\omega - \lambda - \gamma e^{-i\omega\tau}|}\, \big| \mathcal{L}(\bar u)^*(i\omega) \big|\, d\omega \\
&\le |b| \left( \frac{1}{2\pi} \int_{-\infty}^{+\infty} \frac{\tau}{|i\omega - \lambda - \gamma e^{-i\omega\tau}|^2}\, d\omega \right)^{\frac12} \left( \frac{1}{2\pi} \int_{-\infty}^{+\infty} \big| \mathcal{L}(\bar u)^*(i\omega) \big|^2\, d\omega \right)^{\frac12} \\
&= |b|\, (\tau J)^{\frac12}\, \|u\|_{L^2(0,\infty;\mathbb{C})},
\end{aligned}
\]
with J given by (63). Combining this result with point 5 gives
\[ \left\| \int_0^{\infty} T_{21}(t)\, b\, u(t)\, dt \right\|_{L^2(-\tau,0;\mathbb{C})}^2 \le |b|^2\, \tau J\, \|u\|_{L^2(0,\infty;\mathbb{C})}^2. \tag{75} \]
7. Taking now the norm \(\|\cdot\|_{\mathcal{X}}\) resulting from (46) and using (69), (73), (75) and Lemma 19, we arrive at
\[ \| \Phi^{\infty}(u) \|_{\mathcal{X}}^2 = \left| \int_0^{\infty} T_{11}(t)\, b\, u(t)\, dt \right|^2 + \left\| \int_0^{\infty} T_{21}(t)\, b\, u(t)\, dt \right\|_{L^2(-\tau,0;\mathbb{C})}^2 \le (1+\tau)\,|b|^2 J\, \|u\|_{L^2(0,\infty;\mathbb{C})}^2. \tag{76} \]
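As a rough empirical cross-check of Theorem 21 (our own experiment, not part of the paper; SciPy assumed), one can simulate (43) from zero initial data: the state reached at any time T then equals the finite-time forcing of u, whose norm is dominated by the infinite-time bound, so the printed supremum should stay below \((1+\tau)|b|^2 J \|u\|^2\) up to discretisation error. All parameter values below are assumed test data with \(\gamma e^{-i\beta\tau}\) inside \(\Lambda_{\tau,a}\) (here |γ| < |a|, so it lies in the disc part of (58)).

```python
import numpy as np
from scipy.integrate import quad

lam, gam, b, tau = -1.0 + 0.5j, -0.3 + 0.1j, 2.0, 1.0   # |gam| < |Re lam|
J, _ = quad(lambda w: 1.0 / abs(1j*w - lam - gam*np.exp(-1j*w*tau))**2,
            -np.inf, np.inf, limit=400)
J /= 2 * np.pi

dt = 1e-3
n_tau = int(tau / dt)
n = int(20.0 / dt)
z = np.zeros(n + 1, dtype=complex)              # zero initial data and history
u = lambda t: np.exp(-t) * np.sin(3 * t)        # an L^2(0, inf) input
u_norm2, _ = quad(lambda t: u(t) ** 2, 0, np.inf)

worst = 0.0
for i in range(n):
    z_del = z[i - n_tau] if i >= n_tau else 0.0
    z[i + 1] = z[i] + dt * (lam * z[i] + gam * z_del + b * u(i * dt))
    if i >= n_tau:  # extended-state norm |z(t)|^2 + int_{t-tau}^t |z|^2
        hist = np.sum(np.abs(z[i + 1 - n_tau:i + 1]) ** 2) * dt
        worst = max(worst, abs(z[i + 1]) ** 2 + hist)

print("sup_t ||(z(t), z_t)||_X^2 =", worst)
print("bound (1+tau)|b|^2 J ||u||^2 =", (1 + tau) * abs(b) ** 2 * J * u_norm2)
```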
Analysis of the whole retarded delay system
Let us return to the diagonal system (1) reformulated as (8), with the extended state space \(\mathcal{X} = l^2 \times L^2(-\tau,0;l^2)\) and the control operator \(\mathcal{B} \in L(\mathbb{C}, \mathcal{X}_{-1})\). We also return to denoting the k-th component of the extended state space with the subscript. By Proposition 15 a mild solution of (42) is given by (53), that is, \(v_k : [0,\infty) \to \mathcal{X}\),
\[ v_k(t) = \tbinom{z_k(t)}{z_k^t} = \mathcal{T}_k(t)\, v_k(0) + \int_0^t \mathcal{T}_k(t-s)\, \mathcal{B}_k u(s)\, ds. \tag{77} \]
Given the structure of the Hilbert space \(\mathcal{X} = l^2 \times L^2(-\tau,0;l^2)\) in (6), the mild solution (77) has values in the subspace of \(\mathcal{X}\) spanned by the k-th element of its basis. Hence, defining \(v : [0,\infty) \to \mathcal{X}\),
\[ v(t) := \sum_{k\in\mathbb{N}} v_k(t), \tag{78} \]
we obtain the unique mild solution of (8). Using (78) and (6) we have
\[ \|v(t)\|_{\mathcal{X}}^2 = \Big\| \tbinom{z(t)}{z_t} \Big\|_{\mathcal{X}}^2 = \|z(t)\|_{l^2}^2 + \|z_t\|_{L^2(-\tau,0;l^2)}^2 = \sum_{k\in\mathbb{N}} |z_k(t)|^2 + \sum_{k\in\mathbb{N}} |\langle z_t, l_k\rangle_{L^2(-\tau,0;l^2)}|^2 = \sum_{k\in\mathbb{N}} \Big( |z_k(t)|^2 + \|z_k^t\|_{L^2(-\tau,0;\mathbb{C})}^2 \Big) = \sum_{k\in\mathbb{N}} \|v_k(t)\|_{\mathcal{X}}^2, \tag{79} \]
where we used again (45) and the notation from (42). We can formally write the mild solution (78) as a function \(v : [0,\infty) \to \mathcal{X}_{-1}\),
\[ v(t) = \mathcal{T}(t) v(0) + \int_0^t \mathcal{T}(t-s)\, \mathcal{B} u(s)\, ds, \tag{80} \]
where the control operator \(\mathcal{B} \in L(\mathbb{C}, \mathcal{X}_{-1})\) is given by
\[ \mathcal{B} = \tbinom{(b_k)_{k\in\mathbb{N}}}{0}. \]
We may now state the main theorem of this subsection.
Theorem 22. Let, for a given delay τ ∈ (0,∞), the sequences \((\lambda_k)_{k\in\mathbb{N}}\) and \((\gamma_k)_{k\in\mathbb{N}}\) be such that
\[ \lambda_k = a_k + i\beta_k \in \mathbb{C}_{\leftarrow \frac{1}{\tau}} \quad\text{and}\quad \gamma_k e^{-i\beta_k\tau} \in \Lambda_{\tau,a_k} \qquad \forall k \in \mathbb{N}, \]
with \(\Lambda_{\tau,a_k}\) defined in (58)-(60). Then the control operator \(\mathcal{B} \in L(\mathbb{C}, \mathcal{X}_{-1})\) given by \(\mathcal{B} = \tbinom{(b_k)_{k\in\mathbb{N}}}{0}\) is infinite-time admissible for system (8) if the sequence \((C_k)_{k\in\mathbb{N}} \in l^1\), where
\[ C_k := |b_k|^2 J_k \tag{81} \]
and \(J_k\) is given by (63) for every \((\lambda_k, \gamma_k)\), k ∈ N.
Proof. Define the infinite-time forcing operator for (80) as \(\Phi^{\infty} : L^2(0,\infty;\mathbb{C}) \to \mathcal{X}_{-1}\),
\[ \Phi^{\infty}(u) := \int_0^{\infty} \mathcal{T}(t)\mathcal{B} u(t)\, dt. \]
From (78) it can be represented as
\[ \Phi^{\infty}(u) = \sum_{k\in\mathbb{N}} \Phi^{\infty}_k(u), \tag{82} \]
where \(\Phi^{\infty}_k(u)\) is given by
\[ \Phi^{\infty}_k(u) := \int_0^{\infty} \mathcal{T}_k(t)\mathcal{B}_k u(t)\, dt, \qquad k \in \mathbb{N}. \]
Then, similarly as in (79) and using the assumption, we see that
\[ \| \Phi^{\infty}(u) \|_{\mathcal{X}}^2 = \sum_{k\in\mathbb{N}} \| \Phi^{\infty}_k(u) \|_{\mathcal{X}}^2 \le (1+\tau) \sum_{k\in\mathbb{N}} |C_k|\; \|u\|_{L^2(0,\infty;\mathbb{C})}^2 < \infty. \]
The condition \((C_k)_{k\in\mathbb{N}} \in l^1\) of Theorem 22 may not be easy to verify given the form of \(J_k\) in (63). However, in certain situations the required condition follows from relatively simple relations between the generator eigenvalues and a control sequence; see Example 4.1 below.
The l 1 -convergence condition was also used in [24], where the results are, in fact, a special case of the present reasoning. This can be seen in Sections 4.2 and 4.3 below.
Examples
A motivating example of a dynamical system is the heat equation with delay [21], [20] (or a diffusion model with a delay in the reaction term [30, Section 2.1]). Consider a homogeneous rod with zero temperature imposed on both of its ends, with its temperature change described by the following model:
\[ \begin{cases} \dfrac{\partial w}{\partial t}(x,t) = \dfrac{\partial^2 w}{\partial x^2}(x,t) + g\big(w(x,t-\tau)\big), & x \in (0,\pi),\; t \ge 0, \\ w(0,t) = 0, \quad w(\pi,t) = 0, & t \in [0,\infty), \\ w(x,0) = w_0(x), & x \in (0,\pi), \\ w(x,t) = \varphi(x,t), & x \in (0,\pi),\; t \in [-\tau,0], \end{cases} \tag{83} \]
where the temperature profile w(·,t) belongs to the state space X = L²(0,π), the initial condition is formed by the initial temperature distribution \(w_0 \in W^{2,2}(0,\pi) \cap W^{1,2}_0(0,\pi)\) and the initial history segment \(\varphi_0 \in W^{1,2}(-\tau,0;X)\), and the action of g is such that it can be considered as a linear and bounded diagonal operator on X. More precisely, consider first (83) without the delay term, i.e., the classical one-dimensional heat equation setting [26, Chapter 2.6]. Define
\[ D(A) := W^{2,2}(0,\pi) \cap W^{1,2}_0(0,\pi), \qquad Az := \frac{d^2}{dx^2} z. \tag{84} \]
Note that 0 ∈ ρ(A). For k ∈ N let \(\phi_k \in D(A)\), \(\phi_k(x) := \sqrt{\tfrac{2}{\pi}} \sin(kx)\) for every x ∈ (0,π). Then \((\phi_k)_{k\in\mathbb{N}}\) is an orthonormal Riesz basis in X and
\[ A\phi_k = -k^2 \phi_k \qquad \forall k \in \mathbb{N}. \tag{85} \]
Introduce now the delay term g : X → X, g(z) := A_1 z, where A_1 ∈ L(X) is such that \(A_1 \phi_k = \gamma_k \phi_k\) for every k ∈ N. We can now, using history segments, reformulate (83) into the abstract setting
\[ \dot z(t) = Az(t) + A_1 z_t(-\tau), \qquad z(0) = w_0, \quad z^0 = \varphi_0. \tag{86} \]
Using standard Hilbert space methods, transforming system (86) into the l² space (we use the same notation for the l² version of (86)), and introducing a control signal, we obtain a retarded system of type (1). The most important aspect of the above example is the sequence of eigenvalues \((\lambda_k)_{k\in\mathbb{N}} = (-k^2)_{k\in\mathbb{N}}\), a characteristic feature of the heat equation. Although the above heat equation is expressed using a specific Riesz basis, the underlying idea remains the same. More precisely, one can redo the reasoning leading to a version of Theorem 22 based on a general Riesz basis instead of the standard orthonormal basis in X. Such an approach, however, would be based on the same ideas and would inevitably suffer from a less clear presentation, so we refrain from it.
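To make the reformulation (86) tangible, the following spectral sketch is our own hypothetical discretisation (not part of the paper): taking g(w) = c·w with a constant c (so γ_k = c for all k), expanding w in the sine basis φ_k decouples (83) into the scalar delay ODEs \(\dot z_k = -k^2 z_k + c\, z_k(t-\tau)\), which we integrate by explicit Euler. NumPy is assumed, and all numerical values are arbitrary.

```python
import numpy as np

K, tau, c, dt = 20, 1.0, -0.4, 1e-4          # modes, delay, gamma_k = c, time step
n_tau = int(tau / dt)
x = np.linspace(0, np.pi, 200)
phi = np.sqrt(2 / np.pi) * np.sin(np.outer(np.arange(1, K + 1), x))   # phi_k(x)

w0 = lambda s: s * (np.pi - s)               # initial temperature profile
coef0 = phi @ w0(x) * (x[1] - x[0])          # rough coefficients <w0, phi_k>
z = list(np.tile(coef0, (n_tau + 1, 1)))     # constant history in coefficient space

k2 = np.arange(1, K + 1) ** 2
for i in range(int(3.0 / dt)):
    z_now, z_del = z[-1], z[-1 - n_tau]
    z.append(z_now + dt * (-k2 * z_now + c * z_del))
w_final = z[-1] @ phi                        # reconstruct w(x, 3)
print("max |w(x, 3)| =", np.abs(w_final).max())
```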
Eigenvalues with unbounded real part
Consider initially generators with unbounded real parts of their eigenvalues. For a given delay τ > 0, let a diagonal generator (A, D(A)) have a sequence of eigenvalues \((\lambda_k)_{k\in\mathbb{N}}\) such that
\[ \lambda_k = a_k + i\beta_k \in \mathbb{C}_- \quad\text{and}\quad a_k \to -\infty \ \text{as}\ k \to \infty. \tag{87} \]
Let the operator A_1 ∈ L(X) be diagonal with a sequence of eigenvalues \((\gamma_k)_{k\in\mathbb{N}}\). Boundedness of A_1 implies that there exists M < ∞ such that \(|\gamma_k| \le M\) for every k ∈ N. As A_1 is diagonal, we easily get \(|\gamma_k| \le \|A_1\| \le M\). Let the control operator B be represented by the sequence \((b_k)_{k\in\mathbb{N}} \subset \mathbb{C}\).
To use Theorem 22 we additionally need to ensure that \(\gamma_k e^{-i\beta_k\tau} \in \Lambda_{\tau,a_k}\) for every k ∈ N and that the sequence \((C_k)_{k\in\mathbb{N}} = (|b_k|^2 J_{a_k})_{k\in\mathbb{N}} \in l^1\). For the former part we note that the boundedness of A_1 together with (87) implies that there exists N ∈ N such that
\[ |\gamma_k| < |a_k| \qquad \forall k > N. \tag{88} \]
Fix such an N. By the definition of \(\Lambda_{\tau,a}\) in (58) we then see that \(\gamma_k e^{-i\beta_k\tau} \in D_{|a_k|} \subset \Lambda_{\tau,a_k}\) for every k > N. Thus the only additional assumption on the operator A_1 we need is
\[ \gamma_k e^{-i\beta_k\tau} \in \Lambda_{\tau,a_k} \qquad \forall k \le N. \tag{89} \]
Assume that (87) and (89) hold. Then the sequence \((C_k)_{k\in\mathbb{N}} \in l^1\) if and only if
\[ \sum_{k \ge N} |C_k| = \sum_{k \ge N} |b_k|^2 J_{a_k} < \infty, \]
where \(J_{a_k}\) is given by (64) for every k ≥ N. Let us denote \(r_k := \sqrt{a_k^2 - |\gamma_k|^2}\). As k → ∞ we have \(r_k \to \infty\), \(a_k - r_k \to -\infty\), \(a_k + r_k \to 0\), and thus we obtain
\[
\begin{aligned}
\lim_{k\to\infty} \frac{|C_{k+1}|}{|C_k|} &= \lim_{k\to\infty} \frac{|b_{k+1}|^2}{|b_k|^2}\, \frac{J_{a_{k+1}}}{J_{a_k}} \\
&= \lim_{k\to\infty} \frac{|b_{k+1}|^2}{|b_k|^2}\, \frac{r_k}{r_{k+1}} \cdot \frac{ e^{r_{k+1}\tau}(a_{k+1}-r_{k+1}) + e^{-r_{k+1}\tau}(-a_{k+1}-r_{k+1}) }{ 2\operatorname{Re}(\gamma_{k+1} e^{-i\beta_{k+1}\tau}) + e^{r_{k+1}\tau}(a_{k+1}-r_{k+1}) - e^{-r_{k+1}\tau}(-a_{k+1}-r_{k+1}) } \\
&\qquad \cdot \frac{ 2\operatorname{Re}(\gamma_k e^{-i\beta_k\tau}) + e^{r_k\tau}(a_k-r_k) - e^{-r_k\tau}(-a_k-r_k) }{ e^{r_k\tau}(a_k-r_k) + e^{-r_k\tau}(-a_k-r_k) } \\
&= \lim_{k\to\infty} \frac{|b_{k+1}|^2}{|b_k|^2}\, \frac{|a_k| \sqrt{1-\frac{|\gamma_k|^2}{a_k^2}}}{|a_{k+1}| \sqrt{1-\frac{|\gamma_{k+1}|^2}{a_{k+1}^2}}} \cdot \frac{ 1 - e^{-2r_{k+1}\tau}\frac{a_{k+1}+r_{k+1}}{a_{k+1}-r_{k+1}} }{ 1 + e^{-r_{k+1}\tau}\frac{2\operatorname{Re}(\gamma_{k+1} e^{-i\beta_{k+1}\tau})}{a_{k+1}-r_{k+1}} + e^{-2r_{k+1}\tau}\frac{a_{k+1}+r_{k+1}}{a_{k+1}-r_{k+1}} } \\
&\qquad \cdot \frac{ 1 + e^{-r_k\tau}\frac{2\operatorname{Re}(\gamma_k e^{-i\beta_k\tau})}{a_k-r_k} + e^{-2r_k\tau}\frac{a_k+r_k}{a_k-r_k} }{ 1 - e^{-2r_k\tau}\frac{a_k+r_k}{a_k-r_k} } \\
&= \lim_{k\to\infty} \frac{|b_{k+1}|^2}{|b_k|^2}\, \frac{|a_k|}{|a_{k+1}|},
\end{aligned}
\]
provided that at least one of these limits exists. The above result clearly depends on the particular set of eigenvalues.
Let us now look at the abstract heat equation (86). The sequence of eigenvalues in (85), i.e. \((\lambda_k)_{k\in\mathbb{N}} = (-k^2)_{k\in\mathbb{N}}\), clearly satisfies (87). For such \((\lambda_k)_{k\in\mathbb{N}}\) we have
\[ \lim_{k\to\infty} \frac{|C_{k+1}|}{|C_k|} = \lim_{k\to\infty} \frac{|b_{k+1}|^2}{|b_k|^2}, \tag{90} \]
provided that at least one limit exists. By the d'Alembert series convergence criterion, \(\lim_{k\to\infty} \frac{|C_{k+1}|}{|C_k|} < 1\) implies \((C_k)_{k\in\mathbb{N}} \in l^1\). Take the delay τ = 1 and assume that A_1 in (86) is such that (89) holds, i.e. there exists N ∈ N such that \(\gamma_k \in \Lambda_{1,-k^2}\) for every k ≤ N and \(\gamma_k \in \Lambda_{1,-N^2}\) for every k > N. Then, by Theorem 22, for \(\mathcal{B} = \tbinom{(b_k)_{k\in\mathbb{N}}}{0}\) to be infinite-time admissible it is sufficient to take any \((b_k)_{k\in\mathbb{N}} \in l^2\) such that \(\lim_{k\to\infty} \frac{|b_{k+1}|^2}{|b_k|^2} < 1\).
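A small numerical illustration of this example (ours, with assumed data: λ_k = −k², a constant γ_k = c with |c| < k² for all k, and b_k = 2^{−k/2}) evaluates \(C_k = |b_k|^2 J_{a_k}\) via the closed form (64) and watches the ratio-test quantity approach \(|b_{k+1}|^2/|b_k|^2 = 1/2\):

```python
import numpy as np

tau, c = 1.0, -0.4          # delay and constant gamma_k = c

def J_a(a: float, gamma: complex, beta: float = 0.0) -> float:
    """Closed form (64) for |gamma| < |a|."""
    r = np.sqrt(a ** 2 - abs(gamma) ** 2)
    num = np.exp(r * tau) * (a - r) + np.exp(-r * tau) * (-a - r)
    den = (2 * np.real(gamma * np.exp(-1j * beta * tau))
           + np.exp(r * tau) * (a - r) - np.exp(-r * tau) * (-a - r))
    return num / (2 * r * den)

C = [2.0 ** (-k) * J_a(-float(k ** 2), c) for k in range(1, 15)]
for k in range(len(C) - 1):
    print(k + 1, C[k], C[k + 1] / C[k])       # ratio tends to 1/2
print("partial sum:", sum(C))
```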
Note the role of the "first" eigenvalues \(\gamma_k\) of A_1, which need to lie inside the consecutive \(\Lambda_{\tau,-k^2}\) regions. As A_1 is a structural part of the retarded system (86), it may not always be possible to apply Theorem 22.
Direct state-delayed diagonal systems
With small additional effort we can show that the so-called direct (or pure, see e.g. [3]) delayed system, where the delay is in the argument of the generator, is a special case of the problem analysed here. Thus we apply our admissibility results to a dynamical system analysed in [24], given by
\[ \begin{cases} \dot z(t) = Az(t-\tau) + Bu(t), \\ z(0) = x, \qquad z^0 = f, \end{cases} \tag{91} \]
where (A, D(A)) is a diagonal generator of a C 0 -semigroup (T (t)) t≥0 on l 2 , B is a control operator, 0 < τ < ∞ is a delay and the control signal u ∈ L 2 (0, ∞; C). Let the sequence (λ k ) k∈N of the eigenvalues of (A, D(A)) be such that sup k∈N Re λ k < 0.
We construct a setting as the one in Section 3 and proceed with the analysis of the k-th component, with the delay operator given again by point evaluation as \(\Psi_k \in L(W^{1,2}(-\tau,0;\mathbb{C}),\mathbb{C})\), \(\Psi_k(f) := \lambda_k f(-\tau)\) (we leave the index k on purpose); it is bounded as \(\lambda_k\) is finite. The equivalent of (42) now reads
\[ \begin{cases} \dot z_k(t) = \lambda_k z_k(t-\tau) + b_k u(t), \\ z_k(0) = x_k, \qquad z_k^0 = f_k, \end{cases} \tag{92} \]
where the role of \(\gamma_k\) in (42) is played by \(\lambda_k\) in (92), while the \(\lambda_k\) of (42) equals 0 in (92), and this holds for every k. Thus, instead of a collection \(\{\Lambda_{\tau,a_k}\}_{k\in\mathbb{N}}\), we are concerned only with \(\Lambda_{\tau,0}\). Using now Corollary 20 instead of Lemma 19, the equivalent of Theorem 21 in the direct state-delayed setting takes the form

Theorem 23. Let τ > 0 and take \(\lambda_k \in \Lambda_{\tau,0}\). Then the control operator \(\mathcal{B} = \tbinom{b_k}{0}\) for the system based on (92) is infinite-time admissible, and for every \(u \in L^2(0,\infty;\mathbb{C})\)
\[ \| \Phi^{\infty} u \|_{\mathcal{X}}^2 \le (1+\tau)\,|b_k|^2\, \frac{-\cos(|\lambda_k|\tau)}{2\big(\operatorname{Re}(\lambda_k) + |\lambda_k|\sin(|\lambda_k|\tau)\big)}\, \|u\|_{L^2(0,\infty;\mathbb{C})}^2. \]
As Theorem 23 refers only to the k-th component, it is an immediate consequence of Theorem 21. Using the same approach of summing over components, the equivalent of Theorem 22 takes the form

Theorem 24. Let, for the given delay τ ∈ (0,∞), the sequence \((\lambda_k)_{k\in\mathbb{N}} \subset \Lambda_{\tau,0}\). Then the control operator \(\mathcal{B} \in L(\mathbb{C}, \mathcal{X}_{-1})\) for the system based on (91), given by \(\mathcal{B} = \tbinom{(b_k)_{k\in\mathbb{N}}}{0}\), is infinite-time admissible if the sequence \((C_k)_{k\in\mathbb{N}} \in l^1\), where
\[ C_k := |b_k|^2\, \frac{-\cos(|\lambda_k|\tau)}{2\big(\operatorname{Re}(\lambda_k) + |\lambda_k|\sin(|\lambda_k|\tau)\big)}. \tag{93} \]
Note that the assumption that \(\lambda_k \in \Lambda_{\tau,0}\) for every k ∈ N, due to the boundedness of the \(\Lambda_{\tau,0}\) set, implies that A is in fact a bounded operator. While the result of Theorem 24 is correct, it is not directly useful in the analysis of unbounded operators. Instead, its usefulness follows from the so-called reciprocal system approach. For a detailed presentation of the reciprocal system approach see [7], while for its application see [24]. We note here only that, while there is some sort of symmetry in the admissibility analysis of a given undelayed system and its reciprocal, the introduction of a delay breaks this symmetry. In the current context consider the example of the next section.

Remark 25. In [24] the result corresponding to Theorem 24 uses a sequence \((C_k)_{k\in\mathbb{N}}\) which is based not only on a control operator and eigenvalues of the generator, but also on some constants \(\delta_k\) and \(m_k\), so that \(C_k = C_k(b_k, \lambda_k, \delta_k, m_k)\). As \(\delta_k\) and \(m_k\) originate from the proof of the result corresponding to Theorem 23, it requires additional effort to make the condition based on them useful. In the current form of Theorem 24 this problem does not exist, and the convergence of (93) depends only on the relation between the eigenvalues of the generator and the control operator.
Bounded real eigenvalues
In the diagonal framework of Example 4.2 let us consider, for a given delay τ, a sequence \((\lambda_k)_{k\in\mathbb{N}} \subset \mathbb{R} \cap \Lambda_{\tau,0}\) such that \(\lambda_k \to 0\) as k → ∞. In particular, let \(\lambda_k := (-\frac{\pi}{2} + \varepsilon)\, \tau^{-1} k^{-2}\) for some sufficiently small \(0 < \varepsilon < \frac{\pi}{2}\). Such a sequence of \(\lambda_k\) typically arises when considering a reciprocal system of an undelayed heat equation, as is easily seen from (85). The ratio of the absolute values of two consecutive coefficients (93) is
\[ \frac{|C_{k+1}|}{|C_k|} = \frac{|b_{k+1}|^2}{|b_k|^2}\, \frac{|\cos(|\lambda_{k+1}|\tau)|}{|\cos(|\lambda_k|\tau)|}\, \frac{|\lambda_k|}{|\lambda_{k+1}|}\, \frac{|1 - \sin(|\lambda_k|\tau)|}{|1 - \sin(|\lambda_{k+1}|\tau)|}. \]
It is easy to see that
\[ \lim_{k\to\infty} \frac{|C_{k+1}|}{|C_k|} = \lim_{k\to\infty} \frac{|b_{k+1}|^2}{|b_k|^2}, \tag{94} \]
provided that at least one of these limits exists. By the d'Alembert series convergence criterion, \(\lim_{k\to\infty} \frac{|C_{k+1}|}{|C_k|} < 1\) implies \((C_k)_{k\in\mathbb{N}} \in l^1\). Thus, by (94), for \(\mathcal{B} = \tbinom{(b_k)_{k\in\mathbb{N}}}{0}\) to be infinite-time admissible for system (91) it is sufficient to take any \((b_k)_{k\in\mathbb{N}} \in l^2\) such that
\[ \lim_{k\to\infty} \frac{|b_{k+1}|^2}{|b_k|^2} < 1. \tag{95} \]
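A quick check of (94) with assumed data (ours, not from the paper): for \(\lambda_k = (-\pi/2 + \varepsilon)\,\tau^{-1}k^{-2} \in \Lambda_{\tau,0}\) and \(b_k = 3^{-k/2}\), the ratio \(|C_{k+1}|/|C_k|\) from (93) should approach \(|b_{k+1}|^2/|b_k|^2 = 1/3\).

```python
import numpy as np

tau, eps = 1.0, 0.1
lam = lambda k: (-np.pi / 2 + eps) / (tau * k ** 2)

def C(k: int) -> float:
    """Coefficient (93) with b_k = 3^{-k/2} (real negative lambda_k)."""
    l = lam(k)
    return 3.0 ** (-k) * (-np.cos(abs(l) * tau)) / (2 * (l + abs(l) * np.sin(abs(l) * tau)))

vals = [C(k) for k in range(1, 20)]
print([vals[k + 1] / vals[k] for k in range(10)])   # tends to 1/3
print("l1 partial sum:", sum(vals))
```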
Appendix
Proof of Lemma 19
We rewrite J as
\[ J = \frac{1}{2\pi}\int_{-\infty}^{\infty} \frac{d\omega}{|i\omega - \lambda - \gamma e^{-i\omega\tau}|^2} = \frac{1}{2\pi}\int_{-\infty}^{\infty} \frac{d\omega}{(i\omega - \lambda - \gamma e^{-i\omega\tau})(-i\omega - \bar\lambda - \bar\gamma e^{i\omega\tau})} = \frac{1}{2\pi i}\int_{-i\infty}^{i\infty} \frac{ds}{(s - \lambda - \gamma e^{-s\tau})(-s - \bar\lambda - \bar\gamma e^{s\tau})} = \frac{1}{2\pi i}\int_{-i\infty}^{i\infty} E_1(s) E_2(s)\, ds, \tag{96} \]
where
\[ E_1(s) := \frac{1}{s - \lambda - \gamma e^{-s\tau}}, \qquad E_2(s) := \frac{1}{-s - \bar\lambda - \bar\gamma e^{s\tau}}. \tag{97} \]
Note that, writing explicitly \(E_1\) and \(E_2\) as functions of s and the parameters λ, γ and τ, we have \(E_1(s,\lambda,\gamma,\tau) = E_2(-s, \bar\lambda, \bar\gamma, \tau)\).

Let \(\mathcal{E}_1\) be the set of poles of \(E_1\) and \(\mathcal{E}_2\) the set of poles of \(E_2\). As, by assumption, \(\gamma e^{-ib\tau} \in \Lambda_{\tau,a}\), Proposition 18 states that \(\mathcal{E}_1 \subset \mathbb{C}_-\) and \(\mathcal{E}_2 \subset \mathbb{C}_+\). Thus \(E_1\) is analytic in \(\mathbb{C} \setminus \mathbb{C}_-\), while \(E_2\) is analytic in \(\mathbb{C} \setminus \mathbb{C}_+\). Let \(s_n \in \mathcal{E}_1\), i.e. \(s_n - \lambda - \gamma e^{-s_n\tau} = 0\). Rearranging gives
\[ \frac{\gamma}{s_n - \lambda} = e^{s_n\tau}. \tag{98} \]
Substituting the above into \(E_2\) gives
\[ E_2(s_n) = -\frac{s_n - \lambda}{(s_n + \bar\lambda)(s_n - \lambda) + |\gamma|^2}, \]
and this value is finite as \(s_n \notin \mathcal{E}_2\). Rearranging (96) to account for (98) gives
\[ J = \frac{1}{2\pi i}\int_{-i\infty}^{i\infty} \Big[ E_1(s)\Big( E_2(s) + \frac{s-\lambda}{(s+\bar\lambda)(s-\lambda) + |\gamma|^2} \Big) - E_1(s)\, \frac{s-\lambda}{(s+\bar\lambda)(s-\lambda) + |\gamma|^2} \Big]\, ds. \tag{99} \]
The above integrand has no poles at the roots \(\{z_1, z_2\}\) of
\[ (s+\bar\lambda)(s-\lambda) + |\gamma|^2 = 0. \tag{100} \]
However, as the two parts of the integrand in (99) will be treated separately, we need to consider the poles introduced by \(z_1\) and \(z_2\) with regard to the contour of integration. Rewrite also (100) as
\[ (s+\bar\lambda)(s-\lambda) + |\gamma|^2 = (s-z_1)(s-z_2) = 0. \tag{101} \]
From this point onwards we analyse the three cases given by the right side of (63). Assume first that |γ| < |a|. Then
\[ z_1 = -\sqrt{a^2 - |\gamma|^2} + ib, \qquad z_2 = \sqrt{a^2 - |\gamma|^2} + ib. \tag{102} \]
Figure 2a shows the integration contours \(\Gamma_1(r) = \Gamma_I(r) + \Gamma_L(r)\) and \(\Gamma_2(r) = \Gamma_I(r) + \Gamma_R(r)\), for r ∈ (0,∞), used for the calculation of J. In particular, \(\Gamma_I\) runs along the imaginary axis, \(\Gamma_L\) is a left semicircle and \(\Gamma_R\) is a right semicircle. Due to the above argument, for a sufficiently large r we get
\[
\begin{aligned}
J &= \frac{1}{2\pi i}\lim_{r\to\infty} \int_{\Gamma_I(r)} \Big[ E_1(s)\Big( E_2(s) + \frac{s-\lambda}{(s+\bar\lambda)(s-\lambda)+|\gamma|^2} \Big) - E_1(s)\,\frac{s-\lambda}{(s+\bar\lambda)(s-\lambda)+|\gamma|^2} \Big] ds \\
&= \frac{1}{2\pi i}\oint_{C+C_L} E_1(s)\Big( E_2(s) + \frac{s-\lambda}{(s-z_1)(s-z_2)} \Big) ds - \frac{1}{2\pi i}\oint_{C+C_R} E_1(s)\,\frac{s-\lambda}{(s-z_1)(s-z_2)}\, ds.
\end{aligned} \tag{103}
\]
In the calculation of the above we used the fact that both integrals round the semicircles at infinity are zero, as the integrands are, at most, of order \(s^{-2}\) and, for every fixed φ, λ, γ, τ,
\[ \lim_{r\to\infty} \frac{1}{r e^{i\varphi} - \lambda - \gamma e^{-r e^{i\varphi}\tau}} = 0. \tag{104} \]
Define the separate parts of (103) as
\[ J_L := \frac{1}{2\pi i}\oint_{C+C_L} E_1(s)\Big( E_2(s) + \frac{s-\lambda}{(s-z_1)(s-z_2)} \Big) ds \tag{105} \]
and
\[ J_R := -\frac{1}{2\pi i}\oint_{C+C_R} E_1(s)\,\frac{s-\lambda}{(s-z_1)(s-z_2)}\, ds \tag{106} \]
and consider them separately.
To calculate \(J_L\) note that from (98) it follows that for every \(s_n \in \mathcal{E}_1\) the value
\[ E_2(s_n) = -\frac{s_n-\lambda}{(s_n-z_1)(s_n-z_2)} \]
is finite.

As the C + C_R contour is clockwise we obtain

Thus we obtain

where \(z_1, z_2\) are given by (110). Substituting these values, again after tedious calculations, we obtain

For the last case assume that |γ| = |a| > 0, as the assumption \(\gamma e^{-ib\tau} \in \Lambda_{\tau,a}\) excludes the case |a| = |γ| = 0, because \(0 \notin \Lambda_{\tau,0}\). Instead of \(\{z_1, z_2\}\) we now have a single double root \(z_0\) of (101), given by \(z_0 = ib\). As \(z_0\) lies on the imaginary axis we use the contour shown in Figure 2b, tailored to the case \(z_1 = z_2 = z_0\). Define \(J_L\) and \(J_R\) as in (105) and (106), respectively, but with the contour tailored for \(z_0\). For the same reasons as in (111) we have

For \(J_R\) the only pole of the integrand of (106) encircled by the C + C_R contour is \(z_0\). Denoting this integrand by g, the residue formula for a double root gives

With the current assumption we have that \(a^2 = \gamma\bar\gamma\). By this and the fact that the C + C_R contour is clockwise we obtain

As \(J = J_L + J_R\), this finishes the proof.
Funding

Jonathan R. Partington indicates no external funding. Radosław Zawiski's work was performed when he was a visiting researcher at the Centre for Mathematical Sciences of Lund University, hosted by Sandra Pott, and supported by the Polish National Agency for Academic Exchange (NAWA) within the Bekker programme under the agreement PPN/BEK/2020/1/00226/U/00001/A/00001.

Competing interests

The authors have no competing interests as defined by Springer, or other interests that might be perceived to influence the results and/or discussion reported in this paper.

Authors' contributions

JRP and RZ are responsible for the initial conception of the research and the approach method. RZ performed the research concerning every element needed for the single component as well as the whole system analysis. Examples for unbounded generators were provided by RK and RZ, while examples for direct state-delayed systems come from the work of JRP and RZ. Figures 1-2 were prepared by RZ. All authors participated in writing the manuscript. All authors reviewed the manuscript.

Acknowledgements

The authors would like to thank Prof. Yuriy Tomilov for many valuable comments and for mentioning to them reference [19].
(See, e.g., [10, Chapter 5]:) \(W^{1,2}(J,X) := \{ f \in L^2(J,X) : \tfrac{d}{dt} f \in L^2(J,X) \}\) and \(W^{1,2}_0(J,X) := \{ f \in W^{1,2}(J,X) : f(\partial J) = 0 \}\), where \(\tfrac{d}{dt} f\) is a weak derivative of f and J is an interval with boundary ∂J.
Let \(A_0\) be the generator of the nilpotent left shift semigroup on \(L^2(-\tau,0;X)\). For s ∈ C define \(\varepsilon_s : [-\tau,0] \to \mathbb{C}\), \(\varepsilon_s(\sigma) := e^{s\sigma}\). Define also \(\Psi_s \in L(D(A), X)\), \(\Psi_s x := \Psi(\varepsilon_s(\cdot)x)\). Then [5, Proposition 3.19] provides

Proposition 3. For s ∈ C and for all 1 ≤ p < ∞ we have \(s \in \rho(\mathcal{A})\) if and only if \(s \in \rho(A + \Psi_s)\).
Definition 5. Let A : D(A) → X be a densely defined operator. The adjoint of (A, D(A)), denoted (A*, D(A*)), is defined on \(D(A^*) := \{ y \in X : \text{the functional } D(A) \ni x \mapsto \langle Ax, y\rangle_X \text{ is bounded} \}\).
Figure 1: Outer boundaries of some \(\Lambda_{\tau,a}\) sets, defined in (58)-(60) with η = u + iv, for τ = 1 and different values of a: solid for a = −1.5, dashed for a = 0 and dotted for a = 0.25.
Figure 2: Contours of integration in (99): part a) is used for the case |γ| < |a|, part b) is used when |γ| ≥ |a|. Both parts are drawn for a sufficiently large r, so that \(\Gamma_I(r) = C\), \(\Gamma_L(r) = C_L\) and \(\Gamma_R(r) = C_R\), and they enclose the particular values of \(z_1\) and \(z_2\) in (102) and (110), respectively. The location of the infinitesimally small semicircles around \(z_1\) and \(z_2\) in part b) is to be modified depending on the location of \(z_1\) and \(z_2\) on the imaginary axis.
Theorem 1 (Paley-Wiener). Let Y be a Hilbert space. Then the Laplace transform \(\mathcal{L}\) is an isometric isomorphism of \(L^2(0,\infty;Y)\) onto \(H^2(\mathbb{C}_+;Y)\) (see [25] for the scalar version or [2, Theorem 1.8.3] for the vector-valued one).
References

[1] H. Amann, Linear and Quasilinear Parabolic Problems, Monographs in Mathematics, vol. 89, Birkhäuser Basel, Basel, 1995.
[2] W. Arendt, C.J.K. Batty, M. Hieber, and F. Neubrander, Vector-valued Laplace Transforms and Cauchy Problems, 2nd ed., Monographs in Mathematics, vol. 96, Birkhäuser Verlag AG, Basel, 2010.
[3] C.T.H. Baker, Retarded differential equations, Journal of Computational and Applied Mathematics 125 (2000), 309-335.
[4] A.V. Balakrishnan, Applied Functional Analysis, 2nd ed., Springer, New York, 1981.
[5] A. Bátkai and S. Piazzera, Semigroups for Delay Equations, Research Notes in Mathematics, vol. 10, CRC Press, 2005.
[6] H. Brezis, Functional Analysis, Sobolev Spaces and Partial Differential Equations, Universitext, Springer-Verlag New York, New York, 2011.
[7] R.F. Curtain, Regular linear systems and their reciprocals: applications to Riccati equations, Systems and Control Letters 49 (2003), 81-89.
[8] K.-J. Engel, Spectral theory and generator property for one-sided coupled operator matrices, Semigroup Forum 58(2) (1999), 267-295.
[9] K.-J. Engel and R. Nagel, One-Parameter Semigroups for Linear Evolution Equations, Graduate Texts in Mathematics, vol. 194, Springer-Verlag, Berlin, 2000.
[10] L.C. Evans, Partial Differential Equations, Graduate Studies in Mathematics, vol. 19, American Mathematical Society, 2002.
[11] J. Garnett, Bounded Analytic Functions, Graduate Texts in Mathematics, vol. 236, Springer-Verlag New York, 2007.
[12] P. Grabowski and F.M. Callier, Admissible observation operators, semigroup criteria of admissibility, Integral Equations and Operator Theory 25(2) (1996), 182-198.
[13] L.F. Ho and D.L. Russell, Admissible input elements for systems in Hilbert space and a Carleson measure criterion, SIAM Journal on Control and Optimization 21 (1983), 616-640.
[14] L.F. Ho and D.L. Russell, Erratum: Admissible input elements for systems in Hilbert space and a Carleson measure criterion, SIAM Journal on Control and Optimization 21 (1983), 985-986.
[15] B. Jacob and J.R. Partington, Admissibility of control and observation operators for semigroups: a survey, in: Current Trends in Operator Theory and its Applications (J.A. Ball, J.W. Helton, M. Klaus, and L. Rodman, eds.), Birkhäuser Basel, Basel, 2004, pp. 199-221.
[16] B. Jacob, J.R. Partington, and S. Pott, On Laplace-Carleson embedding theorems, Journal of Functional Analysis 264 (2013), 783-814.
[17] B. Jacob, J.R. Partington, and S. Pott, Applications of Laplace-Carleson embeddings to admissibility and controllability, SIAM Journal on Control and Optimization 52 (2014), no. 2, 1299-1313.
[18] R. Kapica and R. Zawiski, Conditions for asymptotic stability of first order scalar differential-difference equation with complex coefficients, arXiv e-prints (2022), https://arxiv.org/abs/2204.08729v2.
[19] F. Kappel, Semigroups and delay equations, in: Semigroups, Theory and Applications, Vol. 2 (H. Brezis, M. Crandall, and F. Kappel, eds.), Pitman Research Notes in Mathematics Series, vol. 152, Longman Scientific and Technical, New York, 1986, pp. 136-176.
[20] F. Ammar Khodja, C. Bouzidi, C. Dupaix, and L. Maniar, Null controllability of retarded parabolic equations, Mathematical Control and Related Fields 4(1) (2014), 1-15.
[21] D.Ya. Khusainov, M. Pokojovy, and E.I. Azizbayov, Classical solvability for a linear 1D heat equation with constant delay, Konstanzer Schriften in Mathematik 316 (2013).
[22] P. Koosis, Introduction to H^p Spaces, 2nd ed., Cambridge Tracts in Mathematics, vol. 115, Cambridge University Press, Cambridge, UK, 2008.
[23] J.R. Partington, An Introduction to Hankel Operators, London Mathematical Society Student Texts, vol. 13, Cambridge University Press, Cambridge, UK, 1988.
[24] J.R. Partington and R. Zawiski, Admissibility of state delay diagonal systems with one-dimensional input space, Complex Analysis and Operator Theory 13 (2019), 2463-2485.
[25] W. Rudin, Real and Complex Analysis, 3rd ed., McGraw-Hill, Singapore, 1987.
[26] M. Tucsnak and G. Weiss, Observation and Control for Operator Semigroups, Birkhäuser Verlag AG, Basel, 2009.
[27] K. Walton and J.E. Marshall, Closed form solutions for time delay systems' cost functionals, International Journal of Control 39 (1984), 1063-1071.
[28] G. Weiss, Admissible input elements for diagonal semigroups on l^2, Systems and Control Letters 10 (1988), 79-82.
[29] G. Weiss, A powerful generalization of the Carleson measure theorem?, in: Open Problems in Mathematical Systems and Control Theory, Comm. Control Engrg., Springer, London, 1999, pp. 267-272.
[30] J. Wu, Theory and Applications of Partial Functional Differential Equations, Springer-Verlag, New York, 1996.
[31] A. Wynn, α-Admissibility of observation operators in discrete and continuous time, Complex Analysis and Operator Theory 4 (2010), no. 1, 109-131.
| [] |
[
"Open-and Closed-Loop Neural Network Verification using Polynomial Zonotopes",
"Open-and Closed-Loop Neural Network Verification using Polynomial Zonotopes"
] | [
"Niklas Kochdumper [email protected] \nStony Brook University\nStony BrookNYUSA\n",
"Christian Schilling [email protected] \nAalborg University\nAalborgDenmark\n",
"Matthias Althoff [email protected] \nTechnichal University of Munich\nGarchingGermany\n",
"Stanley Bak [email protected] \nStony Brook University\nStony BrookNYUSA\n"
] | [
"Stony Brook University\nStony BrookNYUSA",
"Aalborg University\nAalborgDenmark",
"Technichal University of Munich\nGarchingGermany",
"Stony Brook University\nStony BrookNYUSA"
] | [] | We present a novel approach to efficiently compute tight non-convex enclosures of the image through neural networks with ReLU, sigmoid, or hyperbolic tangent activation functions. In particular, we abstract the input-output relation of each neuron by a polynomial approximation, which is evaluated in a set-based manner using polynomial zonotopes. While our approach can also can be beneficial for open-loop neural network verification, our main application is reachability analysis of neural network controlled systems, where polynomial zonotopes are able to capture the non-convexity caused by the neural network as well as the system dynamics. This results in a superior performance compared to other methods, as we demonstrate on various benchmarks. | 10.1007/978-3-031-33170-1_2 | [
"https://export.arxiv.org/pdf/2207.02715v2.pdf"
] | 250,311,371 | 2207.02715 | 36e5adeba77c92c28e842c26e7a64f56aec940ab |
Open-and Closed-Loop Neural Network Verification using Polynomial Zonotopes
18 Apr 2023
Niklas Kochdumper, [email protected], Stony Brook University, Stony Brook, NY, USA
Christian Schilling, [email protected], Aalborg University, Aalborg, Denmark
Matthias Althoff, [email protected], Technical University of Munich, Garching, Germany
Stanley Bak, [email protected], Stony Brook University, Stony Brook, NY, USA
Open-and Closed-Loop Neural Network Verification using Polynomial Zonotopes
18 Apr 2023 · arXiv:2207.02715v2 [cs.CV]
Keywords: Neural network verification · neural network controlled systems · reachability analysis · polynomial zonotopes · formal verification
We present a novel approach to efficiently compute tight non-convex enclosures of the image through neural networks with ReLU, sigmoid, or hyperbolic tangent activation functions. In particular, we abstract the input-output relation of each neuron by a polynomial approximation, which is evaluated in a set-based manner using polynomial zonotopes. While our approach can also be beneficial for open-loop neural network verification, our main application is reachability analysis of neural network controlled systems, where polynomial zonotopes are able to capture the non-convexity caused by the neural network as well as the system dynamics. This results in a superior performance compared to other methods, as we demonstrate on various benchmarks.
Introduction
While previously artificial intelligence was mainly used for soft applications such as movie recommendations [9], facial recognition [23], or chess computers [11], it is now also increasingly applied in safety-critical applications, such as autonomous driving [32], human-robot collaboration [27], or power system control [5]. In contrast to soft applications, where failures usually only have minor consequences, failures in safety-critical applications in the worst case result in loss of human lives. Consequently, in order to prevent those failures, there is an urgent need for efficient methods that can verify that the neural networks used for artificial intelligence function correctly. Verification problems involving neural networks can be grouped into two main categories:
-Open-loop verification: Here the task is to check if the output of the neural network for a given input set satisfies certain properties. With this setup one can for example prove that a neural network used for image classification is robust against a certain amount of noise on the image.
-Closed-loop verification: In this case the neural network is used as a controller for a dynamical system, e.g., to steer the system to a given goal set while avoiding unsafe regions. The safety of the controlled system can be verified using reachability analysis. For both of the above verification problems, the most challenging step is to compute a tight enclosure of the image through the neural network for a given input set. Due to the high expressiveness of neural networks, their images usually have complex shapes, so that convex enclosures are often too conservative for verification. In this work, we show how to overcome this limitation with our novel approach for computing tight non-convex enclosures of images through neural networks using polynomial zonotopes.
State of the Art
We first summarize the state of the art for open-loop neural network verification followed by reachability analysis for neural network controlled systems. Many different set representations have been proposed for computing enclosures of the image through a neural network, including intervals [43], polytopes [38], zonotopes [34], star sets [40], and Taylor models [21]. For neural networks with ReLU activation functions, it is possible to compute the exact image. This can be either achieved by recursively partitioning the input set into piecewise affine regions [42], or by propagating the initial set through the network using polytopes [38,48] or star sets [40], where the set is split at all neurons that are both active or inactive. In either case the exact image is in the worst case given as a union of 2 v convex sets, with v being the number of neurons in the network. To avoid this high computational complexity for exact image computation, most approaches compute a tight enclosure instead using an abstraction of the neural network. For ReLU activation functions one commonly used abstraction is the triangle relaxation [15] (see Fig. 1), which can be conveniently integrated into set propagation using star sets [40]. Another possibility is to abstract the input-output relation by a zonotope (see Fig. 1), which is possible for ReLU, sigmoid, and hyperbolic tangent activation functions [34]. One can also apply Taylor model arithmetic [26] to compute the image through networks with sigmoid and hyperbolic tangent activation [21], which corresponds to an abstraction of the input-output relation by a Taylor series expansion. In order to better capture dependencies between different neurons, some approaches also abstract the input-output relation of multiple neurons at once [28,36]. While computation of the exact image is infeasible for large networks, the enclosures obtained by abstractions are often too conservative for verification. To obtain complete verifiers, many approaches therefore use branch and bound strategies [7] that split the input set and/or single neurons until the specification can either be proven or a counterexample is found. For computational reasons branch and bound strategies are usually combined with approaches that are able to compute rough interval bounds for the neural network output very fast. Those bounds can for example be obtained using symbolic intervals [43] that store linear constraints on the variables in addition to the interval bounds to preserve dependencies. The DeepPoly approach [35] uses a similar concept, but applies a back-substitution scheme to obtain tighter bounds. With the FastLin method [45] linear bounds for the overall network can be computed from linear bounds for the single neurons. The CROWN approach [49] extends this concept to linear bounds with different slopes as well as quadratic bounds. Several additional improvements for the CROWN approach have been proposed, including slope optimization using gradient descent [47] and efficient ReLU splitting [44]. Instead of explicitly computing the image, many approaches also aim to verify the specification directly using SMT solvers [22,30], mixed-integer linear programming [8,37], semidefinite programming [31], and convex optimization [24].
For reachability analysis of neural network controlled systems one has to compute the set of control inputs in each control cycle, which is the image of the current reachable set through the neural network controller. Early approaches compute the image for ReLU networks exactly using polytopes [46] or star sets [39]. Since in this case the number of coexisting sets grows rapidly over time, these approaches have to unite sets using convex hulls [46] or interval enclosures [39], which often results in large over-approximations. If template polyhedra are used as a set representation, reachability analysis for neural network controlled systems with discrete-time plants reduces to the task of computing the maximum output along the template directions [12], which can be done efficiently. Neural network controllers with sigmoid and hyperbolic tangent activation functions can be converted to an equivalent hybrid automaton [20], which can be combined with the dynamics of the plant using the automaton product. However, since each neuron is represented by an additional state, the resulting hybrid automaton is very high-dimensional, which makes reachability analysis challenging. Some approaches approximate the overall network with a polynomial function [14,18] using polynomial regression based on samples [14] and Bernstein polynomials [18]. Yet another class of methods [10,21,33,41] employs abstractions of the input-output relation for the neurons to compute the set of control inputs using intervals [10], star sets [41], Taylor models [21], and a combination of zonotopes and Taylor models [33]. Common tools for reachability analysis of neural network controlled systems are JuliaReach [6], NNV [41], POLAR [19], ReachNN* [16], RINO [17], Sherlock [13], Verisig [20], and Verisig 2.0 [21], where JuliaReach uses zonotopes for neural network abstraction [33], NVV supports multiple set representations, ReachNN* applies the Bernstein polynomial method [18], POLAR approximates single neurons by Bernstein polynomials [19], RINO computes interval inner-and outer-approximations [17], Sherlock uses the polynomial regression approach [14], Verisig performs the conversion to a hybrid automaton [20], and Verisig 2.0 uses the Taylor model based neural network abstraction method [21].
Overview
In this work, we present a novel approach for computing tight non-convex enclosures of images through neural networks with ReLU, sigmoid, or hyperbolic tangent activation functions. The high-level idea is to approximate the inputoutput relation of each neuron by a polynomial function, which results in the abstraction visualized in Fig. 1. Since polynomial zonotopes are closed under polynomial maps, the image through this function can be computed exactly, yielding a tight enclosure of the image through the overall neural network. The remainder of this paper is structured as follows: After introducing some preliminaries in Sec. 2, we present our approach for computing tight enclosures of images through neural networks in Sec. 3. Next, we show how to utilize this result for reachability analysis of neural network controlled systems in Sec. 4. Afterwards, in Sec. 5, we introduce some special operations on polynomial zonotopes that we require for image and reachable set computation, before we finally demonstrate the performance of our approach on numerical examples in Sec. 6.
Notation
Sets are denoted by calligraphic letters, matrices by uppercase letters, and vectors by lowercase letters. Given a vector b ∈ R^n, b_(i) refers to the i-th entry. Given a matrix A ∈ R^{o×n}, A_(i,·) represents the i-th matrix row, A_(·,j) the j-th column, and A_(i,j) the j-th entry of matrix row i. The concatenation of two matrices C and D is denoted by [C D], and I_n ∈ R^{n×n} is the identity matrix. The symbols 0 and 1 represent matrices of zeros and ones of proper dimension, the empty matrix is denoted by [ ], and diag(a) returns a diagonal matrix with a ∈ R^n on the diagonal. Given a function f(x) defined as f : R → R, f'(x) and f''(x) denote the first and second derivative with respect to x. The left multiplication of a matrix A ∈ R^{o×n} with a set S ⊂ R^n is defined as A S := {A s | s ∈ S}, the Minkowski addition of two sets S_1 ⊂ R^n and S_2 ⊂ R^n is defined as S_1 ⊕ S_2 := {s_1 + s_2 | s_1 ∈ S_1, s_2 ∈ S_2}, and the Cartesian product of two sets S_1 ⊂ R^n and S_2 ⊂ R^o is defined as S_1 × S_2 := { [s_1^T s_2^T]^T | s_1 ∈ S_1, s_2 ∈ S_2 }. We further introduce an n-dimensional interval as I := [l, u], with ∀i: l_(i) ≤ u_(i), l, u ∈ R^n.
Preliminaries
Let us first introduce some preliminaries required throughout the paper. While the concepts presented in this work can equally be applied to more complex network architectures, we focus on feed-forward neural networks for simplicity:

Definition 1. (Feed-forward neural network) A feed-forward neural network with κ hidden layers consists of weight matrices W_i ∈ R^{v_i×v_{i−1}} and bias vectors b_i ∈ R^{v_i} with i ∈ {1, ..., κ+1} and v_i denoting the number of neurons in layer i. The output y ∈ R^{v_{κ+1}} of the neural network for the input x ∈ R^{v_0} is
\[ y := y_{\kappa+1} \quad\text{with}\quad y_0 = x, \qquad y_{i(j)} = \mu\Big( \sum_{k=1}^{v_{i-1}} W_{i(j,k)}\, y_{i-1(k)} + b_{i(j)} \Big), \quad i = 1, \ldots, \kappa+1, \]
where µ : R → R is the activation function.

Fig. 2: Step-by-step construction of the polynomial zonotope from Example 1.
In this paper we consider ReLU activations µ(x) = max(0, x), sigmoid activations µ(x) = σ(x) = 1/(1+e^{−x}), and hyperbolic tangent activations µ(x) = tanh(x) = (e^x − e^{−x})/(e^x + e^{−x}). Moreover, neural networks often do not apply activation functions on the output neurons, which corresponds to using the identity map µ(x) = x for the last layer. The image Y through a neural network is defined as the set of outputs for a given set of inputs X_0, which is according to Def. 1 given as
\[ \mathcal{Y} = \Big\{ y_{\kappa+1} \;\Big|\; y_0 \in \mathcal{X}_0,\; \forall i \in \{1, \ldots, \kappa+1\}: \; y_{i(j)} = \mu\Big( \sum_{k=1}^{v_{i-1}} W_{i(j,k)}\, y_{i-1(k)} + b_{i(j)} \Big) \Big\}. \]
We present a novel approach for tightly enclosing the image through a neural network by a polynomial zonotope [2], where we use the sparse representation of polynomial zonotopes [25] 1 :
Definition 2. (Polynomial zonotope) Given a constant offset c ∈ R^n, a generator matrix of dependent generators G ∈ R^{n×h}, a generator matrix of independent generators G_I ∈ R^{n×q}, and an exponent matrix E ∈ N_0^{p×h}, a polynomial zonotope PZ ⊂ R^n is defined as
\[ \mathcal{PZ} := \Big\{ c + \sum_{i=1}^{h} \Big( \prod_{k=1}^{p} \alpha_k^{E_{(k,i)}} \Big) G_{(\cdot,i)} + \sum_{j=1}^{q} \beta_j\, G_{I(\cdot,j)} \;\Big|\; \alpha_k, \beta_j \in [-1,1] \Big\}. \]
The scalars α_k are called dependent factors, since a change in their value affects the multiplication with multiple generators. Analogously, the scalars β_j are called independent factors, because they only affect the multiplication with one generator. For a concise notation we use the shorthand PZ = ⟨c, G, G_I, E⟩_{PZ}.
Let us demonstrate polynomial zonotopes by an example:
Example 1. The polynomial zonotope
\[ \mathcal{PZ} = \Big\langle \begin{bmatrix} 4 \\ 4 \end{bmatrix}, \begin{bmatrix} 2 & 1 & 2 \\ 0 & 2 & 2 \end{bmatrix}, \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 & 0 & 3 \\ 0 & 1 & 1 \end{bmatrix} \Big\rangle_{PZ} \]
defines the set
\[ \mathcal{PZ} = \Big\{ \begin{bmatrix} 4 \\ 4 \end{bmatrix} + \begin{bmatrix} 2 \\ 0 \end{bmatrix} \alpha_1 + \begin{bmatrix} 1 \\ 2 \end{bmatrix} \alpha_2 + \begin{bmatrix} 2 \\ 2 \end{bmatrix} \alpha_1^3 \alpha_2 + \begin{bmatrix} 1 \\ 0 \end{bmatrix} \beta_1 \;\Big|\; \alpha_1, \alpha_2, \beta_1 \in [-1,1] \Big\}. \]
The construction of this polynomial zonotope is visualized in Fig. 2.
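To make Definition 2 concrete, the following minimal sketch (ours, not from the paper; NumPy assumed) samples points from the polynomial zonotope of Example 1 by drawing the factors α_1, α_2, β_1 uniformly from [−1, 1]:

```python
import numpy as np

c  = np.array([4.0, 4.0])
G  = np.array([[2.0, 1.0, 2.0],
               [0.0, 2.0, 2.0]])   # dependent generators
GI = np.array([[1.0],
               [0.0]])             # independent generators
E  = np.array([[1, 0, 3],
               [0, 1, 1]])         # exponent matrix

rng = np.random.default_rng(0)
alpha = rng.uniform(-1, 1, size=(10000, 2))   # dependent factors
beta  = rng.uniform(-1, 1, size=(10000, 1))   # independent factors

# monomials alpha_1^{E[0,i]} * alpha_2^{E[1,i]} for each dependent generator i
mono = np.prod(alpha[:, :, None] ** E[None, :, :], axis=1)
pts = c + mono @ G.T + beta @ GI.T            # sampled points of PZ
print(pts.min(axis=0), pts.max(axis=0))
```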
Image Enclosure
We now present our novel approach for computing tight non-convex enclosures of images through neural networks. The general concept is to approximate the input-output relation of each neuron by a polynomial function, the image through which can be computed exactly since polynomial zonotopes are closed under polynomial maps. For simplicity, we focus on quadratic approximations here, but the extension to polynomials of higher degree is straightforward. The overall procedure for computing the image is summarized in Alg. 1, where the computation proceeds layer by layer. For each neuron in the current layer i we first calculate the corresponding input set in Line 5. Next, in Line 6, we compute a lower and an upper bound for the input to the neuron. Using these bounds we then calculate a quadratic approximation for the neuron's input-output relation in Line 7. This approximation is evaluated in a set-based manner in Line 8. The resulting polynomial zonotope xc q , G q , G I,q , E q y P Z forms the j-th dimension of the set PZ representing the output of the whole layer (see Line 9 and Line 12). To obtain a formally correct enclosure, we have to account for the error made by the approximation. We therefore compute the difference between the activation function and the quadratic approximation in Line 10 and add the result to the output set in Line 12. By repeating this procedure for all layers, we finally obtain a tight enclosure of the image through the neural network. A demonstrating example for Alg. 1 is shown in Fig. 3.
For ReLU activations the quadratic approximation only needs to be calculated if $l < 0 \wedge u > 0$ since we can use the exact input-output relations $g(x) = x$ and $g(x) = 0$ if $l \geq 0$ or $u \leq 0$ holds. Due to the evaluation of the quadratic map defined by $g(x)$, the representation size of the polynomial zonotope $\mathcal{PZ}$ increases in each layer. For deep neural networks it is therefore advisable to repeatedly reduce the representation size after each layer using order reduction [25, Prop. 16]. Moreover, one can also apply the compact operation described in [25, Prop. 2] after each layer to remove potential redundancies from $\mathcal{PZ}$. Next, we explain the approximation of the input-output relation as well as the computation of the approximation error in detail.
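To make the layer-by-layer structure of Alg. 1 concrete, here is a deliberately crude Python sketch that propagates plain intervals instead of polynomial zonotopes (intervals can be seen as polynomial zonotopes without dependent generators, so all cross-neuron dependencies are lost and the enclosure is much looser); it illustrates the propagation loop only, not the actual implementation:

```python
import numpy as np

def interval_image(l0, u0, weights, biases):
    """Interval-arithmetic counterpart of the layer-by-layer loop in Alg. 1
    for ReLU networks; returns a box enclosure of the image."""
    l, u = np.asarray(l0, float), np.asarray(u0, float)
    for i, (W, b) in enumerate(zip(weights, biases)):
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        l, u = Wp @ l + Wn @ u + b, Wp @ u + Wn @ l + b  # exact affine image of a box
        if i < len(weights) - 1:                          # identity on the output layer
            l, u = np.maximum(l, 0.0), np.maximum(u, 0.0)  # exact ReLU image of a box
    return l, u

rng = np.random.default_rng(0)
weights = [rng.standard_normal((50, 2)), rng.standard_normal((2, 50))]
biases = [rng.standard_normal(50), rng.standard_normal(2)]
print(interval_image([-1, -1], [1, 1], weights, biases))
```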
Activation Function Approximation
The centerpiece of our algorithm for computing the image of a neural network is the approximation of the input-output relation defined by the activation function $\mu(x)$ with a quadratic expression $g(x) = a_1 x^2 + a_2 x + a_3$ (see Line 7 of Alg. 1). In this section we present multiple possibilities to obtain good approximations.
Polynomial Regression
For polynomial regression we uniformly select $N$ samples $x_i$ from the interval $[l, u]$ and then determine the polynomial coefficients $a_1, a_2, a_3$ by minimizing the average squared distance between the activation function and the quadratic approximation:

$$\min_{a_1, a_2, a_3} \; \frac{1}{N} \sum_{i=1}^{N} \big( \mu(x_i) - a_1 x_i^2 - a_2 x_i - a_3 \big)^2. \tag{1}$$
It is well known that the optimal solution to (1) is

$$\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = A^\dagger b \quad \text{with} \quad A = \begin{bmatrix} x_1^2 & x_1 & 1 \\ \vdots & \vdots & \vdots \\ x_N^2 & x_N & 1 \end{bmatrix}, \quad b = \begin{bmatrix} \mu(x_1) \\ \vdots \\ \mu(x_N) \end{bmatrix},$$

where $A^\dagger = (A^T A)^{-1} A^T$ is the Moore-Penrose inverse of matrix $A$. For the numerical experiments in this paper we use $N = 10$ samples.

Algorithm 1 Enclosure of the image through a neural network
Require: Neural network with weight matrices $W_i$ and bias vectors $b_i$, initial set $\mathcal{X}_0$.
Ensure: Tight enclosure $\mathcal{PZ} \supseteq \mathcal{Y}$ of the image $\mathcal{Y}$.
1: $\mathcal{PZ} \leftarrow \mathcal{X}_0$
2: for $i \leftarrow 1$ to $\kappa + 1$ do (loop over all layers)
3:   $c \leftarrow 0$, $G \leftarrow 0$, $G_I \leftarrow 0$, $\underline{d} \leftarrow 0$, $\overline{d} \leftarrow 0$
4:   for $j \leftarrow 1$ to $v_i$ do (loop over all neurons in the layer)
5:     $\mathcal{PZ}_j \leftarrow W_{i(j,\cdot)}\, \mathcal{PZ} \oplus b_{i(j)}$ (map with weight matrix and bias using (5))
6:     $l, u \leftarrow$ lower and upper bound for $\mathcal{PZ}_j$ according to Prop. 1
7:     $g(x) = a_1 x^2 + a_2 x + a_3 \leftarrow$ quad. approx. on $[l, u]$ according to Sec. 3.1
8:     $\langle c_q, G_q, G_{I,q}, E_q \rangle_{PZ} \leftarrow$ image of $\mathcal{PZ}_j$ through $g(x)$ according to Prop. 2
9:     $c_{(j)} \leftarrow c_q$, $G_{(j,\cdot)} \leftarrow G_q$, $G_{I(j,\cdot)} \leftarrow G_{I,q}$, $E \leftarrow E_q$ (add to output set)
10:    $\underline{d}_{(j)}, \overline{d}_{(j)} \leftarrow$ difference between $g(x)$ and activation function acc. to Sec. 3.2
11:   end for
12:   $\mathcal{PZ} \leftarrow \langle c, G, G_I, E \rangle_{PZ} \oplus [\underline{d}, \overline{d}]$ (add approximation error using (6))
13: end for
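A minimal Python sketch of this least-squares fit, assuming a generic activation passed in as a callable (the sigmoid below is chosen only for illustration):

```python
import numpy as np

def quad_regression(mu, l, u, N=10):
    """Least-squares quadratic fit of an activation mu on [l, u] via the
    Moore-Penrose pseudoinverse, as in Eq. (1)."""
    xs = np.linspace(l, u, N)
    A = np.column_stack([xs**2, xs, np.ones(N)])
    b = mu(xs)
    a1, a2, a3 = np.linalg.pinv(A) @ b
    return a1, a2, a3

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
print(quad_regression(sigmoid, -2.0, 1.0))
```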
Closed-Form Expression
For ReLU activations a closed-form expression for a quadratic approximation can be obtained by enforcing the conditions $g(l) = 0$, $g'(l) = 0$, and $g(u) = u$. The solution to the corresponding system of equations $a_1 l^2 + a_2 l + a_3 = 0$, $2a_1 l + a_2 = 0$, $a_1 u^2 + a_2 u + a_3 = u$ is

$$a_1 = \frac{u}{(u-l)^2}, \qquad a_2 = \frac{-2\,l\,u}{(u-l)^2}, \qquad a_3 = \frac{u^2(2l-u)}{(u-l)^2} + u,$$

which results in the enclosure visualized in Fig. 1. This closed-form expression is very precise if the interval $[l, u]$ is close to being symmetric with respect to the origin ($|l| \approx |u|$), but becomes less accurate if one bound is significantly larger than the other ($|u| \gg |l|$ or $|l| \gg |u|$).
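As a quick numerical check of these coefficients, the following short Python snippet verifies the three defining conditions $g(l) = 0$, $g'(l) = 0$, and $g(u) = u$ for an example interval (the interval $[-1, 2]$ is an arbitrary illustration):

```python
def relu_quad_coeffs(l, u):
    """Closed-form quadratic approximation of ReLU on [l, u] (l < 0 < u),
    enforcing g(l) = 0, g'(l) = 0, g(u) = u."""
    a1 = u / (u - l) ** 2
    a2 = -2.0 * l * u / (u - l) ** 2
    a3 = u ** 2 * (2 * l - u) / (u - l) ** 2 + u   # equals u*l^2/(u-l)^2
    return a1, a2, a3

a1, a2, a3 = relu_quad_coeffs(-1.0, 2.0)
g = lambda x: a1 * x**2 + a2 * x + a3
print(g(-1.0), 2 * a1 * (-1.0) + a2, g(2.0))  # prints 0.0, 0.0, 2.0
```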
Taylor Series Expansion
For sigmoid and hyperbolic tangent activation functions a quadratic fit can be obtained using a second-order Taylor series expansion of the activation function $\mu(x)$ at the expansion point $x^* = 0.5(l + u)$:

$$\mu(x) \approx \mu(x^*) + \mu'(x^*)(x - x^*) + 0.5\,\mu''(x^*)(x - x^*)^2 = \underbrace{0.5\,\mu''(x^*)}_{a_1}\, x^2 + \underbrace{\big( \mu'(x^*) - \mu''(x^*)\, x^* \big)}_{a_2}\, x + \underbrace{\mu(x^*) - \mu'(x^*)\, x^* + 0.5\,\mu''(x^*)\, x^{*2}}_{a_3},$$

where the derivatives for sigmoid activations are $\mu'(x) = \sigma(x)(1 - \sigma(x))$ and $\mu''(x) = \sigma(x)(1 - \sigma(x))(1 - 2\sigma(x))$, and the derivatives for hyperbolic tangent activations are $\mu'(x) = 1 - \tanh(x)^2$ and $\mu''(x) = -2\tanh(x)(1 - \tanh(x)^2)$. The Taylor series expansion method is identical to the concept used in [21].
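A short Python sketch of this Taylor-based fit for the sigmoid; the derivative formulas are the ones stated above, and the interval $[-1, 2]$ is an arbitrary illustration:

```python
import numpy as np

def taylor_quad(mu, dmu, d2mu, l, u):
    """Second-order Taylor expansion of mu at x* = 0.5(l+u),
    returning the coefficients of g(x) = a1*x^2 + a2*x + a3."""
    xs = 0.5 * (l + u)
    a1 = 0.5 * d2mu(xs)
    a2 = dmu(xs) - d2mu(xs) * xs
    a3 = mu(xs) - dmu(xs) * xs + 0.5 * d2mu(xs) * xs**2
    return a1, a2, a3

sig = lambda x: 1.0 / (1.0 + np.exp(-x))
dsig = lambda x: sig(x) * (1.0 - sig(x))
d2sig = lambda x: sig(x) * (1.0 - sig(x)) * (1.0 - 2.0 * sig(x))
print(taylor_quad(sig, dsig, d2sig, -1.0, 2.0))
```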
Linear Approximation
Since a linear function represents a special case of a quadratic function, Alg. 1 can also be used in combination with linear approximations. Such approximations are provided by the zonotope abstraction in [34]. Since closed-form expressions for the bounds $\underline{d}$ and $\overline{d}$ of the approximation error are already specified in [34], we can omit the error bound computation described in Sec. 3.2 in this case. For ReLU activations we obtain according to [34, Theorem 3.1]

$$a_1 = 0, \quad a_2 = \frac{u}{u-l}, \quad a_3 = \frac{-u\,l}{2(u-l)}, \quad \overline{d} = \frac{-u\,l}{2(u-l)}, \quad \underline{d} = \frac{u\,l}{2(u-l)},$$

which results in the zonotope enclosure visualized in Fig. 1. For sigmoid and hyperbolic tangent activations we obtain according to [34, Theorem 3.2]

$$a_1 = 0, \quad a_2 = \min\big( \mu'(l), \mu'(u) \big), \quad a_3 = 0.5\big( \mu(u) + \mu(l) - a_2(u+l) \big),$$
$$\overline{d} = 0.5\big( \mu(u) - \mu(l) - a_2(u-l) \big), \quad \underline{d} = -0.5\big( \mu(u) - \mu(l) - a_2(u-l) \big),$$
where the derivatives of the sigmoid function and the hyperbolic tangent are specified in the paragraph above. We observed from experiments that for ReLU activations the closed-form expression usually results in a tighter enclosure of the image than polynomial regression. For sigmoid and hyperbolic tangent activations, on the other hand, polynomial regression usually performs better than the Taylor series expansion. It is also possible to combine multiple of the methods described above by executing them in parallel and selecting the one that results in the smallest approximation error $[\underline{d}, \overline{d}]$. Since the linear approximation does not increase the number of generators, it represents an alternative to order reduction when dealing with deep neural networks. Here, the development of a method to decide automatically for which layers to use a linear and for which a quadratic approximation is a promising direction for future research.
Bounding the Approximation Error
To obtain a sound enclosure we need to compute the difference between the activation function $\mu(x)$ and the quadratic approximation $g(x) = a_1 x^2 + a_2 x + a_3$ from Sec. 3.1 on the interval $[l, u]$. In particular, this corresponds to determining

$$\underline{d} = \min_{x \in [l,u]} \big( \mu(x) - g(x) \big) \quad \text{and} \quad \overline{d} = \max_{x \in [l,u]} \big( \mu(x) - g(x) \big).$$

Depending on the type of activation function, we use different methods for this.
Rectified Linear Unit (ReLU)
For ReLU activation functions we split the interval $[l, u]$ into the two intervals $[l, 0]$ and $[0, u]$ on which the activation function is constant and linear, respectively. On the interval $[l, 0]$ we have $d(x) = -a_1 x^2 - a_2 x - a_3$, and on the interval $[0, u]$ we have $d(x) = -a_1 x^2 + (1 - a_2)x - a_3$. In both cases $d(x)$ is a quadratic function whose maximum and minimum values are either located on the interval boundary or at the point $x^*$ where the derivative of $d(x)$ is equal to zero. The lower bound on $[l, 0]$ is therefore given as $\underline{d} = \min(d(l), d(x^*), d(0))$ if $x^* \in [l, 0]$ and $\underline{d} = \min(d(l), d(0))$ if $x^* \notin [l, 0]$, where $x^* = -0.5\, a_2 / a_1$. The upper bound as well as the bounds for $[0, u]$ are computed in a similar manner. Finally, the overall bounds are obtained by taking the minimum and maximum of the bounds for the intervals $[l, 0]$ and $[0, u]$.
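The case analysis above translates into a few lines of Python; the sketch below computes both bounds exactly for given coefficients (the example coefficients are the closed-form ones for $l = -1$, $u = 2$ and are used purely for illustration):

```python
def relu_error_bounds(a1, a2, a3, l, u):
    """Exact bounds of d(x) = relu(x) - (a1*x^2 + a2*x + a3) on [l, u],
    assuming l < 0 < u, via the piecewise-quadratic analysis of Sec. 3.2.1."""
    def bounds(p, q, r, lo, hi):
        # candidates: interval endpoints and the vertex of d(x) = p x^2 + q x + r
        d = lambda x: p * x**2 + q * x + r
        cand = [d(lo), d(hi)]
        if p != 0:
            xs = -0.5 * q / p
            if lo <= xs <= hi:
                cand.append(d(xs))
        return min(cand), max(cand)
    lo1, hi1 = bounds(-a1, -a2, -a3, l, 0.0)        # relu(x) = 0 on [l, 0]
    lo2, hi2 = bounds(-a1, 1.0 - a2, -a3, 0.0, u)   # relu(x) = x on [0, u]
    return min(lo1, lo2), max(hi1, hi2)

print(relu_error_bounds(2/9, 4/9, 2/9, -1.0, 2.0))  # (-2/9, 1/8) for this example
```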
Sigmoid and Hyperbolic Tangent
Here our high-level idea is to sample the function $d(x)$ at points $x_i$ with distance $\Delta x$ distributed uniformly over the interval $[l, u]$. From rough bounds for the derivative $d'(x)$ we can then deduce how much the function value between two sample points changes at most, which yields tight bounds $\overline{d}_b \geq \overline{d}$ and $\underline{d}_b \leq \underline{d}$. In particular, we want to choose the sampling rate $\Delta x$ such that the bounds $\overline{d}_b, \underline{d}_b$ comply to a user-defined precision $\delta > 0$:

$$\overline{d} + \delta \geq \overline{d}_b \geq \overline{d} \quad \text{and} \quad \underline{d} - \delta \leq \underline{d}_b \leq \underline{d}. \tag{2}$$

We observe that for both, sigmoid and hyperbolic tangent, the derivative is globally bounded by $\mu'(x) \in [0, \overline{\mu}]$, where $\overline{\mu} = 0.25$ for the sigmoid and $\overline{\mu} = 1$ for the hyperbolic tangent. In addition, it holds that the derivative of the quadratic approximation $g(x) = a_1 x^2 + a_2 x + a_3$ is bounded by $g'(x) \in [\underline{g}, \overline{g}]$ on the interval $[l, u]$, where $\underline{g} = \min(2a_1 l + a_2,\, 2a_1 u + a_2)$ and $\overline{g} = \max(2a_1 l + a_2,\, 2a_1 u + a_2)$. As a consequence, the derivative of the difference $d(x) = \mu(x) - g(x)$ is bounded by $d'(x) \in [-\overline{g},\, \overline{\mu} - \underline{g}]$. The value of $d(x)$ can therefore at most change by $\pm\Delta d$ between two samples $x_i$ and $x_{i+1}$, where $\Delta d = \Delta x \max(|{-\overline{g}}|, |\overline{\mu} - \underline{g}|)$. To satisfy (2) we require $\Delta d \leq \delta$, so that we have to choose the sampling rate as $\Delta x \leq \delta / \max(|{-\overline{g}}|, |\overline{\mu} - \underline{g}|)$. Finally, the bounds are computed by taking the maximum and minimum of all samples: $\overline{d}_b = \max_i d(x_i) + \delta$ and $\underline{d}_b = \min_i d(x_i) - \delta$. For our experiments we use a precision of $\delta = 0.001$.
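A Python sketch of this sampling scheme; the quadratic coefficients are obtained here with np.polyfit as a stand-in for the regression of Sec. 3.1.1, and $\overline{\mu} = 0.25$ is the sigmoid derivative bound stated above:

```python
import numpy as np

def sampled_error_bounds(mu, a1, a2, a3, l, u, mu_bar, delta=1e-3):
    """delta-precise bounds of d(x) = mu(x) - g(x) on [l, u] by uniform
    sampling (Sec. 3.2.2); mu_bar is a global bound on mu'(x)."""
    g_lo = min(2 * a1 * l + a2, 2 * a1 * u + a2)
    g_hi = max(2 * a1 * l + a2, 2 * a1 * u + a2)
    dx = delta / max(abs(g_hi), abs(mu_bar - g_lo))   # sampling rate from (2)
    xs = np.linspace(l, u, int(np.ceil((u - l) / dx)) + 1)
    d = mu(xs) - (a1 * xs**2 + a2 * xs + a3)
    return d.min() - delta, d.max() + delta

sig = lambda x: 1.0 / (1.0 + np.exp(-x))
xs_fit = np.linspace(-2.0, 1.0, 10)
a1, a2, a3 = np.polyfit(xs_fit, sig(xs_fit), 2)       # quadratic fit on [-2, 1]
print(sampled_error_bounds(sig, a1, a2, a3, -2.0, 1.0, 0.25))
```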
Neural Network Controlled Systems
Reachable sets for neural network controlled systems can be computed efficiently by combining our novel image enclosure approach for neural networks with a reachability algorithm for nonlinear systems. We consider general nonlinear systems

$$\dot{x}(t) = f\big( x(t),\, u_c(x(t), t),\, w(t) \big), \tag{3}$$

where $x \in \mathbb{R}^n$ is the system state, $u_c: \mathbb{R}^n \times \mathbb{R} \to \mathbb{R}^m$ is a control law, $w(t) \in \mathcal{W} \subset \mathbb{R}^r$ is a vector of uncertain disturbances, and $f: \mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R}^r \to \mathbb{R}^n$ is a Lipschitz continuous function. For neural network controlled systems the control law $u_c(x(t), t)$ is given by a neural network. Since neural network controllers are usually realized as digital controllers, we consider the sampled-data case where the control input is only updated at discrete times $t_0, t_0 + \Delta t, t_0 + 2\Delta t, \dots, t_F$ and kept constant in between. Here, $t_0$ is the initial time, $t_F$ is the final time, and $\Delta t$ is the sampling rate. Without loss of generality, we assume from now on that $t_0 = 0$ and $t_F$ is a multiple of $\Delta t$. The reachable set is defined as follows:

Definition 3. (Reachable set) Let $\xi(t, x_0, u_c(\cdot), w(\cdot))$ denote the solution to (3) for initial state $x_0 = x(0)$, control law $u_c(\cdot)$, and the disturbance trajectory $w(\cdot)$. The reachable set for an initial set $\mathcal{X}_0 \subset \mathbb{R}^n$ and a disturbance set $\mathcal{W} \subset \mathbb{R}^r$ is

$$\mathcal{R}(t) := \big\{ \xi(t, x_0, u_c(\cdot), w(\cdot)) \;\big|\; x_0 \in \mathcal{X}_0,\; \forall t^* \in [0, t]:\; w(t^*) \in \mathcal{W} \big\}.$$
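Before moving to set-based computation, the sampled-data semantics can be illustrated on a single trajectory; the Python sketch below (hypothetical scalar system and controller, explicit Euler integration, zero disturbance) holds the control input constant over each cycle, whereas Alg. 2 below propagates whole sets instead:

```python
import numpy as np

def simulate(f, controller, x0, dt_ctrl, t_final, n_steps_per_cycle=100):
    """Sampled-data closed-loop simulation of x' = f(x, u, w) with a
    piecewise-constant control input; illustrative single-trajectory version."""
    x, t, traj = np.array(x0, float), 0.0, [np.array(x0, float)]
    h = dt_ctrl / n_steps_per_cycle
    while t < t_final - 1e-12:
        u = controller(x)                 # input updated once per control cycle
        for _ in range(n_steps_per_cycle):
            x = x + h * f(x, u, np.zeros(1))
        t += dt_ctrl
        traj.append(x.copy())
    return np.array(traj)

# Hypothetical example: scalar system x' = -x + u with a saturating controller
f = lambda x, u, w: -x + u
ctrl = lambda x: np.tanh(-2.0 * x)        # stand-in for a neural network controller
print(simulate(f, ctrl, [1.0], dt_ctrl=0.1, t_final=1.0)[-1])
```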
Since the exact reachable set cannot be computed for general nonlinear systems, we compute a tight enclosure instead. We exploit that the control input is piecewise constant, so that the reachable set for each control cycle can be computed using the extended system

$$\begin{bmatrix} \dot{x}(t) \\ \dot{u}(t) \end{bmatrix} = \begin{bmatrix} f(x(t), u(t), w(t)) \\ 0 \end{bmatrix} \tag{4}$$

together with the initial set $\mathcal{X}_0 \times \mathcal{Y}$, where $\mathcal{Y}$ is the image of $\mathcal{X}_0$ through the neural network controller. The overall algorithm is specified in Alg. 2. Its high-level concept is to loop over all control cycles, where in each cycle we first compute the image of the current reachable set through the neural network controller in Line 3. Next, the image is combined with the reachable set using the Cartesian product in Line 4. This yields the initial set for the extended system in (4), for which we compute the reachable set $\widehat{\mathcal{R}}(t_{i+1})$ at time $t_{i+1}$ as well as the reachable set $\widehat{\mathcal{R}}(\tau_i)$ for the time interval $\tau_i$ in Line 6. While it is possible to use arbitrary reachability algorithms for nonlinear systems, we apply the conservative polynomialization algorithm [2] since it performs especially well in combination with polynomial zonotopes. Finally, in Line 7, we project the reachable set back to the original system dimensions.

Algorithm 2 Reachable set for a neural network controlled system
Require: Nonlinear system $\dot{x}(t) = f(x(t), u_c(x(t), t), w(t))$, neural network controller $u_c(x(t), t)$, initial set $\mathcal{X}_0$, disturbance set $\mathcal{W}$, final time $t_F$, sampling rate $\Delta t$.
Ensure: Tight enclosure $\mathcal{R} \supseteq \mathcal{R}([0, t_F])$ of the reachable set $\mathcal{R}([0, t_F])$.
1: $t_0 \leftarrow 0$, $\mathcal{R}(t_0) \leftarrow \mathcal{X}_0$
2: for $i \leftarrow 0$ to $t_F / \Delta t - 1$ do (loop over all control cycles)
3:   $\mathcal{Y} \leftarrow$ image of $\mathcal{R}(t_i)$ through the neural network controller using Alg. 1
4:   $\widehat{\mathcal{R}}(t_i) \leftarrow \mathcal{R}(t_i) \times \mathcal{Y}$ (combine reachable set and input set using (7))
5:   $t_{i+1} \leftarrow t_i + \Delta t$, $\tau_i \leftarrow [t_i, t_{i+1}]$ (update time)
6:   $\widehat{\mathcal{R}}(t_{i+1}), \widehat{\mathcal{R}}(\tau_i) \leftarrow$ reachable set for the extended system in (4) starting from $\widehat{\mathcal{R}}(t_i)$
7:   $\mathcal{R}(t_{i+1}) \leftarrow [I_n\;\; 0]\, \widehat{\mathcal{R}}(t_{i+1})$, $\mathcal{R}(\tau_i) \leftarrow [I_n\;\; 0]\, \widehat{\mathcal{R}}(\tau_i)$ (projection using (5))
8: end for
9: $\mathcal{R} \leftarrow \bigcup_{i=0}^{t_F/\Delta t - 1} \mathcal{R}(\tau_i)$ (reachable set for the whole time horizon)
Operations on Polynomial Zonotopes
Alg. 1 and Alg. 2 both require some special operations on polynomial zonotopes, the implementation of which we present now. Given a polynomial zonotope $\mathcal{PZ} = \langle c, G, G_I, E \rangle_{PZ} \subset \mathbb{R}^n$, a matrix $A \in \mathbb{R}^{o \times n}$, a vector $b \in \mathbb{R}^o$, and an interval $\mathcal{I} = [l, u] \subset \mathbb{R}^n$, the affine map and the Minkowski sum with an interval are given as

$$A\, \mathcal{PZ} \oplus b = \langle Ac + b,\, AG,\, AG_I,\, E \rangle_{PZ}, \tag{5}$$
$$\mathcal{PZ} \oplus \mathcal{I} = \langle c + 0.5(u + l),\, G,\, [G_I \;\; 0.5\,\mathrm{diag}(u - l)],\, E \rangle_{PZ}, \tag{6}$$

which follows directly from [25, Prop. 8], [25, Prop. 9], and [1, Prop. 2.1]. For the Cartesian product used in Line 4 of Alg. 2 we can exploit the special structure of the sets to calculate the Cartesian product of two polynomial zonotopes $\mathcal{PZ}_1 = \langle c_1, G_1, G_{I,1}, E_1 \rangle_{PZ} \subset \mathbb{R}^n$ and $\mathcal{PZ}_2 = \langle c_2, [G_2 \;\; \widehat{G}_2], [G_{I,2} \;\; \widehat{G}_{I,2}], [E_1 \;\; E_2] \rangle_{PZ} \subset \mathbb{R}^o$ as

$$\mathcal{PZ}_1 \times \mathcal{PZ}_2 = \bigg\langle \begin{bmatrix} c_1 \\ c_2 \end{bmatrix}, \begin{bmatrix} G_1 & 0 \\ G_2 & \widehat{G}_2 \end{bmatrix}, \begin{bmatrix} G_{I,1} & 0 \\ G_{I,2} & \widehat{G}_{I,2} \end{bmatrix}, [E_1 \;\; E_2] \bigg\rangle_{PZ}. \tag{7}$$

In contrast to [25, Prop. 11], this implementation of the Cartesian product explicitly preserves dependencies between the two sets, which is possible since both polynomial zonotopes have identical dependent factors. Computing the exact bounds of a polynomial zonotope in Line 6 of Alg. 1 would be computationally infeasible, especially since this has to be done for each neuron in the network.
We therefore compute a tight enclosure of the bounds instead, which can be done very efficiently:

Proposition 1. (Interval enclosure) Given a polynomial zonotope $\mathcal{PZ} = \langle c, G, G_I, E \rangle_{PZ} \subset \mathbb{R}^n$, an enclosing interval can be computed as

$$\mathcal{I} = [\,c + g_1 - g_2 - g_3 - g_4,\;\; c + g_1 + g_2 + g_3 + g_4\,] \supseteq \mathcal{PZ}$$

with

$$g_1 = 0.5 \sum_{i \in \mathcal{H}} G_{(\cdot,i)}, \quad g_2 = 0.5 \sum_{i \in \mathcal{H}} |G_{(\cdot,i)}|, \quad g_3 = \sum_{i \in \mathcal{K}} |G_{(\cdot,i)}|, \quad g_4 = \sum_{i=1}^{q} |G_{I(\cdot,i)}|,$$
$$\mathcal{H} = \bigg\{ i \;\bigg|\; \prod_{j=1}^{p} \big( 1 - E_{(j,i)} \bmod 2 \big) = 1 \bigg\}, \quad \mathcal{K} = \{1, \dots, h\} \setminus \mathcal{H},$$

where $x \bmod y$, $x, y \in \mathbb{N}_0$ is the modulo operation and $\setminus$ denotes the set difference.

Proof. We first enclose the polynomial zonotope by a zonotope $\mathcal{Z} \supseteq \mathcal{PZ}$ according to [25, Prop. 5], and then compute an interval enclosure $\mathcal{I} \supseteq \mathcal{Z}$ of this zonotope according to [1, Prop. 2.2].
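Prop. 1 is straightforward to implement; the following Python sketch evaluates the formula and applies it to the polynomial zonotope from Example 1 (the printed bounds are what the formula yields for this example, which can be cross-checked against the sampling sketch above):

```python
import numpy as np

def pz_interval(c, G, GI, E):
    """Interval enclosure of a polynomial zonotope per Prop. 1."""
    # H: dependent generators whose exponents are all even (monomial lies in [0, 1])
    all_even = np.all(E % 2 == 0, axis=0)            # boolean mask over columns
    g1 = 0.5 * G[:, all_even].sum(axis=1)
    g2 = 0.5 * np.abs(G[:, all_even]).sum(axis=1)
    g3 = np.abs(G[:, ~all_even]).sum(axis=1)
    g4 = np.abs(GI).sum(axis=1)
    return c + g1 - g2 - g3 - g4, c + g1 + g2 + g3 + g4

c  = np.array([4.0, 4.0])
G  = np.array([[2.0, 1.0, 2.0], [0.0, 2.0, 2.0]])
GI = np.array([[1.0], [0.0]])
E  = np.array([[1, 0, 3], [0, 1, 1]])
print(pz_interval(c, G, GI, E))   # lower = [-2, 0], upper = [10, 8] for Example 1
```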
The core operation for Alg. 1 is the computation of the image through a quadratic function. While it is possible to obtain the exact image by introducing new dependent factors, we compute a tight enclosure for computational reasons:

Proposition 2. (Image quadratic function) Given a polynomial zonotope $\mathcal{PZ} = \langle c, G, G_I, E \rangle_{PZ} \subset \mathbb{R}$ and a quadratic function $g(x) = a_1 x^2 + a_2 x + a_3$ with $a_1, a_2, a_3, x \in \mathbb{R}$, the image of $\mathcal{PZ}$ through $g(x)$ can be tightly enclosed by

$$\big\{ g(x) \;\big|\; x \in \mathcal{PZ} \big\} \subseteq \langle c_q, G_q, G_{I,q}, E_q \rangle_{PZ} \tag{8}$$

with

$$c_q = a_1 c^2 + a_2 c + a_3 + 0.5\, a_1 \sum_{i=1}^{q} G_{I(\cdot,i)}^2, \quad G_q = \big[ (2a_1 c + a_2) G \;\; a_1 \widehat{G} \big], \quad E_q = \big[ E \;\; \widehat{E} \big],$$
$$G_{I,q} = \big[ (2a_1 c + a_2) G_I \;\; 2a_1 \overline{G} \;\; a_1 \check{G} \big],$$

where

$$\widehat{G} = \big[ G^2 \;\; 2\widehat{G}_1 \; \dots \; 2\widehat{G}_{h-1} \big], \quad \widehat{E} = \big[ 2E \;\; \widehat{E}_1 \; \dots \; \widehat{E}_{h-1} \big],$$
$$\widehat{G}_i = \big[ G_{(i)} G_{(i+1)} \; \dots \; G_{(i)} G_{(h)} \big], \quad i = 1, \dots, h-1,$$
$$\widehat{E}_i = \big[ E_{(\cdot,i)} + E_{(\cdot,i+1)} \; \dots \; E_{(\cdot,i)} + E_{(\cdot,h)} \big], \quad i = 1, \dots, h-1,$$
$$\overline{G} = \big[ G_{(1)} G_I \; \dots \; G_{(h)} G_I \big], \quad \check{G} = \big[ 0.5\, G_I^2 \;\; 2\check{G}_1 \; \dots \; 2\check{G}_{q-1} \big],$$
$$\check{G}_i = \big[ G_{I(i)} G_{I(i+1)} \; \dots \; G_{I(i)} G_{I(q)} \big], \quad i = 1, \dots, q-1, \tag{9}$$

and the squares in $G^2$ as well as $G_I^2$ are interpreted elementwise.

Proof. The proof is provided in Appendix A.
Numerical Examples
We now demonstrate the performance of our approach for image computation, open-loop neural network verification, and reachability analysis of neural network controlled systems. If not stated otherwise, computations are carried out in MATLAB on a 2.9GHz quad-core i7 processor with 32GB memory. We integrated our implementation into CORA [3] and published a repeatability package².
Image Enclosure
First, we demonstrate how our approach captures the non-convexity of the image through a neural network. For visualization purposes we use the deliberately simple example of randomly generated neural networks with two inputs, two outputs, and one hidden layer consisting of 50 neurons. The initial set is $\mathcal{X}_0 = [-1, 1] \times [-1, 1]$. We compare our polynomial-zonotope-based approach with the zonotope abstraction in [34], the star set approach in [40] using the triangle relaxation, and the Taylor model abstraction in [21]. While our approach and the zonotope abstraction are applicable to all types of activation functions, the star set approach is restricted to ReLU activations and the Taylor model abstraction is limited to sigmoid and hyperbolic tangent activations. The resulting image enclosures are visualized in Fig. 4. While using zonotopes or star sets only yields a convex over-approximation, polynomial zonotopes are able to capture the non-convexity of the image and therefore provide a tighter enclosure. While Taylor models also capture the non-convexity of the image to some extent, they are less precise than polynomial zonotopes, which can be explained as follows: 1) the zonotopic remainder of polynomial zonotopes prevents the rapid remainder growth observed for Taylor models, and 2) the quadratic approximation obtained with polynomial regression used for polynomial zonotopes is usually more precise than the Taylor series expansion used for Taylor models.
Open-Loop Neural Network Verification
For open-loop neural network verification the task is to verify that the image of the neural network satisfies certain specifications that are typically given by linear inequality constraints. We examine the ACAS Xu benchmark from the 2021 and 2022 VNN competitions [4,29], originally proposed in [22, Sec. 5], which features neural networks that provide turn advisories for an aircraft to avoid collisions. All networks consist of 6 hidden layers with 50 ReLU neurons per layer. For a fair comparison we performed the evaluation on the same machine that was used for the VNN competition. To compute the image through the neural networks with polynomial zonotopes, we apply a quadratic approximation obtained by polynomial regression for the first two layers, and a linear approximation in the remaining layers. Moreover, we recursively split the initial set to obtain a complete verifier. The comparison with the other tools that participated in the VNN competition shown in Tab. 1 demonstrates that for some verification problems polynomial zonotopes are about as fast as the best tool in the competition.
Neural Network Controlled Systems
The main application of our approach is reachability analysis of neural network controlled systems, for which we now compare the performance to other state-of-the-art tools. For a fair comparison we focus on published results for which the authors of the tools tuned the algorithm settings by themselves. In particular, we examine the benchmarks from [33] featuring ReLU neural network controllers, and the benchmarks from [21] containing sigmoid and hyperbolic tangent neural network controllers. The goal for all benchmarks is to verify that the system reaches a goal set or avoids an unsafe region. As the computation times shown in Tab. 2 demonstrate, our polynomial-zonotope-based approach is for all but two benchmarks significantly faster than all other state-of-the-art tools, mainly since it avoids all major bottlenecks observed for the other tools: the polynomial approximations of the overall network used by Sherlock and ReachNN* are often imprecise, JuliaReach loses dependencies when enclosing Taylor models by zonotopes, Verisig is quite slow since the nonlinear system used to represent the neural network is high-dimensional, and Verisig 2.0 and POLAR suffer from the rapid remainder growth observed for Taylor models.

Table 2. Computation times⁴ in seconds for reachability analysis of neural network controlled systems considering different tools and benchmarks. The dimension, the number of hidden layers, and the number of neurons in each layer are specified in parentheses for each benchmark, where a = 100, b = 5 for ReLU activation functions, and a = 20, b = 3 otherwise. The symbol - indicates that the tool failed to verify the specification.
Conclusion
We introduced a novel approach for computing tight non-convex enclosures of images through neural networks with ReLU, sigmoid, and hyperbolic tangent activation functions. Since we represent sets with polynomial zonotopes, all required calculations can be realized using simple matrix operations only, which makes our algorithm very efficient. While our proposed approach can also be applied to open-loop neural network verification, its main application is reachability analysis of neural network controlled systems. There, polynomial zonotopes enable the preservation of dependencies between the reachable set and the set of control inputs, which results in very tight enclosures of the reachable set. As we demonstrated on various numerical examples, our polynomial-zonotope-based approach consequently outperforms all other state-of-the-art methods for reachability analysis of neural network controlled systems.
Fig. 1. Triangle relaxation (left), zonotope abstraction (middle), and polynomial zonotope abstraction (right) of the ReLU activation function.
Fig. 2. Step-by-step construction of the polynomial zonotope from Example 1.
Fig. 3. Exemplary neural network with ReLU activations (left) and the corresponding image enclosure computed with polynomial zonotopes (right), where we use the approximation $g(x) = 0.25\,x^2 + 0.5\,x + 0.25$ for the red neuron and the approximation $g(x) = x$ for the blue neuron.
Fig. 4. Image enclosures computed with zonotopes (red), star sets (green), Taylor models (purple), and polynomial zonotopes (blue) for randomly generated neural networks with ReLU activations (left), sigmoid activations (middle), and hyperbolic tangent activations (right). The exact image is shown in gray.
Table 1. Computation times³ in seconds for different verification tools on a small but representative excerpt of network-specification combinations of the ACAS Xu benchmark. The symbol - indicates that the tool failed to verify the specification.

Net. Spec. | Cgdtest | CROWN | Debona | ERAN | Marabou | MN-BaB | nnenum | nnv  | NV.jl | oval | RPM  | venus2 | VeriNet | Poly. zono.
1.9   1    | 0.37    | 1.37  | 111    | 3.91 | 0.66    | 48.7   | 0.41   | -    | 1.44  | 0.71 | -    | 0.53   | 0.55    | 0.31
2.3   4    | -       | 0.95  | 1.78   | 1.91 | 0.57    | 12.2   | 0.06   | -    | -     | 0.97 | -    | 0.46   | 0.17    | 0.16
3.5   3    | 0.41    | 0.37  | 1.15   | 1.85 | 0.61    | 6.17   | 0.05   | -    | -     | 0.58 | 34.1 | 0.42   | 0.25    | 0.32
4.5   4    | -       | 0.35  | 0.20   | 1.82 | 0.61    | 5.57   | 0.08   | 0.24 | -     | 0.48 | -    | 0.42   | 0.21    | 0.16
5.6   3    | 0.38    | 0.63  | 2.27   | 1.82 | 0.66    | 6.51   | 0.08   | -    | -     | 0.52 | 40.6 | 0.48   | 0.37    | 0.43
¹ In contrast to [25, Def. 1], we explicitly do not integrate the constant offset c in G. Moreover, we omit the identifier vector used in [25] for simplicity.
² https://codeocean.com/capsule/8237552/tree/v1
³ Times taken from https://github.com/stanleybak/vnncomp2021_results and https://github.com/ChristopherBrix/vnncomp2022_results.
⁴ Computation times taken from [33, Tab. 1] for Sherlock and JuliaReach, from [21, Tab. 2] for Verisig, Verisig 2.0, and ReachNN*, and from [19, Tab. 1] for POLAR.
Appendix A

We now provide the proof for Prop. 2. According to Def. 2, the one-dimensional polynomial zonotope $\mathcal{PZ} = \langle c, G, G_I, E \rangle_{PZ}$ is defined as

$$\mathcal{PZ} = \big\{ c + d(\alpha) + z(\beta) \;\big|\; \alpha, \beta \in [-1,1] \big\} \quad \text{with} \quad d(\alpha) = \sum_{i=1}^{h} \bigg( \prod_{k=1}^{p} \alpha_k^{E_{(k,i)}} \bigg) G_{(i)}, \quad z(\beta) = \sum_{j=1}^{q} \beta_j\, G_{I(j)},$$

where $\alpha = [\alpha_1 \dots \alpha_p]^T$ and $\beta = [\beta_1 \dots \beta_q]^T$. To compute the image through the quadratic function $g(x)$ we require the expressions $d(\alpha)^2$, $d(\alpha) z(\beta)$, and $z(\beta)^2$, which we derive first. For $d(\alpha)^2$ we obtain

$$d(\alpha)^2 = \sum_{i=1}^{h} \bigg( \prod_{k=1}^{p} \alpha_k^{2 E_{(k,i)}} \bigg) G_{(i)}^2 + \sum_{i=1}^{h-1} \sum_{j=i+1}^{h} \bigg( \prod_{k=1}^{p} \alpha_k^{E_{(k,i)} + E_{(k,j)}} \bigg)\, 2\, G_{(i)} G_{(j)}, \tag{11}$$

for $d(\alpha) z(\beta)$ we obtain

$$d(\alpha) z(\beta) = \sum_{i=1}^{h} \sum_{j=1}^{q} \bigg( \beta_j \prod_{k=1}^{p} \alpha_k^{E_{(k,i)}} \bigg) G_{(i)} G_{I(j)}, \tag{12}$$

and for $z(\beta)^2$ we obtain

$$z(\beta)^2 = 0.5 \sum_{j=1}^{q} G_{I(j)}^2 + \sum_{j=1}^{q} 0.5\big( 2\beta_j^2 - 1 \big) G_{I(j)}^2 + \sum_{i=1}^{q-1} \sum_{j=i+1}^{q} 2\, \beta_i \beta_j\, G_{I(i)} G_{I(j)}, \tag{13}$$

where the function $a(i,j)$ maps indices $i, j$ to a new index:

$$a(i,j) = (h+2)q + j - i + \sum_{k=1}^{i-1} (q - k).$$

In (12) and (13), we substituted the expressions $\beta_j \prod_{k=1}^{p} \alpha_k^{E_{(k,i)}}$, $2\beta_i^2 - 1$, and $\beta_i \beta_j$ containing polynomial terms of the independent factors $\beta$ by new independent factors, which results in an enclosure due to the loss of dependency. The substitution is possible since

$$\beta_j \prod_{k=1}^{p} \alpha_k^{E_{(k,i)}} \in [-1, 1], \quad 2\beta_i^2 - 1 \in [-1, 1], \quad \text{and} \quad \beta_i \beta_j \in [-1, 1].$$

Finally, we obtain for the image

$$\big\{ g(x) \;\big|\; x \in \mathcal{PZ} \big\} = \big\{ a_1 x^2 + a_2 x + a_3 \;\big|\; x \in \mathcal{PZ} \big\} \tag{10}$$
$$= \big\{ a_1 (c + d(\alpha) + z(\beta))^2 + a_2 (c + d(\alpha) + z(\beta)) + a_3 \;\big|\; \alpha, \beta \in [-1,1] \big\}$$
$$= \big\{ a_1 c^2 + a_2 c + a_3 + (2a_1 c + a_2) d(\alpha) + a_1 d(\alpha)^2 + (2a_1 c + a_2) z(\beta) + 2a_1 d(\alpha) z(\beta) + a_1 z(\beta)^2 \;\big|\; \alpha, \beta \in [-1,1] \big\}$$
$$\overset{(11),(12),(13)}{\subseteq} \bigg\langle a_1 c^2 + a_2 c + a_3 + 0.5\, a_1 \sum_{i=1}^{q} G_{I(\cdot,i)}^2,\; \big[ (2a_1 c + a_2) G \;\; a_1 \widehat{G} \big],\; \big[ (2a_1 c + a_2) G_I \;\; 2a_1 \overline{G} \;\; a_1 \check{G} \big],\; \big[ E \;\; \widehat{E} \big] \bigg\rangle_{PZ} \overset{(8)}{=} \langle c_q, G_q, G_{I,q}, E_q \rangle_{PZ},$$

which concludes the proof.
1. Althoff, M.: Reachability Analysis and its Application to the Safety Assessment of Autonomous Cars. Ph.D. thesis, Technical University of Munich (2010)
2. Althoff, M.: Reachability analysis of nonlinear systems using conservative polynomialization and non-convex sets. In: Proc. of the Int. Conf. on Hybrid Systems: Computation and Control. pp. 173-182 (2013)
3. Althoff, M.: An introduction to CORA 2015. In: Proc. of the Int. Workshop on Applied Verification for Continuous and Hybrid Systems. pp. 120-151 (2015)
4. Bak, S., Liu, C., Johnson, T.: The second international verification of neural networks competition (VNN-COMP 2021): Summary and results. arXiv:2109.00498 (2021)
5. Beaufays, F., Abdel-Magid, Y., Widrow, B.: Application of neural networks to load-frequency control in power systems. Neural Networks 7(1), 183-194 (1994)
6. Bogomolov, S., et al.: JuliaReach: A toolbox for set-based reachability. In: Proc. of the Int. Conf. on Hybrid Systems: Computation and Control. pp. 39-44 (2019)
7. Bunel, R., et al.: Branch and bound for piecewise linear neural network verification. Journal of Machine Learning Research 21(42) (2020)
8. Cheng, C.H., Nührenberg, G., Ruess, H.: Maximum resilience of artificial neural networks. In: Proc. of the Int. Symposium on Automated Technology for Verification and Analysis. pp. 251-268 (2017)
9. Christakou, C., Vrettos, S., Stafylopatis, A.: A hybrid movie recommender system based on neural networks. International Journal on Artificial Intelligence Tools 16(5), 771-792 (2007)
10. Clavière, A., et al.: Safety verification of neural network controlled systems. In: Proc. of the Int. Conf. on Dependable Systems and Networks. pp. 47-54 (2021)
11. David, O.E., Netanyahu, N.S., Wolf, L.: DeepChess: End-to-end deep neural network for automatic learning in chess. In: Proc. of the Int. Conf. on Artificial Neural Networks. pp. 88-96 (2016)
12. Dutta, S., et al.: Learning and verification of feedback control systems using feedforward neural networks. In: Proc. of the Int. Conf. on Analysis and Design of Hybrid Systems. pp. 151-156 (2018)
13. Dutta, S., et al.: Sherlock - A tool for verification of neural network feedback systems. In: Proc. of the Int. Conf. on Hybrid Systems: Computation and Control. pp. 262-263 (2019)
14. Dutta, S., Chen, X., Sankaranarayanan, S.: Reachability analysis for neural feedback systems using regressive polynomial rule inference. In: Proc. of the Int. Conf. on Hybrid Systems: Computation and Control. pp. 157-168 (2019)
15. Ehlers, R.: Formal verification of piece-wise linear feed-forward neural networks. In: Proc. of the Int. Symposium on Automated Technology for Verification and Analysis. pp. 269-286 (2017)
16. Fan, J., Huang, C., et al.: ReachNN*: A tool for reachability analysis of neural-network controlled systems. In: Proc. of the Int. Symposium on Automated Technology for Verification and Analysis. pp. 537-542 (2020)
17. Goubault, E., Putot, S.: RINO: Robust inner and outer approximated reachability of neural networks controlled systems. In: Proc. of the Int. Conf. on Computer Aided Verification. pp. 511-523 (2022)
18. Huang, C., et al.: ReachNN: Reachability analysis of neural-network controlled systems. Transactions on Embedded Computing Systems 18(5s) (2019)
19. Huang, C., et al.: POLAR: A polynomial arithmetic framework for verifying neural-network controlled systems. In: Proc. of the Int. Symposium on Automated Technology for Verification and Analysis. pp. 414-430 (2022)
20. Ivanov, R., et al.: Verisig: Verifying safety properties of hybrid systems with neural network controllers. In: Proc. of the Int. Conf. on Hybrid Systems: Computation and Control. pp. 169-178 (2019)
21. Ivanov, R., et al.: Verisig 2.0: Verification of neural network controllers using Taylor model preconditioning. In: Proc. of the Int. Conf. on Computer Aided Verification. pp. 249-262 (2021)
22. Katz, G., et al.: Reluplex: An efficient SMT solver for verifying deep neural networks. In: Proc. of the Int. Conf. on Computer Aided Verification. pp. 97-117 (2017)
23. Khan, S., et al.: Facial recognition using convolutional neural networks and implementation on smart glasses. In: Proc. of the Int. Conf. on Information Science and Communication Technology (2019), Article 19
24. Khedr, H., Ferlez, J., Shoukry, Y.: PEREGRiNN: Penalized-relaxation greedy neural network verifier. In: Proc. of the Int. Conf. on Computer Aided Verification. pp. 287-300 (2021)
25. Kochdumper, N., Althoff, M.: Sparse polynomial zonotopes: A novel set representation for reachability analysis. Transactions on Automatic Control 66(9), 4043-4058 (2021)
26. Makino, K., Berz, M.: Taylor models and other validated functional inclusion methods. Int. Journal of Pure and Applied Mathematics 4(4), 379-456 (2003)
27. Mukherjee, D., et al.: A survey of robot learning strategies for human-robot collaboration in industrial settings. Robotics and Computer-Integrated Manufacturing 73 (2022)
28. Müller, M.N., et al.: PRIMA: Precise and general neural network certification via multi-neuron convex relaxations. Proceedings on Programming Languages 1 (2022), Article 43
29. Müller, M.N., et al.: The third international verification of neural networks competition (VNN-COMP 2022): Summary and results. arXiv:2212.10376 (2022)
30. Pulina, L., Tacchella, A.: Challenging SMT solvers to verify neural networks. AI Communications 25(2), 117-135 (2012)
31. Raghunathan, A., Steinhardt, J., Liang, P.: Semidefinite relaxations for certifying robustness to adversarial examples. In: Proc. of the Int. Conf. on Neural Information Processing Systems. pp. 10900-10910 (2018)
32. Riedmiller, M., Montemerlo, M., Dahlkamp, H.: Learning to drive a real car in 20 minutes. In: Proc. of the Int. Conf. on Frontiers in the Convergence of Bioscience and Information Technologies. pp. 645-650 (2007)
33. Schilling, C., Forets, M., Guadalupe, S.: Verification of neural-network control systems by integrating Taylor models and zonotopes. In: Proc. of the AAAI Conf. on Artificial Intelligence. pp. 8169-8177 (2022)
34. Singh, G., et al.: Fast and effective robustness certification. In: Proc. of the Int. Conf. on Advances in Neural Information Processing Systems (2018)
35. Singh, G., et al.: An abstract domain for certifying neural networks. Proceedings on Programming Languages 3 (2019), Article 41
36. Singh, G., et al.: Beyond the single neuron convex barrier for neural network certification. In: Proc. of the Int. Conf. on Advances in Neural Information Processing Systems (2019)
37. Tjeng, V., Xiao, K.Y., Tedrake, R.: Evaluating robustness of neural networks with mixed integer programming. In: Proc. of the Int. Conf. on Learning Representations (2019)
38. Tran, H.D., et al.: Parallelizable reachability analysis algorithms for feed-forward neural networks. In: Proc. of the Int. Conf. on Formal Methods in Software Engineering. pp. 51-60 (2019)
39. Tran, H.D., et al.: Safety verification of cyber-physical systems with reinforcement learning control. Transactions on Embedded Computing Systems 18(5s) (2019), Article 105
40. Tran, H.D., et al.: Star-based reachability analysis of deep neural networks. In: Proc. of the Int. Symposium on Formal Methods. pp. 670-686 (2019)
41. Tran, H.D., et al.: NNV: The neural network verification tool for deep neural networks and learning-enabled cyber-physical systems. In: Proc. of the Int. Conf. on Computer Aided Verification. pp. 3-17 (2020)
42. Vincent, J.A., Schwager, M.: Reachable Polyhedral Marching (RPM): A safety verification algorithm for robotic systems with deep neural network components. In: Proc. of the Int. Conf. on Robotics and Automation. pp. 9029-9035 (2021)
43. Wang, S., et al.: Formal security analysis of neural networks using symbolic intervals. In: Proc. of the USENIX Security Symposium. pp. 1599-1614 (2018)
44. Wang, S., et al.: Beta-CROWN: Efficient bound propagation with per-neuron split constraints for neural network robustness verification. In: Proc. of the Int. Conf. on Neural Information Processing Systems (2021)
45. Weng, L., et al.: Towards fast computation of certified robustness for ReLU networks. In: Proc. of the Int. Conf. on Machine Learning. pp. 5276-5285 (2018)
46. Xiang, W., et al.: Reachable set estimation and safety verification for piecewise linear systems with neural network controllers. In: Proc. of the American Control Conf. pp. 1574-1579 (2018)
47. Xu, K., et al.: Fast and complete: Enabling complete neural network verification with rapid and massively parallel incomplete verifiers. In: Proc. of the Int. Conf. on Learning Representations (2021)
48. Yang, X., et al.: Reachability analysis of deep ReLU neural networks using facet-vertex incidence. In: Proc. of the Int. Conf. on Hybrid Systems: Computation and Control (2021), Article 18
49. Zhang, H., et al.: Efficient neural network robustness certification with general activation functions. In: Proc. of the Int. Conf. on Neural Information Processing Systems. pp. 4944-4953 (2018)
| [
"https://github.com/stanleybak/vnncomp2021_results",
"https://github.com/ChristopherBrix/vnncomp2022_results."
] |
[
"ALMOST MINIMIZERS FOR A SUBLINEAR SYSTEM WITH FREE BOUNDARY",
"ALMOST MINIMIZERS FOR A SUBLINEAR SYSTEM WITH FREE BOUNDARY"
] | [
"Daniela De Silva ",
"Seongmin Jeon ",
"Henrik Shahgholian "
] | [] | [] | We study vector-valued almost minimizers of the energy functionalFor Hölder continuous coefficients λ ± (x) > 0, we take the epiperimetric inequality approach and prove the regularity for both almost minimizers and the set of "regular" free boundary points. | 10.1007/s00526-023-02501-x | [
"https://arxiv.org/pdf/2207.06217v1.pdf"
] | 250,493,166 | 2207.06217 | e35b8093a6e9ff61e81a0755d31804fb24ecaeea |
ALMOST MINIMIZERS FOR A SUBLINEAR SYSTEM WITH FREE BOUNDARY
13 Jul 2022
Daniela De Silva
Seongmin Jeon
Henrik Shahgholian
ALMOST MINIMIZERS FOR A SUBLINEAR SYSTEM WITH FREE BOUNDARY
13 Jul 2022
We study vector-valued almost minimizers of the energy functional $\int_D \big( |\nabla u|^2 + 2F(x, u) \big)\, dx$. For Hölder continuous coefficients λ ± (x) > 0, we take the epiperimetric inequality approach and prove the regularity for both almost minimizers and the set of "regular" free boundary points.
Consider the problem of minimizing the energy functional

$$J(v; D) = \int_D \big( |\nabla v|^2 + 2F(x, v) \big)\, dx, \qquad F(x, v) = \frac{1}{q+1} \big( \lambda_+(x) |v^+|^{q+1} + \lambda_-(x) |v^-|^{q+1} \big), \quad 0 < q < 1, \tag{1.1}$$

with Hölder continuous coefficients $\lambda_\pm(x) \geq \lambda_0 > 0$, among all functions $v \in W^{1,2}(D; \mathbb{R}^m)$ with $v = g$ on $\partial D$. It is well known that there exists a unique minimizer $u$ and it solves the sublinear system

$$\Delta u = \lambda_+(x) |u^+|^{q-1} u^+ - \lambda_-(x) |u^-|^{q-1} u^-.$$
The regularity of both the solution u and its free boundary Γ(u) := ∂{x : |u(x)| > 0} was studied in [4] or in the scalar case (when m = 1) in [3].
1.2. Almost minimizers. In this paper we consider almost minimizers of the functional (1.1).
To introduce the definition of almost minimizers, we let ω : (0, r 0 ) −→ [0, ∞), r 0 > 0, be a gauge function, which is a nondecreasing function with ω(0+) = 0.
Definition 1 (Almost minimizers). Let $0 < r_0 < 1$ be a constant and $\omega(r)$ be a gauge function. We say that a function $u \in W^{1,2}(B_1; \mathbb{R}^m)$ is an almost minimizer for the functional $\int \big( |\nabla u|^2 + 2F(x, u) \big)\, dx$ in a domain $D$, with gauge function $\omega(r)$, if for any ball $B_r(x_0) \Subset D$ with $0 < r < r_0$, we have

$$\int_{B_r(x_0)} \big( |\nabla u|^2 + 2F(x, u) \big)\, dx \leq (1 + \omega(r)) \int_{B_r(x_0)} \big( |\nabla v|^2 + 2F(x, v) \big)\, dx, \tag{1.2}$$

for any competitor function $v \in u + W^{1,2}_0(B_r(x_0); \mathbb{R}^m)$. In fact, we can observe that for $x, x_0 \in D$,

$$1 - C|x - x_0|^\alpha \leq \frac{\lambda_\pm(x)}{\lambda_\pm(x_0)} \leq 1 + C|x - x_0|^\alpha, \tag{1.3}$$

with a constant $C$ depending only on $\lambda_0$ and $\|\lambda_\pm\|_{C^{0,\alpha}(D)}$. Using this, we can rewrite (1.2) in the form with frozen coefficients

$$\int_{B_r(x_0)} \big( |\nabla u|^2 + 2F(x_0, u) \big)\, dx \leq (1 + \tilde{\omega}(r)) \int_{B_r(x_0)} \big( |\nabla v|^2 + 2F(x_0, v) \big)\, dx, \tag{1.4}$$

where

$$F(x_0, u) = \frac{1}{q+1} \big( \lambda_+(x_0) |u^+|^{q+1} + \lambda_-(x_0) |u^-|^{q+1} \big), \qquad \tilde{\omega}(r) = C(\omega(r) + r^\alpha).$$
This implies that almost minimizers of (1.1) with Hölder coefficients λ ± are almost minimizers with frozen coefficients (1.4). An example of an almost minimizer can be found in Appendix A. Almost minimizers for the case q = 0 and λ ± = 1 were studied by the authors in [2], where the regularity of both the almost minimizers and the regular part of the free boundary has been proved.
In this paper we aim to extend the results in [4] from solutions to almost minimizers and those in [2] from the case q = 0 to 0 < q < 1.
Main results.
For the sake of simplicity, we assume that the gauge function ω(r) = r α for 0 < α < 2, D is the unit ball B 1 , and the constant r 0 = 1 in Definition 1.
In addition, to simplify tracking all constants, we take $M > 2$ such that

$$\|\lambda_\pm\|_{C^{0,\alpha}(B_1)} \leq M, \qquad \frac{1}{\lambda_0},\, \lambda_1 \leq M, \qquad \omega(r),\, \tilde{\omega}(r) \leq M r^\alpha. \tag{1.5}$$

Now, we state our main results.
Theorem 1 (Regularity of almost minimizers). Let $u \in W^{1,2}(B_1; \mathbb{R}^m)$ be an almost minimizer in $B_1$. Then $u \in C^{1,\alpha/2}(B_1)$. Moreover, for any $K \Subset B_1$,

$$\|u\|_{C^{1,\alpha/2}(K; \mathbb{R}^m)} \leq C(n, \alpha, M, K)\big( E(u,1)^{1/2} + 1 \big), \tag{1.6}$$

where $E(u,1) = \int_{B_1} \big( |\nabla u|^2 + |u|^{q+1} \big)$.

This regularity result is rather immediate in the case of minimizers (or solutions), since their $W^{2,p}$-regularity for any $p < \infty$ simply follows from elliptic theory with a bootstrapping argument. This is inapplicable to almost minimizers, as they do not satisfy a partial differential equation. Instead, we follow the approach in [2] by first deriving growth estimates for almost minimizers and then using the Morrey and Campanato space embedding theorems.
To investigate the free boundary, for $\kappa := \frac{2}{1-q} > 2$ we define a subset $\Gamma_\kappa(u)$ of the free boundary $\Gamma(u) = \partial\{|u| > 0\}$ as

$$\Gamma_\kappa(u) := \big\{ x_0 \in \Gamma(u) \,:\, u(x) = O(|x - x_0|^\xi) \text{ for some } \lfloor\kappa\rfloor < \xi < \kappa \big\}. \tag{1.7}$$

Here, the big $O$ is not necessarily uniform in $u$ and $x_0$, and $\lfloor s \rfloor$ is the greatest integer less than $s$, i.e., $s - 1 \leq \lfloor s \rfloor < s$.
Theorem 2 (Optimal growth estimate). Let $u$ be as in Theorem 1. Then there are constants $C > 0$ and $r_0 > 0$, depending only on $n$, $\alpha$, $M$, $\kappa$, $E(u,1)$, such that

$$\int_{B_r(x_0)} \big( |\nabla u|^2 + |u|^{1+q} \big) \leq C r^{n+2\kappa-2} \quad \text{for } x_0 \in \Gamma_\kappa(u) \cap B_{1/2} \text{ and } 0 < r < r_0.$$
The proof is inspired by the ones for minimizers in [4] and for the case q = 0 in [2]. However, in our case concerning almost minimizers with κ > 2, several new technical difficulties arise and the proof is much more complicated, as we have to improve the previous techniques by using approximation by harmonic polynomials and limiting argument.
One implication of Theorem 2 is the existence of κ-homogeneous blowups (Theorem 8). This allows to consider a subset of Γ κ (u), the so-called "regular" set, using a class of half-space solutions H x0 := {x → β x0 max(x · ν, 0) κ e : ν ∈ R n and e ∈ R m are unit vectors}, x 0 ∈ B 1 .
Definition 2 (Regular free boundary points). We say that a point x 0 ∈ Γ κ (u) is a regular free boundary point if at least one homogeneous blowup of u at x 0 belongs to H x0 . We denote by R u the set of all regular free boundary points in Γ(u) and call it the regular set.
The following is our central result concerning the regularity of the free boundary.
Theorem 3 (Regularity of the regular set). R u is a relatively open subset of the free boundary Γ(u) and locally a C 1,γ -manifold for some γ = γ(n, α, q, η) > 0, where η is the constant in Theorem 7.
The proof is based on the use of the epiperimetric inequality from [4] and follows the general approach in [6] and [2]: the combination of the monotonicity of the Weiss-type energy functional (Theorem 5) and the epiperimetric inequality, together with Theorem 2, establishes the geometric decay rate for the Weiss functional. This, in turn, provides us with the rate of convergence of proper rescalings to a blowup, ultimately implying the regularity of $R_u$.
1.4. Plan of the paper. The plan of the paper is as follows.
In Section 2 we study the regularity properties of almost minimizers. We prove their almost Lipschitz regularity (Theorem 4) and exploit it to infer the $C^{1,\alpha/2}$-regularity (Theorem 1).
In Section 3 we establish the Weiss-type monotonicity formula (Theorem 5), which will play a significant role in the analysis of the free boundary. Section 4 is dedicated to providing the proof of the optimal growth estimates in Theorem 2 above. Section 5 is devoted to proving the non-degeneracy result of almost minimizers, following the line of [2].
In Section 6 we discuss the homogeneous blowup of almost minimizers at free boundary points, including its existence and properties. In addition, we estimate a decay rate of the Weiss energy, with the help of the epiperimetric inequality.
In Section 7 we make use of the previous technical tools to establish the $C^{1,\gamma}$-regularity of the regular set (Theorem 3).
Finally, in Appendix A we provide an example of almost minimizers.
1.5. Notation. We introduce here some notations that are used frequently in this paper.
B r (x 0 ) means the open n-dimensional ball of radius r, centered at x 0 , with boundary ∂B r (x 0 ). B r := B r (0), ∂B r := ∂B r (0). For a given set, ν denotes the unit outward normal to the boundary.
∂ θ u := ∇u − (∇u · ν)ν is the surface derivative of u. For u = (u 1 , · · · , u m ), m ≥ 1, we denote u + = (u + 1 , · · · , u + m ), u − = (u − 1 , · · · , u − m ), where u ± i = max{0, ±u i }.
For a domain D, we indicate the integral mean value of u by
$$(u)_D := \fint_D u = \frac{1}{|D|} \int_D u.$$
In particular, when D = B r (x 0 ), we simply write u x0,r := u Br (x0) .
Γ(u) := ∂{|u| > 0} is the free boundary of u.
Γ κ (u) := {x 0 ∈ Γ(u) : u(x) = O(|x − x 0 | ξ ) for some ⌊κ⌋ < ξ < κ}.
⌊s⌋ is the greatest integer below s ∈ R, i.e., s − 1 ≤ ⌊s⌋ < s. For u ∈ W 1,2 (B r ; R m ) and 0 < q < 1, we set
$$E(u, r) := \int_{B_r} \big( |\nabla u|^2 + |u|^{q+1} \big).$$
For $\alpha$-Hölder continuous functions $\lambda_\pm : D \to \mathbb{R}$ satisfying $\lambda_0 \leq \lambda_\pm(x) \leq \lambda_1$ (as in Subsection 1.1), we denote

$$f(x_0, u) := \lambda_+(x_0) |u^+|^{q-1} u^+ - \lambda_-(x_0) |u^-|^{q-1} u^-,$$
$$F(x, u) := \frac{1}{1+q} \big( \lambda_+ |u^+|^{q+1} + \lambda_- |u^-|^{q+1} \big),$$
$$F(x_0, u) := \frac{1}{1+q} \big( \lambda_+(x_0) |u^+|^{q+1} + \lambda_-(x_0) |u^-|^{q+1} \big),$$
$$F(x_0, u, h) := \frac{1}{1+q} \Big( \lambda_+(x_0) \big( |u^+|^{q+1} - |h^+|^{q+1} \big) + \lambda_-(x_0) \big( |u^-|^{q+1} - |h^-|^{q+1} \big) \Big).$$
We fix constants (for x 0 ∈ B 1 )
$$\kappa := \frac{2}{1-q}, \qquad \beta_{x_0} = \lambda_+(x_0)^{\kappa/2} \big( \kappa(\kappa-1) \big)^{-\kappa/2}.$$
Throughout this paper, a universal constant may depend only on n, α, M , κ and E(u, 1). Below we consider only norms of vectorial functions to R m , but not those of scalar functions. Thus, for notational simplicity we drop R m for spaces of vectorial functions, e.g., C 1 (R n ) = C 1 (R n ; R m ), W 1,2 (B 1 ) = W 1,2 (B 1 ; R m ).
Regularity of almost minimizers
The main result of this section is the $C^{1,\alpha/2}$ estimates of almost minimizers (Theorem 1). The proof is based on the Morrey and Campanato space embeddings, similar to the case of almost minimizers with $q = 0$ and $\lambda_\pm = 1$, treated by the authors in [2]. We first prove the following concentric ball estimates.

Proposition 1. Let $u$ be an almost minimizer in $B_1$. Then, there are $r_0 = r_0(\alpha, M) \in (0,1)$ and $C_0 = C_0(n, M) > 1$ such that

$$\int_{B_\rho(x_0)} \big( |\nabla u|^2 + F(x_0, u) \big) \leq C_0 \Big( \Big( \frac{\rho}{r} \Big)^n + r^\alpha \Big) \int_{B_r(x_0)} \big( |\nabla u|^2 + F(x_0, u) \big) + C_0\, r^{n + \frac{2}{1-q}(q+1-\alpha q)} \tag{2.1}$$

for any $B_{r_0}(x_0) \Subset B_1$ and $0 < \rho < r < r_0$.
Proof. Without loss of generality, we may assume $x_0 = 0$. Let $h$ be a harmonic replacement of $u$ in $B_r$, i.e., $h$ is the vectorial harmonic function with $h = u$ on $\partial B_r$. Since $|h^\pm|^{q+1}$ and $|\nabla h|^2$ are subharmonic in $B_r$, we have the following sub-mean value properties:

$$\int_{B_\rho} F(0,h) \leq \Big( \frac{\rho}{r} \Big)^n \int_{B_r} F(0,h), \qquad \int_{B_\rho} |\nabla h|^2 \leq \Big( \frac{\rho}{r} \Big)^n \int_{B_r} |\nabla h|^2, \qquad 0 < \rho < r. \tag{2.2}$$
Moreover, notice that since $h$ is harmonic, $\int_{B_r} \nabla h \cdot \nabla(u - h) = 0$. Combining this with the almost minimizing property of $u$, we obtain that for $0 < r < r_0(\alpha, M)$,

$$\int_{B_r} |\nabla(u-h)|^2 = \int_{B_r} \big( |\nabla u|^2 - |\nabla h|^2 \big)$$
$$\leq \int_{B_r} \big( M r^\alpha |\nabla h|^2 + 2(1 + M r^\alpha) F(0,h) - 2F(0,u) \big)$$
$$= \int_{B_r} \big( M r^\alpha |\nabla h|^2 + 2(1 + 2M r^\alpha)\big( F(0,h) - F(0,u) \big) + 2M r^\alpha \big( 2F(0,u) - F(0,h) \big) \big)$$
$$\leq \int_{B_r} \big( M r^\alpha |\nabla u|^2 + 3F(0,u,h) + 2M r^\alpha \big( 2F(0,u) - F(0,h) \big) \big), \tag{2.3}$$

where in the last line we have used that $F(0,h) - F(0,u) \leq F(0,u,h)$.
We also note that by the Poincaré inequality there is $C_1 = C_1(n) > 0$ such that

$$r^{-2} \int_{B_r} |u - h|^2 \leq C_1 \int_{B_r} |\nabla(u-h)|^2.$$

Then, for $\varepsilon_1 = \frac{1}{16 C_1 M}$,

$$\int_{B_r} \frac{1}{1+q} \big| |u^+|^{q+1} - |h^+|^{q+1} \big| \leq \int_{B_r} \big( |u^+|^q + |h^+|^q \big) |u^+ - h^+|$$
$$\leq \int_{B_r} \Big( \big( \tfrac{1}{4} r^{\frac{\alpha q}{q+1}} |u^+|^q \big)^{\frac{q+1}{q}} + \big( \tfrac{1}{4} r^{\frac{\alpha q}{q+1}} |h^+|^q \big)^{\frac{q+1}{q}} + C \big( r^{-\frac{\alpha q}{q+1}} |u^+ - h^+| \big)^{q+1} \Big)$$
$$= \int_{B_r} \Big( \tfrac{r^\alpha}{4} \big( |u^+|^{q+1} + |h^+|^{q+1} \big) + C r^{-\alpha q} |u - h|^{q+1} \Big)$$
$$\leq \int_{B_r} \Big( \tfrac{r^\alpha}{4} \big( |u^+|^{q+1} + |h^+|^{q+1} \big) + \big( \varepsilon_1 r^{-(q+1)} |u - h|^{q+1} \big)^{\frac{2}{q+1}} \Big) + C r^{n + \frac{2}{1-q}(q+1-\alpha q)}$$
$$\leq \int_{B_r} \Big( \tfrac{r^\alpha}{4} \big( |u^+|^{q+1} + |h^+|^{q+1} \big) + \varepsilon_1 r^{-2} |u - h|^2 \Big) + C r^{n + \frac{2}{1-q}(q+1-\alpha q)}$$
$$\leq \int_{B_r} \Big( \tfrac{r^\alpha}{4} \big( |u^+|^{q+1} + |h^+|^{q+1} \big) + \varepsilon_1 C_1 |\nabla(u-h)|^2 \Big) + C r^{n + \frac{2}{1-q}(q+1-\alpha q)},$$

where in the second inequality we applied Young's inequality.
where in the second inequality we applied Young's inequality. Similarly, we can get
Br 1 1 + q ||u − | q+1 − |h − | q+1 | ≤ˆB r r α /4(|u − | q+1 + |h − | q+1 ) + ε 1 C 1 |∇(u − h)| 2 + Cr n+ 2 1−q (q+1−αq) ,
and it follows that
Br F (0, u, h) =ˆB r λ + (0) 1 + q |u + | q+1 − |h + | q+1 + λ − (0) 1 + q |u − | q+1 − |h − | q+1 ≤ˆB r λ + (0)r α /4 |u + | q+1 + |h + | q+1 + λ − (0)r α /4 |u − | q+1 + |h − | q+1 + 2ε 1 C 1 M |∇(u − h)| 2 + Cr n+ 2 1−q (q+1−αq) =ˆB r (1 + q)r α /4(F (0, u) + F (0, h)) + 2ε 1 C 1 M |∇(u − h)| 2 + Cr n+ 2 1−q (q+1−αq) . (2.4) From (2.3) and (2.4), Br |∇(u − h)| 2 + 4F (0, u, h) ≤ˆB r M r α |∇u| 2 + 3F (0, u, h) + Cr α F (0, u) + (1 + q − 2M ) r α F (0, h) + 8ε 1 C 1 M |∇(u − h)| 2 + Cr n+ 2 1−q (q+1−αq) ≤ˆB r Cr α |∇u| 2 + F (0, u) + 3F (0, u, h) + 1 2 |∇(u − h)| 2 + Cr n+ 2 1−q (q+1−αq) , which giveŝ Br 1 2 |∇(u − h)| 2 + F (0, u, h) ≤ Cr αˆB r |∇u| 2 + F (0, u) + Cr n+ 2 1−q (q+1−αq) . ThusˆB r |∇(u − h)| 2 + F (0, u, h) ≤ 2ˆB r 1 2 |∇(u − h)| 2 + F (0, u, h) ≤ Cr αˆB r |∇u| 2 + F (0, u) + Cr n+ 2 1−q (q+1−αq) . (2.5)
Now, by combining (2.2) and (2.5), we obtain that for $0 < \rho < r < r_0$,

$$\int_{B_\rho} \big( |\nabla u|^2 + F(0,u) \big) \leq 2 \int_{B_\rho} \big( |\nabla h|^2 + F(0,h) \big) + 2 \int_{B_\rho} \big( |\nabla(u-h)|^2 + F(0,u,h) \big)$$
$$\leq 2 \Big( \frac{\rho}{r} \Big)^n \int_{B_r} \big( |\nabla h|^2 + F(0,h) \big) + 2 \int_{B_\rho} \big( |\nabla(u-h)|^2 + F(0,u,h) \big)$$
$$\leq 4 \Big( \frac{\rho}{r} \Big)^n \int_{B_r} \big( |\nabla u|^2 + F(0,u) \big) + 6 \int_{B_r} \big( |\nabla(u-h)|^2 + F(0,u,h) \big)$$
$$\leq C \Big( \Big( \frac{\rho}{r} \Big)^n + r^\alpha \Big) \int_{B_r} \big( |\nabla u|^2 + F(0,u) \big) + C r^{n + \frac{2}{1-q}(q+1-\alpha q)}.$$
From here, we deduce the almost Lipschitz regularity of almost minimizers with the help of the following lemma, whose proof can be found in [5].

Lemma 1. Let $r_0 > 0$ be a positive number and let $\varphi : (0, r_0) \to (0, \infty)$ be a nondecreasing function. Let $a$, $\beta$, and $\gamma$ be such that $a > 0$, $\gamma > \beta > 0$. There exist two positive numbers $\varepsilon = \varepsilon(a, \gamma, \beta)$, $c = c(a, \gamma, \beta)$ such that, if

$$\varphi(\rho) \leq a \Big( \Big( \frac{\rho}{r} \Big)^\gamma + \varepsilon \Big) \varphi(r) + b r^\beta$$

for all $\rho, r$ with $0 < \rho \leq r < r_0$, where $b \geq 0$, then one also has, still for $0 < \rho < r < r_0$,

$$\varphi(\rho) \leq c \Big( \Big( \frac{\rho}{r} \Big)^\beta \varphi(r) + b \rho^\beta \Big).$$
Theorem 4. Let $u$ be an almost minimizer in $B_1$. Then $u \in C^{0,\sigma}(B_1)$ for all $0 < \sigma < 1$. Moreover, for any $K \Subset B_1$,

$$\|u\|_{C^{0,\sigma}(K)} \leq C \big( E(u,1)^{1/2} + 1 \big) \tag{2.6}$$

with $C = C(n, \alpha, M, \sigma, K)$.
Proof. For given $K \Subset B_1$ and $x_0 \in K$, take $\delta = \delta(n, \alpha, M, \sigma, K) > 0$ such that $\delta < \min\{r_0, \mathrm{dist}(K, \partial B_1)\}$ and $\delta^\alpha \leq \varepsilon(C_0, n, n+2\sigma-2)$, where $r_0 = r_0(\alpha, M)$ and $C_0 = C_0(n, M)$ are as in Proposition 1 and $\varepsilon = \varepsilon(C_0, n, n+2\sigma-2)$ is as in Lemma 1. Then, by (2.1), for $0 < \rho < r < \delta$,

$$\int_{B_\rho(x_0)} \big( |\nabla u|^2 + F(x_0,u) \big) \leq C_0 \Big( \Big( \frac{\rho}{r} \Big)^n + \varepsilon \Big) \int_{B_r(x_0)} \big( |\nabla u|^2 + F(x_0,u) \big) + C_0 r^{n+2\sigma-2}.$$
By applying Lemma 1, we obtain

$$\int_{B_\rho(x_0)} \big( |\nabla u|^2 + F(x_0,u) \big) \leq C \Big( \Big( \frac{\rho}{r} \Big)^{n+2\sigma-2} \int_{B_r(x_0)} \big( |\nabla u|^2 + F(x_0,u) \big) + \rho^{n+2\sigma-2} \Big).$$

Taking $r \nearrow \delta$, we get

$$\int_{B_\rho(x_0)} \big( |\nabla u|^2 + F(x_0,u) \big) \leq C(n, \alpha, M, \sigma, K) \big( E(u,1) + 1 \big) \rho^{n+2\sigma-2} \tag{2.7}$$

for $0 < \rho < \delta$. In particular, we have

$$\int_{B_\rho(x_0)} |\nabla u|^2 \leq C(n, \alpha, M, \sigma, K) \big( E(u,1) + 1 \big) \rho^{n+2\sigma-2},$$

and by the Morrey space embedding we conclude $u \in C^{0,\sigma}(K)$ with $\|u\|_{C^{0,\sigma}(K)} \leq C(n, \alpha, M, \sigma, K) \big( E(u,1)^{1/2} + 1 \big)$.
We now prove the $C^{1,\alpha/2}$-regularity of almost minimizers by using their almost Lipschitz estimates above.

Proof of Theorem 1. For $K \Subset B_1$, fix a small $r_0 = r_0(n, \alpha, M, K) > 0$ to be chosen later. In particular, we ask $r_0 < \mathrm{dist}(K, \partial B_1)$. For $x_0 \in K$ and $0 < r < r_0$, let $h \in W^{1,2}(B_r(x_0))$ be a harmonic replacement of $u$ in $B_r(x_0)$. Then, by (2.5) and (2.7) with $\sigma = 1 - \alpha/4 \in (0,1)$,

$$\int_{B_r(x_0)} |\nabla(u-h)|^2 \leq C r^\alpha \int_{B_r(x_0)} \big( |\nabla u|^2 + F(x_0,u) \big) + C r^{n + \frac{2}{1-q}(q+1-\alpha q)}$$
$$\leq C \big( E(u,1) + 1 \big) r^{n+\alpha/2} + C r^{n+2} \leq C(n, \alpha, M, K) \big( E(u,1) + 1 \big) r^{n+\alpha/2} \tag{2.8}$$

for $0 < r < r_0$. Note that since $h$ is harmonic in $B_r(x_0)$, for $0 < \rho < r$
Bρ(x0) |∇h − ∇h x0,ρ | 2 ≤ ρ r n+2ˆB r (x0) |∇h − h x0,r | 2 .
Moreover, by Jensen's inequality,
Bρ(x0) |∇u − ∇u x0,ρ | 2 ≤ 3ˆB ρ (x0) |∇h − ∇h x0,ρ | 2 + |∇(u − h)| 2 + | ∇(u − h) x0,ρ | 2 ≤ 3ˆB ρ (x0) |∇h − ∇h x0,ρ | 2 + 6ˆB ρ (x0) |∇(u − h)| 2 ,
and similarly,
Br (x0) |∇h − ∇h x0,r | 2 ≤ 3ˆB r (x0) |∇u − ∇u x0,r | 2 + 6ˆB r (x0) |∇(u − h)| 2 .
Now, we use the inequalities above to obtain
Bρ(x0) |∇u − ∇u x0,ρ | 2 ≤ 3ˆB ρ (x0) |∇h − ∇h x0,ρ | 2 + 6ˆB ρ(x0) |∇(u − h)| 2 ≤ 3 ρ r n+2ˆB r (x0) |∇h − ∇h x0,r | 2 + 6ˆB ρ (x0) |∇(u − h)| 2 ≤ 9 ρ r n+2ˆB r (x0) |∇u − ∇u x0,r | 2 + 24ˆB r (x0) |∇(u − h)| 2 ≤ 9 ρ r n+2ˆB r (x0)
|∇u − ∇u x0,r | 2 + C (E(u, 1) + 1) r n+α/2 .
Next, we apply Lemma 1 to get
Bρ(x0) |∇u − ∇u x0,ρ | 2 ≤ C ρ r n+α/2ˆB r (x0) |∇u − ∇u x0,r | 2 + C (E(u, 1) + 1) ρ n+α/2
for 0 < ρ < r < r 0 . Taking r ր r 0 , we havê
Bρ(x0) |∇u − ∇u x0,ρ | 2 ≤ C (E(u, 1) + 1) ρ n+α/2 .
By Campanato space embedding, we obtain ∇u ∈ C 0,α/4 (K) with
∇u C 0,α/4 (K) ≤ C(E(u, 1) 1/2 + 1).
In particular, we have ∇u L ∞ (K) ≤ C(E(u, 1) 1/2 + 1)
for any K ⋐ B 1 . With this estimate and (2.6), we can improve (2.8):
Br (x0) |∇(u − h)| 2 ≤ Cr αˆB r (x0) |∇u| 2 + F (x 0 , u) + Cr n+2
≤ C (E(u, 1) + 1) r n+α + Cr n+2 ≤ C (E(u, 1) + 1) r n+α , and by repeating the process above we conclude that ∇u ∈ C 1,α/2 (K) with ∇u C 0,α/2 (K) ≤ C(n, α, M, K)(E(u, 1) 1/2 + 1).
Weiss-type monotonicity formula
In the rest of the paper we study the free boundary of almost minimizers. This section is devoted to proving Weiss-type monotonicity formula, which is one of the most important tools in our study of the free boundary. This result is obtained from comparison with κ-homegeneous replacements, following the idea for the one in the case q = 0 in [2].
Theorem 5 (Weiss-type monotonicity formula). Let u be an almost minimizer in B 1 . For κ = 2 1−q > 2 and x 0 ,
x 1 ∈ B 1/2 , set W (u, x 0 , x 1 , t) := e at α t n+2κ−2 ˆB t(x0) |∇u| 2 + 2F (x 1 , u) − κ(1 − bt α ) tˆ∂ Bt(x0) |u| 2 , with a = M (n + 2κ − 2) α , b = M (n + 2κ) α .
Then, for 0 < t < t 0 (n, α, κ, M ),
d dt W (u, x 0 , x 1 , t) ≥ e at α t n+2κ−2ˆ∂ Bt(x0) ∂ ν u − κ(1 − bt α ) t u 2 .
In particular, W (u, x 0 , x 1 , t) is nondecreasing in t for 0 < t < t 0 .
Proof. We follow the argument in Theorem 5.1 in [6]. Without loss of generality, we may assume x 0 = 0. Then, for 0 < t < 1/2, define the κ-homogeneous replacement
of u in B t w(x) := |x| t κ u t x |x| , x ∈ B t .
Note that w is homogeneous of degree κ in B t and coincides with u on ∂B t , that is a valid competitor for u in B t . We computê
Bt |∇w| 2 =ˆB t |x| t 2κ−2 κ t u t x |x| x |x| + ∇u t x |x| − ∇u t x |x| · x |x| x |x| 2 =ˆt 0ˆ∂Br r t 2κ−2 κ t u t x r ν − ∇u t x r ν ν + ∇u t x r 2 dS x dr =ˆt 0ˆ∂Bt r t n+2κ−3 κ t uν − (∂ ν u)ν + ∇u 2 dS x dr = t n + 2κ − 2ˆ∂ Bt ∇u − (∂ ν u)ν + κ t uν 2 dS x = t n + 2κ − 2ˆ∂ Bt |∇u| 2 − |∂ ν u| 2 + κ t 2 |u| 2 .
Moreover, from
Bt |w ± | q+1 =ˆt 0ˆ∂Br r t 2κ−2 u ± t r x q+1 dS x dr =ˆt 0ˆ∂Bt r t n+2κ−3 |u ± | q+1 dS x dr = t n + 2κ − 2ˆ∂ Bt |u ± | q+1 , we also haveˆB t F (x 1 , w) = t n + 2κ − 2ˆ∂ Bt F (x 1 , u).
Combining those computations with the almost minimizing property of u, we get
(1 − M t α )ˆB t |∇u| 2 + 2F (x 1 , u) ≤ 1 1 + M t αˆB t |∇u| 2 + 2F (x 1 , u) ≤ˆB t |∇w| 2 + 2F (x 1 , w) = t n + 2κ − 2ˆ∂ Bt |∇u| 2 − |∂ ν u| 2 + κ t 2 |u| 2 + 2F (x 1 , u) .
This gives
d dt e at α t −n−2κ+2 ˆB t |∇u| 2 + 2F (x 1 , u) = −(n + 2κ − 2)e at α t −n−2κ+1 (1 − M t α )ˆB t |∇u| 2 + 2F (x 1 , u) ≥ −e at α t −n−2κ+2ˆ∂ Bt |∇u| 2 − |∂ ν u| 2 + κ t 2 |u| 2 + 2F (x 1 , u) . (3.1)
Note that we can write
W (u, 0, x 1 , t) = e at α t −n−2κ+2ˆB t |∇u| 2 + 2F (x 1 , u) − ψ(t)ˆ∂ Bt |u| 2 ,
where
ψ(t) = κe at α (1 − bt α ) t n+2κ−1 .
Then, using (3.1), we obtain
d dt W (u, 0, x 1 , t) = d dt e at α t −n−2κ+2 ˆB t |∇u| 2 + 2F (x 1 , u) + e at α t −n−2κ+2ˆ∂ Bt |∇u| 2 + 2F (x 1 , u) − ψ ′ (t)ˆ∂ Bt |u| 2 − ψ(t)ˆ∂ Bt 2u∂ ν u + n − 1 t |u| 2 ≥ e at α t −n−2κ+2ˆ∂ Bt |∂ ν u| 2 − 2ψ(t)ˆ∂ Bt u∂ ν u − κ 2 e at α t −n−2κ + ψ ′ (t) + (n − 1) ψ(t) t ˆ∂ Bt |u| 2 .
To simplify the last term, we observe that ψ(t) satisfies the inequality
− e at α t n+2κ−2 κ 2 e at α t −n−2κ + ψ ′ (t) + (n − 1) ψ(t) t − ψ(t) 2 ≥ 0
for 0 < t < t 0 (n, α, κ, M ). Indeed, by a direct computation, we can see that the inequality above is equivalent to
2α 2 − M (n + 2κ) [(n + 2κ)(κ − α) + 2α] t α ≥ 0,
which holds for 0 < t < t 0 (n, α, κ, M ). Therefore, we conclude that
d dt W (u, 0, x 1 , t) ≥ e at α t −n−2κ+2ˆ∂ Bt |∂ ν u| 2 − 2ψ(t)ˆ∂ Bt u∂ ν u + e −at α t n+2κ−2 ψ(t) 2ˆ∂ Bt |u| 2 = e at α t −n−2κ+2ˆ∂ Bt ∂ ν u − e −at α t n+2κ−2 ψ(t)u 2 = e at α t n+2κ−2ˆ∂ Bt ∂ ν u − κ(1 − bt α ) t u 2 .
Growth estimates
In this section we prove the optimal growth of almost minimizers at the free boundary (Theorem 2).
We will divide our proof into two cases:
either κ ∈ N or κ ∈ N.
The proof for the first case κ ∈ N is given in Lemma 3, and the one for the second case κ ∈ N can be found in Lemma 5 and Remark 1. We start the proof with an auxiliary result on a more general class of almost minimizers.
Lemma 2. For 0 < a 0 ≤ 1, 0 < b 0 ≤ 1 and z 0 ∈ B 1/2 , we define G(z, u) := a 0 F (b 0 z + z 0 , u) and let u be an almost minimizer in B 1 of functionalŝ Br (z) |∇u| 2 + 2G(z, u) dx, B r (z) ⋐ B 1 , (4.1) with a gauge function ω(r) = M r α . If u(x) = O(|x − x 0 | µ ) for some 1 ≤ µ ≤ κ, µ ∈ N, and x 0 ∈ B 1/2 , then |u(x)| ≤ C µ |x − x 0 | µ , |∇u(x)| ≤ C µ |x − x 0 | µ−1 , |x − x 0 | ≤ r µ (4.2)
with constants C µ and r µ depending only on E(u, 1), n, α, M , µ. As before, the O(·) notation does not necessarily mean the uniform estimate.
Proof. We can write G(z, u) = 1 1+q λ + (z)|u + | q+1 +λ − (z)|u − | q+1 forλ ± (z) = a 0 λ ± (b 0 z + z 0 )
, which means that u is an almost minimizer of the energy functional (1.1) with variable coefficientsλ ± . In the previous sections we have proved that almost minimizers with (1.5) satisfies the C 1,α/2 -estimate (1.6). u also satisfies (1.5) but 1/λ 0 ≤ M for the lower boundλ 0 of λ ± , since a 0 < 1. One can check, however, that in the proofs towards (1.6) the bound 1/λ 0 ≤ M in (1.5) is used only to get the estimate for λ±(x) λ±(x0) in (1.3) (when rewriting the almost minimizing property with variable coefficients (1.2) to frozen coefficients (1.4)). Due to cancellatioñ
λ±(x) λ±(x0) = λ±(b0x+z0) λ±(b0x0+z0) satisfies (1.
3), thus we can apply Theorem 1 to u to obtain the uniform estimate
u C 1,α/2 (B 1/2 ) ≤ C(n, α, M ) E(u, 1) 1/2 + 1 .
In view of this estimate, the statement of Lemma 2 holds for µ = 1. Now we assume that the statement holds for 1 ≤ µ < κ and prove that it holds for µ + δ ≤ κ with δ < α ′ /2, α ′ = α ′ (n, q) ≤ α small enough, and µ + δ ∈ N. This will readily imply Lemma 2 by bootstrapping.
First, we claim that (4.2) implies that there exist constants C 0 > 0 and r 0 > 0, depending only on E(u, 1), n, α, M , µ, δ, such that for any r ≤ r 0 there is a harmonic polynomial p r of degree s := ⌊µ + δ⌋ ∈ (µ + δ − 1, µ + δ)
satisfying Br |∇(u − p r )| 2 ≤ C 0 r 2(µ+δ−1) . (4.3)
We will prove (4.3) later, and at this moment assume that it is true. Then, by Poincaré inequality (up to possibly modifying p r by a constant and choosing C 0 larger),
Br |u − p r | 2 ≤ C 0 r 2(µ+δ) .
By a standard limiting argument, using that s < µ+ δ, we obtain that for a limiting polynomial p, and for all r ≤ r 0 ,
Br |u − p| 2 ≤ Cr 2(µ+δ) , Br |∇(u − p)| 2 ≤ Cr 2(µ+δ−1) .
From these estimates, under the assumption u(x) = O(|x| µ+δ ) we deduce p ≡ 0, and obtain that for all r ≤ r 0
Br |u| 2 ≤ Cr 2(µ+δ) , Br |∇u| 2 ≤ Cr 2(µ+δ−1) . (4.4)
On the other hand, using µ + δ ≤ κ = 2 1−q , one can easily see that the rescalings v(x) := u(rx) r µ+δ , 0 < r ≤ r 0 , are almost minimizers of the functional (4.1) with G(z, v) = r 2−(1−q)(µ+δ) F (rz, v) and a gauge function ω r (ρ) = M (rρ) α . This, together with (4.4), implies that the C 1,α/2 -estimates of v are uniformly bounded, independent of r. This readily gives the desired estimates (4.2) for µ + δ
|u| ≤ C µ+δ |x| µ+δ , |∇u| ≤ C µ+δ |x| µ+δ−1 .
We are now left with the proof of (4.3). To this aim, let h be the harmonic replacement of u in B r . Note that h minimizes the Dirichlet integral and attains its maximum on ∂B r . Combining this with the almost-minimality of u and (4.2) yields
Br |∇(u − h)| 2 = Br |∇u| 2 − Br |∇h| 2 ≤ M r α Br |∇h| 2 + 2(1 + M r α ) Br G(0, h) − 2 Br G(0, u) ≤ C 2 µ r α+2(µ−1) + 2C µ r (q+1)µ ≤ 3C 2 µ r α ′ +2(µ−1) . (4.5)
Here, the last inequality holds for α ′ ≤ α small enough since 2(µ − 1) < (q + 1)µ. Now, in order to prove (4.3), as in standard Campanato Type estimates, it suffices to show that if (4.3) holds for r, then for a fixed constant ρ small enough, ∃ p rρ harmonic polynomial of degree s = ⌊µ + δ⌋ such that
(4.6) Bρr |∇(u − p rρ )| 2 ≤ C 0 (ρr) 2(µ+δ−1) .
Indeed, since h − p r is harmonic, there exists a harmonic polynomialp ρ of degree s such that
Bρr |∇(h − p r −p ρ )| 2 ≤ Cρ 2s Br |∇(h − p r )| 2 (4.7) ≤ Cρ 2s Br |∇(u − p r )| 2 ≤ CC 0 ρ 2s r 2(µ+δ−1) ≤ C 0 4 (ρr) 2(µ+δ−1)
as long as ρ is small enough, given that s > µ + δ − 1. To justify the first inequality, notice that if w is harmonic in B 1 and q is the tangent polynomial to w at 0 of degree s − 1 then
Bρ |w − q| 2 ≤ w − q 2 L ∞ (Bρ) ≤ Cρ 2s w 2 L ∞ (B 3/4 ) ≤ Cρ 2s B1 w 2 .
Thus, we are applying this inequality to w = ∂ i (h − p r ) and q = ∂ ip ρ , 1 ≤ i ≤ n, withp ρ the tangent polynomial to h − p r at 0 of degree s. The second inequality in (4.7) follows from the fact that h is the harmonic replacement of u in B r .
From (4.5) for this specific ρ for which (4.7) holds, we obtain that
Bρr |∇(u − h)| 2 ≤ ρ −n Br |∇(u − h)| 2 ≤ ρ −n−α ′ −2(µ−1) 3C 2 µ (ρr) α ′ +2(µ−1) .
Combining this inequality with (4.7), since δ < α ′ /2, we obtain the desired claim with p rρ = p r +p ρ , as long as
C 0 ≥ 12C 2 µ ρ −n−α ′ −2(µ−1) .
Now we prove the optimal growth at free boundary points (Theorem 2) when κ ∈ N.
Lemma 3. Let u ∈ W 1,2 (B 1 ) be an almost minimizer in B 1 and κ ∈ N. Then, there exist C > 0 and r 0 > 0, depending on n, α, M, κ, E(u, 1), such that
sup Br(x0) |u| r κ + |∇u| r κ−1 ≤ C,
for any x 0 ∈ Γ κ (u) ∩ B 1/2 and 0 < r < r 0 .
Proof. We first prove the weaker version of Lemma 3 by allowing the constants C and r 0 to depend on the points x 0 ∈ Γ κ (u) ∩ B 1/2 as well. That is, for each
x 0 ∈ Γ κ (u) ∩ B 1/2 , sup Br (x0) |u| r κ + |∇u| r κ−1 ≤ C x0 , 0 < r < r x0 , (4.8)
where C x0 and r x0 depend on n, α, M , κ, E(u, 1) and x 0 .
To show this weaker estimate (4.8), we assume to the contrary there is a point
x 0 ∈ Γ κ (u) ∩ B 1/2 and a sequence of positive radii {r j } ∞ j=1 ⊂ (0, 1), r j ց 0, such that sup Br j (x0) |u| r κ j + |∇u| r κ−1 j = j, sup Br (x0) |u| r κ + |∇u| r κ−1 ≤ j for any r j ≤ r ≤ 1/4. Define the functionũ j (x) := u(r j x + x 0 ) jr κ j , x ∈ B 1 4r j . Then sup B1 (|ũ j | + |∇ũ j |) = 1 and sup BR |ũ j | R κ + |∇ũ j | R κ−1 ≤ 1 for any 1 ≤ R ≤ 1 4r j .
Now we claim that there exists a harmonic functionũ 0 ∈ C 1 loc (R n ) such that over a subsequenceũ j →ũ 0 in C 1 loc (R n ). Indeed, for a fixed R > 1 and a ball B ρ (z) ⊂ B R , we havê
Bρ(z) |∇ũ j | 2 + 2F j (z,ũ j ) = 1 j 2 r n+2κ−2 jˆBr j ρ(rj z+x0) |∇u| 2 + 2F (r j z + x 0 , u) for F j (z,ũ j ) := 1 j 1−q F (r j z+x 0 ,ũ j ) = 1 1+q (λ j ) + (z)|(ũ j ) + | q+1 + (λ j ) − (z)|(ũ j ) − | q+1 , where (λ j ) ± (z) = 1 j 1−q λ ± (r j z + x 0 ).
Since eachũ j is an almost minimizer of functional (4.1) with gauge function ω j (ρ) ≤ M (r j ρ) α ≤ M ρ α , we can apply Theorem 1 toũ j to obtain
ũ j C 1,α/2 (B R/2 ) ≤ C(n, α, M, R) E(ũ j , 1) 1/2 + 1 ≤ C(n, α, M, R).
This implies that up to a subsequence,
u j →ũ 0 in C 1 (B R/2 ).
By letting R → ∞ and using Cantor's diagonal argument, we further havẽ
u j →ũ 0 in C 1 loc (R n ).
To show thatũ 0 is harmonic, we fix R > 1 and observe that for the harmonic replacement h j ofũ j in B R ,
BR |∇ũ j | 2 + 2 j 1−q F (x 0 ,ũ j ) ≤ (1 + M (r j R) α )ˆB R |∇h j | 2 + 2 j 1−q F (x 0 , h j ) . (4.9)
From the global estimates of harmonic function h j
h j C 1,α/2 (BR) ≤ C(n, R) ũ j C 1,α/2 (BR) ≤ C(n, α, M, R),
we see that over a subsequence
h j → h 0 in C 1 (B R ), for some harmonic function h 0 ∈ C 1 (B R ). Taking j → ∞ in (4.9), we get BR |∇ũ 0 | 2 ≤ˆB R |∇h 0 | 2 ,
which implies thatũ 0 is the energy minimizer of the Dirichlet integral, or the harmonic function. This finishes the proof of the claim. Now, we observe that the harmonic functionũ 0 satisfies
sup B1 (|ũ 0 | + |∇ũ 0 |) = 1 and sup BR |ũ 0 | R κ + |∇ũ 0 | R κ−1 ≤ 1 for any R ≥ 1. (4.10)
On the other hand, from x 0 ∈ Γ κ (u), we haveũ j (x) = u(rj x+x0) jr κ j = O(|x| ξ ) for some ⌊κ⌋ < ξ < κ. Applying Lemma 2 yields |ũ j (x)| ≤ C ξ |x| ξ , |x| < r ξ , with C ξ and r ξ depending only on n, α, M , ξ. This readily implies
|ũ 0 (0)| = |∇ũ 0 (0)| = · · · = |D ⌊κ⌋ũ 0 (0)| = 0,
which combined with (4.10) contradicts Liouville's theorem, and (4.8) is proved.
The pointwise estimate (4.8) tells us u(x) = O(|x − x 0 | κ ) at every free boundary point x 0 ∈ Γ κ (u) ∩ B 1/2 . This in turn implies, using Lemma 2 again, the desired uniform estimate in Lemma 3.
In the rest of this section we establish the optimal growth of almost minimizers at free boundary points when κ is an integer. We start with weak growth estimates.
Lemma 4. Let u ∈ W 1,2 (B 1 ) be an almost minimizer in B 1 and κ ∈ N, κ > 2.
Then for any κ − 1 < µ < κ, there exist C > 0 and r 0 > 0, depending on n, α, M ,
µ, E(u, 1), such that sup Br(x0) |u| r µ + |∇u| r µ−1 ≤ C for any x 0 ∈ Γ κ (u) ∩ B 1/2 and 0 < r < r 0 .
Proof. The proof is similar to that of Lemma 3. We first claim that for each
x 0 ∈ Γ κ (u) ∩ B 1/2 , sup Br(x0) |u| r µ + |∇u| r µ−1 ≤ C µ,x0 , 0 < r < r µ,x0 , (4.11)
for positive constants C µ,x0 and r µ,x0 , depending only on n, α, M , µ, E(u, 1) and x 0 . To prove it by contradiction we assume there is a sequence of positive radii
{r j } ∞ j=1 ⊂ (0, 1), r j ց 0 such that sup Br j (x0) |u| r µ j + |∇u| r µ−1 j = j, sup Br (x0) |u| r µ + |∇u| r µ−1 ≤ j for any r j ≤ r ≤ 1/4. Letũ j (x) := u(r j x + x 0 ) jr µ j , x ∈ B 1 4r j .
Following the argument in Lemma 3, we can obtain thatũ j →ũ 0 in C 1 loc (R n ) for some harmonic functionũ 0 ∈ C 1 (R n ) and thatũ 0 satisfies
sup B1 (|ũ 0 | + |∇ũ 0 |) = 1 and sup BR |ũ 0 | R µ + |∇ũ 0 | R µ−1 ≤ 1 for any R ≥ 1. (4.12) On the other hand, from x 0 ∈ Γ κ (u), we haveũ j (x) = O |x| ξ for some ξ > κ − 1.
Thus |ũ j (x)| ≤ C ξ |x| ξ for x ∈ B r ξ by Lemma 2 with C ξ and r ξ depending only on n, α, M , ξ. This readily implies |ũ 0 (0)| = |∇ũ 0 (0)| = · · · = |D κ−1ũ 0 (0)| = 0, which combined with (4.12) contradicts Liouville's theorem. Now, the pointwise estimate (4.11) gives u(x) = O(|x − x 0 | µ ) at every x 0 ∈ Γ κ (u) ∩ B 1/2 , and we can apply Lemma 2 again to conclude Lemma 4. Lemma 5. Let u and κ be as in Lemma 4, and 0 ∈ Γ κ (u) ∩ B 1/2 . Assume that in a ball B r , r ≤ r 0 universal, we have for universal constants 0 < s < 1, C 0 > 1 and
κ − 1 < µ < κ Br |u s − p r | 2 ≤ L 2 s r 2κ , Br |∇(u s − p r )| 2 ≤ L 2 s r 2κ−2 ,(4.13)
with p r a harmonic polynomial of degree κ such that
p r L ∞ (Br ) ≤ C 0 L 2 1+q s r κ . (4.14)
Then, there exists ρ > 0 small universal such that (4.13) and (4.14) hold in B ρr for a harmonic polynomial p ρr of degree κ. Remark 1. Lemma 5 readily implies Theorem 2 when κ ∈ N. In fact, as in the standard Campanato Type estimates, the lemma ensures that (4.13)-(4.14) are true for small r ≤ r 1 . Combining these two estimates yields Scaling back to u(x) = s κ u s (x/s) gives its optimal growth estimates at 0 ∈ Γ κ (u) ∩ B 1/2 . This also holds for any x 0 ∈ Γ κ (u) ∩ B 1/2 by considering u(· − x 0 ).
Proof. For notational simplicity, we write v := u s .
Step 1. For 0 < r < 1, we denote byṽ andp r the rescalings of v and p r , respectively, to the ball of radius r, that is
v(x) := v(rx) r κ = u(srx) (sr) κ ,p r (x) := p r (rx) r κ , x ∈ B 1 .
When not specified, · ∞ denotes the L ∞ norm in the unit ball B 1 . With these notations, (4.13)-(4.14) read
B1 |ṽ −p r | 2 ≤ L 2 s , B1 |∇(ṽ −p r )| 2 ≤ L 2 s (4.15) and p r ∞ ≤ C 0 L 2 1+q s . (4.16)
We claim that ifp r is κ-homogeneous, then the finer bound
p r ∞ ≤ C 0 2 L 2 1+q
s holds for a universal constant C 0 > 0. Indeed, applying Theorem 5, Weiss-type monotonicity formula, giveŝ
B1 |∇ṽ| 2 + 2F (0,ṽ) − κ(1 − b(rs) α )ˆ∂ B1 |ṽ| 2 ≤ e −a(rs) α W (u, 0, 0, t 0 ) and since b > 0, 2λ 0 1 + qˆB 1 |ṽ| q+1 ≤ C − ˆB 1 |∇ṽ| 2 − κˆ∂ B1 |ṽ| 2 .
Using thatp r is a κ-homogeneous harmonic polynomial satisfying (4.15), we get
2λ 0 1 + qˆB 1 |ṽ| q+1 ≤ C − ˆB 1 |∇(ṽ −p r )| 2 − κˆ∂ B1 |ṽ −p r | 2 ≤ C(1 + L 2 s ). ThusˆB 1 |p r | q+1 ≤ CˆB 1 |ṽ| q+1 + |ṽ −p r | q+1 ≤ C 1 + L 2 s + ṽ −p r q+1 L 2 (B1) ≤ C(1 + L 2 s + L q+1 s ).
In conclusion (for a universal constant C > 0) we have p r q+1 ∞ ≤ CL 2 s from which we deduce that
p r ∞ ≤ C 0 2 L 2 1+q s .
Step 2. We claim that for some t 0 > 0 small universal We conclude that the following energy estimate E(w, 1) ≤ C, wherew := L − 2 1+q sṽ is an almost minimizer with the same gauge function asṽ for the energŷ
|∇ṽ(x)| ≤ C µ L 2 1+q s |x| µ−1 , |ṽ(x)| ≤ C µ L 2 1+q s |x| µ , x ∈ B t0 .Bt(x0) |∇w| 2 + 2L 2(q−1) q+1 s F (x 0 ,w) , 0 < t < 1.
As before, L 2(q−1) q+1 s ≤ 1 allows us to repeat the arguments towards the C 1,α/2estimtate of almost minimizers as well as towards Lemma 4. Since u = o(|x| κ−1 ) impliesw = o(|x| κ−1 ), we can apply Lemma 4 to have
|w(x)| ≤ C µ |x| µ , |∇w(x)| ≤ C µ |x| µ−1 , x ∈ B t0 .
This readily implies (4.17).
Step 3. Leth be the harmonic replacement ofṽ in B t0 . Then, we claim that To estimate I, we use thath is the harmonic replacement ofṽ, together with (4.17), to get
Bt 0 |∇(ṽ −h)| 2 ≤ CLBt 0 |∇h| 2 ≤ Bt 0 |∇ṽ| 2 ≤ CL 4 1+q s .
In addition, it follows from the subharmonicity of |h| 2 and (4.17) that
h L ∞ (Bt 0 ) = h L ∞ (∂Bt 0 ) = ṽ L ∞ (∂Bt 0 ) ≤ CL 2 1+q s .
This gives
Bt 0 F (0,h) ≤ C h 1+q L ∞ (Bt 0 ) ≤ CL 2 s ≤ CL 4 1+q s .
Therefore,
I ≤ Cs α L 4 1+q s = CL α µ−κ + 4 1+q s ≤ CL 1+ 2q 1+q s ,
where the last inequality holds if µ is chosen universal close enough to κ (specifically, µ ≥ κ − α(1+q)
3(1−q) ).
Next, we estimate II.
Bt 0 |F (0,h) − F (0,ṽ)| ≤ C Bt 0 |h| 1+q − |ṽ| 1+q ≤ C Bt 0 |h| q + |ṽ| q |ṽ −h| ≤ C h q L ∞ (Bt 0 ) + ṽ q L ∞ (Bt 0 ) Bt 0 |ṽ −h| 2 1/2 ≤ CL 2q 1+q s Bt 0 |∇(ṽ −h)| 2 1/2 .
To bound the last term, we observê
Bt 0 ∇(h −p r ) · ∇(h −ṽ) =ˆ∂ Bt 0 ∂ ν (h −p r )(h −ṽ) −ˆB t 0 ∆(h −p r )(h −ṽ) = 0,
and use it to obtain
Bt 0 |∇(ṽ −p r )| 2 −ˆB t 0 |∇(ṽ −h)| 2 =ˆB t 0 |∇p r | 2 − |∇h| 2 − 2∇ṽ · ∇(p r −h) =ˆB t 0 ∇(p r −h) · ∇(p r +h − 2ṽ) =ˆB t 0 ∇(p r −h) · ∇(p r −h) =ˆB t 0 |∇(p r −h)| 2 ≥ 0.
Therefore,
II ≤ 2 Bt 0 |F (0,h) − F (0,ṽ)| ≤ CL 2q 1+q s Bt 0 |∇(ṽ −h)| 2 1/2 ≤ CL 2q 1+q s Bt 0 |∇(ṽ −p r )| 2 1/2 ≤ CL 1+ 2q 1+q s ,
where we used (4.15) in the last inequality. This completes the proof of (4.18).
Step 4. For ρ ∈ (0, t 0 ) small to be chosen below, we have by (4.18)
Bρ |∇(ṽ −h)| 2 ≤ Cρ −n L 1+ 2q 1+q s .
Sinceh −p r is harmonic, arguing as in the proof of Lemma 2, we can find a harmonic polynomial q r (in B r ) of degree κ such thatq r (x) = q r (rx)
r κ satisfies Bρ |∇(h −p r −q r )| 2 ≤ Cρ 2κ Bt 0 |∇(h −p r )| 2 .
Using (4.15) and (4.18), we further have
Bρ |∇(h −p r −q r )| 2 ≤ Cρ 2κ Bt 0 |∇(h −ṽ)| 2 + Bt 0 |∇(ṽ −p r )| 2 ≤ Cρ 2κ (L 1+ 2q 1+q s + L 2 s ) ≤ CL 2 s ρ 2κ
. This, combined with the equation above, gives
Bρ |∇(ṽ −p r −q r )| 2 ≤ 2 Bρ (|∇(h −p r −q r )| 2 + |∇(ṽ −h)| 2 ) ≤ L 2 s ρ 2κ−2 (Cρ 2 + Cρ −n−2κ+2 L q−1 1+q s ).
By possibly modifyingq r by adding a constant, we also have by Poincaré inequality
Bρ |ṽ −p r −q r | 2 ≤ Cρ 2 Bρ |∇(ṽ −p r −q r )| 2 ≤ L 2 s ρ 2κ (C 1 ρ 2 + C 1 ρ −n−2κ+2 L q−1 1+q s ).
One can see thatq r depends on ρ as well as r, but we keep denotingq r for the notational simplicity. We choose ρ ∈ (0, t 0 ) small so that
C 1 ρ 2 ≤ 1/8
and then choose L s large (that is s small) so that Notice that (4.19) holds for any ρ ∈ [ρ 1 , ρ 2 ], with some constants ρ 1 , ρ 2 > 0 small universal and L s > 0 large universal. In addition, we have
C 1 ρ −n−2κ+2 L q−1 1+q s ≤ 1/8.
This yields that
Bρ |ṽ −p r −q r | 2 ≤ 1 4 L 2 s ρ 2κ , Bρ |∇(ṽ −p r −q r )| 2 ≤ 1 4 L 2 s ρ 2κ−2 .q r L ∞ (B1) ≤ C(ρ 2 , κ, n) q r L ∞ (B ρ 2 /2 ) ≤ C Bρ 2 |q r | 2 1/2 ≤ C Bρ 2 |ṽ −p r −q r | 2 + Bρ 2 |ṽ −p r | 2 1/2 ≤CL s ,(4.20)
where the last line follows from (4.15) and (4.19). We remark thatC depends on ρ 2 , but is independent of ρ 1 .
Step 5. In this step, we prove that the estimates (4.13)-(4.14) over B r imply the same estimates over B ρr . We set (by abuse of notation) p ρr := p r + q r and recall q r (x) := r κqr x r . Following the notations above we denote its homogeneous rescaling byp
ρr (x) := p ρr (ρrx) (ρr) κ = (p r +q r )(ρx) ρ κ .
We divide the proof into the following two cases:
either p ρr is homogeneous of degree κ or not.
Case 1. Suppose that p ρr is κ-homogeneous. Then (4.13) over B ρr follows from (4.19) and (4.14) over B ρr with p ρr from the monotonicity formula, see the claim in Step 1.
Case 2. Now we assume that p ρr is not homogeneous of degree κ. Note that for each polynomial p of degree κ, we can decompose p = p h + p i with p h , p i respectively the κ-homogeneous and the inhomogeneous parts of p. We will prove that (4.13)-(4.14) hold in B ρr with the harmonic polynomial p ρr h (in the place of p ρr ). In fact, it is enough to prove the statement in B ρr under the assumption that p r is κ-homogeneous. Indeed, we note that (4.19) holds for every any r ≤ 1 and ρ 1 < ρ < ρ 2 . Thus, if r ≤ r 0 is small enough, we can find ρ ∈ [ρ 1 , ρ 2 ] such that r = ρ m for some m ∈ N. Then, we can iterate the above statement with such ρ, starting with p 1 = 0. Now, we distinguish two subcases, for δ > 0 small and L s > 0 large universal: q r i ∞ ≤ δL s or q r i ∞ > δL s . Case 2.1. We first consider the case q r i ∞ ≤ δL s . To prove (4.13), we use the κ-homogeneity of p r to have p ρr i = p r i + q r i = q r i , which implies (in accordance with the decomposition above)
p ρr h = p ρr − p ρr i = p ρr − q r i .
Combining this with (4.19) gives
Bρr |v − p ρr h | 2 ≤ 2 Bρr |v − p ρr | 2 + 2 Bρr |q r i | 2 ≤ 2 L 2 s 4 (ρr) 2κ + 2r 2κ Bρ |q r i | 2 ≤ L 2 s 2 (ρr) 2κ + 2L 2 s δ 2 r 2κ ≤ L 2 s (ρr) 2κ , if δ ≤ ρ κ 1 2 .
Similarly,
Bρr |∇(v − p ρr h )| 2 ≤ 2 Bρr |∇(v − p ρr )| 2 + 2 Bρr |∇q r i | 2 ≤ L 2 s 2 (ρr) 2κ−2 + 2r 2κ−2 Bρ |∇q r i | 2 ≤ L 2 s 2 (ρr) 2κ−2 + C 2 r 2κ−2 q r i || 2 L ∞ (B1) ≤ L 2 s 2 (ρr) 2κ−2 + C 2 δ 2 L 2 s r 2κ−2 ≤ L 2 s (ρr) 2κ−2 , if δ ≤ ρ κ−1 1 2C 1/2 2 .
This proves (4.13) in B ρr with harmonic polynomial p ρr h . (4.14) follows from the homogeneity of p ρr h . Case 2.2. Now we assume q r i ∞ > δL s . We will show that it leads to a contradiction and that we always fall in the previous case.
Indeed, forr := ρ 1 r,
pr i ∞ = q r i (ρ 1 x) ρ κ 1 ∞ = q r i L ∞ (Bρ 1 ) ρ κ 1 ≥ q r i L ∞ (B1) ρ 1 ≥ δ ρ 1 L s .
Recall that the constantC in (4.20) is independent of ρ 1 . Thus, for ρ 1 > 0 small,
pr i ∞ ≥ C 3 L s , C 3 ≥ 2C.
Similarly,
pr i ∞ = q r i L ∞ (Bρ 1 ) ρ κ 1 ≤ q r L ∞ (B1) ρ κ 1 ≤ C 4 (ρ 1 )L s . For L s large pr h ∞ = p r +q r h ∞ ≤ C 0 L 2 1+q s +CL s ≤ 3 4 C 0 L 2 1+q s , and thus pr ∞ ≤ pr h ∞ + pr i ∞ ≤ 3 4 C 0 L 2 1+q s + C 4 L s ≤ 7 8 C 0 L 2 1+q s .
We iterate again, and conclude that
ρ −1 1 2 pr i ∞ ≤ p ρ1r i ∞ ≤ 2ρ −κ 1 pr i ∞ while p ρ1r h −pr h ∞ ≤CL s . Indeed, using that pr i ∞ ≥ C 3 L s ≥ 2 qr ∞ , we get p ρ1r i ∞ = pr i +qr i L ∞ (Bρ 1 ) ρ κ 1 ≥ pr i +qr i L ∞ (B1) ρ 1 ≥ pr i ∞ − qr ∞ ρ 1 ≥ ρ −1 1 2 pr i ∞ , and similarly p ρ1r i ∞ ≤ pr i ∞ + qr ∞ ρ κ 1 ≤ 2ρ −κ 1 pr i ∞ .
In addition, p ρ1r h −pr h ∞ = qr h ∞ ≤CL s . In conclusion, if a l := p
C 3 L s ≤ a 0 ≤ C 4 L s , 2a l ≤ a l+1 ≤ C * (ρ 1 )a l , b 0 ≤ 3 4 C 0 L 2 1+q s , |b l+1 − b l | ≤CL s ,
as long as
a l + b l ≤ C 0 L 2 1+q
s , which holds at l = 0. This iteration is possible, since we can repeat Step 1-4 as long as p ρ l 1r ∞ ≤ a l + b l ≤ C 0 L 2 1+q s . Thus we can iterate till the first l ≥ 1 (l ∼ log L s ) such that for c 0 ≤ 1 2C * small universal
C 0 8 L 2 1+q s ≥ a l ≥ c 0 L 2 1+q s , (4.21) b l ≤ 7 8 C 0 L 2 1+q s . (4.22)
We will now finally show that these inequalities lead to a contradiction.
For simplicity we writeṽ l :=ṽ ρ l 1r andp l :=p ρ l 1r . From
|p l i | 2 1/2 ≤ C(η µ + η κ ) + o(1/L s ).
Hence, we conclude from (4.21) that
c 0 η κ−1 ≤ L − 2 1+q s Bη |p l i | 2 1/2 ≤ C(η µ + η κ ) + o(1/L s ).
Since µ > κ − 1, for η small and L s large we obtain a contradiction.
Before closing this section, we notice that the combination of Theorem 1 and Theorem 2 provides the following optimal regularity of an almost minimizer at the free boundary. Corollary 1. Let u be an almost minimizer in B 1 . Then, for x 0 ∈ Γ κ (u) ∩ B 1/2 , 0 < r < 1/2, u x0,r ∈ C 1,α/2 (B 1 ) and for any K ⋐ B 1 , (4.23) u x0,r C 1,α/2 (K) ≤ C(n, α, M, κ, K, E(u, 1)).
Non-degeneracy
In this section we shall derive an important non-degeneracy property of almost minimizers, Theorem 6.
In the rest of this paper, for x 0 ∈ B 1/2 and 0 < r < 1/2 we denote the κhomogeneous rescalings of u by
u x0,r (x) := u(rx + x 0 ) r κ , x ∈ B 1/(2r) .
Theorem 6 (Non-Degeneracy). Let u be an almost minimizer in B 1 . There exist constants c 0 = c 0 (q, n, α, M, E(u, 1)) > 0 and r 0 = r 0 (q, n, α, M ) > 0 such that if x 0 ∈ Γ κ (u) ∩ B 1/2 and 0 < r < r 0 , then
sup Br (x0) |u| ≥ c 0 r κ .
The proof of Theorem 6 relies on the following lemma.
Lemma 6. Let u be an almost minimizer in B 1 . Then, there exist small constants ε 0 = ε 0 (q, n, M ) > 0 and r 0 = r 0 (q, n, α, M ) > 0 such that for 0 < r < r 0 and
x 0 ∈ B 1/2 , if E(u x0,r , 1) ≤ ε 0 then E(u x0,r/2 , 1) ≤ ε 0 .
Proof. For simplicity we may assume x 0 = 0. For 0 < r < r 0 to be specified later, let v r be a solution of ∆v r = f (0, v r ) in B 1 with v r = u r on ∂B 1 . We claim that if ε 0 = ε 0 (q, n, M ) > 0 is small, then v r ≡ 0 in B 1/2 . Indeed, if not, then sup B 3/4 |v r | ≥ c 0 (q, n) by the non-degeneracy of the solution v r . Thus |v r (z 0 )| ≥ c 0 (q, n) for some z 0 ∈ B 3/4 . Moreover, from 1/M ≤ λ ± ≤ M and 0 < q < 1, we have 1/M ≤ 2 1+q λ ± ≤ 2M , thus 1/M |v r | q+1 ≤ 2F (0, v r ) and 2F (0, u r ) ≤ 2M |u r | q+1 .
Then
E(v r , 1) ≤ MˆB 1 |∇v r | 2 + 2F (0, v r ) ≤ MˆB 1 |∇u r | 2 + 2F (0, u r ) ≤ 2M 2 E(u r , 1) ≤ 2M 2 ε 0 .
Combining this with the estimate for the solution v r gives sup
B 7/8 |∇v r | ≤ C(n, M ) E(v r , 1) 1/2 + 1 ≤ C(n, M ), hence |v r | ≥ c 0 (q, n) 2 in B ρ0 (z 0 )
for some small ρ 0 = ρ 0 (q, n, M ) > 0. This implies that
c(q, n, M ) ≤ˆB ρ 0 (z0) |v r | q+1 ≤ E(v r , 1) ≤ 2M 2 ε 0 ,
which is a contradiction if ε 0 = ε 0 (q, n, M ) is small.
Now, we use E(v r , 1) ≤ 2M 2 ε 0 together with the fact that u r satisfies (1.4) in B 1 with gauge function ω r (ρ) ≤ M (rρ) α to get B1 |∇u r | 2 + 2F (0, u r ) ≤ (1 + M r α )ˆB 1 |∇v r | 2 + 2F (0, v r ) ≤ 4M 4 ε 0 r α +ˆB 1 |∇v r | 2 + 2F (0, v r ) , thuŝ B1 |∇u r | 2 − |∇v r | 2 ≤ 4M 4 ε 0 r α + 2ˆB 1 (F (0, v r ) − F (0, u r )) = 4M 4 ε 0 r α + 2λ + (0) 1 + qˆB 1 |v + r | q+1 − |u + r | q+1 + 2λ − (0) 1 + qˆB 1 |v − r | q+1 − |u − r | q+1 . This giveŝ B1 |∇(u r − v r )| 2 =ˆB 1 |∇u r | 2 − |∇v r | 2 − 2∇(u r − v r ) · ∇v r =ˆB 1 |∇u r | 2 − |∇v r | 2 + 2ˆB 1 (u r − v r )f (0, v r ) =ˆB
Combining those inequalities and (5.1) yieldŝ
B1 |∇(u r − v r )| 2 ≤ 4M 4 ε 0 r α .
Applying Poincaré inequality and Hölder's inequality, we obtain
B1 |∇(u r − v r )| 2 + |u r − v r | q+1 ≤ C(q, n, M )r α/2 .
Since v r ≡ 0 in B 1/2 , we see that for 0 < r < r 0 (q, n, α, M ),
E(u r , 1/2) =ˆB 1/2 |∇u r | 2 + |u r | q+1 ≤ C(q, n, M )r α/2 ≤ ε 0 2 n+2κ−2 .
Therefore, we conclude that
E(u r/2 , 1) = 2 n+2κ−2 E(u r , 1/2) ≤ ε 0 .
Lemma 6 immediately implies the following integral form of non-degeneracy.
Lemma 7. Let u, ε 0 and r 0 be as in the preceeding lemma. If x 0 ∈ {|u| > 0}∩B 1/2 and 0 < r < r 0 , thenˆB
r (x0) |∇u| 2 + |u| q+1 ≥ ε 0 r n+2κ−2 . (5.2)
Proof. By the continuity of u, it is enough to prove (5.2) for x 0 ∈ {|u| > 0} ∩ B 1/2 . Towards a contradiction, we suppose that´B r (x0) |∇u| 2 + |u| q+1 ≤ ε 0 r n+2κ−2 , or equivalently E(u x0,r , 1) ≤ ε 0 . Then, by the previous lemma we have E(u x0,r/2 j , 1) ≤ ε 0 for all j ∈ N. From |u(x 0 )| > 0, we see that |u| > c 1 > 0 in B r/2 j (x 0 ) for large j. Therefore,
ε 0 ≥ E(u x0,r/2 j , 1) = 1 (r/2 j ) n+2κ−2ˆB r/2 j (x0) |∇u| 2 + |u| q+1 ≥ 1 (r/2 j ) n+2κ−2ˆB r/2 j (x0) 2c q+1 1 = C(n)c q+1 1 (r/2 j ) 2κ−2 → ∞ as j → ∞.
This is a contradiction, as desired.
We are now ready to prove Theorem 6.
Proof of Theorem 6. Assume by contradiction that
u x0,r (x) < c 0 , x ∈ B 1 ,
with c 0 small, to be made precise later. Let ǫ 0 , r 0 be the constants in Lemma 6 and ω n = |B 1 | be the volume of an n-dimensional ball. For r < r 0 , by interpolation together with the estimate (4.23),
∇u x0,r L ∞ (B 1/2 ) ≤ ǫ u x0,r C 1,α/2 (B 3/4 ) + K(ǫ) u x0,
Homogeneous blowups and Energy decay estimates
In this section we study the homogeneous rescalings and blowups. We first show that the κ-homogeneous blowups exist at free boundary points.
Lemma 8. Suppose u is an almost minimizer in B 1 and x 0 ∈ Γ κ (u) ∩ B 1/2 . Then for κ-homogeneous rescalings u x0,t (x) = u(x0+tx) t κ , there exists u x0,0 ∈ C 1 loc (R n ) such that over a subsequence t = t j → 0+,
u x0,tj → u x0,0 in C 1 loc (R n ).
Moreover, u x0,0 is a nonzero κ-homogeneous global solution of ∆u x0,0 = f (x 0 , u x0,0 ).
Proof. For simplicity we assume x 0 = 0 and write u t = u 0,t and W (u, r) = W (u, 0, 0, r).
Step 1. We first prove the C 1 -convergence. For any R > 1, Corollary 1 ensures that there is a function u 0 ∈ C 1 (B R/2 ) such that over a subsequence t = t j → 0+,
u tj → u 0 in C 1 (B R/2 ).
By letting R → ∞ and using a Cantor's diagonal argument, we obtain that for another subsequence t = t j → 0+,
u tj → u 0 in C 1 loc (R n ).
Step 2. By the non-degeneracy in Theorem 6, sup B1 |u tj | ≥ c 0 > 0. By the C 1convergence of u tj to u 0 , we have u 0 is nonzero. To show that u 0 is a global solution, for fixed R > 1 and small t j , let v tj be the solution of ∆v tj = f (0, v tj ) in B R with v tj = u tj on ∂B R . Then, by elliptic theory, v tj C 1,α/2 (BR) ≤ C(n, m, R)( u tj C 1,α/2 (BR) + 1) ≤ C.
Thus, there exists a solution
v 0 ∈ C 1 (B R ) such that v tj → v 0 in C 1 (B R ).
Moreover, we use again that u tj is an almost minimizer with the frozen coefficients in B 1/2tj with a gauge function ω j (ρ) ≤ M (t j ρ) α to havê
BR |∇u tj | 2 + 2F (0, u tj ) ≤ (1 + M (t j R) α )ˆB R |∇v tj | 2 + 2F (0, v tj ) .
By taking t j → 0 and using the C 1 -convergence of u tj and v tj , we obtain
BR |∇u 0 | 2 + 2F (0, u 0 ) ≤ˆB R |∇v 0 | 2 + 2F (0, v 0 ) .
Since v tj = u tj on ∂B R , we also have v 0 = u 0 on ∂B R . This means that u 0 is equal to the energy minimizer (or solution) v 0 in B R . Since R > 1 is arbitrary, we conclude that u 0 is a global solution.
Step 3. Now we prove that u 0 is κ-homogeneous. Fix 0 < r < R < ∞. By the Weiss-type monotonicity formula, Theorem 5, we have that for small t j ,
W (u, Rt j ) − W (u, rt j ) =ˆR tj rtj d dρ W (u, ρ) dρ ≥ˆR tj rtj 1 ρ n+2κˆ∂ Bρ |x · ∇u − κ(1 − bρ α )u| 2 dS x dρ =ˆR r t j (t j σ) n+2κˆ∂ Bt j σ |x · ∇u − κ(1 − b(t j σ) α )u| 2 dS x dσ =ˆR r 1 t 2κ j σ n+2κˆ∂ Bσ |t j x · ∇u(t j x) − κ(1 − b(t j σ) α )u(t j x)| 2 dS x dσ =ˆR r 1 σ n+2κˆ∂ Bσ |x · ∇u tj − κ(1 − b(t j σ) α )u tj | 2 dS x dσ. (6.1)
On the other hand, by the optimal growth estimates Theorem 2,
|W (u, r)| ≤ C for 0 < r < r 0 ,
thus W (u, 0+) is finite. Using this and taking t j → 0+ in (6.1), we get
0 = W (u, 0+) − W (u, 0+) ≥ˆR r 1 σ n+2κ−1ˆ∂ Bσ |x · ∇u 0 − κu 0 | 2 dS x dσ.
Taking r → 0+ and R → ∞, we conclude that x · ∇u 0 − κu 0 = 0 in R n , which implies that u 0 is κ-homogeneous in R n .
Out next objective is the polynomial decay rate of the Weiss-type energy functional W at the regular free boundary points x 0 ∈ R u , Lemma 9. It can be achieved with the help of the epiperimetric inequality, proved in [4]. To describe the inequality, we let
M x0 (v) :=ˆB 1 |∇v| 2 + 2F (x 0 , v) − κˆ∂ B1 |v| 2
and recall that H x0 is a class of half-space solutions.
Theorem 7 (Epiperimetric inequality). There exist η ∈ (0, 1) and δ > 0 such that if c ∈ W 1,2 (B 1 ) is a homogeneous function of degree κ and c − h W 1,2 (B1) ≤ δ for some h ∈ H x0 , then there exists a function v ∈ W 1,2 (
B 1 ) such that v = c on ∂B 1 and M x0 (v) ≤ (1 − η)M x0 (c) + ηM x0 (h).
For x 0 ∈ B 1/2 and 0 < r < 1/2, we denote the κ-homogeneous replacement of u in B r (x 0 ) (or equivalently, the κ-homogeneous replacement of u x0,r in B 1 ) by
c x0,r (x) := |x| κ u x0,r x |x| = |x| r κ u x 0 + r |x| x , x ∈ R n .
Lemma 9. Let u be an almost minimizer in B 1 and x 0 ∈ R u ∩ B 1/2 . Suppose that the epiperimetric inequality holds with η ∈ (0, 1) for each c x0,r , 0 < r < r 1 < 1. Then W (u, x 0 , x 0 , r) − W (u, x 0 , x 0 , 0+) ≤ Cr δ , 0 < r < r 0 for some δ = δ(n, α, κ, η) > 0.
Proof. For simplicity we may assume x 0 = 0 and write u r = u 0,r , c r = c 0,r and W (u, r) = W (u, 0, 0, r). We define
e(r) := W (u, r) − W (u, 0+) = e ar α r n+2κ−2ˆB r |∇u| 2 + 2F (0, u) − κ(1 − br α )e ar α r n+2κ−1ˆ∂ Br |u| 2 − W (u, 0+),
and compute d dr
e ar α r n+2κ−2 = − (n + 2κ − 2)(1 − M r α )e ar α r n+2κ−1 and d dr κ(1 − br α )e ar α r n+2κ−1 = −κ(1 − br α )e ar α (n + 2κ − 1 + O(r α )) r n+2κ .
Then e ′ (r) = − (n + 2κ − 2)(1 − M r α )e ar α r n+2κ−1ˆB r |∇u| 2 + 2F (0, u) + e ar α r n+2κ−2ˆ∂ Br |∇u| 2 + 2F (0, u)
+ κ(1 − br α )e ar α (n + 2κ − 1 + O(r α )) r n+2κˆ∂ Br |u| 2 − κ(1 − br α )e ar α r n+2κ−1ˆ∂ Br 2u∂ ν u + n − 1 r |u| 2 ≥ − n + 2κ − 2 r e(r) + κ(1 − br α )e ar α r n+2κ−1ˆ∂ Br |u| 2 + W (u, 0+) + e ar α r 1 r n+2κ−3ˆ∂ Br |∇u| 2 + 2F (0, u) + 2κ 2 + O(r α ) r n+2κ−1ˆ∂ Br |u| 2 − 2κ(1 − br α ) r n+2κ−2ˆ∂ Br u∂ ν u ≥ − n + 2κ − 2 r (e(r) + W (u, 0+)) + (1 − br α )e ar α r 1 r n+2κ−3ˆ∂ Br |∇u| 2 + 2F (0, u) + κ(2 − n) + O(r α ) r n+2κ−1ˆ∂ Br |u| 2 − 2κ r n+2κ−2ˆ∂ Br u∂ ν u .
To simplify the last term, we observe that u r = c r and ∂ ν c r = κc r on ∂B 1 and that |∇c r | 2 + 2F (0, c r ) is homogeneous of degree 2κ − 2, and obtain
∂Br 1 r n+2κ−3 |∇u| 2 + 2F (0, u) + κ(2 − n) + O(r α ) r n+2κ−1 |u| 2 − 2κ r n+2κ−2 u∂ ν u =ˆ∂ B1 |∇u r | 2 + 2F (0, u r ) + (κ(2 − n) + O(r α ))|u r | 2 − 2κu r ∂ ν u r =ˆ∂ B1 |∂ ν u r − κu r | 2 + |∂ θ u r | 2 + 2F (0, u r ) − (κ(n + κ − 2) + O(r α )) |u r | 2 ≥ˆ∂ B1 |∂ θ c r | 2 + 2F (0, c r ) − (κ(n + κ − 2) + O(r α )) |c r | 2 =ˆ∂ B1 |∇c r | 2 + 2F (0, c r ) − (κ(n + 2κ − 2) + O(r α )) |c r | 2 = (n + 2κ − 2) ˆB 1 |∇c r | 2 + 2F (0, c r ) − (κ + O(r α ))ˆ∂ B1 |c r | 2 = (n + 2κ − 2)M 0 (c r ) + O(r α )ˆ∂ B1 |u r | 2 . Thus e ′ (r) ≥ − n + 2κ − 2 r (e(r) + W (u, 0+)) + (1 − br α )e ar α (n + 2κ − 2) r M 0 (c r ) + O(r α ) rˆ∂ B1 |u r | 2 . (6.2)
We want to estimate M 0 (c r ). From the assumption that the epiperimetric inequality holds for c r , we have M 0 (v r ) ≤ (1 − η)M 0 (c r ) + ηM 0 (h r ) for some h r ∈ H 0 and v r ∈ W 1,2 (B 1 ) with v r = c r on ∂B 1 . In addition, 0 ∈ R u ensures that there exists a sequence t j → 0+ such that u tj → h 0 in C 1 loc (R n ) for some h 0 ∈ H 0 . Then
W (u, 0+) = lim j→∞ W (u, t j ) = lim j→∞ e at α j ˆB 1 |∇u tj | 2 + 2F (0, u tj ) − κ(1 − bt α j )ˆ∂ B1 |u tj | 2 = M 0 (h 0 ) = M 0 (h r ).
Here, the last equality holds since both h j and h r are elements in H 0 . By the epiperimetric inequality and the almost-minimality of u r ,
(1 − η)M 0 (c r ) + ηW (u, 0+) ≥ M 0 (v r ) =ˆB 1 |∇v r | 2 + 2F (0, v r ) − κˆ∂ B1 |v r | 2 ≥ 1 1 + M r αˆB 1 |∇u r | 2 + 2F (0, u r ) − κˆ∂ B1 |u r | 2 = e −ar α 1 + M r α W (u, r) + O(r α )ˆ∂ B1 |u r | 2 .
We rewrite it as
M 0 (c r ) ≥ e −ar α 1+Mr α W (u, r) − ηW (u, 0+) 1 − η + O(r α )ˆ∂ B1 |u r | 2
and, combining this with (6.2), obtain e ′ (r) ≥ − n + 2κ − 2 r (e(r) + W (u, 0+))
+ (1 − br α )e ar α (n + 2κ − 2) r e −ar α 1+Mr α W (u, r) − ηW (u, 0+) 1 − η + O(r α ) rˆ∂ B1 |u r | 2 .
Note that from Theorem 2, there is a constant C > 0 such that
W (u, 0+) ≤ W (u, r) ≤ C andˆ∂ B1 |u r | 2 = 1 r n+2κ−1ˆ∂ Br |u| 2 ≤ C for small r > 0. Then e ′ (r) ≥ − n + 2κ − 2 r (e(r) + W (u, 0+)) + n + 2κ − 2 r W (u, r) − ηW (u, 0+) 1 − η + O(r α ) r W (u, r) + W (u, 0+) +ˆ∂ B1 |u r | 2 ≥ − n + 2κ − 2 r e(r) + n + 2κ − 2 r W (u, r) − W (u, 0+) 1 − η − C 1 r α−1 = (n + 2κ − 2)η 1 − η e(r) r − C 1 r α−1 .
Now, take δ = δ(n, α, κ, η) such that 0 < δ < min (n+2κ−2)η 1−η , α . Using the differential inequality above for e(r) and that e(r) = W (u, r) − W (u, 0+) ≥ 0, we have for 0 < r < r 0
d dr e(r)r −δ + C 1 α − δ r α−δ = r −δ e ′ (r) − δ r e(r) + C 1 r α−δ−1 ≥ r −δ (n + 2κ − 2)η 1 − η − δ e(r) r − C 1 r α−1 + C 1 r α−δ−1 ≥ 0. Thus e(r)r −δ ≤ e(r 0 )r −δ 0 + C 0 α − δ r α−δ 0 ,
and hence we conclude that
W (u, r) − W (u, 0+) = e(r) ≤ Cr δ .
Now, we consider an auxiliary function φ(r) := e −(κb/α)r α r κ , r > 0, which is a solution of the differential equation
φ ′ (r) = κ φ(r) 1 − br α r , r > 0.
For x 0 ∈ B 1/2 , we define the κ-almost homogeneous rescalings by
u φ x0,r (x) := u(rx + x 0 ) φ(r) , x ∈ B 1/(2r) .
Lemma 10 (Rotation estimate). Under the same assumption as in Lemma 9,
∂B1 |u φ x0,t − u φ x0,s | ≤ Ct δ/2 , s < t < t 0 .
Proof. Without loss of generality, assume x 0 = 0. For u φ r = u φ 0,r and W (u, r) = W (u, 0, 0, r),
d dr u φ r (x) = ∇u(rx) · x φ(r) − u(rx)[φ ′ (r)/φ(r)] φ(r) = 1 φ(r) ∇u(rx) · x − κ(1 − br α ) r u(rx) .
By Theorem 5, we have for 0 < r < t 0
ˆ∂ B1 d dr u φ r (ξ) 2 dS ξ 1/2 = 1 φ(r) ˆ∂ B1 ∇u(rξ) · ξ − κ(1 − br α ) r u(rξ) 2 dS ξ 1/2 = 1 φ(r) 1 r n−1ˆ∂ Br ∇u(x) · ν − κ(1 − br α ) r u(x) 2 dS x 1/2 ≤ 1 φ(r) 1 r n−1 r n+2κ−2 e ar α d dr W (u, r) 1/2 = e cr α r 1/2 d dr W (u, r) 1/2 , c = κb α − a 2 .
Using this and Lemma 9, we can computê
∂B1 |u φ t − u φ s | ≤ˆ∂1/2 ≤ C log t s 1/2 (W (u, t) − W (u, s)) 1/2 ≤ C log t s 1/2 t δ/2 , 0 < t < t 0 .
Now, by a standard dyadic argument, we conclude that
∂B1 |u φ t − u φ s | ≤ Ct δ/2 .
The following are generalization of Lemma 4.4 and Proposition 4.6 in [4] on κhomogeneous solutions from the case λ ± = 1 to the constant coefficients λ ± (x 0 ), which can be proved in similar fashion.
Lemma 11. For every x 0 ∈ B 1/2 , H x0 is isolated (in the topology of W 1,2 (B 1 )) within the class of κ-homogeneous solutions v for ∆v = f (x 0 , v).
Proposition 2. For x 0 ∈ B 1/2 , let v ≡ 0 be a κ-homogeneous solution of ∆v = f (x 0 , v) satisfying {|v| = 0} 0 = ∅. Then M x0 (v) ≥ B x0 and equality implies v ∈ H x0 ; here B x0 = M x0 (h) for every h ∈ H x0 .
The proof of the next proposition can be obtained as in Proposition 4.5 in [1], by using Lemma 11 and a continuity argument.
Proposition 3. If x 0 ∈ R u , then all blowup limits of u at x 0 belong to H x0 .
Regularity of the regular set
In this last section we establish one of the main result in this paper, the C 1,γregularity of the regular set R u .
We begin by showing that R u is an open set in Γ(u).
Lemma 12. R u is open relative to Γ(u).
Proof.
Step 1. For points y ∈ B 1/2 , we let A y be a set of κ-homogeneous solutions v of ∆v = f (y, v) satisfying v ∈ H y , and define
ρ y := inf{ h − v C 1 (B1) : h ∈ H y , v ∈ A y }. From h − v C 1 (B1) ≥ c(n) h − v W 1,2 (B1)
, the isolation property of H x0 in Lemma 11 also holds in C 1 (B 1 )-norm, thus ρ y > 0 for every y ∈ B 1/2 . We claim that there is a universal constant c 1 > 0 such that c 1 < ρ y < 1/c 1 for all y ∈ B 1/2 . Indeed, the second inequality ρ y < 1/c 1 is obvious. For the first one, we assume to the contrary that ρ yi → 0 for a sequence y i ∈ B 1/2 . This gives sequences h yi ∈ H yi and v yi ∈ A yi such that dist(h yi , v yi ) → 0, where the distance is measured in C 1 (B 1 )-norm. Over a subsequence, we have y i → y 0 and, using that h yi and v yi are uniformly bounded, h yi → h y0 and v yi → v y0 for some h y0 ∈ H y0 and v y0 ∈ A y0 . It follows that dist(h y0 , v y0 ) = lim i→∞ dist(h yi , v yi ) = 0, meaning that h y0 = v y0 , a contradiction.
Step 2. Towards a contradiction we assume that the statement of Lemma 12 is not true. Then we can find a point x 0 ∈ R u and a sequence of points
x i ∈ Γ(u) \ R u such that x i → x 0 .
For a small constant ε 1 = ε 1 (n, q, M, c 1 ) > 0 to be specified later, we claim that there is a sequence r i → 0 and a subsequence of x i , still denoted by x i , such that dist(u xi,ri , H x0 ) = ε 1 ,
(7.1)
where u xi,ri (x) = u(xi+rix) r κ i are the κ-homogeneous rescalings. Indeed, since x 0 ∈ R u , we can find a sequence t j → 0 such that dist(u x0,tj , H x0 ) < ε 1 /2.
(7.2)
For each fixed t j , we take a point from the above sequence {x i } ∞ i=1 (and redefine the point as x j for convenience) close to x 0 such that dist(u xj ,tj , H x0 ) ≤ dist(u xj ,tj , u x0,tj ) + dist(u x0,tj , H x0 ) < ε 1 .
On the other hand, using x j ∈ R u and Proposition 3, we see that there is τ j < t j such that dist(u xj ,τj , H xj ) > ρ xj /2. Using the result in Step 1, we have dist(u xj ,τj , H xj ) > c 1 /2 > 2ε 1 for small ε 1 . We next want to show that for large j dist(u xj ,τj , H x0 ) ≥ (3/2)ε 1 .
(7.3)
For this aim, we let h x0 ∈ H x0 be given. Then h xj :=
βx j βx 0 h x0 ∈ H xj and βx j βx 0 = λ+(xj ) λ+(x0) κ/2 → 1 as j → ∞. Thus dist(h xj , h x0 ) < ε 1 /2 for large j, and hence dist(u xj ,τj , h x0 ) ≥ dist(u xj ,τj , h xj ) − dist(h xj , h x0 ) > 2ε 1 − ε 1 /2 = (3/2)ε 1 .
This implies (7.3). Now, (7.2) and (7.3) ensure the existence of r j ∈ (τ j , t j ) such that dist(u xj ,rj , H x0 ) = ε 1 .
Step 3. (7.1) implies that {u xi,ri } is uniformly bounded in C 1 -norm (so in C 1,α/2norm as well), thus we can follow the argument in Step 1-2 in the proof of Lemma 8 with u xi,ri in the place of u t to have that over a subsequence u xi,ri → u * in C 1 loc (R n ) for some nonzero global solution u * of ∆u * = f (x 0 , u * ). From (7.1) again, we may assume that u xi,ri − h C 1 (B1) ≤ 2ε 1 for h(x) = β x0 (x 1 + ) κ . Then |u xi,ri | + |∇u xi,ri | ≤ 2ε 1 in B 1 ∩ {x 1 ≤ 0}. By the nondegeneracy property, Lemma 7, we know that´B t (z) |∇u xi,ri | 2 + |u xi,ri | q+1 ≥ ε 0 t n+2κ−2 for any z ∈ {|u xi,ri | > 0} and B t (z) ⋐ B 1 . We can deduce from the preceding two inequalities that if ε 1 is small, then u xi,ri ≡ 0 in B 1 ∩ {x 1 ≤ −1/4}. Therefore, the coincidence set {|u * | = 0} has a nonempty interior, and there exist an open ball D ⋐ {|u * | = 0} and a point z 0 ∈ ∂D ∩ ∂{|u * | > 0}.
Step 4. We claim that u * L ∞ (Br (z0)) = O(r κ ). In fact, we can proceed as in the proof of the sufficiency part of Theorem 1.2 in [4]. In the theorem, they assume u L ∞ (Br ) = o(r ⌊κ⌋ ) and prove u L ∞ (Br ) = O(r κ ). We want to show that the condition u L ∞ (Br) = o(r ⌊κ⌋ ) can be replaced by 0 ∈ ∂D ∩ ∂{|u| > 0}, where D is an open ball contained in {|u| = 0} (then it can be applied to u * in our case, and u * L ∞ (Br(z0)) = O(r κ ) follows). Indeed, the growth condition on u in the theorem is used only to prove the following: if u j (x) = P j (x) + Γ j (x) in B r , whereũ j (x) = u(rx) jr κ j is a rescaling of u at 0, P j is a harmonic polynomial of degree l ≤ ⌊κ⌋ and |Γ j (x)| ≤ C|x| l+ε , 0 < ε < 1, then P j ≡ 0. To see that 0 ∈ ∂D ∩ ∂{|u| > 0} also gives the same result, we observe that it implies thatũ j = 0 in an open subset A r of the ball B r . This is possible only when P j = Γ j = 0 in A r . Thus, by the unique continuation P j ≡ 0.
Step 5. Consider the standard Weiss energy
W 0 (u * , z, y, s) := 1 s n+2κ−2ˆB s (z) |∇u * | 2 + 2F (y, u * ) − κ s n+2κ−1ˆ∂ Bs(z) |u * | 2 .
By the result in Step 4, there exists a κ-homogeneous blowup u * * of u * at z 0 (i.e., u * * (x) = lim r→0 u * (rx+z0) r κ over a subsequence). From z 0 ∈ ∂D, u * * should have a nonempty interior of the zero-set {|u * * | = 0} near the origin 0, thus by
Proposition 2 W 0 (u * , z 0 , x 0 , 0+) = M x0 (u * * ) ≥ B x0 .
Moreover, we observe that for any fixed r ≤ 1
W 0 (u * , z 0 , x 0 , 0+) ≤ W 0 (u * , z 0 , x 0 , r) ≤ W (u * , z 0 , x 0 , r) = lim i→∞ W (u, x i + r i z 0 , x 0 , rr i ) ≤ lim i→∞ W (u, x i + r i z 0 , x 0 , s) = W (u, x 0 , x 0 , s)
for any s > 0. Taking s ց 0 and using W (u, x 0 , x 0 , 0+) = B x0 , we obtain that W 0 (u * , z 0 , x 0 , r) = B x0 for 0 < r < 1. Thus u * is a κ-homogeneous function with respect to z 0 . Now, we can apply Proposition 2 to obtain that u * is a half-space solution with respect to z 0 , i.e., u * (· − z 0 ) ∈ H x0 . Since u xi,ri satisfies |u xi,ri (0)| = 0 and the nondegeneracy´B t |∇u xi,ri | 2 + |u xi,ri | q+1 ≥ ε 0 t n+2κ−2 , u * also satisfies the similar equations, and thus z 0 = 0. This implies u * ∈ H x0 , which contradicts (7.1).
Lemma 13. Let C h be a compact subset of R u . For any ε > 0, there is r 0 > 0 such that if x 0 ∈ C h and 0 < r < r 0 , then the κ-homogeneous replacement c x0,r of u satisfies dist(c x0,r , H x0 ) < ε, (7.4) where the distance is measured in the C 1 (B 1 )-norm.
Proof. We claim that for any ε > 0, there is r 0 > 0 such that dist(u x0,r , H x0 ) < ε for any x 0 ∈ C h and 0 < r < r 0 , which readily gives (7.4) (see the proof of Lemma 10 in [2]).
Towards a contradiction we assume that there exist a constant ε 0 > 0 and sequences x j ∈ C h (converging to x 0 ∈ C h ) and r j → 0 such that dist(u xj ,rj , H xj ) ≥ ε 0 .
By a continuity argument, for each θ ∈ (0, 1) we can find a sequence t j < r j such that dist(u xj ,tj , H xj ) = θε 0 . By following the argument in Step 1-2 in the proof of Lemma 8 with u xj ,tj in the place of u t , we can show that up to a subsequence u xj ,tj → u x0 in C 1 loc (R n ; R m ) for some nonzero global solution u x0 ∈ C 1 loc (R n ; R m ) of ∆u x0 = f (x 0 , u x0 ). We remark that the blowup u x0 depends on the sequence {t j }, thus on the choice of 0 < θ < 1.
From dist(u xj ,tj , H xj ) = θε 0 , we can take h xj ∈ H xj such that dist(u xj ,tj , h xj ) ≤ 2θε 0 . For each j we define h j x0 :=
βx 0 βx j h xj ∈ H x0 . Since βx 0 βx j = λ+(x0) λ+(xj) κ/2 → 1, dist(h xj , h j x0 ) ≤ o(|x j − x 0 |). Thus, dist(u x0 , h j x0 ) ≤ dist(u x0 , u xj,tj ) + dist(u xj ,tj , h xj ) + dist(h xj , h j x0 ) ≤ 2θε 0 + o(|x j − x 0 |), and hence dist(u x0 , H x0 ) ≤ lim sup j→∞ dist(u x0 , h j x0 ) ≤ 2θε 0 .
On the other hand, for each h x0 ∈ H x0 we let h xj :=
βx j βx 0 h x0 ∈ H xj so that dist(h xj , h x0 ) ≤ o(|x j − x 0 |). Using dist(u xj,tj , H xj ) = θε 0 again, we obtain dist(u x0 , h x 0 ) ≥ dist(u xj ,tj , h xj ) − dist(h xj , h x0 ) − dist(u xj ,tj , u x0 ) ≥ θε 0 − o(|x j − x 0 |),
and conclude dist(u x0 , H x0 ) ≥ θε 0 .
Therefore, θε 0 ≤ dist(u x0 , H x0 ) ≤ 2θε 0 . For θ > 0 small enough, this inequality contradicts the isolation property of H x0 in Lemma 11, provided u x0 is homogeneous of degree κ.
To prove the homogeneity, we fix 0 < r < R < ∞ and follow the argument in Step 3 in Lemma 8 to obtain W (u, x j , x j , Rt) − W (u, x j , x j , rt) ≥ˆR r 1 σ n+2κˆ∂ Bσ
x · ∇u xj ,t − κ (1 − b(tσ) α ) u xj,t 2 dS x dσ (7.5) for small t. Recall the standard Weiss energies
W 0 (u, x j , x j , t) = 1 t n+2κ−2ˆB t (xj ) |∇u| 2 + 2F (x j , u) − κ t n+2κ−1ˆ∂ Bt(xj ) |u| 2 .
We have M xj (u xj ,t ) = W 0 (u, x j , x j , t) = W (u, x j , x j , t) + O(t α ), where the second equality holds by Theorem 2. This, together with x j ∈ R u and the monotonicity of W (u, x j , x j , ·), gives that W (u, x j , x j , t) ց B xj as t ց 0.
Applying Dini's theorem gives that for any ε > 0 there exists t 0 = t 0 (ε) > 0 such that B xj ≤ W (u, x j , x j , t) ≤ B xj + ε for any t < t 0 and x j ∈ C h . Then, for large j, we have B xj ≤ W (u, x j , x j , Rt j ) ≤ B xj + ε, and thus
B x0 ≤ lim inf j→∞ W (u, x j , x j , Rt j ) ≤ lim sup j→∞ W (u, x j , x j , Rt j ) ≤ B x0 + ε.
Taking ε ց 0, we obtain that lim j→∞ W (u, x j , x j , Rt j ) = B x0 .
Similarly, we have lim j→∞ W (u, x j , x j , rt j ) = B x0 . They enable us to take j → ∞ in (7.5) to get 0 =ˆR r 1 σ n+2κˆ∂ Bσ |x · ∇u x0 − κu x0 | 2 dS x dσ.
Since 0 < r < R < ∞ are arbitrary, we conclude that x · ∇u x0 − κu x0 = 0, or u x0 is κ-homogeneous in R n .
Lemma 14. Let u be an almost minimizer in B 1 , C h a compact subset of R u , and δ as in Lemma 9. Then for every x 0 ∈ C h there is a unique blowup u x0,0 ∈ H x0 . Moreover, there exist t 0 > 0 and C > 0 such that ∂B1 |u φ x0,t − u x0,0 | ≤ Ct δ/2 for all 0 < t < t 0 and x 0 ∈ C h .
Proof. By Lemma 10 and Lemma 13, ∂B1 |u φ x0,t − u φ x0,s | ≤ Ct δ/2 , s < t < t 0 .
By the definition of R u , for a subsequence of t j → 0+ we have u x0,tj → u x0,0 ∈ H x0 . From lim tj →0+ φ(t) t κ = 1, we also have u φ x0,tj → u x0,0 . Taking s = t j in the above inequality and passing to the limit, we get ∂B1 |u φ x0,t − u x0,0 | ≤ Ct δ/2 , t < t 0 .
To prove the uniqueness, we letũ x0,0 be another blowup. Then ∂B1 |ũ x0,0 − u x0,0 | = 0.
We proved in Lemma 8 that every blowup is κ-homogeneous in R n , thusũ x0,0 = u x0,0 in R n .
As a consequence of the previous results, we prove the regularity of R u .
Proof of Theorem 3. The proof of the theorem is similar to those of Theorem 3 in [2] and Theorem 1.4 in [4].
Step 1. Let x 0 ∈ R u . By Lemma 12 and Lemma 14, there exists ρ 0 > 0 such that
B 2ρ0 (x 0 ) ⊂ B 1 , B 2ρ0 (x 0 ) ∩ Γ κ (u) = B 2ρ0 (x 0 ) ∩ R u and ∂B1
|u φ x1,r − β x1 max(x · ν(x 1 ), 0) κ e(x 1 )| ≤ Cr δ/2 , for any x 1 ∈ Γ κ (u) ∩ B ρ0 (x 0 ) and for any 0 < r < ρ 0 . We then claim that x 1 −→ ν(x 1 ) and x 1 −→ e(x 1 ) are Hölder continuous of order γ on Γ κ (u) ∩ B ρ1 (x 0 ) for some γ = γ(n, α, q, η) > 0 and ρ 1 ∈ (0, ρ 0 ). Indeed, we observe that for x 1 and x 2 near x 0 and for small r > 0,
∂B1 |β x1 max(x · ν(x 1 ), 0) κ e(x 1 ) − β x2 max(x · ν(x 2 ), 0) κ e(x 2 )| dS x ≤ 2Cr δ/2 +ˆ∂ B1 |u φ x1,r − u φ x2,r | ≤ 2Cr δ/2 + 1 φ(r)ˆ∂ B1ˆ1 0 |∇u(rx + (1 − t)x 1 + tx 2 )||x 1 − x 2 | dt dS x ≤ 2Cr δ/2 + C |x 1 − x 2 | r κ .
Moreover,
∂B1
|β x1 max(x · ν(x 2 ), 0) κ e(x 2 ) − β x2 max(x · ν(x 2 ), 0) κ e(x 2 )| dS x ≤ |β x1 − β x2 |ˆ∂ B1 max(x · ν(x 2 ), 0) κ |e(x 2 )| ≤ C|λ + (x 1 ) κ/2 − λ + (x 2 ) κ/2 | ≤ C|x 1 − x 2 | α .
The above two estimates give
β x1ˆ∂ B1 | max(x · ν(x 1 ), 0) κ e(x 1 ) − max(x · ν(x 2 ), 0) κ e(x 2 )| dS x ≤ C r δ/2 + |x 1 − x 2 | r κ + |x 1 − x 2 | α . Taking r = |x 1 − x 2 | 2 δ+2κ , we get ∂B1 | max(x · ν(x 1 ), 0) κ e(x 1 ) − max(x · ν(x 2 ), 0) κ e(x 2 )| dS x ≤ C|x 1 − x 2 | γ ,
where γ = min{ δ δ+2κ , α}. Combining this with the following estimate (see equation |ν(x 1 ) − ν(x 2 )| + |e(x 1 ) − e(x 2 )| ≤ Cˆ∂ B1 | max(x · ν(x 1 ), 0) κ e(x 1 ) − max(x · ν(x 2 ), 0) κ e(x 2 )|, we obtain the Hölder continuity of x 1 −→ ν(x 1 ) and x 1 −→ e(x 1 ).
Proof. Let B r (x 0 ) ⋐ B 1 and v ∈ u + W 1,2 0 (B r (x 0 )). Without loss of generality assume that x 0 = 0. Then
Br ∇u · ∇(u − v) =ˆB r b · ∇u · (u − v) − λ + |u + | q−1 u + · (u − v) + λ − |u − | q−1 u − · (u − v) =ˆB r b · ∇u · (u − v) − λ + |u + | q+1 − λ − |u − | q+1 + λ + |u + | q−1 u + · v − λ − |u − | q−1 u − · v.
To estimate the integral in the last line, we note that the following estimate was obtained in the proof of Example 1 in [2]: Br b · ∇u · (u − v) ≤ C(n, p, b L p (B1) )r γˆB r |∇u| 2 + |∇v| 2 , γ = 1 − n/p.
In addition, from u + · v ≤ u + · v + ≤ |u + ||v + | and −u − · v ≤ u − · v − ≤ |u − ||v − |, we have
λ + |u + | q−1 u + · v − λ − |u − | q−1 u − · v ≤ λ + |u + | q |v + | + λ − |u − | q |v − | ≤ λ + q 1 + q |u + | 1+q + 1 1 + q |v + | 1+q + λ − q 1 + q |u − | 1+q + 1 1 + q |v − | 1+q , and thus − λ + |u + | q+1 − λ − |u − | q+1 + λ + |u + | q−1 u + · v − λ − |u − | q−1 u − · v ≤ λ + 1 + q |v + | 1+q − |u + | 1+q + λ − 1 + q |v − | 1+q − |u − | 1+q .
Combining the above equality and inequalities yieldŝ Br ∇u · ∇(u − v) ≤ Cr γˆB r |∇u| 2 + |∇v| 2 + 1 1 + qˆB r λ + |v + | 1+q − |u + | 1+q (1/2 − Cr γ )|∇u| 2 + 1 1 + q λ + |u + | 1+q + λ − |u − | 1+q ≤ˆB r (1/2 + Cr γ )|∇v| 2 + 1 1 + q λ + |v + | 1+q + λ − |v − | 1+q .
+ 1 1 + qˆB r λ − |v − | 1+q − |u − | 1+q .
This implies that
(1 − Cr γ )ˆB r |∇u| 2 + 2F (x, u) ≤ (1 + Cr γ )ˆB r |∇v| 2 + 2F (x, v) , and hence we conclude that for 0 < r < r 0 Br |∇u| 2 + 2F (x, u) dx ≤ (1 + Cr γ )ˆB r |∇v| 2 + 2F (x, v) dx, with r 0 and C depending only on n, p, b L p (B1) .
For 0
0< s < 1 small to be chosen later, we define the homogeneous rescaling of uu s (x) := u(sx) s κ , x ∈ B 1 .Recall that u s is an almost minimizer with gauge function ω(r) ≤ M (sr) α . By Lemma 4, we have for all κ − 1 < µ < κ B1 |∇u s | 2 + B1 |u s | 2 ≤ L 2 s , L s := C µ s µ−κ with C µ depending on E(u, 1), µ, n, α, M .
) (rs) κ is an almost minimizer with gauge function ω(ρ) ≤ M (rsρ) α . Thus,
∞
, l ≥ 0, we can iterate and have that (ρ 1 small)
→ 0 ,
0L s → ∞.Moreover, from (4.17) and (4.22), for µ > κ − 1, and |x| ≤ 1/2, l −p l h |(x) ≤ C|x| µ + 7 8 C 0 |x| κ .
and c 0 ≤ ǫ 0 /(2 n+2κ K(ǫ)ω 1/2 n ). Thus, if ω n c q+1 0 < ǫ 0 /2 n+2κ−1 , then E(u x0,r , 1 2 ) < ǫ0
n+2κ−2 , which contradicts Lemma 7.
To compute the last two terms, we use u r · v + r ≤ u + r · v + r ≤ |u + r ||v + r | and Young's inequality to getStep 2. We claim that for every ε ∈ (0, 1), there exists ρ ε ∈ (0, ρ 1 ) such that forIndeed, if (7.6) does not hold, then we can take a sequence Γ κ (u)∩B ρ1 (x 0 ) ∋ x j →x and a sequence y j − x j → 0 as j → ∞ such thatAs we have seen in the proof of Lemma 13, the rescalings u j at x j ∈ R u converge in C 1 loc (R n ) to a κ-homogeneous solution u 0 of ∆u 0 = f (x, u 0 ). By applying Lemma 14 to u j , we can see that u 0 (x) = βx max(x · ν(x), 0) κ e(x). Then, for K := {z ∈ ∂B 1 : z · ν(x) ≤ −ε/2|z|}, we have that z j ∈ K for large j by Step 1. We also consider a bigger compact setK := {z ∈ R n : 1/2 ≤ |z| ≤ 2, z · ν(x) ≤ −ε/4|z|}, and let t := min{dist(K, ∂K), r 0 }, where r 0 = r 0 (n, α, q, M ) is as in Lemma 6, so that B t (z j ) ⊂K. By applying Lemma 7, we obtainHowever, this is a contradiction since u 0 (x) = βxe(x) max(x · ν(x), 0) κ = 0 inK.On the other hand, if (7.7) is not true, then we take a sequence Γ(u) ∩ B ρ1 (x 0 ) ∋ x j →x and a sequence y j − x j → 0 such that |u(y j )| = 0 and (y j − x j ) · ν(x j ) > ε|y j − x j |. For u j , u 0 (x) = βx max(x · ν(x), 0) κ e(x) and z j as above, we will have that u j (z j ) = 0 and z j ∈ K ′ := {z ∈ ∂B 1 : z · ν(x) ≥ ε/2|z|}. Over a subsequence z j → z 0 ∈ K ′ and we have u 0 (z 0 ) = lim j→∞ u j (z j ) = 0. This is a contradiction since the half-space solution u 0 is nonzero in K ′ .Step 3. By rotations we may assume that ν(x 0 ) = e n and e(x 0 ) = e 1 . Fixing ε = ε 0 , by Step 2 and the standard arguments, we conclude that there exists a Lipschitz function g : R n−1 → R such that for some ρ ε0 > 0,Now, taking ε → 0, we can see that Γ(u) is differentiable at x 0 with normal ν(x 0 ). Recentering at any y ∈ B ρε 0 (x 0 ) ∩ Γ(u) and using the Hölder continuity of y −→ ν(y), we conclude that g is C 1,γ . This completes the proof.Appendix A. Example of almost minimizers Example 1. Let u be a solution of the system ∆u + b(x) · ∇u − λ + (x)|u + | q−1 u + + λ − (x)|u − | q−1 u − = 0 in B 1 , where b ∈ L p (B 1 ), p > n, is the velocity field. Then u is an almost minimizer of the functional´ |∇u| 2 + 2F (x, u) dx with a gauge function ω(r) = C(n, p, b L p (B1) )r 1−n/p .
| [] |
[
"POINCARÉ-REEB GRAPHS OF REAL ALGEBRAIC DOMAINS",
"POINCARÉ-REEB GRAPHS OF REAL ALGEBRAIC DOMAINS"
] | [
"Arnaud Bodin ",
"ANDPatrick Popescu-Pampu ",
"Miruna-Ştefana Sorea "
] | [] | [] | An algebraic domain is a closed topological subsurface of a real affine plane whose boundary consists of disjoint smooth connected components of real algebraic plane curves. We study the geometric shape of an algebraic domain by collapsing all vertical segments contained in it: this yields a Poincaré-Reeb graph, which is naturally transversal to the foliation by vertical lines. We show that any transversal graph whose vertices have only valencies 1 and 3 and are situated on distinct vertical lines can be realized as a Poincaré-Reeb graph. | 10.1007/s13163-023-00469-y | [
"https://export.arxiv.org/pdf/2207.06871v2.pdf"
] | 250,526,606 | 2207.06871 | 1c8b3ffb4335067a527aa4f30b8968fddc0ba177 |
POINCARÉ-REEB GRAPHS OF REAL ALGEBRAIC DOMAINS
Arnaud Bodin
Patrick Popescu-Pampu
Miruna-Ştefana Sorea
POINCARÉ-REEB GRAPHS OF REAL ALGEBRAIC DOMAINS
An algebraic domain is a closed topological subsurface of a real affine plane whose boundary consists of disjoint smooth connected components of real algebraic plane curves. We study the geometric shape of an algebraic domain by collapsing all vertical segments contained in it: this yields a Poincaré-Reeb graph, which is naturally transversal to the foliation by vertical lines. We show that any transversal graph whose vertices have only valencies 1 and 3 and are situated on distinct vertical lines can be realized as a Poincaré-Reeb graph.
Introduction
An algebraic domain D is a closed subset of an affine plane, homeomorphic to a surface with boundary, whose boundary C is a union of disjoint smooth connected components of real algebraic plane curves. This paper is dedicated to the study of the geometric shape of algebraic domains.
Context and previous work. In [20,22], the third author studied the non-convexity of the disks D bounded by the connected components C of the levels of a real polynomial function f (x, y) contained in sufficiently small neighborhoods of strict local minima. The principle was to collapse to points the maximal vertical segments contained inside D. This yielded a special type of tree embedded in a topological space homeomorphic to R 2 . It was called the Poincaré-Reeb tree associated to C and to the projection (x, y) → x, and it measured the non-convexity of D. Conversely, given a tree T of a special kind embedded in a plane, [20, Theorem 3.34] presented a construction of a polynomial function f (x, y) with a strict local minimum at (0, 0), whose Poincaré-Reeb tree near (0, 0) is T . The terminology "Poincaré-Reeb" introduced in [20, Definition 2.24] was inspired by a similar construction used in Morse theory, namely by the classical graph introduced by Poincaré in his study of 3-manifolds [15, 1904, Fifth supplement, p. 221], and rediscovered by Reeb [16] in arbitrary dimension. Reeb graphs encode the topology of level sets of real-valued functions on manifolds. Reeb graphs appear as useful tools in the study of singularity theory of differentiable maps; see [18], [14]. For a survey with a view towards applications in computational topology and data visualization, we refer the reader to [17] and references therein. Studies of more general Reeb spaces have been done in several recent works such as [4], [5], [6], [2]. Some very recent works in this area are, for instance, [10], [11]. Applications of Reeb graphs in nonparametric statistics and data analysis are presented for instance in [12].
Poincaré-Reeb graphs of real algebraic domains. In this paper we extend the previous method of study of non-convexity to algebraic domains D in R 2 . When D is compact, the collapsing of maximal vertical segments contained in it yields a finite planar graph which is not necessarily a tree, called the Poincaré-Reeb graph of D relative to the vertical direction. See Figure 1 for a first idea of the definition. The figure also represents a section of the collapsing map above this graph, called a Poincaré-Reeb graph in the source. It is well-defined up to isotopies stabilizing each vertical line. Such a section exists whenever the projection x : R 2 → R is in addition generic relative to the boundary C of D, that is, C has no vertical bitangencies, no vertical inflectional tangencies and no vertical asymptotes. When D is non-compact but the projection x : R 2 → R is still proper in restriction to it, one gets an analogous graph, which has this time at least one unbounded edge. When the properness assumption on the projection is dropped but one assumes instead its genericity relative to C, then one may still define a Poincaré-Reeb graph in the source, again well-defined up to isotopies stabilizing the vertical lines. Notice that the Poincaré-Reeb graph does not live in the same space as D even if the quotient space is homeomorphic to R 2 ; we will work in the context of vertical planes (see Definition 2.3), which is adapted to both the original plane and its quotient.
Finite type domains in vertical planes. In order to be able to use our construction of Poincaré-Reeb graphs for the study of more general subsets of affine planes than algebraic domains, for instance to topological surfaces bounded by semi-algebraic, piecewise smooth or even less regular curves, we give a purely topological description of the setting in which it may be applied. Namely, we define the notion of domain of finite type D inside a vertical plane (P, π): here π : P → R is a locally trivial fibration of an oriented topological surface P homeomorphic to R 2 and D is a closed topological subsurface of P, such that the restriction π | D is proper and the restriction π | C to the boundary C of D has a finite number of topological critical points.
Main theorem. Our main result is an answer in the generic case to the following question: given a transversal graph in a vertical plane (P, π), is it possible to find an algebraic domain whose Poincaré-Reeb graph is isomorphic to it? Namely, we show that each transversal graph whose vertices have valencies 1 or 3 and are situated on distinct levels of π arises up to isomorphism from an algebraic domain in R 2 such that the function x : R 2 → R is generic relative to it. Our strategy of proof is to first realize the graph via a smooth function. Then we recall a Weierstrass-type theorem that approximates any smooth function by a polynomial function and we adapt its use in order to control vertical tangencies. In this way we realize any given generic compact transversal graph as the Poincaré-Reeb graph of a compact algebraic domain. Finally, we explain how to construct non-compact algebraic domains realizing some of the non-compact transversal graphs. Roughly speaking, we do this by adding branches to a compact curve.
Structure of the paper. Section 2 is devoted to the definitions and several general properties of the notions vertical plane, finite type domain, Poincaré-Reeb graph, real algebraic domain and transversal graph in the compact setting. Section 3 is dedicated to the case where the real algebraic domain D is compact and connected. In it we present the main result of our paper, namely the algebraic realization of compact, connected, generic transversal graphs as Poincaré-Reeb graphs of connected algebraic domains (see Theorem 3.5). Section 4 presents the case where D is non-compact and C is connected. Finally, in Section 5 we focus on the general situation, where D may be both non-compact and disconnected.
Acknowledgements. We thank the referees for their useful remarks and references, which improved the quality of our manuscript. In particular, we are grateful for the simplification of the proof of Proposition 2.24, using the Vietoris-Begle theorem [19] and for the references to the C k Weierstrass approximation theorem. We also thank Antonio Lerario for referring us to [13], for a possible approach towards estimating the degree of the real algebraic domains. This work was supported in part by the Labex CEMPI (ANR-11-LABX-0007-01) and by the ANR project LISA (ANR-17-CE40-0023-01). The first author thanks the University of Vienna for his visit at the Wolfgang Pauli Institute. M.-Ş. Sorea would like to thank Antonio Lerario for the very supportive working environment during her postdoc at SISSA, in Trieste, Italy.
Poincaré-Reeb graphs of domains of finite type in vertical planes
Algebraic domains
An affine plane P is a principal homogeneous space under the action of a real vector space of dimension 2. It has a natural structure of real affine surface (the term "affine" being taken now in the sense of algebraic geometry) and also a canonical compactification into a real projective plane. Therefore, one may speak of real-valued polynomial functions f : P → R as well as of algebraic curves in P of given degree. We are interested in the following types of surfaces embedded in affine planes:
Definition 2.1. An algebraic domain is a closed subset D of an affine plane, homeomorphic to a surface with boundary, whose boundary C is a disjoint union of finitely many smooth connected components of real algebraic plane curves.
Example 2.2. Consider the algebraic curve C 1 of equation (f 1 (x, y) = 0) with f 1 (x, y) = y 2 − (x − 1)(x − 2)(x − 3) and C 2 of equation (f 2 (x, y) = 0) with f 2 (x, y) = y 2 − x(x − 4)(x − 5).
Each of these curves has two connected components, a compact one (an oval, denoted by C̄ i ) and a non-compact one. Let D be the ring surface bounded by C̄ 1 and C̄ 2 . By definition, it is an algebraic domain.
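As a quick check (ours, not from the paper), the vertical tangencies of these two curves can be located with a short symbolic computation; the abscissas x = 1, 2 bound the oval of C 1 , and x = 0, 4 bound the oval of C 2 :

```python
import sympy as sp

# Vertical tangencies of (f = 0) are the points where both f and
# df/dy vanish (here df/dy = 2y, so they all lie on the x-axis).
x, y = sp.symbols('x y', real=True)
f1 = y**2 - (x - 1)*(x - 2)*(x - 3)
f2 = y**2 - x*(x - 4)*(x - 5)
for f in (f1, f2):
    sols = sp.solve([f, sp.diff(f, y)], [x, y], dict=True)
    print(sorted(s[x] for s in sols))
# [1, 2, 3]: x = 1, 2 bound the oval of C1; x = 3 starts its unbounded branch
# [0, 4, 5]: x = 0, 4 bound the oval of C2; x = 5 starts its unbounded branch
```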
Domains of finite type in vertical planes
Assume that D is an algebraic domain in R 2 . We will study its non-convexity by collapsing to points the maximal vertical segments contained inside D (see Definition 2.11 below). The image of R 2 by such a collapsing map cannot be identified canonically to R 2 , and it does not even have a canonical structure of affine plane. But in many cases it is homeomorphic to R 2 , it inherits from the starting affine plane R 2 a canonical orientation and the function x : R 2 → R descends to it as a locally trivial topological fibration. This fact motivates the next definition:

Definition 2.3. A vertical plane is a pair (P, π) such that P is a topological space homeomorphic to R 2 , endowed with an orientation, and π : P → R is a locally trivial topological fibration. The map π is called the projection of the vertical plane and its fibers are called the vertical lines of the vertical plane. A vertical plane (P, π) is called affine if P is an affine plane and π is affine, that is, a polynomial function of degree one. The canonical affine vertical plane is (R 2 , x : R 2 → R).
Let (P, π) be a vertical plane. As the projection π is locally trivial over a contractible base, it is globally trivializable. This implies that P is homeomorphic to the Cartesian product R × V , where V denotes any vertical line of (P, π). The assumption that P is homeomorphic to R 2 implies that the vertical lines are homeomorphic to R. We will say that a subset of a vertical line of (P, π) which is homeomorphic to a usual segment of R is a vertical segment. Given a curve in a vertical plane, we may distinguish special points of it:
Definition 2.4. Let (P, π) be a vertical plane and C a curve in it, that is, a closed subset of it which is a topological submanifold of dimension one. The topological critical set Σ top (C) of C consists of the topological critical points of the restriction π | C , which are those points p ∈ C in whose neighborhoods the restriction π | C is not a local homeomorphism onto its image.

Figure 3. Two topological critical points P 1 and P 2 (which are critical points in the differential setting). The inflection point Q is not a topological critical point but is a critical point in the differential setting.
Remark 2.5. If C is an algebraic curve contained in an affine vertical plane, the topological critical set Σ top (C) is contained in the usual critical set Σ diff (C) of π | C , but is not necessarily equal to it. For instance, any inflection point of C with vertical tangency and at which C crosses its tangent line belongs to Σ diff (C) \ Σ top (C) (see Figure 3).
The topological critical set Σ top (C) is a closed subset of C. In the neighborhood of an isolated topological critical point, the curve has a simple behavior: Lemma 2.6. Let (P, π) be a vertical plane and C a curve in it. Let p ∈ C be an isolated topological critical point. Then C lies locally on one side of the vertical line passing through p. Moreover, there exists a neighborhood of p in C, homeomorphic to a compact segment of R, and such that the restrictions of π to both subsegments of it bounded by p are homeomorphisms onto their images.
Proof. Consider a compact arc I of C whose interior is disjoint from Σ top (C). Identify it homeomorphically to a bounded interval [a, b] of R. The projection π becomes a function [a, b] → R devoid of topological critical points in (a, b), that is, a strictly monotonic function. Consider now two such arcs I 1 and I 2 on both sides of p in C. The relative interior of their union I 1 ∪ I 2 is a neighborhood with the stated properties. Moreover, I 1 ∪ I 2 lies on only one side of the vertical line passing through p: otherwise π would map I 1 homeomorphically to [α, x 0 ] and I 2 homeomorphically to [x 0 , β], where x 0 = π(p) is a critical value; as I 1 and I 2 would be on both sides of the vertical line at p, we would have for instance α < x 0 < β, which implies that π : I 1 ∪ I 2 → [α, β] is a homeomorphism, in contradiction with p being a topological critical point of π | C .
As explained above, in this paper we are interested in the geometric shape of algebraic domains relative to a given "vertical" direction. But the way of studying them through the collapse of vertical segments may be extended to other kinds of subsets of real affine planes, for instance to topological surfaces bounded by semi-algebraic, piecewise-smooth or even less regular curves, provided they satisfy supplementary properties relative to the chosen projection. Definition 2.7 below describes the most general context we could find in which the collapsing construction yields a new vertical plane and a finite graph in it, possibly unbounded. It is purely topological, involving no differentiability assumptions.
Definition 2.7. Let (P, π) be a vertical plane. Let D ⊂ P be a closed subset homeomorphic to a surface with non-empty boundary. Denote by C its boundary. We say that D is a domain of finite type in (P, π) if:
(1) the restriction π | D : D → R is proper;
(2) the topological critical set Σ top (C) is finite.

Condition (1) implies that the restriction π | C : C → R is also proper, which means that C has no connected components which are vertical lines or have vertical asymptotes. For instance, consider an algebraic domain contained in the positive quadrant of the canonical vertical plane R 2 , limited by two distinct level curves of the function xy (see the middle drawing of Figure 4). It satisfies condition (2) as it has no topological critical points, but as C has a vertical asymptote (the y-axis), it does not satisfy condition (1), therefore it is not a domain of finite type. Note that condition (1) is stronger than the properness of π | C . For instance, the upper half-plane in (R 2 , x) does not satisfy condition (1), but x | C is proper for it (see the right drawing of Figure 4).
We distinguish two types of topological critical points on the boundaries of domains of finite type:
Definition 2.9. Let (P, π) be a vertical plane and D ⊂ P a domain of finite type, whose boundary is denoted by C. A topological critical point of C is called:
- an interior topological critical point of D if the vertical line passing through it lies locally inside D;
- an exterior topological critical point of D if the vertical line passing through it lies locally outside D.

One has the following consequence of Definition 2.7:
Proposition 2.10. Let (P, π) be a vertical plane and D ⊂ P a domain of finite type. Denote by C its boundary. Then:
(1) each topological critical point of π | C is either interior or exterior in the sense of Definition 2.9; (2) the fibers of the restriction π | D : D → R are homeomorphic to finite disjoint unions of compact segments of R; (3) the curve C has a finite number of connected components. Proof.
(1) This follows directly from Lemma 2.6 and Definition 2.9.
(2) Let us consider a point x 0 ∈ R. By Definition 2.7 (1), since the set {x 0 } is compact, we obtain that the fiber π −1 | D (x 0 ) is compact. Let now p be a point of this fiber. By looking successively at the cases where p ∈ D \ C, p ∈ C \ Σ top (C), p is an interior and p is an exterior topological critical point, we see that there exists a compact vertical segment K p , neighborhood of p in the vertical line π −1 (x 0 ), such that π −1 | D (x 0 ) ∩ K p is a compact vertical segment. As π −1 | D (x 0 ) is compact, it may be covered by a finite collection of such segments K p . This implies that π −1 | D (x 0 ) is a finite union of vertical segments (some of which may be points).
(3) Let ∆ top (C) ⊂ R be the topological critical image of π| C , that is, the image π(Σ top (C)) of the topological critical set. As by Definition 2.7, Σ top (C) is finite, ∆ top (C) is also finite.
Therefore, its complement R \ ∆ top (C) is a finite union of open intervals I i . As π | D is proper, this is also the case of π | C . Therefore, for every such interval I i the preimage π −1 | C (I i ) is a finite union of arcs. This implies that C is a finite union of arcs and points, therefore it has a finite number of connected components.
Collapsing vertical planes relative to domains of finite type
Next definition formalizes the idea of collapsing the maximal vertical segments contained in a domain of finite type, mentioned at the beginning of Subsection 2.2.
Definition 2.11. Consider a vertical plane (P, π) and let D ⊂ P be a domain of finite type. We say that two points P and Q of P are vertically equivalent relative to D, denoted P ∼ D Q, if the following two conditions hold:
- P and Q are on the same fiber of π, that is, π(P ) = π(Q) =: x 0 ∈ R;
- either the points P and Q are on the same connected component of π −1 (x 0 ) ∩ D, or P = Q ∉ D.
Denote by P̃ the quotient P/∼ D of P by the vertical equivalence relation relative to D. We call it the D-collapse of P. The associated quotient map ρ D : P → P̃ is called the collapsing map relative to D.
Figure 7. The points P and Q are vertically equivalent relative to D: P ∼ D Q. However, P and Q are not equivalent to R.
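To make Definition 2.11 concrete, here is a hypothetical numerical sketch (ours, not from the paper) for the ring domain D of Example 2.2, bounded by the two ovals: it counts the connected components of the fiber x −1 (x 0 ) ∩ D, that is, the number of points to which the fiber collapses over x 0 :

```python
import numpy as np

f1 = lambda x, y: y**2 - (x - 1)*(x - 2)*(x - 3)
f2 = lambda x, y: y**2 - x*(x - 4)*(x - 5)

def in_D(x, y):
    # D = (closed disk bounded by the oval of C2) minus
    #     (open disk bounded by the oval of C1)
    inside_oval2 = (0 <= x <= 4) and f2(x, y) <= 0
    inside_oval1 = (1 <= x <= 2) and f1(x, y) <= 0
    return inside_oval2 and not inside_oval1

def fiber_components(x0, n=4001):
    ys = np.linspace(-8.0, 8.0, n)
    mask = np.array([in_D(x0, y) for y in ys])
    # number of maximal runs of True = number of vertical segments
    return int(np.count_nonzero(np.diff(mask.astype(int)) == 1) + mask[0])

for x0 in (0.5, 1.5, 3.5):
    print(x0, fiber_components(x0))   # -> 1, 2, 1
```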
Next proposition shows that the D-collapse of P is naturally a new vertical plane, which is the reason why we introduced this notion in Definition 2.3.
Proposition 2.12. Let (P, π) be a vertical plane and D be a domain of finite type in it. Consider the collapsing map ρ D : P → P̃ relative to D. Then:
- P̃ is homeomorphic to R 2 ;
- the projection π descends to a function π̃ : P̃ → R;
- ρ D is a homeomorphism from P \ D onto its image;
- if one endows P̃ with the orientation induced from that of P by the previous homeomorphism, then (P̃, π̃) is again a vertical plane, and the following diagram is commutative: π = π̃ ◦ ρ D .
The proof of Proposition 2.12 is similar to that of [22, Proposition 4.3].
The Poincaré-Reeb graph of a domain of finite type
We introduce now the notion of Poincaré-Reeb set associated to a domain of finite type D in a vertical plane (P, π). Whenever P is an affine plane and π is an affine function, its role is to measure the non-convexity of D in the direction of the fibers of π.
Definition 2.13. Let (P, π) be a vertical plane and D ⊂ P be a domain of finite type. The Poincaré-Reeb set of D is the quotient D̃ := D/∼ D , seen as a subset of the D-collapse P̃ of P in the sense of Definition 2.11.
The Poincaré-Reeb set from Definition 2.13 has a canonical structure of graph embedded in the vertical plane (P̃, π̃), a fact which may be proved similarly to [22, Theorem 4.6]. Let us explain first how to get the vertices and the edges of D̃.

Definition 2.14. Let D be a domain of finite type in a vertical plane (P, π), and let C be its boundary. A vertex of the Poincaré-Reeb set D̃ is an element of ρ D (Σ top (C)). A critical segment of D is a connected component of a fiber of π | D containing at least one element of Σ top (C). The bands of D are the closures of the connected components of the complement in D of the union of critical segments. An edge of D̃ is the image ρ D (R) of a band R of D (see Figure 8).

Each critical segment is either an exterior topological critical point in the sense of Definition 2.9 or a non-trivial segment containing a finite number of interior topological critical points in its interior (see Figure 9 for an example with two such points). Next definition, motivated by Proposition 2.16 below, introduces a special type of subgraphs of vertical planes:
Definition 2.15. Let (P, π) be a vertical plane. A transversal graph in (P, π) is a closed subset G of P partitioned into finitely many points called vertices and subsets homeomorphic to open segments of R called open edges, such that:
(1) each edge, that is, the closure Ē of an open edge E, is homeomorphic to a closed segment of R and Ē \ E consists of 0, 1 or 2 vertices; (2) the edges are topologically transversal to the vertical lines, that is, the restriction of π to each edge is a homeomorphism onto its image in R;
(3) the restriction π | G : G → R is proper.
A transversal graph is called generic if its vertices are of valency 1 or 3 and if distinct vertices lie on distinct vertical lines.
Any transversal graph is homeomorphic to the complement of a subset of the set of vertices of valency 1 inside a usual finite (compact) graph. This is due to the fact that some of its edges may be unbounded, in either one or both directions. Condition (3) from Definition 2.15 avoids G having unbounded edges which are asymptotic to a vertical line of π. Note that we allow G to be disconnected and the set of vertices to be empty. In this last case, G is a finite union of pairwise disjoint open edges, each of them being sent by π homeomorphically onto R.
Here is the announced description of the canonical graph structure of the Poincaré-Reeb sets of domains of finite type in vertical planes:

Proposition 2.16. Let D be a domain of finite type in a vertical plane (P, π). Then each edge of the Poincaré-Reeb set D̃ in the sense of Definition 2.14 is homeomorphic to a closed segment of R. Endowed with its vertices and edges, D̃ is a transversal graph in (P̃, π̃), without vertices of valency 2.
The proof is straightforward using Proposition 2.10. For an example, see the graph of Figure 8. Proposition 2.16 allows to give the following definition:

Definition 2.17. Let D be a domain of finite type in a vertical plane (P, π). Its Poincaré-Reeb graph is the Poincaré-Reeb set D̃ seen as a transversal graph in the D-collapse (P̃, π̃) of P in the sense of Definition 2.11, when one endows it with vertices and edges in the sense of Definition 2.14.

The next result explains in which case the Poincaré-Reeb graph of a domain of finite type is generic in the sense of Definition 2.15:

Proposition 2.18. Let D be a domain of finite type in a vertical plane (P, π). Denote by C its boundary. Then the Poincaré-Reeb graph D̃ is a generic transversal graph in (P̃, π̃) if and only if no two topological critical points of C lie on the same vertical line.

Proof. This follows from Definition 2.15, Definition 2.9 and Proposition 2.10 (3). Vertices of valency 1 of the Poincaré-Reeb graph correspond to exterior topological critical points, whereas vertices of valency 3 correspond to interior topological critical points.

This proposition motivates:

Definition 2.19. A domain of finite type in a vertical plane is called generic if no two topological critical points of its boundary lie on the same vertical line.

Below we will define a related notion of generic direction with respect to an algebraic domain (see Definition 2.21). For algebraic domains of finite type, up to a small rotation the vertical direction is generic, see Remark 2.22 below. In other words, for all but a finite number of directions the projection is generic.
Algebraic domains of finite type
Let us consider algebraic domains in the canonical affine vertical plane (R 2 , x) (see Definition 2.3), in the sense of Definition 2.1. Not all of them are domains of finite type. For instance, the closed half-planes or the surface bounded by the hyperbolas (xy = 1) and (xy = −1) are not of finite type, because the restriction of the projection x to the domain is not proper. Next proposition shows that this properness characterizes the algebraic domains which are of finite type, and that it may be checked simply:
Proposition 2.20. Let (P, π) be an affine vertical plane and let D be an algebraic domain in it. Then the following conditions are equivalent:
(1) D is a domain of finite type.
(2) The restriction π | D : D → R is proper.
(3) One fiber of π | D : D → R is compact and the boundary C of D does not contain vertical lines and does not possess vertical asymptotes.
Proof. Let us prove first the implication (2) ⇒ (1). It is enough to show that Σ top (C) is a finite set. The properness of π | D shows that C contains no vertical line. The set of topological critical points being included in the set Σ diff (C) of differentiable critical points of π |C , it is enough to prove that this last set is finite. Consider a connected component C i of C and its Zariski closure C̄ i in P. Let P π (C̄ i ) be its polar curve relative to π (see [20, Definition 2.43]). It is again an algebraic curve in P, of degree smaller than that of the irreducible algebraic curve C̄ i . Therefore, the set C̄ i ∩ P π (C̄ i ) is finite, by Bézout's theorem. But this set contains C i ∩ Σ diff (C), which shows that π |C has a finite number of differentiable critical points on each connected component C i . As C has a finite number of such components, we get that Σ diff (C) is indeed finite.
Let us prove now that (1) ⇒ (3). Since C ⊂ D, we have by the properness condition of Definition 2.7 (1) that C does not contain vertical lines. Moreover, if the boundary C of D possessed a vertical asymptote, then we would obtain a contradiction with Definition 2.7 (1). Finally, since π | D is proper, each of its fibers is compact.
Finally we prove that (3) ⇒ (2). Since the boundary C of D does not contain vertical lines and does not possess vertical asymptotes, the restriction π| C is proper. Moreover, it has a finite number of differentiable critical points, as the above proof of this fact used only the absence of vertical lines among the connected components of C. We argue now similarly to our proof of Proposition 2.10 (3), by subdividing R using the points of the topological critical image Σ top (C). This set is finite, therefore R gets subdivided into finitely many closed intervals. Above each one of them, C consists of finitely many transversal arcs. If one fiber of π | D above such an interval I j is compact, it means that π −1 | D (I j ) is a finite union of bands bounded by pairs of such transversal arcs and compact vertical segments, therefore π | D is proper above I j . In particular, its fibers above the extremities of I j are also compact. In this way we show by progressive propagation from each interval with a compact fiber to its neighbors, that π | D is proper above each interval of the subdivision of R using Σ top (C). This implies the properness of π | D .
Let us explain now a notion of genericity of an affine function on an affine plane relative to an algebraic domain:
Definition 2.21. Let D be an algebraic domain in an affine vertical plane (P, π), and let C be its boundary. The projection π is called generic with respect to D if C does not contain vertical lines and does not possess vertical asymptotes, vertical inflectional tangent lines and vertical multitangent lines (that is, vertical lines tangent to C at least at two points, or at a point of multiplicity greater than two).

Remark 2.22. Note that the affine projection π is generic with respect to D if and only if the restriction of π to C is a proper excellent Morse function, i.e. all the critical points of π |C are of Morse type and are situated on different level sets of π |C . Note also that if the algebraic domain D is moreover of finite type and π is generic with respect to it in the sense of Definition 2.21, then D is generic in the sense of Definition 2.19.
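A small sanity check of Remark 2.22 (our own computation) on the curve C 1 of Example 2.2: the critical points of x |C1 are exactly the vertical tangency points, they are nondegenerate and they lie on distinct vertical lines, so x |C1 is an excellent Morse function:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f1 = y**2 - (x - 1)*(x - 2)*(x - 3)
# critical points of x restricted to (f1 = 0): f1 = 0 and df1/dy = 0
crit = sp.solve([f1, f1.diff(y)], [x, y], dict=True)
values = [s[x] for s in crit]
assert all(f1.diff(y, 2).subs(s) != 0 for s in crit)   # Morse (simple tangents)
assert len(set(values)) == len(values)                 # excellent (distinct levels)
print(sorted(values))   # [1, 2, 3]
```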
Proposition 2.23. Let D be an algebraic domain of finite type in an affine vertical plane (P, π). Assume that π is generic with respect to D in the sense of Definition 2.21. Then its Poincaré-Reeb graph is generic in the sense of Definition 2.15.
Proof. This is a consequence of Proposition 2.18 and Remark 2.22, since by Definition 2.4, the topological critical points of C are among the differential critical points of the vertical projection π| C .
The invariance of the Euler characteristic
In this section we consider only compact domains of finite type. This implies that their boundaries are also compact (see Figure 1 for an example). Next result implies that the Betti numbers of the domain and of its Poincaré-Reeb graph are the same:

Proposition 2.24. Let D be a compact domain of finite type in a vertical plane (P, π). Then the collapsing map ρ D induces a bijection between the connected components of D and those of D̃, and its restriction ρ D : D → D̃ is a homotopy equivalence.

Proof. Connected components. The collapsing map ρ D of Definition 2.11 being continuous, each connected component of D is sent by ρ D to a connected subset of D̃. Those subsets are compact, as images of compact sets by a continuous map. They are moreover pairwise disjoint, by Definition 2.11 of the vertical equivalence relation relative to D. Therefore, they are precisely the connected components of D̃, which shows that ρ D establishes a bijection between the connected components of D and D̃. Homotopy equivalence. We now may assume that D is connected. By definition, for any p ∈ D̃, ρ −1 D (p) is an interval; then the Vietoris-Begle theorem, as stated by Smale in [19], proves that ρ D : D → D̃ induces an isomorphism for the corresponding homotopy groups. By the Whitehead theorem (see [9, Theorem 4.5]), we get a homotopy equivalence between D and D̃.
Note that in Section 5 we will focus on the topology of the boundary curve C of D, in terms of Betti numbers (see Proposition 5.1). The case where D is a disk was considered by the third author in her study of asymptotic shapes of level curves of polynomial functions f (x, y) ∈ R[x, y] near a local extremum (see [20,22]). A direct consequence of Proposition 2.24 is:

Proposition 2.25. If D ⊂ (P, π) is (homeomorphic to) a disk, then the Poincaré-Reeb graph D̃ of D is a tree.
Proof. Proposition 2.24 implies that D̃ is connected and that χ(D̃) = χ(D) = 1. But these two facts characterize the trees among the finite graphs.
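Explicitly, for any finite graph one has χ = V − E = b 0 − b 1 ; since D̃ is connected, b 0 (D̃) = 1, so χ(D̃) = 1 forces b 1 (D̃) = 0, that is, D̃ contains no cycle.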
If the disk D ⊂ (P, π) is an algebraic domain in a vertical affine plane and the projection π is generic with respect to D in the sense of Definition 2.21, then Proposition 2.23 implies that the Poincaré-Reeb graph D̃ is a complete binary tree: each vertex is either of valency 3 (we call it then interior) or of valency 1 (we call it then exterior).

Proposition 2.26. Let D be a generic domain of finite type in a vertical plane (P, π). Then the restriction (ρ D ) | D : D → D̃ of the collapsing map admits a continuous section above D̃, canonical up to isotopies stabilizing each vertical line.

Proof. The genericity assumption means that above each vertex of D̃ there is a unique topological critical point of C. This determines the section of (ρ D ) | D unambiguously on the vertex set of D̃. The preimage of an edge E of D̃ is a band (see Definition 2.14), which is a trivializable fibration with compact segments as fibers over the interior of E. Therefore, one may extend continuously the section from its boundary to the interior of E in a canonical way up to isotopies stabilizing each vertical line (see Figure 11).

Figure 11. Decomposition in bands and choices of paths.
Note that without the genericity assumption, the conclusion of Proposition 2.26 is not necessarily true, as may be checked on Figure 9. Using the notion of vertical equivalence defined in Subsection 2.8 below, one may show that any Poincaré-Reeb graph in the source and the Poincaré-Reeb graph D̃ in the target are vertically isomorphic. As explained above, an advantage of the latter construction is that the Poincaré-Reeb graph in the source lives inside the same plane as the generic finite type domain D.
Another advantage is that one may define Poincaré-Reeb graphs in the source even for algebraic domains which are not of finite type, but for which the affine projection π is assumed to be generic in the sense of Definition 2.21. In those cases the D-collapse of the starting affine plane P is no longer homeomorphic to R 2 .
Vertical equivalence
The following definition of vertical equivalence is intended to capture the underlying combinatorial structure of subsets of vertical planes. That is, we consider that two vertically equivalent such subsets have the same combinatorial type.
Definition 2.28. Let X and X′ be subsets of the vertical planes (P, π) and (P′, π′) respectively. We say that X and X′ are vertically equivalent, denoted by X ≈ v X′, if there exist orientation preserving homeomorphisms Φ : P → P′ and ψ : R → R such that Φ(X) = X′ and the following diagram is commutative: ψ ◦ π = π′ ◦ Φ.
In the sequel we will apply the previous definition to situations when X and X′ are either domains of finite type in the sense of Definition 2.7 or transversal graphs in the sense of Definition 2.15.
- The equivalence preserves the π-order of the critical points: if D ≈ v D′, and if P i , P j are critical points of π |C with π(P i ) < π(P j ), then the corresponding critical points of π′ |C′ , P′ i := Φ(P i ), P′ j := Φ(P j ), verify π′(P′ i ) < π′(P′ j ). This comes from the assumption that the homeomorphisms Φ and ψ involved in Definition 2.28 are orientation preserving.

Theorem 2.29. Let D and D′ be generic domains of finite type homeomorphic to disks in vertical planes, and let G and G′ be their respective Poincaré-Reeb graphs. Then: D ≈ v D′ ⇐⇒ G ≈ v G′.
Example 2.30. Consider the canonical affine vertical plane (R 2 , x) in the sense of Definition 2.3. Then the vertical equivalence preserves the x-order, that is to say, if x(P i ) < x(P j ) then x(P′ i ) < x(P′ j ), where P′ i := Φ(P i ). Notice that the y-order of the critical points may not be preserved. However Φ preserves the orientation on each vertical line, i.e. y → Φ(x 0 , y) is a strictly increasing function.
Example 2.31. Consider again the canonical affine vertical plane (R 2 , x) and a generic algebraic domain D in it, homeomorphic to a disc. Denote C = ∂D. It is homeomorphic to a circle. Then the set of critical points of π |C (which are the same as the topological critical points, by the genericity assumption) yields a permutation. To explain that, we will define two total orders on the set of critical points. The first order enumerates {P i } in a circular manner following C, obtained by following the curve, starting with the point with the smallest x coordinate, the curve being oriented as the boundary of D. The second order is obtained by ordering the abscissas x(P i ) using the standard order relation on R. Now, as explained by Knuth (see [8, page 17], [23, Definition 4.21], [21, Section 1]), two total order relations on a finite set give rise to a permutation σ: in our case, σ(i) is the rank of x(P i ) in the ordered list of all abscissas. The vertical equivalence preserves the permutation: if D ≈ v D′ then σ = σ′. However, the reverse implication could be false, as shown in the picture below, which shows two generic real algebraic domains homeomorphic to discs with the same permutation, given in two-row notation by (1 2 3 4 5 6 / 1 5 3 6 2 4), but which are not vertically equivalent, as may be seen by considering their Poincaré-Reeb trees.

Proof of Theorem 2.29. - ⇒. A pair of homeomorphisms (Φ, ψ) realizing the equivalence D ≈ v D′ sends the maximal vertical segments contained in D onto the maximal vertical segments contained in D′, hence it descends to homeomorphisms between the collapsed planes, sending G onto G′. Therefore, by Definition 2.28, G ≈ v G′.
- ⇐. The keypoint is to reconstruct the topology of a generic domain of finite type D homeomorphic to a disk (and of its boundary C) from its Poincaré-Reeb graph G. To this end, one may construct a kind of tubular neighborhood D̂ of G, obtained by thickening it using vertical segments (see Figure 13). Then D̂ is vertically equivalent to D. Now suppose that G ≈ v G′ and let Φ̃ : P̃ → P̃′ be a homeomorphism inducing this equivalence. This homeomorphism induces also a vertical equivalence of convenient such thickenings, hence yields the equivalence D ≈ v D′.
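Returning to Example 2.31, the permutation σ is easily computed from the abscissas of the critical points listed in their circular order along C; here is a minimal sketch (ours, with made-up abscissas, assumed pairwise distinct by genericity):

```python
def knuth_permutation(abscissas):
    """sigma(i) = rank (1-based) of x(P_i) among all abscissas."""
    ranks = {x0: r + 1 for r, x0 in enumerate(sorted(abscissas))}
    return [ranks[x0] for x0 in abscissas]

# six critical points met in circular order along C, with these
# (hypothetical) x-coordinates:
print(knuth_permutation([0.0, 4.0, 2.0, 5.0, 1.0, 3.0]))
# -> [1, 5, 3, 6, 2, 4], the permutation of the example
```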
The combinatorial types of generic transversal graphs can be realized by special types of graphs with smooth edges in the canonical affine vertical plane (R 2 , x : R 2 → R):
Proposition 2.32. Any generic transversal graph in a vertical plane is vertically equivalent to a graph in the canonical affine vertical plane, whose edges are moreover smooth and smoothly transversal to the vertical lines.
We leave the proof of this proposition to the reader.
Remark 2.33. We said at the beginning of this subsection that we introduced vertical equivalence as a way to capture the combinatorial aspects of subsets of vertical planes. It is easy to construct a combinatorial object (that is, a structure on a finite set) which encodes the combinatorial type of a generic transversal graph. For instance, given such a graph G, one may number its vertices from 1 to n in the order of the values of the vertical projection π. Then, for each edge α of G, one may remember both its end points a < b and, for each number c ∈ {a + 1, . . . , b − 1}, whether α passes below or above the vertex numbered c.
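One possible concrete encoding of this data (our own choice of data structure, not the paper's) is:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Edge:
    a: int                   # left endpoint; vertices are numbered by pi-order
    b: int                   # right endpoint, with a < b
    above: Dict[int, bool] = field(default_factory=dict)
    # above[c] records whether the edge passes above vertex c, for a < c < b

@dataclass
class TransversalGraph:
    n_vertices: int
    edges: List[Edge]

# A graph with four vertices 1 < 2 < 3 < 4 and one edge from 1 to 4
# passing below vertex 2 and above vertex 3:
g = TransversalGraph(4, [Edge(1, 4, {2: False, 3: True})])
```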
Algebraic realization in the compact connected case
In this section we give the main result of the paper, Theorem 3.5: given a compact connected generic transversal graph G in a vertical plane (see Definition 2.15), we prove that there exists a compact algebraic domain in the canonical affine vertical plane whose Poincaré-Reeb graph is vertically equivalent to G. We will prove a variant of Theorem 3.5 for non-compact graphs in the next section (Theorem 4.6). Using the canonical orientation of the target R of the vertical projection, one may distinguish two kinds of interior and exterior vertices of the graph G (see Figure 14). Our strategy of proof of Theorem 3.5 is as follows:
-we realize the generic transversal graph G as a Poincaré-Reeb graph of a finite type domain defined by a smooth function; -we present a Weierstrass-type theorem that approximates any smooth function by a polynomial function; -we adapt this Weierstrass-type theorem in order to control vertical tangents, and we realize G as the Poincaré-Reeb graph of a generic finite type algebraic domain.
Smooth realization
First, we construct a smooth function f that realizes a given generic transversal graph.

Proposition 3.1. Let G be a compact connected generic transversal graph in the canonical affine vertical plane (R 2 , x). Then there exists a smooth function f : R 2 → R such that the curve C := (f = 0) contains no critical point of f, has only simple vertical tangents, and bounds a compact domain of finite type whose Poincaré-Reeb graph is vertically equivalent to G.

Proof. The idea is to construct first the curve C, then the function f. We construct C by interpolating between local constructions in the neighborhoods of the vertices of G (see Figure 15). Let us be more explicit. We may assume, up to performing a vertical equivalence, that G is a graph with smooth compact edges in the canonical affine vertical plane (R 2 , x), whose edges are moreover smoothly transversal to the verticals (see Proposition 2.32). Let ε > 0 be fixed. Then, one may construct C verifying the following properties:
- C is compact;
- C ⊂ N (G, ε): the curve is contained in the ε-neighborhood of G;
- C has only one vertical tangent associated to each vertex of G;
- all these tangents are ordered in the same way as the vertices of G.
Note that this last condition is automatic once ε is chosen less than half the minimal absolute value |x(P i ) − x(P j )|, where P i and P j are distinct vertices of G.
Once C is fixed, one may construct f by following the steps:
- Bicolor the complement R 2 \ C of C using the numbers ±1, such that neighboring connected components have distinct associated numbers. Denote by σ : R 2 \ C → R the resulting function.
- Choose pairwise distinct annular neighborhoods N i of the connected components C i of C, and diffeomorphisms φ i : N i ≃ C i × (−1, 1) such that p 2 ◦ φ i (the composition of the second projection p 2 : C i × (−1, 1) → (−1, 1) and of φ i ) has on the complement of C i the same sign as σ.
- For each connected component S j of R 2 \ C, consider the open set U j ⊂ S j obtained as the complement of the union of annuli of the form φ −1 i (C i × [−1/2, 1/2]). Then consider the restriction σ j : U j → R of σ to U j .
- Fix a smooth partition of unity on R 2 subordinate to the locally finite open covering consisting of the annuli N i and the sets U j . Then glue the smooth functions p 2 ◦ φ i : N i → R and σ j : U j → R using it.
- The resulting function f satisfies the desired properties.
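To illustrate the gluing step, here is a minimal one-dimensional analogue (ours, not from the paper): the bicoloring σ and a local signed coordinate near C = {−1, 1} are glued by a smooth partition of unity into a smooth function whose zero set is exactly C:

```python
import numpy as np

def smoothstep(s):
    # C-infinity step function: 0 for s <= 0, 1 for s >= 1
    a = np.where(s > 0, np.exp(-1.0 / np.maximum(s, 1e-12)), 0.0)
    b = np.where(s < 1, np.exp(-1.0 / np.maximum(1.0 - s, 1e-12)), 0.0)
    return a / (a + b)

def bump(t):
    # equals 1 for |t| <= 1/2 and 0 for |t| >= 1
    return smoothstep(2.0 * (1.0 - np.abs(t)))

x = np.linspace(-3.0, 3.0, 601)
sigma = np.where(np.abs(x) < 1.0, -1.0, 1.0)    # the +/-1 bicoloring
local = np.where(x < 0.0, -(x + 1.0), x - 1.0)  # signed coordinate near C
rho = bump(x + 1.0) + bump(x - 1.0)             # weight of the two "annuli"
f = rho * local + (1.0 - rho) * sigma           # the glued smooth function

off_C = np.abs(np.abs(x) - 1.0) > 1e-9
assert np.all(np.sign(f[off_C]) == sigma[off_C])  # zero set is exactly C
```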
A Weierstrass-type approximation theorem
Let us first recall the following classical result:

Theorem 3.2 (Stone-Weierstrass theorem, [24]). Let X be a compact Hausdorff space. Let C(X) be the Banach R-algebra of continuous functions from X to R endowed with the norm ‖ · ‖ ∞ . Let A ⊂ C(X) be such that:
- A is a sub-algebra of C(X),
- A separates points (that is, for each x, y ∈ X with x ≠ y there exists f ∈ A such that f (x) ≠ f (y)),
- for each x ∈ X, there exists f ∈ A such that f (x) ≠ 0.

Then A is dense in C(X) relative to the norm ‖ · ‖ ∞ .
We will only use the previous theorem through the following corollary:

Corollary 3.3. Let a < b be real numbers and let f : [a, b] 2 → R be a continuous function. Then, for each ε > 0, there exists a polynomial p ∈ R[x, y] such that |f (x, y) − p(x, y)| < ε for all (x, y) ∈ [a, b] 2 .

Proof. Apply Theorem 3.2 to X := [a, b] 2 and to the algebra A of restrictions to X of the polynomial functions in R[x, y]. This set A satisfies the three conditions of Theorem 3.2 (the last one because 1 X ∈ A), therefore A is dense in C(X), which implies that f can indeed be uniformly arbitrarily well approximated on X by polynomials.

Is Corollary 3.3 sufficient to answer the realization question? No! Indeed, even if it provides a polynomial p(x, y) such that (p(x, y) = 0) lies in a close neighborhood of (f (x, y) = 0), we have no control on the vertical tangents of the algebraic curve (p = 0), whose Poincaré-Reeb graph can therefore be more complicated than the Poincaré-Reeb graph of (f = 0). In the sequel we construct a polynomial p by keeping at the same time a control on the vertical tangents of a suitable level curve of it.
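As a numerical illustration of uniform polynomial approximation on a compact set (our sketch, using Chebyshev interpolation rather than the abstract theorem):

```python
import numpy as np

f = lambda t: np.exp(np.sin(3.0 * t))
t = np.linspace(-1.0, 1.0, 2001)
for deg in (5, 10, 20, 40):
    c = np.polynomial.chebyshev.chebinterpolate(f, deg)
    err = np.max(np.abs(np.polynomial.chebyshev.chebval(t, c) - f(t)))
    print(deg, err)   # the sup-norm error decays rapidly with the degree
```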
Algebraic realization
Proposition 3.4. Let f : R 2 → R be a C 3 function such that C = (f = 0) is a curve which does not contain critical points of f , which has only simple vertical tangents, and is included in the interior of a compact subset K of R 2 . For each δ > 0, there exists a polynomial p ∈ R[x, y] such that (see Figure 16):
-(p = 0) ∩ K ⊂ N (f = 0, δ),
- for each point P 0 ∈ (f = 0) where (f = 0) has a vertical tangent, there exists a unique Q 0 ∈ (p = 0) in the disc N (P 0 , δ) centered at P 0 and of radius δ such that (p = 0) has also a vertical tangency at Q 0 ,
- (p = 0) ∩ K has no vertical tangent except at the former points.

Combining the smooth realization of Proposition 3.1 with the approximation result of Proposition 3.4, we obtain the main result of this section:

Theorem 3.5. Let G be a compact connected generic transversal graph in a vertical plane. Then there exists a compact algebraic domain D in the canonical affine vertical plane (R 2 , x), with x generic with respect to D, whose Poincaré-Reeb graph is vertically equivalent to G.

The referees ask:
Question. Is it possible to estimate the degree of a polynomial defining the algebraic domain in terms of the combinatorics of the graph G?
A referee suggested to use a degree effective version of the C k Weierstrass polynomial approximation theorem, as in [1,Theorem 2]. Furthermore, in [13], the authors construct an algebraic hypersurface that approximates a smooth compact hypersurface with a control of its minimal degree in terms of geometric data of the hypersurface.
Proof of Proposition 3.4
Compact support. Let M > 0 such that (f = 0) ⊂ [−(M − 1), M − 1] 2 (remember that (f = 0) is assumed to be included in a compact set). We replace the function f by a function g with compact support. More precisely, let g : R 2 → R be a function such that:
- g is C 3 ,
- f = g on [−(M − 1), M − 1] 2 ,
- g = 0 outside (−M, M) 2 .

Such a function may be constructed using an adequate partition of unity.
Polynomial approximation of g and f. We need a polynomial p approximating g, but also that some partial derivatives of p approximate the corresponding partial derivatives of g. This can be done by a C k Weierstrass polynomial approximation. More precisely, one can use the density of polynomials in the C k topology, as stated in [3, Proposition 1.3.7]. Nevertheless, we state such a result and emphasise which partial derivatives we need to approximate.

Lemma 3.6. For each ε > 0 there exists a polynomial p ∈ R[x, y] such that, for all (x, y) ∈ [−M, M] 2 :
|p(x, y) − g(x, y)| ≤ (2M) 3 ε,
|∂ y p(x, y) − ∂ y g(x, y)| ≤ (2M) 2 ε,
|∂ x p(x, y) − ∂ x g(x, y)| ≤ (2M) 2 ε,
|∂ y 2 p(x, y) − ∂ y 2 g(x, y)| ≤ 2M ε.
In order to be self-contained we give a short proof, inspired by [7].
Proof. By Corollary 3.3 applied to the function ∂ x ∂ y ∂ y g and to (a, b) = (−M, M), there exists a polynomial p 0 ∈ R[x, y] such that: ∀(x, y) ∈ [−M, M] 2 , |p 0 (x, y) − ∂ x ∂ y ∂ y g(x, y)| < ε. Now our polynomial p ∈ R[x, y] is defined by a triple integration:

p(x, y) = ∫_{−M}^{x} ∫_{−M}^{y} ∫_{−M}^{v′} p 0 (u, v) dv dv′ du.
We start by proving the last inequality. By the Fubini theorem: ∂ y 2 p(x, y) = ∫_{−M}^{x} p 0 (u, y) du. Therefore:

|∂ y 2 p(x, y) − ∂ y 2 g(x, y)| = |∫_{−M}^{x} (p 0 (u, y) − ∂ x ∂ y 2 g(u, y)) du| ≤ ∫_{−M}^{x} ε du ≤ 2M ε.
The first equality is a consequence of the fact that: ∫_{−M}^{x} ∂ x ∂ y 2 g(u, y) du = ∂ y 2 g(x, y) − c(y), where c(y) = ∂ y 2 g(−M, y). As g vanishes outside (−M, M) 2 , for those points we have ∂ y 2 g(x, y) = 0, so that c(y) = 0. The inequality following it results from the definition of the polynomial p 0 . By successive integrations we prove the other inequalities.
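A symbolic sanity check (ours) of the key identity ∂ y 2 p(x, y) = ∫_{−M}^{x} p 0 (u, y) du, on an arbitrarily chosen polynomial p 0 :

```python
import sympy as sp

x, y, u, v, w, M = sp.symbols('x y u v w M', real=True)
p0 = u*v + u**2 - v                      # an arbitrary polynomial in (u, v)
inner = sp.integrate(p0, (v, -M, w))     # integral in v, up to v' = w
mid = sp.integrate(inner, (w, -M, y))    # integral in v'
p = sp.integrate(mid, (u, -M, x))        # integral in u
lhs = sp.diff(p, y, 2)
rhs = sp.integrate(p0.subs(v, y), (u, -M, x))
assert sp.simplify(lhs - rhs) == 0
```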
Inside the square [−M, M] 2 , the curve (p = 0) defined for a sufficiently small ε is in a neighborhood of (f = 0). However, remark that p can also vanish outside the square [−M, M] 2 .
The curve (p = 0) inside the square.
Let us explain the structure of the curve (p = 0) around a point P 0 ∈ (f = 0) where the tangent is not vertical (recall that f = g inside the square [−(M − 1), M − 1] 2 ).
-Fix δ > 0. Let B(P 0 , δ) be a neighborhood of P 0 . On this neighborhood f takes positive and negative values. -Let η > 0 and Q 1 , Q 2 ∈ B(P 0 , δ) such that f (Q 1 ) > η and f (Q 2 ) < −η.
-We choose the ε of Lemma 3.6 such that (2M ) 3 ε < η/2.
- p(Q 1 ) > f (Q 1 ) − (2M) 3 ε > η/2 > 0; a similar computation gives p(Q 2 ) < 0, hence p vanishes at a point Q 0 ∈ [Q 1 Q 2 ] ⊂ B(P 0 , δ).
- Because we supposed ∂ y f ≠ 0 in B(P 0 , δ), we also have ∂ y p ≠ 0. Hence (p = 0) is a smooth simple curve in B(P 0 , δ) with no vertical tangent.

Figure 18. Existence of the curve (p = 0).
Notice that the construction of (p = 0) in B(P 0 , δ) depends on ε, whose choice depends on the point P 0 . To get a common choice of ε, we first cover the compact curve (f = 0) by a finite number of balls B(P 0 , δ) and take the minimum of the ε above.
Vertical tangency.
- Let P 0 = (x 0 , y 0 ) be a point with a simple vertical tangent of (f = 0), that is to say: ∂ y f (x 0 , y 0 ) = 0, ∂ x f (x 0 , y 0 ) ≠ 0, ∂ y 2 f (x 0 , y 0 ) ≠ 0.
-For similar reasons as before, (p = 0) is a non-empty smooth curve passing near P 0 .
-In the following we may suppose that the curve (f = 0) is locally at the left of its vertical tangent, that is to say:
∂ x f (x 0 , y 0 ) × ∂ y 2 f (x 0 , y 0 ) > 0
An example of this behavior is given by f (x, y) := x + y 2 at (0, 0).

Figure 19. Vertical tangent.
-Fix δ > 0. Let B(P 0 , δ) be a neighborhood of P 0 .
- ∂ y p ∼ ∂ y f . Since ∂ y f vanishes at the point P 0 of (f = 0), ∂ y f takes positive and negative values near this point. Let η > 0, and Q 1 ∈ (f = 0) such that ∂ y f (Q 1 ) > η. For a sufficiently small ε, there exists R 1 ∈ (p = 0) such that ∂ y f (R 1 ) > (2/3)η. Therefore ∂ y p(R 1 ) > (1/3)η > 0. For a similar reason there exists R 2 ∈ (p = 0) such that ∂ y p(R 2 ) < 0. Then there exists Q 0 ∈ (p = 0) ∩ B(P 0 , δ) such that ∂ y p(Q 0 ) = 0.
- ∂ x p ∼ ∂ x f . As ∂ x f (P 0 ) ≠ 0, one has also ∂ x p(Q 0 ) ≠ 0; since moreover ∂ y p(Q 0 ) = 0, p has a vertical tangent at Q 0 . - ∂ y 2 p ∼ ∂ y 2 f and they do not vanish near P 0 and Q 0 , therefore the vertical tangent at Q 0 for (p = 0) is simple and has the same type as the vertical tangent at P 0 for (f = 0). - Moreover, as ∂ y 2 p ≠ 0 on (p = 0) ∩ B(P 0 , δ), ∂ y p vanishes only once, hence there is only one vertical tangent in this neighborhood.
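The persistence of the simple vertical tangent can also be observed numerically; here is a hypothetical sketch (ours) for f (x, y) = x + y 2 and a small polynomial perturbation playing the role of p:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = x + y**2
p = f + sp.Rational(1, 100) * (y**3 + x*y + 1)   # a small perturbation of f
# locate the vertical tangency of (p = 0) near the origin:
sol = sp.nsolve((p, sp.diff(p, y)), (x, y), (0, 0))
print(sol)                                        # a point close to (0, 0)
# the tangency is still simple: d^2 p / dy^2 does not vanish there
assert sp.diff(p, y, 2).subs({x: sol[0], y: sol[1]}) != 0
```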
4. Algebraic realization in the non-compact and connected case
Domains of weakly finite type in vertical planes
We consider an algebraic domain D ⊆ R 2 in the sense of Definition 2.1, whose boundary C := ∂D is a connected but non-compact curve. This curve C is homeomorphic to a line and has two branches at infinity (the germs at infinity of the two connected components of C \ K, where K ⊂ C is a non-empty compact arc). Let us suppose that these branches are in generic position w.r.t. the vertical direction: none of them has a vertical asymptote. This leads us to Definition 4.1 below, which represents a generalization of the notion of domain of finite type (see Definition 2.7), since we only ask π | C : C → R to be proper, allowing π | D : D → R not to be so. In turn, the genericity notion is an analog of that introduced in Definition 2.19.
Definition 4.1. Let (P, π) be a vertical plane. Let D ⊂ P be a closed subset homeomorphic to a surface with non-empty boundary. Denote by C its boundary. We say that D is a domain of weakly finite type in (P, π) if:
(1) the restriction π | C : C → R is proper;
(2) the topological critical set Σ top (C) is finite.
Such a domain is called generic if no two topological critical points of C lie on the same vertical line. A Poincaré-Reeb graph of a generic domain of weakly finite type is one of its Poincaré-Reeb graphs in the source in the sense of Subsection 2.7.
For instance, the closed upper half-plane H in (R 2 , x) is a generic domain of weakly finite type (for which Σ top (C) = ∅). Its Poincaré-Reeb graphs are the sections of the restriction x : H → R of the vertical projection.
The combinatorics of non-compact Poincaré-Reeb graphs
Let D be a domain of weakly finite type in a vertical plane (P, π). When C is homeomorphic to a line, we distinguish three cases, depending on the position of D and of the branches of C. We enrich the Poincaré-Reeb graph, by adding arrowhead vertices representing directions of escape towards infinity. Moreover, the unbounded edges are decorated with feathers oriented upward or downward, to indicate the unbounded vertical intervals contained in the domain. Case A. One arrow.
Figure 20. Case A.
In case A, the two branches of C are going in the same direction (to the right or to the left, as defined by the orientations of P and the target line R of π), D being in between. Then we get a Poincaré-Reeb graph with one arrow (and no feathers). Case B. Two arrows.
Assume therefore that C is a non-compact curve. Then π |C has a finite number of topological critical points. We consider a sufficiently large convex compact topological disk K that contains all these critical points. Let D := D ∩ K and C := ∂D . We are then in the compact situation studied before. By Proposition 2.25, the Poincaré-Reeb graph of D is a tree. We add arrows (at each circled dot below) depending on each case.
We extend now the notion of vertical equivalence of transversal graphs from Definition 2.28 to enriched non-compact transversal graphs, requiring that arrowhead vertices are sent to arrowhead vertices. Then we have the following generalization of Theorem 2.29, whose proof is similar:
Proposition 4.4.
Let D, D′ be generic simply connected domains of weakly finite type, with respective Poincaré-Reeb graphs G and G′. Then:
D ≈ v D′ ⇐⇒ G ≈ v G′.
Algebraic realization
We extend our realization theorem (Theorem 3.5) of generic transversal graphs as Poincaré-Reeb graphs of algebraic domains to the simply connected but non-compact case. The idea is to use the realization from the compact setting and consider the union with a line (or a parabola); finally, we take a neighboring curve.
Example 4.5. Here is an example, see Figure 24: starting from an ellipse (f = 0), we consider the union with a line (g = 0); then the unbounded component of (f g = ε) is a non-compact curve with two branches that have the shape of the ellipse on a large arc, if the sign of ε is conveniently chosen.

Figure 24. Adding two branches to an ellipse.
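A toy computation (ours; we use the ellipse f = x 2 /4 + y 2 − 1 and the line g = y + 2 as stand-ins) showing that, for small ε, the neighboring curve (f g = ε) keeps exactly the two vertical tangencies of the ellipse. Eliminating x from the system {f g = ε, ∂ y (f g) = 0} gives the cubic −2y(y + 2) 2 = ε:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
eps = sp.Rational(1, 100)
# On {d(fg)/dy = 0} one has x**2/4 = 1 - 4*y - 3*y**2; substituting
# into f*g = eps yields -2*y*(y + 2)**2 = eps, a cubic in y alone.
cubic = sp.expand(-2*y*(y + 2)**2 - eps)
tangencies = []
for y0 in sp.nroots(cubic):
    if abs(sp.im(y0)) > 1e-12:
        continue
    y0 = sp.re(y0)
    x2 = 4*(1 - 4*y0 - 3*y0**2)     # candidate abscissa squared
    if x2 > 0:                      # otherwise no real tangency point
        tangencies += [(sp.sqrt(x2), y0), (-sp.sqrt(x2), y0)]
print(tangencies)   # exactly two points, close to (2, 0) and (-2, 0)
```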
Theorem 4.6. Let G be a connected, non-compact, generic transversal tree in a vertical plane, with at most three unbounded edges, not all on the same side (left or right), enriched with compatible arrows and feathers (like in cases A, B or C of Section 4.2). Let G′ be the compact tree obtained from G by replacing each arrow by a sufficiently long edge with a circle vertex at the extremity. If G′ can be realized by a connected real algebraic curve, then G can be realized as the Poincaré-Reeb graph of a simply connected, non-compact algebraic domain in (R 2 , x).

Figure 25. An example of a tree G (left) and its corresponding compact tree G′ (right), after the edges ended with arrows have been replaced by long enough edges having circle vertices.
Remark 4.7. Note that in this section we work under the additional hypothesis that the realization from the compact setting is done by a connected real algebraic curve and not by a connected component of a real algebraic curve as it was done in Theorem 3.5. We impose this hypothesis, in order not to have difficulties when taking neighboring curves (see Remark 4.10).
Proof. By hypothesis, there exists a connected real algebraic curve C : (f = 0), f ∈ R[x, y], such that C realizes the newly obtained tree G′. In this proof we consider Poincaré-Reeb graphs in the source in the sense of Subsection 2.7, so that the graph is situated in the same plane as the connected real algebraic curve C : (f = 0).
The key idea of the proof is to choose appropriately a non-compact algebraic curve C′ : (g = 0), g ∈ R[x, y], such that when we take a neighboring level of the product of the two polynomials, say (f g = ε) for a sufficiently small ε > 0, we obtain the desired shape at infinity described by Case A, B or C. Note that the vertices of the Poincaré-Reeb graph are, by definition, transversal intersection points between the polar curve and the level curve. So a small deformation of the level curve will not change this property. Moreover, the neighboring curve must preserve the total preorder between the vertices of the tree. Since there are finitely many such vertices, we can choose ε small enough to ensure this condition holds. Let us give more details depending on the cases A, B or C.

Case A. Our goal is to realize the tree from Case A. Namely, we want to add two new non-compact branches that are unbounded in the same direction (see Figure 26). In order to achieve this, we shall consider the graph (g = 0) of a parabola that is tangent to the curve (f = 0) at the rightmost vertex of G′. Next, consider the real bivariate function f g : R 2 → R. The level curve (f g = 0) is the union of C and C′. Finally, a neighboring curve (f g = ε) realizes the tree G, for ε ≠ 0 sufficiently small.

Case B. In Case B, the goal is to add two new non-compact branches, on opposite sides. First, note that in the presence of two such unbounded branches, the edges decorated by feathers (that is, those edges corresponding to the contraction of unbounded segments) form a linear graph L. The extremities of this linear subgraph are the arrowhead vertices of G, which we replace by two circular vertices to define G′. As before, by hypothesis we can consider a connected real algebraic plane curve C : (f = 0) that realizes the graph G′. Consider a curve (g = 0), algebraic, homeomorphic to a line and situated just below the graph G′. More precisely, (g = 0) is situated in between the linear graph L of G and the lower part of (f = 0) (see Figures 29 and 31). The connected component of the neighboring curve (f g = ε), for a sufficiently small ε ≠ 0, will be the boundary of an algebraic domain that realizes the given tree G. Note that in the above construction there exist other connected components of (f g = ε), for instance in between the curves (f = 0) and (g = 0), but this is allowed by Definition 2.1: we considered the algebraic domain D given by ∂D = C, where C ⊂ (f g = ε).
Case C. The domain considered in Case C is the complement of the algebraic domain, say D A , that we constructed in Case A. Namely, the graph G from Case C is realized by the domain D C , that is the closure of R 2 \ D A . Note that in this case the two domains have the same boundary: ∂D C = ∂D A = (f g = ε) and they are semialgebraic domains.
Remark 4.10. Our construction for Theorem 4.6 needs the graph G′ to be realized by a connected real algebraic curve. Theorem 3.5 only realizes G′ as one connected component C 1 of a real algebraic plane curve C defined by (f = 0); this is not sufficient for our construction. For instance the oval C 1 may be nested inside an oval C 2 ⊂ C; the curve (f g = ε) of the proof of Theorem 4.6 would no longer satisfy the requested conclusion.
Figure 32. Construction that does not satisfy the desired conclusion.
General domains of weakly finite type
We consider the case of D being any real algebraic domain. Each connected component of C = ∂D is either an oval (a component homeomorphic to a circle) or a line (in fact a component homeomorphic to an affine line). An essential question in plane real algebraic geometry is to study the relative position of these components.
Combinatorics
Let (P, π) be a vertical plane and D ⊂ P a generic domain of weakly finite type. The next result shows that the Poincaré-Reeb graph of D allows one to recover the numbers of lines and ovals of C = ∂D.
Proposition 5.1.
- The number of lines in C is #{arrows without feathers} + (1/2) #{arrows with simple feathers}.
- The number of ovals in C is b_0(G) + b_1(G) − c, where b_0(G) is the number of connected components of G, b_1(G) is the number of independent cycles in G, and c is the number of connected components of G having an arrowhead vertex.
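To make the counting rule concrete, here is a minimal Python sketch of Proposition 5.1 (the function and argument names are ours, and the graph data is read off a Poincaré-Reeb graph by hand); it reproduces the counts of Example 5.2 below:

```python
from fractions import Fraction

def count_lines_and_ovals(arrows_plain, arrows_simple_feathers, b0, b1, c):
    """Count the lines and ovals of C = boundary of D from Poincare-Reeb
    graph data, following Proposition 5.1.  arrows_plain and
    arrows_simple_feathers count arrowhead vertices; b0, b1 are the Betti
    numbers of G; c counts components of G containing an arrowhead vertex."""
    lines = Fraction(arrows_plain) + Fraction(arrows_simple_feathers, 2)
    ovals = b0 + b1 - c
    return lines, ovals

# The configuration of Example 5.2 / Figure 33: one plain arrowhead, two
# arrowheads with simple feathers, b0 = 3, b1 = 2, c = 2.
print(count_lines_and_ovals(1, 2, 3, 2, 2))  # -> (Fraction(2, 1), 3)
```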
Interior and exterior graphs of domains of weakly finite type
Let D be a generic domain of weakly finite type in a vertical plane (P, π). Then the closure D^c of P \ D in P is again a domain of weakly finite type, as ∂D = ∂D^c. We say that the Poincaré-Reeb graph G of D is the interior graph of D and that the Poincaré-Reeb graph G^c of D^c is the exterior graph of D. In the next proposition, Poincaré-Reeb graphs are to be considered in the sense of Definition 4.1, that is, as Poincaré-Reeb graphs in the source:

Proposition 5.3. The interior graph G of a domain D of weakly finite type determines its exterior graph G^c.

Proof. The two graphs share the same non-arrowhead vertices. The local situation around a non-arrowhead vertex is in accordance with the trident rule, where an exterior vertex is replaced by an interior vertex and vice-versa (see Figure 35). We also extend this rule to arrowhead vertices. Now we derive G^c from G in two steps. First step: make a local construction of the beginning of the edges of G^c according to the trident rule (see Figure 36).
Figure 1. A Poincaré-Reeb graph: a curve C bounding a real algebraic domain D (left); a Poincaré-Reeb graph in the source (center); the Poincaré-Reeb graph (right).
Figure 2. The algebraic domain D bounded by C_1 and C_2.
Figure 4. One example of a domain of finite type (left). Two examples of domains that are not of finite type (center and right).
A topological critical point of C is called:
- an interior topological critical point of D if the vertical line passing through it lies locally inside D;
- an exterior topological critical point of D if the vertical line passing through it lies locally outside D.
Figure 5. Example of an exterior topological critical point P and an interior topological critical point Q of D.
Figure 6. Different types of point p in π^{−1}(x_0) ∩ D.
Definition 2.14. Let D be a domain of finite type in a vertical plane (P, π), and let C be its boundary. A vertex of the Poincaré-Reeb set D̃ is an element of ρ_D(Σ_top(C)). A critical segment of D is a connected component of a fiber of π|_D containing at least one element of Σ_top(C). The bands of D are the closures of the connected components of the complement in D of the union of critical segments. An edge of D̃ is the image ρ_D(R) of a band R of D (see Figure 8).
Figure 8. Construction of a Poincaré-Reeb set. There are three bands, delimited by four critical segments (three of them are reduced to points). The interior of each edge of the graph is drawn in the same color as the corresponding band.
Figure 9. A critical segment containing two interior topological critical points.
Proposition 2.16 allows us to give the following definition:

Definition 2.17. Let D be a domain of finite type in a vertical plane (P, π). Its Poincaré-Reeb graph is the Poincaré-Reeb set D̃ seen as a transversal graph in the D-collapse (P̃, π̃) of P in the sense of Definition 2.11, when one endows it with vertices and edges in the sense of Definition 2.14.

The next result explains in which case the Poincaré-Reeb graph of a domain of finite type is generic in the sense of Definition 2.15:

Proposition 2.18. Let D be a domain of finite type in a vertical plane (P, π). Denote by C its boundary. Then the Poincaré-Reeb graph D̃ is a generic transversal graph in (P̃, π̃) if and only if no two topological critical points of C lie on the same vertical line.

Proof. This follows from Definition 2.15, Definition 2.9 and Proposition 2.10 (3). Vertices of valency 1 of the Poincaré-Reeb graph correspond to exterior topological critical points, whereas vertices of valency 3 correspond to interior topological critical points.

This proposition motivates:

Definition 2.19. A domain of finite type in a vertical plane is called generic if no two topological critical points of its boundary lie on the same vertical line.
Remark 2.22. Let D be an algebraic domain in an affine plane P. Except for a finite number of directions of their fibers, all affine projections are generic with respect to D (see [23, Theorem 2.13]).
Proposition 2.24. Let D be a compact domain of finite type in a vertical plane. Then D and its Poincaré-Reeb graph D̃ are homotopically equivalent. In particular they have the same number of connected components and the same Euler characteristic.
Figure 10. The Poincaré-Reeb graph of a disk relative to a generic projection is a complete binary tree.

2.7. Poincaré-Reeb graphs in the source

Definition 2.17 of the Poincaré-Reeb graph D̃ of a finite type domain D in a vertical plane (P, π) is canonical. However, it yields a graph embedded in a new vertical plane P̃, which cannot be identified canonically to the starting one. When the Poincaré-Reeb graph is generic in the sense of Definition 2.15, it is possible to lift it to the starting plane.

Proposition 2.26. Let D be a finite type domain in a vertical plane (P, π). If the Poincaré-Reeb graph D̃ is generic, then the map (ρ_D)|_D : D → D̃ admits a section, which is well defined up to isotopies stabilizing each vertical line.
Definition 2.27. Let D be a domain of finite type in a vertical plane (P, π) with generic Poincaré-Reeb graph D̃. Then any section of (ρ_D)|_D : D → D̃ is called a Poincaré-Reeb graph in the source of D. By contrast, the graph D̃ is called the Poincaré-Reeb graph in the target.
Proposition 2.29. Let D and D′ be compact connected domains of finite type in vertical planes, with Poincaré-Reeb graphs G and G′. Assume that both are generic in the sense of Definition 2.19. Then:

D ≈_v D′ ⟺ G ≈_v G′.

Before giving the proof of Proposition 2.29, let us make some remarks:
- Denote C = ∂D and C′ = ∂D′. We have Φ(C) = C′.
- Φ sends the topological critical points {P_i} of C bijectively to the topological critical points {P′_i} of C′. In fact, such a critical point may be geometrically characterized by the local behavior of D relative to the vertical line through this point. A point P is a topological critical point of π|_C if and only if the intersection of D with the vertical line through P is a point in a neighborhood of P, or a segment such that P is in the interior of the segment. The homeomorphism Φ sends the vertical line to a vertical line and D to D′, hence P′ = Φ(P) is a topological critical point of π|_{C′}.
Figure 12. Two non-vertically equivalent real algebraic domains with the same permutation.

Example 2.31 shows that the permutations are not complete invariants of generic domains of finite type homeomorphic to disks, under vertical equivalence. However, by Proposition 2.29, the Poincaré-Reeb graph is a complete invariant for the vertical equivalence.

Proof of Proposition 2.29. ⇒. Suppose D ≈_v D′ and let Φ : P → P′ be a homeomorphism realizing this equivalence. Φ preserves the vertical foliations, hence is compatible with the vertical equivalence relations ∼_D and ∼_{D′} of Definition 2.11. Therefore it induces a homeomorphism Φ̃ : P̃ → P̃′ from the D-collapse of P to the D′-collapse of P′, sending G = D/∼ to G′ = D′/∼. This homeomorphism gets naturally included in a
Figure 13. Thickening in the neighborhood of an exterior vertex (left) and of an interior vertex (right).
Figure 14. The two kinds of interior vertices (on the left) and of exterior vertices (on the right).
Proposition 3.1. Let G be a compact connected generic transversal graph. There exists a C^∞ function f : R^2 → R such that the curve C = (f = 0) does not contain critical points of f and is the boundary of a domain of finite type whose Poincaré-Reeb graph in the canonical vertical plane (R^2, x) is vertically equivalent to G.

Figure 15. A generic compact transversal graph (left) and a local smooth realization (right).
Corollary 3.3. Let f : R^2 → R be a continuous map and a, b ∈ R, a < b. For each ε > 0, there exists a polynomial p ∈ R[x, y] such that:

∀(x, y) ∈ [a, b] × [a, b],  |f(x, y) − p(x, y)| < ε.

Proof. We apply Theorem 3.2 with X = [a, b] × [a, b], A = R[x, y].
Figure 16. Algebraic realization.

We prove this proposition in Subsection 3.4 below. By taking the numbers ε > 0 and δ > 0 appearing in Propositions 3.1 and 3.4 sufficiently small, we get:

Theorem 3.5. Any compact connected generic transversal graph can be realized as the Poincaré-Reeb graph of a connected algebraic domain of finite type.

Proof of the theorem. Starting with a compact connected transversal generic graph G, it can be realized by a smooth function f (Proposition 3.1), which in turn can be replaced by a polynomial map p (Proposition 3.4).
on [−(M − 1), M − 1]^2, g = 0 outside (−M, M)^2, and g does not vanish in the intermediate zone (hatched area below).
Figure 17. Compact support of g.
Lemma 3.6. Let us fix ε > 0. There exists a polynomial p ∈ R[x, y] such that, for all (x, y) ∈ [−M, M]^2:
Figure 21. Case B.

In case B, the two branches have opposite directions. Then we have a Poincaré-Reeb graph with two arrows, each arrow-headed edge being decorated with feathers (above or below), indicating the non-compact vertical intervals of type [0, +∞[ or ]−∞, 0] contained in the domain bounded by that edge.

Figure 22. Case C. Three arrows.

In case C, where the two branches go in the same direction but D is in the "exterior", we have a graph with three arrows: two arrows with simple feathers (for the vertical intervals of type [0, +∞[ or ]−∞, 0]) and one arrow with double feathers (for the vertical intervals of type ]−∞, +∞[).

Remark 4.2.
- We change Definition 2.11, by contracting vertical intervals of D ∩ H (instead of vertical intervals of D): P ∼_D Q if π(P) = π(Q) := x_0 and P and Q are on the same connected component of D ∩ H ∩ π^{−1}(x_0).
- The feather decorations on non-arrowheaded edges can be recovered from the feathers at the other arrows and are omitted.
- The cases A and C are complementary (or dual of each other). We can pass from one to the other by considering C as the boundary of D or of R^2 \ D.
- From this point of view, case B is its own complementary case. More on such complementarities will be said later (see Section 5).
Proposition 4.3. Let D be a simply connected generic domain of weakly finite type in a vertical plane. Then its Poincaré-Reeb graph is a generic transversal binary tree.
Figure 23. Cases A, B, C (from left to right). The filled region is the compact domain of finite type D′ := D ∩ K. A Poincaré-Reeb graph in the source is also displayed. The Poincaré-Reeb graph of D is obtained by replacing each circled vertex by an arrow.
Figure 26. Zoom on the construction for case A.
Example 4.8. Here are the pictures of a graph G (Figure 27) and its realization (Figure 28).
Figure 27. Graph G to be realized.

Figure 28. Case A: adding two new branches in the same direction.
Figure 29. Zoom on the construction for case B.
Example 4.9. Here are the pictures of a graph G (Figure 30) and its realization (Figure 31).
Figure 30. Graph G to be realized.

Figure 31. Case B: adding two new opposite branches.
Example 5.2. Let us consider Figure 33. One arrowhead without feathers and (half of) two arrowheads with simple feathers give a number of two lines. As b_0(G) = 3, b_1(G) = 2 and c = 2, we see that b_0(G) + b_1(G) − c = 3 is indeed the number of ovals in C.
Figure 33. Ovals and lines and their Poincaré-Reeb graph.

Proof. For the first point we just notice that each line contributes to either an arrow without feathers or to two arrows with simple feathers. For the second point, the proof is by induction on the number of ovals. If there are no ovals, then b_0(G) = c and b_1(G) = 0, therefore the formula is valid. Now start with a configuration C = ∂D and add an oval that does not contain any other ovals. Let C′ be the new curve and G′ its graph. Either the interior of the new oval is in D, in which case b_0(G′) = b_0(G) and b_1(G′) = b_1(G) + 1, or the interior of the new oval is in P \ D, in which case b_0(G′) = b_0(G) + 1 and b_1(G′) = b_1(G). In both cases c(G′) = c(G). Conclusion: b_0(G′) + b_1(G′) − c = (b_0(G) + b_1(G) − c) + 1.
Figure 34. Interior and exterior graphs of a domain of weakly finite type.
Figure 35. The trident rule.
Figure 36. The trident rule applied at some vertices (here three vertices are completed).

Second step: complete each edge. It can be done in only one way up to vertical isotopies (see for instance Figure 37).
Figure 37. Completed exterior graph.
References

[1] Thomas Bagby, Leonard Peter Bos, and Norman Levenberg, Jr., Multivariate simultaneous approximation, Constr. Approx. 18 (2002), no. 4, 569-577.
[2] Saugata Basu, Nathanael Cox, and Sarah Percival, On the Reeb spaces of definable maps, Discrete Comput. Geom. 68 (2022), no. 2, 372-405.
[3] Camillo De Lellis, The Masterpieces of John Forbes Nash Jr., The Abel Prize 2013-2017 (Helge Holden and Ragni Piene, eds.), Springer International Publishing, Cham, 2019, pp. 391-499.
[4] Vin de Silva, Elizabeth Munch, and Amit Patel, Categorified Reeb graphs, Discrete Comput. Geom. 55 (2016), no. 4, 854-906.
[5] Tamal K. Dey, Facundo Mémoli, and Yusu Wang, Topological analysis of nerves, Reeb spaces, mappers, and multiscale mappers, 33rd International Symposium on Computational Geometry, LIPIcs. Leibniz Int. Proc. Inform., vol. 77, Schloss Dagstuhl. Leibniz-Zent. Inform., Wadern, 2017, Art. No. 36, 16 pp.
[6] Herbert Edelsbrunner, John Harer, and Amit K. Patel, Reeb spaces of piecewise linear mappings, Computational geometry (SCG'08), ACM, New York, 2008, pp. 242-250.
[7] Nate Elredge, Answer to "On finding polynomials that approximate a function and its derivative", StackExchange, question 555712 (2013).
[8] Étienne Ghys, A singular mathematical promenade, ENS Éditions, Lyon, 2017.
[9] Allen Hatcher, Algebraic topology, Cambridge University Press, Cambridge, 2002.
[10] Naoki Kitazawa, On Reeb graphs induced from smooth functions on 3-dimensional closed manifolds with finitely many singular values, Topol. Methods Nonlinear Anal. 59 (2022), no. 2B, 897-912.
[11] Naoki Kitazawa, On Reeb graphs induced from smooth functions on closed or open manifolds, Methods Funct. Anal. Topology 28 (2022), no. 2, 127-143.
[12] Jussi Klemelä, Level set tree methods, Wiley Interdiscip. Rev. Comput. Stat. 10 (2018), no. 5, e1436, 14 pp.
[13] Antonio Lerario and Michele Stecconi, What is the degree of a smooth hypersurface?, J. Singul. 23 (2021), 205-235.
[14] Yasutaka Masumoto and Osamu Saeki, A smooth function on a manifold with given Reeb graph, Kyushu J. Math. 65 (2011), no. 1, 75-84.
[15] Henri Poincaré, Cinquième complément à l'analysis situs, Rend. Circ. Mat. Palermo 18 (1904), 45-110; English translation in Papers on topology, History of Mathematics, vol. 37, American Mathematical Society, Providence, RI; London Mathematical Society, London, 2010, translated and with an introduction by John Stillwell.
[16] Georges Reeb, Sur les points singuliers d'une forme de Pfaff complètement intégrable ou d'une fonction numérique, C. R. Acad. Sci. Paris 222 (1946), 847-849.
[17] Osamu Saeki, Theory of singular fibers and Reeb spaces for visualization, Topological methods in data analysis and visualization IV, Math. Vis., Springer, Cham, 2017, pp. 3-33.
[18] Osamu Saeki, Reeb spaces of smooth functions on manifolds, Int. Math. Res. Not. IMRN (2022), no. 11, 8740-8768.
[19] Stephen Smale, A Vietoris mapping theorem for homotopy, Proc. Amer. Math. Soc. 8 (1957), 604-610.
[20] Miruna-Ştefana Sorea, The shapes of level curves of real polynomials near strict local minima, Ph.D. thesis, Université de Lille/Laboratoire Paul Painlevé, 2018.
[21] Miruna-Ştefana Sorea, Constructing separable Arnold snakes of Morse polynomials, Port. Math. 77 (2020), no. 2, 219-260.
[22] Miruna-Ştefana Sorea, Measuring the local non-convexity of real algebraic curves, J. Symbolic Comput. 109 (2022), 482-509.
[23] Miruna-Ştefana Sorea, Permutations encoding the local shape of level curves of real polynomials via generic projections, Ann. Inst. Fourier (Grenoble) 72 (2022), no. 4, 1661-1703.
[24] Marshall Harvey Stone, The generalized Weierstrass approximation theorem, Math. Mag. 21 (1948), 167-184, 237-254.
Email address: [email protected]
Email address: [email protected]
Email address: [email protected]
p-adic vertex operator algebras

Cameron Franc and Geoffrey Mason
Department of Mathematics, UC Santa Cruz, Santa Cruz, CA, USA

Res. Number Theory (2023) 9:27. DOI: 10.1007/s40993-023-00433-1. arXiv:2207.07455.
The authors were supported by an NSERC Grant and by the Simons Foundation, Grant #427007, respectively.
Keywords: p-adic vertex operator algebras, p-adic Banach spaces, Serre p-adic modular forms.
Mathematics Subject Classification: 11F85, 17B69, 17B99, 81R10.

Abstract
We postulate axioms for a chiral half of a nonarchimedean 2-dimensional bosonic conformal field theory, that is, a vertex operator algebra in which a p-adic Banach space replaces the traditional Hilbert space. We study some consequences of our axioms leading to the construction of various examples, including p-adic commutative Banach rings and p-adic versions of the Virasoro, Heisenberg, and the Moonshine module vertex operator algebras. Serre p-adic modular forms occur naturally in some of these examples as limits of classical 1-point functions.
Introduction
In this paper we introduce a new notion of p-adic vertex operator algebra. The axioms of p-adic VOAs arise from p-adic completion of the usual or, as we call them in this paper, algebraic VOAs with suitable integrality properties. There are a number of reasons why this direction is worth pursuing, both mathematical and physical. From a mathematical perspective, many natural and important VOAs have integral structures, including the Monster module of [17], whose integrality properties have been studied in a number of recent papers, including [4,7-9]. Completing such VOAs with respect to the supremum norm of an integral basis as in Sect. 7 below provided the model for the axioms that we present here. In a related vein, the last several decades have focused attention on various aspects of algebraic VOAs over finite fields. Here we can cite [1,2,12,13,31,32,40]. It is natural to ask how such studies extend to the p-adic completion, and this paper provides a framework for addressing such questions.
In a slightly different direction, p-adic completions have been useful in a variety of mathematical fields adjacent to the algebraic study of VOAs. Perhaps most impressive is the connection between VOAs and modular forms, discussed in [10,15,36,46]. It is natural to ask to what extent this connection extends to the modern and enormous field of p-adic modular forms, whose study was initiated in [27,43] and which built on earlier work of many mathematicians. We provide some hints of such a connection in Sects. 9 and 11 below.
Finally on the mathematical side, local-global principles have been a part of the study of number-theoretic problems for over a century now, particularly in relation to questions about lattices and their genera. One can think of algebraic VOAs as enriched lattices (see [15] for some discussion and references on this point) and it is natural to ask to what extent local-global methods could be applied to the study of VOAs. For example, mathematicians still do not know how to prove that the Monster module is the unique holomorphic VOA with central charge 24 and whose weight 1 graded piece is trivial. Could the uniqueness of its p-adic completion be more accessible?
When considering how to apply p-adic methods to the study of vertex operator algebras, one is confronted with the question of which p to use. In the algebraic and physical theories to date, most attention has been focused on the infinite, archimedean prime. A finite group theorist might answer that p = 2 is a natural candidate. While focusing on p = 2 could trouble a number theorist, there is support for this suggestion in recent papers such as [22] where the Bruhat-Tits tree for SL_2(Q_2) plays a prominent rôle. One might also speculate that for certain small primes, p-adic methods could shed light on p-modular moonshine [1,2,40] and related constructions [14]. However, a truly local-global philosophy would suggest considering all primes at once. That is, one might consider instead the adelic picture, an idea that has arisen previously in papers such as [19]. Examples of such objects are furnished by completions lim_← V/nV (inverse limit over n ≥ 1) of vertex algebras V defined over Z. After extension by Q, such a completion breaks up into a restricted product over the various p-adic completions. Thus, the p-adic theory discussed below could be regarded as a first step toward a more comprehensive adelic theory. Other than these brief remarks, however, we say no more about the adeles in this paper.
There is a deep connection between the algebraic theory of VOAs and the study of physical fields connected to quantum and conformal field theory (CFT) which has been present in the algebraic theory of VOAs from its very inception. A 'physical' 2-dimensional CFT provides for a pair of Hilbert spaces of states, called the left-and right-moving Hilbert spaces, or 'chiral halves'. We will not need any details here; some of them are presented in [26], or for a more physical perspective one can consult [6,41].
The current mathematical theory of vertex operator algebras is, with some exceptions, two steps removed from this physical picture: one deals exclusively with one of the chiral halves and beyond that the topology is rendered irrelevant by restricting to a dense subspace of states and the corresponding fields, which may be treated axiomatically and algebraically.
The p-adic theory we propose is but one step removed from the physical picture of left-and right-moving Hilbert spaces: although we deal only with chiral objects, they are topologically complete. The metric is nonarchimedean and the Hilbert space is replaced by a p-adic Banach space. Thus our work amounts to an axiomatization of the chiral half of a nonarchimedean 2-dimensional bosonic CFT.
There is a long history of p-adic ideas arising in the study of string theory and related fields, cf. [18,19,21,23,24,44,45] for a sample of some important works. In these papers the conformal fields of interest tend to be complex-valued as opposed to the p-adic valued fields discussed below. Nevertheless, the idea of a p-adic valued field in the sense meant by physicists has been broached at various times, for example in Section 4.4 of [34].
The new p-adic axioms are similar to the usual algebraic ones, as most of the axioms of an algebraic VOA are stable under completion. For example, the formidable Jacobi identity axiom is unchanged, though it now involves infinite sums that need not truncate. On the other hand, in our p-adic story the notion of a field in the sense of VOA theory must be adjusted slightly, and this results in a slightly weaker form of p-adic locality. Otherwise, much of the basic algebraic theory carries over to this new context. This is due to the fact that the strong triangle inequality for nonarchimedean metrics leads to analytic considerations that feel very much like they are part of algebra rather than analysis. Consequently, many standard arguments in the algebraic theory of VOAs can be adapted to this new setting.
The paper is organized as follows: to help make this work more accessible to a broader audience, we include in Sects. 2 and 3 a recollection of some basic facts about algebraic VOAs and p-adic Banach spaces respectively. In Sect. 4 we introduce our axioms for p-adic fields and p-adic vertex algebras and derive some basic results about them. In Sect. 5 we establish p-adic variants of the Goddard axioms [20] as discussed in [35,38]. Roughly speaking, we show that a p-adic vertex algebra amounts to a collection of mutually p-adically local p-adic fields, a statement whose archimedean analog will be familiar to experts. In Sect. 6 we discuss Virasoro structures, define p-adic vertex operator algebras, and construct p-adic versions of the algebraic Virasoro algebra. Section 7 provides tools for constructing examples of p-adic VOAs via completion of algebraic VOAs. In Sect. 8 we elaborate on some aspects of p-adic locality and related topics such as operator product expansions. Finally, Sects. 9 and 11 use the preceding material and results in the literature to study examples of p-adic VOAs. In particular, we establish the following result:
Theorem 1.1 There exist p-adic versions of the Virasoro, Heisenberg and Monster VOAs.
Perhaps as interesting is the fact that in the second and third cases mentioned in Theorem 1.1, the character maps (i.e., 1-point correlation functions, or graded traces) for the VOAs extend by continuity to character maps giving rise to p-adic modular forms as defined in [43]. A noteworthy fact about the Heisenberg algebra is that while the quasi-modular form E_2/η is the graded trace of a state in the Heisenberg algebra, if we ignore the normalizing factor of η^{−1}, then this is a genuine p-adic modular form à la Serre [27,43]. In this sense, the p-adic modular perspective may be more attractive than the algebraic case where one must incorporate quasi-modular forms into the picture. See Sects. 9, 10 and 11 for more details on these examples. In particular, in Sect. 10 we show, among other things, that the image of the p-adic character map for the p-adic Heisenberg algebra contains the p-stabilized Eisenstein series of weight 2,
G_2^* := (p^2 − 1)/24 + ∑_{n≥1} σ*(n) q^n,
with notation as in [43], so that σ*(n) denotes the divisor sum function over divisors of n coprime to p. It is an interesting problem to determine the images of these p-adic character maps, a question we hope to return to in the near future.
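For concreteness, the q-expansion of G_2^* is easy to compute directly from this definition. The following Python sketch (the function names are ours, and the naive divisor loop is only for illustration) lists the first few coefficients for p = 5:

```python
from fractions import Fraction

def sigma_star(n, p):
    """Sum of the divisors of n that are coprime to p (Serre's sigma^*)."""
    return sum(d for d in range(1, n + 1) if n % d == 0 and d % p != 0)

def G2_star_coeffs(p, num_terms):
    """First q-expansion coefficients of the p-stabilized Eisenstein series
    G_2^* = (p^2 - 1)/24 + sum_{n>=1} sigma^*(n) q^n."""
    coeffs = [Fraction(p * p - 1, 24)]
    coeffs += [Fraction(sigma_star(n, p)) for n in range(1, num_terms)]
    return coeffs

print(G2_star_coeffs(5, 8))
# constant term (25 - 1)/24 = 1, then 1, 3, 4, 7, 1, 12, 8
```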
In a similar way, one can deduce the existence of p-adic versions of many well-known VOAs, such as lattice theories, and theories modeled on representations of affine Lie algebras (WZW models). However, since it is more complicated to formulate definitive statements about the (p-adic) characters of such objects, and in the interest of keeping the discussion to a reasonable length, we confine our presentation to the cases intervening in Theorem 1.1. We hope that they provide a suitable demonstration and rationale for the theory developed below.
The authors thank Jeff Harvey and Greg Moore for comments on a prior version of this paper.
Notation and terminology
Researchers in number theory and conformal field theory have adopted a number of conflicting choices of terminology. In this paper field can mean algebraic field as in the rational, complex, or p-adic fields. Alternately, it can be a field as in the theory of VOAs or physics. Similarly, local can refer to local fields such as the p-adic numbers, or it can refer to physical locality, as is manifest in the theory of VOAs. In all cases, the context will make it clear what usage is intended below.
- If A, B are operators, then [A, B] = AB − BA is their commutator;
- For a rational prime p, Q_p and Z_p are the rings of p-adic numbers and p-adic integers respectively;
- If V is a p-adic Banach space, then F(V) is the space of p-adic fields on V, cf. Definition 4.1;
- The Bernoulli numbers B_k are defined by the power series
z/(e^z − 1) = ∑_{k≥0} (B_k/k!) z^k;
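The coefficients B_k can be extracted from this generating series by multiplying through by e^z − 1 and comparing coefficients, which yields the classical recursion ∑_{j=0}^{k} \binom{k+1}{j} B_j = 0 for k ≥ 1, with B_0 = 1. A minimal Python sketch (the function name is ours) computing the first few values:

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Bernoulli numbers B_0..B_n for the convention z/(e^z - 1) =
    sum_k B_k z^k / k!  (so B_1 = -1/2), computed via the recursion
    sum_{j=0}^{k} C(k+1, j) B_j = 0 for k >= 1."""
    B = [Fraction(1)]
    for k in range(1, n + 1):
        s = sum(comb(k + 1, j) * B[j] for j in range(k))
        B.append(-s / (k + 1))
    return B

print(bernoulli(6))  # 1, -1/2, 1/6, 0, -1/30, 0, 1/42
```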
Primer on algebraic vertex algebras
In this Section we review the basic theory of algebraic vertex algebras over an arbitrary unital, commutative base ring k, and we variously call such a gadget a k-vertex algebra, vertex k-algebra, or vertex algebra over k. Actually, our main interests reside in the cases when k is a field of characteristic 0, or k = Z, Z/p n Z, or the p-adic integers Z p , but there is no reason not to work in full generality, at least at the outset. By algebraic, we mean the standard mathematical theory of vertex algebras based on the usual Jacobi identity (see below) as opposed to p-adic vertex algebras as discussed in the present paper. Good references for the theory over C are [16,30] (see also [36] for an expedited introduction) and the overwhelming majority of papers in the literature are similarly limited to this case. The literature on k-vertex algebras for other k, especially when k is not a Q-algebra, is scarce indeed. There is some work [7][8][9] on the case when k = Z that we shall find helpful-see also [1,2,4,12,13,25,31,32,40]. A general approach to k-vertex algebras is given in [35].
k-vertex algebras
Definition 2.1 An (algebraic) k-vertex algebra is a triple (V, Y, 1) with the following ingredients:
• V is a k-module, often referred to as Fock space;
• Y : V → End_k(V)[[z, z^{−1}]] is k-linear, and we write Y(v, z) := ∑_{n∈Z} v(n) z^{−n−1} for the map v ↦ Y(v, z);
• 1 ∈ V is a distinguished element in V called the vacuum element.
The following axioms must be satisfied for all u, v, w ∈ V :
(1) (Truncation condition) there is an integer n_0, depending on u and v, such that u(n)v = 0 for all n > n_0;
(2) (Creativity) u(−1)1 = u, and u(n)1 = 0 for all n ≥ 0;
(3) (Jacobi identity) Fix any r, s, t ∈ Z and u, v, w ∈ V . Then
∑_{i=0}^{∞} \binom{r}{i} (u(t+i)v)(r+s−i)w = ∑_{i=0}^{∞} (−1)^i \binom{t}{i} ( u(r+t−i)v(s+i)w − (−1)^t v(s+t−i)u(r+i)w ).  (2.1)
Inasmuch as u(n) is a k-linear endomorphism of V called the nth mode of u, we may think of V as a k-algebra equipped with a countable infinity of k-bilinear products u(n)v. The vertex operator (or field) Y(u, z) assembles the modes of u into a formal distribution and we can use an obvious notation Y(u, z)v := ∑_{n∈Z} u(n)v z^{−n−1}. Then the truncation condition says that Y(u, z)v ∈ V[z, z^{−1}][[z]] and the creativity axiom says that Y(u, z)1 = u + O(z).
Some obvious but nevertheless important observations need to be made. For any given triple (u, v, w) of elements in V , the truncation condition ensures that (2.1) is well-defined in the sense that both sides reduce to finite sums. Furthermore, only integer coefficients occur, so that (2.1) makes perfectly good sense for any commutative base ring k.
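The integrality of the coefficients comes from the fact that the generalized binomial coefficients \binom{r}{i} = r(r−1)···(r−i+1)/i! appearing in (2.1) are integers for every integer r, including negative r. A quick Python check (names ours):

```python
from math import prod
from fractions import Fraction

def gen_binom(r, i):
    """Generalized binomial coefficient C(r, i) = r(r-1)...(r-i+1)/i!,
    defined for any integer r (possibly negative) and i >= 0."""
    num = prod(range(r - i + 1, r + 1))
    den = prod(range(1, i + 1))
    q = Fraction(num, den)
    assert q.denominator == 1  # always an integer, even for r < 0
    return int(q)

# e.g. C(-2, i) = (-1)^i (i + 1), which reappears in the proof of Theorem 5.6:
print([gen_binom(-2, i) for i in range(5)])  # [1, -2, 3, -4, 5]
```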
It can be shown as a consequence of these axioms [35] that 1 is a sort of identity element, and more precisely that Y (1, z) = id V .
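In fact, it is well known that any commutative associative unital k-algebra becomes a k-vertex algebra by setting u(−1)v = uv and u(n)v = 0 for n ≠ −1. The following Python sketch (a toy check over k = Z; all names are ours) verifies the Jacobi identity (2.1) numerically in this model:

```python
import random
from math import prod

def binom(r, i):
    # generalized binomial coefficient, any integer r, i >= 0
    return prod(range(r - i + 1, r + 1)) // prod(range(1, i + 1))

def mode(u, n, v):
    # commutative model: u(-1)v = u*v, all other modes vanish
    return u * v if n == -1 else 0

def sign(n):
    return -1 if n % 2 else 1

def lhs(u, v, w, r, s, t, bound=60):
    return sum(binom(r, i) * mode(mode(u, t + i, v), r + s - i, w)
               for i in range(bound))

def rhs(u, v, w, r, s, t, bound=60):
    return sum(sign(i) * binom(t, i)
               * (mode(u, r + t - i, mode(v, s + i, w))
                  - sign(t) * mode(v, s + t - i, mode(u, r + i, w)))
               for i in range(bound))

random.seed(1)
for _ in range(300):
    r, s, t = (random.randint(-6, 6) for _ in range(3))
    assert lhs(2, 3, 5, r, s, t) == rhs(2, 3, 5, r, s, t)
print("Jacobi identity (2.1) holds in the commutative model")
```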
Suppose that U = (U, Y, 1), V = (V, Y, 1) are two k-vertex algebras. A morphism f : U → V is a morphism of k-modules that preserves vacuum elements and all nth products. This latter statement can be written in the form f Y(u, z) = Y(f(u), z) f for all u ∈ U. k-vertex algebras and their morphisms form a category k-Ver.
A left-ideal in V is a k-submodule A ⊆ V such that Y(u, z)a ∈ A[z, z^{−1}][[z]] for all u ∈ V and all a ∈ A. Similarly, A is a right-ideal if Y(a, z)u ∈ A[z, z^{−1}][[z]]. A 2-sided ideal is, of course, a k-submodule that is both a left- and a right-ideal. If A ⊆ V is a 2-sided ideal then the quotient k-module V/A carries the structure of a vertex k-algebra with the obvious vacuum element and nth products. Kernels of morphisms f : U → V are 2-sided ideals and there is an isomorphism of k-vertex algebras U/ker f ≅ im f.
k-vertex operator algebras
A definition of k-vertex operator algebras for a general base ring k is a bit complicated [35], so we will limit ourselves to the standard case where k is a Q-algebra. This will suffice for later purposes where we are mainly interested in considering k = Q p , which is a field of characteristic zero.
Recall the Virasoro Lie algebra with generators L(n) for n ∈ Z together with a central element κ satisfying the bracket relations
[L(m), L(n)] = (m − n) L(m+n) + δ_{m,−n} ((m^3 − m)/12) κ.  (2.2)
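As a sanity check on relation (2.2), here is a minimal Python sketch (a toy model; all names are ours) that implements the bracket on finite linear combinations of the L(n) and κ and verifies an instance of the Lie-algebra Jacobi identity:

```python
from fractions import Fraction
from collections import defaultdict

def bracket(a, b):
    """Bracket of two elements of the Virasoro algebra, each given as a dict
    {n: coeff} for sum_n coeff * L(n), plus the key 'k' for the central
    element kappa; implements relation (2.2)."""
    out = defaultdict(Fraction)
    for m, x in a.items():
        for n, y in b.items():
            if m == 'k' or n == 'k':
                continue  # kappa is central
            out[m + n] += x * y * (m - n)
            if m == -n:
                out['k'] += x * y * Fraction(m**3 - m, 12)
    return {key: v for key, v in out.items() if v}

def jacobi(a, b, c):
    """[a,[b,c]] + [b,[c,a]] + [c,[a,b]], which must vanish."""
    s = defaultdict(Fraction)
    for x, y, z in ((a, b, c), (b, c, a), (c, a, b)):
        for key, v in bracket(x, bracket(y, z)).items():
            s[key] += v
    return {key: v for key, v in s.items() if v}

L = lambda n: {n: Fraction(1)}
print(bracket(L(2), L(-2)))        # 4*L(0) + (1/2)*kappa
print(jacobi(L(3), L(-2), L(-1)))  # {} -- the Jacobi identity holds
```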
In a k-vertex operator algebra the Virasoro relations (2.2) are blended into a k-vertex algebra as follows: Definition 2.2 A k-vertex operator algebra is a quadruple (V, Y, 1, ω) with the following ingredients:
• A k-vertex algebra (V, Y, 1);
• ω ∈ V is a distinguished element called the Virasoro element, or conformal vector; and the following axioms are satisfied:
(1) Y(ω, z) = ∑_{n∈Z} L(n) z^{−n−2}, where the modes L(n) satisfy the relations (2.2) with κ = c·id_V for some scalar c ∈ k called the central charge of V;
(2) there is a direct sum decomposition V = ⊕_{n∈Z} V_n where V_n := {v ∈ V | L(0)v = nv} is a finitely generated k-module, and V_n = 0 for n ≪ 0;
(3) [L(−1), Y(v, z)] = ∂_z Y(v, z) for all v ∈ V.
This definition deserves a lot of explanatory comment -much more than we will provide. That the modes L(n) satisfy the Virasoro relations means, in effect, that V is a module over the Virasoro algebra. Of course it is a very special module, as one sees from the last two axioms. Furthermore, because (V, Y, 1) is a k-vertex algebra, then the Jacobi identity (2.1) must be satisfied by the modes of all elements u, v ∈ V , including for example u = v = ω. For Q-algebras k this is a non-obvious but well-known fact. Indeed, it holds for any k, cf. [35], but this requires more discussion than we want to present here.
Despite their relative complexity, there are large swaths of k-vertex operator algebras, especially in the case when k = C is the field of complex numbers. See the references above for examples.
Lattices in k-vertex operator algebras
Our main source of examples of p-adic VOAs will be completions of algebraic VOAs with suitable integrality properties, codified in the existence of integral forms as defined below. This definition is modeled on [7], but see also [4] for a more general perspective on vertex algebras over schemes.
Let either k = Q or Q p and let A = Z or Z p , respectively.
Definition 2.3 An integral form in a k-vertex operator algebra V is an A-submodule R ⊆ V such that:
(i) R is an A-vertex subalgebra of V; in particular 1 ∈ R;
(ii) R_n := R ∩ V_n is an A-base of V_n for each n;
(iii) there is a positive integer s such that sω ∈ R.
Condition (i) above means in particular that if v ∈ R, then every mode v(n) defines an A-linear endomorphism of R. In particular, if R is endowed with the sup-norm in some graded basis, and if this is extended to V , then the resulting modes are uniformly bounded in the resulting p-adic topology, as required by the definition of a p-adic field, cf. part (1) of Definition 4.1 below.
Remarks on inverse limits
Suppose now that we consider a Z_p-vertex algebra V. For a positive integer k, p^k V is a 2-sided ideal in V, so that the quotient V/p^k V is again a Z_p-vertex algebra. (We could also consider this as a vertex algebra over Z/p^k Z.) We have canonical surjections of Z_p-vertex rings f^m_k : V/p^m V → V/p^k V for m ≥ k and we may consider the inverse limit

lim_← V/p^k V.  (2.3)

This is unproblematic at the level of Z_p-modules. Elements of the inverse limit are
sequences (v_1, v_2, v_3, . . .) such that f^m_k(v_m) = v_k for all m ≥ k. It is natural to define the nth mode of such a sequence to be (v_1(n), v_2(n), v_3(n), . . .). However, this device does not make the inverse limit into an algebraic vertex algebra over Z_p in general. This is because it is not possible, in the algebraic setting, to prove the truncation condition for such modal sequences; see Sect. 9 below for a concrete example. And without the truncation condition, the Jacobi identity becomes meaningless even though in some sense it holds formally. As we shall explain below, these problems can be overcome in a suitable p-adic setting by making use of the p-adic topology. Indeed, this observation provides the impetus for many of the definitions to follow.
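For orientation, the simplest instance of this construction is the classical identification Z_p = lim_← Z/p^k Z: an element of the inverse limit is a sequence of residues compatible under the reduction maps. A small Python sketch (names ours) checks the compatibility condition for the element −1:

```python
def compatible(seq, p):
    """Check that residues (v_1, v_2, ...) with v_k in Z/p^k Z satisfy the
    compatibility f^m_k(v_m) = v_k, i.e. v_m = v_k (mod p^k) for m >= k.
    Here seq[k-1] represents v_k."""
    return all(seq[m] % p**(k + 1) == seq[k]
               for m in range(len(seq)) for k in range(m + 1))

p = 5
minus_one = [p**(k + 1) - 1 for k in range(6)]  # -1 as an element of Z_5
print(compatible(minus_one, p))                  # True
```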
p-adic Banach spaces
A Banach space V over Q_p, or p-adic Banach space, is a complete normed vector space over Q_p whose norm satisfies the ultrametric inequality

|x + y| ≤ sup(|x|, |y|)
for all x, y ∈ V. See the encyclopedic [3] for general background on nonarchimedean analysis with a view towards rigid geometry. Following Serre [42], we shall assume that for every v ∈ V we have |v| ∈ |Q_p|, so that under the standard normalization for the p-adic absolute value, we can write |v| = p^n for some n ∈ Z if v ≠ 0. Note that we drop the subscript p from the valuation notation to avoid a proliferation of such subscripts. Since Q_p is discretely valued, this is the same as Serre's condition (N). We will often omit mention of this condition throughout the rest of this note. It is likely inessential and could be removed, for example if one wished to consider more general nonarchimedean fields.
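For readers less familiar with nonarchimedean norms, the following Python sketch (names ours) computes the p-adic absolute value on integers and tests instances of the ultrametric inequality, including the standard fact that it is an equality whenever |x| ≠ |y|:

```python
from fractions import Fraction

def vp(x, p):
    """p-adic valuation of a nonzero integer x."""
    v = 0
    while x % p == 0:
        x //= p
        v += 1
    return v

def abs_p(x, p):
    """Normalized p-adic absolute value |x|_p = p^(-v_p(x)), with |0|_p = 0."""
    return Fraction(0) if x == 0 else Fraction(1, p ** vp(x, p))

p = 3
for x, y in [(9, 27), (9, 18), (6, 3)]:
    lhs, rhs = abs_p(x + y, p), max(abs_p(x, p), abs_p(y, p))
    print(x, y, lhs <= rhs, lhs == rhs)
# (9, 27): equality, since |9| != |27|; the other pairs are strict.
```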
A basic and well-known consequence of the ultrametric inequality in V that we will use repeatedly is the following:

Lemma 3.1 A series ∑_{n≥0} x_n of elements of V converges if and only if lim_{n→∞} x_n = 0.

Proposition 1 of [42] and the ensuing discussion shows that, up to continuous isomorphism, every p-adic Banach space can be described concretely as follows. Let I be a set and define c(I) to be the collection of families (x_i)_{i∈I} ∈ Q_p^I such that x_i tends to zero in the following sense: for every ε > 0, there is a finite set S ⊆ I such that |x_i| < ε for all i ∈ I\S. Then c(I) has a well-defined supremum norm |x| = sup_{i∈I} |x_i|. Notice that since the p-adic absolute value is discretely valued, this norm takes values in the same value group as Q_p. Serre shows that every p-adic Banach space is continuously isomorphic to a p-adic Banach space of the form c(I).
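Lemma 3.1 can be illustrated in the one-dimensional Banach space V = Q_p: the geometric series ∑_{n≥0} p^n has terms tending to 0 p-adically, hence converges, and its sum is 1/(1 − p). The sketch below (names ours) verifies this by comparing partial sums against the modular inverse of 1 − p:

```python
def geometric_series_check(p, k, terms=50):
    """In Q_p the series sum_{n>=0} p^n converges (its terms tend to 0
    p-adically) and its sum is 1/(1-p): the partial sums agree with the
    inverse of (1 - p) modulo p^k."""
    partial = sum(p**n for n in range(terms)) % p**k
    target = pow(1 - p, -1, p**k)  # modular inverse of 1-p mod p^k
    return partial == target

print(geometric_series_check(7, 10))  # True
```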
Definition 3.2 Let V be a p-adic Banach space. Then a family (e_i)_{i∈I} of elements of V is said to be an orthonormal basis for V provided that every element x ∈ V can be expressed uniquely as a sum x = ∑_{i∈I} x_i e_i with x_i ∈ Q_p tending to 0 with i, such that |x| = sup_{i∈I} |x_i|.
That every p-adic Banach space is of the form c(I), up to isomorphism, is tantamount to the existence of orthonormal bases.
The underlying Fock spaces of vertex algebras over Q p will be p-adic Banach spaces. To generalize the definitions of 1-point functions to the p-adic setting, it will be desirable to work in the context of trace-class operators. We thus recall some facts on linear operators between p-adic Banach spaces. The space Hom(U, V ) is endowed with the usual sup-norm
‖f‖ = sup_{u≠0} |f(u)| / |u|.

Thanks to our hypotheses on the norm groups of U and V it follows that ‖f‖ = sup_{|x|≤1} |f(x)|. This norm furnishes Hom(U, V) with the structure of a p-adic Banach space.
p-adic fields and p-adic vertex algebras
Norms below refer either to the norm of an ambient p-adic Banach space V, or the corresponding induced sup-norm on End(V). We adopt the same notation for both norms and the context will make it clear which is meant.

Definition 4.1 Let V be a p-adic Banach space. A p-adic field on V is a series a(z) := ∑_{n∈Z} a(n) z^{−n−1} with modes a(n) ∈ End(V) such that:
(1) there is a constant M ≥ 0 such that |a(n)b| ≤ M |b| for all n ∈ Z and all b ∈ V;
(2) lim_{n→∞} a(n)b = 0 for every b ∈ V.

We single out a special case of Definition 4.1, of particular interest not only because of its connection with Banach rings (see below) but also because it is satisfied in most, if not all, of our examples.
Definition 4.3
Let V be a p-adic Banach space. A p-adic field a(z) on V is called submultiplicative provided that it satisfies the following qualitatively stronger version of (1):
(1′) |a(n)b| ≤ |a| |b| for all n ∈ Z and all a, b ∈ V.
Remark 4.4
In the theory of nonarchimedean Banach spaces, where each a ∈ V defines a single multiplication by a operator rather than an entire sequence of such operators, the analogues of Property (1) in both Definitions 4.1 and 4.3 above are shown to be equivalent in a certain sense: the constant M of Definition 4.1 can be eliminated at a cost of changing the norm, but without changing the topology. See Proposition 2 of Section 1.2.1 in [3] for a precise statement. We suspect that the same situation pertains in the theory of p-adic vertex algebras. Most of the arguments below are independent of this choice of definition, and so we work mostly with the apparently weaker Definition 4.1.
In the following, when we refer to (p-adic) fields, we will always mean in the sense of Definition 4.1. If the intention is to refer to submultiplicative fields we will invariably say so explicitly. Notice that if a(z) is a field, then we can define a norm
‖a(z)‖ = sup_{n∈Z} ‖a(n)‖,

where as usual ‖a(n)‖ denotes the operator norm:

‖a(n)‖ := sup_{b∈V, b≠0} |a(n)b| / |b|.
Definition 4.5
The space of p-adic fields on V in the sense of Definition 4.1, endowed with the topology arising from the sup-norm defined above, is denoted F(V ), and F s (V ) denotes the subset of submultiplicative fields.
Proposition 4.6 F(V ) is a Banach space over Q p .
Proof Clearly F(V) is closed under rescaling. To show that it is closed under addition, let a(z), b(z) ∈ F(V) be p-adic fields. Property (2) of Definition 4.1 is clearly satisfied by the sum a(z) + b(z). We must show that Property (1) holds as well, so take c ∈ V. Let M_1 and M_2 be the constants of Property (1) arising from a(z) and b(z), respectively. Then we are interested in bounding

|a(n)c + b(n)c| ≤ max(|a(n)c|, |b(n)c|) ≤ max(M_1, M_2) |c|.

Thus Property (1) of Definition 4.1 holds for a(z) + b(z) with M = max(M_1, M_2). This verifies that F(V) is a subspace of End(V)[[z, z^{−1}]].

It remains to prove that F(V) is complete. Let a_j(z) be a Cauchy sequence in F(V). This means that for all ε > 0, there exists N such that for all i, j > N,

sup_{n∈Z} ‖a_i(n) − a_j(n)‖ = ‖a_i(z) − a_j(z)‖ < ε.

In particular, for each n ∈ Z, the sequence (a_j(n))_{j≥0} of elements of End(V) is Cauchy with a well-defined limit a(n) := lim_{j→∞} a_j(n).

We shall show that a(z) = ∑_{n∈Z} a(n) z^{−n−1} is a p-adic field. For a given ε > 0, by definition of the sup-norm on p-adic fields, there exists N such that for all j > N and all n ∈ Z,

‖a(n) − a_j(n)‖ < ε.

Let b ∈ V. Then for any choice of j > N,

|a(n)b| ≤ sup(|a(n)b − a_j(n)b|, |a_j(n)b|) ≤ sup(ε, M_j) |b|,

where M_j is the constant of Property (1) in Definition 4.1 associated to a_j(z). It follows that a(z) satisfies Property (1) of Definition 4.1 with M = sup(ε, M_j), which is indeed independent of b ∈ V.

Let b ∈ V be nonzero and fixed. To show that lim_{n→∞} a(n)b = 0, let ε > 0 be given, and choose j such that ‖a(n) − a_j(n)‖ < ε/|b| for all n ∈ Z. That is,

sup_{c≠0} |a(n)c − a_j(n)c| / |c| < ε / |b|.

Then as above we have

|a(n)b| ≤ sup(|a(n)b − a_j(n)b|, |a_j(n)b|) < sup(ε, |a_j(n)b|).

Since a_j is a p-adic field, there exists N such that for n > N, we have |a_j(n)b| < ε. We thus see that for n > N, we have |a(n)b| < ε. Therefore lim_{n→∞} a(n)b = 0. This verifies that the limit a(z) of the p-adic fields a_j(z) is itself a p-adic field, and therefore F(V) is complete.
Remark 4.7
The set of fields F s (V ) is not necessarily a linear subspace of F(V ), so there is no question of an analog of Proposition 4.6 for submultiplicative fields.
Now suppose that we have a continuous Q p -linear map Y (•, z): V → F(V ).
In the submultiplicative case the continuity hypothesis is automatically satisfied, as in the following result.
Lemma 4.8 Let Y(•, z) : V → F(V) be a linear map associating a submultiplicative p-adic field Y(a, z) to each state a ∈ V. Then Y is necessarily continuous.
Proof By Proposition 2 in Section 2.1.8 of [3], in this setting, continuity and boundedness are equivalent, though we only require the easier direction of this equivalence. We have
‖Y(a, z)‖ = sup_n ‖a(n)‖ = sup_n sup_{|b|=1} |a(n)b| ≤ sup_n sup_{|b|=1} |a| |b| = |a|.
In particular, Y (•, z) is a bounded map and thus continuous.
For every u, v ∈ V , we have u(n)v ∈ V and so there are well-defined modes (u(n)v)(m) arising from the p-adic field Y (u(n)v, z).
The following Lemma concerning these modes will allow us to work with the usual Jacobi identity from VOA theory in this new p-adic context.
Lemma 4.9 Let u, v, w ∈ V. Then for all r, s, t ∈ Z, the infinite sum

∑_{i=0}^{∞} \binom{r}{i} (u(t+i)v)(r+s−i)w − ∑_{i=0}^{∞} (−1)^i \binom{t}{i} ( u(r+t−i)(v(s+i)w) − (−1)^t v(s+t−i)(u(r+i)w) )

converges in V.
Proof Since |n| ≤ 1 for all n ∈ Z by the strong triangle inequality, Lemma 3.1 implies that it will suffice to show that
lim_{i→∞} (u(t+i)v)(r+s−i)w = 0,
lim_{i→∞} u(r+t−i)(v(s+i)w) = 0,
lim_{i→∞} v(s+t−i)(u(r+i)w) = 0.
We first discuss the second limit above, so let M be the constant from Property (1) of Definition 4.1 applied to u. Then we know that
|u(r+t−i)(v(s+i)w)| ≤ M |v(s+i)w|.
By Property (2) of Definition 4.1, this goes to zero as i tends to infinity. This establishes the vanishing of the second limit above, and the third is handled similarly.
The first limit requires slightly more work. Notice that if w = 0 there is nothing to show, so we may assume w ≠ 0. Since lim_{i→∞} u(t+i)v = 0, continuity of Y(•, z) implies that lim_{i→∞} Y(u(t+i)v, z) = 0. Hence, given ε > 0, we can find N such that for all i > N, we have ‖Y(u(t+i)v, z)‖ < ε/|w|. By definition of the norm on p-adic fields, this means that

sup_n |(u(t+i)v)(n)w| < ε.

In particular, taking n = r+s−i, then for i > N we have |(u(t+i)v)(r+s−i)w| < ε as required. This concludes the proof.
The reader should compare the next Definition with Definition 2.1.
Definition 4.10
A p-adic vertex algebra is a triple (V, Y, 1) consisting of a p-adic Banach space V equipped with a distinguished state (the vacuum vector) 1 ∈ V and a p-adic vertex operator, that is, a continuous p-adic linear map
Y : V → F(V ),
written Y(a, z) = ∑_{n∈Z} a(n) z^{−n−1} for a ∈ V, satisfying the following conditions:
(1) (Vacuum normalization) |1| ≤ 1.
(2) (Creativity) We have Y(a, z)1 ∈ a + zV[[z]]; in other words, a(n)1 = 0 for n ≥ 0 and a(−1)1 = a.
(3) (Jacobi identity) Fix any r, s, t ∈ Z and u, v, w ∈ V. Then

∑_{i=0}^{∞} \binom{r}{i} (u(t+i)v)(r+s−i)w = ∑_{i=0}^{∞} (−1)^i \binom{t}{i} ( u(r+t−i)v(s+i)w − (−1)^t v(s+t−i)u(r+i)w ).
Definition 4.11
A p-adic vertex algebra V is said to be submultiplicative provided that every p-adic field Y (a, z) is submultiplicative.
Remark 4.12
If V is a submultiplicative p-adic vertex algebra, then Lemma 4.8 implies that continuity of the state-field correspondence follows from the other axioms.
As in the usual algebraic theory of Sect. 2, the vertex operator Y is sometimes also called the state-field correspondence, and both it and the vacuum vector will often be omitted from the notation. That is, we shall often simply say that V is a p-adic vertex algebra. The vacuum property Y (1, z) = id V follows from these axioms and does not need to be itemized separately. See Theorem 5.5 below for more details on this point.
Remark 4.13
In the formulation of the Jacobi identity we have used the completeness of the p-adic Banach space V to ensure that the infinite sum exists, via Lemma 4.9. If V is not assumed complete, one could replace the Jacobi identity with corresponding Jacobi congruences: fix r, s, t ∈ Z and u, v, w ∈ V. Then the Jacobi congruences insist that for all ε > 0, there exists j_0 such that for all j ≥ j_0, one has

| ∑_{i=0}^{j} \binom{r}{i} (u(t+i)v)(r+s−i)w − ∑_{i=0}^{j} (−1)^i \binom{t}{i} ( u(r+t−i)(v(s+i)w) − (−1)^t v(s+t−i)(u(r+i)w) ) | < ε.
In the presence of completeness, this axiom is equivalent to the Jacobi identity.
Properties of p-adic vertex algebras
In this Subsection we explore some initial consequences of Definition 4.10.
Proposition 4.14 If V is a p-adic vertex algebra, then for a ∈ V we have

|a| ≤ ‖Y(a, z)‖.

In particular, the state-field correspondence Y is an injective closed mapping. If V is furthermore assumed to be submultiplicative, then |a| = ‖Y(a, z)‖ for all a ∈ V.

Proof If a ∈ V then a(−1)1 = a by the creativity axiom, and so

|a| = |a(−1)1| ≤ ‖Y(a, z)‖ |1| ≤ ‖Y(a, z)‖,

where the last inequality follows by vacuum normalization. This proves the first claim, and the second follows easily from this. For the final claim, observe that the proof of Lemma 4.8 shows that when V is submultiplicative, we also have ‖Y(a, z)‖ ≤ |a|. This concludes the proof.
Definition 4.15
A p-adic vertex algebra V is said to have an integral structure provided that for the Z_p-module V_0 := {v ∈ V | |v| ≤ 1}, the following condition holds:
(1) the restriction of the state-field correspondence Y to V_0 defines a map

Y : V_0 → End_{Z_p}(V_0)[[z, z^{−1}]].

Note that 1 ∈ V_0 by definition, and if a ∈ V_0 then Y(a, z)1 ∈ a + zV_0[[z]]. In effect, then, the triple (V_0, 1, Res_{V_0} Y) is a p-adic vertex algebra over Z_p, though we have not formally defined such an object.
The axioms for a p-adic vertex algebra are chosen so that the following holds:
Lemma 4.16
Suppose that V is a p-adic vertex algebra with an integral structure V_0. Then V_0/p^k V_0 inherits a natural structure of algebraic vertex algebra over Z/p^k Z for all k ≥ 0.

Proof For a formal definition of a vertex ring over Z/p^k Z, the reader can consult [35].
Since we do not wish to go into too many details on this point, let us simply point out that if a, b ∈ V_0, then since a(n)b tends to 0 as n grows, it follows that for any k ≥ 1, the reduced series

∑_{n∈Z} a(n)b z^{−n−1} (mod p^k V_0) ∈ (V_0/p^k V_0)[[z, z^{−1}]]

has a finite Laurent tail. Since this is the key difference between algebraic and p-adic vertex rings, one deduces the Lemma from this fact.
p-adic Goddard axioms
In this Section we draw some standard consequences from the p-adic Jacobi identity in (4.10). The general idea is to show that the axioms for a p-adic vertex algebra, especially the Jacobi identity, are equivalent to an alternate set of axioms that are ostensibly more intuitive and easier to recognize and manipulate. In the classical case of algebraic vertex algebras over C, or indeed any commutative base ring k, these are known as Goddard axioms [20,35,38]. At the same time, we develop some facts that we use later. Let V be a p-adic vertex algebra.
Commutator, associator and locality formulas
We begin with some immediate consequences of the Jacobi identity.
Proposition 5.1 (Commutator formula) For all r, s ∈ Z and all u, v, w ∈ V we have

[u(r), v(s)]w = ∑_{i=0}^{∞} \binom{r}{i} (u(i)v)(r+s−i)w.  (5.1)
Proof Take t = 0 in the Jacobi identity.
Proposition 5.2 (Associator formula) For all s, t ∈ Z and all u, v, w ∈ V we have

(u(t)v)(s)w = ∑_{i=0}^{∞} (−1)^i \binom{t}{i} ( u(t−i)v(s+i)w − (−1)^t v(s+t−i)u(i)w ).  (5.2)
Proof Take r = 0 in the Jacobi identity.
The next result requires slightly more work.
Proposition 5.3 (p-adic locality) Let u, v, w ∈ V. Then

lim_{t→∞} (x − y)^t [Y(u, x), Y(v, y)]w = 0.  (5.3)
Proof Because lim_{n→∞} a(n)b = 0, continuity of Y yields lim_{n→∞} Y(a(n)b, z) = 0 in the uniform sup-norm. Thus, as in the last stages of the proof of Lemma 4.9, for any ε > 0 and fixed r, s, i, we have the inequality |(u(t+i)v)(r+s−i)w| < ε for all large enough t.
Apply this observation to the left side of the Jacobi identity to deduce the following:
Let r, s ∈ Z and u, v, w ∈ V . For any ε > 0 there is an integer N such that for all t > N we have
| ∑_{i=0}^{∞} (−1)^i \binom{t}{i} ( u(r+t−i)v(s+i)w − (−1)^t v(s+t−i)u(r+i)w ) | < ε.
By a direct calculation, we can see that the summation on the left side of this inequality is exactly the coefficient of x^{−r−1} y^{−s−1} in (x − y)^t [Y(u, x), Y(v, y)]w.
Thus, we arrive at the statement of p-adic locality, and this concludes the proof.

Definition 5.4 If equation (5.3) holds for two p-adic fields Y(u, x) and Y(v, y), we say that Y(u, x) and Y(v, y) are mutually local p-adic fields. When the context is clear, we will drop the word p-adic from the language.
Vacuum vector
We will prove
Theorem 5.5 Suppose that (V, Y, 1) is a p-adic vertex algebra. Then Y (1, z) = id V .
Proof We have to show that for all states u ∈ V we have
1(n)u = δ_{n,−1} u.  (5.4)
To this end, first we assert that for all s ∈ Z,
u(−1)1(s)1 = 1(s)u. (5.5)
This follows directly from the creativity axiom and the special case of the commutator formula (5.1) in which v = w = 1 and r = −1, t = 0. Now, by the creativity axiom once again, we have 1(s)1 = 0 for s ≥ 0 as well as 1(−1)1 = 1. Feed these inequalities into (5.5) to see that (5.4) holds whenever n ≥ −1.
Now we prove (5.4) for n ≤ −2 by downward induction on n. First we choose u = v = w = 1 and t = −1 in the Jacobi identity and fix r, s ∈ Z, to see that
∞ i=0 r i (1(−1 + i)1)(r + s − i)1 = ∞ i=0 1(r − 1 − i)(1(s + i)1) + 1(s − 1 − i)(1(r + i)1) ,
and therefore since we already know that 1(n)1 = δ n,−1 1 for n ≥ −1 we obtain
1(r + s)1 = ∞ i=0 1(r − 1 − i)(1(s + i)1) + 1(s − 1 − i)(1(r + i)1) (5.6)
Specialize (5.6) to the case r = s = −1 to see that
1(−n)1 = ∞ i=0 1(r − 1 − i)1(s + i)1 (5.7)
By induction, the expression in (5.7) under the summation sign vanishes whenever s + i > −n and s+i = −1, i.e., i > r, i = −s−1. It also vanishes if i < r thanks to the commutator formula (5.1). So, the only possible nonzero contribution arises when i = r and i = −s−1, in which case we obtain 1(−n)1 = 21(−n)1 and therefore 1(−n)1 = 0.
Translation-covariance and the canonical derivation T
The canonical endomorphism T of V is defined by the formula
Y (a, z)1 = a + T (a)z + O(z 2 ), (5.8) that is, T (a) . .= a(−2)1.
The endomorphism T is called the canonical derivation of V because of the next result. Translation-covariance (with respect to T ) may refer to either of the equalities in (5.9) below, but we will usually retain this phrase to mean only the second equality.
Theorem 5. 6 We have T (a)(n) = −na(n − 1) for all integers n ∈ Z. Indeed,
Y (T (a), z) = ∂ z Y (a, z) = [T, Y (a, z)]. (5.9)
Moreover, T is a derivation of V in the sense that for all states u, v ∈ V and all n we have
T (u(n)v) = T (u)(n)v + u(n)T (v).
Proof Take v = 1 and t = −2 in the associator formula (5.2) to get
(u(−2)1)(s)w = \sum_{i=0}^{\infty} (−1)^i \binom{−2}{i} (u(−2 − i)1(s + i)w − 1(s − 2 − i)u(i)w). (5.10)
Now use Theorem 5.5. The expression under the summation is nonzero in only two possible cases, namely s + i = −1 and s − i = 1, and these cannot occur simultaneously.
In the first case (5.10) reduces to
(u(−2)1)(s)w + su(s − 1)w = 0. (5.11)
This is the first stated equality in (5.9). In the second case, when s − i = 1, we get exactly the same conclusion by a similar argument. This proves (5.9) in all cases. Next we will prove that T is a derivation as stated in the Theorem. Use the creativity axiom and the associator formula (5.2) with s = −2 to see that
(u(t)v)(−2)1 = \sum_{i=0}^{\infty} (−1)^i \binom{t}{i} u(t − i)v(−2 + i)1 = u(t)v(−2)1 − t u(t − 1)v,
Using the special case (5.11) of (5.9) that has already been established, the previous display reads
T(u(t)v) = u(t)T(v) + T(u)(t)v.
This proves the derivation property of T .
Finally, we have
[T, a(n)]w = T (a(n)w) − a(n)T (w) = T (a)(n)w
by the derivation property of T . This is equivalent to the second equality
Y (T (a), z) = [T, Y (a, z)]
in (5.9). We have now proved all parts of the Theorem.
Statement of the converse
We have shown that a p-adic vertex algebra (V, Y, 1) consists, among other things, of a set of mutually local, translation-covariant, creative, p-adic fields on V . Our next goal is to prove a converse to this statement. Specifically,
Theorem 5.7 Let the quadruple (V, Y, 1, T) consist of a p-adic Banach space V; a continuous Q_p-linear map Y : V → F(V), notated a ↦ Y(a, z) := \sum_{n∈Z} a(n)z^{−n−1}; a distinguished state 1 ∈ V; and a linear endomorphism T ∈ End(V) satisfying T(1) = 0.
Suppose further that:
(a) any pair of fields Y (a, x), Y (b, y) are mutually local in the sense of (5.3); (b) they are creative with respect to 1 in the sense of Definition 4.10 (1); (c) and they are translation covariant with respect to T in the sense that the second equality of (5.9) holds.
Then the triple (V, Y, 1) is a p-adic vertex algebra as in Definition 4.10, and T is the canonical derivation.
We will give the proof of Theorem 5.7 over the course of the next few Sections. Given that (V, Y, 1) is a p-adic vertex algebra, it is easy to see that T is its canonical derivation. For, by translation covariance, we get
T (Y (a, z)1) = ∂ z Y (a, z)1,
so that T (a) = T (a(−1)1) = a(−2)1, and this is the definition of the canonical derivation (5.8). What we have to prove is that the Jacobi identity from Definition 4.10 is a consequence of the assumptions in Theorem 5.7.
Lemma 5.8
The Jacobi identity is equivalent to the conjunction of the associativity formula (5.2) and the locality formula (5.3).
Proof We have already seen in Sect. 5.1 that the Jacobi identity implies the associativity and locality formulas. As for the converse, fix states a, b, c ∈ V , r, s, t ∈ Z and introduce the notation:
A(r, s, t) := \sum_{i=0}^{\infty} \binom{r}{i} (a(t + i)b)(r + s − i)c, B(r, s, t) := \sum_{i=0}^{\infty} (−1)^i \binom{t}{i} a(r + t − i)b(s + i)c, C(r, s, t) := \sum_{i=0}^{\infty} (−1)^{t+i} \binom{t}{i} b(s + t − i)a(r + i)c.
In these terms, the Jacobi identity is just the assertion that for all a, b, c and all r, s, t we have
A(r, s, t) = B(r, s, t) − C(r, s, t). (5.12)
On the other hand, in Sect. 5.1 we saw that the associativity formula is just the case r = 0 of (5.12). Now use the standard formula
\binom{m}{n} = \binom{m−1}{n} + \binom{m−1}{n−1}
to see that A(r + 1, s, t) = A(r, s + 1, t) + A(r, s, t + 1), and similarly for B and C. Consequently, (5.12) holds for all r ≥ 0 and all s, t ∈ Z. Now we invoke the locality assumption. In fact, we essentially saw in Sect. 5.1 that locality is equivalent to the statement that for any ε > 0 there is an integer t₀ such that for all t ≥ t₀ we have
‖A(r, s, t) − B(r, s, t) + C(r, s, t)‖ < ε, (5.13)
and we assert that (5.13) holds uniformly for all r, s, t. We have explained that it holds for all t ≥ t₀ and all r ≥ 0, so if there is a triple (r, s, t) for which it is false then there is a triple for which r + t is maximal. But we have
‖A(r, s, t) − B(r, s, t) + C(r, s, t)‖ = ‖A(r + 1, s − 1, t) − A(r, s − 1, t + 1) − B(r + 1, s − 1, t) + B(r, s − 1, t + 1) + C(r + 1, s − 1, t) − C(r, s − 1, t + 1)‖ < ε
by the strong triangle inequality. This completes the proof that (5.13) holds for all r, s, t and all a, b, c ∈ V . Now because V is complete we can invoke Remark 4.13 to complete the proof of the Lemma.
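Since the strong triangle inequality carries much of the weight in this argument, a quick numerical check may be helpful. The following Python sketch (our own illustration, not part of the paper; the helper vp and the sampled range are ad hoc) verifies the equivalent valuation form v_p(x + y) ≥ min(v_p(x), v_p(y)) on a range of integers.

```python
# Check the strong triangle inequality |x + y|_p <= max(|x|_p, |y|_p),
# in its equivalent valuation form v_p(x + y) >= min(v_p(x), v_p(y)).
def vp(n, p):
    """p-adic valuation of a nonzero integer n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

p = 5
for x in range(1, 200):
    for y in range(1, 200):
        assert vp(x + y, p) >= min(vp(x, p), vp(y, p))
print("strong triangle inequality verified on the sample range")
```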
Residue products
Lemma 5.8 facilitates reduction of the proof of Theorem 5.7 to the assertion that locality implies associativity. With this in mind we introduce residue products of fields following [33]. To ease notation, in the following for a state a ∈ V , we write a(z) = Y (a, z) = n a(n)z −n−1 . For states a, b, c ∈ V and any integer t, we define the tth residue product of a(z) and b(z) as follows:
(a(z)_t b(z))(n) := \sum_{i=0}^{\infty} (−1)^i \binom{t}{i} (a(t − i)b(n + i) − (−1)^t b(t + n − i)a(i)). (5.14)
To be clear, a(z) t b(z) is defined to have a modal expansion with the nth mode being the expression (5.14) above. The holistic way to write (5.14) is as follows:
a(z)_t b(z) = Res_y (y − z)^t a(y)b(z) − (−1)^t Res_y (z − y)^t b(z)a(y). (5.15)
It is important to add that here we are employing the convention that (y − z)^t is expanded as a power series in z, whereas (z − y)^t is expanded as a power series in y. (In general, (z − w)^n is expanded as a power series in the second variable; this is inconsequential if n ≥ 0.)
Notice that in a p-adic vertex algebra, the associativity formula (5.2) may be provocatively reformulated in the following way:
Y(u(t)v, z)w = Y(u, z)_t Y(v, z)w. (5.16)
In the present context we do not know that V is a vertex algebra: indeed, it is equation (5.16) that we are in the midst of proving on the basis of locality alone! Nevertheless, it behooves us to scrutinize residue products. First we have
Lemma 5.9 Given a(z), b(z) ∈ F(V), we have a(z)_t b(z) ∈ F(V) for every t ∈ Z. Moreover, ‖a(z)_t b(z)‖ ≤ ‖a(z)‖ ‖b(z)‖.
Proof From equation (5.14) we obtain the bound
‖(a(z)_t b(z))(n)‖ ≤ sup_{i≥0} sup{‖a(t − i)b(n + i)‖, ‖b(t + n − i)a(i)‖} ≤ sup_{i≥0} sup{‖a(t − i)‖ ‖b(n + i)‖, ‖b(t + n − i)‖ ‖a(i)‖} ≤ sup_{u∈Z} ‖a(u)‖ · sup_{v∈Z} ‖b(v)‖ = ‖a(z)‖ ‖b(z)‖.
This establishes the desired inequality, and it shows that property (1) of Definition 4.1 holds with M = ‖a(z)‖. Similarly, since we know that lim_{n→∞} a(n)c = lim_{n→∞} b(n)c = 0, then
‖(a(z)_t b(z))(n)c‖ ≤ sup_i ‖a(t − i)b(n + i)c − (−1)^t b(t + n − i)a(i)c‖ ≤ M sup_i sup{‖b(n + i)c‖, ‖b(t + n − i)c‖} → 0 as n → ∞,
which establishes property (2) of Definition 4.1. The Lemma is proved.
Remark 5.10 Lemma 5.9 has the following interpretation: identify V with its image in F(V ) under the vertex operator map a → a(z), and define a new vertex operator structure on these fields using the residue products. Then Lemma 5.9 says that these new fields are submultiplicative in the sense of Definition 4.3. This is so even though we have not assumed that V is submultiplicative, cf. Corollary 5.15 for further discussion of this point. Now recall that we are assuming the hypotheses of Theorem 5.7 that a(z), b(z) are creative with respect to 1 and, in particular, b(z) creates the state b in the sense that b(z)1 = b + O(z).
Lemma 5.11 For all integers t, a(z) t b(z) is creative with respect to 1 and creates the state a(t)b.
Proof This is straightforward, for we have
(a(z)_t b(z))(n)1 = \sum_{i=0}^{\infty} (−1)^i \binom{t}{i} (a(t − i)b(n + i)1 − (−1)^t b(t + n − i)a(i)1) = \sum_{i=0}^{\infty} (−1)^i \binom{t}{i} a(t − i)b(n + i)1
and this vanishes if n ≥ 0 because b(z) is creative with respect to 1. Finally, if n = −1, for similar reasons the last display reduces to a(t)b, and the Lemma is proved.
Further properties of residue products
Lemma 5.12 Let a, b, c ∈ V , so that a(z), b(z), c(z) are mutually local p-adic fields. Then for any integer t, a(z) t b(z) and c(z) are also mutually local.
Proof We have to show that lim_{n→∞} (x − y)^n [a(x)_t b(x), c(y)] = 0. We have
lim_{n→∞} (x − y)^n [a(x)_t b(x), c(y)]
= lim_{n→∞} (x − y)^n (Res_w (w − x)^t [a(w)b(x), c(y)] − (−1)^t Res_w (x − w)^t [b(x)a(w), c(y)])
= lim_{n→∞} (x − y)^n Res_w (w − x)^t (a(w)[b(x), c(y)] + [a(w), c(y)]b(x)) − (−1)^t lim_{n→∞} (x − y)^n Res_w (x − w)^t (b(x)[a(w), c(y)] + [b(x), c(y)]a(w))
= lim_{n→∞} (x − y)^n Res_w (w − x)^t [a(w), c(y)]b(x) − (−1)^t lim_{n→∞} (x − y)^n Res_w (x − w)^t b(x)[a(w), c(y)],
where we used the fact that b(z) and c(z) are mutually local p-adic fields. Note that if t ≥ 0 then (w − x) t = (−1) t (x − w) t and the last expression vanishes as required. So we may assume without loss that t < 0. Now use the identity, with n ≥ m ≥ 0,
(x − y)^n = (x − y)^{n−m} (x − w + w − y)^m = (x − y)^{n−m} \sum_{i=0}^{m} \binom{m}{i} (x − w)^i (w − y)^{m−i}.
Pick any ε > 0. There is a positive integer N₀ such that ‖(w − y)^N [a(w), c(y)]‖ < ε whenever N ≥ N₀. We will choose m := N₀ − t,
which is nonnegative because t < 0. The first summand in the last displayed equality is equal to
\sum_{i=0}^{m} (−1)^i \binom{m}{i} lim_{n→∞} (x − y)^{n−m} Res_w (w − x)^{t+i} (w − y)^{m−i} [a(w), c(y)]b(x) = \sum_{i=0}^{−t} " + \sum_{i=−t+1}^{m} " ,
where the quotation marks indicate that we are summing the same general term as in the left-side of the equality. We get a very similar formula for the second summand.
Repeating the use of a previous device, if −t + 1 ≤ i then t + i ≥ 0, so that (−1) t+i (w − x) t+i = (x − w) t+i . From this it follows that in our expression for lim n→∞ (x − y) n [a(x) t b(x), c(y)] the two sums over the range −t + 1 ≤ i ≤ m in fact cancel. Hence we find that
lim_{n→∞} (x − y)^n [a(x)_t b(x), c(y)] = \sum_{i=0}^{−t} (−1)^i \binom{m}{i} (lim_{n→∞} (x − y)^{n−m} Res_w (w − x)^{t+i} (w − y)^{m−i} [a(w), c(y)]b(x) − (−1)^t lim_{n→∞} (x − y)^{n−m} Res_w (x − w)^{t+i} (w − y)^{m−i} b(x)[a(w), c(y)]).
For i ≤ −t we have m − i ≥ m + t = N₀, and therefore each of the expressions (w − y)^{m−i} [a(w), c(y)] has norm less than ε. Therefore we see that
lim n→∞ (x − y) n [a(x) t b(x), c(y)] = 0,
and the Lemma is proved.
Lemma 5.13 If a(z) and b(z) are translation covariant with respect to T , then a(z) t b(z) is translation covariant with respect to T for all integers t.
Proof This is a straightforward consequence of the next two equations, both readily checked by direct calculation:
[T, a(z)_t b(z)] = [T, a(z)]_t b(z) + a(z)_t [T, b(z)],
∂_z (a(z)_t b(z)) = (∂_z a(z))_t b(z) + a(z)_t (∂_z b(z)). (5.17)
Completion of the proof of Theorem 5.7
We have managed to reduce the proof of Theorem 5.7 to showing that p-adic locality implies associativity, i.e., the formula (5.16) in the format
(a(t)b)(z) = a(z)_t b(z). (5.18)
We shall carry this out now on the basis of Lemmas 5.9 through 5.12. We need one more preliminary result.
Lemma 5.14 Suppose that d(z) is a p-adic field that is translation covariant with respect to T , mutually local with all p-adic fields a(z) (a ∈ V ), and creative with respect to 1. Then d(z) = 0 if, and only if, d(z) creates 0.
Proof It suffices to show that if d(z) creates 0, i.e., d(−1)1 = 0, then d(z) = 0. To begin the proof, recall that by hypothesis we have T 1 = 0. Then
[T, d(z)]1 = T(d(z)1) = \sum_{n<0} T(d(n)1) z^{−n−1}.
By translation covariance we also have
[T, d(z)]1 = ∂_z d(z)1 = \sum_{n<0} (−n − 1) d(n)1 z^{−n−2}.
Therefore, for all n < −1 we have T d(n + 1)1 = (−n − 1) d(n)1. But d(−1)1 = 0 by assumption. Therefore d(−2)1 = 0, and by induction we find that for all n < 0 we have d(n)1 = 0. This means that d(z)1 = 0. Now we prove that in fact d(z) = 0. For any state u ∈ V we know that u(z) creates u from 1. Moreover, we are assuming that d(z) and u(z) are mutually local, i.e., lim_{n→∞} (z − w)^n [d(z), u(w)] = 0. Consider
lim_{n→∞} z^n d(z)u = lim_{n→∞} Res_w w^{−1} (z − w)^n d(z)u(w)1 = lim_{n→∞} Res_w w^{−1} (z − w)^n u(w)d(z)1 = 0.
Notice that the sup-norms of d(z)u and z n d(z)u are equal, since multiplication by z n just shifts indices. Therefore, the only way that the limit above can vanish is if d(z)u = 0. Since u is an arbitrary element of V , we deduce that d(z) = 0, and this completes the proof of Lemma 5.14.
Turning to the proof of Theorem 5.7, suppose that a, b ∈ V , let t be an integer, and consider d(z) . .= a(z) t b(z)−(a(t)b)(z). This is a p-adic field by Lemma 5.9, and it is creative by Lemma 5.11, indeed that Lemma implies that d(z) creates 0. Now Lemmas 5.11 through 5.13 supply the hypotheses of Lemma 5.14, so we can apply the latter to see that d(z) = 0. In other words, a(z) t b(z) = (a(t)b)(z). Thus we have established (5.18) and the proof of Theorem 5.7 is complete.
We record a Corollary of the proof that was hinted at during our discussion of equation (5.16).
Corollary 5.15
Suppose that (V, Y, 1) is a p-adic vertex algebra, and let W ⊆ F(V ) be the image of Y consisting of the p-adic fields Y (a, z) for a ∈ V . If we define the tth product of these fields by the residue product Y (a, z) t Y (b, z), then W is a submultiplicative p-adic vertex algebra with vacuum state id V and canonical derivation ∂ z . Moreover, Y induces a bijective, continuous map of p-adic vertex algebras Y : V −→ W that preserves all products.
Proof The set of fields Y(a, z) is closed with respect to all residue products by equation (5.16), and Y(1, z) = id_V by Theorem 5.5. Then equation (5.16) says that the state-field correspondence Y : (V, 1) → (W, id_V) preserves all products. Since V is a p-adic vertex algebra and F(V) is a p-adic Banach space by Proposition 4.6, then so too is W, because W is closed by Proposition 4.14. Next we show that ∂_z is the canonical derivation for W. Certainly ∂_z id_V = 0, and it remains to show that Y(a, z)_{−2} id_V = ∂_z Y(a, z). But this is just the case t = −1, b(z) = id_V of (5.17). This shows that W is a p-adic vertex algebra. That it is submultiplicative follows from (5.16) and Lemma 5.9. Finally, we assert that Y : V → W is a continuous bijection. Continuity follows by definition and injectivity is a consequence of the creativity axiom.
Remark 5.16
It is unclear to us whether the map Y : V −→ W of Corollary 5.15 is always a topological isomorphism. It may be that W functions as a sort of completion of V that rectifies any lack of submultiplicativity in its p-adic fields. However, we do not know of an example where Y is not a topological isomorphism onto its image.
p-adic conformal vertex algebras
Here we develop the theory of vertex algebras containing a dedicated Virasoro vector. We find it convenient to divide the main construction into two halves, namely weakly conformal and conformal p-adic vertex algebras. One of the main results is Proposition 6.8.
The basic definition
As a preliminary, we recall that the Virasoro algebra over Q_p of central charge c is the Lie algebra with a canonical Q_p-basis consisting of L(n), for n ∈ Z, and a central element κ that satisfies the relations of Definition 2.2.
Definition 6.1 A weakly conformal p-adic vertex algebra is a quadruple (V, Y, 1, ω) such that (V, Y, 1) is a p-adic vertex algebra (cf. Definition 4.10) and ω ∈ V is a distinguished state (called the conformal vector, or Virasoro vector) such that Y(ω, z) =: \sum_{n∈Z} L(n) z^{−n−2} enjoys the following properties:
(a) [L(m), L(n)] = (m − n)L(m + n) + (1/12) δ_{m+n,0} (m³ − m) c_V id_V for some c = c_V ∈ Q_p, called the central charge of V;
(b) [L(−1), Y(v, z)] = ∂_z Y(v, z) for v ∈ V.
This completes the Definition.
Remark 6.2
We are using the symbols L(n) in two somewhat different ways, namely as elements in an abstract Virasoro Lie algebra in (2.2) and as modes ω(n + 1) of the vertex operator Y(ω, z) in part (a) of Definition 6.1. This practice is traditional and should cause no confusion. Thus the meaning of (a) is that the modes of the conformal vector close on a representation of the Virasoro algebra of central charge c_V as operators on V such that the central element κ acts as c · id_V.
Remark 6.3 Part (b) of Definition 6.1 is called translation covariance. Comparison with Subsection 5.3 shows that the meaning of part (b) is that L(−1) is none other than the canonical derivation T of the vertex algebra (V, Y, 1).
Remark 6.4 When working over a general base ring it is often convenient to replace the central charge c with the quasicentral charge c′ defined by c′ := (1/2)c, so that the basic Virasoro relation reads
[L(m), L(n)] = (m − n)L(m + n) + δ_{m+n,0} \binom{m+1}{3} c′, (6.1)
where all coefficients, with the possible exception of c′, lie in Z.
Elementary properties of weakly conformal p-adic vertex algebras
We draw some basic conclusions from Definition 6.1. These are well-known in algebraic VOA theory, although the context is slightly different here. With this in mind, first consider the commutator formula of Proposition 5.1 with ω in place of u and r = 0. Bearing in mind that L(−1) = ω(0), we obtain [L(−1), v(s)] = (L(−1)v)(s), and a comparison with the translation covariance axiom (b) above then shows that
(L(−1)v)(s) = −s v(s − 1). (6.2)
For an integer k ∈ Z we set
V(k) := {v ∈ V | L(0)v = kv}.
Lemma 6.5 We have 1 ∈ V(0) and ω ∈ V(2).
Proof First apply the creation axiom to see that
Y(ω, z)1 = \sum_{n∈Z} L(n)1 z^{−n−2} = ω + O(z),
in particular L(0)1 = 0, that is 1 ∈ V (0) . This proves the first containment stated in the Lemma and it also shows that L(−2)1 = ω. Now use the Virasoro relations to see that
L(0)ω = L(0)L(−2)1 = [L(0), L(−2)]1 = 2L(−2)1 = 2ω,
that is, ω ∈ V(2), which is the other needed containment. The Lemma is proved.
Lemma 6.6 Suppose that u ∈ V(m). Then for all integers ℓ and k we have u(ℓ) : V(k) → V(k + m − ℓ − 1).
Proof Let v ∈ V(k). We have to show that L(0)u(ℓ)v = (k + m − ℓ − 1)u(ℓ)v. Now we calculate
L(0)u(ℓ)v = [L(0), u(ℓ)]v + u(ℓ)L(0)v = \sum_{i=0}^{1} (L(i − 1)u)(1 + ℓ − i)v + k u(ℓ)v,
where we made another application of the commutator formula to get the first summand. Consequently, by (6.2) we have
L(0)u(ℓ)v = (L(−1)u)(1 + ℓ)v + (L(0)u)(ℓ)v + k u(ℓ)v = −(ℓ + 1)u(ℓ)v + m u(ℓ)v + k u(ℓ)v,
which is what we needed.
p-adic vertex operator algebras
Definition 6.7 A conformal p-adic vertex algebra is a weakly conformal p-adic vertex algebra (V, Y, 1, ω) as in Definition 6.1 such that the integral part of the spectrum of L(0) has the following two additional properties:
(a) each eigenspace V (k) is a finite-dimensional Q p -linear space, (b) there is an integer t such that V (k) = 0 for all k < t.
We now have
Proposition 6.8 Suppose that (V, Y, 1, ω) is a conformal p-adic vertex algebra, and let U := ⊕_{k∈Z} V(k) be the sum of the integral L(0)-eigenspaces. Then (U, Y, 1, ω) is an algebraic VOA over Q_p.
Proof The results of the previous Subsection apply here, so in particular U contains 1 and ω by Lemma 6.5. Moreover, if u ∈ U, then Y(u, z) belongs to End(U)[[z, z⁻¹]] thanks to Lemma 6.6. More is true: because V(k) = 0 for k < t, the same Lemma implies that u(ℓ)v = 0 whenever u ∈ V(m), v ∈ V(k) and ℓ > k + m − 1. Therefore, each Y(u, z) is an algebraic field on U. Because (V, Y, 1) is a p-adic vertex algebra, the vertex operators Y(u, z) for u ∈ U satisfy the algebraic Jacobi identity, and therefore (U, Y, 1, ω) is an algebraic VOA over Q_p.
Definition 6.9 A p-adic VOA is a conformal p-adic vertex algebra (V, Y, 1, ω) as in Definition 6.7 such that V = U̅, the closure of U in V.
As an immediate consequence of the preceding Definition and Proposition 6.8, we have
Corollary 6.10 Suppose that (V, Y, 1, ω) is a p-adic conformal vertex algebra, and let U := ⊕_{k∈Z} V(k) be the sum of the L(0)-eigenspaces. Then the completion U̅ of U in V is a p-adic VOA.
Examples
We discuss some elementary examples of p-adic VOAs of a type that are familiar in the context of algebraic k-vertex algebras.
The first two are degenerate in the informal sense that the conformal vector ω is equal to 0.
Example 6.11 (p-adic topological rings) Suppose that V is a commutative, associative ring with identity 1 that is also a p-adic Banach space. Concerning the continuity of multiplication, we only need to assume that each multiplication map b ↦ ab, for a, b ∈ V, is bounded, i.e., ‖ab‖ ≤ M‖b‖ for a nonnegative constant M depending on a but independent of b. Define the vertex operator Y(a, z) to be multiplication by a. We claim that (V, Y, 1) is a p-adic vertex algebra. Indeed, ‖Y(a, z)‖ = |a|, so Y is bounded and hence continuous. The commutativity of multiplication amounts to p-adic locality, cf. Definition 5.4, and the remaining assumptions of Theorem 5.7 are readily verified (taking T = 0). Then this result says that indeed (V, Y, 1) is a p-adic vertex ring. Actually, it is a p-adic VOA (with ω = 0) as long as V = V(0) is finite-dimensional over Q_p.
Example 6.12 Commutative p-adic Banach algebras.
This is essentially a special case of the preceding Example. In a p-adic Banach ring V with identity, one has (by definition) that multiplication is submultiplicative, i.e., ‖ab‖ ≤ |a| ‖b‖. Thus in the previous Example, the vertex operator Y(a, z) is submultiplicative (compare with Definition 4.5).
Example 6.13 The p-adic Virasoro VOA.
The construction of the Virasoro vertex algebra over C can be basically reproduced working over Z p [35]. We recall some details here.
We always assume that the quasicentral charge c′ as defined in (6.1) lies in Z_p. Form the highest weight module, call it W, for the Virasoro algebra over Q_p that is generated by a state v₀ annihilated by all modes L(n), for n ≥ 0, and on which the central element κ acts as the identity.
By definition (and the Poincaré-Birkhoff-Witt theorem) then, W has a basis consisting of the states L(−n₁) ⋯ L(−n_r)v₀ for all sequences n₁ ≥ n₂ ≥ ⋯ ≥ n_r ≥ 1. Thanks to our assumption that c′ ∈ Z_p, it is evident from (6.1) that the very same basis affords a module for the Virasoro Lie ring over Z_p. We also let W denote this Z_p-lattice. Let W₁ ⊆ W be the Z_p-submodule generated by L(−1)v₀. Then the quotient module W/W₁ has a canonical Z_p-basis consisting of states L(−n₁) ⋯ L(−n_r)v₀ + W₁ for all sequences n₁ ≥ n₂ ≥ ⋯ ≥ n_r ≥ 2.
The Virasoro vertex algebra over Z_p with quasicentral charge c′ has (by definition) the Fock space V := W/W₁, and the vacuum element is 1 := v₀ + W₁. By construction V is a module for the Virasoro ring over Z_p, and the vertex operators Y(v, z) for, say, v = L(−n₁) ⋯ L(−n_r)v₀ + W₁, may be defined in terms of the actions of the L(n) on V. For further details see [35, Sect. 7], where it is also proved (Theorem 7.3, loc. cit.) that, when so defined, (V, Y, 1) is an algebraic vertex algebra over Z_p. Indeed, it is shown (loc. cit., Subsection 7.5) that (V, Y, 1, ω) is a VOA over Z_p in the sense defined there. But this will concern us less, because we want to treat our algebraic vertex algebra (W/W₁, Y, 1) over Z_p p-adically.
Having gotten this far, the next step is almost self-evident. We introduce the completion
V̂ := lim←_k V/p^k V.
Theorem 6.14 The completion V̂ is a p-adic VOA in the sense of Definition 6.9.
Proof It goes without saying that V̂ = (V̂, Y, 1, ω), where V̂, Y and 1 have already been explained. We define ω := L(−2)v₀ + W₁ = L(−2)1 ∈ V. Implicit in [35, Theorem 7.3] are the statements that ω is a Virasoro state in V̂, and that its mode L(0) has the properties (a) and (b) of Definition 6.1. Thus (V̂, Y, 1, ω) is a weakly conformal p-adic VOA. Now because V is an algebraic VOA, V = ⊕ V(k) has a decomposition into L(0)-eigenspaces as indicated, with V(k) a finitely generated free Z_p-module and V(k) = 0 for all small enough k. By Proposition 7.3, V is also the sum of the integral L(0)-eigenspaces in V̂. So in fact (V̂, Y, 1, ω) is a conformal p-adic vertex algebra. Since by definition V̂ is the completion of V, (V̂, Y, 1, ω) is a p-adic VOA according to Definition 6.9. The Theorem is proved.
Completion of an algebraic vertex operator algebra
We now discuss the situation of completing algebraic VOAs. The structure that arises was the model for our definition of p-adic VOA above. This section generalizes Theorem 6.14 above.
Let V = (V, Y, 1) be an algebraic VOA over Q_p. Assume further that V is equipped with a nonarchimedean absolute value |·| that is compatible with the absolute value on Q_p in the sense that if α ∈ Q_p and v ∈ V then |αv| = |α| |v|, and such that |1| ≤ 1. We do not necessarily assume that V is complete with respect to this absolute value. In order for the completion of V to inherit the structure of a p-adic VOA, it will be necessary to assume some compatibility between the VOA axioms and the topology on V. To this end we make the following definition.
Definition 7.1 We say that |·| is compatible with V if the following properties hold:
(1) for each a ∈ V , the algebraic vertex operator Y (a, z) belongs to F(V ), i.e., it is a p-adic field in the sense of Definition 4.1. In particular, each mode a(n) is a bounded and hence continuous endomorphism of V . (2) the association a → Y (a, z) is continuous when F(V ) is given the topology derived from the sup-norm.
These conditions are well-defined even though V is not assumed to be complete. The basic observation is the following:
Proposition 7.2 Assume that |·| is compatible with V = (V, Y, 1) in the sense of Definition 7.1, and let V̄ denote the completion of V with respect to |·|. Then V̄ has a natural structure of p-adic VOA.
Proof Let a ∈ V̄, so that by definition we can write a = lim_{n→∞} a_n for a_n ∈ V. We first explain how to define Y(a, z) by taking a limit of the algebraic fields Y(a_n, z), and then we show that this limit is a p-adic field on V̄.
The modes of each a_n are continuous, and so they extend to continuous endomorphisms of V̄; indeed, since Y(a_n, z) ∈ F(V) by condition (1) of Definition 7.1, we can naturally view this as a p-adic field in F(V̄) by continuity. Next, by continuity of the map b ↦ Y(b, z) on V, and since F(V̄) is complete by Proposition 4.6, we deduce that Y(a, z) = lim_{n→∞} Y(a_n, z) is also contained in F(V̄). The extended map Y : V̄ → F(V̄) inherits continuity similarly.
Next, it is clear that the creativity axiom is preserved by taking limits. Likewise, the Jacobi identity is preserved under taking limits, as it is the same equality as in the algebraic case. Finally, the Virasoro structure of V is inherited from V , and similarly for the (continuous) direct sum decomposition.
The following Proposition is straightforward, but we state it explicitly to make it clear that the continuously-graded pieces of the p-adic completion of an algebraic VOA do not grow in dimension:
Proposition 7.3 Let V = ⊕_{n≥N} V(n) denote an algebraic VOA endowed with a compatible nonarchimedean absolute value, and let V̄ = ⊕̂_{n≥N} V(n) denote the corresponding completion (p-adic VOA). Then V̄(n) = V(n) and
V(n) = {v ∈ V̄ | L(0)v = nv}.
Proof Since each V(n) is a finite-dimensional Q_p-vector space by hypothesis, it is automatically complete with respect to any nonarchimedean norm. Therefore, the completion V̄ consists of sums v = \sum_{n≥N} v_n with v_n ∈ V(n) for all n, such that v_n → 0 as n → ∞. Moreover v = 0 if and only if v_n = 0 for all n. Since L(0) is assumed to be continuous with respect to the topology on V̄ (since all modes are assumed continuous), we deduce that
(L(0) − n)v = \sum_{m≥N} (L(0) − n)v_m = \sum_{m≥N} (m − n)v_m.
The preceding expression only vanishes if v ∈ V(n). Thus L(0) does not acquire any new eigenvectors with integer eigenvalues in V̄, which concludes the proof.
Now we wish to discuss one way that p-adic VOAs arise. It remains an open question to give an example of a p-adic VOA that does not arise via completion of an algebraic VOA with respect to some absolute value.
Let U = ⊕_k U(k) denote an algebraic vertex algebra over Z_p in the sense of [35] and Sect. 2, but equipped with an integral decomposition as shown, where each graded piece U(k) is assumed to be a free Z_p-module of finite rank. Then V := U ⊗_{Z_p} Q_p has the structure of an algebraic vertex algebra, and suppose moreover that V has a structure of algebraic VOA compatible with this vertex algebra structure. As usual, V = ⊕_k V(k) is the decomposition of V into L(0)-eigenspaces. Note that the conformal vector ω ∈ V may not be integral, that is, it might not be contained in U. Suppose, however, that the gradings are compatible, so that U(k) = U ∩ V(k). Choose Z_p-bases for each submodule U(k), and endow V with the corresponding sup-norm: that is, if (e_j) are basis vectors, then
‖\sum_j α_j e_j‖ = sup_j |α_j|. (7.1)
Proposition 7.4
Let V = U ⊗_{Z_p} Q_p, equipped with the induced sup-norm (7.1). Then the sup-norm is compatible with the algebraic VOA structure. Thus, the completion V̄ is a p-adic VOA.
Proof Each a ∈ V is a finite Q_p-linear combination of elements in the homogeneous pieces U(k) of the integral model. The modes of elements in U(k) are direct sums of maps between finite-rank free Z_p-modules, and so they are uniformly bounded by 1 in the operator norm. It follows that the modes appearing in any Y(a, z) are uniformly bounded. Further, for each b ∈ V, the series Y(a, z)b has a finite Laurent tail since V is an algebraic VOA. Therefore each Y(a, z) is a p-adic field, as required by condition (1) of Definition 7.1. It remains to show that the state-field correspondence V → F(V) is continuous. We would like to use the conclusion of Proposition 4.14 above, but that result assumed that we had a p-adic VOA to begin with. To avoid circularity, let us explain why the conclusion nevertheless applies here: first, since 1 is from the underlying algebraic VOA, we still have Y(1, z) = id_V. This is all that is required to obtain |1| ≤ ‖Y(1, z)‖, as in the proof of Corollary 4.14. Then using this, and since a(−1)1 = a for a ∈ V because V is an algebraic VOA, we deduce that
|a| ≤ ‖Y(a, z)‖ (7.2)
for all a ∈ V, as in the proof of Proposition 4.14. Now, returning to the continuity of the map V → F(V), write a ∈ V as a = \sum_j α_j e_j, where α_j ∈ Q_p and e_j ∈ U(k_j) are basis vectors used to define the sup-norm on V. In particular, we have ‖e_j(n)‖ ≤ 1 for all j and n, since e_j(n) is defined over Z_p by hypothesis. Then
‖Y(a, z)‖ = sup_n ‖a(n)‖ ≤ sup_n sup_j |α_j| ‖e_j(n)‖ ≤ sup_j |α_j| = |a|.
Coupled with the previously established inequality (7.2), this implies that under the present hypotheses we have ‖Y(a, z)‖ = |a|. Thus, Y is an isometric embedding, and this confirms in particular that the state-field correspondence is continuous, as required by condition (2) of Definition 7.1. Therefore we may apply Proposition 7.2 to complete the proof.
Remark 7.5 We point out that the integral structure above was necessary to ensure a uniform bound for the modes a(n), independent of n ∈ Z, as required by the definition of a p-adic field.
Remark 7.6
There is a second, equivalent way to obtain a p-adic VOA from U (cf. Sect. 2.4). First observe that the Z_p-submodules p^n U are 2-sided ideals defined over Z_p. We set
Û := lim←_n U/p^n U.
This is a limit of vertex rings over the finite rings Z/p^n Z, and Û has a natural structure of Z_p-module. Let V̂ = Û ⊗_{Z_p} Q_p. Then this carries a natural structure of p-adic VOA that agrees with the sup-norm construction above.
Further remarks on p-adic locality
The Goddard axioms of Sect. 5 illustrate that if Y (a, z) and Y (b, z) are p-adic vertex operators, then they are mutually p-adically local in the sense that
lim n→∞ (x − y) n [Y (a, x), Y (b, y)] = 0.
In this section we adapt some standard arguments from the theory of algebraic vertex algebras on locality to this p-adic setting.
Let us first analyze what sort of series Y(a, x)Y(b, y) is. We write
Y(a, x)Y(b, y) = \sum_{m,n∈Z} a(n)b(m) x^{−n−1} y^{−m−1}.
We wish to show that:
(1) there exists M ∈ R_{≥0} such that ‖a(n)b(m)c‖ ≤ M|c| for all c ∈ V and all n, m ∈ Z;
(2) lim_{m,n→∞} a(n)b(m)c = 0 for all c ∈ V.
Property (1) is true since a(n) and b(m) are each bounded operators on V, and thus so is their composition a(n)b(m). For (2), notice that since there is a constant M such that ‖a(n)c‖ ≤ M‖c‖ for all c ∈ V, we have
‖a(n)b(m)c‖ ≤ M‖b(m)c‖ → 0 as m → ∞,
where convergence is independent of n. If instead n grows, then a(n)b → 0 and continuity of Y(·, z) yields Y(a(n)b, z) → 0. Then uniformity of the sup-norm likewise yields lim_{n→∞} a(n)b(m)c = 0 uniformly in m. Thus for every ε > 0, there are at most finitely many pairs (n, m) of integers n, m ≥ 0 with ‖a(n)b(m)c‖ ≥ ε. This establishes Property (2) above.
In order to give an equivalent formulation of p-adic locality, we introduce the formal δ-function following Kac [26]:
δ(x − y) := x^{−1} \sum_{n∈Z} (y/x)^n = \sum_{n∈Z} x^{−n−1} y^n.
Likewise, let ∂_x = d/dx and define ∂_x^{(j)} = (1/j!) ∂_x^j for j ≥ 0.
The following result is a p-adic analogue of part of Theorem 2.3 of [26].
Proposition 8.1 Let V be a p-adic Banach space and let Y(a, z), Y(b, z) ∈ F(V) be p-adic fields on V. Then the following are equivalent:
(1) lim_{n→∞} (x − y)^n [Y(a, x), Y(b, y)] = 0;
(2) there exist unique series c_j ∈ End(V)[[y, y⁻¹]] with lim_{j→∞} c_j = 0 such that
[Y(a, x), Y(b, y)] = \sum_{j=0}^{\infty} c_j ∂_y^{(j)} δ(x − y).
Proof First, notice that by part (c) of Proposition 2.2 of [26], we can uniquely write
[Y(a, x), Y(b, y)] = \sum_{j≥0} c_j(y) ∂_y^{(j)} δ(x − y) + b(x, y), where b(x, y) = \sum_{m≥0} \sum_{n∈Z} a_{m,n} x^m y^n, (8.1)
for series c_j(y) ∈ End(V)[[y, y⁻¹]] and a_{m,n} ∈ V. Suppose that (1) holds. Then by part (e) of Proposition 2.1 of [26], we find that lim_{n→∞} (x − y)^n b(x, y) = 0. Since b(x, y) is constant and the sequence (x − y)^n does not have a p-adic limit, the only way this can transpire is if b(x, y) = 0. Then by parts (e) and (d) of Proposition 2.1 in [26], we now have
(x − y)^n [Y(a, x), Y(b, y)] = \sum_{j≥0} c_{j+n}(y) ∂_y^{(j)} δ(x − y),
and the coefficient of the x⁻¹-term in this expression is the series c_n(y). By definition of the sup-norm on formal series, since (x − y)^n [Y(a, x), Y(b, y)] tends to zero p-adically as n grows, we find that c_n(y) must have coefficients that become more and more highly divisible by p. Thus lim_{j→∞} c_j = 0 in the sup-norm, which confirms that (2) holds. Conversely, if (2) holds, then we deduce that equation (8.1) holds with b(x, y) = 0, as in the previous part of this proof. Since the coefficients of ∂_y^{(j)} δ(x − y) are integers, and the coefficients of c_n(y) go to zero p-adically in the sup-norm as n grows, we find that (1) follows from equation (8.1) by the strong triangle inequality.
Recall formula (2.1.5b) of [26]:
∂_y^{(j)} δ(x − y) = \sum_{m∈Z} \binom{m}{j} x^{−m−1} y^{m−j}.
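The content of this formula is simply that the divided-power operator ∂_y^{(j)} acts on each monomial y^m through the generalized binomial coefficient \binom{m}{j}. The following Python check (ours, using sympy; not part of the paper) confirms this termwise, including for negative exponents m.

```python
# Termwise check that (1/j!) d^j/dy^j (y**m) = binomial(m, j) * y**(m - j)
# for integer m (of either sign) and j >= 0, which is exactly the pattern
# of coefficients in the displayed expansion of the delta function.
import sympy as sp

y = sp.symbols('y')
for m in range(-5, 6):
    for j in range(0, 5):
        lhs = sp.diff(y**m, y, j) / sp.factorial(j)
        rhs = sp.binomial(m, j) * y**(m - j)
        assert sp.simplify(lhs - rhs) == 0
print("divided-power derivative coefficients match binomial(m, j)")
```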
Thus, if Y (a, x) and Y (b, y) are mutually p-adically local, then we deduce that
[Y(a, x), Y(b, y)] = \sum_{j=0}^{\infty} c_j(y) ∂_y^{(j)} δ(x − y) = \sum_{j=0}^{\infty} \sum_{m∈Z} \binom{m}{j} c_j(y) x^{−m−1} y^{m−j} = \sum_{j=0}^{\infty} \sum_{m,n∈Z} \binom{m}{j} c_j(n) x^{−m−1} y^{m−n−j−1} = \sum_{m,n∈Z} \Big( \sum_{j=0}^{\infty} \binom{m}{j} c_j(m + n − j) \Big) x^{−m−1} y^{−n−1}.
Thus, we deduce the fundamental identity:
[a(m), b(n)] = \sum_{j=0}^{\infty} \binom{m}{j} c_j(m + n − j). (8.2)
This series converges by the strong triangle inequality since c j → 0.
Remark 8.2
As a consequence of this identity, one can deduce the p-adic operator product expansion as in equation (2.3.7b) of [26]. The only difference is that it now involves an infinite sum: if Y(a, x) and Y(b, y) are two mutually local p-adic fields, then the coefficients c_j in equation (8.2) lead to an expression
Y(a, x)Y(b, y) ∼ \sum_{j=0}^{\infty} c_j(y) / (x − y)^{j+1},
which is strictly an abuse of notation that must be interpreted as in [26]. Again, this p-adic OPE converges thanks to the fact that c j → 0.
As in the algebraic theory of VOAs, when Y (a, x) and Y (b, x) are mutually p-adically local, one has the identity c j (n) = (a(j)b)(n) and equation (8.2) specializes to the commutator formula (5.1).
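For instance, anticipating the rank 1 Heisenberg algebra of Sect. 9 with a = b = h, one has h(0)h = 0, h(1)h = 1 and h(j)h = 0 for j ≥ 2; since Y(1, y) = id_V by Theorem 5.5, we get c₁(n) = δ_{n,−1} id_V, and (8.2) collapses to [h(m), h(n)] = \binom{m}{1} δ_{m+n,0} id_V = m δ_{m+n,0} id_V, recovering the canonical commutation relations (9.1).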
Definition 8.3
Let V be a p-adic vertex algebra, and let L(V) ⊆ End(V) denote the p-adic closure of the linear span of the modes of every field Y(a, x) for a ∈ V.
Theorem 8.4 The space L(V) is a Lie algebra with commutators acting as Lie bracket.
Proof It is clear that L(V) is a subspace, so it remains to prove that it is closed under commutators. Let L ⊆ L(V) denote the dense subspace spanned by the modes. First observe that if a, b ∈ V, then [a(n), b(m)] ∈ L(V) for all n, m ∈ Z, thanks to equation (5.1). It follows by linearity that [L, L] ⊆ L(V). The closure of [L, L] is equal to [L(V), L(V)], and so we deduce [L(V), L(V)] ⊆ L(V), which concludes the proof.
Finally, we elucidate some aspects of rationality of locality in this p-adic context, modeling our discussion on Proposition 3.2.7 of [30]. Let V be a p-adic vertex algebra with a completed graded decomposition
V = ⊕̂_n V(n),
where V(n) = 0 if n ≪ 0. The direct sum V′ of the duals of each finite-dimensional subspace V(n) consists of the linear functionals ℓ on V such that ℓ|_{V(n)} = 0 for all but finitely many n ∈ Z. This space is not p-adically complete, so we let V* denote its completion.
Lemma 8.5
The space V * is the full continuous linear dual of V .
Proof Let ℓ : V → Q_p be a continuous linear functional. We must show that ℓ is approximated arbitrarily well by functionals in the restricted dual V′. Continuity of ℓ asserts the existence of M such that |ℓ(v)| ≤ M|v| for all v ∈ V. Let δ_n denote the projection of V onto V(n), which is the identity on V(n) and zero on V(m) for m ≠ n, and write
ℓ_N = \sum_{−N≤n≤N} ℓ ∘ δ_n.
We clearly have ℓ_N ∈ V′ for all N. Then ℓ_N → ℓ expresses ℓ as a limit of elements of V′. This concludes the proof.
Note that we did not need the condition V (n) = 0 if n is small enough in the preceding proof.
Definition 8.6
The Tate algebra Q_p⟨x⟩ consists of those series in Q_p[[x]] such that the absolute values of the coefficients go to zero.
The Tate algebra is the ring of rigid analytic functions on the closed unit disc in Q_p that are defined over Q_p. More general Tate algebras are the building blocks of rigid analytic geometry in the same way that polynomial algebras are the building blocks of algebraic geometry. Below we use the slight abuse of notation Q_p⟨x, x⁻¹⟩ to denote formal series in x and x⁻¹ whose coefficients go to zero in both directions. Elements of Q_p⟨x⁻¹⟩[x] can be interpreted as functions on the region |x| ≥ 1, while elements of Q_p⟨x, x⁻¹⟩ can be interpreted as functions on the region |x| = 1. Unlike in complex analysis, this boundary circle defined by |x| = 1 is a perfectly good rigid-analytic space.
Lemma 8.7
Let V be a p-adic VOA, let u, v ∈ V, let ℓ₁ ∈ V′, and let ℓ₂ ∈ V*. Then
⟨ℓ₁, Y(u, x)v⟩ ∈ Q_p⟨x⁻¹⟩[x], ⟨ℓ₂, Y(u, x)v⟩ ∈ Q_p⟨x, x⁻¹⟩.
Proof Write u = \sum_{n∈Z} u_n and v = \sum_{n∈Z} v_n, where u_n, v_n ∈ V(n) for all n, and lim_{n→∞} u_n = 0, lim_{n→∞} v_n = 0. Since each u(n) is continuous, we have
u(n)v = \sum_{b∈Z} u(n)v_b.
Likewise, since u ↦ Y(u, x) is continuous, we have Y(u, x) = \sum_a Y(u_a, x) and hence u(n)v = \sum_{a,b∈Z} u_a(n)v_b.
Since u_a and v_b are homogeneous of weight a and b, respectively, we have that u_a(n)v_b ∈ V(a + b − n − 1). If ℓ : V → Q_p is a continuous linear functional, and if V(n) = 0 for n < M, then we have
ℓ(u(n)v) = \sum_{a,b≥M, a+b≥M+n+1} ℓ(u_a(n)v_b).
First suppose that ℓ ∈ V′, so that ℓ|_{V(n)} = 0 if n > N, where M ≤ N. Hence in this case we have
ℓ(u(n)v) = \sum_{a≥M} \sum_{b=M+n+1−a}^{N+n+1−a} ℓ(u_a(n)v_b).
Suppose that N + n + 1 − a < M. Then each v_b in the sum above must vanish, so that we can write the sum in fact as
ℓ(u(n)v) = \sum_{a=M}^{N−M+n+1} \sum_{b=M+n+1−a}^{N+n+1−a} ℓ(u_a(n)v_b).
If M > N − M + n + 1, or equivalently, 2M − N − 1 > n, then the sum on a is empty and ℓ(u(n)v) vanishes. Therefore, ⟨ℓ, Y(u, x)v⟩ has only finitely many nonzero terms in positive powers of x. If n → ∞, then since u(n)v → 0 and ℓ is continuous, we likewise see that the coefficients of the x^{−n−1} terms go to zero as n tends to infinity. This establishes the first claim of the lemma. Suppose instead that ℓ ∈ V*, let C be a constant such that ‖u(n)v‖ ≤ C|v| for all n (we may assume without loss that v ≠ 0), and write ℓ = ℓ₁ + ℓ′, where ℓ₁ ∈ V′ and ‖ℓ′‖ < ε/(C|v|). Then for each n,
|ℓ′(u(n)v)| ≤ (ε/(C|v|)) ‖u(n)v‖ ≤ ε, and so |ℓ(u(n)v)| ≤ sup(|ℓ₁(u(n)v)|, |ℓ′(u(n)v)|) ≤ sup(|ℓ₁(u(n)v)|, ε).
By the previous part of this proof, as n tends to −∞, eventually ℓ₁(u(n)v) vanishes. We thus see that ℓ(u(n)v) → 0 as n → −∞. Since ℓ is a continuous linear functional and u(n)v → 0 as n → ∞, we likewise get that ℓ(u(n)v) → 0 as n → ∞. This concludes the proof.
Remark 8.8 A full discussion of rationality would involve a study of series
⟨ℓ, Y(a, x)Y(b, y)v⟩
for v ∈ V and ℓ ∈ V*, where a and b are mutually p-adically local. Ideally, one would show that such series live in some of the standard rings arising in p-adic geometry, similar to the situation of Lemma 8.7 above. It may be necessary to impose stronger conditions on rates of convergence of limits such as lim_{n→∞} a(n)b = 0 in the definition of p-adic field in order to achieve such results. We do not pursue this study here.
The Heisenberg algebra
We now discuss p-adic completions of the rank 1 (i.e., c = 1) Heisenberg VOA in detail, which was the motivation for some of the preceding discussion. In general there are many possible ways to complete such VOAs, as illustrated below, though we only endow a sup-norm completion with a p-adic VOA structure. Our notation follows [29]. To begin, define
S = Q p [h −1 , h −2 , . . .],
a polynomial ring in infinitely many indeterminates. This ring carries a natural action of the Heisenberg Lie algebra. Recall that this algebra is defined by generators h_n for n ∈ Z\{0} and the central element 1, subject to the canonical commutation relations
[h_m, h_n] = m δ_{m+n,0} 1. (9.1)
The Heisenberg algebra acts on S as follows: if n < 0 then h_n acts by multiplication, 1 acts as the identity, while for n > 0 the generator h_n acts as n · d/dh_{−n}. Then S carries a natural structure of VOA as discussed in Chapter 2 of [16], where the Virasoro action is given by
c ↦ 1, L_n ↦ (1/2) \sum_{j∈Z} h_j h_{n−j} (n ≠ 0), L_0 ↦ (1/2) \sum_{j∈Z} h_{−|j|} h_{|j|}.
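This action is easy to experiment with. The following Python sketch (ours; it truncates to finitely many variables h_{−1}, ..., h_{−N} and is not taken from the paper) realizes the generators with sympy and confirms the relations (9.1) on a sample state.

```python
# Truncated model of the Heisenberg action on S = Q_p[h_{-1}, h_{-2}, ...]:
# h_{-n} acts by multiplication, h_n (n > 0) acts as n * d/dh_{-n}.
import sympy as sp

N = 6
hvars = {n: sp.symbols(f'h_m{n}') for n in range(1, N + 1)}  # h_m{n} stands for h_{-n}

def act(n, poly):
    """Action of the Heisenberg generator h_n on a polynomial state."""
    if n < 0:
        return sp.expand(hvars[-n] * poly)
    if n > 0:
        return sp.expand(n * sp.diff(poly, hvars[n]))
    raise ValueError("h_0 is not a generator in this presentation")

def commutator(m, n, poly):
    return sp.expand(act(m, act(n, poly)) - act(n, act(m, poly)))

# Check [h_m, h_n] = m * delta_{m+n,0} on a sample state.
state = hvars[1]**2 * hvars[3] + 5 * hvars[2]
for m in range(-4, 5):
    for n in range(-4, 5):
        if m != 0 and n != 0:
            expected = (m if m + n == 0 else 0) * state
            assert commutator(m, n, state) == sp.expand(expected)
print("canonical commutation relations (9.1) hold on the sample state")
```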
Elements of S are polynomials with finitely many terms, and this space can be endowed with a variety of p-adic norms. We describe a family of such norms that are indexed by a real parameter r ≥ 0, which are analogues of norms discussed in Chapter 8 of [28] for polynomial rings in infinitely many variables.
For I a finite multi-subset of Z_{<0}, let h_I = \prod_{i∈I} h_i and define |I| = −\sum_{i∈I} i, so that h_I has weight |I|. For fixed r ∈ R_{>0} define a norm on S by
|\sum_I a_I h_I|_r := sup_I |a_I| r^{|I|}.
For example, when r = 1, this norm agrees with the sup-norm corresponding to the integral basis given by the monomials h_I. Let S_r denote the completion of S relative to the norm a ↦ |a|_r.
Proposition 9.1 The ring S_r consists of all series \sum_I a_I h_I ∈ Q_p[[h(−1), h(−2), ...]] such that lim_{|I|→∞} |a_I| r^{|I|} = 0.
Proof Let a_n denote a Cauchy sequence in S relative to the norm |a|_r, so that we can write a_n = \sum_I a_I^n h_I for each n ≥ 0. We first show that for each fixed indexing multiset J, the sequence a_J^n is Cauchy in Q_p and thus has a well-defined limit. For this, let ε > 0 be given and choose N such that |a_n − a_m| < ε r^{|J|} for all n, m > N. This means that |a_J^n − a_J^m| r^{|J|} ≤ |a_n − a_m| < ε r^{|J|} for all n, m > N, so that the sequence (a_J^n) is indeed Cauchy. Let a_J denote its limit in Q_p and let a = \sum_J a_J h_J.
There exists an index n such that |a − a_n| < ε. But notice that since a_n ∈ S, it follows that a_I^n = 0 save for finitely many multisets I. In particular, if |I| is large enough we see that
|a_I| r^{|I|} = |a_I − a_I^n| r^{|I|} ≤ |a − a_n| < ε.
Therefore lim |I|→∞ |a I | r |I| = 0 as claimed, and this concludes the proof.
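The norms |·|_r are straightforward to evaluate on polynomial states. The following Python sketch (ours; states are encoded as dictionaries sending an index multiset, stored as a tuple of negative integers, to a rational coefficient) computes |a|_r = sup_I |a_I| r^{|I|}.

```python
# Evaluate |a|_r = sup_I |a_I|_p * r^{|I|} on a polynomial state, where
# |I| = -sum(I) is the weight of the monomial h_I.
from fractions import Fraction

def vp(x, p):
    """p-adic valuation of a nonzero Fraction x."""
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def norm_r(state, p, r):
    return max(p ** (-vp(c, p)) * r ** (-sum(I))
               for I, c in state.items() if c != 0)

# Example: a = 9*h_{-1}^2 + (1/3)*h_{-4} with p = 3.
a = {(-1, -1): Fraction(9), (-4,): Fraction(1, 3)}
print(norm_r(a, p=3, r=1))  # sup(1/9, 3) = 3 (the sup-norm |.|_1)
print(norm_r(a, p=3, r=3))  # sup((1/9)*3**2, 3*3**4) = 243
```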
Corollary 9.2
If 0 < r 1 < r 2 then there is a natural inclusion S r 2 ⊆ S r 1 .
Proof This follows immediately from the previous proposition, since r₁ < r₂ implies that |a_I| r₁^{|I|} ≤ |a_I| r₂^{|I|}.
Remark 9.3 Let S_∞ = ∩_{r>0} S_r. The ring S_∞ consists of all series \sum_I a_I h_I such that lim_{|I|→∞} |a_I| r^{|I|} = 0 for all r > 0. It contains S but is strictly larger: for example, the infinite series \sum_{n≥0} p^{n²} h_{−1}^n is contained in S_∞ but, being an infinite series, it is not in S. This ring is an example of a p-adic Fréchet space that is not a Banach space. Therefore, according to our definitions, S_∞ does not have a structure of p-adic VOA. It may be desirable to extend the definitions to incorporate examples like this into the theory.
Lemma 9.4 If a, b ∈ S_r, then ‖a(n)b‖_r ≤ |a|_r |b|_r r^{−n−1} for all n ∈ Z.
Proof Write a = \sum_I a_I h_I and b = \sum_I b_I h_I, where |a_I| r^{|I|} → 0 and |b_I| r^{|I|} → 0. Then the ultrametric property gives
‖a(n)b‖_r ≤ sup_{I,J} |a_I| |b_J| ‖h_I(n)h_J‖_r.
Notice that h_I is homogeneous of weight |I|, and h_J is homogeneous of weight |J|. Therefore, it follows that h_I(n)h_J is homogeneous of weight |I| + |J| − n − 1, and thus ‖h_I(n)h_J‖_r ≤ r^{|I|+|J|−n−1}. Combining this observation with the displayed inequality on ‖a(n)b‖_r above establishes the lemma.
The preceding lemma illustrates that it is nontrivial to obtain bounds for ‖a(n)b‖_r, uniform in n, in terms of |a|_r and |b|_r unless r = 1. In this case, however, we obtain:
Proposition 9.5 The Banach ring S₁ has a natural structure of submultiplicative p-adic VOA.
Proof Since S has an integral basis and |·| 1 is the corresponding sup-norm, we may apply Proposition 7.4 to conclude that S 1 has the structure of a p-adic VOA. That it is submultiplicative follows from Lemma 9.4.
In [11], the authors show that there is a surjective character map
S → Q p [E 2 , E 4 , E 6 ]η −1 (9.2)
of the Heisenberg algebra S onto the free module of rank 1 generated by η⁻¹ over the ring of quasi-modular forms of level one with p-adic coefficients. (The authors of [11] work over C, but their proof applies to any field of characteristic zero.) After adjusting the grading on the Fock space S, this is even a map of graded modules. See [36,37] for more details on this map. The character is defined as follows: if v ∈ S is homogeneous of degree k, then v(n) is a graded map that increases degrees by k − n − 1. In particular, v(k − 1) preserves the grading, and we write o(v) = v(k − 1) for homogeneous v of weight k. This is the zero-mode of v. The zero mode is then extended to all of S by linearity. With this notation, let S(n) be the nth graded piece of S and define the character of v ∈ S by the formula
Z(v, q) := q^{−1/24} \sum_{n≥0} Tr_{S(n)}(o(v)) q^n.
Theorem 9.6 The association v ↦ η · Z(v, q) defines a surjective Q_p-linear map
f : S → Q p [E 2 , E 4 , E 6 ],
where η is the Dedekind η-function.
Our goal now is to use p-adic continuity to promote this to a map from S₁ into Serre's ring M_p of p-adic modular forms as defined in [43]. Recall that M_p is the completion of Q_p[E₄, E₆] with respect to the p-adic sup-norm on q-expansions. When p is odd, Serre proved that Q_p[E₂, E₄, E₆] ⊆ M_p.
Theorem 9.7 Let p be an odd prime. Then the map v ↦ ηZ(v, q) on the Heisenberg algebra S extends to a natural Q_p-linear map
f : S 1 → M p .
The image contains all quasi-modular forms of level one.
Proof Recall that S₁ is the completion of S with respect to the p-adic sup-norm. Since the rescaling factor of η will not affect the continuity of f, we see that to establish the continuity of f, we are reduced to proving the p-adic continuity of the map v ↦ Z(v, q), where the image space q^{−1/24} Q_p⟦q⟧ is given the p-adic sup-norm. Suppose that |u − v|₁ < 1/p^k, so that we can write u = v + p^k w for some w ∈ S₁ with |w|₁ ≤ 1. This means that w is contained in the completion of Z_p[h(−1), h(−2), ...], and all of its modes are defined over Z_p and satisfy ‖w(n)‖₁ ≤ 1. In particular, the zero mode o(w) is defined over Z_p, and thus so is its trace, so that |Tr_{S(n)}(o(w))| ≤ 1 for all n.
By linearity of the zero-mode map o and the trace, we find that, with respect to the sup-norm on q^{−1/24} Q_p⟦q⟧,
‖Z(u, q) − Z(v, q)‖ = ‖q^{−1/24} \sum_{n≥0} p^k Tr_{S(n)}(o(w)) q^n‖ ≤ p^{−k}.
This shows that the character map preserves p-adic limits. It then follows by general topology (e.g., Theorem 21.3 of [39]) that since S₁ is a metric space, f is continuous.
The map in Theorem 9.7 has an enormous kernel, as even the algebraic map that it extends has an enormous kernel. This complicates somewhat the study of the image, as there do not exist canonical lifts of modular forms to states in the Heisenberg VOA. A natural question is whether states in S 1 map onto nonclassical specializations (that is, specializations of non-integral weight) of the Eisenstein family discussed in [43] and elsewhere. In the next section we give some indication that this can be done, at least for certain p-adic modular forms, in spite of the large kernel of the map f .
Kummer congruences in the p-adic Heisenberg VOA
We use the notation from Sect. 9, in particular S 1 is the p-adic Heisenberg VOA associated to the rank 1 algebraic Heisenberg VOA S with canonical weight 1 state h satisfying the canonical commutator relations (9.1). We shall write down some explicit algebraic states in S that converge p-adically in S 1 and we shall describe their images under the character map f of Theorem 9.7 which, as we have seen, is p-adically continuous. Thus, the f -image of our p-adic limit states will be Serre p-adic modular forms in M p . The description of the states relies on the square-bracket formalism of [46]; see [36] for a detailed discussion of this material. The convergence of the states relies, perhaps unsurprisingly, on the classical Kummer congruences for Bernoulli numbers.
In order to describe the states in S of interest to us we must review some details from the theory of algebraic VOAs, indeed the part that leads to the proof of Theorem 9.6. This involves the square bracket vertex operators and states. For a succinct overview of this we refer the reader to Section 2.7 of [36]. From this we really only need the definition of the new operators h[n] acting on S, which is as follows:
Y[h, z] := \sum_{n∈Z} h[n] z^{−n−1} := e^z Y(h, e^z − 1). (10.1)
This reexpresses the vertex operators as objects living on a torus rather than a sphere, a geometric perspective that is well-explained in Chapter 5 of [16]. The following result is a special case of a result proved in [37] and exposed in [36, Theorem 4.5 and equation (44)].
Theorem 10.1 For a positive odd integer r we have
ηZ(h[−r]h[−1]1, q) = (2/(r − 1)!) G_{r+1}(τ).
Here, G_k(τ) is the weight k Eisenstein series
G_k(τ) := −B_k/(2k) + \sum_{n≥1} σ_{k−1}(n) q^n,
where B_k is the kth Bernoulli number.
In order to assess convergence, we rewrite the square bracket state h[−r]h[−1]1 in terms of the basis {h(−n₁) ⋯ h(−n_s)1} (n₁ ≥ n₂ ≥ ⋯ ≥ n_s ≥ 1) of S.
Lemma 10.2 For a positive integer r we have
(r − 1)! h[−r]h[−1]1 = \sum_{m=0}^{r−1} c(r, m) h(−m − 1)h(−1)1 − (B_{r+1}/(r + 1)) 1,
where c(r, m) = \sum_{j=0}^{m} (−1)^{m+j} \binom{m}{j} (j + 1)^{r−1} = m! S(r, m + 1) and S(·, ·) denotes a Stirling number of the second kind. In particular, c(r, m) = 0 if m ≥ r.
Proof It is readily found from (10.1) that h[−1]1 = h(−1)1 = h. Then we calculate:
(r − 1)! h[−r]h[−1]1 = (r − 1)! Res_z z^{−r} Y[h, z]h(−1)1 = (r − 1)! Res_z z^{−r} e^z Y(h, e^z − 1)h(−1)1 = (r − 1)! Res_z z^{−r} e^z \Big( \sum_{m≥0} h(−m − 1)h(−1)1 (e^z − 1)^m + 1 (e^z − 1)^{−2} \Big).
This, then, is the desired expression for (r − 1)! h[−r]h[−1]1 in terms of our preferred basis for S, and it only remains to sort out the numerical coefficients. Taking the second summand in the previous display first, we find by a standard calculation that
(r − 1)! Res_z z^{−r} e^z (e^z − 1)^{−2} = −B_{r+1}/(r + 1),
the coefficient of 1 as stated in the Lemma. As for the first summand, for each 0 ≤ m ≤ r − 1 the needed coefficient c(r, m) is equal to
(r − 1)! Res_z z^{−r} e^z (e^z − 1)^m.
The relationship between c(r, m) and Stirling numbers of the second kind then follows by standard results, and this completes the proof of the Lemma.
Combining the last two results, we obtain
Corollary 10.3 For a positive odd integer r, define states v_r in the rank 1 algebraic Heisenberg VOA S as follows:
v_r := (1/2) \Big( \sum_{m=0}^{r−1} c(r, m) h(−m − 1)h(−1)1 − (B_{r+1}/(r + 1)) 1 \Big).
Then f(v_r) = G_{r+1}.
We now wish to study the p-adic properties of these states as r varies. We shall rescale things in a convenient way:
u_r := (1 − p^r) \Big( \sum_{m=0}^{\infty} c(r, m) h(−m − 1)h(−1)1 − (B_{r+1}/(r + 1)) 1 \Big) = 2(1 − p^r) v_r.
We now lift the classical Kummer congruences for Bernoulli numbers to this sequence of states in the p-adic Heisenberg VOA. We begin with a simple Lemma treating the c(r, m) terms.
Lemma 10.4 Let p be an odd prime with r = 1 + p^a(p − 1), s = 1 + p^b(p − 1) and a ≤ b. Then for all m we have c(r, m) ≡ c(s, m) (mod p^{a+1}).
Proof Recall that c(r, m) := \sum_{j=0}^{m} (−1)^{m+j} \binom{m}{j} (j + 1)^{r−1}. If p | j + 1 then certainly (j + 1)^{r−1} ≡ 0 (mod p^{a+1}). On the other hand, if p ∤ j + 1 then (j + 1)^{r−1} ≡ 1 (mod p^{a+1}). It follows that
c(r, m) ≡ \sum_{0≤j≤m, p∤j+1} (−1)^{m+j} \binom{m}{j} (mod p^{a+1}),
and since the right side of the congruence is independent of r, we deduce that c(r, m) ≡ c(s, m) (mod p^{a+1}). This proves the Lemma.
Theorem 10.5 (Kummer congruences) The sequence (u_{1+p^a(p−1)})_{a≥0} converges p-adically in S₁ to a state that we denote u₁ := lim_{a→∞} u_{1+p^a(p−1)}.
Proof The convergence for the terms in this sequence involving the Bernoulli numbers follows from the classical Kummer congruences. Therefore, if a ≤ b, it will suffice to establish that
(1 − p^r) c(r, m) ≡ (1 − p^s) c(s, m) (mod p^{a+1})
for all m in the range 0 ≤ m ≤ p^b(p − 1). But this follows by the preceding Lemma.
Notice that, putting all of this together, we have
f(u₁) = 2G₂*,
where G₂* is the p-normalized Eisenstein series encountered in [43], with q-expansion given by
G₂*(q) = (p − 1)/24 + \sum_{n≥1} σ*(n) q^n.
Here σ*(n) is the sum over the divisors of n that are coprime to p.
Remark 10.6 This computation illustrates the fact that the p-adic Heisenberg algebra contains states that map under f to p-adic modular forms that are not quasi-modular forms of level 1. It seems plausible that the rescaled p-adic character map f of Theorem 9.7 is surjective, that is, for an odd prime p every Serre p-adic modular form can be obtained from the normalized character map applied to a sequence of p-adically converging states in S₁. Aside from some other computations with Eisenstein series, we have yet to examine this possibility in detail.
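The congruences of Lemma 10.4 are easy to test numerically. The following Python sketch (ours) checks c(r, m) ≡ c(s, m) (mod p^{a+1}) for p = 3 over a range of exponents; the Bernoulli-number half of Theorem 10.5 is the classical Kummer congruence and is not re-verified here.

```python
# Numerical check of Lemma 10.4: for p odd, r = 1 + p^a (p-1) and
# s = 1 + p^b (p-1) with a <= b, c(r, m) is congruent to c(s, m) mod p^(a+1).
from math import comb

def c(r, m):
    """c(r, m) = sum_{j=0}^{m} (-1)^(m+j) C(m, j) (j+1)^(r-1)."""
    return sum((-1) ** (m + j) * comb(m, j) * (j + 1) ** (r - 1)
               for j in range(m + 1))

p = 3
for a in range(3):
    for b in range(a, 4):
        r = 1 + p**a * (p - 1)
        s = 1 + p**b * (p - 1)
        for m in range(12):
            assert (c(r, m) - c(s, m)) % p ** (a + 1) == 0
print("congruences of Lemma 10.4 verified for p = 3")
```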
The p-adic Moonshine module
The Moonshine module V♮ = (V♮, Y, 1, ω) is a widely known example of an algebraic VOA over C [17]. Its notoriety is due mainly to the fact that its automorphism group is the Monster simple group F₁ (loc. cit.). (Use of F₁ here follows the Atlas notation [5] rather than the more conventional M, in order to avoid confusion with the Heisenberg VOA.) The conformal grading on V♮ takes the general shape
V♮ = C1 ⊕ V♮(2) ⊕ ⋯ (11.1)
and we denote by V♮₊ := V♮(2) ⊕ ⋯ the summand consisting of the homogeneous vectors of positive weight. Dong and Griess proved [7, Theorem 5.9] that V♮ has an F₁-invariant integral form R over Z as in Definition 2.3 (see also [4]), and in particular V♮ is susceptible to the kind of analysis we have been considering, thereby giving rise to the p-adic Moonshine module V♮_p. In this Section we give some details.
We obtain an algebraic vertex algebra over Z_p by extension of scalars U := R ⊗ Z_p, and similarly a vertex algebra over Q_p, namely V := R ⊗_Z Q_p = U ⊗_{Z_p} Q_p. Note that ω ∈ V thanks to property (iii) of Definition 2.3, so V is in fact a VOA over Q_p. Now define V♮_p := V̄, the completion of V with respect to its sup-norm.
Comparison with Proposition 7.4 shows that V♮_p is a p-adic VOA and U furnishes an integral structure according to Definition 4.15 and Proposition 7.3. We can now prove the following result:
Theorem 11.1 For each prime p the p-adic Moonshine module V♮_p has the following properties:
(a) V♮_p has an integral structure isomorphic to the completion of the Dong-Griess form R ⊗ Z_p;
(b) the Monster simple group F₁ is a group of automorphisms of V♮_p;
(c) the character map v ↦ Z_{V♮}(v, q) for v ∈ V♮₊ extends to a Q_p-linear map V♮₊ → M_p into Serre's ring of p-adic modular forms. The image contains all cusp forms of integral weight and level one.
Proof We have already explained the proof of part (a). As for (b), notice that every submodule p^k U is preserved by the action of F₁, by linearity of this action. Therefore, there is a natural well-defined action of F₁ on each quotient module U/p^k U, and this action extends to the completion V♮_p. This concludes the proof of part (b).
As for part (c), we assert that if v ∈ V♮₊ then the trace function Z_{V♮}(v, τ) has a q-expansion of the general shape aq + bq² + ⋯. For we may assume that v ∈ V♮(k) is homogeneous of some weight k ≥ 2. Then o(v)1 = v(k − 1)1 = 0 by the creation property, and since V♮ has central charge c = 24, by (11.1) we have
Z_{V♮}(v, τ) = q^{−c/24} \sum_{n≥0} Tr_{V♮(n)} o(v) q^n = Tr_{V♮(2)} o(v) q + ⋯,
and our assertion is proved.
By the previous paragraph and Zhu's Theorem [10,15,46], it follows that Z_{V♮}(v, τ) is a sum of cusp forms of level 1 and mixed weights, and by [7, Theorem 1] every such level 1 cusp form appears in this way. Now to complete the proof of part (c), proceed as in the proof of Theorem 9.6. There is no need to exclude p = 2 because E₂ plays no rôle.
Remark 11.2
The manner in which p-adic modular forms arise from the character maps for the p-adic VOAs in Theorems 9.7 and 11.1 is rather different. This difference masks a well-known fact in the theory of VOAs, namely that the algebraic Heisenberg VOA and the algebraic Moonshine module have vastly different module categories. Indeed, V♮ has, up to isomorphism, exactly one simple module (namely the adjoint module V♮ itself), whereas S has infinitely many inequivalent simple modules. In the case of V♮, Zhu's theorem (loc. cit.) may be applied, and it leads to the fact, explained in the course of the proof of Theorem 11.1, that trace functions for positive weight states are already sums of cusp forms of mixed weight. But to obtain forms without poles, we must exclude states of weight 0 (essentially the vacuum 1). On the other hand, there is no such theory for the algebraic Heisenberg, and indeed one sees from (9.2) that the trace functions for S are by no means elliptic modular forms in general. That they take the form described in (9.2) is a happy convenience (and certainly not generic behaviour), and we can normalize the trace functions to exclude poles by multiplying by η. Since the quasi-modular nature of the characters disappears when one considers the p-adic Heisenberg algebra, it is natural to ask whether the module category for the p-adic Heisenberg algebra is simpler than for the algebraic Heisenberg VOA. We do not currently have an answer to this question; indeed, we have not even given a concrete definition of the module category for a p-adic VOA!
Remark 11.3 We point out that our notation V♮_p for the p-adic Moonshine module as we have defined it may be misleading, in that it does not record the dependence on the Dong-Griess form R. Indeed there are other forms that one could use in its place, such as the interesting self-dual form of Carnahan [4], and we have not studied whether these different forms produce isomorphic p-adic Moonshine modules.
Data availability Data sharing is not applicable to this article.
Lemma 3.1 Let v_n ∈ V be a sequence in a p-adic Banach space V. Then \sum_{n=1}^{\infty} v_n converges if and only if lim_{n→∞} v_n = 0.
Definition 3.3 If U and V are p-adic Banach spaces, then Hom(U, V) denotes the set of continuous Q_p-linear maps U → V. If U = V then we write End(V) = Hom(V, V).
Definition 4.1 Let V be a p-adic Banach space. A p-adic field on V associated to a state a ∈ V consists of a series a(z) ∈ End(V)[[z, z⁻¹]] such that, if we write a(z) = \sum_{n∈Z} a(n)z^{−n−1}, then: (1) there exists M ∈ R_{≥0} depending on a such that ‖a(n)b‖ ≤ M|a| ‖b‖ for all n ∈ Z and all b ∈ V; (2) lim_{n→∞} a(n)b = 0 for all b ∈ V.
Remark 4.2 Property (1) in Definition 4.1 implies that the operators a(n) ∈ End(V) are uniformly p-adically bounded, ‖a(n)‖ ≤ M|a|, in their operator norms, defined above in Sect. 3 and recalled below. In particular, the modes a(n) are continuous endomorphisms of V for all n. Property (2) of Definition 4.1 arises by taking limits of the truncation condition (1) in Definition 2.1.
Author details
1 Department of Mathematics and Statistics, McMaster University, Hamilton, ON, Canada. 2 Department of Mathematics, UC Santa Cruz, Santa Cruz, CA, USA.

Received: 5 October 2022. Accepted: 8 March 2023.
Proposition 5.1 (Commutator formula) For all $r, s \in \mathbb{Z}$ and all $u, v, w \in V$ we have
$$[u(r), v(s)]w = \sum_{i \ge 0} \binom{r}{i} (u(i)v)(r+s-i)w.$$
Proposition 9.1 The ring $S_r$ consists of all series $\sum_I a_I h_I \in \mathbb{Q}_p[[h(-1), h(-2), \ldots]]$ such that $\lim_{|I| \to \infty} |a_I| r^{|I|} = 0$.
Theorem 10.1 For a positive odd integer $r$ we have
$$\eta\, Z(h[-r]h[-1]\mathbf{1}, q) = \frac{2}{(r-1)!}\, G_{r+1}(\tau).$$
Here, $G_k(\tau)$ is the weight $k$ Eisenstein series
$$G_k(\tau) := -\frac{B_k}{2k} + \sum_{n \ge 1} \sigma_{k-1}(n)\, q^n,$$
where $B_k$ is the $k$th Bernoulli number.
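As a quick illustration (our addition, assuming SymPy is available), the $q$-expansion of $G_k$ can be generated directly from the definition above; for $r = 3$ the Theorem reads $\eta\, Z(h[-3]h[-1]\mathbf{1}, q) = G_4(\tau)$, since $2/(3-1)! = 1$:

```python
from sympy import bernoulli, divisor_sigma

def G_coeffs(k, N):
    # First N q-expansion coefficients of the weight-k Eisenstein series
    # G_k(tau) = -B_k/(2k) + sum_{n >= 1} sigma_{k-1}(n) q^n.
    return [-bernoulli(k) / (2 * k)] + [divisor_sigma(n, k - 1)
                                        for n in range(1, N)]

print(G_coeffs(4, 6))   # [1/240, 1, 9, 28, 73, 126]
```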
In general, expand (z − w) n as a power series in the second variable. This is inconsequential if n ≥ 0.
Actually, the authors work over C, but their proof applies to any field of characteristic zero.
Proof Recall that $c(r, m) := \sum_{j=0}^{m} (-1)^{m+j} \binom{m}{j} (j+1)^{r-1}$. If $p \mid j+1$ then certainly $(j+1)^{r-1} \equiv 0 \pmod{p^{a+1}}$. On the other hand, if $p \nmid j+1$ then $(j+1)^{r-1} \equiv 1 \pmod{p^{a+1}}$. It follows that
$$c(r, m) \equiv \sum_{\substack{0 \le j \le m \\ p \,\nmid\, j+1}} (-1)^{m+j} \binom{m}{j} \pmod{p^{a+1}},$$
and since the right side of the congruence is independent of $r$ we deduce that $c(r, m) \equiv c(s, m) \pmod{p^{a+1}}$. This proves the Lemma.

Theorem 10.5 (Kummer congruences) The sequence $(u_{1+p^a(p-1)})_{a \ge 0}$ converges $p$-adically in $S_1$ to a state that we denote $u_1 := \lim_{a \to \infty} u_{1+p^a(p-1)}$.

Proof The convergence for the terms in this sequence involving the Bernoulli numbers follows from the classical Kummer congruences. Therefore, if $a \le b$, it will suffice to establish that $c(r, m) \equiv c(s, m) \pmod{p^{a+1}}$ for all $m$ in the range $0 \le m \le p^b(p-1)$. But this follows by the preceding Lemma.

Notice that, putting all of this together, we have $f(u_1) = G^*_2$, where $G^*_2$ is the $p$-normalized Eisenstein series encountered in [43], with $q$-expansion given by
$$G^*_2 = \frac{p-1}{24} + \sum_{n \ge 1} \sigma^*(n)\, q^n.$$
Here $\sigma^*(n)$ is the sum over the divisors of $n$ that are coprime to $p$.

Remark 10.6 This computation illustrates the fact that the $p$-adic Heisenberg algebra contains states that map under $f$ to $p$-adic modular forms that are not quasi-modular forms of level 1. It seems plausible that the rescaled $p$-adic character map $f$ of Theorem 9.7 is surjective, that is, for an odd prime $p$ every Serre $p$-adic modular form can be obtained from the normalized character map applied to a sequence of $p$-adically converging states in $S_1$. Aside from some other computations with Eisenstein series, we have yet to examine this possibility in detail.

The p-adic Moonshine module

The Moonshine module $V^\natural = (V^\natural, Y, \mathbf{1}, \omega)$ is a widely known example of an algebraic VOA over $\mathbb{C}$ [17]. Its notoriety is due mainly to the fact that its automorphism group is the Monster simple group $F_1$.
[1] Borcherds, R.E.: Modular moonshine. III. Duke Math. J. 93(1), 129-154 (1998)
[2] Borcherds, R.E., Ryba, A.J.E.: Modular Moonshine II. Duke Math. J. 83(2), 435-459 (1996)
[3] Bosch, S., Güntzer, U., Remmert, R.: Non-Archimedean Analysis, vol. 261 of Grundlehren der mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. A Systematic Approach to Rigid Analytic Geometry. Springer, Berlin (1984)
[4] Carnahan, S.: A self-dual integral form of the moonshine module. SIGMA Symmetry Integr. Geom. Methods Appl. 15(36), 030 (2019)
[5] Conway, J.H., Curtis, R.T., Norton, S.P., Parker, R.A., Wilson, R.A.: ATLAS of Finite Groups. Maximal Subgroups and Ordinary Characters for Simple Groups. With Computational Assistance from J. G. Thackray. Oxford University Press, Eynsham (1985)
[6] Di Francesco, P., Mathieu, P., Sénéchal, D.: Conformal Field Theory. Graduate Texts in Contemporary Physics. Springer, New York (1997)
[7] Dong, C., Griess, R.L., Jr.: Integral forms in vertex operator algebras which are invariant under finite groups. J. Algebra 365, 184-198 (2012)
[8] Dong, C., Griess, R.L., Jr.: Lattice-integrality of certain group-invariant integral forms in vertex operator algebras. J. Algebra 474, 505-516 (2017)
[9] Dong, C., Griess, R.L., Jr.: Determinants for integral forms in lattice type vertex operator algebras. J. Algebra 558, 327-335 (2020)
[10] Dong, C., Li, H., Mason, G.: Modular-invariance of trace functions in orbifold theory and generalized moonshine. Commun. Math. Phys. 214(1), 1-56 (2000)
[11] Dong, C., Mason, G., Nagatomo, K.: Quasi-modular forms and trace functions associated to free boson and lattice vertex operator algebras. Int. Math. Res. Notices 8, 409-427 (2001)
[12] Dong, C., Ren, L.: Representations of vertex operator algebras over an arbitrary field. J. Algebra 403, 497-516 (2014)
[13] Dong, C., Ren, L.: Vertex operator algebras associated to the Virasoro algebra over an arbitrary field. Trans. Am. Math. Soc. 368(7), 5177-5196 (2016)
[14] Duncan, J.F.R., Harvey, J.A., Rayhaun, B.C.: Two new avatars of moonshine for the Thompson group. arXiv preprint arXiv:2202.08277 (2022)
[15] Franc, C., Mason, G.: Character vectors of strongly regular vertex operator algebras. SIGMA 18, 85 (2022)
[16] Frenkel, E., Ben-Zvi, D.: Vertex Algebras and Algebraic Curves. Mathematical Surveys and Monographs, vol. 88, 2nd edn. American Mathematical Society, Providence (2004)
[17] Frenkel, I., Lepowsky, J., Meurman, A.: Vertex Operator Algebras and the Monster. Pure and Applied Mathematics, vol. 134. Academic Press Inc, Boston (1988)
[18] Freund, P.G.O., Olson, M.: Non-archimedean strings. Phys. Lett. B 199(2), 186-190 (1987)
[19] Freund, P.G.O., Witten, E.: Adelic string amplitudes. Phys. Lett. B 199(2), 191-194 (1987)
[20] Goddard, P.: Meromorphic conformal field theory. In: Infinite-Dimensional Lie Algebras and Groups (Luminy-Marseille, 1988), vol. 7 of Adv. Ser. Math. Phys., pp. 556-587. World Sci. Publ., Teaneck (1989)
[21] Gubser, S.S.: A p-adic version of AdS/CFT. Adv. Theoret. Math. Phys. 21(7), 1655-1678 (2017)
[22] Harlow, D., Shenker, S.H., Stanford, D., Susskind, L.: Tree-like structure of eternal inflation: a solvable model. Phys. Rev. D 85, 063516 (2012)
[23] Huang, A., Stoica, B., Yau, S.-T.: General relativity from p-adic strings. arXiv preprint arXiv:1901.02013 (2019)
[24] Hung, L.-Y., Li, W., Melby-Thompson, C.M.: Wilson line networks in p-adic AdS/CFT. J. High Energy Phys. 5(33), 118 (2019)
[25] Jiao, X., Li, H., Qiang, M.: Modular Virasoro vertex algebras and affine vertex algebras. J. Algebra 519, 273-311 (2019)
[26] Kac, V.: Vertex Algebras for Beginners. University Lecture Series, vol. 10, 2nd edn. American Mathematical Society, Providence (1998)
[27] Katz, N.M.: p-adic properties of modular schemes and modular forms. In: Modular Functions of One Variable, III (Proc. Internat. Summer School, Univ. Antwerp, Antwerp, 1972), vol. 350 of Lecture Notes in Mathematics, pp. 69-190 (1973)
[28] Kedlaya, K.S.: p-adic Differential Equations. Cambridge Studies in Advanced Mathematics, vol. 125. Cambridge University Press, Cambridge (2010)
[29] Lepowsky, J.: Vertex operator algebras and the zeta function. In: Recent Developments in Quantum Affine Algebras and Related Topics (Raleigh, NC, 1998), vol. 248 of Contemp. Math., pp. 327-340. Amer. Math. Soc., Providence (1999)
[30] Lepowsky, J., Li, H.: Introduction to Vertex Operator Algebras and Their Representations. Progress in Mathematics, vol. 227. Birkhäuser, Boston (2004)
[31] Li, H., Qiang, M.: Heisenberg VOAs over fields of prime characteristic and their representations. Trans. Am. Math. Soc. 370(2), 1159-1184 (2018)
[32] Li, H., Qiang, M.: Twisted modules for affine vertex algebras over fields of prime characteristic. J. Algebra 541, 380-414 (2020)
[33] Lian, B.H., Zuckerman, G.J.: Commutative quantum operator algebras. J. Pure Appl. Algebra 100(1-3), 117-139 (1995)
[34] Marcolli, M.: Aspects of p-adic geometry related to entanglement entropy. In: Integrability, Quantization, and Geometry II. Quantum Theories and Algebraic Geometry, vol. 103 of Proc. Sympos. Pure Math., pp. 353-382. Amer. Math. Soc., Providence (2021)
[35] Mason, G.: Vertex rings and their Pierce bundles. In: Vertex Algebras and Geometry, vol. 711 of Contemp. Math., pp. 45-104. Amer. Math. Soc., Providence (2018)
[36] Mason, G., Tuite, M.: Vertex operators and modular forms. In: A Window into Zeta and Modular Physics, vol. 57 of Math. Sci. Res. Inst. Publ., pp. 183-278. Cambridge Univ. Press, Cambridge (2010)
[37] Mason, G., Tuite, M.P.: Torus chiral n-point functions for free boson and lattice vertex operator algebras. Commun. Math. Phys. 235(1), 47-68 (2003)
[38] Matsuo, A., Nagatomo, K.: Axioms for a Vertex Algebra and the Locality of Quantum Fields. MSJ Memoirs, vol. 4. Mathematical Society of Japan, Tokyo (1999)
[39] Munkres, J.R.: Topology, 2nd edn. Prentice Hall, Inc, Upper Saddle River (2000)
[40] Ryba, A.J.E.: Modular Moonshine? In: Moonshine, the Monster, and Related Topics (South Hadley, MA, 1994), vol. 193 of Contemp. Math., pp. 307-336. Amer. Math. Soc., Providence (1996)
[41] Schottenloher, M.: A Mathematical Introduction to Conformal Field Theory. Lecture Notes in Physics, vol. 759, 2nd edn. Springer, Berlin (2008)
[42] Serre, J.-P.: Endomorphismes complètement continus des espaces de Banach p-adiques. Inst. Hautes Études Sci. Publ. Math. 12, 69-85 (1962)
[43] Serre, J.-P.: Formes modulaires et fonctions zêta p-adiques. In: Modular Functions of One Variable, III (Proc. Internat. Summer School, Univ. Antwerp, 1972), vol. 350 of Lecture Notes in Math., pp. 191-268 (1973)
[44] Vladimirov, V.S., Volovich, I.V., Zelenov, E.I.: p-Adic Analysis and Mathematical Physics. Series on Soviet and East European Mathematics, vol. 1. World Scientific Publishing Co. Inc., River Edge (1994)
[45] Volovich, I.V.: p-adic string. Class. Quantum Gravity 4(4), L83-L87 (1987)
[46] Zhu, Y.: Modular invariance of characters of vertex operator algebras. J. Am. Math. Soc. 9(1), 237-302 (1996)
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
| [] |
[
"Computing-In-Memory Neural Network Accelerators for Safety-Critical Systems: Can Small Device Variations Be Disastrous?",
"Computing-In-Memory Neural Network Accelerators for Safety-Critical Systems: Can Small Device Variations Be Disastrous?",
"Computing-In-Memory Neural Network Accelerators for Safety-Critical Systems: Can Small Device Variations Be Disastrous?",
"Computing-In-Memory Neural Network Accelerators for Safety-Critical Systems: Can Small Device Variations Be Disastrous?"
] | [
"Zheyu Yan \nUniversity of Notre Dame\nUniversity of Notre Dame\nUniversity of Notre Dame\n\n",
"Sharon Xiaobo \nUniversity of Notre Dame\nUniversity of Notre Dame\nUniversity of Notre Dame\n\n",
"Hu \nUniversity of Notre Dame\nUniversity of Notre Dame\nUniversity of Notre Dame\n\n",
"Yiyu Shi \nUniversity of Notre Dame\nUniversity of Notre Dame\nUniversity of Notre Dame\n\n",
"Zheyu Yan \nUniversity of Notre Dame\nUniversity of Notre Dame\nUniversity of Notre Dame\n\n",
"Sharon Xiaobo \nUniversity of Notre Dame\nUniversity of Notre Dame\nUniversity of Notre Dame\n\n",
"Hu \nUniversity of Notre Dame\nUniversity of Notre Dame\nUniversity of Notre Dame\n\n",
"Yiyu Shi \nUniversity of Notre Dame\nUniversity of Notre Dame\nUniversity of Notre Dame\n\n"
] | [
"University of Notre Dame\nUniversity of Notre Dame\nUniversity of Notre Dame\n",
"University of Notre Dame\nUniversity of Notre Dame\nUniversity of Notre Dame\n",
"University of Notre Dame\nUniversity of Notre Dame\nUniversity of Notre Dame\n",
"University of Notre Dame\nUniversity of Notre Dame\nUniversity of Notre Dame\n",
"University of Notre Dame\nUniversity of Notre Dame\nUniversity of Notre Dame\n",
"University of Notre Dame\nUniversity of Notre Dame\nUniversity of Notre Dame\n",
"University of Notre Dame\nUniversity of Notre Dame\nUniversity of Notre Dame\n",
"University of Notre Dame\nUniversity of Notre Dame\nUniversity of Notre Dame\n"
] | [] | Computing-in-Memory (CiM) architectures based on emerging nonvolatile memory (NVM) devices have demonstrated great potential for deep neural network (DNN) acceleration thanks to their high energy efficiency. However, NVM devices suffer from various nonidealities, especially device-to-device variations due to fabrication defects and cycle-to-cycle variations due to the stochastic behavior of devices. As such, the DNN weights actually mapped to NVM devices could deviate significantly from the expected values, leading to large performance degradation. To address this issue, most existing works focus on maximizing average performance under device variations. This objective would work well for general-purpose scenarios. But for safety-critical applications, the worst-case performance must also be considered. Unfortunately, this has been rarely explored in the literature. In this work, we formulate the problem of determining the worst-case performance of CiM DNN accelerators under the impact of device variations. We further propose a method to effectively find the specific combination of device variation in the high-dimensional space that leads to the worst-case performance. We find that even with very small device variations, the accuracy of a DNN can drop drastically, causing concerns when deploying CiM accelerators in safety-critical applications. Finally, we show that surprisingly none of the existing methods used to enhance average DNN performance in CiM accelerators are very effective when extended to enhance the worst-case performance, and further research down the road is needed to address this problem. | 10.1145/3508352.3549360 | [
"https://export.arxiv.org/pdf/2207.07626v1.pdf"
] | 250,607,542 | 2207.07626 | dcda4fe97a0bd683bb7c3a7a1fe38c50343524d3 |
Computing-In-Memory Neural Network Accelerators for Safety-Critical Systems: Can Small Device Variations Be Disastrous?
Zheyu Yan
University of Notre Dame
University of Notre Dame
University of Notre Dame
Sharon Xiaobo
University of Notre Dame
University of Notre Dame
University of Notre Dame
Hu
University of Notre Dame
University of Notre Dame
University of Notre Dame
Yiyu Shi
University of Notre Dame
University of Notre Dame
University of Notre Dame
Computing-In-Memory Neural Network Accelerators for Safety-Critical Systems: Can Small Device Variations Be Disastrous?
Computing-in-Memory (CiM) architectures based on emerging nonvolatile memory (NVM) devices have demonstrated great potential for deep neural network (DNN) acceleration thanks to their high energy efficiency. However, NVM devices suffer from various nonidealities, especially device-to-device variations due to fabrication defects and cycle-to-cycle variations due to the stochastic behavior of devices. As such, the DNN weights actually mapped to NVM devices could deviate significantly from the expected values, leading to large performance degradation. To address this issue, most existing works focus on maximizing average performance under device variations. This objective would work well for general-purpose scenarios. But for safety-critical applications, the worst-case performance must also be considered. Unfortunately, this has been rarely explored in the literature. In this work, we formulate the problem of determining the worst-case performance of CiM DNN accelerators under the impact of device variations. We further propose a method to effectively find the specific combination of device variation in the high-dimensional space that leads to the worst-case performance. We find that even with very small device variations, the accuracy of a DNN can drop drastically, causing concerns when deploying CiM accelerators in safety-critical applications. Finally, we show that surprisingly none of the existing methods used to enhance average DNN performance in CiM accelerators are very effective when extended to enhance the worst-case performance, and further research down the road is needed to address this problem.
can reduce data movement by performing in-situ weight data access [29]. Furthermore, by employing non-volatile memory (NVM) devices (e.g., ferroelectric field-effect transistors (FeFETs), resistive random-access memories (RRAMs), and phase-change memories (PCMs)), CiM can achieve higher memory density and higher energy efficiency compared with conventional MOSFET-based designs [3]. However, NVM devices can be unreliable, especially because of device-to-device variations due to fabrication defects and cycle-to-cycle variations due to the stochastic behavior of devices. If not handled properly, the weight values provided by the NVM devices during computations could deviate significantly from the expected values, leading to great performance degradation.
To quantify the robustness of CiM DNN accelerators, a Monte Carlo (MC) simulation-based evaluation process is often adopted [23]. A device variation model is extracted from physical measurements. Then in each MC run, one instance of each device is sampled from the variation model and the DNN performance is evaluated. This process is repeated thousands of times until the collected DNN performance distribution converges. Following this process, existing practices [9,12,21,34,37] generally include up to 10,000 MC runs, which is extremely time-consuming. Other researchers use Bayesian Neural Networks (BNNs) to evaluate the robustness against device variations [7], but the variational inference of BNNs is essentially one form of MC simulation.
Based on these evaluation methods, many works have been proposed in the literature to improve the average performance of CiM DNN accelerators under device variations. They fall into two categories: (1) reducing device variations and (2) enhancing DNN robustness. To reduce device variations, a popular option is write-verify [26]. The approach applies iterative write and read (verify) pulses to make sure that the maximum difference between the weights eventually programmed into the devices and the desired values is bounded by a designer-specified threshold. Write-verify can reduce the weight deviation from the ideal value to less than 3%, thus reducing the average accuracy degradation of deployed DNNs to less than 0.5% [26]. To enhance DNN robustness, a variety of approaches exist. For example, neural architecture search is devised [35,36] to automatically search through a designated search space for DNN architectures that are more robust. Variation-aware training [1,9,11], on the other hand, injects device variation-induced weight perturbations in the training process, so that the trained DNN weights are more robust against similar types of variations. Other approaches include on-chip in-situ training [39] that trains DNNs directly on noisy devices and Bayesian Neural Network (BNN)-based approaches that utilize the variational training process of BNNs to improve DNN robustness [7].

One common assumption of all these evaluation and improvement methods is that they focus on the average performance of a DNN under device variations, which may work well in general-purpose scenarios. However, for safety-critical applications where failure could result in loss of life (e.g., medical devices, aircraft flight control, and nuclear systems), significant property damage, or damage to the environment, only focusing on the average performance is not enough. The worst-case performance, regardless of its likelihood, must also be considered [13,31]. Yet this is a very challenging problem: given the extremely high dimension of the device variation space, simply running MC simulations in the hope of capturing the worst-case corner will not work. As shown in Fig. 1, for various DNNs on different datasets, even though the MC simulation has converged with 100K runs, the highest DNN top-1 error rate it discovers is still much lower than that identified by our method to be discussed later in this paper, where the worst-case error is close to 100%.
Despite the importance of the problem, very little has been explored in the literature. The only related work comes from the security perspective [30], where a weight projected gradient descent (PGD) attack method is developed to find the weight perturbations that can lead to mis-classification of inputs. However, the goal is to generate a successful weight perturbation attack, not to identify the worst-case scenario under all possible variations.
To fill the gap, in this work we propose an optimization framework that can efficiently and effectively find the worst-case performance of DNNs in a CiM accelerator with maximum weight variations bounded by write-verify. We show that the problem can be formulated as a constrained optimization with non-differentiable objective, which can be relaxed and solved by gradient-based optimization. We then conduct experiments on different networks and datasets under a practical setting commonly used (i.e., each device represents 2 bits of data with write-verify and yields a maximum weight perturbation magnitude of 0.03). Weight PGD [30], the only method of relevance in the literature, identifies worst-case scenarios where the accuracy is similar to that of random guess, while ours can find ones with close to zero accuracy. We then use our framework to evaluate the effectiveness of existing solutions designed to enhance the average accuracy of DNNs under device variations, and see how they improve the worst-case performance. We study two types of remedies: reducing device variations and enhancing DNN robustness. Experimental results suggest that they either induce significant overhead or are not quite effective, and further research is needed.
The main contributions of this work are multi-fold:
• This is the first work that formulates the problem of finding worst-case performance in DNN CiM accelerators with device variations for safety-critical applications.
• An efficient gradient-based framework is proposed to solve the non-differentiable optimization problem and find the worst-case scenario.
• Experimental results show that our framework is the only method that can effectively identify the worst-case performance of a DNN.
• We show that even though the maximum weight perturbations are bounded (e.g., by write-verify) to be very small, significant DNN accuracy drop can still occur. Therefore any application of CiM accelerators in safety-critical settings should use caution.
• We further demonstrate that existing methods to enhance DNN robustness are either too costly or not quite effective in improving worst-case performance. New algorithms in this direction are needed.

The remainder of the paper is organized as follows. In Section 2 we first discuss the background information about CiM DNN accelerators, their robustness issue caused by device variations, and existing methods targeting this issue. We then formulate the problem of finding the worst-case performance of DNNs under device variations and propose a framework to solve it in Section 3, along with experimental results to show its efficacy. Extensive studies on the effectiveness of extending existing methods to enhance DNN worst-case performance are carried out in Section 4 and concluding remarks are given in Section 5.

The crossbar array is the key computation engine of CiM DNN accelerators. A crossbar array can perform matrix-vector multiplication in one clock cycle. In a crossbar array, matrix values (e.g., weights in DNNs) are stored at the cross point of each vertical and horizontal line with NVM devices (e.g., RRAMs and FeFETs), and each vector value (e.g., inputs for DNNs) is fed in through horizontal data lines in the form of voltage. The output then flows out through vertical lines in the form of current. The calculation in the crossbar array is performed in the analog domain according to Kirchhoff's laws, but additional peripheral digital circuits are needed for other key DNN operations (e.g., pooling and non-linear activation), so digital-to-analog and analog-to-digital converters are used between different components, especially DACs to transfer digital input data to voltage levels and ADCs to transfer analog output currents into digital values.
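To make the dataflow concrete, the following minimal sketch (our illustration, not a circuit simulator) shows the ideal analog computation a crossbar performs: a matrix-vector product where the stored matrix entries play the role of conductances.

```python
import numpy as np

def crossbar_mvm(G, v):
    # Ohm's law gives per-device currents G[i, j] * v[i]; Kirchhoff's current
    # law sums them along each vertical line, so the output is simply v @ G.
    return v @ G

G = np.abs(np.random.randn(4, 3))   # conductances at 4 x 3 cross points
v = np.random.randn(4)              # input voltages on the horizontal lines
print(crossbar_mvm(G, v))           # column currents, one per output
```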
Resistive crossbar arrays suffer from various sources of variations and noise. Two major ones are spatial variations and temporal variations. Spatial variations result from fabrication defects and have both local and global correlations. NVM devices also suffer from temporal variations due to the stochasticity in the device material, which causes fluctuations in conductance when programmed at different times. Temporal variations are typically independent from device to device and are irrelevant to the value to be programmed [6]. In this work, as a proof of concept, we assume the impact of these non-idealities to be independent on each NVM device. The proposed framework can also be extended to other sources of variations with modification.
Evaluating DNN Robustness Against Device Variations
Most existing research uses a Monte Carlo (MC) simulation-based evaluation process to quantify the robustness of CiM DNN accelerators under the impact of device variations. A device variation model and a circuit model are first extracted from physical measurements. The DNN to be evaluated is then mapped onto the circuit model and the desired value of each NVM device is calculated. In each MC run, for each device, one instance of a non-ideal device is sampled from the variation model, and the actual conductance value of each NVM device is determined. After that, DNN performance (e.g., classification accuracy) can be collected. This process is repeated thousands of times until the collected DNN performance distribution converges. Existing practices [9,12,21,34,36,37] generally include close to 10,000 MC runs, which is extremely time-consuming. Empirical results [36,37] show that 10k MC runs are enough for evaluating the average accuracy of DNNs, although no theoretical guarantee is provided. Several researchers have also looked into the impact of weight perturbations on neural network security [30,33]. This line of research, dubbed "Adversarial Weight Perturbation", tries to link the perturbation in weights to the more thoroughly studied adversarial example issue, where the inputs of DNNs are intentionally perturbed to trigger mis-classifications. One work [33] trains DNNs on adversarial examples to collect adversarial weight perturbations. Most recently, [30] tries to find the adversarial weight perturbation through a modified weight Projected Gradient Descent (PGD) attack. The method can successfully find a small perturbation to reduce the accuracy of DNNs. The work focuses on the success of the attack and does not offer a guarantee of worst-case weight perturbation, as will be demonstrated by our experimental results.
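A sketch of the MC evaluation loop described at the beginning of this subsection is given below (our illustration; the `evaluate` callback and the Gaussian variation model are assumptions standing in for the measured device model and the accelerator simulator):

```python
import numpy as np

def mc_robustness(weights, evaluate, sigma=0.03, runs=10000, seed=0):
    # weights: dict of name -> np.ndarray; evaluate: perturbed weights -> accuracy
    rng = np.random.default_rng(seed)
    accs = []
    for _ in range(runs):
        # One independently sampled variation instance per device in each run
        perturbed = {name: w + sigma * rng.standard_normal(w.shape)
                     for name, w in weights.items()}
        accs.append(evaluate(perturbed))
    return np.mean(accs), np.std(accs)
```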
Addressing Device Variations
Various approaches have been proposed to deal with the issue of device variations in CiM DNN accelerators. Here we briefly review the two most common types: enhancing DNN robustness and reducing device variations.
A common method used to enhance DNN robustness against device variations is variation-aware training [1,9,11,23]. Also known as noise-injection training, the method injects variation to DNN weights in the training process, which can provide a DNN model that is statistically robust against the impact of device variations. In each iteration, in addition to traditional gradient descent, an instance of variation is sampled from a variation distribution and added to the weights in the forward pass. The backpropagation pass is noise free. Once the gradients are collected, this variation is cleared and the variation-free weight is updated according to the previously collected gradients. Other approaches include designing more robust DNN architectures [7,11,36,40] and pruning [2,12].
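A minimal PyTorch sketch of one such training iteration is given below (our illustration; `model`, `loss_fn`, and `optimizer` are assumed to be defined elsewhere, and `sigma` stands in for the modeled variation magnitude):

```python
import torch

def variation_aware_step(model, x, y, loss_fn, optimizer, sigma=0.03):
    noises = []
    with torch.no_grad():
        for p in model.parameters():         # sample one variation instance
            n = sigma * torch.randn_like(p)  # and inject it into the weights
            p.add_(n)
            noises.append(n)
    loss = loss_fn(model(x), y)              # forward pass with noisy weights
    optimizer.zero_grad()
    loss.backward()                          # collect gradients
    with torch.no_grad():
        for p, n in zip(model.parameters(), noises):
            p.sub_(n)                        # clear the injected variation
    optimizer.step()                         # update the variation-free weights
    return loss.item()
```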
To reduce device variations, write-verify [26,39] is commonly used during the programming process. An NVM device is first programmed to an initial state using a pre-defined pulse pattern. Then the value of the device is read out to verify if its conductance falls within a certain margin from the desired value (i.e., if its value is precise). If not, an additional update pulse is applied, aiming to bring the device conductance closer to the desired one. This process is repeated until the difference between the value programmed into the device and the desired value is acceptable. The process typically requires a few iterations. Most recently, researchers have demonstrated that it is possible to only selectively write-verify a small number of critical devices to maintain the average accuracy of a DNN [34]. There are also various circuit design efforts [10,18,27] that try to mitigate the device variations.
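The write-verify procedure described above can be sketched as follows (our illustration; the `device` object and its `program`/`read`/`update_pulse` methods are hypothetical placeholders for the actual programming circuitry):

```python
def write_verify(device, target, g_th=0.06, max_iters=50):
    device.program(target)                    # initial pre-defined pulse pattern
    for _ in range(max_iters):
        actual = device.read()                # verify: read the conductance back
        if abs(actual - target) <= g_th:      # within the designated margin?
            return actual
        device.update_pulse(target - actual)  # nudge conductance toward target
    return device.read()
```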
EVALUATING WORST-CASE PERFORMANCE OF CIM DNN ACCELERATORS
A major impact of the device variations is that the conductance of the NVM devices will deviate from the desired value due to the device-to-device variations and cycle-to-cycle variations during the programming process, leading to perturbations in the weight values of a DNN and affecting its accuracy. In Section 3.1, we first model the impact of NVM device variations on weight perturbation, assuming write-verify is used to minimize the variations. Then based on the weight perturbation model, in Section 3.2 we formulate the problem of finding the lowest DNN accuracy under weight perturbation, and in Section 3.3 we devise a framework to solve it. Experimental results are presented in Section 3.4.
Modeling of Weight Perturbation Due to Device Variations
Here we show how we model the impact of device variations on DNN weights. We are chiefly concerned with the impact of device variations in the programming process, i.e., the conductance value programmed into an NVM device differing from the desired value. For a weight represented by $M$ bits, let its desired value $\mathcal{W}_{des}$ be

$$\mathcal{W}_{des} = \sum_{j=0}^{M-1} w_j \times 2^j \quad (1)$$

where $w_j$ is the value of the $j$th bit of the desired weight value. Moreover, each NVM device is capable of representing $K$ bits of data. Thus, each weight value of the DNNs needs $M/K$ devices to represent². This mapping process can be represented as

$$g_i = \sum_{j=0}^{K-1} w_{i \times K + j} \times 2^j \quad (2)$$

where $g_i$ is the desired conductance of the $i$th device representing a weight. Note that negative weights are mapped in a similar manner. We assume all of the devices use write-verify so that the difference between the actual conductance of each device and its desired value is bounded [6]:

$$\tilde{g}_i = g_i + \epsilon_i, \quad -g_{th} \le \epsilon_i \le g_{th} \quad (3)$$

where $\tilde{g}_i$ is the actually programmed conductance and $g_{th}$ is the designated write-verify tolerance threshold.

Thus, when a weight is programmed, the actual value $\mathcal{W}_{prog}$ mapped on the devices would be

$$\mathcal{W}_{prog} = \sum_{i=0}^{M/K-1} 2^{i \times K} \times \tilde{g}_i = \sum_{i=0}^{M/K-1} 2^{i \times K} \times (g_i + \epsilon_i) = \mathcal{W}_{des} + \sum_{i=0}^{M/K-1} \epsilon_i \times 2^{i \times K},$$
$$\mathcal{W}_{des} - W_{th} \le \mathcal{W}_{prog} \le \mathcal{W}_{des} + W_{th} \quad (4)$$

where $W_{th} = \sum_{i=0}^{M/K-1} g_{th} \times 2^{i \times K}$. In this paper, we denote $W_{th}$ as the weight perturbation bound.

In this paper, we set $K = 2$ as in [11,34]. Following the standard practice discussed in Section 2.3, for each weight, we iteratively program the difference between the value on the device and the expected value until it is below 0.1, i.e., $g_{th} = 0.06$ [34], resulting in $W_{th} = 0.03$ (unless otherwise specified). These numbers are in line with those reported in [26], which confirms the validity of our model and parameters.
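The following numerical sketch (our illustration) walks through Eqs. (1)-(4) for a single $M$-bit weight split across $M/K$ devices:

```python
import numpy as np

def program_weight(bits, K=2, g_th=0.06, seed=0):
    # bits[j] is the j-th bit of the desired weight, least significant first
    rng = np.random.default_rng(seed)
    M = len(bits)
    W_des = sum(b << j for j, b in enumerate(bits))              # Eq. (1)
    W_prog = 0.0
    for i in range(M // K):
        g_i = sum(bits[i * K + j] << j for j in range(K))        # Eq. (2)
        eps = rng.uniform(-g_th, g_th)                           # Eq. (3)
        W_prog += (g_i + eps) * 2 ** (i * K)                     # Eq. (4)
    W_th = sum(g_th * 2 ** (i * K) for i in range(M // K))
    assert abs(W_prog - W_des) <= W_th + 1e-12                   # bound in Eq. (4)
    return W_des, W_prog

print(program_weight([1, 0, 1, 1]))  # 4-bit weight 13, two 2-bit devices
```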
Problem Definition
Now that we have the weight perturbation model, we can define the problem of identifying the worst-case DNN accuracy. Without loss of generality, in this work we use $\{f, W\}$ to represent a neural network, where $f$ is its neural architecture and $W$ is its weights. The forward pass of this neural network is represented by $f(W, x)$, where $x$ is the input.

From the model in Section 3.1, we can see that the weight perturbation due to device variations is additive and independent. As such, the forward pass of a neural network under the impact of device variations can be expressed as $f(W + \Delta W, x)$, where $\Delta W$ is the weight perturbation caused by device variations. We can define the perturbed neural network as $\{f, W + \Delta W\}$.

With these notations, we can give the following problem definition: Given a neural network $\{f, W\}$ and an evaluation dataset $D$, find the perturbation $\Delta W$ such that the accuracy of the perturbed neural network $\{f, W + \Delta W\}$ on dataset $D$ is the lowest among all possible perturbations inside the weight perturbation bound. In the rest of this paper, this perturbation $\Delta W$ is denoted as the worst-case weight perturbation; the resultant performance (accuracy) is denoted as the worst-case performance (accuracy); and the corresponding neural network is denoted as the worst-case neural network.
Under this definition, we can formalize the problem as:

$$\min_{\Delta W} \; \big|\{\, f(W + \Delta W, x) == y \mid (x, y) \in D \,\}\big| \quad \text{s.t. } L^\infty(\Delta W) \le W_{th} \quad (5)$$

where $x$ and $y$ are the input data and classification label in dataset $D$, respectively, $L^\infty(\Delta W)$ is the maximum magnitude of the weight perturbation, i.e., $\max(\mathrm{abs}(\Delta W))$, $W_{th}$ is the weight perturbation bound in (4) in Section 3.1, and $|S|$ denotes the cardinality (size) of a set $S$. As $f$, $W$ and $D$ are fixed, the goal is to find the $\Delta W$ that minimizes the size of this set of correct classifications, i.e., achieving the worst-case accuracy.
Finding the Worst-Case Performance
The optimization problem defined by (5) is extremely difficult to solve directly due to the non-differentiable objective function. In this section, we put forward a framework to cast it into an alternative form that can be solved by existing optimization algorithms.
To start with, we can slightly relax the objective. Consider a function $g$ such that $f(W + \Delta W, x) == y$ if and only if $g(x, \{f, W + \Delta W\}) > 0$. In this case, the optimization objective

$$\big|\{\, f(W + \Delta W, x) == y \mid (x, y) \in D \,\}\big| \quad (6)$$

can be relaxed to

$$\sum_{x \in D} g(x, \{f, W + \Delta W\}) \quad (7)$$

Intuitively, minimizing (7) can help to minimize (6), and these two optimization problems become strictly equivalent if all data in $D$ is mis-classified in the presence of $\Delta W$.
There are various choices of $g(x, \{f, W + \Delta W\})$ that can meet the requirement. We show some of the representative ones below:

$$O = f(W + \Delta W, x), \qquad Z = \mathrm{Softmax}(f(W + \Delta W, x))$$
$$g_1(x, \{f, W + \Delta W\}) = -\mathrm{CELoss}(O, y) + 1$$
$$g_2(x, \{f, W + \Delta W\}) = \max\{Z_y - \max_{i \ne y}(Z_i),\, 0\}$$
$$g_3(x, \{f, W + \Delta W\}) = \mathrm{softplus}(Z_y - \max_{i \ne y}(Z_i)) - \log(2)$$
$$g_4(x, \{f, W + \Delta W\}) = \max\{Z_y - 0.5,\, 0\}$$
$$g_5(x, \{f, W + \Delta W\}) = -\log(2 - 2 \cdot Z_y)$$
$$g_6(x, \{f, W + \Delta W\}) = \max\{O_y - \max_{i \ne y}(O_i),\, 0\}$$
$$g_7(x, \{f, W + \Delta W\}) = \mathrm{softplus}(O_y - \max_{i \ne y}(O_i)) - \log(2) \quad (8)$$

where $x$ and $y$ are the input data and classification label, respectively, $\mathrm{softplus}(z) = \log(1 + \exp(z))$, and $\mathrm{CELoss}(O, y)$ is the cross-entropy loss. According to the empirical results, in this paper we choose

$$O = f(W + \Delta W, x), \qquad g(x, \{f, W + \Delta W\}) = \max\{O_y - \max_{i \ne y}(O_i),\, 0\} \quad (9)$$
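In PyTorch, the chosen relaxation can be implemented as below (our sketch): the clamped logit margin of the true class, which is zero exactly when an input is mis-classified and positive otherwise.

```python
import torch

def g_margin(logits, y):
    # logits: (batch, classes) tensor O; y: (batch,) integer labels
    O_y = logits.gather(1, y.unsqueeze(1)).squeeze(1)      # O_y
    masked = logits.clone()
    masked.scatter_(1, y.unsqueeze(1), float('-inf'))      # exclude class y
    runner_up = masked.max(dim=1).values                   # max_{i != y} O_i
    return torch.clamp(O_y - runner_up, min=0.0)           # Eq. (9)
```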
We now have the relaxed optimization problem

$$\min_{\Delta W} \; \sum_{x \in D} g(x, \{f, W + \Delta W\}) \quad \text{s.t. } L^\infty(\Delta W) \le W_{th} \quad (10)$$

For this relaxed problem, we can utilize a Lagrange multiplier to provide an alternative formulation

$$\min_{\Delta W} \; c \cdot \sum_{x \in D} g(x, \{f, W + \Delta W\}) + \big(L^\infty(\Delta W) - W_{th}\big) \quad (11)$$

where $c > 0$ is a suitably chosen constant, if an optimal solution exists. This objective is equivalent to the relaxed problem, in the sense that there exists $c > 0$ such that the optimal solution to the latter matches the optimal solution to the former. Thus, we use the optimization objective (11) as the relaxed alternative to the defined objective (5). Because the objective (11) is differentiable w.r.t. $\Delta W$, we use gradient descent as the optimization algorithm to solve this problem.
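Putting the pieces together, the search for a fixed $c$ can be sketched as follows (our illustration; we assume the perturbation tensors `dW`, created with `requires_grad=True`, have been spliced into `model` so that its forward pass computes $f(W + \Delta W, x)$, and `g_margin` is the relaxation above; the constant $-W_{th}$ in (11) does not affect gradients and is dropped):

```python
import torch

def worst_case_search(model, dW, loader, c, lr=1e-4, steps=100):
    opt = torch.optim.Adam(dW, lr=lr)        # optimize only the perturbation
    for _ in range(steps):
        for x, y in loader:
            linf = max(d.abs().max() for d in dW)            # L_inf(dW)
            loss = c * g_margin(model(x), y).sum() + linf    # Eq. (11)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return max(d.abs().max().item() for d in dW)             # final L_inf
```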
The choice of the constant $c$. Qualitatively speaking, in objective (11) a larger $c$ value means more focus on lowering accuracy and less focus on $L^\infty(\Delta W)$, which results in lower final accuracy and greater $L^\infty(\Delta W)$. The empirical results shown in Fig. 3 confirm this observation, where we plot how the worst-case error rate and $L^\infty(\Delta W)$ vary with the choice of $c$ using LeNet for MNIST.

Because the empirical results show that $L^\infty(\Delta W)$ is monotonic w.r.t. $c$, to find the value that leads to the lowest performance under the weight perturbation bound $W_{th}$, we use binary search to find the largest $c$ value that ensures $L^\infty(\Delta W) \le W_{th}$. The corresponding accuracy obtained with this $c$ is then the worst-case performance of this DNN model under weight perturbations bounded by $W_{th}$. Finally, since the optimization problem stated in (11) can be solved via gradient descent, the time and memory complexity of our algorithm is comparable to that needed to train the DNN.
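The outer binary search over $c$ can then be sketched as follows (our illustration; `run_attack(c)` runs the gradient-descent search above and returns the resulting $L^\infty(\Delta W)$):

```python
def search_c(run_attack, W_th=0.03, c_lo=1e-12, c_hi=1.0, iters=20):
    for _ in range(iters):
        c = (c_lo * c_hi) ** 0.5      # bisect on a logarithmic scale
        if run_attack(c) <= W_th:
            c_lo = c                  # perturbation within bound: try larger c
        else:
            c_hi = c                  # bound violated: shrink c
    return c_lo                       # largest c found with L_inf(dW) <= W_th
```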
Experimental Evaluation
In this section, we use experiments to demonstrate the effectiveness of the proposed method in finding the worst-case performance of different DNN models. Six different DNN models for four different datasets are used: (1) LeNet [17] for MNIST [5], (2) ConvNet [23] for CIFAR-10 [15], (3) ResNet-18 [8] for CIFAR-10, (4) ResNet-18 for Tiny ImageNet [16], (5) ResNet-18 for ImageNet [4] and (6) VGG-16 [28] for ImageNet. LeNet and ConvNet are quantized to 4 bits while ResNet-18 and VGG-16 are quantized to 8 bits. As discussed in Section 3.1, we use $W_{th} = 0.03$ as the weight perturbation bound, i.e., each weight is perturbed by at most ±0.03.
As there is no existing work on identifying the worst-case performance of a DNN under device variations to compare with other than the naive MC simulations, we slightly modify the weight PGD attack method [30], which tries to find the smallest weight perturbation that can lead to a successful attack, as an additional baseline. Experiments are conducted on Titan-XP GPUs in the PyTorch framework. For the MC simulation baseline, 100,000 runs are used. We use Adam [14] as the gradient descent optimizer. The detailed setup for the proposed method is shown in Table 1.

Worst-case DNN Accuracy Obtained by Different Methods. As shown in Table 2, compared with the weight PGD attack and MC simulations, the proposed framework is more effective in finding the worst-case performance. It identifies worst-case weight perturbations that can lead to below 10% accuracy for LeNet and ConvNet, and almost 0% accuracy for ResNet-18 and VGG-16. On the other hand, the weight PGD attack can only find perturbations that lead to DNN accuracy close to random guessing (i.e., 1/N for N classes, which is 10% for CIFAR-10, 0.5% for Tiny ImageNet, and 0.1% for ImageNet). MC simulations perform the worst. With 100,000 runs it fails to find any perturbation that can result in accuracy drops similar to those of the other two methods. This is quite expected given the high-dimensional exploration space spanned by the large number of weights. Our framework takes slightly longer time to run than the weight PGD attack method, mainly due to the number of epochs the gradient descent takes to converge. Yet both methods are much faster than the MC simulations.
The results from the table suggest that DNNs are extremely vulnerable to device variations, even though write-verify is used and the maximum weight perturbation is only 0.03. Considering the fact that even converged 100,000-run MC simulations cannot get close to the actual worst-case accuracy, for safety-critical applications it may be necessary to screen each programmed CiM accelerator and test its accuracy to avoid disastrous consequences. Random-sampling-based quality control may not be an option.
In addition, comparing the results obtained by our framework on ConvNet and ResNet-18 for CIFAR-10 (as well as ResNet-18 and VGG-16 for ImageNet) we can see that deeper networks are more susceptible to weight perturbations. This is expected as more perturbation can be accumulated in the forward propagation. Table 2: Comparison between MC simulation (MC), weight PGD attack (PGD) and the proposed framework in obtaining the worst-case accuracy of various DNN models for different dataset using weight perturbation bound ℎ = 0.03. The accuracy of the original model without perturbation (Ori. Acc) is also provided. The proposed method finds perturbations that lead to much lower accuracy than those found by other methods, using slightly longer time than the weight PGD attack method but much shorter time than the MC simulation. Finally, the experimental results also show that quantization in both weights and activations is not an effective method to improve worst-case DNN performance, because all the models in these experiments are quantized as explained in the experimental setup. The model is more confident in the wrong cases than in the correct ones.
Analysis of Classification Results.
We now take a closer look at the classification results of the worst-case LeNet for MNIST identified by our framework. We first examine the classification confidence, the distribution of which is shown in Fig. 4. Following common practice, the classification confidence of a DNN on an input is calculated by a Softmax function on its output vector. The element having the highest confidence is considered the classification result. Contrary to our intuition, from the figure we can see that the worst-case LeNet is highly confident in the inputs it mis-classifies, with an average confidence of 0.90 on all the inputs that are classified wrongly. On the other hand, the DNN model is not confident in the inputs it classifies correctly, having an average confidence of only 0.47 on these inputs. This is significantly different from the original LeNet without perturbation, whose confidence is always close to 1.
We also observe how the classification results are distributed among different classes, as reported in Table 3. From the table we can see that most of the errors are due to images being wrongly classified to the same class (class 1), while many of the images that truly belong to this class are being classified to other classes (class 2 and class 3).
We hope that these observations can potentially shed light on the development of new algorithms to boost the worst-case performance of DNNs in the future.

Analysis of Worst-Case Weight Perturbation. Here we show how the perturbation is distributed among the weights in the worst-case LeNet for MNIST. As can be seen in Fig. 5, most of the weights are either not perturbed or perturbed to the maximum magnitude (i.e., $W_{th} = 0.03$). We then show the number of weights that are perturbed in each layer in Fig. 6. We can observe that the weights in convolutional layers and the final FC layer are more likely to be perturbed. This is probably due to the fact that they in general have more impact on the accuracy of a DNN.
ENHANCING WORST-CASE PERFORMANCE OF CIM DNN ACCELERATORS
Several works exist in the literature to improve the average performance of a DNN under device variations. In this section, we try to extend them to improve the worst-case performance of a DNN, and evaluate their effectiveness. Specifically, we will include two types of methods: (1) confining the device variations and (2) training DNN models that are more robust against device variations. As discussed in Section 2, one of the most popular practices of the former is write-verify and the latter includes variation-aware training.
In addition to these, we also modify adversarial training [30], a method commonly used to combat adversarial inputs, to address weight perturbation caused by device variations. The algorithm is summarized in Alg. 1. Similar to how adversarial training handles input perturbations, in the DNN training process we inject worst-case perturbations into the weights of a DNN, in the hope that the trained model performs better under the impact of device variations. Specifically, in each iteration of training, we first conduct the proposed method to find the perturbations of the current weights that can lead to the worst-case accuracy of the model. We then add them to the weights and collect the gradients of the loss with respect to the perturbed weights; afterwards, the perturbations are cleared and the clean weights are updated using the collected gradients.
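One inner iteration of Alg. 1 can be sketched in PyTorch as follows (our illustration; `find_worst_case` stands for the search framework of Section 3.3 and is assumed to return one perturbation tensor per parameter):

```python
import torch

def adversarial_weight_step(model, x, y, loss_fn, optimizer, find_worst_case):
    N = find_worst_case(model, x, y)       # worst-case weight perturbations
    with torch.no_grad():
        for p, n in zip(model.parameters(), N):
            p.add_(n)                      # inject the perturbation
    loss = loss_fn(model(x), y)            # forward pass on perturbed weights
    optimizer.zero_grad()
    loss.backward()                        # collect gradients
    with torch.no_grad():
        for p, n in zip(model.parameters(), N):
            p.sub_(n)                      # clear the perturbation
    optimizer.step()                       # update the clean weights
    return loss.item()
```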
If not specified explicitly, all accuracy results shown in this section are collected by training one DNN architecture using the same specification but with three different initializations. The accuracy (error rate) numbers are shown in percentage and presented as [average ± standard deviation] over these three runs.
Stronger Write-Verify
As shown in Table 2, using the standard write-verify setting in the literature to confine the maximum weight perturbation to 0.03 ($W_{th} = 0.03$ in (4)) cannot significantly improve the worst-case performance of DNN models. If we set a smaller $W_{th}$ in write-verify, the write time becomes longer but can potentially help to boost the worst-case performance.
To see the relationship between $W_{th}$ and worst-case DNN accuracy, we use three models, i.e., LeNet for MNIST, ConvNet for CIFAR-10, and ResNet-18 for CIFAR-10, and plot the results as shown in Fig. 7(a)-(c), where we also include the model accuracy without any device variation ($W_{th} = 0$). From the figures we can see that a lower $W_{th}$ can indeed increase the worst-case accuracy. Yet to ensure the models have acceptable accuracy in the worst case (e.g., no more than 5% accuracy drop from DNNs without the impact of device variations, marked with a star in each figure), we need to set $W_{th} = 0.009$ for LeNet, $W_{th} = 0.003$ for ConvNet and $W_{th} = 0.005$ for ResNet-18, which would take extremely long write time. The experimental result of ResNet-18 for Tiny ImageNet is not shown here because its worst-case accuracy is lower than 20% even when $W_{th} = 0.001$ and further reducing $W_{th}$ is not practical.
Variation-aware and Adversarial Training
Here we study the effectiveness of variation-aware training and adversarial training on four models: LeNet for MNIST, ConvNet for CIFAR-10, ResNet-18 for CIFAR-10, and ResNet-18 for Tiny ImageNet. We assume that standard write-verify with weight perturbation bound $W_{th} = 0.03$ is used for all the models. As shown in Table 4, both variation-aware training and adversarial training can offer some improvements in most cases compared with regular training. Adversarial training is slightly more effective than variation-aware training. However, compared with the accuracy that can be obtained by these networks without device variations (third column in Table 2), the accuracy drop is still significant in the worst case. The only exception is the case of LeNet for MNIST, where adversarial training can almost fully recover the accuracy loss even in the worst case, thanks to its simplicity. In addition, we can observe that as the network gets deeper, the worst-case accuracy improvement brought by these two training methods starts to diminish (e.g., 7.41% for ResNet-18 for Tiny ImageNet).
Combining Adversarial Training with Write-Verify
Finally, using the same three models and datasets, we show whether the models trained by the adversarial training method can reduce the requirement on write-verify to achieve the same worst-case accuracy. The results are shown in Fig. 7(d)-(f). Compared with the results of the models from regular training in Fig. 7(a)-(c), with adversarial training, the weight perturbation bound $W_{th}$ needed to achieve the same accuracy increases. As discussed in Section 4.2, with adversarial training, the worst-case accuracy of LeNet for MNIST using the standard write-verify ($W_{th} = 0.03$) is already very close to that of the original model without device variations. Thus, Fig. 7(d) is almost flat. For the other two models, to ensure no more than 5% worst-case accuracy degradation from the original model without device variations, we now need $W_{th} = 0.005$ for ConvNet for CIFAR-10, and $W_{th} = 0.008$ for ResNet-18 for CIFAR-10, as marked by the star in each figure. Compared with the weight perturbation bound needed to attain the same worst-case accuracy in Fig. 7(b)-(c), we can see that using adversarial training instead of regular training can increase it by around 1.7×, indicating a faster programming process. However, these bounds are still much smaller than the commonly used 0.03 [11,26] and take considerably more programming time. Therefore, more effective methods to address the worst-case accuracy are still needed, calling for future research.
CONCLUSIONS
In this work, contrary to the existing methods that evaluate the average performance of DNNs under device variations in CiM accelerators, we proposed an efficient framework to examine their worst-case performance, which is important for safety-critical applications. With the proposed framework, we show that even with bounded small weight perturbations after write-verify, the accuracy of a well-trained DNN can drop drastically to almost zero. As such, we should use caution when applying CiM accelerators to safety-critical applications. For example, we may need to screen the accuracy of each chip rather than random sampling in quality control. We further show that the existing methods used to enhance average DNN performance in CiM accelerators are either too costly (for stronger write-verify) or ineffective (for training-based methods) when extended to enhance the worst-case performance. Further research from the community is needed to address this problem.
Figure 1: Comparison of the worst-case (highest) top-1 error identified by 100K MC runs and our method, based on the device variation model introduced in Section 3.1. Showing results on four DNNs: LeNet for MNIST, ConvNet for CIFAR-10, ResNet-18 for CIFAR-10, and ResNet-18 for Tiny ImageNet.
Figure 2: Illustration of the crossbar architecture. The input is fed horizontally and multiplied by weights stored in the NVM devices at each cross point. The multiplication results are summed up vertically and the sum serves as an output.
Figure 3: Choice of the constant $c$. We plot two different relations: (1) worst-case error rate (left) w.r.t. the $c$ value and (2) perturbation magnitude $L^\infty(\Delta W)$ (right) w.r.t. the $c$ value. The DNN model used is LeNet for MNIST.
Figure 4: Distribution of classification confidence among correct/wrong cases from the worst-case LeNet for MNIST. The model is more confident in the wrong cases than in the correct ones.
Table 3: Normalized classification results of the worst-case LeNet for MNIST. The number in row $i$ and column $j$ indicates how many cases with class $i$ as ground truth are being classified as class $j$, normalized over the total number of cases in class $i$. Most inputs are mis-classified to one class (class 1).
Figure 5: The distribution of the weight perturbation magnitude in the worst-case LeNet for MNIST. Most weights are either not perturbed or perturbed to $\pm W_{th}$.
Figure 6: The percentage of weights being perturbed in each layer of the worst-case LeNet for MNIST. Weights in convolutional layers and the last FC layer are more likely to be perturbed.
Algorithm 1: Adversarial training against worst-case weight perturbations.
1: Input: A DNN architecture $f$, training dataset D, validation dataset V, the total number of training epochs $T$, loss function $\ell$, learning rate $\alpha$;
2: Initialize weight W for $f$;
3: Initialize $acc_{best} = 0$, $W_{best} = W$;
4: for ($i = 0$; $i < T$; $i{+}{+}$) do
5:   for mini-batches B in D do
6:     Divide B into input I and label L;
7:     Find weight perturbations N that lead to worst-case accuracy using the framework discussed in Section 3.3;
8:     Collect the gradients of $\ell(f(W + N, I), L)$ with respect to W, clear N, and update W with these gradients and learning rate $\alpha$;
9:   Evaluate the worst-case accuracy $acc$ of $\{f, W\}$ on V;
10:  if $acc > acc_{best}$ then $acc_{best} = acc$, $W_{best} = W$;
11: Output: $W_{best}$
Figure 7: Effectiveness of write-verify with regular training (a)-(c), and with adversarial training (d)-(f). Figures represent the relationship between the weight perturbation bound in write-verify $W_{th}$ (X-axis) and the worst-case DNN accuracy (Y-axis) for different models: (a)(d) LeNet for MNIST, (b)(e) ConvNet for CIFAR-10, and (c)(f) ResNet-18 for CIFAR-10. In each figure, the circle marks the model without perturbation ($W_{th} = 0$) and the star marks the model with the highest $W_{th}$ and no more than 5% accuracy degradation. For each data point, three experiments of the same setting but different random initialization are conducted. The solid lines show the averaged results over the three experiments and the shadows represent the standard deviations.
Table 1: Hyper-parameter setups to perform the proposed method on different models for different datasets. The $c$ value in (11) identified by binary search, the learning rate (lr), and the number of iterations used for gradient descent are specified.

Dataset      | Model     | c     | lr   | # of runs
MNIST        | LeNet     | 1E-3  | 1E-5 | 500
CIFAR-10     | ConvNet   | 1E-5  | 1E-5 | 100
CIFAR-10     | ResNet-18 | 1E-9  | 1E-4 | 20
Tiny ImgNet  | ResNet-18 | 1E-10 | 1E-4 | 20
ImageNet     | ResNet-18 | 1E-3  | 1E-3 | 10
ImageNet     | VGG-16    | 1E-3  | 1E-3 | 10
Table 4: Worst-case accuracy (%) of various DNN models from regular training (Regular), variation-aware training (VA) and adversarial training (ADV). Write-verify with weight perturbation bound $W_{th} = 0.03$ is used. Compared with regular training, adversarial training is effective for LeNet on MNIST, but both methods are not effective for other, more complex models.

Dataset  | Model    | Regular     | VA          | ADV
MNIST    | LeNet    | 7.35±03.70  | 18.58±00.80 | 98.26±01.05
CIFAR10  | ConvNet  | 4.27±00.33  | 63.71±03.76 | 67.09±03.85
CIFAR10  | ResNet18 | 0.00±00.00  | 32.84±17.20 | 34.84±13.20
Tiny IN  | ResNet18 | 0.00±00.00  | 3.57±03.48  | 7.41±08.10
[Figure 7 panels; plot residue removed. Panel titles: (a) LeNet for MNIST w/ reg. training; (b) ConvNet for CIFAR-10 w/ reg. training; (c) ResNet-18 for CIFAR-10 w/ reg. training; (d) LeNet for MNIST w/ adv. training; (e) ConvNet for CIFAR-10 w/ adv. training; (f) ResNet-18 for CIFAR-10 w/ adv. training. Each panel plots worst-case accuracy (%) on the Y-axis against the weight perturbation bound on the X-axis. Annotated accuracy pairs visible in the panels, in panel order: 99.12/97.31, 86.47/82.01, 95.60/92.11, 99.12/98.26, 83.00/80.71, 93.41/90.40.]
This project is supported in part by NSF under grants CNS-1822099, CCF-1919167 and CCF-2028879.
Without loss of generality, we assume that M is a multiple of K.
| [] |
[
"AS-IntroVAE: Adversarial Similarity Distance Makes Robust IntroVAE",
"AS-IntroVAE: Adversarial Similarity Distance Makes Robust IntroVAE"
] | [
"Changjie Lu ",
"Shen Zheng [email protected] ",
"Zirui Wang ",
"Omar Dib [email protected] ",
"Gaurav Gupta [email protected] ",
"\nWenzhou-Kean University\nWenzhouChina\n",
"\nCarnegie Mellon University\nPittsburghUSA\n",
"\nWenzhou-Kean University\nWenzhouChina\n",
"\nZhejiang University\nHangzhouChina\n",
"\nWenzhou-Kean University\nWenzhouChina\n",
"\nWenzhou-Kean University\nWenzhouChina\n"
] | [
"Wenzhou-Kean University\nWenzhouChina",
"Carnegie Mellon University\nPittsburghUSA",
"Wenzhou-Kean University\nWenzhouChina",
"Zhejiang University\nHangzhouChina",
"Wenzhou-Kean University\nWenzhouChina",
"Wenzhou-Kean University\nWenzhouChina"
] | [] | Recently, introspective models like IntroVAE and S-IntroVAE have excelled in image generation and reconstruction tasks. The principal characteristic of introspective models is the adversarial learning of VAE, where the encoder attempts to distinguish between the real and the fake (i.e., synthesized) images. However, due to the unavailability of an effective metric to evaluate the difference between the real and the fake images, the posterior collapse and the vanishing gradient problem still exist, reducing the fidelity of the synthesized images. In this paper, we propose a new variation of IntroVAE called Adversarial Similarity Distance Introspective Variational Autoencoder (AS-IntroVAE). We theoretically analyze the vanishing gradient problem and construct a new Adversarial Similarity Distance (AS-Distance) using the 2-Wasserstein distance and the kernel trick. With weight annealing on AS-Distance and KL-Divergence, the AS-IntroVAE are able to generate stable and highquality images. The posterior collapse problem is addressed by making per-batch attempts to transform the image so that it better fits the prior distribution in the latent space. Compared with the per-image approach, this strategy fosters more diverse distributions in the latent space, allowing our model to produce images of great diversity. Comprehensive experiments on benchmark datasets demonstrate the effectiveness of AS-IntroVAE on image generation and reconstruction tasks. | 10.48550/arxiv.2206.13903 | [
"https://export.arxiv.org/pdf/2206.13903v3.pdf"
] | 250,088,840 | 2206.13903 | a987c6e3b960b3ebb983fc4a5a62918c96555b5c |
AS-IntroVAE: Adversarial Similarity Distance Makes Robust IntroVAE
Changjie Lu
Shen Zheng [email protected]
Zirui Wang
Omar Dib [email protected]
Gaurav Gupta [email protected]
Wenzhou-Kean University
WenzhouChina
Carnegie Mellon University
PittsburghUSA
Wenzhou-Kean University
WenzhouChina
Zhejiang University
HangzhouChina
Wenzhou-Kean University
WenzhouChina
Wenzhou-Kean University
WenzhouChina
AS-IntroVAE: Adversarial Similarity Distance Makes Robust IntroVAE
189, 2022. ACML 2022. Keywords: Image Generation, Variational Autoencoder, Introspective Learning.
Recently, introspective models like IntroVAE and S-IntroVAE have excelled in image generation and reconstruction tasks. The principal characteristic of introspective models is the adversarial learning of the VAE, where the encoder attempts to distinguish between the real and the fake (i.e., synthesized) images. However, due to the unavailability of an effective metric to evaluate the difference between the real and the fake images, the posterior collapse and the vanishing gradient problems still exist, reducing the fidelity of the synthesized images. In this paper, we propose a new variation of IntroVAE called Adversarial Similarity Distance Introspective Variational Autoencoder (AS-IntroVAE). We theoretically analyze the vanishing gradient problem and construct a new Adversarial Similarity Distance (AS-Distance) using the 2-Wasserstein distance and the kernel trick. With weight annealing on the AS-Distance and the KL-Divergence, AS-IntroVAE is able to generate stable and high-quality images. The posterior collapse problem is addressed by making per-batch attempts to transform the images so that they better fit the prior distribution in the latent space. Compared with the per-image approach, this strategy fosters more diverse distributions in the latent space, allowing our model to produce images of great diversity. Comprehensive experiments on benchmark datasets demonstrate the effectiveness of AS-IntroVAE on image generation and reconstruction tasks.
Introduction
In the last decade, two types of deep generative models, Variational Autoencoders (VAEs) (Kingma and Welling (2013)) and Generative Adversarial Networks (GANs) (Goodfellow et al. (2014)), have gained tremendous popularity in computer vision (CV) applications. Their popularity is attributed to their success in various CV tasks, such as image generation (Karras et al. (2019); Vahdat and Kautz (2020)), image reconstruction (Gu et al. (2020); Hou et al. (2017)), and image-to-image translation (Zhu et al. (2017); Liu et al. (2017)).
VAEs can produce images with diverse appearances and are easy to train. However, the images synthesized by VAEs are often blurry and lack fine details (Larsen et al. (2016)). GANs, on the other hand, produce sharper images with more details but often suffer from mode collapse (i.e., a lack of diversity) and vanishing gradients (Goodfellow (2016)). Many researchers have sought to develop an efficient hybrid model that combines the advantages of VAEs and GANs. Unfortunately, due to the requirement of an extra discriminator, existing hybrid VAE-GAN models (Makhzani et al. (2015); Larsen et al. (2016); Dumoulin et al. (2016); Tolstikhin et al. (2017)) have high computational complexity and heavy memory usage. Even with these delicate architecture designs, these methods still underperform leading GANs (Karras et al. (2017); Brock et al. (2018)) in terms of the quality of the generated images.
Unlike classical hybrid GAN-VAE models, introspective methods (Huang et al. (2018); Daniel and Tamar (2021)) eliminate the need for an extra discriminator. Instead, they utilize the encoder as the 'actual' discriminator to distinguish between the fake and the real images, and have achieved state-of-the-art results on image generation tasks. Despite this progress, these introspective learning-based methods suffer from the posterior collapse problem under insufficiently tuned hyperparameters, and from the vanishing gradient problem, especially during early-stage training.
In this paper, we propose Adversarial Similarity Distance Introspective Variational Autoencoder (AS-IntroVAE), an introspective VAE that can competently address the posterior collapse and the vanishing gradient problems. Firstly, we present a theoretical analysis and demonstrate that the vanishing gradient problem of introspective models can be addressed by a similarity distance based upon the 2-Wasserstein distance and the kernel trick. We term this distance the Adversarial Similarity Distance (AS-Distance). A weight annealing strategy applied to the AS-Distance and the KL-Divergence yields highly stable synthesized images with excellent quality. We address the posterior collapse problem by aligning the images with the prior distribution in the latent space on a per-batch basis. This strategy allows the proposed AS-IntroVAE to contain diverse distributions in the latent space, thereby promoting the diversity of the synthesized images.
Our contributions are highlighted as follows: (1) a new introspective variational autoencoder named Adversarial Similarity Distance Introspective Variational Autoencoder (AS-IntroVAE); (2) a new theoretical understanding of the posterior collapse and the vanishing gradient problems in VAEs; (3) a novel similarity distance named Adversarial Similarity Distance (AS-Distance) for measuring the differences between the real and the synthesized images; (4) promising results on image generation and image reconstruction tasks with significantly faster convergence speed.
Related Work
Generative Adversarial Network (GAN)
Generative Adversarial Network (GAN) (Goodfellow et al. (2014)) consists of a generator G(z) and a discriminator D(x). The generator tries to confuse the discriminator by generating a synthetic image from the input noise z, whereas the discriminator tries to distinguish that synthetic image from the real image x.
There are two crucial drawbacks of vanilla GANs: mode collapse (insufficient diversity) and vanishing gradients (insufficient stability) (Goodfellow (2016)). To remedy these issues, WGAN (Arjovsky et al. (2017)) replaces the commonly used Jensen-Shannon divergence with the Wasserstein distance to alleviate the vanishing gradient and mode collapse problems. However, the hard weight clipping used by WGAN to satisfy the Lipschitz constraint significantly reduces the network capacity (Gulrajani et al. (2017)). Better substitutes for weight clipping include the gradient penalty of WGAN-GP (Gulrajani et al. (2017)) and the spectral normalization (Miyato et al. (2018)) of SN-GAN.
Compared with these GAN approaches, our method also enjoys the advantages of VAE-based methods: more diversity and stability for the synthesized images.
Variational Autoencoder (VAE)
Variational Autoencoder (VAE) (Kingma and Welling (2013)) consists of an encoder and a decoder. The encoder q_φ(z|x) compresses the image x into a latent variable z, whereas the decoder p_θ(x|z) tries to reconstruct the image from that latent variable. Variational inference is applied to approximate the intractable posterior.
A common issue during VAE training is posterior collapse (Bowman et al. (2015)). Posterior collapse occurs when the latent variables become uninformative (e.g., weak, noisy), such that the model chooses to rely solely on the autoregressive property of the decoder and ignores the latent variables (Subramanian et al. (2018)). Recent approaches mainly address the posterior collapse problem using KL coefficient annealing (Bowman et al. (2015); Fu et al. (2019)), auxiliary cost functions (ALIAS PARTH GOYAL et al. (2017)), pooling operations (Long et al. (2019)), variational approximation restraints (Razavi et al. (2019)), or different Evidence Lower Bound (ELBO) learning objectives (Havrylov and Titov (2020)).

Compared with former VAE approaches, our method also shares the strengths of GAN-based models: sharp edges and sufficient fine details.
Integration of VAE and GAN
A major limitation of VAEs is that they tend to generate blurry, photo-unrealistic images (Dosovitskiy and Brox (2016)). One popular approach to alleviate this issue is to integrate GAN's adversarial training directly into the VAE to obtain sharp edges and fine details. Specifically, these hybrid models often consist of an encoder-decoder and an extra discriminator. For example, VAE-GAN (Larsen et al. (2016)) and A-VAE (Mescheder et al. (2017)) both utilize a VAE-like encoder-decoder structure and an extra discriminator to constrain the latent space with adversarial learning, while ALI (Dumoulin et al. (2016)) and BiGANs (Donahue et al. (2016)) adopt both a mapping and an inverse mapping with an extra discriminator that determines which mapping result is better. To save the growing computational cost from the extra discriminators, the recent state-of-the-art image synthesis method IntroVAE (Huang et al. (2018)) proposes to train VAEs in an introspective way such that the model can distinguish between the fake and the real images using only the encoder and the decoder. The problem with IntroVAE is that it utilizes a hard margin to regulate the hinge terms (i.e., the KL divergence between the posterior and the prior), which leads to unstable training and difficult hyperparameter selection. To alleviate this issue, S-IntroVAE (Daniel and Tamar (2021)) expresses VAE's ELBO in the form of a smooth exponential loss, thereby replacing the hard margin with a soft threshold function. However, the posterior collapse and the vanishing gradient problems still exist in these introspective methods.
Unlike these methods, our approach has stable training throughout the entire training stage and can generate samples of sufficient diversity without careful hyperparameter tuning.
Background
We place our generative model under the variational inference setting (Kingma and Welling (2013)), where we aim to utilize variational inference methods to approximate the intractable maximum-likelihood objective. With this in mind, in this section, we will first revisit vanilla VAE, focusing on its ELBO technique, and then analyze introspective learningbased methods, including IntroVAE and S-IntroVAE.
Evidence Lower Bound (ELBO)
The learning objective of VAE is to maximize the evidence lower bound (ELBO):
$$\log p_\theta(x) \geq \mathbb{E}_{q_\phi(z|x)}\left[\log p_\theta(x \mid z)\right] - D_{KL}\left(q_\phi(z \mid x) \,\|\, p_\theta(z)\right) \tag{1}$$

where x is the input data, z is the latent variable, q_φ(z|x) represents the encoder with parameter φ, and p_θ(x|z) represents the decoder with parameter θ. The Kullback-Leibler (KL) divergence term can be expressed as:

$$D_{KL}\left(q(z \mid x) \,\|\, p(z)\right) = \mathbb{E}_{q(z|x)}\left[\log \frac{q(z \mid x)}{p(z)}\right] \tag{2}$$
The reparameterization trick (Kingma and Welling (2013)) is applied to make the VAE trainable (i.e., differentiable through backpropagation). Specifically, the reparameterization trick expresses the latent representation z through two latent vectors µ and σ and a random vector ε, thereby excluding the randomness from the backpropagation path. The reparameterization trick can be formulated as z = µ + σ ⊙ ε, where ε ~ N(0, I).
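As a concrete reference point, here is a minimal PyTorch sketch of the reparameterization trick and a negative ELBO in the spirit of Eqs. (1)-(2), assuming a diagonal Gaussian posterior and a squared-error reconstruction term standing in for the likelihood.

```python
import torch

def reparameterize(mu, logvar):
    # z = mu + sigma * eps with eps ~ N(0, I); sampling stays differentiable.
    std = torch.exp(0.5 * logvar)
    return mu + std * torch.randn_like(std)

def negative_elbo(x, x_recon, mu, logvar):
    # Reconstruction term plus the closed-form KL between N(mu, sigma^2)
    # and the standard normal prior p(z) = N(0, I).
    recon = ((x_recon - x) ** 2).flatten(1).sum(dim=1)
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1)
    return (recon + kl).mean()
```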
IntroVAE
Unlike the vanilla VAE, which optimizes a single lower bound, IntroVAE (Huang et al. (2018)) incorporates an adversarial learning strategy commonly used by GANs during training. Specifically, the encoder aims to maximize (up to a margin) the KL divergence between the posteriors of the fake images and the prior, while minimizing the KL divergence between the posterior of the real image and the prior. Meanwhile, the decoder aims to confuse the encoder by minimizing the KL divergence between the posteriors of the fake images and the prior. The learning objective (i.e., loss function) of IntroVAE for the encoder and the decoder is:
$$L_E = \mathrm{ELBO}(x) + \sum_{s=r,g}\left[m - KL\left(q_\phi(z|x_s) \,\|\, p(z)\right)\right]^{+}$$
$$L_D = \sum_{s=r,g} KL\left(q_\phi(z|x_s) \,\|\, p(z)\right) \tag{3}$$

where x_r is the reconstructed image, x_g is the generated image, m is the hard threshold (margin) for constraining the KL divergence, and [·]^+ = max(0, ·).
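A rough sketch of the adversarial terms in Eq. (3), using the closed-form KL between the inferred Gaussian posterior and the standard normal prior; the margin value and the batching below are assumptions, not the reference implementation.

```python
import torch

def kl_to_prior(mu, logvar):
    # KL( N(mu, sigma^2) || N(0, I) ), summed over the latent dimensions.
    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1)

def introvae_adversarial_terms(stats_rec, stats_gen, m=10.0):
    # stats_* are (mu, logvar) pairs inferred from reconstructed/generated images.
    kl_r, kl_g = kl_to_prior(*stats_rec), kl_to_prior(*stats_gen)
    enc_term = (m - kl_r).clamp(min=0).mean() + (m - kl_g).clamp(min=0).mean()
    dec_term = kl_r.mean() + kl_g.mean()
    return enc_term, dec_term
```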
S-IntroVAE
The major limitation of IntroVAE is that it utilizes a hard threshold m to constrain the KL divergence term. S-IntroVAE (Daniel and Tamar (2021)) suggests this design significantly reduces model capacity and induces vanishing gradients. Instead, S-IntroVAE proposes to utilize the complete ELBO (instead of just the KL term) with a soft exponential function (instead of a hard threshold). The learning objective (loss function) of S-IntroVAE is:
$$L_E = \mathrm{ELBO}(x) - \frac{1}{\alpha} \sum_{s=r,g} \exp\left(\alpha\, \mathrm{ELBO}(x_s)\right)$$
$$L_D = \mathrm{ELBO}(x) + \gamma \sum_{s=r,g} \mathrm{ELBO}(x_s) \tag{4}$$

where α and γ are both hyperparameters.
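In code, the soft threshold of Eq. (4) amounts to exponentiating the fake ELBOs. The sketch below uses minimization-style sign conventions (the negatives of the objectives above) and is an assumption about how the losses are assembled, not the official implementation.

```python
import torch

def s_introvae_losses(elbo_real, elbo_rec, elbo_gen, alpha=2.0, gamma=1.0):
    # Encoder: maximize ELBO(x) while softly pushing down the fake ELBOs.
    loss_enc = -elbo_real + (torch.exp(alpha * elbo_rec) + torch.exp(alpha * elbo_gen)) / alpha
    # Decoder: maximize ELBO(x) and the fake ELBOs, weighted by gamma.
    loss_dec = -elbo_real - gamma * (elbo_rec + elbo_gen)
    return loss_enc, loss_dec
```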
Proposed Method
In this section, we will illustrate the proposed AS-IntroVAE, including its strategy for posterior collapse (Fig. 1), its model workflow (Fig. 2), and the theoretical analysis.
Theoretical Analysis
An effective distance metric is crucial for generative models like VAEs and GANs. To address the vanishing gradient problems of S-IntroVAE and IntroVAE, we propose a novel similarity distance called the Adversarial Similarity Distance (AS-Distance). Inspired by the 1-Wasserstein distance (Arjovsky et al. (2017)) and the distance metrics in unsupervised domain adaptation (Wu and Zhuang (2021)), which can provide stable gradients, the AS-Distance is defined as:
$$D(p_r, p_g) = \mathbb{E}_{z \sim p(z)}\left[\left(\mathbb{E}_{x \sim p_r}[q(z|x)] - \mathbb{E}_{x \sim p_g}[q(z|x)]\right)^2\right] \tag{5}$$

where p_r is the distribution of the real data and p_g is the distribution of the generated data. The encoder and the decoder play an adversarial game on this distance:

$$\min_{\mathrm{Dec}} \max_{\mathrm{Enc}} D(p_r, p_g) \tag{6}$$
We use the 2-Wasserstein form so that we can apply a kernel trick to Eq. (5):
$$D(p_r, p_g) = \mathbb{E}_{x \sim p_{r,g}}\left[k\left(x_r^i, x_r^j\right) + k\left(x_g^i, x_g^j\right) - 2k\left(x_r^i, x_g^j\right)\right] \tag{7}$$

where $k(x_r^i, x_g^j) = \mathbb{E}_{z \sim p(z)}\left[q(z|x_r^i)\, q(z|x_g^j)\right]$.
Figure 1: Illustration of how AS-IntroVAE addresses the posterior collapse problem. Both IntroVAE/S-IntroVAE and the proposed AS-IntroVAE project the real images into the latent space. However, IntroVAE/S-IntroVAE force every single image to match the prior distribution of the latent space. This enforcement undermines the valuable signal from the real image, such that the latent space becomes uninformative for the decoder. In contrast, AS-IntroVAE aligns the images with the prior distribution in a per-batch manner. Since a batch contains far more variation than a single image, the signal in AS-IntroVAE remains strong enough that the decoder has to leverage the latent space to generate diverse samples.

Since the latent space is a normal distribution, this kernel k can be deduced as:

$$k\left(x_r^i, x_g^j\right) = \frac{\exp\left(-\frac{1}{2}\,\frac{(u_r^i - u_g^j)^2}{\lambda_r^i + \lambda_g^j}\right)}{(2\pi)^{\frac{n}{2}}\left(\lambda_r^i + \lambda_g^j\right)^{\frac{1}{2}}} \tag{8}$$

where u and λ denote the variational mean and variance of x, and i, j index the i-th and j-th pixels in the images.
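Since both Eq. (7) and Eq. (8) depend only on the posterior statistics (u, λ), the AS-Distance can be evaluated from a batch of means and variances before any sampling. A minimal PyTorch sketch under the assumption of factorized Gaussian posteriors (function names are ours):

```python
import math
import torch

def log_kernel(mu_a, var_a, mu_b, var_b):
    # log E_{z~p(z)}[ q(z|x_a) q(z|x_b) ] for diagonal Gaussians (cf. Eq. (8)),
    # i.e. the log of the integral of the product of the two densities.
    s = var_a + var_b
    return torch.sum(-0.5 * (mu_a - mu_b) ** 2 / s - 0.5 * torch.log(2 * math.pi * s), dim=-1)

def as_distance(mu_r, var_r, mu_g, var_g):
    # Eq. (7): averages of pairwise kernels over real/real, gen/gen, real/gen pairs.
    def mean_k(ma, va, mb, vb):
        return log_kernel(ma.unsqueeze(1), va.unsqueeze(1),
                          mb.unsqueeze(0), vb.unsqueeze(0)).exp().mean()
    return (mean_k(mu_r, var_r, mu_r, var_r)
            + mean_k(mu_g, var_g, mu_g, var_g)
            - 2 * mean_k(mu_r, var_r, mu_g, var_g))
```

Note that exponentiating the log-kernel can underflow for large latent dimensionality; a log-sum-exp formulation would be the numerically safer variant.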
In the maximum mean discrepancy (MMD) method, the distance calculation is conducted after the reparameterization. As shown by (Wu and Zhuang (2020)), this leads to high variance (i.e., error) in the estimated distance. Instead, we calculate the distance before the reparameterization, which reduces the variance and improves the accuracy of the distance estimation.
During the experiments, we found that the KL term from S-IntroVAE would generate sharp but distorted images, whereas our AS term (without the KL term) would generate diverse but blurry images. If we fix the weights for the KL and AS terms (e.g., both at 0.5), there will exist two optimal solutions, which induces training instability. Inspired by (Fu et al. (2019)), we therefore gradually increase the weight for KL (from 0 to 1) and decrease the weight for AS (from 1 to 0) during training. In this way, we can enjoy the advantages of both the KL and AS terms while eschewing their disadvantages.
Based on the former discussions, we derive the loss function for AS-IntroVAE as:
$$L_{E_\phi} = \mathrm{ELBO}(x) - \frac{1}{\alpha} \sum_{s=r,g} \exp\left(\alpha\left(\mathbb{E}_{q(z|x_s)}[\log p(x \mid z)] + c\,KL\left(q_\phi(z|x_s) \,\|\, p(z)\right) + (1-c)\,D(p_r, p_g)\right)\right)$$
$$L_{D_\theta} = \mathrm{ELBO}(x) + \gamma \sum_{s=r,g} \left(\mathbb{E}_{q(z|x_s)}[\log p(x \mid z)] + c\,KL\left(q_\phi(z|x_s) \,\|\, p(z)\right) + (1-c)\,D(p_r, p_g)\right) \tag{9}$$
where c = min(5i/T, 1), i is the current iteration, and T is the total number of iterations.
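The annealing schedule and the blended regularizer of Eq. (9) are simple to express in code; the helper names below are ours, and the sign conventions are an assumption.

```python
def annealing_rate(i, total_iters):
    # c ramps linearly from 0 to 1 over the first fifth of training.
    return min(i * 5 / total_iters, 1.0)

def blended_regularizer(kl_term, as_term, c):
    # Weight annealing: the KL weight grows from 0 to 1 while the AS-Distance
    # fades out, leaving a single optimum at the end of training.
    return c * kl_term + (1.0 - c) * as_term
```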
Theorem 1 Introspective Variational Autoencoders (IntroVAEs) have vanishing gradient problems.
Proof. As illustrated in IntroVAEs (IntroVAE and S-IntroVAE), the Nash equilibrium is attained when KL(q_φ(z|x_r) ‖ q_φ(z|x_g)) = 0, where x_r can also represent the real images since the reconstructed images are sampled from real data points. Moreover, with the objective D_KL(q_φ(z|x) ‖ p(z)) = 0, we have:
$$q_\phi(z|x_r) = q_\phi(z|x_g) = p(z) \tag{10}$$
Replacing the term p(z) with $\frac{q_\phi(z|x_r) + q_\phi(z|x_g)}{2}$, the adversarial term for the decoder then becomes:

$$KL\left(q_\phi(z|x_r) \,\Big\|\, \frac{q_\phi(z|x_r)+q_\phi(z|x_g)}{2}\right) + KL\left(q_\phi(z|x_g) \,\Big\|\, \frac{q_\phi(z|x_r)+q_\phi(z|x_g)}{2}\right) = 2\,\mathrm{JSD}\left(q_\phi(z|x_r) \,\|\, q_\phi(z|x_g)\right) \tag{11}$$
Therefore, the gradient of the decoder loss in IntroVAE becomes:

$$\nabla L_D = \nabla\, 2\,\mathrm{JSD}\left(q_\phi(z|x_r) \,\|\, q_\phi(z|x_g)\right) \tag{12}$$
As shown by (Arjovsky and Bottou (2017)), suppose P_{x_r} and P_{x_g} are two distributions supported on two different manifolds that do not align perfectly and do not have full dimension (i.e., the dimension of the latent variable is small relative to the image dimension). With the assumption that P_{x_r} and P_{x_g} are continuous on their manifolds, if a set A has measure 0 in one manifold, then P(A) = 0. Consequently, there exists an optimal discriminator that classifies almost any x in these two manifolds with 100% accuracy, resulting in ∇L_D = 0. For IntroVAE, this condition (i.e., the vanishing gradient problem) holds from the very beginning of the training process, when there is no intersection between the real and fake image distributions. The reason why S-IntroVAE alleviates the vanishing gradient at later epochs (see Fig. 7) is that its reconstruction loss gradually creates a shared support between the real and the fake images during training. In comparison, since the AS-Distance is based on the 2-Wasserstein distance, the proposed AS-IntroVAE provides stable gradients even when there is no intersection between the two distributions.
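This saturation is easy to verify numerically: for two distributions with disjoint supports, the JSD equals log 2 regardless of how far apart they are, so it carries no gradient with respect to the separation, whereas a Wasserstein-style distance changes smoothly. A tiny NumPy illustration:

```python
import numpy as np

def jsd(p, q):
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a[a > 0] * np.log(a[a > 0] / b[a > 0]))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

for d in (1, 3, 9):  # separation between two disjoint point masses on a grid
    p = np.zeros(10); p[0] = 1.0
    q = np.zeros(10); q[d] = 1.0
    # JSD stays at log(2) ~= 0.693 for every d; the 1D Wasserstein distance is d.
    print(f"d={d}: JSD={jsd(p, q):.3f}, W1={d}")
```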
Experiments
In this section, we will explain the implementation details, the comparison on 2D toy datasets, image generation and image reconstruction tasks, and the training stability.
Implementation Details
We train our model using the Adam (Kingma and Ba (2014)) optimizer with the default setting (β1 = 0.9 and β2 = 0.999) for 150 epochs. We implement our framework in PyTorch (Paszke et al. (2019)) with 3 NVIDIA RTX 3090 GPUs. Following (Huang et al. (2018); Daniel and Tamar (2021)), we set a fixed learning rate of 2e-4. It takes around 1 day for our model to converge on the CelebA-128 dataset and 2 days on the CelebA-256 dataset. An exponential moving average is applied to stabilize training. The encoder and decoder are updated separately in each iteration. For the loss function, we set α = 2 and γ = 1. The weights for the real image's KL term and reconstruction term are fixed at 0.5 and 1.0, respectively, whereas the fake image's KL term and reconstruction term are both set to 0.5. For the annealing rate c, we apply the linear schedule shown in Eq. (9). For the other hyperparameter settings, we inherit the setting from S-IntroVAE (Daniel and Tamar (2021)).
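For reference, a minimal sketch of the optimizer and exponential-moving-average setup described above; the encoder and decoder are assumed to be existing nn.Module instances, and the EMA decay value is an assumption.

```python
import copy
import torch

def build_training_state(encoder, decoder):
    # Adam with the stated defaults and the fixed learning rate of 2e-4.
    enc_opt = torch.optim.Adam(encoder.parameters(), lr=2e-4, betas=(0.9, 0.999))
    dec_opt = torch.optim.Adam(decoder.parameters(), lr=2e-4, betas=(0.9, 0.999))
    ema_decoder = copy.deepcopy(decoder)  # shadow copy used to stabilize evaluation
    return enc_opt, dec_opt, ema_decoder

@torch.no_grad()
def ema_update(ema_model, live_model, decay=0.999):  # decay value is an assumption
    for p_ema, p_live in zip(ema_model.parameters(), live_model.parameters()):
        p_ema.mul_(decay).add_(p_live, alpha=1.0 - decay)
```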
2D Toy Datasets
In this subsection, we evaluate the proposed method's performance on 2D toy datasets, including 8 Gaussians and Checkerboard (De Cao et al. (2020); Grathwohl et al. (2018)), and compare our approach with baselines including VAE, IntroVAE, and S-IntroVAE using two commonly used metrics: KL-divergence (KL) and Jensen-Shannon divergence (JSD). Both KL and JSD measure how far the model's predicted distribution is from the ground-truth data distribution. Therefore, a lower score indicates a better result for both KL and JSD.
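For 2D toy data, both scores can be estimated by histogramming the real and generated samples on a shared grid; a minimal NumPy sketch (the bin count, range, and smoothing constant are assumptions):

```python
import numpy as np

def hist_kl_jsd(real, fake, bins=60, rng=((-4, 4), (-4, 4)), eps=1e-10):
    # Discretize both 2D sample sets on the same grid, then compare the histograms.
    p, _, _ = np.histogram2d(real[:, 0], real[:, 1], bins=bins, range=rng)
    q, _, _ = np.histogram2d(fake[:, 0], fake[:, 1], bins=bins, range=rng)
    p = p.ravel() / p.sum() + eps
    q = q.ravel() / q.sum() + eps
    kl = np.sum(p * np.log(p / q))
    m = 0.5 * (p + q)
    jsd = 0.5 * np.sum(p * np.log(p / m)) + 0.5 * np.sum(q * np.log(q / m))
    return kl, jsd
```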
We design three hyperparameter combinations to assess the robustness of different methods to hyperparameter changes. The hyperparameters include the weight for the real image's ELBO, the weight for the fake image's KL divergence term, and the weight for the fake image's reconstruction term. The values for each combination are as follows. C1: (0.3, 0.1, 0.9); C2: (0.5, 0.1, 0.9); C3: (0.7, 0.2, 0.9). Table 5.3 shows the quantitative comparison on the 2D toy dataset 8 Gaussians. For the 8 Gaussians dataset, we find that our method has the lowest (i.e., best) KL and JSD scores under all hyperparameter combinations, outperforming VAE, IntroVAE, and S-IntroVAE by a large margin. Fig. 3 shows the qualitative comparison on the 8 Gaussians dataset. We notice that VAE collapses to one isolated data point for all three hyperparameter combinations, which indicates a severe posterior collapse problem. IntroVAE leaves a small trace around a specific data point for C1 and C2, indicating a nontrivial posterior collapse problem. For C3, IntroVAE produces a ring shape, meaning the generated data is evenly distributed and fails to converge to any designated data point. S-IntroVAE converges to two data points for C1 and C2 and six for C3, which still reflects the posterior collapse problem.
In comparison, our method, under all hyperparameter combinations, successfully converges to all eight data points. Therefore, we conclude that our approach is the only one that avoids the posterior collapse problem in the 8 Gaussians experiments. Due to the scope of this paper, the results for the Checkerboard dataset are in the supplementary material.
Image Generation
In this subsection, we evaluate the proposed method's performance on image generation tasks using benchmark datasets including MNIST (LeCun et al. (1998)), CIFAR-10 (Krizhevsky et al. (2009)), CelebA-128, and CelebA-256 (Liu et al. (2015)). The methods for comparison include WGAN-GP and S-IntroVAE, and the evaluation metric is the Fréchet Inception Distance (FID). Specifically, we use FID to estimate the distance between the generated dataset's distribution and the source (i.e., training) dataset's distribution. Hence, a lower FID score means a better result. Table 5.3 shows the quantitative comparison on image generation tasks. For all chosen datasets, the proposed method has the lowest (i.e., best) FID score. Fig. 4 shows the qualitative comparison for image generation on the CelebA-128 dataset. We notice that both WGAN-GP and S-IntroVAE exhibit apparent facial feature distortion, edge blur, and facial asymmetry. Although S-IntroVAE has fewer ghost artifacts and unnatural textures than WGAN-GP, it has a significant posterior collapse problem: S-IntroVAE's two generated images in the first row are extremely similar. In comparison, the proposed method's generated faces are the best in all mentioned aspects. Fig. 5 shows the qualitative comparison for image generation on the CelebA-256 dataset. Compared with the CelebA-128 results in Fig. 4, we find that both WGAN-GP and S-IntroVAE exhibit less posterior collapse, unnatural texture, and facial asymmetry. However, both WGAN-GP and S-IntroVAE still have significant facial feature distortion. The ghost artifacts, the edge blur, and the over-smoothed hair from these methods further degrade the perceptual quality. Compared with WGAN-GP and S-IntroVAE, the proposed method is the best in all mentioned aspects. Due to the scope of this paper, the qualitative results of image generation on other datasets are in the supplementary material.
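FID compares Gaussians fitted to Inception features of the two image sets. Assuming the features have already been extracted by a pretrained Inception network, the Fréchet distance itself has a simple closed form, sketched below:

```python
import numpy as np
from scipy import linalg

def fid_from_features(feat_real, feat_fake):
    # ||mu_r - mu_f||^2 + Tr(S_r + S_f - 2 (S_r S_f)^{1/2})
    mu_r, mu_f = feat_real.mean(axis=0), feat_fake.mean(axis=0)
    s_r = np.cov(feat_real, rowvar=False)
    s_f = np.cov(feat_fake, rowvar=False)
    covmean = linalg.sqrtm(s_r @ s_f)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # drop tiny imaginary parts introduced by sqrtm
    return float(((mu_r - mu_f) ** 2).sum() + np.trace(s_r + s_f - 2.0 * covmean))
```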
Image Reconstruction
In this subsection, we evaluate the proposed method's performance on image reconstruction tasks using benchmark datasets, including MNIST, CIFAR-10, Oxford Building Datasets, CelebA-128, and CelebA-256. The model for comparison is S-IntroVAE, and the evaluation metrics are Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Mean Squared Error (MSE). A higher PSNR, a higher SSIM, and a lower MSE mean better results. Table 5.4 shows the quantitative comparison on the image reconstruction task. For all datasets except CelebA-256, our method has the best PSNR, SSIM, and MSE. For the CelebA-256 dataset, our method has the second-best SSIM but the best PSNR and MSE. Fig. 6 shows the qualitative comparison for image reconstruction on the CelebA-128 dataset. S-IntroVAE fails to faithfully reconstruct the facial features, facial expressions, and skin textures. Its reconstructed images also contain significant edge blur, defects, and artifacts, which significantly degrade the perceptual quality.
The proposed method is much closer to the ground truth in terms of contrast, exposure, color, edge information, and facial details. In short, our approach surpasses S-IntroVAE by a large margin in image reconstruction on the CelebA-128 dataset. We also find that S-IntroVAE has split performances between the early stage (10 & 20 epochs) and the later stage (50 epochs). In the early stage, the reconstructed images contain lots of blur, defects, and artifacts, whereas the generated images have distorted facial features and a significant amount of unnatural artifacts. In the later stage, the quality of both tasks improves. However, the generated and reconstructed images still contain many defects and edge blur. In comparison, the proposed method converges quickly in the early stage (10 & 20 epochs) and maintains excellent training stability in the later stage (50 epochs). The reconstructed images are faithfully aligned with the original images, whereas the generated images have superb perceptual quality.
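For completeness, MSE and PSNR as used above are straightforward to compute directly (SSIM is more involved; an off-the-shelf routine such as skimage.metrics.structural_similarity is a common choice). A minimal NumPy sketch for 8-bit images:

```python
import numpy as np

def mse(a, b):
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a, b, max_val=255.0):
    # PSNR = 10 * log10(MAX^2 / MSE); higher means a closer reconstruction.
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(max_val ** 2 / m)
```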
Conclusion
This paper introduces Adversarial Similarity Distance Introspective Variational Autoencoder (AS-IntroVAE), a new introspective approach that can faithfully address the posterior collapse and the vanishing gradient problems. Our theoretical analysis rigorously illustrates the advantages of the proposed Adversarial Similarity Distance (AS-Distance). Our empirical results exhibit compelling quality, diversity, and stability in image generation and reconstruction tasks. In the future, we hope to apply the proposed AS-IntroVAE to high-resolution (e.g., 1024 × 1024) image synthesis. We also hope to extend AS-IntroVAE to reinforcement learning and self-supervised learning tasks with detection-driven (Zheng et al. (2021)) and segmentation-driven (Lu et al. (2022)) techniques, and to medical image analysis.

Figure 9: S-IntroVAE performance on CelebA-128 when the weights for the KL divergence and the AS-Distance are both 0.5. The upper/middle/bottom two rows refer to real/reconstructed/generated images. We can see that the images contain significant blur.

Figure 10: AS-IntroVAE performance on CelebA-128 when the weights for the KL divergence and the AS-Distance are both 0.5. The upper/middle/bottom two rows refer to real/reconstructed/generated images. From this figure and the figure above, we note that different images display different levels of sharpness and blur. Therefore, we conclude that this hyperparameter combination causes the model to have unstable training and fluctuating performance.
Figure 2: AS-IntroVAE workflow, using the image reconstruction task as an example. AS-IntroVAE contains an encoder-decoder architecture. In the first phase, the encoder-decoder receives the real image and produces the reconstructed image; meanwhile, the decoder generates a fake image from Gaussian noise alone. In the second phase, the same encoder-decoder conducts adversarial learning in the latent space on the reconstructed image and the fake image: the encoder tries to maximize the adversarial similarity distance between the fake image and the reconstructed image, whereas the decoder tries to minimize it. After each iteration, the model updates the annealing rate c.
Figure 3: Visual comparison on the 2D toy dataset 8 Gaussians. From top to bottom row: results with different hyperparameters. From left to right column: VAE, IntroVAE, S-IntroVAE, Ours. Zoom in to view the detail within each subfigure.
Figure 4: Image generation visual comparison on the CelebA-128 dataset.

Figure 5: Image generation visual comparison on the CelebA-256 dataset.
Figure 6: Image reconstruction visual comparison on the CelebA-128 dataset.
Figure 7: Training stability visual comparison on the CelebA-128 dataset. From left to right panel: 10 epochs, 20 epochs, 50 epochs. For each image grid, the first and second rows are real images, the third and fourth rows are reconstructed images, and the fifth and sixth rows are generated images. Zoom in for a better view.
Figure 8: AS-IntroVAE performance on CelebA-128, using only the AS-Distance and no KL divergence. The upper/middle/bottom two rows refer to real/reconstructed/generated images. We can see that the images are over-smoothed and look blurry without the help of the KL divergence.
Table 1: 2D Toy Dataset 8 Gaussians KL↓/JSD↓ Score Table.
Table 2: Image Generation FID Score↓ Table.

Dataset    | WGAN-GP | S-IntroVAE | Ours
MNIST      | 139.02  | 98.84      | 96.16
CIFAR-10   | 434.11  | 275.20     | 271.69
CelebA-128 | 160.53  | 140.35     | 130.74
CelebA-256 | 170.79  | 143.33     | 129.61
Table 3: Image Reconstruction PSNR↑/SSIM↑/MSE↓ Score Table.

Training Stability

Fig. 7 shows the training stability visual comparison on the CelebA-128 dataset. We find that IntroVAE fails in both image reconstruction and image generation tasks, even when we train the model using its recommended hyperparameters. IntroVAE's reconstructed images at a given epoch are almost homogeneous: a mixture of blue and green clouds with little semantic information.
Figure 15: Visual comparison on the 2D toy dataset Checkerboard. From top to bottom row: results with different hyperparameters. From left to right column: VAE, IntroVAE, S-IntroVAE, Ours. The results show that AS-IntroVAE has a slight advantage over S-IntroVAE in terms of point clustering and centroid convergence.

Table 4: 2D Toy Dataset Checkerboard KL↓/JSD↓ Score Table. The table shows that the proposed AS-IntroVAE has the best KL and JSD scores under all hyperparameter combinations.

   |     | VAE  | IntroVAE | S-IntroVAE | Ours
C1 | KL  | 22.1 | NaN      | 20.7       | 20.4
   | JSD | 10.8 | -        | 9.6        | 9.6
C2 | KL  | 21.2 | NaN      | 21.0       | 20.6
   | JSD | 9.9  | -        | 10.0       | 9.6
C3 | KL  | 21.7 | NaN      | 21.2       | 20.9
   | JSD | 10.7 | -        | 10.3       | 9.9
Appendix A. Supplementary Material

A.1. Introduction

In this supplementary material, we first show experiment results with different weights for the Adversarial Similarity Distance (AS-Distance) and the KL Divergence, and then proceed to more visual comparisons on image generation tasks on various benchmark datasets.

A.2. AS-Distance and KL Divergence

In this section, we use a visual comparison of image generation and image reconstruction tasks to show that the following hyperparameter combinations are worse than the weight annealing method introduced in the paper. The hyperparameter combinations are (1) AS-IntroVAE with a weight of 1.0 for AS-Distance and 0 for KL Divergence, and (2) AS-IntroVAE with a weight of 0.5 for both AS-Distance and KL Divergence.

Appendix B. Visual Comparison for Image Generation

This section shows the additional visual comparison for image generation tasks. Specifically, we display the results on four datasets, including CelebA-128, CelebA-256, MNIST, and CIFAR10. For each dataset, we randomly select 16 images from each model's output dataset. In each figure, the upper left images are from AS-IntroVAE, the upper right images are from
Anirudh Goyal, Alessandro Sordoni, Marc-Alexandre Côté, Nan Rosemary Ke, and Yoshua Bengio. Z-forcing: Training stochastic recurrent networks. Advances in Neural Information Processing Systems, 30, 2017.
Martin Arjovsky and Léon Bottou. Towards principled methods for training generative adversarial networks. arXiv preprint arXiv:1701.04862, 2017.
Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In International Conference on Machine Learning, pages 214-223. PMLR, 2017.
Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Bengio. Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349, 2015.
Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale GAN training for high fidelity natural image synthesis. arXiv preprint arXiv:1809.11096, 2018.
Tal Daniel and Aviv Tamar. Soft-IntroVAE: Analyzing and improving the introspective variational autoencoder. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4391-4400, 2021.
Nicola De Cao, Wilker Aziz, and Ivan Titov. Block neural autoregressive flow. In Uncertainty in Artificial Intelligence, pages 1263-1273. PMLR, 2020.
Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. arXiv preprint arXiv:1605.09782, 2016.
Alexey Dosovitskiy and Thomas Brox. Generating images with perceptual similarity metrics based on deep networks. Advances in Neural Information Processing Systems, 29, 2016.
Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Olivier Mastropietro, Alex Lamb, Martin Arjovsky, and Aaron Courville. Adversarially learned inference. arXiv preprint arXiv:1606.00704, 2016.
Hao Fu, Chunyuan Li, Xiaodong Liu, Jianfeng Gao, Asli Celikyilmaz, and Lawrence Carin. Cyclical annealing schedule: A simple approach to mitigating KL vanishing. arXiv preprint arXiv:1903.10145, 2019.
Ian Goodfellow. NIPS 2016 tutorial: Generative adversarial networks. arXiv preprint arXiv:1701.00160, 2016.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. Advances in Neural Information Processing Systems, 27, 2014.
Will Grathwohl, Ricky TQ Chen, Jesse Bettencourt, Ilya Sutskever, and David Duvenaud. FFJORD: Free-form continuous dynamics for scalable reversible generative models. arXiv preprint arXiv:1810.01367, 2018.
Jinjin Gu, Yujun Shen, and Bolei Zhou. Image processing using multi-code GAN prior. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3012-3021, 2020.
Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. Improved training of Wasserstein GANs. Advances in Neural Information Processing Systems, 30, 2017.
Serhii Havrylov and Ivan Titov. Preventing posterior collapse with Levenshtein variational autoencoder. arXiv preprint arXiv:2004.14758, 2020.
Xianxu Hou, Linlin Shen, Ke Sun, and Guoping Qiu. Deep feature consistent variational autoencoder. In 2017 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 1133-1141. IEEE, 2017.
Huaibo Huang, Ran He, Zhenan Sun, and Tieniu Tan. IntroVAE: Introspective variational autoencoders for photographic image synthesis. Advances in Neural Information Processing Systems, 31, 2018.
Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of GANs for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196, 2017.
Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4401-4410, 2019.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
Anders Boesen Lindbo Larsen, Søren Kaae Sønderby, Hugo Larochelle, and Ole Winther. Autoencoding beyond pixels using a learned similarity metric. In International Conference on Machine Learning, pages 1558-1566. PMLR, 2016.
Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
Ming-Yu Liu, Thomas Breuel, and Jan Kautz. Unsupervised image-to-image translation networks. Advances in Neural Information Processing Systems, 30, 2017.
Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of the IEEE International Conference on Computer Vision, pages 3730-3738, 2015.
Teng Long, Yanshuai Cao, and Jackie Chi Kit Cheung. Preventing posterior collapse in sequence VAEs with pooling. 2019.
Changjie Lu, Shen Zheng, and Gaurav Gupta. Unsupervised domain adaptation for cardiac segmentation: Towards structure mutual information maximization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2588-2597, 2022.
. Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow, Brendan Frey, arXiv:1511.05644Adversarial autoencoders. arXiv preprintAlireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow, and Brendan Frey. Adversarial autoencoders. arXiv preprint arXiv:1511.05644, 2015.
Adversarial variational bayes: Unifying variational autoencoders and generative adversarial networks. Lars Mescheder, Sebastian Nowozin, Andreas Geiger, International Conference on Machine Learning. PMLRLars Mescheder, Sebastian Nowozin, and Andreas Geiger. Adversarial variational bayes: Unifying variational autoencoders and generative adversarial networks. In International Conference on Machine Learning, pages 2391-2400. PMLR, 2017.
Spectral normalization for generative adversarial networks. Takeru Miyato, Toshiki Kataoka, Masanori Koyama, Yuichi Yoshida, arXiv:1802.05957arXiv preprintTakeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normal- ization for generative adversarial networks. arXiv preprint arXiv:1802.05957, 2018.
Pytorch: An imperative style, high-performance deep learning library. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Advances in neural information processing systems. 32Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An im- perative style, high-performance deep learning library. Advances in neural information processing systems, 32, 2019.
Preventing posterior collapse with delta-vaes. Ali Razavi, Aäron Van Den, Ben Oord, Oriol Poole, Vinyals, arXiv:1901.03416arXiv preprintAli Razavi, Aäron van den Oord, Ben Poole, and Oriol Vinyals. Preventing posterior collapse with delta-vaes. arXiv preprint arXiv:1901.03416, 2019.
Towards text generation with adversarially learned neural outlines. Sandeep Subramanian, Alessandro Sai Rajeswar Mudumba, Adam Sordoni, Trischler, C Aaron, Chris Courville, Pal, Advances in Neural Information Processing Systems. 31Sandeep Subramanian, Sai Rajeswar Mudumba, Alessandro Sordoni, Adam Trischler, Aaron C Courville, and Chris Pal. Towards text generation with adversarially learned neural outlines. Advances in Neural Information Processing Systems, 31, 2018.
. Ilya Tolstikhin, Olivier Bousquet, Sylvain Gelly, Bernhard Schoelkopf, arXiv:1711.01558Wasserstein auto-encoders. arXiv preprintIlya Tolstikhin, Olivier Bousquet, Sylvain Gelly, and Bernhard Schoelkopf. Wasserstein auto-encoders. arXiv preprint arXiv:1711.01558, 2017.
Nvae: A deep hierarchical variational autoencoder. Arash Vahdat, Jan Kautz, Advances in Neural Information Processing Systems. 33Arash Vahdat and Jan Kautz. Nvae: A deep hierarchical variational autoencoder. Advances in Neural Information Processing Systems, 33:19667-19679, 2020.
Cf distance: a new domain discrepancy metric and application to explicit domain adaptation for cross-modality cardiac image segmentation. Fuping Wu, Xiahai Zhuang, IEEE Transactions on Medical Imaging. 3912Fuping Wu and Xiahai Zhuang. Cf distance: a new domain discrepancy metric and applica- tion to explicit domain adaptation for cross-modality cardiac image segmentation. IEEE Transactions on Medical Imaging, 39(12):4274-4285, 2020.
Unsupervised domain adaptation with variational approximation for cardiac segmentation. Fuping Wu, Xiahai Zhuang, IEEE Transactions on Medical Imaging. 4012Fuping Wu and Xiahai Zhuang. Unsupervised domain adaptation with variational ap- proximation for cardiac segmentation. IEEE Transactions on Medical Imaging, 40(12): 3555-3567, 2021.
Deblur-yolo: Real-time object detection with efficient blind motion deblurring. Yuxiong Shen Zheng, Shiyu Wu, Changjie Jiang, Gaurav Lu, Gupta, 2021 International Joint Conference on Neural Networks (IJCNN). IEEEShen Zheng, Yuxiong Wu, Shiyu Jiang, Changjie Lu, and Gaurav Gupta. Deblur-yolo: Real-time object detection with efficient blind motion deblurring. In 2021 International Joint Conference on Neural Networks (IJCNN), pages 1-8. IEEE, 2021.
Sapnet: Segmentation-aware progressive network for perceptual contrastive deraining. Changjie Shen Zheng, Yuxiong Lu, Gaurav Wu, Gupta, Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. the IEEE/CVF Winter Conference on Applications of Computer VisionShen Zheng, Changjie Lu, Yuxiong Wu, and Gaurav Gupta. Sapnet: Segmentation-aware progressive network for perceptual contrastive deraining. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 52-62, 2022.
Unpaired image-to-image translation using cycle-consistent adversarial networks. Jun-Yan Zhu, Taesung Park, Phillip Isola, Alexei A Efros, Proceedings of the IEEE international conference on computer vision. the IEEE international conference on computer visionJun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE in- ternational conference on computer vision, pages 2223-2232, 2017.
| [] |
[
"Medical Imaging with Deep Learning -Under Review 2023 SAM.MD: Zero-shot medical image segmentation capabilities of the Segment Anything Model",
"Medical Imaging with Deep Learning -Under Review 2023 SAM.MD: Zero-shot medical image segmentation capabilities of the Segment Anything Model"
] | [
"Saikat Roy [email protected] \nMedical Image Computing\nGerman Cancer Research Center (DKFZ)\nHeidelbergGermany\n",
"Tassilo Wald [email protected] \nMedical Image Computing\nGerman Cancer Research Center (DKFZ)\nHeidelbergGermany\n\nHelmholtz Imaging\n\n",
"Gregor Koehler [email protected] \nMedical Image Computing\nGerman Cancer Research Center (DKFZ)\nHeidelbergGermany\n\nHelmholtz Imaging\n\n",
"Maximilian R Rokuss [email protected] \nMedical Image Computing\nGerman Cancer Research Center (DKFZ)\nHeidelbergGermany\n",
"Nico Disch [email protected] \nMedical Image Computing\nGerman Cancer Research Center (DKFZ)\nHeidelbergGermany\n",
"Julius Holzschuh [email protected] \nMedical Image Computing\nGerman Cancer Research Center (DKFZ)\nHeidelbergGermany\n",
"David Zimmerer [email protected] \nMedical Image Computing\nGerman Cancer Research Center (DKFZ)\nHeidelbergGermany\n",
"Klaus H Maier-Hein [email protected] \nMedical Image Computing\nGerman Cancer Research Center (DKFZ)\nHeidelbergGermany\n\nPattern Analysis and Learning Group\nHeidelberg University Hospital\nGermany\n"
] | [
"Medical Image Computing\nGerman Cancer Research Center (DKFZ)\nHeidelbergGermany",
"Medical Image Computing\nGerman Cancer Research Center (DKFZ)\nHeidelbergGermany",
"Helmholtz Imaging\n",
"Medical Image Computing\nGerman Cancer Research Center (DKFZ)\nHeidelbergGermany",
"Helmholtz Imaging\n",
"Medical Image Computing\nGerman Cancer Research Center (DKFZ)\nHeidelbergGermany",
"Medical Image Computing\nGerman Cancer Research Center (DKFZ)\nHeidelbergGermany",
"Medical Image Computing\nGerman Cancer Research Center (DKFZ)\nHeidelbergGermany",
"Medical Image Computing\nGerman Cancer Research Center (DKFZ)\nHeidelbergGermany",
"Medical Image Computing\nGerman Cancer Research Center (DKFZ)\nHeidelbergGermany",
"Pattern Analysis and Learning Group\nHeidelberg University Hospital\nGermany"
] | [] | Foundation models have taken over natural language processing and image generation domains due to the flexibility of prompting. With the recent introduction of the Segment Anything Model (SAM), this prompt-driven paradigm has entered image segmentation with a hitherto unexplored abundance of capabilities. The purpose of this paper is to conduct an initial evaluation of the out-of-the-box zero-shot capabilities of SAM for medical image segmentation, by evaluating its performance on an abdominal CT organ segmentation task, via point or bounding box based prompting. We show that SAM generalizes well to CT data, making it a potential catalyst for the advancement of semi-automatic segmentation tools for clinicians. We believe that this foundation model, while not reaching state-of-theart segmentation performance in our investigations, can serve as a highly potent starting point for further adaptations of such models to the intricacies of the medical domain. | null | [
"https://export.arxiv.org/pdf/2304.05396v1.pdf"
] | 258,078,785 | 2304.05396 | 0931888d5cd7c26427cc116af2ac33863552da27 |
Medical Imaging with Deep Learning -Under Review 2023 SAM.MD: Zero-shot medical image segmentation capabilities of the Segment Anything Model
Saikat Roy [email protected]
Medical Image Computing
German Cancer Research Center (DKFZ)
HeidelbergGermany
Tassilo Wald [email protected]
Medical Image Computing
German Cancer Research Center (DKFZ)
HeidelbergGermany
Helmholtz Imaging
Gregor Koehler [email protected]
Medical Image Computing
German Cancer Research Center (DKFZ)
HeidelbergGermany
Helmholtz Imaging
Maximilian R Rokuss [email protected]
Medical Image Computing
German Cancer Research Center (DKFZ)
HeidelbergGermany
Nico Disch [email protected]
Medical Image Computing
German Cancer Research Center (DKFZ)
HeidelbergGermany
Julius Holzschuh [email protected]
Medical Image Computing
German Cancer Research Center (DKFZ)
HeidelbergGermany
David Zimmerer [email protected]
Medical Image Computing
German Cancer Research Center (DKFZ)
HeidelbergGermany
Klaus H Maier-Hein [email protected]
Medical Image Computing
German Cancer Research Center (DKFZ)
HeidelbergGermany
Pattern Analysis and Learning Group
Heidelberg University Hospital
Germany
Medical Imaging with Deep Learning -Under Review 2023 SAM.MD: Zero-shot medical image segmentation capabilities of the Segment Anything Model
Short Paper -- MIDL 2023 submission. Editors: Under Review for MIDL 2023. Keywords: medical image segmentation, SAM, foundation models, zero-shot learning
Foundation models have taken over natural language processing and image generation domains due to the flexibility of prompting. With the recent introduction of the Segment Anything Model (SAM), this prompt-driven paradigm has entered image segmentation with a hitherto unexplored abundance of capabilities. The purpose of this paper is to conduct an initial evaluation of the out-of-the-box zero-shot capabilities of SAM for medical image segmentation, by evaluating its performance on an abdominal CT organ segmentation task, via point or bounding box based prompting. We show that SAM generalizes well to CT data, making it a potential catalyst for the advancement of semi-automatic segmentation tools for clinicians. We believe that this foundation model, while not reaching state-of-theart segmentation performance in our investigations, can serve as a highly potent starting point for further adaptations of such models to the intricacies of the medical domain.
Introduction
In recent years, there has been an explosion in the development and use of foundational models in the field of artificial intelligence. These models are trained on very large datasets in order to generalize across various tasks and domains. In the Natural Language Processing domain, Large Language Models (LLMs) have taken over (Brown et al., 2020). This has led to models of increasing size, culminating in the recent GPT-4 by OpenAI (2023). For the image domain, Stable Diffusion (Rombach et al., 2022) and DALL-E (Ramesh et al., 2021) are models that generate high-resolution images using text prompts. With the recent publication of the Segment Anything Model (SAM) (Kirillov et al., 2023), the field of image segmentation received a promptable model, possibly enabling a wide range of applications. In this paper, we contribute an early-stage evaluation of SAM with different visual prompts, demonstrating varying degrees of accuracy on a multi-organ dataset from the CT domain.
Methods
Slice Extraction
We use slices from the AMOS22 Abdominal CT Organ Segmentation dataset (Ji et al., 2022) to evaluate the zero-shot capabilities of SAM. We generate our evaluation dataset using axial 2D slices of patients centered around the center-of-mass of each given label. This results in 197-240 slices per patient, per class with each image slice containing some foreground class and a corresponding binary mask. Given this slice and binary mask, we generate different types of visual prompts.
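A rough illustration of this slice-extraction step is sketched below: it selects axial slices around a label's center of mass. The function name, the half_range parameter and the in-memory arrays are our own assumptions for illustration, not the authors' code.

```python
import numpy as np
from scipy import ndimage

def extract_com_slices(volume, labels, cls, half_range=0):
    """Collect axial (image, mask) slice pairs around one class's center of mass.

    volume: 3D CT array (z, y, x); labels: integer label map of the same shape;
    cls: organ label to extract. half_range widens the window around the
    center-of-mass slice; all names here are illustrative.
    """
    mask = labels == cls
    if not mask.any():
        return []
    z_com = int(round(ndimage.center_of_mass(mask)[0]))   # z-coordinate of the COM
    lo = max(0, z_com - half_range)
    hi = min(volume.shape[0], z_com + half_range + 1)
    return [(volume[z], mask[z]) for z in range(lo, hi) if mask[z].any()]
```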
Visual Prompt Engineering Zero-shot approaches have recently utilized prompting to segment novel concepts or classes not seen during training (Lüddecke and Ecker, 2022; Zou et al., 2022). SAM allows a variety of prompts including text, points and boxes to enable zero-shot semantic segmentation (to the best of our knowledge, SAM does not yet provide a direct text prompt interface). In this work, we use the following limited set of positive visual prompts to gauge the zero-shot capabilities of SAM on unseen concepts: 1) point-based prompting with 1, 3 and 10 randomly selected points from the segmentation mask of the novel structure, and 2) bounding boxes of the segmentation masks with jitter of 0.01, 0.05, 0.1, 0.25 and 0.5 added randomly, to simulate various degrees of user inaccuracy. Boxes and points are provided in an oracle fashion to imitate an expert clinician.
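A minimal sketch of how such point and box prompts can be issued through the publicly released segment_anything package is given below. The checkpoint path, the synthetic slice_rgb/gt_mask arrays and the jitter scheme are illustrative assumptions rather than the paper's exact experimental setup.

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Stand-ins for a prepared CT slice (RGB uint8) and its organ mask; in practice
# these come from the slice-extraction step. The checkpoint path is a placeholder.
slice_rgb = np.zeros((256, 256, 3), dtype=np.uint8)
gt_mask = np.zeros((256, 256), dtype=bool)
gt_mask[100:140, 90:150] = True

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)
predictor.set_image(slice_rgb)

# 1) Point prompts: n positive points sampled from the ground-truth mask.
ys, xs = np.nonzero(gt_mask)
idx = np.random.choice(len(ys), size=3, replace=False)
pts = np.stack([xs[idx], ys[idx]], axis=1)                  # SAM expects (x, y)
masks, scores, _ = predictor.predict(point_coords=pts,
                                     point_labels=np.ones(len(pts)),
                                     multimask_output=False)

# 2) Box prompt: tight box around the mask plus random jitter proportional to
# the box extent, simulating user inaccuracy (our reading of the protocol).
x0, y0, x1, y1 = xs.min(), ys.min(), xs.max(), ys.max()
w, h = x1 - x0, y1 - y0
jit = 0.1
box = np.array([x0, y0, x1, y1]) + np.random.uniform(-jit, jit, 4) * np.array([w, h, w, h])
masks_box, _, _ = predictor.predict(box=box, multimask_output=False)
```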
Results and Discussion
Results
We compare the predictions of SAM to the corresponding 2D slices extracted from the predictions of trained 2D and 3D nnU-Net baselines (Isensee et al., 2018). The Dice Similarity Coefficient (DSC) of the various prompting types as well as of nnU-Net is shown in Table 1. Box prompting, even with moderate (0.1) jitter, is seen to be highly competitive against our baselines, compared to point prompts.
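For reference, the DSC reported in Table 1 can be computed for a single prediction/ground-truth pair as in the following sketch (a standard definition; the paper does not specify its exact implementation):

```python
import numpy as np

def dice(pred, gt, eps=1e-8):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)
```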
Discussion
Zero-shot medical image segmentation SAM is seen to segment novel target structures (organs), especially with bounding-box prompting at moderate jitter, to highly competitive accuracies compared to our baselines. A single positive bounding box is seen to perform considerably better than 10 positive point prompts. The performance does not degrade on raw CT values either (AVG*), indicating robustness of box prompting to high intensity ranges. Considering that nnU-Net is a strong automatic baseline trained on the entire dataset while SAM only sees a slice and a prompt (points or box), SAM demonstrates enormous potential as a zero-shot technique for medical image segmentation.
Who is it useful for? Our experiments demonstrate that SAM could be highly beneficial for interactive segmentation pipelines, enabling rapid semi-automatic segmentation of a majority of the structure of interest with only a few click or bounding-box prompts (or possibly both) by an expert. Empirically, it appears that SAM may experience decreased accuracy in areas near class boundaries (as shown in Figure 1). However, as such areas can be manually segmented, the use of SAM might still greatly improve the speed of a clinical pipeline while maintaining a good level of accuracy.
Conclusion
Our study evaluates the zero-shot effectiveness of the Segment Anything Model (SAM) for medical image segmentation using few-click and bounding-box prompting, demonstrating high accuracy on novel medical image segmentation tasks. We find that by using SAM, expert users can achieve fast semi-automatic segmentation of most relevant structures, making it highly valuable for interactive medical segmentation pipelines.
Figure 1: Examples of random point and jittered box prompts with subsequently generated segmentation masks. Prompt points and boxes are represented in green, while the obtained segmentations are shown in blue.
Table 1: DSC of point and box prompting against 2D and 3D nnU-Net. All results created after CT clipping to -100 to 200 Hounsfield units, except AVG* on the right, which is the average DSC on raw CT values.

Method      | Spl.  | R.Kid. | L.Kid. | GallBl. | Esoph. | Liver | Stom. | Aorta | Postc. | Pancr. | R.AG. | L.AG. | Duod. | Blad. | AVG   | AVG*
1 Point     | 0.632 | 0.759  | 0.770  | 0.616   | 0.382  | 0.577 | 0.508 | 0.720 | 0.453  | 0.317  | 0.085 | 0.196 | 0.339 | 0.542 | 0.493 | 0.347
3 Points    | 0.733 | 0.784  | 0.786  | 0.683   | 0.448  | 0.658 | 0.577 | 0.758 | 0.493  | 0.343  | 0.129 | 0.240 | 0.325 | 0.631 | 0.542 | 0.397
10 Points   | 0.857 | 0.855  | 0.857  | 0.800   | 0.643  | 0.811 | 0.759 | 0.842 | 0.637  | 0.538  | 0.405 | 0.516 | 0.480 | 0.789 | 0.699 | 0.560
Boxes, 0.01 | 0.926 | 0.884  | 0.889  | 0.883   | 0.820  | 0.902 | 0.823 | 0.924 | 0.867  | 0.727  | 0.618 | 0.754 | 0.811 | 0.909 | 0.838 | 0.826
Boxes, 0.05 | 0.920 | 0.883  | 0.894  | 0.879   | 0.814  | 0.883 | 0.818 | 0.923 | 0.862  | 0.727  | 0.609 | 0.746 | 0.805 | 0.907 | 0.834 | 0.819
Boxes, 0.1  | 0.890 | 0.870  | 0.874  | 0.859   | 0.806  | 0.813 | 0.796 | 0.919 | 0.845  | 0.702  | 0.594 | 0.733 | 0.785 | 0.862 | 0.810 | 0.795
Boxes, 0.25 | 0.553 | 0.601  | 0.618  | 0.667   | 0.656  | 0.490 | 0.561 | 0.747 | 0.687  | 0.481  | 0.478 | 0.558 | 0.655 | 0.561 | 0.594 | 0.612
Boxes, 0.5  | 0.202 | 0.275  | 0.257  | 0.347   | 0.356  | 0.164 | 0.252 | 0.381 | 0.335  | 0.239  | 0.234 | 0.308 | 0.343 | 0.205 | 0.278 | 0.289
nnUNet 3D   | 0.978 | 0.951  | 0.951  | 0.903   | 0.856  | 0.978 | 0.919 | 0.961 | 0.923  | 0.856  | 0.790 | 0.815 | 0.814 | 0.929 | 0.902 | 0.902
nnUNet 2D   | 0.977 | 0.938  | 0.943  | 0.865   | 0.850  | 0.976 | 0.890 | 0.954 | 0.884  | 0.788  | 0.753 | 0.787 | 0.745 | 0.920 | 0.877 | 0.877
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877-1901, 2020.
Fabian Isensee, Jens Petersen, Andre Klein, David Zimmerer, Paul F. Jaeger, Simon Kohl, Jakob Wasserthal, Gregor Koehler, Tobias Norajitra, Sebastian Wirkert, and Klaus H. Maier-Hein. nnU-Net: Self-adapting framework for U-Net-based medical image segmentation, 2018.
Yuanfeng Ji, Haotian Bai, Chongjian Ge, Jie Yang, Ye Zhu, Ruimao Zhang, Zhen Li, Lingyan Zhanng, Wanling Ma, Xiang Wan, et al. AMOS: A large-scale abdominal multi-organ benchmark for versatile medical image segmentation. Advances in Neural Information Processing Systems, 35:36722-36732, 2022.
Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. Berg, Wan-Yen Lo, Piotr Dollár, and Ross Girshick. Segment Anything. April 2023. URL http://arxiv.org/abs/2304.02643.
Timo Lüddecke and Alexander Ecker. Image segmentation using text and image prompts. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7086-7096, 2022.
OpenAI. GPT-4 Technical Report. March 2023. URL http://arxiv.org/abs/2303.08774.
Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. Zero-shot text-to-image generation, 2021.
Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models, 2022.
Xueyan Zou, Zi-Yi Dou, Jianwei Yang, Zhe Gan, Linjie Li, Chunyuan Li, Xiyang Dai, Harkirat Behl, Jianfeng Wang, Lu Yuan, et al. Generalized decoding for pixel, image, and language. arXiv preprint arXiv:2212.11270, 2022.
| [] |
[
"On some computational aspects of Hermite wavelets on a class of SBVPs arising in exothermic reactions",
"On some computational aspects of Hermite wavelets on a class of SBVPs arising in exothermic reactions"
] | [
"Amit K Verma \nDepartment of Mathematics\nIIT Patna\n801106Patna, BiharIndia\n",
"Diksha Tiwari \nFaculty of Mathematics\nUniversity of Vienna\nAustria\n"
] | [
"Department of Mathematics\nIIT Patna\n801106Patna, BiharIndia",
"Faculty of Mathematics\nUniversity of Vienna\nAustria"
] | [] | We propose a new class of SBVPs which deals with exothermic reactions. We also propose four computationally stable methods to solve singular nonlinear BVPs by using Hermite wavelet collocation which are coupled with Newton's quasilinearization and Newton-Raphson method. We compare the results obtained with Hermite Wavelets with Haar wavelet collocation. The efficiency of these methods are verified by applying these four methods on Lane-Emden equations. Convergence analysis is also presented. | null | [
"https://arxiv.org/pdf/1911.00495v2.pdf"
] | 207,871,014 | 1911.00495 | 22675f970c9e0795c218d597f8eaa9fbf4a00d53 |
On some computational aspects of Hermite wavelets on a class of SBVPs arising in exothermic reactions
November 6, 2019
Amit K Verma
Department of Mathematics
IIT Patna
801106Patna, BiharIndia
Diksha Tiwari
Faculty of Mathematics
University of Vienna
Austria
On some computational aspects of Hermite wavelets on a class of SBVPs arising in exothermic reactions
November 6, 2019. Keywords: MRA, Quasilinearization, Newton-Raphson, Haar Wavelets, Hermite Wavelets, Nonlinear Singular Boundary Value Problems.
We propose a new class of SBVPs which deals with exothermic reactions. We also propose four computationally stable methods to solve singular nonlinear BVPs by using Hermite wavelet collocation which are coupled with Newton's quasilinearization and Newton-Raphson method. We compare the results obtained with Hermite Wavelets with Haar wavelet collocation. The efficiency of these methods are verified by applying these four methods on Lane-Emden equations. Convergence analysis is also presented.
Introduction
This paper deals with wavelets and nonlinear singular BVPs. Nonlinear BVPs are difficult to deal with, and if a singularity is also present they become even more difficult. It is not easy to capture the behavior of the solutions near the point of singularity. If we apply suitable boundary conditions which force the existence of a unique continuous solution, there is a possibility of finding this solution via numerical methods. Still, since the coefficients blow up near the singularity, discretizing the differential equation is a challenge. Wavelets help us to treat this complicated situation in an easy way with a smaller number of spatial points. To address both nonlinear BVPs and wavelets, we divide the introduction into two parts.
Nonlinear SBVPs arising in Exothermic Reactions
Here we propose a new class of nonlinear SBVPs. For that, let us consider the mathematical equation which governs the balance between heat generated and heat conducted away,
$\lambda \nabla^2 T = -QW, \qquad (1)$
where $T$ is the gas temperature, $Q$ the heat of the reaction, $\lambda$ the thermal conductivity, $W$ the reaction velocity and $\nabla^2$ the Laplacian operator. Chambre [4] assumed that the reaction is mono-molecular and that the velocity follows the Arrhenius law, given as
$W = A \exp\left(\frac{-E}{RT}\right), \qquad (2)$
and after some approximations and symmetry assumptions Chambre [4] arrived at the following equation
$Ly = -\delta \exp(y), \qquad (3)$
where
$L \equiv \frac{d^2}{dt^2} + \frac{k_g}{t}\frac{d}{dt},$
$k_g$ depends on the shape and size of the vessel and $\delta$ is a parameter.
Nakamura et al. [15], while looking for an equation which can express the temperature dependence of the rate constant, proposed the following:
$W = A \exp\left(\frac{-E}{R\,(T_0^n + T^n)^{1/n}}\right), \quad n = 1, 2, 3, \cdots \qquad (4)$
where $R$ is the gas constant, $T$ is the absolute temperature, $A$, $E$ and $T_0$ are parameters and $n$ is an integer. Similar to the analysis of Chambre [4], we arrive at the following differential equation:
$Ly = -B \exp\left(\frac{-A}{(c^n + y^n)^{1/n}}\right). \qquad (5)$
There are several other examples which lead us to consider the following class of nonlinear singular boundary value problems (SBVPs),
$Ly + f(t, y(t)) = 0, \quad 0 < t \leq 1, \qquad (6)$
subject to the following boundary conditions:
Case (i): $y'(0) = \alpha, \quad y(1) = \beta$, (7a)
Case (ii): $y(0) = \alpha, \quad y(1) = \beta$, (7b)
Case (iii): $y'(0) = \alpha, \quad a\,y(1) + b\,y'(1) = \beta$, (7c)
where $a, b, c, \alpha, \beta$ are real constants and $f(t, y(t))$ is a real valued function. The boundary conditions at the singular end $t = 0$ depend on the value of $k_g$.
There is a huge literature on the existence of solutions of such BVPs; please refer to [17], [18], [19] and [28] and the references therein. Several numerical methods have also been proposed for solving these types of nonlinear singular boundary value problems (see [25,24,30] and the references therein).
In [10] nonlinear singular Lane-Emden IVPs are solved with a Haar wavelet quasilinearization approach, where the nonlinearity is easily handled by quasilinearization. In [22] a Hermite wavelet operational matrix method is used to solve second order nonlinear singular initial value problems. In [16] Chebyshev wavelet operational matrices are used for solving nonlinear singular boundary value problems. In [31] a method based on Laguerre wavelets is used to solve nonlinear singular boundary value problems. In [14] a method based on Legendre wavelets is proposed to solve singular boundary value problems. All these wavelet-based methods show high accuracy. In [29,27,26] Haar wavelets are used to solve SBVPs efficiently for higher resolution.
In this article we solve SBVPs of type (6) subject to the boundary conditions (7a), (7b), (7c) with the help of the Hermite Wavelet Newton Approach (HeWNA), the Hermite Wavelet Quasilinearization Approach (HeWQA), the Haar Wavelet Newton Approach (HWNA) and the Haar Wavelet Quasilinearization Approach (HWQA), and compare the results to show the accuracy of the methods. Convergence of the HeWNA method is also established. The novelty of this paper is that Newton-Raphson has not previously been coupled with Haar wavelets, and Hermite wavelets have not been used to solve nonlinear singular BVPs until now.
Some recent works on wavelets and their applications can be found in [9,2,23,11] and the references therein. This paper is organized in the following manner. In section 2 we discuss MRA and the Hermite wavelet is defined; in subsection 2.3 the Haar wavelet is defined; in section 3 methods of solution based on HeWQA, HeWNA, HWQA and HWNA are proposed. In section 4 the convergence analysis of the proposed methods is carried out, and in the last section some numerical examples are presented to show the accuracy of the methods.
Hermite and Haar Wavelet
This section starts with MRA and definition of Hermite Wavelets.
Wavelets and MRA
The theory of wavelets has been developed by mathematicians as well as engineers over the years. The word wavelet finds its origin in the work of the French geophysicist Jean Morlet, who used it to describe certain functions. Morlet and the Croatian-French physicist Alex Grossman developed the theory further into the form used today [20, p. 222]. The main issue being addressed in the process was to overcome the drawbacks of Fourier transforms. Wavelets are multi-indexed and contain parameters which can be used to shift or dilate/contract the functions, giving us basis functions. Thus they are computationally more complex, but they offer better control, and much better results are obtained at low resolution, i.e., fewer subdivisions are needed than in finite difference methods and other methods based on similar concepts. Here we consider methods based on wavelet transforms.
The following properties of wavelets enable us to choose them over other methods:
• Orthogonality
• Compact Support
• Density
• Multiresolution Analysis (MRA)
MRA
Pereyra et al. [20] observe that an orthogonal multiresolution analysis (MRA) is a collection of closed subspaces of $L^2(\mathbb{R})$ which are nested, have trivial intersection and exhaust the space; the subspaces are connected to each other by a scaling property, and finally there is a special function, the scaling function $\varphi$, whose integer translates form an orthonormal basis for one of the subspaces. We give a formal statement of an MRA as defined in [20].

Definition 2.1. An MRA with scaling function $\varphi$ is a collection of closed subspaces $V_j$, $j = \ldots, -2, -1, 0, 1, 2, \ldots$ of $L^2(\mathbb{R})$ such that
1. $V_j \subset V_{j+1}$,
2. $\overline{\bigcup_j V_j} = L^2(\mathbb{R})$,
3. $\bigcap_j V_j = \{0\}$,
4. the function $f(x)$ belongs to $V_j$ if and only if the function $f(2x)$ belongs to $V_{j+1}$,
5. the function $\varphi$ belongs to $V_0$, and the set $\{\varphi(x - k),\ k \in \mathbb{Z}\}$ is an orthonormal basis for $V_0$.
The sequence of wavelet subspaces $W_j$ of $L^2(\mathbb{R})$ is such that $V_j \perp W_j$ for all $j$ and $V_{j+1} = V_j \oplus W_j$, and $\bigoplus_{j \in \mathbb{Z}} W_j$ is dense in $L^2(\mathbb{R})$ with respect to the $L^2$ norm. Now we state Mallat's theorem [13], which guarantees that in the presence of an orthogonal MRA an orthonormal basis for $L^2(\mathbb{R})$ exists. These basis functions are the fundamental functions in the theory of wavelets which help us to develop advanced computational techniques.

Theorem 2.1 (Mallat's Theorem). Given an orthogonal MRA with scaling function $\varphi$, there is a wavelet $\psi \in L^2(\mathbb{R})$ such that for each $j \in \mathbb{Z}$, the family $\{\psi_{j,k}\}_{k \in \mathbb{Z}}$ is an orthonormal basis for $W_j$. Hence the family $\{\psi_{j,k}\}_{j,k \in \mathbb{Z}}$ is an orthonormal basis for $L^2(\mathbb{R})$.
Hermite Wavelet ([21])
Hermite polynomials are defined on the interval $(-\infty, \infty)$ and can be defined with the help of the recurrence formula
$H_0(t) = 1, \quad H_1(t) = 2t, \quad H_{m+1}(t) = 2tH_m(t) - 2mH_{m-1}(t), \quad m = 1, 2, 3, \cdots.$
Completeness and orthogonality (with respect to the weight function $e^{-t^2}$) of the Hermite polynomials enable us to treat them as a wavelet basis (Theorem 2.1).
Hermite wavelets on the interval $[0, 1]$ are defined as
$\psi_{n,m}(t) = 2^{k/2} \frac{1}{\sqrt{\hat{n}!\, 2^{\hat{n}} \sqrt{\pi}}}\, H_m(2^k t - \hat{n})\, \chi_{\left[\frac{\hat{n}-1}{2^k}, \frac{\hat{n}+1}{2^k}\right)}(t), \qquad (8)$
where $k = 1, 2, \ldots$ is the level of resolution, $n = 1, 2, \ldots, 2^{k-1}$, $\hat{n} = 2n - 1$ is the translation parameter, and $m = 1, 2, \ldots, M - 1$ is the order of the Hermite polynomial.
Approximation of Function with Hermite Wavelet
A function f (t) defined on L 2 [0, 1] can be approximated with Hermite wavelet in the following manner
$f(t) = \sum_{n=1}^{\infty} \sum_{m=0}^{\infty} c_{nm}\, \psi_{nm}(t). \qquad (9)$
For computation purposes we truncate (9) and define
$f(t) \approx \sum_{n=1}^{2^{k-1}} \sum_{m=0}^{M-1} c_{nm}\, \psi_{nm}(t) = c^T \psi(t), \qquad (10)$
where $\psi(t)$ is the $2^{k-1}M \times 1$ vector
$\psi(t) = \left[\psi_{1,0}(t), \ldots, \psi_{1,M-1}(t), \psi_{2,0}(t), \ldots, \psi_{2,M-1}(t), \ldots, \psi_{2^{k-1},0}(t), \ldots, \psi_{2^{k-1},M-1}(t)\right]^T$
and $c$ is a $2^{k-1}M \times 1$ matrix whose entries can be computed as
$c_{ij} = \int_0^1 f(t)\, \psi_{ij}(t)\, dt, \qquad (11)$
with $i = 1, 2, \ldots, 2^{k-1}$ and $j = 0, 1, \ldots, M - 1$. Here $M$ is the degree of the Hermite polynomial.
Integration of Hermite Wavelet
As suggested in [1], ν-th order integration of ψ(t) can also be approximated as
$\int_0^t \cdots \int_0^t \psi(\tau)\, d\tau^{\nu} \approx \left[J^{\nu}\psi_{1,0}(t), \ldots, J^{\nu}\psi_{1,M-1}(t), J^{\nu}\psi_{2,0}(t), \ldots, J^{\nu}\psi_{2,M-1}(t), \ldots, J^{\nu}\psi_{2^{k-1},0}(t), \ldots, J^{\nu}\psi_{2^{k-1},M-1}(t)\right]^T,$
where
$J^{\nu}\psi_{n,m}(t) = 2^{k/2} \frac{1}{\sqrt{\hat{n}!\, 2^{\hat{n}} \sqrt{\pi}}}\, J^{\nu} H_m(2^k t - \hat{n})\, \chi_{\left[\frac{\hat{n}-1}{2^k}, \frac{\hat{n}+1}{2^k}\right)}(t), \qquad (12)$
and $k = 1, 2, \ldots$ is the level of resolution, $n = 1, 2, \ldots, 2^{k-1}$, $\hat{n} = 2n - 1$ is the translation parameter, and $m = 1, 2, \ldots, M - 1$ is the order of the Hermite polynomial.

Remark 2.1. The integral operator $J^{\nu}$ ($\nu > 0$) of a function $f(t)$ is defined as
$J^{\nu} f(t) = \frac{1}{(\nu - 1)!} \int_0^t (t - s)^{\nu - 1} f(s)\, ds.$
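A small sketch of how $\psi_{n,m}$ and its repeated integrals $J^{\nu}\psi_{n,m}$ can be evaluated numerically (for $k = 1$, hence a single translation $\hat{n} = 1$) is given below. The use of scipy's eval_hermite and cumulative trapezoidal quadrature in place of closed-form integration is our implementation choice, not prescribed by the paper.

```python
import numpy as np
from math import factorial, sqrt, pi
from scipy.integrate import cumulative_trapezoid
from scipy.special import eval_hermite

k, n_hat, M = 1, 1, 6                       # k = 1 gives a single translation n_hat = 1
norm = 2**(k/2) / sqrt(factorial(n_hat) * 2**n_hat * sqrt(pi))

t = np.linspace(0.0, 1.0, 2001)
psi = np.array([norm * eval_hermite(m, 2**k * t - n_hat) for m in range(M)])  # eq. (8)
J1 = np.array([cumulative_trapezoid(p, t, initial=0.0) for p in psi])   # J psi, eq. (12)
J2 = np.array([cumulative_trapezoid(j, t, initial=0.0) for j in J1])    # J^2 psi
```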
Hermite Wavelet Collocation Method
To apply Hermite wavelets to ordinary differential equations, we need a discretized form. We use the collocation method for discretization, where the mesh points are given by
$\bar{x}_l = l\,\Delta x, \quad l = 0, 1, \cdots, M, \qquad (13)$
with $\Delta x = 1/M$. For the collocation points we define
$x_l = 0.5(\bar{x}_{l-1} + \bar{x}_l), \quad l = 1, 2, \cdots, M. \qquad (14)$
For $k = 1$, equation (10) takes the form
$f(t) \approx \sum_{m=0}^{M-1} c_{1m}\, \psi_{1m}(t). \qquad (15)$
We replace $t$ by $x_l$ in the above equation and arrive at a system of equations which can easily be solved to get the solution of the nonlinear SBVP.
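For concreteness, the mesh and collocation points of (13)-(14) on $[0, 1]$ can be generated as follows ($\Delta x = 1/M$ on the unit interval):

```python
import numpy as np

M = 8
dx = 1.0 / M
x_mesh = dx * np.arange(M + 1)              # mesh points, cf. eq. (13)
x_col = 0.5 * (x_mesh[:-1] + x_mesh[1:])    # collocation points, cf. eq. (14)
print(x_col)                                # 1/16, 3/16, ..., 15/16, as in Tables 1-11
```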
Haar Wavelet
Let us assume that $x$ belongs to an interval $[P, Q]$, where $P$ and $Q$ are constant end points. Let us define $M = 2^J$, where $J$ is the maximal level of resolution. Divide $[P, Q]$ into $2M$ subintervals of equal length $\Delta x = (Q - P)/(2M)$. The wavelet number $i$ is defined as $i = m + k + 1$, where $j = 0, 1, \cdots, J$ and $k = 0, 1, \cdots, m - 1$ (here $m = 2^j$). The $i$th Haar wavelet is defined as
$h_i(x) = \chi_{[\eta_1(i),\, \eta_2(i))} - \chi_{[\eta_2(i),\, \eta_3(i))}, \qquad (16)$
where
$\eta_1(i) = P + 2k\mu\Delta x, \quad \eta_2(i) = P + (2k + 1)\mu\Delta x, \quad \eta_3(i) = P + 2(k + 1)\mu\Delta x, \quad \mu = M/m. \qquad (17)$
The above equations are valid for $i > 2$. For the case $i = 1$ we have $h_1(x) = \chi_{[P,Q]}$, and for $i = 2$ we have
$\eta_1(2) = P, \quad \eta_2(2) = 0.5(P + Q), \quad \eta_3(2) = Q. \qquad (18)$
The thickness of the i th wavelet is
$\eta_3(i) - \eta_1(i) = 2\mu\Delta x = (Q - P)\,m^{-1} = (Q - P)\,2^{-j}. \qquad (19)$
If J is fixed then by (16)
$\int_P^Q h_i(x)\, h_l(x)\, dx = \begin{cases} (Q - P)\,2^{-j}, & l = i,\\ 0, & l \neq i. \end{cases} \qquad (20)$
The integrals pυ,i(x) are defined as
$p_{\upsilon,i}(x) = \int_P^x \int_P^x \cdots \int_P^x h_i(t)\, dt^{\upsilon} = \frac{1}{(\upsilon - 1)!} \int_P^x (x - t)^{\upsilon - 1}\, h_i(t)\, dt, \qquad (21)$
where $\upsilon = 1, 2, \cdots, n$ and $i = 1, 2, \cdots, 2M$. Carrying out the integration we get
$p_{\upsilon,i}(x) = \frac{1}{\upsilon!}\,[x - \eta_1(i)]^{\upsilon}\, \chi_{[\eta_1(i), \eta_2(i))} + \frac{1}{\upsilon!}\left\{[x - \eta_1(i)]^{\upsilon} - 2[x - \eta_2(i)]^{\upsilon}\right\} \chi_{[\eta_2(i), \eta_3(i)]} + \frac{1}{\upsilon!}\left\{[x - \eta_1(i)]^{\upsilon} - 2[x - \eta_2(i)]^{\upsilon} + [x - \eta_3(i)]^{\upsilon}\right\} \chi_{(\eta_3(i), \infty)} \qquad (22)$
for $i > 1$; for $i = 1$ we have $\eta_1 = P$, $\eta_2 = \eta_3 = Q$ and
$p_{\upsilon,1}(x) = \frac{1}{\upsilon!}\,(x - P)^{\upsilon}. \qquad (23)$
Haar Wavelet Collocation Method
Similar to the previous section, we again define collocation points as follows:
$\bar{x}_t = P + t\,\Delta x, \quad t = 0, 1, \cdots, 2M, \qquad (24)$
$x_t = 0.5(\bar{x}_{t-1} + \bar{x}_t), \quad t = 1, 2, \cdots, 2M, \qquad (25)$
and replace $x \to x_t$ in (16), (17), (18). We define the Haar matrices $H, P_1, P_2, \cdots, P_{\upsilon}$, which are $2M \times 2M$ matrices whose entries are given by $H(i, t) = h_i(x_t)$ and $P_{\upsilon}(i, t) = p_{\upsilon,i}(x_t)$, $\upsilon = 1, 2, \cdots$. Consider $P = 0$, $Q = 1$, $J = 1$. Then $2M = 4$, so $H$, $P_1$, $P_2$ are
$H = \begin{pmatrix} 1 & 1 & 1 & 1\\ 1 & 1 & -1 & -1\\ 1 & -1 & 0 & 0\\ 0 & 0 & 1 & -1 \end{pmatrix}, \quad P_1 = \frac{1}{8}\begin{pmatrix} 1 & 3 & 5 & 7\\ 1 & 3 & 3 & 1\\ 1 & 1 & 0 & 0\\ 0 & 0 & 1 & 1 \end{pmatrix}, \quad P_2 = \frac{1}{128}\begin{pmatrix} 1 & 9 & 25 & 49\\ 1 & 9 & 23 & 31\\ 1 & 7 & 8 & 8\\ 0 & 0 & 1 & 7 \end{pmatrix}. \qquad (26)$
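The construction (16)-(26) can be checked with a short script; the helper below is a sketch (function and variable names are ours) that reproduces the matrices in (26) for J = 1:

```python
import numpy as np
from math import factorial, log2, floor

def haar_matrices(J, P=0.0, Q=1.0):
    """Sketch of eqs. (16)-(26): H, P1, P2 at the collocation points (25)."""
    M = 2**J
    N = 2 * M
    dx = (Q - P) / N
    x = P + (np.arange(1, N + 1) - 0.5) * dx            # collocation points

    def eta(i):                                         # break points, eqs. (17)-(18)
        if i == 2:
            return P, 0.5 * (P + Q), Q
        j = int(floor(log2(i - 1)))                     # i = m + k + 1 with m = 2**j
        m = 2**j
        k = i - m - 1
        mu = M / m
        return (P + 2*k*mu*dx, P + (2*k + 1)*mu*dx, P + 2*(k + 1)*mu*dx)

    H, P1, P2 = (np.zeros((N, N)) for _ in range(3))
    H[0], P1[0], P2[0] = 1.0, x - P, 0.5*(x - P)**2     # i = 1, eqs. (16), (23)
    for i in range(2, N + 1):
        e1, e2, e3 = eta(i)
        H[i-1] = ((x >= e1) & (x < e2)).astype(float) - ((x >= e2) & (x < e3)).astype(float)
        for v, Pv in ((1, P1), (2, P2)):
            a = np.clip(x - e1, 0, None)**v             # eq. (22), collapsed by clipping
            b = np.clip(x - e2, 0, None)**v
            cterm = np.clip(x - e3, 0, None)**v
            Pv[i-1] = (a - 2*b + cterm) / factorial(v)
    return H, P1, P2

H, P1, P2 = haar_matrices(J=1)
print(H)          # matches (26); 8*P1 and 128*P2 give the integer entries shown there
```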
Approximation of Function with Haar Wavelet
A function f (t) defined on L 2 [0, 1] can be approximated by Haar wavelet basis in the following manner
$f(t) = \sum_{i=1}^{\infty} a_i\, h_i(t). \qquad (27)$
For computation purposes we truncate (27) and define
$f(t) \approx \sum_{i=1}^{2M} a_i\, h_i(t), \qquad (28)$
where $J$ (with $M = 2^J$) is the maximal level of resolution.
Numerical Methods Based on Hermite and Haar Wavelets
In this section, solution methods based on Hermite wavelet and Haar wavelet are presented.
Hermite Wavelet Quasilinearization Approach (HeWQA)
In HeWQA we use quasilinearization to linearize the SBVP, then the method of collocation for discretization, and Hermite wavelets for computation of the solutions of the nonlinear SBVP. We consider the differential equation (6) with boundary conditions (7a), (7b) and (7c). Quasilinearizing this equation, we get
$Ly_{r+1} = -f(t, y_r(t)) + \sum_{s=0}^{1} \left(y^{(s)}_{r+1} - y^{(s)}_{r}\right)\left(-f_{y^{(s)}}(t, y_r(t))\right), \qquad (29a)$
subject to the linearized boundary conditions
$y'_{r+1}(0) = y'_r(0), \quad y_{r+1}(1) = y_r(1), \qquad (29b)$
$y_{r+1}(0) = y_r(0), \quad y_{r+1}(1) = y_r(1), \qquad (29c)$
$y'_{r+1}(0) = y'_r(0), \quad a\, y_{r+1}(1) + b\, y'_{r+1}(1) = a\, y_r(1) + b\, y'_r(1). \qquad (29d)$
Here $s = 0, 1$, $f_{y^{(s)}} = \partial f/\partial y^{(s)}$ and $y^{(0)}_r(t) = y_r(t)$.
Thus we arrive at the linearized form of the given differential equation. Now we use a Hermite wavelet method similar to that described in [6]. Let us assume
$y''_{r+1}(t) = \sum_{m=0}^{M-1} c_{1m}\, \psi_{1m}(t). \qquad (29e)$
Then integrating twice we get the following two equations:
$y'_{r+1}(t) = \sum_{m=0}^{M-1} c_{1m}\, J\psi_{1m}(t) + y'_{r+1}(0), \qquad (29f)$
$y_{r+1}(t) = \sum_{m=0}^{M-1} c_{1m}\, J^2\psi_{1m}(t) + t\, y'_{r+1}(0) + y_{r+1}(0). \qquad (29g)$
Here $J^{\nu}$ ($\nu > 0$) is the integral operator defined previously.
Treatment of the Boundary Value Problem
Based on the boundary conditions we consider different cases and follow a procedure similar to that described in [12].
Case (i): In equation (7a) we have $y'(0) = \alpha$, $y(1) = \beta$. So by linearization we have $y'_{r+1}(0) = \alpha$, $y_{r+1}(1) = \beta$. Now put $t = 1$ in (29g) to get
$y_{r+1}(1) = \sum_{m=0}^{M-1} c_{1m}\, J^2\psi_{1m}(1) + y'_{r+1}(0) + y_{r+1}(0),$
so
$y_{r+1}(0) = y_{r+1}(1) - \sum_{m=0}^{M-1} c_{1m}\, J^2\psi_{1m}(1) - y'_{r+1}(0). \qquad (30)$
By using equation (30) in (29g) and simplifying, we get
$y_{r+1}(t) = (t - 1)\, y'_{r+1}(0) + y_{r+1}(1) + \sum_{m=0}^{M-1} c_{1m}\left(J^2\psi_{1m}(t) - J^2\psi_{1m}(1)\right). \qquad (31)$
Now, putting the values of $y'_{r+1}(0)$ and $y_{r+1}(1)$ in (29f) and (31), we get
$y'_{r+1}(t) = \alpha + \sum_{m=0}^{M-1} c_{1m}\, J\psi_{1m}(t), \qquad (32)$
$y_{r+1}(t) = (t - 1)\alpha + \beta + \sum_{m=0}^{M-1} c_{1m}\left(J^2\psi_{1m}(t) - J^2\psi_{1m}(1)\right). \qquad (33)$
Case (ii): In equation (7b) we have $y(0) = \alpha$, $y(1) = \beta$. So by linearization we have $y_{r+1}(0) = \alpha$, $y_{r+1}(1) = \beta$. Now put $t = 1$ in equation (29g) to get
$y_{r+1}(1) = \sum_{m=0}^{M-1} c_{1m}\, J^2\psi_{1m}(1) + y'_{r+1}(0) + y_{r+1}(0), \qquad (34)$
so
$y'_{r+1}(0) = y_{r+1}(1) - \sum_{m=0}^{M-1} c_{1m}\, J^2\psi_{1m}(1) - y_{r+1}(0). \qquad (35)$
By putting these values in equations (29f) and (29g) we get
$y'_{r+1}(t) = y_{r+1}(1) - y_{r+1}(0) + \sum_{m=0}^{M-1} c_{1m}\left(J\psi_{1m}(t) - J^2\psi_{1m}(1)\right)$
and
$y_{r+1}(t) = (1 - t)\, y_{r+1}(0) + t\, y_{r+1}(1) + \sum_{m=0}^{M-1} c_{1m}\left(J^2\psi_{1m}(t) - t\, J^2\psi_{1m}(1)\right).$
Now we put in the values of $y_{r+1}(0)$ and $y_{r+1}(1)$ to get
$y'_{r+1}(t) = (\beta - \alpha) + \sum_{m=0}^{M-1} c_{1m}\left(J\psi_{1m}(t) - J^2\psi_{1m}(1)\right), \qquad (36)$
$y_{r+1}(t) = (1 - t)\alpha + t\beta + \sum_{m=0}^{M-1} c_{1m}\left(J^2\psi_{1m}(t) - t\, J^2\psi_{1m}(1)\right). \qquad (37)$
Case (iii): In equation (7c) we have $y'(0) = \alpha$, $a\,y(1) + b\,y'(1) = \beta$. So by linearization we have $y'_{r+1}(0) = \alpha$, $a\, y_{r+1}(1) + b\, y'_{r+1}(1) = \beta$. Now put $t = 1$ in equations (29f) and (29g) to get
$y'_{r+1}(1) = \sum_{m=0}^{M-1} c_{1m}\, J\psi_{1m}(1) + y'_{r+1}(0), \qquad (38)$
$y_{r+1}(1) = \sum_{m=0}^{M-1} c_{1m}\, J^2\psi_{1m}(1) + y'_{r+1}(0) + y_{r+1}(0). \qquad (39)$
Putting these values in $a\, y_{r+1}(1) + b\, y'_{r+1}(1) = \beta$ and solving for $y_{r+1}(0)$, we have
$y_{r+1}(0) = \frac{1}{a}\left[\beta - a\, y'_{r+1}(0) - a \sum_{m=0}^{M-1} c_{1m}\, J^2\psi_{1m}(1) - b\left(\sum_{m=0}^{M-1} c_{1m}\, J\psi_{1m}(1) + y'_{r+1}(0)\right)\right].$
Hence from equation (29g) we have
$y_{r+1}(t) = \sum_{m=0}^{M-1} c_{1m}\, J^2\psi_{1m}(t) + t\, y'_{r+1}(0) + \frac{1}{a}\left[\beta - a\, y'_{r+1}(0) - a \sum_{m=0}^{M-1} c_{1m}\, J^2\psi_{1m}(1) - b\left(\sum_{m=0}^{M-1} c_{1m}\, J\psi_{1m}(1) + y'_{r+1}(0)\right)\right]. \qquad (40)$
Now, putting the value $y'_{r+1}(0) = \alpha$ in (29f) and (40), we get
$y'_{r+1}(t) = \alpha + \sum_{m=0}^{M-1} c_{1m}\, J\psi_{1m}(t), \qquad (41)$
$y_{r+1}(t) = \frac{\beta}{a} + \left(t - 1 - \frac{b}{a}\right)\alpha + \sum_{m=0}^{M-1} c_{1m}\left[J^2\psi_{1m}(t) - J^2\psi_{1m}(1) - \frac{b}{a}\, J\psi_{1m}(1)\right]. \qquad (42)$
Finally we put the values of $y''_{r+1}$, $y'_{r+1}$ and $y_{r+1}$ for each of these cases in the linearized differential equation (29a). We then discretize the final equation with the collocation method and solve the resulting system, assuming an initial guess $y_0(t)$. We thus obtain the value of the solution $y(t)$ of the nonlinear SBVP at the different collocation points.
Hermite Wavelet Newton Approach (HeWNA)
In this approach we use the method of collocation for discretization and the Hermite wavelet for approximation of the solutions; finally the Newton-Raphson method is used to solve the resulting nonlinear system of equations. We consider the differential equation (6) with boundary conditions (7a), (7b) and (7c). Now we assume
$y''(t) = \sum_{m=0}^{M-1} c_{1m}\, \psi_{1m}(t). \qquad (43)$
Integrating twice we get the following two equations:
$y'(t) = \sum_{m=0}^{M-1} c_{1m}\, J\psi_{1m}(t) + y'(0), \qquad (44)$
$y(t) = \sum_{m=0}^{M-1} c_{1m}\, J^2\psi_{1m}(t) + t\, y'(0) + y(0). \qquad (45)$
Treatment of the Boundary Value Problem
Based on the boundary conditions we divide the treatment into different cases.
Case (i): In equation (7a) we have $y'(0) = \alpha$, $y(1) = \beta$. Now put $t = 1$ in (45) to get
$y(1) = \sum_{m=0}^{M-1} c_{1m}\, J^2\psi_{1m}(1) + y'(0) + y(0), \qquad (46)$
so
$y(0) = y(1) - \sum_{m=0}^{M-1} c_{1m}\, J^2\psi_{1m}(1) - y'(0). \qquad (47)$
By using equation (47) in (45) we get
$y(t) = \sum_{m=0}^{M-1} c_{1m}\, J^2\psi_{1m}(t) + (t - 1)\, y'(0) + y(1) - \sum_{m=0}^{M-1} c_{1m}\, J^2\psi_{1m}(1).$
Hence
$y(t) = (t - 1)\, y'(0) + y(1) + \sum_{m=0}^{M-1} c_{1m}\left(J^2\psi_{1m}(t) - J^2\psi_{1m}(1)\right). \qquad (48)$
Now, putting the values of $y'(0)$ and $y(1)$ in (44) and (48), we get
$y'(t) = \alpha + \sum_{m=0}^{M-1} c_{1m}\, J\psi_{1m}(t), \qquad (49)$
$y(t) = (t - 1)\alpha + \beta + \sum_{m=0}^{M-1} c_{1m}\left(J^2\psi_{1m}(t) - J^2\psi_{1m}(1)\right). \qquad (50)$
Case (ii): In equation (7b) we have $y(0) = \alpha$, $y(1) = \beta$. Now put $t = 1$ in equation (45) and solve for $y'(0)$ to get
$y'(0) = y(1) - \sum_{m=0}^{M-1} c_{1m}\, J^2\psi_{1m}(1) - y(0). \qquad (51)$
Now, using these values of $y(0)$ and $y(1)$ in (44) and (45) and solving, we get
$y'(t) = (\beta - \alpha) + \sum_{m=0}^{M-1} c_{1m}\left(J\psi_{1m}(t) - J^2\psi_{1m}(1)\right), \qquad (52)$
$y(t) = (1 - t)\alpha + t\beta + \sum_{m=0}^{M-1} c_{1m}\left(J^2\psi_{1m}(t) - t\, J^2\psi_{1m}(1)\right). \qquad (53)$
Case (iii): In equation (7c) we have $y'(0) = \alpha$, $a\,y(1) + b\,y'(1) = \beta$. Now put $t = 1$ in equations (44) and (45) to get
$y'(1) = \sum_{m=0}^{M-1} c_{1m}\, J\psi_{1m}(1) + y'(0), \qquad (54)$
$y(1) = \sum_{m=0}^{M-1} c_{1m}\, J^2\psi_{1m}(1) + y'(0) + y(0). \qquad (55)$
Putting these values in $a\,y(1) + b\,y'(1) = \beta$ and solving, we get the value of $y(0)$; putting $y(0)$ and $y'(0)$ in (45), we have
$y(t) = \sum_{m=0}^{M-1} c_{1m}\, J^2\psi_{1m}(t) + t\, y'(0) + \frac{1}{a}\left[\beta - a\, y'(0) - a \sum_{m=0}^{M-1} c_{1m}\, J^2\psi_{1m}(1) - b\left(\sum_{m=0}^{M-1} c_{1m}\, J\psi_{1m}(1) + y'(0)\right)\right]. \qquad (56)$
Now, by putting $y'(0) = \alpha$ in (44) and (56), we have
$y'(t) = \alpha + \sum_{m=0}^{M-1} c_{1m}\, J\psi_{1m}(t), \qquad (57)$
$y(t) = \frac{\beta}{a} + \left(t - 1 - \frac{b}{a}\right)\alpha + \sum_{m=0}^{M-1} c_{1m}\left[J^2\psi_{1m}(t) - J^2\psi_{1m}(1) - \frac{b}{a}\, J\psi_{1m}(1)\right]. \qquad (58)$
Now we put the values of $y(t)$, $y'(t)$ and $y''(t)$ into (6), discretize the resulting equation with the collocation method, and solve the resulting nonlinear system with the Newton-Raphson method for $c_{1m}$, $m = 0, 1, \ldots, M - 1$. By substituting the values of $c_{1m}$, $m = 0, 1, \ldots, M - 1$, we get the value of the solution $y(t)$ of the nonlinear SBVP at the different collocation points.
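To make the HeWNA pipeline concrete, the following sketch applies it to the thermal-explosion problem (92) of Example 3 (Case (i) with α = β = 0, k = 1). The wavelet normalization is absorbed into the coefficients, the integrals Jψ and J²ψ are obtained by cumulative quadrature, and scipy's fsolve stands in for the Newton-Raphson iteration; all names and grid sizes are our assumptions.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid
from scipy.optimize import fsolve
from scipy.special import eval_hermite

M = 8                                                  # number of Hermite modes
tt = np.linspace(0.0, 1.0, 4001)                       # fine grid for quadrature
# psi_{1,m}(t) ~ H_m(2t - 1); the constant normalization is absorbed by c_{1m}.
psi = np.array([eval_hermite(m, 2*tt - 1) for m in range(M)])
J1 = np.array([cumulative_trapezoid(p, tt, initial=0.0) for p in psi])   # J psi
J2 = np.array([cumulative_trapezoid(j, tt, initial=0.0) for j in J1])    # J^2 psi

tc = (np.arange(M) + 0.5) / M                          # collocation points, eq. (14)
P   = np.array([np.interp(tc, tt, v) for v in psi])    # psi(t_l)
JP  = np.array([np.interp(tc, tt, v) for v in J1])     # J psi(t_l)
J2P = np.array([np.interp(tc, tt, v) for v in J2])     # J^2 psi(t_l)
J2P1 = J2[:, -1]                                       # J^2 psi(1)

def residual(c):
    # Case (i) representation (49)-(50) with alpha = beta = 0:
    y   = c @ (J2P - J2P1[:, None])                    # y(t_l)
    dy  = c @ JP                                       # y'(t_l)
    d2y = c @ P                                        # y''(t_l)
    return d2y + dy / tc + np.exp(y)                   # residual of (92), k_g = 1

c = fsolve(residual, np.zeros(M))                      # Newton-type solve
y0 = float(c @ (J2[:, 0] - J2P1))                      # y(0)
print(y0, 2*np.log(4 - 2*np.sqrt(2)))                  # ~0.31669 (cf. Table 9)
```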
Haar Wavelet Quasilinearization Approach (HWQA)
As explained for HeWQA, we follow the same procedure in the HWQA method, using Haar wavelets in place of Hermite wavelets. We consider the differential equation (6) with boundary conditions (7a), (7b) or (7c).
Applying the method of quasilinearization, as we did in HeWQA, we have equation (29a) with linearized boundary conditions (29b), (29c) and (29d). Let us assume
$y''_{r+1}(t) = \sum_{i=1}^{2M} a_i\, h_i(t), \qquad (59)$
where the $a_i$ are the wavelet coefficients. Then integrating twice we get the following two equations:
$y'_{r+1}(t) = \sum_{i=1}^{2M} a_i\, p_{1,i}(t) + y'_{r+1}(0), \qquad (60)$
$y_{r+1}(t) = \sum_{i=1}^{2M} a_i\, p_{2,i}(t) + t\, y'_{r+1}(0) + y_{r+1}(0). \qquad (61)$
Treatment of the Boundary Value Problem
Expressions for the different boundary conditions in the HWQA method are given below.
Case (i): In equation (7a) we have $y'(0) = \alpha$, $y(1) = \beta$. Following the same procedure as for HeWQA, we have
$y'_{r+1}(t) = \alpha + \sum_{i=1}^{2M} a_i\, p_{1,i}(t), \qquad (62)$
$y_{r+1}(t) = (t - 1)\alpha + \beta + \sum_{i=1}^{2M} a_i\left(p_{2,i}(t) - p_{2,i}(1)\right). \qquad (63)$
Case (ii): In equation (7b) we have $y(0) = \alpha$, $y(1) = \beta$. So we have
$y'_{r+1}(t) = (\beta - \alpha) + \sum_{i=1}^{2M} a_i\left(p_{1,i}(t) - p_{2,i}(1)\right), \qquad (64)$
$y_{r+1}(t) = (1 - t)\alpha + t\beta + \sum_{i=1}^{2M} a_i\left(p_{2,i}(t) - t\, p_{2,i}(1)\right). \qquad (65)$
Case (iii): In equation (7c) we have $y'(0) = \alpha$, $a\,y(1) + b\,y'(1) = \beta$. We finally have
$y'_{r+1}(t) = \alpha + \sum_{i=1}^{2M} a_i\, p_{1,i}(t), \qquad (66)$
$y_{r+1}(t) = \frac{\beta}{a} + \left(t - 1 - \frac{b}{a}\right)\alpha + \sum_{i=1}^{2M} a_i\left[p_{2,i}(t) - p_{2,i}(1) - \frac{b}{a}\, p_{1,i}(1)\right]. \qquad (67)$
Haar Wavelet Newton Approach (HWNA)
Here we use the same procedure as that of HeWNA, with Haar wavelets in place of Hermite wavelets.
Treatment of the Boundary Value Problem
Expressions for the different boundary conditions in the HWNA method are given below.
Case (i): In equation (7a) we have $y'(0) = \alpha$, $y(1) = \beta$. Following the same procedure, the final expressions take the form
$y'(t) = \alpha + \sum_{i=1}^{2M} a_i\, p_{1,i}(t), \qquad (68)$
$y(t) = (t - 1)\alpha + \beta + \sum_{i=1}^{2M} a_i\left(p_{2,i}(t) - p_{2,i}(1)\right). \qquad (69)$
Case (ii): In equation (7b) we have $y(0) = \alpha$, $y(1) = \beta$. The final expressions are of the form
$y'(t) = (\beta - \alpha) + \sum_{i=1}^{2M} a_i\left(p_{1,i}(t) - p_{2,i}(1)\right), \qquad (70)$
$y(t) = (1 - t)\alpha + t\beta + \sum_{i=1}^{2M} a_i\left(p_{2,i}(t) - t\, p_{2,i}(1)\right). \qquad (71)$
Case (iii): In equation (7c) we have $y'(0) = \alpha$, $a\,y(1) + b\,y'(1) = \beta$, so we have the following expressions:
$y'(t) = \alpha + \sum_{i=1}^{2M} a_i\, p_{1,i}(t), \qquad (72)$
$y(t) = \frac{\beta}{a} + \left(t - 1 - \frac{b}{a}\right)\alpha + \sum_{i=1}^{2M} a_i\left[p_{2,i}(t) - p_{2,i}(1) - \frac{b}{a}\, p_{1,i}(1)\right]. \qquad (73)$
Convergence
Let us consider a second order ordinary differential equation in the general form $G(t, u, u', u'') = 0$.
We consider the HeWNA method. Let
$f(t) = u''(t) = \sum_{n=1}^{\infty} \sum_{m=0}^{\infty} c_{nm}\, \psi_{nm}(t). \qquad (74)$
Integrating the above equation two times, we have
$u(t) = \sum_{n=1}^{\infty} \sum_{m=0}^{\infty} c_{nm}\, J^2\psi_{nm}(t) + BT(t), \qquad (75)$
where $BT(t)$ stands for the boundary term.

Theorem 4.1. Let us assume that $f(t) = \frac{d^2u}{dt^2} \in L^2(\mathbb{R})$ is a continuous function defined on $[0, 1]$ and that $f(t)$ is bounded, i.e., there exists $\eta$ such that, for all $t \in [0, 1]$,
$\left|\frac{d^2u}{dt^2}\right| \leq \eta. \qquad (76)$
Then the method based on the Hermite Wavelet Newton Approach (HeWNA) converges.
Proof. In (75), by truncating the expansion, we have
$u_{k,M}(t) = \sum_{n=1}^{2^{k-1}} \sum_{m=0}^{M-1} c_{nm}\, J^2\psi_{nm}(t) + BT(t). \qquad (77)$
So the error $E_{k,M}$ can be expressed as
$\|E_{k,M}\|_2 = \|u(t) - u_{k,M}(t)\|_2 = \left\| \sum_{n=2^k}^{\infty} \sum_{m=M}^{\infty} c_{nm}\, J^2\psi_{nm}(t) \right\|_2. \qquad (78)$
Expanding the $L^2$ norm, we have
$\|E_{k,M}\|_2^2 = \int_0^1 \left( \sum_{n=2^k}^{\infty} \sum_{m=M}^{\infty} c_{nm}\, J^2\psi_{nm}(t) \right)^2 dt, \qquad (79)$
$\|E_{k,M}\|_2^2 = \sum_{n=2^k}^{\infty} \sum_{m=M}^{\infty} \sum_{s=2^k}^{\infty} \sum_{r=M}^{\infty} \int_0^1 c_{nm}\, c_{sr}\, J^2\psi_{nm}(t)\, J^2\psi_{sr}(t)\, dt, \qquad (80)$
$\|E_{k,M}\|_2^2 \leq \sum_{n=2^k}^{\infty} \sum_{m=M}^{\infty} \sum_{s=2^k}^{\infty} \sum_{r=M}^{\infty} \int_0^1 |c_{nm}|\, |c_{sr}|\, |J^2\psi_{nm}(t)|\, |J^2\psi_{sr}(t)|\, dt. \qquad (81)$
Now
$|J^2\psi_{nm}(t)| \leq \int_0^t \int_0^t |\psi_{nm}(\tau)|\, d\tau\, dt \leq \int_0^t \int_0^1 |\psi_{nm}(\tau)|\, d\tau\, dt,$
since $t \in [0, 1]$. Now by (8) we have
$|J^2\psi_{nm}(t)| \leq 2^{k/2} \frac{1}{\sqrt{\hat{n}!\, 2^{\hat{n}}\sqrt{\pi}}} \int_0^t \int_{\frac{\hat{n}-1}{2^k}}^{\frac{\hat{n}+1}{2^k}} |H_m(2^k \tau - \hat{n})|\, d\tau\, dt.$
By changing the variable $2^k \tau - \hat{n} = y$, we get
$|J^2\psi_{nm}(t)| \leq 2^{-k/2} \frac{1}{\sqrt{\hat{n}!\, 2^{\hat{n}}\sqrt{\pi}}} \int_0^t \int_{-1}^{1} |H_m(y)|\, dy\, dt \leq 2^{-k/2} \frac{1}{\sqrt{\hat{n}!\, 2^{\hat{n}}\sqrt{\pi}}\,(m + 1)} \int_0^t \int_{-1}^{1} |H'_{m+1}(y)|\, dy\, dt,$
where we used $H_m(y) = H'_{m+1}(y)/(2(m+1))$ and absorbed the constant factor. By putting $\int_{-1}^{1} |H'_{m+1}(y)|\, dy = h$, we get
$|J^2\psi_{nm}(t)| \leq 2^{-k/2} \frac{h}{\sqrt{\hat{n}!\, 2^{\hat{n}}\sqrt{\pi}}\,(m + 1)} \int_0^t dt.$
Hence
$|J^2\psi_{nm}(t)| \leq 2^{-k/2} \frac{h}{\sqrt{\hat{n}!\, 2^{\hat{n}}\sqrt{\pi}}\,(m + 1)}, \qquad (82)$
since $t \in [0, 1]$. Now for $|c_{nm}|$ we have
$c_{nm} = \int_0^1 f(t)\, \psi_{nm}(t)\, dt, \qquad (83)$
so that, by (76) and the same change of variable as above,
$|c_{nm}| \leq \eta \int_0^1 |\psi_{nm}(t)|\, dt \leq \eta\, 2^{-k/2} \frac{h}{\sqrt{\hat{n}!\, 2^{\hat{n}}\sqrt{\pi}}\,(m + 1)}. \qquad (84)$
Substituting these bounds in (81) gives
$\|E_{k,M}\|_2^2 \leq 2^{-2k}\, \eta^2 h^4 \sum_{n=2^k}^{\infty} \sum_{m=M}^{\infty} \sum_{s=2^k}^{\infty} \sum_{r=M}^{\infty} \int_0^1 \frac{1}{\left(\sqrt{\hat{n}!\, 2^{\hat{n}}\sqrt{\pi}}\right)^2 (m + 1)^2} \cdot \frac{1}{\left(\sqrt{\hat{s}!\, 2^{\hat{s}}\sqrt{\pi}}\right)^2 (r + 1)^2}\, dt, \qquad (86)$
$\|E_{k,M}\|_2^2 \leq 2^{-2k}\, \eta^2 h^4 \left(\sum_{n=2^k}^{\infty} \frac{1}{\hat{n}!\, 2^{\hat{n}}\sqrt{\pi}}\right) \left(\sum_{s=2^k}^{\infty} \frac{1}{\hat{s}!\, 2^{\hat{s}}\sqrt{\pi}}\right) \left(\sum_{m=M}^{\infty} \frac{1}{(m + 1)^2}\right) \left(\sum_{r=M}^{\infty} \frac{1}{(r + 1)^2}\right). \qquad (87)$
Here all four series converge, and hence $\|E_{k,M}\| \to 0$ as $k, M \to \infty$.

Remark 4.1. The above theorem can easily be extended to the method HeWQA.

Theorem 4.2. Let us assume that $f(t) = \frac{d^2u}{dt^2} \in L^2(\mathbb{R})$ is a continuous function on $[0, 1]$ and that its first derivative is bounded, i.e., there exists $\eta$ such that $\left|\frac{df}{dt}\right| \leq \eta$ for all $t \in [0, 1]$. Then the methods based on HWQA and HWNA converge.

Proof. The proof is similar to that of the previous theorem.
Numerical Illustrations
In this section we apply HeWQA, HeWNA, HWQA and HWNA to the proposed model which occurs in exothermic reactions (5). We also solve four other examples from real life, comparing the solutions among these four methods and with exact solutions whenever available.
To examine the accuracy of the methods we define the maximum absolute error $L_{\infty}$ as
$L_{\infty} = \max_{t \in [0,1]} |y(t) - y_w(t)|, \qquad (88)$
where $y(t)$ is the exact solution and $y_w(t)$ is the wavelet solution, and the $L_2$-norm error as
$L_2 = \left( \sum_{j} |y(x_j) - y_w(x_j)|^2 \right)^{1/2}, \qquad (89)$
where $y(x_j)$ is the exact solution and $y_w(x_j)$ the wavelet solution at the point $x_j$.
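In code, these two error measures can be computed from the solution values at the collocation points, e.g.:

```python
import numpy as np

def error_norms(y_exact, y_wavelet):
    d = np.abs(np.asarray(y_exact) - np.asarray(y_wavelet))
    return d.max(), np.sqrt(np.sum(d**2))   # L_infinity of (88), discrete L_2 of (89)
```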
Example 1 (Exothermic Reaction with Modified Arrhenius Law)
Consider the nonlinear SBVP (5) with the given boundary conditions,
$Ly + B \exp\left(\frac{-A}{(c^n + y^n)^{1/n}}\right) = 0, \quad y'(0) = 0, \quad y(1) = 0. \qquad (90)$
We take some particular cases with $A = 1$, $B = 1$, $c = 1$. Comparison graphs taking the initial vector $[0, 0, \ldots, 0]$ and $J = 1$, $J = 2$ with $n = 1, k_g = 1$; $n = 1, k_g = 2$; $n = 2, k_g = 1$; $n = 2, k_g = 2$; $n = 3, k_g = 1$ and $n = 3, k_g = 2$ are plotted in Figures 1, 2, 3, 4, 5 and 6, respectively. The solutions are tabulated in Tables 1, 2, 3, 4, 5 and 6, respectively. The example defined by equation (90) is new and does not exist in the literature, so we are not in a position to compare the results with other work. We have considered $n = 1, 2, 3$ and $k_g = 1, 2$; the tables show the behaviour of the solution for $J = 1, 2$. HWNA, HeWNA, HWQA and HeWQA all give numerics which are very well comparable, which shows that our proposed techniques are working well. We also observed that small changes in the initial vector, for example taking $[0.1, 0.1, \ldots, 0.1]$ or $[0.2, 0.2, \ldots, 0.2]$, do not significantly change the solution in any case.
Example 2 (Stellar Structure)
Consider the non-linear SBVP:
$Ly(t) + y^5(t) = 0, \quad y'(0) = 0, \quad y(1) = \sqrt{\frac{3}{4}}, \quad k_g = 2. \qquad (91)$
Chandrasekhar ([5], p. 88) derived the above two-point nonlinear SBVP, which arises in the study of stellar structure. Its exact solution is $y(t) = \sqrt{\frac{3}{3 + t^2}}$.
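The exact solution can be verified symbolically; a short check (using sympy, our choice of tool) confirms that it satisfies (91):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
y = sp.sqrt(3 / (3 + t**2))
residual = sp.diff(y, t, 2) + (2/t)*sp.diff(y, t) + y**5   # Ly + y^5 with k_g = 2
print(sp.simplify(residual), y.subs(t, 1))                 # -> 0 and sqrt(3)/2
```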
Comparison graphs taking the initial vector $[\sqrt{3}/2, \sqrt{3}/2, \ldots, \sqrt{3}/2]$ and $J = 1$, $J = 2$ are plotted in Figure 7; the solution and error are tabulated in Table 7 and Table 8. In this test case, since the exact solution of the SBVP governed by (91) exists, we have compared our solutions with the exact solution in Table 7 and Figure 7; the numerics again prove that the method gives results with the best accuracy for $J = 1$ and $J = 2$. We also observed that small changes in the initial vector, for example taking $[0.8, 0.8, \ldots, 0.8]$ or $[0.7, 0.7, \ldots, 0.7]$, do not significantly change the solution.
Example 3 (Thermal Explosion)
Consider the nonlinear SBVP:
$Ly(t) + e^{y(t)} = 0, \quad y'(0) = 0, \quad y(1) = 0, \quad k_g = 1. \qquad (92)$
The above nonlinear SBVP was derived by Chambre [4]. This equation arises in the thermal explosion in a cylindrical vessel. The exact solution of this equation is
$y(x) = 2 \ln \left( \frac{4 - 2\sqrt{2}}{(3 - 2\sqrt{2})x^2 + 1} \right).$
Comparison graphs taking the initial vector $[0, 0, \ldots, 0]$ and $J = 1$, $J = 2$ are plotted in Figure 8; the solution and error are tabulated in Table 9 and Table 10. This is a test case derived by Chambre [4] long ago, and again the exact solution is available. Table 9 and Figure 8 show that the numerics are in good agreement with the exact solution for $J = 1$ and $J = 2$.
We also observed that small changes in the initial vector, for example taking $[0.1, 0.1, \ldots, 0.1]$ or $[0.2, 0.2, \ldots, 0.2]$, do not significantly change the solution.
Example 4 (Rotationally Symmetric Shallow Membrane Caps)
Consider the nonlinear SBVP:
$Ly(t) + \frac{1}{8y^2(t)} - \frac{1}{2} = 0, \quad y'(0) = 0, \quad y(1) = 1, \quad k_g = 1. \qquad (93)$
The above nonlinear SBVP is studied in the papers [7,3]. The exact solution of this problem is not known. Comparison graphs taking the initial vector $[1, 1, \ldots, 1]$ and $J = 1$, $J = 2$ are plotted in Figure 9; the solution is tabulated in Table 11. In this real-life example the exact solution is again not known, so no comparison is made with an exact solution; Table 11 and Figure 9 show that the computed results are comparable for $J = 1, 2$.
We also observed that small changes in the initial vector, for example taking $[0.9, 0.9, \ldots, 0.9]$ or $[0.8, 0.8, \ldots, 0.8]$, do not significantly change the solution.
Example 5 (Thermal Distribution in Human Head)
Consider the nonlinear SBVP:
$Ly(t) + e^{-y(t)} = 0, \quad y'(0) = 0, \quad 2y(1) + y'(1) = 0, \quad k_g = 2. \qquad (94)$
This SBVP was derived by Duggan and Goodman [8]. The exact solution of this problem is not known to the best of our knowledge. Comparison graphs taking the initial vector $[0, 0, \ldots, 0]$ and $J = 1$, $J = 2$ are plotted in Figure 10; the solution is tabulated in Table 12.
Figure 10: Comparison plots of solution methods for $J = 1, 2$ for Example 5.5.
In the absence of an exact solution, no comparison has been made with an exact solution. However, the comparison of all four methods on the given problem due to Duggan and Goodman [8], in Table 12 and Figure 10, shows the accuracy of the present methods.
We also observed that small changes in the initial vector, for example taking $[0.1, 0.1, \ldots, 0.1]$ or $[0.2, 0.2, \ldots, 0.2]$, do not significantly change the solution in any case.
Conclusions
In this research article, we have proposed a new model governing exothermic reactions and four different numerical methods based on wavelets, namely HWQA, HWNA, HeWQA and HeWNA, for solving nonlinear SBVPs arising in different branches of science and engineering (cf. [7,3,8,4,5]). We have applied these methods to five real-life examples [see equations (90), (91), (92), (93) and (94)]. The singularity of the differential equations is handled very well by these four proposed wavelet-based methods. The difficulty arising from the nonlinearity of the differential equations is dealt with by quasilinearization in the HWQA and HeWQA methods; in the other two proposed methods, HWNA and HeWNA, we solve the resulting nonlinear system with the help of the Newton-Raphson method. The boundary conditions are also handled well by the proposed methods. The main advantage of the proposed methods is that solutions with high accuracy are obtained using a few iterations. We also observe that a small perturbation of the initial vector does not significantly change the solution, which shows that our methods are numerically stable.
Our convergence analysis shows that $\|E_{k,M}\|$ tends to zero as $M$ tends to infinity, which shows that the accuracy of the solution increases as $J$ increases.
The computational work illustrates the validity and accuracy of the procedure. Our computations are based on a higher resolution, and the codes developed can easily be used for even further resolutions.
Figure 1: Comparison plots of solution methods for J = 1, 2 for Example 5.1 with n = 1, k_g = 1.
Figure 2: Comparison plots of solution methods for J = 1, 2 for Example 5.1 with n = 1, k_g = 2.
Figure 3: Comparison plots of solution methods for Example 5.1 with n = 2, k_g = 1.
Figure 4: Comparison plots of solution methods for J = 1, 2 for Example 5.1 with n = 2, k_g = 2.
Figure 5: Comparison plots of solution methods for J = 1, 2 for Example 5.1 with n = 3, k_g = 1.
Figure 6: Comparison plots of solution methods for J = 1, 2 for Example 5.1 with n = 3, k_g = 2.
Table 2: Comparison of HWNA, HeWNA, HWQA, HeWQA method solutions for Example 5.1 with n = 1, k_g = 2.
Table 3: Comparison of HWNA, HeWNA, HWQA, HeWQA method solutions for Example 5.1 with n = 2, k_g = 1.
Table 4: Comparison of HWNA, HeWNA, HWQA, HeWQA method solutions for Example 5.1 with n = 2, k_g = 2.
Table 5: Comparison of HWNA, HeWNA, HWQA, HeWQA method solutions for Example 5.1 with n = 3, k_g = 1.
Figure 7: Comparison plots and error plots of solution methods for J = 1, 2 for Example 5.2.
Figure 8: Comparison plots and error plots of solution methods for J = 1, 2 for Example 5.3.
Figure 9: Comparison plots of solution methods for J = 1, 2 for example 5.4.
Table 1: Comparison of HWNA, HeWNA, HWQA, HeWQA method solutions for example 5.1 with n = 1, k_g = 1, taking J = 2:

Grid Points  HWNA [29]    HeWNA        HWQA [29]    HeWQA
0            0.098471606  0.098471868  0.099733232  0.098721649
1/16         0.098078784  0.09807895   0.099334324  0.098328004
3/16         0.094937743  0.094937908  0.096144758  0.095180981
5/16         0.08866797   0.08866813   0.089779277  0.088898418
7/16         0.079294153  0.079294305  0.080265493  0.079503596
9/16         0.066853514  0.066853646  0.067645598  0.06703223
11/16        0.051396022  0.051396117  0.051977236  0.051533416
13/16        0.032984684  0.032984721  0.033334595  0.033070901
15/16        0.011695904  0.011695851  0.011809628  0.01172465
Table 6: Comparison of HWNA, HeWNA, HWQA, HeWQA method solutions for example 5.1 with n = 3, k_g = 2, taking J = 2:

Grid Points  HWNA [29]    HeWNA        HWQA [29]    HeWQA
0            0.061315351  0.061315829  0.061318845  0.061321705
1/16         0.061075828  0.061075814  0.061079322  0.061083071
3/16         0.059159646  0.059159634  0.059163135  0.059166226
5/16         0.055327307  0.055327295  0.055330739  0.055333256
7/16         0.049578851  0.04957884   0.049582101  0.049584212
9/16         0.041914328  0.041914319  0.041917195  0.041918897
11/16        0.032333785  0.032333778  0.032336026  0.032337266
13/16        0.020837254  0.02083725   0.020838652  0.020839592
15/16        0.007424747  0.007424746  0.007425199  0.007424945
Table 7: Comparison of HWNA, HeWNA, HWQA, HeWQA method solutions with the analytical solution for example 5.2, taking J = 2:

Grid Points  HWNA [29]    HeWNA        HWQA [29]    HeWQA        Exact
0            1.00023666   0.999999992  1.00023666   0.999999992  1
1/16         0.999586961  0.99934958   0.999586961  0.99934958   0.999349593
3/16         0.994419294  0.994191616  0.994419294  0.994191616  0.994191626
5/16         0.984319576  0.984110835  0.984319576  0.984110835  0.984110842
7/16         0.969730094  0.96954859   0.969730094  0.96954859   0.969548596
9/16         0.9512486    0.951101273  0.9512486    0.951101273  0.951101277
11/16        0.92956584   0.92945791   0.92956584   0.92945791   0.929457914
13/16        0.905403371  0.905338132  0.905403371  0.905338132  0.905338136
15/16        0.879460746  0.879439538  0.879460746  0.879439538  0.879439536
Table 8: Comparison of the errors of the HWNA, HeWNA, HWQA, HeWQA methods for example 5.2, taking J = 2:

Error  HWNA [29]    HeWNA         HWQA [29]    HeWQA
L_inf  0.000237368  2.49669e-9    0.000237368  2.49669e-9
L_2    0.000471959  1.97638e-8    0.000471959  1.97638e-8
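For reference, error norms of this kind can be recomputed from the tabulated values of Table 7, as in the following Python sketch; because the tabulated entries are rounded to nine decimal places, the recomputed norms agree with Table 8 only up to that rounding.

```python
# A minimal sketch of recomputing L_inf and L_2 errors from tabulated values;
# the arrays reuse the HeWNA and Exact columns of Table 7 (grid points
# 0, 1/16, 3/16, ..., 15/16). Rounding of the table entries limits agreement
# with the published norms to roughly the last printed digit.
import numpy as np

hewna = np.array([0.999999992, 0.99934958, 0.994191616, 0.984110835,
                  0.96954859, 0.951101273, 0.92945791, 0.905338132,
                  0.879439538])
exact = np.array([1.0, 0.999349593, 0.994191626, 0.984110842,
                  0.969548596, 0.951101277, 0.929457914, 0.905338136,
                  0.879439536])

err = np.abs(hewna - exact)
print("L_inf error:", err.max())             # max_j |y(x_j) - y_w(x_j)|
print("L_2 error:  ", np.sqrt((err ** 2).sum()))
```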
Table 9: Comparison of HWNA, HeWNA, HWQA, HeWQA method solutions with the analytical solution for example 5.3, taking J = 2:

Grid Points  HWNA [29]    HeWNA        HWQA [29]    HeWQA        Exact
0            0.316727578  0.316694368  0.316727578  0.316694368  0.316694368
1/16         0.315388914  0.315354403  0.315388914  0.315354403  0.315354404
3/16         0.304700946  0.304666887  0.304700946  0.304666887  0.304666888
5/16         0.283494667  0.283461679  0.283494667  0.283461679  0.283461679
7/16         0.252100547  0.252069555  0.252100547  0.252069555  0.252069555
9/16         0.210993138  0.210965461  0.210993138  0.210965461  0.210965462
11/16        0.160768168  0.16074555   0.160768168  0.16074555   0.16074555
13/16        0.102115684  0.102100258  0.102115684  0.102100258  0.102100258
15/16        0.035791587  0.035785793  0.035791587  0.035785793  0.035785793
Table 10: Comparison of the errors of the HWNA, HeWNA, HWQA, HeWQA methods for example 5.3, taking J = 2.
We also observed that a small change in the initial vector, for example taking [0.1, 0.1, . . . , 0.1] or [0.2, 0.2, . . . , 0.2], does not significantly change the solution.
Table 11: Comparison of HWNA, HeWNA, HWQA, HeWQA method solutions for example 5.4, taking J = 2:

Grid Points  HWNA [29]    HeWNA        HWQA [29]    HeWQA
0            0.954137376  0.954135008  0.954137376  0.954135008
1/16         0.954314498  0.954311604  0.954314498  0.954311604
3/16         0.95573187   0.95572956   0.95573187   0.95572956
5/16         0.958569785  0.958567713  0.958569785  0.958567713
7/16         0.962834546  0.962832683  0.962834546  0.962832683
9/16         0.968535496  0.968533886  0.968535496  0.968533886
11/16        0.975684891  0.975683641  0.975684891  0.975683641
13/16        0.984297738  0.984296771  0.984297738  0.984296771
15/16        0.994391588  0.994391728  0.994391588  0.994391728
Table 12: Comparison of HWNA, HeWNA, HWQA, HeWQA method solutions for example 5.5, taking J = 2:

Grid Points  HWNA [29]    HeWNA        HWQA [29]    HeWQA
0            0.269855704  0.269948774  0.272263769  0.272366612
1/16         0.269358573  0.269451863  0.27176762   0.271870738
3/16         0.265377954  0.265471233  0.267793921  0.267896983
5/16         0.257388082  0.257481347  0.259810468  0.259913411
7/16         0.245331028  0.245424295  0.247745058  0.247847809
9/16         0.229118226  0.229211536  0.231489202  0.231591678
11/16        0.208628362  0.2087218    0.210897975  0.211000089
13/16        0.183704413  0.183798121  0.18579005   0.18589165
15/16        0.154149664  0.154243862  0.155947881  0.156048741
1            0.13756259   0.137656718  0.139174003  0.139274111
References

[1] S. Saha Ray and A. K. Gupta. An investigation with Hermite wavelets for accurate solution of fractional Jaulent-Miodek equation associated with energy-dependent Schrödinger potential. Applied Mathematics and Computation, 270:458-471, 2015.
[2] P. Assari and M. Dehghan. Application of dual-Chebyshev wavelets for the numerical solution of boundary integral equations with logarithmic singular kernels. Engineering with Computers, 35(1):175-190, 2019.
[3] J. V. Baxley and S. B. Robinson. Nonlinear boundary value problems for shallow membrane caps, II. Journal of Computational and Applied Mathematics, 88:203-224, 1998.
[4] P. L. Chambre. On the solution of the Poisson-Boltzmann equation with application to the theory of thermal explosions. The Journal of Chemical Physics, 20(11):1795-1797, 1952.
[5] S. Chandrasekhar. Introduction to the Study of Stellar Structure. Dover Publications, 1967.
[6] C. F. Chen and C. H. Hsiao. Haar wavelet method for solving lumped and distributed-parameter systems. IEEE Proceedings Control Theory and Applications, 144:87-94, 1997.
[7] R. W. Dickey. Rotationally symmetric solutions for shallow membrane caps. Quarterly of Applied Mathematics, 47:571-581, 1989.
[8] R. C. Duggan and A. M. Goodman. Pointwise bounds for a nonlinear heat conduction model of the human head. Bulletin of Mathematical Biology, 48:229-236, 1986.
[9] G. Hariharan. An Efficient Wavelet-Based Spectral Method to Singular Boundary Value Problems, pages 63-91. Springer Singapore, 2019.
[10] H. Kaur, R. C. Mittal, and V. Mishra. Haar wavelet approximate solutions for the generalized Lane-Emden equations arising in astrophysics. Computer Physics Communications, 184:2169-2177, 2013.
[11] H. Khan, M. Arif, and S. T. Mohyud-Din. Numerical solution of fractional boundary value problems by using Chebyshev wavelet. Matrix Science Mathematic (MSMK), 3(1):13-16, 2019.
[12] U. Lepik. Haar wavelet method for solving higher order differential equations. International Journal of Mathematics and Computation, 1:84-94, 2008.
[13] S. G. Mallat. Multiresolution approximations and wavelet orthonormal bases of L^2(R). Transactions of the American Mathematical Society, 315:69-87, 1989.
[14] F. Mohammadi and M. M. Hosseini. A new Legendre wavelet operational matrix of derivative and its applications in solving the singular ordinary differential equations. Journal of the Franklin Institute, 348(8):1787-1796, 2011.
[15] K. Nakamura, T. Takayanagi, and S. Sato. A modified Arrhenius equation. Chemical Physics Letters, 160(3):295-298, 1989.
[16] A. Kazemi Nasab, A. Kılıçman, E. Babolian, and Z. Pashazadeh Atabakan. Wavelet analysis method for solving linear and nonlinear singular boundary value problems. Applied Mathematical Modelling, 37(8):5876-5886, 2013.
[17] R. K. Pandey and A. K. Verma. Existence-uniqueness results for a class of singular boundary value problems arising in physiology. Nonlinear Analysis: Real World Applications, 9(1):40-52, 2008.
[18] R. K. Pandey and A. K. Verma. Existence-uniqueness results for a class of singular boundary value problems - II. Journal of Mathematical Analysis and Applications, 338(2):1387-1396, 2008.
[19] R. K. Pandey and A. K. Verma. A note on existence-uniqueness results for a class of doubly singular boundary value problems. Nonlinear Analysis: Theory, Methods & Applications, 71(7):3477-3487, 2009.
[20] M. C. Pereyra and L. A. Ward. Harmonic Analysis: From Fourier to Wavelets. 2012.
[21] U. Saeed and M. ur Rehman. Hermite wavelet method for fractional delay differential equations. Journal of Difference Equations, 2014, 2014.
[22] S. C. Shiralashetti and S. Kumbinarasaiah. Hermite wavelets operational matrix of integration for the numerical solution of nonlinear singular initial value problems. Alexandria Engineering Journal, 2017.
[23] S. C. Shiralashetti and K. Srinivasa. Hermite wavelets method for the numerical solution of linear and nonlinear singular initial and boundary value problems. Computational Methods for Differential Equations, 7(2):177-198, 2019.
[24] M. Singh, A. K. Verma, and R. P. Agarwal. On an iterative method for a class of 2 point & 3 point nonlinear SBVPs. Journal of Applied Analysis and Computation, 9(4):1242-1260, 2019.
[25] M. Singh and A. K. Verma. An effective computational technique for a class of Lane-Emden equations. Journal of Mathematical Chemistry, 54(1):231-251, 2016.
[26] R. Singh, J. Shahni, H. Garg, and A. Garg. Haar wavelet collocation approach for Lane-Emden equations arising in mathematical physics and astrophysics. The European Physical Journal Plus, 2019.
[27] R. Singh, H. Garg, and V. Guleria. Haar wavelet collocation method for Lane-Emden equations with Dirichlet, Neumann and Neumann-Robin boundary conditions. Journal of Computational and Applied Mathematics, 346:150-161, 2019.
[28] A. K. Verma. The monotone iterative method and zeros of Bessel functions for nonlinear singular derivative dependent BVP in the presence of upper and lower solutions. Nonlinear Analysis: Theory, Methods & Applications, 74(14):4709-4717, 2011.
[29] A. K. Verma and D. Tiwari. Higher resolution methods based on quasilinearization and Haar wavelets on Lane-Emden equations. International Journal of Wavelets, Multiresolution and Information Processing, 17(3):1950005, 2019.
[30] A. K. Verma and S. Kayenat. On the convergence of Mickens' type nonstandard finite difference schemes on Lane-Emden type equations. Journal of Mathematical Chemistry, 56(6):1667-1706, 2018.
[31] F. Zhou and X. Xu. Numerical solutions for the linear and nonlinear singular boundary value problems using Laguerre wavelets. Advances in Difference Equations, 2016(1):17, 2016.
| [] |
[
"SPARSE DOMINATIONS AND WEIGHTED VARIATION INEQUALITIES FOR SINGULAR INTEGRALS AND COMMUTATORS",
"SPARSE DOMINATIONS AND WEIGHTED VARIATION INEQUALITIES FOR SINGULAR INTEGRALS AND COMMUTATORS"
] | [
"Yongming Wen ",
"Huoxiong Wu ",
"Qingying Xue "
] | [] | [] | This paper gives the pointwise sparse dominations for variation operators of singular integrals and commutators with kernels satisfying the L r -Hörmander conditions. As applications, we obtain the strong type quantitative weighted bounds for such variation operators as well as the weak-type quantitative weighted bounds for the variation operators of singular integrals and the quantitative weighted weak-type endpoint estimates for variation operators of commutators, which are completely new even in the unweighted case. In addition, we also obtain the local exponential decay estimates for such variation operators.2010 Mathematics Subject Classification. 42B20; 42B25. | 10.1016/j.jmaa.2019.123825 | [
"https://arxiv.org/pdf/2105.03587v1.pdf"
] | 213,337,424 | 2105.03587 | 581e7441f0f70491c2bfc5bf2f3e26ec119f4f2d |
SPARSE DOMINATIONS AND WEIGHTED VARIATION INEQUALITIES FOR SINGULAR INTEGRALS AND COMMUTATORS
8 May 2021
Yongming Wen
Huoxiong Wu
Qingying Xue
SPARSE DOMINATIONS AND WEIGHTED VARIATION INEQUALITIES FOR SINGULAR INTEGRALS AND COMMUTATORS
8 May 2021
This paper gives the pointwise sparse dominations for variation operators of singular integrals and commutators with kernels satisfying the L r -Hörmander conditions. As applications, we obtain the strong type quantitative weighted bounds for such variation operators as well as the weak-type quantitative weighted bounds for the variation operators of singular integrals and the quantitative weighted weak-type endpoint estimates for variation operators of commutators, which are completely new even in the unweighted case. In addition, we also obtain the local exponential decay estimates for such variation operators.2010 Mathematics Subject Classification. 42B20; 42B25.
1. Introduction and main results
During the past few years, a novel set of approaches that allow one to dominate operators by sparse operators has blossomed. It provides a new way to simplify the proofs of known results or to draw new conclusions in the theory of weights. The sparse operators were originally introduced and used by Lerner [33] to simplify the proof of the $A_2$ Conjecture [23]. Later on, the following sparse domination for an $\omega$-Calderón-Zygmund operator $T$ was given in [15] and [34], independently:
$$ (1.1)\qquad |Tf(x)| \le c_n \kappa_T \sum_{j=1}^{3^n} \mathcal{A}_{\mathcal{S}_j} f(x), $$
where $\mathcal{A}_{\mathcal{S}}f(x) = \sum_{Q\in\mathcal{S}} \big(|Q|^{-1}\int_Q |f|\big)\chi_Q(x)$ and $\mathcal{S}$ is a sparse family of dyadic cubes from $\mathbb{R}^n$ (see [34] for the definition of $T$). Since then, a good number of papers have used sparse domination methods to deal with other operators; see [3,11,14,26,32], and these are far from complete.

Now let us turn to the commutators of a linear or sublinear operator $T$. Recall first that a locally integrable function $b$ is said to be in the $BMO(\mathbb{R}^n)$ space if
$$ \|b\|_{BMO} := \sup_Q \frac{1}{|Q|}\int_Q |b - b_Q| < \infty, $$
where $b_Q = |Q|^{-1}\int_Q b(x)\,dx$. Then the commutator $[b,T]$ generated by $T$ and $b$ is defined by
$$ [b,T]f(x) := b(x)T(f)(x) - T(bf)(x). $$
In 2017, Lerner, Ombrosi and Rivera-Ríos [36] obtained an analogue of (1.1) for commutators of ω-Calderón-Zygmund operators, in which they gave several weighted weak type bounds for [b, T ] and the quantitative two-weighted estimate for [b, T ] due to Holmes et al. [22]. Subsequently, Lerner et al. [37] extended the results to the iterated commutator and established the necessary conditions for a rather wider class of operators. For more applications of sparse operators to commutators, see [1,11,28,43,45,46,48], and references therein.
In this paper, we will focus on the variation operators of singular integrals and their commutators. These operators can be used to measure the pointwise rate of convergence of the truncated versions of singular integrals and the corresponding commutators. Moreover, they are pointwise larger, with a stronger degree of nonlinearity, than the maximal truncated singular integrals and their commutators, respectively. It has been shown that the variation operators for martingales and for several families of operators are closely connected with probability, ergodic theory and harmonic analysis. For earlier results, we refer the readers to [2,5,29,30].

Before stating our results, we first recall some definitions and background.
Definition 1.1. Let $T = \{T_\varepsilon\}_{\varepsilon>0}$ be a family of operators such that $\lim_{\varepsilon\to 0} T_\varepsilon f(x) = Tf(x)$ exists in some sense. The $\rho$-variation operator is defined as
$$ V_\rho(Tf)(x) := \sup_{\varepsilon_i\downarrow 0}\Big( \sum_{i=1}^{\infty} |T_{\varepsilon_{i+1}}f(x) - T_{\varepsilon_i}f(x)|^\rho \Big)^{1/\rho}, $$
where $\rho > 1$ and the supremum is taken over all sequences $\{\varepsilon_i\}$ decreasing to zero.
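As a numerical illustration of Definition 1.1, the sketch below evaluates the $\rho$-variation of the truncated Hilbert transforms along one fixed decreasing sequence of radii; the true $V_\rho$ takes a supremum over all such sequences, which this finite computation does not attempt, and the quadrature grid is an assumption of convenience.

```python
# A minimal numerical sketch of the rho-variation along one fixed decreasing
# sequence eps_1 > eps_2 > ... ; here T_eps f is the eps-truncated Hilbert
# transform of a sample f, evaluated at x by a Riemann sum on a uniform grid.
import numpy as np

def truncated_hilbert(f, x, eps, grid):
    """T_eps f(x) = int_{|x-y|>eps} f(y)/(x-y) dy, approximated on `grid`."""
    y = grid
    h = y[1] - y[0]
    mask = np.abs(x - y) > eps
    return np.sum(f(y[mask]) / (x - y[mask])) * h

def rho_variation_along(f, x, eps_seq, grid, rho=3.0):
    vals = np.array([truncated_hilbert(f, x, e, grid) for e in eps_seq])
    return (np.sum(np.abs(np.diff(vals)) ** rho)) ** (1.0 / rho)

grid = np.linspace(-5, 5, 20001)
f = lambda t: np.exp(-t ** 2)                 # a smooth, rapidly decaying sample
eps_seq = 2.0 ** -np.arange(1, 12)            # dyadic radii decreasing to zero
print(rho_variation_along(f, x=0.3, eps_seq=eps_seq, grid=grid, rho=3.0))
```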
Definition 1.2. The truncated singular integral operators $T := \{T_\varepsilon\}_{\varepsilon>0}$ and the commutators $T_b := \{T_{\varepsilon,b}\}_{\varepsilon>0}$ are given by
$$ (1.2)\qquad T_\varepsilon f(x) = \int_{|x-y|>\varepsilon} K(x,y)f(y)\,dy, $$
$$ (1.3)\qquad T_{\varepsilon,b} f(x) = \int_{|x-y|>\varepsilon} [b(x)-b(y)]K(x,y)f(y)\,dy, $$
where $b\in BMO(\mathbb{R}^n)$ and $K(x,y)$ satisfies the size condition
$$ (1.4)\qquad |K(x,y)| \le C/|x-y|^n $$
and the $L^r$-Hörmander condition (denoted $K\in\mathcal{H}_r$):
$$ \sup_Q \sup_{x,z\in\frac12 Q} \sum_{k=1}^{\infty} |2^kQ| \Big( \frac{1}{|2^kQ|}\int_{2^kQ\setminus 2^{k-1}Q} |K(x,y)-K(z,y)|^r\,dy \Big)^{1/r} < \infty $$
and
$$ \sup_Q \sup_{x,z\in\frac12 Q} \sum_{k=1}^{\infty} |2^kQ| \Big( \frac{1}{|2^kQ|}\int_{2^kQ\setminus 2^{k-1}Q} |K(y,x)-K(y,z)|^r\,dy \Big)^{1/r} < \infty, $$
when $1\le r<\infty$. For $r=\infty$, we mean that
$$ \sup_Q \sup_{x,z\in\frac12 Q} \sum_{k=1}^{\infty} |2^kQ| \operatorname*{ess\,sup}_{y\in 2^kQ\setminus 2^{k-1}Q} |K(x,y)-K(z,y)| < \infty $$
and
$$ \sup_Q \sup_{x,z\in\frac12 Q} \sum_{k=1}^{\infty} |2^kQ| \operatorname*{ess\,sup}_{y\in 2^kQ\setminus 2^{k-1}Q} |K(y,x)-K(y,z)| < \infty. $$
If we denote the class of kernels of $\omega$-Calderón-Zygmund operators by $\mathcal{H}_{\rm Dini}$, then for $1<s<r<\infty$ it is easy to check that (see also [28])
$$ \mathcal{H}_{\rm Dini} \subset \mathcal{H}_\infty \subset \mathcal{H}_r \subset \mathcal{H}_s \subset \mathcal{H}_1. $$
In 2000, Campbell et al. [8] first proved that the $\rho$-variation operators for the Hilbert transform are of strong $(p,p)$ and weak $(1,1)$ type when $\rho>2$. Subsequently, the same authors [9] extended the results to higher dimensions, covering Riesz transforms and homogeneous singular integrals with rough kernels. For further results in the rough-kernel setting, we refer the readers to [17]. For the weighted cases, we refer to [16,21,25,39,40]. In particular, Hytönen, Lacey and Pérez [25] gave the sharp weighted bounds for the $\rho$-variation of Calderón-Zygmund operators satisfying a log-Dini condition. See [6,18,41,42], etc., for other recent works on variational inequalities and their applications.

On the other hand, the variation inequalities for the commutators of singular integrals have also attracted several authors' attention. In 2013, Betancor et al. [4] obtained the $L^p$-boundedness of variation operators for the commutators of Riesz transforms in the Euclidean setting and the Schrödinger setting. In 2015, Liu and Wu [38] studied the boundedness of $V_\rho(T_b)$ on the weighted $L^p$ spaces with $p>1$ and $\rho>2$, where $b\in BMO$ and the kernel of $T_\varepsilon$ is a standard Calderón-Zygmund kernel. Following this work, Zhang and Wu [49] established the weighted strong type bounds for the variation operators of commutators with kernels satisfying certain Hörmander conditions. Recently, variation inequalities for commutators of singular integrals with rough kernels were also established in [12]. However, to our knowledge, there are no results on the weak-type endpoint estimates for the variation of commutators, nor on the quantitative weighted bounds.
Inspired by the above works, this paper aims to extend the quantitative weighted results for variation of T with kernels from H Dini into H r for 1 < r ≤ ∞, and to establish the quantitative weighted variation inequalities for the families of commutators T b and the corresponding weighted endpoint estimate, which is completely new even in the unweighted case.
Our main ingredients are the sparse dominations for variation operators of singular integrals and commutators, which are non-trivial, especially in proving the weak type estimate of a local grand maximal truncated variation operator, since the methods used in [37] do not work here. Moreover, to avoid employing the trick of the Cauchy integral formula as in [28], we seek appropriate ways, relying on the sparse dominations, to obtain the quantitative weighted bounds and weak-type endpoint estimates for the variations of commutators. Now our main results can be formulated as follows.

Theorem 1.3. Let $1<r\le\infty$, $\rho>2$ and $b\in L^1_{loc}(\mathbb{R}^n)$. Let $T=\{T_\varepsilon\}_{\varepsilon>0}$ and $T_b=\{T_{\varepsilon,b}\}_{\varepsilon>0}$ be given by (1.2) and (1.3), respectively. Assume that the kernel $K(x,y)\in\mathcal{H}_r$ and satisfies (1.4). If $V_\rho(T)$ is bounded on $L^{q_0}(\mathbb{R}^n,dx)$ for some $1<q_0<\infty$, then for every $f\in C_c^\infty(\mathbb{R}^n)$, there exist $3^n$ sparse families $\mathcal{S}_j$ such that
$$ (1.5)\qquad V_\rho(Tf)(x) \le C(n,r',q_0,\|V_\rho(T)\|_{L^{q_0}\to L^{q_0}}) \sum_{j=1}^{3^n}\sum_{R\in\mathcal{S}_j} \langle|f|^{r'}\rangle_R^{1/r'}\chi_R(x), $$
and
$$ (1.6)\qquad V_\rho(T_bf)(x) \le C(n,r',q_0,\|V_\rho(T)\|_{L^{q_0}\to L^{q_0}}) \sum_{j=1}^{3^n}\sum_{R\in\mathcal{S}_j} \big( |b(x)-b_R|\langle|f|^{r'}\rangle_R^{1/r'} + \langle|f(b-b_R)|^{r'}\rangle_R^{1/r'} \big)\chi_R(x), $$
where $\langle|f|^{r'}\rangle_R = \frac{1}{|R|}\int_R |f(y)|^{r'}\,dy$.

Remark 1.4. In [20], Franca Silva and Zorin-Kranich applied the sparse domination to explore the sharp weighted estimates for the variation operators associated with the family of $\omega$-Calderón-Zygmund operators. Our results can be regarded as a generalization of the results in [20], since $\mathcal{H}_{\rm Dini}\subset\mathcal{H}_r$.
Applying conclusion (1.5) of Theorem 1.3, we get the following sharp weighted estimates for the variation operators of singular integrals.

Theorem 1.5. Let $1<r\le\infty$, $\rho>2$, and let $\omega$ and $\sigma^{r'}$ be a pair of weights. Let $T=\{T_\varepsilon\}_{\varepsilon>0}$ be given by (1.2). Assume that $K(x,y)\in\mathcal{H}_r$ and satisfies (1.4). If $V_\rho(T)$ is bounded on $L^{q_0}(\mathbb{R}^n,dx)$ for some $1<q_0<\infty$, then for any $r'<p<\infty$,
$$ \|V_\rho(T\sigma f)\|_{L^p(\omega)} \le C(n,r',p,q_0,\|V_\rho(T)\|_{L^{q_0}\to L^{q_0}})\, [\omega,\sigma^{r'}]_{A_{p/r'}}^{1/p} \big( [\omega]_{A_\infty}^{r'/p'} + [\sigma^{r'}]_{A_\infty}^{r'/p} \big)^{1/r'} \|f\|_{L^p(\sigma^{r'})}, $$
$$ \|V_\rho(T\sigma f)\|_{L^{p,\infty}(\omega)} \le C(n,r',p,q_0,\|V_\rho(T)\|_{L^{q_0}\to L^{q_0}})\, [\omega,\sigma^{r'}]_{A_{p/r'}}^{1/p} [\omega]_{A_\infty}^{1/p'} \|f\|_{L^p(\sigma^{r'})}. $$
Corollary 1.6. Let $1<r\le\infty$, $\rho>2$, and let $T=\{T_\varepsilon\}_{\varepsilon>0}$ be given by (1.2). Assume that $K(x,y)\in\mathcal{H}_r$ and satisfies (1.4). If $V_\rho(T)$ is bounded on $L^{q_0}(\mathbb{R}^n,dx)$ for some $1<q_0<\infty$, then for any $r'<p<\infty$ and $\omega\in A_{p/r'}$,
$$ \|V_\rho(Tf)\|_{L^p(\omega)} \le C(n,r',p,q_0,\|V_\rho(T)\|_{L^{q_0}\to L^{q_0}})\, [\omega]_{A_{p/r'}}^{1/p} \big( [\omega]_{A_\infty}^{r'/p'} + [\omega^{\frac{1}{1-p/r'}}]_{A_\infty}^{r'/p} \big)^{1/r'} \|f\|_{L^p(\omega)}, $$
$$ \|V_\rho(Tf)\|_{L^{p,\infty}(\omega)} \le C(n,r',p,q_0,\|V_\rho(T)\|_{L^{q_0}\to L^{q_0}})\, [\omega]_{A_{p/r'}}^{1/p} [\omega]_{A_\infty}^{1/p'} \|f\|_{L^p(\omega)}. $$
Using (1.5) again and the same method as in [28], we can obtain the following weak type estimate for the variation operators.

Corollary 1.7. Let $1<r\le\infty$, $\rho>2$, and let $T=\{T_\varepsilon\}_{\varepsilon>0}$ be given by (1.2). Assume that $K(x,y)\in\mathcal{H}_r$ and satisfies (1.4). If $V_\rho(T)$ is bounded on $L^{q_0}(\mathbb{R}^n,dx)$ for some $1<q_0<\infty$, then for every weight $\omega$ and every Young function $\varphi$,
$$ \omega(\{x\in\mathbb{R}^n : V_\rho(Tf)(x)>\lambda\}) \le C(n,r',q_0,\|V_\rho(T)\|_{L^{q_0}\to L^{q_0}})\,\kappa_\varphi \int_{\mathbb{R}^n} \Big( \frac{|f(x)|}{\lambda} \Big)^{r'} M_\varphi\omega(x)\,dx, $$
where
$$ \kappa_\varphi := \int_1^\infty \frac{\varphi^{-1}(t)\,[\log(e+t)]^{2r'}}{t^2\,[\log(e+t)]^3}\,dt. $$
Remark 1.8. We remark that the first conclusions in Theorem 1.5 and Corollary 1.6 improve the main results in [25] by removing the weak $(1,1)$ type assumption and weakening the condition on the kernel, and the second conclusions are new. Therefore, Theorem 1.5 and Corollary 1.6 can be regarded as an extension and generalization of the main results in [25]. In Corollary 1.7, taking $r'=1$, $\varphi(t)=t$ and $\omega\in A_1$, since $\mathcal{H}_{\rm Dini}\subset\mathcal{H}_\infty$, our argument covers the result in [18]; moreover, the conclusion of Corollary 1.7 is new in itself.
Moreover, applying conclusion (1.6) of Theorem 1.3, we can obtain the following quantitative weighted bounds for the variation operators of commutators of singular integrals, which are also completely new.

Theorem 1.9. Let $1<r\le\infty$, $\rho>2$, $b\in BMO(\mathbb{R}^n)$. Assume that $K(x,y)\in\mathcal{H}_r$ and satisfies (1.4). Let $T$ and $T_b$ be given by (1.2) and (1.3), respectively. If $V_\rho(T)$ is bounded on $L^{q_0}(\mathbb{R}^n,dx)$ for some $1<q_0<\infty$, then for any $r'<p<\infty$ and $\omega\in A_{p/r'}$,
$$ \|V_\rho(T_bf)\|_{L^p(\omega)} \le C(n,r',p,q_0,\|V_\rho(T)\|_{L^{q_0}\to L^{q_0}})\, [\omega]_{A_\infty}^2 \Big( [\omega]_{A_{p/r'}}^{\frac{p+r'}{p(p-r')}} + \big( [\omega]_{A_{p/r'}} [\omega^{-\frac{r'}{p-r'}}]_{A_\infty} \big)^{1/p} \Big) \|b\|_{BMO} \|f\|_{L^p(\omega)}. $$
Theorem 1.10. Let $1<r\le\infty$, $\rho>2$, $b\in BMO(\mathbb{R}^n)$. Assume that $K(x,y)\in\mathcal{H}_r$ and satisfies (1.4). Let $T$ and $T_b$ be given by (1.2) and (1.3), respectively. If $V_\rho(T)$ is bounded on $L^{q_0}(\mathbb{R}^n,dx)$ for some $1<q_0<\infty$, then for every weight $\omega$, every $0<\varepsilon\le 1$ and all $\lambda>0$,
$$ \omega(\{x\in\mathbb{R}^n : V_\rho(T_bf)(x)>\lambda\}) \le C(n,r',q_0,\|V_\rho(T)\|_{L^{q_0}\to L^{q_0}})\, \frac1\varepsilon \int_{\mathbb{R}^n} \psi\Big( \|b\|_{BMO}\frac{|f(x)|}{\lambda} \Big) M_{L(\log L)^{4r'-3}(\log\log L)^{1+\varepsilon}}\omega(x)\,dx, $$
and
$$ \omega(\{x\in\mathbb{R}^n : V_\rho(T_bf)(x)>\lambda\}) \le C(n,r',q_0,\|V_\rho(T)\|_{L^{q_0}\to L^{q_0}})\, \frac1\varepsilon \int_{\mathbb{R}^n} \psi\Big( \|b\|_{BMO}\frac{|f(x)|}{\lambda} \Big) M_{L(\log L)^{4r'-3+\varepsilon}}\omega(x)\,dx, $$
where $\psi(t) = t^{r'}[\log(e+t)]^{r'}$. In particular, if $\omega\in A_\infty$, then
$$ \omega(\{x\in\mathbb{R}^n : V_\rho(T_bf)(x)>\lambda\}) \le C(n,r',q_0,\|V_\rho(T)\|_{L^{q_0}\to L^{q_0}})\, [\omega]_{A_\infty}^{4r'-3}[\log(e+[\omega]_{A_\infty})] \int_{\mathbb{R}^n} \psi\Big( \|b\|_{BMO}\frac{|f(x)|}{\lambda} \Big) M\omega(x)\,dx. $$
Moreover, if $\omega\in A_1$,
$$ \omega(\{x\in\mathbb{R}^n : V_\rho(T_bf)(x)>\lambda\}) \le C(n,r',q_0,\|V_\rho(T)\|_{L^{q_0}\to L^{q_0}})\, [\omega]_{A_1}[\omega]_{A_\infty}^{4r'-3}[\log(e+[\omega]_{A_\infty})] \int_{\mathbb{R}^n} \psi\Big( \|b\|_{BMO}\frac{|f(x)|}{\lambda} \Big) \omega(x)\,dx. $$
Remark 1.11. When $r'=1$, the conclusions of Theorem 1.10 coincide with the case of commutators of Calderón-Zygmund operators in [28,36]. The weak-type endpoint estimates for the variation of commutators are new, even in the unweighted case. In addition, note that if $\Omega\in{\rm Lip}_\alpha(S^{n-1})$ with $0<\alpha\le1$, then $K(x,y)=\Omega(x-y)/|x-y|^n \in \mathcal{H}_{\rm Dini}\subset\mathcal{H}_r$, $1\le r\le\infty$, which yields the following corollary.

Corollary 1.12. Suppose that $\Omega\in{\rm Lip}_\alpha(S^{n-1})$ with $0<\alpha\le1$, $b\in BMO(\mathbb{R}^n)$, $\omega\in A_1$ and $\rho>2$. Then for all functions $f$ and all $\lambda>0$,
$$ \omega(\{x\in\mathbb{R}^n : V_\rho(T_{\Omega,b}f)(x)>\lambda\}) \le C[\omega]_{A_1}[\omega]_{A_\infty}[\log(e+[\omega]_{A_\infty})] \int_{\mathbb{R}^n} \Phi\Big( \|b\|_{BMO}\frac{|f(x)|}{\lambda} \Big)\omega(x)\,dx, $$
where $\Phi(t)=t\log(e+t)$ and $T_{\Omega,b}$ is defined as $T_b$ in Definition 1.2 with $K(x,y)=\Omega(x-y)/|x-y|^n$.
We organize the rest of the paper as follows. In Section 2, we recall some related definitions and auxiliary lemmas. The proofs of Theorems 1.3 and 1.5 are given in Section 3. In Section 4 we prove Theorems 1.9 and 1.10. Finally, we apply the sparse domination to present the local exponential decay estimates of the variation operators in Section 5.

Throughout the rest of the paper, $C$ denotes a positive constant which may change at each occurrence. If $f\le Cg$ we write $f\lesssim g$, and if $f\lesssim g\lesssim f$ we write $f\sim g$. The side length of a cube $Q$ is denoted by $\ell_Q$.
2. Preliminaries
In this section, we recall some well known definitions and properties which will be used later.
2.1. Weights.
A weight is a nonnegative, locally integrable function on $\mathbb{R}^n$. Given a pair of weights $\omega$ and $\sigma$, we say that they satisfy the joint $A_p$ condition if
$$ [\omega,\sigma]_{A_p} = \sup_Q \Big( \frac{1}{|Q|}\int_Q \omega(y)\,dy \Big) \Big( \frac{1}{|Q|}\int_Q \sigma(y)\,dy \Big)^{p-1} < \infty, $$
and for $\omega\in A_\infty$,
$$ [\omega]_{A_\infty} = \sup_Q \frac{1}{\omega(Q)}\int_Q M(\omega\chi_Q)(x)\,dx, $$
where the supremum is taken over all cubes $Q\subset\mathbb{R}^n$. Observe that if $\omega\in A_p$ and $\sigma=\omega^{1-p'}$, then $[\omega,\sigma]_{A_p}=[\omega]_{A_p}$. Recall that $A_\infty=\bigcup_{p\ge1}A_p$. The $A_\infty$ constant $[\omega]_{A_\infty}$ given above was shown in [27] to be the most suitable one, and the following optimal reverse Hölder inequality was also obtained there.
Lemma 2.1 (cf. [27]). Let $\omega\in A_\infty$. Then for every cube $Q$,
$$ \Big( \frac{1}{|Q|}\int_Q \omega(x)^{r_\omega}\,dx \Big)^{1/r_\omega} \le \frac{2}{|Q|}\int_Q \omega(x)\,dx, $$
where $r_\omega = 1 + 1/(\tau_n[\omega]_{A_\infty})$ with $\tau_n$ a dimensional constant independent of $\omega$ and $Q$.
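A crude numerical illustration of these constants: the Python sketch below approximates $[\omega]_{A_p}$ for the power weight $\omega(x)=x^{1/2}$ by scanning dyadic subintervals of $[0,1]$ (a genuine supremum ranges over all cubes), and tests the reverse Hölder inequality of Lemma 2.1 on $Q=[0,1]$ with an arbitrarily chosen exponent $r=1.2$, since the dimensional constant $\tau_n$ is not specified here.

```python
# A minimal 1-D sketch: approximate the A_p characteristic of w(x) = x^(1/2)
# over dyadic subintervals of [0,1], and check the reverse Holder inequality
# of Lemma 2.1 on [0,1] for a small test exponent r > 1.
import numpy as np

def avg(f, a, b, pts=4096):
    """Midpoint-rule average of f over [a, b]; avoids the endpoints."""
    x = np.linspace(a, b, pts, endpoint=False) + (b - a) / (2 * pts)
    return f(x).mean()

def ap_constant(w, p, levels=8):
    """Scan dyadic subintervals of [0,1] for the largest A_p product."""
    pprime = p / (p - 1)
    best = 0.0
    for k in range(levels + 1):
        for i in range(2 ** k):
            a, b = i / 2 ** k, (i + 1) / 2 ** k
            val = avg(w, a, b) * avg(lambda x: w(x) ** (1 - pprime), a, b) ** (p - 1)
            best = max(best, val)
    return best

w = lambda x: x ** 0.5        # the power weight x^a is in A_p for -1 < a < p-1
print("[w]_{A_2} over dyadic intervals ~", ap_constant(w, p=2.0))

# Reverse Holder check on Q = [0,1]: Lemma 2.1 guarantees some exponent
# r_w = 1 + 1/(tau_n [w]_{A_infty}) works; tau_n is not known to us here,
# so we simply test r = 1.2.
r = 1.2
lhs = avg(lambda x: w(x) ** r, 0, 1) ** (1 / r)
print("reverse Holder:", lhs, "<= 2 *", avg(w, 0, 1))
```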
2.2. Sparse families.
In this subsection, we will introduce a quite useful tool, which has been borrowed from [34].
In the following, we call $\mathcal{D}(Q)$ the dyadic grid obtained by repeatedly subdividing $Q$ and its descendants into $2^n$ cubes with the same side length.
Definition 2.2. A family of cubes is said to be a dyadic lattice D if it satisfies the following properties:
(1) if Q ∈ D, then every descendant of Q is also in D;
(2) for every two cubes Q 1 , Q 2 ∈ D, we can find a common ancestor Q ∈ D such that Q 1 , Q 2 ∈ D(Q);
(3) for each compact set K ⊆ R n , we can find a cube Q ∈ D such that K ⊆ Q.
The following lemma is called the Three Lattice Theorem; it will play a key role in our proofs.

Lemma 2.3 (cf. [34]). Given a dyadic lattice $\mathcal{D}$, there exist $3^n$ dyadic lattices $\mathcal{D}_1,\ldots,\mathcal{D}_{3^n}$ such that $\{3Q : Q\in\mathcal{D}\} = \bigcup_{j=1}^{3^n}\mathcal{D}_j$, and for each cube $Q\in\mathcal{D}$ we can find a cube $R_Q$ in each $\mathcal{D}_j$ such that $Q\subseteq R_Q$ and $3\ell_Q = \ell_{R_Q}$.
Remark 2.4. Fix a dyadic lattice D. For any cube Q ⊂ R n , we can always find a cube Q ′ ∈ D such that l Q /2 < l Q ′ ≤ l Q and Q ⊂ 3Q ′ . By the above lemma, for some j ∈ {1, . . . , 3 n }, it is easy to see that 3Q ′ = P ∈ D j . Hence, for each cube Q ⊂ R n , we can find a cube P ∈ D j that satisfies Q ⊂ P and l P ≤ 3l Q .
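The covering fact in Remark 2.4 is easy to implement. The following one-dimensional Python sketch finds, for an arbitrary interval $I$, a standard dyadic interval $Q'$ with $|I|/2 < \ell_{Q'} \le |I|$ whose triple $3Q'$ contains $I$; by Lemma 2.3, $3Q'$ belongs to one of the three shifted lattices in dimension one.

```python
# A 1-D sketch of Remark 2.4: given an interval I = [lo, hi], find a standard
# dyadic interval Q' with |I|/2 < l(Q') <= |I| whose triple 3Q' contains I.
import math

def containing_triple(lo, hi):
    length = hi - lo
    k = math.ceil(-math.log2(length))        # smallest k with 2^-k <= length
    lQ = 2.0 ** (-k)                          # so length/2 < lQ <= length
    c = 0.5 * (lo + hi)
    m = math.floor(c / lQ)                    # dyadic interval containing c
    qlo, qhi = m * lQ, (m + 1) * lQ
    return (qlo - lQ, qhi + lQ)               # the triple 3Q'

I = (0.337, 0.61)
T = containing_triple(*I)
assert T[0] <= I[0] and I[1] <= T[1]
print("I =", I, "is contained in 3Q' =", T)
```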
By the definition of dyadic lattice, we can give the definition of sparse family.
Definition 2.5. Let $\mathcal{D}$ be a dyadic lattice. $\mathcal{S}\subset\mathcal{D}$ is an $\eta$-sparse family with $\eta\in(0,1)$ if for every cube $Q\in\mathcal{S}$ we can find a measurable subset $E_Q\subset Q$ such that $\eta|Q|\le|E_Q|$, where all the $E_Q$ are pairwise disjoint.
Let $r>0$ and let $\mathcal{S}$ be an $\eta$-sparse family. We define the sparse operator by
$$ \mathcal{A}_{r,\mathcal{S}}f(x) = \Big( \sum_{Q\in\mathcal{S}} \Big( \frac{1}{|Q|}\int_Q |f(y)|\,dy \Big)^r \chi_Q(x) \Big)^{1/r}. $$
Let $\omega,\sigma$ be a pair of weights and $1<p<\infty$, $r>0$. In [26], Hytönen et al. proved that
$$ (2.1)\qquad \|\mathcal{A}_{r,\mathcal{S}}(\sigma f)\|_{L^p(\omega)} \lesssim [\omega,\sigma]_{A_p}^{1/p} \big( [\omega]_{A_\infty}^{(1/r-1/p)_+} + [\sigma]_{A_\infty}^{1/p} \big) \|f\|_{L^p(\sigma)}, $$
$$ (2.2)\qquad \|\mathcal{A}_{r,\mathcal{S}}(\sigma f)\|_{L^{p,\infty}(\omega)} \lesssim [\omega,\sigma]_{A_p}^{1/p} [\omega]_{A_\infty}^{(1/r-1/p)_+} \|f\|_{L^p(\sigma)}, \quad p=r, $$
where $(\alpha)_+ = \alpha$ if $\alpha>0$, and $0$ otherwise.
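A minimal numerical illustration of $\mathcal{A}_{r,\mathcal{S}}$: the Python sketch below evaluates the sparse operator on a toy one-dimensional sparse family, the nested intervals $[0,2^{-k})$, which are $1/2$-sparse since their right halves $[2^{-(k+1)},2^{-k})$ are pairwise disjoint. The averages are midpoint-rule approximations.

```python
# A minimal 1-D sketch of evaluating A_{r,S} f at a point for a toy sparse
# family; this is an illustration, not an optimized implementation.
import numpy as np

def average(f, a, b, pts=2048):
    x = np.linspace(a, b, pts, endpoint=False) + (b - a) / (2 * pts)
    return np.abs(f(x)).mean()

def sparse_operator(f, x, S, r=1.0):
    """A_{r,S} f(x) = (sum over Q in S containing x of <|f|>_Q^r)^(1/r)."""
    total = sum(average(f, a, b) ** r for (a, b) in S if a <= x < b)
    return total ** (1.0 / r)

f = lambda t: np.log(1.0 / t)                 # unbounded but integrable on (0,1)
S = [(0.0, 2.0 ** (-k)) for k in range(11)]   # nested, 1/2-sparse toy family
for x in (0.75, 0.1, 0.001):
    print(x, sparse_operator(f, x, S, r=1.0))
```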
2.3. Young functions and Orlicz maximal operators.
In this subsection, we recall some fundamental facts about Young functions and Orlicz local averages; for more details, see [47]. We say that a function $A : [0,\infty)\to[0,\infty)$ is a Young function if $A$ is a continuous, convex, increasing function that satisfies $A(0)=0$ and $\lim_{t\to\infty}A(t)=\infty$. The $A$-norm of $f$ over $Q$ is defined as
$$ \|f\|_{A(\mu),Q} := \inf\Big\{ \lambda>0 : \frac{1}{\mu(Q)}\int_Q A\Big( \frac{|f(x)|}{\lambda} \Big)\,d\mu \le 1 \Big\}. $$
We denote it by $\|f\|_{A,Q}$ if $\mu$ is the Lebesgue measure, and write $\|f\|_{A(\omega),Q}$ if $d\mu=\omega\,dx$ is absolutely continuous with respect to the Lebesgue measure. We then define the Orlicz maximal operator $M_Af(x)$ in the natural way by
$$ M_Af(x) := \sup_{Q\ni x}\|f\|_{A,Q}. $$
For every Young function $A$, we can define its complementary function $\bar{A}$ by
$$ \bar{A}(t) = \sup_{s>0}\{st - A(s)\}. $$
These satisfy some interesting properties, such as the generalized Hölder inequality
$$ (2.3)\qquad \frac{1}{\mu(Q)}\int_Q |f(x)g(x)|\,d\mu(x) \le 2\|f\|_{A(\mu),Q}\|g\|_{\bar{A}(\mu),Q}. $$
Now we present some particular cases of maximal operators.
• If $A(t)=t^r$ with $r>1$, then $M_A=M_r$.
• $M_A = M_{L\log L^\alpha}$ with $\alpha>0$ is given by the function $A(t)=t(\log(e+t))^\alpha$. Note that for any $\alpha>0$, $M\lesssim M_A\lesssim M_r$ for every $1<r<\infty$.
• If $A(t)=t[\log(e+t)]^\alpha[\log(e+\log(e+t))]^\beta$ with $\alpha,\beta>0$, then we denote $M_A = M_{L(\log L)^\alpha(\log\log L)^\beta}$.
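The Luxemburg norm in the definition above can be computed directly by monotone bisection in $\lambda$, as in the following Python sketch for $A(t)=t\log(e+t)$ on $Q=[0,1]$; the midpoint-rule averages make this a numerical approximation only.

```python
# A minimal sketch of computing the Luxemburg norm ||f||_{A,Q} from its
# definition by bisection in lambda, for A(t) = t*log(e+t) on Q = [0,1].
import numpy as np

def A(t):
    return t * np.log(np.e + t)

def mean_A(f, lam, a=0.0, b=1.0, pts=8192):
    x = np.linspace(a, b, pts, endpoint=False) + (b - a) / (2 * pts)
    return A(np.abs(f(x)) / lam).mean()

def luxemburg_norm(f, lo=1e-8, hi=1e8, iters=80):
    # mean_A is decreasing in lambda; find the smallest lambda with mean <= 1.
    for _ in range(iters):
        mid = np.sqrt(lo * hi)                  # bisect on a log scale
        if mean_A(f, mid) <= 1.0:
            hi = mid
        else:
            lo = mid
    return hi

f = lambda x: 1.0 / np.sqrt(x)     # integrable and in L log L on [0,1]
print("||f||_{L log L, [0,1]} ~", luxemburg_norm(f))
```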
Finally, to prove Theorem 1.10, we will use the following lemma, which was proved essentially in [28].
Lemma 2.6. Let $b\in BMO$ and $\psi_0(t)=t^{r'}$ for $1\le r'<\infty$. Let $\psi$ be a Young function (set $\psi_1:=\psi$) such that $\psi_1^{-1}(t)\psi_0^{-1}(t)\bar{C}_1^{-1}(t)\lesssim t$ with $\bar{C}_1(t)=e^t$ for $t\ge1$. Assume that $\psi(xy)\lesssim\psi(x)\psi(y)$, and let $\beta_n$ be the constant such that $e^{(3/2)^{k-1}/(2^ne)-1} \ge \max(e^2, 4^k)$ for $k>\beta_n$. Then for every weight $\omega$ and all Young functions $\varphi_0,\varphi_1$,
$$ \omega(\{x\in\mathbb{R}^n : \mathcal{A}_{\mathcal{S},b}(f)(x)>\lambda\}) \le c\sum_{j=0}^1 \kappa_{\varphi_j}\int_{\mathbb{R}^n} \psi_j\Big( \|b\|_{BMO}\frac{|f(x)|}{\lambda} \Big) M_{\Phi_{1-j}\circ\varphi_j}\omega(x)\,dx, $$
where
$$ \mathcal{A}_{\mathcal{S},b}(f)(x) = \sum_{R\in\mathcal{S}} \big( |b(x)-b_R|\langle|f|^{r'}\rangle_R^{1/r'} + \langle|f(b-b_R)|^{r'}\rangle_R^{1/r'} \big)\chi_R(x), \qquad \Phi_j(t)=t[\log(e+t)]^j,\ j=0,1, $$
and
$$ \kappa_{\varphi_j} = \begin{cases} \displaystyle\sum_{k=1}^{\beta_n} 4^{k(r'-1)}\frac{\varphi_0^{-1}\circ\Phi_1^{-1}(1/\alpha_k)}{\Phi_1^{-1}(1/\alpha_k)} + c_n\int_1^\infty \frac{\varphi_0^{-1}\circ\Phi_1^{-1}(t)\,\psi_0((\log(e+t))^4)}{t^2(\log(e+t))^4}\,dt, & j=0,\\[6pt] \displaystyle\int_1^\infty \frac{\varphi_1^{-1}(t)\,\psi((\log(e+t))^2)}{t^2(\log(e+t))^3}\,dt, & j=1, \end{cases} $$
with $\alpha_k = \min\big(1, e^{-(3/2)^k/(2^ne)+1}\big)$.
Remark 2.7. Lemma 2.6 is a special case of Theorem 2.7 in [28], after a careful check of the constants. We omit the details.
3. Proofs of Theorems 1.3 and 1.5
This section is devoted to Theorems 1.3 and 1.5; the proofs are based on a pointwise estimate of the grand maximal operators associated with the variation operators. First, we define the grand maximal truncated operator $\mathcal{M}_{V_\rho(T)}$ by
$$ \mathcal{M}_{V_\rho(T)}f(x) = \sup_{Q\ni x}\operatorname*{ess\,sup}_{\xi\in Q} V_\rho(Tf\chi_{\mathbb{R}^n\setminus3Q})(\xi). $$
Given a cube $Q_0$, we also consider a local version $\mathcal{M}_{V_\rho(T),Q_0}$, defined by
$$ \mathcal{M}_{V_\rho(T),Q_0}f(x) := \begin{cases} \displaystyle\sup_{Q\ni x,\,Q\subset Q_0}\operatorname*{ess\,sup}_{\xi\in Q} V_\rho(Tf\chi_{3Q_0\setminus3Q})(\xi), & x\in Q_0,\\ 0, & \text{otherwise}. \end{cases} $$
To prove our results, we fix some notation as in [21]. Set $\Theta = \{\beta : \beta=\{\epsilon_i\},\ \epsilon_i\in\mathbb{R},\ \epsilon_i\downarrow0\}$. We consider the set $\mathbb{N}\times\Theta$ and denote by $F_\rho$ the mixed norm space of two-variable functions $g(i,\beta)$ such that
$$ \|g\|_{F_\rho} \equiv \sup_\beta\Big( \sum_i |g(i,\beta)|^\rho \Big)^{1/\rho} < \infty. $$
We consider the $F_\rho$-valued operator $V(T):f\mapsto V(T)f$ defined by
$$ V(T)f(x) := \{T_{\epsilon_{i+1}}f(x) - T_{\epsilon_i}f(x)\}_{\beta=\{\epsilon_i\}\in\Theta}. $$
Then $V_\rho(Tf)(x) = \|V(T)f(x)\|_{F_\rho}$.

Lemma 3.1. Let $T=\{T_\varepsilon\}_{\varepsilon>0}$ be given by (1.2). Suppose that $V_\rho(T)$ is bounded on $L^{q_0}(\mathbb{R}^n)$ for some $1<q_0<\infty$. Then for a.e. $x\in Q_0$ and $f\in C_c^\infty(\mathbb{R}^n)$,
$$ (3.1)\qquad V_\rho(Tf\chi_{3Q_0})(x) \le c_{n,q_0}\|V_\rho(T)\|_{L^{q_0}\to L^{q_0}}|f(x)| + \mathcal{M}_{V_\rho(T),Q_0}f(x). $$
Proof. For $x\in{\rm int}\,Q_0$, let $x$ be a point of approximate continuity of $V_\rho(Tf\chi_{3Q_0})$ (see [19]). For any $\varepsilon>0$, set
$$ E_r(x) = \{y\in B(x,r) : |V_\rho(Tf\chi_{3Q_0})(y) - V_\rho(Tf\chi_{3Q_0})(x)| < \varepsilon\}, $$
where $B(x,r)$ is the ball centered at $x$ of radius $r$. Then $\lim_{r\to0}|E_r(x)|/|B(x,r)| = 1$. We denote by $Q(x,r)$ the smallest cube centered at $x$ containing $B(x,r)$. Let $r>0$ be small enough so that $Q(x,r)\subset Q_0$. Hence, for a.e. $y\in E_r(x)$,
$$ V_\rho(Tf\chi_{3Q_0})(x) < V_\rho(Tf\chi_{3Q_0})(y) + \varepsilon \le V_\rho(Tf\chi_{3Q(x,r)})(y) + \mathcal{M}_{V_\rho(T),Q_0}f(x) + \varepsilon. $$
Applying the $L^{q_0}$-boundedness of $V_\rho(T)$, we have
$$ V_\rho(Tf\chi_{3Q_0})(x) \le \Big( \frac{1}{|E_r(x)|}\int_{E_r(x)} V_\rho(Tf\chi_{3Q(x,r)})(y)^{q_0}\,dy \Big)^{1/q_0} + \mathcal{M}_{V_\rho(T),Q_0}f(x) + \varepsilon \le \|V_\rho(T)\|_{L^{q_0}\to L^{q_0}}\Big( \frac{1}{|E_r(x)|}\int_{3Q(x,r)}|f(y)|^{q_0}\,dy \Big)^{1/q_0} + \mathcal{M}_{V_\rho(T),Q_0}f(x) + \varepsilon. $$
Assuming additionally that $x$ is a Lebesgue point of $f$, we obtain the result by letting $r,\varepsilon\to0$.
Lemma 3.2. Let $T=\{T_\varepsilon\}_{\varepsilon>0}$ be given by (1.2). Assume that $K(x,y)$ satisfies (1.4) and $K\in\mathcal{H}_r$ for $1<r\le\infty$. If $V_\rho(T)$ is bounded on $L^{q_0}(\mathbb{R}^n)$ for some $1<q_0<\infty$, then
$$ \|\mathcal{M}_{V_\rho(T),Q_0}\|_{L^{r'}\to L^{r',\infty}} \le C(n,r',q_0,\|V_\rho(T)\|_{L^{q_0}\to L^{q_0}}), $$
where $1/r + 1/r' = 1$.
Proof. We first consider the case $1<r<\infty$. For any $x\in Q_0$, $Q\ni x$ with $Q\subset Q_0$, and $\xi\in Q$, we denote $B_x := B(x,9n\ell_Q)$ and $\tilde{B}_x := B(x,3\sqrt{n}\ell_{Q_0})$. Then $3Q\subset B_x$ and $3Q_0\subset\tilde{B}_x$. We have
$$ V_\rho(Tf\chi_{3Q_0\setminus3Q})(\xi) \le V_\rho(Tf\chi_{(B_x\cap3Q_0)\setminus3Q})(\xi) + V_\rho(Tf\chi_{3Q_0\setminus B_x})(x) + |V_\rho(Tf\chi_{3Q_0\setminus B_x})(\xi) - V_\rho(Tf\chi_{3Q_0\setminus B_x})(x)| =: I + II + III. $$
Now we estimate $I$, $II$ and $III$, respectively. Note that
$$ \|\{\chi_{\{\varepsilon_{i+1}<|\xi-y|\le\varepsilon_i\}}\}_{\beta=\{\varepsilon_i\}\in\Theta}\|_{F_\rho} \le 1. $$
By the Minkowski inequality and (1.4), one can see that
$$ (3.2)\qquad I \le \int_{\mathbb{R}^n} \|\{\chi_{\{\varepsilon_{i+1}<|\xi-y|\le\varepsilon_i\}}\}_\beta\|_{F_\rho}\,|f(y)\chi_{(B_x\cap3Q_0)\setminus3Q}(y)||K(\xi,y)|\,dy \le \int_{(B_x\cap3Q_0)\setminus3Q} \frac{|f(y)|}{|\xi-y|^n}\,dy \le C_nM_{r'}f(x). $$
Similarly, by the definition and sublinearity of $V_\rho(T)$, we have
$$ (3.3)\qquad II \le 2V_\rho(Tf)(x) + V_\rho(Tf\chi_{B_x\setminus3Q_0})(x) \le 2V_\rho(Tf)(x) + C_nM_{r'}f(x). $$
For the term $III$, we can write
$$ III \le \Big\| \Big\{ \int_{\varepsilon_{i+1}<|\xi-y|\le\varepsilon_i} K(\xi,y)f(y)\chi_{3Q_0\setminus B_x}(y)\,dy - \int_{\varepsilon_{i+1}<|x-y|\le\varepsilon_i} K(x,y)f(y)\chi_{3Q_0\setminus B_x}(y)\,dy \Big\}_{\beta=\{\epsilon_i\}\in\Theta} \Big\|_{F_\rho} $$
$$ \le \Big\| \Big\{ \int_{\varepsilon_{i+1}<|\xi-y|\le\varepsilon_i} (K(\xi,y)-K(x,y))f(y)\chi_{3Q_0\setminus B_x}(y)\,dy \Big\}_\beta \Big\|_{F_\rho} + \Big\| \Big\{ \int_{\mathbb{R}^n} \big(\chi_{\{\varepsilon_{i+1}<|\xi-y|\le\varepsilon_i\}}(y)-\chi_{\{\varepsilon_{i+1}<|x-y|\le\varepsilon_i\}}(y)\big)K(x,y)f(y)\chi_{3Q_0\setminus B_x}(y)\,dy \Big\}_\beta \Big\|_{F_\rho} =: I_1 + I_2. $$
Since $\|\{\chi_{\{\varepsilon_{i+1}<|\xi-y|\le\varepsilon_i\}}\}_\beta\|_{F_\rho}\le1$, by the Minkowski inequality and the Hörmander condition,
$$ I_1 \le \int_{\mathbb{R}^n} \|\{\chi_{\{\varepsilon_{i+1}<|\xi-y|\le\varepsilon_i\}}\}_\beta\|_{F_\rho}\,|f(y)\chi_{\mathbb{R}^n\setminus3Q}(y)||K(\xi,y)-K(x,y)|\,dy \le \sum_{k=1}^\infty \frac{2^{kn}(3\ell_Q)^n}{|2^k3Q|}\int_{2^k3Q\setminus2^{k-1}3Q} |f(y)||K(\xi,y)-K(x,y)|\,dy $$
$$ \le \sum_{k=1}^\infty 2^{kn}(3\ell_Q)^n \Big( \frac{1}{|2^k3Q|}\int_{2^k3Q\setminus2^{k-1}3Q}|K(\xi,y)-K(x,y)|^r\,dy \Big)^{1/r} \Big( \frac{1}{|2^k3Q|}\int_{2^k3Q}|f(y)|^{r'}\,dy \Big)^{1/r'} \le CM_{r'}(f)(x). $$
Next, we deal with the term $I_2$. As we can see, the integral
$$ \int_{\mathbb{R}^n} \big|\chi_{\{\varepsilon_{i+1}<|\xi-y|\le\varepsilon_i\}}(y)-\chi_{\{\varepsilon_{i+1}<|x-y|\le\varepsilon_i\}}(y)\big|\,|K(x,y)f(y)|\chi_{3Q_0\setminus B_x}(y)\,dy $$
is non-zero only if either $\chi_{\{\varepsilon_{i+1}<|\xi-y|\le\varepsilon_i\}}(y)=1$ and $\chi_{\{\varepsilon_{i+1}<|x-y|\le\varepsilon_i\}}(y)=0$, or vice versa. Therefore, we need to consider the following four cases:

(i) $\varepsilon_{i+1}<|\xi-y|\le\varepsilon_i$ and $|x-y|\le\varepsilon_{i+1}$;
(ii) $\varepsilon_{i+1}<|\xi-y|\le\varepsilon_i$ and $|x-y|>\varepsilon_i$;
(iii) $\varepsilon_{i+1}<|x-y|\le\varepsilon_i$ and $|\xi-y|\le\varepsilon_{i+1}$;
(iv) $\varepsilon_{i+1}<|x-y|\le\varepsilon_i$ and $|\xi-y|>\varepsilon_i$.

In case (i) we have $\varepsilon_{i+1}<|\xi-y|\le|x-\xi|+|x-y|\le\sqrt{n}\ell_Q+\varepsilon_{i+1}$. The other cases are similar, and we conclude: in case (ii), $\varepsilon_i<|x-y|\le\sqrt{n}\ell_Q+\varepsilon_i$; in case (iii), $\varepsilon_{i+1}<|x-y|\le\sqrt{n}\ell_Q+\varepsilon_{i+1}$; in case (iv), $\varepsilon_i<|\xi-y|\le\sqrt{n}\ell_Q+\varepsilon_i$.
Therefore, using $|K(x,y)|\le C/|x-y|^n$, we obtain
$$ \int_{\mathbb{R}^n} \big|\chi_{\{\varepsilon_{i+1}<|\xi-y|\le\varepsilon_i\}}(y)-\chi_{\{\varepsilon_{i+1}<|x-y|\le\varepsilon_i\}}(y)\big|\,|K(x,y)||f(y)|\chi_{3Q_0\setminus B_x}(y)\,dy $$
$$ \le C\int_{\mathbb{R}^n} \chi_{\{\varepsilon_{i+1}<|\xi-y|\le\varepsilon_i\}}(y)\,\chi_{\{\varepsilon_{i+1}<|\xi-y|\le\varepsilon_{i+1}+\sqrt{n}\ell_Q\}}(y)\,\frac{|f(y)|}{|x-y|^n}\chi_{3Q_0\setminus B_x}(y)\,dy + C\int_{\mathbb{R}^n} \chi_{\{\varepsilon_{i+1}<|\xi-y|\le\varepsilon_i\}}(y)\,\chi_{\{\varepsilon_i<|x-y|\le\varepsilon_i+\sqrt{n}\ell_Q\}}(y)\,\frac{|f(y)|}{|x-y|^n}\chi_{3Q_0\setminus B_x}(y)\,dy $$
$$ \quad + C\int_{\mathbb{R}^n} \chi_{\{\varepsilon_{i+1}<|x-y|\le\varepsilon_i\}}(y)\,\chi_{\{\varepsilon_{i+1}<|x-y|\le\varepsilon_{i+1}+\sqrt{n}\ell_Q\}}(y)\,\frac{|f(y)|}{|x-y|^n}\chi_{3Q_0\setminus B_x}(y)\,dy + C\int_{\mathbb{R}^n} \chi_{\{\varepsilon_{i+1}<|x-y|\le\varepsilon_i\}}(y)\,\chi_{\{\varepsilon_i<|\xi-y|\le\varepsilon_i+\sqrt{n}\ell_Q\}}(y)\,\frac{|f(y)|}{|x-y|^n}\chi_{3Q_0\setminus B_x}(y)\,dy =: I_{21}+I_{22}+I_{23}+I_{24}. $$
Observe that $\sqrt{n}\ell_Q\ge\varepsilon_{i+1}$ implies $I_{21}=I_{23}=0$, and $I_{22}=I_{24}=0$ if $\sqrt{n}\ell_Q\ge\varepsilon_i$. Using $c_0|x-y|\le|\xi-y|\le c_1|x-y|$ with constants $c_0,c_1>0$, we may assume that $c_0<1$ and $c_1>1$; otherwise $I_{22}=I_{24}=0$. For $1<t<\min(r',2)$, by the Hölder inequality we get
$$ I_{21} \le C\Big( \int_{\mathbb{R}^n} \chi_{\{\varepsilon_{i+1}<|\xi-y|\le\varepsilon_i\}}(y)\,|f(y)|^t\chi_{3Q_0\setminus B_x}(y)\frac{dy}{|x-y|^{nt}} \Big)^{1/t}\,\big[(\sqrt{n}\ell_Q+\varepsilon_{i+1})^n-\varepsilon_{i+1}^n\big]^{1/t'}, $$
$$ I_{22} \le C\Big( \int_{\mathbb{R}^n} \chi_{\{\max\{\varepsilon_{i+1},2\varepsilon_i/3\}<|\xi-y|\le\varepsilon_i\}}(y)\,|f(y)|^t\chi_{3Q_0\setminus B_x}(y)\frac{dy}{|x-y|^{nt}} \Big)^{1/t}\,\big[(\sqrt{n}\ell_Q+\varepsilon_i)^n-\varepsilon_i^n\big]^{1/t'}, $$
$$ I_{23} \le C\Big( \int_{\mathbb{R}^n} \chi_{\{\varepsilon_{i+1}<|x-y|\le\varepsilon_i\}}(y)\,|f(y)|^t\chi_{3Q_0\setminus B_x}(y)\frac{dy}{|x-y|^{nt}} \Big)^{1/t}\,\big[(\sqrt{n}\ell_Q+\varepsilon_{i+1})^n-\varepsilon_{i+1}^n\big]^{1/t'}, $$
$$ I_{24} \le C\Big( \int_{\mathbb{R}^n} \chi_{\{\max\{\varepsilon_{i+1},3\varepsilon_i/4\}<|x-y|\le\varepsilon_i\}}(y)\,|f(y)|^t\chi_{3Q_0\setminus B_x}(y)\frac{dy}{|x-y|^{nt}} \Big)^{1/t}\,\big[(\sqrt{n}\ell_Q+\varepsilon_i)^n-\varepsilon_i^n\big]^{1/t'}. $$
Note that since $\sqrt{n}\ell_Q<\varepsilon_{i+1}$, we have
$$ (\sqrt{n}\ell_Q+\varepsilon_{i+1})^n-\varepsilon_{i+1}^n \le C(\max\{\sqrt{n}\ell_Q,\varepsilon_{i+1}\})^{n-1}\sqrt{n}\ell_Q \le C\varepsilon_{i+1}^{n-1}\sqrt{n}\ell_Q. $$
Then
$$ I_{21} \le C_{n,r'}\frac{[(\sqrt{n}\ell_Q+\varepsilon_{i+1})^n-\varepsilon_{i+1}^n]^{1/t'}}{\varepsilon_{i+1}^{(n-1)/t'}}\Big( \int_{\mathbb{R}^n} \chi_{\{\varepsilon_{i+1}<|\xi-y|\le\varepsilon_i\}}(y)\frac{|f(y)|^t\chi_{3Q_0\setminus B_x}(y)}{|x-y|^{n+t-1}}\,dy \Big)^{1/t} \le C_{n,r'}(\sqrt{n}\ell_Q)^{1/t'}\Big( \int_{\mathbb{R}^n} \chi_{\{\varepsilon_{i+1}<|\xi-y|\le\varepsilon_i\}}(y)\frac{|f(y)|^t\chi_{3Q_0\setminus B_x}(y)}{|x-y|^{n+t-1}}\,dy \Big)^{1/t}. $$
Similarly,
$$ I_{22} \le C_{n,r'}(\sqrt{n}\ell_Q)^{1/t'}\Big( \int_{\mathbb{R}^n} \chi_{\{\varepsilon_{i+1}<|\xi-y|\le\varepsilon_i\}}(y)\frac{|f(y)|^t\chi_{3Q_0\setminus B_x}(y)}{|x-y|^{n+t-1}}\,dy \Big)^{1/t}, $$
$$ I_{23} \le C_{n,r'}(\sqrt{n}\ell_Q)^{1/t'}\Big( \int_{\mathbb{R}^n} \chi_{\{\varepsilon_{i+1}<|x-y|\le\varepsilon_i\}}(y)\frac{|f(y)|^t\chi_{3Q_0\setminus B_x}(y)}{|x-y|^{n+t-1}}\,dy \Big)^{1/t}, $$
$$ I_{24} \le C_{n,r'}(\sqrt{n}\ell_Q)^{1/t'}\Big( \int_{\mathbb{R}^n} \chi_{\{\varepsilon_{i+1}<|x-y|\le\varepsilon_i\}}(y)\frac{|f(y)|^t\chi_{3Q_0\setminus B_x}(y)}{|x-y|^{n+t-1}}\,dy \Big)^{1/t}. $$
Consequently,
$$ I_2 \le C_{n,r'}(\sqrt{n}\ell_Q)^{1/t'}\Big\| \Big\{ \Big( \int_{\mathbb{R}^n} \chi_{\{\varepsilon_{i+1}<|\xi-y|\le\varepsilon_i\}}(y)\frac{|f(y)|^t\chi_{3Q_0\setminus B_x}(y)}{|x-y|^{n+t-1}}\,dy \Big)^{1/t} \Big\}_\beta \Big\|_{F_\rho} + C_{n,r'}(\sqrt{n}\ell_Q)^{1/t'}\Big\| \Big\{ \Big( \int_{\mathbb{R}^n} \chi_{\{\varepsilon_{i+1}<|x-y|<\varepsilon_i\}}(y)\frac{|f(y)|^t\chi_{\mathbb{R}^n\setminus3Q}(y)}{|x-y|^{n+t-1}}\,dy \Big)^{1/t} \Big\}_\beta \Big\|_{F_\rho} =: D_1 + D_2. $$
A direct computation shows that
$$ \Big\| \Big\{ \Big( \int_{\mathbb{R}^n} \chi_{\{\varepsilon_{i+1}<|\xi-y|\le\varepsilon_i\}}(y)\frac{\chi_{3Q_0\setminus B_x}(y)}{|x-y|^{n+t-1}}|f(y)|^t\,dy \Big)^{1/t} \Big\}_\beta \Big\|_{F_\rho} = \sup_{\{\varepsilon_i\}}\Big( \sum_i \Big( \int_{\mathbb{R}^n} \chi_{\{\varepsilon_{i+1}<|\xi-y|\le\varepsilon_i\}}(y)\frac{\chi_{3Q_0\setminus B_x}(y)}{|x-y|^{n+t-1}}|f(y)|^t\,dy \Big)^{\rho/t} \Big)^{1/\rho} \le \Big( \int_{3Q_0\setminus B_x} \frac{|f(y)|^t}{|x-y|^{n+t-1}}\,dy \Big)^{1/t} \le C_{n,r'}(\sqrt{n}\ell_Q)^{-1/t'}M_{r'}(f)(x). $$
This implies that $D_1 \le C_{n,r'}M_{r'}(f)(x)$. Similarly, we can also deduce that $D_2 \le C_{n,r'}M_{r'}(f)(x)$. Hence $I_2 \le C_{n,r'}M_{r'}(f)(x)$, which, together with the estimate of $I_1$, implies that
$$ (3.4)\qquad III = |V_\rho(Tf\chi_{3Q_0\setminus B_x})(\xi) - V_\rho(Tf\chi_{3Q_0\setminus B_x})(x)| \le C_{n,r'}M_{r'}f(x). $$
It was proved in [49] that $V_\rho(T)$ is of strong type $(p,p)$ for $p>r'$ and of weak type $(1,1)$; then, by interpolation,
$$ \|V_\rho(T)\|_{L^{r'}\to L^{r'}} \le C(n,r',q_0,\|V_\rho(T)\|_{L^{q_0}\to L^{q_0}}). $$
Then by (3.2)-(3.4), we get the desired result for the case $1<r<\infty$.

Now we turn to the case $r=\infty$, employing the idea in [18]. Since $K\in\mathcal{H}_\infty$, for some $r_0\in(1,q_0)$ we know that $K\in\mathcal{H}_{r_0'}$; hence we can apply (3.2)-(3.4) to get that
$$ \|\mathcal{M}_{V_\rho(T),Q_0}\|_{L^{q_0}\to L^{q_0}} \le C(n,q_0,\|V_\rho(T)\|_{L^{q_0}\to L^{q_0}}). $$
For any $f\in L^1(\mathbb{R}^n)$ and $\lambda>0$, we apply the Calderón-Zygmund decomposition to $f$ at height $\lambda$, obtaining $f = g+b$ such that

(P1) $|g(x)|\le 2^n\lambda$ for a.e. $x\in\mathbb{R}^n$ and $\|g\|_{L^1(\mathbb{R}^n)}\le\|f\|_{L^1(\mathbb{R}^n)}$;
(P2) $b=\sum_j b_j$, ${\rm supp}(b_j)\subset Q_j$, and $\{Q_j\}\subset\mathcal{D}(\mathbb{R}^n)$ is a pairwise disjoint family of cubes, where $\mathcal{D}(\mathbb{R}^n)$ is the family of dyadic cubes in $\mathbb{R}^n$;
(P3) for every $j$, $\int_{\mathbb{R}^n}b_j(x)\,dx=0$ and $\|b_j\|_{L^1(\mathbb{R}^n)}\le 2^{n+1}\lambda|Q_j|$;
(P4) $\sum_j|Q_j|\le\|f\|_{L^1(\mathbb{R}^n)}/\lambda$.

By the sublinearity of $\mathcal{M}_{V_\rho(T),Q_0}$, we have
$$ |\{x\in\mathbb{R}^n : \mathcal{M}_{V_\rho(T),Q_0}f(x)>\lambda\}| \le |\{x\in\mathbb{R}^n : \mathcal{M}_{V_\rho(T),Q_0}g(x)>\lambda/2\}| + |\{x\in\mathbb{R}^n : \mathcal{M}_{V_\rho(T),Q_0}b(x)>\lambda/2\}|. $$
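The stopping-time construction behind (P1)-(P4) is short to implement. The following one-dimensional Python sketch selects the maximal dyadic subintervals of $[0,1)$ on which the average of $|f|$ exceeds $\lambda$; $g$ and $b=\sum_j b_j$ are then obtained by averaging $f$ on the selected cubes. The sample function and the `max_depth` cutoff are assumptions for illustration.

```python
# A minimal 1-D sketch of the dyadic Calderon-Zygmund selection at height
# lambda: return the maximal dyadic intervals with average(|f|) > lambda;
# b_j = (f - <f>_{Q_j}) chi_{Q_j} then gives the decomposition f = g + b.
import numpy as np

def average(f, a, b, pts=4096):
    x = np.linspace(a, b, pts, endpoint=False) + (b - a) / (2 * pts)
    return np.abs(f(x)).mean()

def cz_cubes(f, lam, a=0.0, b=1.0, max_depth=20):
    """Maximal dyadic subintervals of [a, b) with average(|f|) > lam."""
    if average(f, a, b) > lam:     # assumes <|f|> <= lam on the root interval
        return [(a, b)]
    if max_depth == 0:
        return []
    m = 0.5 * (a + b)
    out = []
    for (lo, hi) in ((a, m), (m, b)):
        if average(f, lo, hi) > lam:
            out.append((lo, hi))   # maximal by construction: stop here
        else:
            out.extend(cz_cubes(f, lam, lo, hi, max_depth - 1))
    return out

f = lambda x: 1.0 / np.sqrt(np.abs(x - 0.3) + 1e-12)   # an integrable spike
cubes = cz_cubes(f, lam=8.0)
print("selected cubes:", cubes)
print("total length (cf. (P4)):", sum(h - l for l, h in cubes))
```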
The $L^{q_0}$-boundedness of $\mathcal{M}_{V_\rho(T),Q_0}$ above, together with (P1), implies that
$$ |\{x\in\mathbb{R}^n : \mathcal{M}_{V_\rho(T),Q_0}g(x)>\lambda/2\}| \le C(n,q_0,\|V_\rho(T)\|_{L^{q_0}\to L^{q_0}})\frac1\lambda\|f\|_{L^1(\mathbb{R}^n)}. $$
Denote $\widetilde{Q} := \bigcup_j 25\sqrt{n}Q_j$; by (P4), we see that
$$ |\{x\in\mathbb{R}^n : \mathcal{M}_{V_\rho(T),Q_0}b(x)>\lambda/2\}| \le |\widetilde{Q}| + |\{x\in\mathbb{R}^n\setminus\widetilde{Q} : \mathcal{M}_{V_\rho(T),Q_0}b(x)>\lambda/2\}| \le C_n\frac1\lambda\|f\|_{L^1(\mathbb{R}^n)} + |\{x\in\mathbb{R}^n\setminus\widetilde{Q} : \mathcal{M}_{V_\rho(T),Q_0}b(x)>\lambda/2\}|. $$
Note that for any $x\in\mathbb{R}^n$,
$$ \mathcal{M}_{V_\rho(T),Q_0}b(x) \le CMb(x) + \widetilde{\mathcal{M}}_{V_\rho(T),Q_0}b(x), $$
where
$$ \widetilde{\mathcal{M}}_{V_\rho(T),Q_0}f(x) := \begin{cases} \displaystyle\sup_{Q\ni x,\,Q\subset Q_0}\operatorname*{ess\,sup}_{\xi\in Q} |V_\rho(Tf\chi_{3Q_0\setminus B_x})(\xi)|, & x\in Q_0,\\ 0, & \text{otherwise}. \end{cases} $$
Hence, we only need to show the weak type $(1,1)$ of $\widetilde{\mathcal{M}}_{V_\rho(T),Q_0}$. Let $I_i := (\varepsilon_{i+1},\varepsilon_i]$ and $A_{I_i}(\xi) := \{y\in\mathbb{R}^n : |\xi-y|\in I_i\}$. For $x\in Q_0\setminus\widetilde{Q}$, choose $Q\ni x$, $\xi\in Q$ and $\{\varepsilon_i\}_i$ such that
$$ \widetilde{\mathcal{M}}_{V_\rho(T),Q_0}b(x) \le 2\Big( \sum_i \Big| \sum_j T(\chi_{A_{I_i}(\xi)}\chi_{3Q_0\setminus B_x}b_j)(\xi) \Big|^\rho \Big)^{1/\rho}. $$
Let us consider the following three sets of indices $j$:
$$ L^1_{I_i}(\xi) := \{j : Q_j\subset A_{I_i}(\xi)\cap(3Q_0\setminus B_x)\}, $$
$$ L^2_{I_i}(\xi) := \{j : Q_j\not\subset A_{I_i}(\xi)\cap(3Q_0\setminus B_x),\ Q_j\cap A_{I_i}(\xi)\cap(3Q_0\setminus B_x)\ne\emptyset,\ Q_j\cap\partial(3Q_0)\ne\emptyset\}, $$
$$ L^3_{I_i}(\xi) := \{j : Q_j\not\subset A_{I_i}(\xi)\cap(3Q_0\setminus B_x),\ Q_j\cap A_{I_i}(\xi)\cap(3Q_0\setminus B_x)\ne\emptyset,\ Q_j\cap\partial(B_x)\ne\emptyset \text{ or } Q_j\cap\partial(A_{I_i}(\xi))\ne\emptyset\}. $$
Then we get
$$ \Big( \sum_i \Big| \sum_j T(\chi_{A_{I_i}(\xi)}\chi_{3Q_0\setminus B_x}b_j)(\xi) \Big|^\rho \Big)^{1/\rho} \le \sum_{m=1}^3 \Big( \sum_i \Big| \sum_{j\in L^m_{I_i}(\xi)} T(\chi_{A_{I_i}(\xi)}\chi_{3Q_0\setminus B_x}b_j)(\xi) \Big|^\rho \Big)^{1/\rho} =: \sum_{m=1}^3 L^m(\xi)(x). $$
By (P3), it follows that
$$ L^1(\xi)(x) \le \sum_i \sum_{j\in L^1_{I_i}(\xi)} \int_{\mathbb{R}^n} |K(\xi,y)-K(\xi,y_j)|\,\chi_{A_{I_i}(\xi)}(y)\chi_{3Q_0\setminus B_x}(y)|b_j(y)|\,dy \le \sum_j \int_{\mathbb{R}^n} |K(\xi,y)-K(\xi,y_j)|\,\chi_{3Q_0\setminus B_x}(y)|b_j(y)|\,dy, $$
where $y_j$ is the center of $Q_j$. Note that for $x\in Q_0\setminus\widetilde{Q}$ and $y\in Q_j\cap(3Q_0\setminus B_x)$,
$$ |x-y| \ge |x-y_j| - \sqrt{n}\ell_{Q_j} \ge \tfrac{23}{25}|x-y_j|, \qquad |\xi-y| \ge |x-y| - |x-\xi| \ge |x-y| - \tfrac{|x-y|}{9\sqrt{n}} \ge \tfrac89|x-y|. $$
Then
$$ |\xi-y_j| \ge |\xi-y| - |y-y_j| \ge \tfrac89|x-y| - \sqrt{n}\ell_{Q_j} \ge \tfrac89\cdot\tfrac{23}{25}|x-y_j| - \tfrac{2}{25}|x-y_j| \ge 7\sqrt{n}\ell_{Q_j}. $$
Since $K\in\mathcal{H}_\infty$, by (P3) and (P4), we have
$$ |\{x\in Q_0\setminus\widetilde{Q} : L^1(\xi)(x)>\tfrac{\lambda}{16}\}| \le \frac{16}{\lambda}\sum_j \int_{Q_0\setminus\widetilde{B}_j}\int_{\mathbb{R}^n} |K(\xi,y)-K(\xi,y_j)|\chi_{3Q_0\setminus B_x}(y)|b_j(y)|\,dy\,dx $$
$$ \le C\frac1\lambda\sum_j \int_{\mathbb{R}^n} |b_j(y)|\sum_{k=1}^\infty \int_{2^k\widetilde{B}_j\setminus2^{k-1}\widetilde{B}_j} |K(\xi,y)-K(\xi,y_j)|\chi_{3Q_0\setminus B_x}(y)\,dx\,dy $$
$$ \le C_n\frac1\lambda\sum_j \int_{\mathbb{R}^n} |b_j(y)|\,\Big( \sup_{\widetilde{B}_j}\sup_{y,y_j\in\frac12\widetilde{B}_j}\sum_{k=1}^\infty |2^k\widetilde{B}_j| \operatorname*{ess\,sup}_{\xi\in2^k\widetilde{B}_j\setminus2^{k-1}\widetilde{B}_j} |K(\xi,y)-K(\xi,y_j)| \Big)\,dy \le C_n\frac1\lambda\|f\|_{L^1(\mathbb{R}^n)}, $$
where $\widetilde{B}_j := B(y_j, 7\sqrt{n}\ell_{Q_j})$. By the same arguments as in [18], we can obtain
$$ |\{x\in Q_0\setminus\widetilde{Q} : L^m(\xi)(x)>\tfrac{\lambda}{16}\}| \le C_n\frac1\lambda\|f\|_{L^1(\mathbb{R}^n)}, \quad m = 2, 3. $$
This implies the desired conclusion and completes the proof of Lemma 3.2.
Now we are in a position to prove Theorem 1.3.
Proof of Theorem 1.3. The proof follows the standard steps in [36]. We only prove (1.6), since (1.5) follows a similar pattern. In view of Remark 2.4, there exist $3^n$ dyadic lattices $\mathcal{D}_j$ such that for every cube $Q\subset\mathbb{R}^n$ there is a cube $R_Q\in\mathcal{D}_j$ for some $j$ for which $3Q\subset R_Q$ and $|R_Q|\le 9^n|Q|$. Fix a cube $Q_0\subset\mathbb{R}^n$. We first claim that there exists a $\frac12$-sparse family $\mathcal{F}\subseteq\mathcal{D}(Q_0)$ such that for a.e. $x\in Q_0$,
$$ (3.5)\qquad V_\rho(T_bf\chi_{3Q_0})(x) \le C(n,r',q_0,\|V_\rho(T)\|_{L^{q_0}\to L^{q_0}}) \sum_{Q\in\mathcal{F}} \big( |b(x)-b_{R_Q}|\langle|f|^{r'}\rangle_{3Q}^{1/r'} + \langle|f(b-b_{R_Q})|^{r'}\rangle_{3Q}^{1/r'} \big)\chi_Q(x). $$
For some $\alpha_n>0$, to be determined later, denote $E = E_1\cup E_2$, where
$$ E_1 := \{x\in Q_0 : |f(x)| > \alpha_n\langle|f|^{r'}\rangle_{3Q_0}^{1/r'}\} \cup \{x\in Q_0 : \mathcal{M}_{V_\rho(T),Q_0}(f)(x) > \alpha_n\|\mathcal{M}_{V_\rho(T),Q_0}\|_{L^{r'}\to L^{r',\infty}}\langle|f|^{r'}\rangle_{3Q_0}^{1/r'}\} $$
and
$$ E_2 := \{x\in Q_0 : |f(x)(b(x)-b_{R_{Q_0}})| > \alpha_n\langle|f(b-b_{R_{Q_0}})|^{r'}\rangle_{3Q_0}^{1/r'}\} \cup \{x\in Q_0 : \mathcal{M}_{V_\rho(T),Q_0}(f(b-b_{R_{Q_0}}))(x) > \alpha_n\|\mathcal{M}_{V_\rho(T),Q_0}\|_{L^{r'}\to L^{r',\infty}}\langle|f(b-b_{R_{Q_0}})|^{r'}\rangle_{3Q_0}^{1/r'}\}. $$
By Lemma 3.2,
$$ |E| \lesssim \frac{\int_{Q_0}|f(x)|\,dx}{\alpha_n\langle|f|^{r'}\rangle_{3Q_0}^{1/r'}} + \frac{\int_{Q_0}|f(x)(b(x)-b_{R_{Q_0}})|\,dx}{\alpha_n\langle|f(b-b_{R_{Q_0}})|^{r'}\rangle_{3Q_0}^{1/r'}} + \int_{3Q_0} \frac{|f(x)(b(x)-b_{R_{Q_0}})|^{r'}}{\alpha_n^{r'}\langle|f(b-b_{R_{Q_0}})|^{r'}\rangle_{3Q_0}}\,dx + \int_{3Q_0} \frac{|f(x)|^{r'}}{\alpha_n^{r'}\langle|f|^{r'}\rangle_{3Q_0}}\,dx \le \Big( \frac{2\cdot3^n}{\alpha_n} + \frac{2\cdot3^n}{\alpha_n^{r'}} \Big)|Q_0|. $$
Hence, taking $\alpha_n$ large enough, we deduce that $|E|\le\frac{1}{2^{n+1}}|Q_0|$. Applying the Calderón-Zygmund decomposition to the function $\chi_E$ on $Q_0$ at height $\lambda=\frac{1}{2^{n+1}}$, we obtain pairwise disjoint cubes $P_j$ such that
$$ \chi_E(x) \le \frac{1}{2^{n+1}} \quad \text{for a.e. } x\notin\bigcup_j P_j, $$
which implies $|E\setminus\bigcup_jP_j| = 0$. We also have
$$ \frac{1}{2^{n+1}}|P_j| \le |P_j\cap E| \le \frac12|P_j|, $$
so that $\sum_j|P_j|\le\frac12|Q_0|$ and $P_j\cap E^c\ne\emptyset$. As direct consequences of Lemma 3.2, we have
$$ (3.6)\qquad \operatorname*{ess\,sup}_{\xi\in P_j} V_\rho(Tf\chi_{3Q_0\setminus3P_j})(\xi) \le C(n,r',q_0,\|V_\rho(T)\|_{L^{q_0}\to L^{q_0}})\langle|f|^{r'}\rangle_{3Q_0}^{1/r'}, $$
and
$$ (3.7)\qquad \operatorname*{ess\,sup}_{\xi\in P_j} V_\rho(T(b-b_{R_{Q_0}})f\chi_{3Q_0\setminus3P_j})(\xi) \le C(n,r',q_0,\|V_\rho(T)\|_{L^{q_0}\to L^{q_0}})\langle|f(b-b_{R_{Q_0}})|^{r'}\rangle_{3Q_0}^{1/r'}. $$
By Lemmas 3.1 and 3.2, for almost every $x\in Q_0\setminus\bigcup_jP_j$,
$$ (3.8)\qquad V_\rho(Tf\chi_{3Q_0})(x) \le C(n,r',q_0,\|V_\rho(T)\|_{L^{q_0}\to L^{q_0}})\langle|f|^{r'}\rangle_{3Q_0}^{1/r'}, $$
and
$$ (3.9)\qquad V_\rho(T(b-b_{R_{Q_0}})f\chi_{3Q_0})(x) \le C(n,r',q_0,\|V_\rho(T)\|_{L^{q_0}\to L^{q_0}})\langle|f(b-b_{R_{Q_0}})|^{r'}\rangle_{3Q_0}^{1/r'}. $$
Note that for the pairwise disjoint cubes $P_j$ obtained above,
$$ V_\rho(T_bf\chi_{3Q_0})(x)\chi_{Q_0}(x) \le V_\rho(T_bf\chi_{3Q_0})(x)\chi_{Q_0\setminus\bigcup_jP_j}(x) + \sum_j V_\rho(T_bf\chi_{3P_j})(x)\chi_{P_j}(x) + \sum_j V_\rho(T_bf\chi_{3Q_0\setminus3P_j})(x)\chi_{P_j}(x). $$
Since $V_\rho(T_bf) = V_\rho(T_{b-c}f)$ for any constant $c$, taking $c = b_{R_{Q_0}}$ we have
$$ V_\rho(T_bf\chi_{3Q_0})(x)\chi_{Q_0\setminus\bigcup_jP_j}(x) + \sum_j V_\rho(T_bf\chi_{3Q_0\setminus3P_j})(x)\chi_{P_j}(x) \le |b(x)-b_{R_{Q_0}}|\Big( V_\rho(Tf\chi_{3Q_0})(x)\chi_{Q_0\setminus\bigcup_jP_j}(x) + \sum_j V_\rho(Tf\chi_{3Q_0\setminus3P_j})(x)\chi_{P_j}(x) \Big) $$
$$ \quad + V_\rho(T(b-b_{R_{Q_0}})f\chi_{3Q_0})(x)\chi_{Q_0\setminus\bigcup_jP_j}(x) + \sum_j V_\rho(T(b-b_{R_{Q_0}})f\chi_{3Q_0\setminus3P_j})(x)\chi_{P_j}(x). $$
By (3.6)-(3.9), for a.e. $x\in Q_0$ we get
$$ V_\rho(T_bf\chi_{3Q_0})(x)\chi_{Q_0}(x) \le C(n,r',q_0,\|V_\rho(T)\|_{L^{q_0}\to L^{q_0}})\big( |b(x)-b_{R_{Q_0}}|\langle|f|^{r'}\rangle_{3Q_0}^{1/r'} + \langle|(b-b_{R_{Q_0}})f|^{r'}\rangle_{3Q_0}^{1/r'} \big) + \sum_j V_\rho(T_bf\chi_{3P_j})(x)\chi_{P_j}(x). $$
Iterating the above inequality, we obtain a $\frac12$-sparse family $\mathcal{F} = \{P_j^k\}$ ($k\in\mathbb{Z}_+$) with $\sum_j|P_j|\le\frac12|Q_0|$, where $\{P_j^0\}=\{Q_0\}$, $\{P_j^1\}=\{P_j\}$, and $\{P_j^k\}$ are the cubes obtained at the $k$-th stage of the iterative process. This proves (3.5).
To finish the proof, let us construct cubes $Q_j$ with $\bigcup_jQ_j = \mathbb{R}^n$ and ${\rm supp}\,f\subset 3Q_j$ for every $j$. We begin by taking a cube $Q_0$ such that ${\rm supp}\,f\subset Q_0$, and cover $3Q_0\setminus Q_0$ by $3^n-1$ congruent cubes $Q_j$; for every such $j$, $Q_0\subset 3Q_j$. We continue in the same way for $9Q_0\setminus3Q_0$, and so on. It is easy to check that the union of the cubes $Q_j$ of this process, including $Q_0$, satisfies our requirement. Applying the claim to each $Q_j$, for a.e. $x\in Q_j$,
$$ V_\rho(T_bf)(x)\chi_{Q_j}(x) \le C(n,r',q_0,\|V_\rho(T)\|_{L^{q_0}\to L^{q_0}}) \sum_{Q\in\mathcal{F}_j} \big( |b(x)-b_{R_Q}|\langle|f|^{r'}\rangle_{3Q}^{1/r'} + \langle|f(b-b_{R_Q})|^{r'}\rangle_{3Q}^{1/r'} \big)\chi_Q(x), $$
where $\mathcal{F}_j\subset\mathcal{D}(Q_j)$ is a $\frac12$-sparse family. Setting $\mathcal{F} = \bigcup_j\mathcal{F}_j$, we see that $\mathcal{F}$ is also a $\frac12$-sparse family, and the following estimate holds for a.e. $x\in\mathbb{R}^n$:
$$ V_\rho(T_bf)(x) \le C(n,r',q_0,\|V_\rho(T)\|_{L^{q_0}\to L^{q_0}}) \sum_{Q\in\mathcal{F}} \big( |b(x)-b_{R_Q}|\langle|f|^{r'}\rangle_{3Q}^{1/r'} + \langle|f(b-b_{R_Q})|^{r'}\rangle_{3Q}^{1/r'} \big)\chi_Q(x). $$
Denote $\mathcal{S}_j = \{R_Q\in\mathcal{D}_j : Q\in\mathcal{F}\}$ and note that $\langle|f|^{r'}\rangle_{3Q} \le C_n\langle|f|^{r'}\rangle_{R_Q}$; it yields that
$$ V_\rho(T_bf)(x) \le C(n,r',q_0,\|V_\rho(T)\|_{L^{q_0}\to L^{q_0}}) \sum_{j=1}^{3^n}\sum_{R\in\mathcal{S}_j} \big( |b(x)-b_R|\langle|f|^{r'}\rangle_R^{1/r'} + \langle|f(b-b_R)|^{r'}\rangle_R^{1/r'} \big)\chi_R(x). $$
This completes the proof of Theorem 1.3.

Proof of Theorem 1.5. The proof directly follows from (1.5) and (2.1)-(2.2).

4. Proofs of Theorems 1.9 and 1.10
Lemma 4.1. If ω ∈ A p/r ′ with 1 ≤ r ′ < ∞, then there exists a constant s > 1 with s ′ ≤ c n,p [ω] A∞ such that [ω] A p/(r ′ s) [ω] A p/r ′ . Proof. Take θ = 1 τn[ω] A∞ , by Lemma 2.1, we have 1 |Q|ˆQ ω(x) (1−(p/r ′ ) ′ )(1+θ) dx p/r ′ −1 1+θ ≤ 2 p−1 1 |Q|ˆQ ω(x) 1−(p/r ′ ) ′ dx p/r ′ −1 .
From this, we obtain
1 |Q|ˆQ ω(x)dx 1 |Q|ˆQ ω(x) (1−(p/r ′ ) ′ )(1+θ) dx p/r ′ −1 1+θ ≤ 2 p−1 1 |Q|ˆQ ω(x)dx 2 p−1 1 |Q|ˆQ ω(x) 1−(p/r ′ ) ′ dx p/r ′ −1 .
Choosing p/(r ′ s) − 1 = p/r ′ −1 1+θ , then s ′ ≤ c n,p [ω] A∞ and [ω]
A p/(r ′ s) ≤ 2 p−1 [ω] A p/r ′ .
Proof of Theorem 1.9. Instead of using the conjugation method, which relies on the sparse bounds (1.5). We use the sparse bounds (1.6). Let us denote [28]) and the generalized Hölder inequality (2.3),
A 1 S j ,b f (x) = R∈S j |b(x) − b R | |f | r ′ 1/r ′ R χ R (x), A 2 S j ,b f (x) = R∈S j |f (b − b Q )| r ′ 1/r ′ R χ R (x). First, we consider A 1 S j ,b , using b − b R exp L(ω),Q ≤ c n [ω] A∞ b BMO (seeA 1 S j ,b f L p (ω) = sup g L p ′ (ω) ≤1ˆR n A 1 S j ,b f (x)g(x)ω(x)dx = sup g L p ′ (ω) ≤1 R∈S j 1 ω(R)ˆR |b(x) − b R |g(x)ω(x)dxω(R) |f | r ′ 1/r ′ R ≤ sup g L p ′ (ω) ≤1 R∈S j b − b R exp L(ω),R g L(log L)(ω),R ω(R) |f | r ′ 1/r ′ R ≤ c n [ω] A∞ b BMO sup g L p ′ (ω) ≤1 R∈S j g L(log L)(ω),R ω(R) |f | r ′ 1/r ′ R .
Let B be the family of the principal cubes in the usual sense, i.e., B = ∞ k=0 B k , where B 0 := {maximal cubes in S j } and
B k+1 := B∈B k ch B (B), where ch B (B) = {R ⊆ B maximal s.t. τ (R) > 2τ (B)},
where τ (R) = g L(log L)(ω),R |f | r ′ 1/r ′ R . Then
$$ \sum_{R\in\mathcal{S}_j} \|g\|_{L(\log L)(\omega),R}\,\omega(R)\langle|f|^{r'}\rangle_R^{1/r'} \le \sum_{B\in\mathcal{B}} \|g\|_{L(\log L)(\omega),B}\langle|f|^{r'}\rangle_B^{1/r'}\sum_{R\in\mathcal{S}_j,\,\pi(R)=B}\omega(R) \le c_n[\omega]_{A_\infty}\sum_{B\in\mathcal{B}} \|g\|_{L(\log L)(\omega),B}\langle|f|^{r'}\rangle_B^{1/r'}\omega(B) $$
$$ \le c_n[\omega]_{A_\infty}\int_{\mathbb{R}^n} M_{r'}f(x)\,M_{L\log L(\omega)}g(x)\,\omega(x)\,dx \le c_n[\omega]_{A_\infty}\int_{\mathbb{R}^n} M_{r'}f(x)\,M_\omega^2g(x)\,\omega(x)\,dx, $$
where $\pi(R)$ is the minimal principal cube containing $R$. Hence, using the sharp bounds for $M_{r'}$ (see [27]) and $\|M_\omega g\|_{L^{p'}(\omega)} \le c_{n,p}\|g\|_{L^{p'}(\omega)}$ (see [27]), we obtain
$$ \|\mathcal{A}^1_{\mathcal{S}_j,b}f\|_{L^p(\omega)} \le c_n[\omega]^2_{A_\infty}\|b\|_{BMO}\sup_{\|g\|_{L^{p'}(\omega)}\le1}\int_{\mathbb{R}^n} M_{r'}f(x)M_\omega^2g(x)\omega(x)\,dx \le c_n[\omega]^2_{A_\infty}\|b\|_{BMO}\|M_{r'}f\|_{L^p(\omega)}\sup_{\|g\|_{L^{p'}(\omega)}\le1}\|M_\omega^2g\|_{L^{p'}(\omega)} $$
$$ \le c_{n,p}[\omega]^2_{A_\infty}\big( [\omega]_{A_{p/r'}}[\omega^{-\frac{r'}{p-r'}}]_{A_\infty} \big)^{1/p}\|b\|_{BMO}\|f\|_{L^p(\omega)}. $$
Now we turn to $\mathcal{A}^2_{\mathcal{S}_j,b}f$.
. Applying the John-Nirenberg theorem, we deduce that
1 |R|ˆR |b(x) − b R | r ′ s ′ 1/(r ′ s ′ ) = r ′ s ′ˆ∞ 0 t r ′ s ′ −1 |R| −1 |{x ∈ R : |b(x) − b R | > t}|dt 1/(r ′ s ′ ) ≤ e 2 ˆ∞ 0 t r ′ s ′ −1 e −t 2 n e b BMO dt 1/(r ′ s ′ ) ,
where the last inequality is due to the fact that (r ′ s ′ ) 1/(r ′ s ′ ) ≤ e. Since
t r ′ s ′ −1 e −t 2 n+1 e b BMO ≤ (2 n+1 e) r ′ s ′ −1 (r ′ s ′ − 1) r ′ s ′ −1 b r ′ s ′ −1 BMO , we have 1 |R|ˆR |b(x) − b R | r ′ s ′ 1/(r ′ s ′ ) ≤ e 2 ˆ∞ 0 (2 n+1 e) r ′ s ′ −1 (r ′ s ′ − 1) r ′ s ′ −1 b r ′ s ′ −1 BMO e −t 2 n+1 e b BMO dt 1/(r ′ s ′ ) (4.1) = 2 n+1 e 3 r ′ s ′ b BMO .
As we did for $\mathcal{A}^1_{\mathcal{S}_j,b}f$, by (4.1) and taking $s'$ as in Lemma 4.1, we get
$$ \|\mathcal{A}^2_{\mathcal{S}_j,b}f\|_{L^p(\omega)} = \sup_{\|g\|_{L^{p'}(\omega)}\le1}\int_{\mathbb{R}^n} \mathcal{A}^2_{\mathcal{S}_j,b}f(x)g(x)\omega(x)\,dx = \sup_{\|g\|_{L^{p'}(\omega)}\le1}\sum_{R\in\mathcal{S}_j} \frac{1}{\omega(R)}\int_R g(x)\omega(x)\,dx\;\omega(R)\langle|f(b-b_R)|^{r'}\rangle_R^{1/r'} $$
$$ \le \sup_{\|g\|_{L^{p'}(\omega)}\le1}\sum_{R\in\mathcal{S}_j} \frac{1}{\omega(R)}\int_R g(x)\omega(x)\,dx\;\omega(R)\langle|b-b_R|^{r's'}\rangle_R^{1/(r's')}\|f\|_{L^{r's},R} \le c_{n,p,r'}\|b\|_{BMO}[\omega]^2_{A_\infty}\sup_{\|g\|_{L^{p'}(\omega)}\le1}\int_{\mathbb{R}^n} M_\omega g(x)M_{r's}f(x)\omega(x)\,dx. $$
Now, we use the following results given in [24,27]:
$$ \|Mf\|_{L^p(\omega)} \le 4ep'\big( [\omega]_{A_p}[\omega^{-1/(p-1)}]_{A_\infty} \big)^{1/p}\|f\|_{L^p(\omega)}, \quad \omega\in A_p, \qquad [\omega^{-\frac{1}{p-1}}]_{A_\infty} \le [\omega]_{A_p}^{\frac{1}{p-1}}, \quad \omega\in A_p. $$
Then, by Lemma 4.1 and $(p/(r's))' \le \frac{p+r'}{p-r'}$,
$$ \|M_{r's}f\|_{L^p(\omega)} = \|M(|f|^{r's})\|_{L^{p/(r's)}(\omega)}^{1/(r's)} \le \Big[ 4e(p/(r's))'\big( [\omega]_{A_{p/(r's)}}[\omega^{-\frac{1}{p/(r's)-1}}]_{A_\infty} \big)^{r's/p} \Big]^{1/(r's)}\||f|^{r's}\|_{L^{p/(r's)}(\omega)}^{1/(r's)} $$
$$ \lesssim [(p/(r's))']^{1/(r's)}\big( [\omega]_{A_{p/r'}}[\omega]_{A_{p/r'}}^{(p/(r's))'-1} \big)^{1/p}\|f\|_{L^p(\omega)} \lesssim [\omega]_{A_{p/r'}}^{\frac{(p/(r's))'}{p}}\|f\|_{L^p(\omega)} \le [\omega]_{A_{p/r'}}^{\frac{p+r'}{p(p-r')}}\|f\|_{L^p(\omega)}. $$
Therefore,
$$ \|\mathcal{A}^2_{\mathcal{S}_j,b}f\|_{L^p(\omega)} \le c_{n,p,r'}[\omega]^2_{A_\infty}\|b\|_{BMO}\sup_{\|g\|_{L^{p'}(\omega)}\le1}\int_{\mathbb{R}^n} M_{r's}f(x)M_\omega g(x)\omega(x)\,dx \le c_{n,p,r'}[\omega]^2_{A_\infty}\|b\|_{BMO}\|M_{r's}f\|_{L^p(\omega)}\sup_{\|g\|_{L^{p'}(\omega)}\le1}\|M_\omega g\|_{L^{p'}(\omega)} \le c_{n,p,r'}[\omega]^2_{A_\infty}[\omega]_{A_{p/r'}}^{\frac{p+r'}{p(p-r')}}\|b\|_{BMO}\|f\|_{L^p(\omega)}. $$
This, together with the estimate of $\|\mathcal{A}^1_{\mathcal{S}_j,b}f\|_{L^p(\omega)}$, yields
$$ \|V_\rho(T_bf)\|_{L^p(\omega)} \le C(n,r',p,q_0,\|V_\rho(T)\|_{L^{q_0}\to L^{q_0}})[\omega]^2_{A_\infty}\Big( [\omega]_{A_{p/r'}}^{\frac{p+r'}{p(p-r')}} + \big( [\omega]_{A_{p/r'}}[\omega^{-\frac{r'}{p-r'}}]_{A_\infty} \big)^{1/p} \Big)\|b\|_{BMO}\|f\|_{L^p(\omega)}. $$
Theorem 1.9 is proved.
Proof of Theorem 1.10. The case $r'=1$ was obtained in [28]; here we extend the result to $1\le r'<\infty$. We will apply Lemma 2.6. Choose $\psi_j(t) = t^{r'}[\log(e+t)]^{r'j}$, $j=0,1$; it is not hard to see that the $\psi_j$ satisfy the conditions of Lemma 2.6. For $j=0$,
$$ \int_1^\infty \frac{\varphi_0^{-1}\circ\Phi_1^{-1}(t)\,(\log(e+t))^{4r'}}{t^2(\log(e+t))^4}\,dt = \int_{\Phi_1^{-1}(1)}^\infty \frac{\varphi_0^{-1}(t)\,\Phi_1'(t)\,(\log(e+\Phi_1(t)))^{4r'}}{\Phi_1^2(t)\,(\log(e+\Phi_1(t)))^4}\,dt \le \int_{\Phi_1^{-1}(1)}^\infty \frac{\varphi_0^{-1}(t)\,[\log(e+\Phi_1(t))]^{4r'-4}}{t^2\log(e+t)}\,dt. $$
Take $\varphi_0(t) = t[\log(e+t)]^{4r'-4}[\log(e+\log(e+t))]^{1+\varepsilon}$; then the last integral is bounded by $c/\varepsilon$. For $j=1$, noticing that
$$ \int_1^\infty \frac{\varphi_1^{-1}(t)\,[\log(e+(\log(e+t))^2)]^{r'}}{t^2[\log(e+t)]^{3-2r'}}\,dt \lesssim \int_1^\infty \frac{\varphi_1^{-1}(t)}{t^2[\log(e+t)]^{3-3r'}}\,dt, $$
and taking $\varphi_1(t) = t(\log(e+t))^{3r'-2}[\log(e+\log(e+t))]^{1+\varepsilon}$, we have
$$ \int_1^\infty \frac{\varphi_1^{-1}(t)\,[\log(e+(\log(e+t))^2)]^{r'}}{t^2[\log(e+t)]^{3-2r'}}\,dt \lesssim \int_1^\infty \frac{dt}{t\log(e+t)[\log(e+\log(e+t))]^{1+\varepsilon}} \lesssim \frac1\varepsilon. $$
Using that $\log(e+\varphi_0(t)) \lesssim \log(e+t)$, we deduce that
$$ \Phi_1\circ\varphi_0(t) \lesssim \varphi_0(t)\log(e+t) = t(\log(e+t))^{4r'-3}[\log(e+\log(e+t))]^{1+\varepsilon}. $$
Combining this with $\Phi_0\circ\varphi_1(t) = t(\log(e+t))^{3r'-2}[\log(e+\log(e+t))]^{1+\varepsilon}$ and Lemma 2.6, we conclude the first estimate of the theorem. Similarly, taking $\varphi_0(t) = t(\log(e+t))^{4r'-4+\varepsilon}$ and $\varphi_1(t) = t(\log(e+t))^{3r'-2+\varepsilon}$, we get the second estimate. Finally, choosing $\varepsilon = \frac{1}{\log(e+[\omega]_{A_\infty})}$, we have $[\omega]_{A_\infty}^\varepsilon \le e$, which implies the $A_\infty$ conclusion (for $\omega\in A_1$ one also uses $M\omega \le [\omega]_{A_1}\omega$). This, together with (4.2), completes the proof of Theorem 1.10.

5. Concluding result

In this last section, we apply the sparse domination obtained in Theorem 1.3 to present the local exponential decay estimates of the variation operators. Let us recall some background before stating our contribution. It is known that Coifman and Fefferman applied the good-$\lambda$ technique to obtain
$$ \|T^*f\|_{L^p(\omega)} \le c\,\|Mf\|_{L^p(\omega)}, $$
where $T^*$ is the maximal Calderón-Zygmund operator and $\omega\in A_\infty$. The method relied heavily on the estimate
$$ |\{x\in\mathbb{R}^n : T^*f(x)>2\lambda,\ Mf(x)\le\lambda\gamma\}| \le c\gamma\,|\{x\in\mathbb{R}^n : T^*f(x)>\lambda\}|. $$
To show the above estimate, it suffices to study the following local estimate:
$$ |\{x\in Q : T^*f(x)>2\lambda,\ Mf(x)\le\lambda\gamma\}| \le c\gamma|Q|, $$
where $Q$ is a Whitney cube and ${\rm supp}\,f\subset Q$. In 1993, Buckley [7] obtained an exponential decay in $\gamma$ in studying the quantitative weighted estimates for Calderón-Zygmund operators:
$$ (5.1)\qquad |\{x\in Q : T^*f(x)>2\lambda,\ Mf(x)\le\lambda\gamma\}| \le ce^{-c/\gamma}|Q|. $$
Based on the result above, it follows that
$$ \|Tf\|_{L^p(\omega)} \le cp[\omega]_{A_\infty}\|Mf\|_{L^p(\omega)}, $$
which is also a key to the $L\log L$ estimate obtained in [35]. Later on, Karagulyan [31] improved (5.1) by giving the estimate
$$ |\{x\in Q : T^*f(x) > tMf(x)\}| \le ce^{-\alpha t}|Q|. $$
The above estimate was then extended to other operators by Ortiz-Caraballo, Pérez and Rela in [44]. We also refer readers to [10] for its application to the quantitative C p estimate for Calderón-Zygmund operators. Now, by Theorem 1.3 and employing the arguments in [11,28], we may obtain:
Theorem 5.1. Let $1<r\le\infty$, $\rho>2$, $b\in BMO(\mathbb{R}^n)$. Assume that $K(x,y)\in\mathcal{H}_r$ and satisfies (1.4). Let $T$ and $T_b$ be given by (1.2) and (1.3), respectively. If $V_\rho(T)$ is bounded on $L^{q_0}(\mathbb{R}^n,dx)$ for some $1<q_0<\infty$, then for ${\rm supp}\,f\subset Q$, there exist constants $c_1,c_2,c_3$ and $c_4$ such that
$$ |\{x\in Q : V_\rho(Tf)(x) > tM_{r'}(f)(x)\}| \le c_1e^{-c_2t}|Q|, $$
$$ |\{x\in Q : V_\rho(T_bf)(x) > tM_{L^{r'}(\log L)^{r'}}(f)(x)\}| \le c_3e^{-\sqrt{c_4t/\|b\|_{BMO}}}|Q|. $$
This provides an extension of the corresponding results for singular integrals and commutators in [28,44] to the variation operators. To the best of our knowledge, these results are completely new, since no local exponential decay estimates for variation operators have been considered before.
References

[1] N. Accomazzo, J.C. Martínez-Perales and I.P. Rivera-Ríos, On Bloom type estimates for iterated commutators of fractional integrals, Indiana Univ. Math. J. (to appear).
[2] M. Akcoglu, R.L. Jones and P. Schwartz, Variation in probability, ergodic theory and analysis, Illinois J. Math. 42(1) (1998), 154-177.
[3] F. Bernicot, D. Frey and S. Petermichl, Sharp weighted norm estimates beyond Calderón-Zygmund theory, Anal. PDE 9(5) (2016), 1079-1113.
[4] J.J. Betancor, J.C. Fariña, E. Harboure and L. Rodríguez-Mesa, L^p-boundedness properties of variation operators in the Schrödinger setting, Rev. Mat. Complut. 26(2) (2013), 485-534.
[5] J. Bourgain, Pointwise ergodic theorems for arithmetic sets, Publ. Math. Inst. Hautes Études Sci. 69(1) (1989), 5-45.
[6] J. Bourgain, M. Mirek, E.M. Stein and B. Wróbel, On dimension-free variational inequalities for averaging operators in R^d, Geom. Funct. Anal. 28(1) (2018), 58-99.
[7] S.M. Buckley, Estimates for operator norms on weighted spaces and reverse Jensen inequalities, Trans. Amer. Math. Soc. 340(1) (1993), 253-272.
[8] J.T. Campbell, R.L. Jones, K. Reinhold and M. Wierdl, Oscillation and variation for the Hilbert transform, Duke Math. J. 105(1) (2000), 59-83.
[9] J.T. Campbell, R.L. Jones, K. Reinhold and M. Wierdl, Oscillation and variation for singular integrals in higher dimensions, Trans. Amer. Math. Soc. 355(5) (2003), 2115-2137.
[10] J. Canto, Quantitative C_p estimates for Calderón-Zygmund operators, preprint, arXiv:1811.05209v1 (2018).
[11] M.E. Cejas, K. Li, C. Pérez and I.P. Rivera-Ríos, Vector-valued operators, optimal weighted estimates and the C_p condition, Sci. China Math. (to appear).
[12] Y. Chen, Y. Ding, G. Hong and H. Liu, Variational inequalities for the commutators of rough operators with BMO functions, J. Funct. Anal. 275(8) (2018), 2446-2475.
[13] D. Chung, C. Pereyra and C. Pérez, Sharp bounds for general commutators on weighted Lebesgue spaces, Trans. Amer. Math. Soc. 364 (2012), 1163-1177.
[14] J.M. Conde-Alonso, A. Culiuc, F. Di Plinio and Y. Ou, A sparse domination principle for rough singular integrals, Anal. PDE 10(5) (2017), 1255-1284.
[15] J.M. Conde-Alonso and G. Rey, A pointwise estimate for positive dyadic shifts and some applications, Math. Ann. 365(3-4) (2016), 1111-1135.
[16] R. Crescimbeni, F.J. Martín-Reyes, A.L. Torre and J.L. Torrea, The ρ-variation of the Hermitian Riesz transform, Acta Math. Sin. (Engl. Ser.) 26 (2010), 1827-1838.
[17] Y. Ding, G. Hong and H. Liu, Jump and variational inequalities for rough operators, J. Fourier Anal. Appl. 23(3) (2017), 679-711.
[18] X.T. Duong, J. Li and D. Yang, Variation of Calderón-Zygmund operators with matrix weight, Contemp. Math. (in press).
[19] L.C. Evans and R.F. Gariepy, Measure Theory and Fine Properties of Functions, Studies in Advanced Mathematics, CRC Press, Boca Raton, FL, 1992.
[20] F.C. Franca Silva and P. Zorin-Kranich, Sparse domination of sharp variational truncations, arXiv:1604.05506.
[21] T.A. Gillespie and J.L. Torrea, Dimension free estimates for the oscillation of Riesz transforms, Israel J. Math. 141 (2004), 125-144.
[22] I. Holmes, M.T. Lacey and B.D. Wick, Commutators in the two-weight setting, Math. Ann. 367(1-2) (2017), 51-80.
[23] T.P. Hytönen, The sharp weighted bound for general Calderón-Zygmund operators, Ann. of Math. (2) 175(3) (2012), 1473-1506.
[24] T.P. Hytönen, The Holmes-Wick theorem on two-weight bounds for higher order commutators revisited, Arch. Math. (Basel) 107(4) (2016), 389-395.
[25] T.P. Hytönen, M.T. Lacey and C. Pérez, Sharp weighted bounds for the q-variation of singular integrals, Bull. London Math. Soc. 45(3) (2013), 529-540.
[26] T.P. Hytönen and K. Li, Weak and strong A_p-A_∞ estimates for square functions and related operators, Proc. Amer. Math. Soc. 146 (2018), 2497-2507.
[27] T.P. Hytönen and C. Pérez, Sharp weighted bounds involving A_∞, Anal. PDE 6(4) (2013), 777-818.
[28] G.H. Ibáñez-Firnkorn and I.P. Rivera-Ríos, Sparse and weighted estimates for generalized Hörmander operators and commutators, Monatsh. Math. 191(1) (2020), 125-173.
[29] R.L. Jones, Ergodic theory and connections with analysis and probability, New York J. Math. 3A (1997), 31-67.
[30] R.L. Jones and K. Reinhold, Oscillation and variation inequalities for convolution powers, Ergodic Theory Dynam. Systems 21(6) (2001), 1809-1829.
[31] G.A. Karagulyan, Exponential estimates for the Calderón-Zygmund operator and related problems of Fourier series, Mat. Zametki 71(3) (2002), 398-411.
[32] M.T. Lacey and S. Spencer, Sparse bounds for oscillatory and random singular integrals, New York J. Math. 23 (2017), 119-131.
[33] A.K. Lerner, A simple proof of the A_2 conjecture, Int. Math. Res. Not. (14) (2013), 3159-3170.
[34] A.K. Lerner and F. Nazarov, Intuitive dyadic calculus: the basics, Expo. Math. 37 (2019), 225-265.
[35] A.K. Lerner, S. Ombrosi and C. Pérez, A_1 bounds for Calderón-Zygmund operators related to a problem of Muckenhoupt and Wheeden, Math. Res. Lett. 16(1) (2017), 149-156.
[36] A.K. Lerner, S. Ombrosi and I.P. Rivera-Ríos, On pointwise and weighted estimates for commutators of Calderón-Zygmund operators, Adv. Math. 319 (2017), 153-181.
[37] A.K. Lerner, S. Ombrosi and I.P. Rivera-Ríos, Commutators of singular integrals revisited, Bull. London Math. Soc. 51 (2019), 107-119.
[38] F. Liu and H. Wu, A criterion on oscillation and variation for the commutators of singular integrals, Forum Math. 27(1) (2015), 77-97.
[39] T. Ma, J.L. Torrea and Q. Xu, Weighted variation inequalities for differential operators and singular integrals, J. Funct. Anal. 268(2) (2015), 376-416.
[40] T. Ma, J.L. Torrea and Q. Xu, Weighted variation inequalities for differential operators and singular integrals in higher dimensions, Sci. China Math. 60(8) (2017), 1419-1442.
[41] A. Mas and X. Tolsa, L^p estimates for the variation for singular integrals on uniformly rectifiable sets, Trans. Amer. Math. Soc. 369(11) (2017), 8239-8275.
[42] M. Mirek, E.M. Stein and B. Trojan, ℓ^p(Z^d)-estimates for discrete operators of Radon type: variational estimates, Invent. Math. 209(3) (2017), 665-748.
[43] C. Ortiz-Caraballo, Quadratic A_1 bounds for commutators of singular integrals with BMO functions, Indiana Univ. Math. J. 60(6) (2011), 2107-2129.
[44] C. Ortiz-Caraballo, C. Pérez and E. Rela, Exponential decay estimates for singular integral operators, Math. Ann. 357(4) (2013), 1217-1243.
[45] C. Pérez and G. Pradolini, Sharp weighted endpoint estimates for commutators of singular integrals, Michigan Math. J. 49(1) (2001), 23-37.
[46] C. Pérez and I.P. Rivera-Ríos, Borderline weighted estimates for commutators of singular integrals, Israel J. Math. 217(1) (2017), 435-475.
[47] M.M. Rao and Z. Ren, Theory of Orlicz Spaces, Monographs and Textbooks in Pure and Applied Mathematics 146, Marcel Dekker, New York, 1991.
[48] I.P. Rivera-Ríos, Improved A_1-A_∞ and related estimates for commutators of rough singular integrals, Proc. Edinb. Math. Soc. (2) 61(4) (2018), 1069-1086.
[49] J. Zhang and H. Wu, Weighted oscillation and variation inequalities for singular integrals and commutators satisfying Hörmander conditions, Acta Math. Sin. (Engl. Ser.) 33(10) (2017), 1397-1420.
| [] |
[
"The number of bounded-degree spanning trees",
"The number of bounded-degree spanning trees"
] | [
"Raphael Yuster "
] | [] | [] | For a graph G, let c_k(G) be the number of spanning trees of G with maximum degree at most k. For k ≥ 3, it is proved that every connected n-vertex r-regular graph G with r ≥ n/(k+1) satisfies c_k(G)^{1/n} ≥ (1 − o_n(1)) r · z_k, where z_k > 0 approaches 1 extremely fast (e.g. z_10 = 0.999971). The minimum degree requirement is essentially tight as for every k ≥ 2 there are connected n-vertex r-regular graphs G with r = n/(k + 1) − 2 for which c_k(G) = 0. Regularity may be relaxed, replacing r with the geometric mean of the degree sequence and replacing z_k with z*_k > 0 that also approaches 1, as long as the maximum degree is at most n(1 − (3 + o_k(1)) ln k/k). The same holds with no restriction on the maximum degree as long as the minimum degree is at least (n/k)(1 + o_k(1)). | 10.1002/rsa.21118 | [
"https://export.arxiv.org/pdf/2207.14574v1.pdf"
] | 251,196,719 | 2207.14574 | 53f5b78cfc495d0a9404ba75de178fdf48926d20 |
The number of bounded-degree spanning trees
Raphael Yuster
The number of bounded-degree spanning trees
AMS subject classifications: 05C05, 05C35, 05C30. Keywords: spanning tree, bounded degree, counting.
For a graph G, let c_k(G) be the number of spanning trees of G with maximum degree at most k. For k ≥ 3, it is proved that every connected n-vertex r-regular graph G with r ≥ n/(k+1) satisfies c_k(G)^{1/n} ≥ (1 − o_n(1)) r · z_k, where z_k > 0 approaches 1 extremely fast (e.g. z_10 = 0.999971). The minimum degree requirement is essentially tight as for every k ≥ 2 there are connected n-vertex r-regular graphs G with r = n/(k + 1) − 2 for which c_k(G) = 0. Regularity may be relaxed, replacing r with the geometric mean of the degree sequence and replacing z_k with z*_k > 0 that also approaches 1, as long as the maximum degree is at most n(1 − (3 + o_k(1)) ln k/k). The same holds with no restriction on the maximum degree as long as the minimum degree is at least (n/k)(1 + o_k(1)).
Introduction
For a graph G, let c_k(G) be the number of spanning trees of G with maximum degree at most k and let c(G) be the number of spanning trees of G. Computationally, these parameters are well understood: determining c(G) is easy by the classical Matrix-Tree Theorem, which says that c(G) equals any cofactor of the Laplacian matrix of G, while determining c_k(G) is NP-hard for every fixed k ≥ 2. In this paper we look at these parameters from the extremal graph-theoretic perspective. The two extreme cases, i.e. c(G) and c_2(G), are rather well understood. As for c(G), Grone and Merris [9] proved that c(G) ≤ (n/(n−1))^{n−1} d(G)/(2m), where n and m are the number of vertices and edges of G respectively, and d(G) is the product of its degrees. Note that this upper bound is tight for complete graphs. Alon [1], extending an earlier result of McKay [11], proved that if G is a connected r-regular graph, then c(G) = (r − o(r))^n. Alon's method gives meaningful results already for r = 3, where the proof yields (1 − o_n(1)) c(G)^{1/n} ≥ √2. Alon's result was extended by Kostochka [10] to arbitrary connected graphs with minimum degree r ≥ 3. He proved that c(G) ≥ d(G) r^{−n·O(ln r/r)} and improved the aforementioned case of 3-regular graphs, showing that (1 − o_n(1)) c(G)^{1/n} ≥ 2^{3/4} and that the constant 2^{3/4} is optimal. We mention also that Greenhill, Isaev, Kwan, and McKay [8] asymptotically determined the expected number of spanning trees in a random graph with a given sparse degree sequence.
The case c_2(G) (the number of Hamilton paths) has a significant body of literature. All of the following mentioned results hold, in fact, for counting the number of Hamilton cycles. First, we recall that there are connected graphs with minimum degree n/2 − 1 for which c_2(G) = 0, so most results concerning c_2(G) assume that the graph is Dirac, i.e. has minimum degree at least n/2. Dirac's Theorem [6] proves that c_2(G) > 0 for Dirac graphs. Significantly strengthening Dirac's theorem, Sárközy, Selkow, and Szemerédi [12] proved that every Dirac graph contains at least c^n n! Hamilton cycles for some small positive constant c. They conjectured that c can be improved to 1/2 − o(1). In a breakthrough result, Cuckler and Kahn [4] settled this conjecture, proving that every Dirac graph with minimum degree r has at least (r/e)^n (1 − o(1))^n Hamilton cycles. This bound is tight as shown by an appropriate random graph. Bounds on the number of Hamilton cycles in Dirac graphs expressed in terms of maximal regular spanning subgraphs were obtained by Ferber, Krivelevich, and Sudakov [7]. Their bound matches the bound of Cuckler and Kahn for graphs that are regular or nearly regular.
In this paper we consider c_k(G) for fixed k ≥ 3. Observe first that c_k(G)^{1/n} ≤ c(G)^{1/n} < d(G)^{1/n} (by simple counting or by the aforementioned result [9]). Thus, we shall express the lower bounds for c_k(G)^{1/n} in our theorems in terms of constant multiples of d(G)^{1/n}. Notice also that if G is r-regular, then d(G)^{1/n} = r.
Our first main result concerns connected regular graphs. It is not difficult to prove that every connected r-regular graph with r ≥ n/(k+1) has c_k(G) > 0 (this also holds for k = 2 [3]). We prove that c_k(G) is, in fact, already very large under this minimum degree assumption. To quantify our lower bound we define the following functions of k.
$$f_k = 1 - \frac{1}{e}\sum_{i=0}^{k-3}\frac{1}{i!}, \qquad g_k = \frac{2}{e\,(k-1)!},$$
$$z_k = \begin{cases} 0.0494, & \text{for } k = 3,\\ 0.1527, & \text{for } k = 4,\\ \big(1-(k+1)(f_k+g_k)\big)^{g_k}\,(1-g_k)^{1-g_k}\,g_k^{\,g_k}, & \text{for } k \ge 5.\end{cases}$$
It is important to observe that z_k approaches 1 extremely quickly, as Table 1 shows.
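The entries of Table 1 follow directly from the definitions above; a minimal Python sketch (using only the k ≥ 5 branch of z_k) recomputes them, and reproduces, e.g., the value z_10 ≈ 0.999971 quoted in the abstract:

```python
from math import e, factorial

def z(k):
    # f_k and g_k as defined above; formula below is the k >= 5 branch of z_k
    f_k = 1 - sum(1 / factorial(i) for i in range(k - 2)) / e   # i = 0..k-3
    g_k = 2 / (e * factorial(k - 1))
    return (1 - (k + 1) * (f_k + g_k)) ** g_k * (1 - g_k) ** (1 - g_k) * g_k ** g_k

for k in range(5, 13):
    print(k, round(z(k), 6))   # e.g. z(10) = 0.999971
```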
Theorem 1.1. Let k ≥ 3 be given. Every connected n-vertex r-regular graph G with r ≥ n/(k+1) satisfies c_k(G)^{1/n} ≥ (1 − o_n(1)) r · z_k.
Table 1: The value of z_k for some small k.

The requirement on the minimum degree in Theorem 1.1 is essentially tight. In Subsection 4.3 we show that for every k ≥ 2 and for infinitely many n, there are connected r-regular graphs G with r = n/(k + 1) − 2 for which c_k(G) = 0. In light of this construction, it may be of some interest to determine whether Theorem 1.1 holds with n/(k + 1) − 1 instead of n/(k + 1). Furthermore, as our proof of Theorem 1.1 does not work for k = 2, we raise the following interesting problem.

Problem 1.2. Does there exist a positive constant z_2 such that every connected n-vertex r-regular graph G with r ≥ n/3 satisfies c_2(G)^{1/n} ≥ (1 − o_n(1)) r · z_2?
One may wonder whether the regularity requirement in Theorem 1.1 can be relaxed, while still keeping the minimum degree at n/(k+1). It is easy to see that a bound on the maximum degree cannot be entirely waived. Indeed, consider a complete bipartite graph with one part of order (n−2)/k. It is connected, has minimum degree (n−2)/k > n/(k+1) and maximum degree n − (n−2)/k, but it clearly does not have any spanning tree with maximum degree at most k: every edge of a spanning tree would be incident with the part of order (n−2)/k, so such a tree would have at most k·(n−2)/k = n−2 < n−1 edges. However, if we place just a modest restriction on the maximum degree, we can extend Theorem 1.1. Let
$$z^*_k = \left(1 - \frac{1}{7k}\right)^{1-\frac{1}{7k}}\left(\frac{1}{9k}\right)^{\frac{1}{7k}}.$$
It is easy to see that z^*_k approaches 1. For example, z^*_{20} > 0.956.
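The same kind of direct computation confirms the quoted value of z^*_{20} and the convergence of z^*_k to 1; a minimal check:

```python
def z_star(k):
    # z*_k as defined in the display above
    a = 1 - 1 / (7 * k)
    return a ** a * (1 / (9 * k)) ** (1 / (7 * k))

print(z_star(20))      # ~0.9567 > 0.956
print(z_star(10 ** 6)) # very close to 1
```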
Theorem 1.3. There exists a positive integer k_0 such that for all k ≥ k_0 the following holds. Every connected n-vertex graph G with minimum degree at least n/(k+1) and maximum degree at most n(1 − 3 ln k/k) satisfies c_k(G)^{1/n} ≥ (1 − o_n(1)) d(G)^{1/n} · z^*_k.
Finally, we obtain a lower bound on c_k(G) where we have no restriction on the maximum degree of G. Analogous to Dirac's theorem, Win [13] proved that every connected graph with minimum degree (n−1)/k has c_k(G) > 0 (see also [5] for an extension of this result). Clearly, the requirement on the minimum degree is tight, as the aforementioned example of a complete bipartite graph shows that there are connected graphs with minimum degree (n−2)/k for which c_k(G) = 0. We prove that for all k ≥ k_0, if the minimum degree is just slightly larger, then c_k(G) becomes large.
Theorem 1.4. There exists a positive integer k_0 such that for all k ≥ k_0 the following holds. Every connected n-vertex graph G with minimum degree at least (n/k)(1 + 3 ln k/k) satisfies c_k(G)^{1/n} ≥ (1 − o_n(1)) d(G)^{1/n} · z^*_k.
Using Szemerédi's regularity lemma, it is not too difficult to prove a version of Theorem 1.4 that works already for k ≥ 3 and where c k (G) is exponential in n. However, the bound we can obtain by that method, after taking its n-th root, is not a positive constant multiple of d(G) 1/n . We do conjecture that the error term in the minimum degree assumption can be eliminated.
Conjecture 1.5. Let k ≥ 3. There is a constant z†_k > 0 with lim_{k→∞} z†_k = 1 such that every connected n-vertex graph G with minimum degree at least n/k satisfies c_k(G)^{1/n} ≥ (1 − o_n(1)) d(G)^{1/n} · z†_k.
All of our theorems are based on two major ingredients. The first ingredient consists of proving that G has many spanning forests, each with only a relatively small number of component trees, and each having maximum degree at most k. However, the proof of this property varies rather significantly among the various theorems and cases therein. We combine the probabilistic model of Alon [1] for showing that there are many out-degree one orientations with certain properties, together with a novel nibble approach to assemble edges from several out-degree one orientations. The second ingredient consists of proving that each of the large spanning forests mentioned above has small "edit distance" from a spanning tree with maximum degree at most k. Once this is established, it is not difficult to deduce that G has many spanning trees with maximum degree at most k.
In Section 2 we prove the edit-distance property. In Section 3 we introduce out-degree one orientations and the multi-stage model which is the basis for our nibble approach. In Section 4 we consider regular graphs and prove Theorem 1.1. In Section 5 we prove Theorems 1.3 and 1.4.
Throughout the paper we assume that the number of vertices of the host graph, always denoted by n, is sufficiently large as a function of all constants involved. Thus, we refrain from repeatedly mentioning this assumption. We also ignore rounding issues (floors and ceilings) whenever these have no effect on the final statement of our results. We use the terminology G-neighbor of a vertex v to refer to a neighbor of v in G, as opposed to a neighbor of v in a spanning tree or a spanning forest of G. The notation d(v) always denotes the degree of v in G. Other notions that are used are standard, or defined upon their first use.
Extending a bounded degree forest
The edit distance between two graphs on the same vertex set is the number of edges in the symmetric difference of their edge sets. In this section we prove that the edit distance between a bounded degree spanning forest and a bounded degree spanning tree of a graph is proportional to the number of components of the forest, whenever the graph is connected and satisfies a minimum degree condition.
Lemma 2.1. Let k ≥ 3 and let G be a connected graph with n vertices and minimum degree at least n/(k+1). Suppose that F is a spanning forest of G with m < n−1 edges and maximum degree at most k. Furthermore, assume that F has at most t vertices with degree k, where t ≤ n/(6.8k). Then there exists a spanning forest F* of G with m+1 edges that contains at least m−3 edges of F. Furthermore, F* has maximum degree at most k and at most t+4 vertices with degree k.
Proof. For a forest (or tree) with maximum degree at most k, its W-vertices are those with degree k and its U-vertices are those with degree less than k. Denote the tree components of F by T_1, ..., T_{n−m}. Let U_i ≠ ∅ denote the U-vertices of T_i and let W_i denote the W-vertices of T_i. We distinguish between several cases as follows: (a) There is some edge of G connecting some u_i ∈ U_i with some u_j ∈ U_j where i ≠ j. (b) Case (a) does not hold but there is some T_i with fewer than n/(k+1) vertices. (c) The previous cases do not hold but there is some edge of G connecting some u_i ∈ U_i to a vertex in a different component of F. (d) The previous cases do not hold.
Case (a). We can add to F the edge u_iu_j, obtaining a forest with m+1 edges which still has maximum degree at most k. The new forest has at most t+2 W-vertices, since only u_i and u_j increase their degree in the new forest.
Case (b). Let u_i be some vertex with degree 1 or 0 in T_i (note that it is possible that T_i is a singleton, so that the degree of its unique vertex is indeed 0 in T_i). Since T_i has fewer than n/(k+1) vertices, and since u_i has degree at least n/(k+1) in G, we have that u_i has at least two G-neighbors that are not in T_i. Let w_1, w_2 denote such neighbors. Notice that w_1, w_2 are W-vertices of F, as we assume Case (a) does not hold.

Assume first that w_1, w_2 are adjacent in F (in particular, they are in the same component of F). Let F* be obtained from F by adding both edges u_iw_1 and u_iw_2 and removing the edge w_1w_2. Note that F* has m+1 edges, has m−1 edges of F, and has maximum degree at most k. It also has at most t+1 W-vertices, as only u_i may become a new vertex of degree k (in fact, the degree of u_i in F* is at most 3, so if k > 3 we still only have t W-vertices in F*).
We may now assume that w_1, w_2 are independent in F. Removing both of them from F further introduces at least 2k−1 component trees, denoted L_1, ..., L_s where s ≥ 2k−1. To see this, observe first that if we remove w_1, we obtain at least k nonempty components since w_1 has degree k. If we then remove w_2, we either obtain an additional set of k components (if w_2 is not in the same component of w_1 in F) or an additional set of k−1 components (if w_2 is in the same component of w_1 in F).

Figure 1: The red ovals depict the various L_j's obtained when removing w_1 and w_2. The denoted L_1 contains a vertex u of degree 1 in L_1 (and degree smaller than 3 in F) which has a neighbor u′ in G that also has degree smaller than 3 in F. The blue edges represent edges of G that are not used in F. To obtain F* we add u_iw_1, add uu′ and remove the edge w_1z.
Each L_j, being a tree, either has at least two vertices of degree 1, or else L_j is a singleton, in which case it has a single vertex with degree 0 in L_j. If L_j is a singleton, then its unique vertex has degree at most 2 in F, as it may only be connected in F to w_1 and w_2. If L_j is not a singleton, then let v_1, v_2 be two vertices with degree 1 in L_j. It is impossible for both v_1, v_2 to have degree at least 3 in F, as otherwise they are both adjacent to w_1, w_2 in F, implying that F is not a forest (it would contain a K_{2,2}). In any case, we have shown that each L_j (whether a singleton or not) has a vertex which is a U-vertex of F.
Consider now an L_j with smallest cardinality, say L_1. Its number of vertices is therefore at most
$$|V(L_1)| \le \frac{n}{s} \le \frac{n}{2k-1}. \qquad (1)$$
Let u be a vertex of L_1 which is a U-vertex of F. By our minimum degree assumption on G, u has at least n/(k+1) − (|V(L_1)| − 1) neighbors in G that are not in L_1. By (1) we have that
$$\frac{n}{k+1} - (|V(L_1)|-1) \ge \frac{n}{k+1} - \frac{n}{2k-1} > \frac{n}{6.8k} \ge t. \qquad (2)$$
It follows that u has a G-neighbor u′ not in L_1 which is a U-vertex of F. Notice that u and u′ must be in the same component of F since we assume Case (a) does not hold. Since u′ is not in L_1, adding uu′ to F introduces a cycle that contains at least one of w_1, w_2. Assume wlog that the cycle contains w_1 and that z is the neighbor of w_1 on the cycle (possibly z ∈ {u, u′}). We can now obtain a forest F* from F by adding uu′, adding u_iw_1 and removing w_1z. The obtained forest has m+1 edges, has m−1 edges of F, has maximum degree at most k, and at most t+2 W-vertices, as only u, u′ can increase their degree in F* to k. Figure 1 visualizes u_i, u, u′, w_1, z, L_1 and the added and removed edges when going from F to F*.

Case (c). In this case, T_i has at least n/(k+1) vertices. Let w_j ∈ W_j be a G-neighbor of u_i in a different component T_j of F. Removing w_j from T_j splits T_j \ w_j into a forest with k component trees L_1, ..., L_k. So at least one of these components, say L_1, has at most (n − |V(T_i)| − 1)/k < n/(k+1) vertices. Obtain a forest F** from F by adding the edge u_iw_j and removing the unique edge of T_j connecting w_j to L_1. The new forest also has m edges and has m−1 edges of F. It also has at most t+1 W-vertices, as only u_i may become a new vertex of degree k. But in F**, there is a component, namely L_1, with fewer than n/(k+1) vertices. Hence, we arrive at either Case (a) or Case (b) for F**. So, applying the proofs of these cases to F** (and observing that the number of W-vertices in F** is only t+1, so (2) still holds because of the slack in the sharp inequality of (2)), we obtain a forest F* with m+1 edges, at least m−2 edges of F, maximum degree at most k, and at most t+3 W-vertices.
Case (d). Since G is connected, we still have an edge of G connecting some vertex w_i ∈ W_i with some w_j ∈ W_j. Without loss of generality, |V(T_j)| ≤ n/2. Removing w_j from T_j splits T_j \ w_j into a forest with k component trees L_1, ..., L_k. So at least one of these components, say L_1, has at most |V(T_j)|/k ≤ n/(2k) vertices. Let u be a vertex of L_1 of degree 1 in F. So, u has at least n/(k+1) − n/(2k) > n/(6.8k) ≥ t neighbors not in L_1. It follows that u has a G-neighbor u′ not in L_1 which is a U-vertex of F. Also notice that u′ ∈ T_j since we assume Case (a) does not hold. Now, let F** be obtained from F by adding the edge uu′ and removing the unique edge of T_j connecting w_j to L_1. The new forest also has m edges and has m−1 edges of F. It also has at most t+1 W-vertices, as only u′ may become a new vertex of degree k. But observe that in F** the degree of w_j is only k−1. Since w_j has a G-neighbor (namely w_i) in a different component of F**, we arrive in F** at either Case (a) or Case (b) or Case (c). So, applying the proofs of these cases to F** (and observing that the number of W-vertices in F** is only t+1, so (2) still holds because of the slack in the sharp inequality of (2)), we obtain a forest F* with m+1 edges, at least m−3 edges of F, maximum degree at most k, and at most t+4 W-vertices.
By repeated applications of Lemma 2.1 where we start with a large forest and repeatedly increase the number of edges until obtaining a spanning tree, we immediately obtain the following corollary.
Corollary 2.2. Let k ≥ 3 and let G be a connected graph with n vertices and minimum degree at least n/(k+1). Suppose that F is a spanning forest of G with n − O(ln n) edges and maximum degree at most k. Furthermore, assume that F has at most t vertices with degree k, where t ≤ n/(7k). Then there exists a spanning tree of G with maximum degree at most k where all but at most O(ln n) of its edges are from F.
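To make the augmentation concrete, here is a minimal Python sketch of only the easiest augmentation step, Case (a) in the proof of Lemma 2.1: repeatedly add a G-edge joining two distinct components of F whose endpoints both still have degree less than k. The function and variable names are illustrative; the swap arguments of Cases (b)-(d) are not implemented, so this sketch may stall where the full lemma does not.

```python
class DSU:
    """Disjoint-set union; tracks the components of the forest F."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, x, y):
        self.parent[self.find(x)] = self.find(y)

def augment_case_a(n, graph_edges, forest_edges, k):
    """Repeatedly apply Case (a): add an edge of G between two components
    of F whose endpoints both have degree < k in F."""
    deg = [0] * n
    dsu = DSU(n)
    for u, v in forest_edges:
        deg[u] += 1
        deg[v] += 1
        dsu.union(u, v)
    progress = True
    while progress:
        progress = False
        for u, v in graph_edges:
            if dsu.find(u) != dsu.find(v) and deg[u] < k and deg[v] < k:
                forest_edges.append((u, v))
                deg[u] += 1
                deg[v] += 1
                dsu.union(u, v)
                progress = True
    return forest_edges
```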
From out-degree one orientations to bounded degree spanning trees
Let G be a graph with no isolated vertices. An out-degree one orientation of G is obtained by letting each vertex v of G choose precisely one of its neighbors, say u, and orienting the edge vu as (v, u) (i.e. from v to u). Observe that an out-degree one orientation may have cycles of length 2. Also note that an out-degree one orientation has the property that each component contains precisely one directed cycle, and that all cycles in the underlying graph of an out-degree one orientation are directed cycles. Furthermore, observe that the edges of the component that are not on its unique directed cycle (if there are any) are oriented "toward" the cycle. In particular, given the cycle, the orientation of each non-cycle edge of the component is uniquely determined. Let H(G) denote the set of all out-degree one orientations of G. Clearly, |H(G)| = d(G).
Most of our proofs use the probabilistic model of Alon [1]: each v ∈ V(G) chooses independently and uniformly at random a neighbor u, and the edge vu is oriented (v, u). In this way we obtain a uniform probability distribution over the sample space H(G). We let G⃗ denote a randomly selected element of H(G) and let Γ(v) denote the chosen out-neighbor of v.
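A minimal Python sketch of this sampling model (the toy 4-regular circulant graph and the helper names are illustrative, not from the paper); the component-counting helper also exhibits the structural fact above, since every component contains exactly one directed cycle and can therefore be found by walking out-edges:

```python
import random

def random_orientation(adj):
    """Alon's model: each vertex chooses one of its neighbors uniformly."""
    return {v: random.choice(nbrs) for v, nbrs in adj.items()}

def count_components(gamma):
    """Walk the out-edges from every vertex until reaching either an
    already-labeled vertex or a newly closed directed cycle."""
    comp, n_comp = {}, 0
    for s in gamma:
        path, v = [], s
        while v not in comp and v not in path:
            path.append(v)
            v = gamma[v]
        if v not in comp:            # closed a brand-new cycle
            comp[v] = n_comp
            n_comp += 1
        for u in path:
            comp[u] = comp[v]
    return n_comp

n = 12
adj = {v: [(v + d) % n for d in (1, -1, 3, -3)] for v in range(n)}  # 4-regular toy graph
gamma = random_orientation(adj)
print(gamma, count_components(gamma))
```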
We focus on certain parameterized subsets of H(G). Let H_{k,s}(G) be the subset of all elements of H(G) with maximum in-degree at most k−1 and with at most s vertices of in-degree k−1. If s = n (i.e. we do not restrict the number of vertices with in-degree k−1) then we simply denote the set by H_k(G). Let H*_ℓ(G) be the subset of all elements of H(G) with at most ℓ directed cycles (equivalently, at most ℓ components). Our proofs are mostly concerned with establishing lower bounds for the probability that G⃗ ∈ H_{k,s}(G) ∩ H*_ℓ(G). Hence we denote
$$P_{k,s,\ell}(G) = \Pr[\,\vec{G} \in H_{k,s}(G) \cap H^*_\ell(G)\,].$$
Lemma 3.1. Let k ≥ 3 be given. Suppose that G is a connected graph with minimum degree at least n/(k+1). Then:
$$c_k(G)^{1/n} \ge (1-o_n(1))\,d(G)^{1/n}\,P_{k,n/(7k),\ln n}(G)^{1/n}.$$
Proof. Let p = P_{k,n/(7k),ln n}(G). By the definition of p, we have that
$$|H_{k,n/(7k)}(G) \cap H^*_{\ln n}(G)| \ge d(G)\,p.$$
Consider some G⃗ ∈ H_{k,n/(7k)}(G) ∩ H*_{ln n}(G). As it has at most ln n directed cycles (and recall that these cycles are pairwise vertex-disjoint, as each belongs to a distinct component), it has at most ln n edges that, once removed from G⃗, turn it into a forest F with at least n − O(ln n) edges. Viewed as an undirected graph, F has maximum degree at most k (since the in-degree of each vertex of G⃗ is at most k−1 and the out-degree of each vertex of G⃗ is precisely 1). Thus, we have a mapping assigning to each G⃗ ∈ H_{k,n/(7k)}(G) ∩ H*_{ln n}(G) an undirected forest F. While this mapping is not injective, the fact that G⃗ only has at most ln n components implies that each F is the image of at most n^{O(ln n)} distinct G⃗. Indeed, given an undirected F, suppose it has t ≤ ln n components of sizes s_1, ..., s_t. To turn it into an element of H(G), we must first add a single edge to each component to create a cycle, and then choose the orientation of each cycle in each component, which determines the orientation of the non-cycle edges. Hence, the number of possible G⃗ obtained from F is at most
$$\prod_{i=1}^{t}(2s_i^2) \le n^{O(\ln n)}.$$
Furthermore, since G⃗ has at most n/(7k) vertices with in-degree k−1, it follows that F has at most n/(7k) vertices with degree k. By Corollary 2.2, there exists a spanning tree T of G with maximum degree at most k where all but at most O(ln n) of its edges are from F. Thus, we have a mapping assigning to each G⃗ ∈ H_{k,n/(7k)}(G) ∩ H*_{ln n}(G) a spanning tree T of G with maximum degree at most k. While this mapping is not injective, the fact that the edit distance between T and F is O(ln n) trivially implies that each T is the image of at most n^{O(ln n)} distinct F. Hence, we obtain that
$$c_k(G) \ge d(G)\,p\,n^{-O(\ln n)}.$$
Taking the n'th root from both sides of the last inequality therefore concludes the lemma.
We also require an upper bound for the probability that G⃗ has many components. The following lemma was proved by Kostochka [10] (see Lemma 2 in that paper, applied to the case where the minimum degree is at least n/(k+1), as we assume).

Lemma 3.2. [10] Let G be a graph with minimum degree at least n/(k+1). The expected number of components of G⃗ is at most (k+1) ln n.
For G⃗ ∈ H(G), let B_i^{G⃗} denote the set of vertices with in-degree i. We will omit the superscript and simply write B_i whenever G⃗ is clear from context. We define the following K-stage model for establishing a random element of H(G). This model is associated with a positive integer K and a convex sum of probabilities p_1 + ... + p_K = 1. In the first part of the K-stage model, we select uniformly and independently (with replacement) K elements of H(G) as in the aforementioned model of Alon. Denote the selected elements by G⃗_c for c = 1, ..., K. Let Γ_c(v) denote the out-neighbor of v in G⃗_c. In the second part of the K-stage model, we let each vertex v ∈ V(G) choose precisely one of Γ_1(v), ..., Γ_K(v), where Γ_c(v) is chosen with probability p_c.
Observe that the resulting final element G⃗ consisting of all n = |V(G)| selected edges is also a uniform random element of H(G). Also note that for any given partition of V(G) into parts V_1, ..., V_K, the probability that all out-edges of the vertices of V_c are taken from G⃗_c for all c = 1, ..., K is precisely ∏_{c=1}^{K} p_c^{|V_c|}. As mentioned in the introduction, most of our proofs for lower-bounding c_k(G) contain two major ingredients. The first ingredient consists of using the K-stage model for a suitable K in order to establish a lower bound for P_{k,s,ℓ}(G) (with ℓ = ln n). This first ingredient further splits into several steps: a) The nibble step, where we prove that with nonnegligible probability, there is a forest with a linear number of edges consisting of edges of G⃗_1, ..., G⃗_{K−1} and which satisfies certain desirable properties.
b) The completion step, where we prove that given a forest with the properties of the nibble step we can, with nonnegligible probability, complete it into an out-degree one orientation with certain desirable properties using only the edges of G⃗_K. c) A combination lemma, which uses (a) and (b) above to prove a lower bound for P_{k,s,ℓ}(G). The second ingredient uses Lemma 3.1 applied to the lower bound obtained in (c) to yield the final outcome of the desired proof. Table 2 gives a roadmap for the various lemmas used for establishing steps (a), (b), (c), and the value of K used.

Table 2: roadmap of the proofs (columns: theorem or case thereof; K; nibble step; completion step; combination).
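A minimal Python sketch of the two-part K-stage sampler (illustrative names; any adjacency dictionary works). Since every vertex's final out-neighbor is still a uniformly random neighbor, the output has the same distribution as a single draw from H(G):

```python
import random

def k_stage_orientation(adj, p):
    """Two-part K-stage model with stage probabilities p[0..K-1] summing to 1:
    draw K independent out-degree one orientations, then let every vertex
    keep its stage-c out-neighbor with probability p[c]."""
    K = len(p)
    stages = [{v: random.choice(nbrs) for v, nbrs in adj.items()} for _ in range(K)]
    stage_of = {v: random.choices(range(K), weights=p)[0] for v in adj}
    # For a fixed partition V_1,...,V_K, the event "every v in V_c kept its
    # stage-c edge" has probability prod_c p[c] ** len(V_c).
    return {v: stages[stage_of[v]][v] for v in adj}
```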
Proof of Theorem 1.1
In this section we assume that G is r-regular with r ≥ n/(k + 1). We will consistently be referring to the notation of Section 3. When k ≥ 5 we will use the two-stage model (K = 2) and when k ∈ {3, 4} (dealt with in the next subsection) we will need to use larger K (see Table 2).
The case k ≥ 5
We first need to establish several lemmas (the first lemma being straightforward).
Lemma 4.1. Let G be an r-regular graph with r ≥ n/(k+1). For 0 ≤ i ≤ n, the probability that v ∈ B_i (i.e., that v has in-degree i in G⃗) is
$$\Pr[v\in B_i] = \binom{r}{i}\Big(\frac{1}{r}\Big)^i\Big(1-\frac{1}{r}\Big)^{r-i} \le (1+o_n(1))\,\frac{1}{i!\,e}.$$
Furthermore, the in-degree of v in G⃗ is nearly Poisson, as for all 0 ≤ i ≤ k,
$$\Pr[v\in B_i] = \big(1\pm O(n^{-1})\big)\,\frac{1}{i!\,e}.$$
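The near-Poisson behaviour is easy to verify numerically; a minimal check of the binomial formula against 1/(i!e):

```python
from math import comb, e, factorial

r = 300                     # e.g. r = n/(k+1) for some large n
for i in range(6):
    binom = comb(r, i) * (1 / r) ** i * (1 - 1 / r) ** (r - i)
    print(i, binom, 1 / (factorial(i) * e))   # agree up to O(1/n) factors
```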
Lemma 4.2. Let G be an r-regular graph with r ≥ n/(k+1). For all 0 ≤ i ≤ k and for any set X of vertices of G it holds that
$$\Pr\Big[\,\Big||X\cap B_i| - \frac{|X|}{i!\,e}\Big| > n^{2/3}\,\Big] < \frac{1}{n^2}.$$
Proof. Consider the random variable |X ∩ B_i|. By Lemma 4.1, its expectation, denoted by X_0, is
$$X_0 = \Big(1\pm O\Big(\frac{1}{n}\Big)\Big)\frac{|X|}{i!\,e} = \frac{|X|}{i!\,e} \pm O(1).$$
Now, suppose we expose the edges of G⃗ one by one in n steps (in each step we choose the out-neighbor of another vertex of G), and let X_j be the expectation of |X ∩ B_i| after j steps have been exposed (so after the final stage we have X_n = |X ∩ B_i|). Then X_0, X_1, ..., X_n is a martingale satisfying the Lipschitz condition (each exposure increases by one the in-degree of a single vertex), so by Azuma's inequality (see [2]), for all λ > 0,
$$\Pr\big[\,||X\cap B_i| - X_0| > \lambda\sqrt{n}\,\big] < 2e^{-\lambda^2/2}.$$
Using, say, λ = n^{1/10}, the lemma immediately follows.
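A small Monte Carlo experiment (on a toy circulant regular graph; purely illustrative) shows the concentration in action: with X = V(G) and i = 0, the count |X ∩ B_0| stays close to n/e.

```python
import random
from math import e

n, r, trials = 600, 100, 200
adj = [[(v + d) % n for d in range(1, r // 2 + 1)]
       + [(v - d) % n for d in range(1, r // 2 + 1)] for v in range(n)]  # r-regular
counts = []
for _ in range(trials):
    indeg = [0] * n
    for v in range(n):
        indeg[random.choice(adj[v])] += 1   # expose each vertex's out-edge
    counts.append(indeg.count(0))           # |B_0| in this orientation
print(sum(counts) / trials, n / e)          # sample mean of |B_0| vs n/(0! e)
```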
Lemma 4.3. Let G be an r-regular graph with r ≥ n/(k+1). For all 3 ≤ t ≤ k the following holds: with probability at least 1/10, G⃗ has a set of at most 5.9n/(3e·t!) edges such that after their removal, the remaining subgraph has maximum in-degree at most t−1.
Proof. Let
$$Q_{\vec{G},t} = \sum_{i=t}^{n}(i-t+1)\,|B_i|$$
be the smallest number of edges we may delete from G⃗ in order to obtain a subgraph where all vertices have in-degree at most t−1. We upper-bound the expected value of Q_{G⃗,t}. By Lemma 4.1 we have that
$$E[Q_{\vec{G},t}] = \sum_{i=t}^{n}(i-t+1)\,E[|B_i|] \le (1+o_n(1))\,\frac{n}{e}\sum_{i=t}^{n}\frac{i-t+1}{i!}.$$
Now, for all t ≥ 4, each term in the sum ∑_{i=t}^{n}(i−t+1)/i! is smaller than its predecessor by at least a factor of 2.5, which means that for all n sufficiently large
$$E[Q_{\vec{G},t}] \le \frac{5.3n}{3e\,t!}.$$
It is easily verified that for t = 3, the last inequality also holds, since ∑_{i=3}^{∞}(i−2)/i! < 5.3/18. By Markov's inequality, we therefore have that with probability at least 1/10, for t ≥ 3 it holds that
$$Q_{\vec{G},t} \le \frac{5.9n}{3e\,t!}.$$
Thus, with probability at least 1/10, we can pick a set of at most 5.9n/(3e·t!) edges of G⃗ such that after their removal, the remaining subgraph has maximum in-degree at most t−1.
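The two numeric claims in the proof (the bound 5.3n/(3e·t!) for t ≥ 4, and the t = 3 case) amount to the following tail sums, which a minimal script confirms:

```python
from math import factorial

def tail(t, terms=60):
    # sum_{i >= t} (i - t + 1)/i!   (the factor n/e is omitted)
    return sum((i - t + 1) / factorial(i) for i in range(t, t + terms))

for t in range(3, 8):
    print(t, tail(t), 5.3 / (3 * factorial(t)))  # tail(t) < 5.3/(3 t!), incl. t = 3
```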
By Lemma 3.2, with probability at most 1 40 we have that G has more than 40(k + 1) ln n components. Recalling that in G each component can be made a tree by removing a single edge from its unique directed cycle, with probability at least 1 − 1 40 we have that G can be made acyclic by removing at most 40(k + 1) ln n edges.
By We therefore obtain that with probability at least 1 10 − 1 40 − 1 40 = 1 20 , the claimed forest exists and has at least n − 5.9n 3e(k−1)! − 40(k + 1) ln n ≥ n − 2n e(k−1)! edges.
Using the two-stage model, consider G 1 and G 2 as denoted in Section 3. We say that G 1 is successful if it has a spanning forest as guaranteed by Lemma 4.4. By that lemma, with probability at least 1 20 , we have that G 1 is successful. Assuming G 1 is successful, designate a spanning forest F 1 of it satisfying the properties of Lemma 4.4. Let X 1 ⊂ V (G) be the set of vertices with out-degree 0 in F 1 . Thus, we have by Lemma 4.4 that |X 1 | ≤ 2n e(k−1)! = ng k . Now, consider the set of edges of G 2 emanating from X 1 , denoting them by
E 2 = {(v, Γ 2 (v)) | v ∈ X 1 }.
By adding E 2 to F 1 we therefore obtain an out-degree one orientation of G, which we denote (slightly abusing notation) by E 2 ∪ F 1 .
Lemma 4.5. Suppose that k ≥ 5. Given that G⃗_1 is successful, and given the corresponding forest F_1, the probability that (E_2 ∪ F_1) ∈ H_{k−1}(G) ∩ H*_{ln n}(G) is at least
$$\big(1-(k+1)(f_k+g_k)-o_n(1)\big)^{n g_k}.$$
Proof. Fix an arbitrary ordering of the vertices of X_1, say v_1, ..., v_{|X_1|}. We consider the edges (v_i, Γ_2(v_i)) one by one, and let E_{2,i} ∪ F_1 be the graph obtained after adding to F_1 the edges (v_j, Γ_2(v_j)) for 1 ≤ j ≤ i. Also let E_{2,0} = ∅. We say that E_{2,i} ∪ F_1 is good if it satisfies the following two properties: (i) the in-degree of each vertex in E_{2,i} ∪ F_1 is at most k−2; (ii) every component of E_{2,i} ∪ F_1 with fewer than n/ln n vertices is a tree.

Trivially, E_{2,0} ∪ F_1 = F_1 is good, since F_1 is a forest where the in-degree of each vertex is at most k−2. We estimate the probability that E_{2,i+1} ∪ F_1 is good given that E_{2,i} ∪ F_1 is good.
F 1 (recall that f k = 1 − 1 e k−3 i=0 1 i! ). Thus, there is a subset S of at least r − f k (1 + o n (1))n − i neighbors of v i+1 in G which still have in-degree at most k − 3 in E 2,i ∪ F 1 . Now, if the component of v i+1 in E 2,i ∪ F 1
has fewer than n/ ln n vertices, then further remove from S all vertices of that component. In any case, |S| ≥ r − f k (1 + o n (1))n − i − n/ ln n. The probability that Γ 2 (v i+1 ) ∈ S is therefore at least
$$\frac{r - f_k(1+o_n(1))n - i - \frac{n}{\ln n}}{r} = 1 - \frac{f_k(1+o_n(1))n + i + \frac{n}{\ln n}}{r} \ge 1 - \frac{f_k(1+o_n(1))n + \frac{2n}{e(k-1)!} + \frac{n}{\ln n}}{n/(k+1)} = 1 - (k+1)(f_k+g_k) - o_n(1).$$
Now, to have Γ_2(v_{i+1}) ∈ S means that we are not creating any new components of size smaller than n/ln n, so all components of size at most n/ln n up until now are still trees. It further means that E_{2,i+1} ∪ F_1 still has maximum in-degree at most k−2. In other words, it means that E_{2,i+1} ∪ F_1 is good. We have therefore proved that the final E_2 ∪ F_1 is good with probability at least
$$\big(1-(k+1)(f_k+g_k)-o_n(1)\big)^{|X_1|} \ge \big(1-(k+1)(f_k+g_k)-o_n(1)\big)^{n g_k}.$$
Finally, observe that for E_2 ∪ F_1 to be good simply means that it belongs to H_{k−1}(G) ∩ H*_{ln n}(G).
Lemma 4.6. Let k ≥ 5. Then,
$$P_{k,0,\ln n}(G)^{1/n} \ge (1-o_n(1))\,z_k.$$
Proof. Using the two-stage model, we have by Lemma 4.4 that G⃗_1 is successful with probability at least 1/20. Thus, by Lemma 4.5, with probability at least (1/20)(1 − (k+1)(f_k+g_k) − o_n(1))^{n g_k} the following holds: there is an out-degree one orientation G⃗ consisting of x ≥ n − 2n/(e(k−1)!) edges of G⃗_1, and hence at most n − x ≤ n g_k edges of G⃗_2, which is in H_{k−1}(G) ∩ H*_{ln n}(G) (observe that being in H_{k−1}(G) is the same as being in H_{k,0}(G), i.e., there are zero vertices with in-degree k−1 since every vertex has in-degree at most k−2).
Assuming that this holds, let X be the set of vertices whose out-edge in G⃗ is from G⃗_1. Now let p_1 + p_2 = 1 be the probabilities associated with the two-stage model, where we will use p_2 < 1/2. The probability that in the second part of the two-stage model, each vertex v ∈ X will indeed choose Γ_1(v) and each vertex v ∈ V(G) \ X will indeed choose Γ_2(v) is precisely
$$p_1^{x}\,p_2^{n-x} \ge (1-p_2)^{n-ng_k}\,p_2^{ng_k}.$$
Optimizing, we will choose p_2 = g_k. Recalling that the final outcome of the two-stage model is a completely random element of H(G), we have that
$$P_{k,0,\ln n}(G) \ge \frac{1}{20}\,\big(1-(k+1)(f_k+g_k)-o_n(1)\big)^{ng_k}\,(1-g_k)^{n-ng_k}\,g_k^{\,ng_k}.$$
Taking the n-th root from both sides and recalling that
$$z_k = \big(1-(k+1)(f_k+g_k)\big)^{g_k}\,(1-g_k)^{1-g_k}\,g_k^{\,g_k}$$
yields the lemma.
Proof of Theorem 1.1 for k ≥ 5. By Lemma 4.6 we have that P_{k,0,ln n}(G)^{1/n} ≥ (1 − o_n(1)) z_k. As trivially P_{k,0,ln n}(G) ≤ P_{k,n/(7k),ln n}(G), we have by Lemma 3.1 that
$$c_k(G)^{1/n} \ge (1-o_n(1))\,d(G)^{1/n}\,(1-o_n(1))\,z_k = (1-o_n(1))\,r\cdot z_k.$$
The cases k = 3 and k = 4
Lemma 4.5 does not quite work when k ∈ {3, 4}, as the constant 1 − (k+1)(f_k+g_k) is negative in this case (f_4 = 1 − 2/e and g_4 = 1/(3e)). To overcome this, we need to make several considerable adjustments to our arguments. Among others, this will require using the K-stage model for K relatively large (K = 20 when k = 3 and K = 5 when k = 4 will suffice). Recall that in this model we have randomly chosen out-degree one orientations G⃗_1, ..., G⃗_K. Define the following sequence:
$$q_i = \begin{cases} \frac{1}{e}, & \text{for } i = 1,\\ q_{i-1}\,e^{-q_{i-1}}, & \text{for } i > 1.\end{cases}$$
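The sequence decays roughly like 1/i; a minimal computation of the first values (in particular 1 − 5q_4 > 0, the quantity used in Lemmas 4.9 and 4.10 when K = 5):

```python
from math import exp

q = [1 / exp(1)]                    # q_1 = 1/e
for _ in range(19):
    q.append(q[-1] * exp(-q[-1]))   # q_i = q_{i-1} e^{-q_{i-1}}

print(q[:5])    # q_1..q_5; note q_4 ~ 0.162, so 1 - 5*q[3] > 0
print(q[18])    # q_19, relevant to the K = 20 case
```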
Slightly abusing notation, for sets of edges F_1, ..., F_i where F_j ⊂ E(G⃗_j), we let ∪_{j=1}^{i} F_j denote the graph whose edge set is the union of these edge sets.
Definition 4.7. For 1 ≤ i ≤ K−1 we say that G⃗_i is successful if G⃗_i has a subset of edges F_i such that all of the following hold: (a) i = 1 or G⃗_{i−1} is successful (so the definition is recursive). (b) F_1, ..., F_i are pairwise disjoint and ∪_{j=1}^{i} F_j is a forest. (c) The maximum in-degree and maximum out-degree of ∪_{j=1}^{i} F_j are at most 1. (d) ∪_{j=1}^{i} F_j has (1 ± o_n(1)) n q_i vertices with in-degree 0. (e) For all v ∈ V(G), the number of G-neighbors of v having in-degree 0 in ∪_{j=1}^{i} F_j is (1 ± o_n(1)) r q_i. (f) For all v ∈ V(G), the number of G-neighbors of v having out-degree 0 in ∪_{j=1}^{i} F_j is (1 ± o_n(1)) r q_i.

Lemma 4.8. For all 1 ≤ i ≤ K−1, G⃗_i is successful with probability at least 1/2^i.
Proof. We prove the lemma by induction. Observe that for i ≥ 2, it suffices to prove that, given that G⃗_{i−1} is successful, G⃗_i is also successful with probability at least 1/2. For the base case i = 1, it suffices to prove that items (b)-(f) in Definition 4.7 hold with probability 1/2 without the preconditioning item (a), which is easier than proving the induction step; thus we shall only prove the induction step. In other words, we assume that G⃗_{i−1} is successful and, given this assumption, prove that G⃗_i is successful with probability at least 1/2. For notational convenience, let F = ∪_{j=1}^{i−1} F_j. Let X_{i−1} be the set of vertices with out-degree 0 in F. Since G⃗_{i−1} is successful, we have that |X_{i−1}| = (1 ± o_n(1)) n q_{i−1} (in a digraph with maximum in-degree 1 and maximum out-degree 1, the number of vertices with in-degree 0 equals the number of vertices with out-degree 0). Consider the set of edges of G⃗_i emanating from X_{i−1}, denoting them by
$$E_i = \{(v, \Gamma_i(v)) \mid v \in X_{i-1}\}.$$
By adding E_i to F we therefore obtain an out-degree one orientation of G, which we denote by E_i ∪ F. We would like to prove that by deleting just a small number of edges from E_i, we obtain a subset F_i ⊂ E_i such that F_i ∪ F satisfies items (b)-(f) of Definition 4.7. Fix some ordering of X_{i−1}, say v_1, ..., v_{|X_{i−1}|}. Let E_{i,h} ∪ F be the graph obtained after adding to F the edges (v_j, Γ_i(v_j)) for 1 ≤ j ≤ h. Also let E_{i,0} = ∅.
We start by taking care of Item (b). For 0 ≤ h < |X_{i−1}|, we call v_{h+1} friendly if the component of v_{h+1} in E_{i,h} ∪ F has at most √n vertices and Γ_i(v_{h+1}) belongs to that component. The probability of being friendly is therefore at most √n/r, so the expected number of friendly vertices is at most |X_{i−1}|√n/r ≤ (1 ± o_n(1)) n q_{i−1} √n/(n/5) < n^{2/3} (recall that we assume that r ≥ n/(k+1) and that k ∈ {3, 4}, so r ≥ n/5). By Markov's inequality, with probability p_{(b)} = 1 − o_n(1), there are at most n^{3/4} friendly vertices. But observe that removing from E_i ∪ F the edges of E_i emanating from friendly vertices results in a digraph with maximum out-degree 1 in which every component with at most √n vertices is a tree. Thus, with probability p_{(b)} = 1 − o_n(1) we can remove a set E*_i ⊂ E_i of at most n^{3/4} + n/√n = n^{3/4} + √n < 2n^{3/4} edges from E_i such that (E_i \ E*_i) ∪ F still constitutes a forest (recall that F is a forest since G⃗_{i−1} is successful).
We next consider Item (c). While trivially the maximum out-degree of E_i ∪ F is one (being an out-degree one orientation), this is not so for the in-degrees. It could be that a vertex whose in-degree in F is 0 or 1 has significantly larger in-degree after adding E_i. So, we perform the following process for reducing the in-degrees. For each v ∈ V(G) whose in-degree in E_i ∪ F is t > 1, we randomly delete precisely t−1 edges of E_i entering v, thereby reducing v's in-degree to 1 (note: this means that if v's in-degree in F is 1 we remove all edges of E_i entering it, and if v's in-degree in F is 0 we keep just one edge of E_i entering it, where the kept edge is chosen uniformly at random). Let E**_i be the set of edges removed from E_i by this process. Then we have that (E_i \ E**_i) ∪ F has maximum in-degree at most 1 and maximum out-degree at most 1.
We next consider Item (d). For u ∈ V(G), let W_u denote the set of G-neighbors of u in X_{i−1}. Since G⃗_{i−1} is successful, we have that |W_u| = (1 ± o_n(1)) r q_{i−1}. Let Z be the number of vertices with in-degree 0 in E_i ∪ F. Suppose u has in-degree 0 in F. In order for u to remain with in-degree 0 in E_i ∪ F, it must be that each vertex v ∈ W_u has Γ_i(v) ≠ u. The probability of this happening is precisely (1 − 1/r)^{|W_u|} = (1 − 1/r)^{(1±o_n(1)) r q_{i−1}}. Since G⃗_{i−1} is successful, there are (1 ± o_n(1)) n q_{i−1} vertices u with in-degree 0 in F. We obtain that
$$E[Z] = (1\pm o_n(1))\,nq_{i-1}\Big(1-\frac{1}{r}\Big)^{(1\pm o_n(1))\,rq_{i-1}} = (1\pm o_n(1))\,nq_i.$$
We can prove that Z is tightly concentrated around its expectation, as we have done in Lemma 4.2, using martingales. Let Z_0 = E[Z] and let Z_h be the conditional expectation of Z after the edge (v_h, Γ_i(v_h)) of E_i has been exposed, so that we have Z_{|X_{i−1}|} = Z. Then Z_0, Z_1, ..., Z_{|X_{i−1}|} is a martingale satisfying the Lipschitz condition (since the exposure of an edge can change the number of vertices with in-degree 0 by at most one), so by Azuma's inequality, for every λ > 0,
$$\Pr\big[\,|Z - E[Z]| > \lambda\sqrt{|X_{i-1}|}\,\big] < 2e^{-\lambda^2/2}.$$
In particular, Z = (1 ± o_n(1)) n q_i with probability p_{(d)} = 1 − o_n(1) (the o_n(1) term in the probability can even be taken to be exponentially small in n).
We next consider Item (e), whose proof is quite similar to that of Item (d) above. Let Z_v denote the number of G-neighbors of v with in-degree 0 in E_i ∪ F. Since G⃗_{i−1} is successful, there are (1 ± o_n(1)) r q_{i−1} G-neighbors of v with in-degree 0 in F, so the expected value of Z_v is
$$E[Z_v] = (1\pm o_n(1))\,rq_{i-1}\Big(1-\frac{1}{r}\Big)^{(1\pm o_n(1))\,rq_{i-1}} = (1\pm o_n(1))\,rq_i.$$
As in the previous paragraph, we apply Azuma's inequality to show that Z_v = (1 ± o_n(1)) r q_i with probability 1 − o_n(1/n), so for all v ∈ V(G) this holds with probability p_{(e)} = 1 − o_n(1).
We finally consider Item (f), which is somewhat more delicate, as we have to make sure that after the removal of E**_i, the vertices of X_{i−1} that remain with out-degree 0 are distributed roughly equally among the neighborhoods of all vertices of G. Fix some u ∈ V(G), and consider again W_u, the set of G-neighbors of u in X_{i−1}, recalling that |W_u| = (1 ± o_n(1)) r q_{i−1}. Suppose v ∈ W_u. We would like to estimate the probability that (v, Γ_i(v)) ∉ E**_i. For this to happen, a necessary condition is that Γ_i(v) has in-degree 0 in F. As there are (1 ± o_n(1)) r q_{i−1} G-neighbors of v with in-degree 0 in F, this occurs with probability q_{i−1}(1 ± o_n(1)). Now, given that Γ_i(v) has in-degree 0 in F, suppose that Γ_i(v) has t in-neighbors in E_i. Then, the probability that (v, Γ_i(v)) ∉ E**_i is 1/t. The probability that Γ_i(v) has t in-neighbors in E_i (including v) is
$$\binom{s-1}{t-1}\Big(\frac{1}{r}\Big)^{t-1}\Big(1-\frac{1}{r}\Big)^{s-t} = (1\pm o_n(1))\,\frac{q_{i-1}^{t-1}}{(t-1)!\,e^{q_{i-1}}},$$
where s = (1 ± o_n(1)) r q_{i−1} is the number of G-neighbors of Γ_i(v) in X_{i−1} \ {v}. We therefore have that
$$\Pr[(v,\Gamma_i(v))\notin E^{**}_i] = (1\pm o_n(1))\,q_{i-1}\sum_{t=1}^{\infty}\frac{1}{t}\cdot\frac{q_{i-1}^{t-1}}{(t-1)!\,e^{q_{i-1}}} = (1\pm o_n(1))\,\big(1-e^{-q_{i-1}}\big).$$
Since |W_u| = (1 ± o_n(1)) r q_{i−1}, we have from the last equation that the expected number of neighbors of u with out-degree 0 in F ∪ (E_i \ E**_i) is (1 ± o_n(1)) r q_{i−1} e^{−q_{i−1}} = (1 ± o_n(1)) r q_i. Once again, using Azuma's inequality as in the previous cases, we have that the number of neighbors of u with out-degree 0 in F ∪ (E_i \ E**_i) is (1 ± o_n(1)) r q_i with probability 1 − o(1/n), so this holds with probability p_{(f)} = 1 − o_n(1) for all u ∈ V(G).
Finally, we define F_i = E_i \ (E*_i ∪ E**_i), so items (b)-(f) hold for F_i ∪ F with probability at least 1 − (1−p_{(b)}) − (1−p_{(d)}) − (1−p_{(e)}) − (1−p_{(f)}) > 1/2 (recall that |E*_i| = o(n), so its removal does not change the asymptotic linear quantities stated in items (d), (e), (f)).
By Lemma 4.8, with probability at least 1 2 i , we have that G i is successful. Assuming that G i is successful, let F 1 , . . . , F i satisfy Definition 4.7. Let X i be the set of vertices with out-degree 0 in
∪ i j=1 F j . Since G i is successful we have that |X i | = (1 ± o n (1))nq i . Consider the set of edges of G i+1 emanating from X i , denoting them by E i+1 = {(v, Γ i+1 (v)) | v ∈ X i }.
By adding E i+1 to ∪ i j=1 F j we therefore obtain an out-degree one orientation of G, which we denote by E i+1 ∪ (∪ i j=1 F j ).
Lemma 4.9. Let i ≥ 4 2 . Given that G i is successful, and given the corresponding forest ∪ i j=1 F j , the probability that (E i+1 ∪ (∪ i j=1 F j )) ∈ H 3,nq i (1±on(1)) (G) ∩ H * ln n (G) is at least
(1 − 5q i − o n (1)) nq i .
Proof. Fix an arbitrary ordering of the vertices of X i , say v 1 , . . . , v |X i | . We consider the edges (v h , Γ i+1 (v h )) one by one, and let E i+1,h ∪ (∪ i j=1 F j ) be the graph obtained after adding to ∪ i j=1 F j the edges (v , Γ i+1 (v )) for 1 ≤ ≤ h. Also let E i+1,0 = ∅. We say that E i+1,h ∪ (∪ i j=1 F j ) is good if it satisfies the following properties:
(i) The in-degree of each vertex of E i+1,h ∪ (∪ i j=1 F j ) is at most 2. (ii) Every component of E i+1,h ∪ (∪ i j=1 F j ) with fewer than n/ ln n vertices is a tree. (iii) The number of vertices with in-degree 2 in E i+1,h ∪ (∪ i j=1 F j ) is at most h. Trivially, E i+1,0 ∪ (∪ i j=1 F j ) = ∪ i j=1 F j is good, since by Definition 4.7, ∪ i j=1 F j is a forest where the in-degree of each vertex is at most 1. We estimate the probability that E i+1,h+1 ∪ (∪ i j=1 F j ) is good given that E i+1,h ∪ (∪ i j=1 F j ) is good. So, consider now the vertex v h+1 . Since E i+1,h ∪ (∪ i j=1 F j ) is assumed good, v h+1 has at most h neighbors with in-degree 2 in E i+1,h ∪ (∪ i j=1 F j ). Thus, there is a subset S of at least r − h G- neighbors of v h+1 which still have in-degree at most 1 in E i+1,h ∪(∪ i j=1 F j ). If the component of v h+1 in E i+1,h ∪(∪ i j=1 F j )
has fewer than n/ log n vertices, then also remove all vertices of this component from S. In any case we have that |S| ≥ r − h − n/ ln n. The probability that Γ i+1 (v h+1 ) ∈ S is therefore at least
r − h − n ln n r = 1 − h + n ln n r ≥ 1 − nq i (1 ± o n (1)) + n ln n n/5 = 1 − 5q i − o n (1) .
Now, to have Γ i+1 (v h+1 ) ∈ S means that we are not creating any new components of size smaller than n/ ln n, so all components of size at most n/ ln n up until now are still trees and furthermore, E i+1,h+1 ∪ (∪ i j=1 F j ) still has maximum in-degree at most 2 and at most one additional vertex, namely Γ i+1 (v h+1 ), which may become now of in-degree 2, so it has at most h + 1 vertices with in-degree 2. Consequently, E i+1,h+1 ∪ (∪ i j=1 F j ) is good. We have therefore proved that the final E i+1 ∪ (∪ i j=1 F j ) is good with probability at least
(1 − 5q i − o n (1)) |X i | ≥ (1 − 5q i − o n (1)) nq i (1±on(1)) = (1 − 5q i − o n (1)) nq i .
Finally, notice that the definition of goodness means that (
E i+1 ∪ (∪ i j=1 F j )) ∈ H 3,nq i (1±on(1)) (G) ∩ H * ln n (G).
Lemma 4.10. Let K ≥ 5 be given.
P 4,0,ln n (G) 1/n ≥ P 3,nq K−1 (1±on(1)),ln n (G) 1/n ≥ (1 − o n (1)) (1 − 5q K−1 ) q K−1 K .
Proof. The first inequality is trivial since an out-degree one orientation with maximum-in degree at most 3 has zero vertices with in-degree 4 or larger. So, we only prove the second inequality.
Consider the K-stage model, and let i = K − 1 ≥ 4. By Lemma 4.8, with probability at least 1 2 i , G i is successful. Thus, by Lemma 4.9 and Definition 4.7, with probability at least (1)) nq i the following holds: There is an out-degree one orientation G consisting of edges of G j for j = 1, . . . , K, at most nq i (1 ± o n (1)) of which are edges of G K , which is in H 3,nq i (1±on(1)) (G) ∩ H * ln n (G). Assuming that this holds, for j = 1, . . . , K let X i be the set of vertices whose out-edge in G is from G j . Now, let p 1 , . . . , p K with p 1 + · · · + p K = 1 be the probabilities associated with the K-stage model. The probability that in the second part of the K-stage model, each vertex v ∈ X j will indeed choose Γ j (v) is precisely
1 2 i (1 − 5q i − o nK j=1 p |X j | j .
Using p j = 1 K for all p j 's and recalling that the final outcome of the K-stage model is a completely random element of H(G), we have that
P 3,nq i (1±on(1)),ln n (G) 1/n ≥ 1 2 i (1 − 5q i − o n (1)) nq i 1 K n .
Taking the n'th root from both sides yields the lemma. It is not too difficult to prove that if we additively increase the minimum degree requirement in Lemma 2.1 by a small constant, then we can allow for many more vertices of degree k in that lemma. This translates to an increase in the constants z 3 and z 4 . For example, in the case k = 4 a minimum degree of n/5 + 2 already increases z 4 to about 0.4 (instead of z 4 = 0.1527... above) and in the case k = 3 a minimum degree of n/4 + 17 increases z 3 to about 0.2 (instead of z 3 = 0.0494... above). However, we prefer to state Theorem 1.1 in the cleaner form of minimum degree exactly n/(k + 1) for all k ≥ 3.
Regular connected graphs with high minimum degree and c k (G) = 0
In this subsection we show that the requirement on the minimum degree in Theorem 1.1 is essentially tight. For every k ≥ 2 and for infinitely many n, there are connected r-regular graphs G with r = n/(k + 1) − 2 for which c k (G) = 0. We mention that a construction for the case k = 2 is proved in [3].
Let t ≥ k + 4 be odd. Let G 0 , . . . , G k be pairwise vertex-disjoint copies of K t . Designate three vertices of each G i for 1 ≤ i ≤ k where the designated vertices of G i are v i,0 , v i,1 , v i,2 . Also designate k + 2 vertices of G 0 denoting them by v 0,0 , . . . , v 0,k+1 . We now remove a few edges inside the G i 's and add a few edges between them as follows. For 1 ≤ i ≤ k, remove the edges v i,0 v i,1 and v i,0 v i,2 and remove a perfect matching on the t − 3 undesignated vertices of G i . Notice that after removal, each vertex of G i has degree t − 2, except v i,0 which has degree t − 3. Now consider G 0 and remove from it all edges of the form v 0,0 v 0,j for 1 ≤ j ≤ k + 1. Also remove a perfect matching on the remaining t − k − 2 undesignated vertices of G 0 . Notice that after removal, each vertex of G 0 has degree t − 2, except v 0,0 which has degree t − k − 2. Finally, add the edges v 0,0 v i,0 for 1 ≤ i ≤ k. After addition, each vertex has degree precisely t − 2. So the obtained graph G is connected, has n = (k + 1)t vertices, and is r-regular for r = n/(k + 1) − 2. However, notice that any spanning tree of G must contain all edges v 0,0 v i,0 for 1 ≤ i ≤ k and must also contain at least one edge connecting v 0,0 to another vertex in G 0 . Thus, v 0,0 has degree at least k + 1 in every spanning tree.
Suppose next that k ≥ 2 is even and suppose that t ≥ k + 5 be odd. We slightly modify the construction above. First, we now take G 0 to be K t+1 . Now, there are k + 3 designated vertices in G 0 , denoted by v 0,0 , . . . , v 0,k+2 . The removed edges from the G i for 1 ≤ i ≤ k stay the same. The removed edges from G 0 are as follows. We remove all edges of the form v 0,0 v 0,j for 1 ≤ j ≤ k + 2. We remove a perfect matching on the vertices v 0,1 , . . . , v 0,k+2 . We also remove a Hamilton cycle on the t − k − 2 undesignated vertices of G 0 . Finally, as before, add the edges v 0,0 v i,0 for 1 ≤ i ≤ k. After addition, each vertex has degree precisely t − 2. So the obtained graph G is connected, has n = (k + 1)t + 1 vertices, and is r-regular for r = (n − 1)/(k + 1) − 2 = n/(k + 1) − 2. However, notice that as before, v 0,0 has degree at least k + 1 in every spanning tree.
Proofs of Theorems 1.3 and 1.4
In this section we prove Theorems 1.3 and 1.4. As their proofs are essentially identical, we prove them together. We assume that k ≥ k 0 where k 0 is a sufficiently large absolute constant satisfying the claimed inequalities. Although we do not try to optimize k 0 , it is not difficult to see from the computations that it is a moderate value.
Consider some G ∈ H(G). An ordered pair of distinct vertices u, v ∈ V (G) is a removable edge if Γ(u) = v (so in particular uv ∈ E(G)) and the in-degree of v in G is at least k − 1.
Lemma 5.1. Suppose that k ≥ k 0 . Let G be a graph with minimum degree at least δ = n/(k + 1) and maximum degree at most ∆ = n(1 − 3 ln k/k). Then with probability at least 1 2 , G has at most n/(14k) removable edges. The same holds if G has minimum degree at least δ * = n k (1 + 3 ln k/k) and unrestricted maximum degree.
Proof. Consider some ordered pair of distinct vertices u, v ∈ V (G) such that uv ∈ E(G). For that pair to be a removable edge, it must hold that: (i) Γ(u) = v, and (ii) v has at least k − 2 in-neighbors in N (v) \ u. As (i) and (ii) are independent, and since Pr[Γ(u) = v] = 1/d(u), we need to estimate the number of in-neighbors of v in N (v) \ u, which is clearly at most v's in-degree in G. So let D v be the random variable corresponding to v's in-degree in G. Observe that D v is the sum of independent random variables
D v = w∈N (v) D v,w where D v,w is the indicator variable for the event Γ(w) = v.
Consider first the case where G has minimum degree at least δ and maximum degree at most ∆. In particular, D v ≤ X where X ∼ Bin(∆, 1/δ). (1)). Then by Chernoff's inequality (see [2] Appendix A) we have that for sufficiently large k,
E[X] = ∆ δ = (k + 1)(1 − 3 ln k/k) = k(1 − o k (1)) . Now let a = k − 2 − E[X] = 3 √ k ln k(1 − o kPr[D v ≥ k − 2] ≤ Pr[X ≥ k − 2] = Pr[X − E[X] ≥ a] ≤ e −a 2 /(2E[X])+a 3 /(2(E[X]) 2 ) ≤ e −(1−o k (1))9k ln k/(2k)+(1+o k (1))27k 3/2 ln 3/2 k/(2k 2 ) ≤ 1 k 4(3)
where the last inequality holds for k ≥ k 0 . It follows that the probability that u, v is a removable edge is at most (1/d(u))/k 4 ≤ 1/(δk 4 ) ≤ 1/(nk 2 ). Consider next the case where G has minimum degree at least δ * . In particular, D v ≤ X where X ∼ Bin(n, 1/δ * ). (1)). So as in (3), we obtain that Pr[D v ≥ k − 2] ≤ 1/k 4 . It follows that the probability that u, v is a removable edge is at most 1/(δ * k 4 ) ≤ 1/(nk 2 ).
E[X] = n δ * = k 1 + 3 ln k/k = k(1 − o k (1)) . Now let a = k − 2 − E[X] = 3 √ k ln k(1 − o k
As there are fewer than n 2 ordered pairs to consider, the expected number of removable edges is in both cases is at most n/k 2 . By Markov's inequality, with probability at least 1 2 , G has at most 2n/k 2 ≤ n/(14k) removable edges.
Lemma 5.2. Suppose that k ≥ k 0 . Let G be a graph with minimum degree at least δ = n/(k + 1) and maximum degree at most ∆ = n(1 − 3 ln k/k). With probability at least 1 4 , G has a spanning forest F such that: (a) F has maximum in-degree at most k − 2.
(b) F has at least n − n/(7k) edges. The same holds if G has minimum degree at least δ * = n k (1 + 3 ln k/k) and unrestricted maximum degree.
Proof. By Lemma 3.2, with probability at most 1 4 we have that G has more than 4(k + 1) ln n components. Recalling that in G each component can be made a tree by removing a single edge from its unique directed cycle, with probability at least 3 4 we have that G can be made acyclic by removing at most 4(k + 1) ln n edges. By Lemma 5.1, with probability at least 1 2 , G has at most n/(14k) removable edges. So, with probability at least 3 4 − 1 2 = 1 4 we have a forest subgraph of G with at least n − 4(k + 1) ln n − n/(14k) ≥ n − n/(7k) edges in which all removable edges have been removed. But observe that after removing the removable edges, each vertex has in-degree at most k − 2.
Using the two-stage model, consider the graphs G 1 , G 2 as denoted in Section 3. For a given k ≥ k 0 , we say that G 1 is successful if it has a spanning forest as guaranteed by Lemma 5.2. By that lemma, with probability at least 1 4 , we have that G 1 is successful. Assuming it is successful, designate a spanning forest F 1 of it satisfying the properties of Lemma 5.2. Let X 1 ⊂ V (G) be the set of vertices with out-degree 0 in F 1 . Thus, we have by Lemma 5.2 that |X 1 | ≤ n/(7k). Consider the set of edges of the G 2 emanating from X 1 , denoting them by E 2 = {(v, Γ 2 (v)) | v ∈ X 1 }. By adding E 2 to F 1 we therefore obtain an out-degree one orientation of G, which we denote by E 2 ∪F 1 . The following lemma is analogous to Lemma 4.5.
Lemma 5.3. Given that G 1 is successful, and given the corresponding forest F 1 , the probability that (E 2 ∪ F 1 ) ∈ H k,n/(7k) (G) ∩ H * ln n (G) is at least ( 5 6 ) n/(7k) .
Proof. Fix an arbitrary ordering of the vertices of X 1 , say v 1 , . . . , v |X 1 | . We consider the edges (v i , Γ 2 (v i )) one by one, and let E 2,i ∪ F 1 be the graph obtained after adding to F 1 the edges (v j , Γ 2 (v j )) for 1 ≤ j ≤ i. Also let E 2,0 = ∅. We say that E 2,i ∪ F 1 is good if it satisfies the following two properties: (i) The in-degree of each vertex in E 2,i ∪ F 1 is at most k − 1.
(ii) Every component of E 2,i ∪ F 1 with fewer than n/ ln n vertices is a tree.
(iii) The number of vertices in E 2,i ∪ F 1 with in-degree k − 1 is at most i.
Note that E 2,0 ∪ F 1 = F 1 is good, since F 1 is a forest where the in-degree of each vertex is at most k − 2. We estimate the probability that E 2,i+1 ∪ F 1 is good given that E 2,i ∪ F 1 is good. By our assumption, v i+1 has at most i neighbors with in-degree k − 1 in E 2,i ∪ F 1 . Thus, there is a subset S of at least d(v i+1 ) − i neighbors of v i+1 in G which still have in-degree at most k − 2 in E 2,i ∪ F 1 . As in Lemma 4.5, we may further delete at most n/ ln n vertices from S in case the component of v i+1 in E 2,i ∪ F 1 has fewer than n/ ln n vertices so that in any case we have that |S| ≥ d(v i+1 ) − i − n/ ln n. The probability that Γ 2 (v i+1 ) ∈ S is therefore at least (note that d(v i+1 ) ≥ n/(k + 1) trivially holds also in the assumption of Theorem 1.4). To have Γ 2 (v i+1 ) ∈ S means that we are not creating any new components of size smaller than n/ ln n and that E 2,i+1 ∪ F 1 has at most i + 1 vertices with in-degree k − 1. In other words, it means that E 2,i+1 ∪ F 1 is good. We have therefore proved that the final E 2 ∪ F 1 is good with probability at least 5 6 .
d(v i+1 ) − i − n ln n d(v i+1 ) ≥ 1 − i + n
Finally, note that the goodness of E 2 ∪ F 1 means that it is in H k,n/(7k) (G) ∩ H * ln n (G). Proof. Considering the two-stage model, we have by Lemma 5.2 that with probability at least 1 4 , G 1 is successful. Thus, by Lemma 5.3, with probability at least 1 4
6
n/(7k) the following holds: There is an out-degree one orientation G consisting of x ≥ n − n/(7k) edges of G 1 and hence at most n/(7k) edges of G 2 , which is in H k,n/(7k) (G) ∩ H * ln n (G). Assuming that this holds, let X be the set of vertices whose out-edge in G is from G 1 . Now, let p 1 + p 2 = 1 be the probabilities associated with the two-stage model where we assume p 2 < 1 2 . The probability that in the second part of the two-stage model, each vertex v ∈ X will indeed choose Γ 1 (v) and each vertex v ∈ V (G) \ X will indeed choose Γ 2 (v) is precisely p x 1 p n−x 2 ≥ (1 − p 2 ) n−n/(7k) p 2 n/(7k) .
Optimizing, we will choose p 2 = 1/(7k). Recalling that the final outcome of the two-stage model is a completely random element of H(G), we have that .
Taking the n'th root from both sides yields the lemma.
Figure 1 :
1Constructing F * from F in Case (b) of Lemma 2.1 (here we use k = 3). The figure depicts the component T i containing u i and some other component containing w 1 (and w 2 in this example)
Lemma 4. 4 .
4Let G be an r-regular graph with r ≥ n/(k + 1). With probability at least 1 20 , G has a spanning forest F such that: (a) F has maximum in-degree at most k − 2.(b) F has at least n − 2n e(k−1)! edges. (c) The number of vertices of F with in-degree at most k − 3 is at least (1 − o n (1))
Lemma 4.2 applied to X = V (G), with probability at least 1 − (k − 1)/n 2 > 1 − 1/40 we have that for all 0 ≤ i ≤ k − 3, the number of vertices of G with in-degree i is at least n/(i!e) − n 2/3 ≥ (1 − o n (1))n/(i!e). Thus, with probability at least 1 − 1/40 there are at least (1 − o n (1)) vertices of G with in-degree at most k − 3.
Proof of Theorem 1.1 for k ∈ {3, 4}. Consider first the case k = 4 where we will use K = 5. A simple computation gives that q K−1 = q 4 = 0.162038... so we have by Lemma 4.10 thatP 4,0,ln n (G) 1/n ≥ (1 − o n (1)) (1 − 5q 4 ) q 4 5 = (1 − o n (1))0.1527... .As trivially P 4,0,ln n (G) ≤ P 4,n/28,ln n (G) we have by Lemma 3.1 thatc 4 (G) 1/n ≥ (1 − o n (1))d(G) 1/n (1 − o n (1))0.1527... = (1 − o n (1))d · 0.1527... .Consider now the case k = 3 where we will use K = 20. A simple computation gives that q K−1 = q 19 = 0.045821... so we have by Lemma 4.10 that P 3,n/21,ln n (G) 1/n ≥ P 3,nq 19 (1±on(1)),ln n (G) 1/n≥ (1 − o n (1)) (1 − 5q 19 ) q 19 20 = (1 − o n (1))0.0494... .We now have by Lemma 3.1 that c 3 (G) 1/n ≥ (1 − o n (1))d(G) 1/n (1 − o n (1))0.0494... = (1 − o n (1))d · 0.0494... .
.
Combining Lemma 5.4 and Lemma 3.1 we have thatc k (G) 1/n ≥ (1 − o n (1))d(G) 1/n z * k .
Table 2 :
2A roadmap for the proofs of Theorems 1.1, 1.3, 1.4.
A component of a directed graph is a component of its underlying undirected graph.
We require this assumption so that the value 1−5qi used in the lemma, is positive. Indeed, already q4 = 0.162038... satisfies this (observe also that qi = qi−1/e q i−1 is monotone decreasing).
AcknowledgmentThe author thanks the referees for their useful comments.
The number of spanning trees in regular graphs. N Alon, Random Structures & Algorithms. 12N. Alon. The number of spanning trees in regular graphs. Random Structures & Algorithms, 1(2):175-181, 1990.
The probabilistic method. N Alon, J Spencer, John Wiley & SonsN. Alon and J. Spencer. The probabilistic method. John Wiley & Sons, 2004.
D W Cranston, O Suil, Hamiltonicity in connected regular graphs. Information Processing Letters. 113D. W. Cranston and O. Suil. Hamiltonicity in connected regular graphs. Information Process- ing Letters, 113(22-24):858-860, 2013.
Hamiltonian cycles in dirac graphs. B Cuckler, J Kahn, Combinatorica. 293B. Cuckler and J. Kahn. Hamiltonian cycles in dirac graphs. Combinatorica, 29(3):299-326, 2009.
Spanning trees of bounded degree. A Czygrinow, G Fan, G Hurlbert, H A Kierstead, W T Trotter, The Electronic Journal of Combinatorics. A. Czygrinow, G. Fan, G. Hurlbert, H. A. Kierstead, and W. T. Trotter. Spanning trees of bounded degree. The Electronic Journal of Combinatorics, pages R33-R33, 2001.
Some theorems on abstract graphs. G A Dirac, Proceedings of the London Mathematical Society. 31G. A. Dirac. Some theorems on abstract graphs. Proceedings of the London Mathematical Society, 3(1):69-81, 1952.
Counting and packing hamilton cycles in dense graphs and oriented graphs. A Ferber, M Krivelevich, B Sudakov, Journal of Combinatorial Theory, Series B. 122A. Ferber, M. Krivelevich, and B. Sudakov. Counting and packing hamilton cycles in dense graphs and oriented graphs. Journal of Combinatorial Theory, Series B, 122:196-220, 2017.
The average number of spanning trees in sparse graphs with given degrees. C Greenhill, M Isaev, M Kwan, B D Mckay, European Journal of Combinatorics. 63C. Greenhill, M. Isaev, M. Kwan, and B. D. McKay. The average number of spanning trees in sparse graphs with given degrees. European Journal of Combinatorics, 63:6-25, 2017.
A bound for the complexity of a simple graph. R Grone, R Merris, Discrete mathematics. 691R. Grone and R. Merris. A bound for the complexity of a simple graph. Discrete mathematics, 69(1):97-99, 1988.
The number of spanning trees in graphs with a given degree sequence. A V Kostochka, Random Structures & Algorithms. 62-3A. V. Kostochka. The number of spanning trees in graphs with a given degree sequence. Random Structures & Algorithms, 6(2-3):269-274, 1995.
Spanning trees in regular graphs. B D Mckay, European Journal of Combinatorics. 42B. D. McKay. Spanning trees in regular graphs. European Journal of Combinatorics, 4(2):149- 160, 1983.
On the number of hamiltonian cycles in dirac graphs. G N Sárközy, S M Selkow, E Szemerédi, Discrete Mathematics. 2651-3G. N. Sárközy, S. M. Selkow, and E. Szemerédi. On the number of hamiltonian cycles in dirac graphs. Discrete Mathematics, 265(1-3):237-250, 2003.
Existenz von gerüsten mit vorgeschriebenem maximalgrad in graphen. S Win, Mathematischen Seminar der Universität Hamburg. 431S. Win. Existenz von gerüsten mit vorgeschriebenem maximalgrad in graphen. Mathematischen Seminar der Universität Hamburg, 43(1):263-267, 1975.
| [] |
[
"Provably Stabilizing Global-Position Tracking Control for Hybrid Models of Multi-Domain Bipedal Walking via Multiple Lyapunov Analysis",
"Provably Stabilizing Global-Position Tracking Control for Hybrid Models of Multi-Domain Bipedal Walking via Multiple Lyapunov Analysis"
] | [
"Yuan Gao [email protected] ",
"Kentaro Barhydt [email protected] ",
"Christopher Niezrecki [email protected] ",
"Yan Gu [email protected] ",
"\nDept. of Mechanical Engineering\nDepartment of Mechanical Engineering Massachusetts Institute of Technology Cambridge, Massachusetts\nDepartment of Mechanical Engineering\nUniversity of Massachusetts Lowell\n01854, 02139Massachusetts\n",
"\nSchool of Mechanical Engineering\nThe University of Massachusetts Lowell\n01851Christopher\n",
"\nPurdue University West Lafayette\n47907Indiana\n"
] | [
"Dept. of Mechanical Engineering\nDepartment of Mechanical Engineering Massachusetts Institute of Technology Cambridge, Massachusetts\nDepartment of Mechanical Engineering\nUniversity of Massachusetts Lowell\n01854, 02139Massachusetts",
"School of Mechanical Engineering\nThe University of Massachusetts Lowell\n01851Christopher",
"Purdue University West Lafayette\n47907Indiana"
] | [] | Accurate control of a humanoid robot's global position (i.e., its three-dimensional position in the world) is critical to the reliable execution of high-risk tasks such as avoiding collision with pedestrians in a crowded environment. This paper introduces a time-based nonlinear control method that achieves accurate global-position tracking (GPT) for multi-domain bipedal walking. Deriving a tracking controller for bipedal robots is challenging due to the highly complex robot dynamics that are timevarying and hybrid, especially for multi-domain walking that involves multiple phases/domains of full actuation, over actuation, and underactuation. To tackle this challenge, we introduce a continuous-phase GPT control law for multi-domain walking, which provably ensures the exponential convergence of the entire error state within the full and over actuation domains and that of the directly regulated error state within the underactuation domain. We then construct sufficient multiple-Lyapunov stability conditions for the hybrid multi-domain tracking error system under the proposed GPT control law. We illustrate the proposed controller design through both * Equal contribution.three-domain walking with all motors activated and twodomain gait with inactive ankle motors. Simulations of a ROBOTIS OP3 bipedal humanoid robot demonstrate the satisfactory accuracy and convergence rate of the proposed control approach under two different cases of multi-domain walking as well as various walking speeds and desired paths. | 10.48550/arxiv.2304.13943 | [
"https://export.arxiv.org/pdf/2304.13943v1.pdf"
] | 258,352,805 | 2304.13943 | d5eef194916d34f228f5c637b50eb25c338885c2 |
Provably Stabilizing Global-Position Tracking Control for Hybrid Models of Multi-Domain Bipedal Walking via Multiple Lyapunov Analysis
Yuan Gao [email protected]
Kentaro Barhydt [email protected]
Christopher Niezrecki [email protected]
Yan Gu [email protected]
Dept. of Mechanical Engineering
Department of Mechanical Engineering Massachusetts Institute of Technology Cambridge, Massachusetts
Department of Mechanical Engineering
University of Massachusetts Lowell
01854, 02139Massachusetts
School of Mechanical Engineering
The University of Massachusetts Lowell
01851Christopher
Purdue University West Lafayette
47907Indiana
Provably Stabilizing Global-Position Tracking Control for Hybrid Models of Multi-Domain Bipedal Walking via Multiple Lyapunov Analysis
Accurate control of a humanoid robot's global position (i.e., its three-dimensional position in the world) is critical to the reliable execution of high-risk tasks such as avoiding collision with pedestrians in a crowded environment. This paper introduces a time-based nonlinear control method that achieves accurate global-position tracking (GPT) for multi-domain bipedal walking. Deriving a tracking controller for bipedal robots is challenging due to the highly complex robot dynamics that are timevarying and hybrid, especially for multi-domain walking that involves multiple phases/domains of full actuation, over actuation, and underactuation. To tackle this challenge, we introduce a continuous-phase GPT control law for multi-domain walking, which provably ensures the exponential convergence of the entire error state within the full and over actuation domains and that of the directly regulated error state within the underactuation domain. We then construct sufficient multiple-Lyapunov stability conditions for the hybrid multi-domain tracking error system under the proposed GPT control law. We illustrate the proposed controller design through both * Equal contribution.three-domain walking with all motors activated and twodomain gait with inactive ankle motors. Simulations of a ROBOTIS OP3 bipedal humanoid robot demonstrate the satisfactory accuracy and convergence rate of the proposed control approach under two different cases of multi-domain walking as well as various walking speeds and desired paths.
INTRODUCTION
Multi-domain walking of legged locomotors refers to the type of walking that involves multiple continuous foot-swinging phases and discrete foot-landing behaviors within a gait cycle, due to changes in foot-ground contact conditions and actuation authority [1,2]. Human walking is a multi-domain process that involves phases with different actuation types. These phases include: (1) full actuation phases during which the support foot is flat on the ground and the number of actuators is equal to that of the degrees of freedom (DOFs); (2) underactuation phases where the support foot rolls about its toe and the number of actuators is less than that of the DOFs; and (3) over actuation phases within which both feet are on the ground and there are more actuators than DOFs.
Researchers have proposed various control strate-1 Fig. 1.
Illustration of the Darwin OP3 robot, which is used to validate the proposed global-position tracking control approach. Darwin OP3 is a bipedal humanoid robot with twenty revolute joints, designed and manufactured by ROBOTIS [3]. The reference frame of the robot's floating base, highlighted as "{Base}", is located at the center of the chest.
gies to achieve stable multi-domain walking for bipedal robots. Zhao et al. [2] proposed a hybrid model to capture the multi-domain robot dynamics and used offline optimization to obtain the desired motion trajectory based on the hybrid model. An input-output linearizing control scheme was then applied to drive the robot state to converge to the desired trajectory. The approach was validated on a physical planar bipedal robot, AM-BER2, and later extended to another biped platform [1], ATRIAS [4]. Hereid et al. utilized the reduced-order Spring Loaded Inverted Pendulum model [5] to design an optimization-based trajectory generation method that plans periodic orbits in the state space of the compliant bipedal robot [6], ATRIAS [7]. The method guarantees orbital stability of the multi-domain gait based on the hybrid zero dynamics (HZD) approach [8]. Reher et al. achieved an energy-optimal multi-domain walking gait on the physical robot platform, DURUS, by creating a hierarchical motion planning and control framework [9]. The framework ensures orbital walking stability and energy efficiency for the multi-domain robot model based on the HZD approach [8]. Hamed et al. established orbitally stable multi-domain walking on a quadrupedal robot [10,11] by modeling the associated hybrid full-order robot dynamics and constructing virtual constraints [12]. Although these approaches have realized provable stability and impressive performance of multi-domain walking on various physical robot platforms, it remains unclear how to directly extend them to solve general global-position tracking (GPT) control problems. In real-world mobility tasks, such as dynamic obstacle avoidance during navigation through a crowded hallway, a robot needs to control its global position accurately with precise timing. However, the previous methods' walking stabilization mechanism is orbital stabilization [13,8,14,15,16], which may not ensure reliable tracking of a time trajectory precisely with the desired timing.
We have developed a GPT control method that achieves exponential trajectory tracking for the hybrid model of two-dimensional (2-D) fully actuated bipedal walking [17,18,19]. To extend our approach to 3-D fully actuated robots, we considered the robot's lateral global movement and its coupling with forward dynamics through dynamics modeling and stability analysis [20,21,22,23]. For fully actuated quadrupedal robotic walking on a rigid surface moving in the inertial frame, we formulated the associated robot dynamics as a hybrid time-varying system and exploited the model to develop a GPT control law for fully actuated quadrupeds [24,25,26]. However, these methods designed for fully actuated robots cannot solve the multidomain control problem directly because they do not explicitly handle the underactuated robot dynamics associated with general multi-domain walking.
Some of the results presented in this paper have been reported in [27]. While our previous work in [27] focused on GPT controller design and stability analysis for hybrid multi-domain models of 2-D walking along a straight line, this study extends the previous method to 3-D bipedal robotic walking, introducing the following significant new contributions:
(a) Theoretical extension of the previous GPT control method from 2-D to 3-D bipedal robotic walking.
The key novelty is the formulation of a new phase variable that represents the distance traveled along a general curved walking path and can be used to encode the desired global-position trajectories along both straight lines and curved paths. (b) Lyapunov-based stability analysis to generate sufficient conditions under which the proposed GPT control method provably stabilizes 3-D multi-domain walking. Full proofs associated with the stability analysis are provided, while only sketches of partial proofs were reported in [27]. (c) Extension from three-domain walking with all motors activated to two-domain gait with inactive ankle motors, by formulating a hybrid two-domain system and developing a GPT controller for this new gait type. Such an extension was missing in [27]. (d) Validation of the proposed control approach through MATLAB simulations of a ROBOTIS OP3 humanoid robot (see Fig.1) with different types of multi-domain walking, both straight and curved paths, and various desired global-position profiles.
In contrast, our previous validation only used a simple 2-D biped with seven links [27]. (e) Casting the multi-domain control law as a quadratic program (QP) to ensure the feasibility of joint torque limits, and comparing its performance with an inputoutput linearizing control law, which were not included in [27].
This paper is structured as follows. Section 2 explains the full-order robot dynamics model associated with a common three-domain walking gait. Section 3 presents the proposed GPT control law for three-domain walking. Section 4 introduces the Lyapunov-based closedloop stability analysis. Section 5 summarizes the controller design extension from three-domain walking to a two-domain gait. Section 6 reports the simulation validation results. Section 7 discusses the capabilities and limitations of the proposed control approach. Section 8 provides the concluding remarks. Proofs of all theorems and propositions are given in Appendix A.
FULL-ORDER DYNAMIC MODELING OF
THREE-DOMAIN WALKING This section presents the hybrid model of bipedal robot dynamics associated with three-domain walking.
Coordinate Systems and Generalized Coordi-
nates This subsection explains the three coordinate systems used in the proposed controller design. Figure 2 illus-trates the three frames, with the x-, y-, and z-axes respectively highlighted in red, green, and blue.
World frame
The world frame, also known as the inertial frame, is rigidly attached to the ground (see "{World}" in Fig. 2).
Base frame
The base frame, illustrated as "{Base}" in Fig. 2, is rigidly attached to the robot's trunk. The x-direction (red) points forward, and the z-direction (blue) points towards the robot's head.
Vehicle frame
The origin of the vehicle frame (see "{Vehicle}" in Fig. 2) coincides with the base frame, and its z-axis remains parallel to that of the world frame. The vehicle frame rotates only about its z-axis by a certain heading (yaw) angle. The yaw angle of the vehicle frame with respect to (w.r.t) the world frame equals that of the base frame w.r.t. the world frame, while the roll and pitch angles of the vehicle frame w.r.t the world frame are 0.
Generalized coordinates
To use Lagrange's method to derive the robot dynamics model, we need to first introduce the generalized coordinates to represent the base pose and joint angles of the robot.
We use p b ∈ R 3 and γ γ γ b ∈ SO(3) to respectively denote the absolute base position and orientation w.r.t the world frame, and their coordinates are represented by (x b , y b , z b ) and (φ b , θ b , ψ b ). Here φ b , θ b , ψ b are the roll, pitch, and yaw angles, respectively. Then, the 6-D pose q b of the base is given by:
q b := [p T b , γ γ γ T b ] T .
Let the scalar real variables q 1 , ..., q n represent the joint angles of the n revolute joints of the robot. Then, the generalized coordinates of a 3-D robot, which has a floating base and n independent revolute joints, can be expressed as:
q = q T b , q 1 , ..., q n T ∈ Q,(1)
where Q ⊂ R n+6 is the configuration space. Note that the number of degrees of freedom (DOFs) of this robot without subjecting to any holonomic constraints is n + 6.
Walking Domain Description
For simplicity and without loss of generality, we consider the following assumptions on the foot-ground contact conditions during 3-D walking:
(A1) The toe and heel are the only parts of a support foot that can contact the ground [1]. (A2) While contacting the ground, the toes and/or heels have line contact with the ground. (A3) There is no foot slipping on the ground. Also, we consider the common assumption below about the robot's actuators:
(A4) All the n revolute joints of the robot are independently actuated.
Let n a denote the number of independent actuators, and n a = n holds under assumption (A4). Figure 3 illustrates the complete gait cycle of humanlike walking with a rolling support foot. As the figure displays, the complete walking cycle involves three continuous phases/domains and three discrete behaviors connecting the three domains. The three domains are:
(i) Full actuation (FA) domain, where n a equals the number of DOFs; (ii) Underactation (UA) domain, where the number of independent actuators (n a ) is less than that of the robot's DOFs; and (iii) Over actuation (OA) domain, where n a is greater than the number of DOFs.
The actuation types associated with the three domains are different because those domains have distinct footground contact conditions, which are explained next under assumptions (A1)-(A4).
FA domain
As illustrated in the "FA" portion of Fig. 3, only one foot is in support and it is static on the ground within the FA domain. Under assumption (A1), we know both the toe and heel of the support foot contact the ground. From assumptions (A2) and (A3), we can completely characterize the foot-ground contact condition with six independent scalar holonomic constraints. Using n c to denote the number of holonomic constraints, we have n c = 6 within an FA domain, and the number of DOFs becomes DOF = n + 6 − n c = n. Meanwhile, n a = n holds under assumption (A4). Since DOF = n a , all of the DOFs are directly actuated; that is, the robot is indeed fully actuated.
UA domain
The "UA" portion of Fig. 3 shows that the robot's support foot rolls about its toe within a UA domain. Under assumptions (A2) and (A3), the number of holonomic constraints is five, i.e., n c = 5. This is because the support foot can only roll about the line toe but its motion is fully restricted in terms of the 3-D translation and the pitch and yaw rotation. Then, the number of DOFs is: DOF = n + 6 − 5 = n + 1. Since the number of independent actuators, n a , equals n under assumption (A4) and is lower than the number of DOFs, (n + 1), the robot is underactuated with one degree of underactuation.
OA domain
Upon exiting the UA domain, the robot's swingfoot heel strikes the ground and enters the OA domain (Fig. 3). Within an OA domain, both the trailing toe and the leading heel of the robot contact the ground, which is described by ten scalar holonomic constraints (i.e., n c = 10). Thus, the DOF becomes DOF = n + 6 − n c = n − 4, which is less than the number of actuators under assumption (A4), meaning the robot is over actuated.
Hybrid Multi-Domain Dynamics
This subsection presents the full-order model of the robot dynamics that corresponds to multi-domain walking. Since multi-domain walking involves both continuous-time dynamics and discrete-time behaviors, a hybrid model is employed to describe the robot dynamics. To aid the readers in comprehending the hybrid system, the fundamentals of hybrid systems will be discussed first.
Preliminaries on hybrid systems
A hybrid control system HC is a tuple: U is the set of admissible control inputs.
S is a set of switching surfaces determining the occurrence of switching between domains. ∆ is a set of reset maps, which represents the impact dynamics between a robot's swing foot and the ground.
FG is a set of vector fields on the state manifold.
The elements of these sets are explained next.
Continuous-phase dynamics
Within any of the three domains, the robot only exhibits continuous movements, and its dynamics model is naturally continuous-time. Applying Lagrange's method, we obtain the second-order, nonlinear robot dynamics as:
M(q)q + c(q,q) = Bu + J T F c ,(2)
where M(q) : Q → R (n+6)×(n+6) is the inertia matrix. The vector c : T Q → R (n+6) is the summation of the Coriolis, centrifugal, and gravitational terms, where T Q is the tangent bundle of Q. The matrix B ∈ R (n+6)×n a is the input matrix. The vector u ∈ U ⊂ R n a is the joint torque vector. The matrix J(q) : Q → R n c ×(n+6) represents the Jacobian matrix. The vector F c ∈ R n c is the constraint force that the ground applies to the foot-ground contact region of the robot. Note that the dimensions of J and F c vary among the three domains due to differences in the ground-contact conditions. The holonomic constraints can be expressed as:
Jq +Jq = 0,(3)
where 0 is a zero matrix with an appropriate dimension. Combining Eqs. (2) and (3), we compactly express the continuous-phase dynamics model as [20]:
M(q)q +c(q,q) =B(q)u,(4)
where the vectorc and matrixB are defined as:c(q,q) :
= c − J T (JM −1 J T ) −1 (JM −1 c −Jq) andB(q) := B − J T (JM −1 J T ) −1 JM −1 B.
Switching surfaces
When a robot's state reaches a switching surface, it exits the source domain and enters the targeted domain. As displayed in Fig. 3, the three-domain walking involves three switching events, which are:
(i) Switching from FA to UA ("Support heel liftoff");
(ii) Switching from UA to OA ("Swing heel touchdown"); and (iii) Switching from OA to FA ("Leading toe touchdown").
The occurrence of these switching events is completely determined by the position and velocity of the robot's swing foot in the world frame as well as the ground-reaction force experienced by the support foot. We use switching surfaces to describe the conditions under which a switching event occurs.
When the heel of the support foot takes off at the end of the FA phase, the robot enters the UA domain (Fig. 3). This support heel liftoff condition can be described using the vertical ground-reaction force applied at the support heel, denoted as F c,z : T Q × U → R. We use S F→U to denote the switching surface connecting an FA domain and its subsequent UA domain, and express it as:
S F→U := {(q,q, u) ∈ T Q ×U : F c,z (q,q, u) = 0}.
The UA→OA switching occurs when the swing foot's heel lands on the ground (Fig. 3). Accordingly, we express the switching surface that connects a UA domain and its subsequent OA domain, denoted as S U→O , as:
S U→O (q,q) := {(q,q) ∈ T Q : z swh (q) = 0,ż swh (q,q) < 0},
where z swh : Q → R represents the height of the lowest point within the swing-foot heel above the ground.
As the leading toe touches the ground at the end of an OA phase, a new FA phase is activated (Fig. 3). In this study, we assume that the leading toe landing and the trailing foot takeoff occur simultaneously at the end of an OA phase, which is reasonable because the trailing foot typically remains contact with the ground for a brief period (e.g., approximately 3% of a complete human gait cycle [1]) after the touchdown of the leading foot's toe. The switching surface, S O→F , that connects an OA domain and its subsequent FA domain is then expressed as:
S O→F (q,q) := {(q,q) ∈ T Q : z swt (q) = 0,ż swt (q,q) < 0},
where z swt : Q → R represents the height of the swingfoot toe above the walking surface.
Discrete impact dynamics
The complete walking cycle involves two foot-landing impacts; one impact occurs at the landing of the swingfoot heel (i.e., transition from UA to OA), and the other at the touchdown of the leading-foot toe between the OA and FA phases. Note that the switching from FA to UA, characterized by the support heel liftoff, is a continuous process that does not induce any impacts.
We consider the case where the robot's feet and the ground are stiff enough to be considered as rigid, as summarized in the following assumptions [8,28]:
(A5) The landing impact between the robot's foot and the ground is a contact between rigid bodies. (A6) The impact occurs instantaneously and lasts for an infinitesimal period of time.
Due to the impact between two rigid bodies (assumption (A5)), the robot's generalized velocityq experiences a sudden jump upon a foot-landing impact. Unlike velocityq, the configuration q remains continuous across an impact event as long as there is no coordinate swap of the two legs at any switching event.
Letq − andq + represent the values ofq just before and after an impact, respectively. The impact dynamics can be described by the following nonlinear reset map [12]:
q + = ∆ ∆ ∆q(q)q − ,(5)
where ∆ ∆ ∆q : Q → R (n+6)×(n+6) is a nonlinear matrixvalued function relating the pre-impact generalized velocityq − to the post-impact valueq + . The derivation of ∆ ∆ ∆q is omitted and can be found in [8]. Note that the dimension of ∆ ∆ ∆q is invariant across the three domains since it characterizes the jumps of all floating-base generalized coordinates.
CONTROLLER DESIGN FOR THREE-DOMAIN WALKING
This section introduces the proposed GPT controller design based on the hybrid model of multi-domain bipedal robotic walking introduced in Section 2. The resulting controller provably ensures the exponential error convergence for the directly regulated DOFs within each domain. The sufficient conditions under which the proposed controller guarantees the stability for the overall hybrid system are provided in Section 4.
Desired Trajectory Encoding
As the primary control objective is to provably drive the global-position tracking error to zero, one set of desired trajectories that the proposed controller aims to reliably track is the robot's desired global-position trajectories. Since a bipedal humanoid robot typically has many more DOFs and actuators than the desired globalposition trajectories, the controller could regulate additional variables of interest (e.g., swing-foot pose).
We use both time-based and state-based phase variables to encode these two sets of desired trajectories, as explained next.
Time-based encoding variable
We choose to use the global time variable t to encode the desired global-position trajectories so that a robot's actual horizontal position trajectories in the world (i.e., x b and y b ) can be accurately controlled with precise timing, which is crucial for real-world tasks such as dynamic obstacle avoidance.
We use x d (t) : R + → R and y d (t) : R + → R to denote the desired global-position trajectories along the xand yaxis of the world frame, respectively, and ψ d (t) : R + → R is the desired heading direction. We assume that the desired horizontal global-position trajectories x d (t) and y d (t) are supplied by a higher-layer planner, and the design of this planner is not the focus of this study. Given x d (t) and y d (t), the desired heading direction ψ d (t) can be designed as a function of x d (t) and y d (t), which is
ψ d (t) := tan −1 (y d /x d ).
Such a definition ensures that the robot is facing forward during walking.
We consider the following assumption on the regularity condition of x d (t) and y d (t):
(A7) The desired global-position trajectories x d (t) and y d (t) are planned as continuously differentiable on t ∈ R + with the norm ofẋ d (t) andẏ d (t) bounded above by a constant number; that is, there exists a positive constant L d such that
ẋ d (t) , ẏ d (t) ≤ L d(6)
for any t ∈ R + .
Under assumption (A7), the time functions x d (t) and y d (t) are Lipschitz continuous on t ∈ R + [29], which we utilize in the proposed stability analysis.
State-based encoding variable
As robotic walking inherently exhibits a cyclic movement pattern in the robot's configuration space, it is natural to encode the desired motion trajectories of the robot with a phase variable that represents the walking progress within a cycle.
To encode the desired trajectories other than the desired global-position trajectories, we choose to use a state-based phase variable, denoted θ (q) : Q → R, that represents the total horizontal distance traveled within a walking step. Accordingly, the phase variable θ (q) increases monotonically within each walking step during straight-line or curved-path walking, which ensures a unique mapping from θ (q) to the encoded desired trajectories. In contrast, in our previous work [18,23], the phase variable is chosen as the walking distance projected along a single direction on the ground, which may not ensure such a unique mapping during curved-path walking.
Since the phase variable θ (q) is essentially the length of a 2-D curve that represents the horizontal projection of the 3-D walking path on the ground, we can use the actual horizontal velocities (ẋ b andẏ b ) of the robot's base to express θ (q) as:
θ (q(t)) = t t 0 ẋ 2 b (t) +ẏ 2 b (t)dt,(7)
where t 0 ∈ R + represents the actual initial time instant of the given walking step and t is the current time.
The normalized phase variable, which represents the percentage completion of a walking step, is given by:
s(θ ) := θ θ max ,(8)
where the real scalar parameter θ max represents the maximum value of the phase variable (i.e., the planned total distance to be traveled within a walking step). At the beginning of each step, the normalized phase variable takes a value of 0, while at the end of the step, it equals 1.
Output Function Design
An output function is a function that represents the difference between a control variable and its desired trajectory, which is essentially the trajectory tracking error. The proposed controller aims to drive the output function to zero for the overall hybrid walking process.
Due to the distinct robot dynamics among different domains, we design different output functions (including the control variables and desired trajectories) for different domains.
FA domain
We use h F c (q) : Q → R n to denote the vector of n control variables that are directly commanded within the FA domain. Without loss of generality, we use the OP3 robot shown in Fig. 1 as an example to explain a common choice of control variables within the FA domain.
The OP3 robot has twenty directly actuated joints (i.e., n = n a = 20) including eight upper body joints. Also, using n up to denote the number of upper body joints, we have n up = 8.
We choose the twenty control variables as follows:
(i) The robot's global-position and orientation represented by the 6-D absolute base pose (i.e., position p b and orientation γ γ γ b ) w.r.t. the world frame; (ii) The position and orientation of the swing foot w.r.t the vehicle frame, respectively denoted as p sw (q) : Q → R 3 and γ γ γ sw (q) : Q → R 3 ; and (iii) The angles of the n up upper body joints q up ∈ R n up .
We choose to directly control the global-position of the robot to ensure that the robot's base follows the desired global-position trajectory. The base orientation is also directly commanded to guarantee a steady trunk (e.g., for mounting cameras) and the desired heading direction. The swing foot pose is regulated to ensure an appropriate foot posture at the landing event, and the upper body joints are controlled to avoid any unexpected arm motions that may affect the overall walking performance.
The stack of control variables h F c (q) are expressed as: The desired trajectory h F d (t, s) is expressed as:
h F c (q) = x b y b ψ b z b φ b θ b p sw γ γ γ sw q up . (9) We use h F d (t, s) : R + × [0, 1] → Rh F d (t, s) = x d (t) y d (t) ψ d (t) φ φ φ F (s) ,(10)
where x d (t), y d (t), and ψ d (t) are defined in Section 3.1.1, and the function φ φ φ F (s) : [0, 1] → R n−3 represents the desired trajectories of the control variables z b , φ b , θ b , ψ b , p sw , γ γ γ sw , and q up . We use Bézier polynomials to parameterize the desired function φ φ φ F (s) because (i) they do not demonstrate overly large oscillations with relatively small parameter variations and (ii) their values at the initial and final instants within a continuous phase can compactly describe the values of control variables at those time instants [8].
The desired function φ φ φ F (s) is given by:
φ φ φ F j (s) := M ∑ k=0 α F k, j M! k!(M − k)! s k (1 − s) M−k ,(11)
where α F k, j ∈ R (k ∈ {0, 1, ..., M} and j ∈ {1, 2, ..., n − 3}) is the coefficient of the Bézier polynomials that are to be optimized (Section 6), and M is the order of the Bézier polynomials.
The output function during an FA phase is defined as:
h F (t, q) := h F c (q) − h F d (t, s).(12)
UA domain
As explained in Section 2.2, a robot has (n + 1) DOF within the UA domain but only n a actuators. Thus, only n a (i.e, n) variables can be directly commanded within the UA domain.
We opt to control individual joint angles within the UA domain to mimic human-like walking. By "locking" the joint angles, the robot can perform a controlled falling about the support toe, similar to human walking.
Thus, the control variable h U c (q) : Q → R n is:
h U c (q) = q 1 q 2 q 3 ... q n .(13)Let h U d (s) : [0, 1] → R n denote the desired joint posi- tion trajectories within the UA domain. These desired trajectories h U d (s) are parameterized using Bézier poly- nomials φ φ φ U (s) : [0, 1] → R n ; that is, h U d = φ φ φ U (s). The function φ φ φ U (s) can be expressed similarly to φ φ φ F (s).
The associated output function is then given by:
h U (q) := h U c (q) − h U d (s).(14)
OA domain
Let h O c (q) : Q → R n−4 denote the control variables within the OA domain. Recall that the robot has n a actuators and (n − 4) DOFs within the OA domain.
We choose the (n − 4) control variables as:
(i) The robot's 6-D base pose w.r.t. the world frame;
(ii) The angles of the n up upper body joints, q up ; and (iii) The pitch angles of the trailing and leading feet, denoted as θ t (q) and θ l (q), respectively.
Similar to the FA domain, we choose to directly command the robot's 6-D base pose within the OA domain to ensure satisfactory global-position tracking performance, as well as the upper body joints to avoid unexpected arm movements that could compromise the robot's balance. Also, regulating the pitch angle of the leading foot helps ensure a flat-foot posture upon switching into the subsequent FA domain where the support foot remains flat on the ground. Meanwhile, controlling the pitch angle of the trailing foot can prevent overly early or late foot-ground contact events.
Thus, the control variable h O c (q) is:
h O c (q) = x b y b ψ b z b φ b θ b θ t θ l q up .(15)
The
desired trajectory h O d (t, s) : R + × [0, 1] → R n−4 within the OA domain is expressed as: h O d (t, s) := x d (t) y d (t) ψ d (t) φ φ φ O (s) ,(16)where φ φ φ O (s) : [0, 1] → R n−4 represents the desired tra- jectories of z b , φ b , θ b , θ t , θ l , and q up , which, similar to φ φ φ F (s) and φ φ φ U (s),h O (t, q) := h O c (q) − h O d (t, s).(17)
Input-Output Linearizing Control
The output functions representing the trajectory tracking errors can be compactly expressed as:
y i = h i (t, q),(18)
where the subscript i ∈ {F,U, O} indicates the domain. Due to the nonlinearity of the robot dynamics and the time-varying nature of the desired trajectories, the dynamics of the output functions are nonlinear and timevarying. To reduce the complexity of controller design, we use input-output linearization to convert the nonlinear, time-varying error dynamics into a linear timeinvariant one.
Let u i (i ∈ {F,U, O}) denote the joint torque vector within the given domain. We exploit the input-output linearizing control law [29]
u i = ( ∂ h i ∂ q M −1B ) −1 [( ∂ h i ∂ q )M −1c + v i − ∂ 2 h i ∂t 2 − ∂ ∂ q ( ∂ h i ∂ qq )q](19)
to linearize the continuous-phase output function dynamics (i.e., Eq. (4)
) intoÿ i = v i , where v i is the control law of the linearized system. Here, the matrix ∂ h i ∂ q M −1B is invertible on Q because (i) M is invertible on Q, (ii) ∂ h i ∂ q
is full row rank on Q by design, and (iii)B is full column rank on Q.
It should be noted that u i has different expressions in different domains, due to the variations in the control variables and desired trajectories. For instance, as the output function is time-independent within the UA domain, the function ∂ 2 h U ∂t 2 in Eq. (19) is always a zero vector because the output function h U is explicitly timeindependent.
We design v i as a proportional-derivative (PD) controller
v i = −K p,i y i − K d,iẏi ,(20)
where K p,i and K d,i are positive-definite diagonal matrices containing the proportional and derivative control gains, respectively. It is important to note that the dimension of the gains K p,i and K d,i depends on that of the output function in each domain; their dimension is n × n in FA and UA domains, and (n − 4) × (n − 4) in the OA domain.
We call the GPT control law in Eqs. (19) and (20) the "IO-PD" controller in the rest of this paper, and the block diagram of the controller is shown in Fig. 4.
With the IO-PD control laws, the closed-loop output function dynamics within domain i becomes linear:
y i = −K d,iẏi − K p,i y i .
Drawing upon the well-studied linear systems theory, we can ensure the exponential convergence of y i to zero within each domain by properly choosing the values of the PD gain matrices (K p,i and K d,i ) [29].
CLOSED-LOOP STABILITY ANALYSIS FOR
THREE-DOMAIN WALKING This section explains the proposed stability analysis of the closed-loop hybrid control system under the continuous IO-PD control law.
The continuous GPT law introduced in Section 3 with properly chosen PD gains achieves exponential stabilization of the output function state within each domain. Nevertheless, the stability of the overall hybrid dynamical system is not automatically ensured for two main reasons. First, within the UA domain, the utilization of the input-output linearization technique and the absence of actuators to directly control all the DOFs induce internal dynamics, which the control law cannot directly regu-late [19,30]. Second, the impact dynamics in Eq. (5) is uncontrolled due to the infinitesimal duration of an impact between rigid bodies (i.e., ground and swing foot). As both internal dynamics and reset maps are highly nonlinear and time-varying, analyzing their effects on the overall system stability is not straightforward.
To ensure satisfactory tracking error convergence for the overall hybrid closed-loop system, we analyze the closed-loop stability via the construction of multiple Lyapunov functions [31]. The resulting sufficient stability conditions can be used to guide the parameter tuning of the proposed IO-PD law for ensuring system stability and satisfactory tracking.
Hybrid Closed-Loop Dynamics
This subsection introduces the hybrid closed-loop dynamics under the proposed IO-PD control law in Eqs. (19) and (20), which serves as the basis of the proposed stability analysis.
State variables within different domains
The state variables of the hybrid closed-loop system include the output function state $(y_i, \dot{y}_i)$ ($i \in \{F, O, \xi\}$). This choice of state variables allows our stability analysis to exploit the linear dynamics of the output function state within each domain, thus greatly reducing the complexity of the stability analysis for the hybrid, time-varying, nonlinear closed-loop system.
We use $x_F \in \mathbb{R}^{2n}$ and $x_O \in \mathbb{R}^{2n-8}$ to respectively denote the state within the FA and OA domains, which is exactly the output function state:
$$x_F := \begin{bmatrix} y_F \\ \dot{y}_F \end{bmatrix} \quad \text{and} \quad x_O := \begin{bmatrix} y_O \\ \dot{y}_O \end{bmatrix}.$$
Within the UA domain, the output function state, denoted as $x_\xi \in \mathbb{R}^{2n-2}$, is expressed as:
$$x_\xi := \begin{bmatrix} y_U \\ \dot{y}_U \end{bmatrix}.$$
Besides $x_\xi$, the complete state $x_U$ within the UA domain also includes the uncontrolled state, denoted as $x_\eta \in \mathbb{R}^2$. Since the stance-foot pitch angle $\theta_{st}(q)$ is not directly controlled within the UA domain, we define $x_\eta$ as:
$$x_\eta := \begin{bmatrix} \theta_{st} \\ \dot{\theta}_{st} \end{bmatrix}.$$
Thus, the complete state within the UA domain is:
$$x_U := \begin{bmatrix} x_\xi \\ x_\eta \end{bmatrix}. \quad (21)$$
Closed-loop error dynamics
The hybrid closed-loop error dynamics associated with the FA and OA domains share the following similar form:
$$\begin{cases} \dot{x}_F = A_F x_F, & (t, x_F^-) \notin S_{F\to U} \\ x_U^+ = \Delta_{F\to U}(t, x_F^-), & (t, x_F^-) \in S_{F\to U} \end{cases} \qquad \begin{cases} \dot{x}_O = A_O x_O, & (t, x_O^-) \notin S_{O\to F} \\ x_F^+ = \Delta_{O\to F}(t, x_O^-), & (t, x_O^-) \in S_{O\to F} \end{cases} \quad (22)$$
with
$$A_F := \begin{bmatrix} 0 & I \\ -K_{p,F} & -K_{d,F} \end{bmatrix} \quad \text{and} \quad A_O := \begin{bmatrix} 0 & I \\ -K_{p,O} & -K_{d,O} \end{bmatrix}, \quad (23)$$
where $I$ is an identity matrix with an appropriate dimension, and $\Delta_{F\to U}: \mathbb{R}_+ \times \mathbb{R}^{2n} \to \mathbb{R}^{2n+2}$ and $\Delta_{O\to F}: \mathbb{R}_+ \times \mathbb{R}^{2n-8} \to \mathbb{R}^{2n}$ are respectively the reset maps of the state vectors $x_F$ and $x_O$. The expressions of $\Delta_{F\to U}$ and $\Delta_{O\to F}$ are omitted for space consideration and can be directly obtained by combining the expression of the reset map $\Delta_{\dot{q}}$ of the generalized coordinates in Eq. (5) and the output functions $y_F$, $y_O$, and $y_U$.
The closed-loop error dynamics associated with the continuous UA phase and the subsequent UA→OA impact map can be expressed as:
$$\begin{cases} \dot{x}_\xi = A_\xi x_\xi, \ \ \dot{x}_\eta = f_\eta(t, x_\eta, x_\xi), & (t, x_U^-) \notin S_{U\to O} \\ x_O^+ = \Delta_{U\to O}(t, x_\xi^-, x_\eta^-), & (t, x_U^-) \in S_{U\to O} \end{cases} \quad (24)$$
where
$$A_\xi := \begin{bmatrix} 0 & I \\ -K_{p,U} & -K_{d,U} \end{bmatrix}. \quad (25)$$
The expression of $f_\eta$ in Eq. (24) can be directly derived using the continuous-phase dynamics equation of the generalized coordinates and the expression of the output function $y_U$. Similar to $\Delta_{F\to U}$ and $\Delta_{O\to F}$, we can readily obtain the expression of the reset map $\Delta_{U\to O}: \mathbb{R}_+ \times \mathbb{R}^{2n+2} \to \mathbb{R}^{2n-8}$ based on the reset map in Eq. (5) and the expressions of $y_U$ and $y_O$.
Multiple Lyapunov-Like Functions
The proposed stability analysis via the construction of multiple Lyapunov functions begins with the design of the Lyapunov-like functions. We use $V_F(x_F)$, $V_U(x_U)$, and $V_O(x_O)$ to respectively denote the Lyapunov-like functions within the FA, UA, and OA domains, and introduce their mathematical expressions next.
FA and OA domains
As the closed-loop error dynamics within the continuous FA and OA phases are linear and time-invariant, we can construct the Lyapunov-like functions $V_F(x_F)$ and $V_O(x_O)$ as [32]:
$$V_F(x_F) = x_F^T P_F x_F \quad \text{and} \quad V_O(x_O) = x_O^T P_O x_O,$$
with $P_i$ ($i \in \{F, O\}$) the solution to the Lyapunov equation
$$P_i A_i + A_i^T P_i = -Q_i,$$
where $Q_i$ is any symmetric positive-definite matrix with a proper dimension.
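As a concrete numerical illustration, the Lyapunov equation above can be solved with SciPy once $A_i$ is assembled from the PD gains as in Eq. (23); the gain values and output dimension below are placeholders, not the tuned gains reported later in the paper.

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

m = 4                                          # illustrative output dimension
Kp, Kd = 225.0 * np.eye(m), 50.0 * np.eye(m)   # placeholder PD gains

# A_i = [[0, I], [-Kp, -Kd]] as in Eq. (23); Hurwitz for these gains
A = np.block([[np.zeros((m, m)), np.eye(m)], [-Kp, -Kd]])
Q = np.eye(2 * m)

# solve_continuous_lyapunov solves A X + X A^T = Q, so pass A^T and -Q to
# obtain P A + A^T P = -Q as in the text
P = solve_continuous_lyapunov(A.T, -Q)
assert np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0)   # P is positive definite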
UA domain
As the input-output linearization technique is utilized and not all DOFs within the UA domain can be directly controlled, there exist internal dynamics that cannot be directly regulated [33]. We design the Lyapunov-like function $V_U$ for the UA domain as:
$$V_U = V_\xi(x_\xi) + \beta \|x_\eta\|^2, \quad (26)$$
where $V_\xi(x_\xi)$ is a positive-definite function and $\beta$ is a positive constant to be designed. As the dynamics of the output function state $x_\xi$ are linear and time-invariant, the construction of $V_\xi(x_\xi)$ is similar to that of $V_F$ and $V_O$:
$$V_\xi(x_\xi) = x_\xi^T P_\xi x_\xi,$$
where $P_\xi$ is the solution to the Lyapunov equation $P_\xi A_\xi + A_\xi^T P_\xi = -Q_\xi$, with $Q_\xi$ any symmetric positive-definite matrix with an appropriate dimension.
Definition of Switching Instants
In the following stability analysis, the three domains of the $k$-th ($k \in \{1, 2, ...\}$) walking step are, without loss of generality, ordered as FA → UA → OA.
For the $k$-th walking step, we respectively denote the actual values of the initial time instant of the FA phase, the FA→UA switching instant, the UA→OA switching instant, and the final time instant of the OA phase as $T_{3k-3}$, $T_{3k-2}$, $T_{3k-1}$, and $T_{3k}$.
The corresponding desired switching instants are denoted as $\tau_{3k-3}$, $\tau_{3k-2}$, $\tau_{3k-1}$, and $\tau_{3k}$. Using these notations, the $k$-th actual complete gait cycle on $t \in (T_{3k-3}, T_{3k})$ comprises:
(i) the continuous FA phase on $t \in (T_{3k-3}, T_{3k-2})$;
(ii) the FA→UA switching at $t = T^-_{3k-2}$;
(iii) the continuous UA phase on $t \in (T_{3k-2}, T_{3k-1})$;
(iv) the UA→OA switching at $t = T^-_{3k-1}$;
(v) the continuous OA phase on $t \in (T_{3k-1}, T_{3k})$; and
(vi) the OA→FA switching at $t = T^-_{3k}$.
For brevity of notation in the following analysis, the values of any (scalar or vector) variable at $t = T^-_{3k-j}$ and $t = T^+_{3k-j}$ are respectively denoted as $\cdot|^-_{3k-j}$ and $\cdot|^+_{3k-j}$ for any $k \in \{1, 2, ...\}$ and $j \in \{0, 1, 2, 3\}$.

Continuous-Phase Convergence and Boundedness of Lyapunov-Like Functions

As the output function state $x_i$ ($i \in \{F, O, \xi\}$) is directly controlled, we can readily analyze the convergence of the output functions (and their associated Lyapunov-like functions, $V_F$, $V_O$, and $V_\xi$) within each domain based on the well-studied linear systems theory [29].

Proposition 1. (Continuous-phase output function convergence within each domain) Consider the IO-PD control law in Eq. (19), assumptions (A1)-(A7), and the following condition:
(B1) The PD gains are selected such that $A_F$, $A_O$, and $A_\xi$ are Hurwitz.
Then, there exist positive constants $r_i$, $c_{1i}$, $c_{2i}$, and $c_{3i}$ ($i \in \{F, O, \xi\}$) such that the Lyapunov-like functions $V_F$, $V_O$, and $V_\xi$ satisfy the following inequalities
$$c_{1i}\|x_i\|^2 \le V_i(x_i) \le c_{2i}\|x_i\|^2 \quad \text{and} \quad \dot{V}_i \le -c_{3i} V_i \quad (27)$$
within their respective domains for any $x_i \in B_{r_i}(0) := \{x_i : \|x_i\| \le r_i\}$, where $0$ is a zero vector with an appropriate dimension. Moreover, Eq. (27) yields
$$V_F|^-_{3k-2} \le e^{-c_{3F}(T_{3k-2}-T_{3k-3})}\, V_F|^+_{3k-3}, \quad (28)$$
$$V_O|^-_{3k} \le e^{-c_{3O}(T_{3k}-T_{3k-1})}\, V_O|^+_{3k-1}, \quad (29)$$
and
$$V_\xi|^-_{3k-1} \le e^{-c_{3\xi}(T_{3k-1}-T_{3k-2})}\, V_\xi|^+_{3k-2}, \quad (30)$$
which describe the exponential continuous-phase convergence of V F , V O , and V ξ within their respective domains.
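For completeness, the decay bounds in Eqs. (28)-(30) follow from the differential inequality in Eq. (27) by the standard comparison lemma; for instance, for $V_F$,
$$\dot{V}_F \le -c_{3F} V_F \;\Longrightarrow\; V_F(t) \le e^{-c_{3F}\,(t - T_{3k-3})}\, V_F|^+_{3k-3}, \quad t \in (T_{3k-3}, T_{3k-2}),$$
and evaluating this bound at $t = T^-_{3k-2}$ gives Eq. (28).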
The proof of Proposition 1 is omitted as Proposition 1 is a direct adaptation of the Lyapunov stability theorems from [29]. Note that the explicit relationship between the PD gains and the continuous-phase convergence rates $c_{3F}$, $c_{3O}$, and $c_{3\xi}$ can be readily obtained based on Remark 6 of our previous work [23].
Due to the existence of the uncontrolled internal state, the Lyapunov-like function $V_U$ does not necessarily converge within the UA domain despite the exponential continuous-phase convergence of $V_\xi$ guaranteed by the proposed IO-PD control law satisfying condition (B1). Still, we can prove that within the UA domain of any $k$-th walking step, the value of the Lyapunov-like function $V_U$ just before switching out of the domain, i.e., $V_U|^-_{3k-1}$, is bounded above by a positive-definite function of the "switching-in" value of $V_U$, i.e., $V_U|^+_{3k-2}$, as summarized in Proposition 2.
Proposition 2. (Boundedness of Lyapunov-like function within the UA domain) Consider the IO-PD control law in Eq. (19) and all conditions in Proposition 1. There exist a positive real number $r_{U1}$ and a positive-definite function $w_u(\cdot)$ such that
$$V_U|^-_{3k-1} \le w_u(V_U|^+_{3k-2})$$
holds for any $k \in \{1, 2, ...\}$ and $x_U \in B_{r_{U1}}(0)$.
Rationale of proof: The proof of Proposition 2 is given in Appendix A.1. The boundedness of the Lyapunov-like function $V_U(x_U)$ at $t = T^-_{3k-1}$ is proved based on the definition of $V_U(x_U)$ given in Eq. (26) and the boundedness of $\|x_U|^-_{3k-1}\|$. Recall $x_U := [x_\xi^T, x_\eta^T]^T$. We establish the needed bound on $\|x_U|^-_{3k-1}\|$ through the bounds on $\|x_\xi|^-_{3k-1}\|$ and $\|x_\eta|^-_{3k-1}\|$, which are respectively obtained from the bounds on the continuous-phase dynamics of $x_\xi$ and $x_\eta$ and the integration of those bounds over the given continuous UA phase.
Boundedness of Lyapunov-Like Functions across Jumps
Proposition 3. (Boundedness across jumps)
Consider the IO-PD control law in Eq. (19), all conditions in Proposition 1, and the following two additional conditions:
(B2) The desired trajectories $h^i_d$ ($i \in \{F, U, O\}$) are planned to respect the impact dynamics within a small, constant offset $\gamma_\Delta$; that is,
$$\|\Delta_{F\to U}(\tau_{3k-2}, 0)\| \le \gamma_\Delta, \quad (31)$$
$$\|\Delta_{U\to O}(\tau_{3k-1}, 0)\| \le \gamma_\Delta, \ \text{and} \quad (32)$$
$$\|\Delta_{O\to F}(\tau_{3k}, 0)\| \le \gamma_\Delta. \quad (33)$$
(B3) The PD gains are chosen to ensure sufficiently high convergence rates (i.e., $c_{3F}$, $c_{3O}$, and $c_{3\xi}$ in Eqs. (28)-(30)) of $V_F$, $V_O$, and $V_\xi$.
Then, there exists a positive real number $r$ such that for any $k \in \{1, 2, ...\}$, $x_i \in B_r(0)$, and $i \in \{F, U, O\}$, the following inequalities
$$\cdots \le V_F|^+_{3k} \le V_F|^+_{3k-3} \le \cdots \le V_F|^+_{3} \le V_F|^+_{0},$$
$$\cdots \le V_U|^+_{3k+1} \le V_U|^+_{3k-2} \le \cdots \le V_U|^+_{4} \le V_U|^+_{1}, \ \text{and}$$
$$\cdots \le V_O|^+_{3k+2} \le V_O|^+_{3k-1} \le \cdots \le V_O|^+_{5} \le V_O|^+_{2} \quad (34)$$
hold; that is, the values of each Lyapunov-like function at their associated "switching-in" instants form a nonincreasing sequence.
Rationale of proof: The proof of Proposition 3 is given in Appendix A.2. The proof shows the derivation details for the first inequality in Eq. (34) (i.e., $V_F|^+_{3k} \le V_F|^+_{3k-3}$ for any $k \in \{1, 2, ...\}$), which can be readily extended to prove the other two inequalities. The proposed proof analyzes the time evolution of the three Lyapunov-like functions within a complete gait cycle from $t = T^+_{3k-3}$ to $t = T^+_{3k}$, which comprises three continuous phases and three switching events as listed in Section 4.3.
Based on this time evolution, the bounds on the Lyapunov-like functions $V_F$, $V_O$, and $V_U$ at the end of their respective continuous phases are given in Propositions 1 and 2, while their bounds at the beginning of those continuous phases are established through the analysis of the reset maps $\Delta_{F\to U}$, $\Delta_{U\to O}$, and $\Delta_{O\to F}$. Finally, we combine these bounds to prove $V_F|^+_{3k} \le V_F|^+_{3k-3}$.

The offset $\gamma_\Delta$ is introduced in condition (B2) for two primary reasons. Firstly, since the system's actual state trajectories inherently possess the impact dynamics, the desired trajectories need to respect the impact dynamics sufficiently closely (i.e., $\gamma_\Delta$ is small enough) in order to avoid overly large errors after an impact [34,35]. If the desired trajectories do not agree with the impact dynamics sufficiently closely, the tracking errors at the beginning of a continuous phase could be overly large even when the errors at the end of the previous continuous phase are small. Such error expansion could induce aggressive control efforts at the beginning of a continuous phase, which could reduce energy efficiency and might even cause torque saturation. Secondly, while it is necessary to enforce the desired trajectories to respect the impact dynamics (e.g., through motion planning), requiring exact agreement with the highly nonlinear impact dynamics (i.e., $\gamma_\Delta = 0$) could significantly increase the computational burden of planning, which can be mitigated by allowing a small offset.
Main Stability Theorem
We derive the stability conditions for the hybrid error system in Eqs. (22) and (24) based on Propositions 1-3 and the general stability theory via the construction of multiple Lyapunov functions [31].
Theorem 1. (Closed-loop stability conditions)
Consider the IO-PD control law in Eq. (19). If all conditions in Proposition 3 are met, the origin of the hybrid closed-loop error system in Eqs. (22) and (24) is locally stable in the sense of Lyapunov.
Rationale of proof: The full proof of Theorem 1 is given in Appendix A.3. The key idea of the proof is to show that the closed-loop control system satisfies the general multiple-Lyapunov stability conditions given in [31] if all conditions in Proposition 3 are met.
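In simulation, the multiple-Lyapunov conditions can be checked directly from logged data; the minimal sketch below tests whether the switching-in values of a Lyapunov-like function form a nonincreasing sequence as required by Eq. (34). The logged values here are hypothetical placeholders, not data from the paper's simulations.

import numpy as np

def switching_in_nonincreasing(V_switch_in, tol=1e-9):
    """Condition (C3) check: the values of V_i at successive
    'switching-in' instants of the same domain must be nonincreasing."""
    V = np.asarray(V_switch_in, dtype=float)
    return bool(np.all(np.diff(V) <= tol))

# Hypothetical logged values of V_F at t = T^+_{3k}, k = 0, 1, 2, ...
V_F_log = [0.80, 0.31, 0.12, 0.05, 0.02]
print(switching_in_nonincreasing(V_F_log))  # True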
EXTENSION FROM THREE-DOMAIN WALKING WITH FULL MOTOR ACTIVATION TO TWO-DOMAIN WALKING WITH INACTIVE ANKLE MOTORS
This section explains the design of a GPT control law for a two-domain walking gait to further illustrate the proposed controller design method. The controller is a direct extension of the proposed controller design for three-domain walking (with full motor activation). For brevity, this section focuses on the aspects of the two-domain case that are distinct from the three-domain case explained earlier.
We consider the case of two-domain walking where underactuation is caused by intentional ankle motor deactivation instead of loss of full contact with the ground as in the case of three-domain walking. Bipedal gaits are sometimes intentionally designed to be underactuated through motor deactivation at the support ankle [36], which can simplify the controller design. Specifically, by switching off the support ankle motors, the controller can treat the support foot as part of the ground and only handle a point foot-ground contact instead of a finite support polygon. Figure 5 illustrates a complete cycle of a two-domain walking gait, which comprises an FA and a UA domain, with the UA phase induced by intentional motor deactivation. The FA and UA phases share the same foot-ground contact conditions; that is, the toe and heel of the support foot are in static contact with the ground. Yet, within the UA domain, the ankle-roll and ankle-pitch joints of the support foot are disabled, leading to DOF = $n_a + 2 > n_a$ (i.e., underactuation).
To differentiate from the case of three-domain walking, we add a "†" superscript to the left of mathematical symbols when introducing the two-domain case.

Hybrid robot dynamics: The continuous-time robot dynamics within the FA domain of two-domain walking have exactly the same expression as the three-domain dynamics in Eq. (2). The robot dynamics within the UA domain are also the same as Eq. (2) except for the input matrix $\bar{B}$ (due to the ankle motor deactivation).
The complete gait cycle contains one foot-landing impact event, which occurs as the robot's state leaves the UA domain and enters the FA domain. The form of the associated impact map is similar to the impact map in Eq. (5) of the three-domain case. For brevity, we omit the expression and derivation details of the impact map.
There are two switching events, F→U and U→F, within a complete gait cycle, which are respectively denoted as $^\dagger S_{F\to U}$ and $^\dagger S_{U\to F}$ and given by:
$$^\dagger S_{F\to U} := \{q \in Q : \theta(q) > l_s\} \quad \text{and} \quad ^\dagger S_{U\to F} := \{(q, \dot{q}) \in TQ : z_{sw}(q) = 0,\ \dot{z}_{sw}(q, \dot{q}) < 0\},$$
where $\theta(q)$ is defined as in Eq. (??) and the scalar positive variable $l_s$ represents the desired traveling distance of the robot's base within the FA phase.

Local time-based phase variable: To allow convenient adjustment of the intended period of motor deactivation, we introduce a new phase variable $^\dagger\theta(t)$ for the UA phase representing the elapsed time within this phase: $^\dagger\theta(t) = t - T_{Uk}$, where $T_{Uk}$ is the initial time instant of the $k$-th UA phase.
The normalized phase variable is defined as $^\dagger s(^\dagger\theta) := {^\dagger\theta}/{\delta_{\tau_U}}$, where $\delta_{\tau_U}$ is the expected duration of the UA phase; $\delta_{\tau_U}$ can be assigned as a gait parameter that a motion planner adjusts to ensure a reasonable duration of motor deactivation.
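As a concrete illustration, the switching sets $^\dagger S_{F\to U}$ and $^\dagger S_{U\to F}$ and the local time-based phase variable translate into a few lines of guard and phase logic in simulation; in the sketch below, theta, z_sw, and zdot_sw are assumed to be supplied by the kinematics code (hypothetical inputs), and the clipping of the phase to [0, 1] is a common safeguard rather than part of the definition above.

def guard_F_to_U(theta, l_s):
    """F -> U: the base has traveled farther than l_s within the FA phase."""
    return theta > l_s

def guard_U_to_F(z_sw, zdot_sw, tol=1e-6):
    """U -> F: the swing foot reaches the ground with downward velocity."""
    return abs(z_sw) < tol and zdot_sw < 0.0

def normalized_phase(t, T_Uk, delta_tau_U):
    """Local time-based phase s = (t - T_Uk) / delta_tau_U, clipped to [0, 1]."""
    return min(max((t - T_Uk) / delta_tau_U, 0.0), 1.0)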
Output functions: The output function design within the FA domain is the same as in the three-domain case. The control variables within FA, denoted as $^\dagger h^F_c(q)$, are chosen the same as in the three-domain walking case in Eq. (9); that is, $^\dagger h^F_c(q) = h^F_c(q)$. Accordingly, the desired trajectories $^\dagger h^F_d(t, s)$ can be chosen the same as $h^F_d(t, s)$, leading to the output function
$$^\dagger h^F(t, s) = {^\dagger h^F_c}(q) - {^\dagger h^F_d}(t, s).$$
With the two ankle (roll and pitch) motors disabled during the UA phase, the number of variables that can be directly controlled is reduced by two compared with the FA domain. Without loss of generality, we choose the control variables within the UA domain to be the same as in the FA domain, except that the base roll angle $\phi_b$ and base pitch angle $\theta_b$ are no longer controlled.
The control variables $^\dagger h^U_c$ within the UA domain are then expressed as:
$$^\dagger h^U_c(q) := \begin{bmatrix} x_b \\ y_b \\ \psi_b \\ z_b \\ p_{sw}(q) \\ \gamma_{sw}(q) \end{bmatrix}. \quad (35)$$
The desired trajectories $^\dagger h^U_d$ are given by:
$$^\dagger h^U_d(t, {^\dagger s}) := \begin{bmatrix} x_d(t) \\ y_d(t) \\ \psi_d(t) \\ ^\dagger\phi_U({^\dagger s}) \end{bmatrix}, \quad (36)$$
where $^\dagger\phi_U({^\dagger s}): [0, 1] \to \mathbb{R}^{n_a - 5}$ represents the desired trajectories of $z_b$, $p_{sw}$, and $\gamma_{sw}$.
Then, we obtain the output function $^\dagger h^U(t, q)$ as:
$$^\dagger h^U(t, q) := {^\dagger h^U_c}(q) - {^\dagger h^U_d}(t, {^\dagger s}). \quad (37)$$
With the output functions $^\dagger h^i$ ($i \in \{F, U\}$) designed, we can use the same form of the IO-PD control law in Eqs. (19) and (20) and the stability conditions in Theorem 1 to design the needed GPT controller for two-domain walking.
SIMULATION
This section reports simulation results that demonstrate the satisfactory global-position tracking performance of the proposed controller design.
Comparative Controller: Input-Output Linearizing Control with Quadratic Programming

This subsection introduces the formulation of the proposed IO-PD controller as a quadratic program (QP) that handles the limited joint-torque capacities of real-world robots while ensuring relatively accurate global-position tracking. We refer to the resulting controller as the "IO-QP" controller in this paper. Besides enforcing the actuator limits and providing tracking performance guarantees, another benefit of the QP formulation lies in its computational efficiency for real-time implementation.
Constraints
We incorporate the IO-PD controller in Eq. (19) as an equality constraint in the proposed IO-QP control law. The proposed IO-QP also includes the torque limits as inequality constraints. We use $u_{max,i}$ and $u_{min,i}$ ($i \in \{F, U, O\}$) to denote the upper and lower limits of the torque command $u_i$ given in Eq. (19). Then, the linear inequality constraint that the control signal $u_i$ should respect can be expressed as:
$$u_{min,i} \le u_i \le u_{max,i}.$$
To ensure the control command $u_i$ respects the actuator limits, we incorporate a slack variable $\delta_{QP} \in \mathbb{R}^{n_a}$ in the equality constraint representing the IO-PD control law:
$$u_i = N(q, \dot{q}) + \delta_{QP}, \quad (38)$$
where
$$N = \left(\frac{\partial h_i}{\partial q} M^{-1}\bar{B}\right)^{-1}\left[\frac{\partial h_i}{\partial q} M^{-1}\bar{c} + v_i - \frac{\partial^2 h_i}{\partial t^2} - \frac{\partial}{\partial q}\left(\frac{\partial h_i}{\partial q}\dot{q}\right)\dot{q}\right].$$
To avoid overly large deviation from the original control law in Eq. (19), we include the slack variable in the cost function to minimize its norm as explained next.
Cost function
The proposed cost function is the sum of two components. One term, $u_i^T u_i$, indicates the magnitude of the control command $u_i$; minimizing it helps satisfy the torque limits and improves the energy efficiency of walking. The other term is the weighted norm of the slack variable $\delta_{QP}$, i.e., $p\,\delta_{QP}^T \delta_{QP}$, with the real positive scalar constant $p$ the slack penalty weight. Including the slack penalty term in the cost function minimizes the deviation of the control signal from the original IO-PD form caused by the relaxation.
QP formulation
Summarizing the constraints and cost function introduced earlier, we arrive at the QP:
$$\min_{u_i,\ \delta_{QP}} \ u_i^T u_i + p\, \delta_{QP}^T \delta_{QP} \quad \text{s.t.} \quad u_i = N + \delta_{QP}, \quad u_{min,i} \le u_i \le u_{max,i}. \quad (39)$$
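As one way to implement Eq. (39) numerically, the sketch below poses the QP with the CVXPY modeling package; the vector N_vec is a placeholder for the IO-PD term $N(q, \dot{q})$ in Eq. (38), and the dimensions are illustrative assumptions rather than the paper's values.

import numpy as np
import cvxpy as cp

n_a = 12                       # illustrative number of actuated joints
N_vec = np.zeros(n_a)          # placeholder for N(q, qdot) in Eq. (38)
u_min, u_max = -4.1, 4.1       # torque limits
p = 1e7                        # slack penalty weight

u = cp.Variable(n_a)           # joint torque command u_i
delta = cp.Variable(n_a)       # slack variable delta_QP
cost = cp.sum_squares(u) + p * cp.sum_squares(delta)
constraints = [u == N_vec + delta, u >= u_min, u <= u_max]
cp.Problem(cp.Minimize(cost), constraints).solve()
print(u.value)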
We present validation results for both IO-PD and IO-QP in the following to demonstrate their effectiveness and compare their performance.
Simulation Setup
Robot model
The robot used to validate the proposed control approach is an OP3 bipedal humanoid robot developed by ROBOTIS, Inc. (see Fig. 1). The OP3 robot is 50 cm tall and weighs approximately 3.2 kg. It is equipped with 20 active joints, as shown in Fig. 1. The mass distribution and geometric specifications of the robot are listed in Table 1. To validate the proposed controller, we use the MATLAB ODE solver ODE45 to simulate the dynamics models of the OP3 robot for both three-domain walking (Section 2) and two-domain walking (Section 5). The default tolerance settings of the ODE45 solver are used.
Desired global-position trajectories and walking patterns

As mentioned earlier, this study assumes that the desired global-position trajectories are provided by a higher-layer planner. To assess the effectiveness of the proposed controller, three desired global-position (GP) trajectories are tested, including single-direction and varying-direction trajectories. These trajectories are specified in Table 2.
The GPs include two straight-line global-position trajectories with distinct heading directions, labeled (GP1) and (GP2). We set the velocities of (GP1) and (GP2) to be different to evaluate the performance of the controller under different walking speeds. To assess the effectiveness of the proposed control law in tracking desired global-position trajectories along a path with changing walking directions, we also consider a walking trajectory (GP3) consisting of two straight-line segments connected by an arc. The desired functions $\phi_F$, $\phi_U$, and $\phi_O$ are designed as Bézier curves (Section 3.2). To respect the impact dynamics as prescribed by condition (B2), their parameters could be designed using the methods introduced in [8]. The desired walking patterns corresponding to the desired functions $\phi_F$, $\phi_U$, and $\phi_O$ used in this study are illustrated in Fig. 6. In three-domain walking (bottom plot in Fig. 6), the FA, UA, and OA phases take up approximately 33%, 8%, and 59% of one walking step, respectively, while the FA and UA phases of the two-domain walking gait (top plot in Fig. 6) last 81% and 19% of a step, respectively. For both walking patterns, the step length and maximum swing-foot height are 7.1 cm and 2.4 cm, respectively.
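Since the desired functions $\phi_F$, $\phi_U$, and $\phi_O$ are parameterized as Bézier curves in the normalized phase $s \in [0, 1]$, a generic evaluation routine suffices to generate the desired trajectories; the control points below are illustrative placeholders, not the gait parameters used in the paper.

import numpy as np
from math import comb

def bezier(alpha, s):
    """Evaluate a degree-M Bezier curve at normalized phase s in [0, 1].

    alpha -- (M+1, d) array of control points (Bernstein coefficients)
    """
    alpha = np.atleast_2d(alpha)
    M = alpha.shape[0] - 1
    basis = np.array([comb(M, k) * s**k * (1 - s)**(M - k) for k in range(M + 1)])
    return basis @ alpha

# Illustrative swing-foot height profile: zero at lift-off and touchdown,
# peaking near mid-swing (cf. the 2.4 cm maximum swing-foot height)
alpha_z = np.array([[0.0], [0.03], [0.03], [0.0]])
print(bezier(alpha_z, 0.5))    # ~[0.0225]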
Simulation cases
To validate the proposed controller under different desired global-position trajectories, walking patterns, and initial errors, we simulate the following three cases:
(Case A): combination of desired trajectory (GP1) and the two-domain walking pattern (Fig. 6, top).
(Case B): combination of desired trajectory (GP2) and the two-domain walking pattern (Fig. 6, top).
(Case C): combination of desired trajectory (GP3) and the three-domain walking pattern (Fig. 6, bottom).
Table 3 summarizes the initial tracking error norms for all cases. Note that the initial swing-foot position tracking error is roughly 30-40% of the nominal step length.
Controller setting
For the IO-PD and IO-QP controllers, the PD gains are set as $K_{p,i} = 225 \cdot I$ and $K_{d,i} = 50 \cdot I$ to ensure the matrix $A_i$ ($i \in \{F, U, O\}$) is Hurwitz. For the IO-QP controller, the slack penalty weight $p$ (Eq. (39)) is set as $p = 10^7$. On a desktop with an i7 CPU and 32 GB RAM running MATLAB, it takes approximately 1 ms to solve the QP problem in Eq. (39).
To verify the stability of the multi-domain walking system, we construct the three Lyapunov-like functions $V_F$, $V_U$, and $V_O$ as introduced in Section 4. In all domains, the matrix $P_i$ ($i \in \{F, U, O\}$) is obtained by solving the Lyapunov equation using the gain matrices $K_{p,i}$ and $K_{d,i}$ and the matrix $Q_i$; without loss of generality, we choose $Q_i$ as an identity matrix. For the UA phase, the value of $\beta$ in the definition of $V_U$ in Eq. (26) is set as 0.001.
Simulation Results
This subsection presents the tracking results of the proposed IO-PD and IO-QP controllers for Cases A through C.

Global-position tracking performance

Figures 7 and 8 show the tracking performance of the proposed IO-PD and IO-QP controllers under Cases A and B, respectively. As explained earlier, Cases A and B share the same desired walking pattern of two-domain walking, but they have different desired global-position trajectories and initial errors. For both cases, the IO-PD and IO-QP controllers satisfactorily drive the robot's actual horizontal global position $(x_b, y_b)$ to the desired trajectories $(x_d(t), y_d(t))$, as shown in the top four plots of each figure. Also, from the footstep locations displayed at the bottom of each figure, the robot is able to walk along the desired path over the ground. In particular, the footstep trajectories in Fig. 8 demonstrate that even with a notable initial error (approx. 17°) in the robot's heading direction, the robot quickly converges to the desired walking path.

Figure 9 displays the global-position tracking results of three-domain walking for Case C. The top two plots, i.e., the time profiles of the forward and lateral base positions ($x_b$ and $y_b$), show that the actual horizontal global position diverges from the reference within the UA phase, during which the global position is not directly controlled. Despite this divergence, the global-position tracking error still converges to close to zero over the entire walking process thanks to the convergence within the FA and OA domains, confirming the validity of Theorem 1.
Convergence of Lyapunov-like functions
The multiple Lyapunov-like functions for Case C, implemented with the IO-PD and IO-QP control laws, are illustrated in Fig. 10. Both control laws ensure the continuous-phase convergence of $V_F$ and $V_O$, satisfying condition (B1). Although $V_U$ diverges during the UA phase, it remains bounded, thereby satisfying condition (B3). Moreover, the desired trajectories parameterized as Bézier curves are planned to satisfy (B2). Therefore, the multiple Lyapunov-like functions behave as predicted by conditions (C1)-(C3) in the proof of Theorem 1, indicating closed-loop stability.

Satisfaction of torque limits

Figure 11 illustrates the joint torque profiles of each leg motor under the IO-PD and IO-QP control methods for Case B. The torque limits $u_{max}$ and $u_{min}$ are set as 4.1 N·m and −4.1 N·m, respectively. The torque exhibits sudden spikes due to the foot-landing impact at the switching from the UA to the FA phase. Due to the notable initial tracking errors, there are also multiple spikes in the joint torques at the beginning of the walking process. These spikes tend to be more significant with the IO-PD controller than with the IO-QP controller. In fact, all of the torque peaks under IO-QP stay within the torque limits, whereas some of those under IO-PD exceed the limits, primarily because the IO-QP controller explicitly enforces the torque limits while IO-PD does not. This comparison highlights the advantage of IO-QP over IO-PD in ensuring the satisfaction of actuation constraints.
DISCUSSION
This study has introduced a nonlinear GPT control approach for 3-D multi-domain bipedal robotic walking based on hybrid full-order dynamics modeling and multiple Lyapunov stability analysis. Similar to the HZD-based approaches [6,37,10] for multi-domain walking, our controller only acts within continuous phases, leaving the discrete impact dynamics uncontrolled. Another key similarity lies in that we also build the controller based on the hybrid, nonlinear, full-order dynamics model of multi-domain walking that faithfully captures the true robot dynamics, and we exploit the input-output linearization technique to exactly linearize the complex continuous-phase robot dynamics.
Despite these similarities, our control law focuses on accurately tracking the desired global-position trajectories with precise timing, whereas the HZD-based approach may not be directly extended to achieve such global-position tracking performance. This difference is essentially caused by the different stability types that the two approaches impose. The stability conditions proposed in this study enforce the stability of the desired global-position trajectory, which is a time function encoded by the global time. In contrast, the stability conditions underlying the HZD framework ensure the stability of the desired periodic orbit, which is a curve in the state space on which infinitely many global-position trajectories reside.
Our previous GPT controller design [17] for the multi-domain walking of a 2-D robot is only capable of tracking straight-line paths. By explicitly modeling the robot dynamics associated with 3-D walking and considering the robot's 3-D movement in the design of the desired trajectories, the proposed approach ensures satisfactory global-position tracking performance for 3-D walking.
One limitation of the proposed approach is that meeting the proposed stability conditions may become infeasible in practice if the duration of the underactuation phase, $\delta_{\tau_U}$, is overly large. From Eq. (59) in the proof of Proposition 3, we know that as $\delta_{\tau_U}$ increases, $\alpha_2$ will also increase, leading to a larger value of $\bar{N}$. If $\bar{N}$ is overly large, Eq. (34) will no longer hold, and the stability conditions will be invalid. To resolve this potential issue, the nominal duration of the UA domain cannot be set overly long. Indeed, the percentage of the UA phase within a complete gait cycle is respectively 19% and 8% of the simulated two-domain and three-domain walking, which is comparable to that of human walking (i.e., 18% [37]).
Another limitation of our control law lies in that the robot dynamics model needs to be sufficiently accurate for the controller to be effective, due to the utilization of the input-output linearization technique. Yet, model parametric errors, external disturbances, and hardware imperfections (e.g., sensor noise) are prevalent in real-world robot operations [38]. To enhance the robustness of the proposed controller for real-world applications, we can incorporate robust control [39,40,41,22] into the GPT control law to address uncertainties. Furthermore, we can exploit online footstep planning [42,43,44,45,46,47] to adjust the robot's desired behaviors in real time to better reject modeling errors and external disturbances.
CONCLUSION
This paper has introduced a continuous tracking control law that achieves provably accurate global-position tracking for the hybrid model of multi-domain bipedal robotic walking involving different actuation types. The proposed control law was derived based on input-output linearization and proportional-derivative control, ensuring the exponential stability of the output function dynamics within each continuous phase of the hybrid walking process. Sufficient stability conditions were established via the construction of multiple Lyapunov functions and can be used to guide the gain tuning of the proposed control law to ensure provable stability for the overall hybrid system. Both a three-domain and a two-domain walking gait were investigated to illustrate the effectiveness of the proposed approach, and the input-output linearizing controller was cast into a quadratic program (QP) to handle actuator torque saturation. Simulation results on a three-dimensional bipedal humanoid robot confirmed the validity of the proposed control law under a variety of walking paths, desired global-position trajectories, desired walking patterns, and initial errors. Finally, the performance of the input-output linearizing control law with and without the QP formulation was compared to highlight the effectiveness of the former in mitigating torque saturation while ensuring closed-loop stability and trajectory tracking accuracy.

of the Twelfth Workshop on the Algorithmic Foundations of Robotics, pp. 384-399.
[47] Gong, Y., and Grizzle, J. W., 2022, "Zero dynamics, pendulum models, and angular momentum in feedback control of bipedal locomotion," ASME Journal of Dynamic Systems, Measurement, and Control, 144(12), p. 121006.
A APPENDIX: PROOFS OF PROPOSITIONS AND THEOREM 1

A.1 Proof of Proposition 2
Integrating both sides of the UA closed-loop dynamics in Eq. (24) over time $t$ yields
$$x_\eta|^-_{3k-1} = \int_{T_{3k-2}}^{T_{3k-1}} f_\eta(s, x_\eta(s), x_\xi(s))\, ds + x_\eta|^+_{3k-2}. \quad (40)$$
Then,
$$\|x_\eta|^-_{3k-1}\| \le \left\| \int_{T_{3k-2}}^{T_{3k-1}} f_\eta(s, x_\eta(s), x_\xi(s))\, ds \right\| + \|x_\eta|^+_{3k-2}\| \le \int_{T_{3k-2}}^{T_{3k-1}} \|f_\eta(s, x_\eta(s), x_\xi(s))\|\, ds + \|x_\eta|^+_{3k-2}\|. \quad (41)$$
Since the expression of $f_\eta(\cdot)$ is obtained using the continuous-phase dynamics of the generalized coordinates in Eq. (4) and the expression of the output function $y_U$ in Eqs. (17) and (18), we know $f_\eta(t, x_\eta, x_\xi)$ is continuously differentiable in $t$, $x_\eta$, and $x_\xi$. Also, we can prove that there exists a finite, real, positive number $r_\eta$ such that $\frac{\partial f_\eta}{\partial t}$, $\frac{\partial f_\eta}{\partial x_\xi}$, and $\frac{\partial f_\eta}{\partial x_\eta}$ are bounded on $(T_{3k-2}, T_{3k-1}) \times B_{r_\eta}(0)$. Then, $f_\eta(t, x_\eta, x_\xi)$ is Lipschitz continuous on $(T_{3k-2}, T_{3k-1}) \times B_{r_\eta}(0)$ [29], and we can prove that there exists a real, positive number $k_f$ such that
$$\|f_\eta(t, x_\eta(t), x_\xi(t))\| \le k_f \quad (42)$$
holds for any $t \times (x_\eta, x_\xi) \in (T_{3k-2}, T_{3k-1}) \times B_{r_\eta}(0)$. Combining the two inequalities above, we have
$$\|x_\eta|^-_{3k-1}\| \le k_f (T_{3k-1} - T_{3k-2}) + \|x_\eta|^+_{3k-2}\|. \quad (43)$$
The duration $(T_{3k-1} - T_{3k-2})$ of the UA phase can be estimated as:
$$|T_{3k-1} - T_{3k-2}| = |T_{3k-1} - \tau_{3k-1} + \tau_{3k-1} - T_{3k-2}| \le |T_{3k-1} - \tau_{3k-1}| + \delta_{\tau_U}, \quad (44)$$
where $\delta_{\tau_U} := \tau_{3k-1} - T_{3k-2}$ is the expected duration of the UA phase and $|T_{3k-1} - \tau_{3k-1}|$ is the absolute difference between the actual and planned time instants of the UA→OA switching. From our previous work [20], we know there exist small positive numbers $\varepsilon_U$ and $r_{U1}$ such that
$$|T_{3k-1} - \tau_{3k-1}| \le \varepsilon_U \delta_{\tau_U} \quad (45)$$
holds for any $k \in \{1, 2, ...\}$ and $x_U \in B_{r_{U1}}(0)$. Thus, using Eqs. (43)-(45), we have
$$\|x_\eta|^-_{3k-1}\| \le k_f (1 + \varepsilon_U)\delta_{\tau_U} + \|x_\eta|^+_{3k-2}\|. \quad (46)$$
Substituting Eqs. (30) and (46) into Eq. (26) gives
$$V_U|^-_{3k-1} = V_\xi|^-_{3k-1} + \beta \|x_\eta|^-_{3k-1}\|^2 \le e^{-c_{3\xi}(T_{3k-1}-T_{3k-2})} V_\xi|^+_{3k-2} + 2\beta \|x_\eta|^+_{3k-2}\|^2 + 2\beta k_f^2 (1+\varepsilon_U)^2 \delta_{\tau_U}^2 \le 2 V_U|^+_{3k-2} + 2\beta k_f^2 (1+\varepsilon_U)^2 \delta_{\tau_U}^2. \quad (47)$$
Thus, for any $x_U \in B_{r_{U2}}(0)$ with $r_{U2} := \min(r_\eta, r_{U1})$, $V_U|^-_{3k-1} \le w_u(V_U|^+_{3k-2})$ holds, where $w_u(V_U|^+_{3k-2}) := 2 V_U|^+_{3k-2} + 2\beta k_f^2 (1+\varepsilon_U)^2 \delta_{\tau_U}^2$. It is clear that $w_u(V_U|^+_{3k-2})$ is a positive-definite function of $V_U|^+_{3k-2}$.
A.2 Proof of Proposition 3
For brevity, we only show the proof for $\cdots \le V_F|^+_{3k} \le V_F|^+_{3k-3} \le \cdots \le V_F|^+_{3} \le V_F|^+_{0}$, based on which the proofs for the other two sets of inequalities in Eq. (34) can be readily obtained.
To prove that $V_F|^+_{3k} \le V_F|^+_{3k-3}$ for any $k \in \{1, 2, ...\}$, we need to analyze the evolution of the state variables over the $k$-th actual complete gait cycle on $t \in (T_{3k-3}, T_{3k})$, which comprises three continuous phases and three switching events.

Analyzing the continuous-phase state evolution: We analyze the state evolution during the three continuous phases based on the convergence and boundedness results established in Propositions 1 and 2.
Similar to the bound on the UA→OA switching time discrepancy given in Eq. (45), there exist small positive numbers $\varepsilon_F$, $\varepsilon_O$, $r_{tF}$, and $r_{tO}$ such that for any $x_F \in B_{r_{tF}}(0)$ and $x_O \in B_{r_{tO}}(0)$,
$$|T_{3k-2} - \tau_{3k-2}| \le \varepsilon_F \delta_{\tau_F} \quad \text{and} \quad |T_{3k} - \tau_{3k}| \le \varepsilon_O \delta_{\tau_O} \quad (48)$$
hold, where $\delta_{\tau_F}$ and $\delta_{\tau_O}$ are the desired periods of the FA and OA phases of the planned walking cycle, with $\delta_{\tau_F} := \tau_{3k-2} - T_{3k-3}$ and $\delta_{\tau_O} := \tau_{3k} - T_{3k-1}$.
Substituting Eq. (48) into Eqs. (28) and (29) yields
$$\|x_F|^-_{3k-2}\| \le \sqrt{\frac{c_{2F}}{c_{1F}}}\, e^{-\frac{c_{3F}}{2c_{2F}}(1+\varepsilon_F)\delta_{\tau_F}}\, \|x_F|^+_{3k-3}\| \quad (49)$$
and
$$\|x_O|^-_{3k}\| \le \sqrt{\frac{c_{2O}}{c_{1O}}}\, e^{-\frac{c_{3O}}{2c_{2O}}(1+\varepsilon_O)\delta_{\tau_O}}\, \|x_O|^+_{3k-1}\| \quad (50)$$
for any $x_i \in B_{\bar{r}_i}(0)$ ($i \in \{F, O\}$), with the small positive
Fig. 2. Illustration of the three coordinate systems used in the study: the world frame, the vehicle frame, and the base frame.
Fig. 3. The directed cycle of 3-D three-domain walking. The green circles in the diagram highlight the portions of a foot that are in contact with the ground. The position trajectory of the swing foot is indicated by the dashed arrow. The red and blue legs respectively represent the support and swing legs. Note that when the robot exits the OA domain and enters the FA domain, the swing and support legs switch their roles, and accordingly the leading and trailing legs swap their colors.
HC = (Γ, D, U, S, Δ, FG), where the oriented graph Γ = (V, E) comprises a set of vertices V = {v_1, v_2, ..., v_N} and a set of edges E = {e_1, e_2, ..., e_N}, where N is the total number of elements in each set. In this paper, each vertex v_i represents the i-th domain, while each edge e_i represents the transition from its source domain to its target domain, thereby indicating the ordered sequence of all domains. For three-domain walking, we have N = 3. D is a set of domains of admissibility, which are the FA, UA, and OA domains for three-domain walking.
We use $h^F_d(t, s) \in \mathbb{R}^n$ to denote the desired trajectories for the control variables $h^F_c(q)$ within the FA domain. These trajectories are encoded by the global time $t$ and the normalized state-based phase variable $s(\theta)$ as follows: (i) the desired trajectories of the base position variables $x_b$ and $y_b$ and the base yaw angle $\psi_b$ are encoded by the global time $t$, while (ii) those of the other $(n-3)$ control variables, including the base height $z_b$, base roll angle $\phi_b$, base pitch angle $\theta_b$, swing-foot pose $p_{sw}$ and $\gamma_{sw}$, and upper joint angles $q_{up}$, are encoded by the normalized phase variable $s(\theta)$.
Fig. 4. Block diagram of the proposed global-position tracking control law within each domain. Here i ∈ {F, U, O} indicates the domain type.
Fig. 5. Illustration of a complete two-domain walking cycle. The green circles show the portions of the feet that touch the ground. The leg in red represents the support leg, while the leg in blue represents the swing leg. The movement of the swing foot is shown by the dashed arrow.
Fig. 6. Desired walking patterns for (a) two-domain walking (Cases A and B) and (b) three-domain walking (Case C) in the sagittal plane. The labels X_w and Y_w represent the x- and y-axes of the world frame, respectively.
Fig. 7. Satisfactory global-position tracking performance under Case A. The top row shows the global-position tracking results, and the bottom row displays the straight-line desired walking path and the actual footstep locations. The initial errors are listed in Table 3.
Fig. 8. Satisfactory global-position tracking performance under Case B. The top row shows the global-position tracking results, and the bottom row displays the desired straight-line walking path and the actual footstep locations. The initial errors are listed in Table 3.
Fig. 10. Time evolutions of the multiple Lyapunov-like functions under Case C. The closed-loop stability is confirmed by the behaviors of the multiple Lyapunov-like functions, which comply with conditions (C1)-(C3) stated in the proof of Theorem 1 for both the (a) IO-PD and (b) IO-QP control laws.
Fig. 11. Torque profiles of each leg motor under the proposed (a) IO-PD and (b) IO-QP controllers for Case B. "L" and "R" stand for left and right, respectively. The red circles highlight occurrences of torque-limit violations, and the blue dotted lines represent the torque limits. The torque spikes are more significant under the IO-PD controller than under the IO-QP controller because the latter explicitly enforces the torque limits: the torque profile of the IO-QP controller adheres to the limits, whereas that of the IO-PD controller may exceed them.
Table 1. Mass distribution of the OP3 robot.

Body component         Mass (kg)   Length (cm)
trunk                  1.34        63
left/right thigh       0.31        11
left/right shank       0.22        11
left/right foot        0.07        12
left/right upper arm   0.19        12
left/right lower arm   0.04        12
head                   0.15        N/A
Table 2. Desired global-position trajectories.

Traj. index   x_d(t) (cm)                      y_d(t) (cm)                      Time interval (s)
(GP1)         8t                               0                                [0, +∞)
(GP2)         19.1t                            5.9t                             [0, +∞)
(GP3)         25t                              0                                [0, 3.13)
              3000 sin((t−3.13)/80) + 78.2     3000 cos((t−3.13)/80) − 3000     [3.13, 4.25)
              24(t − 4.25) + 120               −7(t − 4.25) − 0.3               [4.25, +∞)
Table 3. Initial tracking error norms for the three cases.

Tracking error norm                       Case A   Case B   Case C
swing-foot position (% of step length)    27.5     27.5     40
base orientation (deg.)                   0        17       12
base position (% of step length)          15       15       8
ACKNOWLEDGEMENTS

The authors would like to thank Sushant Veer and Ayonga Hereid for their constructive comments on the theory and simulations of this work.

number $\bar{r}_i$ defined as $\bar{r}_i := \min\{r_i, r_{ti}\}$.

From the definition of the Lyapunov-like function $V_U$ in Eq. (26), the continuous-phase boundedness of $V_U$ in Eq. (47), and the continuous-phase convergence of $V_\xi$ in Eq. (30), we obtain the inequality in Eq. (51) characterizing the boundedness of the state variable $x_U$ within the UA phase, where the real scalar constants $\tilde{c}_{1\xi}$ and $\tilde{c}_{2\xi}$ are defined as $\tilde{c}_{1\xi} := \min(c_{1\xi}, \beta)$ and $\tilde{c}_{2\xi} := \max(c_{2\xi}, \beta)$. Since $2\tilde{c}_{2\xi} \ge \tilde{c}_{1\xi}$, we rewrite Eq. (51) as Eq. (52).

Analyzing the state evolution across a jump: Without loss of generality, we first examine the state evolution across the F→U switching event by relating the norms of the state variable just before and after the impact. Using the expression of the reset map $\Delta_{F\to U}$ at the switching instant $t = T^-_{3k-2}$ ($k \in \{1, 2, ...\}$), we obtain the inequality in Eq. (53).

Next, we relate the three terms on the right-hand side of the inequality in Eq. (53) explicitly to the norm of the state just before the switching (i.e., $\|x_F|^-_{3k-2}\|$). Recall that the expression of $\Delta_{F\to U}(t, x_F)$ solely depends on the expressions of: (i) the impact dynamics $\Delta_{\dot{q}}(q)\dot{q}$, which is continuously differentiable on $(q, \dot{q}) \in TQ$; (ii) the output function $y_F(t, q)$, which is continuously differentiable on $t \in \mathbb{R}_+$ and $q \in Q$ under assumption (A7); and (iii) the time derivative $\dot{y}_F(t, q, \dot{q})$, which, also under assumption (A7), is continuously differentiable on $t \in \mathbb{R}_+$ and $(q, \dot{q}) \in TQ$. Thus, we know $\Delta_{F\to U}$ is continuously differentiable for any $t \in \mathbb{R}_+$ (i.e., including any continuous phases) and state $x_F \in \mathbb{R}^{2n}$.

Similarly, under assumption (A7), we can prove that there exists a small, real constant $l_F$ such that $\frac{\partial \Delta_{F\to U}}{\partial t}$ and $\frac{\partial \Delta_{F\to U}}{\partial x_F}$ are bounded for any $t \in \mathbb{R}_+$ (including all continuous FA phases) and $x_F \in B_{l_F}(0)$. Thus, there exist Lipschitz constants $L_{tF}$ and $L_{xF}$ such that Eqs. (54) and (55) hold.

From condition (A2) and Eqs. (31), (48), and (53)-(55), we obtain Eq. (56). Analogous to the derivation of the inequality in Eq. (56), we can show that there exist a real, positive number $l_U$ and Lipschitz constants $L_{tU}$ and $L_{xU}$ such that Eq. (57) holds for any $x_U|^-_{3k-1} \in B_{l_U}(0)$. As the robot has full control authority within the OA domain, we can establish a tighter upper bound on $\|x_F|^+_{3k}\|$ than Eqs. (56) and (57) by applying Proposition 3 from our previous work [23]; that is, there exist a real, positive number $l_O$ and Lipschitz constants $L_{tO}$ and $L_{xO}$ such that Eq. (58) holds.

From Eqs. (49), (50), (52), and (56)-(58), we obtain Eq. (59), where the scalar positive parameters $\bar{N}$ and $\bar{L}$ are defined. Using Eqs. (27) and (59), we obtain Eq. (60).

Note that the scalar positive parameters $\bar{N}$ and $\bar{L}$ in Eq. (60) are both dependent on the continuous-phase convergence rates of the Lyapunov-like functions within the OA and FA domains (i.e., $c_{3F}$ and $c_{3O}$). Specifically, $\bar{N}$ and $\bar{L}$ (and accordingly $\frac{2c_{2F}\bar{L}^2}{c_{1F}}$ and $2c_{2F}\bar{N}^2$) will decrease towards zero as the continuous-phase convergence rates increase towards infinity. If condition (B3) holds (i.e., the PD gains can be adjusted to ensure a sufficiently high continuous-phase convergence rate), we can choose the PD gains such that $\frac{2c_{2F}\bar{L}^2}{c_{1F}}$ is less than 1 and $2c_{2F}\bar{N}^2$ is sufficiently close to 0, which will then ensure $V_F|^+_{3k} \le V_F|^+_{3k-3}$ for any $k \in \{1, 2, ...\}$.

A.3 Proof of Theorem 1

By the general stability theory based on multiple Lyapunov functions [31], the origin of the overall hybrid error system described in Eqs. (22) and (24) is locally stable in the sense of Lyapunov if the Lyapunov-like functions $V_F$, $V_O$, and $V_U$ satisfy the following conditions:
(C1) The Lyapunov-like functions $V_F$ and $V_O$ exponentially decrease within the continuous FA and OA phases, respectively.
(C2) Within the continuous UA phase, the "switching-out" value of the Lyapunov-like function $V_U$ is bounded above by a positive-definite function of the "switching-in" value of $V_U$.
(C3) The values of each Lyapunov-like function at their associated "switching-in" instants form a nonincreasing sequence.
If the proposed IO-PD control law satisfies condition (B1), then it ensures conditions (C1) and (C2), as established in Propositions 1 and 2, respectively. By further meeting conditions (B2) and (B3), we know from Proposition 3 that condition (C3) will hold. Thus, under conditions (B1)-(B3), the closed-loop control system meets conditions (C1)-(C3), and the origin of the overall hybrid error system described in Eqs. (22) and (24) is locally stable in the sense of Lyapunov.
Zhao, H., Hereid, A., Ma, W.-l., and Ames, A. D., 2017, "Multi-contact bipedal robotic locomotion," Robotica, 35(5), pp. 1072-1106.

Zhao, H.-H., Ma, W.-L., Ames, A. D., and Zeagler, M. B., 2014, "Human-inspired multi-contact locomotion with AMBER2," In Proc. of ACM/IEEE International Conference on Cyber-Physical Systems, pp. 199-210.

ROBOTIS Co., Ltd., https://www.robotis.us/ Accessed: 2023-01-20.

Ramezani, A., Hurst, J. W., Hamed, K. A., and Grizzle, J. W., 2014, "Performance analysis and feedback control of ATRIAS, a three-dimensional bipedal robot," ASME Journal of Dynamic Systems, Measurement, and Control, 136(2), p. 021012.

Schwind, W. J., 1998, Spring Loaded Inverted Pendulum Running: A Plant Model, University of Michigan.

Hereid, A., Kolathaya, S., Jones, M. S., Van Why, J., Hurst, J. W., and Ames, A. D., 2014, "Dynamic multi-domain bipedal walking with ATRIAS through SLIP based human-inspired control," In Proc. of International Conference on Hybrid Systems: Computation and Control, pp. 263-272.

Grimes, J. A., and Hurst, J. W., 2012, "The design of ATRIAS 1.0, a unique monopod, hopping robot," In Adaptive Mobile Robotics, World Scientific, pp. 548-554.

Westervelt, E. R., Chevallereau, C., Choi, J. H., Morris, B., and Grizzle, J. W., 2007, Feedback Control of Dynamic Bipedal Robot Locomotion, CRC Press.

Reher, J., Cousineau, E. A., Hereid, A., Hubicki, C. M., and Ames, A. D., 2016, "Realizing dynamic and efficient bipedal locomotion on the humanoid robot DURUS," In Proc. of IEEE International Conference on Robotics and Automation, pp. 1794-1801.

Fig. 9. Satisfactory global-position tracking performance under Case C. The top row shows the global-position tracking results, and the bottom row displays the desired walking path and the actual footstep locations. The desired walking path consists of two straight lines connected by an arc. The initial errors are listed in Table 3.

Hamed, K. A., Ma, W.-L., and Ames, A. D., 2019, "Dynamically stable 3D quadrupedal walking with multi-domain hybrid system models and virtual constraint controllers," In Proc. of American Control Conference, pp. 4588-4595.

Hamed, K., Safaee, B., and Gregg, R. D., 2019, "Dynamic output controllers for exponential stabilization of periodic orbits for multidomain hybrid models of robotic locomotion," ASME Journal of Dynamic Systems, Measurement, and Control, 141(12).

Grizzle, J. W., Abba, G., and Plestan, F., 2001, "Asymptotically stable walking for biped robots: Analysis via systems with impulse effects," IEEE Transactions on Automatic Control, 46(1), pp. 51-64.

Westervelt, E. R., Grizzle, J. W., and Koditschek, D. E., 2003, "Hybrid zero dynamics of planar biped walkers," IEEE Transactions on Automatic Control, 48(1), pp. 42-56.

Sreenath, K., Park, H.-W., Poulakakis, I., and Grizzle, J. W., 2011, "A compliant hybrid zero dynamics controller for stable, efficient and fast bipedal walking on MABEL," The International Journal of Robotics Research, 30(9), pp. 1170-1193.

Veer, S., and Poulakakis, I., 2019, "Input-to-state stability of periodic orbits of systems with impulse effects via Poincaré analysis," IEEE Transactions on Automatic Control, 64(11), pp. 4583-4598.

Gong, Y., Hartley, R., Da, X., Hereid, A., Harib, O., Huang, J.-K., and Grizzle, J., 2019, "Feedback control of a Cassie bipedal robot: Walking, standing, and riding a Segway," In Proc. of American Control Conference, pp. 4559-4566.

Gu, Y., Yao, B., and Lee, C. G., 2016, "Bipedal gait recharacterization and walking encoding generalization for stable dynamic walking," In Proc. of IEEE International Conference on Robotics and Automation, pp. 1788-1793.

Gu, Y., Yao, B., and Lee, C. G., 2018, "Straight-line contouring control of fully actuated 3-D bipedal robotic walking," In Proc. of American Control Conference, pp. 2108-2113.

Gu, Y., Yao, B., and Lee, C. G., 2017, "Time-dependent orbital stabilization of underactuated bipedal walking," In Proc. of American Control Conference, pp. 4858-4863.

Gao, Y., and Gu, Y., 2019, "Global-position tracking control of a fully actuated NAO bipedal walking robot," In Proc. of American Control Conference, pp. 4596-4601.

Gu, Y., and Yuan, C., 2020, "Adaptive robust trajectory tracking control of fully actuated bipedal robotic walking," In Proc. of IEEE/ASME International Conference on Advanced Intelligent Mechatronics, pp. 1310-1315.

Gu, Y., and Yuan, C., 2021, "Adaptive robust tracking control for hybrid models of three-dimensional bipedal robotic walking under uncertainties," ASME Journal of Dynamic Systems, Measurement, and Control, 143(8).

Gu, Y., Gao, Y., Yao, B., and Lee, C. G., 2022, "Global-position tracking control for three-dimensional bipedal robots via virtual constraint design and multiple Lyapunov analysis," ASME Journal of Dynamic Systems, Measurement, and Control, 144(11), p. 111001.

Iqbal, A., and Gu, Y., 2021, "Extended capture point and optimization-based control for quadrupedal robot walking on dynamic rigid surfaces," IFAC-PapersOnLine, 54(20).

Iqbal, A., Gao, Y., and Gu, Y., 2020, "Provably stabilizing controllers for quadrupedal robot locomotion on dynamic rigid platforms," IEEE/ASME Transactions on Mechatronics, 25(4), pp. 2035-2044.

Iqbal, A., Veer, S., and Gu, Y., 2023, "Real-time walking pattern generation of quadrupedal dynamic-surface locomotion based on a linear time-varying pendulum model," arXiv preprint arXiv:2301.03097.

Gao, Y., and Gu, Y., 2019, "Global-position tracking control of multi-domain planar bipedal robotic walking," In Proc. of ASME Dynamic Systems and Control Conference, Vol. 59148, p. V001T03A009.

Bhounsule, P. A., and Zamani, A., 2017, "A discrete control Lyapunov function for exponential orbital stabilization of the simplest walker," Journal of Mechanisms and Robotics, 9(5).

Khalil, H. K., 1996, Nonlinear Systems, Prentice Hall.

Chan, W. K., Gu, Y., and Yao, B., 2018, "Optimization of output functions with nonholonomic virtual constraints in underactuated bipedal walking control," In Proc. of Annual American Control Conference, pp. 6743-6748.

Branicky, M. S., 1998, "Multiple Lyapunov functions and other analysis tools for switched and hybrid systems," IEEE Transactions on Automatic Control, 43(4), pp. 475-482.

Khalil, H. K., 1996, Nonlinear Control, Prentice Hall.

Gu, Y., 2017, "Time-dependent nonlinear control of bipedal robotic walking," PhD thesis, Purdue University.

Rijnen, M., Biemond, J. B., Van De Wouw, N., Saccon, A., and Nijmeijer, H., 2019, "Hybrid systems with state-triggered jumps: Sensitivity-based stability analysis with application to trajectory tracking," IEEE Transactions on Automatic Control, 65(11), pp. 4568-4583.

Rijnen, M., van Rijn, A., Dallali, H., Saccon, A., and Nijmeijer, H., 2016, "Hybrid trajectory tracking for a hopping robotic leg," IFAC-PapersOnLine, 49(14), pp. 107-112.

Gong, Y., and Grizzle, J., 2020, "Angular momentum about the contact point for control of bipedal locomotion: Validation in a LIP-based controller," arXiv preprint arXiv:2008.10763.

Reher, J. P., Hereid, A., Kolathaya, S., Hubicki, C. M., and Ames, A. D., 2020, "Algorithmic foundations of realizing multi-contact locomotion on the humanoid robot DURUS," In Algorithmic Foundations of Robotics XII: Proceedings of the Twelfth Workshop on the Algorithmic Foundations of Robotics, Springer, pp. 400-415.

Yeatman, M., Lv, G., and Gregg, R. D., 2019, "Decentralized passivity-based control with a generalized energy storage function for robust biped locomotion," ASME Journal of Dynamic Systems, Measurement, and Control, 141(10).

Hu, C., Yao, B., Wang, Q., Chen, Z., and Li, C., 2011, "Experimental investigation on high-performance coordinated motion control of high-speed biaxial systems for contouring tasks," International Journal of Machine Tools and Manufacture, 51(9), pp. 677-686.

Liao, J., Chen, Z., and Yao, B., 2017, "High-performance adaptive robust control with balanced torque allocation for the over-actuated cutter-head driving system in tunnel boring machine," Mechatronics, 46, pp. 168-176.

Yuan, M., Chen, Z., Yao, B., and Liu, X., 2019, "Fast and accurate motion tracking of a linear motor system under kinematic and dynamic constraints: an integrated planning and control approach," IEEE Transactions on Control System Technology.
Time-varying ALIP model and robust foot-placement control for underactuated bipedal robot walking on a swaying rigid surface. Y Gao, Y Gong, V Paredes, A Hereid, Y Gu, arXiv:2210.13371arXiv preprintGao, Y., Gong, Y., Paredes, V., Hereid, A., and Gu, Y., 2022, "Time-varying ALIP model and robust foot-placement control for underactuated bipedal robot walking on a swaying rigid surface," arXiv preprint arXiv:2210.13371.
Asymptotic stabilization of aperiodic trajectories of a hybridlinear inverted pendulum walking on a dynamic rigid surface. A Iqbal, S Veer, Y Gu, Proc. of American Control Conference. of American Control Conferenceto appearIqbal, A., Veer, S., and Gu, Y., 2023, "Asymptotic stabilization of aperiodic trajectories of a hybrid- linear inverted pendulum walking on a dynamic rigid surface," In Proc. of American Control Con- ference, to appear.
Bipedal walking on constrained footholds: Momentum regulation via vertical com control. M Dai, X Xiong, A Ames, Proc. of IEEE International Conference on Robotics and Automation. of IEEE International Conference on Robotics and AutomationDai, M., Xiong, X., and Ames, A., 2022, "Bipedal walking on constrained footholds: Momentum reg- ulation via vertical com control," In Proc. of IEEE International Conference on Robotics and Automa- tion, pp. 10435-10441.
3-D underactuated bipedal walking via H-LIP based gait synthesis and stepping stabilization. X Xiong, A Ames, IEEE Transactions on Robotics. 384Xiong, X., and Ames, A., 2022, "3-D underactu- ated bipedal walking via H-LIP based gait synthe- sis and stepping stabilization," IEEE Transactions on Robotics, 38(4), pp. 2405-2425.
Dynamic walking on stepping stones with gait library and control barrier functions. Q Nguyen, X Da, J Grizzle, K Sreenath, Algorithmic Foundations of Robotics XII: Proceedings. Nguyen, Q., Da, X., Grizzle, J., and Sreenath, K., 2020, "Dynamic walking on stepping stones with gait library and control barrier functions," In Algo- rithmic Foundations of Robotics XII: Proceedings
| [] |
[
"A Hopfield-like network with complementary encodings of memories",
"A Hopfield-like network with complementary encodings of memories"
] | [
"Louis Kang \nRIKEN Center for Brain Science\nNeural Circuits and Computations Unit\n\n\nGraduate School of Informatics\nKyoto University\n\n",
"Taro Toyoizumi \nRIKEN Center for Brain Science\nLaboratory for Neural Computation and Adaptation\n\n\nGraduate School of Information Science and Technology\nUniversity of Tokyo\n\n"
] | [
"RIKEN Center for Brain Science\nNeural Circuits and Computations Unit\n",
"Graduate School of Informatics\nKyoto University\n",
"RIKEN Center for Brain Science\nLaboratory for Neural Computation and Adaptation\n",
"Graduate School of Information Science and Technology\nUniversity of Tokyo\n"
] | [] | We present a Hopfield-like autoassociative network for memories representing examples of concepts. Each memory is encoded by two activity patterns with complementary properties. The first is dense and correlated across examples within concepts, and the second is sparse and exhibits no correlation among examples. The network stores each memory as a linear combination of its encodings. During retrieval, the network recovers sparse or dense patterns with a high or low activity threshold, respectively. As more memories are stored, the dense representation at low threshold shifts from examples to concepts, which are learned from accumulating common example features. Meanwhile, the sparse representation at high threshold maintains distinctions between examples due to the high capacity of sparse, decorrelated patterns. Thus, a single network can retrieve memories at both example and concept scales and perform heteroassociation between them. We obtain our results by deriving macroscopic mean-field equations that yield capacity formulas for sparse examples, dense examples, and dense concepts. We also perform network simulations that verify our theoretical results and explicitly demonstrate the capabilities of the network. | null | [
"https://export.arxiv.org/pdf/2302.04481v2.pdf"
] | 256,697,359 | 2302.04481 | aac75622a63ec1960e0054469f653a1a3d214171 |
A Hopfield-like network with complementary encodings of memories
Louis Kang
RIKEN Center for Brain Science
Neural Circuits and Computations Unit
Graduate School of Informatics
Kyoto University
Taro Toyoizumi
RIKEN Center for Brain Science
Laboratory for Neural Computation and Adaptation
Graduate School of Information Science and Technology
University of Tokyo
A Hopfield-like network with complementary encodings of memories
We present a Hopfield-like autoassociative network for memories representing examples of concepts. Each memory is encoded by two activity patterns with complementary properties. The first is dense and correlated across examples within concepts, and the second is sparse and exhibits no correlation among examples. The network stores each memory as a linear combination of its encodings. During retrieval, the network recovers sparse or dense patterns with a high or low activity threshold, respectively. As more memories are stored, the dense representation at low threshold shifts from examples to concepts, which are learned from accumulating common example features. Meanwhile, the sparse representation at high threshold maintains distinctions between examples due to the high capacity of sparse, decorrelated patterns. Thus, a single network can retrieve memories at both example and concept scales and perform heteroassociation between them. We obtain our results by deriving macroscopic mean-field equations that yield capacity formulas for sparse examples, dense examples, and dense concepts. We also perform network simulations that verify our theoretical results and explicitly demonstrate the capabilities of the network.
I. INTRODUCTION
Autoassociation is the ability for a network to store patterns of activity and to retrieve complete patterns when presented with incomplete cues. Autoassociative networks are widely used as models for neural phenomena, such as episodic memory [1][2][3], and also have applications in machine learning [4,5]. It is well-known that properties of the stored patterns can influence the computational capabilities of the network. Sparse patterns, in which a small fraction of the neurons is active, can be stored higher capacity compared to dense patterns [6][7][8][9][10][11][12]. Correlated patterns can be merged by the network to represent shared features [13][14][15].
Previous autoassociation models have largely considered the storage of patterns with a single set of statistics. We consider the possibility that a network can store two types of patterns with different properties, and thus, different computational roles. This idea is inspired by the architecture of the hippocampus in mammalian brains [16]. The hippocampal subfield CA3 is the presumptive autoassociative network that stores memories of our daily experiences [1,6], and it receives sensory information from two parallel pathways with complementary properties [17]. The mossy fibers present sparser, decorrelated patterns to CA3 for storage, and the perforant path presents denser, correlated patterns. Both pathways originate from the same upstream region, the entorhinal cortex, so they presumably represent the same sensory experiences. We wish to explore whether an autoassociative network can store and retrieve memory encodings from each pathway, and to characterize the computational advantages of encoding the same memory in two different ways. * Corresponding author: [email protected]
To address these aims, we implement a Hopfield-like network [18] that stores memories, each of which is an example µ of a concept ν. Each example is encoded as both a sparse pattern ξ µν and a dense pattern ψ µν . The former is generated independently and exhibits no correlation with other sparsely encoded examples. The latter is generated from a dense encoding ψ µ of the concept µ with correlations among examples within the same concept. The model is defined in Section II, along with an outline of the derivation of its mean-field equations.
In Section III, we present our major results regarding pattern retrieval. A high or low activity threshold is used to retrieve sparse or dense patterns, respectively. The network has a high capacity for sparse examples ξ µν and a low capacity for dense examples ψ µν . As the number of examples stored increases beyond the dense example capacity, a critical load is reached above which the network instead retrieves dense concepts ψ µ . This critical load can be smaller than the sparse example capacity, which means that the network can recover both ξ µν 's as distinct memories and ψ µ 's as generalizations across them.
In Section IV, we show that the network can perform heteroassociation between sparse and dense encodings of the same memory. Their respective energies can predict regimes in which heteroassociation is possible. We discuss our results and their significance in Section V. Mean-field equations governing network behavior are derived in Appendix A, and capacity formulas for ξ µν , ψ µν , and ψ µ are derived in Appendices B, C, and D.
II. THE MODEL
A. Patterns and architecture
We consider a Hopfield network with neurons i = 1, . . . , N that are either inactive (S i = 0) or active arXiv:2302.04481v2 [q-bio.NC] 12 May 2023 (S i = 1). The network stores ν = 1, . . . , s examples for each of µ = 1, . . . , p concepts. The concept load per neuron is α = p/N . Examples are encoded both sparsely as ξ µν and densely as ψ µν . Following Ref. 7, sparse examples are generated independently with sparsity a: ξ i µν = 0 with probability 1 − a 1 with probability a.
(1)
Following Ref. 13, dense examples within a concept are correlated in the following way. Each concept corresponds to a dense pattern ψ µ , generated independently with sparsity 1 2 :
ψ i µ = 0 with probability 1 2 1 with probability 1 2 .
(
Dense examples are then generated from these concepts, with the correlation parameter c > 0 controlling the likelihood that example patterns match their concept:
ψ i µν = ψ i µ with probability 1+c 2 1 − ψ i µ with probability 1−c 2 .(3)
The average Pearson correlation coefficient between ψ µν and ψ µ is c, and that between ψ µν and ψ µω for ν = ω is c 2 . The average overlaps are
ψ i µν ψ i µ = 1 4 + c 4 ψ i µν ψ i µω = 1 4 + c 2 4 ,(4)
where angle brackets indicate averaging over patterns. During storage, the parameter 2γ < 1 2 sets the relative strength of dense encodings compared to sparse encodings. The factor of 2 is for theoretical convenience. Linear combinations of ξ µν and ψ µν are stored in a Hopfieldlike fashion with symmetric synaptic weights
J ij = 1 N µν (1 − 2γ) ξ i µν − a + 2γ ψ i µν − 1 2 × (1 − 2γ) ξ j µν − a + 2γ ψ j µν − 1 2 = 1 N µν (η i µν + ζ i µν )(η j µν + ζ j µν )(5)
for i = j, and J ii = 0. The second expression uses rescaled sparse and dense patterns
η i µν ≡ (1 − 2γ) ξ i µν − a ζ i µν ≡ 2γ ψ i µν − 1 2 .(6)
After initializing the network with a cue, neurons are asynchronously and stochastically updated via Glauber dynamics [19]. That is, at each timestep t, one neuron i is randomly selected, and the probability that it becomes active is given by
P [S i (t + 1) = 1] = 1 1 + exp −β j J ij S j (t) − θ .(7)
Thus, activation likely occurs when the total synaptic input j J ij S j (t) is greater than the activity threshold θ. The inverse temperature β = 1/T sets the width of the threshold, with β → 0 corresponding to chance-level activation and β → ∞ corresponding to a strict, deterministic threshold. We shall see that θ plays a key role in selecting between sparse and dense patterns; a higher θ suppresses activity and favors recovery of sparse patterns, and vice versa for lower θ and dense patterns.
B. Overview of mean-field equations
Network behavior in the mean-field limit is governed by a set of equations relating macroscopic order parameters to one another. Their complete derivation following Refs. 7,13,19, and 20 is provided in Appendix A, but we will outline our approach here. The first task is calculating the replica partition function Z n , where the angle brackets indicate averaging over rescaled patterns η µν and ζ µν and n is the number of replica systems. By introducing auxiliary fields via Hubbard-Stratonovich transformations and integrating over interactions with off-target patterns, we obtain
Z n ∝ νρ dm ρ 1ν βN 2π 1 2 ρσ dq ρσ dr ρσ × exp[−βN f ],(8)
where ρ and σ are replica indices, m ρ 1ν , r ρσ , and q ρσ are order parameters, and
f = 1 2 νρ (m ρ 1ν ) 2 + α 2β Tr log δ νω δ ρσ − βΓ 2 (1 − κ 2 )δ νω + κ 2 q ρσ + βα 2 ρσ q ρσ r ρσ − 1 β log Tr S exp β νρ m ρ 1ν χ 1ν S ρ − θ + αsΓ 2 2 ρ S ρ + βα 2 ρσ r ρσ S ρ S σ .(9)
δ is the Kronecker delta and
Γ 2 ≡ (1 − 2γ) 2 a(1 − a) + γ 2 , κ 2 ≡ γ 2 c 2 (1 − 2γ) 2 a(1 − a) + γ 2 .(10)
Equation (9) assumes a successful retrieval regime in which the network overlaps significantly with either one sparse example η 11 or dense, correlated examples ζ 1ν of one concept. We capture these two possibilities by introducing χ 1ν , where χ i 1ν = η i 11 δ 1ν or ζ i 1ν respectively for retrieval of sparse or dense patterns. Through selfaveraging, we have replaced averages over neurons i with averages over entries χ 1ν at a single neuron. Thus, the index i no longer appears in Eq. (9).
Then, we use the replica symmetry ansatz and saddlepoint method to obtain the following mean-field equations in terms of the replica-symmetric order parameters m 1ν , r, and Q:
m 1ν = ⟪χ 1ν sig[βh]⟫, r = sΓ 4 1 − Q(1 − κ 2 )(1 + s 0 κ 2 ) 2 + s 0 κ 4 1 − Q(1 − κ 2 ) 2 1 − Q(1 + s 0 κ 2 ) 2 × ⟪sig[βh] 2 ⟫, Q = βΓ 2 ⟪sig[βh] 2 − sig[βh]⟫,(11)
where the double angle brackets indicate averages over χ 1ν and z, an auxiliary random field with a standard normal distribution. Meanwhile,
s 0 ≡ s − 1, sig(x) ≡ 1/(1 + e −x ), h ≡ ν m 1ν χ 1ν − φ + √ αrz, φ ≡ θ − QαsΓ 2 2 · 1 + s 0 κ 4 − Q(1 − κ 2 )(1 + s 0 κ 2 ) 1 − Q(1 − κ 2 ) 1 − Q(1 + s 0 κ 2 ) .(12)
As derived in Appendix A, m 1ν 's are network overlaps with the target pattern and other patterns correlated with it, r represents noise due to overlap with off-target patterns, Q is related to the overall neural activity. h is the local field in the mean-field limit, which encapsulates the mean network interaction experienced by each neuron. φ is the shifted threshold, which is empirically very similar to the original threshold θ. Equation (11) applies to all target pattern types χ 1ν that we wish to recover. We now simplify the mean-field equations for either sparse targets with χ 1ν = η 11 δ 1ν or dense patterns with χ 1ν = ζ 1ν . In the latter case, we will perform further simplifications corresponding to recovery of either one dense example ζ 11 or one dense concept ζ 1 , in which case the network overlaps equally with all dense examples ζ 1ν belonging to it. We also take the T → 0 limit, which implies a strict threshold without stochastic activation. The full derivations are provided in Appendices A, B, C, and D, but the results for each target type are provided below.
1. Sparse example η 11 : Equation (11) becomes
m 11 = (1 − 2γ)a 2 erf φ √ 2αr + erf (1 − 2γ)m 11 − φ √ 2αr , r = s 1 + s 0 κ 4 Γ 4 2 1 − erf φ √ 2αr + a erf (1 − 2γ)m 11 − φ √ 2αr .(13)m 11 = γ 2 1 + c 4 erf Y ++ + erf Y +− + 1 − c 4 erf Y −+ + erf Y −− , m 0 = γc 2 1 − Q γ 2 Γ 2 (1 − c 2 ) × 1 + c 4 erf Y ++ + erf Y +− − 1 − c 4 erf Y −+ + erf Y −− , r = sΓ 4 2 · 1 − Q(1 − κ 2 )(1 + s 0 κ 2 ) 2 + s 0 κ 4 1 − Q(1 − κ 2 ) 2 1 − Q(1 + s 0 κ 2 ) 2 × 1 − 1 + c 4 erf Y ++ − erf Y +− − 1 − c 4 erf Y −+ − erf Y −− , Q = Γ 2 √ 2πσ 0 1 + c 4 e −Y 2 ++ + e −Y 2 +− + 1 − c 4 e −Y 2 −+ + e −Y 2 −− ,(14)
where
σ 2 0 ≡ s 0 γ 2 (1 − c 2 )m 2 0 + αr Y ±± ≡ γm 11 ± s 0 γcm 0 ± φ √ 2σ 0 .(15)
Sign choices in Y ±± correspond to respective signs on the right-hand side of the equation.
m 1 = γ 4 erf Y + + erf Y − , m s = γc 4 1 − Q γ 2 Γ 2 (1 − c 2 ) erf Y + + erf Y − , r = sΓ 4 2 · 1 − Q(1 − κ 2 )(1 + s 0 κ 2 ) 2 + s 0 κ 4 1 − Q(1 − κ 2 ) 2 1 − Q(1 + s 0 κ 2 ) 2 × 1 − 1 2 erf Y + − erf Y − , Q = Γ 2 √ 8πσ s e −Y 2 + + e −Y 2 − ,(16)
where
σ 2 s ≡ sγ 2 (1 − c 2 )m 2 s + αr Y ± ≡ sγcm s ± φ √ 2σ s .(17)
The sign choice in Y ± corresponds to the sign on the right-hand side of the equation.
III. T = 0 CAPACITIES
A. Retrieval regimes
Large values for the overlaps m 11 and m 1 in Eqs. (13), (14), and (16) signal that retrieval of target patterns is possible. To be more precise, we derive in Appendix A that for T = 0,
m 1ν = χ 1ν S ,(18)
where χ 1ν and S are respectively the pattern entry and activity for a single neuron and angle brackets indicate an average over χ 1ν . Again, the neuron index i does not appear due to self-averaging. Successful retrieval means that the network activity S is similar to the original, unscaled patterns ξ 11 , ψ 11 , and ψ 1 with 0/1 entries. With the rescalings in Eq. (6), this condition implies m 11 ∼ (1 − 2γ)a(1 − a) for sparse example targets, m 11 ∼ γ/2 for dense example targets, and m 1 ∼ γ/2 for dense concept targets. For ease of comparison, we define a rescaled overlap
m = m 11 /(1 − 2γ)a(1 − a) sparse example, m 11 /(γ/2) dense example, m 1 /(γ/2) dense concept,(19)
so m ∼ 1 corresponds to the retrieval phase, as an orderof-magnitude estimate.
To determine the extent of retrieval phase, we numerically solve the mean-field equations for a given set of network parameters. Phase boundaries are found by adjusting the number of examples stored per concept s and looking for the appearance or disappearance of nontrivial solutions. These boundaries will change as a function of the number of concepts per neuron α, the sparse pattern sparsity a, the dense pattern correlation c, the relative dense storage strength γ. We treat the shifted activity threshold φ as a free parameter that can be adjusted to maximize m 11 and m 1 . Figure 1(a) shows that for a given concept load α, the network can retrieve sparse and dense examples below critical example loads s c , which we call the capacities. Above the capacities, catastrophic interference between the target and off-target patterns prevents successful retrieval. Figure 1(b) shows that the network can retrieve dense concepts above a critical s c . Thus, it builds concepts, which are not directly stored, through accumulating shared features among dense examples. With greater correlation c, fewer examples are required to appreciate commonalities, so s c is lower. Note that for low enough α, the network can recover both sparse examples and dense concepts at intermediate values of s. Thus, our network is capable of retrieving both example and concept representations of the same memories by tuning an activity threshold.
Optimal retrieval of dense patterns occurs at threshold φ = 0 and of sparse patterns at φ/(1−2γ) 2 a ≈ 0.6. These values which match results for classic Hopfield networks that store only dense or only sparse patterns [7,21]. At s c , the rescaled overlap m c takes values above 0.5 over the parameters explored [ Fig. 1(c)] before jumping discontinuously to a much lower value immediately outside the retrieval regime. Such a first-order transition has also been observed in classic Hopfield networks [7,13,19].
B. Overview of capacity formulas
We then seek to obtain mathematical formulas for the capacity, or critical example load, s c of each type of pattern. Not only would these formulas provide a direct way of determining whether pattern retrieval is possible for a given set of network parameters, they would offer mathematical insight into network behavior. As detailed in Appendices B, C, and D, we apply various approximations to the mean-field equations Eqs. (13), (14), and (16) to derive the following formulas for s c , which match well with numerical solutions over a wide range of parameters (Figs. 2, 3, and 4).
1. Sparse example η 11 (Fig. 2): The capacity is
1 α ∼ s c 1 + s c κ 4 Γ 4 (1 − 2γ) 4 |log a| a ,(20)
which means that
s c ∼ 1 4κ 8 + (1 − 2γ) 4 γ 4 c 4 · a |log a| · 1 α − 1 2κ 4 .(21)
In sparse Hopfield networks without dense patterns, the capacity always increases for sparser patterns [7]. In contrast, our capacity for sparse examples peaks at intermediate sparsities a [Fig. 2(c)]. While sparser patterns interfere less with one another, their smaller basins of attraction are more easily overwhelmed by those of dense patterns, whose sparsity is always 0.5. We can quantitatively understand the tradeoff between these two factors in the c 2 → 0 limit, where Eq. (20) becomes
αs c ∼ a (a + a d ) 2 |log a|(22)
for a d ≡ γ 2 /(1 − 2γ) 2 . a d represents interference from dense patterns and acts as the crossover point in the tradeoff. For a a d , αs c ∼ 1/a|log a|, re- covering the classic sparse Hopfield scaling in which sparser patterns exhibit higher capacity [7]. However, for a a d , a d dominates the denominator and αs c ∼ a/|log a|, disfavoring sparser patterns. If we ignore the slowly varying logarithm in Eq. (22), s c is exactly maximized at a = a d . Using the value γ = 0.1 in Fig. 2(c), a d ≈ 0.016, which agrees well with the numerically obtained maxima.
2. Dense example ζ 11 (Fig. 3): The capacity is When examples are distributed into many concepts, concept identity becomes insignificant, so only the total number of stored patterns matters. At small α, s c itself saturates at a constant value determined by the dense correlation c. When concepts are few, interference with other concepts becomes less important than interference within the same concept, so only the number of stored patterns per concept matters.
s c ∼ 1 3c 3 + 18 Γ 4 γ 4 α .(23)
3. Dense concept ζ 1 (Fig. 4): There are two cases.
First, for larger sparsities a, the critical example load approximately collapses as a function of s c c 2 . It can be obtained by numerically inverting the following first equation for y and substituting it into the second:
2Γ 4 γ 4 c 2 α = (1 − 2γ) 2 a(1 − a) γ 2 2 π y e −y 2 /2 3 y 2 erf y √ 2 − 2 π y e −y 2 /2 − 2 π y e −y 2 /2 2 s c c 2 ≈ (1 − 2γ) 2 a(1 − a) γ 2 2 π y e −y 2 /2 erf y √ 2 − 2 π y e −y 2 /2 .(24)
The solution is unique for any parameter values because the right-hand side of the first equation always monotonically decreases as a function of y over its positive range. Second, for smaller a, the critical example load approximately collapses as a function of s c c 3/2 :
s c c 3/2 ≈ 3 3π 4 1/4 Γ 4 γ 4 c 2 α 1/4 + 3π 4 c −1/2 Γ 4 γ 4 c 2 α . (25)
The second term contains a factor of c −1/2 , which changes relatively slowly compared to the other powers of c found in the rescaled concept load αΓ 4 /γ 4 c 2 . The two terms capture the behavior of s c at low and high rescaled concept load, respectively. Nevertheless, more universal scaling relationships have yet to be found for the dense concept s c , indicating that many network features may independently govern concept building.
C. Capacities of simulated networks
We perform simulations to verify our capacity calculations. For each simulation condition, we construct replicate networks that store different randomly generated patterns. When generating sparse patterns of sparsity a, we fix the number of active neurons to N a to reduce finite-size effects. Neural dynamics proceed asynchronously in cycles wherein every neuron is updated once in random order. We use N = 10 000 neurons and dense strength γ = 0.1, unless otherwise noted. Retrieval is assessed by the following definition of overlap between network activity S and the unscaled target pattern ω, which is a sparse example ξ µν , a dense concept ψ µν , or a dense concept ψ µ :
m = 1 N a ω (1 − a ω ) i (ω i − a ω )S i ,(26)
where a ω = a for sparse patterns and a ω = 1/2 for dense patterns. Based on Eqs. (1), (2), and (3), we expectm ≈ 1 to indicate successful retrieval. For random activity, m ≈ 0. This overlapm is similar to m in Eq. (19), which concerned the scaled target patterns χ = η 11 , ζ 11 , and ζ 1 . Capacities are assessed by using the true target patterns as cues; in other words, our simulations probe the stability of the target patterns. For sparse examples, we optimize over the threshold θ by numerical search. For dense patterns, we use θ = 0. We use β → ∞ in Eq. (7) because our theoretical calculations were performed for T → 0. We define successful retrieval aŝ m > (1+m 0 )/2, wherem 0 is the overlap expected for offtarget patterns within the same concept. Using Eq. and numerical analysis of the mean-field equations for capacities of all target types. This supports the validity of our derivations and the simplifications we invoked to perform them.
IV. HETEROASSOCIATION
A. Performance of simulated networks
Our network stores linear combinations of sparse and dense patterns, and its connectivity matrix contains interactions between the two [Eq. (5)]. Thus, we suspect that in addition to autoassociation for each target type, it can perform heteroassociation between them. We use simulations to test this ability. We cue the network with a noisy version of a sparse example ξ µν , dense example ψ µν , or dense concept ψ µ , and attempt to retrieve every type as the target pattern. For cases in which concepts are used as cues and examples are desired as targets, the highest overlap with any example within the cued concept is reported. We use p = 10 concepts and either With theoretical motivation in Appendix B, we define the rescaled parameters (27) with rescaled temperature T = 1/β . To retrieve sparse examples, we apply a threshold θ = 0.6, and to retrieve dense examples and concepts, we apply θ = 0. We reintroduce noise into our simulations with inverse temperature β = 50 and by randomly flipping a fraction 0.01 of the cue pattern between inactive and active during network initialization. Successful retrieval is defined via the overlapm as described above [Eq. (26)].
θ = θ/(1 − 2γ) 2 a and β = β · (1 − 2γ) 2 a,
Figure 6(a) shows that the network is generally capable of heteroassociation using the parameters described above, which define the baseline condition. By increasing the number of concepts, heteroassociative performance is largely preserved, but note that the retrieval of dense concepts from sparse examples is impaired [ Fig. 6(b)]. We next amplify noise by either raising the temperature, which introduces more randomness during retrieval, or randomly flipping more neurons during cue generation. Sparse example targets are more robust than dense example targets with respect to higher temperature [ Fig. 6(c)]; meanwhile, dense example cues are more robust than sparse example cues with respect to cue cor- Fig. 1(b), but it also mitigates the impact of noise since sparse and dense patterns are more robust to retrieval and cue noise, respectively.
B. Bidirectional heteroassociation and γ
Notice in Fig. 6(a) that while dense concept targets can be retrieved from sparse example cues, the reverse is not possible. The ability to perform bidirectional heteroassociation between a concept and its examples is of computational significance, so we seek to find network parameters that achieve it. Intuitively, lowering the stor- The value of γ appears critical to the ability to retrieve sparse examples from dense concepts. We hypothesize that this connection is mediated by the relative energy of different pattern types. As described in Appendix A, the Hamiltonian of our network is
H = − 1 2N µν i =j (η i µν + ζ i µν )(η j µν + ζ j µν )S i S j + θ i S i ,(28)
where, again, η µν and ζ µν are rescalings of sparse examples ξ µν and dense examples ψ µν [Eq. (6)]. We set the network activity S to a sparse example ξ µν or dense concept ψ µ and calculate the average over patterns H . Using Eq. (4), we obtain Fig. 7(a)]. Thus, the progression from cue to target is energetically favored.
H N ≈ − (1 − 2γ) 2 a 2 (1 − a) 2 2 + θa sparse example, − sγ 2 c 2 8 + θ 2 dense concept.(29
The crossover point γ c between the high-threshold energies of dense concepts and sparse examples appears to define the phase boundary for heteroassociation from the former to the latter. To test this prediction, we evaluate simulated networks at varying values of γ. Successful retrieval of sparse examples is assessed through the overlapm with the same cutoff values as described above [Eq. (26)]. Figure 7(b) demonstrates that the energy crossover indeed predicts γ c for c = 0.4. The c = 0.1 case shows lower quantitative agreement between simulation and theory, although the qualitative observation of a higher γ c is captured. Finite-size effects, higher energies of intermediate states along possible transition paths, and trapping in local energy minima may account for the discrepancy. For T > 0, the disregard of entropic contributions in our Hamiltonian analysis may also contribute to the disparity, although the lack of significant temperature dependence in our simulations makes this consideration less important [ Fig. 7(b)].
For the c = 0.4 and T = 0 case, we construct a heteroassociation phase diagram by simulating networks with various dense strengths γ and example loads s sparse examples is much higher with identical cues than with dense concept cues, reflecting our observations that even below capacity, this heteroassociation direction is only granted for certain γ. In contrast, the phase boundary for retrieving dense concepts is similar with either type of cue, indicating an easier heteroassociation direction.
Due to the importance of γ, we present additional mean-field capacity results in which it is systematically varied [Fig. 8
V. DISCUSSION
In summary, we present a Hopfield-like network that stores memories as both sparse patterns with low correlation and dense patterns with high correlation. By adjusting the activity threshold, the network can retrieve patterns of either sparsity. The capacity for sparse patterns is large, so many distinct memories can be retrieved. In contrast, as more dense patterns are stored, they merge according to their correlation structure such that concepts are built through the accumulation of examples. We derive mean-field equations that govern the retrieval of sparse examples, dense examples, and dense concepts, and we calculate capacity formulas for each type of retrieved pattern. We observe that the network can retrieve one type of target pattern from its corresponding cue of a different type, and we explain that regimes of successful heteroassociation can be predicted by the relative energies of cue and target patterns.
Our network offers an alternative paradigm for building memory hierarchies in autoassociative networks. Ultrametric networks have been previously explored as an architecture for storing and retrieving memories at different scales [22][23][24][25][26][27]. Their structure resembles a tree spanning multiple levels. Each pattern at one level serves as a concept-like trunk from which correlated branches are generated to form the next, more example-like level. While these models are insightful and influential, they possess certain disadvantages that our network can address. They typically use an activity threshold or, equivalently, an external field to move between levels, which is also the case in our work. In one ultrametric model, the field is inhomogeneous and proportional to the pattern retrieved [26]. Our activity threshold is homogeneous and does not require memory of the pattern retrieved, though implementing such a feature may improve retrieval performance. In another hierarchical model, coarser representations are stored more sparsely and retrieved at higher threshold [27]. This arrangement prevents the network from leveraging the higher capacity of sparser patterns to store finer representations, which are more numerous. Moreover, ultrametric Hopfield networks often require complex storage procedures that require a priori knowledge of concepts or other examples [23,24,26,27].
They do not permit the unsupervised learning of concepts through the accumulation of examples over time, which is achieved by our simple Hebbian learning rule and strengthens the biological significance of our model. Meanwhile, our model's requirement for sparse, decorrelated patterns in addition to dense, correlated patterns can be implemented by neural circuits which are thought to naturally perform decorrelation through sparsification [6,16,[28][29][30][31]. [31] N. A. Cayco-Gajic, C. Clopath, and R. A. Silver, Sparse synaptic connectivity is required for decorrelation and pattern separation in feedforward networks, Nat. Commun. 8, 1116 (2017).
Appendix A: Mean-field equations
Replica partition function
This derivation of mean-field equations governing the macroscopic behavior of our network is strongly influenced by Refs. 7, and 13, and 20. All of our calculations will be performed in the thermodynamic limit where the network size N → ∞. Our network, presented in Section II, is described by a Hamiltonian
H = − 1 2N µν i =j (η i µν + ζ i µν )(η j µν + ζ j µν )S i S j + θ i S i = − 1 2N µν i (η i µν + ζ i µν )S i 2 + 1 2N µν i (η i µν + ζ i µν )S i 2 + θ i S i .(A1)
To reiterate, S is the network activity and θ is the activity threshold. η µν and ζ µν are rescaled sparse and dense patterns, respectively, for ν = 1, . . . , s examples in each of µ = 1, . . . , αN concepts [Eq. (6)]. Each rescaled pattern entry is randomly generated as follows: for sparse pattern sparsity a, dense pattern correlation c, and dense pattern storage strength 2γ. Their average values are 0, and the average overlaps between them are also 0 except for
η i µν = (1 − 2γ)(1 − a)ζ i µν ζ i µ = γ 2 c ζ i µν ζ i µω = γ 2 c 2 .(A3)
We will forgo introducing external fields. By averaging over examples and concepts,
1 2N µν i (η i µν + ζ i µν )S i 2 ≈ αs 2 (1 − 2γ) 2 a(1 − a) + γ 2 i S i ,(A4)
If we define
Γ 2 ≡ (1 − 2γ) 2 a(1 − a) + γ 2 ,(A5)
we obtain
H = − 1 2N µν i (η i µν + ζ i µν )S i 2 + θ + αsΓ 2 2 i S i .(A6)
To understand this system, we would like to calculate its free energy averaged over instantiations of the patterns:
F = −(1/β) log Z .
Here, Z is the partition function, β = 1/T is inverse temperature, and angle brackets indicate averages over η i µν and ζ i µν . Since we cannot directly average over the logarithm of the partition function Z, we use the replica trick by writing formally:
F N = − 1 βN log Z = − 1 βN lim n→0 Z n − 1 n = − 1 βN lim n→0 1 n log Z n .(A7)
We interpret Z n as a partition function for a set of replica networks ρ = 1, . . . , n with the same parameter values and stored patterns, but the neural activities S ρ i may vary across replicas. The Hamiltonian of each replica is
H ρ = − 1 2N µν i (η i µν + ζ i µν )S ρ i 2 + θ + αsΓ 2 2 i S ρ i ,(A8)
and the replica partition function, averaged over patterns, is
Z n = Tr S ρ exp −βH ρ . (A9)
The trace is evaluated over all neurons i and replicas ρ. We invoke the standard Gaussian integral identity
dm e −Am 2 +Bm = π A e B 2 /4A (A10) to obtain Z n = Tr S ρ exp −β θ + αsΓ 2 2 i S ρ i × µν dm ρ µν βN 2π 1 2 exp − βN 2 (m ρ µν ) 2 + βm ρ µν i (η i µν + ζ i µν )S ρ i . (A11)
Uncondensed patterns
We search for a retrieval regime in which the network successfully recovers a sparse example η 11 , a dense example ζ 11 , or a dense concept ζ 1 . All stored patterns in other concepts µ > 1 are called uncondensed and will not significantly overlap with the network activity. We seek to expand in these small overlaps and integrate over them. First,
µ>1 νρ exp βm ρ µν i (η i µν + ζ i µν )S ρ i = i µ>1 ν exp β(η i µν + ζ i µν ) ρ m ρ µν S ρ i . (A12) Using Y ν ≡ β ρ m ρ µν S ρ i ,(A13)
where we have suppressed dependence on µ and i for convenience, we can write
ν exp (η i µν + ζ i µν )Y ν = ν exp[η i µν Y ν ] ν exp[ζ i µν Y ν ] .(A14)
For uncondensed patterns µ > 1, m ρ µν 1 because, as we will derive later, it is the overlap between S ρ and η µν + ζ µν . Thus, we can crucially expand in Y ν 1 and average over the uncondensed patterns:
exp[η i µν Y ν ] ≈ 1 + 1 2 (1 − 2γ) 2 a(1 − a)Y 2 ν ≈ exp 1 2 (1 − 2γ) 2 a(1 − a)Y 2 ν . (A15) Continuing, ν exp[ζ i µν Y ν ] ≈ 1 2 ν 1 + γcY ν + γ 2 2 Y 2 ν + 1 2 ν 1 − γcY ν + γ 2 2 Y 2 ν = 1 + 1 2 νω γ 2 (1 − c 2 )δ νω + γ 2 c 2 Y ν Y ω ≈ exp 1 2 νω γ 2 (1 − c 2 )δ νω + γ 2 c 2 Y ν Y ω .(A16)
Averaging is performed first over ν, then over µ [Eq. (A3)]. Combining the equations above, we obtain µ>1 νρ
exp βm ρ µν i (η i µν + ζ i µν )S ρ i = µ>1 exp βN 2 βΓ 2 νωρσ (1 − κ 2 )δ νω + κ 2 q ρσ m ρ µν m σ µω (A17) if we define κ 2 ≡ γ 2 Γ 2 c 2 (A18)
and enforce
q ρσ = 1 N i S ρ i S σ i .(A19)
We will do so by introducing the following integrals over delta-function representations:
ρσ dq ρσ δ q ρσ − 1 N i S ρ i S σ i ∝ ρσ dq ρσ dr ρσ exp − β 2 αN 2 ρσ q ρσ r ρσ + β 2 α 2 iρσ r ρσ S ρ i S σ i ,(A20)
where r ρσ are additional auxiliary variables whose integration limits extend from −i∞ to i∞, and the factor of β 2 αN/2 is introduced for later convenience.
We can now integrate over the uncondensed overlaps m ρ µν :
µ>1 νρ dm ρ µν βN 2π 1 2 exp − βN 2 (m ρ µν ) 2 + βm ρ µν i (η i µν + ζ i µν )S ρ i ∝ µ>1 νρ dm ρ µν βN 2π 1 2 exp − βN 2 νωρσ δ νω δ ρσ − βΓ 2 (1 − κ 2 )δ νω + κ 2 q ρσ m ρ µν m σ µω = det δ νω δ ρσ − βΓ 2 (1 − κ 2 )δ νω + κ 2 q ρσ − 1 2 αN −1 ≈ exp − αN 2 Tr log δ νω δ ρσ − βΓ 2 (1 − κ 2 )δ νω + κ 2 q ρσ ,(A21)
where αN 1 is the total number of concepts.
Thus, so far, our partition function is
Z n ∝ νρ dm ρ 1ν βN 2π 1 2 ρσ dq ρσ dr ρσ × exp −βN 1 2 νρ (m ρ 1ν ) 2 + α 2β Tr log δ νω δ ρσ − βΓ 2 (1 − κ 2 )δ νω + κ 2 q ρσ + βα 2 ρσ q ρσ r ρσ × Tr S exp β iνρ m ρ 1ν (η i 1ν + ζ i 1ν )S ρ i − β θ + αsΓ 2 2 iρ S ρ i + β 2 α 2 iρσ r ρσ S ρ i S σ i .(A22)
Condensed patterns
Now we consider the target patterns, whose large overlaps cannot be expanded into Gaussians and integrated away. When retrieving sparse examples, the network overlaps significantly with one stored pattern η 11 , but not η 1ν for ν > 1 and ζ 1ν , which are nearly orthogonal to η 11 . When retrieving dense examples or concepts, the network overlaps significantly with all stored examples ζ 1ν within the target concept because they are correlated, but not η 1ν . Thus,
either i η i 11 S ρ i or i ζ i 1ν S ρ i is much larger than the other terms in i (η i 1ν + ζ i 1ν )S ρ i , so we replace iνρ m ρ 1ν (η i 1ν + ζ i 1ν )S ρ i ≈ iνρ m ρ 1ν χ i 1ν S ρ i ,(A23)
where χ i 1ν = η i 11 δ 1ν or ζ i 1ν depending on whether we are considering recovery of sparse or dense patterns. These patterns with significant overlaps are called condensed patterns.
We now invoke self-averaging over the i indices. For any function G(χ i , S i ),
Tr S exp i G(χ i , S i ) = i Tr Si exp G(χ i , S i ) = exp i log Tr Si exp G(χ i , S i ) = exp N log Tr S exp G(χ, S) .(A24)
Now χ and S represent the pattern entry and activity of a single neuron. This single neuron is representative of the entire network because pattern entries are generated independently for each neuron, so we can replace the average over neurons i with an average over possible pattern entries χ. In doing so, we no longer need to pattern-average the trace of the exponential in Eq. (A22); critically, that average has been subsumed by a pattern average inside the exponential, which allows us to write
Z n ∝ νρ dm ρ 1ν βN 2π 1 2 ρσ dq ρσ dr ρσ exp[−βN f ], (A25) where f = 1 2 νρ (m ρ 1ν ) 2 + α 2β Tr log δ νω δ ρσ − βΓ 2 (1 − κ 2 )δ νω + κ 2 q ρσ + βα 2 ρσ q ρσ r ρσ − 1 β log Tr S exp β νρ m ρ 1ν χ 1ν S ρ − θ + αsΓ 2 2 ρ S ρ + βα 2 ρσ r ρσ S ρ S σ . (A26)
The replica partition function is now written in a form amenable to the saddle-point approximation. That is, in the N → ∞ limit, we can replace integrals in Eq. (A25) with the integrand evaluated where derivatives of f with respect to the variables of integration equal 0.
Saddle-point equations for interpretation
Before proceeding with further simplifying f by invoking replica symmetry, we seek to obtain physical interpretations for m, q, and r, which will serve as the order parameters of our system. To do so, we must recall several previously derived forms of the replica partition function and apply the saddle-point conditions to them.
Recall Eqs. (A17) and (A20) obtained after introducing q and r but before integrating over the uncondensed patterns. Using those expressions in the partition function and performing self-averaging similarly to above, we can obtain Z n ∝ µνρ dm ρ µν βN 2π
1 2 ρσ dq ρσ dr ρσ exp[−βN f ], (A27) where f = 1 2 µνρ (m ρ µν ) 2 − βΓ 2 2 µ>1 νωρσ (1 − κ 2 )δ νω + κ 2 q ρσ m ρ µν m σ µω + βα 2 ρσ q ρσ r ρσ − 1 β log Tr S exp β νρ m ρ 1ν χ 1ν S ρ − θ + αsΓ 2 2 ρ S ρ + βα 2 ρσ r ρσ S ρ S σ = 1 2 µνρ (m ρ µν ) 2 − βΓ 2 2 µ>1 νωρσ (1 − κ 2 )δ νω + κ 2 q ρσ m ρ µν m σ µω + βα 2 ρσ q ρσ r ρσ − 1 β log Tr S exp[−βH] ,(A28)
and
H ≡ − νρ m ρ 1ν χ 1ν S ρ + θ + αsΓ 2 2 ρ S ρ − βα 2 ρσ r ρσ S ρ S σ (A29)
is the effective single-neuron Hamiltonian across replicas. At the saddle point, derivatives of f with respect to variables of integration are 0, so
0 = ∂f ∂m ρ 1ν = m ρ 1ν − Tr S χ 1ν S ρ exp[−βH] Tr S exp[−βH] ⇒ m ρ 1ν = χ 1ν S ρ ,(A30)0 = ∂f ∂r ρσ = βα 2 q ρσ − βα 2 Tr S S ρ S σ exp[−βH] Tr S exp[−βH] ⇒ q ρσ = S ρ S σ = S ρ · S σ ρ = σ S ρ ρ = σ, (A31) 0 = ∂f ∂q ρσ = βα 2 r ρσ − β 2 µ>1 νω Γ 2 (1 − κ 2 )δ νω + κ 2 m ρ µν m σ µω ⇒ r ρσ = 1 α µ>1 νω Γ 2 (1 − κ 2 )δ νω + κ 2 m ρ µν m σ µω . (A32)
Bars over variables represent the thermodynamic ensemble average. Thus, m ρ 1ν is the overlap of the network with the condensed pattern to be recovered, q ρσ is the Edwards-Anderson order parameter reflecting the overall neural activity, and r ρσ represents interference from network overlap with uncondensed patterns m ρ µν . To explicitly see that m ρ µν describes the overlap of the network with uncondensed patterns for µ > 1, recall Eq. (A11) obtained before introducing q and r. By introducing χ and performing self-averaging similarly to above, we can obtain
Z n = µνρ dm ρ µν βN 2π 1 2 exp[−βN f ],(A33)
where
f = 1 2 µνρ (m ρ µν ) 2 − 1 β log Tr S exp β νρ m ρ 1ν χ 1ν S ρ − θ + αsΓ 2 2 ρ S ρ + µ>1 νρ m ρ µν (η µν + ζ µν )S ρ = 1 2 µνρ (m ρ µν ) 2 − 1 β log Tr S exp[−βH] ,(A34)
and
H ≡ − νρ m ρ 1ν χ 1ν S ρ + θ + αsΓ 2 2 ρ S ρ − µ>1 νρ m ρ µν (η µν + ζ µν )S ρ (A35)
is the effective single-neuron Hamiltonian. At the saddle point, this Hamiltonian is equivalent to the form in Eq. (A29) due to Eqs. (A17) and (A32). The saddle-point condition applied to Eqs. (A33) and (A34) yields
0 = ∂f ∂m ρ µν = m ρ µν − Tr S (η µν + ζ µν )S ρ exp[−βH] Tr S exp[−βH] ⇒ m ρ µν = (η µν + ζ µν )S ρ for µ > 1. (A36)
Thus m ρ µν is indeed the network overlap with η µν + ζ µν for µ > 1. As asserted ex ante to derive Eq. (A15), we expect it to be small.
Replica-symmetry ansatz
We are now finished with seeking physical interpretations for order parameters, and we return to the primary task of calculating the free energy Eq. (A7) using Eqs. (A25) and (A26). To do so, we assume replica symmetry:
m ρ µν = m µν , q ρσ = q, q ρρ = q 0 , r ρσ = r, r ρρ = r 0 . (A37)
Our expression for f then becomes
f = 1 2 n ν (m 1ν ) 2 + α 2β Tr log δ νω δ ρσ − βΓ 2 (1 − κ 2 )δ νω + κ 2 (q 0 − q)δ ρσ + q + βαn 2 q 0 r 0 + βαn(n − 1) 2 qr − 1 β log Tr S exp β ν m 1ν χ 1ν − θ − αsΓ 2 2 + βα 2 (r 0 − r) ρ S ρ + βα 2 r ρ S ρ 2 . (A38)
The eigenvalues of a constant n × n matrix with entries A are nA with multiplicity 1 and 0 with multiplicity n − 1. Thus, the second term in Eq. (A38) under the limit in Eq. (A7) becomes
lim n→0 1 n Tr log δ νω δ ρσ − βΓ 2 (1 − κ 2 )δ νω + κ 2 (q 0 − q)δ ρσ + q = lim n→0 1 n log 1 − βΓ 2 (1 + sκ 2 − κ 2 )(q 0 − q + nq) + (n − 1) log 1 − βΓ 2 (1 + sκ 2 − κ 2 )(q 0 − q) + (s − 1) log 1 − βΓ 2 (1 − κ 2 )(q 0 − q + nq) + (s − 1)(n − 1) log 1 − βΓ 2 (1 − κ 2 )(q 0 − q) = (s − 1) log 1 − Q(1 − κ 2 ) − βqΓ 2 (1 − κ 2 ) 1 − Q(1 − κ 2 ) + log 1 − Q(1 + sκ 2 − κ 2 ) − βqΓ 2 (1 + sκ 2 − κ 2 ) 1 − Q(1 + sκ 2 − κ 2 ) , ≡ Λ[q, q 0 ],(A39)
where
Q ≡ β(q 0 − q)Γ 2 . (A40)
To evaluate the last term in Eq. (A38), we can use another Gaussian integral [Eq. (A10)] to perform the trace over S in the limit n → 0:
log Tr S exp β ν m 1ν χ 1ν − θ − αsΓ 2 2 + βα 2 (r 0 − r) ρ S ρ + βα 2 r ρ S ρ 2 = log Tr S dz √ 2π e −z 2 /2 exp β ν m 1ν χ 1ν − θ + βα 2 (r 0 − r) − αsΓ 2 2 + √ αrz ρ S ρ = log dz √ 2π e −z 2 /2 1 + exp β ν m 1ν χ 1ν − θ + βα 2 (r 0 − r) − αsΓ 2 2 + √ αrz n ≈ log dz √ 2π e −z 2 /2 1 + n log 1 + exp β ν m 1ν χ 1ν − θ + βα 2 (r 0 − r) − αsΓ 2 2 + √ αrz ≈ n dz √ 2π e −z 2 /2 log 1 + exp β ν m 1ν χ 1ν − θ + βα 2 (r 0 − r) − αsΓ 2 2 + √ αrz . (A41)
The free energy Eq. (A7) under replica symmetry becomes
F N = 1 2 ν (m 1ν ) 2 + α 2β Λ[q, q 0 ] + βα 2 (q 0 r 0 − qr) − 1 β ⟪log 1 + exp β ν m 1ν χ 1ν − θ + βα 2 (r 0 − r) − αsΓ 2 2 + √ αrz ⟫,(A42)
where now the double angle brackets indicate an average over χ 1ν as well as the Gaussian variable z.
Mean-field equations
We can now minimize this free energy over the order parameters by setting derivatives of F to zero, which yields the mean-field equations. This step is equivalent to applying the saddle-point approximation to replica-symmetric f in the n → 0 limit. We first note that
∂Λ ∂q = (s − 1) β 2 qΓ 4 (1 − κ 2 ) 2 1 − Q(1 − κ 2 ) 2 + β 2 qΓ 4 (1 + sκ 2 − κ 2 ) 2 1 − Q(1 + sκ 2 − κ 2 ) 2 .(A43)
The combined fraction has numerator β 2 qΓ 4 multiplied by
(s − 1) 1 − κ 2 1 − Q(1 + sκ 2 − κ 2 ) 2 + 1 + sκ 2 − κ 2 1 − Q(1 − κ 2 ) 2 = s 1 − Q(1 − κ 2 )(1 + sκ 2 − κ 2 ) 2 + s(s − 1)κ 4 .(A44)
Meanwhile,
∂Λ ∂q 0 = (s − 1) − βΓ 2 (1 − κ 2 ) 1 − Q(1 − κ 2 ) − β 2 qΓ 4 (1 − κ 2 ) 2 1 − Q(1 − κ 2 ) 2 − β(1 + sκ 2 − κ 2 ) 1 − Q(1 + sκ 2 − κ 2 ) − β 2 qΓ 4 (1 + sκ 2 − κ 2 ) 2 1 − Q(1 + sκ 2 − κ 2 ) 2 = − ∂Λ ∂q − βΓ 2 (s − 1)(1 − κ 2 ) 1 − Q(1 − κ 2 ) + 1 + sκ 2 − κ 2 1 − Q(1 + sκ 2 − κ 2 ) .(A45)
The combined fraction inside the square brackets has numerator
(s − 1)(1 − κ 2 ) 1 − Q(1 + sκ 2 − κ 2 ) + (1 + sκ 2 − κ 2 ) 1 − Q(1 − κ 2 ) = s 1 − Q(1 − κ 2 )(1 + sκ 2 − κ 2 ) . (A46)
Thus, derivatives of F with respect to the order parameters are
0 = ∂F ∂q = α 2β β 2 qsΓ 4 1 − Q(1 − κ 2 )(1 + sκ 2 − κ 2 ) 2 + (s − 1)κ 4 1 − Q(1 − κ 2 ) 2 1 − Q(1 + sκ 2 − κ 2 ) 2 − βα 2 r ⇒ r = qsΓ 4 1 − Q(1 − κ 2 )(1 + s 0 κ 2 ) 2 + s 0 κ 4 1 − Q(1 − κ 2 ) 2 1 − Q(1 + s 0 κ 2 ) 2 (A47) 0 = ∂F ∂q 0 = α 2β − ∂Λ ∂q − βsΓ 2 1 − Q(1 − κ 2 )(1 + sκ 2 − κ 2 ) 1 − Q(1 − κ 2 ) 1 − Q(1 + sκ 2 − κ 2 ) + βα 2 r 0 ⇒ r 0 = r + sΓ 2 β 1 − Q(1 − κ 2 )(1 + s 0 κ 2 ) 1 − Q(1 − κ 2 ) 1 − Q(1 + s 0 κ 2 ) (A48) 0 = ∂F ∂m 1ν = m 1ν − ⟪χ 1ν sig[βh]⟫ ⇒ m 1ν = ⟪χ 1ν sig[βh]⟫,(A49)
where sig(x) ≡ 1/(1 + e −x ) and
s 0 ≡ s − 1, h ≡ ν m 1ν χ 1ν − θ + βα 2 (r 0 − r) − αsΓ 2 2 + √ αrz. (A50)
h is the local field under the mean-field approximation. We can simplify it via
βα 2 (r 0 − r) − αsΓ 2 2 = αsΓ 2 2 · 1 − Q(1 − κ 2 )(1 + s 0 κ 2 ) − 1 − Q(1 − κ 2 ) 1 − Q(1 + s 0 κ 2 ) 1 − Q(1 − κ 2 ) 1 − Q(1 + s 0 κ 2 ) = QαsΓ 2 2 · 1 + s 0 κ 4 − Q(1 − κ 2 )(1 + s 0 κ 2 ) 1 − Q(1 − κ 2 ) 1 − Q(1 + s 0 κ 2 ) . (A51) Thus, h = ν m 1ν χ 1ν − φ + √ αrz, φ ≡ θ − QαsΓ 2 2 · 1 + s 0 κ 4 − Q(1 − κ 2 )(1 + s 0 κ 2 ) 1 − Q(1 − κ 2 ) 1 − Q(1 + s 0 κ 2 ) (A52)
φ is the shifted threshold; we shall see that in retrieval regimes, it is almost identical to θ.
Continuing, and using the identities dz e −z 2 /2 zf (z) = dz e −z 2 /2 df (z)/dz and d sig(x)/dx = sig(x) − sig(x) 2 ,
0 = ∂F ∂r = − βα 2 q − √ α 2 √ r ⟪z sig[βh]⟫ + βα 2 ⟪sig[βh]⟫ ⇒ q = ⟪sig[βh] 2 ⟫ (A53) 0 = ∂F ∂r 0 = βα 2 q 0 − βα 2 ⟪sig[βh]⟫ ⇒ q 0 = ⟪sig[βh]⟫.(A54)
Thus, we recover the mean-field equations presented in Eq. (11).
Zero-temperature limit
From now on, we only consider the T = 0 limit with β → ∞. In this limit,
dz √ 2π e −z 2 /2 sig[Az + B] ≈ dz √ 2π e −z 2 /2 Θ[Az + B] = 1 2 1 + erf B √ 2A ,(A55)
where Θ is the Heaviside step function and erf is the error function. Thus, Eqs. (A49), (A53), and (A47) become
m 1ν ≈ ⟪χ 1ν Θ ν m 1ν χ 1ν − φ + √ αrz ⟫ = 1 2 χ 1ν erf ν m 1ν χ 1ν − φ √ 2αr , (A56) q ≈ ⟪Θ ν m 1ν χ 1ν − φ + √ αrz 2 ⟫ = 1 2 1 + erf ν m 1ν χ 1ν − φ √ 2αr , (A57) r ≈ sΓ 4 2 · 1 − Q(1 − κ 2 )(1 + s 0 κ 2 ) 2 + s 0 κ 4 1 − Q(1 − κ 2 ) 2 1 − Q(1 + s 0 κ 2 ) 2 1 + erf ν m 1ν χ 1ν − φ √ 2αr .(A58)
Here, single angle brackets again indicate an average over χ 1ν , with the average over z performed. The formula for m 1ν was obtained using χ 1ν = 0 for both sparse and dense patterns. Also when β → ∞,
dz √ 2π e −z 2 /2 sig[β(Az + B)] − sig[β(Az + B)] 2 = dz √ 2πβA e −z 2 /2 ∂ ∂z sig[β(Az + B)] ≈ dz √ 2πβA e −z 2 /2 ∂ ∂z Θ[Az + B] = dz √ 2πβ e −z 2 /2 δ[Az + B] = 1 √ 2πβ|A| e −B 2 /2A 2 .(A59)
We use this identity to simplify Eq. (A40) via Eqs. (A53) and (A54):
Q ≈ Γ 2 ⟪δ ν m 1ν χ 1ν − φ + √ αrz ⟫ = Γ 2 √ 2παr exp − ν m 1ν χ 1ν − φ 2 2αr .(A60)
Equations (A56), (A58), and (A60) are the zero-temperature mean-field equations connecting the order parameters m 1ν , r, and Q (we no longer need q, q 0 , and r 0 ). All further derivations will start with these equations. The mean-field equations Eqs. (A56), (A58), and (A60) involve a generic target pattern χ 1ν . We now consider the case where the network recovers a sparse example, so χ 1ν = η 11 δ 1ν . Using this expression, we can simplify the mean-field equations and find the critical example load s c above which sparse examples can no longer be retrieved. In this section, we take the sparse limit with a 1. For convenience, we rename m ≡ m 11 and η ≡ η 11 . From Eq. (A2), we have
m = 1 2 η erf mη − φ 2αr = (1 − 2γ)a 2 erf φ √ 2αr + erf (1 − 2γ)m − φ √ 2αr ,(B2)r = sΓ 4 2 · 1 − Q(1 − κ 2 )(1 + s 0 κ 2 ) 2 + s 0 κ 4 1 − Q(1 − κ 2 ) 2 1 − Q(1 + s 0 κ 2 ) 2 1 + erf mη − φ √ 2αr = sΓ 4 2 · 1 − Q(1 − κ 2 )(1 + s 0 κ 2 ) 2 + s 0 κ 4 1 − Q(1 − κ 2 ) 2 1 − Q(1 + s 0 κ 2 ) 2 1 − erf φ √ 2αr + a erf (1 − 2γ)m − φ √ 2αr , (B3) Q = Γ 2 √ 2παr exp − (mη − φ) 2 2αr = Γ 2 √ 2παr exp − φ 2 2αr + a exp − ((1 − 2γ)m − φ) 2 2αr . (B4)
We will soon see that these equations yield Q 1 in the retrieval regime. In that case,
m = (1 − 2γ)a 2 erf φ √ 2αr + erf (1 − 2γ)m − φ √ 2αr ,(B5)r = s(1 + s 0 κ 4 )Γ 4 2 1 − erf φ √ 2αr + a erf (1 − 2γ)m − φ √ 2αr .(B6)
These mean-field equations for sparse examples are presented in Eq. (13) with m replaced by its original name m 11 . They can be numerically solved to find regimes of successful retrieval, but we will analyze them further in search of formulas for the capacity s c . We can map our equations onto the classic sparse Hopfield equations with the rescalings
m = (1 − 2γ)a · m , φ = (1 − 2γ) 2 a · θ , r = s 1 + s 0 κ 4 Γ 4 · r , α = (1 − 2γ) 4 a 2 s 1 + s 0 κ 4 Γ 4 · α .(B7)
Then,
m = 1 2 erf θ √ 2α r + erf m − θ √ 2α r (B8) r = 1 2 1 − erf θ √ 2α r + a erf m − θ √ 2α r .(B9)
Successful retrieval means that m ≈ 1, which requires θ / √ 2α r 1 and (m − θ )/ √ 2α r 1. Under these limits, 0 < θ < 1 and Q 1, which validates our previous assumption. We can use asymptotic forms of the error function to obtain
m = 1 − 1 √ 2π √ α r θ e −θ 2 /2α r − 1 √ 2π √ α r m − θ e −(m −θ ) 2 /2α r (B10) r = 1 √ 2π √ α r θ e −θ 2 /2α r + a 2 − a √ 2π √ α r m − θ e −(m −θ ) 2 /2α r .(B11)
2. Capacity formula for θ → 0
To derive capacity formulas, we need to make further assumptions about θ . First, we consider small θ . Because we still require m ≈ 1, the third term in Eq. (B11) becomes much smaller than the second, so
r ≈ 1 √ 2π √ α r θ e −θ 2 /2α r + a 2 .(B12)
This equation no longer depends on m . If we take y ≡ θ / √ α r 1, it becomes
θ 2 α = 1 √ 2π ye −y 2 /2 + a 2 y 2 . (B13)
The capacity is the maximum example load s for which this equation still admits a solution. Note that s is proportional to α according to Eq. (B7). Thus, we maximize α by minimizing the right-hand side of Eq. (B13) over y:
0 = 1 √ 2π (1 − y 2 )e −y 2 /2 + ay ye −y 2 /2 ≈ √ 2πa y = −W −1 (−2πa 2 ) ≈ 2|log a|,(B14)
where W −1 is the negative branch of the Lambert W function, which is also known as the product logarithm. Substituting Eq. (B14) back into Eq. (B13), we obtain the maximal value
α c ∼ θ 2 a|log a| . (B15)
This expression implicitly defines the capacity s c for θ → 0. We can use this expression to obtain critical values for m c and r c :
m c ≈ 1 − 1 2 π|log a| θ 1 − θ a (1−θ ) 2 /θ 2 , (B16) r c ≈ a 2 .(B17)
Note that m′_c ≈ 1, which confirms that our solution is self-consistent.

3. Capacity formula for θ′ → 1

Next, we derive a capacity formula for large θ′. In this case, Eqs. (B10) and (B11) become

m′ = 1 − (1/√(2π)) (√(α′r′)/(m′ − θ′)) e^(−(m′−θ′)²/2α′r′),  (B18)
r′ = a/2 − (a/√(2π)) (√(α′r′)/(m′ − θ′)) e^(−(m′−θ′)²/2α′r′),  (B19)

which yields

r′ = a/2 − a(1 − m′) ≈ a/2.  (B20)

If we define y ≡ (m′ − θ′)/√(α′r′) ≫ 1 and use Eq. (B20), we can write Eq. (B18) as

√(aα′/2) = (1 − θ′)/y − (1/(√(2π) y²)) e^(−y²/2).  (B21)

Again, the example load s is proportional to α′ [Eq. (B7)], so we maximize α′ by maximizing the right-hand side of Eq. (B21) with respect to y:

0 = −(1 − θ′)/y² + (1/√(2π)) (2/y³ + 1/y) e^(−y²/2)
y e^(−y²/2) ≈ √(2π) (1 − θ′)
y ≈ √( −W₋₁(−2π(1 − θ′)²) ) ≈ √( 2|log(1 − θ′)| ).  (B22)

Substituting Eq. (B22) into Eq. (B21), we obtain

α′_c ∼ (1 − θ′)² / (a|log(1 − θ′)|).  (B23)

This expression implicitly defines the capacity s_c for θ′ → 1. Similarly to before, we use this expression to obtain the critical value of m′_c:

m′_c ≈ 1 − (1 − θ′)/y² = 1 − (1 − θ′)/(2|log(1 − θ′)|).  (B24)

Again m′_c ≈ 1, which confirms that our solution was obtained self-consistently.
4. Maximizing capacity over θ′

We have derived two expressions for α′_c, which is proportional to the capacity s_c, in different regimes of the rescaled threshold θ′:

α′_c ∼ θ′²/(a|log a|)  for θ′ → 0,
α′_c ∼ (1 − θ′)²/(a|log(1 − θ′)|)  for θ′ → 1.  (B25)

We now take θ′ to be a free parameter and maximize the capacity over it. The first expression for α′_c grows from 0 as θ′ increases from 0, and the second grows from 0 as θ′ decreases from 1. Thus, the optimum should lie somewhere in between, and we estimate its location by finding the crossover point where the two expressions meet. We assume that θ′ is sufficiently far from 1 that |log(1 − θ′)| ∼ 1. Then, the crossover point is given by

θ′²/(a|log a|) ≈ (1 − θ′)²/a  ⟹  θ′ = √|log a| / (1 + √|log a|).  (B26)

Substituting this optimal threshold back into Eq. (B25), we obtain

α′_c ∼ 1/(a|log a|).  (B27)

By converting α′ back to α with Eq. (B7), we recover the capacity formula Eq. (20) at the optimal threshold.
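As a quick numerical illustration of these formulas (a sketch; we read the radicals lost in Eqs. (B26)-(B27) as square roots, as reconstructed above), the optimal threshold and capacity scaling can be tabulated for a few sparsities:

import numpy as np

for a in [0.001, 0.01, 0.1]:
    log_a = abs(np.log(a))
    theta_opt = np.sqrt(log_a) / (1 + np.sqrt(log_a))  # Eq. (B26)
    alpha_c = 1.0 / (a * log_a)                        # Eq. (B27), up to an O(1) constant
    print(f"a = {a}: theta' = {theta_opt:.3f}, alpha'_c ~ {alpha_c:.1f}")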
Appendix C: Capacity for dense examples

1. Dense asymmetric mean-field equations

We return to the generic mean-field equations Eqs. (A56), (A58), and (A60) and consider the case where the network recovers a dense example ζ^11. Due to correlations, the network will overlap with all dense patterns ζ^1ν, so χ^1ν = ζ^1ν. Using this expression, we can simplify the mean-field equations and find the critical example load s_c above which dense examples can no longer be retrieved. Recall from Eq. (A2) the dense pattern statistics, Eq. (C1).

To help us in our calculations, we note the integrals

∫dx e^(−(x−A)²/ρ²) e^(−(x−B)²/σ²) = √( π/(ρ⁻² + σ⁻²) ) exp( −(A − B)²/(ρ² + σ²) ),
∫dx e^(−(x−A)²/ρ²) erf( (x − B)/σ ) = √π ρ erf( (A − B)/√(ρ² + σ²) ),
∫dx e^(−(x−A)²/ρ²) x erf( (x − B)/σ ) = ρ [ (ρ²/√(ρ² + σ²)) exp( −(A − B)²/(ρ² + σ²) ) + √π A erf( (A − B)/√(ρ² + σ²) ) ],  (C2)

where all integrals run over x ∈ (−∞, ∞).
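These identities are easy to spot-check numerically; a minimal sketch using scipy quadrature and arbitrary test values for A, B, ρ, σ (the second identity is checked here):

import numpy as np
from scipy.special import erf
from scipy.integrate import quad

A, B, rho, sigma = 0.3, -0.7, 1.2, 0.8

lhs = quad(lambda x: np.exp(-(x - A)**2 / rho**2) * erf((x - B) / sigma),
           -np.inf, np.inf)[0]
rhs = np.sqrt(np.pi) * rho * erf((A - B) / np.sqrt(rho**2 + sigma**2))
print(lhs, rhs)  # the two values agree to quadrature precision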
During successful retrieval, the network overlaps strongly with the target pattern ζ^11. It will also overlap with the other examples ζ^1ν for ν > 1 to a degree governed by the correlation parameter c [Eq. (A3)]. As N → ∞, these other overlaps converge towards one another due to the law of large numbers; we call this asymptotic value m₀ ≡ m₁ν for ν > 1. Thus, we can write

∑_ν m₁ν ζ^1ν = m₁₁ ζ^11 + m₀ ∑_{ν>1} ζ^1ν = mζ + s₀m₀x₀.  (C3)

We rename m ≡ m₁₁ and ζ ≡ ζ^11 for convenience. Here x₀ is the average over the s₀ = s − 1 other examples in concept 1, and it follows a binomial distribution with mean cζ^1 and variance γ²(1 − c²)/s₀ according to Eq. (C1). In the large-s limit, it can be approximated by a Gaussian random variable with the same moments. We also introduce m₁, which is the network overlap with the concept pattern ζ^1.
With these considerations, Eqs. (A56), (A58), and (A60) yield
m = (1/2) ⟪ζ erf( (mζ + s₀m₀x₀ − φ)/√(2αr) )⟫,  (C4)
m₀ = (1/2) ⟪x₀ erf( (mζ + s₀m₀x₀ − φ)/√(2αr) )⟫,  (C5)
m₁ = (1/2) ⟪ζ^1 erf( (mζ + s₀m₀x₀ − φ)/√(2αr) )⟫,  (C6)
r = (sΓ⁴/2) · ( [1 − Q(1−κ²)(1+s₀κ²)]² + s₀κ⁴[1 − Q(1−κ²)]² ) / ( [1 − Q(1−κ²)]²[1 − Q(1+s₀κ²)]² ) · [ 1 + ⟪erf( (mζ + s₀m₀x₀ − φ)/√(2αr) )⟫ ],  (C7)
Q = (Γ²/√(2παr)) ⟪exp( −(mζ + s₀m₀x₀ − φ)²/(2αr) )⟫.  (C8)

The double angle brackets indicate averages over ζ and x₀, where x₀ is a Gaussian random variable with the mean and variance listed above. We define the following variables:

σ₀² ≡ s₀γ²(1 − c²)m₀² + αr,  Y±± ≡ (γm ± s₀γcm₀ ± φ)/(√2 σ₀),  (C9)
with choices for + and − in Y ±± corresponding to signs in the right-hand side. Now we come to the task of performing the averages in Eqs. (C4)-(C8). For each variable, we average successively over ζ, x 0 , and ζ 1 .
First,

Q = (Γ²/√(2παr)) [ ((1+c)/2) ⟪exp( −(mζ^1 + s₀m₀x₀ − φ)²/(2αr) )⟫ + ((1−c)/2) ⟪exp( −(mζ^1 − s₀m₀x₀ + φ)²/(2αr) )⟫ ].  (C10)

Then,

(1/√(2παr)) ⟪exp( −(mζ^1 + s₀m₀x₀ − φ)²/(2αr) )⟫
 = (1/√(2παr)) √( s₀/(2πγ²(1−c²)) ) ∫dx₀ e^(−s₀(x₀−cζ^1)²/2γ²(1−c²)) e^(−s₀²m₀²(x₀ + (mζ^1−φ)/s₀m₀)²/2αr)
 = (1/√( 2π[s₀γ²(1−c²)m₀² + αr] )) exp( −(mζ^1 + s₀cm₀ζ^1 − φ)² / 2[s₀γ²(1−c²)m₀² + αr] )
 = (1/(√(2π)σ₀)) · (1/2) [ exp(−Y₊₊²) + exp(−Y₊₋²) ].  (C11)

Thus,

Q = (Γ²/(√(2π)σ₀)) [ ((1+c)/4)( exp(−Y₊₊²) + exp(−Y₊₋²) ) + ((1−c)/4)( exp(−Y₋₊²) + exp(−Y₋₋²) ) ].  (C12)

Next,

⟪erf( (mζ + s₀m₀x₀ − φ)/√(2αr) )⟫ = ((1+c)/2) ⟪erf( (mζ^1 + s₀m₀x₀ − φ)/√(2αr) )⟫ − ((1−c)/2) ⟪erf( (mζ^1 − s₀m₀x₀ + φ)/√(2αr) )⟫.  (C13)

Then,

⟪erf( (mζ^1 + s₀m₀x₀ − φ)/√(2αr) )⟫ = √( s₀/(2πγ²(1−c²)) ) ∫dx₀ e^(−s₀(x₀−cζ^1)²/2γ²(1−c²)) erf( (s₀m₀/√(2αr)) (x₀ + (mζ^1−φ)/s₀m₀) )
 = erf( (mζ^1 + s₀cm₀ζ^1 − φ)/√( 2[s₀γ²(1−c²)m₀² + αr] ) )
 = −(1/2) [ erf Y₊₊ − erf Y₊₋ ].  (C14)

Thus,

r = (sΓ⁴/2) · ( [1 − Q(1−κ²)(1+s₀κ²)]² + s₀κ⁴[1 − Q(1−κ²)]² ) / ( [1 − Q(1−κ²)]²[1 − Q(1+s₀κ²)]² ) · [ 1 − ((1+c)/4)( erf Y₊₊ − erf Y₊₋ ) − ((1−c)/4)( erf Y₋₊ − erf Y₋₋ ) ].  (C15)
Next,

m = (1/2) { ((1+c)/2) ⟪ζ^1 erf( (mζ^1 + s₀m₀x₀ − φ)/√(2αr) )⟫ + ((1−c)/2) ⟪ζ^1 erf( (mζ^1 − s₀m₀x₀ + φ)/√(2αr) )⟫ }.  (C16)

Then,

⟪ζ^1 erf( (mζ^1 + s₀m₀x₀ − φ)/√(2αr) )⟫ = √( s₀/(2πγ²(1−c²)) ) ⟪ζ^1 ∫dx₀ e^(−s₀(x₀−cζ^1)²/2γ²(1−c²)) erf( (s₀m₀/√(2αr)) (x₀ + (mζ^1−φ)/s₀m₀) )⟫
 = ⟪ζ^1 erf( (mζ^1 + s₀cm₀ζ^1 − φ)/√( 2[s₀γ²(1−c²)m₀² + αr] ) )⟫
 = (γ/2) [ erf Y₊₊ + erf Y₊₋ ].  (C17)

Thus,

m = (γ/2) [ ((1+c)/4)( erf Y₊₊ + erf Y₊₋ ) + ((1−c)/4)( erf Y₋₊ + erf Y₋₋ ) ].  (C18)

Similarly,

m₁ = (1/2) { ((1+c)/2) ⟪ζ^1 erf( (mζ^1 + s₀m₀x₀ − φ)/√(2αr) )⟫ − ((1−c)/2) ⟪ζ^1 erf( (mζ^1 − s₀m₀x₀ + φ)/√(2αr) )⟫ }
 = (γ/2) [ ((1+c)/4)( erf Y₊₊ + erf Y₊₋ ) − ((1−c)/4)( erf Y₋₊ + erf Y₋₋ ) ].  (C19)

Finally,

m₀ = (1/2) { ((1+c)/2) ⟪x₀ erf( (mζ^1 + s₀m₀x₀ − φ)/√(2αr) )⟫ − ((1−c)/2) ⟪x₀ erf( (mζ^1 − s₀m₀x₀ + φ)/√(2αr) )⟫ }.  (C20)

Then,

⟪x₀ erf( (mζ^1 + s₀m₀x₀ − φ)/√(2αr) )⟫ = √( s₀/(2πγ²(1−c²)) ) ∫dx₀ e^(−s₀(x₀−cζ^1)²/2γ²(1−c²)) x₀ erf( (s₀m₀/√(2αr)) (x₀ + (mζ^1−φ)/s₀m₀) )
 = ⟪ γ²(1−c²)m₀ √( 2/(π[s₀γ²(1−c²)m₀² + αr]) ) exp( −(mζ^1 + s₀cm₀ζ^1 − φ)²/2[s₀γ²(1−c²)m₀² + αr] ) + c ζ^1 erf( (mζ^1 + s₀cm₀ζ^1 − φ)/√( 2[s₀γ²(1−c²)m₀² + αr] ) ) ⟫
 = (γ²(1−c²)m₀/(√(2π)σ₀)) [ exp(−Y₊₊²) + exp(−Y₊₋²) ] + (γc/2) [ erf Y₊₊ + erf Y₊₋ ].  (C21)

Thus,

m₀ = (γc/2) [ ((1+c)/4)( erf Y₊₊ + erf Y₊₋ ) − ((1−c)/4)( erf Y₋₊ + erf Y₋₋ ) ] + Q (γ²/Γ²)(1 − c²) m₀
 = (γc/2) ( 1 − Q(γ²/Γ²)(1 − c²) )⁻¹ [ ((1+c)/4)( erf Y₊₊ + erf Y₊₋ ) − ((1−c)/4)( erf Y₋₊ + erf Y₋₋ ) ].  (C22)
These mean-field equations are presented in Eq. (14) with m replaced by its original name m₁₁. They can be numerically solved to find regimes of successful retrieval, but we will analyze them further in search of a formula for the capacity s_c.
2. Simplified mean-field equations

To derive a capacity formula, we make three further assumptions. First, we assume c² ≪ 1, which implies κ² ≪ 1 as well. Second, we assume that the rescaled threshold φ′ = 0. This assumption is justified empirically, for we find that the capacity is maximized at |φ′| < 10⁻⁶ over all parameter ranges in Fig. 3. It is also justified theoretically, since we will derive that Q ≪ 1, which means φ ≈ θ. For dense patterns in the classic Hopfield network, retrieval is maximized at threshold θ = 0 [21]. Finally, we assume s ≫ 1, so s₀ = s; this is not necessary, but it makes the expressions simpler.

We rescale the order parameters with

m = (γ/2) m′,  m₀ = (γ/2) m′₀,  r = (sΓ⁴/2) r′,  α = (γ⁴/(2Γ⁴)) α′.  (C23)
The mean-field equations then become

m′ = ((1+c)/2) erf Y′₊ + ((1−c)/2) erf Y′₋,  (C24)
m′₀ = ( c/(1 − Q γ²/Γ²) ) [ ((1+c)/2) erf Y′₊ − ((1−c)/2) erf Y′₋ ],  (C25)
r′ = 1/(1 − Q)²,  (C26)
Q = √(2/π) (Γ²/(γ²σ′₀)) [ ((1+c)/2) exp(−(Y′₊)²) + ((1−c)/2) exp(−(Y′₋)²) ],  (C27)

where

(σ′₀)² ≡ s [ (m′₀)² + α′r′ ],  Y′± ≡ (m′/(√2 σ′₀)) ( 1 ± scm′₀/m′ ).  (C28)

Successful retrieval means that m′ ≈ 1, which requires Y′± ≫ 1. This condition in turn yields m′₀ ≈ c² and Q ≪ 1 through Eqs. (C25) and (C27), which confirms our previous assumption. Thus,

Y′± ≈ (y/√2)(1 ± sc³),  where y ≡ m′/σ′₀.  (C29)

For Y′± ≫ 1, we need sc³ ≪ 1 and y ≫ 1, which we use to boldly simplify Eqs. (C24)-(C27):

m′ = 1 − (1/y²) √(2/π) y e^(−y²/2),  (C30)
m′₀ = ( c²/(1 − Q γ²/Γ²) ) [ 1 − ( 1/y² − sc² ) √(2/π) y e^(−y²/2) ],  (C31)
α′ = [ (m′)²/(sy²) − (m′₀)² ] (1 − Q)²,  (C32)
Q = (Γ²/(γ²m′)) [ 1 − sc⁴y² + (1/2) s²c⁶y⁴ ] √(2/π) y e^(−y²/2).  (C33)
For mathematical tractability, we have expanded in sc³ and 1/y, even though the former is not strictly small and the latter can be empirically close to 1.
3. Capacity formula

In Eqs. (C30)-(C33), we substitute the formulas for m′, m′₀, and Q into the equation for α′ and keep only leading terms in 1/y and c. After much simplification, we obtain

s(α′ + c⁴) ≈ 1/y² − (Γ²/γ²) ( 1/y + (1/2) s²c⁶y³ ) √(8/π) e^(−y²/2).  (C34)

At the critical value of s above which Eq. (C34) cannot be satisfied by any y, the derivatives with respect to y on both sides of the equation must be equal. In other words, we expect the critical s_c to be a saddle-node bifurcation point. For mathematical tractability, we ignore the term proportional to s². This simplification is rather arbitrary, but it can be empirically justified by comparing the resulting formula with numerical analysis of the full mean-field equations [Fig. 3]. We also eliminate higher orders in 1/y to obtain

0 ≈ −2/y³ + (Γ²/γ²) √(8/π) e^(−y²/2).  (C35)
Solving for y, we obtain
y = √( −3 W₋₁( −(1/3) (π/2)^(1/3) (γ/Γ)^(4/3) ) ) ≈ √( 3 log( 3 (2/π)^(1/3) (Γ/γ)^(4/3) ) ),  (C36)

where W₋₁ is the negative branch of the Lambert W function. Since this function involves a logarithm, it varies very slowly as a function of γ/Γ. For γ = 0.1 and a between 0.001 and 0.1, this expression for y ranges from 1.7 to 3.3. Within this range, m′ > 0.88 according to Eq. (C30), which confirms that our earlier simplifications using m′ ≈ 1 yield self-consistent results.
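This Lambert-W expression is easy to evaluate; a small sketch, assuming Γ² = γ² + a (the combined dense plus sparse storage load, an identification that reproduces the quoted range 1.7-3.3):

import numpy as np
from scipy.special import lambertw

gamma = 0.1
for a in [0.001, 0.01, 0.1]:
    ratio = gamma**2 / (gamma**2 + a)                       # (gamma/Gamma)^2
    arg = -(1.0 / 3.0) * (np.pi / 2) ** (1 / 3) * ratio ** (2 / 3)
    y = np.sqrt(-3.0 * np.real(lambertw(arg, k=-1)))        # Eq. (C36)
    print(f"a = {a}: y = {y:.2f}")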
We can use Eq. (C35) to simplify Eq. (C34) to leading order in 1/y:

s_c(α′ + c⁴) ≈ 1/y² − s_c²c⁶.  (C37)
Solving for s_c,
s_c = ( √( (α′ + c⁴)² + (4/y²)c⁶ ) − (α′ + c⁴) ) / (2c⁶).  (C38)

To heuristically obtain a simpler equation, we note that s_c → 1/(yc³) when α′ → 0 and s_c → 1/(y²α′) when α′ → ∞. We simply capture both these behaviors with

s_c ∼ 1/( yc³ + y²α′ ).  (C39)
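A quick sketch comparing the exact root (C38) with the heuristic (C39), with y = 3 as suggested below and illustrative values of α′ and c:

import numpy as np

y, c = 3.0, 0.1
for alpha_p in [1e-4, 1e-2, 1.0]:   # alpha' values, illustrative
    u = alpha_p + c**4
    sc_exact = (np.sqrt(u**2 + 4 * c**6 / y**2) - u) / (2 * c**6)   # Eq. (C38)
    sc_heur = 1.0 / (y * c**3 + y**2 * alpha_p)                     # Eq. (C39)
    print(f"alpha' = {alpha_p}: exact = {sc_exact:.1f}, heuristic = {sc_heur:.1f}")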
Again, y varies slowly within its range, so we simplify this equation further by setting y ∼ 3. After converting α′ back to α with Eq. (C23), we obtain Eq. (23).
Appendix D: Critical load for dense concepts

1. Dense symmetric mean-field equations

We return to the generic mean-field equations Eqs. (A56), (A58), and (A60) and consider the case where the network recovers a dense concept ζ^1. Due to correlations, the network will overlap with all dense patterns ζ^1ν, so χ^1ν = ζ^1ν. Using this expression, we can simplify the mean-field equations and find the critical example load s_c below which dense concepts cannot be retrieved. Recall the dense pattern statistics Eq. (C1) and Gaussian integrals Eq. (C2), which will aid us in our derivations.

Successful retrieval means that the network overlaps strongly with the target concept ζ^1. The correlation parameter c produces overlaps with all example patterns ζ^1ν [Eq. (A3)], which converge to an asymptotic value mₛ ≡ m₁ν as N → ∞. The "s" signifies "symmetric", i.e. equal overlap with all examples in concept 1. Thus, we can write ∑_ν m₁ν ζ^1ν = smₛxₛ, where xₛ is the average over the s examples in concept 1; it follows a binomial distribution with mean cζ^1 and variance γ²(1 − c²)/s according to Eq. (C1). In the large-s limit, it can be approximated by a Gaussian random variable with the same moments. We explicitly introduce m₁, which is the network overlap with the target concept ζ^1. With these considerations, Eqs. (A56), (A58), and (A60) yield

Q = (Γ²/√(2παr)) ⟪exp( −(smₛxₛ − φ)²/(2αr) )⟫.  (D5)
The double angle brackets indicate averages over ζ^1 and xₛ, where xₛ is a Gaussian random variable with the mean and variance listed above. We define the following variables:

σₛ² ≡ sγ²(1 − c²)mₛ² + αr,  Y± ≡ (sγcmₛ ± φ)/(√2 σₛ),  (D6)

with the choices for + and − in Y± corresponding to the sign on the right-hand side. Now we come to the task of performing the averages in Eqs. (D2)-(D5). For each variable, we average successively over ζ^1 and xₛ.
First,

Q = (Γ²/√(2παr)) ⟪exp( −(smₛxₛ − φ)²/(2αr) )⟫
 = (Γ²/√(2παr)) √( s/(2πγ²(1−c²)) ) ∫dxₛ e^(−s(xₛ − cζ^1)²/2γ²(1−c²)) e^(−s²mₛ²(xₛ − φ/smₛ)²/2αr)
 = Γ² √( 1/(2π[sγ²(1−c²)mₛ² + αr]) ) exp( −(scmₛζ^1 − φ)²/2[sγ²(1−c²)mₛ² + αr] )
 = (Γ²/(√(8π) σₛ)) [ exp(−Y₊²) + exp(−Y₋²) ].  (D7)

Next,

⟪erf( (smₛxₛ − φ)/√(2αr) )⟫ = √( s/(2πγ²(1−c²)) ) ∫dxₛ e^(−s(xₛ − cζ^1)²/2γ²(1−c²)) erf( (smₛ/√(2αr)) (xₛ − φ/smₛ) )
 = erf( (scmₛζ^1 − φ)/√( 2[sγ²(1−c²)mₛ² + αr] ) )
 = −(1/2) [ erf Y₊ − erf Y₋ ].  (D8)

Thus,

r = (sΓ⁴/2) · ( [1 − Q(1−κ²)(1+s₀κ²)]² + s₀κ⁴[1 − Q(1−κ²)]² ) / ( [1 − Q(1−κ²)]²[1 − Q(1+s₀κ²)]² ) · [ 1 − (1/2)( erf Y₊ − erf Y₋ ) ].  (D9)
Next,

m₁ = (1/2) ⟪ζ^1 erf( (smₛxₛ − φ)/√(2αr) )⟫ = (1/2) ⟪ζ^1 erf( (scmₛζ^1 − φ)/√( 2[sγ²(1−c²)mₛ² + αr] ) )⟫ = (γ/4) [ erf Y₊ + erf Y₋ ].  (D10)

Finally,

mₛ = (1/2) ⟪xₛ erf( (smₛxₛ − φ)/√(2αr) )⟫
 = (1/2) ⟪ γ²(1−c²)mₛ √( 2/(π[sγ²(1−c²)mₛ² + αr]) ) exp( −(scmₛζ^1 − φ)²/2[sγ²(1−c²)mₛ² + αr] ) + c ζ^1 erf( (scmₛζ^1 − φ)/√( 2[sγ²(1−c²)mₛ² + αr] ) ) ⟫
 = Q (γ²/Γ²)(1 − c²) mₛ + (γc/4) [ erf Y₊ + erf Y₋ ]
 = (γc/4) ( 1 − Q(γ²/Γ²)(1 − c²) )⁻¹ [ erf Y₊ + erf Y₋ ].  (D11)

These mean-field equations are presented in Eq. (16).
2. Simplified mean-field equations

To derive a formula for the critical example load s_c, we make three further assumptions. First, we assume c² ≪ 1, which implies κ² ≪ 1 as well. Second, we assume that the rescaled threshold φ′ = 0. This assumption is justified empirically: we find that s_c is minimized at |φ′| < 0.5 over all parameter ranges in Fig. 4; moreover, these values are very close to those obtained by enforcing φ′ = 0 [Fig. 9(a)]. Finally, we assume s ≫ 1, so s₀ = s; this is not necessary, but it makes the expressions simpler. We rescale the order parameters with

mₛ = (γc/2) m′ₛ,  r = (sΓ⁴/2) m′ₛ r′,  α = (γ⁴c²/(2Γ⁴)) α′.  (D12)

We also define

y ≡ √( sc²/(1 + α′r′) ),  (D13)

so Y± ≈ y/√2. The mean-field equations then become

m′ₛ − Qm′ₛ (γ²/Γ²) = erf( y/√2 ),  (D14)

r′ = ( [m′ₛ − Qm′ₛ(1 + sκ²)]² + (m′ₛ)² sκ⁴ [m′ₛ − Qm′ₛ]² ) / ( [m′ₛ − Qm′ₛ]² [m′ₛ − Qm′ₛ(1 + sκ²)]² ),  (D16)

together with an expression for Qm′ₛ. We now substitute the expressions for m′ₛ and Qm′ₛ into Eq. (D16) to obtain a single equation, Eq. (D17), for α′. At the critical value of s above which Eq. (D17) cannot be satisfied by any y, its derivative with respect to y must be 0. In other words, we expect the critical s_c to be a saddle-node bifurcation point.
3. Critical load relations for a ≫ γ²

To derive formulas for s_c, we need to make further assumptions about a. First, we consider the case where a is not too small. In Fig. 9(b), we plot the right-hand side (RHS) of Eq. (D17), along with its first two terms and its third term separately. The first two terms generally capture the behavior of the RHS. The third term contributes a pole, whose location approximately sets the position of the local maximum of the RHS where its derivative equals 0. Thus, we use the first two terms to satisfy Eq. (D17) and the denominator of the third term to satisfy its derivative. We can manipulate these equations to obtain Eq. (24) if we convert α′ back to α with Eq. (D12).
4. Critical load formula for a ≪ γ²

Next we consider a → 0. In this case, the pole location y → 0 in Eq. (D17), which does not correspond to a retrieval solution m′ₛ ≈ 1 according to Eq. (D14). Thus the pole location cannot be used to satisfy the derivative of Eq. (D17). To proceed, we instead set a = 0 in Eq. (D17), obtaining Eq. (D19), and then directly calculate its derivative with respect to y. Along with the original Eq. (D19), this gives

erf( y/√2 ) − √(2/π) y e^(−y²/2) + √(2/π) y e^(−y²/2) [ erf( y/√2 ) − √(2/π) y e^(−y²/2) ]³ (s_cc² + 1) √(2/π) y e^(−y²/2) erf( y/√2 ) + (1/(s_cc²)) √(2/π) y e^(−y²/2) (y² − 1) erf( y/√2 ) + √(2/π) y e^(−y²/2).  (D19)-(D20)

To find a formula for s_c, we boldly expand these equations in leading powers of y while preserving extra powers of c⁴. By solving Eq. (D20) numerically, we see that y ∼ 1, so this simplification is not strictly valid [Fig. 9(b)]; nevertheless, our ultimately derived formula matches reasonably well with the numerical results [Fig. 4(e)]. The equations become

α′ ≈ 2s_cc²y⁴ ( 3s_cc² − (3 + s_cc²)y² ) / ( 3π [ s_cc²y⁴ + 9c²(1 + s_cc²)² ] ),
α′c² ≈ s_cc²(3 + s_cc²)y⁶ / ( 27π(1 + s_cc²)² ).  (D21)

Equating these two expressions for α′, we get

s_cc²(3 + s_cc²)y⁶ + 27(1 + s_cc²)²(3 + s_cc²)c²y² = 54s_cc²(1 + s_cc²)²c².
We can solve this equation for y using the cubic formula to obtain

y² = ( √(A³ + B²) + B )^(1/3) − ( √(A³ + B²) − B )^(1/3),  (D22)

where

A = 9(1 + s_cc²)²c² / (s_cc²),  B = 27(1 + s_cc²)²c² / (3 + s_cc²).  (D23)

Substituting this expression into Eq. (D21), we find an equation for α′ in terms of s_c:

α′ = (s_cc²/(πB)) (2B − 3A) [ ( √(A³ + B²) + B )^(1/3) − ( √(A³ + B²) − B )^(1/3) ].  (D24)
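The closed form for y² is easy to sanity-check numerically; a sketch, under our reading that (D22) is the Cardano root of the cubic t³ + 3At − 2B = 0 in t = y², with illustrative values of s_c and c:

import numpy as np

sc, c = 50.0, 0.1
A = 9 * (1 + sc * c**2)**2 * c**2 / (sc * c**2)          # Eq. (D23)
B = 27 * (1 + sc * c**2)**2 * c**2 / (3 + sc * c**2)     # Eq. (D23)
y2 = (np.cbrt(np.sqrt(A**3 + B**2) + B)
      - np.cbrt(np.sqrt(A**3 + B**2) - B))               # Eq. (D22)
roots = np.roots([1.0, 0.0, 3 * A, -2 * B])              # t^3 + 3*A*t - 2*B = 0
print(y2, roots[np.isreal(roots)].real)                  # the real root matches y2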
Finally, we can solve for s_c as a series in α′. We keep only the leading term in α′ and the leading term in c to obtain Eq. (25), once we convert α′ back to α with Eq. (D12).

3. Dense concept ζ^1: If we call m₁ the overlap with the target dense concept and mₛ ≡ m₁ν the overlap with all of its dense examples ν, Eq. (11) becomes
Figure 1. Retrieval properties for sparse examples, dense examples, and dense concepts. (a), (b) Retrieval regimes (shaded regions) obtained by numerically solving the mean-field equations. Their boundaries correspond to capacities s_c. Sparse patterns are recovered at high threshold and dense patterns at low threshold. (a) More examples can be retrieved sparsely than they can be densely. (b) For small enough concept loads α and intermediate example loads s, both sparse examples and the dense concepts can be retrieved. (c) Network overlap with target pattern at capacity [Eq. (19)]. Sparse patterns have sparsity a = 0.01 and the dense storage strength is γ = 0.1.

Figure 2. (a) Capacity s_c for sparse examples. Connected points indicate numerical analysis of Eq. (13). (b) Collapse of s_c curves under rescaled variables. Gray line indicates theoretical formula Eq. (20). (c) s_c is maximized at intermediate values of sparsity a. Dense patterns have correlation c = 0.1. The dense storage strength is γ = 0.1.

At large α, this critical number of examples per concept s_c is inversely proportional to the number of concepts per neuron α, indicating that the total number of examples stored per neuron αs_c saturates at a constant value.

Figure 3. (a) Capacity s_c for dense examples. Connected points indicate numerical analysis of Eq. (14). (b) Collapse of s_c curves under rescaled variables. Gray lines indicate theoretical formula Eq. (23). The dense storage strength is γ = 0.1.
Per Eq. (4), m̄₀ = 0 for sparse examples, m̄₀ = c² for dense examples, and m̄₀ = c for dense concepts.

Figure 4. (a) Capacity, or critical example load, s_c for dense concepts. Connected points indicate numerical analysis of Eq. (16). (b)-(d) Collapse of s_c curves under rescaled variables. Gray lines indicate theoretical formula Eq. (24). (e) For the sparsest patterns, s_c curves exhibit better collapse under differently rescaled variables. Gray line indicates theoretical formula Eq. (25), which better matches the numerical results. It exhibits weak dependence on dense correlation c, and we only show its behavior for c = 0.02. The dense storage strength is γ = 0.1.

We use s = 20 examples per concept during retrieval of sparse examples and dense concepts, or s = 3 during retrieval of dense examples. Sparse patterns have sparsity a = 0.01 and dense patterns have correlation parameter c = 0.4.
Figure 5. Capacities s_c for (a) sparse examples, (b) dense examples, and (c) dense concepts obtained by numerical calculations (lines) and simulations (points). Lines indicate analysis of the mean-field equations Eqs. (13), (14), and (16). Points indicate means over 8 replicate simulated networks, and vertical bars indicate standard deviations, which are often obscured by the points. In each replicate network, 20 cues are tested with simulations lasting 10 update cycles. The dense storage strength is γ = 0.1.

corruption [Fig. 6(d)]. These observations encompass autoassociation as well as heteroassociation. Thus, the dual encoding of memories with not only allows for retrieval of both examples and concepts, as noted in

Figure 6. Auto- and heteroassociation among sparse and dense patterns demonstrated by network simulations. (a) Baseline to which various conditions are compared. (b) The number of concepts is increased from p = 5 to 30. (c) The rescaled temperature is increased from T = 1/50 to 1/5. (d) The fraction of the cue pattern flipped is increased from 0.01 to 0.2. (e) The dense pattern storage strength is decreased from γ = 0.1 to 0.055. For dense example and concept targets, we use rescaled threshold θ = 0. For sparse example targets, we use θ = 0.6. Overlaps m̄ reported are averages over 8 replicate networks, with 1 corresponding to perfect retrieval and 0 corresponding to random activity. In each, 20 cues are tested with simulations lasting 20 update cycles.

Decreasing the storage strength of dense patterns γ should bias the network towards retrieving sparse patterns. Indeed, doing so improves retrieval of sparse examples from dense concepts [Fig. 6(e)]. Moreover, the network is still capable of the reverse process, albeit with some decrease in performance.
Figure 7. The dense pattern storage strength γ controls the ability to retrieve sparse examples from dense concepts by changing their relative energies. (a) Hamiltonian energies for rescaled threshold θ = 0.6 and network size N = 10 000 [Eq. (29)]. For c = 0.1, we store s = 80 patterns per concept, and for c = 0.4, s = 20. Inset shows sparse example energy in detail. (b) Critical dense strength γ_c below which sparse examples can be retrieved by dense concept cues. Theoretical predictions are the locations of energy crossovers in (a). (c) Phase diagram for auto- and heteroassociation among sparse examples and dense concepts in simulated networks. Dense patterns have correlation parameter c = 0.4, and the temperature is T = 0. Blue and red shaded regions exhibit unidirectional heteroassociation, and the doubly shaded region exhibits bidirectional heteroassociation. Autoassociation occurs below the purple dashed line and above the orange dashed line; for clarity, these regions are not shaded. We use p = 5 concepts, and sparse patterns have sparsity a = 0.01. Simulations are performed without cue noise. For dense concept targets, we use θ = 0, and for sparse example targets, we use θ = 0.6. Points indicate means over 8 replicate networks, and vertical bars indicate standard deviations, which are often obscured by the points. In each replicate network, 20 cues are tested with simulations lasting 20 update cycles.

Figure 7(a) shows Eq. (29) calculated in the retrieval regime for sparse examples with θ = 0.6. The Hamiltonian for dense concepts decreases with γ and eventually crosses the value for sparse examples, which remains relatively constant. To connect these results with heteroassociative performance, first consider c = 0.4, which is the correlation value used in the simulations in Fig. 6. Recall that baseline networks experience difficulty in retrieving sparse examples from dense concepts [Fig. 6(a)]. These networks have γ = 0.1, for which dense concepts exhibit lower energy than sparse examples do [Fig. 7(a)], even with the high threshold θ = 0.6 intended to retrieve the latter. The increase in energy required to proceed from cue to target may explain the failure to perform this heteroassociation. It can be performed for γ = 0.055 [Fig. 6(e)], and here, the energy of dense concepts at high threshold increases above that of sparse examples [Fig. 7(a)].

[Fig. 7(c)]. At intermediate values of γ and s, there is a regime for successful bidirectional heteroassociation between sparse examples and dense concepts. At lower values of either γ or s, only unidirectional heteroassociation from dense concept cues to sparse example targets is possible, and at higher values, only the reverse unidirectional heteroassociation is possible. For comparison, autoassociation capacities for sparse examples and dense concepts are also shown. The phase boundary for retrieving

[Fig. 8]. For low sparsity a, there is a range of intermediate γ and s in which both sparse examples and dense concepts are stable [Fig. 8(b)]. Figure 8(c)-(h) illustrates that our theoretical capacity formulas are still valid as functions over γ.

Figure 8. Capacities s_c as a function of dense pattern storage strength γ. (a), (b) Retrieval regimes (shaded regions) for sparse examples, dense examples, and dense concepts obtained by numerically solving the mean-field equations. Their boundaries correspond to capacities s_c. Dense patterns have correlation parameter c = 0.1. Capacities s_c for (c) sparse examples, (d) dense examples, and (e) dense concepts. Collapse of s_c curves for (f) sparse examples, (g) dense examples, and (h) dense concepts under rescaled variables. Gray lines indicate theoretical formulas Eqs. (20), (23), and (24), respectively. The concept load is α = 0.001 concepts per neuron.
ACKNOWLEDGMENTS

LK is supported by JSPS KAKENHI for Early-Career Scientists (22K15209) and has been supported by the Miller Institute for Basic Research in Science and a Burroughs Wellcome Fund Collaborative Research Travel Grant. TT is supported by Brain/MINDS from AMED (JP19dm0207001) and JSPS KAKENHI (JP18H05432).
Figure 9. (a) Critical example load s_c for dense concepts obtained through numerical analysis of Eq. (16). We either set φ′ = 0 (dark, thin lines) or maximize over φ′ (light, thick lines). (b) Right-hand side of Eq. (D17) and its terms plotted separately. (c) y as a function of α′ for sparsity a = 0 obtained by numerically solving Eq. (D20).
[Figure 6 panel data: for each condition (a) baseline, (b) higher p, (c) higher T, (d) higher cue noise, (e) lower γ, a 3×3 grid of overlaps m̄ by cue type versus target type (sparse example, dense example, dense concept).]
B. McNaughton and R. Morris, Hippocampal synaptic enhancement and information storage within a distributed memory system, Trends Neurosci. 10, 408 (1987). doi:10.1016/0166-2236(87)90011-7
R. C. O'Reilly and J. W. Rudy, Conjunctive representations in learning and memory: Principles of cortical and hippocampal function, Psychol. Rev. 108, 311 (2001). doi:10.1037/0033-295x.108.2.311
E. T. Rolls and R. P. Kesner, A computational theory of hippocampal function, and empirical tests of the theory, Prog. Neurobiol. 79, 1 (2006). doi:10.1016/j.pneurobio.2006.04.005
J. J. Hopfield and D. W. Tank, "Neural" computation of decisions in optimization problems, Biol. Cybern. 52, 141 (1985). doi:10.1007/bf00339943
A. Barra, A. Bernacchia, E. Santucci, and P. Contucci, On the equivalence of Hopfield networks and Boltzmann Machines, Neural Networks 34, 1 (2012). doi:10.1016/j.neunet.2012.06.003
D. Marr, Simple memory: a theory for archicortex, Philos. Trans. R. Soc. B 262, 23 (1971). doi:10.1098/rstb.1971.0078
M. V. Tsodyks and M. V. Feigel'man, The enhanced storage capacity in neural networks with low activity level, Europhys. Lett. 6, 101 (1988). doi:10.1209/0295-5075/6/2/002
P. Kanerva, Sparse Distributed Memory (MIT Press, Cambridge, Massachusetts, 1988).
J.-P. Nadal and G. Toulouse, Information storage in sparsely coded memory nets, Netw. Comput. Neural Syst. 1, 61 (1990). doi:10.1088/0954-898x_1_1_005
E. T. Rolls and A. Treves, The relative advantages of sparse versus distributed encoding for associative neuronal networks in the brain, Netw. Comput. Neural Syst. 1, 407 (1990). doi:10.1088/0954-898x_1_4_002
A. Treves and E. T. Rolls, What determines the capacity of autoassociative memories in the brain?, Netw. Comput. Neural Syst. 2, 371 (1991). doi:10.1088/0954-898x_2_4_004
G. Palm, Neural associative memories and sparse coding, Neural Networks 37, 165 (2013). doi:10.1016/j.neunet.2012.08.013
J. F. Fontanari, Generalization in a Hopfield network, J. Phys. 51, 2421 (1990). doi:10.1051/jphys:0199000510210242100
D. A. Stariolo and F. A. Tamarit, Generalization in an analog neural network, Phys. Rev. A 46, 5249 (1992). doi:10.1103/physreva.46.5249
D. R. C. Dominguez, Information capacity of a hierarchical neural network, Phys. Rev. E 58, 4811 (1998). doi:10.1103/physreve.58.4811
L. Kang and T. Toyoizumi, Distinguishing examples while building concepts in hippocampal and artificial networks, bioRxiv 2023.02.21.529365 (2023). doi:10.1101/2023.02.21.529365
D. Amaral and L. Pierre, Hippocampal neuroanatomy, in The Hippocampus Book, edited by P. Andersen, R. Morris, D. Amaral, T. Bliss, and J. O'Keefe (Oxford University Press, 2006) pp. 37-114. doi:10.1093/acprof:oso/9780195100273.003.0003
J. J. Hopfield, Neural networks and physical systems with emergent collective computational abilities, Proc. Natl. Acad. Sci. U.S.A. 79, 2554 (1982).
D. J. Amit, H. Gutfreund, and H. Sompolinsky, Spin-glass models of neural networks, Phys. Rev. A 32, 1007 (1985). doi:10.1103/physreva.32.1007
J. Hertz, A. Krogh, and R. Palmer, Introduction To The Theory Of Neural Computation, Santa Fe Institute Studies in the Sciences of Complexity: Lecture Notes No. 1 (CRC Press, Boca Raton, 2018).
G. Weisbuch and F. Fogelman-Soulie, Scaling laws for the attractors of Hopfield networks, Journal de Physique Lettres 46, 623 (1985). doi:10.1051/jphyslet:019850046014062300
M. Mézard and M. A. Virasoro, The microstructure of ultrametricity, J. Phys. 46, 1293 (1985). doi:10.1051/jphys:019850046080129300
V. S. Dotsenko, 'Ordered' spin glass: a hierarchical memory machine, J. Phys. C: Solid State Phys. 18, L1017 (1985). doi:10.1088/0022-3719/18/31/008
C. Cortes, A. Krogh, and J. A. Hertz, Hierarchical associative networks, J. Phys. A: Math. Gen. 20, 4449 (1987). doi:10.1088/0305-4470/20/13/044
M. A. Virasoro, The effect of synapses destruction on categorization by neural networks, EPL 7, 293 (1988). doi:10.1209/0295-5075/7/4/002
H. Gutfreund, Neural networks with hierarchically correlated patterns, Phys. Rev. A 37, 570 (1988). doi:10.1103/physreva.37.570
A. Krogh and J. A. Hertz, Mean-field analysis of hierarchical associative networks with 'magnetisation', J. Phys. A: Math. Gen. 21, 2211 (1988). doi:10.1088/0305-4470/21/9/033
R. C. O'Reilly and J. L. McClelland, Hippocampal conjunctive encoding, storage, and recall: Avoiding a tradeoff, Hippocampus 4, 661 (1994). doi:10.1002/hipo.450040605
| [] |
[
"Dynamics of SGD with Stochastic Polyak Stepsizes: Truly Adaptive Variants and Convergence to Exact Solution",
"Dynamics of SGD with Stochastic Polyak Stepsizes: Truly Adaptive Variants and Convergence to Exact Solution"
] | [
"Antonio Orvieto \nDepartment of Computer Science\nETH Zürich\nUniversité de Montréal\nJohns Hopkins University\n\n",
"Simon Lacoste-Julien \nDepartment of Computer Science\nETH Zürich\nUniversité de Montréal\nJohns Hopkins University\n\n",
"Mila Diro \nDepartment of Computer Science\nETH Zürich\nUniversité de Montréal\nJohns Hopkins University\n\n",
"Nicolas Loizou \nDepartment of Computer Science\nETH Zürich\nUniversité de Montréal\nJohns Hopkins University\n\n",
"Minds \nDepartment of Computer Science\nETH Zürich\nUniversité de Montréal\nJohns Hopkins University\n\n"
] | [
"Department of Computer Science\nETH Zürich\nUniversité de Montréal\nJohns Hopkins University\n",
"Department of Computer Science\nETH Zürich\nUniversité de Montréal\nJohns Hopkins University\n",
"Department of Computer Science\nETH Zürich\nUniversité de Montréal\nJohns Hopkins University\n",
"Department of Computer Science\nETH Zürich\nUniversité de Montréal\nJohns Hopkins University\n",
"Department of Computer Science\nETH Zürich\nUniversité de Montréal\nJohns Hopkins University\n"
] | [] | Recently Loizou et al.[22], proposed and analyzed stochastic gradient descent (SGD) with stochastic Polyak stepsize (SPS). The proposed SPS comes with strong convergence guarantees and competitive performance; however, it has two main drawbacks when it is used in non-over-parameterized regimes: (i) It requires a priori knowledge of the optimal mini-batch losses, which are not available when the interpolation condition is not satisfied (e.g., regularized objectives), and (ii) it guarantees convergence only to a neighborhood of the solution. In this work, we study the dynamics and the convergence properties of SGD equipped with new variants of the stochastic Polyak stepsize and provide solutions to both drawbacks of the original SPS. We first show that a simple modification of the original SPS that uses lower bounds instead of the optimal function values can directly solve issue (i). On the other hand, solving issue (ii) turns out to be more challenging and leads us to valuable insights into the method's behavior. We show that if interpolation is not satisfied, the correlation between SPS and stochastic gradients introduces a bias, which effectively distorts the expectation of the gradient signal near minimizers, leading to non-convergence -even if the stepsize is scaled down during training. To fix this issue, we propose DecSPS, a novel modification of SPS, which guarantees convergence to the exact minimizer -without a priori knowledge of the problem parameters. For strongly-convex optimization problems, DecSPS is the first stochastic adaptive optimization method that converges to the exact solution without restrictive assumptions like bounded iterates/gradients. | null | [
"https://export.arxiv.org/pdf/2205.04583v3.pdf"
] | 248,665,756 | 2205.04583 | 6f9d72dd797a67d0a477502a509ab29d215761ae |
Dynamics of SGD with Stochastic Polyak Stepsizes: Truly Adaptive Variants and Convergence to Exact Solution
Antonio Orvieto
Department of Computer Science
ETH Zürich
Université de Montréal
Johns Hopkins University
Simon Lacoste-Julien
Department of Computer Science
ETH Zürich
Université de Montréal
Johns Hopkins University
Mila Diro
Department of Computer Science
ETH Zürich
Université de Montréal
Johns Hopkins University
Nicolas Loizou
Department of Computer Science
ETH Zürich
Université de Montréal
Johns Hopkins University
Minds
Department of Computer Science
ETH Zürich
Université de Montréal
Johns Hopkins University
Dynamics of SGD with Stochastic Polyak Stepsizes: Truly Adaptive Variants and Convergence to Exact Solution
Recently Loizou et al.[22], proposed and analyzed stochastic gradient descent (SGD) with stochastic Polyak stepsize (SPS). The proposed SPS comes with strong convergence guarantees and competitive performance; however, it has two main drawbacks when it is used in non-over-parameterized regimes: (i) It requires a priori knowledge of the optimal mini-batch losses, which are not available when the interpolation condition is not satisfied (e.g., regularized objectives), and (ii) it guarantees convergence only to a neighborhood of the solution. In this work, we study the dynamics and the convergence properties of SGD equipped with new variants of the stochastic Polyak stepsize and provide solutions to both drawbacks of the original SPS. We first show that a simple modification of the original SPS that uses lower bounds instead of the optimal function values can directly solve issue (i). On the other hand, solving issue (ii) turns out to be more challenging and leads us to valuable insights into the method's behavior. We show that if interpolation is not satisfied, the correlation between SPS and stochastic gradients introduces a bias, which effectively distorts the expectation of the gradient signal near minimizers, leading to non-convergence -even if the stepsize is scaled down during training. To fix this issue, we propose DecSPS, a novel modification of SPS, which guarantees convergence to the exact minimizer -without a priori knowledge of the problem parameters. For strongly-convex optimization problems, DecSPS is the first stochastic adaptive optimization method that converges to the exact solution without restrictive assumptions like bounded iterates/gradients.
1 Introduction
We consider the stochastic optimization problem:
min_{x∈R^d} f(x) = (1/n) ∑_{i=1}^n f_i(x),  (1)

where each f_i is convex and lower bounded. We denote by X* the non-empty set of optimal points x* of equation (1). We set f* := min_{x∈R^d} f(x), and f*_i := inf_{x∈R^d} f_i(x).
In this setting, the algorithm of choice is often Stochastic Gradient Descent (SGD), i.e. x^{k+1} = x^k − γ_k ∇f_{S_k}(x^k), where γ_k > 0 is the stepsize at iteration k, S_k ⊆ [n] is a random subset of datapoints (mini-batch) with cardinality B sampled independently at each iteration k, and ∇f_{S_k}(x) := (1/B) ∑_{i∈S_k} ∇f_i(x) is the mini-batch gradient. A careful choice of γ_k is crucial for most applications [4,14]. The simplest option is to pick γ_k to be constant over training, with its value inversely proportional to the Lipschitz constant of the gradient. While this choice yields fast convergence to a neighborhood of a minimizer, two main problems arise: (a) the optimal γ depends on (often unknown) problem parameters, hence often requires heavy tuning; and (b) it cannot be guaranteed that X* is reached in the limit [13,16,15]. A simple fix for the last problem is to allow polynomially decreasing stepsizes (second option) [23]: this choice for γ_k often leads to convergence to X*, but hurts the overall algorithm speed. The third option, which became very popular with the rise of deep learning, is to implement an adaptive stepsize. These methods do not commit to a fixed schedule, but instead use the optimization statistics (e.g. gradient history, cost history) to tune the value of γ_k at each iteration. These stepsizes are known to work very well in deep learning [35], and include Adam [19], Adagrad [11], and RMSprop [29].
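For concreteness, a minimal sketch of SGD with the first two stepsize policies on a toy least-squares instance of problem (1); the objective and all constants below are our own illustrative choices:

import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 10
A, b = rng.normal(size=(n, d)), rng.normal(size=n)

def grad_i(x, i):
    # gradient of f_i(x) = 0.5 * (a_i^T x - b_i)^2
    return (A[i] @ x - b[i]) * A[i]

def sgd(stepsize, T=5000):
    x = np.zeros(d)
    for k in range(T):
        i = rng.integers(n)              # batch size B = 1
        x = x - stepsize(k) * grad_i(x, i)
    return x

x_const = sgd(lambda k: 1e-2)                   # constant stepsize
x_decr = sgd(lambda k: 1e-1 / np.sqrt(k + 1))   # polynomially decreasing stepsize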
Ideally, a theoretically grounded adaptive method should yield fast convergence to X* without knowledge of problem-dependent parameters, such as the gradient Lipschitz constant or the strong convexity constant. As a result, an ideal adaptive method should require very little tuning by the user, while matching the performance of a fine-tuned γ_k. However, while in practice this is the case for common adaptive methods such as Adam and AdaGrad, the associated convergence rates often rely on strong assumptions, e.g. that the iterates live on a bounded domain, or that gradients are uniformly bounded in norm [11,32,31]. While the above assumptions are valid in the constrained setting, they are problematic for problems defined on the whole R^d.

A promising new direction in the adaptive stepsizes literature is based on the idea of Polyak stepsizes, introduced by [25] in the context of deterministic convex optimization. Recently [22] successfully adapted Polyak stepsizes to the stochastic setting, and provided convergence rates matching fine-tuned SGD, while the algorithm does not require knowledge of unknown quantities such as the gradient Lipschitz constant. The results especially shine in the overparameterized strongly convex setting, where linear convergence to x* is shown. This result is especially important since, under the same assumption, no such rate exists for AdaGrad (see e.g. [31] for the latest results) or other adaptive stepsizes. Moreover, the method was shown to work surprisingly well on deep learning problems, without requiring heavy tuning [22].
Even if the stochastic Polyak stepsize (SPS) [22] comes with strong convergence guarantees, it has two main drawbacks when it is used in non-over-parameterized regimes: (i) It requires a priori knowledge of the optimal mini-batch losses, which are not often available for big batch sizes or regularized objectives (see discussion in §1.1) and (ii) it guarantees convergence only to a neighborhood of the solution. In this work, we study the dynamics and the convergence properties of SGD equipped with new variants of SPS for solving general convex optimization problems. Our new proposed variants provide solutions to both drawbacks of the original SPS.
1.1 Background and Technical Preliminaries
The stepsize proposed by [22] is
γ_k = min{ (f_{S_k}(x^k) − f*_{S_k}) / (c ‖∇f_{S_k}(x^k)‖²), γ_b },  (SPS_max)

where γ_b, c > 0 are problem-independent constants, f_{S_k} := (1/|S_k|) ∑_{i∈S_k} f_i, and f*_{S_k} := inf_{x∈R^d} f_{S_k}(x).
Dependency on f * Sk . Crucially the algorithm requires knowledge of f * S k for every realization of the mini-batch S k . In the non-regularized overparametrized setting (e.g. neural networks), f S k is often zero for every subset S [34]. However, this is not the only setting where f * S is computable: e.g., in the regularized logistic loss with batch size 1, it is possible to recover a cheap closed form expression for each f * i [22]. Unfortunately, if the batch-size is bigger than 1 or the loss becomes more demanding (e.g. cross-entropy), then no such closed-form computation is possible.
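As a minimal sketch of this stepsize in code (the function name and the optional lower bound ell_star, used in Section 3 in place of the exact f*_S, are our own choices):

import numpy as np

def sps_max_step(fS_val, gS, f_star_S=0.0, c=1.0, gamma_b=2.0):
    """SPS_max: min{(f_S(x) - f*_S) / (c * ||grad f_S(x)||^2), gamma_b}.

    fS_val is the mini-batch loss f_S(x^k), gS the mini-batch gradient, and
    f_star_S is f*_S (or a lower bound ell*_S on it, as in Section 3).
    """
    g2 = float(np.dot(gS, gS))
    if g2 == 0.0:
        return None  # degenerate batch: resample instead of stepping (Remark 1)
    return min((fS_val - f_star_S) / (c * g2), gamma_b)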
Rates and comparison with AdaGrad. In the convex overparametrized setting (more precisely, under the interpolation condition, i.e. ∃ x* ∈ X*: inf_{x∈R^d} f_S(x) = f_S(x*) for all S, see also §2), SPS_max enjoys a convergence speed of O(1/k), without requiring knowledge of the gradient Lipschitz constant or other problem parameters. Recently, [31] showed that the same rate can be achieved for AdaGrad in the same setting. However, there is an important difference: the rate of [31] is technically O(dD²/k), where d is the problem dimension and D² is a global bound on the squared distance to the minimizer, which is assumed to be finite. Not only does SPS_max not have this dimension dependency, which dates back to crucial arguments in the AdaGrad literature [11,21], but it also does not require bounded iterates. While this assumption is satisfied in the constrained setting, it has no reason to hold in the unconstrained scenario. Unfortunately, this is a common problem of all AdaGrad variants: with the exception of [33] (which works in a slightly different scenario), no rate can be provided in the stochastic setting without the bounded iterates/gradients assumption [12], even after assuming strong convexity. However, in the non-interpolated setting, AdaGrad enjoys a convergence guarantee of O(1/√k) (with the bounded iterates assumption). A similar rate does not yet exist for SPS, and our work aims at filling this gap.
1.2 Main Contributions
As we already mentioned, in the non-interpolated setting SPS_max has the following issues:

Issue (1): For B > 1 (mini-batch setting), SPS_max requires exact knowledge of f*_S. This is not practical.

Issue (2): SPS_max guarantees convergence only to a neighborhood of the solution. It is not clear how to modify it to yield convergence to the exact minimizer.
Having the above two issues in mind, the main contributions of our work (see also Table 1 for a summary of the main complexity results obtained in this paper) are summarized as follows:
• In §3, we provide a direct solution for Issue (1). We explain how only a lower bound on f*_S (trivial if all f_i s are non-negative) is required for convergence to a neighborhood of the solution. While this neighborhood is bigger than the one for SPS_max, our modified version provides a practical baseline for the solution to the second issue.
• We explain why Issue (2) is highly non-trivial and requires an in-depth study of the bias induced by the interaction between gradients and Polyak stepsizes. Namely, we show that simply multiplying the stepsize of SPS_max by 1/√k, which would work for vanilla SGD [23], yields a bias in the solution found by SPS (§4), regardless of the estimation of f*_S. • In §5, we provide a solution to the problem (Issue (2)) by introducing additional structure, as well as the fix to Issue (1), into the stepsize. We call the new algorithm Decreasing SPS (DecSPS), and provide a convergence guarantee under the bounded domain assumption, matching the standard AdaGrad results.
• In §5.2 we go one step further and show that, if strong convexity is assumed, the iterates are bounded with probability 1 and hence we can remove the bounded iterates assumption. To the best of our knowledge, DecSPS is the first stochastic adaptive optimization method that converges to the exact solution without restrictive assumptions like bounded iterates/gradients.
• In §5.3 we provide extensions of our approach to the non-smooth setting.
• In §6, we corroborate our theoretical results with experimental testing.
2 Background on Stochastic Polyak Stepsize
In this section, we provide a concise overview of the results in [22], and highlight the main assumptions and open questions.
To start, we remind the reader that problem (1) is said to be interpolated if there exists a problem solution x* ∈ X* such that inf_{x∈R^d} f_i(x) = f_i(x*) for all i ∈ [n]. The degree of interpolation at batch size B can be quantified by the following quantity, introduced by [22] and studied also in [31,9]: fix a batch size B, and let S ⊆ [n] with |S| = B.

Table 1 (caption): Convergence is measured by E[f(x̄_K) − f(x*)], where x̄_K = (1/K) ∑_{k=0}^{K−1} x^k. In addition, for all converging methods, we consider the stepsize scaling factor c_k = O(√k), formally defined in the corresponding sections. For the methods without exact convergence, we show in §4 that any different scaling factor cannot make the algorithm convergent.
σ²_B := E_S[ f_S(x*) − f*_S ] = f(x*) − E_S[ f*_S ].  (2)
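As an aside, σ²_B can be estimated by Monte-Carlo sampling of mini-batches whenever f*_S is computable. A sketch on a least-squares toy problem (our own construction), where each batch optimum is available via a least-squares solve:

import numpy as np

rng = np.random.default_rng(0)
n, d, Bsz = 100, 10, 20
A, b = rng.normal(size=(n, d)), rng.normal(size=n)
x_star = np.linalg.lstsq(A, b, rcond=None)[0]   # minimizer of f

def f_S(x, S):
    r = A[S] @ x - b[S]
    return 0.5 * np.mean(r**2)

est = []
for _ in range(2000):
    S = rng.choice(n, size=Bsz, replace=False)
    xS = np.linalg.lstsq(A[S], b[S], rcond=None)[0]  # batch minimizer, so f*_S = f_S(xS, S)
    est.append(f_S(x_star, S) - f_S(xS, S))
print(np.mean(est))  # Monte-Carlo estimate of sigma^2_B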
It is easy to realize that as soon as problem (1) is interpolated, then σ 2 B = 0 for each B ≤ n. In addition, note that σ 2 B is non-increasing as a function of B. We now comment on the main result from [22].
Theorem 1 (Main result of [22]). Let each f_i be an L_i-smooth convex function. Then SGD with SPS_max, mini-batch size B, and c = 1, converges as:

E[f(x̄_K) − f(x*)] ≤ ‖x⁰ − x*‖²/(αK) + 2γ_bσ²_B/α,

where α = min{ 1/(2cL_max), γ_b } and x̄_K = (1/K) ∑_{k=0}^{K−1} x^k. If in addition f is μ-strongly convex, then, for any c ≥ 1/2, SGD with SPS_max converges as:

E‖x^k − x*‖² ≤ (1 − μα)^k ‖x⁰ − x*‖² + 2γ_bσ²_B/(μα),

where again α = min{ 1/(2cL_max), γ_b } and L_max = max_i{L_i} is the maximum smoothness constant.
In the overparametrized setting, the result guarantees convergence to the exact minimizer, without knowledge of the gradient Lipschitz constant (as vanilla SGD would instead require) and without assuming bounded iterates (in contrast to [31]).
As soon as (1) a regularizer is applied to the loss (e.g. L2 penalty), or (2) the number of datapoints gets comparable to the dimension, then the problem is not interpolated and SPS_max only converges to a neighborhood; moreover, it gets impractical to compute f*_S. This is the setting we study in this paper.

Remark 1 (What if ∇f_{S_k} = 0?). In the rare case that ‖∇f_{S_k}(x^k)‖² = 0, there is no need to evaluate the stepsize. In this scenario, the update direction ∇f_{S_k}(x^k) = 0 and thus the iterate is not updated irrespective of the choice of stepsize. If this happens, the user should simply sample a different mini-batch. We note that in our experiments (see §6), such an event never occurred.

Related work on Polyak stepsize: The classical Polyak stepsize [25] has been successfully used in the analysis of deterministic subgradient methods in different settings [5,7,18]. First attempts at providing an efficient variant of the stepsize that works well in the stochastic setting were made in [3,24]. However, as explained in [22], none of these approaches provide a natural stochastic extension with strong theoretical convergence guarantees, and thus Loizou et al. [22] proposed the stochastic Polyak stepsize SPS_max as a better alternative. Despite its recent appearance, SPS_max has already been used and analyzed as a stepsize for SGD for solving structured non-convex problems [15], in combination with other adaptive methods [31], with a moving target [17], and in the update rule of stochastic mirror descent [9]. These extensions are orthogonal to our approach, and we speculate that our proposed variants can also be used in the above settings. We leave such extensions for future work.

Figure 1 (caption): f_i(x) = (1/2)(x − x*_i)ᵀH_i(x − x*_i) + f*_i, with f*_i = 1 for all i ∈ [n], and H_i a random SPD matrix generated using the standard Gaussian matrix A_i ∈ R^{d×3d} as H_i = A_iA_iᵀ/(3d). If x*_i ≠ x*_j for i ≠ j, then the problem does not satisfy interpolation (left plot). Instead, if all the x*_i s are equal, then the problem is interpolated (central plot). The plots show the behaviour of SPS̃_max (γ_b = 2) for the choices ℓ*_i = f*_i, ℓ*_i = 0.9 f*_i, and ℓ*_i = 0 of the approximated suboptimality. We plot (mean and std deviation over 10 runs) the function suboptimality f(x^k) − f(x*) for different values of ℓ*_i.
3 Removing f*_S from SPS

As motivated in the last sections, computing f*_S in the non-interpolated setting is not practical. In this section, we explore the effect of using a lower bound ℓ*_S ≤ f*_S instead in the SPS_max definition:

γ_k = min{ (f_{S_k}(x^k) − ℓ*_{S_k}) / (c ‖∇f_{S_k}(x^k)‖²), γ_b }.  (SPS̃_max)

Such a lower bound is easy to get for many problems of interest: indeed, for standard regularized regression and classification tasks, the loss is non-negative, hence one can pick ℓ*_S = 0 for any S ⊆ [n].

The obvious question is: what is the effect of estimating ℓ*_S on the convergence rates in Thm. 1? We found that the proof of [22] is easy to adapt to this case, by using the following fundamental bound (see also Lemma 3):

1/(2cL_{S_k}) ≤ (f_{S_k}(x^k) − f*_{S_k}) / (c ‖∇f_{S_k}(x^k)‖²) ≤ (f_{S_k}(x^k) − ℓ*_{S_k}) / (c ‖∇f_{S_k}(x^k)‖²).
The following results can be seen as an easy extension of the main result of [22], under a newly defined suboptimality measure:
σ̃²_B := E_{S_k}[ f_{S_k}(x*) − ℓ*_{S_k} ] = f(x*) − E_{S_k}[ ℓ*_{S_k} ].  (3)

Theorem 2. Under SPS̃_max, the same exact rates as in Thm. 1 hold (under the corresponding assumptions), after replacing σ²_B with σ̃²_B.

And we also have an easy practical corollary. A numerical illustration of this result can be found in Fig. 1. In essence, both theory and experiments confirm that, if interpolation is not satisfied, then we have a linear rate until a convergence ball, whose size is smallest under exact knowledge of f*_S. Instead, under interpolation, if all the f_i s are non-negative and f* = 0, then SPS̃_max = SPS_max. Finally, in the less common case in practice where f* > 0 but we still have interpolation, SPS_max converges to the exact solution while SPS̃_max does not. To conclude, SPS̃_max does not (of course) work better than SPS_max, but it is a practical variant which we can use as a baseline in §5 for an adaptive stochastic Polyak stepsize with convergence to the true x* in the non-interpolated setting.
4 Bias in the SPS dynamics

In this section, we study the convergence of the standard SPS_max in the non-interpolated regime, under an additional (decreasing) multiplicative factor, in the most ideal setting: batch size 1, where we have knowledge of each f*_i. That is, we consider

γ_k = min{ (f_{i_k}(x^k) − f*_{i_k}) / (c_k ‖∇f_{i_k}(x^k)‖²), γ_b }

with c_k → ∞, e.g. c_k = O(√k) or c_k = O(k). We note that, in the SGD case, simply picking e.g. γ_k = γ₀/√(k+1) would guarantee convergence of f(x^k) to f(x*), in expectation and with high probability [20,23]. Therefore, it is natural to expect a similar behavior for SPS, if 1/c_k satisfies the usual Robbins-Monro conditions [27]:

∑_{k=0}^∞ 1/c_k = ∞,  ∑_{k=0}^∞ 1/c²_k < ∞.
We show that this is not the case: quite interestingly, f(x^k) converges to a biased solution due to the correlation between ∇f_{i_k} and γ_k. We show this formally in the case of non-interpolation (otherwise both SGD and SPS do not require a decreasing learning rate).
Counterexample. Consider the following finite-sum setting: f(x) = (1/2) f₁(x) + (1/2) f₂(x) with f₁(x) = (a₁/2)(x − 1)² and f₂(x) = (a₂/2)(x + 1)².

To make the problem interesting, we choose a₁ = 2 and a₂ = 1: this introduces asymmetry in the average landscape with respect to the origin. During optimization, we sample f₁ and f₂ independently and seek convergence to the unique minimizer x* = (a₁ − a₂)/(a₁ + a₂) = 1/3. The first thing we notice is that x* is not a stationary point for the dynamics under SPS. Indeed, since f*_i = 0 for i = 1, 2, we have (assuming γ_b large enough): γ_k∇f_{i_k}(x) = (x − 1)/(2c_k) if i_k = 1, and γ_k∇f_{i_k}(x) = (x + 1)/(2c_k) if i_k = 2. Crucially, note that this update is curvature-independent. The expected update is E_{i_k}[γ_k∇f_{i_k}(x)] = (x − 1)/(4c_k) + (x + 1)/(4c_k) = x/(2c_k). Hence, the iterates can only converge to x = 0, because this is the only fixed point for the update rule. The proof naturally extends to the multidimensional setting; an illustration can be found in Fig. 2.
Hence, the iterates can only converge to x = 0 -because this is the only fixed point for the update rule. The proof naturally extends to the multidimensional setting, an illustration can be found in Fig. 2. Figure 2: Dynamics of SPSmax with decreasing multiplicative constant (SGD style) compared with DecSPS. We compared both in the interpolated setting (right) and in the non-interpolated setting (left). In the non-interpolated setting, a simple multiplicative factor introduces a bias in the final solution, as discussed in this section. We consider two di-
No interpolation Interpolation
mensional fi = 1 2 (x − x * i ) Hi(x − x * i )
, for i = 1, 2 and plot the contour lines of the corresponding landscapes, as well as the average landscape (f1 + f2)/2 we seek to minimize. Solution is denoted with a gold star.
In the same picture, we show how our modified variant of the vanilla stepsize -we call this new algorithm DecSPS, see §5 -instead converges to the correct solution. Remark 2. SGD with (non-adaptive) stepsize γ k instead keeps the curvature, and therefore is able to correctly estimate the average
E i k [γ k ∇f i k (x)] = γ k 2 (a 1 + a 2 ) x − a1−a2
a1+a2 -precisely because γ k is independent from ∇f i k . From this we can see that SGD can only converge to the correct stationary point x * = a1−a2 a1+a2 -because again this is the only fixed point for the update rule. In the appendix, we go one step further and provide an analysis of the bias of SPS in the onedimensional quadratic case (Prop. 4). Yet, we expect the precise characterization of the bias phenomenon in the non-quadratic setting to be particularly challenging. We provide additional insights in §D.2. Instead, in the next section, we show how to effectively modify γ k to yield convergence to x * without further assumptions.
DecSPS: Convergence to the exact solution
We propose the following modification of the vanilla SPS proposed in [22], designed to yield convergence to the exact minimizer while keeping the main adaptiveness properties. We call it Decreasing SPS (DecSPS), since it combines a steady stepsize decrease with the adaptiveness of SPS.
$$\gamma_k := \frac{1}{c_k}\min\left\{\frac{f_{S_k}(x_k) - \ell^*_{S_k}}{\|\nabla f_{S_k}(x_k)\|^2},\; c_{k-1}\gamma_{k-1}\right\}, \tag{DecSPS}$$
for $k \in \mathbb{N}$, where $c_k \neq 0$ for every $k \in \mathbb{N}$. We set $c_{-1} = c_0$ and $\gamma_{-1} = \gamma_b > 0$ (stepsize bound, similar to [22]), to get
$$\gamma_0 := \frac{1}{c_0}\cdot\min\left\{\frac{f_{S_0}(x_0) - \ell^*_{S_0}}{\|\nabla f_{S_0}(x_0)\|^2},\; c_0\gamma_b\right\}.$$
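For concreteness, a minimal sketch of one DecSPS iteration in Python (our own illustration; the names `f_batch`, `grad_batch` and the lower bound `ell_star` are assumptions standing in for a concrete problem):

```python
import numpy as np

def decsps_step(x, f_batch, grad_batch, ell_star, k, state, gamma_b=10.0, c0=1.0):
    """One DecSPS update with the schedule c_k = c0 * sqrt(k+1).

    `state` carries c_{k-1} * gamma_{k-1}; it is initialized to c0 * gamma_b,
    matching c_{-1} = c_0 and gamma_{-1} = gamma_b above."""
    c_k = c0 * np.sqrt(k + 1)
    g = grad_batch(x)
    polyak = (f_batch(x) - ell_star) / (np.dot(g, g) + 1e-12)  # Polyak-type ratio
    prev = state.get("c_gamma", c0 * gamma_b)
    gamma = min(polyak, prev) / c_k      # gamma_k = min{ratio, c_{k-1} gamma_{k-1}} / c_k
    state["c_gamma"] = c_k * gamma       # store c_k * gamma_k for the next iteration
    return x - gamma * g
```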
Lemma 1. Let each $f_i$ be $L_i$-smooth and let $(c_k)_{k=0}^{\infty}$ be any non-decreasing positive sequence of real numbers. Under DecSPS, we have
$$\min\left\{\frac{1}{2c_kL_{\max}},\; \frac{c_0\gamma_b}{c_k}\right\} \le \gamma_k \le \frac{c_0\gamma_b}{c_k}, \qquad \text{and} \qquad \gamma_k \le \gamma_{k-1}.$$

Remark 3. As stated in the last lemma, under the assumption of $c_k$ non-decreasing, $\gamma_k$ is trivially non-increasing, since $\gamma_k \le c_{k-1}\gamma_{k-1}/c_k \le \gamma_{k-1}$.

The proof can be found in the appendix, and is based on a simple induction argument.
Convergence under bounded iterates
The following result provides a proof of convergence of SGD for the γ k sequence defined above.
Theorem 3. Consider SGD with DecSPS and let $(c_k)_{k=0}^{\infty}$ be any non-decreasing sequence such that $c_k \ge 1$, $\forall k \in \mathbb{N}$. Assume that each $f_i$ is convex and $L_i$-smooth. We have:
$$\mathbb{E}[f(\bar{x}_K) - f(x^*)] \le \frac{2c_{K-1}\hat{L}D^2}{K} + \frac{1}{K}\sum_{k=0}^{K-1}\frac{\hat{\sigma}_B^2}{c_k}, \tag{4}$$
where $D^2 := \max_{k\in[K-1]}\|x_k - x^*\|^2$, $\hat{L} := \max\{\max_i\{L_i\},\, \frac{1}{2c_0\gamma_b}\}$ and $\bar{x}_K = \frac{1}{K}\sum_{k=0}^{K-1}x_k$.

If $\hat{\sigma}_B^2 = 0$, then $c_k = 1$ for all $k \in \mathbb{N}$ leads to a rate $O(\frac{1}{K})$
, well known from [22]. If $\hat{\sigma}_B^2 > 0$, as for the standard SGD analysis under decreasing stepsizes, the choice $c_k = O(\sqrt{k})$ leads to an optimal asymptotic trade-off between the deterministic and the stochastic terms, hence to the asymptotic rate $O(1/\sqrt{K})$, since $\sum_{k=0}^{K-1}\frac{1}{\sqrt{k+1}} \le 2\sqrt{K}$. Moreover, picking $c_0 = 1$ minimizes the deterministic factor. Under the assumption that $\hat{\sigma}_B^2 \ll \hat{L}D^2$ (e.g. reasonable distance between initialization and solution, and $L_{\max} > 1/\gamma_b$), this factor is dominant compared to the factor involving $\hat{\sigma}_B^2$. For this setting, the rate simplifies as follows.

Corollary 2. Under the setting of Thm. 3, for $c_k = \sqrt{k+1}$ (and $c_{-1} = c_0$) we have
$$\mathbb{E}[f(\bar{x}_K) - f(x^*)] \le \frac{2\hat{L}D^2 + 2\hat{\sigma}_B^2}{\sqrt{K}}. \tag{5}$$
Remark 4 (Beyond bounded iterates). The result above crucially relies on the bounded-iterates assumption: $D^2 < \infty$. To the best of our knowledge, if no further regularity is assumed, modern convergence results for adaptive methods (e.g. variants of AdaGrad) in convex stochastic programming require this assumption, or else require gradients to be globally bounded. To mention a few: [11, 26, 32, 8, 31]. A simple algorithmic fix to this problem is adding a cheap projection step onto a large bounded domain [21]. We can of course include this projection step in DecSPS, and the theorem above will hold with no further modification. Yet we found this to be unnecessary: the strong guarantees of SPS in the strongly convex setting [22] let us go one step further. In §5.2 we show that, if each $f_i$ is strongly convex (e.g. a regularizer is added), then one can bound the iterates globally with probability one, without knowledge of the gradient Lipschitz constant. To the best of our knowledge, no such result exists for AdaGrad, except [30] for the deterministic case.

Remark 5 (Dependency on the problem dimension). In standard results for AdaGrad, a dependency on the problem dimension often appears (e.g. Thm. 1 in [31]). This dependency follows from a bound on the AdaGrad preconditioner that can be found e.g. in Thm. 4 in [21]. In the SPS case no such dependency appears, specifically because the stepsize is lower bounded by $1/(2c_kL_{\max})$.
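For completeness, a minimal sketch of the cheap projection step mentioned above (our own illustration): a Euclidean projection onto a large ball, which can be composed with any of the updates in this paper to enforce bounded iterates.

```python
import numpy as np

def project_ball(x, center, radius):
    """Euclidean projection onto the ball {z : ||z - center|| <= radius}."""
    d = x - center
    nrm = np.linalg.norm(d)
    return x if nrm <= radius else center + (radius / nrm) * d
```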
Removing the bounded iterates assumption
We prove that under DecSPS the iterates live in a set of diameter D max almost surely. This can be done by assuming strong convexity of each f i .
The result uses the following alternative definition of the neighborhood:
$$\hat{\sigma}^2_{B,\max} := \max_{S\subseteq[n],\,|S|=B}\,[f_S(x^*) - \ell^*_S].$$
Note that trivially $\hat{\sigma}^2_{B,\max} < \infty$ under the assumption that all $f_i$ are lower bounded and $n < \infty$.
Proposition 1. Let each $f_i$ be $\mu_i$-strongly convex and $L_i$-smooth. The iterates of SGD with DecSPS with $c_k = \sqrt{k+1}$ (and $c_{-1} = c_0$) are such that $\|x_k - x^*\|^2 \le D^2_{\max}$ almost surely for all $k \in \mathbb{N}$, where
$$D^2_{\max} := \max\left\{\|x_0 - x^*\|^2,\; \frac{2c_0\gamma_b\hat{\sigma}^2_{B,\max}}{\min\{\frac{\mu_{\min}}{2L_{\max}},\, \mu_{\min}\gamma_b\}}\right\},$$
with $\mu_{\min} = \min_{i\in[n]}\mu_i$ and $L_{\max} = \max_{i\in[n]}L_i$.
The proof relies on the variation-of-constants formula and an induction argument; it is provided in the appendix. We are now ready to state the main theorem for the unconstrained setting, which follows from Prop. 1 and Thm. 3.
Theorem 4. Consider SGD with the DecSPS stepsize
$$\gamma_k := \frac{1}{\sqrt{k+1}}\cdot\min\left\{\frac{f_{S_k}(x_k) - \ell^*_{S_k}}{\|\nabla f_{S_k}(x_k)\|^2},\; \sqrt{k}\,\gamma_{k-1}\right\}$$
for $k \ge 1$, and $\gamma_0$ defined as at the beginning of this section. Let each $f_i$ be $\mu_i$-strongly convex and $L_i$-smooth:
$$\mathbb{E}[f(\bar{x}_K) - f(x^*)] \le \frac{2\hat{L}D^2_{\max} + 2\hat{\sigma}_B^2}{\sqrt{K}}. \tag{6}$$
Remark 6 (Strong Convexity). The careful reader might notice that, while we assumed strong convexity, our rate is slower than the optimal $O(1/K)$. This is due to the adaptive nature of DecSPS: it is indeed notoriously hard to achieve a convergence rate of $O(1/K)$ for adaptive methods in the strongly convex regime. While further investigations will shed light on this interesting problem, we note that the result we provide is somewhat unique in the literature: we are not aware of any adaptive method that enjoys a similar convergence rate without either (a) assuming bounded iterates/gradients or (b) assuming knowledge of the gradient Lipschitz constant or the strong convexity constant.

Remark 7 (Comparison with Vanilla SGD). On a convex problem, the non-asymptotic performance of SGD with a decreasing stepsize $\gamma_k = \eta/\sqrt{k}$ strongly depends on the choice of $\eta$. The optimizer might diverge if $\eta$ is too big for the problem at hand. Indeed, most bounds for SGD, under no access to the gradient Lipschitz constant, display a dependency on the size of the domain and rely on projections after each step. If one applies the method in the unconstrained setting, such convergence rates technically do not hold, and tuning is sometimes necessary to retrieve stability and good performance. Instead, for DecSPS, simply by adding a small regularizer, the method is guaranteed to converge at the non-asymptotic rate we derived, even in the unconstrained setting.
Extension to the non-smooth setting
For any S ⊆ [n], we denote in this section by g S (x) the subgradient of f S evaluated at x. We discuss the extension of DecSPS to the non-smooth setting.
A straightforward application of DecSPS leads to a stepsize $\gamma_k$ which is no longer lower bounded (see Lemma 1) by the positive quantity $\min\{\frac{1}{2c_kL_{\max}},\, \frac{c_0\gamma_b}{c_k}\}$. Indeed, the gradient Lipschitz constant in the non-smooth case is formally $L_{\max} = \infty$. Hence, the $\gamma_k$ prescribed by DecSPS can get arbitrarily small for finite $k$. One easy solution to the problem is to enforce a lower bound, and adopt a new proof technique. Specifically, we propose the following:
$$\gamma_k := \frac{1}{c_k}\cdot\min\left\{\max\left\{c_0\gamma_\ell,\; \frac{f_{S_k}(x_k) - \ell^*_{S_k}}{\|g_{S_k}(x_k)\|^2}\right\},\; c_{k-1}\gamma_{k-1}\right\}, \tag{DecSPS-NS}$$
where $c_k \neq 0$ for every $k \ge 0$, $\gamma_\ell \le \gamma_b$ is a small positive number, and all the other quantities are defined as in DecSPS. In particular, as for DecSPS, we set $c_{-1} = c_0$ and $\gamma_{-1} = \gamma_b$. Intuitively, $\gamma_k$ is selected to live in the interval $[c_0\gamma_\ell/c_k,\, c_0\gamma_b/c_k]$ (see proof in §F, appendix), but has a subgradient-dependent adaptive value. In addition, this stepsize is enforced to be monotonically decreasing.
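A minimal sketch of the corresponding stepsize computation (our own illustration; `gamma_lo` plays the role of $\gamma_\ell$ and `subgrad` is any subgradient of the batch loss):

```python
import numpy as np

def decsps_ns_gamma(f_val, ell_star, subgrad, k, state, gamma_lo=1e-3, gamma_b=10.0, c0=1.0):
    """DecSPS-NS stepsize: like DecSPS, but the Polyak-type ratio is clipped
    from below by c0 * gamma_lo, so gamma_k >= c0 * gamma_lo / c_k even if the
    subgradient norm does not vanish near the solution (non-smooth case)."""
    c_k = c0 * np.sqrt(k + 1)
    ratio = (f_val - ell_star) / (np.dot(subgrad, subgrad) + 1e-12)
    prev = state.get("c_gamma", c0 * gamma_b)      # c_{-1} gamma_{-1} = c0 gamma_b
    gamma = min(max(c0 * gamma_lo, ratio), prev) / c_k
    state["c_gamma"] = c_k * gamma
    return gamma
```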
Theorem 5. For any non-decreasing positive sequence $(c_k)_{k=0}^{\infty}$, consider SGD with DecSPS-NS. Assume that each $f_i$ is convex and lower bounded. We have
$$\mathbb{E}[f(\bar{x}_K) - f(x^*)] \le \frac{c_{K-1}D^2}{\gamma_\ell c_0K} + \frac{1}{K}\sum_{k=0}^{K-1}\frac{c_0\gamma_bG^2}{c_k}, \tag{7}$$
where $D^2 := \max_{k\in[K-1]}\|x_k - x^*\|^2$ and $G^2 := \max_{k\in[K-1]}\|g_{S_k}(x_k)\|^2$.
One can then easily derive an $O(1/\sqrt{K})$ convergence rate. This is presented in §F (appendix).
Numerical Evaluation
We evaluate the performance of DecSPS with $c_k = c_0\sqrt{k+1}$ on binary classification tasks, with regularized logistic loss
$$f(x) = \frac{1}{n}\sum_{i=1}^n\log\left(1 + \exp(-y_i\cdot a_i^\top x)\right) + \frac{\lambda}{2}\|x\|^2,$$
where a i ∈ R d is the feature vector for the i-th datapoint and y i ∈ {−1, 1} is the corresponding binary target.
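A minimal sketch of this objective and its gradient (our own illustration; we assume the standard sign convention $\log(1 + \exp(-y_i a_i^\top x))$ with labels in $\{-1, +1\}$):

```python
import numpy as np

def logistic_loss_and_grad(x, A, y, lam):
    """Regularized logistic loss over a batch: rows of A are the features a_i,
    y has entries in {-1, +1}. Returns (loss, gradient)."""
    margins = -y * (A @ x)
    loss = np.mean(np.logaddexp(0.0, margins)) + 0.5 * lam * (x @ x)
    sig = 0.5 * (1.0 + np.tanh(margins / 2.0))   # numerically stable sigmoid(-y_i a_i^T x)
    grad = A.T @ (-y * sig) / len(y) + lam * x
    return loss, grad
```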
We study performance on three datasets: (1) a Synthetic Dataset, (2) the A1A dataset [6] and (3) the Breast Cancer dataset [10]. We choose different regularization levels and batch sizes bigger than 1. Details are reported in §G, and the code is available at https://github.com/aorvieto/DecSPS. At the batch sizes and regularizer levels we choose, the problems do not satisfy interpolation. Indeed, running full-batch gradient descent yields $f^* > 0$. While running SPSmax on these problems (1) does not guarantee convergence to $f^*$ and (2) requires full knowledge of the set of optimal function values $\{f^*_S\}_{|S|=B}$, in DecSPS we can simply pick the lower bound $0 = \ell^*_S \le f^*_S$ for every $S$. Supported by Theorems 3 & 4 & 5, we expect SGD with DecSPS to converge to the minimum $f^*$.

Stability of DecSPS. DecSPS has two hyperparameters: the upper bound $\gamma_b$ on the first stepsize and the scaling constant $c_0$. While Thm. 5 guarantees convergence for any positive value of these hyperparameters, the result of Thm. 3 suggests that using $c_0 = 1$ yields the best performance under the assumption that $\hat{\sigma}_B^2 \ll \hat{L}D^2$ (e.g. reasonable distance of initialization from the solution, and $L_{\max} > 1/\gamma_b$). In Fig. 3, we show on the synthetic dataset that (1) $c_0 = 1$ is indeed the best choice in this setting and (2) the performance of SGD with DecSPS is almost independent of $\gamma_b$. Similar findings are reported and commented in Figure 7 (Appendix) for the other datasets. Hence, for all further experiments, we choose the hyperparameters $\gamma_b = 10$, $c_0 = 1$.

Comparison with vanilla SGD with decreasing stepsize. We compare the performance of DecSPS against the classical decreasing SGD stepsize $\eta/\sqrt{k+1}$, which guarantees convergence to the exact solution at the same asymptotic rate as DecSPS. We show that, while the asymptotics are the same, DecSPS with hyperparameters $c_0 = 1$, $\gamma_b = 10$ performs competitively with a fine-tuned $\eta$, where crucially the optimal value of $\eta$ depends on the problem. This behavior is shown on all the considered datasets, and is reported in Figure 4 (Breast and Synthetic are reported in the appendix for space constraints). If lower regularization ($1e{-}4$, $1e{-}6$) is considered, then DecSPS can still match the performance of tuned SGD, but further tuning is needed (see Figure 14). Specifically, since the non-regularized problems do not have strong curvature, we found that DecSPS works best with a much higher $\gamma_b$ parameter and $c_0 = 0.05$.
DecSPS yields a truly adaptive stepsize. We inspect the value of $\gamma_k$ returned by DecSPS, shown in Figures 4 & 8 (in the appendix). Compared to the vanilla SGD stepsize $\eta/\sqrt{k+1}$, a crucial difference appears: $\gamma_k$ decreases faster than $O(1/\sqrt{k})$. This showcases that, while the factor $\sqrt{k+1}$ can be found in the formula of DecSPS (we pick $c_k = c_0\sqrt{k+1}$, as suggested by Cor. 2 & 3), the algorithm structure provides additional adaptation to curvature. Indeed, in (regularized) logistic regression, the local gradient Lipschitz constant increases as we approach the solution. Since the optimal stepsize for steadily-decreasing SGD is $1/(L\sqrt{k+1})$, where $L$ is the global Lipschitz constant [13], it is pretty clear that $\eta$ should be decreased over training for optimal convergence (as $L$ effectively increases). This is precisely what DecSPS is doing.

Comparison with AdaGrad stepsizes. Last, we compare DecSPS with another adaptive coordinate-independent stepsize with strong theoretical guarantees: the norm version of AdaGrad (a.k.a. AdaGrad-Norm, AdaNorm), which guarantees convergence to the exact solution at the same asymptotic rate as DecSPS [32]. AdaGrad-Norm at each iteration updates the scalar $b^2_{k+1} = b^2_k + \|\nabla f_{S_k}(x_k)\|^2$ and then selects the next step as
$$x_{k+1} = x_k - \frac{\eta}{b_{k+1}}\nabla f_{S_k}(x_k).$$
Hence, it has tuning parameters $b_0$ and $\eta$. In Fig. 4 we show that, on the Breast Cancer dataset, after fixing $b_0 = 0.1$ as recommended in [32] (see their Figure 3), tuning $\eta$ cannot quite match the performance of DecSPS. This behavior is also observed on the other two datasets we consider (see Fig. 9 in the Appendix). Last, in Figs. 10 & 11 in the Appendix, we show that further tuning of $b_0$ likely does not yield a substantial improvement.
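For reference, a minimal sketch of one AdaGrad-Norm step as described above (our own illustration):

```python
import numpy as np

def adagrad_norm_step(x, grad, state, b0=0.1, eta=1.0):
    """One AdaGrad-Norm step: accumulate squared gradient norms into a single
    scalar b_{k+1}^2 = b_k^2 + ||grad||^2 and step with eta / b_{k+1}."""
    b2 = state.get("b2", b0 ** 2) + np.dot(grad, grad)
    state["b2"] = b2
    return x - (eta / np.sqrt(b2)) * grad
```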
Conclusions and Future Work
We provided a practical variant of SPS [22], which converges to the true problem solution without the interpolation assumption in convex stochastic problems, matching the rate of AdaGrad. If in addition strong convexity is assumed, then we show how, in contrast to current results for AdaGrad, the bounded-iterates assumption can be dropped. The main open direction is a proof of a faster rate $O(1/K)$ under strong convexity. Other possible extensions of our work include using the proposed new variants of SPS with accelerated methods, studying further the effect of mini-batching and non-uniform sampling in DecSPS, and extensions to the distributed and decentralized settings.
Supplementary Material
Dynamics of SGD with Stochastic Polyak Stepsizes: Truly Adaptive Variants and Convergence to Exact Solution
The appendix is organized as follows:
1. In §A we provide a more detailed comparison with closely related works.
2. In §B we present some technical preliminaries.
3. In §C, we present convergence guarantees for SPSmax after replacing $f^*_S$ with $\ell^*_S$ (lower bound).
4. In §D, we discuss the lack of convergence of SPSmax in the non-interpolated setting.
5. In §E, we discuss convergence of DecSPS, our convergent variant of SPS.
6. In §F, we discuss convergence of DecSPS-NS, our convergent variant of SPS in the non-smooth setting.
7. In §G we provide some additional experimental results and describe the datasets in detail.
A Comparison with closely related work
In this section we present a more detailed comparison to closely related works on stochastic variants of the Polyak stepsize. We start with the work of Asi and Duchi [2], and then continue with a brief presentation of other papers (already presented in Loizou et al. [22]).
A.1 Comparison with Asi and Duchi [2]
Asi and Duchi [2] proposed the following adaptive method for solving Problem (1) under the interpolation assumption $\inf_{x\in\mathbb{R}^d}f_S(x) = f_S(x^*) = 0$ for all subsets $S$ of $[n]$ with $|S| = B$:
$$x_{k+1} = x_k - \min\left\{\alpha_k,\; \frac{f_{S_k}(x_k)}{\|\nabla f_{S_k}(x_k)\|^2}\right\}\nabla f_{S_k}(x_k), \tag{8}$$
where $\alpha_k = \alpha_0k^{-\beta}$, for some $\beta \in \mathbb{R}$, is a polynomially decreasing/increasing sequence. We provide here a full comparison of this stepsize with SPSmax and DecSPS.
Comparison with the adaptive stepsizes in Loizou et al. [22] and our DecSPS. In Loizou et al. [22], the proposed SPSmax stepsize is
$$\gamma_k = \min\left\{\frac{f_{S_k}(x_k) - f^*_{S_k}}{c\,\|\nabla f_{S_k}(x_k)\|^2},\; \gamma_b\right\}. \tag{9}$$
This stepsize is similar to the one in Asi and Duchi [2]: in both, a Polyak-like stochastic stepsize is bounded from above in order to guarantee convergence. However there are crucial differences.
• SPS max [22] can be applied to non-interpolated problems and leads to fast convergence to a ball around the solution in the non-interpolated setting (see Theorem 1). Instead, Asi and Duchi [1] only formulated and studied Eq. (8) in the interpolated setting.
• As we will see in the next paragraph, one can formulate a few conditions under which it is possible to derive linear convergence rates for Eq. (8) in the interpolated setting. As can be easily seen from Theorem 1, SPSmax has similar convergence guarantees but works under a more standard/restrictive set of assumptions. In particular, in the interpolated setting, while Asi and Duchi [2] require some specific assumptions on the noise statistics (see next paragraph), the rates in Loizou et al. [22] can be applied without the need for, e.g., a probabilistic bound on the gradient magnitude.
In this paper, starting from the SPSmax algorithm we propose the following stepsize for convergence to the exact solution in the non-interpolated setting:
$$\gamma_k := \frac{1}{c_k}\min\left\{\frac{f_{S_k}(x_k) - \ell^*_{S_k}}{\|\nabla f_{S_k}(x_k)\|^2},\; c_{k-1}\gamma_{k-1}\right\}, \tag{DecSPS}$$
where $c_k$ is an increasing sequence (e.g. $c_k = \sqrt{k+1}$, see Theorem 4), and $\ell^*_{S_k}$ is any lower bound on $f^*_{S_k}$. At initialization, we set $c_{-1} = c_0$ and $\gamma_{-1} = \gamma_b > 0$. We now compare DecSPS with Eq. (8) and our results with the rates in [2].
• Convergence rates: The form of Eq. (8) and the convergence guarantees of [2] are restricted to the interpolated setting. Instead, in this paper we focus on the non-interpolated setting: using DecSPS we provided the first stochastic adaptive optimization method that converges in the non-interpolated setting to the exact solution without restrictive assumptions like bounded iterates/gradients.
• Inspection of the stepsize: DecSPS provides a version of SPS where $\gamma_k$ is steadily decreasing and is upper bounded by the decreasing quantity $c_0\gamma_b/c_k$, where $c_k = \sqrt{k+1}$ yields the optimal asymptotic rate (Theorem 4). Hence, DecSPS can be compared to Eq. (8) for $\alpha_k = \alpha_0/\sqrt{k+1}$. However, note that there are two fundamental differences. First, in DecSPS we have that $\gamma_k \le \gamma_{k-1}$ (see Lemma 1), a feature which Eq. (8) does not have. Secondly, compared to our DecSPS, the stepsize in Eq. (8) with $\alpha_k$ decreasing polynomially is asymptotically non-adaptive. Indeed, assuming that each $f_i$ has $L_i$-Lipschitz gradients and that each $f^*_S$ is non-negative, we have (see [22]) that
$$\frac{f_{S_k}(x_k)}{\|\nabla f_{S_k}(x_k)\|^2} \ge \frac{f_{S_k}(x_k) - f^*_{S_k}}{\|\nabla f_{S_k}(x_k)\|^2} \ge \frac{1}{2L_{\max}}; \tag{10}$$
therefore, after $(2L_{\max}\alpha_0)^{1/\beta}$ iterations (plug in $\alpha_k = \alpha_0k^{-\beta}$ and solve for $k$) the algorithm coincides with SGD with stepsize $\alpha_k$.
For completeness, we provide in the next paragraph an overview of the results in Asi and Duchi [2].
Precise theoretical guarantees in Asi and Duchi [2]. The stepsize in Equation (8) yields linear convergence guarantees under a specific set of assumptions. We summarize the two main results of Asi and Duchi [2] below, in the case of differentiable losses (their results also work in the subdifferentiable setting):
Proposition 2 (Proposition 2 in Asi and Duchi [2]). Let each $f_i$ be a convex and differentiable function which satisfies a specific set of technical assumptions (see conditions C.i and C.iii in [2]). For a fixed batch size $B$, assume $\inf_{x\in\mathbb{R}^d}f_S(x) = f_S(x^*) = 0$ for all subsets $S$ of $[n]$ with $|S| = B$ (i.e. interpolation). Assume in addition that there exist constants $\lambda_0, \lambda_1 > 0$ such that for all $\alpha > 0$, $x \in \mathbb{R}^d$ and $x^* \in X^*$ (set of solutions) we have (sharp growth with shared minimizers assumption)
$$\mathbb{E}_S\left[\min\left\{\alpha[f_S(x) - f^*_S],\; \frac{(f_S(x) - f^*_S)^2}{\|\nabla f_S(x)\|^2}\right\}\right] \ge \min\{\lambda_0\alpha,\; \lambda_1\|x - x^*\|\}\cdot\|x - x^*\|. \tag{11}$$
Then, for $\alpha_k = \alpha_0k^{-\beta}$ with $\beta \in (-\infty, 1)$, the stepsize of Equation (8) yields a linear convergence rate dependent on $\lambda_1$ and the choice of $\beta$.
Sufficient conditions for Equation (11) to hold are that there exist $\lambda, p > 0$ such that, for all $x \in \mathbb{R}^d$,
$$\mathbb{P}_S\left[f_S(x) - f_S(x^*) \ge \lambda\|x - x^*\|^2\right] \ge p \quad \text{and} \quad \mathbb{E}_S\left[\|\nabla f_S(x)\|^2\right] \le M^2.$$
Proposition 3 (Proposition 3 in Asi and Duchi [2]). Let each $f_i$ be a convex and differentiable function which satisfies a specific set of technical assumptions (see conditions C.i and C.iii in [2]). Under the same interpolation assumptions as Proposition 2, assume that there exist constants $\lambda_0, \lambda_1 > 0$ such that for all $\alpha > 0$, $x \in \mathbb{R}^d$ and $x^* \in X^*$ we have (quadratic growth with shared minimizers assumption)
$$\mathbb{E}_S\left[(f_S(x) - f^*_S)\cdot\min\left\{\alpha,\; \frac{f_S(x) - f^*_S}{\|\nabla f_S(x)\|^2}\right\}\right] \ge \min\{\alpha\lambda_0,\; \lambda_1\}\cdot\|x - x^*\|^2. \tag{12}$$
Then, for α k = α 0 k −β with β ∈ (−∞, ∞) the stepsize of Equation (8) yields a linear convergence rate dependent on λ 0 , λ 1 and the choice of β.
The authors show that Equation (12) holds under the assumption that the averaged loss $f$ has quadratic growth and Lipschitz continuous gradients, if in addition there exist constants $0 < c, C < \infty$ and $p > 0$ such that
$$\mathbb{P}_S\left[\|\nabla f_S(x)\|^2 \le C\|\nabla f(x)\|^2,\;\; f_S(x) - f_S(x^*) > c\,(f(x) - f^*)\right] \ge p.$$
A.2 Comparisons with other versions of the Polyak stepsize for stochastic problems
To the best of our knowledge, no prior work has provided a computationally feasible modification of the Polyak stepsize for convergence to the exact solution in stochastic non-interpolated problems.
In the next lines, we outline the details for a few related works on Polyak stepsize for stochastic problems.
• SPSmax: As discussed in the main paper, our starting point is the SPSmax algorithm in [22], which provides linear (for strongly convex) or sublinear (for convex) convergence to a ball around the minimizer, with size dependent on the problem's degree of interpolation. Instead, in this work, we provide convergence guarantees to the exact solution in the non-interpolated setting for a modified version of this algorithm. In addition, when compared to SPSmax, our method does not require knowledge of the single $f^*_i$'s, but just of lower bounds on these quantities (see §3).
• L4: A stepsize very similar to SPSmax (the L4 algorithm) was proposed back in 2018 by [28]. While this stepsize results in promising performance in deep learning, (1) it has no theoretical convergence guarantees, and (2) each update requires an online estimation of the $f^*_i$, which in turn requires tuning up to three hyperparameters.
• SPLR: Oberman and Prazeres [24] instead study convergence of SGD with the stepsize
$$\gamma_k = \frac{2[f(x_k) - f^*]}{\mathbb{E}_{i_k}\|\nabla f_{i_k}(x_k)\|^2},$$
which requires knowledge of $\mathbb{E}_{i_k}\|\nabla f_{i_k}(x_k)\|^2$ for all iterates $x_k$ and the evaluation of $f(x_k)$, the full-batch loss, at each step. This makes the concrete application of SPLR problematic for sizeable stochastic problems.
• ALI-G: Last, the ALI-G stepsize proposed by Berrada et al. [3] is $\gamma_k = \min\left\{\frac{f_i(x_k)}{\|\nabla f_i(x_k)\|^2 + \delta},\; \eta\right\}$, where $\delta > 0$ is a tuning parameter. Unlike the SPSmax setting, their theoretical analysis relies on an $\epsilon$-interpolation condition. Moreover, the values of the parameters $\delta$ and $\eta$ that guarantee convergence heavily depend on the smoothness parameter of the objective $f$, limiting the method's practical applicability. In addition, in the interpolated setting, while ALI-G converges to a neighborhood of the solution, the SPSmax method [22] is able to provide linear convergence to the solution.
B Technical Preliminaries
Let us present some basic definitions used throughout the paper.
Definition 1 (Strong Convexity / Convexity). The function $f : \mathbb{R}^n \to \mathbb{R}$ is $\mu$-strongly convex if there exists a constant $\mu > 0$ such that $\forall x, y \in \mathbb{R}^n$:
$$f(x) \ge f(y) + \langle\nabla f(y), x - y\rangle + \frac{\mu}{2}\|x - y\|^2 \tag{13}$$
for all $x \in \mathbb{R}^d$. If inequality (13) holds with $\mu = 0$, the function $f$ is convex.
Definition 2 (L-smooth). The function $f : \mathbb{R}^n \to \mathbb{R}$ is $L$-smooth if there exists a constant $L > 0$ such that $\forall x, y \in \mathbb{R}^n$:
$$\|\nabla f(x) - \nabla f(y)\| \le L\|x - y\|, \tag{14}$$
or equivalently:
$$f(x) \le f(y) + \langle\nabla f(y), x - y\rangle + \frac{L}{2}\|x - y\|^2. \tag{15}$$
Lemma 2. If a function $g$ is $\mu$-strongly convex and $L$-smooth, the following bounds hold:
$$\frac{1}{2L}\|\nabla g(x)\|^2 \le g(x) - \inf_xg(x) \le \frac{1}{2\mu}\|\nabla g(x)\|^2. \tag{16}$$
The following lemma is the fundamental starting point in [22].
Lemma 3. Let $f(x) = \frac{1}{n}\sum_{i=1}^nf_i(x)$, where the functions $f_i$ are $\mu_i$-strongly convex and $L_i$-smooth. Then
$$\frac{1}{2L_{\max}} \le \frac{1}{2L_i} \le \frac{f_i(x_k) - f^*_i}{\|\nabla f_i(x_k)\|^2} \le \frac{1}{2\mu_i} \le \frac{1}{2\mu_{\min}}, \tag{17}$$
where $f^*_i := \inf_xf_i(x)$, $L_{\max} = \max\{L_i\}_{i=1}^n$ and $\mu_{\min} = \min\{\mu_i\}_{i=1}^n$.
Proof. Directly using Lemma 2.
C Convergence guarantees after replacing $f^*_S$ in SPSmax with $\ell^*_S$

The proofs in this subsection are an easy adaptation of the proofs appearing in [22]. To avoid redundancy in the literature, we provide a sketch of the proofs showing the fundamental differences, and invite the interested reader to read the details in [22]. Proof. We highlight in blue text the differences between this proof and the one in [22].
Recall the stepsize definition
$$\gamma_k = \min\left\{\frac{f_{S_k}(x_k) - \ell^*_{S_k}}{c\,\|\nabla f_{S_k}(x_k)\|^2},\; \gamma_b\right\}, \tag{SPSmax}$$
where $\ell^*_{S_k}$ is any lower bound on $f^*_{S_k}$. We will also make use of the bound
$$\frac{1}{2cL_S} \le \frac{f_{S_k}(x_k) - f^*_{S_k}}{c\,\|\nabla f_{S_k}(x_k)\|^2} \le \frac{f_{S_k}(x_k) - \ell^*_{S_k}}{c\,\|\nabla f_{S_k}(x_k)\|^2} \quad \text{and} \quad \gamma_k \le \gamma_b. \tag{18}$$
Convex setting. As in [22] we use a standard expansion as well as the stepsize definition.
$$\begin{aligned}
\|x_{k+1} - x^*\|^2 &= \|x_k - x^*\|^2 - 2\gamma_k\langle x_k - x^*, \nabla f_{S_k}(x_k)\rangle + \gamma_k^2\|\nabla f_{S_k}(x_k)\|^2 \\
&\le \|x_k - x^*\|^2 - 2\gamma_k\left[f_{S_k}(x_k) - f_{S_k}(x^*)\right] + \gamma_k^2\|\nabla f_{S_k}(x_k)\|^2 \\
&\le \|x_k - x^*\|^2 - 2\gamma_k\left[f_{S_k}(x_k) - f_{S_k}(x^*)\right] + \frac{\gamma_k}{c}\left[f_{S_k}(x_k) - \ell^*_{S_k}\right] \\
&= \|x_k - x^*\|^2 - 2\gamma_k\left[f_{S_k}(x_k) - f^*_{S_k} + f^*_{S_k} - f_{S_k}(x^*)\right] + \frac{\gamma_k}{c}\left[f_{S_k}(x_k) - \ell^*_{S_k}\right]. \tag{23}
\end{aligned}$$
Next, adding and subtracting $f^*_{S_k}$ gives
$$\begin{aligned}
\|x_{k+1} - x^*\|^2 &\le \|x_k - x^*\|^2 - 2\gamma_k\left[f_{S_k}(x_k) - f^*_{S_k} + f^*_{S_k} - f_{S_k}(x^*)\right] + \frac{\gamma_k}{c}\left[f_{S_k}(x_k) - f^*_{S_k} + f^*_{S_k} - \ell^*_{S_k}\right] \\
&= \|x_k - x^*\|^2 - \gamma_k\left(2 - \frac{1}{c}\right)\left[f_{S_k}(x_k) - f^*_{S_k}\right] + 2\gamma_k\left[f_{S_k}(x^*) - f^*_{S_k}\right] + \frac{\gamma_k}{c}\underbrace{\left[f^*_{S_k} - \ell^*_{S_k}\right]}_{\ge 0}. \tag{27}
\end{aligned}$$
Since $c > \frac{1}{2}$ it holds that $2 - \frac{1}{c} > 0$. We obtain:
$$\begin{aligned}
\|x_{k+1} - x^*\|^2 &\le \|x_k - x^*\|^2 - \gamma_k\left(2 - \tfrac{1}{c}\right)\underbrace{\left[f_{S_k}(x_k) - f^*_{S_k}\right]}_{\ge 0} + 2\gamma_k\underbrace{\left[f_{S_k}(x^*) - \ell^*_{S_k}\right]}_{\ge 0} \\
&\le \|x_k - x^*\|^2 - \alpha\left(2 - \tfrac{1}{c}\right)\left[f_{S_k}(x_k) - f^*_{S_k}\right] + 2\gamma_b\left[f_{S_k}(x^*) - \ell^*_{S_k}\right] \\
&= \|x_k - x^*\|^2 - \alpha\left(2 - \tfrac{1}{c}\right)\left[f_{S_k}(x_k) - f_{S_k}(x^*)\right] - \alpha\left(2 - \tfrac{1}{c}\right)\left[f_{S_k}(x^*) - f^*_{S_k}\right] + 2\gamma_b\left[f_{S_k}(x^*) - \ell^*_{S_k}\right] \\
&\le \|x_k - x^*\|^2 - \alpha\left(2 - \tfrac{1}{c}\right)\left[f_{S_k}(x_k) - f_{S_k}(x^*)\right] + 2\gamma_b\left[f_{S_k}(x^*) - \ell^*_{S_k}\right], \tag{35}
\end{aligned}$$
where in the last inequality we use that $\alpha\left(2 - \frac{1}{c}\right)\left[f_{S_k}(x^*) - f^*_{S_k}\right] \ge 0$.
Note that the factor $f_{S_k}(x^*) - f^*_{S_k}$ pops up in the proof, not in the stepsize! By rearranging:
$$\alpha\left(2 - \frac{1}{c}\right)\left[f_{S_k}(x_k) - f_{S_k}(x^*)\right] \le \|x_k - x^*\|^2 - \|x_{k+1} - x^*\|^2 + 2\gamma_b\left[f_{S_k}(x^*) - \ell^*_{S_k}\right]. \tag{36}$$
The rest of the proof is identical to [22] (Theorem 3.4); the only difference is that instead of $\sigma_B^2$ we have $\hat{\sigma}_B^2 := \mathbb{E}[f_{S_k}(x^*) - \ell^*_{S_k}]$.
That is, after taking the expectation on both sides (conditioning on $x_k$), we can use the tower property and sum over $k$ (from $0$ to $K-1$) on both sides of the inequality. After dividing by $K$, thanks to Jensen's inequality, we get (for $c = 1$):
$$\mathbb{E}\left[f(\bar{x}_K) - f(x^*)\right] \le \frac{\|x_0 - x^*\|^2}{\alpha K} + \frac{2\gamma_b\hat{\sigma}_B^2}{\alpha},$$
where $\bar{x}_K = \frac{1}{K}\sum_{k=0}^{K-1}x_k$, $\alpha := \min\{\frac{1}{2cL_{\max}}, \gamma_b\}$ and $L_{\max} = \max\{L_i\}_{i=1}^n$ is the maximum smoothness constant.
Strongly convex setting. We proceed in the usual way:
$$\begin{aligned}
\|x_{k+1} - x^*\|^2 &= \|x_k - x^*\|^2 - 2\gamma_k\langle x_k - x^*, \nabla f_{S_k}(x_k)\rangle + \gamma_k^2\|\nabla f_{S_k}(x_k)\|^2 \\
&\le \|x_k - x^*\|^2 - 2\gamma_k\langle x_k - x^*, \nabla f_{S_k}(x_k)\rangle + \frac{\gamma_k}{c}\underbrace{\left[f_{S_k}(x_k) - \ell^*_{S_k}\right]}_{\ge 0}.
\end{aligned}$$
Using the fact that $c \ge 1/2$, we get
$$\begin{aligned}
\|x_{k+1} - x^*\|^2 &\le \|x_k - x^*\|^2 - 2\gamma_k\langle x_k - x^*, \nabla f_{S_k}(x_k)\rangle + 2\gamma_k\left[f_{S_k}(x_k) - \ell^*_{S_k}\right] \\
&= \|x_k - x^*\|^2 - 2\gamma_k\langle x_k - x^*, \nabla f_{S_k}(x_k)\rangle + 2\gamma_k\left[f_{S_k}(x_k) - f_{S_k}(x^*) + f_{S_k}(x^*) - \ell^*_{S_k}\right] \\
&= \|x_k - x^*\|^2 + 2\gamma_k\left[-\langle x_k - x^*, \nabla f_{S_k}(x_k)\rangle + f_{S_k}(x_k) - f_{S_k}(x^*)\right] + 2\gamma_k\left[f_{S_k}(x^*) - \ell^*_{S_k}\right].
\end{aligned}$$
From convexity of the functions $f_{S_k}$ it holds that $-\langle x_k - x^*, \nabla f_{S_k}(x_k)\rangle + f_{S_k}(x_k) - f_{S_k}(x^*) \le 0$, $\forall S_k \subseteq [n]$. Thus,
$$\|x_{k+1} - x^*\|^2 \le \|x_k - x^*\|^2 + 2\gamma_k\underbrace{\left[-\langle x_k - x^*, \nabla f_{S_k}(x_k)\rangle + f_{S_k}(x_k) - f_{S_k}(x^*)\right]}_{\le 0} + 2\gamma_b\underbrace{\left[f_{S_k}(x^*) - \ell^*_{S_k}\right]}_{\ge 0}.$$
The rest of the proof is identical to [22] (Theorem 3.1); the only difference is that instead of $\sigma_B^2$ we have $\hat{\sigma}_B^2 := \mathbb{E}[f_{S_k}(x^*) - \ell^*_{S_k}]$.
That is, after taking the expectation on both sides (conditioning on $x_k$), we can use the tower property and solve the resulting geometric series in closed form: for $c \ge 1/2$ we get
$$\mathbb{E}\|x_k - x^*\|^2 \le (1 - \mu\alpha)^k\|x_0 - x^*\|^2 + \frac{2\gamma_b\hat{\sigma}_B^2}{\mu\alpha},$$
where $\alpha := \min\{\frac{1}{2cL_{\max}}, \gamma_b\}$ and $L_{\max} = \max\{L_i\}_{i=1}^n$ is the maximum smoothness constant.
D Lack of convergence of SGD with SPSmax in the non-interpolated setting

D.1 Convergence of SPS with decreasing stepsizes to $\bar{x} \neq x^*$ in the quadratic case
We recall the variation-of-constants formula.

Lemma 4 (Variation of constants). Let $z \in \mathbb{R}^d$ evolve with the time-varying linear dynamics $z_{k+1} = A_kz_k + \varepsilon_k$, where $A_k \in \mathbb{R}^{d\times d}$ and $\varepsilon_k \in \mathbb{R}^d$ for all $k$. Then, with the convention that $\prod_{j=k+1}^kA_j = 1$,
$$z_k = \prod_{j=0}^{k-1}A_j\,z_0 + \sum_{i=0}^{k-1}\left(\prod_{j=i+1}^{k-1}A_j\right)\varepsilon_i. \tag{45}$$
Proof. For $k = 1$ we get $z_1 = A_0z_0 + \varepsilon_0$. The induction step yields
$$\begin{aligned}
z_{k+1} &= A_k\left(\prod_{j=0}^{k-1}A_j\,z_0 + \sum_{i=0}^{k-1}\prod_{j=i+1}^{k-1}A_j\,\varepsilon_i\right) + \varepsilon_k \\
&= \prod_{j=0}^{k}A_j\,z_0 + \sum_{i=0}^{k-1}\prod_{j=i+1}^{k}A_j\,\varepsilon_i + \prod_{j=k+1}^{k}A_j\,\varepsilon_k \\
&= \prod_{j=0}^{k}A_j\,z_0 + \sum_{i=0}^{k}\prod_{j=i+1}^{k}A_j\,\varepsilon_i.
\end{aligned}$$
This completes the proof of the variation-of-constants formula.
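As a quick sanity check of the formula (ours, not part of the original proof), one can compare the direct recursion against the closed form in one dimension:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.uniform(0.5, 1.0, size=20)   # contraction factors A_k
eps = rng.normal(size=20)            # perturbations eps_k
z0, z = 1.0, 1.0

for Ak, ek in zip(A, eps):           # direct recursion z_{k+1} = A_k z_k + eps_k
    z = Ak * z + ek

# closed form: prod_j A_j * z0 + sum_i (prod_{j>i} A_j) * eps_i
closed = np.prod(A) * z0 + sum(np.prod(A[i + 1:]) * eps[i] for i in range(20))
assert np.isclose(z, closed)
```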
Proposition 4 (Quadratic, 1d). Consider $f(x) := \frac{1}{n}\sum_{i=1}^nf_i(x)$, with $f_i(x) = \frac{a_i}{2}(x - x_i^*)^2$. We consider SGD with $\gamma_k = \frac{f_{i_k}(x_k) - f^*_{i_k}}{c_k\|\nabla f_{i_k}(x_k)\|^2}$, with $c_k = (k+1)/2$. Then $\mathbb{E}|x_{k+1} - \bar{x}|^2 = O(1/k)$, with
$$\bar{x} = \frac{1}{n}\sum_{i=1}^nx_i^* \;\neq\; \frac{\sum_{i=1}^na_ix_i^*}{\sum_{i=1}^na_i} = x^*.$$
Proof. To show that $x_k \to \bar{x}$, first notice that the curvature gets canceled out in the update, due to the correlation between $\gamma_k$ and $\nabla f_{i_k}(x_k)$:
$$x_{k+1} = x_k - \gamma_k\nabla f_{i_k}(x_k) = x_k - \frac{a_{i_k}(x_k - x^*_{i_k})}{2c_ka_{i_k}} = x_k - \frac{x_k - x^*_{i_k}}{2c_k}. \tag{50}$$
Now let us add and subtract $\bar{x}$ twice, as follows:
$$x_{k+1} - \bar{x} = x_k - \bar{x} - \frac{x_k - \bar{x} + \bar{x} - x^*_{i_k}}{2c_k} = \left(1 - \frac{1}{2c_k}\right)(x_k - \bar{x}) + \frac{x^*_{i_k} - \bar{x}}{2c_k}. \tag{51}$$
From this equality it is already clear that, in expectation, the update is in the direction of $\bar{x}$. To provide a formal proof of convergence, the first step is to use the variation-of-constants formula (Lemma 4). Therefore,
$$x_{k+1} - \bar{x} = \prod_{j=0}^{k}\left(1 - \frac{1}{2c_j}\right)(x_0 - \bar{x}) + \sum_{\ell=0}^{k}\prod_{j=\ell+1}^{k}\left(1 - \frac{1}{2c_j}\right)\frac{x^*_{i_\ell} - \bar{x}}{2c_\ell}. \tag{52}$$
If $c_j = (j+1)/2$ then $1 - \frac{1}{2c_j} = \frac{j}{j+1}$, and therefore
$$\prod_{j=\ell+1}^{k}\left(1 - \frac{1}{2c_j}\right) = \frac{\ell+1}{\ell+2}\cdot\frac{\ell+2}{\ell+3}\cdots\frac{k-1}{k}\cdot\frac{k}{k+1} = \frac{\ell+1}{k+1}. \tag{53}$$
Hence,
$$\sum_{\ell=0}^{k}\prod_{j=\ell+1}^{k}\left(1 - \frac{1}{2c_j}\right)\frac{x^*_{i_\ell} - \bar{x}}{2c_\ell} = \sum_{\ell=0}^{k}\frac{\ell+1}{k+1}\cdot\frac{x^*_{i_\ell} - \bar{x}}{\ell+1} = \frac{1}{k+1}\sum_{\ell=0}^{k}(x^*_{i_\ell} - \bar{x}). \tag{54}$$
Moreover, since $\prod_{j=0}^{k}\left(1 - \frac{1}{2c_j}\right) = 0$ (the factor for $j = 0$ vanishes), we have that $x_k \to \bar{x}$ in distribution, by the law of large numbers.
Finally, to get a rate on the distance shrinkage, we take the expectation w.r.t. $i_k$ conditioned on $x_k$: the cross-term disappears and we get
$$\mathbb{E}_{i_k}|x_{k+1} - \bar{x}|^2 = \left(1 - \frac{1}{2c_k}\right)^2|x_k - \bar{x}|^2 + \frac{\mathbb{E}|x^*_{i_k} - \bar{x}|^2}{4c_k^2}. \tag{56}$$
Plugging in $c_k = (k+1)/2$, we get
$$\mathbb{E}_{i_k}|x_{k+1} - \bar{x}|^2 = \left(\frac{k}{k+1}\right)^2|x_k - \bar{x}|^2 + \frac{\mathbb{E}|x^*_{i_k} - \bar{x}|^2}{(k+1)^2}. \tag{57}$$
Therefore, using the tower property and the variation-of-constants formula,
$$\mathbb{E}|x_{k+1} - \bar{x}|^2 = \prod_{j=0}^{k}\left(\frac{j}{j+1}\right)^2|x_0 - \bar{x}|^2 + \sum_{\ell=0}^{k}\prod_{j=\ell+1}^{k}\left(\frac{j}{j+1}\right)^2\frac{\mathbb{E}|x^*_{i_\ell} - \bar{x}|^2}{(\ell+1)^2} = \sum_{\ell=0}^{k}\frac{(\ell+1)^2}{(k+1)^2}\cdot\frac{\mathbb{E}|x^*_{i_\ell} - \bar{x}|^2}{(\ell+1)^2} = \frac{\mathbb{E}|x^*_{i_1} - \bar{x}|^2}{k+1}. \tag{59}$$
This concludes the proof.
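A quick Monte Carlo check of the rate in Prop. 4 (our own illustration): with two datapoints at $x_1^* = 1$, $x_2^* = -1$, we have $\mathbb{E}|x_i^* - \bar{x}|^2 = 1$, so $(k+1)\,\mathbb{E}|x_{k+1} - \bar{x}|^2$ should stay close to $1$.

```python
import numpy as np

rng = np.random.default_rng(1)
xs = np.array([1.0, -1.0])
x_bar = xs.mean()                      # SPS limit: the unweighted average, here 0

K, runs = 2000, 500
errs = np.zeros(K)
for _ in range(runs):
    x = 3.0
    for k in range(K):
        i = rng.integers(2)
        x -= (x - xs[i]) / (k + 1)     # gamma_k * grad = (x - x_i*)/(2 c_k) with c_k = (k+1)/2
        errs[k] += (x - x_bar) ** 2
errs /= runs

print(errs[199] * 200, errs[1999] * 2000)   # both should be close to 1
```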
D.2 Asymptotic vanishing of the SPS bias in 1d quadratics
As the number of datapoints grows, the bias in the SPS solution (Prop. 4) is alleviated by an averaging effect. Indeed, if we let each pair $(a_i, x_i^*)$ be sampled i.i.d., for every $n \in \mathbb{N}$ we have
$$\mathbb{E}_{a_i,x_i^*}\left[\frac{\sum_{i=1}^na_ix_i^*}{\sum_{i=1}^na_i}\right] = \mathbb{E}_{a_i|x_i^*}\mathbb{E}_{x_i^*}\left[\frac{\sum_{i=1}^na_ix_i^*}{\sum_{i=1}^na_i}\right] = \mathbb{E}_{a_i|x_i^*}\left[\frac{\sum_{i=1}^na_i\,\mathbb{E}_{x_i^*}[x_i^*]}{\sum_{i=1}^na_i}\right] = \mathbb{E}_{a_i|x_i^*}\left[\frac{\sum_{i=1}^na_i}{\sum_{i=1}^na_i}\right]\bar{x}^* = \bar{x}^*, \tag{60}$$
where $\bar{x}^* := \mathbb{E}[x_i^*]$.
As $n \to \infty$, it is possible to see that, under some additional assumptions (e.g. $a_i$ Beta-distributed), the variance of $\frac{\sum_{i=1}^na_ix_i^*}{\sum_{i=1}^na_i}$ vanishes, so that this ratio converges to its mean in probability, as $n \to \infty$, with rate $O(1/n)$. First, recall the law of total variance: $\mathrm{Var}[Z] = \mathrm{Var}[\mathbb{E}[Z|W]] + \mathbb{E}[\mathrm{Var}[Z|W]]$. In our case, setting $Z = \frac{\sum_{i=1}^na_ix_i^*}{\sum_{i=1}^na_i}$ and $W = (a_i)_{i=1}^n$, the first term is zero since $x_i^*$ is independent of $(a_i)_{i=1}^n$. Hence
$$\mathrm{Var}\left[\frac{\sum_{i=1}^na_ix_i^*}{\sum_{i=1}^na_i}\right] = \mathbb{E}_{a_i|x_i^*}\left[\mathrm{Var}_{x_i^*}\left(\frac{\sum_{i=1}^na_ix_i^*}{\sum_{i=1}^na_i}\right)\right] \tag{61}$$
$$= \mathbb{E}\left[\frac{\sum_{i=1}^na_i^2\,\mathrm{Var}[x_i^*]}{(\sum_{i=1}^na_i)^2}\right] = \mathbb{E}\left[\frac{\sum_{i=1}^na_i^2}{(\sum_{i=1}^na_i)^2}\right]\mathrm{Var}[x_i^*]. \tag{62}$$
Evaluating $\mathbb{E}\left[\frac{\sum_{i=1}^na_i^2}{(\sum_{i=1}^na_i)^2}\right]$ might be complex, yet if one assumes e.g. $a_i \sim \Gamma(k, \lambda)$ (positive support, to ensure convexity), then it is possible to show that $\mathbb{E}\left[\frac{\sum_{i=1}^na_i^2}{(\sum_{i=1}^na_i)^2}\right] = O(1/n)$. First, recall that, for $q \ge 0$,
$$\frac{1}{q^2} = \int_0^{\infty}te^{-qt}\,dt. \tag{63}$$
We rewrite the expectation as follows:
$$\begin{aligned}
\mathbb{E}\left[\frac{\sum_{i=1}^na_i^2}{(\sum_{i=1}^na_i)^2}\right] &= \int_0^{\infty}t\cdot\mathbb{E}\left[\sum_{i=1}^na_i^2\cdot\exp\left(-t\sum_{i=1}^na_i\right)\right]dt \\
&= n\int_0^{\infty}t\cdot\mathbb{E}\left[a_1^2\cdot\exp\left(-t\sum_{i=1}^na_i\right)\right]dt \\
&= n\int_0^{\infty}t\cdot\mathbb{E}\left[a_1^2\exp(-ta_1)\right]\cdot\mathbb{E}\left[\exp\left(-t\sum_{i=2}^na_i\right)\right]dt \\
&= n\int_0^{\infty}t\cdot M''_X(-t)\cdot\left(M_X(-t)\right)^{n-1}dt, \tag{67}
\end{aligned}$$
where $M_X(t)$ denotes the moment generating function of the $\Gamma(k, \lambda)$ distribution. Next, we solve the integral using the closed-form expression $M_X(t) = \left(1 - \frac{t}{\lambda}\right)^{-k}$ for $t \le \lambda$ (otherwise it does not exist). Note that we integrate only at non-positive arguments, so the MGF is always defined:
$$\begin{aligned}
\mathbb{E}\left[\frac{\sum_{i=1}^na_i^2}{(\sum_{i=1}^na_i)^2}\right] &= n\int_0^{\infty}t\cdot\frac{k(k+1)}{\lambda^2}\left(1 + \frac{t}{\lambda}\right)^{-2-k}\cdot\left(1 + \frac{t}{\lambda}\right)^{-k(n-1)}dt \\
&= nk(k+1)\int_0^{\infty}\frac{u}{(1+u)^{kn+2}}\,du \\
&= nk(k+1)\int_0^1(1-s)s^{kn-1}\,ds \\
&= nk(k+1)\left(\frac{1}{nk} - \frac{1}{nk+1}\right) \\
&= \frac{k+1}{k\cdot n + 1}, \tag{72}
\end{aligned}$$
where in the third-to-last equality we changed variables $t \to \lambda u$, and in the second-to-last we changed variables $u \to \frac{1-s}{s}$.
All in all, we have that $\mathrm{Var}\left[\frac{\sum_{i=1}^na_ix_i^*}{\sum_{i=1}^na_i}\right] = O(1/n)$. This implies that $\frac{\sum_{i=1}^na_ix_i^*}{\sum_{i=1}^na_i}$ converges to $\bar{x}^*$ in quadratic mean, hence also in probability.
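As a numerical sanity check of (72) (ours, not part of the original derivation): a Monte Carlo estimate of $\mathbb{E}\big[\sum_ia_i^2/(\sum_ia_i)^2\big]$ for Gamma-distributed $a_i$ matches $(k+1)/(kn+1)$.

```python
import numpy as np

rng = np.random.default_rng(2)
k_shape, lam, n = 2.0, 1.5, 50
a = rng.gamma(k_shape, 1.0 / lam, size=(100_000, n))   # a_i ~ Gamma(k, rate lam)
est = np.mean((a ** 2).sum(axis=1) / a.sum(axis=1) ** 2)
print(est, (k_shape + 1) / (k_shape * n + 1))          # both approximately 0.0297
```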
E Convergence of SGD with DecSPS in the smooth setting
Here we study Decreasing SPS (DecSPS), which combines stepsize decrease with the adaptiveness of SPS.
$$\gamma_k := \frac{1}{c_k}\min\left\{\frac{f_{S_k}(x_k) - \ell^*_{S_k}}{\|\nabla f_{S_k}(x_k)\|^2},\; c_{k-1}\gamma_{k-1}\right\}, \tag{DecSPS}$$
for $k \ge 0$, where we set $c_{-1} = c_0$ and $\gamma_{-1} = \gamma_b$ (stepsize bound, similar to [22]), to get
$$\gamma_0 := \frac{1}{c_0}\cdot\min\left\{\frac{f_{S_0}(x_0) - \ell^*_{S_0}}{\|\nabla f_{S_0}(x_0)\|^2},\; c_0\gamma_b\right\}. \tag{73}$$
E.1 Proof of Lemma 1

Lemma 1. Let each $f_i$ be $L_i$-smooth and let $(c_k)_{k=0}^{\infty}$ be any non-decreasing positive sequence of real numbers. Under DecSPS, we have
$$\min\left\{\frac{1}{2c_kL_{\max}},\; \frac{c_0\gamma_b}{c_k}\right\} \le \gamma_k \le \frac{c_0\gamma_b}{c_k}, \qquad \text{and} \qquad \gamma_k \le \gamma_{k-1}.$$
Proof. First, note that $\gamma_k$ is trivially non-increasing, since $\gamma_k \le c_{k-1}\gamma_{k-1}/c_k$. Next, we prove the bounds on $\gamma_k$.
For $k = 0$, we can directly use Lemma 2:
$$\gamma_b \ge \gamma_0 = \frac{1}{c_0}\cdot\min\left\{\frac{f_{S_0}(x_0) - \ell^*_{S_0}}{\|\nabla f_{S_0}(x_0)\|^2},\; c_0\gamma_b\right\} \ge \min\left\{\frac{1}{2c_0L_{\max}},\; \gamma_b\right\}. \tag{74}$$
Next, we proceed by induction: we assume the proposition holds true for $\gamma_k$:
$$\min\left\{\frac{1}{2c_kL_{\max}},\; \frac{c_0\gamma_b}{c_k}\right\} \le \gamma_k \le \frac{c_0\gamma_b}{c_k}. \tag{75}$$
Then, we have:
$$\gamma_{k+1} = \frac{1}{c_{k+1}}\min\left\{\frac{f_{S_{k+1}}(x_{k+1}) - \ell^*_{S_{k+1}}}{\|\nabla f_{S_{k+1}}(x_{k+1})\|^2},\; \iota\right\}, \quad \text{where } \iota := c_k\gamma_k \in \left[\min\left\{\frac{1}{2L_{\max}},\, c_0\gamma_b\right\},\; c_0\gamma_b\right], \tag{76}$$
by the induction hypothesis. This bound directly implies that the proposition holds true for $\gamma_{k+1}$, since again by Lemma 2 we have
$$\frac{f_{S_{k+1}}(x_{k+1}) - \ell^*_{S_{k+1}}}{\|\nabla f_{S_{k+1}}(x_{k+1})\|^2} \ge \frac{1}{2L_{\max}}.$$
This concludes the induction step.
E.2 Proof of Thm. 3
Remark 8 (Why was this challenging?). The fundamental problem towards a proof for DecSPS is that the error to control, due to gradient stochasticity, does not come from the term $\gamma_k^2\|\nabla f(x_k)\|^2$ in the expansion of $\|x_k - x^*\|^2$, as is instead usual for SGD with decreasing stepsizes. Instead, the error comes from the inner product term $\gamma_k\langle\nabla f(x_k), x_k - x^*\rangle$. Hence, the error is proportional to $\gamma_k$, and not to $\gamma_k^2$. As a result, the usual Robbins-Monro conditions [27] do not yield convergence. A similar problem is discussed for AdaGrad in [32].
Theorem 3. Consider SGD with DecSPS and let $(c_k)_{k=0}^{\infty}$ be any non-decreasing sequence such that $c_k \ge 1$, $\forall k \in \mathbb{N}$. Assume that each $f_i$ is convex and $L_i$-smooth. We have:
$$\mathbb{E}[f(\bar{x}_K) - f(x^*)] \le \frac{2c_{K-1}\hat{L}D^2}{K} + \frac{1}{K}\sum_{k=0}^{K-1}\frac{\hat{\sigma}_B^2}{c_k}, \tag{4}$$
where $D^2 := \max_{k\in[K-1]}\|x_k - x^*\|^2$, $\hat{L} := \max\{\max_i\{L_i\},\, \frac{1}{2c_0\gamma_b}\}$ and $\bar{x}_K = \frac{1}{K}\sum_{k=0}^{K-1}x_k$.
Proof. Note that from the definition $\gamma_k := \frac{1}{c_k}\cdot\min\left\{\frac{f_{S_k}(x_k) - \ell^*_{S_k}}{\|\nabla f_{S_k}(x_k)\|^2},\; c_{k-1}\gamma_{k-1}\right\}$, we have that:
$$\gamma_k \le \frac{1}{c_k}\cdot\frac{f_{S_k}(x_k) - \ell^*_{S_k}}{\|\nabla f_{S_k}(x_k)\|^2}. \tag{77}$$
Multiplying by $\gamma_k$ and rearranging terms, we get the fundamental inequality
$$\gamma_k^2\|\nabla f_{S_k}(x_k)\|^2 \le \frac{\gamma_k}{c_k}\left[f_{S_k}(x_k) - \ell^*_{S_k}\right]. \tag{78}$$
Using the definition of DecSPS and convexity we get
$$\begin{aligned}
\|x_{k+1} - x^*\|^2 &= \|x_k - \gamma_k\nabla f_{S_k}(x_k) - x^*\|^2 \\
&\overset{(78)}{\le} \|x_k - x^*\|^2 - 2\gamma_k\langle\nabla f_{S_k}(x_k), x_k - x^*\rangle + \frac{\gamma_k}{c_k}\left(f_{S_k}(x_k) - \ell^*_{S_k}\right) \\
&\overset{\text{convexity}}{\le} \|x_k - x^*\|^2 - 2\gamma_k\left[f_{S_k}(x_k) - f_{S_k}(x^*)\right] + \frac{\gamma_k}{c_k}\left[f_{S_k}(x_k) - f_{S_k}(x^*) + f_{S_k}(x^*) - \ell^*_{S_k}\right] \\
&= \|x_k - x^*\|^2 - \left(2 - \frac{1}{c_k}\right)\gamma_k\left[f_{S_k}(x_k) - f_{S_k}(x^*)\right] + \frac{\gamma_k}{c_k}\left[f_{S_k}(x^*) - \ell^*_{S_k}\right]. \tag{84}
\end{aligned}$$
Let us divide everything by γ k > 0.
x k+1 − x * 2 γ k ≤ x k − x * 2 γ k − 2 − 1 c k [f S k (x k ) − f S k (x * )] + 1 c k [f S k (x * ) − * S k ]. (85)
Since by hypothesis $c_k \ge 1$ for all $k \in \mathbb{N}$, we have $2 - \frac{1}{c_k} \ge 1$ and therefore
$$f_{S_k}(x_k) - f_{S_k}(x^*) \le \frac{\|x_k - x^*\|^2}{\gamma_k} - \frac{\|x_{k+1} - x^*\|^2}{\gamma_k} + \frac{1}{c_k}\left[f_{S_k}(x^*) - \ell^*_{S_k}\right]. \tag{86}$$
Next, summing from $k = 0$ to $K-1$:
$$\sum_{k=0}^{K-1}\left[f_{S_k}(x_k) - f_{S_k}(x^*)\right] \le \sum_{k=0}^{K-1}\frac{\|x_k - x^*\|^2}{\gamma_k} - \sum_{k=0}^{K-1}\frac{\|x_{k+1} - x^*\|^2}{\gamma_k} + \sum_{k=0}^{K-1}\frac{1}{c_k}\left[f_{S_k}(x^*) - \ell^*_{S_k}\right]. \tag{87}$$
And therefore
$$\begin{aligned}
\sum_{k=0}^{K-1}\left[f_{S_k}(x_k) - f_{S_k}(x^*)\right] &\le \frac{\|x_0 - x^*\|^2}{\gamma_0} + \sum_{k=1}^{K-1}\frac{\|x_k - x^*\|^2}{\gamma_k} - \sum_{k=0}^{K-2}\frac{\|x_{k+1} - x^*\|^2}{\gamma_k} - \frac{\|x_K - x^*\|^2}{\gamma_{K-1}} + \sum_{k=0}^{K-1}\frac{f_{S_k}(x^*) - \ell^*_{S_k}}{c_k} \tag{90} \\
&\le \frac{\|x_0 - x^*\|^2}{\gamma_0} + \sum_{k=0}^{K-2}\left(\frac{1}{\gamma_{k+1}} - \frac{1}{\gamma_k}\right)\|x_{k+1} - x^*\|^2 + \sum_{k=0}^{K-1}\frac{f_{S_k}(x^*) - \ell^*_{S_k}}{c_k} \tag{92} \\
&\le D^2\left(\frac{1}{\gamma_0} + \sum_{k=0}^{K-2}\left(\frac{1}{\gamma_{k+1}} - \frac{1}{\gamma_k}\right)\right) + \sum_{k=0}^{K-1}\frac{f_{S_k}(x^*) - \ell^*_{S_k}}{c_k} \tag{93} \\
&= \frac{D^2}{\gamma_{K-1}} + \sum_{k=0}^{K-1}\frac{f_{S_k}(x^*) - \ell^*_{S_k}}{c_k}. \tag{94}
\end{aligned}$$
Remark 9 (Where did we use the modified SPS definition?). In step (93), we are able to collect D 2 because 1 γ k+1 − 1 γ k ≥ 0. This is guaranteed by the new SPS definition (DecSPS), along with the fact that c k is increasing. Note that one could not perform this step under the original SPS update rule of [22].
Thanks to Lemma 1, we have $\gamma_k \ge \min\left\{\frac{1}{2c_kL_{\max}},\; \frac{c_0\gamma_b}{c_k}\right\}$. Hence,
$$\frac{1}{\gamma_k} \le c_k\cdot\max\left\{2L_{\max},\; \frac{1}{c_0\gamma_b}\right\}. \tag{95}$$
Let us call $\hat{L} = \max\left\{L_{\max},\, \frac{1}{2c_0\gamma_b}\right\}$. By combining (95) with (94) and dividing by $K$, we get:
$$\frac{1}{K}\sum_{k=0}^{K-1}\left[f_{S_k}(x_k) - f_{S_k}(x^*)\right] \le \frac{2c_{K-1}\hat{L}D^2}{K} + \frac{1}{K}\sum_{k=0}^{K-1}\frac{f_{S_k}(x^*) - \ell^*_{S_k}}{c_k}. \tag{96}$$
We conclude by taking the expectation and using Jensen's inequality as follows:
$$\mathbb{E}\left[f(\bar{x}_K) - f(x^*)\right] \overset{\text{Jensen}}{\le} \frac{1}{K}\sum_{k=0}^{K-1}\mathbb{E}\left[f(x_k) - f(x^*)\right] \le \frac{2c_{K-1}\hat{L}D^2}{K} + \frac{1}{K}\sum_{k=0}^{K-1}\frac{\hat{\sigma}_B^2}{c_k}, \tag{97}$$
where $\hat{\sigma}_B^2$ is as defined in (3).

Remark 10 (Second term does not depend on $\gamma_b$). Note that, in the convergence rate, the second term does not depend on $\gamma_b$ while the first does. This is different from the original SPS result [22], and is due to the different proof technique: specifically, we divide by $\gamma_k$ early in the proof, and not at the end. To point to the exact source of this difference, we invite the reader to inspect Equation 24 in the appendix of [22] (http://proceedings.mlr.press/v130/loizou21a/loizou21a-supp.pdf): the last term there is proportional to $\gamma_b/\alpha$, where $\alpha$ is a lower bound on the SPS and $\gamma_b$ is an upper bound. In our proof approach, these terms, which bound the same quantity, effectively cancel out (because we divide by $\gamma_k$ earlier in the proof), at the price of having $D^2$ in the first term.
E.3 Proof of Prop. 1
We need the following lemma. An illustration of the result can be found in Fig. 6.
Lemma 5. Let $z_{k+1} = A_kz_k + \varepsilon_k$ with $A_k = \left(1 - \frac{a}{\sqrt{k+1}}\right)$ and $\varepsilon_k = \frac{b}{\sqrt{k+1}}$. If $z_0 > 0$, $0 < a \le 1$ and $b > 0$, then $z_k \le \max\{z_0, b/a\}$ for all $k \ge 0$.
Proof. Simple to prove by induction. The base case is trivial, since $z_0 \le \max\{z_0, b/a\}$. Let us now assume the proposition holds true for $z_k$ (that is, $z_k \le \max\{z_0, b/a\}$); we want to show it holds true for $k+1$. We have
$$z_{k+1} = \left(1 - \frac{a}{\sqrt{k+1}}\right)z_k + \frac{b}{\sqrt{k+1}}. \tag{98}$$
If $b/a = \max\{z_0, b/a\}$, then we get, by induction,
$$z_{k+1} \le \left(1 - \frac{a}{\sqrt{k+1}}\right)\frac{b}{a} + \frac{b}{\sqrt{k+1}} = \frac{b}{a} = \max\{z_0, b/a\}. \tag{99}$$
Else, if $z_0 = \max\{z_0, b/a\}$, then by induction
$$z_{k+1} \le \left(1 - \frac{a}{\sqrt{k+1}}\right)z_0 + \frac{b}{\sqrt{k+1}} = z_0 - \frac{az_0 - b}{\sqrt{k+1}} \le z_0 = \max\{z_0, b/a\}, \tag{100}$$
where the last inequality holds because $az_0 - b > 0$ and $a$ is positive. This completes the proof.
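A minimal numerical check of Lemma 5 (our own illustration, in the spirit of Figure 6): the recursion never exceeds $\max\{z_0, b/a\}$.

```python
import numpy as np

a, b, z0 = 0.7, 2.0, 1.0
z, bound = z0, max(z0, b / a)
for k in range(10_000):
    z = (1 - a / np.sqrt(k + 1)) * z + b / np.sqrt(k + 1)
    assert z <= bound + 1e-12          # Lemma 5: z_k <= max{z0, b/a}
print(f"final z = {z:.4f} <= bound = {bound:.4f}")
```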
, with µ min = min i∈[n] µ i and
L max = max i∈[n] L i .
Proof. Using the SPS definition we directly get
$$\begin{aligned}
\|x_{k+1} - x^*\|^2 &= \|x_k - \gamma_k\nabla f_{S_k}(x_k) - x^*\|^2 \\
&= \|x_k - x^*\|^2 - 2\gamma_k\langle\nabla f_{S_k}(x_k), x_k - x^*\rangle + \gamma_k^2\|\nabla f_{S_k}(x_k)\|^2 \\
&\le \|x_k - x^*\|^2 - 2\gamma_k\langle\nabla f_{S_k}(x_k), x_k - x^*\rangle + \frac{\gamma_k}{c_k}\left(f_{S_k}(x_k) - \ell^*_{S_k}\right), \tag{103}
\end{aligned}$$
where (as always) we used the fact that, from the definition $\gamma_k := \frac{1}{c_k}\cdot\min\left\{\frac{f_{S_k}(x_k) - \ell^*_{S_k}}{\|\nabla f_{S_k}(x_k)\|^2},\; c_{k-1}\gamma_{k-1}\right\}$, we have $\gamma_k \le \frac{1}{c_k}\cdot\frac{f_{S_k}(x_k) - \ell^*_{S_k}}{\|\nabla f_{S_k}(x_k)\|^2}$ and hence
$$\gamma_k^2\|\nabla f_{S_k}(x_k)\|^2 \le \frac{\gamma_k}{c_k}\left[f_{S_k}(x_k) - \ell^*_{S_k}\right]. \tag{104}$$
Now recall that, if each $f_i$ is $\mu_i$-strongly convex, then for any $x, y \in \mathbb{R}^d$ we have
$$-\langle\nabla f_{S_k}(x), x - y\rangle \le -\frac{\mu_{\min}}{2}\|x - y\|^2 - f_{S_k}(x) + f_{S_k}(y). \tag{105}$$
For $y = x^*$ and $x = x_k$, this implies
$$-\langle\nabla f_{S_k}(x_k), x_k - x^*\rangle \le -\frac{\mu_{\min}}{2}\|x_k - x^*\|^2 - f_{S_k}(x_k) + f_{S_k}(x^*). \tag{106}$$
Adding and subtracting $\ell^*_{S_k}$ on the RHS of the inequality above, we get
$$-\langle\nabla f_{S_k}(x_k), x_k - x^*\rangle \le -\frac{\mu_{\min}}{2}\|x_k - x^*\|^2 - \left(f_{S_k}(x_k) - \ell^*_{S_k}\right) + \left(f_{S_k}(x^*) - \ell^*_{S_k}\right). \tag{107}$$
Since $\gamma_k > 0$, we can substitute this inequality in Equation (103) and get
$$\|x_{k+1} - x^*\|^2 \le (1 - \gamma_k\mu_{\min})\|x_k - x^*\|^2 - 2\gamma_k\left(f_{S_k}(x_k) - \ell^*_{S_k}\right) + 2\gamma_k\left(f_{S_k}(x^*) - \ell^*_{S_k}\right) + \frac{\gamma_k}{c_k}\left(f_{S_k}(x_k) - \ell^*_{S_k}\right).$$
Rearranging a few terms we get
$$\|x_{k+1} - x^*\|^2 \le (1 - \gamma_k\mu_{\min})\|x_k - x^*\|^2 - \gamma_k\left(2 - \frac{1}{c_k}\right)\left(f_{S_k}(x_k) - \ell^*_{S_k}\right) + 2\gamma_k\left(f_{S_k}(x^*) - \ell^*_{S_k}\right).$$
Since we assumed $c_k \ge 1/2$ for all $k \in \mathbb{N}$, we can drop the term $-\gamma_k(2 - \frac{1}{c_k})(f_{S_k}(x_k) - \ell^*_{S_k}) \le 0$. Hence, we get the following bound:
$$\|x_{k+1} - x^*\|^2 \le (1 - \gamma_k\mu_{\min})\|x_k - x^*\|^2 + 2\gamma_k\hat{\sigma}^2_{B,\max},$$
where we used the inequality $f_{S_k}(x^*) - \ell^*_{S_k} \le \hat{\sigma}^2_{B,\max}$. Now we seek an upper bound for the contraction factor. Under $c_k = \sqrt{k+1}$, using again Lemma 1 we have, since $c_0 = 1$,
$$\gamma_k \ge \frac{1}{\sqrt{k+1}}\min\left\{\frac{1}{2L_{\max}},\, \gamma_b\right\} \qquad \text{and} \qquad \gamma_k \le \frac{\gamma_b}{\sqrt{k+1}}.$$
We now have all the ingredients to bound the iterates: the result follows from Lemma 5 using $a = \min\{\frac{\mu_{\min}}{2L_{\max}},\, \mu_{\min}\gamma_b\}$ and $b = 2c_0\gamma_b\hat{\sigma}^2_{B,\max}$. So, we get $\|x_k - x^*\|^2 \le \max\{\|x_0 - x^*\|^2,\, b/a\} = D^2_{\max}$ almost surely. This completes the proof.
Comparison with Adam and AMSgrad. We provide a comparison with Adam [19] and AMSgrad [26]. For both methods, we set the momentum parameter to zero (a.k.a. RMSprop) for a fair comparison with DecSPS. For $\beta := \beta_2$, the parameter that controls the moving average of the second moments, we select the value 0.99, since we found that the standard 0.999 leads to problematic (exploding) stepsizes. Findings are pretty similar for both the A1A (Figure 12) and Breast Cancer (Figure 13) datasets: when compared to DecSPS with the usual parameters, fine-tuned Adam with fixed stepsize can reach the same performance after a few tens of thousands of iterations; however, it is much slower at the beginning of training. While deriving convergence guarantees for Adam is problematic [26], AMSgrad [26] with stepsize $\eta/\sqrt{k+1}$ enjoys a convergence guarantee similar to AdaGrad and AdaGrad-Norm. This is reflected in the empirical convergence: fine-tuned AMSgrad is able to match the convergence of DecSPS with the usual parameters motivated at the beginning of this section. Yet, we recall that the convergence guarantees of AMSgrad require the iterates to live in a bounded domain, an assumption which is not needed for our DecSPS (see §5.2).

Performance under light regularization. If the problem at hand does not have strong curvature information, e.g. there is very light regularization, then additional tuning of the DecSPS parameters is required. Figure 14 shows that it is possible to retrieve the performance of SGD also with light regularization parameters ($1e{-}4$, $1e{-}6$) under additional tuning of $c_0$ and $\gamma_b$.

Figure 14: Results on A1A for λ = 1e−4 (left) and λ = 1e−6 (right). Additional tuning of SPS is required to match the tuned SGD performance.
Figure 1: We consider a 100-dimensional problem with n = 100 datapoints where each $f_i = \frac{1}{2}(\ldots)$; when this holds for all $i$, all the shown algorithms coincide (right plot) and converge to the solution.
Corollary 1. In the context of Thm. 2, assume all $f_i$'s are non-negative and estimate $\ell^*_S = 0$ for all $S \subseteq [n]$. Then the same exact rates in Thm. 1 hold for SPSmax, after replacing $\sigma_B^2$ with $f^* = f(x^*)$.
Figure 3: DecSPS ($c_k = c_0\sqrt{k+1}$) sensitivity to hyperparameters on the Synthetic Dataset, with λ = 0. Repeated 10 times; plotted are the mean and standard deviation.
Figure 4: Left: performance of DecSPS on the A1A Dataset (λ = 0.01). Right: performance of DecSPS on the Breast Cancer Dataset (λ = 1e−1). Further experiments can be found in §G (appendix).
Figure 5: Left: performance of Adam (with fixed stepsize and no momentum) compared to DecSPS on the A1A dataset. Right: performance of AMSgrad (with sqrt-decreasing stepsize and no momentum) compared to DecSPS on the Breast Cancer dataset. Plots comparing the performance of Adam with DecSPS on the Breast Cancer Dataset can be found in Figure 13, and plots comparing AMSgrad with DecSPS on the A1A Dataset can be found in Figure 12. Plotted is also the average stepsize (each parameter evolves with a different stepsize).
Theorem 2. Under SPSmax, the same exact rates in Thm. 1 hold (under the corresponding assumptions), after replacing $\sigma_B^2$ with $\hat{\sigma}_B^2$.
Figure 6: Numerical verification of Lemma 5. The bound in the lemma is indicated with a dashed line.
Figure 7: Tuning of DecSPS on the A1A and Breast Cancer datasets.

Comparison with SGD. In addition to Figure 4 (A1A dataset), in Figure 8 we provide a comparison of DecSPS with SGD with stepsize $\gamma_0/\sqrt{k+1}$ for the Synthetic and Breast Cancer datasets. From the results, it is clear that DecSPS with standard parameters $c_0 = 1$, $\gamma_b = 10$ (see discussion in the main paper and the paragraph above) is comparable to, if not faster than, vanilla SGD with decreasing stepsize.

Figure 8: DecSPS on the Synthetic Dataset (λ = 1e−4) and the Breast Cancer Dataset (λ = 1e−1).

Comparison with AdaGrad-Norm. In addition to Figure 4 (Breast Cancer dataset), in Figures 9, 10 & 11 we provide a comparison of DecSPS with AdaGrad-Norm [32] for the Synthetic and A1A datasets. AdaGrad-Norm at each iteration updates the scalar $b^2_{k+1} = b^2_k + \|\nabla f_{S_k}(x_k)\|^2$ and then selects the next step as $x_{k+1} = x_k - \frac{\eta}{b_{k+1}}\nabla f_{S_k}(x_k)$. Hence, it has tuning parameters $b_0$ and $\eta$; $b_0 = 0.1$ is recommended in [32] (see their Figure 3). Using this value for $b_0$, we show in Figure 9 that the performance of DecSPS is competitive against a well-tuned value of the AdaGrad-Norm stepsize $\eta$. In Figures 10 & 11 we show the effect of tuning $b_0$ on the synthetic dataset: no major improvement is observed.
Figure 9: Performance of AdaGrad-Norm compared to DecSPS on the Synthetic and A1A datasets.

Figure 10: Performance of AdaGrad-Norm compared to DecSPS on the Synthetic dataset, for $b_0 = 0.1$. This figure is a complement to Figure 4.
Figure 11: Performance of AdaGrad-Norm compared to DecSPS on the A1A and Breast Cancer datasets, for $b_0 = 0.1$. This figure is a complement to Figure 4.

Figure 12: Performance of Adam (with fixed stepsize and no momentum) and AMSgrad (with sqrt-decreasing stepsize and no momentum) compared to DecSPS on the A1A dataset. Plotted is also the average stepsize (each parameter evolves with a different stepsize).

Figure 13: Performance of Adam (with fixed stepsize and no momentum) and AMSgrad (with sqrt-decreasing stepsize and no momentum) compared to DecSPS on the Breast Cancer dataset. Plotted is also the average stepsize (each parameter evolves with a different stepsize).
Table 1: Summary of the considered stepsizes and the corresponding theoretical results in the non-interpolated setting. The studied quantity in all theorems, with respect to which all rates are expressed, is $\mathbb{E}[f(\bar{x}_K) - f(x^*)]$.
Comparison with Adam and AMSgrad without momentum. In Figures 5, 12 & 13 we compare DecSPS with Adam [19] and AMSgrad [26] on the A1A and Breast Cancer datasets. Results show that DecSPS with the usual hyperparameters is comparable to the fine-tuned versions of both these algorithms, which however do not enjoy convergence guarantees in the unbounded-domain setting.
A variant of SGD with SPSmax was also proposed by Asi and Duchi [2] as a special case of a model-based method called the lower-truncated model. Asi and Duchi [2] also proposed a decreasing step-size variant of SPSmax which is closely related to, but different from, the DecSPS that we propose in §5. Among other differences, they assume interpolation for their convergence results, whereas we do not in §5. We describe the differences between our work and Asi and Duchi [2] in more detail in Appendix A.
Similar choices are possible. We found that this leads to the biggest stepsize magnitude, allowing for faster convergence in practice.
Perhaps the only exception is the result of[33], where the authors work on a different setting: i.e. they introduce the RUIG inequality.
Take for instance the deterministic one-dimensional setting $f(x) = |x|$. As $x \to 0$, the stepsize prescribed by DecSPS converges to zero. This is not the case e.g. in the quadratic setting.
This derivation was posted on the Mathematics StackExchange at https://math.stackexchange.com/questions/138290/finding-e-left-frac-sum-i-1n-x-i2-sum-i-1n-x-i2-right-of-a-sam?rq=1 and we report it here for completeness.
Acknowledgements. This work was partially supported by the Canada CIFAR AI Chair Program. Simon Lacoste-Julien is a CIFAR Associate Fellow in the Learning in Machines & Brains program.

F Convergence of the stochastic subgradient method with DecSPS-NS in the non-smooth setting

In this section we consider the DecSPS-NS stepsize in the non-smooth setting:
$$\gamma_k := \frac{1}{c_k}\cdot\min\left\{\max\left\{c_0\gamma_\ell,\; \frac{f_{S_k}(x_k) - \ell^*_{S_k}}{\|g_{S_k}(x_k)\|^2}\right\},\; c_{k-1}\gamma_{k-1}\right\}, \tag{DecSPS-NS}$$
where $g_{S_k}(x_k)$ is the stochastic subgradient for batch $S_k$ at iteration $k$, and we set $c_{-1} = c_0$ and $\gamma_{-1} = \gamma_b$ to get $\gamma_0$.

F.1 Proof of the stepsize bounds

Lemma 6 (Non-smooth bounds). Let $(c_k)_{k=0}^{\infty}$ be any non-decreasing positive sequence. Then, under DecSPS-NS, we have that for every $k \in \mathbb{N}$,
$$\frac{c_0\gamma_\ell}{c_k} \le \gamma_k \le \frac{c_0\gamma_b}{c_k}.$$

Proof. First, note that $\gamma_k$ is trivially non-increasing, since $\gamma_k \le c_{k-1}\gamma_{k-1}/c_k$. Next, we prove the bounds on $\gamma_k$. Without loss of generality, we can work with the simplified stepsize
$$\gamma_k = \frac{1}{c_k}\cdot\min\left\{\max\{c_0\gamma_\ell,\, \alpha_k\},\; c_{k-1}\gamma_{k-1}\right\},$$
where $\alpha_k \in \mathbb{R}$ is any number. We proceed by induction. At $k = 0$ (base case), depending on the value of $\alpha_0$, we get $\gamma_0 = \gamma_\ell$ (if $\alpha_0 \le c_0\gamma_\ell$), $\gamma_0 = \gamma_b$ (if $\alpha_0 \ge c_0\gamma_b$), or $\gamma_0 = \alpha_0/c_0$ otherwise. In all these cases, we get $\gamma_0 \in [\gamma_\ell, \gamma_b]$; hence, the base case holds true. We now proceed with the induction step by assuming $\frac{c_0\gamma_\ell}{c_k} \le \gamma_k \le \frac{c_0\gamma_b}{c_k}$. Using the definition of DecSPS-NS we will then show that the same inequalities hold for $\gamma_{k+1}$. We start by noting that, since $\gamma_k \in \left[\frac{c_0\gamma_\ell}{c_k}, \frac{c_0\gamma_b}{c_k}\right]$, it holds that
$$c_0\gamma_\ell \le c_k\gamma_k \le c_0\gamma_b. \tag{122}$$
Similarly to the base case, we can write $\gamma_{k+1} = \frac{1}{c_{k+1}}\min\{\max\{c_0\gamma_\ell, \alpha_{k+1}\}, c_k\gamma_k\}$; with a procedure identical to the setting $k = 0$, we get that $\min\{\max\{c_0\gamma_\ell, \alpha_{k+1}\}, c_k\gamma_k\} \in [c_0\gamma_\ell, c_0\gamma_b]$. This concludes the proof.

F.2 Proof of Thm. 5

Theorem 5. For any non-decreasing positive sequence $(c_k)_{k=0}^{\infty}$, consider SGD with DecSPS-NS. Assume that each $f_i$ is convex and lower bounded. We have
$$\mathbb{E}[f(\bar{x}_K) - f(x^*)] \le \frac{c_{K-1}D^2}{\gamma_\ell c_0K} + \frac{1}{K}\sum_{k=0}^{K-1}\frac{c_0\gamma_bG^2}{c_k},$$
where $D^2 := \max_{k\in[K-1]}\|x_k - x^*\|^2$ and $G^2 := \max_{k\in[K-1]}\|g_{S_k}(x_k)\|^2$.

Proof. Let us consider the DecSPS stepsize in the non-smooth setting. Using convexity and the gradient bound we get
$$\|x_{k+1} - x^*\|^2 = \|x_k - x^*\|^2 - 2\gamma_k\langle g_{S_k}(x_k), x_k - x^*\rangle + \gamma_k^2\|g_{S_k}(x_k)\|^2 \le \|x_k - x^*\|^2 - 2\gamma_k\left[f_{S_k}(x_k) - f_{S_k}(x^*)\right] + \gamma_k^2G^2,$$
where the last step follows from the definition of the subgradient. By dividing by $\gamma_k > 0$,
$$f_{S_k}(x_k) - f_{S_k}(x^*) \le \frac{\|x_k - x^*\|^2 - \|x_{k+1} - x^*\|^2}{2\gamma_k} + \frac{\gamma_kG^2}{2}.$$
Using the same exact steps as in Thm. 3, and using the fact that $\gamma_k$ is decreasing, we arrive at
$$\sum_{k=0}^{K-1}\left[f_{S_k}(x_k) - f_{S_k}(x^*)\right] \le \frac{D^2}{2\gamma_{K-1}} + \sum_{k=0}^{K-1}\frac{\gamma_kG^2}{2}.$$
Now we use the fact that, since $\frac{c_0\gamma_\ell}{c_k} \le \gamma_k \le \frac{c_0\gamma_b}{c_k}$ by Lemma 6, we have
$$\frac{1}{\gamma_{K-1}} \le \frac{c_{K-1}}{c_0\gamma_\ell} \qquad \text{and} \qquad \gamma_k \le \frac{c_0\gamma_b}{c_k}.$$
We conclude by taking the expectation and using Jensen's inequality.

Remark 11. The bound in Cor. 3 does not depend on $\sigma_B^2$, while the one in Cor. 2 does. This is because the proof is different, and does not rely on bounding squared gradients with function suboptimalities (one cannot, if smoothness does not hold). Similarly, usual bounds for non-smooth optimization do not depend on the subgradient variance but instead on $G$ [23, 11, 12].

G Further experimental results

• Synthetic Dataset: Following [16], we generate n = 500 datapoints from a standardized Gaussian distribution in $\mathbb{R}^d$, with d = 100. We sample the corresponding labels at random. We consider a batch size B = 20 and either λ = 0 or λ = 1e−4.
• A1A dataset (standard normalization) from [6], consisting of 1605 datapoints in 123 dimensions. We consider again B = 20 but a substantial regularization with λ = 0.01.
• Breast Cancer dataset (standard normalization) [10], consisting of 569 datapoints in 39 dimensions. We consider a small batch size B = 5 with strong regularization λ = 0.1.

All experiments reported below are repeated 5 times. Shown are the mean and 2 standard deviations.

Tuning of DecSPS. DecSPS has two hyperparameters: the upper bound $\gamma_b$ on the first stepsize and the scaling constant $c_0$. As stated in the main paper, while Thm. 5 guarantees convergence for any positive value of these hyperparameters, the result of Thm. 3 suggests that using $c_0 = 1$ yields the best performance under the assumption that $\hat{\sigma}_B^2 \ll \hat{L}D^2$ (e.g. reasonable distance of initialization from the solution, and maximum gradient Lipschitz constant $L_{\max} = \max_iL_i > 1/\gamma_b$). For the definition of these quantities please refer to the main paper. In Fig. 3 in the main paper we showed that (1) $c_0 = 1$ is indeed the best choice in this setting and (2) the performance of SGD with DecSPS is almost independent of $\gamma_b$ at $c_0 = 1$. Similar findings hold for the A1A and Breast Cancer datasets, as shown in Figure 7. For A1A, we can see that the dynamics is almost independent of $\gamma_b$ at $c_0 = 1$ and that, at $\gamma_b = 10$, $c_0 = 1$ indeed yields the best performance. The findings are similar for the Breast Cancer dataset; however, there we see that at $\gamma_b = 10$, $c_0 = 5$ yields the best final suboptimality, yet $c_0 = 1$ is clearly the best tradeoff between convergence speed and final accuracy.
[1] H. Asi and J. C. Duchi. The importance of better models in stochastic optimization. Proceedings of the National Academy of Sciences, 116(46):22924-22930, 2019.
[2] H. Asi and J. C. Duchi. Stochastic (approximate) proximal point methods: Convergence, optimality, and adaptivity. SIAM Journal on Optimization, 29(3):2257-2290, 2019.
[3] L. Berrada, A. Zisserman, and M. P. Kumar. Training neural networks for and by interpolation. In International Conference on Machine Learning, 2020.
[4] L. Bottou, F. E. Curtis, and J. Nocedal. Optimization methods for large-scale machine learning. SIAM Review, 60(2):223-311, 2018.
[5] S. Boyd, L. Xiao, and A. Mutapcic. Subgradient methods. Lecture Notes of EE392o, Stanford University, Autumn Quarter, 2004:2004-2005, 2003.
[6] C.-C. Chang. LIBSVM: a library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2011.
[7] D. Davis, D. Drusvyatskiy, K. J. MacPhee, and C. Paquette. Subgradient methods for sharp weakly convex functions. Journal of Optimization Theory and Applications, 179(3):962-982, 2018.
[8] A. Défossez, L. Bottou, F. Bach, and N. Usunier. A simple convergence proof of Adam and Adagrad. arXiv preprint arXiv:2003.02395, 2020.
[9] R. D'Orazio, N. Loizou, I. Laradji, and I. Mitliagkas. Stochastic mirror descent: Convergence analysis and adaptive variants via the mirror stochastic Polyak stepsize. arXiv preprint arXiv:2110.15412, 2021.
[10] D. Dua and C. Graff. UCI machine learning repository, 2017. URL http://archive.ics.uci.edu/ml.
[11] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(7), 2011.
[12] A. Ene and H. L. Nguyen. Adaptive and universal algorithms for variational inequalities with optimal convergence. arXiv preprint arXiv:2010.07799, 2020.
[13] S. Ghadimi and G. Lan. Stochastic first- and zeroth-order methods for nonconvex stochastic programming. SIAM Journal on Optimization, 23(4), 2013.
[14] I. Goodfellow, Y. Bengio, and A. Courville. Deep Learning. MIT Press, 2016.
[15] R. Gower, O. Sebbouh, and N. Loizou. SGD for structured nonconvex functions: Learning rates, minibatching and interpolation. In International Conference on Artificial Intelligence and Statistics, 2021.
[16] R. M. Gower, N. Loizou, X. Qian, A. Sailanbayev, E. Shulgin, and P. Richtárik. SGD: General analysis and improved rates. In International Conference on Machine Learning, 2019.
[17] R. M. Gower, A. Defazio, and M. Rabbat. Stochastic Polyak stepsize with a moving target. arXiv preprint arXiv:2106.11851, 2021.
[18] E. Hazan and S. Kakade. Revisiting the Polyak step size. arXiv preprint arXiv:1905.00313, 2019.
[19] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[20] H. Kushner and G. G. Yin. Stochastic Approximation and Recursive Algorithms and Applications, volume 35. Springer Science & Business Media, 2003.
[21] K. Y. Levy, A. Yurtsever, and V. Cevher. Online adaptive methods, universality and acceleration. Advances in Neural Information Processing Systems, 31, 2018.
[22] N. Loizou, S. Vaswani, I. H. Laradji, and S. Lacoste-Julien. Stochastic Polyak step-size for SGD: An adaptive learning rate for fast convergence. In International Conference on Artificial Intelligence and Statistics, 2021.
[23] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19(4):1574-1609, 2009.
[24] A. M. Oberman and M. Prazeres. Stochastic gradient descent with Polyak's learning rate. arXiv preprint arXiv:1903.08688, 2019.
[25] B. Polyak. Introduction to Optimization. Optimization Software, Inc., Publications Division, New York, 1987.
[26] S. J. Reddi, S. Kale, and S. Kumar. On the convergence of Adam and beyond. In International Conference on Learning Representations, 2018.
[27] H. Robbins and S. Monro. A stochastic approximation method. The Annals of Mathematical Statistics, 1951.
L4: Practical loss-based stepsize adaptation for deep learning. M Rolinek, G Martius, Advances in neural information processing systems. 31M. Rolinek and G. Martius. L4: Practical loss-based stepsize adaptation for deep learning. Advances in neural information processing systems, 31, 2018.
Lecture 6.5 -RMSprop, Coursera: Neural networks for machine learning. T Tieleman, G Hinton, University of TorontoTechnical ReportT. Tieleman and G. Hinton. Lecture 6.5 -RMSprop, Coursera: Neural networks for machine learning. University of Toronto, Technical Report, 2012.
Sequential convergence of AdaGrad algorithm for smooth convex optimization. C Traoré, E Pauwels, Operations Research Letters. 494C. Traoré and E. Pauwels. Sequential convergence of AdaGrad algorithm for smooth convex optimization. Operations Research Letters, 49(4):452-458, 2021.
Adaptive gradient methods converge faster with over-parameterization (but you should do a line-search). S Vaswani, I Laradji, F Kunstner, S Y Meng, M Schmidt, S Lacoste-Julien, arXiv:2006.06835arXiv preprintS. Vaswani, I. Laradji, F. Kunstner, S. Y. Meng, M. Schmidt, and S. Lacoste-Julien. Adaptive gradient methods converge faster with over-parameterization (but you should do a line-search). arXiv preprint arXiv:2006.06835, 2020.
Adagrad stepsizes: Sharp convergence over nonconvex landscapes. R Ward, X Wu, L Bottou, International Conference on Machine Learning. R. Ward, X. Wu, and L. Bottou. Adagrad stepsizes: Sharp convergence over nonconvex landscapes. In International Conference on Machine Learning, 2019.
Linear convergence of adaptive stochastic gradient descent. Y Xie, X Wu, R Ward, International Conference on Artificial Intelligence and Statistics. Y. Xie, X. Wu, and R. Ward. Linear convergence of adaptive stochastic gradient descent. In International Conference on Artificial Intelligence and Statistics, 2020.
Understanding deep learning (still) requires rethinking generalization. C Zhang, S Bengio, M Hardt, B Recht, O Vinyals, Communications of the ACM. 643C. Zhang, S. Bengio, M. Hardt, B. Recht, and O. Vinyals. Understanding deep learning (still) requires rethinking generalization. Communications of the ACM, 64(3):107-115, 2021.
Why are adaptive methods good for attention models?. J Zhang, S P Karimireddy, A Veit, S Kim, S Reddi, S Kumar, S Sra, Advances in Neural Information Processing Systems. 33J. Zhang, S. P. Karimireddy, A. Veit, S. Kim, S. Reddi, S. Kumar, and S. Sra. Why are adaptive methods good for attention models? Advances in Neural Information Processing Systems, 33, 2020.
| [
"https://github.com/aorvieto/DecSPS."
] |
[
"Detecting vector charge with extreme mass ratio inspirals onto Kerr black holes",
"Detecting vector charge with extreme mass ratio inspirals onto Kerr black holes"
] | [
"Chao Zhang [email protected]†[email protected]‡[email protected]§[email protected] \nSchool of Physics and Astronomy\nShanghai Jiao Tong University\n200240ShanghaiChina\n",
"Hong Guo \nShanghai Frontier Research Center for Gravitational Wave Detection\nShanghai Jiao Tong University\n200240ShanghaiChina\n",
"Yungui Gong \nSchool of Physics\nHuazhong University of Science and Technology\n430074WuhanHubeiChina\n",
"Bin Wang \nCenter for Gravitation and Cosmology\nYangzhou University\n225009YangzhouChina\n\nSchool of Aeronautics and Astronautics\nShanghai Jiao Tong University\n200240ShanghaiChina\n"
] | [
"School of Physics and Astronomy\nShanghai Jiao Tong University\n200240ShanghaiChina",
"Shanghai Frontier Research Center for Gravitational Wave Detection\nShanghai Jiao Tong University\n200240ShanghaiChina",
"School of Physics\nHuazhong University of Science and Technology\n430074WuhanHubeiChina",
"Center for Gravitation and Cosmology\nYangzhou University\n225009YangzhouChina",
"School of Aeronautics and Astronautics\nShanghai Jiao Tong University\n200240ShanghaiChina"
] | [] | Extreme mass ratio inspirals (EMRIs) are excellent sources for space-based observatories to explore the properties of black holes and test no-hair theorems. We consider EMRIs with a charged compact object inspiralling onto a Kerr black hole in quasi-circular orbits. Using the Teukolsky and generalized Sasaki-Nakamura formalisms for the gravitational and vector perturbations about a Kerr black hole, we numerically calculate the energy fluxes for both gravitational and vector perturbations induced by a charged particle moving in equatorial circular orbits. With one-year observations of EMRIs, we apply the Fisher information matrix method to estimate the charge uncertainty detected by space-based gravitational wave detectors such as the Laser Interferometer Space Antenna, TianQin, and Taiji, and we find that it is possible to detect vector charge as small as q ∼ 0.0049. The results show that EMRIs composed of a Kerr black hole with a higher spin a and lighter mass M , and a secondary charged object with more vector charge give smaller relative error on the charge, thus constrain the charge better. The positive spin of the Kerr black hole can decrease the charge uncertainty by about one or two orders of magnitude. * | null | [
"https://export.arxiv.org/pdf/2301.05915v1.pdf"
] | 255,941,690 | 2301.05915 | 875bc7c7ec397d35757ade9efcdb211da481b515 |
Detecting vector charge with extreme mass ratio inspirals onto Kerr black holes
14 Jan 2023
Chao Zhang
School of Physics and Astronomy
Shanghai Jiao Tong University
200240ShanghaiChina
Hong Guo
Shanghai Frontier Research Center for Gravitational Wave Detection
Shanghai Jiao Tong University
200240ShanghaiChina
Yungui Gong
School of Physics
Huazhong University of Science and Technology
430074WuhanHubeiChina
Bin Wang
Center for Gravitation and Cosmology
Yangzhou University
225009YangzhouChina
School of Aeronautics and Astronautics
Shanghai Jiao Tong University
200240ShanghaiChina
Detecting vector charge with extreme mass ratio inspirals onto Kerr black holes
14 Jan 2023
Extreme mass ratio inspirals (EMRIs) are excellent sources for space-based observatories to explore the properties of black holes and test no-hair theorems. We consider EMRIs with a charged compact object inspiralling onto a Kerr black hole in quasi-circular orbits. Using the Teukolsky and generalized Sasaki-Nakamura formalisms for the gravitational and vector perturbations about a Kerr black hole, we numerically calculate the energy fluxes for both gravitational and vector perturbations induced by a charged particle moving in equatorial circular orbits. With one-year observations of EMRIs, we apply the Fisher information matrix method to estimate the charge uncertainty detected by space-based gravitational wave detectors such as the Laser Interferometer Space Antenna, TianQin, and Taiji, and we find that it is possible to detect vector charge as small as q ∼ 0.0049. The results show that EMRIs composed of a Kerr black hole with a higher spin a and lighter mass M , and a secondary charged object with more vector charge give smaller relative error on the charge, thus constrain the charge better. The positive spin of the Kerr black hole can decrease the charge uncertainty by about one or two orders of magnitude. *
I. INTRODUCTION
In 2015 the Laser Interferometer Gravitational-Wave Observatory (LIGO) Scientific Collaboration and the Virgo Collaboration [1, 2] directly observed the first gravitational wave (GW) event, GW150914, coming from the coalescence of binary black holes (BHs), and the discovery opened a new window to understand the properties of BHs and gravity in the nonlinear and strong field regimes. Until now, tens of GW events in the frequency range of tens to hundreds of hertz have been confirmed [3][4][5][6][7][8][9]. However, due to the seismic noise and gravity gradient noise, the ground-based GW observatories can only measure transient GWs in the frequency range 10-10^3 Hz, radiated by the coalescences of stellar-mass compact binaries. Apart from transient GW sources, the future space-based GW detectors like the Laser Interferometer Space Antenna (LISA) [10,11], TianQin [12] and Taiji [13] will help us uncover unprecedented information about GW sources and fundamental physics [14][15][16][17][18].
One of the most conspicuous sources for the future space-based GW detectors is the extreme mass-ratio inspiral (EMRI) [19,20]. EMRIs, which consist of a stellar-mass compact object (secondary object) with mass m_p ∼ 1-100 M, such as a BH, neutron star, or white dwarf, orbiting around a supermassive black hole (SMBH) (primary object) with mass M ∼ 10^5-10^7 M, with the mass ratio m_p/M in the range of 10^{-7}-10^{-4}, radiate millihertz GWs expected to be observed by the future space-based GW detectors.
Future detections of EMRIs with space-based detectors can provide highly precise measurements of source parameters such as the BH masses, spins, etc. In [21], the authors introduced a family of approximate waveforms for EMRIs to make parameter estimation with LISA. For a typical source of m_p = 10 M and M = 10^6 M with a signal-to-noise ratio (SNR) of 30, LISA can determine the masses of both primary and secondary objects to within a fractional error of ∼ 10^{-4}, measure the spin of the primary object to within ∼ 10^{-4}, and localize the source on the sky within ∼ 10^{-3} steradians. The improved augmented analytic kludge model [22] provides more accurate and efficient GW waveforms to improve the errors of parameters by one order of magnitude. Thus, EMRIs can be used to precisely measure the slope of the black-hole mass function [23] or as standard sirens [24] to constrain cosmological parameters and investigate the expansion history of the Universe [25][26][27]. The observations of EMRIs can also help us explore gravitational physics. For example, they can be used to figure out the spacetime structure around the central SMBH to high precision, allowing us to test if the spacetime geometry is described by general relativity or an alternative theory, and to analyze environments such as dark matter surrounding the central SMBH [19, 20]. In [55][56][57], the authors investigated the eccentricity and orbital evolution of BH binaries under the influence of accretion in addition to the scalar/vector and gravitational radiations, and discussed the competition between radiative mechanisms and accretion effects on eccentricity evolution. However, these discussions were mainly based on the Newtonian orbit and dipole emission. A generic, fully-relativistic formalism to study EMRIs in spherically symmetric and non-vacuum BH spacetimes was established in [58].
Considering a secondary object of mass m_p orbiting galactic BHs (GBHs) immersed in an astrophysical environment, like an accretion disk or a dark matter halo, the authors found that the relative flux difference at ωM = 0.02 between a vacuum BH and a GBH with halo mass M_halo = 0.1 M and lengthscale a_0 = 10^2 M_halo or 10^3 M_halo is ∼ 10% or 1%, respectively. The results clearly indicate that EMRIs can constrain small-scale matter distributions around GBHs [58].
According to the no-hair theorem, any BH can be described by three parameters: the mass, angular momentum, and electric charge. Current observations have not yet been able to confirm the no-hair theorem or the existence of extra fields besides the gravitational fields in modified gravity theories. The coupling between scalar fields and higher-order curvature invariants can invalidate the no-hair theorem, so that BHs can carry scalar charge which depends on the mass of the BH [59][60][61][62]. The possible detection of scalar fields with EMRIs was discussed in [41][42][43][44][45]53]. SMBHs are usually neutral because of long-time charge dissipation through the presence of the plasma around them or the spontaneous production of electron-positron pairs [63][64][65][66][67]. However, the existence of stable charged astrophysical stellar-mass compact objects such as BHs, neutron stars, white dwarfs, etc. in nature remains controversial. In Refs. [68][69][70][71][72][73][74][75], it was shown that highly compact stars, whose radius is on the verge of forming an event horizon, can carry a large amount of charge, with a charge-to-mass ratio of order one. The balance between the attractive gravitational force from the matter part and the repulsive force from the electrostatic part is unstable, and charged compact stars will collapse to a charged BH due to a decrease in the electric field [76]. Also, BHs can be charged through the Wald mechanism by selectively accreting charge in a magnetic field [77], or by accreting minicharged dark matter beyond the standard model [78]. The charge carried by compact objects can affect the parameter estimation of the chirp mass and BH merger [79,80] and the merger rate distribution of primordial BH binaries [81,82]. The electromagnetic self-force acting on a charged particle in an equatorial circular orbit of a Kerr BH was calculated in [38]. It showed that the dissipative self-force balances with the sum of the electromagnetic flux radiated to infinity and down the BH horizon, and that prograde orbits can stimulate BH superradiance, although the superradiance is not sufficient to support floating orbits even at the innermost stable circular orbit (ISCO) [38]. The observations of GW150914 can constrain the charge-to-mass ratio of charged BHs to be as high as 0.3 [83].
In [49], the energy fluxes for both gravitational and electromagnetic waves induced by a charged particle orbiting around a Schwarzschild BH were studied. It was demonstrated that the electric charge leaves a significant imprint on the phase of GWs and is observable with space-based GW detectors. In this paper, based on the Teukolsky formalism for BH perturbations [84][85][86], we numerically calculate the energy fluxes for both tensor and vector perturbations induced by a charged particle moving in an equatorial circular orbit around a Kerr BH and the orbital evolution of EMRIs up to the ISCO, and we apply the methods of faithfulness and the Fisher information matrix (FIM) to assess the capability of space-based GW detectors such as LISA, TianQin, and Taiji to detect the vector charge carried by the secondary compact object. The paper is organized as follows. In Sec. II, we introduce the model with vector charge and the Teukolsky perturbation formalism. In Sec. III, we give the source terms as well as the procedures for solving the inhomogeneous Teukolsky equations. Then we numerically calculate the energy fluxes for gravitational and vector fields using the Teukolsky and generalized Sasaki-Nakamura (SN) formalisms in the background of a Kerr BH. In Sec. IV, we give the numerical results of the energy fluxes falling onto the horizon and radiated to infinity for gravitational and vector fields; then we use the dephasing of GWs to constrain the charge. In Sec. V, we calculate the faithfulness between GWs with and without vector charge and perform the FIM analysis to estimate the errors of detecting vector charge with LISA, TianQin, and Taiji. Sec. VI is devoted to conclusions and discussions. In this paper, we set c = G = M = 1.
II. EINSTEIN-MAXWELL FIELD EQUATIONS
The simplest model including vector charges is the Einstein-Maxwell theory, in which the charge couples to a massless vector field through the action

S = \int d^4x \sqrt{-g}\left[\frac{R}{16\pi} - \frac{1}{4}F_{\mu\nu}F^{\mu\nu} - A_\mu J^\mu\right] - S_{\rm matter}(g_{\mu\nu}, \Phi),  (1)
where R is the Ricci scalar, A_\mu is a massless vector field, F_{\mu\nu} = \nabla_\mu A_\nu - \nabla_\nu A_\mu is the field strength, \Phi is the matter field, and J^\mu is the electric current density. Varying the action with respect to the metric tensor and the vector field yields the Einstein-Maxwell field equations

G^{\mu\nu} = 8\pi T^{\mu\nu}_p + 8\pi T^{\mu\nu}_e,  (2)

\nabla_\nu F^{\mu\nu} = 4\pi J^\mu,  (3)
where G^{\mu\nu} is the Einstein tensor, and T^{\mu\nu}_p and T^{\mu\nu}_e are the particle's material stress-energy tensor and the vector stress-energy tensor, respectively. Since the vector stress-energy T^{\mu\nu}_e is quadratic in the vector field, its contribution to the background metric is of second order. For an EMRI system (m_p ≪ M) composed of a small compact object with mass m_p and charge-to-mass ratio q orbiting around a Kerr BH with mass M and spin Ma, we can ignore the contribution to the background metric from the vector field. The perturbed Einstein and Maxwell equations for EMRIs are

G^{\mu\nu} = 8\pi T^{\mu\nu}_p,  (4)

\nabla_\nu F^{\mu\nu} = 4\pi J^\mu,  (5)
where

T^{\mu\nu}_p(x) = m_p \int d\tau\, u^\mu u^\nu \frac{\delta^{(4)}[x - z(\tau)]}{\sqrt{-g}},  (6)

J^\mu(x) = q\, m_p \int d\tau\, u^\mu \frac{\delta^{(4)}[x - z(\tau)]}{\sqrt{-g}},  (7)
and u^\mu is the four-velocity of the particle. We use the Newman-Penrose formalism [87] to study perturbations around a Kerr BH induced by a charged particle with mass m_p and charge q.
In Boyer-Lindquist coordinates, the metric of a Kerr BH is

ds^2 = \left(1 - \frac{2r}{\Sigma}\right)dt^2 + \frac{4ar\sin^2\theta}{\Sigma}\,dt\,d\varphi - \frac{\Sigma}{\Delta}\,dr^2 - \Sigma\, d\theta^2 - \sin^2\theta\left(r^2 + a^2 + \frac{2a^2 r\sin^2\theta}{\Sigma}\right)d\varphi^2,  (8)
where Σ = r 2 + a 2 cos 2 θ, and ∆ = r 2 − 2r + a 2 . When a = 0, the metric reduces to the Schwarzschild metric. Based on the metric (8), we construct the null tetrad,
l^\mu = [(r^2 + a^2)/\Delta,\ 1,\ 0,\ a/\Delta],
n^\mu = [r^2 + a^2,\ -\Delta,\ 0,\ a]/(2\Sigma),
m^\mu = [ia\sin\theta,\ 0,\ 1,\ i/\sin\theta]/(2^{1/2}(r + ia\cos\theta)),
\bar m^\mu = [-ia\sin\theta,\ 0,\ 1,\ -i/\sin\theta]/(2^{1/2}(r - ia\cos\theta)).  (9)
The propagating vector field is described by the two complex quantities

\phi_0 = F_{\mu\nu} l^\mu m^\nu, \qquad \phi_2 = F_{\mu\nu} \bar m^\mu n^\nu.  (10)
The propagating gravitational field is described by the two complex Newman-Penrose variables

\psi_0 = -C_{\alpha\beta\gamma\delta}\, l^\alpha m^\beta l^\gamma m^\delta, \qquad \psi_4 = -C_{\alpha\beta\gamma\delta}\, n^\alpha \bar m^\beta n^\gamma \bar m^\delta,  (11)
where C_{\alpha\beta\gamma\delta} is the Weyl tensor. A single master equation for tensor (s = -2) and vector (s = -1) perturbations was derived as [84]

\left[\frac{(r^2 + a^2)^2}{\Delta} - a^2\sin^2\theta\right]\frac{\partial^2\psi}{\partial t^2} + \frac{4ar}{\Delta}\frac{\partial^2\psi}{\partial t\,\partial\varphi} + \left[\frac{a^2}{\Delta} - \frac{1}{\sin^2\theta}\right]\frac{\partial^2\psi}{\partial\varphi^2} - \Delta^{-s}\frac{\partial}{\partial r}\left(\Delta^{s+1}\frac{\partial\psi}{\partial r}\right) - \frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial\psi}{\partial\theta}\right) - 2s\left[\frac{a(r-1)}{\Delta} + \frac{i\cos\theta}{\sin^2\theta}\right]\frac{\partial\psi}{\partial\varphi} - 2s\left[\frac{r^2 - a^2}{\Delta} - r - ia\cos\theta\right]\frac{\partial\psi}{\partial t} + (s^2\cot^2\theta - s)\psi = 4\pi\Sigma T,  (12)
the explicit field ψ and the corresponding source T are given in Table I [84]. In terms of the eigenfunctions s S lm (θ) [84,88], the field ψ can be written as
\psi = \int d\omega \sum_{l,m} R_{\omega lm}(r)\, {}_sS_{lm}(\theta)\, e^{-i\omega t + im\varphi},  (13)

s     ψ                          T
-1    (r - ia cos θ)^2 φ_2       (r - ia cos θ)^2 J_2
-2    (r - ia cos θ)^4 ψ_4       2(r - ia cos θ)^4 T_4
where the radial function R_{\omega lm}(r) satisfies the inhomogeneous Teukolsky equation

\Delta^{-s}\frac{d}{dr}\left(\Delta^{s+1}\frac{dR_{\omega lm}}{dr}\right) - V_T(r)\, R_{\omega lm} = T_{\omega lm},  (14)
with the potential

V_T = -\frac{K^2 - 2is(r-1)K}{\Delta} - 4is\omega r + \lambda_{lm\omega},  (15)
where K = (r^2 + a^2)\omega - am, \lambda_{lm\omega} is the corresponding eigenvalue, which can be computed by the BH Perturbation Toolkit [89], and the source T_{\omega lm}(r) is

T_{\omega lm}(r) = \frac{1}{2\pi}\int dt\, d\Omega\, 4\pi\Sigma T\, {}_sS_{lm}(\theta)\, e^{i\omega t - im\varphi}.  (16)
For the equatorial circular trajectory at r_0 under consideration, the sources are

T^{\mu\nu}_p(x) = \frac{m_p}{r_0^2}\frac{u^\mu u^\nu}{u^t}\,\delta(r - r_0)\,\delta(\cos\theta)\,\delta(\varphi - \bar\omega t), \qquad J^\mu(x) = q\,\frac{m_p}{r_0^2}\frac{u^\mu}{u^t}\,\delta(r - r_0)\,\delta(\cos\theta)\,\delta(\varphi - \bar\omega t),  (17)
where \bar\omega is the orbital angular frequency. Geodesic motion in Kerr spacetime admits three constants of motion: the specific energy \hat E, the angular momentum \hat L, and the Carter constant \hat Q, and the geodesic equations are

m_p\Sigma\frac{dt}{d\tau} = \hat E\frac{\varpi^4}{\Delta} + a\hat L\left(1 - \frac{\varpi^2}{\Delta}\right) - a^2\hat E\sin^2\theta,  (18)

m_p\Sigma\frac{dr}{d\tau} = \pm\sqrt{V_r(r)},  (19)

m_p\Sigma\frac{d\theta}{d\tau} = \pm\sqrt{V_\theta(\theta)},  (20)

m_p\Sigma\frac{d\varphi}{d\tau} = a\hat E\left(\frac{\varpi^2}{\Delta} - 1\right) - \frac{a^2\hat L}{\Delta} + \hat L\csc^2\theta,  (21)
where \varpi \equiv \sqrt{r^2 + a^2}, and the radial and polar potentials are

V_r(r) = \left(\hat E\varpi^2 - a\hat L\right)^2 - \Delta\left[r^2 + (\hat L - a\hat E)^2 + \hat Q\right],  (22)

V_\theta(\theta) = \hat Q - \hat L^2\cot^2\theta - a^2\left(1 - \hat E^2\right)\cos^2\theta.  (23)
In the adiabatic approximation, for a quasi-circular orbit on the equatorial plane, the coordinates r and θ are considered as constants, so only Eqs. (18) and (21) remain dynamical; the circular-orbit conditions V_r(r_0) = dV_r/dr|_{r_0} = 0 then give

\hat E = m_p\,\frac{r_0^{3/2} - 2r_0^{1/2} \pm a}{r_0^{3/4}\left(r_0^{3/2} - 3r_0^{1/2} \pm 2a\right)^{1/2}},  (24)

\hat L = m_p\,\frac{\pm\left(r_0^2 \mp 2ar_0^{1/2} + a^2\right)}{r_0^{3/4}\left(r_0^{3/2} - 3r_0^{1/2} \pm 2a\right)^{1/2}},  (25)

\hat Q = 0.  (26)
The orbital angular frequency is

\bar\omega \equiv \frac{d\varphi}{dt} = \frac{\pm 1}{r_0^{3/2} \pm a},  (27)
where ± corresponds to co-rotating and counter-rotating, respectively. In the following discussions, we use positive a for co-rotating cases and negative a for counter-rotating cases.
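These closed-form expressions are simple to evaluate numerically. The following is a minimal Python sketch (our own; the function name and the sample radius are illustrative, not from the paper), implementing Eqs. (24), (25) and (27) in units G = c = M = 1:

import numpy as np

def circular_orbit(r0, a, m_p=1.0, sign=+1):
    # sign = +1 for co-rotating, sign = -1 for counter-rotating orbits
    denom = r0**0.75*np.sqrt(r0**1.5 - 3.0*np.sqrt(r0) + sign*2.0*a)
    E_hat = m_p*(r0**1.5 - 2.0*np.sqrt(r0) + sign*a)/denom            # Eq. (24)
    L_hat = m_p*sign*(r0**2 - sign*2.0*a*np.sqrt(r0) + a**2)/denom    # Eq. (25)
    omega_bar = sign/(r0**1.5 + sign*a)                               # Eq. (27)
    return E_hat, L_hat, omega_bar

# e.g. a co-rotating orbit at r0 = 6 around a Kerr BH with a = 0.9
print(circular_orbit(6.0, 0.9))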
III. NUMERICAL CALCULATION FOR THE ENERGY FLUX
The homogeneous Teukolsky equation (14) admits two linearly independent solutions, R^{in}_{\omega lm} and R^{up}_{\omega lm}, with the following asymptotic behavior at the horizon r_+ and at infinity:

R^{in}_{\omega lm} = \begin{cases} B^{tran}\Delta^{-s} e^{-i\kappa r_*}, & r \to r_+ \\ B^{out}\, e^{i\omega r_*}/r^{2s+1} + B^{in}\, e^{-i\omega r_*}/r, & r \to +\infty \end{cases}  (28)

R^{up}_{\omega lm} = \begin{cases} D^{out} e^{i\kappa r_*} + D^{in}\Delta^{-s} e^{-i\kappa r_*}, & r \to r_+ \\ D^{tran}\, e^{i\omega r_*}/r^{2s+1}, & r \to +\infty \end{cases}  (29)
where \kappa = \omega - ma/(2r_+), r_\pm = 1 \pm \sqrt{1 - a^2}, and the tortoise radius of the Kerr metric is

r_* = r + \frac{2r_+}{r_+ - r_-}\ln\frac{r - r_+}{2} - \frac{2r_-}{r_+ - r_-}\ln\frac{r - r_-}{2}.  (30)
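These horizon quantities recur throughout the numerics; a small helper (our own sketch, assuming |a| < 1) evaluating r_±, κ and the tortoise coordinate of Eq. (30) reads:

import numpy as np

def kerr_rstar(r, a):
    # tortoise coordinate of Eq. (30) in units G = c = M = 1; valid for r > r_+
    rp = 1.0 + np.sqrt(1.0 - a**2)
    rm = 1.0 - np.sqrt(1.0 - a**2)
    return (r + 2.0*rp/(rp - rm)*np.log((r - rp)/2.0)
              - 2.0*rm/(rp - rm)*np.log((r - rm)/2.0))

def kappa(omega, m, a):
    # near-horizon wavenumber kappa = omega - m a / (2 r_+)
    rp = 1.0 + np.sqrt(1.0 - a**2)
    return omega - m*a/(2.0*rp)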
The solution R^{in}_{\omega lm} is purely ingoing at the horizon, while R^{up}_{\omega lm} is purely outgoing at infinity. With the help of these homogeneous solutions, the solution to Eq. (14) is

R_{\omega lm}(r) = \frac{1}{W}\left[R^{in}_{\omega lm}(r)\int_r^{+\infty}\Delta^s R^{up}_{\omega lm}\, T_{\omega lm}\, dr' + R^{up}_{\omega lm}(r)\int_{r_+}^{r}\Delta^s R^{in}_{\omega lm}\, T_{\omega lm}\, dr'\right],  (31)
with the constant Wronskian given by

W = \Delta^{s+1}\left(R^{in}_{\omega lm}\frac{dR^{up}_{\omega lm}}{dr} - R^{up}_{\omega lm}\frac{dR^{in}_{\omega lm}}{dr}\right) = 2i\omega B^{in} D^{tran}.  (32)
The solution is purely outgoing at infinity and purely ingoing at the horizon,

R_{\omega lm}(r \to r_+) = Z^{\infty}_{\omega lm}\,\Delta^{-s} e^{-i\kappa r_*}, \qquad R_{\omega lm}(r \to \infty) = Z^{H}_{\omega lm}\, r^{-2s-1} e^{i\omega r_*},  (33)

with

Z^{\infty}_{\omega lm} = \frac{B^{tran}}{W}\int_{r_+}^{+\infty}\Delta^s R^{up}_{\omega lm}\, T_{\omega lm}\, dr, \qquad Z^{H}_{\omega lm} = \frac{D^{tran}}{W}\int_{r_+}^{+\infty}\Delta^s R^{in}_{\omega lm}\, T_{\omega lm}\, dr.  (34)
For a circular equatorial orbit with orbital angular frequency \bar\omega, we get

Z^{H,\infty}_{\omega lm} = \delta(\omega - m\bar\omega)\, A^{H,\infty}_{\omega lm}.  (35)
For s = -1, the energy fluxes at infinity and the horizon read

\dot E^{\infty}_q = \left(\frac{dE}{dt}\right)^{\infty}_{EM} = \sum_{l=1}^{\infty}\sum_{m=1}^{l}\frac{|A^{H}_{\omega lm}|^2}{\pi}, \qquad \dot E^{H}_q = \left(\frac{dE}{dt}\right)^{H}_{EM} = \sum_{l=1}^{\infty}\sum_{m=1}^{l}\alpha^E_{lm}\frac{|A^{\infty}_{\omega lm}|^2}{\pi},  (36)

where the coefficient \alpha^E_{lm} is [86]

\alpha^E_{lm} = \frac{128\,\omega\kappa\, r_+^3\,(\kappa^2 + 4\epsilon^2)}{|B^E|^2},  (37)

with \epsilon = \sqrt{1 - a^2}/(4r_+)
and

|B^E|^2 = \lambda^2_{lm\omega} + 4ma\omega - 4a^2\omega^2.  (38)
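The absorption coefficient \alpha^E_{lm} is purely algebraic once the spheroidal eigenvalue \lambda_{lm\omega} is available (e.g. from the BH Perturbation Toolkit); a Python sketch of Eqs. (37)-(38), with \lambda_{lm\omega} assumed to be supplied, is:

import numpy as np

def alpha_E(omega, m, a, lam):
    # Eqs. (37)-(38): horizon absorption coefficient for s = -1
    rp  = 1.0 + np.sqrt(1.0 - a**2)
    eps = np.sqrt(1.0 - a**2)/(4.0*rp)
    kap = omega - m*a/(2.0*rp)
    B2  = lam**2 + 4.0*m*a*omega - 4.0*(a*omega)**2
    return 128.0*omega*kap*rp**3*(kap**2 + 4.0*eps**2)/B2

Note that alpha_E inherits the sign of κ, which is what produces the superradiant (negative horizon flux) regime discussed in Sec. IV.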
For s = -2, the gravitational energy fluxes at infinity and the horizon are given by

\dot E^{\infty}_{grav} = \left(\frac{dE}{dt}\right)^{\infty}_{GW} = \sum_{l=2}^{\infty}\sum_{m=1}^{l}\frac{|A^{H}_{\omega lm}|^2}{2\pi\omega^2}, \qquad \dot E^{H}_{grav} = \left(\frac{dE}{dt}\right)^{H}_{GW} = \sum_{l=2}^{\infty}\sum_{m=1}^{l}\alpha^G_{lm}\frac{|A^{\infty}_{\omega lm}|^2}{2\pi\omega^2},  (39)

where the coefficient \alpha^G_{lm} is [91]

\alpha^G_{lm} = \frac{256\,(2r_+)^5\,\kappa\,(\kappa^2 + 4\epsilon^2)(\kappa^2 + 16\epsilon^2)\,\omega^3}{|B^G|^2},  (40)

and

|B^G|^2 = \left[(\lambda_{lm\omega} + 2)^2 + 4ma\omega - 4a^2\omega^2\right]\left[\lambda^2_{lm\omega} + 36ma\omega - 36a^2\omega^2\right] + (2\lambda_{lm\omega} + 3)\left(96a^2\omega^2 - 48ma\omega\right) + 144\omega^2(1 - a^2).  (41)
Therefore, the total energy flux emitted from the EMRI reads

\dot E = \dot E_q + \dot E_{grav},  (42)

where

\dot E_q = \dot E^{\infty}_q + \dot E^{H}_q, \qquad \dot E_{grav} = \dot E^{\infty}_{grav} + \dot E^{H}_{grav}.  (43)
The detailed derivation of the above results is given in Appendix A 1. The energy flux emitted by tensor fields can be computed with the BH Perturbation Toolkit [89].
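To illustrate how the homogeneous radial solutions entering Eqs. (28)-(34) can be generated, the following Python sketch (our own, not the BH Perturbation Toolkit API) integrates the Schwarzschild limit a = 0, where the electromagnetic radial problem reduces to the Regge-Wheeler form d^2X/dr_*^2 + [ω^2 - V(r)]X = 0 with V = (1 - 2/r)l(l + 1)/r^2 in units G = c = M = 1, and extracts the asymptotic amplitudes B^in and B^out by matching to e^{∓iωr_*}:

import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import lambertw

def r_of_rstar(rs):
    # invert r_* = r + 2 ln(r/2 - 1) with the Lambert W function
    return 2.0*(1.0 + np.real(lambertw(np.exp(0.5*rs - 1.0))))

def ingoing_amplitudes(w, ell, rs_start=-40.0, rs_end=400.0):
    # integrate X'' = [V - w^2] X outward from purely ingoing data at the horizon
    def rhs(rs, y):
        r = r_of_rstar(rs)
        V = (1.0 - 2.0/r)*ell*(ell + 1.0)/r**2
        return [y[1], (V - w**2)*y[0]]
    y0 = [np.exp(-1j*w*rs_start), -1j*w*np.exp(-1j*w*rs_start)]
    sol = solve_ivp(rhs, (rs_start, rs_end), y0, rtol=1e-10, atol=1e-12)
    X, dX = sol.y[0, -1], sol.y[1, -1]
    B_out = (w*X - 1j*dX)/(2.0*w*np.exp(1j*w*rs_end))    # match to e^{+i w r_*}
    B_in  = (w*X + 1j*dX)/(2.0*w*np.exp(-1j*w*rs_end))   # match to e^{-i w r_*}
    return B_in, B_out

print([abs(z) for z in ingoing_amplitudes(w=0.3, ell=1)])

In the Kerr case one integrates the generalized Sasaki-Nakamura equation of Appendix A instead, precisely because its potential is short-ranged, so the constant-amplitude asymptotics can be imposed at finite radius.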
IV. RESULTS
The top panel of Fig. 1 shows the normalized vector energy flux m_p^{-2}M^2\dot E_q for a charged particle with different charge values q on a circular orbit about a Kerr BH with spin a = 0.9, as a function of the orbital radius. The vector energy flux is proportional to the square of the vector charge, q^2. The vector energy flux increases as the charged particle inspirals into the central Kerr BH. The ratio between the vector and gravitational energy fluxes is shown in the bottom panel of Fig. 1. Both the vector and gravitational fluxes are of the same order, (m_p/M)^2; the ratio of fluxes is independent of the mass ratio and increases with the orbital radius because the gravitational contribution falls off faster than the vector energy flux at large orbital radius. Figure 2 shows the normalized vector energy flux m_p^{-2}M^2\dot E_q and the ratio of energy fluxes \dot E_q/\dot E_{grav} as a function of the orbital radius for different values of a.
For larger a, the ISCO is smaller, resulting in a higher GW frequency at the coalescence. For the same orbital radius, the vector energy flux is slightly larger for smaller a. However, the total energy flux increases with a for one-year observations before the merger due to the smaller ISCO. Figure 3 shows the ratio of the energy flux falling onto the horizon to the energy flux radiated away to infinity, as a function of the orbital radius, for various spins a, and for the vector and gravitational fields. It is interesting to note that the sign of the ratio becomes negative for a Kerr BH with positive a (co-rotating orbit) at small orbital radius. In these cases, the vector and gravitational fields generate superradiance, leading to the extraction of energy from the horizon. Superradiance only happens when the coefficient κ in Eqs. (37) and (40) becomes negative, which means that the orbital motion slows down the rotation of the Kerr BH. Our results are consistent with those found in Ref. [38]. The extra energy leakage due to the vector field accelerates the coalescence of binaries.
Therefore, we expect the vector charge to leave a significant imprint on the GW phase over
one-year evolution before the merger for EMRIs. To detect the vector charge carried by the small compact object in EMRIs, we study the dephasing of GWs caused by the additional energy loss during inspirals. The observation time is one year before the merger, where
T_{obs} = \int_{f_{min}}^{f_{max}}\frac{1}{\dot f}\, df = 1\ {\rm year},  (44)

f_{max} = \min(f_{ISCO}, f_{up}), \qquad f_{min} = \max(f_{low}, f_{start}),  (45)
f = \bar\omega/\pi is the GW frequency, f_{ISCO} is the frequency at the ISCO [92], f_{start} is the initial frequency at t = 0, and the cutoff frequencies are f_{low} = 10^{-4} Hz and f_{up} = 1 Hz. The orbital evolution is determined by

\frac{dr}{dt} = -\dot E\left(\frac{d\hat E}{dr}\right)^{-1}, \qquad \frac{d\varphi_{orb}}{dt} = \pi f,  (46)
where \dot E = \dot E_q + \dot E_{grav}. The total number of GW cycles accumulated over one year before the merger is [93]

N = \int_{f_{min}}^{f_{max}}\frac{f}{\dot f}\, df.  (47)
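To make the evolution concrete, here is a schematic Python integration of Eqs. (46)-(47). It is an illustration only: the exact Teukolsky fluxes of Secs. II-III are replaced by Newtonian stand-ins (the quadrupole formula for \dot E_{grav} and the flat-space Larmor dipole power for \dot E_q), and the mass ratio, spin, charge and stopping radius are sample values of ours:

import numpy as np
from scipy.integrate import solve_ivp

m_p, a = 1e-5, 0.9                      # sample mass ratio and spin (G = c = M = 1)

def omega_bar(r):                       # Eq. (27), co-rotating
    return 1.0/(r**1.5 + a)

def E_hat(r):                           # Eq. (24), specific orbital energy
    return m_p*(r**1.5 - 2.0*np.sqrt(r) + a)/(r**0.75*np.sqrt(r**1.5 - 3.0*np.sqrt(r) + 2.0*a))

def flux(r, q):                         # stand-in for Eq. (42): quadrupole + Larmor dipole
    w = omega_bar(r)
    return (32.0/5.0)*m_p**2*r**4*w**6 + (2.0/3.0)*(q*m_p*w**2*r)**2

def n_cycles(q, r_start=10.0, r_end=2.42):   # r_end ~ r_ISCO(a = 0.9) + 0.1, approximate
    def rhs(t, y):
        r = y[0]
        dE_dr = (E_hat(r + 1e-6) - E_hat(r - 1e-6))/2e-6   # numerical dE/dr
        return [-flux(r, q)/dE_dr, omega_bar(r)/np.pi]     # dr/dt and dN/dt = f
    def plunge(t, y):
        return y[0] - r_end
    plunge.terminal = True
    sol = solve_ivp(rhs, (0.0, 1.0e9), [r_start, 0.0], events=plunge, rtol=1e-10)
    return sol.y[1, -1]

print("dephasing dN =", n_cycles(0.0) - n_cycles(0.01))

With the exact relativistic fluxes in place of the stand-ins, the same integration yields the dephasing ΔN shown in Fig. 4.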
V. PARAMETER ESTIMATION
To make the analysis more accurate and account for the degeneracy among parameters, we calculate the faithfulness between two GW waveforms and carry out parameter estimation with the FIM method.
A. Signals
We can obtain the inspiral trajectory from the adiabatic evolution in Eq. (46) and then compute GWs in the quadrupole approximation. The metric perturbation in the transverse-traceless (TT) gauge is

h^{TT}_{ij} = \frac{2}{d_L}\left(P_{il}P_{jm} - \frac{1}{2}P_{ij}P_{lm}\right)\ddot I^{lm},  (48)
where d_L is the luminosity distance of the source, P_{ij} = \delta_{ij} - n_i n_j is the projection operator acting onto GWs with the unit propagation direction n_j, \delta_{ij} is the Kronecker delta, and \ddot I^{ij} is the second time derivative of the mass quadrupole moment. The GW strain measured by the detector is

h(t) = h_+(t)F_+(t) + h_\times(t)F_\times(t),  (49)
where h_+(t) = A\cos[2\varphi_{orb} + 2\varphi_0](1 + \cos^2\iota), h_\times(t) = -2A\sin[2\varphi_{orb} + 2\varphi_0]\cos\iota, \iota is the inclination angle between the binary orbital angular momentum and the line of sight, the GW amplitude is A = 2m_p[M\bar\omega(t)]^{2/3}/d_L, and \varphi_0 is the initial phase. The interferometer pattern functions F_{+,\times}(t) and \iota can be expressed in terms of four angles which specify the source orientation, (\theta_s, \phi_s), and the orbital angular momentum direction, (\theta_1, \phi_1).
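A minimal Python sketch of assembling the strain of Eq. (49) from an orbital phase series is given below; the caveat is that the pattern functions are in general time dependent through the detector motion and are frozen to constant sample values here (our simplification):

import numpy as np

def strain(phi_orb, omega_bar, m_p, M, d_L, iota, phi0=0.0, Fp=0.5, Fx=0.5):
    # h = F_+ h_+ + F_x h_x with amplitude A = 2 m_p (M w)^{2/3} / d_L
    A  = 2.0*m_p*(M*omega_bar)**(2.0/3.0)/d_L
    hp = A*np.cos(2.0*phi_orb + 2.0*phi0)*(1.0 + np.cos(iota)**2)
    hx = -2.0*A*np.sin(2.0*phi_orb + 2.0*phi0)*np.cos(iota)
    return Fp*hp + Fx*hx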
The faithfulness between two signals is defined as

F_n[h_1, h_2] = \max_{\{t_c, \phi_c\}}\frac{\langle h_1 | h_2\rangle}{\sqrt{\langle h_1 | h_1\rangle\langle h_2 | h_2\rangle}},  (50)
where (t_c, \phi_c) are time and phase offsets [94], and the noise-weighted inner product between two templates h_1 and h_2 is

\langle h_1 | h_2\rangle = 4\int_{f_{min}}^{f_{max}}\frac{\tilde h_1(f)\,\tilde h_2^*(f)}{S_n(f)}\, df,  (51)
\tilde h_1(f) is the Fourier transform of the time-domain signal h_1(t), \tilde h_1^*(f) is its complex conjugate, and S_n(f) is the noise spectral density of the space-based GW detector. The signal-to-noise ratio (SNR) can be obtained by calculating \rho = \langle h | h\rangle^{1/2}. The sensitivity curves of LISA, TianQin, and Taiji are shown in Fig. 5. As pointed out in [95], two signals can be distinguished by LISA if F_n ≤ 0.988. Here we choose the source masses m_p = 10 M and M = 10^6 M, and the source angles \theta_s = \pi/3, \phi_s = \pi/2 and \theta_1 = \phi_1 = \pi/4; the luminosity distance is scaled to ensure an SNR of \rho = 30, the initial phase is set as \varphi_0 = 0, and the initial orbital separation is adjusted to experience one-year adiabatic evolution before the plunge at r_{end} = r_{ISCO} + 0.1 M.
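As an illustration of Eq. (51), a discrete version of the noise-weighted inner product for uniformly sampled time-domain signals can be sketched as follows (our own sketch; Sn is a user-supplied PSD callable, and the maximization over (t_c, \phi_c) required by the faithfulness of Eq. (50) is not shown):

import numpy as np

def inner_product(h1, h2, dt, Sn, f_low=1e-4, f_up=1.0):
    # <h1|h2> = 4 int df h1(f) h2*(f) / Sn(f); real part taken so <h|h> is real
    f  = np.fft.rfftfreq(len(h1), dt)
    H1 = np.fft.rfft(h1)*dt              # continuous-FT normalization
    H2 = np.fft.rfft(h2)*dt
    band = (f > f_low) & (f < f_up)
    df = f[1] - f[0]
    return 4.0*np.real(np.sum(H1[band]*np.conj(H2[band])/Sn(f[band])))*df

def snr(h, dt, Sn):
    # rho = <h|h>^{1/2}
    return np.sqrt(inner_product(h, h, dt, Sn))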
B. Fisher information matrix
The signals (49) measured by the detector are determined by the following eleven parameters:

\xi = (\ln M, \ln m_p, a, q, r_0, \varphi_0, \theta_s, \phi_s, \theta_1, \phi_1, d_L).  (52)
In the large SNR limit, the posterior probability distribution of the source parameters \xi can be approximated by a multivariate Gaussian distribution centered around the true values \hat\xi. Assuming flat or Gaussian priors on the source parameters \xi, their covariances are given by the inverse of the FIM

\Gamma_{ij} = \left\langle\frac{\partial h}{\partial\xi_i}\Big|\frac{\partial h}{\partial\xi_j}\right\rangle\Big|_{\xi=\hat\xi}.  (53)

The statistical error on \xi and the correlation coefficients between the parameters are provided by the diagonal and off-diagonal parts of \Sigma = \Gamma^{-1}, i.e.

\sigma_i = \Sigma_{ii}^{1/2}, \qquad c_{\xi_i\xi_j} = \Sigma_{ij}/(\sigma_{\xi_i}\sigma_{\xi_j}).  (54)
Because of the triangle configuration of the space-based GW detectors, the total SNR is defined by \rho = \sqrt{\rho_1^2 + \rho_2^2}, so the total covariance matrix of the binary parameters is obtained by inverting the sum of the Fisher matrices, \sigma^2_{\xi_i} = [(\Gamma_1 + \Gamma_2)^{-1}]_{ii}. Here we fix the source angles \theta_s = \pi/3, \phi_s = \pi/2 and \theta_1 = \phi_1 = \pi/4; the initial phase is set as \varphi_0 = 0, and the initial orbital separation is adjusted to experience one-year adiabatic evolution before the plunge at r_{end} = r_{ISCO} + 0.1 M. The luminosity distance d_L is set to 1 Gpc. We apply the FIM method to LISA, TianQin, and Taiji to estimate the errors of the vector charge.
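Schematically, the FIM of Eq. (53) can be assembled from central finite differences of the waveform model; the Python sketch below (our own; model and the step sizes eps are placeholders, and inner_product is the routine sketched in Sec. V A) returns \Gamma and the 1σ errors of Eq. (54):

import numpy as np

def fisher_matrix(model, params, eps, dt, Sn):
    # numerical dh/dxi_i via central differences, then Gamma_ij = <dh_i | dh_j>
    derivs = []
    for i in range(len(params)):
        p_hi, p_lo = params.copy(), params.copy()
        p_hi[i] += eps[i]
        p_lo[i] -= eps[i]
        derivs.append((model(p_hi) - model(p_lo))/(2.0*eps[i]))
    n = len(params)
    Gamma = np.array([[inner_product(derivs[i], derivs[j], dt, Sn)
                       for j in range(n)] for i in range(n)])
    sigma = np.sqrt(np.diag(np.linalg.inv(Gamma)))   # Eq. (54), 1-sigma errors
    return Gamma, sigma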
and

\alpha = \frac{r^2 + a^2}{r^2}\sqrt{\Delta}\left(-\frac{r}{r^2 + a^2} - \frac{iK}{\Delta}\right), \qquad \beta = \frac{r^2 + a^2}{r^2}\sqrt{\Delta}.  (A10)
The generalized Sasaki-Nakamura equation admits two linearly independent solutions, X^{in}_{lm\omega} and X^{up}_{lm\omega}, with the asymptotic behavior

X^{in}_{lm\omega} \sim \begin{cases} e^{-i\kappa r_*}, & r \to r_+ \\ A^{out}_{lm\omega} e^{i\omega r_*} + A^{in}_{lm\omega} e^{-i\omega r_*}, & r \to \infty \end{cases}  (A11)

X^{up}_{lm\omega} \sim \begin{cases} C^{out}_{lm\omega} e^{i\kappa r_*} + C^{in}_{lm\omega} e^{-i\kappa r_*}, & r \to r_+ \\ e^{i\omega r_*}, & r \to \infty \end{cases}  (A12)
With the above normalization of the solutions X^{in}_{lm\omega} and X^{up}_{lm\omega}, the constants D^{tran} and B^{tran} are determined as

D^{tran} = \frac{2i\omega}{\lambda_{lm\omega}},  (A13)

B^{tran} = \frac{i\sqrt{1 - a^2} + 1}{4\sqrt{2}\left[2\left(\sqrt{1 - a^2} + 1\right)\omega + i\sqrt{1 - a^2} - am\right]}.  (A14)
The numerical values of X^{in}_{lm\omega} (X^{up}_{lm\omega}) are obtained by integrating Eq. (A1) from r_+ (infinity) up to infinity (r_+) using the boundary conditions (A11) [(A12)]. The boundary conditions for the homogeneous generalized Sasaki-Nakamura equation can be written in terms of explicit recursion relations, which can be truncated at arbitrary order, as given in [96]. We then transform X^{in}_{lm\omega} and X^{up}_{lm\omega} back to the Teukolsky solutions R^{in}_{lm\omega} and R^{up}_{lm\omega} using Eq. (A8). The amplitude B^{in}_{lm\omega} can then be obtained from the Wronskian W at a given orbital separation.
The source term
In this subsection, we give the explicit expressions for the source terms needed to compute the amplitudes Z^{\infty,H}_{\omega lm} for equatorial circular orbital configurations. From Teukolsky's equation (3.8) in [84], we get

J_2 = -\frac{3(a^2 + (r-2)r)}{2(r - ia\cos\theta)^2(r + ia\cos\theta)}J_{\bar m} - \frac{-(a^2 + r^2)\partial_t J_{\bar m} + (a^2 + (r-2)r)\partial_r J_{\bar m} - a\,\partial_\varphi J_{\bar m}}{2(r - ia\cos\theta)(r + ia\cos\theta)} + \frac{i\sin\theta\,(-2ar - 4ia^2\cos\theta)}{\sqrt{2}(r - ia\cos\theta)^2(r + ia\cos\theta)}J_n + \frac{i\sin\theta\,(a^2\cos 2\theta + a^2 + 2r^2)\left(a\,\partial_t J_n + \csc^2\theta\,\partial_\varphi J_n + i\csc\theta\,\partial_\theta J_n\right)}{2\sqrt{2}(r - ia\cos\theta)^2(r + ia\cos\theta)},

where J_n = J_\mu n^\mu and J_{\bar m} = J_\mu \bar m^\mu. The source term T_{\omega lm} for s = -1 is

T_{\omega lm}(r) = \frac{1}{2\pi}\int dt\, d\Omega\, 4\pi\Sigma T\, {}_{-1}S^m_l(\theta)\, e^{i\omega t - im\varphi} = 2\int dt\, d\Omega\, e^{i\omega t - im\varphi}\,(r^2 + a^2\cos^2\theta)(r - ia\cos\theta)^2\, {}_{-1}S^m_l(\theta)\, J_2.
I_{01} = -\sqrt{2}\pi q\, {}_{-1}S^m_l\!\left(\frac{\pi}{2}\right)\left[\frac{r\left(-r^2\omega + (r-2)(m\sqrt{r} + i)\right)}{(a^2 + (r-2)r)(a + r^{3/2})} + \frac{ia\left(a^2 + a\sqrt{r}(1 + ir\omega) + r(-im\sqrt{r} - 2ir\omega + r - 2)\right)}{\sqrt{r}\,(a^2 + (r-2)r)(a + r^{3/2})}\right],
I_1 = \frac{i\sqrt{2}\pi q\,\sqrt{r}\,(\sqrt{r} - a)}{a + r^{3/2}}.  (A19)
FIG. 1. The energy fluxes versus the orbital distance. The top panel shows the vector flux, normalized by the mass ratio, from a charged particle orbiting around a Kerr BH with spin a/M = 0.9 for different values of the vector charge q. The bottom panel shows the ratio between the vector and gravitational energy fluxes for different values of the vector charge q.
FIG. 2. Same as Fig. 1, but for different values of the primary spin a. The vector charge q is set to 1.
FIG. 3. The ratio of the energy flux falling onto the horizon to the energy flux radiated to infinity, as a function of the orbital radius, for various spins a, for the vector (top panel) and gravitational (bottom panel) cases.
Considering EMRIs with the mass of the secondary compact object fixed to be m_p = 10 M, we calculate the dephasing \Delta N = N(q = 0) - N(q) for different vector charges q, spins a and masses M; the results are shown in Fig. 4. For one-year observations before the merger, the charged particle starts further away from the ISCO due to the extra radiation of the vector field, and the difference \Delta N is always positive. As shown in Fig. 4, \Delta N increases monotonically with the spin a and the charge-to-mass ratio q, and it strongly depends on the mass of the central BH, such that lighter BHs have larger \Delta N. This means that the observations of EMRIs with a lighter and larger-spin Kerr BH can detect the vector charge more easily. For the same EMRI configuration in the Kerr background, the co-rotating orbit can detect the vector charge more easily than the counter-rotating orbit. Following Refs. [41, 93], we take \Delta N = 1 as the threshold above which two signals are distinguishable by space-based GW detectors. Observations of EMRIs over one year before the merger may be able to reveal the presence of a vector charge as small as q ∼ 0.007 for Kerr BHs with a = 0.9 and M = 10^6 M, and q ∼ 0.01 for Schwarzschild BHs with M = 10^6 M.
FIG. 4. The difference between the number of GW cycles accumulated by EMRIs with and without the vector charge in circular orbits. The left panel shows the dephasing as a function of the mass of the Kerr BH in the range M ∈ [2 × 10^5, 2 × 10^7] M for different spin values, with the charge q = 0.01. The right panel shows the dephasing as a function of the charge-to-mass ratio q for different M, with the spin a = 0.9. The red dashed line corresponds to the threshold above which two signals are distinguishable with space-based GW detectors. All observation times are one year before the merger.
In Fig. 6, we show the faithfulness between GW signals with and without the vector charge for LISA as a function of the vector charge. The results show that one-year observations of EMRIs with LISA may be able to reveal the presence of a vector charge as small as q ∼ 0.002 for Kerr BHs with a = 0.9 and M = 10^6 M (co-rotating orbit), q ∼ 0.003 for Schwarzschild BHs with a = 0 and M = 10^6 M, and q ∼ 0.004 for Kerr BHs with a = −0.9 and M = 10^6 M (counter-rotating orbit). A larger positive spin of the Kerr BH (co-rotating orbit) can help us detect the vector charge more easily, which is consistent with the results obtained from the dephasing in the previous section.
FIG. 5. The sensitivity curves for LISA, TianQin, and Taiji. The horizontal solid lines represent the frequency band from f_start to f_ISCO for EMRIs with a = 0.9 and M = 2 × 10^5 M, M = 10^6 M, and M = 10^7 M over one-year evolution before the merger.
FIG. 6. Faithfulness between GW signals with and without the vector charge for LISA as a function of the charge q. The spin of the Kerr BH is a = 0, a = 0.9, or a = −0.9. The horizontal dashed line represents the detection limit with LISA, F_n = 0.988.
The relative errors of the vector charge q as a function of the vector charge with LISA, TianQin, and Taiji are shown in Fig. 7. For one-year observations before the merger, the charged particle starts further away from the ISCO due to the extra radiation of the vector field, so the 1σ error for the charge decreases with the charge q. For EMRIs with M = 2 × 10^5 M and a = 0.9, as shown in the top panel, the relative errors of the charge q with TianQin are better than those with LISA and Taiji. For M = 10^6 M and a = 0.9, as shown in the middle panel, the relative errors of the charge q with TianQin and LISA are almost the same. For M = 10^7 M and a = 0.9, as shown in the bottom panel, the relative errors of the charge q with TianQin are worse than those with LISA and Taiji. In all cases, the relative errors of the charge q with Taiji are better than with LISA, for the reason that the sensitivity of Taiji is always better than that of LISA. For M = 2 × 10^5 M and a = 0.9, the relative errors with TianQin are better than with LISA and Taiji, since the sensitivity of TianQin is better than LISA and Taiji in the high-frequency band but worse in the low-frequency band, as shown in Fig. 5. For EMRIs with m_p = 10 M, M = 10^6 M and a = 0.9, the vector charge can be constrained with LISA to as small as q ∼ 0.021, with TianQin to q ∼ 0.028, and with Taiji to q ∼ 0.016. For EMRIs with m_p = 10 M, M = 2 × 10^5 M and a = 0.9, the vector charge can be constrained with LISA to as small as q ∼ 0.0082, with TianQin to q ∼ 0.0049, and with Taiji to q ∼ 0.0057.
Figure 8 shows the relative errors of the vector charge q versus the spin a with LISA, TianQin, and Taiji. In general, the relative errors of the charge for the co-rotating orbit are better than those for the counter-rotating orbit, so we only consider the co-rotating orbit for simplicity. The 1σ error for the charge decreases with the spin a. Comparing the relative errors for Kerr BHs with spin a = 0.9 and spin a = 0, we find that the spin of Kerr BHs can decrease the charge uncertainty by about one or two orders of magnitude, depending on the mass of the Kerr BH. For EMRIs with M = 10^6 M, m_p = 10 M, q = 0.05, and different a, the corner plots for source parameters with LISA are shown in Figs. 9, 10 and 11. For comparison, we also show the corner plot for charged EMRIs in the Schwarzschild BH background in Fig. 12. For a = (0.9, 0, −0.9), the corresponding errors of the charge σ_q are (0.0031, 0.086, 0.65), respectively. As expected, σ_q is smaller for co-rotating orbits and bigger for counter-rotating orbits, so for EMRIs in the Kerr background the co-rotating orbit can better detect the vector charge. It is interesting to note that the charge q is anti-correlated with the mass M and the spin a of Kerr BHs, and the correlations between q and M, and q and m_p, in the Kerr BH background are opposite to those in the Schwarzschild BH background.

FIG. 7. The 1σ interval for the charge q as a function of the charge q, inferred after one-year observations of EMRIs with a = 0.9 and different M with LISA, Taiji, and TianQin. The horizontal dashed lines represent the 3σ limit 33.3%.

FIG. 8. The 1σ interval for the charge q as a function of the spin a, inferred after one-year observations of EMRIs with q = 0.05 and different M with LISA, Taiji, and TianQin.

FIG. 9. Corner plot for the probability distribution of the source parameters (ln M, ln m_p, a, q) with LISA, inferred after one-year observations of EMRIs with q = 0.05 and a = 0.9. Vertical lines show the 1σ interval for the source parameter. The contours correspond to the 68%, 95%, and 99% probability confidence intervals.

VI. CONCLUSIONS

We study the energy emissions and GWs from EMRIs consisting of a small charged compact object with mass m_p and charge-to-mass ratio q inspiraling into a Kerr BH with spin a. We derive the formula for solving the inhomogeneous Teukolsky equation with a vector field and calculate the power emission due to the vector field in the Kerr background. By using the difference between the number of GW cycles ΔN accumulated by EMRIs with and without the vector charge in circular orbits over one year before the merger, we may reveal the presence of a vector charge as small as q ∼ 0.007 for Kerr BHs with a = 0.9 and M = 10^6 M, and q ∼ 0.01 for Schwarzschild BHs with M = 10^6 M. The dephasing increases monotonically with the charge-to-mass ratio q, and it strongly depends on the mass of the Kerr BH such that lighter BHs have larger dephasing.

We also apply the faithfulness between GW signals with and without the vector charge to discuss the detection of the vector charge q. We find that a larger positive spin of the Kerr BH can help us detect the vector charge more easily. We show that one-year observations of EMRIs with LISA may be able to reveal the presence of a vector charge as small as q ∼ 0.002 for Kerr BHs with a = 0.9 and M = 10^6 M, q ∼ 0.003 for Schwarzschild BHs with a = 0 and M = 10^6 M, and q ∼ 0.004 for Kerr BHs with a = −0.9 and M = 10^6 M. To determine the vector charge more accurately and account for the degeneracy among parameters, we calculate the FIM to estimate the errors of the vector charge q.
For EMRIs with M = 2 × 10^5 M and a = 0.9, the vector charge q can be constrained as small as q ∼ 0.0049 with TianQin, q ∼ 0.0057 with Taiji, and q ∼ 0.0082 with LISA. For EMRIs with M = 10^6 M and a = 0.9, the vector charge can be constrained as small as q ∼ 0.016 with Taiji, q ∼ 0.021 with LISA, and q ∼ 0.028 with TianQin. Since the sensitivity of TianQin is better than LISA and Taiji in the high-frequency bands, the ability to detect the vector charge with TianQin is better than with LISA and Taiji when the mass M of the Kerr BH with a = 0.9 is lighter than ∼ 2 × 10^5 M. For the mass M of the Kerr BH with a = 0.9 above 10^6 M, LISA and Taiji are more likely to detect smaller vector charges. The relative errors of the charge q with Taiji are always smaller than those with LISA because the sensitivity of Taiji is always better than that of LISA. Due to the extra radiation of the vector field, the charged particle starts further away from the ISCO, so the 1σ error for the charge decreases with the charge q. As the spin a of Kerr BHs increases, the ISCO becomes smaller, and the positive spin of Kerr BHs (co-rotating) can decrease the charge uncertainty by about one or two orders of magnitude, depending on the mass of the Kerr BH. For EMRIs with M = 10^6 M, m_p = 10 M, q = 0.05, and different a = (0.9, 0, −0.9), the corresponding errors of the charge σ_q with LISA are (0.0031, 0.086, 0.65), respectively, so co-rotating orbits can better detect the vector charge. It is interesting to note that the charge q is anti-correlated with the mass M and the spin a of Kerr BHs, and the correlations between q and M, and q and m_p, in the Kerr BH background are opposite to those in the Schwarzschild BH background. In summary, the observations of EMRIs with a lighter and larger-spin Kerr BH can detect the vector charge more easily.

FIG. 10. Corner plot for the probability distribution of the source parameters (ln M, ln m_p, a, q) with LISA, inferred after one-year observations of EMRIs with q = 0.05 and a = 0. Vertical lines show the 1σ interval for the source parameter. The contours correspond to the 68%, 95%, and 99% probability confidence intervals.

FIG. 11. Corner plot for the probability distribution of the source parameters (ln M, ln m_p, a, q) with LISA, inferred after one-year observations of EMRIs with q = 0.05 and a = −0.9. Vertical lines show the 1σ interval for the source parameter. The contours correspond to the 68%, 95%, and 99% probability confidence intervals.

FIG. 12. Corner plot for the probability distribution of the source parameters (ln M, ln m_p, q) with LISA, inferred after one-year observations of EMRIs with q = 0.05 in the Schwarzschild BH background. Vertical lines show the 1σ interval for the source parameter. The contours correspond to the 68%, 95%, and 99% probability confidence intervals.

c_2 = a^2(1 - 2\lambda_{lm\omega}), \quad c_3 = -2a^2(1 + iam), \quad c_4 = a^4(1 - \lambda_{lm\omega}),
The amplitude Z^{\infty,H}_{\omega lm} in Eq. (34) for s = -1 is determined by

\int_{r_+}^{+\infty}\Delta^{-1} R^{up,in}_{\omega lm}\, T_{\omega lm}\, dr = \delta(\omega - m\bar\omega)\left[(I_{00} + I_{01})\, R^{up,in}_{\omega lm} + I_1\,\frac{dR^{up,in}_{\omega lm}}{dr}\right]_{r = r_0},
[1] B. P. Abbott et al. (LIGO Scientific and VIRGO Collaborations), Observation of Gravitational Waves from a Binary Black Hole Merger, Phys. Rev. Lett. 116, 061102 (2016).
TABLE I. The explicit expressions for the field ψ and the corresponding source T.
ACKNOWLEDGMENTS

Appendix A

In this appendix, we provide further technical details on the formalism with which we compute the vector flux. The generalized Sasaki-Nakamura equation is given by

\frac{d^2 X_{lm\omega}}{dr_*^2} - F(r)\frac{dX_{lm\omega}}{dr_*} - {}_sU(r)\, X_{lm\omega} = 0,  (A1)

where {}_sU(r) is the generalized Sasaki-Nakamura potential and _{,r} denotes the derivative with respect to r. The solutions of the Teukolsky and generalized Sasaki-Nakamura equations are related by

R_{lm\omega}(r) = \frac{(\alpha + \beta_{,r}\Delta^{s+1})\,\chi - \beta\Delta^{s+1}\chi_{,r}}{\eta},
[2] B. P. Abbott et al. (LIGO Scientific and VIRGO Collaborations), GW150914: The Advanced LIGO Detectors in the Era of First Discoveries, Phys. Rev. Lett. 116, 131103 (2016).
[3] B. P. Abbott et al. (LIGO Scientific and VIRGO Collaborations), GWTC-1: A Gravitational-Wave Transient Catalog of Compact Binary Mergers Observed by LIGO and Virgo during the First and Second Observing Runs, Phys. Rev. X 9, 031040 (2019).
[4] R. Abbott et al. (LIGO Scientific and VIRGO Collaborations), GWTC-2: Compact Binary Coalescences Observed by LIGO and Virgo During the First Half of the Third Observing Run, Phys. Rev. X 11, 021053 (2021).
[5] R. Abbott et al. (LIGO Scientific and VIRGO Collaborations), GWTC-2.1: Deep Extended Catalog of Compact Binary Coalescences Observed by LIGO and Virgo During the First Half of the Third Observing Run, arXiv:2108.01045 [gr-qc].
[6] R. Abbott et al. (LIGO Scientific, VIRGO and KAGRA Collaborations), GWTC-3: Compact Binary Coalescences Observed by LIGO and Virgo During the Second Part of the Third Observing Run, arXiv:2111.03606 [gr-qc].
[7] B. P. Abbott et al. (LIGO Scientific and VIRGO Collaborations), GW170817: Observation of Gravitational Waves from a Binary Neutron Star Inspiral, Phys. Rev. Lett. 119, 161101 (2017).
[8] B. P. Abbott et al. (LIGO Scientific and VIRGO Collaborations), GW190425: Observation of a Compact Binary Coalescence with Total Mass ∼ 3.4 M, Astrophys. J. Lett. 892, L3 (2020).
[9] R. Abbott et al. (LIGO Scientific, KAGRA and VIRGO Collaborations), Observation of Gravitational Waves from Two Neutron Star-Black Hole Coalescences, Astrophys. J. Lett. 915, L5 (2021).
[10] K. Danzmann, LISA: An ESA cornerstone mission for a gravitational wave observatory, Class. Quant. Grav. 14, 1399 (1997).
[11] P. Amaro-Seoane et al. (LISA), Laser Interferometer Space Antenna, arXiv:1702.00786 [astro-ph.IM].
[12] J. Luo et al. (TianQin), TianQin: a space-borne gravitational wave detector, Class. Quant. Grav. 33, 035010 (2016).
[13] W.-R. Hu and Y.-L. Wu, The Taiji Program in Space for gravitational wave physics and the nature of gravity, Natl. Sci. Rev. 4, 685 (2017).
[14] P. Amaro-Seoane et al. (LISA), Laser Interferometer Space Antenna, arXiv:1702.00786 [astro-ph.IM].
[15] J. Mei et al. (TianQin), The TianQin project: current progress on science and technology, PTEP 2021, 05A107 (2021).
[16] W.-H. Ruan, Z.-K. Guo, R.-G. Cai, and Y.-Z. Zhang, Taiji program: Gravitational-wave sources, Int. J. Mod. Phys. A 35, 2050075 (2020).
[17] P. Amaro-Seoane et al., Astrophysics with the Laser Interferometer Space Antenna, arXiv:2203.06016 [gr-qc].
[18] K. G. Arun et al. (LISA), New horizons for fundamental physics with LISA, Living Rev. Rel. 25, 4 (2022).
[19] P. Amaro-Seoane, J. R. Gair, M. Freitag, M. Coleman Miller, I. Mandel, C. J. Cutler, and S. Babak, Astrophysics, detection and science applications of intermediate- and extreme mass-ratio inspirals, Class. Quant. Grav. 24, R113 (2007).
[20] S. Babak, J. Gair, A. Sesana, E. Barausse, C. F. Sopuerta, C. P. L. Berry, E. Berti, P. Amaro-Seoane, A. Petiteau, and A. Klein, Science with the space-based interferometer LISA. V: Extreme mass-ratio inspirals, Phys. Rev. D 95, 103012 (2017).
[21] L. Barack and C. Cutler, LISA capture sources: Approximate waveforms, signal-to-noise ratios, and parameter estimation accuracy, Phys. Rev. D 69, 082005 (2004).
[22] A. J. K. Chua, C. J. Moore, and J. R. Gair, Augmented kludge waveforms for detecting extreme-mass-ratio inspirals, Phys. Rev. D 96, 044005 (2017).
[23] J. R. Gair, C. Tang, and M. Volonteri, LISA extreme-mass-ratio inspiral events as probes of the black hole mass function, Phys. Rev. D 81, 104014 (2010).
[24] D. E. Holz and S. A. Hughes, Using gravitational-wave standard sirens, Astrophys. J. 629, 15 (2005).
[25] C. L. MacLeod and C. J. Hogan, Precision of Hubble constant derived using black hole binary absolute distances and statistical redshift information, Phys. Rev. D 77, 043512 (2008).
[26] D. Laghi, N. Tamanini, W. Del Pozzo, A. Sesana, J. Gair, S. Babak, and D. Izquierdo-Villalba, Gravitational-wave cosmology with extreme mass-ratio inspirals, Mon. Not. Roy. Astron. Soc. 508, 4512 (2021).
[27] P. Auclair et al. (LISA Cosmology Working Group), Cosmology with the Laser Interferometer Space Antenna, arXiv:2204.05434 [astro-ph.CO].
[28] P. A. Seoane et al. (eLISA), The Gravitational Universe, arXiv:1305.5720 [astro-ph.CO].
[29] K. Eda, Y. Itoh, S. Kuroyanagi, and J. Silk, New Probe of Dark-Matter Properties: Gravitational Waves from an Intermediate-Mass Black Hole Embedded in a Dark-Matter Minispike, Phys. Rev. Lett. 110, 221101 (2013).
[30] K. Eda, Y. Itoh, S. Kuroyanagi, and J. Silk, Gravitational waves as a probe of dark matter minispikes, Phys. Rev. D 91, 044045 (2015).
[31] E. Barausse, V. Cardoso, and P. Pani, Can environmental effects spoil precision gravitational-wave astrophysics?, Phys. Rev. D 89, 104059 (2014).
[32] X.-J. Yue and W.-B. Han, Gravitational waves with dark matter minispikes: the combined effect, Phys. Rev. D 97, 064003 (2018).
[33] X.-J. Yue, W.-B. Han, and X. Chen, Dark matter: an efficient catalyst for intermediate-mass-ratio-inspiral events, Astrophys. J. 874, 34 (2019).
[34] C. P. L. Berry, S. A. Hughes, C. F. Sopuerta, A. J. K. Chua, A. Heffernan, K. Holley-Bockelmann, D. P. Mihaylov, M. C. Miller, and A. Sesana, The unique potential of extreme mass-ratio inspirals for gravitational-wave astronomy, arXiv:1903.03686 [astro-ph.HE].
[35] O. A. Hannuksela, K. C. Y. Ng, and T. G. F. Li, Extreme dark matter tests with extreme mass ratio inspirals, Phys. Rev. D 102, 103022 (2020).
[36] K. Destounis, A. G. Suvorov, and K. D. Kokkotas, Testing spacetime symmetry through gravitational waves from extreme-mass-ratio inspirals, Phys. Rev. D 102, 064041 (2020).
[37] J. Y. J. Burton and T. Osburn, Reissner-Nordström perturbation framework with gravitational wave applications, Phys. Rev. D 102, 104030 (2020).
[38] T. Torres and S. R. Dolan, Electromagnetic self-force on a charged particle on Kerr spacetime: Equatorial circular orbits, Phys. Rev. D 106, 024024 (2022).
[39] E. Barausse et al., Prospects for Fundamental Physics with LISA, Gen. Rel. Grav. 52, 81 (2020).
[40] V. Cardoso and A. Maselli, Constraints on the astrophysical environment of binaries with gravitational-wave observations, Astron. Astrophys. 644, A147 (2020).
[41] A. Maselli, N. Franchini, L. Gualtieri, and T. P. Sotiriou, Detecting scalar fields with Extreme Mass Ratio Inspirals, Phys. Rev. Lett. 125, 141101 (2020).
[42] A. Maselli, N. Franchini, L. Gualtieri, T. P. Sotiriou, S. Barsanti, and P. Pani, Detecting fundamental fields with LISA observations of gravitational waves from extreme mass-ratio inspirals, Nature Astron. 6, 464 (2022).
[43] H. Guo, Y. Liu, C. Zhang, Y. Gong, W.-L. Qian, and R.-H. Yue, Detection of scalar fields by extreme mass ratio inspirals with a Kerr black hole, Phys. Rev. D 106, 024047 (2022).
C Zhang, Y Gong, D Liang, B Wang, arXiv:2210.11121Gravitational waves from eccentric extreme mass-ratio inspirals as probes of scalar fields. gr-qcC. Zhang, Y. Gong, D. Liang, and B. Wang, Gravitational waves from eccentric extreme mass-ratio inspirals as probes of scalar fields, arXiv:2210.11121 [gr-qc].
Extreme mass-ratio inspirals as probes of scalar fields: Eccentric equatorial orbits around Kerr black holes. S Barsanti, N Franchini, L Gualtieri, A Maselli, T P Sotiriou, 10.1103/PhysRevD.106.044029Phys. Rev. D. 10644029S. Barsanti, N. Franchini, L. Gualtieri, A. Maselli, and T. P. Sotiriou, Extreme mass-ratio inspirals as probes of scalar fields: Eccentric equatorial orbits around Kerr black holes, Phys. Rev. D 106, 044029 (2022).
Black holes in galaxies: Environmental impact on gravitational-wave generation and propagation. V Cardoso, K Destounis, F Duque, R P Macedo, A Maselli, 10.1103/PhysRevD.105.L061501Phys. Rev. D. 10561501V. Cardoso, K. Destounis, F. Duque, R. P. Macedo, and A. Maselli, Black holes in galaxies: Environmental impact on gravitational-wave generation and propagation, Phys. Rev. D 105, L061501 (2022).
Intermediate mass-ratio inspirals with dark matter minispikes. N Dai, Y Gong, T Jiang, D Liang, 10.1103/PhysRevD.106.064003Phys. Rev. D. 10664003N. Dai, Y. Gong, T. Jiang, and D. Liang, Intermediate mass-ratio inspirals with dark matter minispikes, Phys. Rev. D 106, 064003 (2022).
Constraint on Brans-Dicke theory from intermediate/extreme mass ratio inspirals. T Jiang, N Dai, Y Gong, D Liang, C Zhang, 10.1088/1475-7516/2022/12/023J. Cosmol. Astropart. Phys. 1223T. Jiang, N. Dai, Y. Gong, D. Liang, and C. Zhang, Constraint on Brans-Dicke theory from intermediate/extreme mass ratio inspirals, J. Cosmol. Astropart. Phys. 12 (2022) 023.
Detecting electric charge with extreme mass ratio inspirals. C Zhang, Y Gong, 10.1103/PhysRevD.105.124046Phys. Rev. D. 105124046C. Zhang and Y. Gong, Detecting electric charge with extreme mass ratio inspirals, Phys. Rev. D 105, 124046 (2022).
Constraint on the mass of graviton with gravitational waves. Q Gao, 10.1007/s11433-022-1971-9Sci. China Phys. Mech. Astron. 66220411Q. Gao, Constraint on the mass of graviton with gravitational waves, Sci. China Phys. Mech. Astron. 66, 220411 (2023).
Q Gao, Y You, Y Gong, C Zhang, C Zhang, arXiv:2212.03789Testing alternative theories of gravity with space-based gravitational wave detectors. gr-qcQ. Gao, Y. You, Y. Gong, C. Zhang, and C. Zhang, Testing alternative theories of gravity with space-based gravitational wave detectors, arXiv:2212.03789 [gr-qc].
K Destounis, A Kulathingal, K D Kokkotas, G O Papadopoulos, arXiv:2210.09357Gravitationalwave imprints of compact and galactic-scale environments in extreme-mass-ratio binaries. gr-qcK. Destounis, A. Kulathingal, K. D. Kokkotas, and G. O. Papadopoulos, Gravitational- wave imprints of compact and galactic-scale environments in extreme-mass-ratio binaries, arXiv:2210.09357 [gr-qc].
S Barsanti, A Maselli, T P Sotiriou, L Gualtieri, arXiv:2212.03888Detecting massive scalar fields with Extreme Mass-Ratio Inspirals. gr-qcS. Barsanti, A. Maselli, T. P. Sotiriou, and L. Gualtieri, Detecting massive scalar fields with Extreme Mass-Ratio Inspirals, arXiv:2212.03888 [gr-qc].
D Liang, R Xu, Z.-F Mai, L Shao, arXiv:2212.09346Probing vector hair of black holes with extreme mass ratio inspirals. gr-qcD. Liang, R. Xu, Z.-F. Mai, and L. Shao, Probing vector hair of black holes with extreme mass ratio inspirals, arXiv:2212.09346 [gr-qc].
Eccentricity evolution of compact binaries and applications to gravitational-wave physics. V Cardoso, C F B Macedo, R Vicente, 10.1103/PhysRevD.103.023015Phys. Rev. D. 10323015V. Cardoso, C. F. B. Macedo, and R. Vicente, Eccentricity evolution of compact binaries and applications to gravitational-wave physics, Phys. Rev. D 103, 023015 (2021).
Gravitational and electromagnetic radiation from binary black holes with electric and magnetic charges: Circular orbits on a cone. L Liu, O Christiansen, Z.-K Guo, R.-G Cai, S P Kim, 10.1103/PhysRevD.102.103520Phys. Rev. D. 102103520L. Liu, O. Christiansen, Z.-K. Guo, R.-G. Cai, and S. P. Kim, Gravitational and electromag- netic radiation from binary black holes with electric and magnetic charges: Circular orbits on a cone, Phys. Rev. D 102, 103520 (2020).
Gravitational and electromagnetic radiation from binary black holes with electric and magnetic charges: elliptical orbits on a cone. L Liu, O Christiansen, W.-H Ruan, Z.-K Guo, R.-G Cai, S P Kim, 10.1140/epjc/s10052-021-09849-4Eur. Phys. J. C. 811048L. Liu, O. Christiansen, W.-H. Ruan, Z.-K. Guo, R.-G. Cai, and S. P. Kim, Gravitational and electromagnetic radiation from binary black holes with electric and magnetic charges: elliptical orbits on a cone, Eur. Phys. J. C 81, 1048 (2021).
Gravitational Waves from Extreme-Mass-Ratio Systems in Astrophysical Environments. V Cardoso, K Destounis, F Duque, R Panosso Macedo, A Maselli, 10.1103/PhysRevLett.129.241103Phys. Rev. Lett. 129241103V. Cardoso, K. Destounis, F. Duque, R. Panosso Macedo, and A. Maselli, Gravitational Waves from Extreme-Mass-Ratio Systems in Astrophysical Environments, Phys. Rev. Lett. 129, 241103 (2022).
Black hole hair in generalized scalar-tensor gravity. T P Sotiriou, S.-Y Zhou, 10.1103/PhysRevLett.112.251102Phys. Rev. Lett. 112251102T. P. Sotiriou and S.-Y. Zhou, Black hole hair in generalized scalar-tensor gravity, Phys. Rev. Lett. 112, 251102 (2014).
Spontaneous scalarization of black holes and compact stars from a Gauss-Bonnet coupling. H O Silva, J Sakstein, L Gualtieri, T P Sotiriou, E Berti, 10.1103/PhysRevLett.120.131104Phys. Rev. Lett. 120131104H. O. Silva, J. Sakstein, L. Gualtieri, T. P. Sotiriou, and E. Berti, Spontaneous scalarization of black holes and compact stars from a Gauss-Bonnet coupling, Phys. Rev. Lett. 120, 131104 (2018).
New Gauss-Bonnet Black Holes with Curvature-Induced Scalarization in Extended Scalar-Tensor Theories. D D Doneva, S S Yazadjiev, 10.1103/PhysRevLett.120.131103Phys. Rev. Lett. 120131103D. D. Doneva and S. S. Yazadjiev, New Gauss-Bonnet Black Holes with Curvature-Induced Scalarization in Extended Scalar-Tensor Theories, Phys. Rev. Lett. 120, 131103 (2018).
Evasion of No-Hair Theorems and Novel Black-Hole Solutions in Gauss-Bonnet Theories. G Antoniou, A Bakopoulos, P Kanti, 10.1103/PhysRevLett.120.131102Phys. Rev. Lett. 120131102G. Antoniou, A. Bakopoulos, and P. Kanti, Evasion of No-Hair Theorems and Novel Black- Hole Solutions in Gauss-Bonnet Theories, Phys. Rev. Lett. 120, 131102 (2018).
Vacuum Polarization and the Spontaneous Loss of Charge by Black Holes. G W Gibbons, 10.1007/BF01609829Commun. Math. Phys. 44245G. W. Gibbons, Vacuum Polarization and the Spontaneous Loss of Charge by Black Holes, Commun. Math. Phys. 44, 245 (1975).
Astrophysical processes near black holes. D M Eardley, W H Press, 10.1146/annurev.aa.13.090175.002121Ann. Rev. Astron. Astrophys. 13381D. M. Eardley and W. H. Press, Astrophysical processes near black holes, Ann. Rev. Astron. Astrophys. 13, 381 (1975).
Limits on the charge of a collapsed object. R S Hanni, 10.1103/PhysRevD.25.2509Phys. Rev. D. 252509R. S. Hanni, Limits on the charge of a collapsed object, Phys. Rev. D 25, 2509 (1982).
Limits of the charge of a collapsed object. P S Joshi, J V Narlikar, 10.1007/BF02848041Pramana. 18385P. S. Joshi and J. V. Narlikar, Limits of the charge of a collapsed object., Pramana 18, 385 (1982).
On neutralization of charged black holes. Y Gong, Z Cao, H Gao, B Zhang, 10.1093/mnras/stz1904Mon. Not. Roy. Astron. Soc. 4882722Y. Gong, Z. Cao, H. Gao, and B. Zhang, On neutralization of charged black holes, Mon. Not. Roy. Astron. Soc. 488, 2722 (2019).
Hydrostatic Equilibrium and Gravitational Collapse of Relativistic Charged Fluid Balls. J D Bekenstein, 10.1103/PhysRevD.4.2185Phys. Rev. D. 42185J. D. Bekenstein, Hydrostatic Equilibrium and Gravitational Collapse of Relativistic Charged Fluid Balls, Phys. Rev. D 4, 2185 (1971).
Relativistic charged spheres. F De Felice, Y Yu, J Fang, 10.1093/mnras/277.1.L17Mon. Not. Roy. Astron. Soc. 27717F. de Felice, Y. Yu, and J. Fang, Relativistic charged spheres, Mon. Not. Roy. Astron. Soc. 277, L17 (1995).
Relativistic charged spheres. 2. Regularity and stability. F De Felice, S Liu, Y.-Q Yu, 10.1088/0264-9381/16/8/307Class. Quant. Grav. 162669F. de Felice, S.-m. Liu, and Y.-q. Yu, Relativistic charged spheres. 2. Regularity and stability, Class. Quant. Grav. 16, 2669 (1999).
Static charged perfect fluid spheres in general relativity. B V Ivanov, 10.1103/PhysRevD.65.104001Phys. Rev. D. 65104001B. V. Ivanov, Static charged perfect fluid spheres in general relativity, Phys. Rev. D 65, 104001 (2002).
A class of exact solutions of Einstein's field equations. S D Majumdar, 10.1103/PhysRev.72.390Phys. Rev. 72390S. D. Majumdar, A class of exact solutions of Einstein's field equations, Phys. Rev. 72, 390 (1947).
The influence of a net charge on the critical mass of a neutron star. J L Zhang, W Y Chau, T Y Deng, 10.1007/BF00648990Astrophys. Space Sci. 8881J. L. Zhang, W. Y. Chau, and T. Y. Deng, The influence of a net charge on the critical mass of a neutron star, Astrophys. Space Sci. 88, 81 (1982).
Instability of extremal relativistic charged spheres. P Anninos, T Rothman, 10.1103/PhysRevD.65.024003Phys. Rev. D. 6524003P. Anninos and T. Rothman, Instability of extremal relativistic charged spheres, Phys. Rev. D 65, 024003 (2002).
Are very large gravitational redshifts possible. W B Bonnor, S B P Wickramasuriya, Mon. Not. Roy. Astron. Soc. 170643W. B. Bonnor and S. B. P. Wickramasuriya, Are very large gravitational redshifts possible, Mon. Not. Roy. Astron. Soc. 170, 643 (1975).
Electrically charged compact stars and formation of charged black holes. S Ray, A L Espindola, M Malheiro, J P S Lemos, V T Zanchin, 10.1103/PhysRevD.68.084004Phys. Rev. D. 6884004S. Ray, A. L. Espindola, M. Malheiro, J. P. S. Lemos, and V. T. Zanchin, Electrically charged compact stars and formation of charged black holes, Phys. Rev. D 68, 084004 (2003).
Black hole in a uniform magnetic field. R M Wald, 10.1103/PhysRevD.10.1680Phys. Rev. D. 101680R. M. Wald, Black hole in a uniform magnetic field, Phys. Rev. D 10, 1680 (1974).
Black holes and gravitational waves in models of minicharged dark matter. V Cardoso, C F B Macedo, P Pani, V Ferrari, 10.1088/1475-7516/2016/05/054J. Cosmol. Astropart. Phys. 05Erratum: JCAP 04, E01 (2020)V. Cardoso, C. F. B. Macedo, P. Pani, and V. Ferrari, Black holes and gravitational waves in models of minicharged dark matter, J. Cosmol. Astropart. Phys. 05 (2016) 054, [Erratum: JCAP 04, E01 (2020)].
Charged Black Hole Mergers: Orbit Circularisation and Chirp Mass Bias. O Christiansen, J Beltrán, D F Jiménez, Mota, 10.1088/1361-6382/abdaf5Class. Quant. Grav. 3875017O. Christiansen, J. Beltrán Jiménez, and D. F. Mota, Charged Black Hole Mergers: Orbit Circularisation and Chirp Mass Bias, Class. Quant. Grav. 38, 075017 (2021).
Black hole merger estimates in Einstein-Maxwell and Einstein-Maxwell-dilaton gravity. P Jai-Akson, A Chatrabhuti, O Evnin, L Lehner, 10.1103/PhysRevD.96.044031Phys. Rev. D. 9644031P. Jai-akson, A. Chatrabhuti, O. Evnin, and L. Lehner, Black hole merger estimates in Einstein-Maxwell and Einstein-Maxwell-dilaton gravity, Phys. Rev. D 96, 044031 (2017).
Merger rate of charged black holes from the two-body dynamical capture. L Liu, S P Kim, 10.1088/1475-7516/2022/03/059J. Cosmol. Astropart. Phys. 0359L. Liu and S. P. Kim, Merger rate of charged black holes from the two-body dynamical capture, J. Cosmol. Astropart. Phys. 03 (2022) 059.
Merger rate distribution of primordial black hole binaries with electric charges. L Liu, Z.-K Guo, R.-G Cai, S P Kim, 10.1103/PhysRevD.102.043508Phys. Rev. D. 10243508L. Liu, Z.-K. Guo, R.-G. Cai, and S. P. Kim, Merger rate distribution of primordial black hole binaries with electric charges, Phys. Rev. D 102, 043508 (2020).
General Relativistic Simulations of the Quasicircular Inspiral and Merger of Charged Black Holes: GW150914 and Fundamental Physics Implications. G Bozzola, V Paschalidis, 10.1103/PhysRevLett.126.041103Phys. Rev. Lett. 12641103G. Bozzola and V. Paschalidis, General Relativistic Simulations of the Quasicircular Inspiral and Merger of Charged Black Holes: GW150914 and Fundamental Physics Implications, Phys. Rev. Lett. 126, 041103 (2021).
Perturbations of a rotating black hole. 1. Fundamental equations for gravitational electromagnetic and neutrino field perturbations. S A Teukolsky, 10.1086/152444Astrophys. J. 185635S. A. Teukolsky, Perturbations of a rotating black hole. 1. Fundamental equations for gravi- tational electromagnetic and neutrino field perturbations, Astrophys. J. 185, 635 (1973).
Perturbations of a Rotating Black Hole. II. Dynamical Stability of the Kerr Metric. W H Press, S A Teukolsky, 10.1086/152445Astrophys. J. 185649W. H. Press and S. A. Teukolsky, Perturbations of a Rotating Black Hole. II. Dynamical Stability of the Kerr Metric, Astrophys. J. 185, 649 (1973).
Perturbations of a rotating black hole. III -Interaction of the hole with gravitational and electromagnet ic radiation. S A Teukolsky, W H Press, 10.1086/153180Astrophys. J. 193443S. A. Teukolsky and W. H. Press, Perturbations of a rotating black hole. III -Interaction of the hole with gravitational and electromagnet ic radiation, Astrophys. J. 193, 443 (1974).
Note on the Bondi-Metzner-Sachs group. E T Newman, R Penrose, 10.1063/1.1931221J. Math. Phys. 7863E. T. Newman and R. Penrose, Note on the Bondi-Metzner-Sachs group, J. Math. Phys. 7, 863 (1966).
Spin-s spherical harmonics and ð. J N Goldberg, A J Macfarlane, E T Newman, F Rohrlich, E C G Sudarshan, 10.1063/1.1705135J. Math. Phys. 82155J. N. Goldberg, A. J. MacFarlane, E. T. Newman, F. Rohrlich, and E. C. G. Sudarshan, Spin-s spherical harmonics and ð, J. Math. Phys. 8, 2155 (1967).
Black Hole Perturbation Toolkit, (bhptoolkit.org). Black Hole Perturbation Toolkit, (bhptoolkit.org).
Black Holes and Gravitational Waves. I. Circular Orbits About a Rotating Hole. S L Detweiler, 10.1086/156529Astrophys. J. 225687S. L. Detweiler, Black Holes and Gravitational Waves. I. Circular Orbits About a Rotating Hole, Astrophys. J. 225, 687 (1978).
The Evolution of circular, nonequatorial orbits of Kerr black holes due to gravitational wave emission. S A Hughes, 10.1103/PhysRevD.65.069902Phys. Rev. D. 61109904Phys.Rev.DS. A. Hughes, The Evolution of circular, nonequatorial orbits of Kerr black holes due to gravitational wave emission, Phys. Rev. D 61, 084004 (2000), [Erratum: Phys.Rev.D 63, 049902 (2001), Erratum: Phys.Rev.D 65, 069902 (2002), Erratum: Phys.Rev.D 67, 089901 (2003), Erratum: Phys.Rev.D 78, 109902 (2008), Erratum: Phys.Rev.D 90, 109904 (2014)].
Innermost stable circular orbits of spinning test particles in Schwarzschild and Kerr space-times. P I Jefremov, O Y Tsupko, G S Bisnovatyi-Kogan, 10.1103/PhysRevD.91.124030Phys. Rev. D. 91124030P. I. Jefremov, O. Y. Tsupko, and G. S. Bisnovatyi-Kogan, Innermost stable circular orbits of spinning test particles in Schwarzschild and Kerr space-times, Phys. Rev. D 91, 124030 (2015).
Estimating spinning binary parameters and testing alternative theories of gravity with LISA. E Berti, A Buonanno, C M Will, 10.1103/PhysRevD.71.084025Phys. Rev. D. 7184025E. Berti, A. Buonanno, and C. M. Will, Estimating spinning binary parameters and testing alternative theories of gravity with LISA, Phys. Rev. D 71, 084025 (2005).
Model Waveform Accuracy Standards for Gravitational Wave Data Analysis. L Lindblom, B J Owen, D A Brown, 10.1103/PhysRevD.78.124020Phys. Rev. D. 78124020L. Lindblom, B. J. Owen, and D. A. Brown, Model Waveform Accuracy Standards for Grav- itational Wave Data Analysis, Phys. Rev. D 78, 124020 (2008).
Constructing Gravitational Waves from Generic Spin-Precessing Compact Binary Inspirals. K Chatziioannou, A Klein, N Yunes, N Cornish, 10.1103/PhysRevD.95.104004Phys. Rev. D. 95104004K. Chatziioannou, A. Klein, N. Yunes, and N. Cornish, Constructing Gravitational Waves from Generic Spin-Precessing Compact Binary Inspirals, Phys. Rev. D 95, 104004 (2017).
Extreme mass ratio inspirals with spinning secondary: a detailed study of equatorial circular motion. G A Piovano, A Maselli, P Pani, 10.1103/PhysRevD.102.024041Phys. Rev. D. 10224041G. A. Piovano, A. Maselli, and P. Pani, Extreme mass ratio inspirals with spinning secondary: a detailed study of equatorial circular motion, Phys. Rev. D 102, 024041 (2020).
| [] |
[
"Interpolation of Hilbert and Sobolev Spaces: Quantitative Estimates and Counterexamples",
"Interpolation of Hilbert and Sobolev Spaces: Quantitative Estimates and Counterexamples"
] | [
"S N Chandler-Wilde ",
"D P Hewett ",
"A Moiola "
] | [] | [] | This paper provides an overview of interpolation of Banach and Hilbert spaces, with a focus on establishing when equivalence of norms is in fact equality of norms in the key results of the theory. (In brief, our conclusion for the Hilbert space case is that, with the right normalisations, all the key results hold with equality of norms.) In the final section we apply the Hilbert space results to the Sobolev spaces H^s(Ω) and H̃^s(Ω), for s ∈ R and an open Ω ⊂ R^n. We exhibit examples in one and two dimensions of sets Ω for which these scales of Sobolev spaces are not interpolation scales. In the cases when they are interpolation scales (in particular, if Ω is Lipschitz) we exhibit examples that show that, in general, the interpolation norm does not coincide with the intrinsic Sobolev norm and, in fact, the ratio of these two norms can be arbitrarily large. | 10.1112/s0025579314000278 | [
"https://arxiv.org/pdf/1404.3599v4.pdf"
] | 10,955,619 | 1404.3599 | 1509f75ae8fa88a25ad45bf3862a3a1c3d769da7 |
Interpolation of Hilbert and Sobolev Spaces: Quantitative Estimates and Counterexamples
May 18, 2022
S N Chandler-Wilde
D P Hewett
A Moiola
Interpolation of Hilbert and Sobolev Spaces: Quantitative Estimates and Counterexamples
May 18, 2022
Dedicated to Vladimir Maz'ya, on the occasion of his 75th Birthday
This paper provides an overview of interpolation of Banach and Hilbert spaces, with a focus on establishing when equivalence of norms is in fact equality of norms in the key results of the theory. (In brief, our conclusion for the Hilbert space case is that, with the right normalisations, all the key results hold with equality of norms.) In the final section we apply the Hilbert space results to the Sobolev spaces H^s(Ω) and H̃^s(Ω), for s ∈ R and an open Ω ⊂ R^n. We exhibit examples in one and two dimensions of sets Ω for which these scales of Sobolev spaces are not interpolation scales. In the cases when they are interpolation scales (in particular, if Ω is Lipschitz) we exhibit examples that show that, in general, the interpolation norm does not coincide with the intrinsic Sobolev norm and, in fact, the ratio of these two norms can be arbitrarily large.

A main result of the paper is to exhibit one- and two-dimensional counterexamples that show that H^s(Ω) and H̃^s(Ω) are not in general interpolation scales. It is well-known that these Sobolev spaces are interpolation scales for all s ∈ R when Ω is Lipschitz. In that case we demonstrate, via a number of counterexamples, that, in general (we suspect, in fact, whenever Ω ⊊ R^n), H^s(Ω) and H̃^s(Ω) are not exact interpolation scales. Indeed, we exhibit simple examples where the ratio of interpolation norm to intrinsic Sobolev norm may be arbitrarily large. Along the way we give explicit formulas for some of the interpolation norms arising that may be of interest in their own right. We remark that our investigations, which are inspired by applications arising in boundary integral equation methods (see [9]), in particular are inspired by McLean [18], and by its appendix on interpolation of Banach and Sobolev spaces. However a result of §4 is that one result claimed by McLean ([18, Theorem B.8]) is false.

Much of the Hilbert space Section 3 builds strongly on previous work. In particular, our result that, with the right normalisations, the norms in the K- and J-methods of interpolation coincide in the Hilbert space case is a corrected version of an earlier result of Ameur [2] (the normalisations proposed and the definition of the J-method norm seem inaccurate in [2]). What is new in our Theorem 3.3 is the method of proof: all of our proofs in this section are based on the spectral theorem that every bounded normal operator is unitarily equivalent to a multiplication operator on L²(X, M, µ), for some measure space (X, M, µ), this coupled with an elementary explicit treatment of interpolation on weighted L² spaces, which deals seamlessly with the general Hilbert space case without an assumption of separability or that H_0 ∩ H_1 is dense in H_0 and H_1. Again, our result in Theorem 3.5 that there is only one (geometric) interpolation space of exponent θ, when interpolating Hilbert spaces, is a version of McCarthy's [17] uniqueness theorem. What is new is that we treat the general Hilbert space case by a method of proof based on the aforementioned spectral theorem.
Our focus in this section is real interpolation, but we note in Remark 3.6 that, as a consequence of this uniqueness result (as noted in [17]), complex and real interpolation coincide in this Hilbert space case.

While our focus is primarily on interpolation of Hilbert spaces, large parts of the theory of interpolation spaces are appropriately described in the more general Banach space context, not least when trying to clarify those results independent of the extra structure a Hilbert space brings. The first §2 describes real interpolation in this general Banach space context. Mainly this section sets the scene. What is new is that our perspective leads us to pay close attention to the precise choice of normalisation in the definitions of the K- and J-methods of real interpolation (while at the same time making definitions suited to the later Hilbert space case).

We intend primarily that, throughout, vector space, Banach space, Hilbert space, should be read as complex vector space, Banach space, Hilbert space. But all the definitions and results proved apply equally in the real case with fairly obvious and minor changes and extensions to the arguments.

We finish this introduction with a few words on the history of interpolation (and see [4, 5, 23, 24]). There are two standard procedures for constructing interpolation spaces (see, e.g., [5]) in the Banach space setting. The first is the complex method due to Lions and Calderón, in fact two closely-related procedures for constructing interpolation spaces [5, Section 4.1], inspired by the classical proof of the Riesz-Thorin interpolation theorem. (These two procedures applied to a compatible pair X = (X_0, X_1) (defined in §2) produce the identical Banach space (with the identical norm) if either one of X_0 or X_1 is reflexive, in particular if either is a Hilbert space [5, Theorem 4.3.1].) We will mention the complex method only briefly, in Remark 3.6. Our focus is on the so-called real interpolation method. This term is used to denote a large class of methods for constructing interpolation spaces from a compatible pair, all these methods constructing the same interpolation spaces [24] (to within isomorphism, see Theorem 2.3 below). In this paper we focus on the two standard such methods, the K-method and the J-method, which are complementary, dual constructions due to Peetre and Lions (see, e.g., [19]), inspired by the classical Marcinkiewicz interpolation theorem [5, Section 1.3].

‡ This is the version of the corrigendum that has been accepted for publication in Mathematika on 11 May 2022.

¹ Let X_1 be a Banach space with norm ‖·‖_{X_1} and construct (using a Hamel basis and Zorn's lemma, see, e.g., [2, Example A, p. 249]) an unbounded linear functional f on X_1, and let X_0 be the completion of X_1 with respect to the norm ‖·‖_{X_0} given by ‖φ‖_{X_0} := ‖φ‖_{X_1} + |f(φ)|, φ ∈ X_1. Then X_0 and X_1 are Banach spaces with X_1 a linear subspace of X_0 (so X_0 and X_1 are linear subspaces of a larger linear space V, namely V = X_0), but the inclusion map is not continuous. This implies, by the closed graph theorem, that the inclusion map is not closed, i.e.
there exists a sequence (φ_n) ⊂ X_1 which is convergent in X_i to x_i ∈ X_i, i = 0, 1, with x_0 ≠ x_1. This in turn implies that x_0 − x_1 ≠ 0 but ‖x_0 − x_1‖_Σ ≤ lim inf_{n→∞}(‖x_0 − φ_n‖_{X_0} + ‖φ_n − x_1‖_{X_1}) = 0, so that ‖·‖_Σ is not a norm.
Introduction
This paper‡ provides in the first two sections a self-contained overview of the key results of the real method of interpolation for Banach and Hilbert spaces. This is a classical subject of study (see, e.g., [4, 5, 23, 24] and the recent review paper [3] for the Hilbert space case), and it might be thought that there is little more to be said on the subject. The novelty of our presentation (this the perspective of numerical analysts who, as users of interpolation theory, are ultimately concerned with the computation of interpolation norms and of error estimates expressed in terms of interpolation norms) is that we pay particular attention to the question: "When is equivalence of norms in fact equality of norms in the interpolation of Banach and Hilbert spaces?"
At the heart of the paper is the study, in Section 3, of the interpolation of Hilbert spaces H 0 and H 1 embedded in a larger linear space V , in the case when the interpolating space is also Hilbert (this the so-called problem of quadratic interpolation, see, e.g., [2,3,10,15,17]). The one line summary of this section is that all the key results of interpolation theory hold with "equality of norms" in place of "equivalence of norms" in this Hilbert space case, and this with minimal assumptions, in particular we assume nowhere that our Hilbert spaces are separable (as, e.g., in [2,3,15,17]).
Real interpolation between Hilbert spaces H_0 and H_1 produces interpolation spaces H_θ, 0 < θ < 1, intermediate between H_0 and H_1. In the last section of the paper we apply the Hilbert space interpolation results of §3 to the Sobolev spaces H^s(Ω) := {U|_Ω : U ∈ H^s(R^n)} and H̃^s(Ω) (defined as the closure of C_0^∞(Ω) in H^s(R^n)), for s ∈ R. Questions we address are: (i) Is the interpolation space the expected intermediate Sobolev space, i.e., are these scales of Sobolev spaces interpolation scales? (ii) When the interpolation space is the expected intermediate Sobolev space, do the interpolation space norm and intrinsic Sobolev norm coincide (the interpolation scale is exact), or, if not, how different can they be?
Real interpolation of Banach spaces

Suppose that X_0 and X_1 are Banach spaces that are linear subspaces of some larger vector space V. In this case we say that X = (X_0, X_1) is a compatible pair, and ∆ = ∆(X) := X_0 ∩ X_1 and Σ = Σ(X) := X_0 + X_1 are also linear subspaces of V: we equip these subspaces with the norms ‖φ‖_∆ := max(‖φ‖_{X_0}, ‖φ‖_{X_1}) and ‖φ‖_Σ := inf{‖φ_0‖_{X_0} + ‖φ_1‖_{X_1} : φ_0 ∈ X_0, φ_1 ∈ X_1, φ_0 + φ_1 = φ}, with which ∆ and Σ are Banach spaces [5, Lemma 2.3.1]. We note that, for j = 0, 1, ∆ ⊂ X_j ⊂ Σ, and these inclusions are continuous as ‖φ‖_Σ ≤ ‖φ‖_{X_j}, φ ∈ X_j, and ‖φ‖_{X_j} ≤ ‖φ‖_∆, φ ∈ ∆. Thus every compatible pair is a pair of Banach spaces that are subspaces of, and continuously embedded in, a larger Banach space. In our later application to Sobolev spaces we will be interested in the important special case where X_1 ⊂ X_0. In this case ∆ = X_1 and Σ = X_0 with equivalence of norms, indeed equality of norms if ‖φ‖_{X_1} ≥ ‖φ‖_{X_0}, for φ ∈ X_1. If X and Y are Banach spaces and B : X → Y is a bounded linear map, we will denote the norm of B by ‖B‖_{X,Y}, abbreviated as ‖B‖_X when X = Y. Given compatible pairs X = (X_0, X_1) and Y = (Y_0, Y_1) one calls the linear map A : Σ(X) → Σ(Y) a couple map, and writes A : X → Y, if A_j, the restriction of A to X_j, is a bounded linear map from X_j to Y_j. Automatically A : Σ(X) → Σ(Y) is bounded and A_∆, the restriction of A to ∆(X), is also a bounded linear map from ∆(X) to ∆(Y). On the other hand, given bounded linear operators A_j : X_j → Y_j, for j = 0, 1, one says that A_0 and A_1 are compatible if A_0φ = A_1φ, for φ ∈ ∆(X). If A_0 and A_1 are compatible then there exists a unique couple map A : Σ(X) → Σ(Y) which has A_0 and A_1 as its restrictions to X_0 and X_1, respectively.
Given a compatible pair X = (X 0 , X 1 ) we will call a Banach space X an intermediate space between X 0 and X 1 [5] if ∆ ⊂ X ⊂ Σ with continuous inclusions. We will call an intermediate space X an interpolation space relative to X if, whenever A : X → X, it holds that A(X) ⊂ X and A : X → X is a bounded linear operator. Generalising this notion, given compatible pairs X and Y , and Banach spaces X and Y , we will call (X, Y ) a pair of interpolation spaces relative to (X, Y ) if X and Y are intermediate with respect to X and Y , respectively, and if, whenever A : X → Y , it holds that A(X) ⊂ Y and A : X → Y is a bounded linear operator [5]. If (X, Y ) is a pair of interpolation spaces relative to (X, Y ) then [5,Theorem 2.4.2] there exists C > 0 such that, whenever A : X → Y , it holds that
A X,Y ≤ C max A X0,Y0 , A X1,Y1 .(1)
If the bound (1) holds for every A : X → Y with C = 1, then (X, Y ) are said to be exact interpolation spaces: for example the pairs (∆(X), ∆(Y )) and (Σ(X), Σ(Y )) are exact interpolation spaces with respect to (X, Y ), for all compatible pairs X and Y [5, Section 2.3]. If, for all A : X → Y ,
A X,Y ≤ A 1−θ X0,Y0 A θ X1,Y1 ,(2)
then the interpolation space pair (X, Y ) is said to be exact of exponent θ.
The K-method for real interpolation
To explain the K-method, for every compatible pair X = (X 0 , X 1 ) define the K-functional by
K(t, φ) = K(t, φ, X) := inf{(‖φ_0‖²_{X_0} + t²‖φ_1‖²_{X_1})^{1/2} : φ_0 ∈ X_0, φ_1 ∈ X_1, φ_0 + φ_1 = φ}, (3)
for t > 0 and φ ∈ Σ(X); our definition is precisely that of [15, p. 98], [6, 18]. (More usual, less suited to the Hilbert space case, but leading to the same interpolation spaces and equivalent norms, is to replace the 2-norm (‖φ_0‖²_{X_0} + t²‖φ_1‖²_{X_1})^{1/2} by the 1-norm ‖φ_0‖_{X_0} + t‖φ_1‖_{X_1} in this definition, e.g. [5].) Elementary properties of this K-functional are noted in [18, p. 319]. An additional elementary calculation is that, for φ ∈ ∆,
K(t, φ) ≤ K_1(t, φ) := inf_{a∈C} (|a|²‖φ‖²_{X_0} + t²|1 − a|²‖φ‖²_{X_1})^{1/2} = t‖φ‖_{X_0}‖φ‖_{X_1}/(‖φ‖²_{X_0} + t²‖φ‖²_{X_1})^{1/2}, (4)
this infimum achieved by the choice a = t²‖φ‖²_{X_1}/(‖φ‖²_{X_0} + t²‖φ‖²_{X_1}). Next we define a weighted L^q norm by ‖f‖_{θ,q} := (∫_0^∞ |t^{−θ}f(t)|^q dt/t)^{1/q}, for 0 < θ < 1 and 1 ≤ q < ∞, with the modification when q = ∞, that
‖f‖_{θ,∞} := ess sup_{t>0} |t^{−θ}f(t)|. (5)
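Returning to (4): as a quick numerical sanity check (our own illustration, not part of the paper; the norm values and the value of t below are made up), one can compare the closed form with a direct minimisation over real a ∈ [0, 1], which suffices since a complex or out-of-range a can only increase the objective:

import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative (made-up) values of ||phi||_{X0}, ||phi||_{X1} and t.
n0, n1, t = 1.3, 0.7, 2.0
objective = lambda a: np.sqrt(a**2 * n0**2 + t**2 * (1 - a)**2 * n1**2)
numeric = minimize_scalar(objective, bounds=(0.0, 1.0), method='bounded').fun
closed = t * n0 * n1 / np.sqrt(n0**2 + t**2 * n1**2)   # right-hand side of (4)
print(numeric, closed)   # the two values agree to optimiser tolerance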
Now define, for every compatible pair X = (X 0 , X 1 ), and for 0 < θ < 1 and 1 ≤ q ≤ ∞,
K_{θ,q}(X) := {φ ∈ Σ(X) : ‖K(·, φ)‖_{θ,q} < ∞}, (6)
this a normed space (indeed a Banach space [5,Theorem 3.4.2]) with the norm φ Kθ,q(X) := N θ,q K(·, φ) θ,q .
Here the constant N θ,q > 0 is an arbitrary normalisation factor. We can, of course, make the (usual) choice N θ,q = 1, but our preferred choice of N θ,q will be, where g(s) := s/ √ 1 + s 2 ,
N_{θ,q} := ‖g‖_{θ,q}^{−1} = (∫_0^∞ s^{q(1−θ)−1}(1 + s²)^{−q/2} ds)^{−1/q} for 1 ≤ q < ∞, and N_{θ,∞} := θ^{−θ/2}(1 − θ)^{−(1−θ)/2}; (8)
the supremum in (5) when f = g is achieved for t = ((1 − θ)/θ)^{1/2}. We note that, with this choice, N_{θ,q} = N_{1−θ,q} (substitute s = t^{−1} in (8)). Further, min(1, s)/√2 ≤ g(s) ≤ min(1, s), so that Ñ_{θ,q} ≤ N_{θ,q} ≤ √2 Ñ_{θ,q}, where Ñ_{θ,q} := ‖min(1, ·)‖_{θ,q}^{−1} = [qθ(1 − θ)]^{1/q} for 1 ≤ q < ∞, and Ñ_{θ,∞} := 1.
We note also that [18,Exercise B.5], with the choice (8),
N θ,2 = (2/π) sin(πθ) 1/2 .(9)
The normalisation Ñ_{θ,q} is used in [18, (B.4)] (and see [5, Theorem 3.4.1(e)]); (9) is used in [18, (B.9)], [6, p. 143], and dates back at least to [15, p. 99]. K_{θ,q}(X), for 0 < θ < 1 and 1 ≤ q ≤ ∞, is the family of spaces constructed by the K-method. We will often use the alternative notation (X_0, X_1)_{θ,q} for K_{θ,q}(X).
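To make the normalisation concrete, the following small script (our own illustration, not part of the paper, assuming NumPy and SciPy are available) evaluates the integral in (8) by quadrature and checks it against the closed form (9) at q = 2, and against the symmetry N_{θ,q} = N_{1−θ,q}:

import numpy as np
from scipy.integrate import quad

def N(theta, q):
    # N_{theta,q}^{-q} = int_0^infty s^{q(1-theta)-1} (1 + s^2)^{-q/2} ds, cf. (8)
    integral, _ = quad(lambda s: s**(q*(1 - theta) - 1) * (1 + s**2)**(-q/2), 0, np.inf)
    return integral**(-1.0/q)

for theta in (0.25, 0.5, 0.75):
    print(N(theta, 2), np.sqrt((2/np.pi) * np.sin(np.pi*theta)))   # closed form (9)
    print(N(theta, 3), N(1 - theta, 3))                            # N_{theta,q} = N_{1-theta,q}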
Our preference for the normalisation (8) is explained by part (iii) of the following lemma.
Lemma 2.1. Suppose that X = (X 0 , X 1 ) is a compatible pair and define the norm on K θ,q (X) with the normalisation (8).
(i) If φ ∈ ∆(X) then φ ∈ K θ,q (X) and φ Kθ,q(X) ≤ φ 1−θ X0 φ θ X1 ≤ φ ∆(X) . (ii) If φ ∈ K θ,q (X) then φ ∈ Σ(X) and φ Σ(X) ≤ φ Kθ,q(X) .
(iii) If X 0 = X 1 (with equality of norms) then X 0 = X 1 = Σ(X) = ∆(X) = K θ,q (X), with equality of norms.
Proof. If φ ∈ ∆(X) is non-zero then, for 0 < θ < 1, 1 ≤ q < ∞, using (4),
‖φ‖^q_{K_{θ,q}(X)} ≤ N^q_{θ,q}‖K_1(·, φ)‖^q_{θ,q} = N^q_{θ,q}‖φ‖^q_{X_0}‖φ‖^q_{X_1} ∫_0^∞ (t^{1−θ}/(‖φ‖²_{X_0} + t²‖φ‖²_{X_1})^{1/2})^q dt/t = ‖φ‖^{q(1−θ)}_{X_0}‖φ‖^{qθ}_{X_1},
the last equality a consequence of the identity
∫_0^∞ t^α/(a + bt²)^{q/2} dt = a^{(α+1−q)/2} b^{−(1+α)/2} ∫_0^∞ t^α/(1 + t²)^{q/2} dt = a^{(α+1−q)/2} b^{−(1+α)/2} N^{−q}_{(q−α−1)/q,q}, (10)
for a, b > 0 and −1 < α < q − 1. Similarly,
‖φ‖_{K_{θ,∞}(X)} ≤ N_{θ,∞}‖K_1(·, φ)‖_{θ,∞} = N_{θ,∞}‖φ‖^{1−θ}_{X_0}‖φ‖^θ_{X_1} sup_{s>0} s^{1−θ}/(1 + s²)^{1/2} = ‖φ‖^{1−θ}_{X_0}‖φ‖^θ_{X_1}. Clearly also ‖φ‖^{1−θ}_{X_0}‖φ‖^θ_{X_1} ≤ ‖φ‖_{∆(X)} so that (i) holds. For φ_0 ∈ X_0, φ_1 ∈ X_1, ‖φ_0‖²_{X_0} + t²‖φ_1‖²_{X_1} ≥ (t²/(1 + t²))(‖φ_0‖_{X_0} + ‖φ_1‖_{X_1})², from which it follows that K(t, φ) ≥ g(t)‖φ‖_{Σ(X)}, for φ ∈ Σ(X), t > 0,
where g(t) = t/ √ 1 + t 2 , which implies (ii). To see (iii), we recall that we have observed already that, if X 1 ⊂ X 0 , with φ X0 ≤ φ X1 , then X 1 = ∆(X) and X 0 = Σ(X), with equality of norms. Thus (iii) follows from (i) and (ii).
The following theorem collects key properties of the spaces K θ,q (X), in the first place that they are indeed interpolation spaces. Of these properties: (i) is proved, for example, as [18,Theorem B.2]; (ii) in [5,Theorem 3.4.1]; (iii) follows immediately from the definitions and Lemma 2.1; (iv) and (v) are part of [5,Theorem 3.4.2]; (vi) is obvious from the definitions. Finally (vii) is the reiteration or stability theorem, that K-method interpolation of K-interpolation spaces gives the expected K-interpolation spaces, proved, for example, in [18,Theorem B.6].
Theorem 2.2. Suppose that X = (X 0 , X 1 ) and Y = (Y 0 , Y 1 ) are compatible pairs. Then we have the following statements:
(i) For 0 < θ < 1, 1 ≤ q ≤ ∞, (K θ,q (X), K θ,q (Y )
) is a pair of interpolation spaces with respect to (X, Y ) that is exact of exponent θ.
(ii) For 0 < θ < 1, 1 ≤ q ≤ ∞, (X 0 , X 1 ) θ,q = (X 1 , X 0 ) 1−θ,q , with equality of norms if N θ,q = N 1−θ,q (which holds for the choice (8)).
(iii) For 0 < θ 1 < θ 2 < 1 and 1 ≤ q ≤ ∞, if X 1 ⊂ X 0 , then X 1 ⊂ K θ2,q (X) ⊂ K θ1,q (X) ⊂ X 0 , and the inclusion mappings are continuous. Furthermore, if φ X0 ≤ φ X1 , for φ ∈ X 1 , then, with the choice of normalisation (8), φ Kθ 1 ,q (X) ≤ φ Kθ 2 ,q (X) for φ ∈ K θ2,q (X), φ X0 ≤ φ Kθ 1 ,q (X) , for φ ∈ K θ1,q (X), and φ Kθ 2 ,q (X) ≤ φ X1 , for φ ∈ X 1 . (iv) For 0 < θ < 1, 1 ≤ q < ∞, ∆(X) is dense in K θ,q (X). (v) For 0 < θ < 1, 1 ≤ q < ∞, where X • j denotes the closure of ∆(X) in X j , (X 0 , X 1 ) θ,q = (X • 0 , X 1 ) θ,q = (X 0 , X • 1 ) θ,q = (X • 0 , X • 1 ) θ,q ,
with equality of norms.
(vi) For 0 < θ < 1, 1 ≤ q ≤ ∞, if Z j is a closed subspace of X j , for j = 0, 1, and Z = (Z 0 , Z 1 ), then K θ,q (Z) ⊂ K θ,q (X), with φ Kθ,q(X) ≤ φ Kθ,q(Z) , for φ ∈ K θ,q (Z).
(vii) Suppose that 1 ≤ q ≤ ∞, θ 0 , θ 1 ∈ [0, 1], and, for j = 0, 1, Z j := (X 0 , X 1 ) θj ,q , if 0 < θ j < 1, while Z j := X θj , if θ j ∈ {0, 1}. Then (Z 0 , Z 1 ) η,q = (X 0 , X 1 ) θ,q , with equivalence of norms, for θ = (1 − η)θ 0 + ηθ 1 and 0 < η < 1.
The J-method
We turn now to study of the J-method, which we will see is complementary and dual to the K-method. Given a compatible pair X = (X 0 , X 1 ), define the J-functional by
J(t, φ) = J(t, φ, X) := (‖φ‖²_{X_0} + t²‖φ‖²_{X_1})^{1/2}, for t > 0 and φ ∈ ∆(X), our precise definition here that of [18]. (More usual, less suited to the Hilbert space case but leading to the same interpolation spaces and equivalent norms, is to define J(t, φ) := max(‖φ‖_{X_0}, t‖φ‖_{X_1}), e.g. [5].) The space J_{θ,q}(X) is now defined as follows. The elements of J_{θ,q}(X) are those φ ∈ Σ(X) that can be represented in the form
φ = ∞ 0 f (t) dt t ,(11)
for some function f : (0, ∞) → ∆(X) that is strongly ∆(X)-measurable (see, e.g., [18, p. 321]) when (0, ∞) is equipped with Lebesgue measure, and such that
b a f (t) ∆(X) dt t < ∞ if 0 < a < b < ∞, and ∞ 0 f (t) Σ(X) dt t < ∞.(12)
J θ,q (X) is a normed space with the norm defined by
‖φ‖_{J_{θ,q}(X)} := N_{θ,q*}^{−1} inf_f ‖g_f‖_{θ,q}, (13) where 1/q* + 1/q = 1, (14)
g f (t) := J(t, f (t)), and the infimum is taken over all measurable f : (0, ∞) → ∆(X) such that (11) and (12) hold. Our definition is standard (precisely as in [18]), except for the normalisation factor N −1 θ,q * . It is a standard result, e.g. [18,Theorem B.3], that the spaces K θ,q (X) and J θ,q (X) coincide. Theorem 2.3. For 0 < θ < 1, 1 ≤ q ≤ ∞, J θ,q (X) = K θ,q (X), with equivalence of norms.
A major motivation for introducing the J-method is the following duality result. Here, for a Banach space X, X * denotes the dual of X.
Theorem 2.4. If X = (X 0 , X 1 ) is a compatible pair and ∆(X) is dense in X 0 and X 1 , then ∆(X) is dense in Σ(X) and X * := (X * 0 , X * 1 ) is a compatible pair, and moreover
∆(X) * = Σ(X * ) and Σ(X) * = ∆(X * ),(15)
with equality of norms. Further, for 0 < θ < 1, 1 ≤ q < ∞, with q * defined by (14),
(X 0 , X 1 ) * θ,q = (X * 0 , X * 1 ) θ,q * ,
with equivalence of norms: precisely, if we use the normalisation (8), for φ ∈ (X 0 , X 1 ) θ,q ,
φ Kθ,q(X) * ≤ φ J θ,q * (X * ) and φ K θ,q * (X * ) ≤ φ Jθ,q(X) * .
Proof. We embed X * j in ∆(X) * , for j = 0, 1, in the obvious way, mapping ψ ∈ X * j to its restriction to ∆(X), this mapping injective since ∆(X) is dense in X j . That (15) holds is shown as Theorem 2.7.1 in [5]. The remainder of the theorem is shown in the proof of [18,Theorem B.5].
The above theorem has the following corollary that is one motivation for our choice of normalisation in (13) (cf., the corresponding result for K-norms in Lemma 2.1 (iii)).
Corollary 2.5. If X = (X, X) then J θ,q (X) = X with equality of norms.
Proof. It is clear, from Lemma 2.1 and Theorem 2.3, that J θ,q (X) = X. It remains to show equality of the norms which we will deduce from Theorem 2.4 for 1 < q ≤ ∞.
We first observe (cf. part (vi) of Theorem 2.2) that, for 0 < θ < 1, 1 ≤ q ≤ ∞, it follows immediately from the definitions that if Z j is a closed subspace of Y j , for j = 0, 1, and Z = (Z 0 , Z 1 ), Y = (Y 0 , Y 1 ), then φ Jθ,q(Y ) ≤ φ Jθ,q(Z) , for φ ∈ J θ,q (Z). We will apply this result in the case where, for some Banach space X, Z j = X and Y j = X * * , the second dual of X, for j = 0, 1. (Recall that X is canonically and isometrically embedded as a closed subspace of X * * , this subspace the whole of X * * if X is reflexive.)
Now suppose that 0 < θ < 1 and 1 < q ≤ ∞. Then, for every Banach space X, where X = (X, X) and X * = (X * , X * ), it holds by Lemma 2.1 that X * = K θ,q * (X * ) with equality of norms. Applying Theorem 2.4 it holds for φ ∈ X that
φ X = φ X * * = φ K θ,q * (X * ) * ≤ φ Jθ,q(X * * ) ≤ φ Jθ,q(X)
and, where ·, · is the duality pairing on X × X * ,
φ Jθ,q(X) = sup 0 =ψ∈X * | φ, ψ | ψ Jθ,q(X) * ≤ sup 0 =ψ∈X * | φ, ψ | ψ K θ,q * (X * ) = sup 0 =ψ∈X * | φ, ψ | ψ X * = φ X .
Thus, for φ ∈ X, φ Jθ,q(X) = φ X for 0 < θ < 1 and 1 < q ≤ ∞. To see that this holds also for q = 1 we note that, for 1 ≤ q < ∞, 0 < θ < 1, and φ ∈ X,
‖φ‖_{J_{θ,q}(X)} = inf_f J_{θ,q}(f), where J_{θ,q}(f) := N_{θ,q*}^{−1} (∫_0^∞ (‖f(t)‖_X/g_θ(t))^q dt/t)^{1/q}, g_θ(t) := t^θ/(1 + t²)^{1/2}, and the infimum is taken over all f that satisfy (11) with ∫_0^∞ (‖f(t)‖_X/t) dt < ∞. Note that g_θ(t) has a global maximum on [0, ∞) at t_0 = (θ/(1 − θ))^{1/2}, with 2^{−1/2} ≤ g_θ(t_0) = N_{θ,∞}^{−1} < 1, and is decreasing on [t_0, ∞). Given ε > 0 set f(t) = ε^{−1} t χ_{(t_0,t_0+ε)}(t) φ, for t > 0, where χ_{(a,b)} denotes the characteristic function of (a, b) ⊂ R. Then (11) holds and
‖φ‖_{J_{θ,1}(X)} ≤ J_{θ,1}(f) = ‖φ‖_X N_{θ,∞}^{−1} ε^{−1} ∫_{t_0}^{t_0+ε} dt/g_θ(t) ≤ (g_θ(t_0)/g_θ(t_0 + ε)) ‖φ‖_X.
As this holds for arbitrary ε > 0, ‖φ‖_{J_{θ,1}(X)} ≤ ‖φ‖_X.
On the other hand, if ε > 0 and f satisfies (11) with ∫_0^∞ (‖f(t)‖_X/t) dt < ∞ and J_{θ,1}(f) ≤ ‖φ‖_{J_{θ,1}(X)} + ε, then, choosing η ∈ (0, 1) such that ∫_{(0,∞)\(η,η^{−1})} (‖f(t)‖_X/(t g_θ(t))) dt < ε, it follows (since g_θ(t) ≤ 1) that ‖φ − φ_η‖_X < ε, where φ_η := ∫_0^∞ (f_η(t)/t) dt and f_η := f χ_{(η,η^{−1})}. Thus
‖φ‖_X − ε ≤ ‖φ_η‖_X = lim_{q→1^+} ‖φ_η‖_{J_{θ,q}(X)} ≤ lim_{q→1^+} J_{θ,q}(f_η) = J_{θ,1}(f_η) ≤ J_{θ,1}(f) + ε ≤ ‖φ‖_{J_{θ,1}(X)} + 2ε.
As ε > 0 here is arbitrary, it follows that ‖φ‖_{J_{θ,1}(X)} = ‖φ‖_X.
Interpolation of Hilbert spaces
We focus in this section on so-called quadratic interpolation, meaning the special case of interpolation where the compatible pairs are pairs of Hilbert spaces and the interpolation spaces are also Hilbert spaces. For the remainder of the paper we assume the normalisations (8) and (13) for the K-and J-methods, and focus entirely on the case q = 2, in which case the normalisation factors are given explicitly by (9). With the norms we have chosen, the K-method and J-method interpolation spaces K θ,2 (X) and J θ,2 (X) are Hilbert spaces (in fact, as we will see, the same Hilbert space if X is a Hilbert space compatible pair).
The K-and J-methods in the Hilbert space case
We begin with a result on real interpolation that at first sight appears to be a special case, but we will see later is generic.
Theorem 3.1. Let (X , M, µ) be a measure space and let Y denote the set of measurable functions X → C. Suppose that, for j = 0, 1, w j ∈ Y, with w j > 0 almost everywhere, and let H j := L 2 (X , M, w j µ) ⊂ Y, a Hilbert space with norm
φ Hj := X w j |φ| 2 dµ 1/2 , for φ ∈ H j .
For 0 < θ < 1, where w θ := w 1−θ 0 w θ 1 , let H θ := L 2 (X , M, w θ µ), a Hilbert space with norm
φ H θ := X w θ |φ| 2 dµ 1/2 , for φ ∈ H θ .
Then, for 0 < θ < 1, where H = (H 0 , H 1 ),
H θ = K θ,2 (H) = J θ,2 (H),
with equality of norms.
Proof. We have to show that, for φ ∈ Σ(H),
φ H θ = φ Kθ,2(H) = φ Jθ,2(H) , for 0 < θ < 1. Now φ 2 Kθ,2(H) = N 2 θ,2 ∞ 0 t −1−2θ inf φ0+φ1=φ φ 0 2 H0 + t 2 φ 1 2 H1 dt and φ 0 2 H0 + t 2 φ 1 2 H1 = X w 0 |φ 0 | 2 + w 1 t 2 |φ 1 | 2 dµ.
Further (by applying [18, B.4, p. 333] pointwise inside the integral),
inf φ0+φ1=φ X w 0 |φ 0 | 2 + w 1 t 2 |φ 1 | 2 dµ = X w 0 w 1 t 2 w 0 + t 2 w 1 |φ| 2 dµ,
this infimum achieved by φ 1 = w 0 φ/(w 0 + t 2 w 1 ). Hence, and by Tonelli's theorem and (10),
φ 2 Kθ,2(H) = N 2 θ,2 X ∞ 0 t −1−2θ w 0 w 1 t 2 w 0 + t 2 w 1 dt |φ| 2 dµ = φ 2 H θ . Also, φ 2 Jθ,2(H) = N −2 θ,2 inf f ∞ 0 t −1−2θ f (t) 2 H0 + t 2 f (t) 2 H1 dt and f (t) 2 H0 + t 2 f (t) 2 H1 = X (w 0 + w 1 t 2 )|f (t)| 2 dµ,
so that, by Tonelli's theorem,
φ 2 Jθ,2(H) = N −2 θ,2 inf f X ∞ 0 t −1−2θ (w 0 + w 1 t 2 )|f (t)| 2 dt dµ.(16)
Now we show below that this infimum is achieved for the choice
f(t) = t^{2θ} φ / [(w_0 + w_1 t²) ∫_0^∞ s^{2θ−1}/(w_0 + w_1 s²) ds] = w_θ N²_{θ,2} t^{2θ} φ/(w_0 + w_1 t²); (17)
to get the second equality we use that, from (10),
∫_0^∞ s^{2θ−1}/(w_0 + w_1 s²) ds = ∫_0^∞ s^{1−2θ}/(w_0 s² + w_1) ds = 1/(w_0^{1−θ} w_1^{θ} N²_{θ,2}) = 1/(w_θ N²_{θ,2}).
Substituting from (17) in (16) gives that
‖φ‖²_{J_{θ,2}(H)} = N²_{θ,2} ∫_X w²_θ |φ|² ∫_0^∞ t^{−1+2θ}/(w_0 + w_1 t²) dt dµ = ∫_X w_θ |φ|² dµ = ‖φ‖²_{H_θ}.
It remains to justify that the infimum is indeed attained by (17). We note first that the definition of f implies that ∫_0^∞ (f(t)/t) dt = φ, so that (11) holds. Now suppose that g is another eligible function such that (11) holds, and let δ = g − f. Then, by (17),
∫_X ∫_0^∞ t^{−1−2θ}(w_0 + w_1 t²)|g(t)|² dt dµ − ∫_X ∫_0^∞ t^{−1−2θ}(w_0 + w_1 t²)|f(t)|² dt dµ ≥ 2 Re ∫_X ∫_0^∞ t^{−1−2θ}(w_0 + w_1 t²) f(t) δ̄(t) dt dµ = 2 N²_{θ,2} Re ∫_X w_θ φ ∫_0^∞ (δ̄(t)/t) dt dµ = 0,
since both f and g satisfy (11), so that ∫_0^∞ (δ(t)/t) dt = 0.
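Theorem 3.1 can be exercised directly when µ is counting measure on finitely many points; the sketch below (our own illustration, not part of the paper, with randomly generated weights and data, assuming NumPy and SciPy) evaluates the K- and J-norms by quadrature, the latter using the optimal f from (17), and compares both with ‖φ‖_{H_θ}:

import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(0)
n, theta = 6, 0.3
w0, w1 = rng.uniform(0.5, 2.0, n), rng.uniform(0.5, 2.0, n)
phi = rng.standard_normal(n)
N2 = (2/np.pi) * np.sin(np.pi*theta)          # N_{theta,2}^2, cf. (9)

def K2(t):   # K(t, phi)^2 via the pointwise infimum in the proof
    return np.sum(w0 * w1 * t**2 / (w0 + t**2 * w1) * phi**2)

K_norm2, _ = quad(lambda t: N2 * t**(-1 - 2*theta) * K2(t), 0, np.inf)

def f_opt(t):   # the optimal f from (17)
    return w0**(1 - theta) * w1**theta * N2 * t**(2*theta) / (w0 + w1 * t**2) * phi

J_norm2, _ = quad(lambda t: (1/N2) * t**(-1 - 2*theta)
                  * np.sum((w0 + w1 * t**2) * f_opt(t)**2), 0, np.inf)
H_theta2 = np.sum(w0**(1 - theta) * w1**theta * phi**2)
print(np.sqrt(K_norm2), np.sqrt(J_norm2), np.sqrt(H_theta2))   # all three agree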
The following is a straightforward corollary of the above theorem.
Corollary 3.2. Suppose that H = (H_0, H_1) is a compatible pair of Hilbert spaces, that (X, M, µ) is a measure space and Y the set of measurable functions X → C, and that there exists a linear map A : Σ(H) → Y and, for j = 0, 1, functions w_j ∈ Y, with w_j > 0 almost everywhere, such that the mappings A : H_j → L²(X, M, w_jµ) are unitary isomorphisms. For 0 < θ < 1 define intermediate spaces H_θ, with ∆(H) ⊂ H_θ ⊂ Σ(H), by H_θ := {φ ∈ Σ(H) : ‖φ‖_{H_θ} := (∫_X w_θ |Aφ|² dµ)^{1/2} < ∞}, where w_θ := w_0^{1−θ} w_1^θ. Then, for 0 < θ < 1, H_θ = K_{θ,2}(H) = J_{θ,2}(H)
, with equality of norms. In the next theorem we justify our earlier statement that the situation described in Theorem 3.1 is generic, the point being that it follows from the spectral theorem for bounded normal operators that every Hilbert space compatible pair is unitarily equivalent to a compatible pair of weighted L 2 -spaces. We should make clear that, while our method of proof that the K-method and J-method produce the same interpolation space, with equality of norms, appears to be new, this result (for the somewhat restricted separable Hilbert space case, with ∆(H) dense in H 0 and H 1 ) is claimed recently in Ameur [2, Example 4.1] (see also [3,Section 7]), though the choice of normalisation, details of the argument, and the definition of the J-method norm appear inaccurate in [2].
In the following theorem and subsequently, for a Hilbert space H, (·, ·) H denotes the inner product on H. We note that ∆(H) and Σ(H) are Hilbert spaces if we equip them with the equivalent norms defined by φ ∆(H) := J(1, φ, H) and φ Σ(H) := K(1, φ, H), respectively. In the next theorem we use standard results (e.g., [13, Section VI, Theorem 2.23]) on non-negative, closed symmetric forms and their associated self-adjoint operators.
Theorem 3.3. Suppose that H = (H 0 , H 1 ) is a compatible pair of Hilbert spaces. Then, for 0 < θ < 1, φ Kθ,2(H) = φ Jθ,2(H) , for φ ∈ (H 0 , H 1 ) θ,2 . Further, where H • 1 denotes the closure in H 1 of ∆(H), defining the unbounded, self-adjoint, injective operator T : H • 1 → H • 1 by (T φ, ψ) H1 = (φ, ψ) H0 , φ, ψ ∈ ∆(H),
and where S is the unique non-negative square root of T , it holds that
‖φ‖_{H_0} = ‖Sφ‖_{H_1} and ‖φ‖_{K_{θ,2}(H)} = ‖φ‖_{J_{θ,2}(H)} = ‖S^{1−θ}φ‖_{H_1}, for φ ∈ ∆(H), so that K_{θ,2}(H) is the closure of ∆(H) in Σ(H) with respect to the norm defined by ‖φ‖_θ := ‖S^{1−θ}φ‖_{H_1}.
Proof. For j = 0, 1, define the non-negative bounded, injective operator A_j : ∆(H) → ∆(H) by the relation (A_jφ, ψ)_{∆(H)} = (φ, ψ)_{H_j}, for φ, ψ ∈ ∆(H), where (·,·)_{∆(H)} denotes the inner product inducing the norm ‖·‖_{∆(H)} = J(1, ·, H), so that (φ, ψ)_{∆(H)} = (φ, ψ)_{H_0} + (φ, ψ)_{H_1} and hence A_0 + A_1 = I. By the spectral theorem applied to the bounded self-adjoint operator A_0, there exist a measure space (X, M, µ), a bounded measurable function w_0 on X, and a unitary operator U : ∆(H) → L²(X, M, µ) such that A_0φ = U^{−1} w_0 U φ, for φ ∈ ∆(H),
and w 0 > 0 µ-almost everywhere since A 0 is non-negative and injective. Defining
w_1 := 1 − w_0 we see that A_1φ = U^{−1} w_1 U φ, for φ ∈ ∆(H), so that also w_1 > 0 µ-almost everywhere. For φ ∈ ∆(H), ‖φ‖²_{H_j} = (U^{−1} w_j U φ, φ)_{∆(H)} = (w_j U φ, U φ)_{L²(X,M,µ)} = ‖Uφ‖²_{L²(X,M,w_jµ)}, for j = 0, 1. Thus, where (similarly to H•_1) H•_0 denotes the closure of ∆(H) in H_0, U extends to an isometry U : H•_j → L²(X, M, w_jµ), for j = 0, 1. Applying Corollary 3.2 to the compatible pair (H•_0, H•_1), and noting (by Theorem 2.2(v), and since the J-method norm involves only functions valued in ∆(H) = ∆((H•_0, H•_1))) that K_{θ,2}(H) = K_{θ,2}((H•_0, H•_1)) and J_{θ,2}(H) = J_{θ,2}((H•_0, H•_1)) with equality of norms, we deduce that K_{θ,2}(H) = J_{θ,2}(H) = H_θ, with equality of norms, where H_θ := {φ ∈ Σ(H) : ‖φ‖_{H_θ} := ‖Uφ‖_{L²(X,M,w_θµ)} < ∞}, and w_θ := w_0^{1−θ} w_1^θ. Moreover, for φ ∈ ∆(H), the unbounded operator T : H•_1 → H•_1 satisfies Tφ = U^{−1}(w_0/w_1)Uφ, so that ‖S^{1−θ}φ‖²_{H_1} = (T^{1−θ}φ, φ)_{H_1} = (A_1T^{1−θ}φ, φ)_{∆(H)} = (w_θ U φ, U φ)_{L²(X,M,µ)} = ‖φ‖²_{H_θ}, for 0 < θ < 1, and ‖Sφ‖²_{H_1} = (w_0 U φ, U φ)_{L²(X,M,µ)} = ‖φ‖²_{H_0}.
Suppose that φ ∈ ∆(H). The above proof shows that
φ Hj = X w j |U φ| 2 dµ 1/2 , for j = 0, 1, and φ Kθ,2(H) = X w θ |U φ| 2 dµ 1/2 , for 0 < θ < 1, with U φ ∈ L 2 (X , M, µ), 0 < w j ≤ 1 µ-almost everywhere, for j = 0, 1, and w θ := w 1−θ 0 w θ 1 .
It follows from the dominated convergence theorem that φ Kθ,2(H) depends continuously on θ and that
lim θ→0 + φ Kθ,2(H) = φ H0 and lim θ→1 − φ Kθ,2(H) = φ H1 .
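For intuition, the operator-theoretic characterisation in Theorem 3.3 can be tried out in finite dimensions, where everything is a matrix computation. In the sketch below (our own illustration, not part of the paper, assuming NumPy and SciPy), H_0 and H_1 are C^n with norms ‖x‖²_{H_0} = x^T A x and ‖x‖²_{H_1} = x^T B x for randomly generated symmetric positive definite A and B, so that T = B^{−1}A:

import numpy as np
from scipy.linalg import fractional_matrix_power

rng = np.random.default_rng(1)
n = 4
M = rng.standard_normal((n, n)); A = M @ M.T + n*np.eye(n)   # Gram matrix of (.,.)_{H0}
M = rng.standard_normal((n, n)); B = M @ M.T + n*np.eye(n)   # Gram matrix of (.,.)_{H1}
T = np.linalg.solve(B, A)   # (T x, y)_{H1} = (x, y)_{H0}; self-adjoint w.r.t. B
x = rng.standard_normal(n)

def theta_norm(theta):      # ||S^{1-theta} x||_{H1}, with S = T^{1/2}
    y = fractional_matrix_power(T, (1 - theta)/2) @ x
    return float(np.sqrt(np.real(np.conj(y) @ B @ y)))

print(theta_norm(0.0), np.sqrt(x @ A @ x))   # theta -> 0 recovers ||x||_{H0}
print(theta_norm(1.0), np.sqrt(x @ B @ x))   # theta -> 1 recovers ||x||_{H1}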
In the special case, considered in [15], that H 0 is densely and continuously embedded in H 1 , when ∆(H) = H 0 and Σ(H) = H 1 , the above theorem can be interpreted as stating that (H 0 , H 1 ) θ,2 is the domain of the unbounded self-adjoint operator S 1−θ : H 1 → H 1 (and H 0 the domain of S), this a standard characterisation of the K-method interpolation spaces in this special case, see, e.g., [15, p. 99] or [6]. The following theorem (cf., [6,Theorem B.2]), further illustrating the application of Corollary 3.2, treats the special case when H 1 ⊂ H 0 , with a compact and dense embedding (which implies that both H 0 and H 1 are separable). Theorem 3.4. Suppose that H = (H 0 , H 1 ) is a compatible pair of Hilbert spaces, with H 1 densely and compactly embedded in H 0 . Then the operator T :
H 1 → H 1 , defined by (T φ, ψ) H1 = (φ, ψ) H0 , φ, ψ ∈ H 1 ,
is compact, self-adjoint, and injective, and there exists an orthogonal basis, {φ j : j ∈ N}, for H 1 , where each φ j is an eigenvector of T with corresponding eigenvalue λ j . Further, λ 1 ≥ λ 2 ≥ ... > 0 and λ j → 0 as j → ∞. Moreover, normalising this basis so that φ j H0 = 1 for each j, it holds for 0 < θ < 1 that
(H 0 , H 1 ) θ,2 = φ = ∞ k=1 a j φ j ∈ H 0 : φ * θ := ∞ j=1 λ −θ j |a j | 2 1/2 < ∞ . Further, for 0 < θ < 1, φ * θ = φ Kθ,2(H) = φ Jθ,2(H) , for φ ∈ (H 0 , H 1 ) θ,2 , and, for j = 0, 1, φ * j = φ Hj , for φ ∈ H j .
Proof. Clearly T is injective and self-adjoint, and we see easily (this a standard argument) that T is compact. The existence of an orthogonal basis of H 1 consisting of eigenvectors of T , and the properties of the eigenvalues claimed, follow from standard results [18,Theorem 2.36], and the positivity and injectivity of T . Further,
(φ i , φ j ) H0 = (T φ i , φ j ) H1 = λ i (φ i , φ j ) H1 . Thus, normalising by φ j H0 = 1, it holds that (φ i , φ j ) H0 = δ ij , and (φ i , φ j ) H1 = λ −1 i δ ij , for i, j ∈ N. Since H 1 is dense in H 0 , {φ j } is an orthonormal basis of H 0 . Further, for φ ∈ H 1 and j ∈ N, (φ j , φ) H1 = λ −1 j (T φ j , φ) H1 = λ −1 j (φ j , φ) H0 . Thus, for φ ∈ H 0 , φ 2 H0 = ∞ j=1 |(φ j , φ) H0 | 2 , while, for φ ∈ H 1 , φ 2 H1 = ∞ j=1 λ i |(φ, φ j ) H1 | 2 = ∞ j=1 λ −1 i |(φ, φ j ) H0 | 2 .
To complete the proof we use Corollary 3.2, with the measure space (N, 2 N , µ), where µ is counting measure, and where Aφ, for φ ∈ H 0 , is the function Aφ : N → C defined by Aφ(j) = (φ, φ j ) H0 , j ∈ N, and w 0 and w j are defined by w 0 (j) = 1 and
w 1 (j) = λ −1/2 j , j ∈ N.
Uniqueness of interpolation in the Hilbert space case
Theorem 3.3 is a statement that, in the Hilbert space case, three standard methods of interpolation produce the same interpolation space, with the same norm. This is illustrative of a more general result. It turns out, roughly speaking, that all methods of interpolation between Hilbert spaces that produce, for 0 < θ < 1, interpolation spaces that are Hilbert spaces and that are exact of exponent θ, must coincide.
To make a precise statement we need the following definition: given a Hilbert space compatible pair H = (H 0 , H 1 ), an intermediate space H between H 0 and H 1 is said to be a geometric interpolation space of exponent θ [17], for some 0 < θ < 1, relative to H, if H is a Hilbert space, ∆(H) is dense in H, and the following three conditions hold for linear operators T : The following is the key uniqueness and existence theorem; the uniqueness part is due to McCarthy [17] in the separable Hilbert space case with ∆(H) dense in H 0 and H 1 . We emphasise that this theorem states that, given a Hilbert space compatible pair, two geometric interpolation spaces with the same exponent must have equal norms, not only equivalent norms.
i) If T maps ∆(H) to ∆(H) and T φ H0 ≤ λ 0 φ H0 and T φ H1 ≤ λ 1 φ H1 , for all φ ∈ ∆(H), then T φ H ≤ λ 1−θ 0 λ θ 1 φ H , for all φ ∈ ∆(H); ii) If T maps ∆(H) to H, for some Hilbert space H, and T φ H ≤ λ 0 φ H0 and T φ H ≤ λ 1 φ H1 , for all φ ∈ ∆(H), then T φ H ≤ λ 1−θ 0 λ θ 1 φ H , for all φ ∈ ∆(H); iii) If T maps H to ∆(H), for some Hilbert space H, and T φ H0 ≤ λ 0 φ H and T φ H1 ≤ λ 1 φ H , for all φ ∈ H, then T φ H ≤ λ 1−θ 0 λ θ 1 φ H , for all φ ∈ H.
Theorem 3.5. Suppose that H = (H 0 , H 1 ) is a compatible pair of Hilbert spaces. Then, for 0 < θ < 1, K θ,2 (H) is the unique geometric interpolation space of exponent θ relative to H.
Proof. That H θ := K θ,2 (H) is a geometric interpolation space of exponent θ follows from Lemma 2.1(iii) and Theorem 2.2 (i) and (iv). To see that H θ is the unique geometric interpolation space we adapt the argument of [17], but using the technology (and notation) of the proof of Theorem 3.3.
So suppose that G is another geometric interpolation space of exponent θ relative to H. To show that G = H θ with equality of norms it is enough to show that φ G = φ Hθ , for φ ∈ ∆(H).
Using the notation of the proof of Theorem 3.3, recall that T : H • 1 → H • 1 is given by T = U −1 ωU , where ω := w 0 /w 1 . For 0 ≤ a < b let χ a,b ∈ Y denote the characteristic function of the set {x ∈ X : a ≤ ω(x) < b}, and define the projection operator P (a, b) :
Σ(H • ) → ∆(H) by P (a, b)φ = U −1 χ a,b U φ.
Recalling that U : H • j → L 2 (X , M, w j µ) is unitary, we see that the mapping P (a, b) : H • j → H • j has norm one, for j = 0, 1: since G and H θ are geometric interpolation spaces, also P (a, b) : G → G and P (a, b) : H θ → H θ have norm one. Thus P (a, b) is an orthogonal projection operator on each of H • j , j = 0, 1, G, and H θ , for otherwise there exists a ψ in the null-space of P (a, b) which is not orthogonal to some φ in the range of P (a, b), and then, for some η ∈ C, φ > φ + ηψ ≥ P (φ + ηψ) = φ , a contradiction.
Let H denote the range of P (a, b) : Σ(H) → ∆(H) equipped with the norm of H 1 . Clearly P (a, b) :
H • 1 → H has norm one, while it is a straightforward calculation that P (a, b) : H • 0 → H has norm ≤ χ a,b ω −1/2 L ∞ (X ,M,µ) ≤ a −1/2 , so that P (a, b) : G → H has norm ≤ a −(1−θ)/2 . Similarly, where R is the inclusion map (so Rφ = φ), R : H → H • 1 has norm one, R : H → H • 0 has norm ≤ χ a,b ω 1/2 L ∞ (X ,M,µ) ≤ b 1/2 , so that R : H → G has norm ≤ b (1−θ)/2 . Thus, for φ ∈ H, a 1−θ φ 2 H1 ≤ φ 2 G ≤ b 1−θ φ 2 H1 .(18)
Finally, for every p > 1, we observe that, for φ ∈ ∆(H), where φ n := P (p n , p n+1 )φ, since {x : ω(x) = 0} has µ-measure zero,
φ 2 Hθ = ∞ n=−∞ φ n 2 Hθ and φ 2 G = ∞ n=−∞ φ n 2 G .(19)
Further, for each n,
φ n 2 Hθ = X w θ |U φ n | 2 dµ = X χ p n ,p n+1 w θ |U φ n | 2 dµ = X χ p n ,p n+1 ω 1−θ w 1 |U φ n | 2 dµ so that p n(1−θ) φ n 2 H1 ≤ φ n 2 Hθ ≤ p (n+1)(1−θ) φ n 2
H1
. Combining these inequalities with (18) (taking a = p n , b = p n+1 ) and (19), we see that
p −(1−θ) φ 2 G ≤ φ 2 Hθ ≤ p 1−θ φ 2 G .
Since this holds for all p > 1, φ Hθ = φ G .
Remark 3.6. For those who like the language of category theory (commonly used in the interpolation space context, e.g. [5,24]), the above theorem says that there exists a unique functor F from the category of Hilbert space compatible pairs to the category of Hilbert spaces such that: (ii) If ∆(H) is dense in H 0 and H 1 , so that H * := (H * 0 , H * 1 ) is a Hilbert space compatible pair, then
Duality and interpolation scales
(H 0 , H 1 ) * θ,2 = (H * 0 , H * 1 ) θ,2 ,
for 0 < θ < 1, with equality of norms.
Proof. To prove (i) we note first that, by Theorem 2.2 (v), we can assume ∆(H) is dense in H 0 and H 1 .
With this assumption it holds-see the proof of Theorem 3.3-that there exists a measure space (X , M, µ), a unitary operator U : ∆(H) → L 2 (X , M, µ), and functions w j : X → [0, ∞) that are µmeasurable and non-zero µ-almost everywhere, such that U : H j → L 2 (X , M, w j µ) is a unitary operator for j = 0, 1 and U : (H 0 , H 1 ) θ,2 → L 2 (X , M, w θ µ) is a unitary operator for 0 < θ < 1, where w θ := w 1−θ 0 w θ 1 . But this identical argument, repeated for (H 0 , H 1 ) η,2 , gives that U : (H 0 , H 1 ) η,2 → L 2 (X , M, W η µ) is a unitary operator, where W η := W 1−η 0 W η 1 and W j := w θj . But this proves the result since W η = w θ if θ = (1 − η)θ 0 + ηθ 1 and 0 < η < 1.
That (ii) holds is immediate from Theorems 2.4 and 3.3. We will say that {H s : s ∈ I} is an exact interpolation scale if, moreover, the norms of (H s , H t ) η,2 and H θ are equal, for s, t ∈ I and 0 < η < 1.
In this terminology part (i) of the above theorem is precisely a statement that, for every Hilbert space
compatible pair H = (H 0 , H 1 ), where H s := (H 0 , H 1 ) s,2 , for 0 < s < 1, {H s : 0 ≤ s ≤ 1} is an exact interpolation scale. If ∆(H) is dense in H 0 and H 1 , part (ii) implies that also {H * s : 0 ≤ s ≤ 1}
is an exact interpolation scale.
Interpolation of Sobolev spaces
In this section we study Hilbert space interpolation, analysed in Section 3, applied to the classical Sobolev spaces H s (Ω) and H s (Ω), for s ∈ R and an open set Ω. (Our notations here, which we make precise below, are those of [18].) This is a classical topic of study (see, e.g., notably [15]). Our results below provide a more complete answer than hitherto available to the following questions: Our answers to (i) and (ii) will consist mainly of examples and counterexamples. In particular, in the course of answering these questions we will write down, in certain cases of interest, explicit expressions for interpolation norms that may be of some independent interest. Our investigations in this section are in very large part prompted and inspired by the results and discussion in [18, Appendix B], though we will exhibit a counterexample to one of the results claimed in [18]. We talk a little vaguely in the above paragraph about "Hilbert space interpolation". This vagueness is justified in Section 3.2 which makes clear that, for 0 < θ < 1, there is only one method of interpolation of a pair of compatible Hilbert spaces H = (H 0 , H 1 ) which produces an interpolation space H θ that is a geometric interpolation space of exponent θ (in the terminology of §3.2). Concretely this intermediate space is given both by the real interpolation methods, the K-and J-methods with q = 2, and by the complex interpolation method: to emphasise, these methods give the identical interpolation space with identical norm (with the choice of normalisations we have made for the K-and J-methods). We will, throughout this section, use H θ and (H 0 , H 1 ) θ as our notations for this interpolation space and · Hθ as our notation for the norm, so that H θ = (H 0 , H 1 ) θ and · Hθ are abbreviations for (H 0 , H 1 ) θ,2 = K θ,2 (H) = J θ,2 (H) and · Kθ,2(H) = · Jθ,2(H) , respectively, the latter defined with the normalisation (9).
The spaces H s (R n )
Our function space notations and definitions will be those in [18]. For n ∈ N let S(R n ) denote the Schwartz space of smooth rapidly decreasing functions, and S * (R n ) its dual space, the space of tempered distributions. For u ∈ S(R n ), v ∈ S * (R n ), we denote by u, v the action of v on u, and we embed L 2 (R n ) ⊃ S(R n ) in S * (R n ) in the usual way, i.e., u, v := (u, v), where (·, ·) denotes the usual inner product on L 2 (R n ), in the case that u ∈ S(R n ), v ∈ L 2 (R n ). We define the Fourier transformû = Fu ∈ S(R n ), for u ∈ S(R n ), byû (ξ) := 1 (2π) n/2 R n e −iξ·x u(x) dx, for ξ ∈ R n , and then extend F to a mapping from S * (R n ) to S * (R n ) in the canonical way, e.g., [18]. For s ∈ R we define the Sobolev space H s (R n ) ⊂ S * (R n ) by [18] (for an open set Ω, D(Ω) denotes the space of those u ∈ C ∞ (Ω) that are compactly supported in Ω). Hence and from (20) it is clear that H t (R n ) is continuously and densely embedded in H s (R n ), for s < t, with u H s (R n ) ≤ u H t (R n ) , for u ∈ H t (R n ). By Plancherel's theorem, H 0 (R n ) = L 2 (R n ) with equality of norms, so that H s (R n ) ⊂ L 2 (R n ) for s ≥ 0. Moreover, from the definition (20),
H s (R n ) := u ∈ S * (R n ) : u H s (R n ) := R n (1 + |ξ| 2 ) s |û(ξ)| 2 dξ 1/2 < ∞ .(20)D(R n ) ⊂ S(R n ) ⊂ H s (R n ) are dense in H s (R n )u 2 H m (R n ) = |α|≤m m |α| |α| α ∂ α u 2 L 2 (R n ) , for m ∈ N 0 := N ∪ {0},(21)
where, for α = (α 1 , ...α n ) ∈ N n 0 , |α| := n i=1 α i , |α| α := |α|!/(α 1 ! · · · α n !), ∂ α := n i=1 ∂ αi i , and ∂ j := ∂/∂x j (the derivative in a distributional sense).
The following result ( [18, Theorem B.7]) is all there is to say about H s (R n ) and Hilbert space interpolation.
Theorem 4.1. {H s (R n ) : s ∈ R} is an exact interpolation scale, i.e., for s 0 , s 1 ∈ R, 0 < θ < 1, (H s0 (R n ), H s1 (R n )) θ = H s (R n ), with equality of norms, if s = s 0 (1 − θ) + s 1 θ.
Proof. This follows from Corollary 3.2, applied with A = F, X = R n , and w j (ξ) = (1 + |ξ| 2 ) sj /2 .
The spaces H s (Ω)
For Ω ⊂ R n there are at least two definitions of H s (Ω) in use (equivalent if Ω is sufficiently regular). Following [18] (or see [24, With this norm, for s ∈ R, H s (Ω) is a Hilbert space, D(Ω) := {U | Ω : U ∈ D(R n )} is dense in H s (Ω), and H t (Ω) is continuously and densely embedded in H s (Ω) with u H s (Ω) ≤ u H t (Ω) , for s < t and u ∈ H t (Ω) [18]. Further L 2 (Ω) = H 0 (Ω) with equality of norms, so that H s (Ω) ⊂ L 2 (Ω) for s > 0.
For m ∈ N 0 , let W m 2 (Ω) := u ∈ L 2 (Ω) : ∂ α u ∈ L 2 (Ω) for |α| ≤ m (our notation is that of [18] and other authors, but the notation H m (Ω) for this space is very common, e.g. [15]). ensures that u H m (Ω) ≥ u W m 2 (Ω) , for u ∈ H m (Ω), with equality when Ω = R n . Clearly, H m (Ω) is continuously embedded in W m 2 (Ω), for all Ω. Whether H s (Ω) is an interpolation scale depends on the smoothness of Ω. As usual (see, e.g., [18, p. 90]), for m ∈ N 0 we will say that Ω ⊂ R n is a C m open set if ∂Ω is bounded and if, in a neighbourhood of every point on ∂Ω, ∂Ω is (in some rotated coordinate system) the graph of a C m function, and the part of Ω in that neighbourhood is part of the hypograph of the same function. Likewise, for 0 < β ≤ 1, we will say that Ω is a C 0,β open set, if it is a C 0 open set and ∂Ω is locally the graph of a function that is Hölder continuous of index β. In particular, a C 0,1 or Lipschitz open set has boundary that is locally the graph of a Lipschitz continuous function. We say that {x ∈ R n : x n < (x 1 , . . . , x n−1 )} is a Lipschitz hypograph if : R n−1 → R is a Lipschitz function.
Let R : H s (R n ) → H s (Ω) be the restriction operator, i.e., RU = U | Ω , for U ∈ H s (R n ): this is an operator with norm one for all s ∈ R. It is clear that W m 2 (Ω) = H m (Ω), with equivalence of norms, if there exists a continuous extension operator E : W m 2 (Ω) → H m (R n ) that is a right inverse to R, so that REu = u, for all u ∈ W m 2 (Ω). Such an extension operator is also a continuous extension operator E : H m (Ω) → H m (R n ). An extension operator E s : H s (Ω) → H s (R n ) exists for all Ω and all s ∈ R: for u ∈ H s (Ω), set U := E s u to be the unique minimiser in H s (R n ) of U H s (R n ) subject to U | Ω = u (see [18, p. 77]). These operators E s have norm one for all s ∈ R and all Ω. But whether H s (Ω) is an interpolation scale, for some range of s, depends on the existence of an extension operator which, simultaneously, maps H s (Ω) to H s (R n ), for two distinct values of s. The following lemma is a quantitative version of standard arguments, e.g. [24,Section 4.3].
Lemma 4.2. Suppose that s 0 ≤ s 1 , 0 < θ < 1, and set s = s 0 (1 − θ) + s 1 θ, H = (H s0 (Ω), H s1 (Ω)). Then H s (Ω) ⊂ H θ = (H s0 (Ω), H s1 (Ω)) θ , with u Hθ ≤ u H s (Ω) , for u ∈ H s (Ω). If also, for some λ 0 , λ 1 ≥ 1, E : H s0 (Ω) → H s0 (R n ) is an extension operator, with Eu H s j (R n ) ≤ λ j u H s j (Ω) for u ∈ H s1
(Ω) and j = 0, 1, then H s (Ω) = H θ with equivalence of norms, precisely with
λ θ−1 0 λ −θ 1 u H s (Ω) ≤ u Hθ ≤ u H s (Ω) , for u ∈ H s (Ω).1 2 u H s (Ω) ≤ u Hs ≤ u H s (Ω) , for u ∈ H s (Ω).
The construction of a continuous extension operator E : W m 2 (Ω) → H m (R n ), for each m ∈ N 0 in the case Ω Lipschitz, dates back to Calderón [7]. Stein [22, p. 181], in the case Ω Lipschitz, constructed an extension operator E : W m 2 (Ω) → H m (R n ), not depending on m ∈ N 0 , that is continuous for all m. It is well known that if an open set is merely C 0,β , for some β < 1, rather than C 0,1 , then, for each m ∈ N, there may exist no extension operator E : W m 2 (Ω) → H m (R n ), so that H m (Ω) W m 2 (Ω). This is the case, for example, for the cusp domain in Lemma 4.13 below: see [16, p. 88]. (Here, as usual, domain means connected open set.) A strictly larger class than the class of C 0,1 domains for which continuous extension operators do exist is the class of locally uniform domains [12].
Definition 4.4. A domain Ω ⊂ R n is said to be ( , δ) locally uniform if, between any pair of points x, y ∈ Ω such that |x − y| < δ, there is a rectifiable arc γ ⊂ Ω of length at most |x − y|/ and having the property that, for all z ∈ γ, dist(z, ∂Ω) ≥ |z − x| |z − y| |x − y| .
All Lipschitz domains are locally uniform, but the class of locally uniform domains contains also sets Ω ⊂ R n with wilder, fractal boundaries, in fact with ∂Ω having any Hausdorff dimension in [n − 1, n) [12, p. 73]. Jones [12] proves existence of an extension operator E : W m 2 (Ω) → H m (R n ) for each m ∈ N when Ω is locally uniform. More recently the following uniform result is proved. Theorem 4.5 (Rogers,[20]). If Ω ⊂ R n is an ( , δ) locally uniform domain then there exists an extension operator E : W m 2 (Ω) → H m (R n ), not depending on m ∈ N 0 , that is continuous for all m.
The following uniform extension theorem for the spaces H s (Ω) is a special case of a much more general uniform extension theorem for Besov spaces [21], and generalises Stein's classical result to negative s. Rychkov's [21] result is stated for Lipschitz hypographs and bounded Lipschitz domains, but his localisation arguments for bounded domains [21, p. 244] apply equally to all Lipschitz open sets. Except in the case Ω = R n , it appears that {H s (Ω) : s ∈ R} is not an exact interpolation scale. In particular, Lemma 4.13 below shows that, for Ω = (0, a) with 0 < a < 1, {H s (Ω) : 0 ≤ s ≤ 2} is not an exact interpolation scale, indeed that, for interpolation between L 2 (Ω) and H 2 (Ω), the ratio of the interpolation norm to the intrinsic norm on H 1 (Ω) can be arbitrarily small for small a. Example 4.14 below is a bounded open set Ω ⊂ R for which Proof. Let H θ := (H 0 (Ω), H 2 (Ω)) θ , for 0 < θ < 1. Choose an even function χ ∈ C ∞ (R) such that 0 ≤ χ(t) ≤ 1 for t ∈ R, with χ(t) = 0 if |t| > 1, and
H 1 (Ω) L 2 (Ω), H 2 (Ω) 1/2 ,(22)χ(t) = 1 if |t| < 1/2. For 0 < h ≤ 1 define φ h ∈ H 2 (Ω) by φ h (x) = χ(x 1 /h), x ∈ Ω. We observe that φ h (x) = 0 for x 1 > h so that, where Ω h := {x ∈ Ω : 0 < x 1 < h}, φ h 2 L 2 (Ω) ≤ Ωh dx = 2 h 0 x p 1 dx 1 = 2h p+1 p + 1 . Further, defining φ + h (x) = χ(x 1 /h)χ(x 2 /(2h)), for x = (x 1 , x 2 ) ∈ R 2 and 0 < h ≤ 1, it is clear that φ h = φ + h | Ω . Moreover, ∂ α φ + h L 2 (R 2 ) = h 1−|α| ∂ α φ + 1 L 2 (R 2 ) , for α ∈ N 2 0 .
Thus, using the identity (21),
φ h H 2 (Ω) ≤ φ + h H 2 (R 2 ) = O(h −1 ), as h → 0,
so that, applying Lemma 2.1(i), φ h Hθ = O(h β ), as h → 0, where β := (1 − θ)(p + 1)/2 − θ. Let θ = 1/2, so that β = (p − 1)/4 > 0. Put h n = n −q , for n ∈ N, for some q > β −1 . Then φ hn H 1/2 = O(n −qβ ) as n → ∞, so that ∞ n=1 φ hn is convergent in H 1/2 to some ψ ∈ H 1/2 . Let Ω b be some C 1 bounded domain containing Ω. Then, by the Sobolev embedding theorem (e.g. [1, p. 97]), H 1 (Ω b ) ⊂ L r (Ω b ) for all 1 ≤ r < ∞, so that H 1 (Ω) ⊂ L r (Ω). We will show (22) by showing ψ ∈ L r (Ω) if r is sufficiently large.
Clearly, ψ = ∞ n=1 φ hn satisfies ψ(x) ≥ n, for 0 < x 1 < h n /2 = n −q /2, so that
Ω |ψ| r dx ≥ 2 ∞ n=1 n r n −q /2 (n+1) −q /2 x p 1 dx 1 = 1 (p + 1)2 p ∞ n=1 n r n −q(p+1) − (n + 1) −q(p+1) ≥ q 2 p ∞ n=1 n r (n + 1) −q(p+1)−1 ,
where in the last step we use the mean value theorem, which gives that, for some ξ ∈ (n, n + 1), n −t − (n + 1) −t = tξ −t−1 ≥ t(n + 1) −t−1 , where t = q(p + 1) > 0. Thus ψ ∈ L r (Ω) if r − q(p + 1) − 1 ≥ −1, i.e., if r ≥ q(p + 1). Since we can choose any q > β −1 , we see that, in fact, H 1/2 ⊂ L r (Ω) for r > 4(p + 1)/(p − 1).
The spaces H s (Ω)
For s ∈ R and Ω ⊂ R n we define H where V ∈ H −s (R n ) denotes any extension of v with V | Ω = v, and ·, · −s is the standard duality pairing on H s (R n ) × H −s (R n ), the natural extension of the duality pairing ·, · on S(R n ) × S * (R n ). This result is well known when Ω is C 0 [18]; that it holds for arbitrary Ω is shown in [8,9].
The following corollary follows from this duality result and Theorem 3.7 (ii). Combining this with Corollary 4.7, we obtain the following result.
with ρ = λ −1 − 1. In turn, by local elliptic regularity, (23) holds if and only if φ ∈ H 1 0 (Ω) ∩ C 2 (Ω) and −∆φ = ρφ in Ω (in a classical sense).] From Theorem 3.4, the interpolation norm on H s is
φ Hs = φ * s := ∞ j=1 λ −s j |a j | 2 1/2 = ∞ j=1 (1 + ρ j ) s |a j | 2 1/2 , for 0 < s < 1 and φ ∈ H s ,(24)
where, for j ∈ N, ρ j := λ −1 j − 1 and a j := Ω φφ j dx. Further, φ H j (Ω) = φ * j for φ ∈ H j = H j (Ω) and j = 0, 1. Moreover, by Corollary 4.10, if Ω is Lipschitz, H s = H s (Ω) for 0 < s < 1, with equivalence of norms.
One-dimensional examples and counterexamples
Our first example, Lemma 4.13, which illustrates that {H s (Ω) : 0 ≤ s ≤ 2} needs not be an exact interpolation scale, requires explicit values for the H 1 (Ω) and H 2 (Ω) norms, for Ω = (0, a) with a > 0. These norms are computed using the minimal extension operator E s : H s (Ω) → H s (R) for s = 1, 2.
Lemma 4.12.
For Ω = (0, a) ⊂ R, with a > 0, the H 1 (Ω) and H 2 (Ω) norms are given by
φ 2 H 1 (Ω) =|φ(0)| 2 + |φ(a)| 2 + a 0 |φ| 2 + |φ | 2 dx,(25)φ 2 H 2 (Ω) =|φ(0)| 2 + |φ (0)| 2 + |φ(0) − φ (0)| 2 + |φ(a)| 2 + |φ (a)| 2 + |φ(a) − φ (a)| 2 + a 0 (|φ| 2 + 2|φ | 2 + |φ | 2 ) dx.(26)
Proof. By the definitions (20) and (21),
φ 2 H 1 (Ω) = E 1 φ 2 H 1 (R) = R (|E 1 φ| 2 + |(E 1 φ) | 2 )dx,
where the extension E 1 φ of φ ∈ H 1 (Ω) with minimal H 1 (R) norm is computed as an easy exercise in the calculus of variations, recalling that H 1 (R) ⊂ C 0 (R), to be
E 1 φ(x) = φ(0) e x , x ≤ 0, φ(x), 0 < x < a, φ(a) e a−x , x ≥ a.
The assertion (25) follows by computing
R (|E 1 φ| 2 + |(E 1 φ) | 2 )dx.
Similarly, for φ ∈ H 2 (Ω), φ H 2 (Ω) = E 2 φ H 2 (R) and E 2 φ is computed by minimizing the functional
J 2 (ψ) = ψ 2 H 2 (R) = R (1 + ξ 2 ) 2 |ψ| 2 dξ = R (|ψ| 2 + 2|ψ | 2 + |ψ | 2 )dx
under the constraint ψ| Ω = φ. By computing the first variation of the functional J 2 and integrating by parts, we see that ψ solves the differential equation ψ − 2ψ + ψ = 0 (whose solutions are e x , e −x , xe x , xe −x ) in the complement of Ω, and, recalling that H 2 (R) ⊂ C 1 (R), we obtain
E 2 φ(x) = xe x φ (0) + (1 − x)e x φ(0), x ≤ 0, φ(x), 0 < x < a, (x − a)e a−x φ (a) + (1 − a + x)e a−x φ(a), x ≥ a.
The assertion (26) is obtained by computing R (|E 2 φ| 2 + 2|(E 2 φ) | 2 + |(E 2 φ) | 2 )dx.
Lemma 4.13.
If Ω = (0, a), with a > 0, then {H s (Ω) : 0 ≤ s ≤ 2} is not an exact interpolation scale. In particular, where H θ := (L 2 (Ω), H 2 (Ω) θ , it holds that H θ = H 2θ (Ω), for 0 < θ < 1, but 1 H 1/2 = 1 H 1 (Ω) . Precisely,
1 H 1/2 1 H 1 (Ω) ≤ a 2 + 4a a 2 + 4a + 4 1/4 < min(a 1/4 , 1).(27)
Proof. The inequality (27) follows from Lemma 4.12 and Lemma 2.1(i), which give that := (a 1 , a 2 , . . .) be a real sequence satisfying a 1 := 1, 0 < a n+1 < a n /4, n ∈ N, and let Ω := ∞ n=1 (a n /2, a n ) ⊂ (0, 1). Let H 0 := L 2 (Ω), H 1 := H 2 (Ω), H := (H 0 , H 1 ), and H 1/2 := (H 0 , H 1 ) 1/2 . We note first that if u ∈ H 1 (R) then, by standard Sobolev embedding results [1, p. 97], u ∈ C 0 (R), so u| Ω ∈ L ∞ (Ω) and H 1 (Ω) ⊂ L ∞ (Ω). We will see that there is a choice of the sequence a = (a 1 , a 2 , ...) such that H 1/2 ⊂ L ∞ (Ω) so that H 1/2 = H 1 (Ω).
1 2 L 2 (Ω) = a, 1 2 H 1 (Ω) = 2 + a, 1 2 H 2 (Ω) = 4 + a, 1 2 H 1/2 (Ω) ≤ 1 L 2 (Ω) 1 H 2 (Ω) = a 2 + 4a.
To see this, choose an even function χ ∈ C ∞ (R) such that χ(t) = 0 for |t| > 1 and χ(0) = 1, and consider the sequence of functions in H 1 ⊂ H 1/2 ∩ H 1 (Ω) defined by φ n (t) = 1, t ∈ [0, a n ] ∩ Ω, 0, t ∈ (a n , ∞) ∩ Ω, for n ∈ N. Clearly φ n H0 ≤ a 1/2 n and φ n H1 = inf
ψ∈H 2 (R), ψ|Ω=φn ψ H 2 (R) ≤ ψ n H 2 (R) , where ψ n ∈ C 1 (R) ∩ H 2 (R) is defined by ψ n (t) = χ(t), t < 0, 1, 0 ≤ t ≤ a n , χ((t − a n )/b n ),
t > a n , with b 1 := 1 and b n := a n−1 /2 − a n , for n ≥ 2. Further, where α := χ H 2 (R) ,
ψ n 2 H 2 (R) = ∞ −∞ |ψ n | 2 + 2|ψ n | 2 + |ψ n | 2 dt = α 2 + a n + ∞ 0 b n |χ(r)| 2 + 2b −1 n |χ (r)| 2 + b −3 n |χ (r)| 2 dr,
and, for n ≥ 2, a n ≤ 1/2, 1/2 ≥ b n ≥ a n−1 /4, so that
ψ n 2 H 2 (R) ≤ 1 2 1 + 1 + b −3 n α ≤ 1 2 1 + 1 + 64a −3 n−1 α .
Applying Lemma 2.1(i) we see that, for n ≥ 2,
φ n H 1/2 ≤ φ n 1/2 H0 φ n 1/2 H1 ≤ 2 −1/4 a 1/4 n 1 + 1 + 64a −3 n−1 α 1/4 .
Now choosing a n according to the rule a 1 = 1, a n = a n−1 4 1 + 1 + 64a −3 n−1 α −1 < a n−1 4 , n = 2, 3, . . . , it follows that a n ≤ 4 −n and that φ n H 1/2 ≤ 2 −1/4 4 −n/4 ≤ ( √ 2) −n → 0 as n → ∞. In fact φ n → 0 so rapidly that ∞ n=1 φ n is convergent in H 1/2 to a limit Φ ∈ H 1/2 . This limit is not in H 1 (Ω) as Φ ∈ L ∞ (Ω): explicitly, Φ(t) = n, for a n /2 < t < a n .
Our last example uses the results of Lemma 4.11, and shows that { H s (0, 1) : 0 ≤ s ≤ 1} is not an exact interpolation scale by computing values of the Sobolev and interpolation norms for specific functions. This example also demonstrates that no normalisation of the interpolation norm can make the two norms equal. √ 2 sin(jπx) and ρ j = j 2 π 2 , so that, for 0 < θ < 1, the interpolation norm on H θ = H θ (Ω) is given by (24). In particular,
φ j * θ = (1 + j 2 π 2 ) θ/2 , for j ∈ N.
Noting that
φ j (ξ) = 1 √ π 1 0 sin(jπx)e −iξx dx = j √ π 1 − (−1) j e −iξ j 2 π 2 − ξ 2 = 2j √ πe −iξ/2 j 2 π 2 − ξ 2
cos ξ/2, j odd, i sin ξ/2, j even, it holds that
φ j H θ (Ω) = ∞ −∞ (1 + ξ 2 ) θ |φ j (ξ)| 2 dξ 1/2 = 2j √ 2π ∞ 0 (1 + ξ 2 ) θ (j 2 π 2 − ξ 2 ) 2 cos 2 (ξ/2) sin 2 (ξ/2) dξ 1/2 .
A comparison of φ j * θ and φ j H θ (Ω) for j = 1, 2 and θ ∈ (0, 1) is shown in Figure 1(a). It is clear from Figure 1(a) that the interpolation and Sobolev norms do not coincide in this case. In particular, for θ = 1/2 we have The ratio between the two norms is plotted for both φ 1 and φ 2 in Figure 1(b). In particular,
φ 1 * 1/2 ≈ 1.816, φ 1 H 1/2 (Ω) ≈ 1.656, φ 2φ 1 * 1/2 / φ 1 H 1/2 (Ω) ≈ 1.096, φ 2 * 1/2 / φ 2 H 1/2 (Ω) ≈ 1.049.
As the values of these two ratios are different, not only are the two norms not equal with the normalisation (9) we have chosen, it is clear that there is no possible choice of normalisation factor in the definition (7) that could make the interpolation and Sobolev norms equal.
Introduction
Since we published the paper [4] in 2015 the quantitative results we derive therein, and the summary we provide of results in the literature on interpolation spaces, have been of use in our own work (e.g., [5]) and elsewhere (e.g., [13]). But the paper as published is marred by inaccuracies which we correct in this note § , including the inaccuracy flagged in [13, p. 1768]. We use throughout the notations of [4]. As in [4] we intend primarily that vector space, Banach space, and Hilbert space should be read as their complex versions. But, except where we deal with complex interpolation, the results below apply equally in the real case, with minor changes to the statements and proofs.
2 Corrections to Section 2 1. The main inaccuracy in this section is in the first sentence, which should read: "Suppose that X 0 and X 1 are Banach spaces that are linear subspaces continuously embedded in some larger Hausdorff topological vector space V ." With this correction the definition of a compatible pair in the second sentence coincides with the standard definition, as e.g. in [1,3,14]. The requirement that X 0 and X 1 are continuously embedded in a larger TVS, rather than just being linear subspaces of a larger linear space V (as in [4, §2] or [12, Appendix B]), is needed to establish that · Σ is a norm (and not just a semi-norm) on Σ(X). That this requirement is sufficient to establish that · Σ is a norm is shown e.g. in [ Following this strengthening of the meaning of compatible pair, one needs to check that every use of this term in the rest of the paper is accurate with compatible pair understood in this stronger sense. Where some additional justification is needed we have noted this below. Helpfully (and this is the case in all the applications considered in [4, §4]) it is immediate that (X 0 , X 1 ) is a compatible pair in this stronger sense if X 1 ⊂ X 0 with continuous embedding, for then we may take X 0 as the larger TVS. Similarly if X 0 ⊂ X 1 with continuous embedding.
2. "X 1 ⊂ X 0 " should read "X 1 ⊂ X 0 with continuous embedding" in the penultimate line of the first paragraph of Section 2 and in item (iii) of Theorem 2.2.
3 Corrections to Section 3 3.1 Corrections relating to Corollary 3.2
We provided in [4] no proof of Corollary 3.2, which we viewed at the time as a straightforward extension of the proof of Theorem 3.1. There is a typo; |w θ Aφ| 2 should read w θ |Aφ| 2 in the definition of H θ . But, even with this typo fixed, the corollary is false as stated. We will state the corrected corollary with a proof, and provide a counterexample to the corollary as stated in [4] (with the above typo fixed). The correction, in addition to fixing the above typo, is just to add the word "injective" before "linear" in the statement of the corollary. The corrected corollary is: Corollary 3.2 Let H = (H 0 , H 1 ) be a compatible pair of Hilbert spaces, (X , M, µ) be a measure space, and let Y denote the set of measurable functions X → C. Suppose that there exists an injective linear map A : Σ(H) → Y and, for j = 0, 1, functions w j ∈ Y, with w j > 0 almost everywhere, such that the mappings A : H j → L 2 (X , M, w j µ) are unitary isomorphisms. For 0 < θ < 1 define intermediate spaces
H θ , with ∆(H) ⊂ H θ ⊂ Σ(H), by H θ := φ ∈ Σ(H) : φ H θ := X w θ Aφ 2 dµ 1/2 < ∞ ,
where w θ := w 1−θ 0 w θ 1 . Then, for 0 < θ < 1, H θ = K θ,2 (H) = J θ,2 (H), with equality of norms.
Remark 3.1. In the case that H 1 ⊂ H 0 (with H 1 a linear subspace of H 0 ) it holds that Σ(H) = H 0 so that the injectivity of A in the above corollary follows from the assumption that A :
H 0 → L 2 (X , M, w 0 µ) is an isomorphism. Similarly if H 0 ⊂ H 1 .
That Corollary 3.2 holds is most easily seen as a corollary of the following more general result. We use in the statement and proof of this lemma the notations from the start of [4, §2].
Lemma 3.2. Suppose that X = (X 0 , X 1 ) and Y = (Y 0 , Y 1 ) are compatible pairs of Banach spaces, that A : Σ(X) → Σ(Y ) is an injective linear map, and that A j , the restriction of A to X j , is an isometric isomorphism from X j to Y j , j = 0, 1. Suppose also that (X, Y ) is a pair of exact interpolation spaces relative to (X, Y ) and that (Y, X) is a pair of exact interpolation spaces relative to (Y , X). Then A(X) = Y and A : X → Y is an isometric isomorphism.
Proof. Note first that, since A(X j ) = Y j , j = 0, 1, A : Σ(X) → Σ(Y ) is surjective, and so bijective and a linear isomorphism. 2 Thus A has a linear inverse B : Σ(Y ) → Σ(X), which is a couple map with B j = A −1 j , where B j denotes the restriction of B to Y j . Since (X, Y ) is a pair of exact interpolation spaces relative to (X, Y ), and A is a couple map, A(X) ⊂ Y , A : X → Y is bounded, and A X,Y ≤ max( A X0,Y0 , A X1,Y1 ) = 1. Similarly, since (Y, X) is a pair of exact interpolation spaces relative to (Y , X), B(Y ) ⊂ X and B Y,X ≤ max( B Y0,X0 , B Y1,X1 ) = 1. Thus A(X) = Y and A is an isometric isomorphism. Remark 3.3. If, in the statement of the above lemma, we require only that each A j is an isomorphism (not necessarily isometric), and that (X, Y ) and (Y, X) are pairs of interpolation spaces (not necessarily exact), then the argument of the above proof shows that A(X) = Y and that A : X → Y is an isomorphism (cf. [1, Theorem 6.4.2]). Remark 3.4. As a concrete application of the above lemma we can take X and Y to be complex interpolation spaces, i.e. X = (X 0 , X 1 ) [θ] and Y = (Y 0 , Y 1 ) [θ] , for some 0 < θ < 1, in the notation of [1], or we can take X = K θ,q (X) and Y = K θ,q (Y ), for any 0 < θ < 1 and 1 ≤ q ≤ ∞. With either of these choices (X, Y ) and (Y, X) are pairs of interpolation spaces that are exact of exponent θ (see, e.g., [ To see that the same is true for the choice X = J θ,q (X) and Y = J θ,q (Y ) we just need to check that the standard argument (e.g., [1, Theorem 3.2.2]) carries over to the not-quite-standard definition (suited to the Hilbert space case) we make for the J-functional in [4, §2.2], following [12,Appendix B]. That (X, Y ) and (Y, X) are pairs of interpolation spaces is clear since K θ,q (X) = J θ,q (X) and K θ,q (Y ) = J θ,q (Y ), with equivalence of norms [4,Theorem 2.3]. It remains to check that these pairs are exact of exponent θ. So suppose that X = (X 0 , X 1 ) and Y = (Y 0 , Y 1 ) are compatible pairs of Banach spaces, that A : Σ(X) → Σ(Y ) is a linear map, and that A j , the restriction of A to X j , is bounded from X j to Y j , j = 0, 1, and set M j := A j Xj ,Yj , j = 0, 1, X := J θ,q (X), and Y := J θ,q (Y ). Since (X, Y ) is a pair of interpolation spaces, it follows that A : X → Y and is bounded. Further, if φ ∈ X, φ can be written as the Bochner integral
φ = ∞ 0 f (t) dt t ,(1)
for some ∆(X)-strongly measurable function f : (0, ∞) → ∆(X), with the integral convergent in Σ(X), and in ∆(X) when the interval of integration is reduced to (a, b) with 0 < a < b < ∞ (see [4, §2.2]; for the definition of the Bochner integral see, e.g., [12,Appendix B] or [6,Appendix E]). Arguing as in the proof of [1, Theorem 3.2.2] it follows, by linearity, and since our assumptions imply that A : ∆(X) → ∆(Y ) and A : Σ(X) → Σ(Y ) are bounded, that the mapping t → Af (t) is ∆(Y )-strongly measurable and that
Aφ = ∞ 0 Af (t) dt t ,(2)
with the integral (2) convergent in Σ(Y ) and in ∆(Y ) on (a, b) with 0 < a < b < ∞. Moreover, for t > 0, Proof of Corollary 3.2 . Suppose that the conditions of Corollary 3.2 are satisfied, let Y j := L 2 (X , M, w j µ), j = 0, 1, and note that Y := (Y 0 , Y 1 ) is a compatible pair, since each Y j is continuously embedded in Y * := L 2 (X , M, w * µ) (for example), where w * := min(w 0 , w 1 ). It is clear from the conditions of the corollary that A(Σ(H)) ⊂ Σ(Y ), i.e. A : Σ(H) → Σ(Y ). Set Y θ := L 2 (X , M, w θ µ) ⊂ Σ(Y ), for 0 < θ < 1. For each fixed 0 < θ < 1, where H denotes either K θ,2 (H) or J θ,2 (H), and Y denotes, correspondingly, K θ,2 (Y ) or J θ,2 (Y ), Lemma 3.2 and Remark 3.4 imply that A(H) = Y and that A : H → Y is an isometric isomorphism. But also Y = Y θ , with equality of norms, by [4, Theorem 3.1]. It follows that H θ = H and that, for φ ∈ H, φ H = Aφ Y θ = φ H θ .
J(t, Af (t), Y ) = Af (t) 2 Y0 + t 2 Af (t) 2 Y1 1/2 ≤ M 2 0 f (t) 2 X0 + t 2 M 2 1 f (t) 2
Remark 3.5 (Counterexample to Corollary 3.2 as stated in [4] (with the typo fixed)). To see that Corollary 3.2 is false if the word "injective" is deleted, let X = (0, ∞), M be the Lebesgue measurable subsets of X , and take dµ = dt/t, where dt is Lebesgue measure. For some α > 0 and −2 < β < 0 let w 0 (t) := t −α , w 1 (t) := t −β , t > 0, let Y j := L 2 (X , M, w j µ), j = 0, 1, and define A : Σ(Y ) → Y by Aφ(s) = φ(s) − 1 s s 0 φ(t) dt, s > 0, φ ∈ Σ(Y ).
Then, see [10,Remark 3], A : Y j → Y j is an isomorphism, for j = 0, 1, but A : Σ(Y ) → Y is not injective; explicitly, the constant 1 ∈ Σ(Y ) and A1 = 0. Further, taking θ = α/(α − β) we have that Y θ := L 2 (X , M, w θ µ) = L 2 (X , M, µ), and that K θ,2 (Y ) = J θ,2 (Y ) = Y θ , with equality of norms, by [4, Theorem 3.1], so that A : Y θ → Y θ is bounded by [4, Theorem 2.2(i)]. But, see [10, Remark 2], A : Y θ → Y θ is not an isomorphism; precisely, there exists a sequence (φ n ) n∈N ⊂ Y θ with φ n Y θ = 1 and Aφ n Y θ → 0 as n → ∞. For j = 0, 1 let H j denote the set Y j equipped with the norm · Hj defined by φ Hj := A −1 φ Yj , equivalent to the norm · Yj . It is easy to see that this is a Hilbertspace norm, so that H j is a Hilbert space, and clearly A : H j → Y j is an isometric isomorphism, so a unitary isomorphism. Further, K θ,2 (H) = J θ,2 (H) = Y θ , with equivalence of norms. But, if it holds that H θ = Y θ , it does not hold that the norms are equivalent, for φ n H θ / φ n Y θ = φ n H θ = Aφ n Y θ → 0 as n → ∞. Thus it is not true that H θ = K θ,2 (H) = J θ,2 (H) with equivalence of norms. Thus Corollary 3.2 is false if the word "injective" is deleted.
(i )
)For what ranges of s are H s (Ω) and H s (Ω) interpolation scales, meaning that the interpolation space H θ , when interpolating between s = s 0 and s = s 1 , is the expected intermediate Sobolev space with s = s 0 (1 − θ) + s 1 θ?
Corollary 3. 2 .
2Let H = (H 0 , H 1 ) be a compatible pair of Hilbert spaces, (X , M, µ) be a measure space and let Y denote the set of measurable functions X → C. Suppose that there exists a linear map A :
H) denotes the inner product induced by the norm · ∆(H) . By the spectral theorem [11, Corollary 4, p. 911] there exists a measure space (X , M, µ), a bounded µ-measurable function w 0 , and a unitary isomorphism U :
M, w j µ) for j = 0, 1. These extensions are unitary operators since their range contains L 2 (X , M, µ), which is dense in L 2 (X , M, w j µ) for j = 0, 1. Where H • := (H • 0 , H • 1 ), U extends further to a linear operator U : Σ(H • ) → Y, the space of µ-measurable functions defined on X . Thus, applying Corollary 3.2 and noting part (v) of Theorem 2.2, we see that H θ = K θ,2 (H) = J θ,2 (H), with equality of norms, where
More briefly but equivalently, in the language introduced in Section 2, H is a geometric interpolation space of exponent θ if ∆(H) ⊂ H ⊂ Σ(H), with continuous embeddings and ∆(H) dense in H, and if: (i) (H, H) is a pair of interpolating spaces relative to (H, H) that is exact of exponent θ; and (ii) for every Hilbert space H, where H := (H, H), (H, H) and (H, H) are pairs of interpolation spaces, relative to (H, H) and (H, H), respectively, that are exact of exponent θ.
(i) for every Hilbert space compatible pair H, ∆(H) ⊂ F (H) ⊂ Σ(H), with the embeddings continuous and ∆(H) dense in F (H); (ii) for every Hilbert space H, where H := (H, H), it holds that F (H) = H; (iii) for every pair (H, H) of Hilbert space compatible pairs, (F (H), F (H)) is a pair of interpolation spaces, relative to (H, H), that is exact of exponent θ. Theorems 3.5, 2.2 (i), and 3.3 tell us that the K-method and the J-method are both instances of this functor. It follows from Theorems 4.1.2, 4.2.1(c), 4.2.2(a) in[5] that the complex interpolation method is also an instance of this functor, so that, for every Hilbert space compatible pair H = (H 0 , H 1 ), the standard complex interpolation space (H 0 , H 1 ) [θ] (in the notation of[5]) coincides with K θ,2 (H), with equality of norms.
Remark 3.6 make clear that life is simpler in the Hilbert space case. Two further simplications are captured in the following theorem (cf., Theorem 2.2 (vii) and Theorem 2.4).Theorem 3.7. Suppose that H = (H 0 , H 1 ) is a Hilbert space compatible pair. (i) If θ 0 , θ 1 ∈ [0, 1], and, for j = 0, 1, H j := (H 0 , H 1 ) θj ,2 , if 0 < θ j < 1, while H j := H θj , if θ j ∈ {0, 1},then (H 0 , H 1 ) η,2 = (H 0 , H 1 ) θ,2 , with equal norms, for θ = (1 − η)θ 0 + ηθ 1 and 0 < η < 1.
Remark 3. 8 .
8A useful concept, used extensively in Section 4 below, is that of an interpolation scale. Given a closed interval I ⊂ R (e.g., I = [a, b], for some a < b, I = [0, ∞), I = R) we will say that a collection of Hilbert spaces {H s : s ∈ I}, indexed by I, is an interpolation scale if, for all s, t ∈ I and 0 < η < 1, (H s , H t ) η,2 = H θ , for θ = (1 − η)s + ηt.
(i) Let H s , for s ∈ R, denote H s (Ω) or H s (Ω). For which classes of Ω and what range of s is {H s } an (exact) interpolation scale? (ii) In cases where {H s } is an interpolation scale but not an exact interpolation scale, how different are the H s norm and the interpolation norm?
Section 4.2.1]), we will define H s (Ω) := u ∈ D * (Ω) : u = U | Ω , for some U ∈ H s (R n ) , where D * (Ω) denotes the space of Schwartz distributions, the continuous linear functionals on D(Ω) [18, p. 65], and U | Ω denotes the restriction of U ∈ D * (R n ) ⊃ S * (R n ) to Ω. H s (Ω) is endowed with the norm u H s (Ω) := inf U H s (R n ) : U | Ω = u , for u ∈ H s (Ω).
W m 2 2 L 2
222(Ω) is a Hilbert space with the norm u W m 2 (Ω) := |α|≤m a α,m ∂ α u (
Further,
{H s (Ω) : s 0 ≤ s ≤ s 1 } is an interpolation scale. Proof. By Theorem 4.1, (H s0 (R n ), H s1 (R n )) θ = H s (R n ) with equality of norms. For all t ∈ R, R : H t (R n ) → H t (Ω) has norm one. Thus, by Theorem 2.2 (i), H s (Ω) = R(H s (R n )) ⊂ H θ and R : H s (R n ) → H θ with norm one, so that, for u ∈ H s (Ω), u Hθ = RE s u Hθ ≤ E s u H s (R n ) = u H s (Ω) ,where E s u is the extension with minimal H s (R n ) norm, described above.If also the extension operator E has the properties claimed, then, by Theorem 2.2 (i), E(H θ ) ⊂ H s (R n ) so that H θ = RE(H θ ) ⊂ R(H s (R n )) = H s (Ω). Further, E : H θ → H s (R n ) with norm ≤ λ 1−θ 0 λ θ 1 , so that, for u ∈ H s (Ω) = H θ , u H s (Ω) = REu H s (Ω) ≤ Eu H s (R n ) ≤ λ Hθ .Hence, noting Theorem 3.7 (i), {H s (Ω) : s 0 ≤ s ≤ s 1 } is an interpolation scale.
Example 4 . 3 .
43As an example, consider the case that Ω is the half-space Ω = {x = (x 1 , ..., x n ) : x 1 > 0}, s 0 = 0, and s 1 = 1. In this case a simple extension operator is just reflection: Eu(x) := u(x), for x 1 ≥ 0, and Eu(x) := u(x ), for x 1 < 0, where x = (−x 1 , x 2 , ..., x n ). In this example E : H s (Ω) → H s (R n ) has norm 2 for s = 0, 1 and, applying Lemma 4.2, H s (Ω) = H s := (L 2 (Ω), H 1 (Ω)) s , for 0 < s < 1, with
Theorem 4. 6 (
6Rychkov,[21]). If Ω ⊂ R n is a Lipschitz open set or a Lipschitz hypograph, then there exists an extension operator E : H s (Ω) → H s (R n ), not depending on s ∈ R, that is continuous for all s.Combining Theorems 4.5 and 4.6 with Lemma 4.2 and Theorem 3.7 (i) we obtain the following interpolation result.
Corollary 4 . 7 .
47If Ω ⊂ R n is a Lipschitz open set or a Lipschitz hypograph, then {H s (Ω) : s ∈ R} is an interpolation scale. If Ω ⊂ R n is an ( , δ) locally uniform domain then {H s (Ω) : s ≥ 0} is an interpolation scale.
. 8 .
8so that {H s (Ω) : 0 ≤ s ≤ 2} is not an interpolation scale. The following lemma exhibits (22) for a C 0,β domain in R 2 , for every β ∈ (0, 1). These results contradict [18, Theorem B.8] which claims that {H s (Ω) : s ∈ R} is an exact interpolation scale for any non-empty open Ω ⊂ R n . (The error in McLean's proof lies in the wrong deduction of the bound K(t, U ; Y ) ≤ K(t, u; X) (in his notation) from K(t, U ; Y ) For some p > 1 let Ω := {(x 1 , x 2 ) ∈ R 2 : 0 < x 1 < 1 and |x 2 | < x p 1 }. Then Ω is a C 0,β domain for β = p −1 < 1 and (22) holds, so that {H s (Ω) : 0 ≤ s ≤ 2} is not an interpolation scale.
s (Ω) := D(Ω) H s (R n ) , the closure of D(Ω) in H s (R n ). We remark that if Ω is C 0 then H s (Ω) = {u ∈ H s (R n ) : supp u ⊂ Ω} [18, Theorem 3.29], but that these two spaces are in general different if Ω is not C 0 [9]. Also, for any m ∈ N 0 , H m (Ω) is unitarily isomorphic (via the restriction operator R) to H m 0 (Ω), the closure of D(Ω) in H m (Ω). For any open Ω ⊂ R n , H s (Ω) is a natural unitary realisation of the dual space of H −s (Ω), with duality paring (cf. [18, Theorem 3.14]) u, v H s (Ω)×H −s (Ω) := u, V −s , for u ∈ H s (Ω), v ∈ H −s (Ω),
Corollary 4. 9 .
9Suppose that s 0 ≤ s 1 , 0 < θ < 1, and set s = s 0 (1 − θ) + s 1 θ, H = (H s0 (Ω), H s1 (Ω)), andH * = ( H −s0 (Ω), H −s1 (Ω)). Then H * θ = ( H −s0 (Ω), H −s1 (Ω)) θ ⊂ H −s (Ω), with u H −s (Ω) ≤ u H *θ , for u ∈ H * θ . Further, H θ = H s (Ω) if and only if H * θ = H −s (Ω) and, if both these statements are true, then, for a > 1, a −1 u H s (Ω) ≤ u Hθ , ∀u ∈ H s (Ω) if and only if u H * θ ≤ a u H−s (Ω) , ∀u ∈ H −s (Ω).
Corollary 4 . 10 .
410If Ω ⊂ R n is a Lipschitz open set or a Lipschitz hypograph, then { H s (Ω) : s ∈ R} is an interpolation scale. If Ω ⊂ R n is an ( , δ) locally uniform domain then { H s (Ω) : s ≤ 0} is an interpolation scale.Except in the case Ω = R n , it appears that { H s (Ω) : s ∈ R} is not an exact interpolation scale. Example 4.15 below shows, for the simple one-dimensional case Ω = (0, 1), that { H s (Ω) : 0 ≤ s ≤ 1} is not an exact interpolation scale, using a representation for the norm for interpolation between L 2 (Ω) = H 0 (Ω) and H 1 (Ω) given in the following lemma that illustrates the abstract Theorem 3.4 (cf.,[14, Chapter 8]). For the cusp domain example of Lemma 4.8, by Lemma 4.8 and Corollary 4.9, { H s (Ω) : −2 ≤ s ≤ 0} is not an interpolation scale at all.
Lemma 4 . 11 .
411Let Ω be bounded and set H 0 := H 0 (Ω) = L 2 (Ω), H 1 := H 1 (Ω) = H 1 0 (Ω). Then H := (H 0 , H 1 ) satisfies the assumptions of Theorem 3.4, since the embedding of H 1 0 (Ω) into L 2 (Ω) is compact. The orthogonal basis for H 1 , {φ j : j ∈ N}, of eigenvectors of T (with λ j the eigenvalue corresponding to φ j and φ j H0 = 1), is a basis of eigenfunctions of the Laplacian. [This follows since T φ = λφ, for λ > 0 and φ ∈ H 1 = H 1 0 (Ω), if and only if Ω ∇φ · ∇ψ − ρφψ dx = 0, for ψ ∈ H 1 0 (Ω),
Lemma 4 .
413 shows that, for the regular domain Ω = (0, a), the spaces H s (Ω) are not an exact interpolation scale, and that the ratio (27) between the interpolation norm and the H s (Ω) norm can be arbitarily small. (However, the two norms are equivalent: Corollary 4.7 shows that H s (Ω) constitutes an interpolation scale in this case.) The next example provides an irregular open set for which {H s (Ω) : 0 ≤ s ≤ 2} is not an interpolation scale so that, by Corollary 4.9, also { H s (Ω) : −2 ≤ s ≤ 0} is not an interpolation scale Example 4.14. Let a
Example 4 . 15 .
415Let Ω = (0, 1), H 0 = H 0 (Ω) = L 2 (Ω) and H 1 = H 1 (Ω) = H 1 0 (Ω). The eigenfunctions and eigenvalues in Lemma 4.11 are φ j (x) =
Figure 1 :
1Comparison of Sobolev and interpolation norms in H θ (Ω), for the functions φ 1 and φ 2 of Example 4.15.
X1 1/ 2 =
X12M 0 J(tM 1 /M 0 , f (t), X), so that Aφ Y ≤ J(·, Af (·), Y ) θ,q ≤ M 0 J(·M 1 /M 0 , f (·), X) θ,q = M (·,f M1/M0 (·), X) q,θ , where, for s > 0,f s (t) := f (st), t > 0. Noting that, for every s > 0, (1) holds if and only if φ = ∞ 0f s (t) dtt , it follows, taking the infimum over all f for which (1) holds, that Aφ Y ≤ M X . Thus (X, Y ) is exact of exponent θ; clearly the same holds true for (Y, X).
14, Lemma 1.2.1], [3, Proposition 2.1.7]. That · Σ need not be a norm if the Banach spaces X 0 and X 1 are just linear subspaces of some larger vector space can be shown by Hamel basis-type constructions 1 .
1, Theorem 4.1.2] for the complex case, [12, Theorem B.2] or [4, Thm 2.2(i)] for the K-interpolation spaces).
Indeed also a topological isomorphism, by the Banach bounded inverse theorem, since A is bounded as each A j is bounded.
This observation was missing in the original proof.
Acknowledgements. We are grateful for helpful discussions with and feedback on the original paper from António Caetano (Aveiro, Portugal), Karl-Mikael Perfekt (Trondheim, Norway), and Eugene Shargorodsky (King's College London).Corrections to Theorem 3.3As pointed out in[13, p. 1768]there is an inaccuracy in the definition of the domain of the unbounded operator T in[4,Theorem 3.3], and consequent inaccuracies in the proof. Additionally, the corrections made above to the statement of Corollary 3.2 necessitate modification where we apply this corollary in the proof of Theorem 3.3. Here is the amended statement of Theorem 3.3 together with an amended version of the second half of the proof. As in[4]we use in this theorem and its proof properties of non-negative, closed symmetric forms and their associated self-adjoint operators, see, e.g.,[9,or[7,Chapter IV].is a compatible pair of Hilbert spaces. Then, for 0 < θ < 1, φ Kθ,2(H) = φ Jθ,2(H) , for φ ∈ (H 0 , H 1 ) θ,2 . Further, where H • 1 denotes the closure in H 1 of ∆(H), (·, ·) H0 , with domain ∆(H), is a closed, densely defined, non-negative symmetric form on H • 1 , with an associated unbounded, non-negative, self-adjoint, injective operator T :where D(T ), the domain of T , is a dense linear subspace of the Hilbert space ∆(H). Moreover, where S :Proof. The proof that φ Kθ,2(H) = φ Jθ,2(H) proceeds as the proof of[4,Theorem 3.3]. In particular, where A j : ∆(H) → ∆(H), for j = 0, 1, is the bounded, non-negative, self-adjoint, injective operator defined by (A j φ, ψ) ∆(H) = (φ, ψ) Hj , for φ, ψ ∈ ∆(H), it holds by the spectral theorem that there exists a measure space (X , M, µ), bounded µ-measurable functions w j , with w j > 0 almost everywhere and w 0 + w 1 = 1, and a unitary isomorphism U : ∆(H) → L 2 (X , M, µ), such thatAs we note in the proof in[4], where H and U φ = 0, in which case φ = φ 0 + φ 1 with φ j ∈ H • j for j = 0, 1, then, defining y := U φ 0 = −U φ 1 , we see that y ∈ L 2 (X , M, w j µ), j = 0, 1, so that y ∈ L 2 (X , M, µ) and, since U : ∆(H) → L 2 (X , M, µ) is surjective, y = U ψ for some ψ ∈ ∆(H). Since U : H j → L 2 (X , M, w j µ) is injective, j = 0, 1, it follows that φ 0 = ψ = −φ 1 , so that φ = 0, i.e. U : Σ(H • ) → Y is injective. Thus, applying Corollary 3.for φ ∈ ∆(H) and 0 ≤ θ ≤ 1. Thus, for φ ∈ ∆(H) and 0 < θ < 1,3.3 Other corrections to Section 3 1. As a consequence of the correction above to Corollary 3.2 (see Corollary 3.2 ), w 1 (j) = λ −1/2 j in the last line of the proof of Theorem 3.4 should read w 1 (j) = λ −1 j . In the same sentence "w 0 and w j " should read "w 0 and w 1 ", and, in the displayed equation three lines above, the two instances of λ i should read λ j .
R A Adams, Sobolev Spaces. Academic PressR. A. Adams, Sobolev Spaces, Academic Press, 1973.
A new proof of Donoghue's interpolation theorem. Y Ameur, J. Funct. Spaces Appl. 2Y. Ameur, A new proof of Donoghue's interpolation theorem, J. Funct. Spaces Appl., 2 (2004), pp. 253-265.
Interpolation and operator constructions. arXiv:1401.6090(accessed20/10/2014Preprint, Interpolation and operator constructions. Preprint, 2014, arXiv:1401.6090 (accessed 20/10/2014).
Interpolation of Operators. C Bennet, R Sharpley, Academic PressC. Bennet and R. Sharpley, Interpolation of Operators, Academic Press, 1988.
Interpolation Spaces: an Introduction. J Bergh, J Löfström, Springer-VerlagJ. Bergh and J. Löfström, Interpolation Spaces: an Introduction, Springer-Verlag, 1976.
J H Bramble, Multigrid Methods. Chapman & HallJ. H. Bramble, Multigrid Methods, Chapman & Hall, 1993.
Lebesgue spaces of differentiable functions and distributions. A P Calderón, Proc. Symp. Pure Math. 4A. P. Calderón, Lebesgue spaces of differentiable functions and distributions, Proc. Symp. Pure Math, 4 (1961), pp. 33-49.
Acoustic scattering by fractal screens: mathematical formulations and wavenumber-explicit continuity and coercivity estimates. S N Chandler-Wilde, D P Hewett, arXiv:1401.2805University of Reading PreprintMPS-2013-17accessed 20/10/2014S. N. Chandler-Wilde and D. P. Hewett, Acoustic scattering by fractal screens: mathematical formulations and wavenumber-explicit continuity and coercivity estimates. University of Reading Preprint, 2013, MPS-2013-17; arXiv:1401.2805 (accessed 20/10/2014).
Sobolev spaces on subsets of R n with application to boundary integral equations on fractal screens. S N Chandler-Wilde, D P Hewett, A Moiola, In preparationS. N. Chandler-Wilde, D. P. Hewett, and A. Moiola, Sobolev spaces on subsets of R n with application to boundary integral equations on fractal screens. In preparation.
The interpolation of quadratic norms. W Donoghue, Acta Mathematica. 118W. Donoghue, The interpolation of quadratic norms, Acta Mathematica, 118 (1967), pp. 251-270.
N Dunford, J T Schwarz, Linear Operators, Part II. Spectral Theory. John WileyN. Dunford and J. T. Schwarz, Linear Operators, Part II. Spectral Theory, John Wiley, 1963.
Quasiconformal mappings and extendability of functions in Sobolev spaces. P W Jones, Acta Mathematica. 147P. W. Jones, Quasiconformal mappings and extendability of functions in Sobolev spaces, Acta Math- ematica, 147 (1981), pp. 71-88.
Perturbation Theory for Linear Operators. T Kato, Springer-Verlag2nd EditionT. Kato, Perturbation Theory for Linear Operators, Springer-Verlag, 1980. 2nd Edition.
R Kress, Linear Integral Equations. New YorkSpringer-Verlag2nd ed.R. Kress, Linear Integral Equations, Springer-Verlag, New York, 2nd ed., 1999.
Non-Homogeneous Boundary Value Problems and Applications I. J.-L Lions, E Magenes, Springer-VerlagJ.-L. Lions and E. Magenes, Non-Homogeneous Boundary Value Problems and Applications I, Springer-Verlag, 1972.
V G Maz'ya, Sobolev Spaces with Applications to Elliptic Partial Differential Equations. Springer2nd ed.V. G. Maz'ya, Sobolev Spaces with Applications to Elliptic Partial Differential Equations, Springer, 2nd ed., 2011.
Geometric interpolation between Hilbert spaces. J E Mccarthy, Arkiv för Matematik. 30J. E. McCarthy, Geometric interpolation between Hilbert spaces, Arkiv för Matematik, 30 (1992), pp. 321-330.
W Mclean, Strongly Elliptic Systems and Boundary Integral Equations, CUP. W. McLean, Strongly Elliptic Systems and Boundary Integral Equations, CUP, 2000.
A theory of interpolation of normed spaces, Notas de Matemática, No. 39, Instituto de Matemática Pura e Aplicada. J Peetre, Conselho Nacional de Pesquisas, Rio de JaneiroJ. Peetre, A theory of interpolation of normed spaces, Notas de Matemática, No. 39, Instituto de Matemática Pura e Aplicada, Conselho Nacional de Pesquisas, Rio de Janeiro, 1968.
Degree-independent Sobolev extension on locally uniform domains. L G Rogers, J. Funct. Anal. 235L. G. Rogers, Degree-independent Sobolev extension on locally uniform domains, J. Funct. Anal., 235 (2006), pp. 619-665.
On restrictions and extensions of the Besov and Triebel-Lizorkin spaces with respect to Lipschitz domains. V S Rychkov, J. London Math. Soc. 60V. S. Rychkov, On restrictions and extensions of the Besov and Triebel-Lizorkin spaces with respect to Lipschitz domains, J. London Math. Soc., 60 (1999), pp. 237-257.
E M Stein, Singular integrals and differentiability properties of functions. Princeton University Press1E. M. Stein, Singular integrals and differentiability properties of functions, vol. 1, Princeton Uni- versity Press, 1970.
An introduction to Sobolev spaces and interpolation spaces. L Tartar, Springer-VerlagL. Tartar, An introduction to Sobolev spaces and interpolation spaces, Springer-Verlag, 2007.
Corrigendum to "Interpolation of Hilbert and Sobolev Spaces: Quantitative Estimates and Counterexamples. H Triebel, Interpolation Theory, Function Spaces, Differential Operators. North Holland61H. Triebel, Interpolation Theory, Function Spaces, Differential Operators, North Holland, 1978. Corrigendum to "Interpolation of Hilbert and Sobolev Spaces: Quantitative Estimates and Counterexamples" [Mathematika 61 (2015), 414-443]
. S N Chandler-Wilde, * , D P Hewett, † , A Moiola, ‡ , S. N. Chandler-Wilde * , D. P. Hewett † , A. Moiola ‡ May 18, 2022
More briefly but equivalently, in the language introduced in §2, a Hilbert space H is a geometric interpolation space of exponent θ. a Hilbert space" are missing from the first sentence of the last paragraph on page 428, which should startIn Section 3.2 the words "a Hilbert space" are missing from the first sentence of the last paragraph on page 428, which should start: "More briefly but equivalently, in the language introduced in §2, a Hilbert space H is a geometric interpolation space of exponent θ ...".
As we have noted at the beginning of §3, H θ := K θ,2 (H) is a Hilbert space thanks to the particular definition we have made for K θ,2 (H) and since H 0 and H 1 are Hilbert spaces (this follows, in particular, from Theorem 3.3, since φ Kθ,2(H) = S 1−θ φ H1 , and the latter is clearly a Hilbert-space norm). That H θ is moreover a geometric interpolation space of. The proof of Theorem 3.5 is arguably not quite complete because we do not say explicitly that H θ is a Hilbert space. The first sentence of the proof should be replaced with the following sentences. exponent θ follows from Lemma 2.1(iii) and Theorem 2.2(i) and (ivThe proof of Theorem 3.5 is arguably not quite complete because we do not say explicitly that H θ is a Hilbert space. The first sentence of the proof should be replaced with the following sentences: "As we have noted at the beginning of §3, H θ := K θ,2 (H) is a Hilbert space thanks to the particular definition we have made for K θ,2 (H) and since H 0 and H 1 are Hilbert spaces (this follows, in particular, from Theorem 3.3, since φ Kθ,2(H) = S 1−θ φ H1 , and the latter is clearly a Hilbert-space norm). That H θ is moreover a geometric interpolation space of exponent θ follows from Lemma 2.1(iii) and Theorem 2.2(i) and (iv)."
the complex interpolation method is also an instance of that functor" is not quite complete, because we do not justify that complex interpolation of Hilbert spaces produces a Hilbert space. This must be well known. In Remark 3.6 our justification that. it is implicit in the last sentence of [11, §1In Remark 3.6 our justification that "the complex interpolation method is also an instance of that functor" is not quite complete, because we do not justify that complex interpolation of Hilbert spaces produces a Hilbert space. This must be well known, e.g. it is implicit in the last sentence of [11, §1].
3]) so that, noting the comments on complex interpolation in Remark 3.4, a version of Corollary 3.2 holds for complex interpolation. One way of seeing this is to note that a version of [4, Theorem 3.2] holds for complex interpolation (see [1, Theorem 5.5.. so that Theorem 3.3 can be strengthened to conclude that K θ,2 (H) = J θ,2 (H) = (H 0 , H 1 ) [θ] , with equal norms (the relationship φ θ,2 = S 1−θ φ H1 makes clear that this norm is a Hilbert-space norm). Of course this argument also shows directly the claim in the last sentence of Remark 3.6One way of seeing this is to note that a version of [4, Theorem 3.2] holds for complex interpolation (see [1, Theorem 5.5.3]) so that, noting the comments on complex interpolation in Remark 3.4, a version of Corollary 3.2 holds for complex interpolation, so that Theorem 3.3 can be strengthened to conclude that K θ,2 (H) = J θ,2 (H) = (H 0 , H 1 ) [θ] , with equal norms (the relationship φ θ,2 = S 1−θ φ H1 makes clear that this norm is a Hilbert-space norm). Of course this argument also shows directly the claim in the last sentence of Remark 3.6.
In Remark 3.8 the phrase "$(H_s, H_t)$ is a compatible pair and" is missing. The end of the second sentence of the remark should read: "... a collection of Hilbert spaces $\{H_s : s \in I\}$, indexed by I, is an interpolation scale if, for all $s, t \in I$ and $0 < \eta < 1$, $(H_s, H_t)$ is a compatible pair and $(H_s, H_t)_{\eta,2} = H_\theta$, for $\theta = (1-\eta)s + \eta t$."
[1] J. Bergh and J. Löfström, Interpolation Spaces: an Introduction, Springer-Verlag, 1976.
[2] A. Brown and C. Pearcy, Introduction to Operator Theory I: Elements of Functional Analysis, Springer-Verlag, 1977.
[3] Yu. A. Brudnyĭ and N. Ya. Krugljak, Interpolation Functors and Interpolation Spaces, Volume I, North-Holland, 1991.
[4] S. N. Chandler-Wilde, D. P. Hewett, and A. Moiola, Interpolation of Hilbert and Sobolev spaces: quantitative estimates and counterexamples, Mathematika, 61 (2015), pp. 414-443.
[5] S. N. Chandler-Wilde, D. P. Hewett, A. Moiola, and J. Besson, Boundary element methods for acoustic scattering by fractal screens, Numer. Math., 147 (2021), pp. 785-837.
[6] D. L. Cohn, Measure Theory, Birkhäuser, 2nd ed., 2013.
[7] D. E. Edmunds and W. D. Evans, Spectral Theory and Differential Operators, Oxford University Press, 1987.
[8] D. P. Hewett and A. Moiola, On the maximal Sobolev regularity of distributions supported by subsets of Euclidean space, Anal. Appl., 15 (2017), pp. 731-770.
[9] T. Kato, Perturbation Theory for Linear Operators, Springer, 1995 (corrected printing of the 2nd ed.).
[10] N. Krugljak, L. Maligranda, and L. E. Persson, On an elementary approach to the fractional Hardy inequality, Proc. Am. Math. Soc., 128 (1999), pp. 727-734.
[11] J. E. McCarthy, Geometric interpolation between Hilbert spaces, Arkiv för Matematik, 30 (1992), pp. 321-330.
[12] W. McLean, Strongly Elliptic Systems and Boundary Integral Equations, CUP, 2000.
[13] K.-M. Perfekt, The transmission problem on a three-dimensional wedge, Archive for Rational Mechanics and Analysis, 231 (2019), pp. 1745-1780.
[14] H. Triebel, Interpolation Theory, Function Spaces, Differential Operators, North-Holland, 1978.
| [] |
[
"Generic scaling relation in the scalar φ 4 model",
"Generic scaling relation in the scalar φ 4 model"
] | [
"S É Derkachov \nDepartment of Mathematics\nSt.-Petersburg Technology Institute, St.-Petersburg\nRussia\n",
"A N Manashov \nDepartment of Theoretical Physics\nSankt-Petersburg State University\nSt.-PetersburgRussia\n",
"\nIntroduction\n\n"
] | [
"Department of Mathematics\nSt.-Petersburg Technology Institute, St.-Petersburg\nRussia",
"Department of Theoretical Physics\nSankt-Petersburg State University\nSt.-PetersburgRussia",
"Introduction\n"
] | [] | The results of analysis of the one-loop spectrum of anomalous dimensions of composite operators in the scalar φ 4 model are presented. We give the rigorous constructive proof of the hypothesis on the hierarchical structure of the spectrum of anomalous dimensions -the naive sum of any two anomalous dimensions generates a limit point in the spectrum. Arguments in favor of the nonperturbative character of this result and the possible ways of a generalization to other field theories are briefly discussed. * | 10.1088/0305-4470/29/24/024 | [
"https://export.arxiv.org/pdf/hep-th/9604173v1.pdf"
] | 15,530,029 | hep-th/9604173 | f2918a4a618278f2039aaa232424fbd447b1d227 |
Generic scaling relation in the scalar φ 4 model
April 1996
S É Derkachov
Department of Mathematics
St.-Petersburg Technology Institute, St.-Petersburg
Russia
A N Manashov
Department of Theoretical Physics
Sankt-Petersburg State University
St.-PetersburgRussia
Introduction
arXiv:hep-th/9604173v1, 27 April 1996
The results of analysis of the one-loop spectrum of anomalous dimensions of composite operators in the scalar φ 4 model are presented. We give the rigorous constructive proof of the hypothesis on the hierarchical structure of the spectrum of anomalous dimensions -the naive sum of any two anomalous dimensions generates a limit point in the spectrum. Arguments in favor of the nonperturbative character of this result and the possible ways of a generalization to other field theories are briefly discussed. *
Introduction
The great interest in the renormalization of composite operators is mainly motivated by the successful application of the operator product expansion (OPE) method [1] to the description of deep inelastic scattering processes in QCD. It is well known by now that the behavior of the structure functions at large Q² is closely related to the spectrum of anomalous dimensions of composite operators [2]. The mixing matrix describing the renormalization of the operators in QCD is known for any set of such operators [3]. Unfortunately, the size of the mixing matrix for operators with given spin j and twist greater than 2 grows very fast with j, so the problem of calculating anomalous dimensions seems to admit only a numerical solution for not too large values of j.
The first attempt to analyze the spectrum of anomalous dimensions for a particular class of twist-3 operators in the limit j → ∞ was undertaken in the paper [4], where an analytic solution for the minimal anomalous dimension was obtained. The exact solution, as well as the results of the numerical study of the spectrum, suggests that the asymptotic behavior of the anomalous dimensions of twist-3 operators is, in some sense, determined by the spectrum of twist-2 operators.
It should be stressed that the difficulties connected with the calculation of the spectrum of anomalous dimensions are not a peculiarity of QCD alone: one encounters the same problems in simpler theories. In the recent papers [5,6,7,8] the analysis of the spectrum of anomalous dimensions in the O(N) vector model in 4 − ǫ dimensions was carried out. Due to the relative simplicity of this model it proved possible to obtain the exact solution of the eigenvalue problem for some classes of composite operators (see refs. [5,7]). These solutions, together with the results of the numerical analysis performed in [6] for a wide class of operators, lead to the same conclusion: the asymptotics of the anomalous dimensions of operators with given twist in the j → ∞ limit is determined by the dimensions of operators with smaller twist [7,8]. It was conjectured in [8] that the spectrum of anomalous dimensions has a "hierarchical" structure; this means that the sum of any two points of the spectrum is a limit point of the latter.
In the present paper we investigate the large-spin asymptotics of the one-loop anomalous dimensions of the spatially traceless and symmetric composite operators in the scalar φ⁴ model in 4 − ǫ dimensions. The approach used here is not specific to this model and admits a straightforward generalization to other theories.
Before proceeding with the calculations we would like to discuss the main difficulties which arise in the analysis of the spectrum of anomalous dimensions of large-spin operators. It is easy to understand that the source of all difficulties is the mixing problem. Indeed, a long time ago C. Callan and D. Gross proved a very strong statement concerning the anomalous dimensions of twist-2 operators [9], for which the mixing problem is absent. They showed that to all orders of perturbation theory the anomalous dimension $\lambda_l$ of the operator $\phi\,\partial_{\mu_1}\cdots\partial_{\mu_l}\phi$ tends to $2\lambda_\phi$ as l → ∞ ($\lambda_\phi$ being the anomalous dimension of the field φ).
Let us see what prevents the generalization of this result, even at the one-loop level, to operators of higher twist. Although calculating the mixing matrix is not very difficult, extracting information about its eigenvalues requires a lot of work. Indeed, if one has no idea about the structure of the eigenvectors, the only way to obtain the eigenvalues is to solve the characteristic equation, which is an almost hopeless task. However, suppose one has a guess for the form of an eigenfunction; then there is no problem in evaluating the corresponding eigenvalue. (Note that the exact solutions in [4,5,7] were obtained precisely in this manner.) Thus the more promising strategy is to guess the approximate structure of the eigenfunctions in the "asymptotic" region. But a simple criterion for deciding that a given vector is close to some eigenvector exists only for hermitian matrices (see sec. 2).
Thus, for a successful analysis of the asymptotic behavior of anomalous dimensions two ingredients are essential: the hermiticity of the mixing matrix and a proper choice of the test vector. It is not evident that the first condition can be satisfied at all. But for the model under consideration one can choose the scalar product in such a way that the mixing matrix is hermitian [5,6]. Some arguments that this can be done in the general case will be given in Sec. 4. The choice of the trial vector will be discussed below.
Henceforth, with the above considerations in mind, we shall carry out the analysis of the asymptotic behavior of the anomalous dimensions for the whole class of symmetric and traceless operators in the scalar φ⁴ theory.
The paper is organized in the following way: in sec. 2 we introduce notations, derive some formulae and give the exact formulation of the problem; sec. 3 is devoted to the proof of the theorem on the asymptotic behavior of anomalous dimensions, which is the main result of this paper; in the last section we discuss the results obtained.
Preliminary remarks
It has been shown in the papers [5,6] that the problem of calculating the anomalous dimensions of the traceless and symmetric composite operators in the scalar φ⁴ theory in the one-loop approximation is equivalent to the eigenvalue problem for the hermitian operator H acting on a Fock space $\mathcal H$:
$$H = \frac{1}{2}\sum_{n=0}^{\infty}\frac{1}{n+1}\,h_n^\dagger h_n = \frac{1}{2}\sum_{n=0}^{\infty}\frac{1}{n+1}\sum_{i=0}^{n}a_i^\dagger a_{n-i}^\dagger\sum_{j=0}^{n}a_j a_{n-j}. \tag{2.1}$$
Here $a_i^\dagger$, $a_i$ are the creation and annihilation operators with the standard commutation relations $[a_i, a_k^\dagger] = \delta_{ik}$. The eigenvalues of H and the anomalous dimensions of composite operators are simply related: $\gamma_{\rm an} = (\epsilon/3)\,\lambda + O(\epsilon^2)$. There is also a one-to-one correspondence between the eigenvectors of H and the multiplicatively renormalized composite operators [6].
It can easily be shown that H commutes with the particle-number operator N and with the generators S, S₊, S₋ of the SL(2,C) group:

$$[S_-, S] = S_-,\qquad [S_+, S] = -S_+,\qquad [S_+, S_-] = 2S. \tag{2.2}$$
They can be written as:
$$N = \sum_{j=0}^{\infty} a_j^\dagger a_j,\qquad S = \sum_{j=0}^{\infty}\Big(j+\frac12\Big)\, a_j^\dagger a_j, \tag{2.3}$$

$$S_- = \sum_{j=0}^{\infty}(j+1)\, a_j^\dagger a_{j+1},\qquad S_+ = -\sum_{j=0}^{\infty}(j+1)\, a_{j+1}^\dagger a_j. \tag{2.4}$$
Further, due to the commutativity of H with the SL(2,C) generators, each of the subspaces $H_n^l$ and $\tilde H_n^l \subset H_n^l$ ($n, l = 0, \ldots, \infty$),

$$H_n^l = \{\psi \in \mathcal H \mid N\psi = n\psi,\ S\psi = (l + n/2)\psi\},\qquad \tilde H_n^l = \{\psi \in H_n^l \mid S_-\psi = 0\}, \tag{2.5}$$

is an invariant subspace of the operator H. Since every eigenvector from $H_n^l$ which is orthogonal to $\tilde H_n^l$ has the form [7]

$$|\psi\rangle = \sum_k c_k\, S_+^k\, |\psi_\lambda\rangle,\qquad |\psi_\lambda\rangle \in \tilde H_n^{\,l-k},$$

to obtain the whole spectrum of the operator H it is sufficient to solve the eigenvalue problem for H on each $\tilde H_n^l$ separately. Moreover, in each $\tilde H_n^l$ there exists a large subspace of eigenvectors with zero eigenvalues. They have been completely described in ref. [6] and will not be considered here.
As for the nonzero eigenvalues, although at finite l the spectrum of H has a very complicated structure (numerical results for particular values of n and l are given in refs. [6,8]), at large l, as will be shown below, considerable simplifications take place.
The main result of the present work can be formulated in the form of the following theorem:
Theorem 1. Let the eigenvectors $\psi_1 \in \tilde H_n^r$ and $\psi_2 \in \tilde H_m^s$ ($\psi_1 \neq \psi_2$) of the operator H have the eigenvalues $\lambda_1$ and $\lambda_2$, respectively. Then there exists a number L such that for every $l \ge L$ there exists an eigenvector $\psi_l \in \tilde H_{n+m}^l$ with eigenvalue $\lambda_l$ such that

$$|\lambda_l - \lambda_1 - \lambda_2| \le C\sqrt{\ln l}\,/\,l, \tag{2.6}$$

where C is some constant independent of l. In the case when $\psi_1 = \psi_2$ the same inequality holds only for even $l \ge L$.
The proof is based on a simple observation. Since each subspace $\tilde H_n^l$ has finite dimension, the operator H restricted to $\tilde H_n^l$ has a purely point spectrum. In this case it can easily be shown that if there is a vector ψ for which the condition

$$\|(H - \tilde\lambda)\psi\| \le \epsilon\,\|\psi\| \tag{2.7}$$

is fulfilled, then there exists an eigenvector $\psi_\lambda$ ($H\psi_\lambda = \lambda\psi_\lambda$) such that $|\lambda - \tilde\lambda| \le \epsilon$. Indeed, expanding the vector ψ in the basis of eigenvectors of H, $\psi = \sum_k c_k\psi_k$, we obtain:

$$\epsilon\,\|\psi\| \ge \|(H - \tilde\lambda)\psi\| = \Big(\sum_k (\lambda_k - \tilde\lambda)^2 c_k^2\Big)^{1/2} \ge \min_k |\lambda_k - \tilde\lambda|\cdot\|\psi\|.$$

So, to prove the theorem it is sufficient to find in each subspace $\tilde H_{n+m}^l$ a vector which satisfies the corresponding inequality. Note that for a non-hermitian matrix these arguments are not applicable.
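As a quick numerical illustration of this residual bound (our own check, not part of the original argument; all names below are ours), one can verify for a random hermitian matrix that a small residual at a trial value forces an exact eigenvalue within that distance:

```python
import numpy as np

# Minimal check (ours) of the inequality below eq. (2.7): for hermitian H,
# ||(H - lam_t) psi|| <= eps ||psi||  implies  min_k |lambda_k - lam_t| <= eps.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 60))
H = (A + A.T) / 2                        # real symmetric, hence hermitian

psi = rng.standard_normal(60)
lam_t = psi @ H @ psi / (psi @ psi)      # trial value (Rayleigh quotient)

eps = np.linalg.norm(H @ psi - lam_t * psi) / np.linalg.norm(psi)
dist = np.min(np.abs(np.linalg.eigvalsh(H) - lam_t))
print(dist <= eps)                       # always True for hermitian H
```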
Before proceeding to the proof we give another formulation of the eigenvalue problem for the operator H. Note that there exists a one-to-one correspondence between the vectors from $H_n^l$ and the symmetric homogeneous polynomials of degree l in n variables:

$$|\Psi\rangle = \sum_{\{j_i\}} c_{j_1,\ldots,j_n}\, a_{j_1}^\dagger\cdots a_{j_n}^\dagger|0\rangle\ \to\ \psi(z_1,\ldots,z_n) = \sum_{\{j_i\}} c_{j_1,\ldots,j_n}\, z_1^{j_1}\cdots z_n^{j_n}, \tag{2.8}$$

the coefficients $c_{j_1,\ldots,j_n}$ being assumed totally symmetric. It is evident that this mapping can be extended to the whole space. The operators S, S₊, S₋ in the n-particle sector take the form [7]:

$$S = \sum_{i=1}^{n}\Big(z_i\partial_{z_i} + \frac12\Big),\qquad S_- = \sum_{i=1}^{n}\partial_{z_i},\qquad S_+ = -\sum_{i=1}^{n}\big(z_i^2\partial_{z_i} + z_i\big). \tag{2.9}$$
The operator H, in its turn, can be represented as a sum of two-particle hamiltonians:

$$H = \sum_{i<k}^{n} H(z_i, z_k), \tag{2.10}$$

where

$$H(z_i,z_k)\,\psi(z_1,\ldots,z_n) = \int_0^1 d\alpha\;\psi\big(z_1,\ldots,\alpha z_i + (1-\alpha)z_k,\ldots,\alpha z_i + (1-\alpha)z_k,\ldots,z_n\big). \tag{2.11}$$
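For explicit checks it is convenient to note that on monomials the kernel (2.11) acts in closed form: since $\int_0^1 \alpha^t(1-\alpha)^{s-t}\,d\alpha = t!\,(s-t)!/(s+1)!$, one finds $H(z_i,z_k)\, z_i^a z_k^b = \frac{1}{a+b+1}\sum_{t=0}^{a+b} z_i^t z_k^{a+b-t}$. The following sketch (ours, not from the paper; all function names are ours) builds the matrix of H on the degree-l monomials in n variables from this rule, checks that it is symmetric (i.e., hermitian with respect to the scalar product (2.14), which on monomial coefficients reduces to the standard one up to an overall n! factor), and prints the largest nonzero eigenvalues, so that the accumulation of eigenvalues predicted by Theorem 1 can be observed directly:

```python
import numpy as np
from itertools import combinations

def monomials(n, l):
    """All exponent tuples (j1, ..., jn) with j1 + ... + jn = l."""
    if n == 1:
        return [(l,)]
    return [(j,) + rest for j in range(l + 1) for rest in monomials(n - 1, l - j)]

def pair_action(exp, i, k):
    """H(z_i, z_k) on a monomial: z_i^a z_k^b -> (a+b+1)^(-1) sum_t z_i^t z_k^(a+b-t)."""
    s = exp[i] + exp[k]
    out = []
    for t in range(s + 1):
        new = list(exp)
        new[i], new[k] = t, s - t
        out.append((tuple(new), 1.0 / (s + 1)))
    return out

def hamiltonian(n, l):
    """Matrix of H = sum_{i<k} H(z_i, z_k), eq. (2.10), on degree-l monomials."""
    basis = monomials(n, l)
    index = {m: j for j, m in enumerate(basis)}
    H = np.zeros((len(basis), len(basis)))
    for col, m in enumerate(basis):
        for i, k in combinations(range(n), 2):
            for new, w in pair_action(m, i, k):
                H[index[new], col] += w
    return H

for l in (4, 8, 16, 32):
    H = hamiltonian(3, l)
    assert np.allclose(H, H.T)           # hermiticity w.r.t. the product (2.14)
    ev = np.linalg.eigvalsh(H)
    print(l, [round(e, 4) for e in ev if abs(e) > 1e-8][-3:])
```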
It should be stressed that not only H, but every $H(z_i, z_k)$ commutes with S, S₊, S₋. For further calculations it is very convenient to associate to every function of n variables another one by the following formula [7]:

$$\psi(z_1,\ldots,z_n) = \sum_{\{j_i\}} c_{j_1,\ldots,j_n}\, z_1^{j_1}\cdots z_n^{j_n}\ \to\ \phi(z_1,\ldots,z_n) = \sum_{\{j_i\}} (j_1!\cdots j_n!)^{-1}\, c_{j_1,\ldots,j_n}\, z_1^{j_1}\cdots z_n^{j_n}. \tag{2.12}$$
The function ψ can be expressed in terms of φ in the compact form:

$$\psi(z_1,\ldots,z_n) = \phi(\partial_{x_1},\ldots,\partial_{x_n})\prod_{i=1}^{n}\frac{1}{1 - x_i z_i}\Bigg|_{x_1=\cdots=x_n=0}. \tag{2.13}$$
Then one obtains the following expression for the scalar product of two vectors from $H_n^l$:

$$\langle\psi_1|\psi_2\rangle_{\mathcal H} = n!\;\phi_1(\partial_{z_1},\ldots,\partial_{z_n})\,\psi_2(z_1,\ldots,z_n)\big|_{z_1=\cdots=z_n=0}. \tag{2.14}$$
It is now easy to check that the operators S, S₊, S₋, H act on the space of the "conjugated" functions $\phi(z_1,\ldots,z_n)$ as:

$$S = \sum_{i=1}^{n}\Big(z_i\partial_{z_i}+\frac12\Big),\qquad S_+ = \sum_{i=1}^{n} z_i,\qquad S_- = -\sum_{i=1}^{n}\big(z_i\partial_{z_i}^2 + \partial_{z_i}\big), \tag{2.15}$$

$$H\phi(z_1,\ldots,z_n) = \sum_{i<k}\int_0^1 d\alpha\;\phi\big(z_1,\ldots,(1-\alpha)(z_i+z_k),\ldots,\alpha(z_i+z_k),\ldots,z_n\big). \tag{2.16}$$
Up to now we have assumed the functions $\psi(z_1,\ldots,z_n)$ to be totally symmetric. But in the following we shall deal with nonsymmetric functions as well. To treat them on an equal footing it is useful to enlarge the domain of definition of the operators S, S₊, S₋, H to the space of all polynomial functions $B = \bigoplus_{n,l=0}^{\infty} B_n^l$, where $B_n^l$ is the linear space of homogeneous polynomials of degree l in n variables with the scalar product given by eq. (2.14). Then the Fock space $\mathcal H$ is isomorphic to the subspace of symmetric functions of B, and the subspace $\tilde H_n^l$ to the subspace $\tilde B_n^l \subset B_n^l$ of symmetric homogeneous translation-invariant polynomials of degree l in n variables.
Proof of Theorem
Part I
Let us consider two eigenvectors of H, $\psi_1 \in \tilde H_n^r$ and $\psi_2 \in \tilde H_m^s$ ($H\psi_{1(2)} = \lambda_{1(2)}\psi_{1(2)}$), and let $\psi_1(x_1,\ldots,x_n)$ and $\psi_2(y_1,\ldots,y_m)$ be the symmetric translation-invariant homogeneous polynomials corresponding to them, of degree r and s respectively. To prove the theorem it is enough to exhibit in the subspace $\tilde B_{n+m}^l$ (or, equivalently, in $\tilde H_{n+m}^l$) a function for which the inequality (2.7) holds.
Let us consider the following function (nonsymmetric yet): ·]; and for the brevity we used notations ψ l (x, y) for ψ l (x 1 , . . . , x n , y 1 , . . . , y m ), and ψ 1(2) (x) for ψ 1(2) (x 1 , . . . , x n(m) ).
ψ l (x, y) = l k=0 c k (Ad k S + )ψ 1 (x)(Ad l−k S + )ψ 2 (y), (3.1.1) where c k = (−1) k C l k · C l+A++B k+A (C l k is the binomial coefficient); A = n + 2r − 1; B = m + 2s − 1; AdS + = [S + ,
Using eq. (2.2) it is straightforward to check that the function $\psi_l$ given by eq. (3.1.1) is translation invariant, i.e. $S_-\psi_l(x,y) = 0$.
To construct the test function we symmetrize $\psi_l(x,y)$ over all x and y:

$$\psi_l^S(x,y) = \mathrm{Sym}_{\{x,y\}}\,\psi_l(x,y) = \frac{n!\,m!}{(n+m)!}\sum_{k=0}^{m}\ \sum_{\{i_1<\cdots<i_k\}}\ \sum_{\{j_1<\cdots<j_k\}} \psi^{(i_1\ldots i_k)}_{(j_1\ldots j_k)}(x,y), \tag{3.1.2}$$

where $\psi^{(i_1\ldots i_k)}_{(j_1\ldots j_k)}(x,y)$ is obtained from $\psi_l(x,y)$ by interchanging $x_{i_1} \leftrightarrow y_{j_1}$, and so on. Also, without loss of generality, hereafter we take $n \ge m$. (For the cases m = 1 and n = 1(2) the expression for $\psi_l^S(x,y)$ yields the exact eigenfunctions, so one may hope that it will be a good approximation in other cases as well.)
The corresponding expression for the "conjugate" function $\phi_l(x,y)$ looks simpler. Indeed, taking into account that $S_+\phi(\ldots) = (x_1+\cdots+x_n)\,\phi(\ldots)$, one obtains:

$$\phi_l(x,y) = \sum_{k=0}^{l} c_k\,(x_1+\cdots+x_n)^k (y_1+\cdots+y_m)^{l-k}\,\phi_1(x)\phi_2(y) = \phi_1(x)\phi_2(y)\, K(a,b)\, \exp\!\Big(a\sum_{i=1}^{n}x_i\Big)\exp\!\Big(b\sum_{j=1}^{m}y_j\Big)\Bigg|_{a=b=0}, \tag{3.1.3}$$

where

$$K(a,b) \equiv \sum_{k=0}^{l} c_k\,\partial_a^{\,k}\,\partial_b^{\,l-k}. \tag{3.1.4}$$
Then, with the help of eqs. (2.13), (3.1.3), the following representation for the function $\psi_l(x,y)$ can be derived:

$$\psi_l(x,y) = \phi_1(\partial_\xi)\,\phi_2(\partial_\eta)\, K(a,b)\prod_{i,j}\frac{1}{\big(1 - x_i(a+\xi_i)\big)}\,\frac{1}{\big(1 - y_j(b+\eta_j)\big)}\Bigg|_{(a,b,\xi_i,\eta_j)=0}. \tag{3.1.5}$$
Now we are going to show that the inequality (2.7) with $\tilde\lambda = \lambda_1 + \lambda_2$ and $\epsilon = C\sqrt{\ln l}\,/\,l$ holds for the function $\psi_l^S(x,y)$. As mentioned before, this is sufficient to prove the theorem. Our first task is the calculation of the norm of the function $\psi_l^S(x,y)$. To be more precise, in the rest of this subsection we obtain a lower estimate of the norm of $\psi_l^S(x,y)$ for large values of l.
Using eqs. (2.14), (3.1.5) and taking into account that $\psi_l^S(x,y)$ is totally symmetric, one gets:

$$\|\psi_l^S\|^2 = n!\,m!\sum_{k=0}^{m}\ \sum_{\{i_1<\cdots<i_k\}}\ \sum_{\{j_1<\cdots<j_k\}} \phi_l(\partial_x,\partial_y)\,\psi^{(i_1\ldots i_k)}_{(j_1\ldots j_k)}(x,y) = n!\,m!\sum_{k=0}^{m} C_n^k\, C_m^k\, A_l^{(k)}. \tag{3.1.6}$$
The coefficients $A_l^{(k)}$ are given by the formula:

$$A_l^{(k)} = \mathcal N\,\phi_l(\partial_x,\partial_y)\,\psi^{(1\ldots k)}_{(1\ldots k)}(x,y) = \mathcal N\,\phi_1(\partial_{x_i})\,\phi_2(\partial_{y_i})\, K(a,b)\,\psi^{(1\ldots k)}_{(1\ldots k)}(x,\, y+b-a), \tag{3.1.7}$$

where the symbol $\mathcal N$ denotes that at the end of the calculation all arguments must be set to zero.
The substitution of (3.1.3), (3.1.5) into (3.1.7) yields:

$$A_l^{(k)} = \mathcal N\,\phi_1(\partial_x)\phi_2(\partial_y)\phi_1(\partial_{\bar x})\phi_2(\partial_{\bar y})\, K(a,b)\,K(\bar a,\bar b)\Bigg[\prod_{i=k+1}^{m}\frac{1}{1-(\bar y_i+\bar b-\bar a)(y_i+b)}\prod_{i=1}^{k}\frac{1}{\big(1-(\bar y_i+\bar b-\bar a)(x_i+a)\big)\big(1-\bar x_i(y_i+b)\big)}\prod_{i=k+1}^{n}\frac{1}{1-\bar x_i(x_i+a)}\Bigg]. \tag{3.1.8}$$
The expression in the square brackets of (3.1.8) depends only on the difference $\bar b - \bar a$, hence

$$K(\bar a,\bar b)\,\big[\cdots\big]\Big|_{\bar a=\bar b=0} = \Big(\sum_k (-1)^k c_k\Big)\,\partial_{\bar b}^{\,l}\,\big[\cdots\big]\Big|_{\bar a=\bar b=0}.$$

In the resulting expression the dependence on $\bar b$ can be factorized after an appropriate rescaling of the variables $x, y, \bar x, \bar y, a, b$. Finally, taking advantage of eq. (2.13) and remembering that the functions $\psi_1$, $\psi_2$ (but not $\phi(\ldots)$) are translation invariant, one gets:
$$A_l^{(k)} = Z\,\mathcal N\,\phi_1(\partial_{x_i})\,\phi_2(\partial_{y_i})\, K(a,b)\, F_k(a,b,x,y), \tag{3.1.9}$$

where $Z = l!\sum_{k=0}^{l} c_k(-1)^k = l!\,\binom{2l+A+B}{l+A}$ and

$$F_k(a,b,x,y) = \psi_1\big(y_1,\ldots,y_k,\, x_{k+1}+a-b,\ldots,x_n+a-b\big)\prod_{i=1}^{k}\frac{1}{1-x_i-a}\prod_{i=k+1}^{m}\frac{1}{1-y_i-b}\;(1-a)^{-s}\,\psi_2\!\left(\frac{x_1}{1-x_1-a},\ldots,\frac{x_k}{1-x_k-a},\,\frac{y_{k+1}+b-a}{1-y_{k+1}-b},\ldots,\frac{y_m+b-a}{1-y_m-b}\right). \tag{3.1.10}$$
It is easy to understand that after the differentiation with respect to $x_i$, $y_j$ the resulting expression has the form:

$$A_l^{(k)} = Z\,\mathcal N\, K(a,b)\sum_{n_1,n_2,n_3}\frac{C^k_{n_1,n_2,n_3}\,(a-b)^{n_1}}{(1-a)^{n_2}(1-b)^{n_3}} = Z\sum_{n_1,n_2,n_3} C^k_{n_1,n_2,n_3}\, A^k_{n_1,n_2,n_3}(l), \tag{3.1.11}$$

the summation over $n_1, n_2, n_3$ being carried out within limits which, as well as the coefficients $C^k_{n_1,n_2,n_3}$, are independent of the parameter l. Thus all the dependence of $A^{(k)}_l$ on l, except for the trivial factor Z, is contained in the coefficients $A^k_{n_1,n_2,n_3}(l)$. Our further strategy is the following: first of all we obtain the result for the quantities $A^{(0)}_l$ and $A^{(m)}_l$. (These terms give the main contributions to the norm of the vector $\psi_l^S$.) Then we show (this will be done in Appendix A) that for all other $A^{(k)}_l$ ($1 \le k \le m-1$), for which we are not able to obtain an exact result, the ratio $A^{(k)}_l/A^{(0)}_l$ tends to zero at least as $1/l^2$. To calculate $A^{(0)}_l$ it is sufficient to note that the expression for $F_0(a,b,x,y)$ (eq. (3.1.10)), after an appropriate shift of the arguments of the functions $\psi_1$ and $\psi_2$, reads:
$$\psi_1(x_1,\ldots,x_n)\prod_{i=1}^{m}\frac{1}{1-y_i-b}\;(1-b)^{-s}\,\psi_2\!\left(\frac{y_1}{1-y_1-b},\ldots,\frac{y_m}{1-y_m-b}\right). \tag{3.1.12}$$
Then, carrying out the differentiation with respect to $x_i$, $y_j$ in eq. (3.1.9), one obtains:

$$A^{(0)}_l = Z\,(m!\,n!)^{-1}\,\|\psi_1\|^2\|\psi_2\|^2\;\mathcal N\, K(a,b)\,(1-b)^{-(2s+m)} = (m!\,n!)^{-1}\, A(l), \tag{3.1.13}$$

where

$$A(l) = \frac{(2l+A+B)!}{l!\,(l+A+B)!}\;\frac{(l+A)!\,(l+B)!}{A!\,B!}\;\|\psi_1\|^2\|\psi_2\|^2. \tag{3.1.14}$$
The evaluation of $A^{(m)}_l$ in the case $n = m = k$ differs from the above only by the interchange of variables $x, a \leftrightarrow y, b$ in (3.1.12). Straightforward calculation then yields:

$$A^{(m)}_l = Z\,|\langle\psi_1|\psi_2\rangle|^2\;\mathcal N\, K(a,b)\,(1-a)^{-(2s+m)} = (-1)^l\, A^{(0)}_l\,\delta_{\psi_1\psi_2}. \tag{3.1.15}$$

Here we take into account that $\psi_1$, $\psi_2$ are eigenfunctions of a self-adjoint operator. Thus, when l is odd and $\psi_1 = \psi_2$ these two contributions ($A^{(0)}$ and $A^{(m)}$) cancel each other. But, as one can easily see from eq. (3.1.1), the function $\psi_l(x,y)$ is identically zero in this case.
In the case $k = m$, $m < n$, eq. (3.1.11) for $A^{(m)}_l$ reads:

$$A^{(m)}_l = Z\,\mathcal N\, K(a,b)\sum_{z=s}^{r} c_z\,\frac{(a-b)^{z-s}}{(1-a)^{s+z+m}} = Z\,\mathcal N\, K(a,b)\sum_{k=0}^{r-s}\tilde c_k\,\frac{(1-b)^{k}}{(1-a)^{2s+m+k}}. \tag{3.1.16}$$
After some algebra one obtains:

$$A^{(m)}_l = Z\,(l+A+B)!\sum_{k=0}^{r-s}\sum_{i=0}^{k}\tilde c_{k,i}\,\frac{l!\,(l+B+k-i)!}{(l-i)!\,(l+A-i)!} \le c\,\frac{A^{(0)}_l}{l^{\,n-m}} \le c\,\frac{A^{(0)}_l}{l}. \tag{3.1.17}$$
Similar calculations (see Appendix A for details) in the case $0 < k < m$ give:

$$|A^{(k)}_l| \le \alpha_k\, A^{(0)}_l\,/\,l^2, \tag{3.1.18}$$

where the $\alpha_k$ are some constants. Then, taking into account (3.1.14), (3.1.15), (3.1.17) and (3.1.18), one obtains:

$$\|\psi_l^S\|^2 = \big(1 + (-1)^l\,\delta_{\psi_1,\psi_2}\big)\, A(l)\,\big(1 + O(1/l)\big). \tag{3.1.19}$$
Part II
To complete the proof of the theorem it remains to obtain the inequality

$$\epsilon(l) = \|\delta H\,\psi_l^S\|^2 = \|(H-\lambda_1-\lambda_2)\,\psi_l^S\|^2 \le C\,\frac{\ln l}{l^2}\,A(l) \tag{3.2.1}$$

for all sufficiently large l. Substituting the expression (3.1.2) for $\psi_l^S$ and taking into account the invariance of the operator H under transpositions of its arguments (see eq. (2.10)), one gets

$$(\epsilon(l))^{1/2} = \frac{n!\,m!}{(n+m)!}\,\Big\|\sum_{k=0}^{m}\ \sum_{\{i_1<\cdots<i_k\}}\ \sum_{\{j_1<\cdots<j_k\}}\delta H\,\psi^{(i_1\ldots i_k)}_{(j_1\ldots j_k)}(x,y)\Big\| \le \|\delta H\,\psi_l(x,y)\|, \tag{3.2.2}$$

and, since $\psi_1$ and $\psi_2$ are eigenfunctions of H, the relation

$$\delta H\,\psi_l(x,y) = \sum_{i,k} H(x_i,y_k)\,\psi_l(x,y) \tag{3.2.3}$$

immediately follows. Finally, taking into consideration that $\|H(x_i,y_k)\,\psi_l(x,y)\| = \|H(x_1,y_1)\,\psi_l(x,y)\|$ and $\|H(x_1,y_1)\,\psi_l(x,y)\|^2 = \langle\psi_l(x,y),\,H(x_1,y_1)\,\psi_l(x,y)\rangle$,
we obtain the following estimate of ǫ(l):
$$\epsilon(l) \le (mn)^2\,\langle\psi_l(x,y),\,H(x_1,y_1)\,\psi_l(x,y)\rangle. \tag{3.2.4}$$
Our next purpose is to obtain an expression for the matrix element in (3.2.4) similar to that for $A^{(k)}_l$ (eq. (3.1.11)). With the help of formulae (3.1.3), (3.1.5), this matrix element can be represented in the following form:

$$\langle\psi_l(x,y),\,H(x_1,y_1)\,\psi_l(x,y)\rangle = Z\,(n+m)!\,K(a,b)\,\phi_1(\partial_x)\phi_2(\partial_y)\Bigg[\prod_{i=2}^{n}\frac{1}{1-a x_i}\prod_{i=2}^{m}\frac{1}{1-b y_i}\int_0^1 ds\,\frac{1}{1-a\theta(s)}\,\frac{1}{1-b\theta(s)}\;\psi_1\!\Big(\frac{\theta(s)}{1-a\theta(s)},\,\frac{x_i}{1-a x_i}\Big)\,\psi_2\!\Big(\frac{\theta(s)}{1-b\theta(s)},\,\frac{y_i}{1-b y_i}\Big)\Bigg]_{x=y=0,\ a=b=0}, \tag{3.2.5}$$

where $\theta(s) = s x_1 + (1-s) y_1$. After the differentiation with respect to $x_i$, $y_j$ ($i, j > 1$) one gets:

$$\langle\psi_l(x,y),\,H(x_1,y_1)\,\psi_l(x,y)\rangle = \sum c^{n_1 n_2}_{m_1,m_2,m_3}\,\tilde a^{n_1 n_2}_{m_1,m_2,m_3}(l), \tag{3.2.6}$$

where the coefficients $c^{n_1 n_2}_{m_1,m_2,m_3}$ do not depend on l and

$$\tilde a^{n_1 n_2}_{m_1,m_2,m_3}(l) = Z\,K(a,b)\,\mathcal N\Bigg[a^{n_2-n_1}(1-b)^{\beta-m_2}\,\partial^{n_1}_x\partial^{m_1}_y\int_0^1 ds\,\frac{\theta^{\,n_2+m_2}}{(1-a\theta)^{n_2+1}(1-b\theta)^{m_2+1}}\Bigg], \tag{3.2.7}$$

with $\beta = s + m_3 + m - 1$. Before applying the operator $K(a,b) = \sum_{k=0}^{l} c_k\,\partial_a^k\partial_b^{\,l-k}$ to the expression in the square brackets it is convenient to rewrite the latter in a form more suitable for this purpose:

$$a^{n_2-n_1}(1-b)^{\beta-m_2}\,\partial^{n_1}_x\partial^{m_1}_y\int_0^1 ds\,\frac{\theta^{\,n_2+m_2}}{(1-a\theta)^{n_2+1}(1-b\theta)^{m_2+1}} = \frac{a^{n_2-n_1}(1-b)^{\beta-m_2}}{n_2!\,m_2!}\int_0^1 ds\; s^{m_1}(1-s)^{n_1}\,\partial^{n_2}_a\partial^{m_2}_b\,\partial^{\,(m_1+n_1)}_s\frac{1}{(1-as)(1-bs)}. \tag{3.2.8}$$
Now all differentiations with respect to a and b can easily be carried out:

$$\partial_a^{\,k}\, a^{n_2-n_1}\,\partial_a^{\,n_2}\frac{1}{1-as}\Big|_{a=0} = (k+n_1)!\;s^{\,k+n_1}\,\partial_x^{\,n_2-n_1}\, x^{k}\Big|_{x=1},$$

$$\partial_b^{\,l-k}\,\frac{1}{(1-b)^{\beta}}\,\partial_b^{\,m_2}\frac{1}{1-sb}\Big|_{b=0} = s^{m_2}\,\frac{(l-k+\beta)!}{\Gamma(\beta-m_2)}\int_0^1 d\alpha\;\alpha^{\beta-m_2-1}(1-\alpha)^{m_2}\,\big[\alpha+(1-\alpha)s\big]^{\,l-k}.$$
Finally, representing ratios of factorials like $(k+n_1)!/(k+A)!$ in the form $\frac{1}{\Gamma(A-n_1-1)}\int_0^1 du\; u^{k+n_1}(1-u)^{A-n_1-1}$, the summation over k becomes trivial and we obtain the following expression for $\tilde a^{n_1 n_2}_{m_1,m_2,m_3}(l)$:

$$\tilde a^{n_1 n_2}_{m_1,m_2,m_3}(l) = A(l)\; a^{n_1 n_2}_{m_1,m_2,m_3}(l), \tag{3.2.9}$$

where

$$a^{n_1 n_2}_{m_1,m_2,m_3}(l) = \frac{\partial_x^{\,n_2-n_1}}{\Gamma(A-n_1)\,\Gamma(\beta-m_2)\,\Gamma(B-\beta)}\int_0^1\!\!\cdots\!\!\int_0^1 ds\,d\alpha\,du\,dv\;\; u^{n_1}(1-u)^{A-n_1-1}\, v^{\beta}(1-v)^{B-\beta-1}\,\alpha^{\beta-m_2-1}(1-\alpha)^{m_2}\, s^{m_1}(1-s)^{n_1}\,\partial_s^{\,n_1+m_1}\Big(s^{\,n_1+m_2}\,\big[v(\alpha+(1-\alpha)s) - sux\big]^{\,l}\Big)\Big|_{x=1}. \tag{3.2.10}$$
Note that when the argument of a Γ-function becomes equal to zero, the following evident changes must be made:

$$\frac{1}{\Gamma(A-n_1)}\int du\,(1-u)^{A-n_1-1}\ldots\ \to\ \int du\;\delta(1-u)\ldots\quad\text{if } A = n_1,$$

and so on.
With account of eq. (3.2.9), eq. (3.2.7) reads:

$$\langle\psi_l(x,y),\,H(x_1,y_1)\,\psi_l(x,y)\rangle = A(l)\sum_{n_1=0}^{r}\sum_{n_2=n_1}^{r}\sum_{m_1=0}^{s}\sum_{m_2=0}^{s}\sum_{m_3=0}^{s-m_1} c^{n_1 n_2}_{m_1,m_2,m_3}\; a^{n_1 n_2}_{m_1,m_2,m_3}(l). \tag{3.2.11}$$
Thus, to prove the inequality (3.2.1) for $\epsilon(l)$ we only have to show that the coefficients $a^{n_1 n_2}_{m_1,m_2,m_3}(l)$ tend to zero no more slowly than $\ln l/l^2$ as $l \to \infty$.
First of all, let us consider the cases when at least one Γ-function in (3.2.10) has zero argument. (Recall that $A = 2r + n - 1$, $B = 2s + m - 1$; n, m ($m \le n$) are the numbers of variables and r, s the degrees of the translation-invariant polynomials $\psi_1$ and $\psi_2$, respectively.)

1. $A = n_1$. It is easy to see that the equality $A - n_1 = 0$ is possible only when n = 1, r = 0, and consequently m = 1, s = 0. One then immediately finds that the arguments of the two other Γ-functions are also zero, and the corresponding integral (see eq. (3.2.10) and the note following it) is zero.

2. $\beta - m_2 = 0$, $n > 1$ ($0 < A - n_1$). In this case one obtains m = 1, s = 0 and $B - \beta = 0$. Since two of the Γ-functions have zero arguments, the integrations over v and α are removed. After this it is trivial to check that a(l) tends to zero as $1/l^2$ as $l \to \infty$.

3. $B - \beta = 0$ and $1 < m \le n$. Evidently, this is possible only when $m_1 = 0$ and $m_3 = s$. Again one integration (over v) is removed. To calculate a(l), let us first write $\partial_x^{\,n_2-n_1}$ in (3.2.10) as $u^{n_2-n_1}\partial_u^{\,n_2-n_1}$ and integrate by parts both over u and over s. Note that no boundary terms appear when $m_1 = 0$. Then the integrand is the product of two functions, one of which, $[(\alpha+(1-\alpha)s)-su]^l$, is positive definite in the region of integration, and the other is a sum of monomials like $s^{i_1}(1-s)^{i_2}\alpha^{i_3}\ldots$ with finite coefficients. Since $0 \le s, u, \alpha \le 1$, this sum can be bounded by a constant independent of l. Taking this remark into account, one obtains the following estimate of a(l):

$$|a^{n_1 n_2}_{m_1,m_2,m_3}(l)| \le C\int_0^1 ds\,d\alpha\,du\,\big[(\alpha+(1-\alpha)s)-su\big]^l = C\int_0^1 ds\,du\,\frac{(1-su)^{l+1} - s^{l+1}(1-u)^{l+1}}{(1-s)(l+1)} = \frac{2C}{(l+1)(l+2)}\,\big[\psi(l+2)-\psi(1)\big], \tag{3.2.12}$$

where $\psi(x) = \partial_x\ln\Gamma(x)$.

4. Finally, we consider the case when all arguments of the Γ-functions in eq. (3.2.10) are greater than zero. As in the previous case it is convenient to replace $\partial_x^{\,n_2-n_1}$ with $u^{n_2-n_1}\partial_u^{\,n_2-n_1}$ and carry out the integration over u and s by parts. But boundary terms now arise from the integration over s at the upper bound (s = 1). However, it is not hard to show that each of them decreases at least as $1/l^2$ as $l \to \infty$. (All calculations practically repeat those given in Appendix A.)
The last term to be calculated has the form:

$$I(l) = \int ds\,d\alpha\,du\,dv\; A(s,\alpha,u,v)\,\big[v(\alpha+(1-\alpha)s) - su\big]^l, \tag{3.2.13}$$
where $A(s,\alpha,u,v)$ is some polynomial in the variables s, α, u, v such that $A(s,\alpha,u,v) < C$ in the domain $0 \le s, \alpha, u, v \le 1$. Then for even l one obtains:

$$I(l) \le C\int_0^1\!\!\cdots\!\!\int_0^1 ds\,d\alpha\,du\,dv\;\big[v(\alpha+(1-\alpha)s) - su\big]^l. \tag{3.2.14}$$
If l is odd, let us divide the domain of integration into two regions, $\Omega_+$ and $\Omega_-$, in which the function $v(\alpha+(1-\alpha)s)-su$ is positive or negative, respectively. It is straightforward to find that $\Omega_+$, $\Omega_-$ are defined by the following conditions:

$$\Omega_+:\quad (u \le v)\ \text{ or }\ \big(v \le u,\ s \le v/u,\ s(u-v)/\big(v(1-s)\big) \le \alpha \le 1\big);$$

$$\Omega_-:\quad (v \le u)\ \text{ and }\ \Big(s \ge v/u\ \text{ or }\ \big(s \le v/u,\ 0 \le \alpha \le s(u-v)/\big(v(1-s)\big)\big)\Big).$$
Then the integral I(l) is estimated as:

$$I(l) \le C_+\int_{\Omega_+} ds\,d\alpha\,du\,dv\,\big[v(\alpha+(1-\alpha)s)-su\big]^l \;-\; C_-\int_{\Omega_-} ds\,d\alpha\,du\,dv\,\big[v(\alpha+(1-\alpha)s)-su\big]^l. \tag{3.2.15}$$
The evaluation of the corresponding integrals does not cause any trouble and leads to the following result:

$$I(l) \le C\,\ln l/l^2. \tag{3.2.16}$$

Collecting the above estimates, we conclude that the inequality

$$\|(H-\lambda_1-\lambda_2)\,\psi_l^S\|^2 \le C\,\frac{\ln l}{l^2}\,\|\psi_l^S\|^2 \tag{3.2.17}$$
holds for all $l \ge L$. This inequality, as was shown in sec. 2, guarantees the existence of an eigenvector of the operator H with an eigenvalue satisfying eq. (2.6).
Conclusion
The theorem proven in the previous section provides a number of consequences for the spectrum of the operator H:
• Every point of the spectrum is either a limit point of the latter or an exact eigenvalue of infinite multiplicity;
• Any finite sum of eigenvalues and limit points of the spectrum is a limit point again.
These statements follow directly from the theorem. Further, let us denote by $S_n$ the spectrum of the operator H restricted to the n-particle sector of the Fock space ($N\psi = n\psi$), and by $\bar S_n$ the set of limit points of $S_n$. Then it is easy to see that definite relations ("hierarchical structures") between $S_n$, $\bar S_n$ ($n = 2,\ldots,\infty$) exist. Indeed, let $\Sigma_n$ be the set of all possible sums of the type $s_{i_1} + s_{i_2} + \cdots + s_{i_m}$, where $s_{i_k} \in S_{i_k}$ and $i_1 + \cdots + i_m \le n$, $i_1 \le i_2 \le \cdots \le i_m$. Then one easily concludes that the relations $\Sigma_n \subset \bar S_n$ are valid. For n = 2, 3 the stronger equalities $\Sigma_n = \bar S_n$ hold, but checking this conjecture for general n requires additional analysis.
So one can see that a number of interesting properties are specific to the one-loop spectrum of anomalous dimensions of composite operators in φ⁴ theory. Of course, two questions arise: which of these (one-loop) properties survive at higher orders of perturbation theory? And to what extent are they conditioned by the peculiarities of φ⁴ theory?
As to the first question, we can only adduce some arguments in favor of the nonperturbative character of the obtained results. In the paper [8] the spectrum of critical exponents of the N-vector model in 4 − ǫ dimensions was investigated to second order in ǫ. It was found there that some one-loop properties of the spectrum, in particular a generic class of degeneracies [6,7], are lifted at two-loop order. However, the results of the numerical analysis of critical exponents carried out for operators with number of fields ≤ 4 distinctly show that the limit-point structure of the spectrum is preserved.
Other evidence in favor of this hypothesis can be found in the works of K. Lang and W. Rühl [10]. They investigated the spectrum of critical exponents in the nonlinear σ model in 2 < d < 4 dimensions at first order of the 1/N expansion. The results for various classes of composite operators [10] display the existence of similar limit-point structures in this model over the whole range 2 < d < 4. Since the critical exponents obtained in the 1/N expansion for the σ model and in the 4 − ǫ expansion for the (φ²)² model should be consistent, one may expect this property of the spectrum to hold to all orders in ǫ in the latter.
To answer the second question it is useful to understand which features of the model under consideration determine the properties of the operator H (hermiticity, invariance under the SL(2,C) group, the two-particle type of the interaction) that were crucial for the proof of the theorem. The first two properties are closely related to the conformal invariance of the φ⁴ model [11]. It can be shown that the two-particle form of the operator H together with the conformal invariance of a theory leads to the hermiticity of H with respect to the scalar product given by eq. (2.14). (The relation between the functions ψ and φ in the general case is given in ref. [12].) Further, it should be emphasized that the commutativity of H with S and S₊ reflects two simple facts: 1. Nontrivial mixing occurs (in φ⁴ theory) only between operators with equal numbers of fields. 2. The total derivative of an eigenoperator is an eigenoperator with the same anomalous dimension.
But if the operator H is hermitian it must commute with the operator conjugate to S₊ as well. So the minimal invariance group of H (SL(2,C) in our case) has three generators.
Thus, the method for the analysis of anomalous dimensions of composite operators in the l → ∞ limit presented here is not peculiar to φ⁴ theory and can be applied to other theories which are conformally invariant at the one-loop level.
Acknowledgments

The authors are grateful to Dr. S. Kehrein and Dr. A. A. Bolokhov for stimulating discussions and critical remarks. The work was supported by Grant 95-01-00569a of the Russian Fund for Fundamental Research.

Appendix A

In this appendix we calculate the quantities $A^{(k)}_l$ for $1 \le k \le m-1$. Let us recall the representation (3.1.11) for $A^{(k)}_l$:

$$A^{(k)}_l = Z\sum_{n_1,n_2,n_3} C^k_{n_1,n_2,n_3}\, A^{(k)}_{n_1,n_2,n_3}(l),$$

the summation over $n_1, n_2, n_3$ being carried out in a range determined by the explicit form of (3.1.10) (let us remind that $B = 2s + m - 1$, $A = 2r + n - 1$, and $m \le n$). After the integration by parts in (A.3) (note that in the case $r \le s$ there are no boundary terms) and the differentiation with respect to a, b and x, one arrives at integrals of the form

$$A^{(k)}_{n_1,n_2,n_3}(l) \sim \sum_j \alpha_j \int_0^1 du\,dv\;\cdots\,(1-v)^j\, v^{B-j-1}\,(u-v)^l, \tag{A.5}$$

where the $\alpha_j$ are some unessential constants. Since $|u-v| \le 1$, every integral in (A.5) tends to zero as $l \to \infty$. For a more precise estimate it is convenient to divide the domain of integration into two regions ($u \le v$ and $v \le u$) and to rescale the variables, $v = ut$ ($u = vt$), in each of the two integrals. Then, replacing all factors of the type $(1-ut)^{\alpha}$ by unity, one obtains the answer in the form of a product of two beta functions. Taking into account that $s - r + m - k - 1 \le j \le B - 1$ and collecting all the necessary terms, one finds that the contribution of $A^{(k)}_{n_1,n_2,n_3}$ to $A^{(k)}_l$ is of order $A(l)/l^2$.

Thus we have obtained the required result for the case $r \le s$. To do the same for $s \le r$, note first of all that the representation of $A^{(k)}_l$ in the form (3.1.11) is not unique. Indeed, expanding $(a-b)^{n_1}$ in a series in $(1-a)$, $(1-b)$ leads to a redefinition of the coefficients $C^k_{n_1,n_2,n_3}$ and to a change of the limits of summation. We use this freedom to represent $A^{(k)}_l$ in a form in which all calculations for the case $s \le r$ can be carried out in the same manner as for $r \le s$. Let us consider eq. (3.1.7). Due to the translation invariance of the function $\psi_l(x,y)$, it does not change under the shift of variables $\bar y + \bar b - \bar a \to \bar y$ and $\bar x \to \bar x + \bar a - \bar b$. Carrying out successively all the operations of Sec. 3, one arrives at formula (3.1.11), the summation now being carried out in the following range:

$$n_1 = m_1 + m_2;\qquad n_2 = r + n - k + m_2 - m_3;\qquad n_3 = k + r + m_1 + m_3; \tag{A.6}$$
$$0 \le m_1 \le \min[s,r];\qquad 0 \le m_2 \le r;\qquad 0 \le m_3 \le s - m_1.$$

The calculation of the coefficients $\tilde A^{(k)}_{n_1,n_2,n_3}$ in this case simply repeats the one given above. Thus, for $1 \le k \le m-1$ one has the required inequality (3.1.18).
[1] K. Wilson, Phys. Rev. 179 (1969) 1499.
[2] D. Gross and F. Wilczek, Phys. Rev. D8 (1973) 3633; D9 (1974) 980.
[3] A. P. Bukhostov, G. F. Frolov, L. N. Lipatov, and E. A. Kuraev, Nucl. Phys. B258 (1985) 601; P. G. Ratcliffe, Nucl. Phys. B264 (1986) 493; X. Ji and C. Chou, Phys. Rev. D42 (1990) 3637.
[4] A. Ali, V. M. Braun, and G. Hiller, Phys. Lett. B266 (1991) 117.
[5] S. K. Kehrein, F. J. Wegner, and Y. M. Pis'mak, Nucl. Phys. B402 (1993) 669.
[6] S. K. Kehrein and F. J. Wegner, Nucl. Phys. B424 (1994) 521.
[7] S. Derkachov and A. Manashov, Nucl. Phys. B455 [FS] (1995) 685.
[8] S. Kehrein, Nucl. Phys. B453 [FS] (1995) 777.
[9] C. Callan and J. Gross, Phys. Rev. D8 (1973) 4383.
[10] K. Lang and W. Rühl, Z. Phys. C63 (1994) 531; Nucl. Phys. B400 (1993) 597.
[11] L. Schäfer, J. Phys. A9 (1976) 377.
[12] S. Derkachov and A. Manashov, The spectrum of the anomalous dimensions of the composite operators in ǫ-expansion in the scalar φ^4-theory, preprint hep-th/9505110 (St. Petersburg State University, 1995).
| [] |
[
"STRUCTURED QUASI-NEWTON METHODS FOR OPTIMIZATION WITH ORTHOGONALITY CONSTRAINTS",
"STRUCTURED QUASI-NEWTON METHODS FOR OPTIMIZATION WITH ORTHOGONALITY CONSTRAINTS"
] | [
"Jiang Hu ",
"B O Jiang ",
"Lin Lin ",
"Zaiwen Wen ",
"ANDYaxiang Yuan "
] | [] | In this paper, we study structured quasi-Newton methods for optimization problems with orthogonality constraints. Note that the Riemannian Hessian of the objective function requires both the Euclidean Hessian and the Euclidean gradient. In particular, we are interested in applications where the Euclidean Hessian itself consists of a computationally cheap part and a significantly expensive part. Our basic idea is to keep these parts of lower computational costs but substitute those parts of higher computational costs by the limited-memory quasi-Newton update. More specifically, the part related to the Euclidean gradient and the cheaper parts in the Euclidean Hessian are preserved. The initial quasi-Newton matrix is further constructed from a limited-memory Nyström approximation to the expensive part. Consequently, our subproblems approximate the original objective function in the Euclidean space and preserve the orthogonality constraints without performing the so-called vector transports. When the subproblems are solved to sufficient accuracy, both global and local q-superlinear convergence can be established under mild conditions. Preliminary numerical experiments on the linear eigenvalue problem and the electronic structure calculation show the effectiveness of our method compared with the state-of-the-art algorithms. | 10.1137/18m121112x | [
"https://arxiv.org/pdf/1809.00452v1.pdf"
] | 52,832,928 | 1809.00452 | 1f72aa0161fcb96d108dcce2308e7c3005618cc7 |
STRUCTURED QUASI-NEWTON METHODS FOR OPTIMIZATION WITH ORTHOGONALITY CONSTRAINTS
3 Sep 2018
Jiang Hu
Bo Jiang
Lin Lin
Zaiwen Wen
Yaxiang Yuan
STRUCTURED QUASI-NEWTON METHODS FOR OPTIMIZATION WITH ORTHOGONALITY CONSTRAINTS
3 Sep 2018. Keywords: optimization with orthogonality constraints, structured quasi-Newton method, limited-memory Nyström approximation, Hartree-Fock total energy minimization, convergence. AMS subject classifications: 15A18, 65K10, 65F15, 90C26, 90C30.
In this paper, we study structured quasi-Newton methods for optimization problems with orthogonality constraints. Note that the Riemannian Hessian of the objective function requires both the Euclidean Hessian and the Euclidean gradient. In particular, we are interested in applications where the Euclidean Hessian itself consists of a computationally cheap part and a significantly expensive part. Our basic idea is to keep these parts of lower computational costs but substitute those parts of higher computational costs by the limited-memory quasi-Newton update. More specifically, the part related to the Euclidean gradient and the cheaper parts in the Euclidean Hessian are preserved. The initial quasi-Newton matrix is further constructed from a limited-memory Nyström approximation to the expensive part. Consequently, our subproblems approximate the original objective function in the Euclidean space and preserve the orthogonality constraints without performing the so-called vector transports. When the subproblems are solved to sufficient accuracy, both global and local q-superlinear convergence can be established under mild conditions. Preliminary numerical experiments on the linear eigenvalue problem and the electronic structure calculation show the effectiveness of our method compared with the state-of-the-art algorithms.
1. Introduction. In this paper, we consider the optimization problem with orthogonality constraints:
$$\min_{X\in\mathbb{C}^{n\times p}}\; f(X)\quad \text{s.t.}\quad X^*X = I_p, \tag{1.1}$$
where $f(X): \mathbb{C}^{n\times p} \to \mathbb{R}$ is an R-differentiable function [26]. Although our proposed methods are applicable to a general function f(X), we are particularly interested in cases where the Euclidean Hessian $\nabla^2 f(X)$ takes the natural structure
$$\nabla^2 f(X) = H_c(X) + H_e(X), \tag{1.2}$$
where the computational cost of $H_e(X)$ is much higher than that of $H_c(X)$. This situation occurs when f is a sum of functions whose full Hessians are expensive to evaluate or even inaccessible. A practical example is the Hartree-Fock-like total energy minimization problem in electronic structure theory [38,31], where the computational cost associated with the Fock exchange matrix is significantly larger than the cost of the remaining components.
There are extensive methods for solving (1.1) in the literature. By exploiting the geometry of the manifold (i.e., the orthogonality constraints), Riemannian gradient descent, conjugate gradient (CG), Newton and trust-region methods are proposed in [11,10,41,36,1,2,44]. Since second-order information is sometimes not available, quasi-Newton type methods serve as an alternative that still guarantees good convergence properties. Different from the Euclidean quasi-Newton method, the vector transport operation [2] is used to compare tangent vectors in different tangent spaces. After obtaining a descent direction, the so-called retraction provides a curvilinear search along the manifold. By imposing certain compatibility conditions between the differentiable retraction and the vector transport, a Riemannian Broyden-Fletcher-Goldfarb-Shanno (BFGS) method is presented in [33,34,35]. Due to the requirement of a differentiable retraction, the computational cost associated with the vector transport operation may be high. To avoid this disadvantage, the authors of [18,21,23,20] develop a new class of Riemannian BFGS, symmetric rank-one (SR1) and Broyden family methods. Moreover, a selection of Riemannian quasi-Newton methods has been implemented in the software packages Manopt [6] and ROPTLIB [19].
• By taking the advantage of this structure (1.3), we construct an approximation to Euclidean Hessian ∇ 2 f (X) instead of the full Riemannian Hessian Hess f (X) directly, but keep the remaining parts ξsym(X * ∇f (X)) and Proj X (·). Then, we solve a subproblem with orthogonality constraints, whose objective function uses an approximate second-order Taylor expansion of f with an extra regularization term. Similar to [16], the trust-region-like strategy for the update of the regularization parameter and the modified CG method for solving the subproblem are utilized. The vector transport is not needed in since we are working in the ambient Euclidean space. • By further taking advantage of the structure (1.2) of f , we develop a structured quasi-Newton approach to construct an approximation to the expensive part H e while preserving the cheap part H c . This kind of structured approximation usually yields a better property than the approximation constructed by the vanilla quasi-Newton method. For the construction of an initial approximation of H e , we also investigate a limited-memory Nyström approximation, which gives a subspace approximation of a known good but still complicated approximation of H e . • When the subproblems are solved to certain accuracy, both global and local q-superlinear convergence can be established under certain mild conditions. • Applications to the linear eigenvalue problem and the electronic structure calculation are presented. The proposed algorithms perform comparably well with state-of-art methods in these two applications.
1.2. Applications to electronic structure calculation. Electronic structure theories, and particularly Kohn-Sham density functional theory (KSDFT), play an important role in quantum physics, quantum chemistry and materials science. This problem can be interpreted as a minimization problem for the electronic total energy over multiple electron wave functions which are orthogonal to each other. The mathematical structure of Kohn-Sham equations depends heavily on the choice of the exchange-correlation (XC) functional. With some abuse of terminology, throughout the paper we will use KSDFT to refer to Kohn-Sham equations with local or semi-local exchange-correlation functionals. Before discretization, the corresponding Kohn-Sham Hamiltonian is a differential operator. On the other hand, when hybrid exchange-correlation functionals [4,15] are used, the Kohn-Sham Hamiltonian becomes an integro-differential operator, and the Kohn-Sham equations become Hartree-Fock-like equations. Again with some abuse of terminology, we will refer to such calculations as the HF calculation.
For KSDFT calculations, the most popular numerical scheme is the self-consistent field (SCF) iteration which can be efficient when combined with certain charge mixing techniques. Since the hybrid exchange-correlation functionals depend on all the elements of the density matrix, HF calculations are usually more difficult than KS-DFT calculations. One commonly used algorithm is called the nested two-level SCF method [12]. In the inner SCF loop, by fixing the density matrix and the hybrid exchange operator, it only performs an update on the charge density ρ, which is solved by the SCF iteration. Once the stopping criterion of the inner iteration is satisfied, the density matrix is updated in the outer loop according to the Kohn-Sham orbitals computed in the inner loop. This method can also utilize the charge mixing schemes for the inner SCF loop to accelerate convergence. Recently, by combining with the adaptively compressed exchange operator (ACE) method [28], the convergence rate of the nested two-level SCF method is greatly improved. Another popular algorithm to solve HF calculations is the commutator direction inversion of the iterative subspace (C-DIIS) method. By storing the density matrix explicitly, it can often lead to accelerated convergence rate. However, when the size of the density matrix becomes large, the storage cost of the density matrix becomes prohibitively expensive. Thus Lin et al. [17] proposed the projected C-DIIS (PC-DIIS) method, which only requires storage of wave function type objects instead of the whole density matrix.
HF calculations can also be solved using the aforementioned Riemannian optimization methods (e.g., a feasible gradient method on the Stiefel manifold [44]) without storing the density matrix or the wave function. However, these existing methods often do not use the structure of the Hessian in KSDFT or HF calculations. In this paper, by exploiting the structure of the Hessian, we apply our structured quasi-Newton method to solve these problems. Preliminary numerical experiments show that our algorithm performs at least comparably well with state-of-the-art methods when they converge. In cases where state-of-the-art methods fail, our algorithm often returns high-quality solutions.
1.3. Organization. This paper is organized as follows. In section 2, we introduce our structured quasi-Newton method and present our algorithm. In section 3, the global and local convergence is analyzed under certain inexact conditions. In sections 4 and 5, detailed applications to the linear eigenvalue problem and the electronic structure calculation are discussed. Finally, we demonstrate the efficiency of our proposed algorithm in section 6.
1.4. Notation. For a matrix $X \in \mathbb{C}^{n\times p}$, we use $\bar X$, $X^*$, $\Re X$ and $\Im X$ to denote its complex conjugate, complex conjugate transpose, real and imaginary parts, respectively. Let $\mathrm{span}\{X_1,\ldots,X_l\}$ be the space spanned by the matrices $X_1,\ldots,X_l$. The vector $\mathrm{vec}(X) \in \mathbb{C}^{np}$ is formed by stacking the columns of X one by one, from the first to the last; the operator $\mathrm{mat}(\cdot)$ is the inverse of $\mathrm{vec}(\cdot)$, i.e., $\mathrm{mat}(\mathrm{vec}(X)) = X$. Given two matrices $A, B \in \mathbb{C}^{n\times p}$, the Frobenius inner product $\langle\cdot,\cdot\rangle$ is defined as $\langle A,B\rangle = \mathrm{tr}(A^*B)$ and the corresponding Frobenius norm $\|\cdot\|_F$ is defined as $\|A\|_F = \sqrt{\mathrm{tr}(A^*A)}$. The Hadamard product of A and B is $A\odot B$ with $(A\odot B)_{ij} = A_{ij}B_{ij}$. For a matrix $M\in\mathbb{C}^{n\times n}$, $\mathrm{diag}(M)$ is the vector in $\mathbb{C}^n$ formed by the main diagonal of M; and for $c\in\mathbb{C}^n$, $\mathrm{Diag}(c)$ is the n-by-n diagonal matrix with the elements of c on the main diagonal. The notation $I_p$ denotes the p-by-p identity matrix. Let $\mathrm{St}(n,p) := \{X\in\mathbb{C}^{n\times p} : X^*X = I_p\}$ be the (complex) Stiefel manifold. The notation $\mathbb{N}$ refers to the set of all natural numbers.
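As a small illustration of these conventions (ours, not part of the paper), in NumPy the column-stacking vec and its inverse mat correspond to Fortran-order reshapes, and the Frobenius inner product equals the flattened vector dot product:

```python
import numpy as np

# Conventions of section 1.4 in NumPy terms (ours): column-major vec/mat,
# Frobenius inner product <A, B> = tr(A^* B), and the Hadamard product.
X = np.arange(6.0).reshape(3, 2)
v = X.flatten(order="F")                            # vec(X): stack columns
assert np.allclose(v.reshape(3, 2, order="F"), X)   # mat(vec(X)) = X

A = np.random.default_rng(2).standard_normal((3, 2)) + 1j
B = np.random.default_rng(3).standard_normal((3, 2))
inner = np.trace(A.conj().T @ B)                    # <A, B> = tr(A* B)
assert np.isclose(inner, np.vdot(A, B))             # equals the flattened vdot
had = A * B                                         # Hadamard product A .* B
```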
2. A structured quasi-Newton approach.
2.1. Structured quasi-Newton subproblem. In this subsection, we develop the structured quasi-Newton subproblem for solving (1.1). Based on the assumption (1.2), methods using the exact Hessian $\nabla^2 f(X)$ may not be the best choice. When the computational cost of the gradient $\nabla f(X)$ is significantly cheaper than that of the Hessian $\nabla^2 f(X)$, quasi-Newton methods, which mainly use gradients $\nabla f(X)$ to construct an approximation to $\nabla^2 f(X)$, may outperform other methods. Considering the form (1.2), we construct a structured quasi-Newton approximation $B_k$ to $\nabla^2 f(X^k)$; the details will be presented in section 2.2. Our subproblem at the k-th iteration is then constructed as
$$\min_{X\in\mathbb{C}^{n\times p}}\; m_k(X)\quad \text{s.t.}\quad X^*X = I, \tag{2.1}$$

where

$$m_k(X) := \Re\big\langle \nabla f(X^k),\, X - X^k\big\rangle + \frac12\,\Re\big\langle B_k[X-X^k],\, X - X^k\big\rangle + \frac{\tau_k}{2}\, d(X, X^k)$$

is an approximation to f(X) in the Euclidean space. For the second-order Taylor expansion of f(X) at a point $X^k$, we refer to [43, section 1.1] for details. Here, $\tau_k$ is a regularization parameter and $d(X,X^k)$ is a proximal term to guarantee convergence.
The proximal term can be chosen as the quadratic regularization

$$d(X, X^k) = \|X - X^k\|_F^2 \tag{2.2}$$

or the cubic regularization

$$d(X, X^k) = \frac{2}{3}\,\|X - X^k\|_F^3. \tag{2.3}$$
In the following, we will mainly focus on the quadratic regularization (2.2). Due to the Stiefel manifold constraint, the quadratic regularization (2.2) is actually equivalent to the linear term $-2\Re\langle X, X^k\rangle$. Using the Riemannian Hessian formulation (1.3) on the Stiefel manifold, we have

$$\mathrm{Hess}\, m_k(X^k)[\xi] = \mathrm{Proj}_{X^k}\!\big(B_k[\xi]\big) - \xi\,\mathrm{sym}\big((X^k)^*\nabla f(X^k)\big) + \tau_k\,\xi,\qquad \xi\in T_{X^k}. \tag{2.4}$$
Hence, the regularization term shifts the spectrum of the Riemannian Hessian of the approximation $B_k$ by $\tau_k$. The Riemannian quasi-Newton methods for (1.1) in the literature [19,21,22,23] focus on constructing an approximation to the Riemannian Hessian $\mathrm{Hess}\,f(X^k)$ directly, without using its special structure (1.3). Therefore, vector transports need to be utilized to move tangent vectors from different tangent spaces into one common tangent space. If $p \ll n$, the second term $\mathrm{sym}((X^k)^*\nabla f(X^k))$ is a small-scale matrix and can thus be computed at low cost. In this case, after computing the approximation $B_k[\xi]$ of $\nabla^2 f(X)[\xi]$, we obtain a structured Riemannian quasi-Newton approximation $\mathrm{Proj}_{X^k}(B_k[\xi]) - \xi\,\mathrm{sym}((X^k)^*\nabla f(X^k))$ of $\mathrm{Hess}\,f(X^k)[\xi]$ without using any vector transport.
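To make these formulas concrete, the following Python sketch (ours, not from the paper; all function names are ours, and `B_apply` stands for the action of some Hessian approximation $B_k$) evaluates the model $m_k$ of (2.1) with the quadratic regularization (2.2) and applies the shifted Riemannian Hessian of (2.4):

```python
import numpy as np

def sym(A):
    return 0.5 * (A + A.conj().T)

def proj(X, Z):
    """Projection onto the tangent space of the Stiefel manifold at X."""
    return Z - X @ sym(X.conj().T @ Z)

def model_value(X, Xk, grad_k, B_apply, tau):
    """m_k(X) from (2.1) with quadratic regularization d(X, X^k) = ||X - X^k||_F^2."""
    S = X - Xk
    val = np.real(np.vdot(grad_k, S))                 # Re<grad f(X^k), S>
    val += 0.5 * np.real(np.vdot(B_apply(S), S))      # (1/2) Re<B_k[S], S>
    val += 0.5 * tau * np.linalg.norm(S) ** 2         # (tau/2) ||S||_F^2
    return val

def model_hess(Xk, grad_k, B_apply, tau, xi):
    """Hess m_k(X^k)[xi] from (2.4); xi is assumed tangent at X^k."""
    return proj(Xk, B_apply(xi)) - xi @ sym(Xk.conj().T @ grad_k) + tau * xi
```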
2.2. Construction of $B_k$. The classical quasi-Newton methods construct the approximation $B_k$ such that it satisfies the secant condition
$$B_k[S_k] = \nabla f(X^k) - \nabla f(X^{k-1}),\qquad \text{where } S_k := X^k - X^{k-1}. \tag{2.5}$$
Noticing that $\nabla^2 f(X)$ has the natural structure (1.2), it is reasonable to keep the cheaper part $H_c(X)$ and only approximate $H_e(X)$. Specifically, we construct the approximation $B_k$ to the Hessian $\nabla^2 f(X^k)$ as
$$B_k = H_c(X^k) + E_k, \tag{2.6}$$
where E k is an approximation to H e (X k ). Substituting (2.6) into (2.5), we can see that the approximation E k should satisfy the following revised secant condition
$$E_k[S_k] = Y_k, \tag{2.7}$$

where

$$Y_k := \nabla f(X^k) - \nabla f(X^{k-1}) - H_c(X^k)[S_k]. \tag{2.8}$$
For large-scale optimization problems, limited-memory quasi-Newton methods are preferred since they often provide simple but good approximations of the exact Hessian. Considering that the part $H_e(X^k)$ itself may not be positive definite even when $X^k$ is optimal, we utilize the limited-memory symmetric rank-one (LSR1) scheme to approximate $H_e(X^k)$ such that it satisfies the secant equation (2.7).
Let $l = \min\{k, m\}$. We define the $(np)\times l$ matrices $S_{k,m}$ and $Y_{k,m}$ by

$$S_{k,m} = \big[\mathrm{vec}(S_{k-l}),\ldots,\mathrm{vec}(S_{k-1})\big],\qquad Y_{k,m} = \big[\mathrm{vec}(Y_{k-l}),\ldots,\mathrm{vec}(Y_{k-1})\big].$$

Let $E_0^k: \mathbb{C}^{n\times p}\to\mathbb{C}^{n\times p}$ be the initial approximation of $H_e(X^k)$ and define the $(np)\times l$ matrix

$$\Sigma_{k,m} := \big[\mathrm{vec}\big(E_0^k[S_{k-l}]\big),\ldots,\mathrm{vec}\big(E_0^k[S_{k-1}]\big)\big].$$

Let $F_{k,m}$ be the matrix in $\mathbb{C}^{l\times l}$ with $(F_{k,m})_{i,j} = \langle S_{k-l+i-1},\, Y_{k-l+j-1}\rangle$ for $1\le i,j\le l$. Under the assumption that $\langle S_j,\, E_j[S_j] - Y_j\rangle \ne 0$, $j = k-l,\ldots,k-1$, it follows from [9, Theorem 5.1] that the matrix $F_{k,m} - (S_{k,m})^*\Sigma_{k,m}$ is invertible and the LSR1 scheme gives

$$E_k[U] = E_0^k[U] + \mathrm{mat}\Big(N_{k,m}\big(F_{k,m} - (S_{k,m})^*\Sigma_{k,m}\big)^{-1}(N_{k,m})^*\,\mathrm{vec}(U)\Big), \tag{2.9}$$
where $U\in\mathbb{C}^{n\times p}$ is any direction and $N_{k,m} = Y_{k,m} - \Sigma_{k,m}$. In the practical implementation, we skip the update if

$$\big|\langle S_j,\, E_j[S_j] - Y_j\rangle\big| \le r\,\|S_j\|_F\,\|E_j[S_j] - Y_j\|_F$$

with a small number r, say $r = 10^{-8}$. A similar idea can be found in [32].
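A compact way to realize the secant machinery (2.7)-(2.9) is to store the pairs and apply the operator on the fly. The sketch below (ours, not the paper's implementation; all names are ours) is a simplified recursive dense variant of the compact representation (2.9), treating iterates as vectors via vec, and it includes the skipping safeguard above:

```python
import numpy as np

class LSR1:
    """Limited-memory SR1 approximation E with E[s] = y on the newest pairs.

    A simplified recursive variant of the compact form (2.9); E0_apply is the
    action of the initial approximation E_0 (e.g. the limited-memory Nystrom
    operator of section 2.3), here a plain callable on vectors.
    """
    def __init__(self, E0_apply, memory=5, r=1e-8):
        self.E0, self.m, self.r = E0_apply, memory, r
        self.pairs = []                        # stored rank-one data (u, 1/<u, s>)

    def apply(self, v):
        w = self.E0(v)
        for u, rho in self.pairs:
            w = w + rho * np.vdot(u, v) * u    # SR1 rank-one corrections
        return w

    def update(self, s, y):
        u = y - self.apply(s)                  # secant residual y - E[s]
        den = np.vdot(u, s)
        # skip when |<s, E[s] - y>| <= r ||s|| ||E[s] - y||, as in the text
        if abs(den) > self.r * np.linalg.norm(s) * np.linalg.norm(u):
            self.pairs.append((u, 1.0 / den))
            self.pairs = self.pairs[-self.m:]  # keep only the newest pairs
```

After `update(s, y)`, one can check that `apply(s)` returns y, i.e. the newest secant equation (2.7) holds exactly.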
2.3. Limited-memory Nyström approximation of $E_0^k$. A good initial guess for the exact Hessian is also important to accelerate the convergence of the limited-memory quasi-Newton method. Here, we assume that a good initial approximation $E_0^k$ of the expensive part of the Hessian, $H_e(X^k)$, is known, but that its computational cost is still very high. We show how to use the limited-memory Nyström approximation to construct another approximation with lower computational cost based on $E_0^k$. Specifically, let Ω be a matrix whose columns form an orthogonal basis of a well-chosen subspace $\mathcal S$ and denote $W = E_0^k[\Omega]$. To reduce the computational cost while keeping the good properties of $E_0^k$, we construct the following approximation
(2.10) Ê k 0 [U ] := W (W * Ω) † W * U,
where U ∈ C n×p is any direction. This is called the limited-memory Nyström approximation; see [40] and the references therein for more details. By choosing the dimension of the subspace S properly, the rank of W (W * Ω) † W * can be made small enough that the computational cost of Ê k 0 [U ] is significantly reduced. Furthermore, we still want Ê k 0 to satisfy the secant condition (2.7) as E k 0 does. More specifically, we need to seek the subspace S such that the secant condition
Ê k 0 [S k ] = Y k
holds. To this aim, the subspace S can be chosen as
span{X k−1 , X k },
which contains the element S k . By assuming that
E k 0 [U V ] = E k 0 [U ]V for any matrices U, V of proper dimensions (this condition is satisfied when E k 0 is a matrix), we have that Ê k 0 satisfies the secant condition whenever E k 0 does. Following the methods for linear eigenvalue computation in [25] and [30], the subspace S can also be chosen as
(2.11) span{X k−1 , X k , E k 0 [X k ]} or span{X k−h , . . . , X k−1 , X k }
with a small memory length h. Once the subspace is defined, we can obtain the limited-memory Nyström approximation by computing E k 0 [Ω] once and taking the pseudo-inverse of a small-scale matrix.
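As a minimal MATLAB sketch of (2.10), assume a stand-in symmetric matrix E0 for the expensive operator (in practice E0 would be available only through products) and take S = span{X k−1 , X k } as above; all names and sizes are illustrative.

  n = 200; p = 6;
  E0 = gallery('moler', n);             % stand-in for the expensive operator E0
  Xprev = orth(randn(n, p)); Xcur = orth(randn(n, p));
  Omega = orth([Xprev, Xcur]);          % orthogonal basis of the subspace S
  W = E0 * Omega;                       % the only expensive applications of E0
  Ehat = @(U) W * (pinv(W' * Omega) * (W' * U));   % Ehat_0^k[U] as in (2.10)
  U = Omega(:, 1:p);                    % any direction inside range(Omega)
  fprintf('agreement on range(Omega): %.2e\n', norm(Ehat(U) - E0*U, 'fro'));

The printed error is at round-off level because the Nyström approximation reproduces E0 exactly on the sampled subspace, which is what makes the secant condition attainable.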
2.4. A structured quasi-Newton method with subspace refinement.
Based on the theory of quasi-Newton methods for unconstrained optimization, we know that an algorithm which takes the solution of (2.1) as the next iterate may not converge if no proper requirements are imposed on the approximation B k or the regularization parameter τ k . Hence, we update the regularization parameter using a trust-region-like strategy. Referring to [16], we compute a trial point Z k by utilizing a modified CG method to solve the subproblem inexactly, that is, to solve the Newton equation of (2.1) at X k :
(2.12) gradm k (X k ) + Hess m k (X k )[ξ] = 0, ξ ∈ T X k ,
where grad m k (X k ) = grad f (X k ) and Hess m k (X k ) are given in (2.4). After obtaining the trial point Z k of (2.1), we calculate the ratio between the actual reduction and the predicted reduction,
(2.13) r k = (f (Z k ) − f (X k )) / m k (Z k ).
If r k ≥ η 1 > 0, then the iteration is successful and we set X k+1 = Z k ; otherwise, the iteration is unsuccessful and we set X k+1 = X k , that is,
(2.14) X k+1 = Z k , if r k ≥ η 1 , X k , otherwise.
The regularization parameter τ k+1 is updated as
(2.15) τ k+1 ∈ (0, γ 0 τ k ] if r k ≥ η 2 ; τ k+1 ∈ [τ k , γ 1 τ k ] if η 1 ≤ r k < η 2 ; τ k+1 ∈ [γ 1 τ k , γ 2 τ k ] otherwise, where 0 < η 1 ≤ η 2 < 1 and 0 < γ 0 < 1 < γ 1 ≤ γ 2 .
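For concreteness, the MATLAB fragment below carries out (2.13)-(2.15) for one iteration; the parameter values and the stand-ins f_X, f_Z, m_Z, tau for f (X k ), f (Z k ), m k (Z k ) and τ k are our illustrative choices.

  eta1 = 0.01; eta2 = 0.9; gamma0 = 0.5; gamma1 = 2; gamma2 = 4;
  f_X = 1.00; f_Z = 0.90; m_Z = -0.12; tau = 1e-2;   % stand-in values
  rk = (f_Z - f_X) / m_Z;          % ratio (2.13)
  accepted = (rk >= eta1);         % acceptance test (2.14)
  if rk >= eta2
      tau = gamma0 * tau;          % very successful: decrease tau
  elseif rk >= eta1
      tau = gamma1 * tau;          % successful: any value in [tau, gamma1*tau]
  else
      tau = gamma2 * tau;          % unsuccessful: any value in [gamma1*tau, gamma2*tau]
  end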
These parameters determine how aggressively the regularization parameter is decreased when an iteration is successful or increased when an iteration is unsuccessful. In practice, the performance of the regularized trust-region algorithm is not very sensitive to the values of these parameters. Note that Newton-type methods may still be very slow when the Hessian is close to singular [8]. Numerically, it may happen that the regularization parameter becomes huge and the Riemannian Newton direction is nearly parallel to the negative gradient direction. Hence, this leads to an update X k+1 belonging to the subspace S̃ k := span{X k , grad f (X k )}, which is similar to the Riemannian gradient approach.
To overcome this issue, we propose an optional step of solving (1.1) restricted to a subspace. Specifically, at X k , we construct a subspace S k with an orthogonal basis Q k ∈ C n×q (p ≤ q ≤ n), where q is the dimension of S k . Then any point X in the subspace S k can be represented by
X = Q k M
for some M ∈ C q×p . Similar to the constructions for linear eigenvalue problems in [25] and [30], the subspace S k can be determined from the history information {X k , X k−1 , . . .}, {grad f (X k ), grad f (X k−1 ), . . .} and other useful information. Given the subspace S k , the subspace method aims to find a solution of (1.1) with an extra constraint X ∈ S k , namely,
(2.16) min M∈C q×p f (Q k M ) s.t. M * M = I p .
The problem (2.16) can be solved inexactly by existing methods for optimization with orthogonality constraints. Once a good approximate solution M k of (2.16) is obtained, we update X k+1 = Q k M k , which is an approximate minimizer in the subspace S k instead of S̃ k . This completes one step of the subspace iteration. To decide when this step is needed, we compute the ratios between the norms of the Riemannian gradients over the last few iterations; if all of these ratios are almost 1, we infer that the current iterates stagnate, and the subspace method is called (a minimal sketch is given below). Consequently, our algorithmic framework is outlined in Algorithm 1.
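The following MATLAB sketch illustrates the stagnation test and the setup of (2.16); the window length, the tolerance and the history used to build Q k are our choices, and the gradient-norm values are invented.

  gnorms = [3.02e-4, 3.01e-4, 3.00e-4, 3.00e-4];   % recent ||grad f(X^j)||_F
  ratios = gnorms(2:end) ./ gnorms(1:end-1);
  stagnated = all(abs(ratios - 1) < 1e-2);
  if stagnated
      n = 100; p = 5;                               % stand-in data
      Xprev = orth(randn(n, p)); Xcur = orth(randn(n, p)); G = randn(n, p);
      Q = orth([Xprev, Xcur, G]);                   % orthogonal basis Q^k
      % solve min f(Q*M) s.t. M'*M = I_p by any feasible solver; X^{k+1} = Q*M
  end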
Algorithm 1: A structured quasi-Newton method with subspace refinement
Input: initial guess X 0 ∈ C n×p with (X 0 ) * X 0 = I p and the memory length m. Choose τ 0 > 0, 0 < η 1 ≤ η 2 < 1, 1 < γ 1 ≤ γ 2 . Set k = 0.
while stopping conditions not met do
  Choose E k 0 (use the limited-memory Nyström approximation if necessary).
  Construct the approximation B k via (2.6) and (2.9).
  Construct the subproblem (2.1) and use the modified CG method (Algorithm 2 in [16]) to compute a new trial point Z k .
  Compute the ratio r k via (2.13).
  Update X k+1 from the trial point Z k based on (2.14).
  Update τ k+1 according to (2.15).
  k ← k + 1.
  if stagnation conditions met then
    Solve the subspace problem (2.16) to update X k+1 .
3. Convergence analysis. In this section, we present the convergence properties of Algorithm 1. To guarantee global convergence and a fast local convergence rate, the inexact conditions for the subproblem (2.1) (with quadratic or cubic regularization) can be chosen as
(3.1) m k (Z k ) ≤ −c ∥grad f (X k )∥ 2 F ,
(3.2) ∥grad m k (Z k )∥ F ≤ θ k ∥grad f (X k )∥ F
with some positive constant c and θ k := min{1, ∥grad f (X k )∥ F }. Here, inequality (3.1) guarantees global convergence, while inequality (3.2) leads to fast local convergence. Throughout the convergence analysis, we assume that the stagnation conditions are never met. (In fact, a sufficient decrease of the original objective in each iteration can be guaranteed from the description of the subspace refinement; hence, the global convergence still holds.)
3.1. Global convergence. Since a regularization term is used, the global convergence of our method can be obtained by assuming boundedness of the constructed Hessian approximations. We first make the following assumptions.
Assumption 1. Let {X k } be generated by Algorithm 1 without subspace refinement. We assume:
(A1) The gradient ∇f is Lipschitz continuous on the convex hull of St(n, p), i.e., there exists L f > 0 such that ∥∇f (X) − ∇f (Y )∥ F ≤ L f ∥X − Y ∥ F for all X, Y ∈ conv(St(n, p)).
(A2) There exists κ H > 0 such that ∥B k ∥ ≤ κ H for all k ∈ N, where ∥ · ∥ is the operator norm induced by the Euclidean inner product.
Remark 2. By Assumption (A1), ∇f (X) is uniformly bounded by some constant κ g on the compact set conv(St(n, p)), i.e., ∥∇f (X)∥ F ≤ κ g for X ∈ conv(St(n, p)). Assumption (A2) is often used in the traditional symmetric rank-one method [7] and appears to be reasonable in practice.
Based on similar proofs in [16, 43], we have the following theorem on global convergence.
Theorem 3. Suppose that Assumptions (A1)-(A2) and the inexact conditions (3.1) hold. Then, either
(3.3) grad f (X t ) = 0 for some t > 0 or lim k→∞ ∥grad f (X k )∥ F = 0.
Proof. For the quadratic regularization (2.2), let us note that the Riemannian Hessian Hess m(X k ) can be guaranteed to be bounded from Assumption 1. In fact, from (2.4), we have
∥Hess m k (X k )∥ ≤ ∥B k ∥ + ∥X k ∥ ∥∇f (X k )∥ F + τ k ≤ κ H + κ g + τ k ,
where ∥X k ∥ = 1 because X k has orthonormal columns. Hence, we can guarantee that the direction obtained from the modified CG method is a descent direction via techniques similar to [16, Lemma 7]. Then the convergence of the iterates {X k } can be proved by following the details in [16] for the quadratic regularization. As for the cubic regularization, we refer to [43, Theorem 4.9] for a similar proof.
3.2. Local convergence. We now focus on the local convergence under the inexact conditions (3.1) and (3.2). We make some necessary assumptions below.
Assumption 4. Let {X k } be the sequence generated by Algorithm 1 without subspace refinement. We assume:
(B1) The sequence {X k } converges to X * with grad f (X * ) = 0.
(B2) The Euclidean Hessian ∇ 2 f is continuous on conv(St(n, p)).
(B3) The Riemannian Hessian Hess f (X) is positive definite at X * .
(B4) The Hessian approximation B k satisfies
(3.4) ∥(B k − ∇ 2 f (X k ))[Z k − X k ]∥ F / ∥Z k − X k ∥ F → 0 as k → ∞.
Following the proof in [16, Lemma 17], we show that all iterations are eventually very successful (i.e., r k ≥ η 2 for all sufficiently large k) when Assumptions (B1)-(B4) and the inexact conditions (3.1) and (3.2) hold.
Lemma 5. Let Assumptions (B1)-(B4) be satisfied. Then all iterations are eventually very successful.
Proof. From the second-order Taylor expansion, we have
f (Z k ) − f (X k ) − m k (Z k ) ≤ (1/2) ℜ⟨(∇ 2 f (X k δ ) − B k )[Z k − X k ], Z k − X k ⟩
for some suitable δ k ∈ [0, 1] and X k δ := X k + δ k (Z k − X k ).
Since the Stiefel manifold is compact, there exists some η k such that Z k = Exp X k (η k ), where Exp X k is the exponential map from T X k St(n, p) to St(n, p). Following the proof in [5, Appendix B] and Assumption (B1) (Z k can be sufficiently close to X k for large k), we have
(3.5) ∥Z k − X k − η k ∥ F ≤ κ 1 ∥η k ∥ 2 F
with a positive constant κ 1 for all sufficiently large k. Moreover, since the Hessian Hess f (X * ) is positive definite and (B4) is satisfied, it holds for sufficiently large k:
∥Hess m k (X k )[η k ]∥ F = ∥Hess m k (X k )[Z k − X k ]∥ F + O(∥η k ∥ 2 F ) = ∥Hess f (X k )[Z k − X k ] + (Hess m k (X k ) − Hess f (X k ))[Z k − X k ]∥ F + O(∥η k ∥ 2 F ) ≥ λ min (Hess f (X k )) ∥Z k − X k ∥ F + o(∥Z k − X k ∥ F ) + O(∥η k ∥ 2 F ) ≥ λ min (Hess f (X k )) ∥η k ∥ F + o(∥η k ∥ F ),
where λ min (Hess f (X k )) is the smallest eigenvalue of Hess f (X k ). From Assumptions (B2)-(B3), [2, Proposition 5.5.4] and the Taylor expansion of m k ∘ Exp X k , we have
∥grad(m k ∘ Exp X k )(η k ) − grad f (X k )∥ F = ∥Hess f (X k )[η k ]∥ F + o(∥η k ∥ F ) ≥ (κ 2 /2) ∥η k ∥ F ,
where κ 2 := λ min (Hess f (X * )). By [1, Lemma 7.4.9], we have
(3.6) ∥η k ∥ F ≤ (2/κ 2 )(∥grad f (X k )∥ F + c̃ ∥grad m k (Z k )∥ F ) ≤ (2(1 + c̃ θ k )/κ 2 ) ∥grad f (X k )∥ F ,
where c̃ > 0 is a constant and the second inequality follows from the inexact condition (3.2). It follows from the continuity of ∇ 2 f , (3.1), (3.5) and (3.6) that
1 − r k ≤ (1/(2c)) [ ∥(∇ 2 f (X k ) − B k )[Z k − X k ]∥ F ∥Z k − X k ∥ F / ∥grad f (X k )∥ 2 F + ∥∇ 2 f (X k δ ) − ∇ 2 f (X k )∥ ∥Z k − X k ∥ 2 F / ∥grad f (X k )∥ 2 F ] → 0.
Therefore, the iterations are eventually very successful.
As a result, the q-superlinear convergence can also be guaranteed.
Theorem 6. Suppose that Assumptions (B1)-(B4) and conditions (3.1) and (3.2) hold. Then the sequence {X k } converges q-superlinearly to X * .
Proof. We consider the cubic model here; the local q-superlinear convergence for the quadratic model can be shown in a similar fashion. Since the iterations are eventually very successful, we have X k+1 = Z k and τ k converges to zero. From (3.2), we have
(3.7) ∥grad m k (X k+1 )∥ F = ∥Proj X k+1 (∇f (X k ) + B k [∆ k ] + τ k ∥∆ k ∥ F ∆ k )∥ F ≤ θ k ∥grad f (X k )∥ F , where ∆ k = Z k − X k . Hence,
(3.8) ∥grad f (X k+1 )∥ F = ∥Proj X k+1 ∇f (X k+1 )∥ F = ∥Proj X k+1 (∇f (X k ) + ∇ 2 f (X k )[∆ k ] + o(∥∆ k ∥ F ))∥ F = ∥Proj X k+1 (∇f (X k ) + B k [∆ k ] + o(∥∆ k ∥ F ) + (∇ 2 f (X k ) − B k )[∆ k ])∥ F ≤ θ k ∥grad f (X k )∥ F + o(∥∆ k ∥ F ).
It follows from an argument similar to (3.6) that there exists some constant c 1 such that
∥∆ k ∥ F ≤ c 1 ∥grad f (X k )∥ F ,
for sufficiently large k. Therefore, from (3.8) and the definition of θ k , we have
(3.9) ∥grad f (X k+1 )∥ F / ∥grad f (X k )∥ F → 0.
Combining (3.9), Assumption (B3) and [2, Lemma 7.4.8], we obtain
dist(X k+1 , X * ) / dist(X k , X * ) → 0,
where dist(X, Y ) is the geodesic distance between X and Y on St(n, p). This completes the proof.
4. Linear eigenvalue problem. In this section, we apply the aforementioned strategy to the following linear eigenvalue problem
(4.1) min X∈R n×p f (X) := (1/2) tr(X ⊤ CX) s.t. X ⊤ X = I p ,
where C := A + B. Here, A, B ∈ R n×n are symmetric matrices and we assume that the multiplication BX is much more expensive than AX. Motivated by the quasi-Newton methods, and eliminating the linear term in subproblem (2.1), we investigate the multisecant conditions in [13],
(4.2) B̃ k X k = BX k , B̃ k S k = BS k with S k = X k − X k−1 .
By a brief induction, we have an equivalent form of (4.2):
(4.3) B̃ k [X k−1 , X k ] = B[X k−1 , X k ].
Then, using the limited-memory Nyström approximation, we obtain the approximate matrix B̃ k as
(4.4) B̃ k = W k ((W k ) ⊤ O k ) † (W k ) ⊤ , where (4.5) O k = orth(span{X k−1 , X k }) and W k = BO k .
Here, orth(Z) returns an orthogonal basis of the space spanned by Z. Therefore, an approximation C k to C can be set as
(4.6) C k = A + B̃ k .
Since the objective function is invariant under rotations, i.e., f (XQ) = f (X) for any orthogonal matrix Q ∈ R p×p , we also want to construct a subproblem whose objective function inherits the same property. Therefore, we use the distance function between X k and X given by d p (X, X k ) = ∥XX ⊤ − X k (X k ) ⊤ ∥ 2 F , which has been considered in [10,39,46] for electronic structure calculation. Since X k and X are orthonormal matrices, we have
(4.7) d p (X, X k ) = tr((XX ⊤ − X k (X k ) ⊤ )(XX ⊤ − X k (X k ) ⊤ )) = 2p − 2tr(X ⊤ X k (X k ) ⊤ X),
which implies that d p (X, X k ) is a quadratic function of X. Consequently, the subproblem can be constructed as
(4.8) min X∈R n×p m k (X) s.t. X ⊤ X = I p , where m k (X) := (1/2) tr(X ⊤ C k X) + (τ k /4) d p (X, X k ).
From the equivalent expression of d p (X, X k ) in (4.7), problem (4.8) is actually a linear eigenvalue problem
(A + B̃ k − τ k X k (X k ) ⊤ )X = XΛ, X ⊤ X = I p ,
where Λ is a diagonal matrix whose diagonal elements are the p smallest eigenvalues of A + B̃ k − τ k X k (X k ) ⊤ . Due to the low computational cost of A + B̃ k − τ k X k (X k ) ⊤ compared to A + B, the subproblem (4.8) can be solved efficiently using existing eigensolvers. As in Algorithm 1, we first solve subproblem (4.8) to obtain a trial point and compute the ratio (2.13) between the actual reduction and the predicted reduction based on this trial point. Then the iterate and the regularization parameter are updated according to (2.14) and (2.15). Note that it is not necessary to solve the subproblems to high accuracy in practice.
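The MATLAB sketch below assembles B̃ k via (4.4)-(4.5) and solves the subproblem (4.8) as a shifted linear eigenvalue problem; the test matrices, the sizes and the value of τ k are invented for the illustration.

  n = 500; p = 5; tau = 1e-2;
  A = sprandsym(n, 0.01) + 10*speye(n);        % cheap part
  B = spdiags(-rand(n, 1) - 0.1, 0, n, n);     % "expensive" part, negative definite
  Xp = orth(randn(n, p)); Xc = orth(randn(n, p));   % X^{k-1} and X^k
  O = orth([Xp, Xc]);                          % O^k as in (4.5)
  W = B * O;                                   % the only applications of B
  Btilde = @(v) W * (pinv(W' * O) * (W' * v)); % action of Btilde^k from (4.4)
  Ck = @(v) A*v + Btilde(v) - tau * (Xc * (Xc' * v));  % operator of the subproblem
  opts.issym = true;
  [Z, Lam] = eigs(Ck, n, p, 'smallestreal', opts);     % trial point Z^k
  f = @(X) 0.5 * trace(X' * (A*X + B*X));      % objective of (4.1)
  fprintf('f(X^k) = %.4f, f(Z^k) = %.4f\n', f(Xc), f(Z));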
4.1. Convergence. Although the convergence analysis in section 3 is based on the regularization terms (2.2) and (2.3), similar results can be established with the specified regularization term (τ k /4) d p (X, X k ) using the sufficient descent condition (3.1). It follows from the construction of C k in (4.6) that
∥C∥ 2 ≤ ∥A∥ 2 + ∥B∥ 2 , ∥C k ∥ 2 ≤ ∥A∥ 2 + ∥B∥ 2
for any given matrices A and B. Hence, Assumptions (A1) and (A2) hold with
L f = κ H = ∥A∥ 2 + ∥B∥ 2 .
We have the following theorem on the global convergence.
Theorem 7. Suppose that the inexact condition (3.1) holds. Then, for the Riemannian gradients, either
(I n − X t (X t ) ⊤ )(CX t ) = 0 for some t > 0 or lim k→∞ ∥(I n − X k (X k ) ⊤ )(CX k )∥ F = 0.
Proof. It can be guaranteed that the distance d p (X, X k ) is very small for a large enough regularization parameter τ k by a similar argument to [16, Lemma 9]. Specifically, the reduction of the subproblem requires that
⟨Z k , C k Z k ⟩ + (τ k /4) ∥Z k (Z k ) ⊤ − X k (X k ) ⊤ ∥ 2 F − ⟨X k , C k X k ⟩ ≤ 0.
From the cyclic property of the trace operator, it holds that
⟨C k , Z k (Z k ) ⊤ − X k (X k ) ⊤ ⟩ + (τ k /4) ∥Z k (Z k ) ⊤ − X k (X k ) ⊤ ∥ 2 F ≤ 0. Then
(4.9) ∥Z k (Z k ) ⊤ − X k (X k ) ⊤ ∥ F ≤ 4κ H /τ k .
From the descent condition (3.1) for the subproblem, there exists some positive constant ν such that
(4.10) m k (Z k ) − m k (X k ) ≥ −(ν/τ k ) ∥grad f (X k )∥ 2 F .
Based on the properties of C k and C, we have
(4.11) f (Z k ) − f (X k ) − (m k (Z k ) − m k (X k )) = ⟨Z k , CZ k ⟩ − ⟨Z k , C k Z k ⟩ − (τ k /4) ∥Z k (Z k ) ⊤ − X k (X k ) ⊤ ∥ 2 F ≤ ⟨C − C k , Z k (Z k ) ⊤ ⟩ = ⟨C − C k , (Z k (Z k ) ⊤ − X k (X k ) ⊤ ) 2 ⟩ ≤ (L f + κ H ) ∥Z k (Z k ) ⊤ − X k (X k ) ⊤ ∥ 2 F ≤ 16 κ H 2 (L f + κ H )/τ k 2 ,
where the second equality is due to CX k = C k X k , the orthonormality of Z k and X k , as well as
⟨C − C k , Z k (Z k ) ⊤ X k (X k ) ⊤ ⟩ = ⟨C − C k , X k (X k ) ⊤ Z k (Z k ) ⊤ ⟩ = 0.
Combining (4.10) and (4.11), we have that
1 − r k = [f (Z k ) − f (X k ) − (m k (Z k ) − m k (X k ))] / [m k (X k ) − m k (Z k )] ≤ 1 − η 2
for sufficiently large τ k , as in [16, Lemma 8]. Since the subproblem is solved with some sufficient reduction, the reduction of the original objective f holds for large τ k (i.e., r k is close to 1). Then the convergence of the norm of the Riemannian gradient (I n − X k (X k ) ⊤ )CX k follows in a fashion similar to [16, Theorem 11].
The ACE method in [29] needs an explicit estimate β such that B − βI n is negative definite. By considering the equivalent matrix (A + βI n ) + (B − βI n ), the convergence of ACE to a global minimizer is given there. On the other hand, our algorithmic framework uses an adaptive strategy to choose τ k , which guarantees convergence to a stationary point. By using proof techniques similar to [29], one may also establish convergence to a global minimizer.
5. Electronic structure calculation.
5.1. Formulation. We now introduce the KS and HF total energy minimization models and present the gradients and Hessians of the objective functions in these two models. After proper discretization, the wave functions of p occupied states can be approximated by a matrix X = [x 1 , . . . , x p ] ∈ C n×p with X * X = I p , where n corresponds to the spatial degrees of freedom. The charge density associated with the occupied states is defined as ρ(X) = diag(XX * ).
Unless otherwise specified, we use the abbreviation ρ for ρ(X) in the following. The total energy functional is defined as (5.1)
E ks (X) := (1/4) tr(X * LX) + (1/2) tr(X * V ion X) + (1/2) Σ l Σ i ζ l |x i * w l | 2 + (1/4) ρ ⊤ L † ρ + (1/2) e ⊤ ǫ xc (ρ),
where L is a discretized Laplacian operator, V ion is the constant ionic pseudopotentials, w l represents a discretized pseudopotential reference projection function, ζ l is a constant whose value is ±1, e is a vector of all ones in R n , and ǫ xc is related to the exchange correlation energy. Therefore, the KS total energy minimization problem can be expressed as
(5.2) min X∈C n×p E ks (X) s.t. X * X = I p .
Let µ xc (ρ) = ∂ǫ xc (ρ)/∂ρ and denote the Hamiltonian H ks (X) by
(5.3) H ks (X) := (1/2) L + V ion + Σ l ζ l w l w l * + Diag((ℜL † )ρ) + Diag(µ xc (ρ) * e).
Then the Euclidean gradient of E ks (X) is computed as
(5.4) ∇E ks (X) = H ks (X)X.
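As a small MATLAB illustration of assembling the Riemannian gradient from (5.4) via the projection in (1.3) (the Hamiltonian here is a stand-in tridiagonal matrix, not a real discretized H ks ):

  n = 300; p = 8;
  H = gallery('tridiag', n, -1, 2, -1);    % stand-in for H_ks(X)
  X = orth(randn(n, p));
  G = H * X;                               % Euclidean gradient as in (5.4)
  symm = @(M) 0.5 * (M + M');
  rgrad = G - X * symm(X' * G);            % grad E_ks(X) = Proj_X(G)
  % tangency: X'*rgrad + rgrad'*X vanishes up to round-off
  fprintf('tangency check: %.2e\n', norm(X'*rgrad + rgrad'*X, 'fro'));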
Under the assumption that ǫ xc (ρ(X)) is twice differentiable with respect to ρ(X), Lemma 2.1 in [43] gives an explicit form of the Hessian of E ks (X) as
(5.5) ∇ 2 E ks (X)[U ] = H ks (X)U + R(X)[U ],
where U ∈ C n×p and R(X)[U ] := Diag((ℜL † + (∂ 2 ǫ xc /∂ρ 2 ) e)((X ⊙ U + X ⊙ Ū )e))X. Compared with KSDFT, the HF theory provides a more accurate model for electronic structure calculations by involving the Fock exchange operator. After discretization, the exchange-correlation operator V(·) : C n×n → C n×n is usually a fourth-order tensor; see equations (3.3) and (3.4) in [27] for details. Furthermore, it is easy to see from [27] that V(·) satisfies the following properties: (i) For any D 1 , D 2 ∈ C n×n , there holds ⟨V(D 1 ), D 2 ⟩ = ⟨V(D 2 ), D 1 ⟩, which further implies that
(5.6) ⟨V(D 1 + D 2 ), D 1 + D 2 ⟩ = ⟨V(D 1 ), D 1 ⟩ + 2⟨V(D 1 ), D 2 ⟩ + ⟨V(D 2 ), D 2 ⟩.
(ii) If D is Hermitian, then V(D) is also Hermitian. Besides, it should be emphasized that computing V(U ) is always very expensive since it requires the multiplication between an n × n × n × n fourth-order tensor and an n-by-n matrix. The corresponding Fock energy is defined as
(5.7) E f (X) := (1/4)⟨V(XX * )X, X⟩ = (1/4)⟨V(XX * ), XX * ⟩.
Then the HF total energy minimization problem can be formulated as
(5.8) min X∈C n×p E hf (X) := E ks (X) + E f (X) s.t. X * X = I p .
We now can explicitly compute the gradient and Hessian of E f (X) by using the properties of V(·).
Lemma 8. Given U ∈ C n×p , the gradient and the Hessian along U of E f (X) are, respectively, ∇E f (X) = V(XX * )X, (5.9)
∇ 2 E f (X)[U ] = V(XX * )U + V(XU * + U X * )X. (5.10)
Proof. We first compute the value E f (X + U ). For simplicity, denote D := XU * + U X * . Using the property (5.6), by some easy calculations, we have
4E f (X + U ) = ⟨V((X + U )(X + U ) * ), (X + U )(X + U ) * ⟩ = 4E f (X) + 2⟨V(XX * ), D + U U * ⟩ + ⟨V(D + U U * ), D + U U * ⟩ = 4E f (X) + 2⟨V(XX * ), D⟩ + 2⟨V(XX * ), U U * ⟩ + ⟨V(D), D⟩ + h.o.t.,
where h.o.t. denotes the higher-order terms. Noting that V(XX * ) and V(D) are both Hermitian, we have from the above assertions that
(5.11) E f (X + U ) = E f (X) + ℜ⟨V(XX * )X, U ⟩ + (1/2) ℜ⟨V(XX * )U + V(D)X, U ⟩ + h.o.t.
Finally, it follows from expansion (1.2) in [43] that the second-order Taylor expansion at X can be expressed as
E f (X + U ) = E f (X) + ℜ⟨∇E f (X), U ⟩ + (1/2) ℜ⟨∇ 2 E f (X)[U ], U ⟩ + h.o.t.,
which with (5.11) implies (5.9) and (5.10). The proof is completed.
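A quick numerical sanity check of (5.9) can be done in MATLAB with a surrogate exchange operator V(D) = KDK for a symmetric K, which satisfies properties (i)-(ii) above; this choice of V, and all sizes, are our stand-ins rather than the physical operator.

  n = 60; p = 3;
  K = randn(n); K = 0.5*(K + K');              % surrogate symmetric kernel
  V = @(D) K * D * K;                          % satisfies (i) and (ii)
  Ef = @(X) 0.25 * trace((X*X')' * V(X*X'));   % Fock energy (5.7), real case
  X = orth(randn(n, p)); U = randn(n, p); t = 1e-6;
  gradEf = V(X*X') * X;                        % gradient formula (5.9)
  fd = (Ef(X + t*U) - Ef(X - t*U)) / (2*t);    % central difference along U
  fprintf('|fd - <gradEf, U>| = %.2e\n', abs(fd - sum(sum(gradEf .* U))));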
Let H hf (X) := H ks (X) + V(XX * ) be the HF Hamiltonian. Recalling that E hf (X) = E ks (X) + E f (X), we have from (5.4) and (5.9) that (5.12) ∇E hf (X) = H ks (X)X + V(XX * )X = H hf (X)X, and from (5.5) and (5.10) that
(5.13) ∇ 2 E hf (X)[U ] = H hf (X)U + R(X)[U ] + V(XU * + U X * )X.
5.2. Review of algorithms for the KSDFT and HF models. We next briefly introduce the widely used methods for solving the KSDFT and HF models. For the KSDFT model (5.2), the most popular method is the SCF method [27]. At the k-th iteration, SCF first fixes H ks (X) to be H(X k ) and then updates X k+1 by solving the linear eigenvalue problem
(5.14) X k+1 := arg min X∈C n×p (1/2)⟨X, H(X k )X⟩ s.t. X * X = I p .
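A toy MATLAB sketch of the SCF fixed-point iteration (5.14) follows; the nonlinear Hamiltonian Hfun (a Laplacian plus a density-dependent diagonal) is invented and only mimics the structure of H ks , and, as discussed below for "c" and "graphene16", such an iteration may fail to converge.

  n = 400; p = 4;
  L = gallery('tridiag', n, -1, 2, -1);
  rho = @(X) sum(abs(X).^2, 2);                  % charge density diag(XX*)
  Hfun = @(X) L + spdiags(rho(X), 0, n, n);      % toy H(X)
  X = orth(randn(n, p));
  for k = 1:30
      [X, D] = eigs(Hfun(X), p, 'smallestreal'); % solve (5.14)
      res = norm(Hfun(X)*X - X*D, 'fro');
      if res < 1e-8, break; end
  end
  fprintf('SCF stopped after %d steps, residual %.2e\n', k, res);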
Because the complexity of the HF model (5.8) is much higher than that of the KSDFT model, using the SCF method directly may not yield the desired results. Since computing V(X k (X k ) * )U for a matrix U of proper dimension is still very expensive, we investigate the limited-memory Nyström approximation V̂(X k (X k ) * ) of V(X k (X k ) * ) to reduce the computational cost, i.e.,
(5.15) V̂(X k (X k ) * ) := Z(Z * Ω) † Z * ,
where Z = V(X k (X k ) * )Ω and Ω is any matrix whose columns form an orthogonal basis of a subspace such as
span{X k }, span{X k−1 , X k } or span{X k−1 , X k , V(X k (X k ) * )X k }.
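The compression (5.15) follows the same recipe as (2.10); the MATLAB sketch below uses a surrogate function handle applyV for the exchange operator and the subspace span{X k−1 , X k } from the list above, with all sizes and the kernel invented.

  n = 300; p = 6;
  K = randn(n); K = K*K'/n;                     % surrogate SPD kernel
  applyV = @(U) K * U;                          % stands in for V(X^k (X^k)^*)[U]
  Xp = orth(randn(n, p)); Xc = orth(randn(n, p));
  Omega = orth([Xp, Xc]);                       % basis of span{X^{k-1}, X^k}
  Z = applyV(Omega);                            % expensive applications, done once
  Vhat = @(U) Z * (pinv(Z' * Omega) * (Z' * U));   % compressed operator (5.15)
  fprintf('exactness on X^k: %.2e\n', norm(Vhat(Xc) - applyV(Xc), 'fro'));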
We should note that a similar idea, called the adaptive compression method, was proposed in [28]; it only compresses the operator V(X k (X k ) * ) on the subspace span{X k }. Then a new subproblem is constructed as
(5.16) min X∈C n×p E ks (X) + (1/4)⟨V̂(X k (X k ) * )X, X⟩ s.t. X * X = I p .
Here, the exact form of the easier part E ks is preserved, whereas its second-order approximation is used in the construction of subproblem (2.1). As for the subproblem (2.1), we can utilize the Riemannian gradient method or the modified CG method based on the following linear equation
Proj X k (∇ 2 E ks (X k )[ξ] + (1/2) V̂(X k (X k ) * )ξ − ξ sym((X k ) * ∇f (X k ))) = −grad E hf (X k )
to solve (5.16) inexactly. Since (5.16) is a KS-like problem, we can also use the SCF method. Here, we present the detailed algorithm in Algorithm 2.
Algorithm 2: Iterative method for (5.8) using the Nyström approximation
Input: initial guess X 0 ∈ C n×p with (X 0 ) * X 0 = I p . Set k = 0.
while stopping conditions not met do
  Compute the limited-memory Nyström approximation V̂(X k (X k ) * ).
  Construct the subproblem (5.16) and solve it inexactly via the Riemannian gradient method, the modified CG method or the SCF method to obtain X k+1 .
  Set k ← k + 1.
We note that Algorithm 2 is similar to the two-level nested SCF method with the ACE formulation [28] when the subspace in (5.15) and the inner solver for (5.16) are chosen as span{X k } and SCF, respectively.
5.3. Construction of the structured approximation B k . Since the Hessian of the KSDFT or HF total energy minimization takes the natural structure (1.2), we next give the specific choices of H c (X k ) and H e (X k ), which are key to formulating the structured approximation B k .
For the KS problem (5.2), the exact Hessian is given in (5.5). Since the computational cost of the part (1/2)L + Σ l ζ l w l w l * is much lower than that of the remaining parts of ∇ 2 E ks , we can choose
(5.17) H c (X k ) = (1/2)L + Σ l ζ l w l w l * , H e (X k ) = ∇ 2 E ks (X k ) − H c (X k ).
The exact Hessian of E hf (X) in (5.8) can be separated naturally into two parts, i.e., ∇ 2 E ks (X) + ∇ 2 E f (X). Usually the hybrid exchange operator V(XX * ) can take more than 95% of the overall time of the multiplication of H hf (X)[U ] in many real applications [29]. Recalling (5.5), (5.10) and (5.13), we know that the computational cost of ∇ 2 E f (X) is much higher than that of ∇ 2 E ks (X). Hence, we obtain the decomposition as
(5.18) H c (X k ) = ∇ 2 E ks (X k ), H e (X k ) = ∇ 2 E f (X k ).
Moreover, we can split the Hessian of ∇ 2 E ks (X k ) as done in (5.17) and obtain an alternative decomposition as
(5.19) H c (X k ) = H ks (X k ), H e (X k ) = ∇ 2 E f (X k ) + (∇ 2 E ks (X k ) − H c (X k )).
Finally, we emphasize that the limited-memory Nyström approximation (5.15) can serve as a good initial approximation for the part ∇ 2 E f (X k ).
5.4. Subspace construction for the KSDFT model. As presented in Algorithm 1, the subspace method plays an important role when the modified CG method does not perform well. The first-order optimality conditions for (5.2) and (5.8) are
H(X)X = XΛ, X * X = I p ,
where X ∈ C n×p , Λ is a diagonal matrix and H represents H ks for (5.2) and H hf for (5.8). Then, problems (5.2) and (5.8) are actually nonlinear eigenvalue problems which aim to find the p smallest eigenvalues of H. We should point out that, in principle, X consists of eigenvectors of H(X), but not necessarily the eigenvectors corresponding to the p smallest eigenvalues. Since the optimum X still consists of eigenvectors of H(X), we can construct a subspace which contains these possibly wanted eigenvectors. Specifically, at the current iterate we first compute the γp smallest eigenvalues and the corresponding eigenvectors of H(X k ), denoted by Γ k , and then construct the subspace as
(5.20) span{X k−1 , X k , grad f (X k ), Γ k },
with some small integer γ. With this subspace construction, Algorithm 1 is more likely to escape a stagnated point.
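A minimal MATLAB sketch of building the subspace (5.20) (with a stand-in Hamiltonian and our choice of the small integer γ):

  n = 300; p = 4; gam = 2;                      % gam plays the role of gamma
  Hk = gallery('tridiag', n, -1, 2, -1);        % stand-in for H(X^k)
  Xp = orth(randn(n, p)); Xc = orth(randn(n, p)); Gc = randn(n, p);
  [Gam, ~] = eigs(Hk, gam*p, 'smallestreal');   % gamma*p lowest eigenvectors
  Q = orth([Xp, Xc, Gc, Gam]);                  % orthogonal basis of (5.20)
  fprintf('subspace dimension q = %d\n', size(Q, 2));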
6. Numerical experiments. In this section, we present some experimental results to illustrate the efficiency of the limited-memory Nyström approximation and our Algorithm 1. All codes were run on a workstation with Intel Xeon E5-2680 v4 processors at 2.40GHz and 256GB memory running CentOS 7.3.1611 and MATLAB R2017b.
6.1. Linear eigenvalue problem. We first construct A and B by using the following MATLAB commands:
A = randn(n, n); A = (A + A')/2; B = 0.01*rand(n, n); B = (B + B')/2; B = B - T; B = -B;
where randn and rand are built-in functions in MATLAB, T = λ min (B)I n and λ min (B) is the smallest eigenvalue of B. Then B is negative definite and A is symmetric. In our implementation, we compute the multiplication BX as (1/19) Σ 19 i=1 BX so that BX consumes about 95% of the whole computational time. In the second example, we set A to be the sparse matrix A = gallery('wathen', 5s, 5s) with parameter s, and B is the same as in the first example except that BX is computed directly. Since A is sufficiently sparse, the cost of AX is much smaller than that of BX. We use the following stopping criterion:
(6.1) err := max i=1,...,p ∥(A + B)x i − µ i x i ∥ 2 / max(1, |µ i |) ≤ 10 −10 ,
where x i is the i-th column of the current iterate X k and µ i is the corresponding approximate eigenvalue.
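In MATLAB, the test (6.1) can be evaluated as follows (C, X and mu are stand-ins for the current matrix, iterate and approximate eigenvalues):

  n = 200; p = 5;
  C = randn(n); C = 0.5*(C + C');               % stand-in for A + B
  [X, D] = eigs(C, p, 'smallestreal'); mu = diag(D);
  res = sqrt(sum((C*X - X*diag(mu)).^2, 1))';   % residual norms, p-by-1
  err = max(res ./ max(1, abs(mu)));            % criterion (6.1)
  converged = (err <= 1e-10);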
The numerical results of the first and second examples are summarized in Tables 1 and 2, respectively. In these tables, EIGS is the built-in function "eigs" in MATLAB. LOBPCG is the locally optimal block preconditioned conjugate gradient method [25]. ASQN is the algorithm described in section 4. ACE differs from ASQN in that O k is taken as orth(span{X k }) rather than orth(span{X k−1 , X k }). Since a good initial guess X k is known at the (k + 1)-th iteration, LOBPCG is utilized to solve the corresponding linear eigenvalue subproblem (4.8). Noting that BX k−1 and BX k are available from the computation of the residual, we adopt the orthogonalization technique in [30] to compute O k and W k in (4.5) without an extra multiplication BO k . The labels "AV" and "BV" denote the total numbers of matrix-vector multiplications (MV), counting each operation AX, BX ∈ R n×p as p MVs. The columns "err" and "time" are the maximal relative error over all p eigenvectors, defined in (6.1), and the wall-clock time in seconds of each algorithm, respectively. The maximal number of iterations for ASQN and ACE is set to 200. As shown in Table 1, with fixed p = 10 and different n = 5000, 6000, 8000 and 10000, we see that ASQN performs better than EIGS, LOBPCG and ACE in terms of both accuracy and time, whereas ACE spends a relatively long time to reach a solution of similar accuracy. For the case n = 5000, ASQN can still give an accurate solution in less time than EIGS and LOBPCG, but ACE usually takes a long time to reach a solution of high accuracy. Similar conclusions can be drawn from Table 2, in which ACE and LOBPCG do not reach the given accuracy in the cases n = 11041 and p = 30, 40, 50, 60. From the numbers of calls AV and BV, we see that the limited-memory Nyström method reduces the number of calls to the expensive part by doing more evaluations of the cheap part.
6.2. Kohn-Sham total energy minimization. We now test the electronic structure calculation models in subsections 6.2 and 6.3 using the new version of the KSSOLV package [45]. One of the main differences is that the new version uses the more recently developed optimized norm-conserving Vanderbilt (ONCV) pseudopotentials [14], which are compatible with those used in other community software packages such as Quantum ESPRESSO. The problem information is listed in Table 3. For fair comparisons, we stop all algorithms when the Frobenius norm of the Riemannian gradient is less than 10 −6 or the maximal number of iterations is reached. In the following tables, the column "solver" denotes which solver is used. The columns "fval", "nrmG" and "time" are the final objective function value, the final Frobenius norm of the Riemannian gradient and the wall-clock time in seconds of each algorithm, respectively.
In this test, we compare the structured quasi-Newton method with the SCF method in KSSOLV [45], the Riemannian L-BFGS method (RQN) in Manopt [6], the Riemannian gradient method with BB step size (GBB) and the adaptive regularized Newton method (ARNT) [16]. The default parameters therein are used. Our Algorithm 1 with the approximation (5.17) is denoted by ASQN. The parameter setting of ASQN is the same as that of ARNT [16].
For each algorithm, we first use GBB to generate a good starting point with stopping criterion ∥grad f (X k )∥ F ≤ 10 −1 and a maximum of 2000 iterations. The maximal numbers of iterations for SCF, GBB, ARNT, ASQN and RQN are set as 1000, 10000, 500, 500, 500 and 1000, respectively. The numerical results are reported in Tables 4 and 5. The column "its" represents the total number of iterations in SCF, GBB and RQN, while the two numbers in ARNT and ASQN are the total number of outer iterations and the average number of inner iterations. From Tables 4 and 5, we can see that SCF failed on "graphene16", "graphene30", "al", "ptnio" and "c". We next explain why SCF fails by taking "c" and "graphene16" as examples. For the case "c", we obtain the same solution using GBB, ARNT and ASQN. The number of wanted wave functions is 2, i.e., p = 2. With some abuse of notation, we denote the final solution by X = [x 1 , x 2 ]. Since X satisfies the first-order optimality condition, the columns of X are also eigenvectors of H(X), and the corresponding eigenvalues of H(X) are −1.8790 and −0.6058. On the other hand, the smallest four eigenvalues of H(X) are −1.8790, −0.6577, −0.6058, −0.6058, with corresponding eigenvectors denoted by Y = [y 1 , y 2 , y 3 , y 4 ]. The energies and norms of Riemannian gradients of the different eigenvector pairs [x 1 , x 2 ], [y 1 , y 2 ], [y 1 , y 3 ] and [y 1 , y 4 ] are (−5.3127, 9.96 × 10 −7 ), (−5.2903, 3.07 × 10 −1 ), (−5.2937, 1.82 × 10 −1 ) and (−4.6759, 1.82 × 10 −1 ), respectively. Comparing the angles between X and Y shows that x 1 is nearly parallel to y 1 but x 2 lies in the subspace spanned by [y 3 , y 4 ] rather than along y 2 . Hence, when the SCF method is used around X, the next point will jump to the subspace spanned by [y 1 , y 2 ]. This indicates the failure of the aufbau principle, and thus the failure of the SCF procedure. This is consistent with the observation in the chemistry literature [42], where sometimes the converged solution may have a "hole" (i.e., unoccupied states) below the highest occupied energy level.
In the case "graphene16", we still obtain the same solution from GBB, ARNT and ASQN. The number of wave functions p is 37. Let X be the computed solution and let the corresponding eigenvalues of H(X) be d. The smallest 37 eigenvalues and their corresponding eigenvectors of H(X) are g and Y . We find that the first 36 elements of d and g are almost the same up to machine accuracy, but the 37th elements of d and g are 0.5821 and 0.5783, respectively. The energies and norms of Riemannian gradients of X and Y are (−94.2613, 8.65 × 10 −7 ) and (−94.2030, 6.95 × 10 −1 ), respectively. Hence, SCF does not converge around the point X. In Tables 4 and 5, ARNT usually converges in a few iterations due to the usage of second-order information. It is often the fastest in terms of time since the computational costs of the two parts of the Hessian ∇ 2 E ks have no significant difference. GBB also performs comparably well to ARNT. ASQN works reasonably well on most problems. It takes more iterations than ARNT since the limited-memory approximation is often not as good as the exact Hessian. Because the costs of solving the subproblems of ASQN and ARNT are more or less the same, ASQN is not competitive with ARNT. However, by taking advantage of the problem structure, ASQN is still better than RQN in terms of computational time and accuracy. Finally, we show the convergence behaviors of these five methods on the system "glutamine" in Figure 1. Specifically, the error of the objective function values is defined as
∆E ks (X k ) = E ks (X k ) − E min ,
where E min is the minimum of the total energy attained by all methods.
Fig. 1. Comparisons of different algorithms on "glutamine" of KS total energy minimization. The first two points are the input and output of the initial solver GBB, respectively.
6.3. Hartree-Fock total energy minimization. In this subsection, we compare the performance of three variants of Algorithm 2 where the subproblem is solved by SCF (ACE), by the modified CG method (ARN) and by GBB (GBBN), respectively, the Riemannian L-BFGS method (RQN) in Manopt [6], and two variants of Algorithm 1 with approximation (5.18) (ASQN) and approximation (5.19) (AKQN). Since the computation of the exact Hessian ∇ 2 E hf is time-consuming, we do not present results using the exact Hessian. The limited-memory Nyström approximation (5.15) serves as the initial Hessian approximation in both ASQN and AKQN. To compare the effectiveness of the quasi-Newton approximation, we set H e (X k ) to be the limited-memory Nyström approximation (5.15) in (5.19) and use the same framework as in Algorithm 1. We should mention that the subspace refinement is not used in ASQN and AKQN; hence, only structured quasi-Newton iterations are performed in them. The default parameters in RQN and GBB are used. For ACE, GBBN, ASQN, AKQN and ARN, the subproblem is solved until the Frobenius norm of the Riemannian gradient is less than 0.1 min{∥grad f (X k )∥ F , 1}. We also use the adaptive strategy for choosing the maximal number of inner iterations of ARNT in [16] for GBBN, ASQN, AKQN and ARN. The settings of the other parameters of ASQN, AKQN and ARN are the same as those in ARNT [16]. For all algorithms, we generate a good initial guess by using GBB to solve the corresponding KS total energy minimization problem (i.e., removing the E f part from E hf in the objective function) until a maximal number of 2000 iterations is reached or the Frobenius norm of the Riemannian gradient is smaller than 10 −3 . The maximal number of iterations for ACE, GBBN, ASQN, ARN and AKQN is set to 200, while that of RQN is set to 1000.
A detailed summary of computational results is reported in Table 6. We see that ASQN performs best among all the algorithms in terms of both the number of iterations and time, especially on the systems "alanine", "graphene30", "gaas" and "si40". Usually, an algorithm takes fewer iterations if more parts of the Hessian are preserved. Since the computational cost of the Fock energy dominates that of the KS part, algorithms using fewer outer iterations consume less time to converge. Hence, ASQN is faster than AKQN. Comparing with ARN and RQN, we see that ASQN benefits from our quasi-Newton technique. Using a scaled identity matrix as the initial guess, RQN takes many more iterations than our algorithms, which use the adaptively compressed form of the hybrid exchange operator. ASQN is two times faster than ACE on "graphene30" and "si40". Finally, the convergence behaviors of these six methods on the system "glutamine" are shown in Figure 2, where ∆E hf (X k ) is defined similarly to the KS case. In summary, algorithms utilizing the quasi-Newton technique combined with the Nyström approximation are often able to give better performance.
7. Conclusion. We present a structured quasi-Newton method for optimization with orthogonality constraints. Instead of approximating the full Riemannian Hessian directly, we construct an approximation to the Euclidean Hessian and a regularized subproblem using this approximation, while the orthogonality constraints are kept. By solving the subproblem inexactly, global and local q-superlinear convergence can be guaranteed under certain assumptions. Our structured quasi-Newton method also takes advantage of the structure of the objective function when some parts are much more expensive to evaluate than others. Our numerical experiments on linear eigenvalue problems and on KSDFT and HF total energy minimization demonstrate that our structured quasi-Newton algorithm is very competitive with state-of-the-art algorithms.
The performance of the quasi-Newton methods can be further improved in several respects, for example, by finding a better initial quasi-Newton matrix than the Nyström approximation and by developing a better quasi-Newton approximation than the LSR1 technique. Our technique can also be extended to general Riemannian optimization problems with similar structures.
Fig. 2. Comparisons of different algorithms on "glutamine" of HF total energy minimization.
Table 1. Numerical results on random matrices.

p = 10          n = 5000                      n = 6000
        AV/BV        err      time    AV/BV        err      time
EIGS    459/459      8.0e-11  45.1    730/730      6.9e-11  94.3
LOBPCG  1717/1717    9.9e-11  128.9   2105/2105    9.8e-11  249.9
ASQN    2323/150     9.2e-11  13.3    2798/160     9.5e-11  22.8
ACE     4056/460     9.7e-11  30.8    4721/460     9.4e-11  47.4
                n = 8000                      n = 10000
EIGS    538/538      8.7e-11  131.9   981/981      8.8e-11  327.3
LOBPCG  1996/1996    9.9e-11  336.7   2440/2440    9.7e-11  763.8
ASQN    2706/150     8.9e-11  29.8    2920/150     9.7e-11  50.2
ACE     4537/450     9.8e-11  66.3    4554/400     9.6e-11  99.4
n = 5000        p = 10                        p = 20
EIGS    459/459      8.0e-11  45.1    638/638      3.2e-11  62.7
LOBPCG  1717/1717    9.9e-11  128.9   2914/2914    9.8e-11  130.3
ASQN    2323/150     9.2e-11  13.3    3809/260     9.2e-11  8.9
ACE     4056/460     9.7e-11  30.8    5902/680     9.5e-11  16.5
                p = 30                        p = 50
EIGS    660/660      3.0e-11  62.8    879/879      1.6e-12  83.6
LOBPCG  4458/4458    1.0e-10  217.6   5766/5766    9.5e-11  186.7
ASQN    5315/420     9.8e-11  11.4    7879/650     9.8e-11  17.8
ACE     9701/1530    9.4e-11  23.0    21664/4450   1.0e-10  50.9
Table 2. Numerical results on sparse matrices.

p = 10          s = 7                         s = 8
        AV/BV        err      time    AV/BV        err      time
EIGS    1589/1589    8.9e-11  10.8    1097/1097    6.1e-11  13.4
LOBPCG  3346/3346    9.8e-11  24.6    4685/4685    4.6e-10  48.6
ASQN    5387/180     9.6e-11  7.1     4861/150     9.9e-11  5.9
ACE     14361/1600   9.6e-11  21.7    8810/600     9.6e-11  12.7
                s = 9                         s = 10
EIGS    1326/1326    9.3e-11  21.2    1890/1890    6.8e-11  44.4
LOBPCG  4306/4306    1.7e-07  66.9    3895/3895    9.9e-11  91.9
ASQN    5303/190     8.5e-11  7.7     6198/200     8.9e-11  10.1
ACE     16253/1850   9.9e-11  34.6    10760/820    9.0e-11  22.2
                s = 11                        s = 12
EIGS    1882/1882    1.5e-07  58.9    1463/1463    9.6e-11  65.4
LOBPCG  4282/4282    9.5e-11  136.0   4089/4089    9.9e-11  190.6
ASQN    8327/240     9.6e-11  16.7    6910/220     9.3e-11  17.5
ACE     15323/1060   9.7e-11  38.9    17907/2010   1.7e-08  65.5
s = 12          p = 10                        p = 20
EIGS    1463/1463    9.6e-11  65.4    1148/1148    5.8e-11  50.2
LOBPCG  4089/4089    9.9e-11  190.6   5530/5530    9.8e-11  86.4
ASQN    6910/220     9.3e-11  17.5    9749/340     9.5e-11  16.3
ACE     17907/2010   1.7e-08  65.5    14108/960    9.8e-11  23.4
                p = 30                        p = 40
EIGS    1784/1784    8.1e-11  74.8    1836/1836    4.8e-11  69.1
LOBPCG  9076/9076    5.3e-09  173.3   12192/12192  4.6e-10  207.2
ASQN    17056/870    9.6e-11  41.5    19967/960    9.9e-11  39.9
ACE     37162/6030   9.1e-09  78.4    48098/8040   4.6e-07  105.4
                p = 50                        p = 60
EIGS    1743/1743    7.3e-11  69.1    2122/2122    1.6e-11  86.7
LOBPCG  12288/12288  1.4e-09  168.4   15716/15716  1.1e-08  199.5
ASQN    21330/1300   9.3e-11  53.6    26343/1620   9.7e-11  71.8
ACE     49165/10050  2.9e-06  110.1   62668/12060  2.3e-08  134.0
Table 3. Problem information.

name        (n 1 , n 2 , n 3 )   n       p
alanine     (91, 68, 61)         35829   18
c12h26      (136, 68, 28)        16099   37
ctube661    (162, 162, 21)       35475   48
glutamine   (64, 55, 74)         16517   29
graphene16  (91, 91, 23)         12015   37
graphene30  (181, 181, 23)       48019   67
pentacene   (80, 55, 160)        44791   51
gaas        (49, 49, 49)         7153    36
si40        (129, 129, 129)      140089  80
si64        (93, 93, 93)         51627   128
al          (91, 91, 91)         47833   12
ptnio       (89, 48, 42)         11471   43
c           (46, 46, 46)         6031    2
Table 4. Numerical results on KS total energy minimization.

                alanine                             c12h26
solver  fval         nrmG    its       time    fval         nrmG    its       time
SCF     -6.27084e+1  6.3e-7  11        64.0    -8.23006e+1  6.5e-7  10        61.1
GBB     -6.27084e+1  8.2e-7  92        71.3    -8.23006e+1  9.5e-7  89        65.8
ARNT    -6.27084e+1  3.8e-7  3(13.3)   63.0    -8.23006e+1  7.5e-7  3(15.3)   60.9
ASQN    -6.27084e+1  9.3e-7  13(11.8)  81.9    -8.23006e+1  9.3e-7  10(13.3)  67.8
RQN     -6.27084e+1  1.5e-6  34        114.9   -8.23006e+1  1.7e-6  45        120.0
                ctube661                            glutamine
SCF     -

Table 5. Numerical results on KS total energy minimization.

                c
solver  fval         nrmG    its        time
SCF     -5.29296e+0  7.3e-3  1000       168.3
GBB     -5.31268e+0  1.0e-6  3851       112.7
ARNT    -5.31268e+0  5.7e-7  96(49.1)   211.3
ASQN    -5.31268e+0  6.7e-7  104(38.5)  183.1
RQN     -5.31244e+0  1.4e-3  73         10.8
REFERENCES
[1] P.-A. Absil, C. G. Baker, and K. A. Gallivan, Trust-region methods on Riemannian manifolds, Found. Comput. Math., 7 (2007), pp. 303-330.
[2] P.-A. Absil, R. Mahony, and R. Sepulchre, Optimization algorithms on matrix manifolds, Princeton University Press, Princeton, NJ, 2008.
[3] P.-A. Absil, R. Mahony, and J. Trumpf, An extrinsic look at the Riemannian Hessian, in Geometric science of information, Springer, 2013, pp. 361-368.
[4] A. D. Becke, Density-functional thermochemistry. III. The role of exact exchange, J. Chem. Phys., 98 (1993), pp. 5648-5652.
[5] N. Boumal, P.-A. Absil, and C. Cartis, Global rates of convergence for nonconvex optimization on manifolds, IMA J. Numer. Anal., (2016).
[6] N. Boumal, B. Mishra, P.-A. Absil, and R. Sepulchre, Manopt, a Matlab toolbox for optimization on manifolds, J. Mach. Learn. Res., 15 (2014), pp. 1455-1459, http://www.manopt.org.
[7] R. H. Byrd, H. F. Khalfan, and R. B. Schnabel, Analysis of a symmetric rank-one trust region method, SIAM J. Optim., 6 (1996), pp. 1025-1039.
[8] R. H. Byrd, M. Marazzi, and J. Nocedal, On the convergence of Newton iterations to non-stationary points, Math. Program., 99 (2004), pp. 127-148.
[9] R. H. Byrd, J. Nocedal, and R. B. Schnabel, Representations of quasi-Newton matrices and their use in limited memory methods, Math. Program., 63 (1994), pp. 129-156.
[10] A. Edelman, T. A. Arias, and S. T. Smith, The geometry of algorithms with orthogonality constraints, SIAM J. Matrix Anal. Appl., 20 (1999), pp. 303-353.
[11] D. Gabay, Minimizing a differentiable function over a differential manifold, J. Optim. Theory Appl., 37 (1982), pp. 177-219.
[12] P. Giannozzi, S. Baroni, N. Bonini, M. Calandra, R. Car, C. Cavazzoni, D. Ceresoli, G. L. Chiarotti, M. Cococcioni, I. Dabo, et al., Quantum ESPRESSO: a modular and open-source software project for quantum simulations of materials, Journal of Physics: Condensed Matter, 21 (2009), p. 395502.
[13] S. Gratton and P. L. Toint, Multi-secant equations, approximate invariant subspaces and multigrid optimization, tech. report, Dept of Mathematics, FUNDP, Namur (B), 2007.
[14] D. Hamann, Optimized norm-conserving Vanderbilt pseudopotentials, Phys. Rev. B, 88 (2013), p. 085117.
[15] J. Heyd, G. E. Scuseria, and M. Ernzerhof, Hybrid functionals based on a screened Coulomb potential, J. Chem. Phys., 118 (2003), pp. 8207-8215.
[16] J. Hu, A. Milzarek, Z. Wen, and Y. Yuan, Adaptive quadratically regularized Newton method for Riemannian optimization, SIAM J. Matrix Anal. Appl., 39 (2018), pp. 1181-1207.
[17] W. Hu, L. Lin, and C. Yang, Projected commutator DIIS method for accelerating hybrid functional electronic structure calculations, J. Chem. Theory Comput., 13 (2017), pp. 5458-5467.
[18] W. Huang, Optimization algorithms on Riemannian manifolds with applications, PhD thesis, The Florida State University, 2013.
[19] W. Huang, P.-A. Absil, K. Gallivan, and P. Hand, ROPTLIB: an object-oriented C++ library for optimization on Riemannian manifolds, Technical Report FSU16-14, Florida State University, 2016.
[20] W. Huang, P.-A. Absil, and K. Gallivan, A Riemannian BFGS method without differentiated retraction for nonconvex optimization problems, SIAM J. Optim., 28 (2018), pp. 470-495.
[21] W. Huang, P.-A. Absil, and K. A. Gallivan, A Riemannian symmetric rank-one trust-region method, Math. Program., 150 (2015), pp. 179-216.
[22] W. Huang, P.-A. Absil, and K. A. Gallivan, A Riemannian BFGS method for nonconvex optimization problems, in Numerical Mathematics and Advanced Applications ENUMATH 2015, Springer, 2016, pp. 627-634.
[23] W. Huang, K. A. Gallivan, and P.-A. Absil, A Broyden class of quasi-Newton methods for Riemannian optimization, SIAM J. Optim., 25 (2015), pp. 1660-1685.
[24] R. E. Kass, Nonlinear regression analysis and its applications, J. Am. Stat. Assoc., 85 (1990), pp. 594-596.
[25] A. V. Knyazev, Toward the optimal preconditioned eigensolver: Locally optimal block preconditioned conjugate gradient method, SIAM J. Sci. Comput., 23 (2001), pp. 517-541.
[26] K. Kreutz-Delgado, The complex gradient operator and the CR-calculus, 2009, http://arxiv.org/abs/0906.4835.
[27] C. Le Bris, Computational chemistry from the perspective of numerical analysis, Acta Numer., 14 (2005), pp. 363-444.
[28] L. Lin, Adaptively compressed exchange operator, J. Chem. Theory Comput., 12 (2016), pp. 2242-2249.
[29] L. Lin and M. Lindsey, Convergence of adaptive compression methods for Hartree-Fock-like equations, Commun. Pure Appl. Math., in press, (2017).
[30] X. Liu, Z. Wen, and Y. Zhang, Limited memory block Krylov subspace optimization for computing dominant singular value decompositions, SIAM J. Sci. Comput., 35 (2013), pp. A1641-A1668.
[31] R. M. Martin, Electronic structure: basic theory and practical methods, Cambridge University Press, 2004.
[32] J. Nocedal and S. J. Wright, Numerical Optimization, Springer Series in Operations Research and Financial Engineering, Springer, New York, second ed., 2006.
[33] C. Qi, Numerical optimization methods on Riemannian manifolds, PhD thesis, Florida State University, 2011.
[34] W. Ring and B. Wirth, Optimization methods on Riemannian manifolds and their application to shape space, SIAM J. Optim., 22 (2012), pp. 596-627.
[35] M. Seibert, M. Kleinsteuber, and K. Hüper, Properties of the BFGS method on Riemannian manifolds, Mathematical System Theory: Festschrift in Honor of Uwe Helmke on the Occasion of his Sixtieth Birthday, (2013), pp. 395-412.
[36] S. T. Smith, Optimization techniques on Riemannian manifolds, Fields Institute Communications, 3 (1994).
[37] W. Sun and Y. Yuan, Optimization theory and methods: nonlinear programming, vol. 1, Springer Science & Business Media, 2006.
[38] A. Szabo and N. S. Ostlund, Modern quantum chemistry: introduction to advanced electronic structure theory, Courier Corporation, 2012.
[39] L. Thogersen, J. Olsen, A. Kohn, P. Jorgensen, P. Salek, and T. Helgaker, The trust-region self-consistent field method in Kohn-Sham density functional theory, J. Chem. Phys., 123 (2005), p. 074103.
[40] J. A. Tropp, A. Yurtsever, M. Udell, and V. Cevher, Fixed-rank approximation of a positive-semidefinite matrix from streaming data, in Advances in Neural Information Processing Systems, 2017, pp. 1225-1234.
[41] C. Udriste, Convex functions and optimization methods on Riemannian manifolds, vol. 297, Springer Science & Business Media, 1994.
[42] R. van Leeuwen, Density functional approach to the many-body problem: key concepts and exact functionals, Adv. Quantum Chem., 43 (2003), pp. 25-94.
[43] Z. Wen, A. Milzarek, M. Ulbrich, and H. Zhang, Adaptive regularized self-consistent field iteration with exact Hessian for electronic structure calculation, SIAM J. Sci. Comput., 35 (2013), pp. A1299-A1324.
[44] Z. Wen and W. Yin, A feasible method for optimization with orthogonality constraints, Math. Program., 142 (2013), pp. 397-434.
[45] C. Yang, J. C. Meza, B. Lee, and L.-W. Wang, KSSOLV: a MATLAB toolbox for solving the Kohn-Sham equations, ACM Trans. Math. Softw., 36 (2009), pp. 1-35.
[46] C. Yang, J. C. Meza, and L.-W. Wang, A trust region direct constrained minimization algorithm for the Kohn-Sham equation, SIAM J. Sci. Comput., 29 (2007), pp. 1854-1875.
[47] W. Zhou and X. Chen, Global convergence of a new hybrid Gauss-Newton structured BFGS method for nonlinear least squares problems, SIAM J. Optim., 20 (2010), pp. 2422-2441.
| [] |
[
"Thermal Bogoliubov transformation in nuclear structure theory",
"Thermal Bogoliubov transformation in nuclear structure theory"
] | [
"A I Vdovin \nBogoliubov Laboratory of Theoretical Physics\nJoint Institute for Nuclear Research\n141980DubnaRussia\n",
"Alan A Dzhioev \nBogoliubov Laboratory of Theoretical Physics\nJoint Institute for Nuclear Research\n141980DubnaRussia\n"
] | [
"Bogoliubov Laboratory of Theoretical Physics\nJoint Institute for Nuclear Research\n141980DubnaRussia",
"Bogoliubov Laboratory of Theoretical Physics\nJoint Institute for Nuclear Research\n141980DubnaRussia"
] | [] | Thermal Bogoliubov transformation is an essential ingredient of the thermo field dynamics - the real-time formalism in quantum field and many-body theories at finite temperatures developed by H. Umezawa and coworkers. The approach to study properties of hot nuclei which is based on the extension of the well-known Quasiparticle-Phonon Model to finite temperatures employing the TFD formalism is presented. A distinctive feature of the QPM-TFD combination is a possibility to go beyond the standard approximations like the thermal Hartree-Fock or the thermal RPA ones. Among numerous outstanding achievements by N. N. Bogoliubov there is the well-known Bogoliubov transformation for bosons [1] and fermions [2]. This unitary transformation played a crucial role in constructing microscopic theories of superfluidity and superconductivity and till now has been an extremely useful and powerful tool in many branches of theoretical physics. Quite unexpectedly, a new version of the Bogoliubov transformation appeared in the middle of the 1970s when H. Umezawa and coworkers formulated the basic ideas of thermo field dynamics (TFD) [3, 4] - a new formalism extending the quantum field and many-body theories to finite temperatures. Within TFD [3,4], the thermal average of a given operator A is calculated as the expectation value in a specially constructed, temperature-dependent state |0(T)⟩ which is termed the thermal vacuum. This expectation value is equal to the usual grand canonical average of A. In this sense, the thermal vacuum describes the thermal equilibrium of the system. To construct the state |0(T)⟩, a formal doubling of the system degrees of freedom is introduced 1 . In TFD, a tilde conjugate operator Ã, acting in the independent Hilbert space, is associated with A, in accordance with properly formulated tilde conjugation rules [3,4,5]. For a heated system governed by the Hamiltonian H the whole Hilbert space is spanned by the direct product of the eigenstates of H and those of the tilde Hamiltonian H̃, both corresponding to the same eigenvalues, i.e. H|n⟩ = E n |n⟩ and H̃|ñ⟩ = E n |ñ⟩. In the doubled Hilbert space, the thermal vacuum is defined as the zero-energy eigenstate | 10.1134/s1063779610070336 | [
"https://arxiv.org/pdf/1001.3607v1.pdf"
] | 55,497,560 | 1001.3607 | 00cfa60407d6d54b8bc67ca7a2da819a13dc22b9 |
Thermal Bogoliubov transformation in nuclear structure theory
20 Jan 2010
A I Vdovin
Bogoliubov Laboratory of Theoretical Physics
Joint Institute for Nuclear Research
141980DubnaRussia
Alan A Dzhioev
Bogoliubov Laboratory of Theoretical Physics
Joint Institute for Nuclear Research
141980DubnaRussia
Thermal Bogoliubov transformation in nuclear structure theory
20 Jan 2010
Thermal Bogoliubov transformation is an essential ingredient of thermo field dynamics (TFD), the real-time formalism in quantum field and many-body theories at finite temperatures developed by H. Umezawa and coworkers. The approach to study properties of hot nuclei which is based on the extension of the well-known Quasiparticle-Phonon Model to finite temperatures employing the TFD formalism is presented. A distinctive feature of the QPM-TFD combination is the possibility of going beyond standard approximations like the thermal Hartree-Fock or the thermal RPA ones. Among numerous outstanding achievements by N. N. Bogoliubov there is the well-known Bogoliubov transformation for bosons [1] and fermions [2]. This unitary transformation played a crucial role in constructing microscopic theories of superfluidity and superconductivity and has remained an extremely useful and powerful tool in many branches of theoretical physics. Quite unexpectedly, a new version of the Bogoliubov transformation appeared in the middle of the 1970s when H. Umezawa and coworkers formulated the basic ideas of thermo field dynamics (TFD) [3, 4], a new formalism extending quantum field and many-body theories to finite temperatures. Within TFD [3, 4], the thermal average of a given operator $A$ is calculated as the expectation value in a specially constructed, temperature-dependent state $|0(T)\rangle$ which is termed the thermal vacuum. This expectation value is equal to the usual grand canonical average of $A$. In this sense, the thermal vacuum describes the thermal equilibrium of the system. To construct the state $|0(T)\rangle$, a formal doubling of the system degrees of freedom is introduced¹. In TFD, a tilde conjugate operator $\tilde{A}$, acting in the independent Hilbert space, is associated with $A$, in accordance with properly formulated tilde conjugation rules [3, 4, 5]. For a heated system governed by the Hamiltonian $H$, the whole Hilbert space is spanned by the direct product of the eigenstates of $H$ and those of the tilde Hamiltonian $\tilde{H}$, both corresponding to the same eigenvalues, i.e., $H|n\rangle = E_n|n\rangle$ and $\tilde{H}|\tilde{n}\rangle = E_n|\tilde{n}\rangle$. In the doubled Hilbert space, the thermal vacuum is defined as the zero-energy eigenstate
of the so-called thermal Hamiltonian $\mathcal{H} = H - \tilde{H}$. Moreover, the thermal vacuum satisfies the thermal state condition [3,4,5]
$$A\,|0(T)\rangle = \sigma\, e^{\mathcal{H}/2T}\, \tilde{A}^{\dagger}\,|0(T)\rangle, \qquad (1)$$
where $\sigma = 1$ for bosonic $A$ and $\sigma = -i$ for fermionic $A$. It is seen from (1) that, in TFD, there always exists a certain combination of $A$ and $\tilde{A}^{\dagger}$ which annihilates the thermal vacuum. That mixing is promoted by a specific transformation called the thermal Bogoliubov transformation [4]. This transformation must be canonical in the sense that the algebra of the original system remains the same, keeping its dynamics. The temperature dependence comes from the transformation parameters.
The important point is that in the doubled Hilbert space the time-translation operator is the thermal Hamiltonian $\mathcal{H}$. This means that the excitations of the thermal system are obtained by the diagonalization of $\mathcal{H}$. The existence of the thermal vacuum annihilation operators provides us with a powerful method to analyze physical systems at finite temperatures and allows for straightforward extensions of different zero-temperature approximations.
In the present note, we exemplify the advantages of TFD in treating the behavior of atomic nuclei at finite temperatures. In particular, we will show a way of going beyond the thermal RPA that allows one to treat the coupling of the basic nuclear modes, quasiparticles and phonons [7], at finite temperatures. This problem was already studied in Refs. [8,9,10]. However, new aspects have been revealed recently [11].
To avoid unnecessary complications in the formulae, we consider a nuclear Hamiltonian which is a simplified version of the Hamiltonian of the Quasiparticle-Phonon Model [7]. It consists of a mean field $H_{\rm sp}$, the BCS pairing interaction $H_{\rm pair}$, and a separable multipole-multipole particle-hole interaction $H_{\rm ph}$. Moreover, protons and neutrons are not distinguished. The Hamiltonian reads
$$H = H_{\rm sp} + H_{\rm pair} + H_{\rm ph} = \sum_{jm}(E_j - \lambda)\, a^{\dagger}_{jm} a_{jm} - \frac{1}{4} G \sum_{j_1 m_1 j_2 m_2} a^{\dagger}_{j_1 m_1} a^{\dagger}_{\overline{j_1 m_1}}\, a_{\overline{j_2 m_2}}\, a_{j_2 m_2} - \frac{1}{2}\sum_{\lambda\mu} \kappa^{(\lambda)}_{0}\, M^{\dagger}_{\lambda\mu} M_{\lambda\mu}, \qquad (2)$$
where $a^{\dagger}_{jm}$ and $a_{jm}$ are the nucleon creation and annihilation operators, $a_{\overline{jm}} = (-1)^{j-m} a_{j\,-m}$, and $M^{\dagger}_{\lambda\mu}$ is the multipole single-particle operator of the electric type with multipolarity $\lambda$.
At first, we apply TFD to treat pairing correlations at finite temperature (see also [12,13]). To this aim, we make the standard Bogoliubov $u,v$-transformation from nucleon operators to quasiparticle operators $\alpha^{\dagger}$, $\alpha$:
$$\alpha^{\dagger}_{jm} = u_j\, a^{\dagger}_{jm} - v_j\, a_{\overline{jm}}, \qquad \alpha_{jm} = u_j\, a_{jm} - v_j\, a^{\dagger}_{\overline{jm}} \qquad (u_j^2 + v_j^2 = 1). \qquad (3)$$
The same transformation with the same $u,v$ coefficients is applied to the nucleonic tilde operators $\tilde{a}^{\dagger}_{jm}$, $\tilde{a}_{jm}$, thus producing the tilde quasiparticle operators $\tilde{\alpha}^{\dagger}_{jm}$ and $\tilde{\alpha}_{jm}$. Thermal effects appear after the thermal Bogoliubov transformation which mixes ordinary and tilde quasiparticle operators and produces the operators of so-called thermal quasiparticles $\beta^{\dagger}_{jm}$, $\beta_{jm}$ and their tilde counterparts. Following Ojima's formulation of the double tilde conjugation rule for fermions ($\tilde{\tilde{a}} = a$) [5], we use here the complex form of the thermal Bogoliubov transformation:
$$\beta^{\dagger}_{jm} = x_j\, \alpha^{\dagger}_{jm} - i\, y_j\, \tilde{\alpha}_{jm}, \qquad \tilde{\beta}^{\dagger}_{jm} = x_j\, \tilde{\alpha}^{\dagger}_{jm} + i\, y_j\, \alpha_{jm} \qquad (x_j^2 + y_j^2 = 1). \qquad (4)$$
The reasons for this are given in [11]. Then we express the thermal Hamiltonian in terms of the thermal quasiparticle operators (4) and require that the one-body part of the thermal BCS Hamiltonian $\mathcal{H}_{\rm BCS} = H_{\rm sp} + H_{\rm pair} - \tilde{H}_{\rm sp} - \tilde{H}_{\rm pair}$ becomes diagonal in terms of thermal quasiparticles. This yields the following expressions for $u_j$, $v_j$:
$$u_j^2 = \frac{1}{2}\left(1 + \frac{E_j - \lambda}{\varepsilon_j}\right), \qquad v_j^2 = \frac{1}{2}\left(1 - \frac{E_j - \lambda}{\varepsilon_j}\right), \qquad (5)$$
where $\varepsilon_j = \sqrt{(E_j - \lambda)^2 + \Delta^2}$. The gap parameter $\Delta$ and the chemical potential $\lambda$ are the solutions of the equations
$$\Delta = \frac{G}{2}\sum_j (2j+1)\,(x_j^2 - y_j^2)\, u_j v_j, \qquad N = \sum_j (2j+1)\,(v_j^2 x_j^2 + u_j^2 y_j^2), \qquad (6)$$
where $N$ is the number of nucleons in a nucleus. With $u_j$, $v_j$ from (5), the one-body part of the thermal BCS Hamiltonian reads
$$\mathcal{H}_{\rm BCS} \simeq \sum_{jm} \varepsilon_j \left(\beta^{\dagger}_{jm}\beta_{jm} - \tilde{\beta}^{\dagger}_{jm}\tilde{\beta}_{jm}\right). \qquad (7)$$
One can see that the Hamiltonian $\mathcal{H}_{\rm BCS}$ describes a system of noninteracting thermal quasiparticles and tilde quasiparticles with energies $\varepsilon_j$ and $-\varepsilon_j$, respectively.
To determine the thermal vacuum corresponding to $\mathcal{H}_{\rm BCS}$, we need to fix appropriately the coefficients $x_j$, $y_j$. In Refs. [12,13], the coefficients were found by minimizing the thermodynamic potential of the system of noninteracting Bogoliubov quasiparticles. Here we demand that the vacuum $|0(T);{\rm qp}\rangle$ of thermal quasiparticles obey the thermal state condition (1):
$$a_{jm}\,|0(T);{\rm qp}\rangle = -i\, e^{\mathcal{H}_{\rm BCS}/2T}\, \tilde{a}^{\dagger}_{jm}\,|0(T);{\rm qp}\rangle. \qquad (8)$$
Combining (8) and (3) one gets
$$y_j = \left(1 + \exp\frac{\varepsilon_j}{T}\right)^{-1/2}, \qquad x_j = \left(1 - y_j^2\right)^{1/2}. \qquad (9)$$
We see that the coefficients $y_j^2$ are the thermal Fermi-Dirac occupation factors which determine the average number of thermally excited Bogoliubov quasiparticles in the BCS thermal vacuum. Equations (5), (6), and (9) are the well-known finite-temperature BCS equations [14].
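To make the self-consistency of Eqs. (5), (6), and (9) concrete, here is a minimal numerical sketch (ours, not part of the original text). The level scheme, pairing strength, temperature, and particle number are illustrative assumptions, and the naive fixed-point iteration is chosen for transparency rather than robustness.

```python
# Toy finite-temperature BCS solver for Eqs. (5), (6), (9); all inputs assumed.
import numpy as np

G_pair, T, N_target = 0.4, 0.5, 10.0        # pairing strength (MeV), temperature (MeV), <N>
E = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])   # single-particle energies E_j (MeV)
deg = np.full_like(E, 4.0)                  # degeneracies 2j+1 (all j = 3/2 here)

lam, Delta = 0.0, 1.0
for _ in range(500):
    eps = np.sqrt((E - lam) ** 2 + Delta ** 2)      # quasiparticle energies
    v2 = 0.5 * (1.0 - (E - lam) / eps)              # occupation amplitudes, Eq. (5)
    u2 = 1.0 - v2
    y2 = 1.0 / (1.0 + np.exp(eps / T))              # Fermi-Dirac factors, Eq. (9)
    # gap equation, Eq. (6), using x_j^2 - y_j^2 = 1 - 2*y_j^2:
    Delta = 0.5 * G_pair * np.sum(deg * (1.0 - 2.0 * y2) * np.sqrt(u2 * v2))
    N = np.sum(deg * (v2 * (1.0 - y2) + u2 * y2))   # number equation, Eq. (6)
    lam += 0.1 * (N_target - N)                     # relax the chemical potential
print(f"Delta = {Delta:.3f} MeV, lambda = {lam:.3f} MeV")
```

For this symmetric toy spectrum the iteration settles at a gap of order 1.4 MeV with λ ≈ 0; raising T shrinks the gap, as expected from the thermal blocking factor (1 − 2y²) in the gap equation.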
In the next stage we partially take into account the particle-hole residual interaction $H_{\rm ph}$. Now the thermal Hamiltonian reads
$$\mathcal{H} = \sum_{jm}\varepsilon_j\left(\beta^{\dagger}_{jm}\beta_{jm} - \tilde{\beta}^{\dagger}_{jm}\tilde{\beta}_{jm}\right) - \frac{1}{2}\sum_{\lambda\mu}\kappa^{(\lambda)}_{0}\left(M^{\dagger}_{\lambda\mu}M_{\lambda\mu} - \tilde{M}^{\dagger}_{\lambda\mu}\tilde{M}_{\lambda\mu}\right), \qquad (10)$$
and it can be divided into two parts: $\mathcal{H}_{\rm TQRPA}$ and $\mathcal{H}_{\rm qph}$. The part $\mathcal{H}_{\rm TQRPA}$, which contains $\mathcal{H}_{\rm BCS}$ and the terms with even numbers of creation and annihilation operators of thermal quasiparticles, is approximately diagonalized within the Thermal Quasiparticle Random Phase Approximation (TQRPA), whereas the part $\mathcal{H}_{\rm qph}$, containing odd numbers of creation and annihilation operators, is responsible for the coupling of the TQRPA eigenvectors (thermal phonons).
To diagonalize $\mathcal{H}_{\rm TQRPA}$, the following thermal phonon operator is introduced:
$$Q^{\dagger}_{\lambda\mu i} = \frac{1}{2}\sum_{j_1 j_2}\Big(\psi^{\lambda i}_{j_1 j_2}\,[\beta^{\dagger}_{j_1}\beta^{\dagger}_{j_2}]^{\lambda}_{\mu} + \tilde{\psi}^{\lambda i}_{j_1 j_2}\,[\tilde{\beta}^{\dagger}_{j_1}\tilde{\beta}^{\dagger}_{j_2}]^{\lambda}_{\mu} + 2i\,\eta^{\lambda i}_{j_1 j_2}\,[\beta^{\dagger}_{j_1}\tilde{\beta}^{\dagger}_{j_2}]^{\lambda}_{\mu} + (-1)^{\lambda-\mu}\big(\phi^{\lambda i}_{j_1 j_2}\,[\beta_{j_1}\beta_{j_2}]^{\lambda}_{-\mu} + \tilde{\phi}^{\lambda i}_{j_1 j_2}\,[\tilde{\beta}_{j_1}\tilde{\beta}_{j_2}]^{\lambda}_{-\mu} - 2i\,\xi^{\lambda i}_{j_1 j_2}\,[\beta_{j_1}\tilde{\beta}_{j_2}]^{\lambda}_{-\mu}\big)\Big),$$
where the notation $[\;\;]^{\lambda}_{\mu}$ means the coupling of single-particle momenta $j_1$, $j_2$ to the angular momentum $\lambda$ with the projection $\mu$. Now the thermal equilibrium state is treated as a vacuum $|0(T);{\rm ph}\rangle$ for thermal phonons. In addition, the thermal phonon operators are assumed to obey bosonic commutation rules. This imposes some constraints on the phonon amplitudes $\psi$, $\phi$, $\eta$, etc. (see Ref. [11] for more details).
To find the eigenvalues of $\mathcal{H}_{\rm TQRPA}$, the variational principle is applied, i.e., we find the minimum of the expectation value of $\mathcal{H}_{\rm TQRPA}$ with respect to one-phonon states $Q^{\dagger}_{\lambda\mu i}|0(T);{\rm ph}\rangle$ or $\tilde{Q}^{\dagger}_{\lambda\mu i}|0(T);{\rm ph}\rangle$ under the afore-mentioned constraints on the phonon amplitudes. As a result we arrive at the following equation for the thermal phonon energies $\omega_{\lambda i}$:
$$\frac{2\lambda+1}{\kappa^{(\lambda)}_{0}} = \sum_{j_1 j_2}\left(f^{(\lambda)}_{j_1 j_2}\right)^2\left[\frac{\left(u^{(+)}_{j_1 j_2}\right)^2 \varepsilon^{(+)}_{j_1 j_2}\left(1 - y^2_{j_1} - y^2_{j_2}\right)}{\left(\varepsilon^{(+)}_{j_1 j_2}\right)^2 - \omega^2} - \frac{\left(v^{(-)}_{j_1 j_2}\right)^2 \varepsilon^{(-)}_{j_1 j_2}\left(y^2_{j_1} - y^2_{j_2}\right)}{\left(\varepsilon^{(-)}_{j_1 j_2}\right)^2 - \omega^2}\right], \qquad (11)$$
where $f^{(\lambda)}_{j_1 j_2}$ is the reduced single-particle matrix element of the multipole operator $M_{\lambda\mu}$; $\varepsilon^{(\pm)}_{j_1 j_2} = \varepsilon_{j_1} \pm \varepsilon_{j_2}$, $u^{(+)}_{j_1 j_2} = u_{j_1} v_{j_2} + v_{j_1} u_{j_2}$, and $v^{(-)}_{j_1 j_2} = u_{j_1} u_{j_2} - v_{j_1} v_{j_2}$.
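For orientation, Eq. (11) is easy to solve numerically once its ingredients are fixed. The sketch below (ours; all numerical values are toy assumptions, not taken from the text) keeps a single (j₁, j₂) pair and brackets the collective root between the thermal pole at |ε₁ − ε₂| and the two-quasiparticle pole at ε₁ + ε₂.

```python
# Toy solver for the TQRPA secular equation (11) with one (j1, j2) pair.
import numpy as np
from scipy.optimize import brentq

lam_mult, kappa, T = 2, 0.05, 1.0      # multipolarity, strength kappa_0 (MeV^-1), temperature (MeV)
f, eps1, eps2 = 10.0, 3.0, 5.0         # reduced matrix element, quasiparticle energies (MeV)
u_plus, v_minus = 0.8, 0.3             # the u^(+) and v^(-) combinations
y1sq = 1.0 / (1.0 + np.exp(eps1 / T))  # thermal occupation factors, Eq. (9)
y2sq = 1.0 / (1.0 + np.exp(eps2 / T))

def rhs(omega):
    """Right-hand side of Eq. (11) for a single pole pair."""
    ep, em = eps1 + eps2, eps1 - eps2
    return f**2 * (u_plus**2 * ep * (1.0 - y1sq - y2sq) / (ep**2 - omega**2)
                   - v_minus**2 * em * (y1sq - y2sq) / (em**2 - omega**2))

target = (2 * lam_mult + 1) / kappa
omega = brentq(lambda w: rhs(w) - target,
               abs(eps1 - eps2) + 1e-6, eps1 + eps2 - 1e-6)
print(f"thermal phonon energy omega = {omega:.3f} MeV")
```

With these numbers the collective root comes out just below the two-quasiparticle pole; the second term of rhs, which vanishes at T = 0 since all y² → 0, encodes the new thermal branch that opens at finite temperature.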
Although at the present stage the phonon amplitudes cannot be determined unambiguously, the TQRPA Hamiltonian is diagonal in terms of thermal phonon operators:
$$\mathcal{H}_{\rm TQRPA} = \sum_{\lambda\mu i}\omega_{\lambda i}\left(Q^{\dagger}_{\lambda\mu i}Q_{\lambda\mu i} - \tilde{Q}^{\dagger}_{\lambda\mu i}\tilde{Q}_{\lambda\mu i}\right). \qquad (12)$$
One can see that $\mathcal{H}_{\rm TQRPA}$ is invariant under the following thermal Bogoliubov transformation:
$$Q^{\dagger}_{\lambda\mu i} \to X_{\lambda i}\, Q^{\dagger}_{\lambda\mu i} - Y_{\lambda i}\, \tilde{Q}_{\lambda\mu i}, \qquad \tilde{Q}^{\dagger}_{\lambda\mu i} \to X_{\lambda i}\, \tilde{Q}^{\dagger}_{\lambda\mu i} - Y_{\lambda i}\, Q_{\lambda\mu i} \qquad (13)$$
with $X^2_{\lambda i} - Y^2_{\lambda i} = 1$.
To fix the coefficients $X_{\lambda i}$, $Y_{\lambda i}$ and finally determine the phonon amplitudes $\psi$, $\tilde{\psi}$, $\phi$, $\tilde{\phi}$, $\eta$, $\xi$, we again demand that the thermal phonon vacuum obey the thermal state condition². For $A$ in (1), it is convenient to take the multipole operator $M_{\lambda\mu}$. Then the thermal state condition takes the form
$$M_{\lambda\mu}\,|0(T);{\rm ph}\rangle = e^{\mathcal{H}_{\rm TQRPA}/2T}\, \tilde{M}^{\dagger}_{\lambda\mu}\,|0(T);{\rm ph}\rangle. \qquad (14)$$
Expressing $M_{\lambda\mu}$ through the phonon operators, we find the coefficients $X_{\lambda i}$, $Y_{\lambda i}$:
$$Y_{\lambda i} = \left(\exp\frac{\omega_{\lambda i}}{T} - 1\right)^{-1/2}, \qquad X_{\lambda i} = \left(1 + Y^2_{\lambda i}\right)^{1/2}.$$
The coefficients $Y^2_{\lambda i}$ appear to be the thermal occupation factors of the Bose-Einstein statistics. Thus, the phonon amplitudes depend on both types of thermal occupation numbers: the quasiparticle ones (the Fermi-Dirac type) and the phonon ones (the Bose-Einstein type). The expressions for all the phonon amplitudes $\psi$, $\tilde{\psi}$, $\phi$, $\tilde{\phi}$, $\eta$, $\xi$ can be found in [11].
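As a quick sanity check (ours, not part of the original text), these coefficients reproduce the expected limiting behavior:
$$Y^2_{\lambda i} = \frac{1}{e^{\omega_{\lambda i}/T} - 1} \;\xrightarrow[T \to 0]{}\; 0, \qquad Y^2_{\lambda i} \;\xrightarrow[T \gg \omega_{\lambda i}]{}\; \frac{T}{\omega_{\lambda i}},$$
so the usual zero-temperature QRPA is recovered as $T \to 0$, while the phonon occupation grows linearly with $T$ in the classical regime; the fermionic factors $y_j^2 = (1 + e^{\varepsilon_j/T})^{-1}$ instead saturate at $1/2$.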
Once the structure of the thermal phonons is determined, one can find the $E\lambda$-transition strengths from the TQRPA thermal vacuum to one-phonon states. The transition strengths to non-tilde and tilde one-phonon states are related by
$$\tilde{\Phi}^2_{\lambda i} = \exp(-\omega_{\lambda i}/T)\,\Phi^2_{\lambda i}. \qquad (15)$$
This relation is equivalent to the principle of detailed balancing connecting the probabilities for a probe to transfer the energy $\omega$ to a heated system and to absorb the energy $\omega$ from it. Now we are ready to go beyond the TQRPA and consider the effects of the term $\mathcal{H}_{\rm qph}$, which is a thermal analogue of the quasiparticle-phonon interaction [7]. It reads
$$\mathcal{H}_{\rm qph} = -\frac{1}{2}\sum_{\lambda\mu i}\sum_{j_1 j_2}\frac{f^{(\lambda)}_{j_1 j_2}}{\sqrt{\mathcal{N}_{\lambda i}}}\left(Q^{\dagger}_{\lambda\mu i} + Q_{\lambda\mu i}\right)B_{\lambda\mu i}(j_1 j_2) + ({\rm h.c.}) - ({\rm t.c.}), \qquad (16)$$
where the notation "(h.c.)" and "(t.c.)" stands for the terms which are Hermitian- and tilde-conjugate to the displayed ones; $\mathcal{N}_{\lambda i}$ is the normalization factor of the phonon amplitudes. The operator $B_{\lambda\mu i}(j_1 j_2)$ reads
$$B_{\lambda\mu i}(j_1 j_2) = i\, u^{(+)}_{j_1 j_2}\left(Z^{\lambda i}_{j_1 j_2}\,[\beta^{\dagger}_{j_1}\beta_{j_2}]^{\lambda}_{\mu} + Z^{\lambda i}_{j_2 j_1}\,[\tilde{\beta}^{\dagger}_{j_1}\tilde{\beta}_{j_2}]^{\lambda}_{\mu}\right) - v^{(-)}_{j_1 j_2}\left(X^{\lambda i}_{j_1 j_2}\,[\beta^{\dagger}_{j_1}\tilde{\beta}_{j_2}]^{\lambda}_{\mu} + Y^{\lambda i}_{j_1 j_2}\,[\tilde{\beta}^{\dagger}_{j_1}\beta_{j_2}]^{\lambda}_{\mu}\right),$$
where the coefficients $X^{\lambda i}_{j_1 j_2}$, $Y^{\lambda i}_{j_1 j_2}$, and $Z^{\lambda i}_{j_1 j_2}$ are the following:
$$X^{\lambda i}_{j_1 j_2} = x_{j_1} x_{j_2} X_{\lambda i} + y_{j_1} y_{j_2} Y_{\lambda i}, \qquad Y^{\lambda i}_{j_1 j_2} = x_{j_1} x_{j_2} Y_{\lambda i} + y_{j_1} y_{j_2} X_{\lambda i}, \qquad Z^{\lambda i}_{j_1 j_2} = x_{j_1} y_{j_2} X_{\lambda i} + y_{j_1} x_{j_2} Y_{\lambda i}.$$
The term $\mathcal{H}_{\rm qph}$ couples states with different numbers of thermal phonons. To take the phonon coupling into account, we consider a trial wave function of the following form:
$$|\Psi_\nu(JM)\rangle = \Big(\sum_i \big(R_i(J\nu)\,Q^{\dagger}_{JMi} + \tilde{R}_i(J\nu)\,\tilde{Q}^{\dagger}_{JMi}\big) + \sum_{\lambda_1 i_1 \lambda_2 i_2}\big(P^{\lambda_1 i_1}_{\lambda_2 i_2}(J\nu)\,[Q^{\dagger}_{\lambda_1 i_1}Q^{\dagger}_{\lambda_2 i_2}]^{J}_{M} + S^{\lambda_1 i_1}_{\lambda_2 i_2}(J\nu)\,[Q^{\dagger}_{\lambda_1 i_1}\tilde{Q}^{\dagger}_{\lambda_2 i_2}]^{J}_{M} + \tilde{P}^{\lambda_1 i_1}_{\lambda_2 i_2}(J\nu)\,[\tilde{Q}^{\dagger}_{\lambda_1 i_1}\tilde{Q}^{\dagger}_{\lambda_2 i_2}]^{J}_{M}\big)\Big)\,|0(T);{\rm ph}\rangle. \qquad (17)$$
It should be stressed that in (17) we keep the thermal vacuum of the TQRPA. This means that we do not consider the influence of the phonon coupling on the thermal occupation numbers. Note also that the function (17) contains not only non-tilde one-phonon components but the tilde ones as well. This is a new point in comparison with Ref. [11]. The function (17) has to be normalized. This demand imposes the following constraint on the amplitudes $R$, $\tilde{R}$, $P$, $S$, $\tilde{P}$:
$$\sum_i\left(R_i(J\nu)^2 + \tilde{R}_i(J\nu)^2\right) + \sum_{\lambda_1 i_1 \lambda_2 i_2}\left(2\,P^{\lambda_1 i_1}_{\lambda_2 i_2}(J\nu)^2 + S^{\lambda_1 i_1}_{\lambda_2 i_2}(J\nu)^2 + 2\,\tilde{P}^{\lambda_1 i_1}_{\lambda_2 i_2}(J\nu)^2\right) = 1. \qquad (18)$$
Since the trial function contains three different types of two-phonon components, there are three types of interaction matrix elements which couple a thermal one-phonon state with two-phonon ones³. Applying the variational principle to the average value of the thermal Hamiltonian $\mathcal{H}_{\rm TQRPA} + \mathcal{H}_{\rm qph}$ with respect to $|\Psi_\nu(JM)\rangle$ under the normalization constraint (18), one gets a system of linear equations for the amplitudes $R$, $\tilde{R}$, $P$, $S$, $\tilde{P}$. The system has a nontrivial solution if the energy $\eta_\nu$ of the state $|\Psi_\nu(JM)\rangle$ obeys the following secular equation:
$$\det\begin{pmatrix} A(\eta_\nu) & B(\eta_\nu) \\ B(-\eta_\nu) & A(-\eta_\nu) \end{pmatrix} = 0, \qquad (20)$$
where
$$A_{ii'}(\eta_\nu) = (\omega_{Ji} - \eta_\nu)\,\delta_{ii'} - \frac{1}{2}\sum_{\lambda_1 i_1 \lambda_2 i_2}\left[\frac{U^{\lambda_1 i_1}_{\lambda_2 i_2}(Ji)\,U^{\lambda_1 i_1}_{\lambda_2 i_2}(Ji')}{\omega_{\lambda_1 i_1} + \omega_{\lambda_2 i_2} - \eta_\nu} + 2\,\frac{V^{\lambda_1 i_1}_{\lambda_2 i_2}(Ji)\,V^{\lambda_1 i_1}_{\lambda_2 i_2}(Ji')}{\omega_{\lambda_1 i_1} - \omega_{\lambda_2 i_2} - \eta_\nu} - \frac{W^{\lambda_1 i_1}_{\lambda_2 i_2}(Ji)\,W^{\lambda_1 i_1}_{\lambda_2 i_2}(Ji')}{\omega_{\lambda_1 i_1} + \omega_{\lambda_2 i_2} + \eta_\nu}\right] \qquad (21)$$
and
$$B_{ii'}(\eta_\nu) = \frac{1}{2}\sum_{\lambda_1 i_1 \lambda_2 i_2}\left[\frac{U^{\lambda_1 i_1}_{\lambda_2 i_2}(Ji)\,W^{\lambda_1 i_1}_{\lambda_2 i_2}(Ji')}{\omega_{\lambda_1 i_1} + \omega_{\lambda_2 i_2} - \eta_\nu} + 2\,(-1)^{\lambda_1+\lambda_2+J}\,\frac{V^{\lambda_1 i_1}_{\lambda_2 i_2}(Ji)\,V^{\lambda_2 i_2}_{\lambda_1 i_1}(Ji')}{\omega_{\lambda_1 i_1} - \omega_{\lambda_2 i_2} - \eta_\nu} - \frac{W^{\lambda_1 i_1}_{\lambda_2 i_2}(Ji)\,U^{\lambda_1 i_1}_{\lambda_2 i_2}(Ji')}{\omega_{\lambda_1 i_1} + \omega_{\lambda_2 i_2} + \eta_\nu}\right]. \qquad (22)$$
Physical effects which can be treated with the function $|\Psi_\nu(JM)\rangle$ and Eq. (20) relate to the fragmentation of basic nuclear excitations like quasiparticles and phonons, their spreading widths, and/or a more consistent description of transition strength distributions over a nuclear spectrum in hot nuclei.
The authors are thankful to Dr. V. Ponomarev for valuable discussions and comments.
¹ It is worth mentioning the general statement [6] that the effect of finite temperature can be included in a free field theory if one doubles the field degrees of freedom.
² Earlier, in [11], we used another procedure to this aim. That procedure seems to be much less evident and more lengthy.
³ The expressions for the matrix elements $U^{\lambda_1 i_1}_{\lambda_2 i_2}(Ji)$, $V^{\lambda_1 i_1}_{\lambda_2 i_2}(Ji)$, and $W^{\lambda_1 i_1}_{\lambda_2 i_2}(Ji)$ via the phonon amplitudes $\psi$, $\tilde{\psi}$, $\phi$, etc. can be found in [11].
Bogoliubov N. N. // JETP. 1958. V.34. P.58.
Takahashi Y., Umezawa H. // Coll. Phenom. 1975. V.2. P.55.
Umezawa H., Matsumoto H., Tachiki M. // Thermo Field Dynamics and Condensed States. Amsterdam: North-Holland, 1982.
Ojima I. // Ann. Phys. 1981. V.137. P.1.
Haag R., Hugenholtz N. W., Winnink M. // Comm. Math. Phys. 1967. V.5. P.215.
Soloviev V. G. // Theory of Atomic Nuclei: Quasiparticles and Phonons. Bristol: IoP, 1992.
Tanabe K. // Phys. Rev. C. 1988. V.37. P.2802.
Kosov D. S., Vdovin A. I. // Mod. Phys. Lett. A. 1994. V.9. P.1735.
Kosov D. S., Vdovin A. I., Wambach J. // Proc. Intern. Conf. "Nuclear Structure and Related Topics", Dubna, Sept. 9-13, 1997. Eds. S. N. Ershov, R. V. Jolos, V. V. Voronov. Dubna: JINR, E4-97-327, 1997. P.254-261.
Dzhioev A. A., Vdovin A. I. // Int. J. Mod. Phys. E. 2009. V.18. P.1535.
Civitarese O., DePaoli A. L. // Z. Phys. A. 1993. V.344. P.243.
Kosov D. S., Vdovin A. I. // Izv. RAN, ser. fiz. 1994. V.58. P.41.
Goodman A. L. // Nucl. Phys. A. 1981. V.352. P.30.
| [] |
[
"A GRAPHICAL DESCRIPTION OF THE BNS-INVARIANTS OF BESTVINA-BRADY GROUPS AND THE RAAG RECOGNITION PROBLEM",
"A GRAPHICAL DESCRIPTION OF THE BNS-INVARIANTS OF BESTVINA-BRADY GROUPS AND THE RAAG RECOGNITION PROBLEM"
] | [
"Yu-Chan Chang ",
"Lorenzo Ruffoni "
] | [] | [] | A finitely presented Bestvina-Brady group (BBG) admits a presentation involving only commutators. We show that if a graph admits a certain type of spanning trees, then the associated BBG is a right-angled Artin group (RAAG). As an application, we obtain that the class of BBGs contains the class of RAAGs. On the other hand, we provide a criterion to certify that certain finitely presented BBGs are not isomorphic to RAAGs (or more general Artin groups). This is based on a description of the Bieri-Neumann-Strebel invariants of finitely presented BBGs in terms of separating subgraphs, analogous to the case of RAAGs. As an application, we characterize when the BBG associated to a 2-dimensional flag complex is a RAAG in terms of certain subgraphs. | null | [
"https://export.arxiv.org/pdf/2212.06901v1.pdf"
] | 254,636,144 | 2212.06901 | c0acbe2cf0e2ca6819723c54d063d6f187a0cde5 |
A GRAPHICAL DESCRIPTION OF THE BNS-INVARIANTS OF BESTVINA-BRADY GROUPS AND THE RAAG RECOGNITION PROBLEM
13 Dec 2022
Yu-Chan Chang
Lorenzo Ruffoni
A GRAPHICAL DESCRIPTION OF THE BNS-INVARIANTS OF BESTVINA-BRADY GROUPS AND THE RAAG RECOGNITION PROBLEM
13 Dec 2022. arXiv:2212.06901v1 [math.GR]
A finitely presented Bestvina-Brady group (BBG) admits a presentation involving only commutators. We show that if a graph admits a certain type of spanning trees, then the associated BBG is a right-angled Artin group (RAAG). As an application, we obtain that the class of BBGs contains the class of RAAGs. On the other hand, we provide a criterion to certify that certain finitely presented BBGs are not isomorphic to RAAGs (or more general Artin groups). This is based on a description of the Bieri-Neumann-Strebel invariants of finitely presented BBGs in terms of separating subgraphs, analogous to the case of RAAGs. As an application, we characterize when the BBG associated to a 2-dimensional flag complex is a RAAG in terms of certain subgraphs.
Introduction
Let Γ be a finite simplicial graph and denote its vertex set and edge set by V (Γ) and E(Γ), respectively. The associated right-angled Artin group (RAAG) A Γ is the group defined by the following finite presentation
A Γ = ⟨V (Γ) ∣ [v, w] whenever (v, w) ∈ E(Γ)⟩.
RAAGs have been a central object of study in geometric group theory because of the beautiful interplay between algebraic properties of the groups and combinatorial properties of the defining graphs, and also because they contain many interesting subgroups, such as the fundamental groups of many surfaces and 3-manifolds and, more generally, of specially cubulated groups; see [HW08].
The RAAG recognition problem consists in deciding whether a given group is a RAAG. Several authors have worked on this problem for various classes of groups, for instance, the pure symmetric automorphism groups of RAAGs in [Cha+10] and [KP14], the pure symmetric outer automorphism groups of RAAGs in [DW18], and a certain class of subgroups of RAAGs and RACGs in [DL20] and of mapping class groups in [Kob12]. An analogous recognition problem for right-angled Coxeter groups has been considered in [Cun+16]. However, the RAAG recognition problem is not easy to answer in general, even when the given group shares some essential properties with RAAGs. For example, the group G with the following presentation
G = ⟨a, b, c, d, e ∣ [a, b], [b, c], [c, d], [b⁻¹c, e]⟩
is finitely presented with only commutator relators; it is CAT(0) and splits as a graph of free abelian groups. However, it is not a RAAG; see [PS07,Example 2.8]. Even more is true: Bridson [Bri20] showed that there is no algorithm to determine whether or not a group presented by commutators is a RAAG, answering a question by Day and Wade [DW18,Question 1.2].
In this article, we study the RAAG recognition problem for a class of normal subgroups of RAAGs, namely, the Bestvina-Brady groups (BBGs). Let χ∶ A Γ → Z be the homomorphism sending all the generators to 1. The BBG defined on Γ is the kernel of χ and is denoted by BB Γ . For example, the group G from above is the BBG defined on the trefoil graph (see Figure 1). BBGs were introduced and studied in [BB97], and they have become popular as a source of pathological examples in the study of finiteness properties and cohomology of groups. For instance, some BBGs are finitely generated but not finitely presented; and there are some BBGs that are potential counterexamples to either the Eilenberg-Ganea conjecture or the Whitehead conjecture. Inspired by the example of the group G from above, we are interested in understanding how much a BBG can be similar to a RAAG without being a RAAG. In particular, we are interested in a criterion that can be checked directly on the defining graph. It is well-known that two RAAGs are isomorphic if and only if their defining graphs are isomorphic; see [Dro87a]. However, this is not the case for BBGs. For instance, the BBG defined on a tree with n vertices is always the free group of rank n − 1. Nevertheless, some features of BBGs can still be seen directly from the defining graphs. For example, it was proved in [BB97] that BB Γ is finitely generated if and only if Γ is connected; and BB Γ is finitely presented if and only if the flag complex ∆ Γ associated to Γ is simply connected. When a BBG is finitely generated, an explicit presentation was found by Dicks and Leary [DL99]. More properties that have been discussed from a graphical perspective include various cohomological invariants in [PS07; DPS08; PS10; LS11], Dehn functions in [Cha21], and graph of groups decompositions in [Cha22;BRY21;DR22].
In this paper, we add to this list a solution to the RAAG recognition problem for BBGs whose associated flag complexes are 2-dimensional (equivalently, the largest complete subgraphs of the defining graphs are triangles). We note that it is natural to make two assumptions. The first one is that Γ is biconnected, that is, it has no cut vertices (otherwise, one can split BB Γ as the free product of the BBGs on the biconnected components of Γ; see Corollary 4.22). The second assumption is that the associated flag complex ∆ Γ is simply connected (otherwise, the group BB Γ is not even finitely presented). Our solution to the RAAG recognition problem in dimension 2 is in terms of the presence or absence of two particular types of subgraphs. A tree 2-spanner T of the graph Γ is a spanning tree such that for any two vertices x and y, we have d T (x, y) ≤ 2d Γ (x, y). A crowned triangle of the associated flag complex ∆ Γ is a triangle whose edges are not on the boundary of ∆ Γ (see §5 for the precise definition). For instance, the central triangle of the trefoil graph in Figure 1 is a crowned triangle.
Theorem A. Let Γ be a biconnected graph such that ∆ Γ is 2-dimensional and simply connected. Then the following statements are equivalent.
(1) Γ admits a tree 2-spanner.
(2) ∆ Γ does not contain crowned triangles.
(3) BB Γ is a RAAG. (4) BB Γ is an Artin group.
Our proof of Theorem A relies on two conditions that are independent, in the sense that they work separately and regardless of the dimension of ∆ Γ . The first one is a sufficient condition for a BBG to be a RAAG that is based on the existence of a tree 2-spanner (see §1.1). The second one is a sufficient condition for any finitely generated group not to be a RAAG that is based on certain properties of the Bieri-Neumann-Strebel invariant (BNS-invariant) and may be of independent interest (see §1.3). We prove that these two conditions are equivalent when the flag complex ∆ Γ is 2-dimensional (see §1.4).
This allows one to recover the fact that the group G from above (that is, the BBG defined on the trefoil graph from Figure 1) is not a RAAG. This was already known by the results of [PS07] or [DW18]. While the results in these two papers apply to groups that are more general than the group G, they do not address the case of a very minor modification of that example, such as the BBG defined on the graph in Figure 2. This BBG shares with the group G all the properties described above. But again, it is not a RAAG by Theorem A since the defining graph contains a crowned triangle; see Example 5.14.
The features of the BNS-invariant that we use to show that a BBG is not a RAAG turn out to imply that the BBG cannot even be a more general Artin group. This relies on the theory of resonance varieties developed by Papadima and Suciu in [PS06; PS07]. Roughly speaking, we show that for the BBGs under consideration in this paper, the resonance varieties are the same as the complements of the BNS-invariants (see §4.6). 1.1. The condition to be a RAAG: tree 2-spanners. As observed in [PS07, Corollary 2.3], when BB Γ is finitely presented, any spanning tree T of Γ provides a finite presentation whose relators are commutators. If T is a tree 2-spanner, then this presentation can actually be simplified to a standard RAAG presentation, and in particular, the group BB Γ is a RAAG. We can even identify the defining graph for this RAAG in terms of the dual graph T * of T , that is, the graph whose vertices are edges of T , and two vertices are adjacent if and only if the corresponding edges of T are contained in the same triangle of Γ. Note that the following result does not have any assumption on the dimension of ∆ Γ .
Theorem B. If Γ admits a tree 2-spanner T , then BB Γ is a RAAG. More precisely, the Dicks-Leary presentation can be simplified to the standard RAAG presentation with generating set E(T ). Moreover, we have BB Γ ≅ A T * .
Here are two applications. The first one is that if Γ admits a tree 2-spanner, then ∆ Γ is contractible; see Corollary 3.9. The second application is that for any graph Λ, the BBG defined on the cone over Λ is isomorphic to A Λ , regardless of the structure of Λ; see Corollary 3.10. This means that the class of BBGs contains the class of RAAGs. That is, every RAAG arises as the BBG defined on some graph.
As we have mentioned before, two RAAGs are isomorphic if and only if their defining graphs are isomorphic, but this is not true for BBGs. However, when two graphs admit tree 2-spanners, the associated BBGs and the dual graphs completely determine each other. Corollary 1. Let Γ and Λ be two graphs admitting tree 2-spanners T Γ and T Λ , respectively. Then BB Γ ≅ BB Λ if and only if T * Γ ≅ T * Λ . This provides new examples of non-isomorphic graphs defining isomorphic BBGs; see Example 3.7. On the other hand, when Γ does not admit a tree 2-spanner, the presentation for BB Γ associated to any spanning tree is never a RAAG presentation. However, there might be a RAAG presentation not induced by a spanning tree. In order to obstruct this possibility, we need to look for invariants that do not depend on the choice of a generating set. We will consider the BNS-invariant Σ 1 (BB Γ ) of BB Γ .
1.2. The BNS-invariants of BBGs from the defining graphs. The BNS-invariant Σ 1 (G) of a finitely generated group G is a certain open subset of the character sphere S(G), that is, the unit sphere in the space of group homomorphisms Hom(G, R). This invariant was introduced in [BNS87] as a tool to study finiteness properties of normal subgroups of G with abelian quotients, such as kernels of characters. In general, the BNS-invariants are hard to compute.
The BNS-invariants of RAAGs have been characterized in terms of the defining graphs by Meier and VanWyk in [MV95]. The BNS-invariants of BBGs are less understood. In [PS10, Theorem 15.8], Papadima and Suciu gave a cohomological upper bound for the BNS-invariants of BBGs. Recently, Kochloukova and Mendonça have shown in [KM22, Corollary 1.3] how to reconstruct the BNS-invariant of a BBG from that of the ambient RAAG. However, an explicit description of the BNS-invariant of a BBG in terms of its defining graph is still needed (recall that the correspondence between BBGs and graphs is not as explicit as in the case of RAAGs).
Since the vertices of Γ are generators for A Γ , a convenient way to describe characters of A Γ is via vertex-labellings. Inspired by this, in the present paper, we encode characters of BB Γ as edge-labellings. This relies on the fact that the edges of Γ form a generating set for BB Γ (see [DL99] and §4.1.3). We obtain the following graphical criterion for a character of a BBG to belong to the BNS-invariant. The condition appearing in the following statement involves the dead edge subgraph DE(χ) of a character χ of BB Γ , which is the graph consisting of edges on which χ vanishes. This is reminiscent of the living subgraph criterion for RAAGs in [MV95]. However, it turns out that the case of BBGs is better understood in terms of the dead edge subgraph (see Example 4.16). An analogous dead subgraph criterion for RAAGs was considered in [BRY21].
Theorem C (Graphical criterion for the BNS-invariant of a BBG). Let Γ be a biconnected graph with ∆ Γ simply connected. Let χ ∈ Hom(BB Γ , R) be a nonzero character. Then [χ] ∈ Σ 1 (BB Γ ) if and only if DE(χ) does not contain a full subgraph that separates Γ.
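To make the criterion concrete, here is a brute-force sketch (our own, exponential in the graph size and meant only for small examples; the function name and the edge-labelling convention are ours). It takes Γ as a networkx graph and a character encoded as labels on edges; note that we simply trust the caller that the labels satisfy the cycle conditions needed to define an honest character of BB Γ.

```python
# Brute-force check of the dead-edge criterion of Theorem C (small graphs only).
from itertools import combinations
import networkx as nx

def in_bns_invariant(gamma: nx.Graph, chi: dict) -> bool:
    """chi maps each edge, keyed as frozenset({u, v}), to the real number chi(e)."""
    dead = nx.Graph()                                   # the dead edge subgraph DE(chi)
    dead.add_edges_from(e for e in gamma.edges if chi[frozenset(e)] == 0)
    for k in range(1, len(gamma)):
        for S in combinations(dead.nodes, k):
            full = gamma.subgraph(S)                    # the full subgraph of gamma on S
            contained = all(dead.has_edge(*e) for e in full.edges)
            rest = gamma.subgraph(set(gamma) - set(S))
            if contained and not nx.is_connected(rest):
                return False        # DE(chi) contains a full subgraph separating gamma
    return True
```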
Theorem C allows one to work explicitly in terms of graphs with labelled edges. In particular, we show in Corollary 4.21 that the following are equivalent: the graph Γ is biconnected, the BNS-invariant Σ 1 (BB Γ ) is nonempty, and BB Γ algebraically fibers (that is, it admits a homomorphism to Z with finitely generated kernel).
In the same spirit, we obtain the following graphical description for (the complement of) the BNS-invariants of BBGs. Here, a missing subsphere is a subsphere of the character sphere that is in the complement of the BNS-invariant (see §4.1 for details).
Theorem D (Graphical description of the BNS-invariant of a BBG). Let Γ be a biconnected graph with ∆ Γ simply connected. Then Σ 1 (BB Γ ) c is a union of missing subspheres corresponding to full separating subgraphs. More precisely,
(1) Σ 1 (BB Γ ) c = ⋃ Λ S Λ ,
where Λ ranges over the minimal full separating subgraphs of Γ. (2) There is a bijection between maximal missing subspheres in Σ 1 (BB Γ ) c and minimal full separating subgraphs of Γ.
In particular, as observed in [KM22,Corollary 1.4], the set Σ 1 (BB Γ ) c carries a natural structure of a rationally defined spherical polyhedron. A set of defining equations can be computed directly by looking at the minimal full separating subgraphs of Γ. This is analogous to the case of RAAGs; see [MV95].
As a corollary of our description, we can identify the complement of the BNS-invariant with the first resonance variety (see Proposition 4.43). This improves the inclusion from [PS10, Theorem 15.8] to an equality. Once again, this is analogous to the case of RAAGs; see [PS06, Theorem 5.5]. It should be noted that there are groups for which the inclusion is strict; see [Suc21].
1.3. The condition not to be a RAAG: redundant triangles. The BNS-invariant of a RAAG or BBG is the complement of a certain arrangement of subspheres of the character sphere. (Equivalently, one could consider the arrangement of linear subspaces given by the linear span of these subspheres.) The structural properties of this arrangement do not depend on any particular presentation of the group, so this arrangement turns out to be a useful invariant. In §4.4, inspired by the work of [KP14; DW18], we consider the question of whether the maximal members in this arrangement are "in general position", that is, whether they satisfy the inclusion-exclusion principle.
In [KP14], Koban and Piggott proved that the maximal members in the arrangement for a RAAG satisfy the inclusion-exclusion principle. Day and Wade in [DW18] developed a homology theory to detect when an arrangement does not satisfy the inclusion-exclusion principle. These results can be used together with our description of the BNS-invariants of BBGs to see that many BBGs are not RAAGs. However, some BBGs elude Day-Wade's homology theory, such as the BBG defined on the graph in Figure 2. This motivated us to find an additional criterion to certify that a group G is not a RAAG. A more general result in Proposition 4.36 roughly says that if there are three maximal subspheres of Σ 1 (G) c that are not "in general position", then G is not a RAAG.
We are able to apply Proposition 4.36 to a wide class of BBGs. This is based on the notion of a redundant triangle. Loosely speaking, a redundant triangle is a triangle in Γ such that the links of its vertices are separating subgraphs of Γ that do not overlap too much (see §4.5 for the precise definition). The presence of such a triangle provides a triple of missing subspheres (in the sense of our graphical description; see Theorem D) that does not satisfy the inclusion-exclusion principle.
Theorem E. Let Γ be a biconnected graph such that ∆ Γ is simply connected. If Γ has a redundant triangle, then BB Γ is not a RAAG.
We emphasize that Theorem E works without any assumptions on the dimension of ∆ Γ . On the other hand, the obstruction is 2-dimensional, in the sense that it involves a triangle, regardless of the dimension of ∆ Γ ; see Example 5.17.
1.4. The 2-dimensional case: proof of Theorem A. The two conditions described in §1.1 and §1.3 are complementary when Γ is biconnected and ∆ Γ is 2-dimensional and simply connected. This follows from some structural properties enjoyed by 2-dimensional flag complexes.
In Proposition 5.11, we establish that Γ admits a tree 2-spanner if and only if ∆ Γ does not contain crowned triangles. The "if" direction relies on a decomposition of ∆ Γ into certain elementary pieces, namely, the cones over certain 1-dimensional flag complexes. It then follows from Theorem B that BB Γ is a RAAG.
On the other hand, we show in Lemma 5.13 that every crowned triangle is redundant in dimension two. It then follows from Theorem E that BB Γ is not a RAAG. The theory of resonance varieties (see §4.6) allows us to conclude that BB Γ cannot be a more general Artin group either. Figure 3 illustrates the various implications. The only implication we do not prove directly is that if BB Γ is a RAAG, then Γ has a tree 2-spanner. This implication follows from the other ones, and in particular, it means that one can write down the RAAG presentation for BB Γ associated to the tree 2-spanner. This fact is a priori not obvious but quite satisfying.
For the sake of completeness, we note that Theorem A fails for higher-dimensional flag complexes; see Remark 5.15.
Figure 3. The implications in the proof of Theorem A: the conditions "BB Γ is a RAAG (equivalently, an Artin group)", "Γ admits a tree 2-spanner", and "∆ Γ does not contain crowned triangles" are linked by Theorem B, Theorem E, and Proposition 5.11.
1.5. Structure of the paper. The rest of the paper is organized as follows. In §2, we fix some terminology and give some background on BBGs. In §3, we study tree 2-spanners and use them to provide a sufficient condition for a BBG to be a RAAG (Theorem B). We also give many examples. In §4, we present a graphical criterion (Theorem C) and a graphical description (Theorem D) for the BNS-invariants of BBGs. We use this to provide a sufficient condition for a BBG not to be a RAAG (Theorem E). This is based on a study of the inclusion-exclusion principle for the arrangements that define the complement of the BNS invariants. We discuss the relation with resonance varieties in §4.6. In §5, we provide a solution to the RAAG recognition problem for BBGs defined on 2-dimensional flag complexes (Theorem A). In the end, we include some observations about the higher dimensional case.
2.1. Graphs. Γ is a finite 1-dimensional simplicial complex, not necessarily connected. We denote by V (Γ) the set of its vertices and by E(Γ) the set of its edges. We do not fix any orientation on Γ, but we often need to work with oriented edges. If e is an oriented edge, then we denote its initial vertex and terminal vertex by ιe and τe, respectively; we denote by ē the same edge with opposite orientation. We always identify edges of Γ with the unit interval and equip Γ with the induced length metric. A subgraph of Γ is a simplicial subcomplex, possibly not connected, possibly not full. A path, a cycle, and a complete graph on n vertices are denoted by P n , C n , and K n , respectively. (Note that by definition, there is no repetition of edges in a path or cycle.) A clique of Γ is a complete subgraph. A tree is a simply connected graph. A spanning tree of a graph Γ is a subgraph T ⊆ Γ such that T is a tree and V (T) = V (Γ).
The link of a vertex v ∈ V (Γ), denoted by lk (v, Γ), is the full subgraph induced by the vertices that are adjacent to v. The star of v in Γ, denoted by st (v, Γ) , is the full subgraph on lk (v, Γ) ∪ {v}. More generally, let Λ be a subgraph of Γ. The link of Λ is the full subgraph lk (Λ, Γ) induced by vertices at distance 1 from Λ. The star of Λ in Γ, denoted by st (Λ, Γ) , is the full subgraph on lk (Λ, Γ) ∪ V (Λ).
The join of two graphs Γ 1 and Γ 2 , denoted by Γ 1 * Γ 2 , is the full graph on V (Γ 1 ) ∪ V (Γ 2 ) together with an edge joining each vertex in V (Γ 1 ) to each vertex in V (Γ 2 ). A vertex in a graph that is adjacent to every other vertex is called a cone vertex. A graph that has a cone vertex is called a cone graph. In other words, a cone graph Γ can be written as a join {v} * Γ ′ . In this case, we also say that Γ is a cone over Γ ′ . The complement of Λ in Γ is the full subgraph Γ ∖ Λ spanned by V (Γ) ∖ V (Λ). We say that Λ is separating if Γ ∖ Λ is disconnected. A cut vertex of Γ is a vertex that is separating as a subgraph. A cut edge of Γ is an edge that is separating as a subgraph. A graph is biconnected if it has no cut vertices. If a graph is not biconnected, its biconnected components are the maximal biconnected subgraphs.
Given a graph Γ, the flag complex ∆ Γ on Γ is the simplicial complex obtained by gluing a k-simplex to Γ for every collection of k + 1 pairwise adjacent vertices of Γ (for k ≥ 2). The dimension of ∆ Γ is denoted by dim ∆ Γ and defined to be the maximal dimension of a simplex in ∆ Γ . (If ∆ Γ 1-dimensional, then it coincides with Γ, and the following terminology agrees with the one introduced before.) If Z is a subcomplex of ∆ Γ , the link of Z in ∆ Γ , denoted by lk (Z, ∆ Γ ), is defined as the full subcomplex of ∆ Γ induced by the vertices at a distance one from Z. Similarly, the star of Z in ∆ Γ , denoted by st (Z, ∆ Γ ), is defined as the full subcomplex induced by lk (Z, ∆ Γ ) ∪ Z.
2.2. The Dicks-Leary presentation. Let Γ be a graph, and let A Γ be the associated RAAG. Let χ∶ A Γ → Z be the homomorphism sending all the generators to 1. The Bestvina-Brady group (BBG) on Γ, denoted by BB Γ , is defined to be the kernel of χ. When Γ is connected, the group BB Γ is finitely generated (see [BB97]) and has the following (infinite) presentation, called the Dicks-Leary presentation.
Theorem 2.1 ([DL99, Theorem 1]). Let Γ be a graph. If Γ is connected, then BB Γ is generated by the set of oriented edges of Γ, and the relators are words of the form e_1^n ⋯ e_l^n for each oriented cycle (e 1 , . . . , e l ), where n, l ∈ Z, n ≠ 0, and l ≥ 2. Moreover, the group BB Γ embeds in A Γ via e ↦ τe(ιe)⁻¹ for each oriented edge e.
For some interesting classes of graphs, the Dicks-Leary presentation can be considerably simplified. For instance, when the flag complex ∆ Γ on Γ is simply connected, the group BB Γ admits the following finite presentation.
Corollary 2.2 ([DL99, Corollary 3]).
When the flag complex ∆ Γ on Γ is simply connected, the group BB Γ admits the following finite presentation: the generating set is the set of the oriented edges of Γ, and the relators are eē = 1 for every oriented edge e, and e i e j e k = 1 and e k e j e i = 1 whenever (e i , e j , e k ) form an oriented triangle; see Figure 4. Example 2.4. If Γ is a tree on n vertices, then BB Γ is a free group of rank n − 1. If Γ = K n is a complete graph on n vertices, then BB Γ = Z n−1 .
Moreover, as observed by Papadima and Suciu, the edge set of a spanning tree is already enough to generate the whole group. Remark 2.6 (Oriented vs unoriented edges). The presentation from Corollary 2.2 is very symmetric but clearly redundant because each (unoriented) edge appears twice. The orientation is just an accessory tool, and one can obtain a shorter presentation by choosing an arbitrary orientation for each edge e, dropping the relator eē, and allowing inverses in the relators whenever needed. For instance, this is what happens in Corollary 2.5. Strictly speaking, each choice of orientation for the edges results in a slightly different presentation. However, switching the orientation of an edge simply amounts to replacing a generator with its inverse. Therefore, in the following sections, we will naively regard the generators in Corollary 2.5 as being given by unoriented edges of T , and we will impose a specific orientation only when needed in a technical argument.
BBGs that are RAAGs
When Γ is a tree or complete graph, the group BB Γ is a free group or abelian group, respectively. Hence, it is a RAAG (see Example 2.4). In this section, we identify a wider class of graphs whose associated BBGs are RAAGs.
3.1. Tree 2-spanners. Let Γ be a connected graph. Recall from the introduction that a tree 2-spanner of Γ is a spanning tree T of Γ such that for all x, y ∈ V (T ), we have d T (x, y) ≤ 2d Γ (x, y). If Γ is a tree, then Γ is a tree 2-spanner of itself. Here we are interested in more general graphs which admit tree 2-spanners. We start by proving some useful properties of tree 2-spanners. Lemma 3.1. Let T be a tree 2-spanner of Γ, and let e ∈ E(Γ). Then either e ∈ E(T ) or there is a unique triangle (e, f, g) such that f, g ∈ E(T ).
Proof. Write e = (x, y), then d T (x, y) ≤ 2d Γ (x, y) = 2. If e is not an edge of T , then d T (x, y) = 2. So, there must be some z ∈ V (T ) adjacent to both x and y in Γ such that the edges f = (x, z) and g = (y, z) are in T . Obviously, the edges e, f , and g form a triangle.
To see that such a triangle is unique, let (e, f ′ , g ′ ) be another triangle such that f ′ , g ′ ∈ E(T ). Then (f, g, f ′ , g ′ ) is a cycle in the spanning tree T , which leads to a contradiction. Proof. Let (e, f, g) be a triangle in Γ, and assume by contradiction that e ∈ E(T ) but f, g ∈ E(T ). Then by Lemma 3.1, the edges f and g are contained in uniquely determined triangles (f, f 1 , f 2 ) and (g, g 1 , g 2 ), respectively, with f 1 , f 2 , g 1 , g 2 ∈ E(T ).
Then (e, f 1 , f 2 , g 1 , g 2 ) is a loop in T , which is absurd since T is a tree. Lemma 3.3. Let T be a tree 2-spanner of Γ, and let (e, f, g) be a triangle in Γ with no edges from T . Then there are edges e ′ , f ′ , g ′ ∈ E(T ) that together with e, f, g form a K 4 in Γ.
Proof. By Lemma 3.1, there are uniquely determined triangles (e, e 1 , e 2 ), (f, f 1 , f 2 ), and (g, g 1 , g 2 ) such that e 1 , e 2 , f 1 , f 2 , g 1 , g 2 ∈ E(T ). Let v e be a common vertex shared by e 1 and e 2 and similarly define v f and v g . If at least two vertices among v e , v f , and v g are distinct, then concatenating the edges e 1 , e 2 , f 1 , f 2 , g 1 , g 2 gives a non-trivial loop in T , which is absurd. Thus, we have v e = v f = v g . Therefore, there is a K 4 induced by the vertex v e and the triangle (e, f, g).
We establish the following result about the global structure of ∆ Γ . (We will prove in Corollary 3.9 that if Γ is a tree 2-spanner, then ∆ Γ is even contractible.) Lemma 3.4. If Γ has a tree 2-spanner, then ∆ Γ is simply connected.
Proof. It is enough to check that every cycle of Γ bounds a disk in ∆ Γ . Let T be a tree 2-spanner of Γ, and let C = (e 1 , e 2 , . . . , e n ) be a cycle of Γ. If n = 3, then by construction C bounds a triangle in ∆ Γ . So, we may assume n ≥ 4. If C contains a pair of vertices connected by an edge not in C, then C can be split into the concatenation of two shorter cycles. So, we assume that C contains no such pair of vertices, that is, C is a chordless cycle. In particular, all edges of C are distinct.
For each e i ∈ E(C), either e i ∈ E(T) or e i ∉ E(T). In the second case, by Lemma 3.1, there are two edges e − i and e + i in E(T) such that (e i , e − i , e + i ) form a triangle. We denote by w i the common vertex of e − i and e + i . Note that w i ∉ V (C) and e − i , e + i ∉ E(C) because C is assumed to be chordless and of length n ≥ 4. Let L be the loop obtained by the following surgery on C (see Figure 5, left): for each edge e i , if e i ∈ E(T), then keep it; otherwise, replace it with the concatenation of the two edges e − i and e + i . Then L is a loop made of edges of T. Since T is a tree, the loop L is contractible. Thus, if we start from a vertex of L and travel along the edges of L back to the starting vertex, then we must travel along each edge an even number of times in opposite directions.
Since e − i , e + i ∉ E(C), each edge of C appears at most once in L. So, if some edge of C appears in L, then L is not contractible. This proves that E(C) ∩ E(T) = ∅, and therefore, we have L = (e − 1 , e + 1 , e − 2 , e + 2 , . . . , e − n , e + n ). Once again, since the edges of L must appear an even number of times, the loop L contains a repeated edge. That is, we have e + i = e − i+1 and w i = w i+1 for some i. Deleting the vertex of C shared by e i and e i+1 (and the repeated edge), we obtain a shorter cycle in T , made of edges from L. Iterating the process, we see that w 1 , . . . , w n are all actually the same vertex, say w (see Figure 5, right). Notice that every vertex of C is adjacent to w, so C is entirely contained in st (w, Γ). Therefore, the cycle C bounds a triangulated disk in ∆ Γ as desired. In the next statement, we show that if Γ has a tree 2-spanner, then BB Γ is a RAAG. Even more: the tree 2-spanner itself provides a RAAG presentation for BB Γ . Let T be a tree 2-spanner for Γ. Recall from the introduction that the dual graph T * of T is the graph whose vertices are edges of T , and two vertices are adjacent if and only if the corresponding edges of T are contained in the same triangle of Γ. Roughly speaking, the dual graph encodes the way in which T sits inside Γ.
Theorem B. If Γ admits a tree 2-spanner T , then BB Γ is a RAAG. More precisely, the Dicks-Leary presentation can be simplified to the standard RAAG presentation with generating set E(T ). Moreover, we have BB Γ ≅ A T * .
Proof. Let T be a tree 2-spanner of Γ. By Lemma 3.4, the flag complex ∆ Γ is simply connected. By Corollary 2.2, the Dicks-Leary presentation for BB Γ is finite. The generators are the oriented edges of Γ, and the relators correspond to the oriented triangles in Γ. By Corollary 2.5, the presentation can be further simplified by discarding all edges not in T to obtain a presentation that only involves commutators between words in the generators. We explicitly note that to achieve this, one also needs to choose an arbitrary orientation for each edge of T (compare Remark 2.6). To ensure that the resulting presentation is a standard RAAG presentation, we need to check that it is enough to use relators that are commutators of edges of T (as opposed to commutators of more general words). In order to do this, we check what happens to the Dicks-Leary presentation from Corollary 2.2 when we remove a generator corresponding to an edge that is not in T . The relators involving such an edge correspond to the triangles of Γ that contain it. One of them is the special triangle from Lemma 3.1, and there might be other ones corresponding to other triangles.
Let e ∈ E(Γ) ∖ E(T ). By Lemma 3.1, we know that there is a unique triangle (e, f, g) with f, g ∈ E(T ). Then (e ε1 , f ε2 , g ε3 ) is an oriented triangle (in the sense of Figure 4) for some suitable ε j = ±1, where the negative exponent stands for a reversal in the orientation. When we drop e from the generating set, the relations e ε1 f ε2 g ε3 = 1 = g ε3 f ε2 e ε1 can be replaced by f ε2 g ε3 = e −ε1 = g ε3 f ε2 , hence, by the commutator [f ε2 , g ε3 ] (compare with Remark 2.3). But such a commutator can always be replaced by [f, g]. This is completely insensitive to the chosen orientation. This shows that the relators of the presentation from Corollary 2.2, which arise from the triangles provided by Lemma 3.1, are turned into commutators between generators in the presentation from Corollary 2.5.
We need to check what happens to the other type of relators. We now show that they follow from the former type of relators and hence can be dropped. As before, let e ∈ E(Γ) ∖ E(T), and let (e, f ′ , g ′ ) be another triangle containing e. Since e ∉ E(T), the triangle (e, f ′ , g ′ ) has either zero or two edges in E(T) by Lemma 3.2; the latter would contradict the uniqueness in Lemma 3.1, so e, f ′ , g ′ ∉ E(T). Therefore, by Lemma 3.3, there are e ′′ , f ′′ , g ′′ ∈ E(T) that form a K 4 together with e, f ′ , and g ′ ; see the left picture of Figure 6. Up to relabelling, say that e ′′ is the edge of this K 4 that is disjoint from e. Then (e, f ′′ , g ′′ ) is a triangle containing e with f ′′ , g ′′ ∈ E(T). Again, since the triangle (e, f, g) is unique, we have {f ′′ , g ′′ } = {f, g}. In particular, the triangles (e, f, g) and (e, f ′ , g ′ ) are part of a common K 4 ; see the right picture of Figure 6. The edges of this K 4 that are in T are precisely e ′′ , f , and g, and any two of them commute by Remark 2.3. So, the relator ef ′ g ′ follows from the fact that e, f ′ , and g ′ can be rewritten in terms of f , g, and e ′′ . In particular, this relator can be dropped.
Therefore, the Dicks-Leary presentation for BB Γ can be simplified to a presentation in which the generating set is E(T), and the relators are commutators [e i , e j ], where e i and e j are in E(T) and are contained in the same triangle of ∆ Γ . In particular, we have BB Γ ≅ A T* .
Figure 6. The graph on the left shows the triangle (e, f, g) and a K 4 consisting of the edges e, f ′ , g ′ , e ′′ , f ′′ , and g ′′ . The red edges are in E(T). The graph on the right illustrates the uniqueness of the triangle (e, f, g).
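The dual graph T* appearing in Theorem B is straightforward to compute. Here is a small sketch (ours; the function name is hypothetical), followed by a toy cone over a path, where T* comes out as the base path, in line with the theorem:

```python
# Computing the dual graph T* of a tree 2-spanner T inside Gamma.
from itertools import combinations
import networkx as nx

def dual_graph(G: nx.Graph, T: nx.Graph) -> nx.Graph:
    """Vertices of T* are edges of T; join two when they lie in a triangle of G."""
    Tstar = nx.Graph()
    Tstar.add_nodes_from(frozenset(e) for e in T.edges)
    for tri in (c for c in nx.enumerate_all_cliques(G) if len(c) == 3):
        t_edges = [frozenset(e) for e in combinations(tri, 2) if T.has_edge(*e)]
        Tstar.add_edges_from(combinations(t_edges, 2))
    return Tstar

G = nx.Graph([(0, 1), (0, 2), (0, 3), (1, 2), (2, 3)])  # cone with apex 0 over the path 1-2-3
T = nx.Graph([(0, 1), (0, 2), (0, 3)])                  # the star at the apex
print(nx.is_isomorphic(dual_graph(G, T), nx.path_graph(3)))  # True, so BB_G = A_{P_3}
```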
Remark 3.5. It is natural to ask which graph admits a tree 2-spanner. The problem of determining whether a graph admits a tree 2-spanner is NP-complete (see [Ber87]). However, if a graph contains a tree 2-spanner, then it can be found in linear time (see [CC95,Theorem 4.5]).
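While finding a tree 2-spanner is the computationally hard part, verifying that a given spanning tree is one is immediate from the definition. A small sketch (ours, with hypothetical names):

```python
# Checking the tree 2-spanner condition d_T(x, y) <= 2 * d_Gamma(x, y).
import itertools
import networkx as nx

def is_tree_2_spanner(G: nx.Graph, T: nx.Graph) -> bool:
    assert nx.is_tree(T) and set(T) == set(G)   # T must be a spanning tree of G
    dG = dict(nx.all_pairs_shortest_path_length(G))
    dT = dict(nx.all_pairs_shortest_path_length(T))
    return all(dT[x][y] <= 2 * dG[x][y]
               for x, y in itertools.combinations(G, 2))
```

Since tree paths concatenate, it is in fact enough to test the pairs at distance 1 in G (compare Lemma 3.1), which brings the check down to a single pass over E(G).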
As a consequence, we have the following criterion to detect whether two BBGs are isomorphic in terms of the defining graphs in the special case where they admit tree 2-spanners. Corollary 1. Let Γ and Λ be two graphs admitting tree 2-spanners T Γ and T Λ , respectively. Then BB Γ ≅ BB Λ if and only if T * Γ ≅ T * Λ . Proof. The result follows from Theorem B and the fact that two RAAGs are isomorphic if and only if their defining graphs are isomorphic; see Droms [Dro87a].
Remark 3.6. In general, non-isomorphic graphs can define isomorphic BBGs. For example, any two trees with n vertices define the same BBG (the free group of rank n−1). Notice that every tree is a tree 2-spanner of itself with a totally disconnected dual graph. Even when Γ admits a tree 2-spanner with a connected dual graph, the group BB Γ does not determine Γ; see Example 3.7.
Example 3.7. Denote the graphs in Figure 7 by Γ and Λ. Let T Γ and T Λ be the tree 2-spanners of Γ and Λ, respectively, given by the red edges in the pictures. One can see that Γ ≇ Λ as well as T Γ ≇ T Λ . However, the dual graphs T * Γ and T * Λ are both isomorphic to the path on five vertices P 5 . Thus, by Theorem B and Corollary 1,
we have BB Γ ≅ A P5 ≅ BB Λ .
Corollary 3.8. If T 1 and T 2 are tree 2-spanners for Γ, then T * 1 ≅ T * 2 . Proof. Take Γ = Λ in Corollary 1. Corollary 3.9. If Γ admits a tree 2-spanner, then ∆ Γ is contractible.
Proof. By Theorem B, the group BB Γ is isomorphic to the RAAG A Λ on some graph Λ. The Salvetti complex associated to Λ is a finite classifying space for A Λ , so the group BB Γ ≅ A Λ is of type F . It follows from [BB97] that ∆ Γ is simply connected and acyclic. By the Hurewicz Theorem, the homotopy group π k (∆ Γ ) is trivial for k ≥ 1. By the Whitehead Theorem, we can conclude that ∆ Γ is contractible.
3.2.
Joins and 2-trees. In this section, we describe some ways of constructing new graphs out of old ones in such a way that the BBG defined on the resulting graph is a RAAG. Proof. Since v is a cone vertex, the edges that are incident to v form a tree 2-spanner T of Γ. By Theorem B, we know that BB Γ is a RAAG, namely, BB Γ ≅ A T * . The result follows from the observation that T * ≅ Λ.
For instance, if Γ does not contain a full subgraph isomorphic to C 4 or P 3 , then Γ is a cone (see the first Lemma in [Dro87b]), and the previous corollary applies. Actually, in this case every subgroup of A Γ is known to be a RAAG by the main Theorem in [Dro87b].
Remark 3.11. Corollary 3.10 implies that the class of BBGs contains the class of RAAGs, that is, every RAAG arises as the BBG defined on some graph. Remark 3.12. Corollary 3.10 indicates that the fact that BB Γ is not a RAAG is not obviously detected by subgraphs in general. Indeed, if Γ is a cone over Λ, then BB Γ is always a RAAG, regardless of the fact that BB Λ is a RAAG or not.
Corollary 3.13. Let Λ be a graph and Γ ′ a cone graph. If Γ = Γ ′ * Λ, then BB Γ is a RAAG.
Proof. Since Γ ′ is a cone graph, so is Γ. Therefore, the group BB Γ is a RAAG by Corollary 3.10.
Corollary 3.14. If A Γ has non-trivial center, then BB Γ is a RAAG.
Proof. By [Ser89, The Centralizer Theorem], when A Γ has non-trivial center, there is a complete subgraph Γ ′ ⊆ Γ such that each vertex of Γ ′ is adjacent to every other vertex of Γ. That is, the graph Γ decomposes as Γ = Γ ′ * Λ, where V (Λ) = V (Γ) ∖ V (Γ) ′ .
Since a complete graph is a cone graph, the result follows from Corollary 3.13. Remark 3.15. BBGs defined on arbitrary graph joins are not necessarily isomorphic to RAAGs. For example, the cycle of length four C 4 is the join of two pairs of non-adjacent vertices. The associated RAAG is F 2 × F 2 , and the associated BBG is not a RAAG because it is not even finitely presented (see [BB97]).
2-trees.
Roughly speaking, a 2-tree is a graph obtained by gluing triangles along edges in a tree-like fashion. More formally, the class of 2-trees is defined recursively as follows: the graph consisting of a single edge is a 2-tree, and then a graph Γ is a 2-tree if it contains a vertex v such that the neighborhood of v in Γ is an edge and the graph obtained by removing v from Γ is still a 2-tree. The trefoil graph from Figure 1 is an explicit example of a 2-tree. A general 2-tree may not be a triangulation of a 2-dimensional disk as it can have branchings; see Figure 8 for an example. It is not hard to see that the flag complex on a 2-tree is simply connected. So, the associated BBG is finitely presented and has only commutator relators.
Cai showed that a 2-tree contains no trefoil subgraphs if and only if it admits a tree 2-spanner; see [Cai97,Proposition 3.2]. The next corollary follows from Cai's result and Theorem B. In Section 5, we will prove a more general result that especially implies the converse of the following statement.
Corollary 3.16. Let Γ be a 2-tree. If Γ is trefoil-free, then BB Γ is a RAAG.
BBGs that are not RAAGs
While in the previous section, we have provided a condition on Γ to ensure that BB Γ is a RAAG. In this section, we want to obtain a condition on Γ to guarantee that BB Γ is not a RAAG. The main technical tool consists of a description of the BNS-invariants of BBGs in terms of the defining graphs.
4.1. BNS-invariants of finitely generated groups. Let G be a finitely generated group. A character of G is a homomorphism χ∶ G → R. Two characters χ 1 and χ 2 are equivalent, denoted by χ 1 ∼ χ 2 , whenever χ 1 = λχ 2 for some positive real number λ. Denote by [χ] the equivalence class of χ. The set of equivalence classes of non-zero characters of G is called the character sphere of G:
S(G) = [χ] χ ∈ Hom(G, R) ∖ {0} .
The character sphere naturally identifies with the unit sphere in Hom(G, R) (with respect to some background inner product), so by abuse of notation, we will often write S(G) ⊆ Hom(G, R). A character χ∶ G → R is called integral, rational, or discrete if its image is an infinite cyclic subgroup of Z, Q, or R, respectively. In particular, an integral character is rational, a rational character is discrete, and the equivalent class of a discrete character always contains an integral representative.
Let S be a finite generating set for G, and let Cay(G, S) be the Cayley graph for G with respect to S. Note that the elements of G are identified with the vertex set of the Cayley graph. For any character χ∶ G → R, let Cay(G, S) χ≥0 be the full subgraph of Cay(G, S) spanned by {g ∈ G χ(g) ≥ 0}. Bieri, Neumann, and Strebel [BNS87] introduced a geometric invariant of G, known as the BNS-invariant Σ 1 (G) of G, which is defined as the following subset of S(G):
Σ 1 (G) = [χ] ∈ S(G) Cay(G, S) χ≥0 is connected .
They also proved that the BNS-invariant of G does not depend on the generating set S.
The interest in Σ 1 (G) is due to the fact that it can detect finiteness properties of normal subgroups with abelian quotients, such as kernels of characters. For instance, the following statement can be taken as an alternative definition of what it means for a discrete character to belong to Σ 1 (G) (see [BNS87,§4] For each group G of interest in this paper, it admits an automorphism that acts as the antipodal map χ ↦ −χ on Hom(G, R). In this case, the BNS-invariant Σ 1 (G) is invariant under the antipodal map. Therefore, its rational points correspond exactly to discrete characters with finitely generated kernels.
1 (G) is by definition Σ 1 (G) c = S(G) ∖ Σ 1 (G). A great subsphere is defined as a subsphere of S(G) of the form S W = S(G)∩W , where W is a linear subspace of Hom(G, R) going through the origin. We say that a great subsphere S W is a missing subsphere if S W ⊆ Σ 1 (G) c .
The subspace W is the linear span of S W and is called a missing subspace. [MV95]. Let Γ be a graph and χ∶ A Γ → R a character of A Γ . Define the living subgraph L(χ) of χ to be the full subgraph of Γ on the vertices v with χ(v) ≠ 0 and the dead subgraph D(χ) of χ to be the full subgraph of Γ on the vertices v with χ(v) = 0. Note that L(χ) and D(χ) are disjoint, and they do not necessarily cover Γ. By Theorem 4.1, if χ is a discrete character, then L(χ) detects whether ker(χ) is finitely generated. Indeed, in a RAAG, the map sending a generator to its inverse is a group automorphism. Hence, the set Σ 1 (A Γ ) is invariant under the antipodal map χ ↦ −χ (see Remark 4.2).
The BNS-invariants of RAAGs. The BNS-invariants of RAAGs have a nice description given by Meier and VanWyk
A subgraph Γ ′ of Γ is dominating if every vertex of Γ ∖ Γ ′ is adjacent to some vertex of Γ ′ .
We find it convenient to work with the following reformulation of the condition in Theorem 4.4. It previously appeared inside the proof of [BRY21, Corollary 3.4]. We include a proof for completeness. For the sake of clarity, in the following lemma, the graph Λ is a subgraph of D(χ) that is separating as a subgraph of Γ, but it may not separate D(χ) (see Figure 9 for an example). (1) The living graph L(χ) is either not connected or not dominating.
(2) There exists a full subgraph Λ ⊆ Γ such that Λ separates Γ and Λ ⊆ D(χ).
Proof. We begin by proving that (1) implies (2).
If L(χ) is not connected, then Λ = D(χ) separates Γ. If L(χ) is not dominating, then there is a vertex v ∈ V (Γ)
such that χ vanishes on v and on all the vertices adjacent to v. Since χ is non-zero, the vertex v is not a cone vertex. In particular, the graph Λ = lk (v, Γ) is a subgraph of D(χ). Moreover, the subgraph Λ is a full separating subgraph of Γ, as desired.
To prove that (2) implies (1), assume that L(χ) is connected and dominating. Let Λ ⊆ D(χ) be a full subgraph of Γ, and let u 1 , u 2 ∈ V (Γ) ∖ V (Λ). We want to show that u 1 and u 2 can be connected in the complement of Λ. There are three cases. Firstly, if u 1 and u 2 are vertices of L(χ), then they are connected by a path entirely in L(χ). Secondly, if both u 1 and u 2 are vertices of D(χ), then they are adjacent to some vertices in L(χ), say v 1 and v 2 , respectively. Then we can extend a path in L(χ) between v 1 and v 2 to a path between u 1 and u 2 avoiding V (Λ). Finally, suppose that u 1 is a vertex of L(χ) and u 2 is a vertex of D(χ), then u 2 is adjacent to a vertex v 2 of L(χ). Again, we can extend a path in L(χ) between u 1 and v 2 to a path between u 1 and u 2 avoiding V (Λ). As a result, we have connected u 1 to u 2 with a path disjoint from Λ. This shows that Λ is not separating, which contradicts (2).
Notice that a subgraph Λ arising from Lemma 4.5 may not be connected and may not be equal to D(χ). Also, it may not even be a union of connected components of D(χ). This is in particular true when looking for a minimal such Λ; see Figure 9.
(BB Γ ) ≠ ∅.
The following lemma shows that the BNS-invariant of a BBG is invariant under the antipodal map, as in the case of a RAAG.
Lemma 4.9. For all χ∶ BB Γ → R, if [χ] ∈ Σ 1 (BB Γ ), then [−χ] ∈ Σ 1 (BB Γ ).
Proof. Choose an orientation for the edges of Γ, and let f ∶ Γ → Γ be the map that reverses the orientation on each edge. Then f induces an automorphism f * ∶ BB Γ → BB Γ which sends every generator e to its inverse e −1 . Then the lemma follows from the fact that −χ = χ ○ f * (see Remark 4.2).
Beyond these observations, not many explicit properties are known, and more refined tools are needed. We will use a recent result of Kochloukova and Mendonça that relates the BNS-invariant of a BBG to that of the ambient RAAG. The following statement is a particular case of [KM22, Corollary 1.3].
Proposition 4.10. Let Γ be a connected graph, and let χ ∶ BB Γ → R be a character.
Then [χ] ∈ Σ 1 (BB Γ ) if and only if [χ] ∈ Σ 1 (A Γ ) for every characterχ ∶ A Γ → R that extends χ.
Coordinates and labellings.
Here, we want to describe a useful parametrization for Hom(A Γ , R) and Hom(BB Γ , R) in terms of labelled graphs. This is based on the following elementary observation about a class of groups with a particular type of presentation that includes RAAGs and BBGs. Proof. Given a homomorphism G → A, one obtains a function S → A just by restriction. Conversely, let f ∶ S → A be any function and letf ∶ F(S) → A be the induced homomorphism on the free group on S. Let r ∈ F(S) be a relator for G. Since the exponent sum of each generator in r is zero and A is abelian, we have thatf (r) is trivial in A. Therefore, the homomorphismf ∶ F(S) → A descends to a well-defined homomorphism G → A.
A typical example of a relator in which the exponent sum of every generator is zero is a commutator. In particular, the standard presentation for a RAAG and the simplified Dicks-Leary presentation for a BBG in Corollary 2.5 are presentations of this type. We now show how Lemma 4.11 can be used to introduce nice coordinates on Hom(G, R) for these two classes of groups.
Let Γ be a connected graph, and let V (Γ) = {v 1 , . . . , v n }. By Lemma 4.11, a homomorphism χ∶ A Γ → R is uniquely determined by its values on V (Γ). Therefore, we get a natural identification
Hom(A Γ , R) → R V (Γ) , χ ↦ (χ(v 1 ), . . . , χ(v n )).
In other words, a character χ∶ A Γ → R is the same as a labelling of V (Γ) by real numbers. A natural basis for Hom(A Γ , R) is given by the characters
χ 1 , . . . , χ n , where χ i (v j ) = δ ij .
For BBGs, a similar description is available in terms of edge labellings. Different from RAAGs, not every assignment of real numbers to the edges of Γ corresponds to a character. Indeed, the labels along an oriented cycle must sum to zero. So, assigning the labels on sufficiently many edges already determines the labels on the other ones. To find a clean description, we assume that the flag complex ∆ Γ is simply connected, and we fix a spanning tree T of Γ with E(T ) = {e 1 , . . . , e m }. By Corollary 2.5, we know that the Dicks-Leary presentation can be simplified to have only E(T ) as a generating set and all relators are commutators. By Lemma 4.11, we get an identification
Hom(BB Γ , R) → R E(T ) , χ ↦ (χ(e 1 ), . . . , χ(e m )).
In other words, a character χ∶ BB Γ → R is encoded by a labelling of E(T ) by real numbers. To obtain a basis for Hom(BB Γ , R), one can take the characters χ 1 , . . . , χ m , where χ i (e j ) = δ ij , with respect to some arbitrary orientation of the edges of T (compare Remark 4.12).
Remark 4.12 (Computing a character on an edge). Note that there is a slight abuse of notation: strictly speaking, in order to see an edge e as an element of BB Γ , one needs to orient it. So, it only makes sense to evaluate a character χ on an oriented edge (see Remark 2.6). However, the value of χ(e) with respect to the two possible orientations just differs by a sign. Indeed, if we change the orientation of an edge and the sign of the corresponding label, then we obtain a different description of the same character of BB Γ . In particular, it makes sense to say that a character vanishes or not on an edge, regardless of orientation. Moreover, given the symmetry of Σ 1 (BB Γ ) under sign changes (see Remark 4.2 and Lemma 4.9), it is still quite meaningful and useful to think of a character as an edge labelling for a spanning tree T .
As a result of the previous discussions, we obtain the following lemma, which we record for future reference. (1) A character χ∶ BB Γ → R is uniquely determined by its values on E(T ).
(2) Any assignment E(T ) → R uniquely extends to a character χ∶ BB Γ → R.
We conclude this section with a description of a natural restriction map. Recall from Theorem 2.1 that BB Γ embeds in A Γ via e ↦ τ e(ιe) −1 for each oriented edge e, where ιe and τ e respectively denote the initial vertex and terminal vertex of e. We have an induced restriction map Proof. The map r is clearly linear. Let us prove that it is surjective. Let χ∶ BB Γ → R be a character. Define a characterχ ∈ Hom(A Γ , R) by prescribing its values on vertices as follows. Fix some v 0 ∈ V (Γ) and chooseχ(v 0 ) ∈ R arbitrarily. Pick a vertex v ∈ V (Γ) and choose an oriented path p = e 1 . . . e k connecting v 0 to v. Here, we mean that p is oriented from v 0 to v and that all the edges along p are given the induced orientation. In particular, the edges of p can be seen as elements of BB Γ . Define the value at v to be:
r∶ Hom(A Γ , R) → Hom(BB Γ , R),χ ↦ rχ,(4.1)χ(v) =χ(v 0 ) + k i=1 χ(e i ).
We claim thatχ∶ V (Γ) → R is well-defined. Suppose that there is another oriented path p ′ = e ′ 1 . . . e ′ h from v 0 to v. Then the loop p(p ′ ) −1 gives a relator in the Dicks-Leary presentation for BB Γ . Thus, we have
χ(e 1 . . . e k (e ′ 1 . . . e ′ h ) −1 ) = 0.
In other words, we have
k i=1 χ(e i ) = h j=1 χ(e ′ j )
Therefore, the valueχ(v) does not depend on the choice of a path from v 0 to v, as desired. This provides thatχ ∶ A Γ → R is a character. A direct computation shows that for each oriented edge e, we have χ(e) =χ(τ v) −χ(ιv). That is, the character χ is an extension of χ to A Γ .
To describe the kernel of r, note that ifχ is constant on V (Γ), then for each oriented edge e we have (rχ)(e) =χ(τ e) −χ(ιe) = 0. Conversely, letχ ∈ ker(r). It follows from (4.1) thatχ(v) =χ(w) for any choice of v, w ∈ V (Γ).
Note that the (non-zero) characters defined by the constant functions V (Γ) → R all differ by a (multiplicative) constant. In particular, they all have the same kernel, which is precise BB Γ . The restriction map r has a natural linear section s∶ Hom(BB Γ , R) → Hom(A Γ , R), χ ↦ sχ, defined as follows. Letχ be any extension of χ to A Γ . Then define
sχ =χ − 1 V (Γ) v∈V (Γ)χ (v).
The image of s is a hyperplane W going through the origin that can be regarded as a copy of Hom(BB Γ , R) inside Hom(A Γ , R).
Recall that if V (Γ) = {v 1 , . . . , v n }, then Hom(A Γ , R) carries a canonical basis given by the characters χ 1 , . . . , χ n such that χ i (v j ) = δ ij . Fix an inner product that makes this basis orthonormal. Then ker(r) = span(1, . . . , 1), the hyperplane W is the orthogonal complement of ker(r), and the restriction map (or rather the composition s ○ r) is given by the orthogonal projection onto W .
It is natural to ask how this behaves with respect to the BNS-invariants, that is, whether r restricts to a map Σ 1 (A Γ ) → Σ 1 (BB Γ ). In general, this is not the case.
4.2.
A graphical criterion for Σ 1 (BB Γ ). In this subsection, we give a graphical criterion for the BNS-invariants of BBGs that is analogous to Theorem 4.4. Let χ ∈ Hom(BB Γ , R) be a non-zero character. An edge e ∈ E(Γ) is called a living edge of χ if χ(e) ≠ 0; it is called a dead edge of χ if χ(e) = 0. This is well-defined, regardless of orientation, as explained in Remark 4.12. We define the living edge subgraph, denoted by LE(χ), to be the subgraph of Γ consisting of the living edges of χ. The dead edge subgraph DE(χ) is the subgraph of Γ consisting of the dead edges of χ. We will also say that χ vanishes on any subgraph of DE(χ) because the associated labelling (in the sense of §4.1.3) is zero on each edge of DE(χ). Notice that LE(χ) and DE(χ) cover Γ, but they are not disjoint; they intersect at vertices. Also, note that LE(χ) and DE(χ) are not always full subgraphs and do not have isolated vertices. Moreover, in general, the dead subgraph of an extension of χ is only a proper subgraph of DE(χ). See Figure 10 for an example displaying all these behaviors. The next lemma establishes a relation between the dead edge subgraph of a character of a BBG and the dead subgraphs of its extensions to the ambient RAAG. Note that the statement fails without the assumption that Λ is connected (see the example in Figure 10 once again). Proof. Suppose Λ ⊆ DE(χ). By Lemma 4.14, there exists an extension of χ to A Γ , unique up to additive constants. In particular, if we fix a vertex v 0 ∈ V (Λ), then we can find an extensionχ ∈ Hom(A Γ , R) such thatχ(v 0 ) = 0. Let v ∈ V (Λ). Since Λ is connected, there is a path p connecting v 0 to v entirely contained in Λ. Sinceχ extends χ and χ vanishes on edges of p, a direct computation shows thatχ(v) = 0. Therefore, we have Λ ⊆ D(χ).
For the other direction, letχ ∈ Hom(A Γ , R) be an extension of χ such that Λ ⊆ D(χ). For every oriented edge e = (v, w) ∈ E(Λ), we have χ(e) =χ(vw −1 ) = χ(v) −χ(w) = 0. Thus, the edge e is in DE(χ). Hence, we have Λ ⊆ DE(χ).
The main reason to focus on the dead edge subgraph instead of the living edge subgraph is that it is not clear how to transfer connectivity conditions from L(χ) to LE(χ). On the other hand, the disconnecting features of D(χ) do transfer to DE(χ). This is showcased by the following example.
Example 4.16. Let Γ be a cone over the path P 5 and consider a character χ∶ A Γ → Z as shown in Figure 11. The living subgraph L(χ) is neither connected nor dominating. It follows from Theorem 4.4 that [χ] ∈ Σ 1 (A Γ ), and therefore, the restriction χ =χ BBΓ ∶ BB Γ → Z is not in Σ 1 (BB Γ ) by Proposition 4.10. However, the living edge subgraph LE(χ) is connected and dominating. On the other hand, note that D(χ) contains a full subgraph that separates Γ (compare with Lemma 4.5), and so does DE(χ). Our goal now is to show that the observations made in Example 4.16 about the dead edge subgraph hold in general. We will need the following general topological facts that we collect for the convenience of the reader. Here and in the following, "minimality" is always with respect to the inclusion of subgraphs. More precisely, a "minimal full separating subgraph" is a "full separating subgraph whose full subgraphs are not separating". Proof. Proof of (1). Let A = ∆ Γ ∖∆ Λ (set-theoretic difference) and B = st (∆ Λ , ∆ Γ ). Then ∆ Γ = A ∪ B, and lk (∆ Λ , ∆ Γ ) deformation retracts to A ∩ B. The Mayer-Vietoris sequence for reduced homology associated to this decomposition of ∆ Γ provides the following exact sequence:
⋯ → H 1 (∆ Γ ) →H 0 (lk (∆ Λ , ∆ Γ )) →H 0 (A) ⊕H 0 (B) →H 0 (∆ Γ ) → 0
We have H 1 (∆ Γ ) = 0 =H 0 (∆ Γ ) since ∆ Γ is simply connected. Moreover, since Λ is connected, the subcomplex B is connected, and therefore, we obtainH 0 (B) = 0. This gives a bijection betweenH 0 (lk (∆ Λ , ∆ Γ )) andH 0 (A), as desired.
Proof of (2). Take Λ = v to be a single vertex. Since Γ is biconnected, the vertex v is not a cut vertex, so its complement is connected. Then the conclusion follows from (1). Proof of (3). Let Λ be a minimal full separating subgraph. Then we can find two subcomplexes A and B of ∆ Γ such that A ∪ B = ∆ Γ and A ∩ B = ∆ Λ . The Mayer-Vietoris sequence for reduced homology gives
⋯ → H 1 (∆ Γ ) →H 0 (∆ Λ ) →H 0 (A) ⊕H 0 (B) →H 0 (∆ Γ ) → 0
Arguing as in (1), we have H 1 (∆ Γ ) = 0 =H 0 (∆ Γ ) since ∆ Γ is simply connected. Therefore, we obtainH 0 (∆ Λ ) =H 0 (A) ⊕H 0 (B). Suppose by contradiction that Λ is disconnected. Then at least one of A or B is disconnected. Without loss of generality, say A = A 1 ∪ A ′ , with A 1 a connected component of A and A 1 ∩ A ′ = ∅. Let B ′ = B ∪ A ′ and let Λ ′ be the subgraph such that ∆ Λ ′ = A 1 ∩ B ′ . Then Λ ′ is a proper full subgraph of Λ which separates Γ, contradicting the minimality of Λ.
Finally, if by contradiction Λ were a single vertex, then it would be a cut vertex. But this is impossible because Γ is biconnected.
Proof of (4). Suppose by contradiction that there is an edge e = (u, v) in ∆ Γ that is not contained in a triangle. Since Γ has at least three vertices, at least one endpoint of e, say v, is adjacent to at least another vertex different from u. Since e is not contained in a triangle, the vertex u is an isolated component of lk (v, Γ). Therefore, the subgraph lk (v, Γ) has at least two components, and hence, the vertex v is a cut vertex of Γ by (1). This contradicts the fact that Γ is biconnected.
We now give a graphical criterion for a character to belong to the BNS-invariant of a BBG that is analogous to the living subgraph criterion in [MV95], or rather to the dead subgraph criterion Lemma 4.5 (see also [BRY21,Corollary 3.4]). Proof. Let [χ] ∈ Σ 1 (BB Γ ). Suppose by contradiction that there is a full subgraph Λ ⊆ DE(χ) that separates Γ. Up to passing to a subgraph, we can assume that Λ is a minimal full separating subgraph. So, by (3) in Lemma 4.17, we can assume that Λ is connected. By Lemma 4.15, there is an extensionχ ∈ Hom(A Γ , R) of χ such that Λ ⊆ D(χ). Since Λ separates Γ, by Lemma 4.5, we have [χ] ∉ Σ 1 (A Γ ), and therefore [χ] ∉ Σ 1 (BB Γ ) by Proposition 4.10. Hence, we reach a contradiction.
Conversely, assume [χ] ∉ Σ 1 (BB Γ ). Then by Proposition 4.10, there is an extensionχ ∈ Hom(A Γ , R) of χ such that [χ] ∉ Σ 1 (A Γ ). So, the living subgraph L(χ) is either disconnected or not dominating. Equivalently, by Lemma 4.5, the dead subgraph D(χ) contains a full subgraph Λ which separates Γ. Note that every edge of Λ is contained in DE(χ) because Λ ⊆ D(χ). A priori, the subgraph Λ could have some components consisting of isolated points. Once again, passing to a subgraph, we can assume that Λ is a minimal full separating subgraph. By (3) in Lemma 4.17, we can also assume that Λ is connected and not reduced to a single vertex. Therefore, we have Λ ⊆ DE(χ). This completes the proof.
We give two examples to illustrate that the hypotheses of Theorem C are optimal. Here, characters are represented by labels in the sense of §4.1.3. Figure 12. Then ∆ Γ is not simply connected. Note that in this case, the group BB Γ is finitely generated but not finitely presented; see [BB97]. Letχ ∈ Hom(A Γ , R) be the character of A Γ that sends two non-adjacent vertices to 0 and the other two vertices to 1. Let χ =χ BBΓ ∈ Hom(BB Γ , R) be the restriction ofχ to BB Γ . Then the dead edge subgraph DE(χ) is empty. In particular, it does not contain any subgraph that separates Γ. However, the living subgraph L(χ) consists of two opposite vertices, which is not connected. Thus, we have [χ] ∈ Σ 1 (A Γ ). Hence, by Proposition 4.10, we obtain [χ] ∈ Σ 1 (BB Γ ). Example 4.19 (Biconnectedness is needed). Let Γ be the graph obtained by gluing two triangles at a vertex; see the right-hand side of Figure 12. Letχ ∈ Hom(A Γ , R) be the character that sends the cut vertex to 0 and all the other vertices to 1. Let χ =χ BBΓ ∈ Hom(BB Γ , R) be the restriction ofχ to BB Γ . Then the dead edge subgraph DE(χ) consists of the two edges that are not incident to the cut vertex. In particular, it does not contain any subgraph that separates Γ. However, the living subgraph L(χ) is not connected (also notice that L(χ) = DE(χ)). Thus, we have [χ] ∈ Σ 1 (A Γ ). Hence, Proposition 4.10 implies [χ] ∈ Σ 1 (BB Γ ).
As mentioned in Example 4.8, the graph Γ in Example 4.19 has a cut vertex, and hence, the BNS-invariant Σ 1 (BB Γ ) is empty; see [PS10, Corollary 15.10]. As promised, we now show the following result. Then this defines a character thanks to Lemma 4.13. We claim that χ does not vanish on any edge of Γ. Indeed, let e ∈ E(Γ). The claim is clear if e ∈ E(T ). Suppose e ∈ E(T ), and let (e i1 , . . . , e ip ) be the path in T between the endpoints of e. Then (e, e i1 , . . . , e ip ) is a cycle in Γ, and hence, the element ee i1 . . . e ip is a relator in BB Γ by Theorem 2.1. Therefore, we have 0 = χ(e) + χ(e i1 ) + ⋅ ⋅ ⋅ + χ(e ip ) = χ(e) ± 10 ki 1 ± ⋅ ⋅ ⋅ ± 10 ki p , where the signs are determined by the orientations of the corresponding edges. The sum ±10 ki 1 ± ⋅ ⋅ ⋅ ± 10 ki p is never zero since all the exponents are different. Thus, we have χ(e) ≠ 0. This proves the claim. It immediately follows that DE(χ) = ∅, and therefore, we have [χ] ∈ Σ 1 (BB Γ ) by Theorem C.
As a summary, we have the following corollary. Most implications are well-known. Our contribution is that (1) implies (2). Recall that a finitely generated group G algebraically fibers if there is a surjective homomorphism G → Z whose kernel is finitely generated.
Corollary 4.21. Let Γ be a connected graph such that ∆ Γ is simply connected. Then the following statements are equivalent.
(1) Γ is biconnected.
(2) Σ 1 (BB Γ ) ≠ ∅.
(3) BB Γ does not split as a free product.
(4) BB Γ is 1-ended.
(5) BB Γ algebraically fibers.
Proof. The equivalence of (1) and (2) (3) and (4) ). This is actually equivalent to just requiring that [χ] ∈ Σ 1 (BB Γ ), because Σ 1 (BB Γ ) is symmetric; see Lemma 4.9. Note that the points of the character sphere S(BB Γ ) given by the equivalent classes of discrete characters are exactly the rational points. In particular, since Σ 1 (BB Γ ) is an open subset of the character sphere S(BB Γ ) (see Theorem A in [BNS87]), it is non-empty if and only if it contains the equivalent class of a discrete character.
We record the following consequence for future reference. It will reduce our discussion about the RAAG recognition problem to the case of biconnected graphs. Proof. It is clear from the Dicks-Leary presentation that BB Γ is the free product of the BB Γi . Moreover, since Γ i is biconnected, each BB Γi is freely indecomposable (see Corollary 4.21). If all the BB Γi are RAAGs, then BB Γ is a RAAG because the free product of RAAGs is a RAAG. This proves one implication. For the converse implication, suppose that BB Γ is a RAAG, say BB Γ = A Λ for some graph Λ. Let Λ 1 , . . . , Λ m be the connected components of Γ. Then BB Γ = A Λ can also be written as the free product of the RAAGs A Λj , each of which is freely indecomposable. It follows that m = n, and for each i, there is some j such that BB Γi ≅ A Λj .
Remark 4.23. We conclude this subsection by observing that when Γ is a chordal graph, the statement in Theorem C can also be obtained as follows. By [BRY21, §3.2], the group BB Γ splits as a finite graph of groups. More precisely, the vertex groups correspond to the BBGs on the maximal cliques of Γ, and the edge groups correspond to BBGs on the minimal separating subgraphs of Γ (that are also cliques because Γ is chordal). In particular, all these groups are finitely generated free abelian groups. Hence, one can apply the results from §2 of [CL16].
4.3.
A graphical description of Σ 1 (BB Γ ). We now provide a graphical description of Σ 1 (BB Γ ), that is, a way to compute the BNS-invariant of BB Γ in terms of subgraphs of Γ. Recall from Remark 4.6 that Σ 1 (A Γ ) c is given by an arrangement of missing subspheres parametrized by the separating subgraphs of Γ. Thanks to [KM22, Corollary 1.4], we know that Σ 1 (BB Γ ) c is also an arrangement of missing subspheres. Moreover, the restriction map r∶ Hom(A Γ , R) → Hom(BB Γ , R) sends the missing subspheres of Σ 1 (A Γ ) c to those of Σ 1 (BB Γ ) c (see the discussion after Lemma 4.14). So, it makes sense to look for a description of the missing subspheres of Σ 1 (BB Γ ) c in terms of subgraphs of Γ, analogous to the one available for Σ 1 (A Γ ) c .
However, recall from Example 3.7 that BB Γ does not completely determine Γ, so it is a priori not clear that Σ 1 (BB Γ ) c should admit such a description. Moreover, the restriction map is not always well-behaved with respect to the vanishing behavior of characters, in the sense that the dead edge subgraph of a character can be strictly larger than the dead subgraph of any of its extensions; see Figure 10. To address this, we need a way to construct characters with prescribed vanishing behavior.
For any subgraph Λ of Γ, we define the following linear subspace of Hom(BB Γ , R)
W Λ = {χ∶ BB Γ → R χ(e) = 0, ∀e ∈ E(Λ)} = {χ∶ BB Γ → R Λ ⊆ DE(χ)}
and the great subsphere S Λ given by the following intersection
S Λ = W Λ ∩ S(BB Γ ).
Note that if a character χ of BB Γ vanishes on a spanning tree of Γ, then χ is trivial (see Lemma 4.13). In other words, if Λ is a spanning tree, then W Λ = 0 and S Λ = ∅. We look for a condition on Λ such that W Λ ≠ 0 and S Λ ≠ ∅. Notice that the following lemma applies as soon as V (Λ) ≠ V (Γ), and that if it applies to Λ, then it also applies to all of its subgraphs. Proof. Let T Λ be a spanning forest of Λ. Observe e 0 ∈ E(Λ) by assumption. Therefore, we can extend T Λ ∪ {e 0 } to a spanning tree T of Γ. Orient the edges of T arbitrarily and label the edges of T Λ by 0 and all the remaining edges of T by 1. By Lemma 4.13, this defines a character χ∶ BB Γ → R. By construction, we have χ(e 0 ) = 1 and χ(e) = 0 for e ∈ E(T Λ ). Let e ∈ E(Λ) ∖ E(T Λ ). Since T Λ is a spanning forest of Λ, there is a unique path p in T Λ from τ e to ιe. Then ep is a cycle in Γ, and therefore, it is a relator in the Dicks-Leary presentation for BB Γ . Since χ vanishes on p, it must also vanish on e, as desired.
Remark 4.25. Notice that if two subgraphs Λ and Λ ′ have the same edge sets, then W Λ = W Λ ′ because these subspaces only depend on the edge sets. In particular, we have S Λ = S Λ ′ . This is the reason why we use the strict inclusion ⊊ instead of the weak inclusion ⊆ in the statement (2) of the following lemma.
Lemma 4.26. Let Γ be a biconnected graph with ∆ Γ simply connected, and let Λ and Λ ′ be full separating subgraphs. Then we have the following statements.
(1) S Λ is a missing subsphere, that is, we have S Λ ⊆ Σ 1 (BB Γ ) c .
(2) Λ ′ ⊊ Λ if and only if S Λ ⊊ S Λ ′ .
Proof. Proof of (1). If [χ] ∈ S Λ , then DE(χ) contains Λ, which is a separating subgraph. Then the statement follows from Theorem C. Proof of (2). The implication Λ ′ ⊊ Λ ⇒ S Λ ⊊ S Λ ′ follows from the definitions. For the reverse implication S Λ ⊊ S Λ ′ ⇒ Λ ′ ⊊ Λ we argue as follows. The inclusion S Λ ⊊ S Λ ′ implies that a character vanishing on Λ must also vanish on Λ ′ . We need to show that Λ ′ is a proper subgraph of Λ.
By contradiction, suppose that Λ ′ is not a subgraph of Λ. Notice that if Λ ′ ∖ Λ consists of isolated vertices, then S Λ = S Λ ′ (see Remark 4.25). So, we can assume that there is an edge e 0 ∈ E(Λ ′ )∖E(Λ). Since Λ is full, the edge e 0 cannot have both endpoints in Λ. By Lemma 4.24, there is a character χ∶ BB Γ → R with χ(e 0 ) = 1 and χ(e) = 0 for all e ∈ E(Λ). This is a character that vanishes identically on Λ but not on Λ ′ , which is absurd.
Recall that if Λ is a separating subgraph, then S Λ ≠ ∅.
Theorem D (Graphical description of the BNS-invariant of a BBG). Let Γ be a biconnected graph with ∆ Γ simply connected. Then Σ 1 (BB Γ ) c is a union of missing subspheres corresponding to full separating subgraphs. More precisely,
(1) Σ 1 (BB Γ ) c = ⋃ Λ S Λ ,
where Λ ranges over the minimal full separating subgraphs of Γ. (2) There is a bijection between maximal missing subspheres of Σ 1 (BB Γ ) c and minimal full separating subgraphs of Γ.
Proof. Proof of (1). We start by proving that Σ 1 (BB Γ ) c = ⋃ Λ S Λ , where Λ ranges over the full separating subgraphs of Γ. If Λ is a full separating subgraph, then we know that S Λ ⊆ Σ 1 (BB Γ ) c by (1) in Lemma 4.26. So one inclusion is clear. Vice versa, let [χ] ∈ Σ 1 (BB Γ ) c . Then by Theorem C we have that DE(χ) contains a full separating subgraph Λ. In particular, the character χ vanishes on Λ, hence [χ] ∈ S Λ . This proves the other inclusion. To see that one can restrict to Λ ranging over minimal full separating subgraphs, just observe that the latter correspond to maximal missing subspheres by (2) in Lemma 4.26. This completes the proof of (1). Proof of (2). By (1), we know that Σ 1 (BB Γ ) c is a union of maximal missing subspheres. Notice that this is a finite union because Γ has only finitely many subgraphs. So, each maximal missing subsphere S is of the form S = S Λ for Λ a minimal full separating subgraph.
Vice versa, let Λ be a minimal full separating subgraph. We know from (1) in Lemma 4.26 that S Λ is a missing subsphere. We claim that S Λ is a maximal missing subsphere in Σ 1 (BB Γ ) c . Let S be a maximal missing subsphere in Σ 1 (BB Γ ) c such that S Λ ⊆ S. By the previous paragraph, we know that S = S Λ ′ for some minimal full separating subgraph Λ ′ . If we had S Λ ⊊ S = S Λ ′ , then by (2) in Lemma 4.26 it would follow that Λ ′ ⊊ Λ. But this would contradict the minimality of Λ. Thus, we have S Λ = S Λ ′ = S. Hence, the missing subsphere S Λ is maximal.
The following example establishes a correspondence between the cut edges in Γ and the missing hyperspheres (the missing subspheres of codimension one) in Σ 1 (BB Γ ) c . It should be compared with the case of RAAGs, where the correspondence is between the cut vertices of Γ and the missing hyperspheres in Σ 1 (A Γ ) c (compare Remark 4.6 and Example 4.29).
Example 4.27 (Hyperspheres). Let Γ be a biconnected graph with ∆ Γ simply connected. Let e be a cut edge of Γ. Notice that e is a minimal separating subgraph since Γ is biconnected, and it is also clearly full. So by Theorem D we know that S e is a maximal missing subsphere in Σ 1 (BB Γ ) c . We want to show that the subspace W e = span(S e ) is a hyperplane. To see this, let T be a spanning tree of Γ with E(T ) = {e 1 , . . . , e m }, and let y i be the coordinate dual to e i in the sense of §4.1.3. This means that y i (χ) = χ(e i ) for all χ ∈ Hom(BB Γ , R). Note that W ei is the hyperplane given by the equation y i = 0. If e ∈ E(T ), then e = e i for some i = 1, . . . , m and W e = W ei is a hyperplane. If e ∈ E(T ), then there is a unique path (e j1 , . . . , e jp ) in T connecting the endpoints of e. Since (e j1 , . . . , e jp , e) is a cycle in Γ, the word e j1 . . . e jp e is a relator in the Dicks-Leary presentation. So, we have χ(e j1 ) + ⋅ ⋅ ⋅ + χ(e jp ) + χ(e) = 0. Therefore, we obtain χ(e) = 0 if and only if y j1 (χ) + ⋅ ⋅ ⋅ + y jp (χ) = 0. This means that W e is the hyperplane defined by the equation y j1 + ⋅ ⋅ ⋅ + y jp = 0.
Vice versa, let S ⊆ Σ 1 (BB Γ ) c be a hypersphere. We claim that S = S e for some cut edge e. To see this, let [χ] ∈ S. By Theorem C we know that DE(χ) contains a full subgraph Λ that separates Γ. Since Γ is biconnected, the subgraph Λ must contain at least one edge. In particular, the character χ vanishes on E(Λ), and therefore, we have [χ] ∈ ⋂ e∈E(Λ) S e . This proves S ⊆ ⋂ e∈E(Λ) S e . However, by the discussion above, we know that S e is a hypersphere. Since S is also a hypersphere, the subgraph Λ must consist of a single edge e only. In particular, it is a cut edge.
Remark 4.28. The linear span of the arrangement of the missing subspheres of Σ 1 (G) c gives rise to a subspace arrangement in Hom(G, R). The main difference between RAAGs and BBGs is that the arrangement for a RAAG is always "in general position" while the arrangement for a BBG is not. We will discuss the details in the next section.
4.4.
The inclusion-exclusion principle. Given a group G, one can consider the collection of maximal missing subspheres. That is, the maximal great subspheres of the character sphere S(G) that are in the complement of the BNS-invariant Σ 1 (G) (see Remark 4.3). Additionally, one can also consider the collection of maximal missing subspaces in Hom(G, R), that is, the linear spans of the maximal missing subspheres. This provides an arrangement of (great) subspheres in S(G) and an arrangement of (linear) subspaces in Hom(G, R) that can be used as invariants for G. For instance, these arrangements completely determine the BNS-invariant when G is a RAAG or a BBG (see Remark 4.6 or Theorem D respectively). Moreover, in the case of RAAGs, these arrangements satisfy a certain form of the inclusionexclusion principle (see §4.4.1). This fact can be used to detect when a group G is not a RAAG. We take this point of view from the work of Koban and Piggott in [KP14] and Day and Wade in [DW18]. The former focuses on the subsphere arrangement, while the latter focuses on the subspace arrangement. In this section, we find it convenient to focus on the subspace arrangement.
Let V be a real vector space. (The reader should think V = Hom(G, R) for a group G.) For convenience, we fix some background inner product on V . All arguments in the following are combinatorial and do not depend on the choice of inner product. We say that a finite collection of linear subspaces {W j } j∈J of V satisfies the inclusion-exclusion principle if the following equality holds:
(4.2) dim ⎛ ⎝ J j=1 W j ⎞ ⎠ = J k=1 (−1) k+1 ⎛ ⎝ I⊂J, I =k dim ⎛ ⎝ ⋂ j∈I W j ⎞ ⎠ ⎞ ⎠
Notice that if an arrangement satisfies (4.2), then any linearly equivalent arrangement also satisfies (4.2). Here are two examples. The first is a RAAG, and the collection of maximal subspaces in the complement of its BNS-invariant satisfies the inclusion-exclusion principle. The second is a BBG, and the collection of maximal subspaces in the complement of its BNS-invariant does not satisfy the inclusion-exclusion principle. Note that this BBG is known to be not isomorphic to any RAAG by [PS07]. Example 4.30 (The trefoil). Let Γ be the (oriented) trefoil graph with a choice of a spanning tree T whose edge set is E(T ) = {e 1 , e 2 , e 3 , e 4 , e 5 }; see Figure 13. We consider the three cut edges e 1 , e 2 , and f . By Example 4.27, we have that S e1 , S e2 , and S f are missing hyperspheres in Σ 1 (BB Γ ) c . By Theorem D, we have Σ 1 (BB Γ ) c = S e1 ∪S e2 ∪S f . If y 1 , . . . , y 5 are the dual coordinates on Hom(BB Γ , R) ≅ R 5 with respect to T (in the sense of §4.1.3), then S e1 , S e2 , and S f are given by y 1 = 0, y 2 = 0, and y 1 − y 2 = 0, respectively. To see the latter, first note that we have a relator e 1 f = e 2 = f e 1 in the Dicks-Leary presentation. Then for any character χ ∈ Hom(BB Γ , R), we have χ(e 1 ) + χ(f ) = χ(e 2 ). Thus, we obtain χ(f ) = 0 if and only if χ(e 1 ) = χ(e 2 ). Therefore, the hypersphere S f is defined by y 1 = y 2 , that is, the equation y 1 − y 2 = 0. A direct computation shows that the associated missing subspaces do not satisfy the inclusion-exclusion principle (4.2). It is natural to ask whether the phenomenon from Example 4.30 is actually a general obstruction for a BBG to be a RAAG. In [DW18], Day and Wade developed a homology theory H * (V) for a subspace arrangement V in a vector space that is designed to measure the failure of the inclusion-exclusion principle for V. They proved that if G is a RAAG, then H k (V G ) = 0 for all k > 0, where V G denotes the arrangement of maximal subspaces corresponding to the maximal missing spheres in Σ 1 (G) c ; see [DW18,Theorem B].
Given our description of the BNS-invariant for BBGs from §4.1.3 and Theorem D, we can determine that certain BBGs are not RAAGs. For example, a direct computation shows that the group G = BB Γ from Example 4.30 has H 1 (V G ) ≠ 0. On the other hand, there are BBGs that cannot be distinguished from RAAGs in this way, as in the next example.
Example 4.31 (The extended trefoil). Let Γ be the trefoil graph with one extra triangle attached; see Figure 14. Imitating Example 4.30, we choose a spanning tree T whose edge set is E(T ) = {e 1 , e 2 , e 3 , e 4 , e 5 , e 6 }. By Theorem D, we have Σ 1 (BB Γ ) c = S e1 ∪ S e2 ∪ S f ∪ S e5 . If y 1 , . . . , y 6 are the dual coordinates on Hom(BB Γ , R) ≅ R 6 with respect to T (in the sense of §4.1.3), then these missing hyperspheres are defined by the hyperplanes given by y 1 = 0, y 2 = 0, y 1 − y 2 = 0, and y 5 = 0, respectively. A direct computation shows that H k (V BBΓ ) = 0 for all k ≥ 0, that is, these homology groups look like the homology groups for the arrangement associated to a RAAG. However, we will show that this BBG is not a RAAG in Example 5.14. Our goal now is to obtain a criterion to detect when a BBG is not a RAAG that is still based on a certain failure of the inclusion-exclusion principle in the complement of the BNS-invariant. The obstruction always involves only a collection of three subspaces, regardless of the complexity of the graph. So, we find it convenient to introduce the following notation: Proof. Recall that the subspace W j corresponds to a minimal full separating subgraph Λ j of Γ (see Remark 4.6). Moreover, the dimension of W j is equal to the number of vertices in the complement A j = Γ ∖ Λ j of Λ j (those vertices provide a basis for W j , in the sense of §4.1.3.) It follows that
IEP(W 1 , W 2 , W 3 ) = dim W 1 + dim W 2 + dim W 3 − dim(W 1 ∩ W 2 ) − dim(W 1 ∩ W 3 ) + dim(W 2 ∩ W 3 ) + dim(W 1 ∩ W 2 ∩ W 3 ).dim ⎛ ⎝ J j=1 W j ⎞ ⎠ = J ⋃ j=1 V (A j ) = J k=1 (−1) k+1 ⎛ ⎜ ⎜ ⎝ I⊂J I =k ⋂ j∈I V (A j ) ⎞ ⎟ ⎟ ⎠ = J k=1 (−1) k+1 ⎛ ⎜ ⎜ ⎝ I⊂J I =k dim ⎛ ⎝ ⋂ j∈I W j ⎞ ⎠ ⎞ ⎟ ⎟ ⎠ .
This means precisely that {W j } j∈J satisfies (4.2), as desired.
Non RAAG behavior.
We now want to identify a condition that is not compatible with the property established in §4.4.1 for the arrangement associated to a RAAG. More precisely, we look for a sharp lower bound for the term IEP(W 1 , W 2 , W 3 ).
The key condition is the one in Lemma 4.34. It is inspired by [DW18], and it could be interpreted in the setting of the homology theory introduced in that paper (see Remark 4.33). For the reader's convenience, we provide a self-contained exposition. Let V be a real vector space of dimension n. Once again, the reader should think of the case V = Hom(G, R) ≅ R n for some group G with n generators. We fix some inner product, an orthonormal basis {e 1 , . . . , e n }, and the corresponding coordinates {y 1 , . . . , y n }, that is, y i (e j ) = δ ij . Consider three subspaces of V given by the following systems of equations:
W 1 = {y 1 = 0, n i=1 λ 1
ij y i = 0 for j = 1, . . . , m 1 },
W 2 = {y 2 = 0, n i=1
λ 2 ij y i = 0 for j = 1, . . . , m 2 },
W 3 = {y 1 − y 2 = 0, n i=1 λ 3 ij y i = 0 for j = 1, . . . , m 3 },(4.4)
where for k ∈ {1, 2, 3}, we have λ k ij ∈ R, and m k is a non-negative integer (possibly zero, in which case it is understood that the subspace is just given by the first equation, as in Example 4.30). Without loss of generality, we assume that each set of equations is minimal. That is, we have dim W k = n − (m k + 1).
We now proceed to compute the term IEP(W 1 , W 2 , W 3 ) defined in (4.3). In the naive system of equations that defines the intersection W 1 ∩W 2 ∩W 3 (that is, the one obtained by putting all the equations together), there is an obvious linear relation among the equations y 1 = 0, y 2 = 0, and y 1 − y 2 = 0. This can cause the dimension of W 1 ∩ W 2 ∩ W 3 to be higher than expected. From this perspective, one of the three equations is redundant. We find it convenient to work with the orthogonal complements. For i, j ∈ {1, 2, 3}, i ≠ j, consider the following natural maps:
I ij ∶ W ⊥ i ∩ W ⊥ j → W ⊥ i ⊕ W ⊥ j , I ij (u) = (u, −u), F ij ∶ W ⊥ i ⊕ W ⊥ j → W ⊥ i + W ⊥ j , F ij (ζ i , ζ j ) = ζ i + ζ j , J ij ∶ W ⊥ i ⊕ W ⊥ j → W ⊥ 1 ⊕ W ⊥ 2 ⊕ W ⊥ 3 ,(4.5)
where the last one is the natural inclusion (for example, J 12 (ζ 1 , ζ 2 ) = (ζ 1 , ζ 2 , 0)). These maps fit in the diagram in Figure 15, where the first row is exact. The exactness implies the Grassmann's identity. Figure 15. The diagram for Lemma 4.34.
0 (W i + W j ) ⊥ = W ⊥ i ∩ W ⊥ j W ⊥ i ⊕ W ⊥ j W ⊥ i + W ⊥ j 0 W ⊥ 1 ⊕ W ⊥ 2 ⊕ W ⊥ 3 I ij F ij J ijLet K ij ⊆ W ⊥ 1 ⊕ W ⊥ 2 ⊕ W ⊥ 3 be the image of J ij ○ I ij . By construction, we have K ij ≅ (W i + W j ) ⊥ = W ⊥ i ∩ W ⊥ j . Finally, consider the vector ξ = (−e 1 , e 2 , e 1 − e 2 ) ∈ W ⊥ 1 ⊕ W ⊥ 2 ⊕ W ⊥ 3 .
We say that a triple of subspaces {W 1 , W 2 , W 3 } as above is a redundant triple of subspaces if ξ ∈ K 12 + K 23 + K 13 . Remark 4.33. Although we will not need it, we observe that the condition ξ ∈ K 12 + K 23 + K 13 described above can be interpreted in the sense of the subspace arrangement homology introduced in [DW18] as follows. Consider the arrangement W ⊥ given by the orthogonal complements {W ⊥ 1 , W ⊥ 2 , W ⊥ 3 }. Then {W 1 , W 2 , W 3 } is a redundant triple of subspaces precisely when ξ defines a non-trivial class in H 1 (W ⊥ ).
Lemma 4.34. In the above notation, if {W 1 , W 2 , W 3 } is a redundant triple of subspaces, then it does not satisfy the inclusion-exclusion principle. More precisely,
dim(W 1 + W 2 + W 3 ) + 1 ≤ IEP(W 1 , W 2 , W 3 ).
Proof. We will compute all the terms that appear in IEP(W 1 , W 2 , W 3 ) (see (4.3)). The exactness of the first row of the diagram in Figure 15 yields that We deal with the triple intersection similarly. Consider the map
dim(W ⊥ i + W ⊥ j ) = dim(W ⊥ i ⊕ W ⊥ j ) − dim(W ⊥ i ∩ W ⊥ j ) = 2 + m i + m j − dim K ij . It follows that dim(W i ∩ W j ) =n − dim((W i ∩ W j ) ⊥ ) =n − dim(W ⊥ i + W ⊥ j ) =n − (2 + m i + m j ) + dim K ij .F ∶ W ⊥ 1 ⊕ W ⊥ 2 ⊕ W ⊥ 3 → W ⊥ 1 + W ⊥ 2 + W ⊥ 3 , F (ζ 1 , ζ 2 , ζ 3 ) = ζ 1 + ζ 2 + ζ 3 .
We have dim(W ⊥ 1 ⊕W ⊥ 2 ⊕W ⊥ 3 ) = 3+m 1 +m 2 +m 3 . Since F is surjective, its codomain has dimension 3 + m 1 + m 2 + m 3 − dim(ker F ). It follows that
dim(W 1 ∩ W 2 ∩ W 3 ) = n − dim((W 1 ∩ W 2 ∩ W 3 ) ⊥ ) = n − dim(W ⊥ 1 + W ⊥ 2 + W ⊥ 3 ) = n − (3 + m 1 + m 2 + m 3 ) + dim(ker F ).
(4.7)
Using dim W k = n − (m k + 1), (4.6) and (4.7), we obtain:
(4.8) IEP(W 1 , W 2 , W 3 ) = n + dim(ker F ) − dim K 12 − dim K 13 − dim K 23 .
We now claim that dim(ker F ) ≥ 1 + dim K 12 + dim K 13 + dim K 23 . The vector ξ = (−e 1 , e 2 , e 1 − e 2 ) is in ker F , and K ij is a subspace of ker F by definition. A direct computation shows that K ij ∩ K ik = 0.
By assumption, we also have ξ ∉ K 12 + K 13 + K 23 . Therefore, the direct sum span(ξ) ⊕ K 12 ⊕ K 13 ⊕ K 23 is a subspace of ker F . This proves the claim. Then it follows from (4.8) that
IEP(W 1 , W 2 , W 3 ) = n + dim(ker F ) − dim K 12 − dim K 13 − dim K 23 ≥ n + 1 ≥ dim(W 1 + W 2 + W 3 ) + 1.
This completes the proof.
On the other hand, if {W 1 , W 2 , W 3 } is not a redundant triple of subspaces, then we have the dichotomy in the following statement. This criterion will be useful in the proof of Theorem E. Lemma 4.35. In the above notation, if {W 1 , W 2 , W 3 } is not a redundant triple of subspaces, then one of the following situations occurs:
(1) either e 1 , e 2 ∈ W ⊥ j for all j = 1, 2, 3, (2) or there exists some i ≥ 3 such that e i ∈ W j for all j = 1, 2, 3. Figure 15 at the beginning of §4.4.2). We have an induced map
Proof. Recall that K ij is the image of the natural map J ij ○ I ij ∶ W ⊥ i ∩ W ⊥ j → W ⊥ 1 ⊕ W ⊥ 2 ⊕ W ⊥ 3 (seeK ∶ (W ⊥ 1 ∩ W ⊥ 2 ) ⊕ (W ⊥ 2 ∩ W ⊥ 3 ) ⊕ (W ⊥ 1 ∩ W ⊥ 3 ) → W ⊥ 1 ⊕ W ⊥ 2 ⊕ W ⊥ 3 , K(a, b, c) = (a + c, −a + b, −b − c)
, whose image is precise K 12 + K 23 + K 13 . Since {W 1 , W 2 , W 3 } is not a redundant triple of subspaces, we have ξ = (−e 1 , e 2 , e 1 − e 2 ) ∈ Im(K). This means that there
exist a = ∑ n i=1 a i e i ∈ W ⊥ 1 ∩ W ⊥ 2 , b = ∑ n i=1 b i e i ∈ W ⊥ 2 ∩ W ⊥ 3 , and c = ∑ n i=1 c i e i ∈ W ⊥ 1 ∩ W ⊥ 3 , such that a + c = −e 1 , −a + b = e 2 , and −b − c = e 1 − e 2 , where a i , b i , c i ∈ R.
A direct computation shows that a, b, and c must satisfy the following relations:
(4.9) a 1 = b 1 = −1 − c 1 , a 2 = −c 2 = b 2 − 1, and a i = b i = −c i for i ≥ 3.
Note that if a i , b i , and c i are equal to zero for all i ≥ 3, then a = a 1 e 1 + a 2 e 2 ∈ W ⊥ 1 ∩ W ⊥ 2 . Since e 1 ∈ W ⊥ 1 , we have e 2 ∈ W ⊥ 1 . Similar arguments show that e 1 and e 2 also belong to W ⊥ 2 and W ⊥ 3 . Therefore, we are in case (1). If (1) does not occur, then we can reduce to the case that one of a, b, and c has at least one non-zero coordinate along e i for some i ≥ 3. But a i ≠ 0 implies that W ⊥ 1 and e i are not orthogonal, so we have e i ∈ W 1 . Thanks to (4.9), we also know that b and c have non-zero coordinates along e i . Then a similar argument shows that e i ∈ W 2 , W 3 . Therefore, we are in case (2).
Finally, we obtain a criterion to certify that a group is not a RAAG.
+W 3 ) = IEP(W 1 , W 2 , W 3 ).
This leads to a contradiction.
The fact that certain BBGs are not isomorphic to RAAGs can be obtained via the methods in [PS07] or [DW18], such as the BBG defined on the trefoil graph in Example 4.30. Proposition 4.36 allows us to obtain new examples that were not covered by previous criteria, such as the BBG defined on the extended trefoil (see Examples 4.31 and 5.14).
4.5.
Redundant triples for BBGs. The purpose of this section is to find a general graphical criterion to certify that a BBG is not a RAAG. The idea is to start from a triangle in the flag complex ∆ Γ and find suitable subspaces of the links of its vertices that induce a redundant triple of subspaces in the complement of Σ 1 (BB Γ ). Let τ be a triangle in ∆ Γ with vertices (v 1 , v 2 , v 3 ). Let e j be the edge opposite to v j . We say that τ is a redundant triangle if for each j = 1, 2, 3, there exists a subgraph Λ j ⊆ lk (v j , Γ) such that:
(1) e j ∈ E(Λ j );
(2) Λ j is a minimal separating subgraph of Γ;
(3) Λ 1 ∩ Λ 2 ∩ Λ 3 is the empty subgraph. The purpose of this section is to prove the following theorem.
Theorem E. Let Γ be a biconnected graph such that ∆ Γ is simply connected. If Γ has a redundant triangle, then BB Γ is not a RAAG.
We start by considering a redundant triangle τ with a choice of subgraph Λ j of the link lk (v j , Γ) as in the above definition of redundant triangle. We denote by W j = W Λj the induced subspace of V = Hom(BB Γ , R). By Theorem D, we know that W j = W Λj is a maximal subspace in the complement of Σ 1 (BB Γ ). We want to show that {W 1 , W 2 , W 3 } is a redundant triple of subspaces. To do this, we will choose some suitable coordinates on V , that is, a suitable spanning tree for Γ. Notice that different spanning trees correspond to different bases on Hom(BB Γ , R). In particular, the linear isomorphism class of the arrangement of missing subspaces does not depend on these choices, and we can work with a convenient spanning tree.
To construct a desired spanning tree, we will need the following terminology. Let v ∈ V (Γ). The spoke of v in Γ is the subgraph spoke (v) consisting of the edges that contain v. Note that spoke (v) is a spanning tree of st (v). Let Λ be a subgraph of lk (v). We define the relative star of v with respect to Λ to be the full subgraph st (v, Λ) of st (v) generated by {v} ∪ V (Λ). We define the relative spoke of v with respect to Λ to be the subgraph spoke (v, Λ) of spoke (v) consisting of the edges that connect v to a vertex of Λ. Note that spoke (v, Λ) is a spanning tree of st (v, Λ). We now construct a spanning tree T for Γ as follows.
• Let T 3 = spoke (v 3 , Λ 3 ). Since we chose Λ 3 to contain e 3 , we have v 1 , v 2 ∈ V (Λ 3 ) and e 1 , e 2 ∈ E(T 3 ). Recall
• Let Z 2 = spoke (v 2 , Λ 2 ∖ st (v 3 , Λ 3 )) and let T 2 = T 3 ∪ Z 2 . Notice that T 2 is a spanning tree of st (v 2 , Λ 2 ) ∪ st (v 3 , Λ 3 ). • Let Z 1 = spoke (v 1 , Λ 1 ∖ (st (v 2 , Λ 2 ) ∪ st (v 3 , Λ 3 ))) and let T 1 = T 2 ∪ Z 1 . No- tice that T 1 is a spanning tree of st (v 1 , Λ 1 ) ∪ st (v 2 , Λ 2 ) ∪ st (v 3 , Λ 3 ).that {χ f f ∈ E(T )} is a basis for Hom(BB Γ , R), where χ f ∶ BB Γ → R is the character defined by χ f (e) = 1 if f = e and χ f (e) = 0 if f ≠ e.
We also fix a background inner product with respect to which {χ f f ∈ E(T )} is an orthonormal basis. We now proceed to prove some technical lemmas that will be used to recognize the edges f ∈ E(T ) for which the associated character χ f is in one of the subspaces W 1 , W 2 , and W 3 . This is needed to use Lemma 4.35. We start with the following general fact. We now proceed to use Lemma 4.38 for each Λ j , with respect to a suitable choice of spanning tree for st (v j , Λ j ).
Lemma 4.39. Let f ∈ E(T ). If f ∈ E(T 3 ), then χ f ∈ W 3 .
Proof. Since T 3 is a spanning tree of st (v 3 , Λ 3 ), the statement follows directly from Lemma 4.38.
Lemma 4.40. Let f ∈ E(T ). If f ∈ E(Z 2 ), f ≠ e 1 , and f does not join v 3 to a vertex in Λ 2 ∩ Λ 3 , then χ f ∈ W 2 .
Proof. We construct a spanning tree for st (v 2 , Λ 2 ) as follows. First, note that Z 2 is a spanning tree of st (v 2 , Λ 2 ∖ st (v 3 , Λ 3 )) by construction. If u is a vertex in st (v 2 , Λ 2 ) but not in Z 2 , then either u = v 3 or u ∈ V (Λ 3 ). Let T ′ 2 be the result of extending Z 2 with the edge e 1 = (v 2 , v 3 ) and all the edges that join v 3 to the vertices in Λ 2 ∩ Λ 3 . This gives a spanning subgraph T ′ 2 of st (v 2 , Λ 2 ). Note that T ′ 2 is a tree because it is a subgraph of T . By the choice of f , we have f ∈ E(T ′ 2 ). Then it follows from Lemma 4.38 that χ f ∈ W 2 . Proof. We construct a spanning tree for st (v 1 , Λ 1 ) as follows. First, note that Z 1 is a spanning tree for st v 3 ), all the edges that join v 3 to the vertices in Λ 1 ∩ Λ 3 , and all the edges that join v 2 to the vertices in Λ 1 ∩ Λ 2 . This gives a spanning subgraph T ′ 1 of st (v 1 , Λ 1 ). Note that T ′ 1 is a tree because it is a subgraph of T . By the choice of f , we have f ∈ E(T ′ 1 ). Then it follows from Lemma 4.38 that
(v 1 , Λ 1 ∖ (st (v 2 , Λ 2 ) ∪ st (v 3 , Λ 3 ))) by construction. If u is a vertex in st (v 1 , Λ 1 ) but not in Z 1 , then either u = v 2 , u = v 3 , u ∈ V (Λ 2 ), or u ∈ V (Λ 3 ). Let T ′ 1 be the result of extending Z 1 with the edges e 1 = (v 2 , v 3 ), e 2 = (v 1 ,χ f ∈ W 1 . Lemma 4.42. Let f ∈ E(T ). If χ f ∈ W j for all j = 1, 2, 3, then f = e 1 or f = e 2 .
Proof. By contradiction, suppose that there is an edge f ≠ e 1 , e 2 such that χ f ∈ W j for all j = 1, 2, 3. Since χ f ∈ W 3 , we know that f ∈ E(T 3 ) by Lemma 4.39. In particular, we have v 3 ∈ V (f ). Since f ≠ e 1 , e 2 , this implies that v 1 , v 2 ∈ V (f ), and in particular this means that f ∉ E(Z 1 ), E(Z 2 ).
The assumption χ f ∈ W 2 implies that f joins v 3 to a vertex in Λ 2 ∩ Λ 3 , thanks to Lemma 4.40. Similarly, the assumption χ f ∈ W 1 implies that f joins v 3 to a vertex in Λ 1 ∩ Λ 3 , thanks to Lemma 4.41. Therefore, we have obtained that f connects v 3 to a vertex in Λ 1 ∩ Λ 2 ∩ Λ 3 . But this is absurd because this intersection is empty, by condition (3) in the definition of redundant triangle.
We are now ready for the proof of Theorem E.
Proof of Theorem E. Recall by construction that the subspaces W 1 , W 2 , and W 3 are given by equations of the form (4.4) with respect to the coordinates defined by the spanning tree T constructed above. Suppose by contradiction that {W 1 , W 2 , W 3 } is not a redundant triple By Lemma 4.35, one of the following cases occurs:
(1) either χ e1 , χ e2 ∈ W ⊥ j for all j = 1, 2, 3, (2) or there exists some i ≥ 3 such that χ fi ∈ W j for all j = 1, 2, 3. We claim that neither of these two situations can occur in our setting. To see that (1) does not occur, observe that χ e1 + χ e2 ∈ W 3 and χ e1 + χ e2 is not orthogonal to χ e1 , so χ e1 ∈ W ⊥ 3 . The same is true for χ e2 . On the other hand, (2) does not occur by Lemma 4.42. We have reached a contradiction, so {W 1 , W 2 , W 3 } is a redundant triple of subspaces. Then it follows from Proposition 4.36 that BB Γ is not a RAAG.
We will use Theorem E in §5 to prove that certain BBGs are not isomorphic to RAAGs (see Theorem A for the case in which ∆ Γ is 2-dimensional and Example 5.17 for a higher-dimensional example).
4.6.
Resonance varieties for BBGs. The goal of this section is to show that for a finitely presented BBG, the complement of its BNS-invariant coincides with the restriction of its first real resonance variety to the character sphere.
Let A = H * (G, R) be the cohomology algebra of G over R. For each a ∈ A 1 = H 1 (G, R) = Hom(G, R), we have a 2 = 0. So, we can define a cochain complex (A, a)
(A, a) ∶ A 0 → A 1 → A 2 → ⋯,
where the coboundary is given by the right-multiplication by a. The (first) resonance variety is defined to be the set points in A 1 so that the above chain complex fails to be exact, that is,
R 1 (G) = {a ∈ A 1 H 1 (A, a) ≠ 0}.
In many cases of interest, the resonance variety R 1 (G) is an affine algebraic subvariety of the vector space A 1 = H 1 (G, R) = Hom(G, R). For G a RAAG or a BBG, these varieties have been computed in [PS06] and [PS07], respectively. These varieties turn out to be defined by linear equations; that is, they are subspace arrangements. Following the notation in [PS07], let Γ be a finite graph. For any U ⊆ V (Γ), let H U be the set of characters χ ∶ A Γ → R vanishing (at least) on all the vertices in the complement of U . In our notation, this means U ⊆ D(χ).
Σ 1 (BB Γ ) c = R 1 (BB Γ ) ∩ S(BB Γ ).
Proof. By [PS07, Theorem 1.4], we have that R 1 (BB Γ ) is the union of the subspaces H ′ U , where U runs through the maximal collections of vertices that induce disconnected subgraphs. Similarly, it follows from Theorem D that Σ 1 (BB Γ ) c is the union of the subspheres S Λ , where Λ runs through the minimal separating subgraphs. Note that U is a maximal collection of vertices inducing a disconnected subgraph precisely when the subgraph Λ induced by V (Γ) ∖ U is a minimal separating subgraph. So, it is enough to show that for each such U , we have H ′ U = W Λ , where W Λ is the linear span of the sphere S Λ , as defined above in §4.3.
To show this equality, let χ ∶ BB Γ → R. Then χ ∈ H ′ U if and only if there is an extensionχ of χ to A Γ such thatχ ∈ H U . This means that Λ ⊆ D(χ). Note that Λ is connected by (3) We now recall a construction that reduces an Artin group to a RAAG (see [DPS09,§11.9] or [PS07, §9]). Let (Γ, m) be a weighted graph, where m∶ E(Γ) → N is an assignment of positive integers on the edge set. We denote by A Γ,m the associated Artin group. When m = 2 on every edge, it reduces to A Γ,m = A Γ . The odd contraction of (Γ, m) is an unweighted graphΓ defined as follows. Let Γ odd be the graph whose vertex set is V (Γ) and edge set is {e ∈ E(Γ) m(e) is an odd number}. The vertex set V (Γ) ofΓ is the set of connected components of Γ odd , and two vertices C and C ′ are connected by an edge if there exist adjacent vertices v ∈ V (C) and v ′ ∈ V (C) ′ in the original graph Γ.
Corollary 4.45. Let Γ be a biconnected graph such that ∆ Γ is simply connected. If Γ has a redundant triangle, then BB Γ is not an Artin group.
Proof. Let τ be a redundant triangle, with chosen minimal full separating subgraphs {Λ 1 , Λ 2 , Λ 3 }. Let W j = W Λj be the subspace of V = Hom(BB Γ , R) defined by Λ j . Arguing as in the proof of Theorem E, we have that W j is a maximal missing subspace in the complement of Σ 1 (BB Γ ), and that {W 1 , W 2 , W 3 } is a redundant triple in subspaces of V . By Lemma 4.34, we have
dim(W 1 + W 2 + W 3 ) + 1 ≤ IEP(W 1 , W 2 , W 3 ).
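Here, assuming the notation introduced earlier in the paper (the relevant definitions are not reproduced in this section), IEP denotes the inclusion-exclusion expression
IEP(W 1 , W 2 , W 3 ) = dim W 1 + dim W 2 + dim W 3 − dim(W 1 ∩ W 2 ) − dim(W 1 ∩ W 3 ) − dim(W 2 ∩ W 3 ) + dim(W 1 ∩ W 2 ∩ W 3 ).
For any three subspaces one has dim(W 1 + W 2 + W 3 ) ≤ IEP(W 1 , W 2 , W 3 ), so the displayed inequality says precisely that the inclusion-exclusion principle (4.2) fails for this triple, by at least one dimension.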
Now, assume by contradiction that BB Γ is isomorphic to an Artin group A Γ ′ ,m . Let Γ̃ ′ be the odd contraction of (Γ ′ , m). Then A Γ̃ ′ is a RAAG. Notice that A Γ ′ ,m and A Γ̃ ′ have the same abelianization. Hence, the three spaces Hom(A Γ ′ ,m , R), Hom(A Γ̃ ′ , R), and V can be identified together. In particular, the three character spheres S(A Γ ′ ,m ), S(A Γ̃ ′ ), and S(BB Γ ) can be identified as well.
Arguing as in [PS07, Proposition 9.4], there is an ambient isomorphism of the resonance varieties R 1 (BB Γ ) ≅ R 1 (A Γ ′ ,m ) ≅ R 1 (A Γ̃ ′ ), seen as subvarieties of V . Since A Γ̃ ′ is a RAAG, by [PS06, Theorem 5.5] we have Σ 1 (A Γ̃ ′ ) c = R 1 (A Γ̃ ′ ) ∩ S(A Γ̃ ′ ). Similarly, since BB Γ is a BBG, by Proposition 4.43 we have Σ 1 (BB Γ ) c = R 1 (BB Γ ) ∩ S(BB Γ ). It follows that we have an ambient isomorphism of the complements of the BNS-invariants Σ 1 (BB Γ ) c ≅ Σ 1 (A Γ̃ ′ ) c (seen as arrangements of subspheres in S(BB Γ )), as well as an ambient isomorphism of the associated arrangements of (linear) subspaces of V . In particular, the arrangement of maximal missing subspaces of BB Γ inside V is ambient isomorphic to the arrangement of maximal missing subspaces of a RAAG. Applying Lemma 4.32 to the triple {W 1 , W 2 , W 3 } gives
IEP(W 1 , W 2 , W 3 ) = dim(W 1 + W 2 + W 3 ).
This leads to a contradiction.
5. BBGs on 2-dimensional flag complexes
If ∆ Γ is a simply connected flag complex of dimension one, then Γ is a tree. In this case, the group BB Γ is a free group generated by all the edges of Γ, and in particular, it is a RAAG. The goal of this section is to determine what happens in dimension two. Namely, we will show that the BBG defined on a 2-dimensional complex is a RAAG if and only if a certain poison subgraph is avoided. We will discuss some higher dimensional examples at the end; see Examples 5.16 and 5.17.
Throughout this section, we assume that Γ is a biconnected graph such that ∆ Γ is 2-dimensional and simply connected, unless otherwise stated. Note that by Lemma 4.17 this implies that ∆ Γ is homogeneous of dimension two. We say that:
• An edge e is a boundary edge if it is contained in exactly one triangle. Denote by ∂∆ Γ the boundary of ∆ Γ . This is a 1-dimensional subcomplex consisting of boundary edges. An edge e is an interior edge if e ∩ ∂∆ Γ = ∅. Equivalently, none of its vertices is on the boundary.
• A boundary vertex is a vertex contained in ∂∆ Γ . Equivalently, it is contained in at least one boundary edge. A vertex v is an interior vertex if it is contained only in edges that are not boundary edges.
• A triangle τ is an interior triangle if τ ∩ ∂∆ Γ = ∅. A triangle τ is called a crowned triangle if none of its edges is on ∂∆ Γ . This is weaker than being an interior triangle because a crowned triangle can have vertices on ∂∆ Γ . If τ is a crowned triangle, each of its edges is contained in at least one triangle different from τ .
Remark 5.1. We will prove in Lemma 5.13 that in dimension two, a crowned triangle is redundant in the sense of §4.5. If ∂∆ Γ is empty, then every triangle is crowned, simply because no edge can be a boundary edge. Note that a vertex is either a boundary vertex or an interior vertex, but we might have edges which are neither boundary edges nor interior edges. For example, the trefoil graph (see Figure 1) has no interior edges, but only six of its nine edges are boundary edges. Moreover, it has no interior triangles, but it has one crowned triangle. Notice that a crowned triangle is contained in a trefoil subgraph of Γ, but the trefoil subgraph is not necessarily a full subgraph of Γ; see Figure 16.
Figure 16. Graphs that contain crowned triangles, but the resulting trefoil subgraphs are not full subgraphs.
5.1. Complexes without crowned triangles. The goal of this section is to provide a characterization of complexes without crowned triangles.
Lemma 5.2. A vertex v ∈ V (Γ) is an interior vertex if and only if for each vertex w in lk (v, Γ), its degree in lk (v, Γ) is at least two.
Proof. First of all, notice that for a vertex w in lk (v, Γ), its degree in lk (v, Γ) is equal to the number of triangles of ∆ Γ that contain the edge (v, w). Suppose that v ∈ V (Γ) is an interior vertex, and let w be a vertex of lk (v, Γ). Since v is interior, the edge (v, w) is not a boundary edge, hence it is contained in at least two triangles. Therefore, the vertex w has degree at least two in lk (v, Γ). Conversely, let v ∈ V (Γ) and let e = (v, w) be an edge containing v, where w is some vertex in lk (v, Γ). Since the degree of w in lk (v, Γ) is at least two, the edge e must be contained in at least two triangles. Thus, the edge e is not on ∂∆ Γ . Hence, the vertex v is an interior vertex.
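The criterion in Lemma 5.2 is easy to test mechanically. The following sketch is illustrative only: it uses the networkx library, and the example graph (a cone over a 4-cycle) is chosen for this note, not taken from the paper's figures.

import networkx as nx

# Lemma 5.2 as a computation: v is an interior vertex iff every vertex of
# lk(v, Γ) has degree >= 2 inside lk(v, Γ).
def is_interior_vertex(G, v):
    link = G.subgraph(G.neighbors(v))
    return all(d >= 2 for _, d in link.degree)

# Cone over a 4-cycle: the cone vertex 0 is interior, the base vertices are not.
G = nx.Graph([(1, 2), (2, 3), (3, 4), (4, 1)] + [(0, i) for i in (1, 2, 3, 4)])
print(is_interior_vertex(G, 0))   # True:  lk(0) is the 4-cycle, all degrees 2
print(is_interior_vertex(G, 1))   # False: lk(1) = {0, 2, 4} has only edges (0,2), (0,4)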
Lemma 5.3. Let Γ be a biconnected graph such that ∆ Γ is 2-dimensional and simply connected. If ∆ Γ has no crowned triangles, then ∆ Γ has no interior triangles, no interior edges, and has at most one interior vertex.
Proof. Since an interior triangle is automatically a crowned triangle, it is clear that ∆ Γ has no interior triangles.
For the second statement, assume that there is an interior edge e = (u, v) of ∆ Γ . Since e is an interior edge, it is contained in at least two triangles. Let τ be a triangle containing e. Let w be the third vertex of τ , and let e 1 and e 2 be the other two edges of τ . Since u and v are interior vertices, we have that e 1 and e 2 are not on ∂∆ Γ . So, no edge of τ is a boundary edge. That is, the triangle τ is a crowned triangle, a contradiction.
Finally, let v be an interior vertex. By definition, none of the edges containing v is on ∂∆ Γ . We claim that Γ = st (v, Γ), and in particular, there are no other interior vertices. First, take a triangle τ containing v; then the two edges of τ that meet at v are not on ∂∆ Γ . The third edge of τ must be a boundary edge; otherwise, the triangle τ would be a crowned triangle. This shows that all vertices in lk (v, Γ) are in ∂∆ Γ . Now, assume by contradiction that there is a vertex u at distance two from v. Let w be a vertex in lk (v, Γ) that is adjacent to u. Note that w is a boundary vertex. Since lk (w, Γ) is connected by (2) in Lemma 4.17, there is a path p in lk (w, Γ) from u to a vertex u ′ in lk (v, Γ) ∩ lk (w, Γ); see Figure 17. Then the path p, together with the edges (w, u) and (w, u ′ ), bounds a triangulated disk in ∆ Γ . Then the edge (w, u ′ ) is contained in more than one triangle, and therefore, it is not a boundary edge, and the triangle formed by the vertices v, w, and u ′ is a crowned triangle, a contradiction.
Figure 17. The path p and the edges (w, u) and (w, u ′ ) bound a triangulated disk in ∆ Γ . This implies that the edge (w, u ′ ) is not a boundary edge, and the vertices v, w, and u ′ form a crowned triangle.
Before we prove the next result, we give some terminologies on graphs. A graph Γ is called an edge-bonding of two graphs Γ 1 and Γ 2 if it is obtained by identifying two edges e 1 ∈ E(Γ 1 ) and e 2 ∈ E(Γ 2 ). If e denotes the image of e 1 and e 2 in Γ, we also write Γ = Γ 1 ∪ e Γ 2 and say that e is the bonding edge.
Remark 5.4. Since an edge-bonding involves identifying two edges from two different graphs, one can perform several edge-bondings of a collection of graphs simultaneously. In particular, if one performs a sequence of edge-bondings, then the result can actually be obtained by a simultaneous edge-bonding.
We also note that there are two ways of identifying e 1 and e 2 that can result in two different graphs. However, this will not be relevant in the following.
Our goal is to decompose a given graph as an edge-bonding of certain elementary pieces that we now define. A fan is a cone over a path. Let Γ 0 be a connected graph having no vertices of degree 1 and whose associated flag complex ∆ Γ 0 is 1-dimensional. Note that Γ 0 contains no triangles. The cone over such a Γ 0 is called a simple cone; see Figure 18 for an example.
Remark 5.5. Fans and simple cones could be further decomposed via edge-bonding by disconnecting them along a cut edge. For example, a fan can be decomposed into triangles. However, we will not take this point of view. Instead, it will be convenient to decompose a graph into fans and simple cones and regard them as elementary pieces.
It follows from Corollary 3.10 that the BBG defined on a fan or simple cone is a RAAG. Here are some further properties of fans and simple cones that follow directly from the definitions.
Lemma 5.6. Let Γ be a fan or a simple cone. The following statements hold.
(1) The flag complex ∆ Γ is 2-dimensional, simply connected, and contractible.
(2) The flag complex ∆ Γ has no interior edges, no interior triangles, and no crowned triangles.
Lemma 5.7. Let Γ be a biconnected graph such that ∆ Γ is 2-dimensional and simply connected. Suppose that ∆ Γ has no crowned triangles. Then Γ decomposes as edge-bondings of fans and simple cones.
Proof. We argue by induction on the number of cut edges of Γ. Suppose that Γ has no cut edges. By Lemma 5.3, the complex ∆ Γ contains at most one interior vertex. We claim that if ∆ Γ contains no interior vertices, then Γ is a fan. Let v ∈ V (Γ). Since v is a boundary vertex, its link has degree one vertices by Lemma 5.2. Moreover, since Γ has no cut edges, the link of v has no cut vertices. Then lk (v, Γ) must be a single edge, and therefore, the graph Γ is a triangle, which is a fan. Thus, the claim is proved. If ∆ Γ contains one interior vertex u, then Γ = st (u, Γ) as in the proof of Lemma 5.3. So, the graph Γ is the cone over lk (u, Γ). Since u is an interior vertex, its link has no degree one vertices. Note that the flag complex on lk (u, Γ) is 1-dimensional; otherwise, the dimension of ∆ Γ would be greater than two. Thus, the graph Γ = st (u, Γ) is a simple cone. This proves the base case of induction. Suppose that the conclusion holds for graphs having n cut edges, n ≥ 1. Assume that Γ has n + 1 cut edges. Let e be a cut edge of Γ. Cutting along e gives some connected components Γ 1 , . . . , Γ k . Each of these components, as a full subgraph of Γ, satisfies all the assumptions of the lemma and has at most n cut edges. By induction, the subgraphs Γ 1 , . . . , Γ k are edge-bondings of fans and simple cones. Therefore, the graph Γ, as an edge-bonding of Γ 1 , . . . , Γ k , is also an edge-bonding of fans and simple cones.
Remark 5.8. The decomposition in Lemma 5.7 is not unique (for instance, it is not maximal; see Remark 5.5). We do not need this fact in this paper.
We now proceed to study the ways in which one can perform edge-bondings of fans and simple cones. Recall from §4.5 that the spoke of a vertex v is the collection of edges containing v. When Γ is a fan, write Γ = {v} * P n , where P n is the path on n labelled vertices; see Figure 19. We call the edges (v, w 1 ) and (v, w n ) peripheral edges, and the edges (w 1 , w 2 ) and (w n−1 , w n ) are called modified-peripheral edges. A peripheral triangle is a triangle containing a peripheral edge and a modified-peripheral edge.
Figure 19. The red edges are peripheral edges, and the green edges are modified-peripheral edges. The left-most and right-most triangles are peripheral triangles.
We say that an edge of a fan is good if either it belongs to the spoke or it is a modified-peripheral edge. Similarly, we say that an edge of a simple cone is good if it belongs to the spoke. We say an edge is bad if it is not good. Note that a bad edge is necessarily a boundary edge; see Lemma 5.6. We extend this definition to more general graphs as follows: let Γ be a graph obtained via an edge-bonding on a collection of fans and simple cones, and let e ∈ E(Γ) be a bonding edge of Γ. We say that e is good if it is good in each fan component or simple cone component of Γ that contains e. We say that e is bad otherwise. These concepts are motivated by the fact that forming edge-bonding along good edges does not create crowned triangles; see the following example.
Example 5.9. Let Γ 1 and Γ 2 be a fan and a simple cone, respectively. If we form the edge-bonding of Γ 1 and Γ 2 by identifying a good edge in each of them, the resulting graph has no crowned triangles. The situation is analogous if Γ 1 and Γ 2 are both fans or both simple cones.
Lemma 5.10. Let Γ = Γ 1 ∪ e Γ 2 , where Γ 1 is a fan or a simple cone, and Γ 2 is any graph obtained via edge-bonding of fans and simple cones. If e is a bad edge of Γ 1 , then Γ contains a crowned triangle.
Proof. If e ∈ E(Γ 1 ) is bad, then it is in ∂∆ Γ1 . In particular, there is a unique triangle τ of Γ 1 containing e (namely, the cone over e), and the other two edges of τ are not boundary edges (in the case of a fan, recall that a modified-peripheral edge is good). When we form an edge-bonding along e, the edge e is no longer a boundary edge in Γ, so τ becomes a crowned triangle in Γ.
Proposition 5.11. Let Γ be a biconnected graph such that ∆ Γ is 2-dimensional and simply connected. Then Γ admits a tree 2-spanner if and only if ∆ Γ does not contain crowned triangles.
Proof. Let T be a tree 2-spanner of Γ. Suppose by contradiction that Γ contains a crowned triangle τ whose edges are e, f , and g. By Lemma 3.2, either two of e, f , and g are in E(T ) or none of them is in E(T ). If e, f , and g are not in E(T ), then by Lemma 3.3, the graph Γ contains a K 4 . This contradicts the fact that ∆ Γ is 2-dimensional. Now consider the case that e ∉ E(T ) and f and g are in E(T ). Since τ is a crowned triangle, the edge e is not on the boundary of ∆ Γ , and there is another triangle τ ′ based on e that is different from τ . Denote the other edges of τ ′ by f ′ and g ′ . Note that f ′ and g ′ cannot be in E(T ) by the uniqueness part
Remark 5.12. When Γ is a 2-tree (recall from §3.2.2), the flag complex ∆ Γ is a biconnected contractible 2-dimensional flag complex. In [Cai97], Cai showed that a 2-tree admits a tree 2-spanner if and only if it does not contain a trefoil subgraph (see Figure 1). Proposition 5.11 generalizes Cai's result to any biconnected and simply connected 2-dimensional flag complex. Note that a trefoil subgraph in a 2-tree is necessarily full, but this is not the case in general; see Figure 16.
5.2. The RAAG recognition problem in dimension 2. In this section, we provide a complete answer to the RAAG recognition problem on 2-dimensional complexes. In other words, we completely characterize the graphs Γ such that BB Γ is a RAAG, under the assumption dim ∆ Γ = 2.
Observe that a RAAG is always finitely presented (recall that all graphs are finite in our setting). On the other hand, by [BB97,Main Theorem (3)], a BBG is finitely presented precisely when the defining flag complex is simply connected. Therefore, we can assume that ∆ Γ is simply connected. Moreover, by Corollary 4.22 we can assume that Γ is also biconnected. Note that RAAGs are actually groups of type F , so one could even restrict to the case that ∆ Γ is contractible, thanks to [BB97,Main Theorem]; compare this with Corollary 3.9. However, we do not need this fact. We start by showing that in dimension two, any crowned triangle is redundant.
Lemma 5.13. If dim(∆ Γ ) = 2, then every crowned triangle is a redundant triangle.
Proof. Let τ be a crowned triangle with edges e 1 , e 2 , e 3 and vertices v 1 , v 2 , v 3 , where v j is opposite to e j . Since τ is a crowned triangle, no edge e j is a boundary edge. Hence, there is another triangle τ j adjacent to τ along e j . Let u j be the vertex of τ j not in τ . If u j were adjacent to v j , then we would have a K 4 , which is impossible since dim ∆ Γ = 2. Thus, the vertices v j and u j are not adjacent. As a consequence,
Indeed, the full subgraphs Λ 1 , Λ 2 , and Λ 3 induced by the sets of vertices {u, v 2 , v 3 }, {u, v 1 , v 3 }, and {v 1 , v 2 , w}, respectively, satisfy condition (3) in the definition of redundant triangle. Then it follows from Theorem E that this BB Γ is not a RAAG.
Figure 1. The trefoil graph.
Figure 2. The extended trefoil graph. The BBG defined by it has this presentation: ⟨a, b, c, d, e, f ∣ [a, b], [b, c], [c, d], [b −1 c, e], [e, f ]⟩.
Figure 4. Oriented triangle.
Remark 2.3. In the notations of Corollary 2.2, it follows that e i , e j , and e k generate a Z 2 subgroup.
Corollary 2.5. ([PS07, Corollary 2.3]) Let T be a spanning tree of Γ. When the flag complex ∆ Γ on Γ is simply connected, the group BB Γ admits a finite presentation in which the generators are the edges of T , and the relators are commutators between words in generators.
Lemma 3.2. Let T be a tree 2-spanner of Γ. Then in every triangle of Γ, either no edge is in T or two edges are in T .
Figure 5. The construction of the loop L from the cycle C (left), and its contraction to a cone vertex (right).
Figure 7. Non-isomorphic biconnected graphs give isomorphic BBGs.
3.2.1. Joins. Recall from Section 2.1 the definition of the join of two graphs. It corresponds to a direct product operation on the associated RAAGs. The following corollary can also be found in [PS07, Example 2.5] and [Cha21, Proposition 3.4].
Corollary 3.10. Let Λ be a graph. If Γ = {v} * Λ, then BB Γ ≅ A Λ .
Example 3.17 (A bouquet of triangles). Let Γ be a 2-tree as shown in Figure 8. Since Γ does not contain trefoil subgraphs, the group BB Γ is a RAAG by Corollary 3.16. The reader can check that the red edges form a tree 2-spanner of Γ.
Figure 8. A 2-tree whose flag complex is not a triangulated disk.
Theorem 4.1. Let χ∶ G → R be a discrete character. Then ker(χ) is finitely generated if and only if both [χ] and [−χ] are in Σ 1 (G).
As a major motivating example, when G is the fundamental group of a compact 3-manifold M , the BNS-invariant Σ 1 (G) describes all the possible ways in which M fibers over the circle with fiber a compact surface (see [Sta62; Thu86; BNS87]).
Remark 4.3 (The complement and the missing subspheres). It is often the case that the BNS-invariant Σ 1 (G) is better described in terms of its complement in the character sphere S(G). Moreover, for many groups of interest, the complement of the BNS-invariant is often a union of subspheres (see [MV95; KM22; Koc21; BG84; CL16] for examples). In this paper, the complement of Σ
Theorem 4.4. (A graphical criterion for Σ 1 (A Γ ), [MV95, Theorem 4.1]) Let χ∶ A Γ → R be a character. Then [χ] ∈ Σ 1 (A Γ ) if and only if L(χ) is connected and dominating.
Lemma 4.5. Let χ∶ A Γ → R be a non-zero character. Then the following statements are equivalent.
Figure 9. The subgraph Λ in Lemma 4.5 may not be a union of connected components of D(χ). Here, the graph Λ is given by the two red vertices.
Remark. Proposition 4.10 allows one to recover the previous observations as well as more properties of the BNS-invariants of BBGs, which are reminiscent of those of RAAGs. For instance, the complement of the BNS-invariant of a BBG is a rationally defined spherical polyhedron (see [KM22, Corollary 1.4] and compare with Remark 4.6).
Lemma 4.11. Let G be a group with a presentation G = ⟨S ∣ R⟩ in which for each generator s and each relator r, the exponent sum of s in r is zero. Let A be an abelian group. Then there is a bijection between Hom(G, A) and {f ∶ S → A}.
Lemma 4.13. Let Γ be a graph with ∆ Γ simply connected. Let T be a spanning tree of Γ. Fix an orientation for the edges of T . Then the following statements hold.
where (rχ)(e) = χ(τ e) − χ(ιe). See Figure 12 for examples. The next result follows from the Dicks-Leary presentation in Theorem 2.1, so it holds without additional assumptions on Γ.
Lemma 4.14. Let Γ be a connected graph. The restriction map r∶ Hom(A Γ , R) → Hom(BB Γ , R) is a linear surjection, and its kernel consists of the characters defined by the constant function V (Γ) → R.
For instance, the set Σ 1 (BB Γ ) could be empty even if Σ 1 (A Γ ) is not empty. On the other hand, the restriction map r maps each missing subspace of Hom(A Γ , R) into one of the missing subspaces of Hom(BB Γ , R) (compare Remark 4.3 and Remark 4.28). Indeed, one way to reinterpret the content of Proposition 4.10 (a particular case of [KM22, Corollary 1.3]) is to say that if χ ∈ Hom(BB Γ , R) ≅ W , then [χ] ∈ Σ 1 (BB Γ ) if and only if the line parallel to ker(r) passing through χ avoids all the missing subspaces of Hom(A Γ , R).
Figure 10. The dead edge subgraph DE(χ) consists of a pair of opposite edges, and the living edge subgraph LE(χ) consists of the remaining edges. Neither is a full subgraph. Moreover, the dead subgraph of any extension of χ is a proper subgraph of DE(χ).
Lemma 4.15. Let Γ be a connected graph and let Λ ⊆ Γ be a connected subgraph with at least one edge. Let χ ∈ Hom(BB Γ , R) be a non-zero character. Then Λ ⊆ DE(χ) if and only if there is an extension χ̃ ∈ Hom(A Γ , R) of χ such that Λ ⊆ D(χ̃).
Figure 11. The living subgraph L(χ) consists of two red vertices. It is neither connected nor dominating. The living edge subgraph LE(χ) (labelled by ±1) is connected and dominating.
Lemma 4.17. Let Γ be a biconnected graph with ∆ Γ simply connected.
(1) If Λ ⊆ Γ is a connected full subgraph, then there is a bijection between the components of its complement and the components of its link.
(2) The link of every vertex is connected.
(3) If Λ ⊆ Γ is a minimal full separating subgraph, then Λ is connected and not a single vertex.
(4) If ∣V (Γ)∣ ≥ 3, then every edge is contained in at least one triangle. In particular, we have dim ∆ Γ ≥ 2.
Theorem C (Graphical criterion for the BNS-invariant of a BBG). Let Γ be a biconnected graph with ∆ Γ simply connected. Let χ ∈ Hom(BB Γ , R) be a nonzero character. Then [χ] ∈ Σ 1 (BB Γ ) if and only if DE(χ) does not contain a full subgraph that separates Γ.
Example 4.18 (Simple connectedness is needed). Consider the cycle of length four Γ = C 4 ; see the left-hand side of Figure 12.
Figure 12. Theorem C does not hold on a graph with ∆ Γ not simply connected (left), nor on a graph with a cut vertex (right).
Corollary 4.20. Let Γ be a biconnected graph with ∆ Γ simply connected. Then Σ 1 (BB Γ ) ≠ ∅.
Proof. Let T be a spanning tree of Γ. Assign an orientation to each edge of T and write E(T ) = {e 1 , . . . , e m }. Let χ∶ E(T ) → R, χ(e k ) = 10 k , k = 1, . . . , m.
Corollary 4.22. Let Γ be a connected graph with ∆ Γ simply connected, and let Γ 1 , . . . , Γ n be its biconnected components. Then BB Γ is a RAAG if and only if BB Γ i is a RAAG for all i = 1, . . . , n.
Lemma 4.24. Let Γ be a graph with ∆ Γ simply connected, and let Λ ⊆ Γ be a subgraph. Assume that there is an edge e 0 ∈ E(Γ) with at least one endpoint not in V (Λ). Then there exists a character χ∶ BB Γ → R such that χ(e 0 ) = 1 and χ(e) = 0 for all e ∈ E(Λ). In particular, we have [χ] ∈ S Λ .
Example 4.29 (Trees). Let Γ be a tree on n vertices, and let {v 1 , . . . , v m } be the set of cut vertices of Γ. Then it follows that Σ 1 (A Γ ) is obtained from S(A Γ ) = S n by removing the hyperspheres S i defined by x i = 0 for i = 1, . . . , m (see §4.1.3). The associated missing subspaces satisfy the inclusion-exclusion principle (4.2).
Figure 13. An oriented trefoil graph with a spanning tree.
Figure 14. The extended trefoil: a new example of a BBG that is not a RAAG.
RAAG behavior. The following lemma states that the arrangement defining the BNS-invariant of any RAAG satisfies the inclusion-exclusion principle. This is due to the fact that in this case, the missing subspaces are effectively described in terms of sets of vertices of Γ and the inclusion-exclusion principle holds for subsets of a given set. The argument follows the proof of [KP14, Lemma 5.3]. We include a proof for completeness.
Lemma 4.32. Let Γ be a connected graph. Let {W j } j∈J be a collection of maximal missing subspaces in Hom(A Γ , R). Then {W j } j∈J satisfies (4.2). In particular, when J = {1, 2, 3} we have dim(W 1 + W 2 + W 3 ) = IEP(W 1 , W 2 , W 3 ).
Example 4.37. The central triangle in the trefoil graph in Figure 1 is redundant. However, if we consider the cone over the trefoil graph, then the central triangle in the base trefoil graph is not redundant. Redundant triangles can appear in higher-dimensional complexes; see Example 5.17.
• Finally, extend T 1 to a spanning tree T for Γ. To fix a notation, say E(T ) = {f 1 , f 2 , . . . , f n }. Without loss of generality, say f 1 = e 1 and f 2 = e 2 . Fix an arbitrary orientation for the edges of T . With respect to the associated system of coordinates, the subspaces W 1 , W 2 , and W 3 are given by equations of the form (4.4): W 1 = {y 1 = 0, . . . }, W 2 = {y 2 = 0, . . . }, and W 3 = {y 1 − y 2 = 0, . . . }.
Lemma 4.38. Let v ∈ Γ, and let Λ be a subgraph of lk (v). Let T Λ be a spanning tree of st (v, Λ). If f ∉ E(T Λ ), then χ f ∈ W Λ .
Proof. Suppose f ∉ E(T Λ ). Then χ f = 0 on T Λ . Since T Λ is a spanning tree of st (v, Λ), the character χ f vanishes on st (v, Λ) by Lemma 4.13. In particular, it vanishes on Λ, hence χ f ∈ W Λ .
Lemma 4.41. Let f ∈ E(T ). If f ∈ E(Z 1 ), f ≠ e 1 , e 2 , and f does not join v 3 to a vertex in Λ 1 ∩ Λ 3 nor v 2 to a vertex in Λ 1 ∩ Λ 2 , then χ f ∈ W 1 .
Moreover, let H ′ U be the image of H U under the restriction map r∶ Hom(A Γ , R) → Hom(BB Γ , R); see §4.1.3.
Proposition 4.43. Let Γ be a biconnected graph with ∆ Γ simply connected. Then Σ 1 (BB Γ ) c = R 1 (BB Γ ) ∩ S(BB Γ ).
Remark 4.44. We note that in general one has the inclusion Σ 1 (G) c ⊆ R 1 (G) ∩ S(G) thanks to [PS10, Theorem 15.8]. However, the equality does not always hold; see [Suc21, §8] for some examples, such as the Baumslag-Solitar group BS(1, 2).
Figure 18. A simple cone.
(3) If Γ = {v} * P is a fan over a path P with endpoints u and w, then ∂∆ Γ = P ∪ {(v, u), (v, w)}, and there are no interior vertices.
(4) If Γ = {v} * Γ 0 is a simple cone over Γ 0 , then ∂∆ Γ = Γ 0 , and the cone vertex v is the only interior vertex.
Figure 20. A 3-dimensional complex that contains a redundant triangle.
Remark 4.6. It follows from Theorem 4.4 that Σ 1 (A Γ ) c is a rationally defined spherical polyhedron, given by a union of missing subspheres (see [MV95, Theorem 5.1]). Moreover, each (maximal) missing subsphere consists of characters that vanish on a (minimal) separating subgraph of Γ, thanks to Lemma 4.5 (see also [Str12, Proposition A4.14]). For example, the missing hyperspheres in Σ 1 (A Γ ) c are in bijective correspondence with the cut vertices of Γ. We will further discuss the correspondence in Example 4.29.
4.1.2. The BNS-invariants of BBGs. As in the case of RAAGs, some elementary properties of the BNS-invariants of BBGs can be seen directly from the defining graph.
Example 4.7. The graph Γ is complete if and only if Σ 1 (BB Γ ) = S(BB Γ ). Indeed, in this case, the group BB Γ is free abelian.
Example 4.8. At the opposite extreme, if Γ is connected and has a cut vertex, then Σ 1 (BB Γ ) = ∅ (see [PS10, Corollary 15.10]). Vice versa, if Γ has no cut vertices and BB Γ is finitely presented, then we will prove in Corollary 4.20 that Σ 1 (BB Γ ) ≠ ∅.
is given by Corollary 4.20 and [PS10, Corollary 15.10]. It follows from Theorem 4.1 that BB Γ algebraically fibers if and only if there exists a discrete character χ ∶ BB Γ → R such that both [χ] and [−χ] are in Σ 1 (BB Γ ). Given that BB Γ is torsion-free, the equivalence of (1) and (2) is just Stallings' theorem about the ends of groups (see [BH99, Theorem I.8.32]). The fact that (2) implies (3) is discussed in [Str12, Example 3 in A2.1a]. The fact that (3) implies (1) can be seen directly from the Dicks-Leary presentation from Theorem 2.1. Finally, we show that (5) is equivalent to (2).
Proposition 4.36. Let G be a finitely generated group. Suppose that there exist three maximal missing subspaces W 1 , W 2 , and W 3 in Hom(G, R). If they form a redundant triple of subspaces, then G is not a RAAG.
Proof. Since {W 1 , W 2 , W 3 } is a redundant triple of subspaces, by Lemma 4.34 we have that dim(W 1 + W 2 + W 3 ) + 1 ≤ IEP(W 1 , W 2 , W 3 ). Assume by contradiction that G is a RAAG. Then by Lemma 4.32 we have dim(W 1 + W 2 + W 3 ) = IEP(W 1 , W 2 , W 3 ), which contradicts the above inequality.
Acknowledgements. We thank Tullia Dymarz for bringing [KM22] to our attention and Daniel C. Cohen, Pallavi Dani, Max Forester, Wolfgang Heil, Alexandru Suciu, and Matthew Zaremsky for helpful conversations. We thank the School of Mathematics at the Georgia Institute of Technology for their hospitality during a visit in which part of this work was done. The second author acknowledges support from the AMS and the Simons Foundation.
in E(T ). Again, by Lemma 3.3 we obtain a K 4 , hence a contradiction. Therefore, the graph Γ has no crowned triangles.
Conversely, suppose that ∆ Γ has no crowned triangles. By Lemma 5.7 the graph Γ decomposes as edge-bondings of some fans and simple cones Γ 1 , . . . , Γ m . Let {e 1 , . . . , e n } be the set of bonding edges. Note that by Remark 5.4 these edge-bonding operations can be performed simultaneously. Since Γ has no crowned triangles, by Lemma 5.10, each of the edges in {e 1 , . . . , e n } is good. We now construct a tree 2-spanner for Γ. We do this by constructing a tree 2-spanner T i for each Γ i and then gluing them together. For a simple cone component Γ i , choose T i to be the spoke. For a fan component Γ i , write Γ i = {v i } * P ni and order the vertices of P ni as w 1 , . . . , w ni . Define T i to consist of the edges (v i , w 2 ), . . . , (v i , w ni −1 ), together with two more edges, one from each peripheral triangle, chosen as follows. If the peripheral edge or the modified-peripheral edge in a peripheral triangle is involved in some edge-bondings, then choose that edge to be in T i . If none of them is involved in any edge-bonding, then choose either one of them. Note that it is not possible that both the peripheral edge and the modified-peripheral edge of the same peripheral triangle are involved in edge-bondings; otherwise, we would see a K 4 , which is against the assumption that ∆ Γ is 2-dimensional.
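To make the construction above concrete, here is a small sketch that verifies the defining property of a tree 2-spanner (every edge of Γ joins vertices at distance at most 2 in T ). It uses the networkx library and a fan {v} * P 4 as a toy input; both the library choice and the example graph are illustrative assumptions, not part of the paper.

import networkx as nx

def is_tree_2_spanner(G, T):
    # T must be a spanning tree of G in which the endpoints of every
    # G-edge are at distance at most 2.
    if not (nx.is_tree(T) and set(T.nodes) == set(G.nodes)):
        return False
    dist = dict(nx.all_pairs_shortest_path_length(T))
    return all(dist[u][v] <= 2 for u, v in G.edges)

# Fan {v} * P_4: cone vertex 0 over the path 1-2-3-4.
G = nx.Graph([(0, 1), (0, 2), (0, 3), (0, 4), (1, 2), (2, 3), (3, 4)])
# Spoke edges (0,2), (0,3) plus one edge from each peripheral triangle,
# following the recipe described above.
T = nx.Graph([(0, 2), (0, 3), (1, 2), (3, 4)])
print(is_tree_2_spanner(G, T))   # True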
Theorem A. Let Γ be a biconnected graph such that ∆ Γ is 2-dimensional and simply connected. Then the following statements are equivalent.
(1) Γ admits a tree 2-spanner.
(2) ∆ Γ does not contain crowned triangles.
(3) BB Γ is a RAAG.
(4) BB Γ is an Artin group.
Proof. The equivalence (1) ⇔ (2) follows from Proposition 5.11. Moreover, the implication (1) ⇒ (3) is Theorem B. The implication (3) ⇒ (4) is obvious. We prove the implication (3) ⇒ (2) as follows. Assume that ∆ Γ contains a crowned triangle τ . Then by Lemma 5.13 we know that τ is also a redundant triangle. Then it follows from Theorem E that BB Γ is not a RAAG. The implication (4) ⇒ (2) is obtained in a similar way, using Corollary 4.45 instead of Theorem E.
Papadima and Suciu in [PS07, Proposition 9.4] showed that if ∆ Γ is a certain type of triangulation of the 2-disk (which they call extra-special triangulation), then BB Γ is not a RAAG. Those triangulations always contain a crowned triangle, so Theorem A recovers Papadima-Suciu's result and extends it to a wider class of graphs, such as arbitrary triangulations of disks (see Example 5.14), or even flag complexes that are not triangulations of disks (see Example 3.17).
Example 5.14 (The extended trefoil continued). Let Γ be the graph in Figure 14.
Since Γ contains a crowned triangle, the group BB Γ is not a RAAG by Theorem A. Note that this fact does not follow from [PS07]: the flag complex ∆ Γ is a triangulation of the disk but not an extra-special triangulation. This fact also does not follow from [DW18], because all the subspace arrangement homology groups vanish for this group BB Γ , that is, they look like those of a RAAG (as observed in Example 4.31).
Remark 5.15. The criterion for a BBG to be a RAAG from Theorem B works in any dimension. On the other hand, Theorem A fails for higher dimensional complexes. Indeed, the mere existence of a crowned triangle is not very informative in higher dimension cases; see Example 5.16. However, the existence of a redundant triangle is an obstruction for a BBG to be a RAAG even in higher dimensional complexes; see Example 5.17.
Example 5.16 (A crowned triangle in dimension three does not imply that the BBG is not a RAAG). Let Γ be the cone over the trefoil graph in Figure 1. Then ∆ Γ is 3-dimensional and Γ contains a crowned triangle (the one sitting in the trefoil graph). However, this crowned triangle is not a redundant triangle, and the group BB Γ is actually a RAAG by Corollary 3.10.
Example 5.17 (A redundant triangle in dimension three implies that the BBG is not a RAAG). Consider the graph Γ in Figure 20. Then ∆ Γ is 3-dimensional and every 3-simplex has a 2-face on ∂∆ Γ . However, we can show that this BB Γ is not a RAAG. The triangle induced by the vertices v 1 , v 2 , and v 3 is a redundant triangle.
References
M. Bestvina and N. Brady. "Morse theory and finiteness properties of groups". In: Invent. Math. 129.3 (1997), pp. 445-470.
M. W. Bern. Network design problems: Steiner trees and spanning k-trees. 1987.
R. Bieri and J. R. J. Groves. "The geometry of the set of characters induced by valuations". In: J. Reine Angew. Math. 347 (1984), pp. 168-195.
M. R. Bridson and A. Haefliger. Metric spaces of non-positive curvature. Vol. 319. Grundlehren der mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer-Verlag, Berlin, 1999, pp. xxii+643.
R. Bieri, W. D. Neumann, and R. Strebel. "A geometric invariant of discrete groups". In: Invent. Math. 90.3 (1987), pp. 451-477.
M. R. Bridson. "On the recognition of right-angled Artin groups". In: Glasg. Math. J. 62.2 (2020), pp. 473-475.
E. M. Barquinero, L. Ruffoni, and K. Ye. "Graphical splittings of Artin kernels". In: J. Group Theory 24.4 (2021), pp. 711-735.
L. Cai. "On spanning 2-trees in a graph". In: Discrete Appl. Math. 74.3 (1997), pp. 203-216.
L. Cai and D. G. Corneil. "Tree spanners". In: SIAM J. Discrete Math. 8.3 (1995), pp. 359-387.
R. Charney, K. Ruane, N. Stambaugh, and A. Vijayan. "The automorphism group of a graph product with no SIL". In: Illinois J. Math. 54.1 (2010), pp. 249-262.
Y.-C. Chang. "Identifying Dehn functions of Bestvina-Brady groups from their defining graphs". In: Geom. Dedicata 214 (2021), pp. 211-239.
Y.-C. Chang. "Abelian splittings and JSJ-decompositions of Bestvina-Brady groups". In: J. Group Theory (to appear) (2022). arXiv: 2003.09927 [math.GR].
C. H. Cashen and G. Levitt. "Mapping tori of free group automorphisms, and the Bieri-Neumann-Strebel invariant of graphs of groups". In: J. Group Theory 19.2 (2016), pp. 191-216.
C. Cunningham, A. Eisenberg, A. Piggott, and K. Ruane. "Recognizing right-angled Coxeter groups using involutions". In: Pacific J. Math. 284.1 (2016), pp. 41-77.
P. Dani and I. Levcovitz. "Right-angled Artin subgroups of right-angled Coxeter and Artin groups". In: arXiv e-prints (2020). arXiv: 2003.05531 [math.GR].
W. Dicks and I. J. Leary. "Presentations for subgroups of Artin groups". In: Proc. Amer. Math. Soc. 127.2 (1999), pp. 343-348.
A. Dimca, S. Papadima, and A. I. Suciu. "Quasi-Kähler Bestvina-Brady groups". In: J. Algebraic Geom. 17.1 (2008), pp. 185-197.
A. Dimca, S. Papadima, and A. I. Suciu. "Topology and geometry of cohomology jump loci". In: Duke Math. J. 148.3 (2009), pp. 405-457.
P. Deshpande and M. Roy. "On the structure of finitely presented Bestvina-Brady groups". In: arXiv e-prints (2022). arXiv: 2205.09154 [math.GR].
C. Droms. "Isomorphisms of graph groups". In: Proc. Amer. Math. Soc. 100.3 (1987), pp. 407-408.
C. Droms. "Subgroups of graph groups". In: J. Algebra 110.2 (1987), pp. 519-522.
M. B. Day and R. D. Wade. "Subspace arrangements, BNS invariants, and pure symmetric outer automorphisms of right-angled Artin groups". In: Groups Geom. Dyn. 12.1 (2018), pp. 173-206.
F. Haglund and D. T. Wise. "Special cube complexes". In: Geom. Funct. Anal. 17.5 (2008), pp. 1551-1620.
D. H. Kochloukova and L. Mendonça. "On the Bieri-Neumann-Strebel-Renz Σ-invariants of the Bestvina-Brady groups". In: Forum Math. 34.3 (2022), pp. 605-626.
T. Koberda. "Right-angled Artin groups and a generalized isomorphism problem for finitely generated subgroups of mapping class groups". In: Geom. Funct. Anal. 22.6 (2012), pp. 1541-1590.
D. H. Kochloukova. "On the Bieri-Neumann-Strebel-Renz Σ 1 -invariant of even Artin groups". In: Pacific J. Math. 312.1 (2021), pp. 149-169.
N. Koban and A. Piggott. "The Bieri-Neumann-Strebel invariant of the pure symmetric automorphisms of a right-angled Artin group". In: Illinois J. Math. 58.1 (2014), pp. 27-41.
I. J. Leary and M. Saadetoglu. "The cohomology of Bestvina-Brady groups". In: Groups Geom. Dyn. 5.1 (2011), pp. 121-138.
J. Meier and L. VanWyk. "The Bieri-Neumann-Strebel invariants for graph groups". In: Proc. London Math. Soc. (3) 71.2 (1995), pp. 263-280.
S. Papadima and A. I. Suciu. "Algebraic invariants for right-angled Artin groups". In: Math. Ann. 334.3 (2006), pp. 533-555.
S. Papadima and A. Suciu. "Algebraic invariants for Bestvina-Brady groups". In: J. Lond. Math. Soc. (2) 76.2 (2007), pp. 273-292.
S. Papadima and A. I. Suciu. "Bieri-Neumann-Strebel-Renz invariants and homology jumping loci". In: Proc. Lond. Math. Soc. (3) 100.3 (2010), pp. 795-834.
H. Servatius. "Automorphisms of graph groups". In: J. Algebra 126.1 (1989), pp. 34-60.
J. Stallings. "On fibering certain 3-manifolds". In: Topology of 3-manifolds and related topics (Proc. The Univ. of Georgia Institute, 1961). Prentice-Hall, Englewood Cliffs, N.J., 1962, pp. 95-100.
R. Strebel. "Notes on the Sigma invariants". In: arXiv e-prints (2012). arXiv: 1204.0214 [math.GR].
A. I. Suciu. "Sigma-invariants and tropical varieties". In: Math. Ann. 380.3-4 (2021), pp. 1427-1463.
W. P. Thurston. "A norm for the homology of 3-manifolds". In: Mem. Amer. Math. Soc. 59.339 (1986), i-vi and 99-130.
| [] |
[
"COALESCING AT 8 GEV IN THE FERMILAB MAIN INJECTOR",
"COALESCING AT 8 GEV IN THE FERMILAB MAIN INJECTOR"
] | [
"D J Scott \n60510Fermilab, BataviaILU.S.A\n",
"D Capista \n60510Fermilab, BataviaILU.S.A\n",
"B Chase \n60510Fermilab, BataviaILU.S.A\n",
"J Dye \n60510Fermilab, BataviaILU.S.A\n",
"I Kourbanis \n60510Fermilab, BataviaILU.S.A\n",
"K Seiya \n60510Fermilab, BataviaILU.S.A\n",
"M.-J Yang \n60510Fermilab, BataviaILU.S.A\n"
] | [
"60510Fermilab, BataviaILU.S.A",
"60510Fermilab, BataviaILU.S.A",
"60510Fermilab, BataviaILU.S.A",
"60510Fermilab, BataviaILU.S.A",
"60510Fermilab, BataviaILU.S.A",
"60510Fermilab, BataviaILU.S.A",
"60510Fermilab, BataviaILU.S.A"
] | [] | For Project X, it is planned to inject a beam of 3 10 11 particles per bunch into the Main Injector. To prepare for this by studying the effects of higher intensity bunches in the Main Injector it is necessary to perform coalescing at 8 GeV. The results of a series of experiments and simulations of 8 GeV coalescing are presented. To increase the coalescing efficiency adiabatic reduction of the 53 MHz RF is required. This results in ~70% coalescing efficiency of 5 initial bunches. Data using wall current monitors has been taken to compare previous work and new simulations for 53 MHz RF reduction, bunch rotations and coalescing, good agreement between experiment and simulation was found. By increasing the number of bunches to 7 and compressing the bunch energy spread a scheme generating approximately 3 10 11 particles in a bunch has been achieved. These bunches will then be used in further investigations. | null | [
"https://export.arxiv.org/pdf/1301.7439v1.pdf"
] | 119,232,905 | 1301.7439 | 2b06186c80fd4130c92bb02cd8dbeaaa4021ca9e |
COALESCING AT 8 GEV IN THE FERMILAB MAIN INJECTOR
D J Scott
60510Fermilab, BataviaILU.S.A
D Capista
60510Fermilab, BataviaILU.S.A
B Chase
60510Fermilab, BataviaILU.S.A
J Dye
60510Fermilab, BataviaILU.S.A
I Kourbanis
60510Fermilab, BataviaILU.S.A
K Seiya
60510Fermilab, BataviaILU.S.A
M.-J Yang
60510Fermilab, BataviaILU.S.A
COALESCING AT 8 GEV IN THE FERMILAB MAIN INJECTOR
For Project X, it is planned to inject a beam of 3 10 11 particles per bunch into the Main Injector. To prepare for this by studying the effects of higher intensity bunches in the Main Injector it is necessary to perform coalescing at 8 GeV. The results of a series of experiments and simulations of 8 GeV coalescing are presented. To increase the coalescing efficiency adiabatic reduction of the 53 MHz RF is required. This results in ~70% coalescing efficiency of 5 initial bunches. Data using wall current monitors has been taken to compare previous work and new simulations for 53 MHz RF reduction, bunch rotations and coalescing, good agreement between experiment and simulation was found. By increasing the number of bunches to 7 and compressing the bunch energy spread a scheme generating approximately 3 10 11 particles in a bunch has been achieved. These bunches will then be used in further investigations.
INTRODUCTION
Coalescing is a non-adiabatic process that can be used to increase bunch intensities [1]. In the Fermilab Main Injector (MI) five 53 MHz bunches are rotated, via synchrotron oscillations, in a 2.5 MHz bucket and then recaptured in a 53 MHz RF bucket. The basic manipulations are shown in Figure 1, and Table 1 gives some basic parameters for standard operations and those expected for Project X. The coalescing efficiency, α, is the ratio of captured to initial particles. It depends strongly on the energy spread, σ E , of the beam before rotation; the bunch length, σ t , has little to no influence. Figure 2 shows the results of simulations calculating α for different initial σ E and σ t with 5 bunches. The typical beam injected into the MI has σ E = 3 MeV, giving α = 63 %. Previous studies [2] attempted to achieve α = 85 % by reducing the energy spread using bunch compression with the 53 MHz RF and bunch stretching on the unstable fixed point during rotation in the 2.5 MHz RF. Implementing bunch stretching in the MI requires modifications to the timing resolution of the low level RF (LLRF) system that have yet to be implemented.
Figure 2: α vs σ E with σ t = 1 and 9 ns (red, green).
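The dependence of α on σ E (and its near independence of σ t ) can be reproduced with a very crude Monte Carlo. The sketch below is illustrative only and is not the simulation used for Figure 2: the 2.5 MHz bucket is replaced by a linearised quarter-turn phase-space rotation, recapture is a hard time window of one 53 MHz bucket, and the aspect-ratio constant R is an assumed value.

import numpy as np

HALF_BUCKET_53 = 9.43e-9   # half of the 53 MHz bucket length (s)
R = 2.0e-9                 # assumed 2.5 MHz phase-space aspect ratio (s/MeV)

def efficiency(sigma_e_mev, sigma_t_s=2e-9, n=100_000, seed=0):
    rng = np.random.default_rng(seed)
    e0 = rng.normal(0.0, sigma_e_mev, n)
    # After a quarter synchrotron rotation the final time offset depends only
    # on the initial energy offset; the initial time offset (sigma_t_s) rotates
    # into energy and drops out, which is why sigma_t barely matters.
    t_final = R * e0
    return np.mean(np.abs(t_final) < HALF_BUCKET_53)

for s_e in (2.0, 3.0, 4.0):
    print(s_e, "MeV ->", efficiency(s_e))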
This paper will outline the results of experiments and simulations that have achieved coalesced bunch intensities of 3 × 10 11 particles using the existing MI LLRF. This has been achieved by using 7 initial bunches of the highest intensity available from the Fermilab Booster and allowing the beam to rotate in the 2.5 MHz RF for 0.75 rotations, instead of 0.25, to allow for the current minimum time between LLRF commands.
DETERMINING THE BEAM ENERGY SPREAD
To compare experimental results with simulations, an estimation of σ E is required. This has been achieved by using Wall Current Monitor (WCM) measurements of the beam intensity vs longitudinal position around the ring during compression of σ E . Compressing σ E is achieved by adiabatically reducing the 53 MHz voltage from the nominal value of 1.1 MV. During this manipulation σ t increases, and that can be measured from the WCM data. Figure 3 shows an example of the bunch length increasing as the 53 MHz RF is adiabatically reduced from 1.1 MV to 45 kV over 0.2 seconds. The measured values of σ t can be compared to simulations for a given σ E ; a good match indicates a reasonable estimation of σ E . The simulation and experimental results are shown in Figure 4 using the σ E values given in Table 2. The simulation deviates from the measurements near the end of the compression for the lower intensity bunch data. This could be because the WCM data become noisier as the peak signal decreases, so the fits underestimate the bunch length. The longitudinal beam emittance, ε, can be estimated using ε = 4πσ t σ E , and this is also given in Table 2. The expected emittance from the Booster is 0.1 eV s, indicating that the low intensity bunches are not well matched, which is expected as nominal operations use the higher, 5 × 10 11 , bunch intensities.
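The voltage dependence seen in Figures 3 and 4 follows the standard matched-bunch scaling (quoted here as a consistency check, not derived in the paper): in a linearised stationary bucket σ E /σ t ∝ √V RF , so at constant emittance ε = 4πσ t σ E one has σ t ∝ V RF −1/4 and σ E ∝ V RF 1/4 . Reducing the 53 MHz voltage from 1.1 MV to 45 kV should therefore compress the energy spread by roughly (1.1 MV / 45 kV) 1/4 ≈ 2.2, in reasonable agreement with the 6.7 MeV → 2.8 MeV reduction reported in Table 2 for the better-matched, higher-intensity bunches.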
ROTATION IN THE 2.5 MHZ RF
The rotation time in the 2.5 MHz bucket depends on the voltage, and can be calculated from the synchrotron tune. This has been measured and compared with theory. Figure 5 shows a typical contour plot of WCM data for 5 rotating bunches. The rotation time can be found by plotting the peak signal for each turn of WCM data and then finding the time between maximums of these peaks, as shown in Figure 6. Figure 6 also shows the simulated and measured rotation times for different values of the 2.5 MHz RF voltage. There is excellent agreement between the two.
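For small oscillation amplitudes, the rotation period is the synchrotron period, which scales as 1/√V. The sketch below evaluates the standard small-amplitude formula; the Main Injector parameters (circumference, transition gamma, harmonic number h = 28 for the 2.5 MHz system) are assumed, approximate values used only to illustrate the scaling, not numbers taken from this paper.

import numpy as np

C       = 3319.4                 # assumed MI circumference (m)
E_TOT   = 8.938e9                # total energy (eV) for 8 GeV kinetic protons
GAMMA   = E_TOT / 0.938e9
BETA    = np.sqrt(1.0 - 1.0 / GAMMA**2)
GAMMA_T = 21.6                   # assumed transition gamma
ETA     = 1.0 / GAMMA_T**2 - 1.0 / GAMMA**2     # slip factor
F_REV   = BETA * 2.998e8 / C                    # revolution frequency (Hz)
H       = 28                     # harmonic number of the 2.5 MHz system

def synchrotron_period(v_rf):
    """Small-amplitude synchrotron period (s) for RF voltage v_rf (V)."""
    f_s = F_REV * np.sqrt(H * abs(ETA) * v_rf / (2 * np.pi * BETA**2 * E_TOT))
    return 1.0 / f_s

for v in (20e3, 40e3, 60e3):
    print(f"V = {v/1e3:.0f} kV -> T_s = {synchrotron_period(v)*1e3:.1f} ms")
# At 60 kV this gives roughly 21 ms, i.e. 0.75 rotations in about 15 ms,
# consistent with the timing quoted in the next section.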
COALESCING WITH EXISTING RF
The timing limitations of the MI LLRF were overcome by para-phasing the 53 MHz RF whilst the 2.5 MHz was on, in order to set the effective 53 MHz voltage to zero. The bunches were allowed to rotate for 0.75 rotations (i.e., synchrotron periods), instead of the minimum 0.25, in the 2.5 MHz RF in order to have enough time between LLRF commands. After 0.75 rotations the 2.5 MHz is switched off and the 53 MHz is snapped back on, again with para-phasing. Figure 7 shows the 53 and 2.5 MHz RF voltages and the beam current in the machine for this coalescing scheme. Figure 8 shows an example WCM contour plot of successful coalescing. Here 7 initial bunches start rotating for about 15 ms in the 2.5 MHz RF (0.75 rotations). They are then recaptured in the 53 MHz as one high intensity middle bunch with two smaller intensity satellites on either side.
Profile at Recapture
There is good agreement between the simulated and measured bunch profiles at recapture, shown in Figure 9. Expected increases in intensity due to increased bunch compression can also be seen.
Coalescing Efficiency
Figure 10 shows a comparison between simulations and measurements of the expected coalescing efficiency for 5 and 7 bunches and different 53 MHz RF voltages after compression, i.e. different σ E values. The reduction in efficiency as the 53 MHz is reduced below 15 kV is expected, as the beam emittance becomes comparable to the bucket area at around this voltage and so particles are lost.
Figure 10: Simulated (lines) and measured (points) coalescing efficiency for 5 (red, blue) and 7 (green, black) bunches.
Number of Particles in Coalesced Bunch
The number of particles in the bunch was found by normalising the integrated WCM signal with beam current measurements from the MI control system. The number of particles in the central coalesced bunch is shown in Figure 11 for over 300 different events.
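A minimal sketch of this normalisation (the variable names, windowing, and revolution frequency below are illustrative assumptions, not the actual MI analysis code):

import numpy as np

E_CHARGE = 1.602e-19
F_REV    = 89.8e3        # assumed MI revolution frequency (Hz)

def particles_in_bunch(wcm, bunch_slice, i_beam):
    """wcm: sampled WCM signal over one turn; bunch_slice: the samples spanning
    the coalesced bunch; i_beam: measured DC beam current (A)."""
    n_total = i_beam / (E_CHARGE * F_REV)          # total particles in the ring
    frac = np.sum(wcm[bunch_slice]) / np.sum(wcm)  # fraction within the bunch
    return frac * n_total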
CONCLUSION
It has been shown that bunches with 3 × 10 11 particles can be generated in the MI using coalescing at 8 GeV and the existing RF. Throughout, the experiments have matched simulations well. There is a spread of bunch intensities of ± 5 % between different measurements. This is attributed to effects that are difficult to control, such as the alignment of the two RF systems drifting over time. As this experiment is in an initial phase there are, as yet, no good online diagnostic and analysis tools to help maintain a consistent coalescing efficiency. These high intensity bunches have been used to study space charge tune shifts in the MI in preparation for Project X [3].
Figure 1: Coalescing schematic. 5 bunches initially in 53 MHz buckets (blue lines) rotate in a 2.5 MHz bucket (red line), then are recaptured in the 53 MHz RF; the particles then mix in the central 53 MHz bucket.
Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the United States Department of Energy.
Figure 3: WCM data (points) for initial bunches (left) and after σ E compression (right). Gaussian fits for each bunch are also shown, used to determine σ t .
Figure 4: Mean σ t for 0.25 and 0.15 s compression for 5 × 10 10 (red, black) and 1 × 10 10 (blue, green) bunch intensities.
Figure 5: Contour plot of the WCM signal data showing 5 initial bunches rotating in the 2.5 MHz RF. The horizontal axis is longitudinal position in the MI.
Figure 6: (Left) example calculation of rotation time in the 2.5 MHz bucket showing the peak of the WCM signal for each turn of data. (Right) simulated (line) and measured (points) rotation time for different 2.5 MHz RF voltages.
Figure 7: 53 MHz voltage (red), 2.5 MHz RF voltage (yellow) and beam current (green) during coalescing.
Figure 8: WCM contour plot showing an example of coalescing with 7 initial bunches.
Figure 9: Beam profile for WCM data (top) and simulations (bottom) when the 53 MHz is reduced to 30 (red) and 25 (green) kV.
Figure 11: Coalesced bunch intensities.
Table 1: Beam Power in the MI
                    Unit     Operations   Project X
Beam Power          kW       400          2000
Total Intensity     10^14    0.4          1.6
# of Bunches        -        492          458
Bunch Intensity     10^11    1.0          3.0
MI Cycle Time       s        2.2          1.4

Table 2: Longitudinal emittance and σ E determined by simulations.
Bunch Intensity   Initial σ E (MeV)   Final σ E (MeV)   ε (eV s)
1 × 10^10         9.5                 3.3               0.22
5 × 10^10         6.7                 2.8               0.11
[1] J. Dye et al., "Improvements in Bunch Coalescing in the Fermilab Main Ring," PAC 1995.
[2] K. Seiya et al., "Space Charge Measurements with a High Intensity Bunch at the Fermilab Main Injector," PAC 2011.
[3] D. J. Scott et al., "Single Few Bunch Space Charge Effects at 8 GeV in the Fermilab Main Injector," these proceedings.
| [] |
[
"THE STABLE ADAMS OPERATIONS ON HERMITIAN K-THEORY",
"THE STABLE ADAMS OPERATIONS ON HERMITIAN K-THEORY"
] | [
"Jean Fasel ",
"Olivier Haution "
] | [] | [] | We prove that exterior powers of (skew-)symmetric bundles induce a λ-ring structure on the ring GW 0 (X) ⊕ GW 2 (X), when X is a scheme where 2 is invertible. Using this structure, we define stable Adams operations on Hermitian K-theory. As a byproduct of our methods, we also compute the ternary laws associated to Hermitian K-theory. | null | [
"https://export.arxiv.org/pdf/2005.08871v3.pdf"
] | 218,673,850 | 2005.08871 | a6d1bf47ed3e9e8546439efee6c5e070890a0e75 |
THE STABLE ADAMS OPERATIONS ON HERMITIAN K-THEORY
7 Jun 2023
Jean Fasel
Olivier Haution
THE STABLE ADAMS OPERATIONS ON HERMITIAN K-THEORY
7 Jun 2023
We prove that exterior powers of (skew-)symmetric bundles induce a λ-ring structure on the ring GW 0 (X) ⊕ GW 2 (X), when X is a scheme where 2 is invertible. Using this structure, we define stable Adams operations on Hermitian K-theory. As a byproduct of our methods, we also compute the ternary laws associated to Hermitian K-theory.
Introduction
From their introduction by Adams in his study of vector fields on spheres [Ada62], Adams operations have been extremely useful in solving various problems in topology, algebra and beyond. One may mention for instance the proof of Serre vanishing conjecture by Gillet-Soulé [GS87], or their use in intersection theory. In algebraic geometry, the work of several authors Date: June 8, 2023. This work was supported by the DFG grant HA 7702/5-1 and Heisenberg fellowship HA 7702/4-1.
permitted to extend these operations (initially defined at the level of the Grothendieck group K 0 ) to the whole world of K-theory; the most recent and probably most natural extension being due to Riou [Rio10] using (stable) motivic homotopy theory.
Over a scheme X, it is often useful to study vector bundles endowed with some extra decoration, such as a symmetric or a symplectic form. The analogues of the Grothendieck group K 0 (X) in this context are the so-called Grothendieck-Witt groups (or Hermitian K-theory groups) GW i (X) for i ∈ Z/4 (see e.g. [Sch17]), which classify symmetric and symplectic bundles [Wal03]. Very often, the constructions and questions pertaining to algebraic K-theory can be generalized to the context of Grothendieck-Witt groups. For instance, Serre's Vanishing Conjecture makes sense in this broader context [FS08].
As for the Adams operations, Zibrowius [Zib15,Zib18] has proved that the exterior power operations on symmetric bundles yield a λ-ring structure on the Grothendieck-Witt group GW 0 (X) of any smooth variety X over a field of characteristic not two. This provides in particular Adams operations on these groups. It is not very difficult to construct λ-operations in GW 0 (X), and a significant portion of the papers [Zib15,Zib18] consists in showing that this pre-λ-ring is actually a λ-ring, which means that the λ-operations verify certain additional relations pertaining to their multiplicative and iterative behaviour. In particular, it is not so difficult to construct the Adams operations ψ n , but much harder to show that they are multiplicative and verify the relations ψ mn = ψ m • ψ n . To prove that GW 0 (X) is a λ-ring, Zibrowius followed the strategy used in [BGI71] for the analog problem in K-theory, and reduced the question to proving that the symmetric representation ring GW 0 (G) of an affine algebraic group G (over a field of characteristic not two) is a λ-ring. This is done by further reducing to the case when G is the split orthogonal group, and using explicit descriptions of the representations of certain subgroups in that case.
A first purpose of this paper is to extend the construction of Zibrowius in two directions:
(1) allow X to be an arbitrary quasi-compact quasi-separated Z[ 1 2 ]-scheme admitting an ample family of line bundles, (2) replace GW 0 (X) with GW ± (X), the ring of symmetric and symplectic forms.
The objective is achieved by first showing that the map GW 0 (G) → GW 0 (G Q ) is injective, when G is a split reductive algebraic group over Z[ 1 2 ]. Since the target is a λ-ring by the result of Zibrowius, so is GW 0 (G), and thus also GW 0 (X) when X is as in (1).
For (2), a natural strategy is to mimic Zibrowius's proof, by considering not just symmetric representations of algebraic groups, but also skew-symmetric ones. Although we believe that this idea might work, we were not able to implement it satisfyingly. Instead we observe that we may pass from GW − (X) to GW + (X) using the quaternionic projective bundle theorem [PW21].
The Witt groups are natural companions of the Grothendieck-Witt groups, obtained from them by modding out the hyperbolic classes. Their behaviour is somewhat easier to understand, and they keep track of an important part of the quadratic information, while forgetting some of the K-theoretic information. Our λ-ring structure on the Grothendieck-Witt groups does not descend to one on the Witt groups. There is a good reason for this: the Witt ring cannot admit a (functorial) λ-ring structure, because it takes the value F 2 on every algebraically closed field, and F 2 has no such structure. Nonetheless, we prove that the odd Adams operations (as well as the even ones when additionally −1 is a square) do descend to operations on the Witt ring. It would be interesting to find algebraic axioms describing a weak form of the structure of λ-ring (including odd Adams operations) which applies to the Witt ring, but we will not investigate this question further in this paper.
The next natural step consists in considering the groups GW i (X) for i odd, as well as the higher Grothendieck-Witt groups GW i j (X) for j ∈ Z. To do so, we focus on Adams operations, and follow the approach pioneered by Riou [Rio10] to construct stable versions of those. The fact that GW ± (X) is a λ-ring ends up being a crucial input, allowing us to understand the behaviour of the Adams operations with respect to stabilization. This approach is carried out in Section 5, where we build a morphism of motivic ring spectra, for any integer n ∈ N Ψ n : GW → GW 1 n * . Here the left-hand side is the spectrum representing Hermitian K-theory and the right-hand side is the same after inversion of the class n * ∈ GW + (X), which equals n when n is odd, and the class of the hyperbolic n-dimensional symmetric form when n is even. These operations extend the Adams operations on K-theory, in the sense that there is a commutative diagram of motivic ring spectra When n is even, inverting n * in GW + (X) seems to be a fairly destructive procedure, so in practice the stable even Adams operations are unlikely to be very valuable improvements of their K-theoretic counterparts. By contrast, we expect that the odd operations will be useful in many situations. For instance, Bachmann and Hopkins recently used them in [BH20] to compute the η-inverted homotopy sheaves of the algebraic symplectic and special linear cobordism spaces. Their construction of Adams operations is quite different in spirit to the one presented here but satisfy (almost) the same properties (see [BH20,Remark 3.2]).
In the last section of this paper, we offer an application under the form of the computation of the ternary laws associated to Hermitian K-theory. These laws are the analogue, in the context of Sp-oriented ring spectra, of the formal group laws associated to any oriented ring spectrum. In short, they express the characteristic classes of a threefold product of symplectic bundles of rank 2, and are expected to play an important role in the classification of Sp-oriented cohomology theories. We refer the interested reader to [DF23] for more information on these laws.
Grothendieck-Witt groups and spectra
All schemes will be assumed to be quasi-compact and quasi-separated, and to admit an ample family of line bundles.
Let X be a scheme. In this paper, we will denote by GW + (X), resp. GW − (X), the Grothendieck-Witt group of symmetric forms, resp. skew-symmetric forms, defined e.g. in [Wal03,§6] using the exact category of vector bundles over X. The product of two skewsymmetric forms being symmetric, we have a pairing
GW − (X) × GW − (X) → GW + (X) turning GW ± (X) = GW + (X) ⊕ GW − (X) into a (commutative) Z/2-graded ring.
Assume now that X is a scheme over Z[ 1 2 ]. Following [Sch17,Definition 9.1], we can consider the Grothendieck-Witt groups GW i j (X) for any i, j ∈ Z which are 4-periodic in i in the sense that there are natural isomorphisms GW i j (X) ≃ GW i+4 j (X) for any i ∈ Z. For X affine and i = 0, the groups GW 0 j (X) are (naturally isomorphic to) the orthogonal K-theory groups KO j (X) as defined by Karoubi, while for i = 2 (and X still affine) the groups GW 2 j (X) are (naturally isomorphic to) the symplectic K-theory groups KSp j (X) ([Sch17, Corollary A.2]). Also by [Wal03, Theorem 6.1] and [Sch17, Proposition 5.6] we have natural isomorphisms GW + (X) ≃ GW 0 0 (X) and GW − (X) ≃ GW 2 0 (X). Notation 1.1. We will denote by h ∈ GW 0 0 (Spec(Z[ 1 2 ])), resp. τ ∈ GW 2 0 (Spec(Z[ 1 2 ])), the class of the hyperbolic symmetric, resp. skew-symmetric, bilinear form. When u ∈ (Z[ 1 2 ]) × , we will denote by u ∈ GW 0 0 (Spec(Z[ 1 2 ])) the class of the symmetric bilinear form (x, y) → uxy, and write ǫ = − −1 . Thus h = 1 − ǫ.
The collection of groups GW i j (X) fit into a well-behaved cohomology theory, which is SL coriented by [PW19, Theorem 5.1], and in particular Sp-oriented. The functors X → GW i j (X) are actually representable by explicit (geometric) spaces GW i in the A 1 -homotopy category
H(Z[ 1 2 ]) of Morel-Voevodsky (see [ST15, Theorem 1.3]) [Σ j S 1 X + , GW i ] H(Z[ 1 2 ]) = GW i j (X)
. Further, one can express the aforementioned periodicity under the following form: there exists an element γ ∈ GW 4 0 (Spec(Z[ 1 2 ])) such that multiplication by γ induces the periodicity isomorphisms
(1.a) GW i ≃ GW i+4 . When X is a Z[ 1 2 ]-scheme, the Z-graded ring (1.b) GW even 0 (X) := j∈Z GW 2j 0 (X)
can be identified with the Z-graded subring GW ± (X) of GW ± (X)[x ±1 ] defined in Appendix B (where γ corresponds to x 2 ), and we have a canonical isomorphism of Z/2-graded rings GW even 0 (X)/(γ − 1) ≃ GW ± (X). The P 1 -projective bundle theorem of Schlichting [Sch17,Theorem 9.10] allows to build a ring spectrum GW in SH(Z[ 1 2 ]), having the property to represent Hermitian K-theory. A convenient construction is recalled in [PW19, Theorem 12.2], and we explain the relevant facts in the next few lines in order to fix notations.
Recall first that Panin and Walter [PW21] defined a smooth affine Z[ 1 2 ]-scheme HP n for any n ∈ N, called the quaternionic projective space. On HP n , there is a canonical bundle U of rank 2 endowed with a symplectic form ϕ, yielding a canonical element u = (U, ϕ) ∈ GW − (HP n ). For any n ∈ N, there are morphisms (1.c) i n : HP n → HP n+1 such that i * n u = u, whose colimit (say in the category of sheaves of sets) is denoted by HP ∞ . It is a geometric model of the classifying space BSp 2 of rank 2 symplectic bundles. As HP 0 = Spec(Z[ 1 2 ]), we consider all these schemes as pointed by i 0 and note that i * 0 (u) = τ . Recall moreover from [PW19, Theorem 9.8] that HP 1 is A 1 -weak equivalent to (P 1 ) ∧2 . In fact HP 1 = Q 4 , where the latter is the affine scheme considered for instance in [ADF16].
Notation 1.2. We set T := HP 1 , that we consider as pointed by i 0 . We also denote by Ω T the right adjoint of the endofunctor T ∧ (−) of H(Z[ 1 2 ]). The spectrum GW is defined as the T -spectrum whose component in degree n is GW 2n and bonding maps
(1.d) σ : T ∧ GW 2n → GW 2n+2
induced by multiplication by the class u−τ in GW 2 0 (T ). This T -spectrum determines uniquely a P 1 -spectrum in view of [Rio07, Proposition 2.22] or [PW19, Theorem 12.1], which has the property that
GW i j (X) = [Σ ∞ P 1 X + , Σ −j S 1 Σ i P 1 GW] SH(Z[ 1 2 ]) for a smooth Z[ 1 2 ]-scheme X. If now X is a regular Z[ 1 2 ]-scheme with structural morphism p X : X → Spec(Z[ 1 2 ])
, we can consider the functor p * X : SH(Z[ 1 2 ]) → SH(X) and the spectrum p * X GW. On the other hand, one can consider the P 1 X -spectrum GW X representing Grothendieck-Witt groups in the stable category SH(X). It follows from [PW19, discussion before Theorem 13.5] that the natural map p * X GW → GW X is in fact an isomorphism. Consequently, GW i j (X) = [Σ ∞ P 1 X + , Σ −j S 1 Σ i P 1 p * X GW] SH(X) and we say that GW is an absolute P 1 -spectrum over Z[ 1 2 ]. It is in fact an absolute ring spectrum by [PW19, Theorem 13.4].
Exterior powers and rank two symplectic bundles
When V is a vector bundle on a scheme X, we denote its dual by V ∨ . A bilinear form on V is a morphism of vector bundles ν : V → V ∨ . When x, y ∈ H 0 (X, V ), we will sometimes write ν(x, y) instead of ν(x)(y). We will abuse notation, and denote by n ν, for n ∈ N, the bilinear form on n V given by the composite
n V ∧ n ν − −− → n (V ∨ ) → ( n V ) ∨ .
We will also denote the pair ( n V, n ν) by n (V, ν). Similar conventions will be used for the symmetric or tensor powers of bilinear forms, or their tensor products.
Explicit formulas for symmetric and exterior powers are given as follows. Let n be an integer, and denote by S n the symmetric group on n letters and by ǫ : S n → {−1, 1} the signature homomorphism. Then for any open subscheme U of X and x 1 , . . . , x n , y 1 , . . . , y n ∈ H 0 (U, V ), we have (2.a) (Sym n ν)(x 1 · · · x n , y 1 · · · y n ) = σ∈Sn ν(x 1 , y σ(1) ) · · · ν(x n , y σ(n) ),
(2.b) ( n ν)(x 1 ∧ · · · ∧ x n , y 1 ∧ · · · ∧ y n ) = σ∈Sn ǫ(σ)ν(x 1 , y σ(1) ) · · · ν(x n , y σ(n) ), or more succinctly (2.c) ( n ν)(x 1 ∧ · · · ∧ x n , y 1 ∧ · · · ∧ y n ) = det(ν(x i , y j )).
If V, W are vector bundles equipped with bilinear forms ν, µ, then for any i, j the bilinear
form i+j (ν ⊥ µ) restricts to ( i ν) ⊗ ( j µ) on ( i V ) ⊕ ( j W ) ⊂ i+j (V ⊕ W )
. This yields an isometry, for any n ∈ N
(2.d) n (V ⊕ W, ν ⊕ µ) ≃ n ⊥ i=0 i (V, ν) ⊗ n−i (W, µ)
Lemma 2.1. Let (E, ε) and (F, ϕ) be vector bundles over a scheme X equipped with bilinear forms, of respective ranks e and f . Then we have an isometry
( e (E, ε)) ⊗f ⊗ ( f (F, ϕ)) ⊗e ≃ ef (E ⊗ F, ε ⊗ ϕ).
Proof. Let us first assume that E, F are free and that X = Spec R is affine. Let (x 1 , . . . , x e ), resp. (y 1 , . . . , y f ), be an R-basis of H 0 (X, E), resp. H 0 (X, F ). Then the element
(2.e) z = (x 1 ∧ · · · ∧ x e ) ⊗f ⊗ (y 1 ∧ · · · ∧ y f ) ⊗e is a basis of H 0 (X, ( e E) ⊗f ⊗ ( f F ) ⊗e ), and the element (2.f) u = (x 1 ⊗ y 1 ) ∧ · · · ∧ (x 1 ⊗ y f ) ∧ (x 2 ⊗ y 1 ) ∧ · · · ∧ (x e ⊗ y f ) is a basis of H 0 (X, ef (E ⊗ F ).
The mapping z → u then defines an isomorphism of line bundles
(2.g) ( e E) ⊗f ⊗ ( f F ) ⊗e ∼ − → ef (E ⊗ F ),
Consider now the matrices
A = (ε(x i , x j )) ∈ M e (R), B = (ϕ(y i , y j )) ∈ M f (R). By (2.c) we have (( e ε) ⊗f ⊗ ( f F ) ⊗e )(z, z) = (det A) f · (det B) e , and ef (ε ⊗ ϕ)(u, u) is the determinant of the block matrix C = ε(x 1 , x 1 )B . . . ε(x 1 , x e )B . . . . . . ε(x e , x 1 )B . . . ε(x e , x e )B ∈ M ef (R).
It then follows from [Bou70, III, §9, Lemme 1, p.112] that
det C = det(det(ε(x i , x j )B)) = det(det(A)B e ) = (det A) f · (det B) e . Therefore (( e ε) ⊗f ⊗ ( f F ) ⊗e )(z, z) = ef (ε ⊗ ϕ)(u, u)
, which shows that (2.g) is the required isometry.
Next, assume given R-linear automorphisms α : H 0 (X, E) → H 0 (X, E) and β : H 0 (X, F ) → H 0 (X, F ). Replacing the basis (x 1 , . . . , x e ) and (y 1 , . . . , y f ) by their images under α and β multiplies the element (2.e) by the quantity (det α) e · (det β) f , and the element (2.f) by the same quantity (this is a similar determinant computation as above, based on [Bou70, III, §9, Lemme 1, p.112]). We deduce that the isometry (2.g) glues when E, F are only (locally free) vector bundles, and X is possibly non-affine.
Lemma 2.2. Let V be a vector bundle of constant rank n over a scheme X, equipped with a nondegenerate bilinear form ν. Then we have an isometry
n−1 (V, ν) ≃ (V, ν) ⊗ n (V, ν).
Proof. The natural morphism ( n−1 V )⊗V → n V induces a morphism n−1 V → Hom(V, n V ). As V is a vector bundle (of finite rank) the natural morphism V ∨ ⊗ n V → Hom(V, n V ) is an isomorphism. Composing with the inverse of ν ⊗ id ∧ n V , we obtain a morphism
s : n−1 V → V ⊗ n V.
To verify that it induces the required isometry, we may argue locally and assume that V is free and X = Spec R is affine. Pick an R-basis
(v 1 , . . . , v n ) of H 0 (X, V ). Then (w 1 , . . . , w n ) is an R-basis of H 0 (X, n−1 V ), where w i = (−1) n−i v 1 ∧ · · · ∧ v i ∧ v n . Let z = v 1 ∧ · · · ∧ v n ∈ H 0 (X, n V ), and note that w i ∧ v i = z for all i ∈ {1, . . . , n}. Consider the unique elements v * 1 , . . . , v * n ∈ H 0 (X, V ) satisfying ν(v * i , v j ) = δ ij (Kronecker symbol) for all i, j ∈ {1, . . . , n}. Then we have (2.h) s(w i ) = v * i ⊗ z, for i = 1, . . . , n.
Consider the matrix
A = (ν(v i , v j )) ∈ M n (R). Observe that the j-th coordinate of v * i in the basis (v 1 , . . . , v n ) is the (i, j)-th coefficient of the matrix A −1 , from which it follows that (2.i) t (A −1 ) = (ν(v * i , v * j )) ∈ M n (R)
. Let k, l ∈ {1, . . . , n}. It follows from (2.c) that ( n−1 ν)(w k , w l ) is the (k, l)-th cofactor of the matrix A, and thus coincides with the (k, l)-th coefficient of the matrix (det A) · t (A −1 ). In view of (2.i), we deduce that (using (2.c) for the last equality)
( n−1 ν)(w k , w l ) = ν(v * k , v * l ) · det A = ν(v * k , v * l ) · ( n ν)(z, z)
. By the formula (2.h), this proves that s is the required isometry.
In the rest of the section, we fix a Z[ 1 2 ]-scheme X. By a symplectic bundle on X, we will mean a vector bundle on X equipped with a nondegenerate skew-symmetric form. For an invertible element λ ∈ H 0 (X, O X ), we denote by λ the trivial line bundle on X equipped with the nondegenerate bilinear form given by (x, y) → λxy.
Lemma 2.3. Let (V, ν) be a symplectic bundle of constant rank n over X. Then the exists an isometry n (V, ν) ≃ 1 .
Proof. We may assume that X = ∅ and n ≥ 1. Then we may write n = 2m for some integer m (the form induced by (V, ν) over the residue field of a closed point of X is skew-symmetric, hence symplectic as 2 is invertible, and such forms over fields have even dimension [MH73, I, (3.5)]). The morphism
V ⊗n ≃ V ⊗m ⊗ V ⊗m → m V ⊗ m V ∧ m ν⊗id − −−−− → ( m V ) ∨ ⊗ m V → O X
descends to a morphism λ (V,ν) : n V → O X . If (V i , ν i ), for i = 1, 2, are symplectic bundles over X of ranks n i = 2m i such that (V, ν) = (V 1 , ν 1 ) ⊥ (V 2 , ν 2 ), we have a commutative
diagram m V ∧ m ν / / ( m V ) ∨ ( m 1 V 1 ) ⊗ ( m 2 V 2 ) ∧ m 1 ν 1 ⊗∧ m 2 ν 2 / / O O ( m 1 V 1 ) ∨ ⊗ ( m 2 V 2 ) ∨ O O Therefore the identification n V = n 1 V 1 ⊗ n 2 V 2 yields an identification λ (V,ν) = λ (V 1 ,ν 1 ) ⊗ λ (V 2 ,ν 2 ) .
In order to prove that λ (V,ν) induces the claimed isometry, we may assume that X is the spectrum of a local ring. In this case the nondegenerate skew-symmetric form (V, ν) is hyperbolic [MH73, I, (3.5)]. Given the behaviour of λ (V,ν) with respect to orthogonal sums, we may assume that n = 2 and that (V, ν) is the hyperbolic plane. So there exists a basis
(v 1 , v 2 ) of H 0 (X, V ) such that ν(v 1 , v 1 ) = 0, ν(v 2 , v 2 ) = 0 and ν(v 1 , v 2 ) = 1. By (2.b) we have ( 2 ν)(v 1 ∧ v 2 , v 1 ∧ v 2 ) = 1. Since λ (V,ν) (v 1 ∧ v 2 ) = ν(v 1 , v 2 ) = 1 ∈ H 0 (X, O X ), it follows that λ (V,ν) induces an isometry 2 (V, ν) ≃ 1 .
Let V be a vector bundle over X. Consider the involution σ of V ⊗2 exchanging the two factors. Set V ⊗2 + = ker(σ − id) and V ⊗2 − = ker(σ + id). Since 2 is invertible we have a direct sum decomposition V ⊗2 = V ⊗2 + ⊕ V ⊗2 − . Let now ν be a bilinear form on V . There are induced bilinear forms ν ⊗2
+ on V ⊗2 + and ν ⊗2 − on V ⊗2 − . Writing (V, ν) ⊗2 + , resp. (V, ν) ⊗2 − , instead of (V ⊗2 + , ν ⊗2 + ), resp. (V ⊗2 − , ν ⊗2 − ), we have an orthogonal decomposition (2.j) (V, ν) ⊗2 = (V, ν) ⊗2 + ⊥ (V, ν) ⊗2 − . Lemma 2.4. There are isometries (V, ν) ⊗2 + ≃ 2 ⊗ Sym 2 (V, ν) and (V, ν) ⊗2 − ≃ 2 ⊗ 2 (V, ν). Proof.
It is easy to see that the morphism
i : 2 V → V ⊗2 ; v 1 ∧ v 2 → v 1 ⊗ v 2 − v 2 ⊗ v 1 , induces an isomorphism 2 V ≃ V ⊗2 − . If U is an open subscheme of X and v 1 , v 2 , w 1 , w 2 ∈ H 0 (U, V ), we have, using (2.b) ν ⊗2 (i(v 1 ∧ v 2 ), i(w 1 ∧ w 2 )) =ν ⊗2 (v 1 ⊗ v 2 − v 2 ⊗ v 1 , w 1 ⊗ w 2 − w 2 ⊗ w 1 ) =ν(v 1 , w 1 )ν(v 2 , w 2 ) − ν(v 2 , w 1 )ν(v 1 , w 2 ) − ν(v 1 , w 2 )ν(v 2 , w 1 ) + ν(v 2 , w 2 )ν(v 1 , w 1 ) =2ν(v 1 , w 1 )ν(v 2 , w 2 ) − 2ν(v 2 , w 1 )ν(v 1 , w 2 ) =2( 2 ν)(v 1 ∧ v 2 , w 1 ∧ w 2 ),
proving the second statement. The first is proved in a similar fashion, using the morphism
Sym 2 V → V ⊗2 ; v 1 v 2 → v 1 ⊗ v 2 + v 2 ⊗ v 1 .
Lemma 2.5. There is an isometry
(V, ν) ⊗2 ≃ 2 ⊗ Sym 2 (V, ν) ⊥ 2 (V, ν) .
Proof. This follows from Lemma 2.4 and (2.j).
Lemma 2.6. Let E, F be vector bundles over X, respectively equipped with bilinear forms ε, ϕ. Then there is an isometry
2 (E ⊗ F, ε ⊗ ϕ) ≃ 2 ⊗ Sym 2 (E, ε) ⊗ 2 (F, ϕ) ⊥ 2 ⊗ 2 (E, ε) ⊗ Sym 2 (F, ϕ) .
Proof. It is easy to see that there is an isometry
(E ⊗ F, ε ⊗ ϕ) ⊗2 − ≃ (E, ε) ⊗2 + ⊗ (F, ϕ) ⊗2 − ⊥ (E, ε) ⊗2 − ⊗ (F, ϕ) ⊗2
+ , so that the statement follows by five applications of Lemma 2.4 (and tensoring by the form 2 −1 ).
Proposition 2.7. Let E, F be rank two vector bundles over a Z[ 1 2 ]-scheme X, equipped with nondegenerate skew-symmetric forms ε, ϕ. Then we have in GW + (X):
[ n (E ⊗ F, ε ⊗ ϕ)] = [(E, ε) ⊗ (F, ϕ)] if n ∈ {1, 3}, [(E, ε) ⊗2 ] + [(F, ϕ) ⊗2 ] − 2 if n = 2, 1 if n ∈ {0, 4}, 0 otherwise.
Proof. The cases n = 0, 1 and n ≥ 5 are clear. The case n = 4 follows from Lemma 2.1 and Lemma 2.3. The case n = 3 then follows from the case n = 4 and Lemma 2.2. We now consider the case n = 2. We have in GW + (X) and 2 + 2 = 2 ∈ GW + (Spec(Z[ 1 2 ])), as evidenced by the computation 1 −1 1 1
[ 2 (E ⊗ F, ε ⊗ ϕ)] = 2 [Sym 2 (E, ε)] + 2 [Sym 2 (F, ϕ)]1 0 0 1 1 1 −1 1 = 2 0 0 2 .
Grothendieck-Witt groups of representations
Let B be a commutative ring with 2 ∈ B × and G be a flat affine group scheme over B. Let R B be the abelian category of representations of G over B, which are of finite type as Bmodules. We let P B be the full subcategory of R B whose objects are projective as B-modules. The latter category is exact. If P is an object of P B , then its dual P ∨ := Hom B (P, B) is naturally endowed with an action of G and thus can be seen as an object of P B . The morphism of functors ̟ : 1 ≃ ∨∨ is easily seen to be an isomorphism of functors P B → P B , and it follows that P B is an exact category with duality.
Let now D b (R B ), resp. D b (P B ), be the derived category of bounded complexes of objects of R B , resp. P B . The category D b (P B ) is a triangulated category with duality in the sense of Balmer ([Bal05, Definition 1.4.1]) and therefore one can consider its (derived) Witt groups W i (D b (P B )) ([Bal05, Definition 1.4.5]) that we denote by W i (B; G) for simplicity. We can also consider the Grothendieck-Witt groups GW i (D b (P B )) (as defined in [Wal03,§2]) that we similarly denote by GW i (B; G).
Lemma 3.1. Suppose that B is a field of characteristic not two. For any i ∈ Z, we have
W 2i+1 (B; G) = 0. Proof. Since P B = R B , the category D b (R B )
is the derived category of an abelian category. We can thus apply [BW02, Proposition 5.2].
We now suppose that A is a Dedekind domain with quotient field K (we assume that A = K). We assume that 2 ∈ A × , and let G be a flat affine group scheme over A. Then we may consider the full subcategory R fl A of R A consisting of those representations of G over A, which as A-modules are of finite length, or equivalently are torsion.
Any object of D b (P A ) has a well-defined support, and we can consider the (full) subcategory D b fl (P A ) of D b (P A ) whose objects are supported on a finite number of closed points of Spec(A). This is a thick subcategory stable under the duality. As a consequence of [Bal05, Theorem 73], we obtain a 12-term periodic long exact sequence
(3.a) · · · → W i (D b fl (P A )) → W i (D b (P A )) → W i (D b (P A )/ D b fl (P A )) → W i+1 (D b fl (P A )) → · · · We now identify the quotient category D b (P A )/ D b fl (P A ).
Note that the extension of scalars induces a duality-preserving, triangulated functor D
b (P A ) → D b (P K ) which is trivial on the subcategory D b fl (P A ). (The category D b (P K )
is constructed by setting B = K above, for the group scheme G K over K obtained by base-change from G.) We thus obtain a dualitypreserving, triangulated functor
D b (P A )/ D b fl (P A ) → D b (P K ). Lemma 3.2. The functor D b (P A )/ D b fl (P A ) → D b (P K )
is an equivalence of triangulated categories with duality.
Proof. We have a commutative diagram of functors
D b (P A ) / / D b (P K ) D b (R A ) / / D b (R K ) in which the vertical arrows are equivalences (use [Ser68, §2.2, Corollaire]. The composite D b fl (P A ) → D b (P A ) → D b (R A ) has essential image the subcategory D b fl (R A ) of objects of D b (R A ) whose homology is of finite length. As observed in [Ser68, Remarque, p.43], the functor R A → R K induces an equivalence R A /R fl A ≃ R K . Then it follows from [Kel99, §1.15, Lemma] that the induced functor D b (R A )/ D b fl (R A ) → D b (R K )
is an equivalence (the argument given in [Kel99, §1.15, Example (b)] works in the equivariant setting). The statement follows.
As a consequence, the exact sequence (3.a) becomes
(3.b) · · · → W i (D b fl (P A )) → W i (A; G) → W i (K; G K ) → W i+1 (D b fl (P A )) → · · · Now, suppose that M is a representation of G over A that is of finite length. By [Ser68,
§2.2, Corollaire], we have an exact sequence of representations
(3.c) 0 → P 1 → P 0 → M → 0 where P 0 , P 1 ∈ P A . Note that the A-module M is torsion, hence M ∨ = Hom A (M, A) vanishes.
We obtain an exact sequence, by dualizing
0 → P ∨ 0 → P ∨ 1 → Ext 1 A (M, A) → 0
and it follows that M ♯ := Ext 1 A (M, A) is naturally endowed with a structure of a representation of G over A. The isomorphisms P 0 → (P ∨ 0 ) ∨ and P 1 → (P ∨ 1 ) ∨ induce an isomorphism M → (M ♯ ) ♯ , which does not depend on the choice of the resolution (3.c). The association M → M ♯ in fact defines a duality on the category R fl A .
Lemma 3.3. For every i ∈ Z, there exists an isomorphism
W i+1 (D b fl (P A )) ≃ W i (D b (R fl A )).
Proof. This follows from the existence of an equivalence of triangulated categories
D b fl (P A ) → D b (R fl A ), which is compatible with the duality ♯ of D b (R fl A )
, and the duality ∨ of D b fl (P A ) shifted by 1. This equivalence is constructed using word-for-word the proof of [BW02, Lemma 6.4], where the categories
VB O , O-mod, O-fl -mod are replaced by P A , R A , R fl A . Lemma 3.4. For every i ∈ Z, we have W 2i (D b fl (P A )) = 0.
Proof. In view of Lemma 3.3, this follows from [BW02, Proposition 5.2], as the category R fl A is abelian.
Proposition 3.5. Let A be a Dedekind domain with quotient field K, such that 2 ∈ A × , and let G be a flat affine group scheme over A.
Then for every i ∈ Z, the morphism W 2i (A; G) → W 2i (K; G K ) is injective.
Proof. This follows from Lemma 3.4 and the sequence (3.b).
Theorem 3.6. Let A be a Dedekind domain with quotient field K, such that 2 ∈ A × , and let G be a split reductive group scheme over A. Then for every i ∈ Z, the morphism GW 2i (A; G) → GW 2i (K; G K ) is injective.
Proof. We have a commutative diagram where rows are exact sequences (constructed in [Wal03, Theorem 2.6])
K 0 (A; G) / / GW 2i−1 (A; G) / / W 2i−1 (A; G) / / 0 K 0 (K; G K ) / / GW 2i−1 (K; G K ) / / W 2i−1 (K; G K ) / / 0
in which the vertical arrows are induced by the extension of scalars, and K 0 (A; G) (resp. K 0 (K; G K )) denotes the Grothendieck group of the triangulated category D b (P A ) (resp. D b (P K )). Denoting by K 0 (R A ) (resp. K 0 (R K )) the Grothendieck group of the category R A (resp. R K ), the natural morphisms K 0 (R A ) → K 0 (A; G) and K 0 (R K ) → K 0 (K; G) are isomorphisms (their inverses are constructed using the Euler characteristic). Since the morphism
K 0 (R A ) → K 0 (R K ) is an isomorphism by [Ser68, Théorème 5], so is K 0 (A; G) → K 0 (K; G K ).
On the other hand, we have W 2i−1 (K; G K ) = 0 by Lemma 3.1. We deduce that the morphism
GW 2i−1 (A; G) → GW 2i−1 (K; G K ) is surjective.
Next consider the commutative diagram where rows are exact sequences (see again [Wal03, Theorem 2.6])
GW 2i−1 (A; G) / / K 0 (A; G) / / ≃ GW 2i (A; G) / / W 2i (A; G) / / 0 GW 2i−1 (K; G K ) / / K 0 (K; G K ) / / GW 2i (K; G K ) / / W 2i (K; G K ) / / 0
The indicated surjectivity and bijectivity have been obtained above, and the injectivity in Proposition 3.5. The statement then follows from a diagram chase.
The λ-operations
Let X be a scheme, and G a flat affine group scheme over X. We denote by GW + (X; G) and GW − (X; G) the Grothendieck-Witt groups of the exact category of G-equivariant vector bundles over X. We set GW ± (X; G) = GW + (X; G) ⊕ GW − (X; G). When A is a commutative noetherian Z[ 1 2 ]-algebra and X = Spec(A), by [Wal03, Theorem 6.1] we have natural isomorphisms GW + (Spec(A); G) ≃ GW 0 (A; G) and GW − (Spec(A); G) ≃ GW 2 (A; G) (in the notation of §3).
4.1.
Exterior powers of metabolic forms. Let X be a scheme and G a flat affine group scheme over X.
Let E → X be a G-equivariant vector bundle. For ε ∈ {1, −1}, the associated hyperbolic ε-symmetric G-equivariant bundle over X is H ε (E) = E ⊕ E ∨ , 0 1 ε̟ E 0 where ̟ E : E → (E ∨ ) ∨ is the canonical isomorphism.
These constructions induce morphisms of abelian groups (see e.g. [Wal03, Proposition 2.2 (c), Theorem 6.1])
(4.1.a) h + : K 0 (X; G) → GW + (X; G), h − : K 0 (X; G) → GW − (X; G)
where K 0 (X; G) denotes the Grothendieck group of G-equivariant vector bundles on X. (i) The class [ n (M, µ)] ∈ GW ± (X; G) depends only on n, ε and the G-equivariant vector bundle L over X (but not on (M, µ)). (ii) If n is odd, the G-equivariant nondegenerate ε n -symmetric bilinear form n (M, µ) is metabolic.
Proof. We may assume that X is connected. Let Q = M/L, and recall that µ induces an isomorphism ϕ : Q ∼ − → L ∨ . The vector bundle n M is equipped with a decreasing filtration by G-invariant subsheaves
( n M ) i = im( i L ⊗ n−i M → n M ) fitting into commutative squares (4.1.b) i L ⊗ n−i M ( n M ) i i L ⊗ n−i Q / / ( n M ) i /( n M ) i+1
where the bottom horizontal arrow is an isomorphism (see e.g. [BGI71, V, Lemme 2.2.1]).
Since Q ≃ L ∨ , this yields exact sequences of G-equivariant sheaves
(4.1.c) 0 → ( n M ) i+1 → ( n M ) i → i L ⊗ n−i L ∨ → 0 from which we deduce by induction on i that ( n M ) i is a subbundle of n M (i.e. the quotient n M/( n M ) i is a vector bundle).
Assuming that L has rank r, then i L ⊗ n−i L ∨ has rank r i r n−i . By induction on i, using the sequences (4.1.c), we obtain
rank( n M ) i = r j=i r j r n − j .
An elementary computation with binomial coefficients then yields:
(4.1.d) rank( n M ) i + rank( n M ) n+1−i = rank n M.
Let i, j be integers. We have a commutative diagram
i L ⊗ n−i M / / / / α ( n M ) i / / n M ∧ n µ ( j L ⊗ n−j M ) ∨ (( n M ) j ) ∨ ? _ o o ( n M ) ∨ o o o o
where α is defined by setting, for every open subscheme U of X and x 1 , . . . , x i , y 1 , . . . , y j ∈ H 0 (U, L) and x i+1 , . . . , x n , y j+1 , . . . , y n ∈ H 0 (U, M ) (see (2.c)) (4.1.e) α(x 1 ∧ · · · ∧ x i ⊗ x i+1 ∧ · · · ∧ x n )(y 1 ∧ · · · ∧ y j ⊗ y j+1 ∧ · · · ∧ y n ) = det(µ(x i , y j )).
If i + j > n, then for each σ ∈ S n there exists e ∈ {1, . . . , n} such that x e ∈ H 0 (U, L) and y σ(e) ∈ H 0 (U, L), so that µ(x e , y σ(e) ) = 0, which by (4.1.e) implies that α = 0. Thus
( n M ) i ⊂ (( n M ) j ) ⊥ in this case. In particular ( n M ) i is a sub-Lagrangian of n (M, µ) when 2i > n.
If n = 2k − 1 with k ∈ N, then 2 rank( n M ) k = rank n M by (4.1.d), hence the subbundle ( n M ) k is a Lagrangian in n (M, µ). This proves (ii). Moreover, it follows that the class of n (M, µ) in GW ± (X; G) coincides with the class of the hyperbolic form H ε (( n M ) k ), hence depends only on the class in K 0 (X; G) of the G-equivariant vector bundle ( n M ) k (see (4.1.a)). In view of the sequences (4.1.c), the latter depends only on the classes of i L ⊗ n−i L ∨ in K 0 (X; G) for i ≥ k, from which (i) follows when n is odd. Assume now that n = 2k with k ∈ N. Then the inclusion of the subbundle ( n M ) k ⊂ (( n M ) k+1 ) ⊥ is an equality by rank reasons (see (4.1.d)). By [Wal03, Proposition 2.2 (d), Theorem 6.1] we have
(4.1.f) [(M, µ)] = [H 1 (( n M ) k+1 )] + [(( n M ) k /( n M ) k+1 , ρ)] ∈ GW + (X; G),
where ρ is the bilinear form induced by µ on ( n M ) k /( n M ) k+1 , which in view of (4.1.b) is G-equivariantly isometric to the form β fitting into the commutative diagram
k L ⊗ k M / / / / α k L ⊗ k Q ≃ / / k L ⊗ k L ∨ β ( k L ⊗ k M ) ∨ ( k L ⊗ k Q) ∨ ? _ o o ( k L ⊗ k L ∨ ) ∨ ≃ o o
where the horizontal isomorphisms are induced by ϕ : Q ∼ − → L ∨ . The formula (4.1.e) (and the fact that ϕ is induced by µ) yields the formula, for every open subscheme U of X and x 1 , . . . , x k , y 1 , . . . , y k ∈ H 0 (U, L) and f 1 , . . . , f k , g 1 , . . . , g k ∈ H 0 (U, L ∨ ),
β(x 1 ∧ · · · ∧ x k ⊗ f 1 ∧ · · · ∧ f k , y 1 ∧ · · · ∧ y k ⊗ g 1 ∧ · · · ∧ g k ) = det 0 (εg j (x i )) (f i (y j )) 0
(where i, j run over 1, . . . , k, and so the indicated determinant is n × n), which shows that the bilinear form β depends only on the G-equivariant vector bundle L (and not on µ). It follows that the isometry class of that G-equivariant form (( n M ) k /( n M ) k+1 , ρ) depends only on L. As above, the class of the hyperbolic form H 1 (( n M ) k+1 ) in GW + (X; G) also depends only on L, so that (i) follows from (4.1.f) when n is even.
4.2. The λ-ring structure. We will use the notion of (pre-)λ-rings, recalled in Appendix A below.
Proposition 4.2.1. Let X be a scheme and G a flat affine group scheme over X. Then the exterior powers operations
λ i : GW ± (X; G) → GW ± (X; G)
defined by (P, ϕ) → ( i P, i ϕ) endow the ring GW ± (X; G) with the structure of a pre-λ-ring.
Proof. The structure of the proof is the same as that of [Zib15, Proposition 2.1], and is based on the description of GW ± (X; G) in terms of generators and relations (see e.g. [Wal03,p.20]). It is clear that the exterior power operations descend to the set of isometry classes, and moreover the total exterior power operation is additive in the sense of (2.d). Finally, let M is a G-equivariant vector bundle over X equipped with a G-equivariant nondegenerate εsymmetric bilinear form µ, for some ε ∈ {1, −1}. If (M, µ) admits a G-equivariant Lagrangian L, then L is also a G-equivariant Lagrangian in the hyperbolic form H ε (L), so that by Lemma 4.1.1 (i) the forms n (M, µ) and n (H ε (L)) have the same class in GW ± (X; G). When x ∈ GW − (X) is the class of a rank two symplectic bundle, it follows from Lemma 2.3 that λ t (x) = 1 + tx + t 2 (see (A.a)). In other words, in the notation of (C.1.b), we have Theorem 4.2.5. For every Z[ 1 2 ]-scheme X, the pre-λ-ring GW ± (X) is a λ-ring. Proof. Taking Proposition 4.2.2 and Lemma 4.2.4 into account, it only remains to verify (A.b) when x ∈ GW + (X) and y ∈ GW − (X). Let i ≥ n, and consider the scheme X × HP i . It is endowed with a universal symplectic bundle of rank two, whose class we denote by u ∈ GW − (X ×HP i ). Denote again by x, y ∈ GW ± (X ×HP i ) the pullbacks of x, y ∈ GW ± (X). Then using successively Proposition 4.2.2 and Lemma 4.2.4
(4.2.a) λ i (x) = ℓ i (x) ∈ GW ± (X) for all i ∈ N {0}.λ t (xyu) = λ t (x)λ t (yu) = λ t (x)λ t (y)λ t (u).
On the other hand, by Lemma 4.2.4
λ t (xyu) = λ t (xy)λ t (u).
The quaternionic projective bundle theorem [PW21, Theorem 8.1] implies that the GW even 0 (X)module GW even 0 (X × HP i ) is free on the basis 1, u, . . . , u i . Modding out γ − 1, we obtain a decomposition GW ± (X × HP i ) = GW ± (X) ⊕ GW ± (X)u ⊕ · · · ⊕ GW ± (X)u i .
In view of (4.2.a), it follows from Lemma C.1.2 that the u n -component of the t n -coefficient of λ t (xy)λ t (u) is λ n (xy), and that the u n -component of the t n -coefficient of λ t (x)λ t (y)λ t (u) is P n (λ 1 (x), . . . , λ n (x), λ 1 (y), . . . , λ n (y)). This proves (A.b).
Let X be a Z[ 1 2 ]-scheme. In view of Lemma B.1, the λ-ring structure on GW ± (X) induces a λ-ring structure on GW ± (X) ≃ GW even 0 (X). Explicitly, denoting by ρ : GW ± (X) → GW even 0 (X) the canonical homomorphism of abelian groups (see Appendix B), we have for i ∈ Z and n ∈ N,
(4.2.b) λ n (ρ(r) · γ i ) = ρ(λ n (r)) · γ ni if r ∈ GW + (X), ρ(λ n (r)) · γ n(2i+1) 2
if r ∈ GW − (X) and n is even,
ρ(λ n (r)) · γ n(2i+1)−1 2
if r ∈ GW − (X) and n is odd. (5.1.a) ψ n − λ 1 ψ n−1 + λ 2 ψ n−2 + · · · + (−1) n−1 λ n−1 ψ 1 + (−1) n nλ n = 0.
For instance, this yields ψ 1 = id and ψ 2 = (id) 2 − 2λ 2 . We also define ψ 0 as the composite
(5.1.b) ψ 0 : GW 2i 0 (X) rank − −− → Z π 0 (X) → GW 0 0 (X)
. Assume now that (E, ν) is a rank two symplectic bundle on X, and let x = [(E, ν)] ∈ GW 2 0 (X) be its class. Then, by Lemma 2.3, we have for n ∈ N {0}
λ n (x) = x if n = 1, γ if n = 2, 0 if n ∈ {1, 2}.
Thus (5.1.a) yields the inductive formula for x as above (the class of a rank two symplectic bundle)
(5.1.c) ψ n (x) = xψ n−1 (x) − γψ n−2 (x) for n ≥ 2.
Proposition 5.1.1. The operations ψ n : GW even 0 (X) → GW even 0 (X) are ring morphisms for n ∈ N, and satisfy the relation ψ m • ψ n = ψ mn for m, n ∈ N.
Proof. This follows from Theorem 4.2.5 (see for instance [AT69, Propositions 5.1 and 5.2]).
Remark 5.1.2. The operations ψ n for n < 0 are classically defined using duality; since by definition a nondegenerate symmetric (resp. skew-symmetric) form is isomorphic to its dual (resp. the opposite of its dual), in our situation we could set, for n < 0
ψ n (x) = ψ −n (x) when x ∈ GW 4i 0 (X) for i ∈ Z, −ψ −n (x) when x ∈ GW 4i+2 0 (X) for i ∈ Z,
making Proposition 5.1.1 valid for m, n ∈ Z.
Adams
Operations on hyperbolic forms. Let X be a Z[ 1 2 ]-scheme, and consider its Grothendieck group of vector bundles K 0 (X). The exterior power operations yield a λ-ring structure on K 0 (X) (and in particular Adams operations ψ n for n ∈ N {0}, using the formula (5.1.a)), such that the forgetful morphism (5.2.a) f : GW even 0 (X) → K 0 (X) (mapping γ to 1) is a morphism of λ-rings. In this section, we consider the hyperbolic morphisms h 2i : K 0 (X) → GW even 0 (X) (defined just below). Those are of course not morphisms of λ-rings (not even ring morphisms), but as we will see in Proposition 5.2.4, they do satisfy some form of compatibility with the Adams operations.
We define morphisms (5.2.b) h 2i : K 0 (X) → GW 2i 0 (X) for i ∈ Z by the requirements that h 0 = h + and h 2 = h − (see (4.1.a)) under the identifications GW 0 0 (X) ≃ GW + (X) and GW 2 0 (X) ≃ GW − (X), and for any vector bundle E → X
(5.2.c) γ · h 2i (E) = h 2(i+2) (E) for i ∈ Z.
Lemma 5.2.1. Let a ∈ K 0 (X) and b ∈ GW 2j 0 (X). Then, in the notation of (5.2.a) and (5.2.b), we have for any
i ∈ Z h 2i (a) · b = h 2(i+j) (a · f (b)).
Proof. Let ε, ε ′ ∈ {1, −1}. Let us consider vector bundles A, B on X, and a nondegenerate ε-symmetric bilinear form ν on B. The isomorphism
(A ⊗ B) ⊕ (A ∨ ⊗ B) 1 0 0 1 ⊗ ν − −−−−−−−− → (A ⊗ B) ⊕ (A ∨ ⊗ B ∨ ) ≃ (A ⊗ B) ⊕ (A ⊗ B) ∨ induces an isometry (A ⊗ B) ⊕ (A ∨ ⊗ B), 0 1 ⊗ ν ε ′ ̟ A ⊗ ν 0 ≃ (A ⊗ B) ⊕ (A ⊗ B) ∨ , 0 1 εε ′ ̟ A⊗B 0 ,
as evidenced by the computation
1 0 0 1 ⊗ ν ∨ 0 1 εε ′ ̟ A ⊗ ̟ B 0 1 0 0 1 ⊗ ν = 0 1 ⊗ ν εε ′ ̟ A ⊗ (ν ∨ • ̟ B ) 0 = 0 1 ⊗ ν ε ′ ̟ A ⊗ ν 0 .
The lemma follows.
Lemma 5.2.2. For any i, j ∈ Z we have h 2i (1)h 2j (1) = 2h 2(i+j) (1).
Proof. Take a = 1 ∈ K 0 (X) and b = h 2j (1) ∈ GW 2j 0 (X) in Lemma 5.2.1. Observe that the classes h and τ (see Notation 1.1) coincide respectively with h 0 (1) and h 2 (1). Thus Lemma 5.2.2 implies that (5.2.d) h 2 = 2h ; hτ = 2τ ; τ 2 = 2γh.
Combining the relations hτ = 2τ and h = 1 + −1 yields (5.2.e) −1 τ = τ.
Lemma 5.2.3. For n ∈ N, we have in GW 2n 0 (Spec(Z[ 1 2 ])) (see Notation 1.1)
ψ n (τ ) = τ γ n−1 2 if n is odd. 2 −1 n 2 γ n 2
if n is even.
Proof. We prove the lemma by induction on n, the cases n = 0, 1 being clear. If n ≥ 2, we have by (5.1.c)
(5.2.f) ψ n (τ ) = τ ψ n−1 (τ ) − γψ n−2 (τ ).
Assume that n is odd. Using the induction hypothesis together with (5.2.e) we obtain and the result follows as above from (5.2.f) when n is even.
Proposition 5.2.4. Let E → X be a vector bundle, and n ∈ N, i ∈ Z. For j ∈ Z, let us denote by I j the image of h 2j : K 0 (X) → GW 2j 0 (X). (i) If n is odd, then λ n • h 2i (E) lies in I in .
(ii) If n is odd, then ψ n • h 2i (E) lies in I in .
(iii) If n is even, then ψ n • h 2i (E) lies in 2 GW 2in 0 (X) + I in .
Proof. Statement (i) follows from Lemma 4.1.1 (ii) with G = 1 (observe that by construction of the Grothendieck-Witt group, the classes of metabolic forms belong to the subgroup I in ⊂ GW 2in 0 (X)). Let us prove (ii) by induction on n. This is clear when n = 1. Assume that n is odd. When j ∈ {1, . . . , n − 1} is even, the element ψ n−j • h 2i (E) belongs to I i(n−j) by induction. When j ∈ {1, . . . , n} is odd the element λ j • h 2i (E) belongs to I ij by (i). Since I ik · GW 2i(n−k) 0 (X) ⊂ I in for all k ∈ Z by Lemma 5.2.1, it follows from the inductive formula (5.1.a) that ψ n • h 2i (E) belongs to I in . The proof of (iii) is similar, noting that nλ n • h 2i (E) is divisible by 2 (the starting case n = 0 being clear from (5.1.b)).
Recall the exact sequence of [Wal03, Theorem 2.6], for i ∈ Z,
K 0 (X) h 2i − − → GW 2i 0 (X) → W 2i (X) → 0.
When X = ∅, the λ-ring structure on GW even 0 (X) does not descend to its quotient i∈Z W 2i (X), for instance because λ 2 (h 0 (1)) = −1 has nonzero image in the Witt ring. However, Proposition 5.2.4 implies the following:
Corollary 5.2.5. Let n ∈ N be odd. Then the operations ψ n , λ n : GW 2i 0 (X) → GW 2in 0 (X) descend to operations ψ n , λ n : W 2i (X) → W 2ni (X).
Remark 5.2.6. If −1 is a square in H 0 (X, O X ), then 2 = h 0 (1) ∈ GW 0 0 (X). Therefore Proposition 5.2.4 (iii) implies that the operation ψ n does descend to the Witt groups when n is even (even though λ n does not).
5.3.
Adams operations on the universal rank two bundle. In this section, we consider the universal symplectic bundle (U, ϕ) over HP 1 , and denote by u its class in GW 2 0 (HP 1 ).
Proposition-Definition 5.3.1. Let n ∈ N. There exists a unique element
ω(n) ∈ GW 2n−2 0 (Spec(Z[ 1 2 ]))
such that ψ n (u − τ ) = ω(n) · (u − τ ) ∈ GW 2n 0 (HP 1 ).
Proof. The first Borel class of (U, ϕ) in GW 2 0 (HP 1 ) is u − τ (see [PW19, Theorem 9.9]). By the quaternionic projective bundle theorem [PW21, Theorem 8.1], the GW even 0 (Spec(Z[ 1 2 ]))module GW even 0 (HP 1 ) is free on the basis 1, u − τ . This implies in particular the uniqueness part of the statement. Let us write
ψ n (u − τ ) = a + b(u − τ )
with a ∈ GW 2n 0 (Spec(Z[ 1 2 ])) and b ∈ GW 2n−2 0 (Spec(Z[ 1 2 ])). Consider the morphism of Z[ 1 2 ]schemes i 0 : Spec(Z[ 1 2 ]) = HP 0 → HP 1 of (1.c). Since i * 0 (u) = τ , we have
a = i * 0 (a + b(u − τ )) = i * 0 • ψ n (u − τ ) = ψ n • i * 0 (u − τ ) = ψ n (0) = 0. So we may set ω(n) = b.
Lemma 5.3.2. Let m, n ∈ N. Then ω(mn) = ω(n) · ψ n (ω(m)).
Proof. Indeed by Proposition 5.1.1, we have in GW 2mn 0 (HP 1 )
ψ mn (u − τ ) = ψ n • ψ m (u − τ ) = ψ n (ω(m) · (u − τ )) = ψ n (ω(m)) · ψ n (u − τ ) = ω(n) · ψ n (ω(m)) · (u − τ ).
From the inductive definition of the Adams operations, we deduce an inductive formula for the classes ω(n):
Lemma 5.3.3. We have ω(0) = 0, ω(1) = 1, and if n ≥ 2 ω(n) = τ ω(n − 1) − γω(n − 2) + ψ n−1 (τ ).
Proof. The computations of ω(0) and ω(1) are clear. Assume that n ≥ 2. Then by (5.1.c) we have in GW 2n 0 (HP 1 ) ψ n (u − τ ) = ψ n (u) − ψ n (τ ) = uψ n−1 (u) − γψ n−2 (u) − τ ψ n−1 (τ ) + γψ n−2 (τ ) = uψ n−1 (u − τ ) + (u − τ )ψ n−1 (τ ) − γψ n−2 (u − τ ).
By the quaternionic projective bundle theorem [PW21, Theorem 8.1] we have (u − τ ) 2 = 0, hence u(u − τ ) = τ (u − τ ), so that
ψ n (u − τ ) = (u − τ ) τ ω(n − 1) + ψ n−1 (τ ) − γω(n − 2) ,
from which the result follows.
We are now in position to find an explicit expression for the elements ω(n). For this, recall from Notation 1.1 that h = 1 − ǫ. if n is odd,
n 2 2 τ γ n−2 2
if n is even.
Proof. We proceed by induction on n, the cases n = 0, 1 being clear. Let n ≥ 2. Assume that n is even. Recall that hτ = 2τ by (5.2.d) and that τ −1 = τ by (5.2.e). Combining these observations with the explicit formula for ω(n − 1) (known by induction) yields τ ω(n − 1) = (n − 1) 2 τ γ
ω(n) = τ ω(n − 1) − γω(n − 2) + ψ n−1 (τ ) = (n − 1) 2 2 τ 2 γ n−3 2 − (n − 2) n − 1 2 h − −1 n−1 2 γ n−1 2 + 2 −1 n−1 2 γ n−1 2 = (n − 1) 2 h − (n − 2) n − 1 2 h + (n − 2) −1 n−1 2 + 2 −1 n−1 2 γ n−1 2 = n n − 1 2 h + −1 n−1 2 γ n−1 2 .
5.4.
Inverting ω(n). In order to define the stable Adams operations, we will be led to invert the elements ω(n) ∈ GW 2n−2 0 (Spec(Z[ 1 2 ])). Let us first observe that it is equivalent to invert somewhat simpler elements. Proof. We use the explicit formulas of Proposition 5.3.4. Assume that n is odd. Since n = n ⋆ divides ω(n), it is invertible in R[ 1 ω(n) ]. Conversely, writing n = 2m + 1 we have (recall that ǫ = − −1 , so that ǫ 2 = 1)
ω(n) · (m(1 + ǫ) + ǫ m ) = γ m n(m(1 − ǫ) + (−ǫ) m ) · (m(1 + ǫ) + ǫ m ) = γ m n(m(1 − ǫ)ǫ m + m(1 + ǫ)(−ǫ) m + (−1) m ) = γ m n(mǫ m (1 − ǫ + (−1) m (1 + ǫ)) + (−1) m )
= γ m n(2m + 1)(−1) m = γ m n 2 (−1) m (where the penultimate equality is seen for instance by distinguishing cases according to the parity of m). It follows that ω(n) is invertible in R[ 1
n ⋆ ] = R[ 1 n ].
Now assume that n is even. Then, by (5.2.d) (5.4.a) ω(n) 2 = n 2 2 τ 2 γ n−2 = n 4 2 hγ n−1 = n 3 n ⋆ γ n−1 ,
so that n ⋆ is invertible in R[ 1 ω(n) ].
On the other hand, using (5.2.d), we have
(n ⋆ ) 2 = n 2 2 h = n n 2 h, hence n is invertible in R[ 1 n ⋆ ]. Thus (5.4.a) implies that ω(n) is invertible in R[ 1 n ⋆ ].
We want now to formally invert the action of n ⋆ on the spectrum GW. Remark 5.4.4. Observe that the ring morphism B → GW even 0 (Spec(Z[ 1 2 ])) given by e → ǫ maps n * to n ⋆ . The ring B may be identified with GW + (Spec Z), but we will not use this observation. ]) (S), e → − −1 , which allows us to see n * as an endomorphism of the sphere spectrum and perform the formal inversion of n * in an efficient way as explained in [Bac18,§6]. In short, we consider the diagram For any i, j ∈ Z and any smooth Z[ 1 2 ]-scheme X, the spectrum Σ j S 1 Σ i P 1 Σ ∞ P 1 X + is a compact object in SH(Z[
(5.4.f) Σ ∞ P 1 X + , Σ −j S 1 Σ i P 1 GW 1 n * SH(Z[ 1 2 ]) = GW i j (X) 1 n *
In case X is merely a regular Z[ 1 2 ]-scheme, the same property holds using the spectrum (where GW 2 0 (X) is denoted KSp 0 (X)) the Adams operations ψ n : GW 2 0 (X) → GW 2n 0 (X) constructed in §5.1, where X runs over the smooth Z[ 1 2 ]-schemes, are induced by a unique morphism ψ n : GW 2 → GW 2n in H(Z[ 1 2 ]). Using the periodicity isomorphisms (1.a), we obtain Adams operations:
p * X (GW[ 1 n * ]), where p X : X → Spec(Z[ 1 2 ]) is
(5.5.a) ψ n : GW 2i → GW 2ni for i odd.
We will need the following complement to [DF23, Theorem 4.1.4]:
Lemma 5.5.1. Let E ∈ SH(Z[ 1 2 ]) be a Sp-oriented ring spectrum, and consider for a, b ∈ Z the pointed motivic space
E = Ω ∞ P 1 Σ a S 1 Σ b P 1 E ∈ H(Z[ 1 2 ]). Let i 1 , . . . , i r ∈ Z be odd integers. Then each map GW 2i 1 ∧ · · · ∧ GW 2ir → E in H(Z[ 1 2 ]) is determined by the induced maps (5.5.b) GW 2i 1 0 (X 1 ) × · · · × GW 2ir 0 (X r ) → [(X 1 × · · · × X r ) + , E] H(Z[ 1 2 ])
where X 1 , . . . , X r run over the smooth Z[ 1 2 ]-schemes.
Proof. By [PW19, §8], for j = 1, . . . , r, the pointed motivic space GW 2i j can be expressed as a (homotopy) colimit of pointed smooth Z[ 1 2 ]-schemes Y m,j = {−m, . . . , m} × HGr m,j over m ∈ N, where HGr m,j denotes an appropriate symplectic Grassmannian. Set Y m = Y m,1 ∧· · ·∧Y m,r and G = GW 2i 1 ∧ · · · ∧ GW 2ir . Then by [PW19, Theorem 10.1] we have an exact sequence
0 → lim m 1 [S 1 ∧ Y m , E] H(Z[ 1 2 ]) → [G, E] H(Z[ 1 2 ]) → lim m [Y m , E] H(Z[ 1 2 ]) → 0.(Y m,1 → GW 2i 1 , . . . , Y m,r → GW 2ir ) ∈ GW 2i 1 0 (Y m,1 ) × · · · × GW 2ir 0 (Y m,r ) under the map (5.5.b).
We are now in position to follow the procedure described in [DF23,§4] to construct the n-th stable Adams operation, for n ∈ N. For any integer i ∈ Z, consider the motivic space
GW 2i 1 n * = Ω ∞ T Σ i T GW 1 n * ∈ H(Z[ 1 2 ]). The composite Σ ∞ T GW 2i → Σ i T GW → Σ i T GW[ 1 n * ] in SH(Z[ 1 2 ]) yields by adjunction a morphism (5.5.c) GW 2i → GW 2i 1 n * in H(Z[ 1 2 ]), while the morphism in SH(Z[ 1 2 ]) Σ ∞ T T ∧ GW 2i 1 n * = Σ T Σ ∞ T GW 2i 1 n * Σ T (counit) − −−−−−− → Σ T Σ i T GW 1 n * = Σ i+1 T GW 1 n * yields a morphism (5.5.d) T ∧ GW 2i 1 n * → GW 2(i+1) 1 n * in H(Z[ 1 2 ]).
Using the morphism ω(n) −i of (5.4.e), we define a morphism in H(Z[ 1 2 ]), for i odd,
(5.5.e) Ψ n i : GW 2i ψ n − − → GW 2ni (5.5.c) − −−− → GW 2ni 1 n * Ω ∞ T Σ i T ω(n) −i −−−−−−−−→ GW 2i 1 n * .
Proposition 5.5.2. Let i ∈ Z be odd and let n ∈ N. Then the diagram
T ∧2 ∧ GW 2i / / id T ∧2 ∧ω(n) 2 ψ n GW 2(i+2) ψ n T ∧2 ∧ GW 2n(i+2)−4 / / GW 2n(i+2) commutes in H(Z[ 1 2 ]),
where the horizontal arrows are induced by the bonding map σ of (1.d). Proof. Let X be a smooth Z[ 1 2 ]-scheme, and denote by p : HP 1 ×HP 1 ×X → X the projection. Let u 1 , u 2 ∈ GW 2 0 (HP 1 × HP 1 × X) be the pullbacks of u ∈ GW 2 0 (HP 1 ) under the two projections. Consider the diagram
GW 2i 0 (X) ≃ / / ω(n) 2 ψ n GW 2(i+2) 0 (T ∧2 ∧ X + ) ψ n / / GW 2(i+2) 0 (HP 1 × HP 1 × X) ψ n GW 2n(i+2)−4 0 (X) ≃ / / GW 2n(i+2) 0 (T ∧2 ∧ X + ) / / GW 2n(i+2) 0 (HP 1 × HP 1 × X)
where the horizontal composites are given by x → p * (x) · (u 1 − τ )(u 2 − τ ). Then, for x ∈ GW 2i 0 (X) we have by Proposition 5.1.1 and Proposition-Definition 5.3.1
ψ n (p * (x)·(u 1 −τ )(u 2 −τ )) = p * (ψ n (x))·ψ n (u 1 −τ )·ψ n (u 2 −τ ) = ω(n) 2 ·p * (ψ n (x))(u 1 −τ )(u 2 −τ ),
showing that the exterior square in the above diagram commutes. Since the lower right horizontal arrow is injective (e.g. by [PW19, Lemma 7.6]), it follows that the interior left square commutes. By Lemma 5.5.1, this implies that the following diagram commutes
GW 2i / / ω(n) 2 ψ n Ω 2 T GW 2(i+2)
Ω 2 T ψ n GW 2n(i+2)−4 / / Ω 2 T GW 2n(i+2) which implies the statement by adjunction.
Definition 5.5.5. For n ∈ N, we denote by Ψ n : GW → GW 1 n * . the morphism of spectra corresponding to the family Ψ n i of (5.5.e), for i odd, under the bijection of Proposition 5.5.4 (with r = 1). We call it the stable n-th Adams operation.
Remark 5.5.6. If X is a regular Z[ 1 2 ]-scheme with structural morphism p X : X → Spec(Z[ 1 2 ])), we obtain a morphism of spectra
Ψ n : GW X = p * X GW → p * X GW 1 n * = (p * X GW) 1 n * = GW X 1 n * . For i ∈ Z, let us define Ψ n i = Ω ∞ T Σ i T (Ψ n ) : GW 2i → GW 2ni 1 n * .
Note that, by construction, we have Ψ n i = Ψ n i when i is odd. Let us mention that the stable Adams operation has the expected relation to the unstable one, also in even degrees:
Lemma 5.5.7. When X is a smooth Z[ 1 2 ]-scheme, for any i ∈ Z, the morphism GW 2i 0 (X) → GW 2i 0 (X)[ 1 n * ] induced by Ψ n i equals ω(n) −i ψ n .
Proof. This is true when i is odd, since Ψ n i = Ψ n i in this case. Assume that i is even. Let p : X × HP 1 → X be the projection. We have a commutative diagram
GW 2i 0 (X) ∼ / / Ψ n i GW 2i+2 0 (T ∧ X + ) Ψ n i+1 / / GW 2i+2 0 (HP 1 × X) Ψ n i+1 GW 2i 0 (X)[ 1 n * ] ∼ / / GW 2i+2 0 (T ∧ X + )[ 1 n * ] / / GW 2i+2 0 (HP 1 × X)[ 1 n * ]
where the horizontal composites are given by x → p * (x) · (u − τ ). Now, by the odd case treated above, we have for x ∈ GW 2i 0 (HP 1 × X) Ψ n i+1 (p * (x) · (u − τ )) = ω(n) −i−1 ψ n (p * (x) · (u − τ )) by the odd case = ω(n) −i−1 · p * ψ n (x) · ψ n (u − τ ) by Proposition 5.1.1 = p * (ω(n) −i · ψ n (x)) · (u − τ ) by Proposition-Definition 5.3.1.
The statement then follows from the injectivity of the lower horizontal composite (e.g. by [PW19, Lemma 7.6]).
Theorem 5.5.8. For any integer n ∈ N, the stable Adams operation Ψ n : GW → GW[ 1 n * ] is a morphism of ring spectra.
Proof. We have first to check that the diagram in SH(Z[ 1 2 ])
(5.5.f) GW ∧ GW Ψ n ∧Ψ n / / GW[ 1 n * ] ∧ GW[ 1 n * ] GW Ψ n / / GW[ 1 n * ]
commutes, where the vertical arrows are the multiplications. In view of Proposition 5.5.4, we have to check that the following diagram, in which the vertical maps are induced by the multiplication in the ring spectrum GW, commutes in H(Z[ 1 2 ]), for any i ∈ Z odd
GW 2i ∧ GW 2i Ψ n i ∧ Ψ n i / / GW 2i { 1 n * } ∧ GW 2i { 1 n * } GW 4i Ψ n 2i
/ / GW 4i { 1 n * }. By Lemma 5.5.1 and Lemma 5.5.7 (taking into account [PW19, Theorem 11.4]), this reduces to the formula, when X, Y are smooth Z[ 1 2 ]-schemes and x ∈ GW 2i 0 (X), y ∈ GW 2i 0 (Y )
(5.5.g) p * 1 (ω(n) −i ψ n (x)) · p * 2 (ω(n) −i ψ n (x)) = ω(n) −2i ψ n (p * 1 (x) · p * 2 (y)) ∈ GW 4i 0 (X × Y ), where p 1 : X × Y → X, p 2 : X × Y → Y
are the projections. But the formula (5.5.g) readily follows from Proposition 5.1.1.
Next, we need to prove the commutativity of the diagram in SH( In view of Proposition 5.5.4, it will suffice to show that, for i ∈ N odd, the composite
Z[ 1 2 ]) S ε / / ε # # • • • • • • • • • GW Ψ n GW[ 1 n * ]GW 2i Ψ n i − − → GW 2i 1 n * Ψ m i − − → GW 2i 1 (mn) * equals Ψ mn i in H(Z[ 1 2 ])
. By Lemma 5.5.1 and Lemma 5.5.7, it will then suffice to show that, for each odd i ∈ N and each smooth Z[ 1 2 ]-scheme X, the composite
GW 2i 0 (X) ω(n) −i ·ψ n −−−−−−→ GW 2i 0 (X) 1 n * ω(m) −i ·ψ m − −−−−−− → GW 2i 0 (X) 1 (mn) *
equals ω(mn) −i · ψ mn . But this follows from Proposition 5.1.1 and Lemma 5.3.2.
Ternary laws for Hermitian K-theory
Recall from [DF23, §2.3] that ternary laws are the analogues for Sp-oriented cohomology theories (or spectra) of formal group laws for oriented cohomology theories. In short, the problem is to understand the Borel classes (in the relevant cohomology theory) of the symplectic bundle U 1 ⊗ U 2 ⊗ U 3 on HP n × HP n × HP n , where U i are the universal bundles on the respective factors. The ternary laws permit to compute Borel classes of threefold products of symplectic bundles. At present, there are few computations of such laws, including MW-motivic cohomology and motivic cohomology which are examples of the so-called additive ternary laws [DF23, Definition 3.3.3]. In this section, we compute the ternary laws of Hermitian K-theory (and thus also of K-theory as a corollary), which are not additive.
Our first task is to express the Borel classes in Hermitian K-theory in terms of the λoperations. We will denote by σ i (X 1 , . . . , X 4 ) ∈ Z[X 1 , . . . , X 4 ] the elementary symmetric polynomials.
Lemma 6.1. Let X be a Z[ 1 2 ]-scheme and let e 1 , . . . , e 4 ∈ GW 2 0 (X) be the classes of rank two symplectic bundles over X. Then . . . , e 4 ) if i = 1. σ 2 (e 1 , . . . , e 4 ) + 4γ
λ i (e 1 + · · · + e 4 ) = σ 1 (e 1 ,
if i = 2. σ 3 (e 1 , . . . , e 4 ) + 3σ 1 (e 1 , . . . , e 4 )γ if i = 3. σ 4 (e 1 , . . . , e 4 ) + 2σ 2 (e 1 , . . . , e 4 )γ + 6γ 2 if i = 4.
Proof. In view of (4.2.a), it suffices to expand the product (1 + te 1 + γt 2 )(1 + te 2 + γt 2 )(1 + te 3 + γt 2 )(1 + te 4 + γt 2 ).
Lemma 6.2. In the ring Z[x 1 , x 2 , x 3 , x 4 , y], we have the following equalities: Proposition 6.3. Let X be a Z[ 1 2 ]-scheme. Let E be a symplectic bundle of rank 8 on X, and e ∈ GW 2 0 (X) its class. Then we have:
σ i (x 1 − y, . . . , x 4 − y) = σ 1 − 4y if i = 1, σ 2 − 3yσ 1 + 6y 2 if i = 2, σ 3 − 2σ 2 y + 3σ 1 y 2 − 4y 3 if i = 3, σ 4 − σ 3 y + σ 2 y 2 − σ 1 y 3 + y 4 if i = 4, where σ i = σ i (x 1 ,b GW i (E) = e − 4τ if i = 1. λ 2 (e) − 3τ e + 4(2 − 3ǫ)γ if i = 2. λ 3 (e) − 2τ λ 2 (e) + 3(1 − 2ǫ)γe − 8τ γ if i = 3. λ 4 (e) − τ λ 3 (e) − 2ǫγλ 2 (e) − τ γe + 2γ 2 if i = 4.
Proof. Using the symplectic splitting principle [PW21,§10], we may assume that E splits as an orthogonal sum of rank two symplectic bundles, whose classes in GW 2 0 (X) we denote by e 1 , . . . , e 4 . The Borel classes b GW i (E) are then given by the elementary symmetric polynomials in the elements e 1 − τ, . . . , e 4 − τ , which can be computed using Lemma 6.2. For i = 1, the result is immediate. For i = 2, we have σ 2 (e 1 − τ, . . . , e 4 − τ ) = σ 2 (e 1 , . . . , e 4 ) − 3τ σ 1 (e 1 , . . . , e 4 ) + 6τ 2 and σ 2 (e 1 , . . . , e 4 ) = λ 2 (e) − 4γ by Lemma 6.1. As τ 2 = 2(1 − ǫ)γ, we find σ 2 (e 1 − τ, . . . , e 4 − τ ) = λ 2 (e) − 4γ − 3τ e + 12(1 − ǫ)γ proving the case i = 2. We now pass to the case i = 3. Using Lemma 6.2, we find Since τ 3 e = 4τ γe and τ 4 = 8(1 − ǫ)γ 2 , we conclude summing up the previous expressions.
Our next task is to obtain an explicit formula for the λ-operations on products of three classes of rank two symplectic bundles, providing a different proof of [Ana17, Lemma 8.2]. It will be useful to have a basis for the symmetric polynomials in three variables u 1 , u 2 , u 3 . Following [DF23,§2.3.3], we set, for i, j, k ∈ N,
(6.a) σ(u i 1 u j 2 u k 3 ) = (a,b,c) u a 1 u b 2 u c 3
where the sum runs over the monomials u a 1 u b 2 u c 3 in the orbit of u i 1 u j 2 u k 3 under the action of the permutation of the variables u 1 , u 2 , u 3 . Lemma 6.4. Let X be a Z[ 1 2 ]-scheme, and let u 1 , u 2 , u 3 ∈ GW 2 0 (X) be the classes of rank two symplectic bundles on X. Then
λ i (u 1 u 2 u 3 ) = u 1 u 2 u 3 if i = 1. σ(u 2 1 u 2 2 )γ − 2σ(u 2 1 )γ 2 + 4γ 3 if i = 2. σ(u 3 1 u 2 u 3 )γ 2 − 5u 1 u 2 u 3 γ 3 if i = 3. σ(u 4 1 )γ 4 + u 2 1 u 2 2 u 2 3 γ 3 − 4σ(u 2 1 )γ 5 + 6γ 6 if i = 4. Proof.
In view of (A.d) and (4.2.a), this follows from Lemma C.3.2.
Finally, we are in position to compute the ternary laws of Hermitian K-theory. The computation is obtained by combining Proposition 6.3 and Lemma 6.4 (applied to γ −1 u 1 u 2 u 3 ).
Proposition 6.5. Let E 1 , E 2 , E 3 be symplectic bundles of rank 2 on a Z[ 1 2 ]-scheme X. Let u 1 , u 2 , u 3 be their respective classes in GW 2 0 (X). Then the Borel class b GW
i (E 1 ⊗ E 2 ⊗ E 3 ) ∈ GW 2i
0 (X) equals (using the notation of (6.a))
u 1 u 2 u 3 γ −1 − 4τ if i = 1, σ(u 2 1 u 2 2 )γ −1 − 2σ(u 2 1 ) − 3τ u 1 u 2 u 3 γ −1 + 12(1 − ǫ)γ if i = 2, σ(u 3 1 u 2 u 3 )γ −1 − 2(1 + 3ǫ)u 1 u 2 u 3 − 2τ γ −1 σ(u 2 1 u 2 2 ) + 4τ σ(u 2 1 ) − 16τ γ if i = 3, σ(u 4 1 ) + u 2 1 u 2 2 u 2 3 γ −1 − 4(1 − ǫ)γσ(u 2 1 ) − 2ǫσ(u 2 1 u 2 2 ) − τ σ(u 3 1 u 2 u 3 )γ −1 + 4τ u 1 u 2 u 3 + 8(1 − ǫ)γ 2 if i = 4
. As a consequence of this proposition, we obtain the explicit expression of the ternary laws associated to Hermitian K-theory (see [DF23, Definition 2.3.2]). We use the notation (6.a).
Theorem 6.6. The ternary laws
F i = F i (v 1 , v 2 , v 3 ) of Hermitian K-theory (over the base Spec(Z[ 1 2 ])) are F 1 = 2(1−ǫ)σ(v 1 )+τ γ −1 σ(v 1 v 2 )+γ −1 v 1 v 2 v 3 , F 2 = 2(1 − 2ǫ)σ(v 2 1 ) + 2(1 − ǫ)σ(v 1 v 2 ) + 2τ γ −1 σ(v 2 1 v 2 ) − 3τ γ −1 v 1 v 2 v 3 + γ −1 σ(v 2 1 v 2 2 ), F 3 = 2(1−ǫ)σ(v 3 1 )−2(1−ǫ)σ(v 2 1 v 2 )+8(2−3ǫ)v 1 v 2 v 3 +τ γ −1 σ(v 3 1 v 2 )−2τ γ −1 σ(v 2 1 v 2 2 )+3τ γ −1 σ(v 2 1 v 2 v 3 )+γ −1 σ(v 3 1 v 2 v 3 ), F 4 = σ(v 4 1 )−2(1−ǫ)σ(v 3 1 v 2 )+2(1−2ǫ)σ(v 2 1 v 2 2 )+2(1−ǫ)σ(v 2 1 v 2 v 3 )−τ γ −1 σ(v 3 1 v 2 v 3 )+2τ γ −1 σ(v 2 1 v 2 2 v 3 )+γ −1 σ(v 2 1 v 2 2 v 2 3 ).
Proof. We use the relations v i = u i − τ and the previous theorem. For b 1 , we find
u 1 u 2 u 3 = v 1 v 2 v 3 + τ σ(v 1 v 2 ) + τ 2 σ(v 1 ) + τ 3
and the result follows quite easily from τ 2 = 2(1 − ǫ)γ and τ 3 = 4τ γ. For i = 2, we first compute
σ(u 2 1 u 2 2 ) = σ(v 2 1 v 2 2 ) + 2τ σ(v 2 1 v 2 ) + 4(1 − ǫ)γσ(v 2 1 ) + 8(1 − ǫ)γσ(v 1 v 2 ) + 16τ γσ(v 1 ) + 24(1 − ǫ)γ 2 . Next, −2σ(u 2 1 ) = −2σ(v 1 ) 2 − 4τ σ(v 1 ) − 12(1 − ǫ)γ As b 2 = σ(u 2 1 u 2 2 )γ −1 − 2σ(u 2 1 ) − 3τ u 1 u 2 u 3 γ −1 + 12(1 − ǫ),
we finally obtain the result for b 2 . We now treat the case i = 3, for which we have
b 3 = σ(u 3 1 u 2 u 3 )γ −1 + (−2 − 6ǫ)u 1 u 2 u 3 − 2τ γ −1 σ(u 2 1 u 2 2 ) + 4τ σ(u 2 1 ) − 16τ γ Now, σ(u 3 1 u 2 u 3 ) = σ(v 3 1 v 2 v 3 ) + τ σ(v 3 1 v 2 ) + 2(1 − ǫ)γσ(v 3 1 ) + 3τ σ(v 2 1 v 2 v 3 ) + 6(1 − ǫ)γσ(v 2 1 v 2 )+ +12τ γσ(v 2 1 ) + 18(1 − ǫ)γv 1 v 2 v 3 + 28τ γσ(v 1 v 2 ) + 40(1 − ǫ)γ 2 σ(v 1 ) + 48τ γ 2 and we deduce that b 3 = 2(1 − ǫ)σ(v 3 1 ) − 2(1 − ǫ)σ(v 2 1 v 2 ) + 8(2 − 3ǫ)v 1 v 2 v 3 + τ γ −1 σ(v 3 1 v 2 )− −2τ γ −1 σ(v 2 1 v 2 2 ) + 3τ γ −1 σ(v 2 1 v 2 v 3 ) + γ −1 σ(v 3 1 v 2 v 3 )
. We conclude with the case i = 4. The Borel class reads
b 4 = σ(u 4 1 ) + γ −1 σ(u 2 1 u 2 2 u 2 3 ) − 2σ(u 2 1 u 2 2 ) − τ γ −1 σ(u 3 1 u 2 u 3 ) + 4τ u 1 u 2 u 3 + +2(1 − ǫ)σ(u 2 1 u 2 2 ) − 4(1 − ǫ)γσ(u 2 1 ) + 8(1 − ǫ)γ 2 . First, we note that σ(u 4 1 ) = σ(v 4 1 ) + 4τ σ(v 3 1 ) + 12(1 − ǫ)γσ(v 2 1 ) + 16τ γσ(v 1 ) + 24(1 − ǫ)γ 2 . while u 2 1 u 2 2 u 2 3 = σ(v 2 1 v 2 2 v 2 3 ) + 2τ σ(v 2 1 v 2 2 v 3 ) + 2(1 − ǫ)γσ(v 2 1 v 2 2 ) + 8(1 − ǫ)γσ(v 2 1 v 2 v 3 )+ +8τ γσ(v 2 1 v 2 ) + 32τ γv 1 v 2 v 3 + 8(1 − ǫ)γ 2 σ(v 2 1 ) + 32(1 − ǫ)γ 2 σ(v 1 v 2 ) + 32τ γ 2 σ(v 1 ) + 32(1 − ǫ)γ 3 .
Using the above, we finally find
b 4 = σ(v 4 1 ) − 2(1 − ǫ)σ(v 3 1 v 2 ) + 2(1 − 2ǫ)σ(v 2 1 v 2 2 ) + 2(1 − ǫ)σ(v 2 1 v 2 v 3 )− −τ γ −1 σ(v 3 1 v 2 v 3 ) + 2τ γ −1 σ(v 2 1 v 2 2 v 3 ) + γ −1 σ(v 2 1 v 2 2 v 2 3 ).
Remark 6.7. The ternary laws of the spectrum W representing (Balmer) Witt groups have been computed by Ananyevskiy in [Ana17, Lemma 8.2]. In view of the morphism of ring spectra GW → W, we may recover this result by setting 1 − ǫ = 0 and τ = 0 in the above expression.
The above theorem yields an expression of the ternary laws of K-theory (those can of course be computed more directly). As above, we want to write the Borel classes of threefold products of symplectic bundles in terms of the first Borel classes of the bundles, and we may use the forgetful functor from Hermitian K-theory to ordinary K-theory. Regarding periodicity, the forgetful functor maps τ to 2β 2 and γ to β 4 , where β is the Bott element (of bidegree (2, 1)).
Theorem 6.8. The ternary laws
F i = F i (v 1 , v 2 , v 3 , v 4 ) of K-theory are F 1 = 4σ(v 1 )+2β −2 σ(v 1 v 2 )+β −4 v 1 v 2 v 3 , F 2 = 6σ(v 2 1 )+4σ(v 1 v 2 )+4β −2 σ(v 2 1 v 2 )−6β −2 v 1 v 2 v 3 +β −4 σ(v 2 1 v 2 2 ), F 3 = 4σ(v 3 1 )−4σ(v 2 1 v 2 )+40v 1 v 2 v 3 +2β −2 σ(v 3 1 v 2 )−4β −2 σ(v 2 1 v 2 2 )+6β −2 σ(v 2 1 v 2 v 3 )+β −4 σ(v 3 1 v 2 v 3 ), F 4 = σ(v 4 1 ) − 4σ(v 3 1 v 2 ) + 6σ(v 2 1 v 2 2 ) + 4σ(v 2 1 v 2 v 3 ) − 2β −2 σ(v 3 1 v 2 v 3 ) + 4β −2 σ(v 2 1 v 2 2 v 3 ) + β −4 v 2 1 v 2 2 v 2 3 .
Appendix A. λ-rings
Here we recall a construction from [BGI71, V, §2.3]; a more accessible exposition can be found in [AT69,§1], where the terminology "λ-ring"/"special λ-ring" is used instead of "preλ-ring"/"λ-ring". Let R be a commutative ring. One defines a ring Λ(R), whose underlying set is 1 + tR [[t]]. The addition in Λ(R) is given by multiplication of power series, while multiplication in Λ(R) is given by the formula n∈N f n t n · n∈N g n t n = n∈N P n (f 1 , . . . , f n , g 1 , . . . , g n )t n , where P n are certain universal polynomials defined in (C.1.a) below. In this ring the neutral element for the addition is the constant power series 1, and the multiplicative identity is the power series 1 + t. A structure of pre-λ-ring on R is a morphism of abelian groups
(A.a) λ t = λ R t : R → Λ(R) ; r → n∈N λ n (r)t n .
When R, S are pre-λ-rings, a ring morphism f : R → S is called a morphism of pre-λ-rings if it commutes with the operations λ n , i.e. if the following diagram commutes Λ(R)
Λ(f ) / / Λ(S) R f / / λ R t O O S λ S t O O
When R is a ring, a pre-λ-ring structure on Λ(R) is defined by setting for j ∈ N {0}
λ j n∈N f n t n = i∈N Q i,j (f 1 , . . . , f ij )t i ,
where Q i,j are certain universal polynomials defined in (C.2.a). Then R → Λ(R) defines a functor from the category of rings to that of pre-λ-rings. A pre-λ-ring R is called a λ-ring if λ t is a morphism of pre-λ-rings. This amounts to the following relations, for all n, i, j ∈ N {0}:
(A.b)
λ n (xy) = P n (λ 1 (x), . . . , λ n (x), λ 1 (y), . . . , λ n (y)) for x, y ∈ R,
(A.c) λ i (λ j (z)) = Q i,j (λ 1 (z), . . . , λ ij (z)) for z ∈ R.
Note that if E is a subset of R such that (A.b) and (A.c) are satisfied for all x, y, z ∈ E, then (A.b) and (A.c) are satisfied for all x, y, z lying in the subgroup generated by E in R.
Note also that if R is a λ-ring, and x, y, z ∈ R, it follows from Lemma C.3.1 that (A.d) λ n (xyz) = R n (λ 1 (x), . . . , λ n (x), λ 1 (y), . . . , λ n (y), λ 1 (z), . . . , λ n (z)),
where R n is a polynomial defined in §C.3.
Lemma A.1. Let R be a commutative ring and x ∈ R. Then in Λ(R) we have λ 1 (1 + xt) = 1 + xt and λ i (1 + xt) = 0 for i > 1.
Proof. This amounts to verifying that Q ij (x, 0, . . .) = x when i = j = 1, and that Q ij (x, 0, . . .) = 0 when i > 1 or j > 1, which follows at once from (C.2.a) under U 1 → x and U s → 0 for s > 0.
Lemma A.2. Let R be a commutative ring and x ∈ R. Let f i ∈ R for i ∈ N be such that
f 0 = 1. Then n∈N f n t n · (1 + xt) = n∈N f n x n t n ∈ Λ(R).
Moreover, if x ∈ R × , then 1 + xt is invertible in Λ(R), and (1 + xt) i = 1 + x i t for all i ∈ Z.
Proof. The first formula amounts to verifying that P n (f 1 , . . . , f n , x, 0, . . .) = f n x n , which follows from (C.1.a) (and (C.0.a)) under V 1 → x and V j → 0 for j > 1. ] is a morphism of pre-λ-rings and λ t (x) = 1 + xt. In addition, λ n (rx i ) = λ n (r)x ni for any r ∈ R, i ∈ Z, n ∈ N.
Proof. Let S = R[x ±1
]. By Lemma A.2, the element 1 + xt ∈ Λ(S) is invertible and there exists then a unique pre-λ-ring structure λ t : S → Λ(S) such that λ t (x) = 1 + xt and R → S is a morphism of pre-λ-rings. Consider the diagram
Λ(S) Λ(λ S t ) / / Λ(Λ(S)) Λ(R) Λ(λ R t )
Using the fact that Λ(R) and Λ(S) are λ-rings [AT69, Theorem 1.4], we see that all maps are ring morphisms. The interior middle square is commutative because R is a λ-ring, and the right one because Λ(R) → Λ(S) is a morphism of pre-λ-rings. Commutativity of each of the other three interior squares follows from the fact that R → S is a morphism of pre-λ-rings. We conclude that the exterior square is a diagram of R-algebras. To verify its commutativity it thus suffices to observe its effect on x ∈ S, which is done using Lemma A.1. We have proved that S is λ-ring. The last statement follows from Lemma A.2.
Appendix B. Graded rings
Let S = S 0 ⊕ S 1 be a commutative Z/2-graded ring. There is a general procedure to construct a commutative Z-graded ring out of S, which we now explain. We may consider the ring of Laurent polynomials S[x ±1 ] as a graded ring by setting |x| = 1 and |s| = 0 for any s ∈ S. We consider the Z-graded subgroup S ⊂ S[x ±1 ] defined by
S i := S (i mod 2) · x i , for i ∈ Z.
It is straightforward to check that S is in fact a Z-graded subring of S[x ±1 ], and that the canonical homomorphism of abelian groups S → S defined by u → ux i for u ∈ S i and i = 0, 1, has the property that the composite with the projection
S → S π − → S/(x 2 − 1)
is an isomorphism of Z/2-graded rings. Suppose next that G is an abelian group, and that S is a G-graded ring having the structure of a λ-ring. We will say that S is a G-graded λ-ring if λ i (r) ∈ S ig for any i ∈ N, any g ∈ G and any r ∈ S g . As a corollary of Lemma A.3, we obtain the following result.
Lemma B.1. Let S be a commutative Z/2-graded λ-ring. Then, the structure of λ-ring on S[x ±1 ] defined in Lemma A.3 induces a λ-ring structure on S which turns it into a Z-graded λ-ring. If r ∈ S i for some i ∈ Z, there exists a unique s ∈ S (i mod 2) such that r = sx i and we have λ n (r) = λ n (s)x ni ∈ S ni . (1 + tU i ) = n∈N t n σ n (U ).
C.1. The polynomials P n . By the theory of symmetric polynomials, there are polynomials P n ∈ Z[X 1 , . . . , X n , Y 1 , . . . , Y n ] such that Let R be a commutative ring. For every x ∈ R, let us define elements ℓ i (x) ∈ R for each integer i ≥ 1 by the formula
(C.1.b) ℓ i (x) = x if i = 1, 1 if i = 2, 0 if i > 2.
For elements a 1 , . . . , a r ∈ R × , we consider the polynomial (C.1.c) π a 1 ,...,ar (t) = ε 1 ,...,εr∈{1,−1}
(1 + ta ε 1 1 · · · a εn n ) ∈ R[t].
These polynomials can be expressed inductively as (C.1.d) π a 1 ,...,ar (t) = π a 1 ,...,a r−1 (ta r ) · π a 1 ,...,a r−1 (ta −1 r ). Note that for any a ∈ R × π a (t) = 1 + (a + a −1 )t + t 2 , and for any a, b ∈ R × , setting x = a + a −1 and y = b + b −1 , (C.1.e) π a,b (t) = 1 + txy + t 2 (x 2 + y 2 − 2) + t 3 xy + t 4 .
Lemma C.1.1. Let R be a commutative ring and x, y ∈ R. Then P n (ℓ 1 (x), . . . , ℓ n (x), ℓ 1 (y), . . . , ℓ n (y)) =
1 if n ∈ {0, 4}, xy if n ∈ {1, 3}, x 2 + y 2 − 2 if n = 2, 0 if n > 4.
Proof. Consider the ring S = R[a, a −1 , b, b −1 ]/(x − a − a −1 , y − b − b −1 ). Then S contains R. We have σ i (a, a −1 ) = ℓ i (x) and σ i (b, b −1 ) = ℓ i (y) for all i, so that, by (C.1.a) and (C.1.c) π a,b (t) = n P n (ℓ 1 (x), . . . , ℓ n (x), ℓ 1 (y), . . . , ℓ n (y))t n .
Thus the statement follows from (C.1.e).
Lemma C.1.2. Let R be a commutative ring and n ∈ N {0}. Then for every r 1 , . . . , r n ∈ R, the element P n (r 1 , . . . , r n , ℓ 1 (B), . . . , ℓ n (B)) − B n r n ∈ R[B]
is a polynomial in B of degree ≤ n − 1.
Proof. We may assume that R = Z[X 1 , . . . , X n ] and that r i = X i for all i = 1, . . . , n. By algebraic independence of the elementary symmetric polynomials, the ring R is then a subring of R ′ = Z[U 1 , . . . , U n ], via X i → σ (1 + tU i B + t 2 U 2 i ).
Expanding the last product and looking at the t n -coefficients of both sides of the equation, we see that P n (σ 1 (U ), . . . , σ n (U ), ℓ 1 (B), . . . , ℓ n (B)) has leading term B n σ n (U ) as a polynomial in B (in view of (C.0.a)).
C.2. The polynomials Q i,j . By the theory of symmetric polynomials, there are polynomials Q i,j ∈ Z[X 1 , . . . , X ij ] (where i, j ∈ N) such that (C.2.a) 1≤α 1 <···<α j ≤m
(1 + U α 1 · · · U α j t) = Proof. Let S = R[a, a −1 ]/(x − a − a −1 ). Then S contains R. Setting w 1 = a, w 2 = a −1 and w k = 0 in S for k > 2, we have σ k (w) = ℓ k (x) for all k. Thus for all j ∈ N i∈N t i Q i,j (ℓ 1 (x), . . . , ℓ ij (x)) (C.2.a) = 1≤α 1 <···<α j ≤m
(1 + w α 1 · · · w α j t) = 1 + tx + t 2 if j = 1, 1 + t if j = 2, 1 otherwise.
C.3. The polynomials R n . By the theory of symmetric polynomials, there are polynomials R n ∈ Z[X 1 , . . . , X n , Y 1 , . . . , Y n , Z 1 , . . . , Z n ] such that 1≤i,j,k≤m R n = P n (X 1 , . . . , X n , P 1 (Y 1 , Z 1 ), . . . , P n (Y 1 , . . . , Y n , Z 1 , . . . , Z n )).
Proof Since the elements X r = σ r (U ), Y r = σ r (V ), Z r = σ r (W ) for r = 1, . . . , m are algebraically independent, this yields the statement. if n ∈ {1, 7}, x 2 y 2 + x 2 z 2 + y 2 z 2 − 2(x 2 + y 2 + z 2 ) + 4 if n ∈ {2, 6}, x 3 yz + xy 3 z + xyz 3 − 5xyz if n ∈ {3, 5}, x 4 + y 4 + z 4 + x 2 y 2 z 2 − 4(x 2 + y 2 + z 2 ) + 6 if n = 4, 0
if n > 8.
Proof. Consider the ring S = R[a, a −1 , b, b −1 , c, c −1 ]/(x − a − a −1 , y − b − b −1 , z − c − c −1 ). Then S contains R. We have σ i (a, a −1 ) = ℓ i (x), σ i (b, b −1 ) = ℓ i (y), σ i (c, c −1 ) = ℓ i (z) for all i. Writing r n = R n (ℓ 1 (x), . . . , ℓ n (x), ℓ 1 (y), . . . , ℓ n (y), ℓ 1 (z), . . . , ℓ n (z)), we have by definition of R n and (C.1.c) π a,b,c (t) = n∈N r n t n ∈ S[t].
Since π a,b,c (t) = π a,b (tc) · π a,b (tc −1 ) by (C.1.d), it follows from (C.1.e) that π a,b,c (t) equals
(1 + txyc + t 2 (x 2 + y 2 − 2)c 2 + t 3 xyc 3 + t 4 c 4 )(1 + txyc −1 + t 2 (x 2 + y 2 − 2)c −2 + t 3 xyc −3 + t 4 c −4 ).
To conclude, we compute the coefficients r n by expanding the above product. We have r 0 = r 8 = 1 and r n = 0 for n > 8, as well as r 1 = r 7 = xy(c + c −1 ) = xyz.
Using the fact that c 2 + c −2 = z 2 − 2, we have r 2 = r 6 = (x 2 + y 2 − 2)(c 2 + c −2 ) + x 2 y 2 = x 2 y 2 + x 2 z 2 + y 2 z 2 − 2(x 2 + y 2 + z 2 ) + 4. Now c 3 + c −3 = z 3 − 3z, hence r 3 = r 5 = xy(c 3 + c −3 ) + (x 2 + y 2 − 2)xy(c + c −1 ) = x 3 yz + xy 3 z + xyz 3 − 5xyz.
Finally c 4 + c −4 = z 4 − 4z 2 + 2, hence r 4 = c 4 + c −4 + x 2 y 2 (c 2 + c −2 ) + (x 2 + y 2 − 2) 2 = x 4 + y 4 + z 4 + x 2 y 2 z 2 − 4(x 2 + y 2 + z 2 ) + 6.
/
/ BGL[ 1 n ] in which the vertical morphisms are the forgetful maps and the bottom horizontal morphism is the Adams operation on K-theory defined by Riou [Rio10, Definition 5.3.2].
(E, ε) ⊗2 ] − 2 + [(F, ϕ) ⊗2 ] − 2 by 2.5 and 2.3
Lemma 4.1. 1 .
1Let M be a G-equivariant vector bundle over X equipped with a G-equivariant nondegenerate ε-symmetric bilinear form µ, for some ε ∈ {1, −1}. Assume that (M, µ) admits a (G-invariant) Lagrangian L, and let n ∈ N.
Proposition 4.2. 2 .
2Let G be a split reductive group scheme over Z[ 1 2 ]. Then the pre-λ-ring GW + (Spec(Z[ 1 2 ]); G) is a λ-ring. Proof. By [Zib15, Proposition 2.1] the pre-λ-ring GW + (Spec(Q); G Q ) is a λ-ring. It follows from Theorem 3.6 that GW + (Spec(Z[ 1 2 ]); G) is a pre-λ-subring of GW + (Spec(Q); G Q ), hence a λ-ring.
For every Z[ 1 2 ]-scheme X, the pre-λ-ring GW + (X) is a λ-ring. Proof. This follows from Proposition 4.2.2 (applied to the split reductive groups O n and O m × O n ), using the arguments of [BGI71, Exposé VI, Théorème 3.3] (see [Zib15, §3.2] for details).
Lemma 4.2. 4 .
4The relations (A.b) and (A.c) are satisfied for all x, y, z ∈ GW − (X).Proof. By the symplectic splitting principle [PW21, §10], we may assume that x, y, z are each represented by a rank two symplectic bundle. In view of (4.2.a), the relation (A.c) follows from Lemma C.2.1. The relation (A.b) has been verified in Proposition 2.7, see Lemma C.1.1.
5 .
5The Adams operations 5.1. The unstable Adams operations. The λ-operations constructed in §4 are not additive (with the exception of λ 1 ), and there is a standard procedure to obtain additive operations from the λ-operations which is valid in any pre-λ-ring, see e.g. [AT69, §5]. Indeed, for any Z[ 1 2 ]-scheme X, we define the (unstable) Adams operations ψ n : GW 2i 0 (X) → GW 2ni 0 (X) for n ∈ N {0}, i ∈ Z through the inductive formula (see e.g. [AT69, Proof of Proposition 5.4])
On the other hand, by induction we have γψ n−2 (τ ) = γτ γ Combining these two computations with (5.2.f) proves the statement when n is odd.Assume now that n is even. Using the induction hypothesis we have
with ω(n) by Lemma 5.3.3, as required. Assume that n is odd. Observe that h =
.
For n ∈ N, we define an element n ⋆ ∈ GW 0 0 (Spec(Z[ 1 2 ])) by n ⋆ = n if n is odd, n 2 h if n is even. (Recall from Notation 1.1 that h = 1 − ǫ ∈ GW 0 0 (Spec(Z[ 1 2 ])) is the hyperbolic class.) Lemma 5.4.2. Let R = GW even 0 (Spec(Z[ 1 2 ])). Then the R-algebras R[ 1 n ⋆ ] and R[ 1 ω(n) ] are isomorphic.
n ∈ N, we define an element n * ∈ B by n * = n if n is odd, n 2 (1 − e) if n is even. For any m, n ∈ N, we have (5.4.c) (mn) * = m * n * ∈ B.
0 (
0. . . and define S[ 1 n * ] to be its homotopy colimit in SH(Z[ 1 2 ]). Further, we set GW 1 n * := GW ∧ S 1 n * . This is naturally a motivic ring spectrum.The B-algebra structure on GW even Spec(Z[ 1 2 ])) induced by (5.4.d) is given by e → ǫ (the argument is detailed in the last paragraph of the proof of [PW19, Theorem 11.1.5]), and in particular maps n * to n ⋆ . It thus follows from Lemma 5.4.2, that for any i ∈ N, the morphism GW[ 1 n * ] → Σ an inverse in SH(Z[ 1 2 ]) (5.4.e) ω(n) −i : Σ i(n−1) T GW 1 n * → GW 1 n * .
The lim 1 -term vanishes by [PW19, Theorems 9.4,13.2,13.3] (see the proof of [PW19, Theorem 13.1]). Thus a map G → E in H(Z[ 1 2 ]) is determined by its restrictions to [Y m , E] H(Z[ 1 2 ]) , for m ∈ N, each of which is determined by its restriction to [(Y m,1 × · · · × Y m,r ) + , E] H(Z[ 1 2 ]) , in view of [PW19, Lemma 7.6]. The latter is the image of the tuple of canonical maps
of Lemma 5.5.7 and of the fact that ψ n (1) = 1. Proposition 5.5.9. For any integers m, n ∈ N, the composite in SH(Z[ to Ψ mn . (Here Ψ m [ 1 n * ] denotes the image of the morphism Ψ m under the localisation functor, and the last equality follows from (5.4.c).) Proof. For every i ∈ Z, applying the functor Ω ∞ T Σ i T : SH(Z[ 1 2 ]) → H(Z[ 1 2 ]) to the morphism Ψ m [ 1 n * ] : GW[ 1 n * ] → GW[ 1 (mn) * ] * : GW 2i 1 n * → GW 2i 1 (mn) * .
. . . , x 4 ) for any i ∈ {1, . . . , 4}. Proof. Direct computation. In the next statement b GW i denotes the i-th Borel class with values in Hermitian K-theory [PW21, Definition 8.3].
b
GW 3 (E) = σ 3 (e 1 , . . . , e 4 ) − 2τ σ 2 (e 1 , . . . ,e 4 ) + 3τ 2 e − 4τ 3 = λ 3 (e) − 3γe − 2τ (λ 2 (e) − 4γ) + 6(1 − ǫ)γe − 16τ γ = λ 3 (e) + 3(1 − 2ǫ)γe − 2τ λ 2 (e) − 8τ γ.In case i = 4, we have b GW 4 (E) = σ 4 (e 1 , . . . , e 4 ) − τ σ 3 (e 1 , . . . , e 4 ) + τ 2 σ 2 (e 1 , . . . , e 4 ) − τ 3 e + τ 4 . Using Lemma 6.1, we find σ 4 (e 1 , . . . , e 4 ) = λ 4 (e) − 2σ 2 (e 1 , . . . , e 4 )γ − 6γ 2 = λ 4 (e) − 2λ 2 (e)γ + 2γ 2 , τ σ 3 (e 1 , . . . , e 4 ) = τ (λ 3 (e) − 3γe) = τ λ 3 (e) − 3τ γe, τ 2 σ 2 (e 1 , . . . , e 4 ) = 2(1 − ǫ)γσ 2 (e 1 , . . . , e 4 ) = 2(1 − ǫ)γλ 2 (e) − 8(1 − ǫ)γ 2 .
tU i V j ) = n∈N t n P n (σ 1 (U ), . . . , σ n (U ), σ 1 (V ), . . . , σ n (V )) holds in Z[U 1 , . . . , U m , V 1 , . . . , V m ][t]for every m.
i (U ). The ring S = R ′ [B, A, A −1 ]/(B − A − A −1 ) then contains R ′ [B], and thus also R[B]. Since σ i (A, A −1 ) = ℓ i (B) for all i, we have in S[t] n i=1 P i (σ 1 (U ), . . . , σ n (U ), ℓ 1 (B), . . . , ℓ n (B))t i = n i=1 (1 + tU i A)(1 + tU i A −1 ), and thus, in R ′ [B][t], n i=1 P i (σ 1 (U ), . . . , σ n (U ), ℓ 1 (B), . . . , ℓ n (B))t i = n i=1
t i Q i,j (σ 1 (U ), . . . , σ ij (U )) holds in Z[U 1 , . . . , U m ][t] for every m. For instance, we have Q 1,j = X j for any j ∈ N {0}. Lemma C.2.1. Let R be a commutative ring and x ∈ R. ThenQ i,j (ℓ 1 (x), . . . , ℓ ij (x)(x) if j = 1 and i = 0, 1 if i = 1 and j = 2, or if i = 0, 0 otherwise.
( 1 +
1tU i V j W k ) = n∈N t n R n (σ 1 (U ), . .. , σ n (U ), σ 1 (V ), . . . , σ n (V ), σ 1 (W ), . . . , σ n (W )) holds in Z[U 1 , . . . , U m , V 1 , . . . , V m , W 1 , . . . , W m ][t] for every m.Lemma C.3.1. For n ≤ m, we have in Z[X 1 , . . . , X m , Y 1 , . . . , Y m , Z 1 , . . . , Z m ]
.
Observe that, in Z[U 1 , . . . , U m , V 1 , . . . , V m ][t], Since the elements Y r = σ r (V ) for r = 1, . . . , m are algebraically independent, in view of (C.1.a) it follows that we have in Z[U 1 , . . . , U m , Y 1 , . . . , Y m ][t], (writing Y s = 0 for s > m) (C.3.a) n∈N P n (σ 1 (U ), . . . , σ n (U ), Y 1 , . . . , Y n )t Now in Z[V 1 , . . . , V m , W 1 , . . . , W m ], set for any n ∈ N, p n = P n (σ 1 (V ), . . . , σ m (V ), σ 1 (W ), . . . , σ m (W )), so that, in Z[U 1 , . . . , U m , V 1 , . . . , V m , W 1 , . . . , W m ][t], 1≤i,j,k≤m(1 + tU i V j W k ) n (σ 1 (U ), . . . , σ n (U ), p 1 , . . . , p n )t n .
Lemma C. 3 . 2 .
32Let R be a commutative ring and x, y, z ∈ R. Then R n (ℓ 1 (x), . . . , ℓ n (x), ℓ 1 (y), . . . , ℓ n (y), ℓ 1 (z), . . . , ℓ n (
Lemma A.3. Let R be a λ-ring, and consider the ring of Laurent polynomials R[x ±1 ] with coefficients in R. Then there exists a unique structure of λ-ring on R[x ±1 ] such that R → R[x ±1
Appendix C. Some polynomial identities When U 1 , . . . , U m is a series of variables, we denote by σ n (U ) ∈ Z[U 1 , . . . , U m ] the elementary symmetric functions, defined by the formula, valid in Z[U 1 , . . . , U m ][t], (C.0.a)1≤i≤m
Acknowledgments. The first named author is grateful to Aravind Asok, Baptiste Calmès and Frédéric Déglise for useful discussions. Both authors warmly thank Alexey Ananyevskiy for sharing a preprint on Adams operations which has been a source of inspiration for the results of the present paper, and Tom Bachmann for very useful suggestions. They also heartily thank the referee for a careful reading and useful comments that helped correct mistakes and improve the exposition.Corollary 5.5.3. Let i ∈ Z be odd and let n ∈ N. Then the diagramwhere the upper horizontal arrow is induced by the map (1.d), and the lower one by (5.5.d).Proof. We have a commutative diagramCombining this diagram with Proposition 5.5.2 yields the corollary, in view of (5.5.e).Proposition 5.5.4. For any r, n ∈ N, the natural morphismis bijective. , and the proposition follows using a cofinality argument.The transition maps in the limit appearing in Proposition 5.5.4 are given by the compositewhere the first map is given by composition with the map T ∧2 ∧ GW 2i → GW 2(i+2) induced by (5.5.d). It thus follows from Proposition 5.5.3 that the family Ψ n i of (5.5.e), for i odd, defines an element of the limit appearing in Proposition 5.5.4 (with r = 1).
Vector fields on spheres. J F Adams, Ann. of Math. 75J. F. Adams, Vector fields on spheres, Ann. of Math. 75 (1962), 603-632.
Smooth models of motivic spheres and the clutching construction. A Asok, B Doran, J Fasel, IMRN. 61A. Asok, B. Doran, and J. Fasel, Smooth models of motivic spheres and the clutching construction, IMRN 6 (2016), no. 1, 1890-1925.
A Ananyevskiy, Stable operations and cooperations in derived Witt theory with rational coefficients, Annals of K-theory. 2A. Ananyevskiy, Stable operations and cooperations in derived Witt theory with rational coefficients, Annals of K-theory 2 (2017), no. 4, 517-560.
M F Atiyah, D O Tall, Group representations, λ-rings and the J-homomorphism. 8M. F. Atiyah and D. O. Tall, Group representations, λ-rings and the J-homomorphism, Topology 8 (1969), 253-297.
Motivic and realétale stable homotopy theory. T Bachmann, Compositio Math. 1545T. Bachmann, Motivic and realétale stable homotopy theory, Compositio Math. 154 (2018), no. 5, 883-917.
Witt groups, Handbook of K-theory. P Balmer, Springer1BerlinP. Balmer, Witt groups, Handbook of K-theory. Vol. 1, 2, Springer, Berlin, 2005, pp. 539-576.
Séminaire de Géométrie Algébrique du Bois Marie -1966-67 -Théorie des intersections et théorème de Riemann-Roch -(SGA6). P Berthelot, A Grothendieck, L Illusie, Lecture Notes in Math. 225Springer-VerlagP. Berthelot, A. Grothendieck, and L. Illusie, Séminaire de Géométrie Algébrique du Bois Marie -1966-67 -Théorie des intersections et théorème de Riemann-Roch -(SGA6), Lecture Notes in Math., vol. 225, Springer-Verlag, Berlin-New York, 1971.
T Bachmann, M J Hopkins, arXiv:2005.06778η-periodic motivic stable homotopy theory over fields. T. Bachmann and M. J. Hopkins, η-periodic motivic stable homotopy theory over fields, arXiv:2005.06778, 2020.
Éléments de mathématique. Algèbre. Chapitres 1à 3. Nicolas Bourbaki, HermannParisNicolas Bourbaki.Éléments de mathématique. Algèbre. Chapitres 1à 3. Hermann, Paris, 1970.
A Gersten-Witt spectral sequence for regular schemes. P Balmer, C Walter, Ann. Sci.École Norm. Sup. 4P. Balmer and C. Walter, A Gersten-Witt spectral sequence for regular schemes, Ann. Sci.École Norm. Sup. (4) 35 (2002), no. 1, 127-152.
The Borel character. Frédéric Déglise, Jean Fasel, J. Inst. Math. Jussieu. 222Frédéric Déglise and Jean Fasel. The Borel character. J. Inst. Math. Jussieu, 22(2):747-797, 2023.
A vanishing theorem for oriented intersection multiplicities. J Fasel, V Srinivas, Math. Res. Lett. 153J. Fasel and V. Srinivas, A vanishing theorem for oriented intersection multiplicities, Math. Res. Lett. 15 (2008), no. 3, 447-458.
Intersection theory using Adams operations. H Gillet, C Soulé, Invent. Math. 902H. Gillet and C. Soulé, Intersection theory using Adams operations, Invent. Math. 90 (1987), no. 2, 243-277.
Motivic symmetric spectra. R Jardine, Doc. Math., J. DMV. 5R. Jardine, Motivic symmetric spectra, Doc. Math., J. DMV 5 (2000), 445-553.
On the cyclic homology of exact categories. B Keller, J. Pure Appl. Algebra. 136B. Keller, On the cyclic homology of exact categories, J. Pure Appl. Algebra 136 (1999), 1-56.
J Milnor, D Husemoller, Symmetric bilinear forms. BerlinSpringer73J. Milnor and D. Husemoller, Symmetric bilinear forms, Ergebnisse der Mathematik und ihrer Gren- zgebiete. 3. Folge. A Series of Modern Surveys in Mathematics, vol. 73, Springer, Berlin, 1973.
On the motivic commutative ring spectrum BO. I Panin, C Walter, St. Petersbg. Math. J. 306I. Panin and C. Walter, On the motivic commutative ring spectrum BO, St. Petersbg. Math. J. 30 (2019), no. 6, 933-972.
Quaternionic Grassmannians and Borel classes in algebraic geometry, Algebra i Analiz. I Panin, C Walter, 33I. Panin and C. Walter, Quaternionic Grassmannians and Borel classes in algebraic geometry, Alge- bra i Analiz, 33 (2021), no. 1, 136-193.
Catégorie homotopique stable d'un site suspendu avec intervalle. J Riou, Bull. Soc. Math. France. 1354J. Riou, Catégorie homotopique stable d'un site suspendu avec intervalle, Bull. Soc. Math. France 135 (2007), no. 4, 495-547.
Algebraic K-theory, A 1 -homotopy and Riemann-Roch theorems. J Riou, J. Topology. 3J. Riou, Algebraic K-theory, A 1 -homotopy and Riemann-Roch theorems, J. Topology 3 (2010), 229- 264.
Hermitian K-theory, derived equivalences and Karoubi's fundamental theorem. M Schlichting, J. Pure Appl. Algebra. 2217M. Schlichting, Hermitian K-theory, derived equivalences and Karoubi's fundamental theorem, J. Pure Appl. Algebra 221 (2017), no. 7, 1729-1844.
Geometric models for higher Grothendieck-Witt groups in A 1 -homotopy theory. M Schlichting, G S Tripathi, Math. Ann. 3623-4M. Schlichting and G.S. Tripathi, Geometric models for higher Grothendieck-Witt groups in A 1 - homotopy theory, Math. Ann. 362 (2015), no. 3-4, 1143-1167.
Groupe de Grothendieck des schémas en groupes réductifs déployés. J.-P Serre, Publ. Math. Inst. HautesÉtudes Sci. 34J.-P. Serre, Groupe de Grothendieck des schémas en groupes réductifs déployés, Publ. Math. Inst. HautesÉtudes Sci. 34 (1968), 37-52.
The Stacks Project Authors. Stacks Project. The Stacks Project Authors. Stacks Project. http://stacks.math.columbia.edu.
Grothendieck-Witt groups of triangulated categories. C Walter, PreprintC. Walter, Grothendieck-Witt groups of triangulated categories, Preprint available at https://www.math.uiuc.edu/K-theory/0643/, 2003.
Symmetric representations rings are λ-rings. M Zibrowius, New York J. Math. 21M. Zibrowius, Symmetric representations rings are λ-rings, New York J. Math. 21 (2015), 1055-1092.
The γ-filtration on the Witt ring of a scheme. M Zibrowius, Quart. J. Math. 692M. Zibrowius, The γ-filtration on the Witt ring of a scheme, Quart. J. Math. 69 (2018), no. 2, 549-583.
. Institut Fourier -UMR. 558238058Université Grenoble-AlpesCSInstitut Fourier -UMR 5582, Université Grenoble-Alpes, CS 40700, 38058
France Email address: jean.fasel@univ-grenoble-alpes. Grenoble Cedex. 9Grenoble Cedex 9, France Email address: [email protected] URL: https://www-fourier.univ-grenoble-alpes.fr/~faselj/
. Mathematisches Institut, Ludwig-Maximilians-Universität München, D-80333Theresienstr. 39Mathematisches Institut, Ludwig-Maximilians-Universität München, Theresienstr. 39, D-80333
Germany Email address: olivier.haution@gmail. München, München, Germany Email address: [email protected] URL: https://haution.gitlab.io/
| [] |
[
"Large-scale turbulent driving regulates star formation in high-redshift gas-rich galaxies",
"Large-scale turbulent driving regulates star formation in high-redshift gas-rich galaxies"
] | [
"Noé Brucy \nAIM\nCEA\nCNRS\nUniversité Paris-Saclay\nUniversité Paris Diderot\nSorbonne Paris CitéF-91191Gif-sur-YvetteFrance\n",
"Patrick Hennebelle \nAIM\nCEA\nCNRS\nUniversité Paris-Saclay\nUniversité Paris Diderot\nSorbonne Paris CitéF-91191Gif-sur-YvetteFrance\n",
"Frédéric Bournaud \nAIM\nCEA\nCNRS\nUniversité Paris-Saclay\nUniversité Paris Diderot\nSorbonne Paris CitéF-91191Gif-sur-YvetteFrance\n",
"Cédric Colling \nAIM\nCEA\nCNRS\nUniversité Paris-Saclay\nUniversité Paris Diderot\nSorbonne Paris CitéF-91191Gif-sur-YvetteFrance\n"
] | [
"AIM\nCEA\nCNRS\nUniversité Paris-Saclay\nUniversité Paris Diderot\nSorbonne Paris CitéF-91191Gif-sur-YvetteFrance",
"AIM\nCEA\nCNRS\nUniversité Paris-Saclay\nUniversité Paris Diderot\nSorbonne Paris CitéF-91191Gif-sur-YvetteFrance",
"AIM\nCEA\nCNRS\nUniversité Paris-Saclay\nUniversité Paris Diderot\nSorbonne Paris CitéF-91191Gif-sur-YvetteFrance",
"AIM\nCEA\nCNRS\nUniversité Paris-Saclay\nUniversité Paris Diderot\nSorbonne Paris CitéF-91191Gif-sur-YvetteFrance"
] | [] | The question of what regulates star formation is a longstanding issue. To investigate this issue, we run simulations of a kiloparsec cube section of a galaxy with three kinds of stellar feedback: the formation of H II regions, the explosion of supernovae, and the ultraviolet heating. We show that stellar feedback is sufficient to reduce the averaged star formation rate (SFR) to the level of the Schmidt-Kennicutt law in Milky Way-like galaxies but not in high-redshift gas-rich galaxies suggesting that another type of support should be added. We investigate whether an external driving of the turbulence such as the one created by the large galactic scales could diminish the SFR at the observed level. Assuming that the Toomre parameter is close to 1 as suggested by the observations, we infer a typical turbulent forcing that we argue should be applied parallel to the plane of the galactic disk. When this forcing is applied in our simulations, the SFR within our simulations closely follows the Schmidt-Kennicutt relation. We found that the velocity dispersion is strongly anisotropic with the velocity dispersion alongside the galactic plane being up to 10 times larger than the perpendicular velocity. | 10.3847/2041-8213/ab9830 | [
"https://arxiv.org/pdf/2006.02084v2.pdf"
] | 219,259,835 | 2305.18012 | e1ee569686f3ccbcc3d4eb11a29353031e0a5415 |
Large-scale turbulent driving regulates star formation in high-redshift gas-rich galaxies
June 22, 2020
Noé Brucy
AIM
CEA
CNRS
Université Paris-Saclay
Université Paris Diderot
Sorbonne Paris CitéF-91191Gif-sur-YvetteFrance
Patrick Hennebelle
AIM
CEA
CNRS
Université Paris-Saclay
Université Paris Diderot
Sorbonne Paris CitéF-91191Gif-sur-YvetteFrance
Frédéric Bournaud
AIM
CEA
CNRS
Université Paris-Saclay
Université Paris Diderot
Sorbonne Paris CitéF-91191Gif-sur-YvetteFrance
Cédric Colling
AIM
CEA
CNRS
Université Paris-Saclay
Université Paris Diderot
Sorbonne Paris CitéF-91191Gif-sur-YvetteFrance
Large-scale turbulent driving regulates star formation in high-redshift gas-rich galaxies
June 22, 2020(Received 2020, April 7; Revised 2020, May 8; Accepted 2020, May 31) Submitted to ApJLDraft version Typeset using L A T E X twocolumn style in AASTeX62Star formation (1569)Galaxy dynamics (591)Galaxy physics (612)Interstellar medium (847)Radiative transfer simulations (1967)Magnetohydrodynamical simulations (1966)
The question of what regulates star formation is a longstanding issue. To investigate this issue, we run simulations of a kiloparsec cube section of a galaxy with three kinds of stellar feedback: the formation of H II regions, the explosion of supernovae, and the ultraviolet heating. We show that stellar feedback is sufficient to reduce the averaged star formation rate (SFR) to the level of the Schmidt-Kennicutt law in Milky Way-like galaxies but not in high-redshift gas-rich galaxies suggesting that another type of support should be added. We investigate whether an external driving of the turbulence such as the one created by the large galactic scales could diminish the SFR at the observed level. Assuming that the Toomre parameter is close to 1 as suggested by the observations, we infer a typical turbulent forcing that we argue should be applied parallel to the plane of the galactic disk. When this forcing is applied in our simulations, the SFR within our simulations closely follows the Schmidt-Kennicutt relation. We found that the velocity dispersion is strongly anisotropic with the velocity dispersion alongside the galactic plane being up to 10 times larger than the perpendicular velocity.
INTRODUCTION
The formation of stars is a key process that has a major impact on the galactic evolution. Its efficiency and rate are influenced by many factors, and the relative importance of each of them is still poorly understood. One of the main reasons why it is so hard to fully understand star formation is that it involves scales ranging from a few astronomical units up to several kiloparsecs, with about nine orders of magnitude between them. As a consequence, self-consistent simulations of star formation in a galaxy are out of reach for now, and some possibly important factors have to be neglected or added through subgrid models (Dubois & Teyssier 2008;Hopkins et al. 2011). Simulations of smaller regions of a galaxy are a useful complementary tool that enables the use of a higher resolution and the performance of parametric studies. An important challenge for this kind of numerical simulations is reproducing the Schmidt-Kennicutt (SK) law (Kennicutt 1998;Kennicutt & Evans 2012) that links the star formation rate (SFR) to the column density of gas. Previous results (Walch et al. 2015;Padoan et al. 2016;Iffrig & Hennebelle 2017;Kim & Ostriker 2017;Gatto et al. 2017) indicated that the magnetic field has a moderate effect on the SFR but that stellar feedback (namely H II regions and supernovae) can greatly reduce the SFR in Milky Way-like galaxies down to a rate consistent with the observed one. Colling et al. (2018) have shown that with a more comprehensive model of stellar feedback, including ionizing radiation as well as supernovae that explode after a delay corresponding to the stellar lifetime, the SFR typically lies a few times above the SK relation. However, they have shown that the galactic shear may be able, if it is strong enough, to reduce the SFR sufficiently to make it compatible with the SK law. In our work, we run simulations of a local region of a galactic disk within a kiloparsec cube box. We use a numerical setup that is very close to the one used by Colling et al. (2018). Our primary goal is to extend their results to galaxies with higher column densities with the aim to reproduce the SK law. The galaxies that we model have a stellar and dark matter potential similar to the Milky Way with a mean column density of gas Σ 0,gas that varies from 13 to 155 M · pc −2 , representative for Milky Way-like galaxies up to gas-rich galaxies at redshift z = 1-3 (Genzel et al. 2008(Genzel et al. , 2010Daddi et al. 2010). Since the total gravitational potential remains constant, so does the galactic shear, which is therefore not sufficient to regulate star formation (Colling et al. 2018). On the other hand, several recent studies have shown that injection of turbulence from galactic motions has to be taken into account in order to explain the observed velocity dispersion and SFR (Renaud et al. 2012;Krumholz et al. 2018;Meidt et al. 2020) as suggested by Bournaud et al. (2010). Possible source of turbulence include the orbital energy or even mass accretion onto the galaxies. The latter in particular requires a mechanism such as an instability to degrade this source of free energy. We test the effect of such injection of turbulence by adding a large-scale turbulent driving similar to the one used by Schmidt et al. (2009).
This Letter is organized as follows. In the section 2 we present our numerical setup and our simulations. In section 3 we investigate the relation between the SFR and the gas column density when only stellar feedback is at play. In section 4 we show the results of similar simulations when we add a turbulent driving. The necessity of the stellar feedback to quench star formation is investigated in section 5. Section 6 concludes the Letter.
NUMERICAL SETUP
Magnetohydrodynamic (MHD) Simulations
We use the RAMSES code (Teyssier 2002), to solve the equations of MHD with a Godunov solver (Fromang et al. 2006) on a cubic grid of 256 3 cells with periodic boundaries on the midplane and open vertical boundaries. The box represents a cubic region of the galactic disk of size L = 1 kpc, so the resolution is about 4 pc. Sink particles (Bleuler & Teyssier 2014) are used to follow the dense gas and model star formation. Sink creation is triggered when the gas density overpasses a threshold of 10 3 cm −3 (Colling et al. 2018). All the mass accreted by a sink is considered as stellar mass.
We use the same initial conditions as Colling et al. (2018). To sum up, the gas (atomic hydrogen) is initially distributed as a Gaussian along z-axis,
n(z) = n 0 exp − 1 2 z z 0 2 ,(1)
with n 0 a free density parameter and z 0 = 150 pc. The column density of gas (hydrogen and helium), integrated 2.0 × 10 5 77.4 1.6 ± 1.3 × 10 33 12
1.0 × 10 6 155 3.4 ± 3.0 × 10 34
Note. The total averaged injected power Pinj is computed by comparing the kinetic energy in the box before and after applying the turbulent force. Simulations in the nofeed group has no stellar feedback (see section 5).
along the z-axis (perpendicular to the disk) is then
Σ gas,0 = √ 2πm p n 0 z 0(2)
where m p = 1.4 × 1.66 · 10 −24 g is the mean mass per hydrogen atom. The initial temperature is chosen to be 8000 K to match the typical value of the temperature of the warm neutral medium (WNM) phase of the Interstellar Medium (ISM). An initial turbulent velocity field with a root mean square (RMS) dispersion of 5 km · s −1 and a Kolmogorov power spectrum with random phase (Kolmogorov 1941) is also added. Finally, we add a Gaussian magnetic field, oriented along the x − axis,
B x (z) = B 0 exp − 1 2 z z 0 2 ,(3)
with B 0 = 4 µG. The rotation of the galaxy is not modeled.
Stellar feedback
The simulations include models for the formation and expansion of H II region, explosion of supernovae (SNe) and the far-ultraviolet (FUV) feedback. The H II and SN feedback models are same as in Colling et al. (2018). As in Colling et al. (2018), the FUV heating is uniform. However, it is not kept constant at the solar neighborhood value because young O-B star contribute significantly to the FUV emission. As a first approximation, the FUV heating effect can be considered to be proportional to the SFR (Ostriker et al. 2010). The mean FUV density relative to the solar neighbourhood value G 0 can then be written as
G 0 = Σ SFR Σ SFR, = Σ SFR 2.5 × 10 −9 M · pc −2 · yr −1(4)
In our model, G 0 has a minimal value of 1 (as a background contribution) and follows the equation 4 when the SFR increases.
Injection of Turbulence
Bournaud et al. (2010), Krumholz & Burkhart (2016) and Krumholz et al. (2018) show that for galaxies with high column densities or high SFRs, large-scale gravitational instabilities are the main source of turbulent energy and dominate over stellar feedback. We investigate numerically the effect of this turbulent driving on star formation. We use a model for turbulent driving adapted from the generalization of the Ornstein-Uhlenbeck used and explained by several authors (Eswaran & Pope 1988;Schmidt et al. 2006Schmidt et al. , 2009Federrath et al. 2010). The driving is bidimensional (2D) because we consider disk-shaped galaxies and expect large-scale turbulence driving to act mainly within the disk plane. A numerical confirmation of the predominance of the 2D modes at large scale in global galactic simulations is given by Bournaud et al. (2010) More precisely, the turbulent forcing is described by an external force density f that accelerates the fluid on large scales. The evolution of the Fourier modes of the acceleration fieldf (k, t) follows
df (k, t) = −f (k, t) dt T + F 0 (k)P ζ k x k y 0 · dW t (5)
In this stochastic differential equation, dt is the timestep for integration and T is the autocorrelation time scale.
In our simulations, we T = 0.5 Myr and dt/T = 1/100. Tests shows that choosing different values for T does not significantly impact the simulations. The Wiener process W t and the projection operator P ζ are defined as in Schmidt et al. (2009), ζ being the solenoidal fraction. In our runs, ζ = 0.75, and as a consequence the turbulent driving is stronger for the solenoidal modes. This choice of ζ is motivated by the fact that more compressive drivings are prone to bolster star formation instead of reducing it. Furthermore, this choice is in agreement with the value of ζ = 0.78 ± 0.14 found by Jin et al. (2017) in their simulation of a Milky Way-like galaxy. Note that we apply it to a projection of the wavenumber k in the disk plane instead of k itself, so that the resulting force will have no vertical component. The forcing field f (x, t) is then computed from the Fourier transform
f (x, t) = f rms × f (k, t)e ik·x d 3 k(6)
The parameter f rms is directly linked to the power injected by the turbulent force into the simulation.
Estimation of the Injected Power
With general considerations we can get an idea of the power injected by large-scale turbulence. The specific power injected by turbulence at a given scale l can be related with the typical speed of the motions v l at that scale. This being true for each scale l, there is the following relation between and the velocity dispersion of the gas σ g .
∼ v 3 l l ∝ σ 3 g (7)
The disk is supposed to be at marginal stability, so that the Toomre parameter is Q ∼ 1. The Toomre parameter can be estimated as follows:
Q = σ g κ πΣ g G ∝ σ g κ Σ g (8)
where κ is the epicyclic frequency (which does not depend on the gas column density Σ g ). Equation 8 can be rewritten σ g ∝ Σ g , a relation outlined in both observational and computational studies of high-redshift galaxies (Genzel et al. 2010;Dekel et al. 2009;Bournaud 2014). This leads to the following estimation for the specific power ∝ Σ 3 g .
Therefore the total power injected by large-scale motions P LS scales as
P LS ∝ Σ 4 g .(10)
In the appendix 5, we provide a more detailed estimation of the absolute value of P LS .
List of Simulations
In order to test the impact of the stellar feedback and the turbulent driving, we ran three groups of simulations. The list of the simulations is available in Table 1. Simulations within the group noturb have no turbulent driving and enable to test the efficiency of stellar Figure 2. Evolution of the total stellar mass in the simulations. The total mass is compared to the stellar mass produced if the SFR was constant and matching the SK law (dotted lines). With only the stellar feedback quenching the star formation, the star formation rate matches the Kennicutt law only for one simulation with Σ0,gas = 12.9 M · pc −2 , slightly higher than the Milky Way. For higher column density, however, the SFR is well above the observed values. Adding the turbulent driving helps to reduce the SFR.
NOTURB Σ 0,gas = 19.4 [M .pc −2 ] Σ 0,gas = 38.7 [M .pc −2 ] Σ 0,gas = 77.4 [M .pc −2 ] Σ 0,gas = 155 [M .pc −2 ]
feedbacks as star formation regulators. In the group turb2.5 the mean power injected P inj scales as Σ 2.5 0,gas . The turb3.8 has a stronger injection of turbulent energy, which scales as Σ 3.8 0,gas , very close to P LS , the expected energy injected at large scales estimated in the section 2.3 (see Figure 3b). Simulations in the noturb group have no stellar feedback.
PURE STELLAR FEEDBACK SIMULATIONS
In this section we study the SFR when only stellar feedback regulates star formation (without additional turbulent driving, group noturb). Figure 1 features edge-on and face-on column density maps of the simulations. In noturb simulations, the gas tends to form clumpy structures. Ejection of gas out of the disk plane due to supernovae explosions is clearly visible in the simulations with a high initial gas column density. Figure 2 shows the evolution of the total sink mass during the simulation for several initial column density going from Σ gas,0 = 12.9 M · pc −2 to Σ gas,0 = 155 M · pc −2 . The dotted lines correspond to the expected stellar mass growth if the SFR was constant and scaled as in the SK law. For Σ gas,0 = 12.9 M · pc −2 (corresponding to a galaxy slightly heavier than the Milky way) the SFR is close to the observed one for similar galaxies. That means that for such galaxies, the feedback is strong enough to regulate the star formation rate. This is not true in the inner regions where the column density is higher and where the bar plays a considerable role in triggering and/or quenching star formation (Emsellem et al. 2015), and in the outer regions without stars, but these regions represent a small fraction of the total mass of the galaxy. However, the stellar mass growth is considerably faster than expected in heavier galaxies, with SFR that can overpass the observation by more than one order of magnitude. Interestingly, the SFR also follows a star formation law Σ SFR ∝ Σ N gas (see Figure 3a), but with an index N = 2.5, which is much steeper than the N = 1.4 determined by Kennicutt. This is unlikely to be due to an underestimation of the stellar feedback. First, all the main processes that may quench the star formation are included in the simulation, except stellar winds. Similar simulations with stellar winds shows that their effect on star formation are not completely negligible but modify it only by a factor of two (Gatto et al. 2017), and thus cannot explain the discrepancies we observe. Second, our FUV prescription (uniform heating proportional to the SFR) overestimates the heating because both absorption and the propagation delay are not well taken into account. Third, additional feedback effects strong enough to reduce star formation to the expected level for Σ gas,0 > 25 M · pc −2 would probably generate a too weak SFR for simulations with Σ gas,0 < 20 M · pc −2 which are already close to the observed SFR. Finally, Figure 3b shows that the expected turbulent power from stellar feedback is well below what is needed to quench star formation efficiently for highredshift galaxies. The inefficiency of stellar feedback to quench star formation in gas-rich galaxies suggests that another phenomenon is likely at play. Power from large scales (lower limit) Power from supernovae (upper limit) TURB3.8 (b) Figure 3. (a): Averaged surfacic SFR as a function of the initial column density. The SFR in computed at each step and averaged over a period of 40 Myr. With pure stellar feedback the star formation law have an index of 2.5, and thus star formation is quenched enough only for the galaxies with moderate column density. With soft (Pinj ∝ Σ 2.5 0,gas ) and strong Pinj ∝ Σ 3.8 0,gas ) turbulent driving the obtained star formation is closer to the SK law, and even very close for the strong injection (with an index of 1.5). (b): Injected power. The dotted orange line is fitted from our model turb3.8 and is a power law of index 3.8 (see Table 1). 
The blue and red filled lines are, respectively, an estimated lower bound for the turbulent power injected by large-scale motions (PLS) and an estimated upper bound for the power from the SNe converted into turbulence (PSN). The shaded regions indicate a reasonable range for these values. They are computed as explained in the appendix 5.
EFFECTS OF TURBULENCE INJECTION
In the previous section we have shown that a pure stellar feedback was not strong enough to quench star formation efficiently in galaxies with high column density. Figure 2 features the mass accreted by the sinks for several values of the initial gas column density Σ 0,gas with a turbulent forcing (with dominant solenoidal modes). We tested two scalings for the injected energy, P inj ∝ Σ 2.5 and P inj ∝ Σ 3.8 . In both sets of simulations, the stellar mass has been reduced from the pure feedback model, and more powerful driving is more efficient at reducing star formation. The turb3.8 group has stellar mass curve compatible with a SFR matching the SK law.
Indeed, in Figure 3a the star formation law derived from this group has an index N = 1.5, very close to the N = 1.4 of the SK relation. Therefore, large-scale turbulent driving enables to reproduce a formation law close to the SK law when pure stellar feedback cannot.
Turbulent driving has a considerable influence on the shape of the galactic disk, as can be seen in Figure 1 representing the face-on and edge-on column density map of gas with and without turbulent driving. Pure feedback simulations show a lot of small-scale structures and clumps, and a lot of gas is blasted out of the disk plane by SNe. When turbulent driving is applied, the gas tends to organize within huge filaments, with fewer and bigger clumps. A significant bulk motion is triggered. The effects of turbulent driving are also clearly visible on the density probability distribution function (PDF) and on the density profile in Figure 6, in the Appendix. When applied, turbulent driving increases the fraction of gas within low-density regions and can move the position of the disk plane. In all cases the scale height of the disk increases for higher value of the column density as the strength of stellar feedback or turbulent driving also increase, but a disk structure is still clearly apparent. More energetic turbulent driving (or 3D turbulent driving) completely destroys the disks, which sets a limit on the turbulent energy that can be injected. The driving being bidimensional and parallel to the galactic plane, it generates strongly anisotropic velocity dispersion (Figure 5, in the Appendix). The effect increases with the column density. For high-z galaxies, the velocity dispersion alongside the galactic plane σ 2D = σ 2
x + σ 2 y / √ 2 is 10 times higher than the vertical velocity dispersion σ z . By comparison, the velocity dispersion in pure feedback simulations is almost isotropic. Figure 4. Stellar mass, with and without feedback and turbulence. Feedback and turbulence are needed to quench star formation efficiently.
IS STELLAR FEEDBACK NEEDED AT ALL?
Previous studies Renaud et al. 2012;Krumholz et al. 2018;Hopkins et al. 2011) suggest that both large-scale turbulence and stellar feedback are needed to match observations. Block et al. (2010) argue that stellar feedback is crucial to inject energy back to large scales. With our setup, we can carry out a simple experiment to see if stellar feedback is necessary to quench star formation. To investigate this, we rerun two simulations of the turb3.8 group, namely those with n 0 = 1.5, f rms = 2×10 4 and n 0 = 6, f rms = 2×10 5 , with stellar feedback off (we switch off H II regions and SNe, and FUV heating is kept constant at solar neighborhood level), so that only the turbulent driving quenches star formation. On Figure 4, we can see that in such a configuration the SFR is higher than the one given by the Kennicutt law. For low gas column density, it is even higher than the one that we obtain with stellar feedback only. Thus, it appears that stellar feedback and largescale turbulence are complementary to quench star formation, and that the relative importance of stellar feedback diminishes as the gas column density increase. This result is in good agreement with the conclusion reached from global galactic simulations ).
CONCLUSION
We have presented simulations of kiloparsec cube regions of galaxies with and without stellar feedback and with and without turbulent driving (Table 1, figures 1,4). The simulated galaxies have a gas column density between 12.9 and 155 M · pc −2 . We reported the SFR in these simulations as function of the gas column density (Figure 2) and compared the obtained star formation law with the SK law (Figure 3a). Then we compared the power injected by the turbulent driving needed to reproduce the SK law with estimates of the turbulent power released by large-scale motions and stellar feedback (Figure 3b). The effect of the turbulent driving on the velocity dispersion ( Figure 5) and the distribution of the gas (Figure 1 and 6) were also studied. Our main findings are as follows.
1. Stellar feedback is able to explain the averaged SFR in Milky Way-like galaxies.
2. In high-redshift galaxies with high gas column densities, stellar feedback alone is too weak to quench star formation to a level consistent with the SK law: the obtained star formation law for the studied range of gas column densities is too steep compared to the SK law.
3. The addition of a mainly solenoidal large-scale bidimensional turbulent driving with a power injection P inj ∝ Σ 3.8 reduces considerably the SFR. The star formation obtained has an index N = 1.5, which is close to the observed SK relation.
4. The injected power is consistent with the power needed to maintain the disk at marginal stability (with a Toomre Q ≈ 1), which scales as P LS ∝ Σ 4 .
5. The resulting velocity dispersion is strongly anisotropic. The velocity dispersion parallel to the disk plane σ 2D can be up to 10 times higher than the vertical velocity σ z .
6. Stellar feedback remains necessary, but its importance decreases as the gas column density increases.
Large-scale turbulent driving is therefore necessary when studying star formation in kpc-sized regions of galaxies, especially when the gas fraction is high. A key question that arises is what is the exact nature and origin of the turbulence that needs to be injected.
We thank the referee for their comments that helped improve the article, and our colleagues for insightful discussions. This work was granted access to HPC resources from the TGCC on the Joliot Curie supercomputer under the allocation GENCI A0070407023. (Teyssier 2002) The section 2.4 provides an estimation on how the power injected via turbulence scales with column density. We can go further and estimate what is the absolute value of power injected, and compare it to the value used for the turb3.8 group of simulation that best reproduce the SK law and to the power injected by stellar feedback (Figure 3b). To get a relevant value, we must take into account the stellar contribution to the Toomre stability criterion. The formula for the Toomre parameter when both the gas and the star fluid are near instability is rather complicated, but the following equation is a acceptable approximation (Wang & Silk 1994;Romeo & Wiegert 2011;Romeo & Falstad 2013; ?)
Software: Ramses
1 Q = 1 Q g + 1 Q (A1) with Q g = σ g κ πΣ g G and Q = σ κ πΣ G (A2)
The stability criterion is still Q ≈ 1. For high-redshift galaxies, Σ g ≈ Σ (Daddi et al. 2010;Genzel et al. 2010) and σ g ≈ σ (as reported by Elmegreen & Elmegreen (2006) with measurement based on the thickness of edge-on stellar disks). In z = 0 Milky Way-like galaxies, Σ g ≈ 0.1 Σ (de Blok et al. 2008) and σ g ≈ 0.1 σ (Falcón- Barroso et al. 2017;Hennebelle & Falgarone 2012, and references therein). In both cases, Q g ≈ Q and then
$$Q_g \approx 2 \qquad \mathrm{(A3)}$$
Using equations 6 and A2 we get
$$P_{LS} = \Sigma_g L^2 \cdot \frac{2\sigma_g^3}{L} = \frac{2 L Q_g^3 \pi^3 G^3}{\kappa^3}\,\Sigma_g^4 \qquad \mathrm{(A4)}$$
where we take l = L/2 as the typical injection scale (see Section 2.3), with L = 1 kpc the length of one side of the box. We take the solar neighborhood value for the epicyclic frequency κ:
$$\kappa \approx \sqrt{2}\,\Omega \approx \sqrt{2}\,\frac{v}{R} \qquad \mathrm{(A5)}$$
with v = 220 km · s⁻¹ and R = 8 kpc. As a result,
$$P_{LS} \approx 4.3 \times 10^{29} \left(\frac{\Sigma_g}{10\ M_\odot \cdot \mathrm{pc}^{-2}}\right)^{4} \mathrm{W} \qquad \mathrm{(A6)}$$
This value is probably a lower bound, because the velocity dispersions reported in observations are usually derived under the assumption of isotropy, whereas the velocity dispersion at the scales we consider is dominated by the 2D velocity dispersion within the disk (Figure 5). The shaded blue region in Figure 3b shows the range of values of P_LS if this underestimation amounts to a factor of one to two. Figure 3b emphasizes another important fact: it is very unlikely that our turbulent driving mimics the effect of stellar feedback-driven turbulence. Indeed, the energy injected in the form of turbulence by stellar feedback scales as the SFR, that is, P_feedback ∝ Σ_SFR ∝ Σ_g^1.4, which is not compatible with the relation P_inj ∝ Σ_g^3.8 needed to reproduce the SK law. In Figure 3b, we illustrate this with an estimate of the energy injected by the dominant feedback mechanism, SNe. There is approximately one SN for every 100 M⊙ of stellar mass created, and it releases 10^51 erg into the interstellar medium. Iffrig & Hennebelle (2015) and Martizzi et al. (2016) have shown that at these scales, only a fraction of a few percent of this energy is converted into turbulence. We retain values between 1% and 5% as reasonable (red shaded region in Figure 3b). The upper bound for the turbulent power injected by the SNe is then
$$P_{SN} \approx 4.0 \times 10^{30} \left(\frac{\Sigma_g}{10\ M_\odot \cdot \mathrm{pc}^{-2}}\right)^{1.4} \mathrm{W} \qquad \mathrm{(A7)}$$
This is clearly not sufficient for high-redshift galaxies, but it dominates over the power P_LS estimated in Equation A4 for Milky Way-like galaxies. This is consistent with our result that stellar feedback alone is sufficient in such galaxies.
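For concreteness, the two estimates above can be evaluated numerically. The following Python sketch is our own illustration (function names and the 5% SN efficiency baseline are our choices, not part of the simulation pipeline); it evaluates Equations A4 to A7 in SI units:

```python
import numpy as np

G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
PC = 3.086e16      # parsec [m]
MSUN = 1.989e30    # solar mass [kg]

def p_large_scale(sigma_gas, Q_g=2.0, L_pc=1000.0, v_kms=220.0, R_kpc=8.0):
    """P_LS of Eq. (A4): power needed to keep the disk at marginal stability.
    sigma_gas is the gas column density in Msun / pc^2."""
    sigma = sigma_gas * MSUN / PC**2                           # [kg m^-2]
    L = L_pc * PC                                              # box size [m]
    kappa = np.sqrt(2.0) * v_kms * 1e3 / (R_kpc * 1e3 * PC)    # Eq. (A5) [s^-1]
    return 2.0 * L * Q_g**3 * np.pi**3 * G**3 * sigma**4 / kappa**3  # [W]

def p_supernovae(sigma_gas, efficiency=0.05):
    """SN-driven turbulent power, Eq. (A7) scaling; 4.0e30 W at
    10 Msun / pc^2 corresponds to the 5% efficiency upper bound."""
    return 4.0e30 * (efficiency / 0.05) * (sigma_gas / 10.0)**1.4    # [W]

print(f"P_LS = {p_large_scale(10.0):.1e} W")   # ~4.3e29 W, matching Eq. (A6)
print(f"P_SN = {p_supernovae(10.0):.1e} W")    # 4.0e30 W
```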
B. EFFECT OF TURBULENCE ON DENSITY DISTRIBUTION
Figure 6 gives more insight into the effects of the turbulence on the density distribution. The density profiles show that all simulations feature a stratified gas distribution, and that the profile is less steep when the gas column density or the turbulent forcing increases. Strong turbulence (for Σ_{0,gas} = 155 M⊙ · pc⁻²) can trigger large bulk motions that can move the position of the disk plane; stronger turbulence can even disrupt the disk. The turbulent driving redistributes the gas and widens the gas PDF, increasing the fraction of gas in low-density regions and diminishing the gas available for star formation.
The simulations without driving convert a substantial fraction of the gas into stars because of the high SFR. At 60 Myr, 39% and 58%, respectively, of the total initial mass of gas in the box had been accreted by the sinks for the Σ_{0,gas} = 77.4 M⊙ · pc⁻² and Σ_{0,gas} = 155 M⊙ · pc⁻² simulations without driving. This mass is taken from the densest regions of the box, and as a consequence less dense gas remains in the box. By contrast, for the same simulations with driving (group turb3.8), only about 6% of the gas had been accreted at 60 Myr. This explains why the simulations without driving have less dense gas than the corresponding simulations with driving.

Figure 6. Averaged density profile (top) and density volumetric probability distribution function (PDF, bottom). All figures are made from snapshots taken at t ≈ 60 Myr. There is less dense gas in the simulations with high initial column density (Σ_{0,gas} ≥ 77 M⊙ · pc⁻²) without driving because most of it has been accreted by the sinks.
Figure 1. Column density maps, edge-on (top panels) and face-on (bottom panels), for Σ_{0,gas} = 19.4, 38.7, 77.4, and 155 M⊙ · pc⁻². All snapshots are taken around 60 Myr. The simulations without turbulence are dominated by the effects of the supernovae, while turbulent driving creates filamentary structures.
Figure 5. Velocity dispersion measured in the simulations, where σ_2D = √(σ_x² + σ_y²)/√2. The simulations with high 2D turbulent driving show a strong anisotropy, while simulations without driving are almost isotropic.
Table 1. List of Simulations.

Group      n0 [cm^-3]   frms        Σ0,gas [M⊙ · pc^-2]   Pinj [W]
noturb     1            0           12.9                  0
           1.5          0           19.4                  0
           2            0           25.8                  0
           3            0           38.7                  0
           4            0           51.6                  0
           6            0           77.4                  0
           12           0           155                   0
turb2.5    1.5          2.5 × 10^4  19.4                  1.7 ± 0.7 × 10^31
           3            6.0 × 10^4  38.7                  9.1 ± 3.8 × 10^31
           6            1.0 × 10^5  77.4                  5.6 ± 3.6 × 10^32
           12           2.0 × 10^5  155                   3.1 ± 2.2 × 10^33
turb3.8    1.5          2.0 × 10^4  19.4                  1.1 ± 0.5 × 10^31
           3            8.0 × 10^4  38.7                  1.7 ± 1.1 × 10^32
           6            2.0 × 10^5  77.4                  1.6 ± 1.3 × 10^33
           12           1.0 × 10^6  155                   3.4 ± 3.0 × 10^34
nofeed     1.5          2.0 × 10^4  19.4                  1.1 ± 0.5 × 10^31
           6            —           77.4                  —
[Figure residue: panels showing the total mass of stars formed (10⁶ M⊙) versus time (0–250 Myr) for the simulations without driving (Σ_{0,gas} = 12.9 to 155 M⊙ · pc⁻²) and for the TURB2.5 and TURB3.8 groups, each compared with the Σ_SFR ∝ Σ_gas^1.4 scaling.]

[Figure residue: panels showing the total mass of stars formed (10⁶ M⊙) versus time (0–250 Myr) for Σ_{0,gas} = 19.4 and 77.4 M⊙ · pc⁻², comparing the Driving + Feedback, Driving only, and Feedback only runs against the Σ_SFR ∝ Σ_gas^1.4 scaling.]
[Figure 6 residue: density profiles ρ [H/cc] versus z [pc] (top) and volumetric density PDFs P_vol versus log(ρ) [H/cc] (bottom) for Σ_{0,gas} = 19.4, 38.7, 77.4, and 155 M⊙ · pc⁻², comparing No driving, TURB2.5, and TURB3.8; see the Figure 6 caption above.]
REFERENCES

Bleuler, A., & Teyssier, R. 2014, MNRAS, 445, 4015
Block, D. L., Puerari, I., Elmegreen, B. G., & Bournaud, F. 2010, ApJL, 718, L1
Bournaud, F. 2014, ASP Conference Series, 486, 101
Bournaud, F., Elmegreen, B. G., Teyssier, R., Block, D. L., & Puerari, I. 2010, MNRAS, 409, 1088
Colling, C., Hennebelle, P., Geen, S., Iffrig, O., & Bournaud, F. 2018, A&A, 620, A21
Daddi, E., Bournaud, F., Walter, F., et al. 2010, ApJ, 713, 686
de Blok, W. J. G., Walter, F., Brinks, E., et al. 2008, AJ, 136, 2648
Dekel, A., Sari, R., & Ceverino, D. 2009, ApJ, 703, 785
Dubois, Y., & Teyssier, R. 2008, A&A, 477, 79
Elmegreen, B. G., & Elmegreen, D. M. 2006, ApJ, 650, 644
Emsellem, E., Renaud, F., Bournaud, F., et al. 2015, MNRAS, 446, 2468
Eswaran, V., & Pope, S. B. 1988, Computers and Fluids, 16, 257
Falcón-Barroso, J., Lyubenova, M., van de Ven, G., et al. 2017, A&A, 597, A48
Federrath, C., Roman-Duval, J., Klessen, R. S., Schmidt, W., & Mac Low, M. M. 2010, A&A, 512, A81
Fromang, S., Hennebelle, P., & Teyssier, R. 2006, A&A, 457, 371
Gatto, A., Walch, S., Naab, T., et al. 2017, MNRAS, 466, 1903
Genzel, R., Burkert, A., Bouché, N., et al. 2008, ApJ, 687, 59
Genzel, R., Tacconi, L. J., Gracia-Carpio, J., et al. 2010, MNRAS, 407, 2091
Hennebelle, P., & Falgarone, E. 2012, A&ARv, 20, 55
Hopkins, P. F., Quataert, E., & Murray, N. 2011, MNRAS, 417, 950
Iffrig, O., & Hennebelle, P. 2015, A&A, 576, A95
Iffrig, O., & Hennebelle, P. 2017, A&A, 604, A70
Jin, K., Salim, D. M., Federrath, C., et al. 2017, MNRAS, 469, 383
Kennicutt, R. C., & Evans, N. J. 2012, ARA&A, 50, 531
Kennicutt, R. C., Jr. 1998, ApJ, 498, 541
Kim, C.-G., & Ostriker, E. C. 2017, ApJ, 846, 133
Kolmogorov, A. 1941, Akademiia Nauk SSSR Doklady, 30, 301
Krumholz, M. R., & Burkhart, B. 2016, MNRAS, 458, 1671
Krumholz, M. R., Burkhart, B., Forbes, J. C., & Crocker, R. M. 2018, MNRAS, 477, 2716
Martizzi, D., Fielding, D., Faucher-Giguère, C.-A., & Quataert, E. 2016, MNRAS, 459, 2311
Meidt, S. E., Glover, S. C. O., Kruijssen, J. M. D., et al. 2020, ApJ, 892, 73
Ostriker, E. C., McKee, C. F., & Leroy, A. K. 2010, ApJ, 721, 975
Padoan, P., Pan, L., Haugbølle, T., & Nordlund, Å. 2016, ApJ, 822, 11
Renaud, F., Kraljic, K., & Bournaud, F. 2012, ApJL, 760, L16
Romeo, A. B., & Falstad, N. 2013, MNRAS, 433, 1389
Romeo, A. B., & Wiegert, J. 2011, MNRAS, 416, 1191
Schmidt, W., Federrath, C., Hupp, M., Kern, S., & Niemeyer, J. C. 2009, A&A, 494, 127
Schmidt, W., Hillebrandt, W., & Niemeyer, J. C. 2006, Computers & Fluids, 35, 353
Teyssier, R. 2002, A&A, 385, 337
Walch, S., Girichidis, P., Naab, T., et al. 2015, MNRAS, 454, 238
Wang, B., & Silk, J. 1994, ApJ, 427, 759
Probabilistic Fair Clustering*

Seyed A. Esmaeili, Brian Brubach, Leonidas Tsepenekas, John P. Dickerson
Department of Computer Science, University of Maryland, College Park

arXiv:2006.10916

Abstract

In clustering problems, a central decision-maker is given a complete metric graph over vertices and must provide a clustering of vertices that minimizes some objective function. In fair clustering problems, vertices are endowed with a color (e.g., membership in a group), and the features of a valid clustering might also include the representation of colors in that clustering. Prior work in fair clustering assumes complete knowledge of group membership. In this paper, we generalize prior work by assuming imperfect knowledge of group membership through probabilistic assignments. We present clustering algorithms in this more general setting with approximation ratio guarantees. We also address the problem of "metric membership," where different groups have a notion of order and distance. Experiments are conducted using our proposed algorithms as well as baselines to validate our approach and also surface nuanced concerns when group membership is not known deterministically.

*An earlier version of this paper was published in NeurIPS 2020. This version is updated to include a correction to the solution for the multi-color case under the large cluster assumption from polynomial time to fixed-parameter tractable.
1 Introduction
Machine-learning-based decisioning systems are increasingly used in highstakes situations, many of which directly or indirectly impact society. Examples abound of automated decisioning systems resulting in, arguably, morally repugnant outcomes: hiring algorithms may encode the biases of human reviewers' training data [14], advertising systems may discriminate based on race and inferred gender in harmful ways [43], recidivism risk assessment software may bias its risk assessment improperly by race [6], and healthcare resource allocation systems may be biased against a specific race [35]. A myriad of examples such as these and others motivate the growing body of research into defining, measuring, and (partially) mitigating concerns of fairness and bias in machine learning. Different metrics of algorithmic fairness have been proposed, drawing on prior legal rulings and philosophical concepts; [39] give a recent overview of sources of bias and fairness as presently defined by the machine learning community.
The earliest work in this space focused on fairness in supervised learning [24,36] as well as online learning [29]; more recently, the literature has begun expanding into fairness in unsupervised learning [17]. In this work, we address a novel model of fairness in clustering-a fundamental unsupervised learning problem. Here, we are given a complete metric graph where each vertex also has a color associated with it, and we are concerned with finding a clustering that takes both the metric graph and vertex colors into account. Most of the work in this area (e.g., [3,12,17]) has defined a fair clustering to be one that minimizes the cost function subject to the constraint that each cluster satisfies a lower and an upper bound on the percentage of each color it contains-a form of approximate demographic parity or its closely-related cousin, the p%-rule [13]. We relax the assumption that a vertex's color assignment is known deterministically; rather, for each vertex, we assume only knowledge of a distribution over colors.
Our proposed model addresses many real-world use cases. [3] discuss clustering news articles such that no political viewpoint-assumed to be known deterministically-dominates any cluster. Here, the color membership attribute-i.e., the political viewpoint espoused by a news article-would not be provided directly but could be predicted with some probability of error using other available features. [9] discuss the case of supervised learning when class labels are not known with certainty (e.g., due to noisy crowdsourcing or the use of a predictive model). Our model addresses such motivating applications in the unsupervised learning setting, by defining a fair cluster to be one where the color proportions satisfy the upper and lower bound constraints in expectation. Hence, it captures standard deterministic fair clustering as a special case.
Outline & Contributions. We begin (§2) with an overview of related research in general clustering, fairness in general machine learning, and recent work addressing fairness in unsupervised learning. Next (§3), we define two novel models of clustering when only probabilistic membership is available: the first assumes that colors are unordered, and the second embeds colors into a metric space, thus endowing them with a notion of order and distance. This latter setting addresses use cases where, e.g., we may want to cluster according to membership in classes such as age or income, whose values are naturally ordered. Following this (§4), we present two approximation algorithms with theoretical guarantees in the settings above. We also briefly address the (easier but often realistic) "large cluster" setting, where it is assumed that the optimal solution does not contain pathologically small clusters. Finally (§5), we verify our proposed approaches on four real-world datasets. All proofs are deferred to the appendix due to the page limit.
2 Related Work
Classical forms of the metric clustering problems k-center, k-median, and k-means are well studied within the context of unsupervised learning and operations research. While all of these problems are NP-hard, there is a long line of work on approximating them, and heuristics are commonly used in many practical applications. This vast area is surveyed by [1]; we focus on approximation algorithms here. For k-center, there are multiple approaches to achieve a 2-approximation, and this is the best possible unless P = NP [21,25,26]. Searches for the best approximations to k-median and k-means are ongoing. For k-median there is a (2.675 + ε)-approximation with a running time of n^{O((1/ε) log(1/ε))} [15], and for k-means, a 6.357-approximation is the best known [4].
The study of approximation algorithms that achieve demographic fairness for metric clustering was initiated by [17]. They considered a variant of k-center and k-median wherein each point is assigned one of two colors and the color of each point is known. Follow-up work extended the problem setting to the k-means objective, multiple colors, and the possibility of a point being assigned multiple colors (i.e., modeling intersecting demographic groups such as gender and race combined) [10, 11, 12, 28]. Other work considers the one-sided problem of preventing over-representation of any one group in each cluster, rather than strictly enforcing that clusters maintain proportionality of all groups [3].
In all of the aforementioned cases, the colors (demographic groups) assigned to the points are known a priori. By contrast, we consider a generalization where points are assigned a distribution over colors; this subsumes settings where each point is assigned a single deterministic color. At the same time, our setting is distinct from the setting where points are assigned multiple colors, in that we assume each point has a single true color. In the area of supervised learning, the work of [9] addressed a similar model of uncertain group membership. Other recent work explores unobserved protected classes from the perspective of assessment [30]. However, no prior work has addressed this model of uncertainty for metric clustering problems in unsupervised learning.
3 Preliminaries and Problem Definition
Let C be the set of points in a metric space with distance function d : C × C → ℝ≥0. The distance between a point v and a set S is defined as d(v, S) = min_{j∈S} d(v, j). In a k-clustering, an objective function L^k(C) is given, a set S ⊆ C of at most k points must be chosen as the set of centers, and each point in C must be assigned to a center in S through an assignment function φ : C → S, forming a k-partition of the original set: C_1, . . . , C_k. The optimal solution is defined as a set of centers and an assignment function that minimizes the objective L^k(C). The well-known k-center, k-median, and k-means problems can all be stated as the following problem:
$$\min_{S:|S|\le k,\ \phi} L^k_p(C) = \min_{S:|S|\le k,\ \phi} \Big(\sum_{v\in C} d^p(v,\phi(v))\Big)^{1/p} \qquad (1)$$
where p equals ∞, 1, and 2 for the k-center, k-median, and k-means problems, respectively. For such problems, the optimal assignment for a point v is the nearest point in the chosen set of centers S. However, in the presence of additional constraints, such as a lower bound on the cluster size [2] or an upper bound [5,18,31], this property no longer holds. This is also true for fair clustering.
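For illustration, the objective in Equation (1) is straightforward to evaluate once a set of centers and an assignment are fixed. The following short Python sketch (our own; the array conventions are illustrative) computes L^k_p for the three special cases:

```python
import numpy as np

def clustering_cost(points, centers, assignment, p):
    """L_p^k objective of Eq. (1): (sum_v d^p(v, phi(v)))^(1/p).
    points: (n, dim) array, centers: (k, dim) array,
    assignment: length-n array of center indices.
    p = 1 gives k-median, p = 2 k-means (in the (sum d^2)^(1/2)
    form of Eq. (1)), and p = np.inf gives k-center."""
    d = np.linalg.norm(points - centers[assignment], axis=1)
    if np.isinf(p):
        return float(d.max())
    return float((d ** p).sum() ** (1.0 / p))
```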
To formulate the fair clustering problem, a set of colors H = {h_1, . . . , h_ℓ, . . . , h_m} is introduced, and each point v is mapped to a color through a given function χ : C → H. Previous work in fair clustering [3,11,12,17] adds to the objective function of (1) the following proportional representation constraint:
$$\forall i \in S,\ \forall h_\ell \in H:\quad l_{h_\ell}\,|C_i| \le |C_{i,h_\ell}| \le u_{h_\ell}\,|C_i| \qquad (2)$$
where C_{i,h_ℓ} is the set of points in cluster i having color h_ℓ. The bounds l_{h_ℓ}, u_{h_ℓ} ∈ (0, 1) are given lower and upper bounds on the proportion of a given color in each cluster, respectively.
In this work we generalize the problem by assuming that the color of each point is not known deterministically but rather probabilistically. We also address the case where the colors lie in a 1-dimensional Euclidean metric space.
3.1 Probabilistic Fair Clustering
In probabilistic fair clustering, we generalize the problem by assuming that the color of each point is known only probabilistically. That is, each point v has a given value p_v^{h_ℓ} for each h_ℓ ∈ H, representing the probability that point v has color h_ℓ, with Σ_{h_ℓ∈H} p_v^{h_ℓ} = 1. The constraints are then modified to require the expected color of each cluster to fall within the given lower and upper bounds. This leads to the following optimization problem:
$$\min_{S:|S|\le k,\ \phi} L^k_p(C) \qquad (3a)$$
$$\text{s.t.}\quad \forall i \in S,\ \forall h_\ell \in H:\quad l_{h_\ell}\,|\phi^{-1}(i)| \le \sum_{v\in\phi^{-1}(i)} p_v^{h_\ell} \le u_{h_\ell}\,|\phi^{-1}(i)| \qquad (3b)$$
where φ⁻¹(i) refers to the set of points assigned to center i, in other words C_i. Following [11], we define a γ-violating solution to be one for which, for all i ∈ S:
$$l_{h_\ell}\,|\phi^{-1}(i)| - \gamma \le \sum_{v\in\phi^{-1}(i)} p_v^{h_\ell} \le u_{h_\ell}\,|\phi^{-1}(i)| + \gamma \qquad (4)$$
This notion effectively captures the amount γ by which a given solution violates the fairness constraints.
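For illustration, the violation γ of a concrete clustering can be measured directly from the marginal color probabilities; a minimal Python sketch (ours, with illustrative names) computes the smallest γ for which inequality (4) holds in every cluster:

```python
import numpy as np

def max_violation(assignment, probs, lowers, uppers):
    """Maximum additive violation gamma of constraints (3b)/(4).
    assignment: length-n array of cluster indices; probs: (n, m) array of
    color probabilities; lowers/uppers: length-m arrays of bounds."""
    gamma = 0.0
    for i in np.unique(assignment):
        members = probs[assignment == i]          # points assigned to cluster i
        size = len(members)
        expected = members.sum(axis=0)            # expected amount of each color
        over = np.max(expected - uppers * size)   # excess above u_h * |C_i|
        under = np.max(lowers * size - expected)  # shortfall below l_h * |C_i|
        gamma = max(gamma, over, under)
    return max(gamma, 0.0)
```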
3.2 Metric Membership Fair Clustering
Representing a point's (individual's) membership using colors may be sufficient for binary or other unordered categorical variables. However, this may leave information "on the table" when a category is, for example, income or age, since colors have no inherent sense of order or distance. For this type of attribute, the membership can be characterized by a 1-dimensional Euclidean space: without loss of generality, we can represent the set of all possible memberships as the set of consecutive integers from 0 to some R > 0, where R is the maximum value that can be encountered.
Hence, let H_R = {0, . . . , r, . . . , R}, where r is a non-negative integer. Each point v has associated with it a value r_v ∈ H_R. In this problem we require the average value of each cluster to lie within a given interval. Hence:
$$\min_{S:|S|\le k,\ \phi} L^k_p(C) \qquad (5a)$$
$$\text{s.t.}\quad \forall i \in S:\quad l\,|\phi^{-1}(i)| \le \sum_{v\in\phi^{-1}(i)} r_v \le u\,|\phi^{-1}(i)| \qquad (5b)$$
where l and u are, respectively, the lower and upper bounds imposed on each cluster. Similar to Section 3.1, we define a γ-violating solution to be one for which, for all i ∈ S:
$$l\,|\phi^{-1}(i)| - \gamma \le \sum_{v\in\phi^{-1}(i)} r_v \le u\,|\phi^{-1}(i)| + \gamma \qquad (6)$$
4 Approximation Algorithms and Theoretical Guarantees

4.1 Algorithms for the Two Color and Metric Membership Case
Our algorithm follows the two-step method of [11], although we differ in the LP rounding scheme. Let PFC(k, p) denote the probabilistic fair clustering problem. The color-blind clustering problem, where we drop the fairness constraints, is denoted by Cluster(k, p). Further, define the fair assignment problem FA-PFC(S, p) as the problem where we are given a fixed set of centers S and the objective is to find an assignment φ minimizing L^k_p(C) and satisfying the fairness constraints 3b for probabilistic fair clustering or 5b for metric membership. We prove the following (similar to Theorem 2 in [11]):

Theorem 4.1. Given an α-approximation algorithm for Cluster(k, p) and a γ-violating algorithm for FA-PFC(S, p), a solution with approximation ratio α + 2 and constraint violation at most γ can be achieved for PFC(k, p).
Proof. See appendix A.1
An identical theorem and proof follow for the metric membership problem.
Step 1, Color-Blind Approximation Algorithm:
At this step an ordinary (color-blind) α-approximation algorithm is used to find the cluster centers. For example, the Gonzalez algorithm [22] can be used for the k-center problem, or the algorithm of [15] for k-median. This step results in a set S of cluster centers. Since this step does not take fairness into account, the resulting solution does not necessarily satisfy constraints 3b for probabilistic fair clustering or 5b for metric membership.
Step 2, Fair Assignment Problem:
In this step, a linear program (LP) is set up to satisfy the fairness constraints. The variables of the LP are x_ij, denoting the (possibly fractional) assignment of point j to center i in S. Specifically, the LP is:

$$\min \sum_{j\in C,\, i\in S} d^p(i,j)\, x_{ij} \qquad (7a)$$
$$\text{s.t.}\quad \forall i \in S \text{ and } \forall h_\ell \in H: \qquad (7b)$$
$$l_{h_\ell} \sum_{j\in C} x_{ij} \le \sum_{j\in C} p_j^{h_\ell}\, x_{ij} \le u_{h_\ell} \sum_{j\in C} x_{ij} \qquad (7c)$$
$$\forall j \in C:\ \sum_{i\in S} x_{ij} = 1,\qquad 0 \le x_{ij} \le 1 \qquad (7d)$$
Since the LP above is a relaxation of FA-PFC(S, p), we have OPT^{LP}_{FA-PFC} ≤ OPT_{FA-PFC}. We note that for k-center there is no objective; instead we have the additional constraint x_ij = 0 if d(i, j) > w, where w is a guess of the optimal radius. Also, for k-center the optimal value is always the distance between two points; hence, through a binary search over the polynomially-sized set of distance choices we can WLOG obtain the minimum satisfying distance. Further, for the metric membership case, p_j^{h_ℓ}, l_{h_ℓ}, and u_{h_ℓ} in 7c are replaced by r_j, l, and u, respectively.
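As a sketch of this step, the LP (7) can be assembled for any off-the-shelf solver; the experiments in this paper use CPLEX, but the following illustrative Python snippet (ours) builds the same constraints for scipy.optimize.linprog:

```python
import numpy as np
from scipy.optimize import linprog

def fair_assignment_lp(dist_p, probs, lowers, uppers):
    """Solve LP (7). dist_p[j, i] holds d^p(i, j) (e.g. squared distances
    for k-means); probs[j, h] = p_j^h; lowers/uppers: per-color bounds.
    Variables x_ij are flattened row-major as x[j * k + i]."""
    n, k = dist_p.shape
    m = probs.shape[1]
    c = dist_p.flatten()                      # objective (7a)
    A_eq = np.zeros((n, n * k))               # (7d): sum_i x_ij = 1
    for j in range(n):
        A_eq[j, j * k:(j + 1) * k] = 1.0
    A_ub, b_ub = [], []
    for i in range(k):                        # (7c), two one-sided rows each
        for h in range(m):
            upper, lower = np.zeros(n * k), np.zeros(n * k)
            for j in range(n):
                upper[j * k + i] = probs[j, h] - uppers[h]  # <= 0
                lower[j * k + i] = lowers[h] - probs[j, h]  # <= 0
            A_ub += [upper, lower]
            b_ub += [0.0, 0.0]
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=A_eq, b_eq=np.ones(n), bounds=(0, 1), method="highs")
    return res.x.reshape(n, k)
```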
What remains is to round the fractional assignments x_ij resulting from solving the LP.
4.2 Rounding for the Two Color and Metric Membership Case
First we note the connection between the metric membership problem and the two color case of probabilistic fair clustering: effectively, the set H_R = {0, 1, . . . , R} is the unnormalized version of the set of probabilities {0, 1/R, 2/R, . . . , 1}.

Algorithm 1 Form Flow Network Edges for Cluster C̄_i
  A⃗_i: the points j ∈ φ⁻¹(i) in non-increasing order of p_j
  initialize array a⃗ of size |C̄_i| to zeros, and set s = 1
  put the assignment x_ij for each point j in A⃗_i into z⃗_i, following the vertex order in A⃗_i
  for q = 1 to |C̄_i| do
    repeat
      valueToAdd = min(1 − a⃗(q), z⃗_i(s))
      a⃗(q) = a⃗(q) + valueToAdd, and add edge (A⃗_i(s), q)
      z⃗_i(s) = z⃗_i(s) − valueToAdd
      if z⃗_i(s) = 0 then
        s = s + 1   {move to the next vertex}
      end if
    until a⃗(q) = 1 or s > |A⃗_i|   {until we have accumulated 1 or run out of vertices}
  end for
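A compact Python rendering of Algorithm 1 (our own re-expression; a single inner loop suffices once valueToAdd handles partially consumed vertices):

```python
def form_edges(order, z, eps=1e-12):
    """order: points of phi^-1(i) sorted by non-increasing p_j;
    z[s]: the fractional assignment x_{i, order[s]}.
    Returns edges (point, bucket q) splitting the assignments of
    cluster i into unit-mass buckets, as in Algorithm 1."""
    edges, z = [], list(z)
    s, q, acc = 0, 0, 0.0
    while s < len(order):
        take = min(1.0 - acc, z[s])       # what bucket q can still absorb
        edges.append((order[s], q))
        acc += take
        z[s] -= take
        if z[s] <= eps:                   # vertex s fully consumed
            s += 1
        if acc >= 1.0 - eps:              # bucket q accumulated one unit
            q, acc = q + 1, 0.0
    return edges
```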
Our rounding method is based on calculating a minimum-cost flow in a carefully constructed graph. For each i ∈ S, a set C̄_i with |C̄_i| = ⌈Σ_{j∈C} x_ij⌉ vertices is created. Moreover, the set of vertices assigned to cluster i, i.e. φ⁻¹(i) = {j ∈ C | x_ij > 0}, is sorted in non-increasing order of the associated value r_j and placed into the array A⃗_i. A vertex in C̄_i (except possibly the last) is connected to as many vertices of A⃗_i, in their sorted order, as needed for it to accumulate an assignment value of 1. A vertex in A⃗_i may be connected to more than one vertex in C̄_i if the first vertex in C̄_i accumulates an assignment value of 1 while some assignment still remains in that A⃗_i vertex; in this case the second vertex in C̄_i takes only what remains of the assignment. See Algorithm 1 for full details; Appendix C demonstrates an example.
We denote the set of edges that connect all points in C to points in C̄_i by E_{C,C̄_i}. Also, let V_flow = C ∪ (∪_{i∈S} C̄_i) ∪ S ∪ {t} and E_flow = E_{C,C̄_i} ∪ E_{C̄_i,S} ∪ E_{S,t}, where E_{C̄_i,S} has an edge from every vertex in C̄_i to the corresponding center i ∈ S. Finally, E_{S,t} has an edge from every vertex i in S to the sink t if ⌈Σ_{j∈C} x_ij⌉ > ⌊Σ_{j∈C} x_ij⌋. The demands, capacities, and costs of the network are:

• Demands: Each v ∈ C has demand d_v = −1 (a supply of 1), d_u = 0 for each u ∈ C̄_i, and d_i = ⌊Σ_{j∈C} x_ij⌋ for each i ∈ S. Finally, t has demand d_t = |C| − Σ_{i∈S} d_i.

• Capacities: All edge capacities are set to 1.

• Costs: All edges have cost 0, except the edges in E_{C,C̄_i}, where for all (v, u) ∈ E_{C,C̄_i} the cost is d(v, u) = d(v, i) for k-median and d(v, u) = d²(v, i) for k-means. For k-center, either setting suffices.

Figure 1: Network flow construction.
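This construction can be sketched with NetworkX, whose min_cost_flow routine returns an integral flow when all demands and capacities are integers (distances are scaled to integer weights below; the names are ours):

```python
import math
import networkx as nx

def build_flow_network(x, dist, edges_per_center, scale=10**6):
    """x[j, i]: fractional LP assignments; dist[j, i]: d(j, i);
    edges_per_center[i]: (point j, bucket q) pairs from Algorithm 1."""
    G = nx.DiGraph()
    n, k = x.shape
    for j in range(n):
        G.add_node(("pt", j), demand=-1)          # each point supplies one unit
    sink_demand = n
    for i in range(k):
        d_i = math.floor(x[:, i].sum())
        G.add_node(("ctr", i), demand=d_i)
        sink_demand -= d_i
        for q in {q for _, q in edges_per_center[i]}:
            G.add_edge(("bkt", i, q), ("ctr", i), capacity=1, weight=0)
        for j, q in edges_per_center[i]:
            G.add_edge(("pt", j), ("bkt", i, q), capacity=1,
                       weight=int(scale * dist[j, i]))
        if math.ceil(x[:, i].sum()) > d_i:        # fractional total mass
            G.add_edge(("ctr", i), "t", capacity=1, weight=0)
    G.add_node("t", demand=sink_demand)
    return G

# flow = nx.min_cost_flow(G)  # integral min-cost flow -> rounded assignment
```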
Figure 1 gives an example of this construction. It is clear that the entire demand is |C| and that this is the maximum possible flow; the LP solution attains that flow. Further, since the demands and capacities are integers, an optimal integral minimum-cost flow can be found in polynomial time. If x̄_ij is the integer assignment resulting from the flow computation, then the violations are as follows:
Theorem 4.2. The number of vertices assigned to a cluster (the cluster size) is violated by at most 1, i.e., |Σ_{j∈C} x̄_ij − Σ_{j∈C} x_ij| ≤ 1. Further, for metric membership, the violation in the total value is at most 2R, i.e., |Σ_{j∈C} x̄_ij r_j − Σ_{j∈C} x_ij r_j| ≤ 2R. It follows that for the probabilistic case, the violation in the expected color is at most 2.
Proof. For a given center i, every vertex q ∈ C̄_i is assigned some vertices and contributes the amount Σ_{j∈φ⁻¹(i,q)} R_j x^q_{ij} to the entire average (expected) value of cluster i, where φ⁻¹(i, q) refers to the subset of φ⁻¹(i) assigned to q. After the rounding, Σ_{j∈φ⁻¹(i,q)} R_j x^q_{ij} becomes Σ_{j∈φ⁻¹(i,q)} R_j x̄^q_{ij}. Denote max_{j∈φ⁻¹(i,q)} R_j and min_{j∈φ⁻¹(i,q)} R_j by R^max_{q,i} and R^min_{q,i}, respectively. The following bounds the maximum violation:

$$\sum_{q=1}^{|\bar{C}_i|}\sum_{j\in\phi^{-1}(i,q)} R_j\bar{x}^q_{ij} - \sum_{q=1}^{|\bar{C}_i|}\sum_{j\in\phi^{-1}(i,q)} R_j x^q_{ij} = \sum_{q=1}^{|\bar{C}_i|}\sum_{j\in\phi^{-1}(i,q)} R_j\big(\bar{x}^q_{ij} - x^q_{ij}\big)$$
$$\le R^{\max}_{|\bar{C}_i|,i} + \sum_{q=1}^{|\bar{C}_i|-1}\big(R^{\max}_{q,i} - R^{\min}_{q,i}\big) \le R^{\max}_{|\bar{C}_i|,i} + R^{\max}_{1,i} - R^{\min}_{1,i} + \sum_{q=2}^{|\bar{C}_i|-1}\big(R^{\min}_{q-1,i} - R^{\min}_{q,i}\big)$$
$$\le R^{\max}_{|\bar{C}_i|,i} + R^{\max}_{1,i} - R^{\min}_{|\bar{C}_i|-1,i} \le 2R - 0 = 2R$$

where we invoked the fact that R^max_{q,i} ≤ R^min_{q−1,i}. By a similar argument it can be shown that the maximum drop is −2R. For the probabilistic case, simply R = 1.
Our rounding scheme results in a violation of at most 2 for the two color probabilistic case, whereas for metric membership it is 2R. The violation of 2R for the metric membership case might suggest that the rounding is too loose; we therefore show a lower bound of at least R/2 for any rounding scheme applied to the resulting solution, which makes our rounding asymptotically optimal.

Theorem 4.3. For metric membership, any rounding scheme applied to the fractional LP solution must, in the worst case, incur a violation of at least R/2.

Proof. Consider the following instance (in Figure 2) with 5 points. Points 2 and 4 are chosen as the centers, and both clusters have the same radius.
The entire set has average value (2(0) + 2(3R/4) + R)/5 = (5R/2)/5 = R/2. If the upper and lower bounds are set to u = l = R/2, then the fractional assignments for cluster 1 can be x_21 = 1, x_22 = 1, x_23 = 1/2, leading to average value (3R/4 + 0 + R/2)/(1 + 1 + 1/2) = R/2. For cluster 2 we would have x_43 = 1/2, x_44 = 1, x_45 = 1, and the average value is R(3/4 + 1/2)/(5/2) = (5R/4)/(5/2) = R/2. Only the assignments x_23 and x_43 are fractional and hence will be rounded. WLOG assume that x_23 = 1 and x_43 = 0. It follows that the change (violation) in the quantity Σ_j r_j x_ij for each cluster i will be R/2: for cluster 1, the resulting value is 3R/4 + R = 7R/4, a change of |7R/4 − 5R/4| = R/2; similarly, for cluster 2 the change is |5R/4 − 3R/4| = R/2.

Figure 2: Lower bound instance with 5 points; points 2 and 4 are the centers, and each point has its value (0, 3R/4, R, 3R/4, 0) written next to it.
4.3 Algorithms for the Multiple Color Case Under a Large Cluster Assumption

For the multi-color case, our algorithm relies on the assumption that the cluster sizes are large enough. Specifically:

Assumption 4.1. Each cluster in the optimal solution has size at least L = Ω(n^r), where r ∈ (0, 1).
We firmly believe that the above is justified in real datasets. Nonetheless, the ability to manipulate the parameter r gives us enough flexibility to capture the occurring real-life scenarios.

Theorem 4.4. Given a clustering satisfying Assumption 4.1 whose clusters are fair in expectation, if the color of every point is sampled independently according to its marginal distribution, then with high probability every cluster's realized amount of each color is within a (1 ± δ) factor of its expectation.

Proof. First, each cluster C_i has an amount of color h_ℓ equal to S^{h_ℓ}_{C_i}, with E[S^{h_ℓ}_{C_i}] = Σ_{v∈C_i} p_v^{h_ℓ}, according to Theorem B.2. Furthermore, since the cluster is valid, it follows that l_{h_ℓ}|C_i| ≤ E[S^{h_ℓ}_{C_i}] ≤ u_{h_ℓ}|C_i|. Define l_min = min_{h_ℓ∈H} {l_{h_ℓ}} > 0;
then for any δ ∈ [0, 1] by Theorem B.1 we have:
$$\Pr\big(|S^{h_\ell}_{C_i} - E[S^{h_\ell}_{C_i}]| > \delta\, E[S^{h_\ell}_{C_i}]\big) \le 2 e^{-E[S^{h_\ell}_{C_i}]\delta^2/3} \le 2\exp\Big(-\frac{\delta^2}{3}\sum_{v\in C_i} p_v^{h_\ell}\Big) \le 2\exp\Big(-\frac{\delta^2}{3} L\, l_{\min}\Big)$$
This upper bounds the failure probability for a given cluster. For the entire set we use the union bound and get:
$$\Pr\Big(\exists\, i \in \{1,\dots,k\},\ h_\ell \in H \ \text{s.t.}\ |S^{h_\ell}_{C_i} - E[S^{h_\ell}_{C_i}]| > \delta\, E[S^{h_\ell}_{C_i}]\Big) \le 2k|H|\exp\Big(-\frac{\delta^2}{3} L\, l_{\min}\Big) \le 2\frac{n}{L}|H|\exp\Big(-\frac{\delta^2}{3} L\, l_{\min}\Big) \le 2|H|\, n^{1-r}\exp\Big(-\frac{\delta^2}{3}\, l_{\min}\, n^{r}\Big)$$
It is clear that given r, δ, and l_min, there exists a constant c such that the above is bounded by 1/n^c. Therefore, the result holds with high probability.
Given Theorem 4.4, our solution essentially forms a reduction from the problem of probabilistic fair clustering PFC(k, p) to the problem of deterministic fair clustering with lower-bounded cluster sizes, which we denote by DFC_LB(k, p, L) (the color assignments are known deterministically and each cluster is constrained to have size at least L). Our algorithm (Algorithm 2) involves three steps.

Algorithm 2 Algorithm for Large Cluster PFC(k, p)
  Input: C, d, k, p, L, {(l_{h_ℓ}, u_{h_ℓ})}_{h_ℓ∈H}
  Relax the upper and lower bounds by ε: ∀h_ℓ ∈ H, l_{h_ℓ} ← l_{h_ℓ}(1 − ε) and u_{h_ℓ} ← u_{h_ℓ}(1 + ε)
  For each point v ∈ C, sample its color independently according to p_v^{h_ℓ}
  Solve the deterministic fair clustering problem with lower-bounded clusters DFC_LB(k, p, L) over the generated instance and return the solution.
In the first step, the upper and lower bounds are relaxed since, although we have high concentration guarantees around the expectation, in the worst case the expected value may not be realizable (it could be non-integer). Moreover, the upper and lower bounds could coincide with the expected value, causing violations of the bounds with high probability. See Appendix B for more details.
After that, the color assignments are sampled independently, and the following deterministic fair clustering problem is solved for the resulting set of points:
$$\min_{S:|S|\le k,\ \phi} L^k_p(C) \qquad (8a)$$
$$\text{s.t.}\quad \forall i \in S:\ (1-\delta)\, l_{h_\ell}\, |C_i| \le |C_{i,h_\ell}| \le (1+\delta)\, u_{h_\ell}\, |C_i| \qquad (8b)$$
$$\forall i \in S:\ |C_i| \ge L \qquad (8c)$$
The difference between the original deterministic fair clustering problem and the above is that the bounds are relaxed by ε and a lower bound L is imposed on the cluster size. This is done to guarantee that the resulting solution satisfies the relaxed upper and lower bounds in expectation, because small clusters do not enjoy a Chernoff bound, and therefore nothing ensures that they yield valid solutions to the original PFC(k, p) problem.
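The first two steps of Algorithm 2 are simple to sketch in Python (our own illustration; the inverse-CDF sampling and dictionary-based bounds are implementation choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def relax_and_sample(probs, lowers, uppers, eps):
    """Relax the fairness bounds by eps, then draw one color per point
    from its marginal distribution (rows of probs sum to 1)."""
    l_relaxed = {h: l * (1 - eps) for h, l in lowers.items()}
    u_relaxed = {h: u * (1 + eps) for h, u in uppers.items()}
    cum = probs.cumsum(axis=1)
    colors = (rng.random((len(probs), 1)) > cum).sum(axis=1)
    return l_relaxed, u_relaxed, colors
```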
The algorithm for solving deterministic fair clustering with lower-bounded cluster sizes DFC_LB is identical to the algorithm for the original (deterministic) fair clustering problem [11,12], with the difference that the LP includes a bound on the cluster size; that is, we add the constraint ∀i ∈ S: Σ_{j∈C} x_ij ≥ L. However, the lower bound on the cluster size causes an issue, since it is possible that one or more centers from the color-blind solution need to be closed. Therefore, we have to try all possible combinations of opening and closing the centers. Since there are at most 2^k possibilities, this leads to a run-time that is fixed-parameter tractable, O(2^k poly(n)). In Theorem A.1 we show that this leads to an approximation ratio of α + 2, as in the ordinary (deterministic) fair clustering case, where again α is the approximation ratio of the color-blind algorithm. See also Appendix D for further details.

Theorem 4.5. Given an instance of the probabilistic fair clustering problem PFC(k, p), with high probability Algorithm 2 results in a solution with violation at most ε and approximation ratio α + 2, in O(2^k poly(n)) time.
Proof. First, given an instance I_PFC of probabilistic fair clustering with optimal value OPT_PFC, the clusters in the optimal solution are, with high probability, a valid solution for the deterministic setting, as shown in Theorem 4.4. Moreover, the objective value of the solution is unchanged. Therefore, the resulting deterministic instance has OPT_{DFC_LB} ≤ OPT_PFC. Hence, the algorithm returns a solution of cost at most (α + 2) OPT_{DFC_LB} ≤ (α + 2) OPT_PFC.

For the solution SOL_{DFC_LB} returned by the algorithm, each cluster has size at least L, and the Chernoff bound guarantees that, with high probability, the violation is at most ε.

The run-time follows from the fact that DFC_LB is solved in O(2^k poly(n)) time.
5 Experiments
We now evaluate the performance of our algorithms over a collection of real-world datasets. We give experiments in the two (unordered) color case (§5.2), the metric membership (i.e., ordered color) case (§5.3), as well as under the large cluster assumption (§5.4). We include experiments for the k-means case here, and defer the (qualitatively similar) k-center and k-median experiments to Appendix F.
5.1 Experimental Framework
Hardware & Software. We used only commodity hardware throughout the experiments: Python 3.6 on a MacBook Pro with a 2.3GHz Intel Core i5 processor and 8GB of 2133MHz LPDDR3 memory. A state-of-the-art commercial optimization toolkit, CPLEX [37], was used for solving all linear programs (LPs). NetworkX [23] was used to solve minimum-cost flow problems, and Scikit-learn [41] was used for standard machine learning tasks such as training SVMs, pre-processing, and performing traditional k-means clustering.
Color-Blind Clustering. The color-blind clustering algorithms we use are as follows.
• [22] gives a 2-approximation for k-center.
• We use Scikit-learn's k-means++ module.
• We use the 5-approximation algorithm due to [8], modified with D-sampling [7] according to [11].

Generic Experimental Setup and Measurements. For a chosen dataset, a given color h_ℓ has proportion f_{h_ℓ} = |{v ∈ C | χ(v) = h_ℓ}| / |C|. Following [11], the lower bound is set to l_{h_ℓ} = (1 − δ) f_{h_ℓ} and the upper bound to u_{h_ℓ} = f_{h_ℓ}/(1 − δ). For metric membership, we similarly have f = (Σ_{v∈C} r_v)/|C| as the proportion, with l = (1 − δ)f and u = f/(1 − δ) as the lower and upper bounds, respectively. We set δ = 0.2, as [11] did, unless stated otherwise.
For each experiment, we measure the price of fairness, POF = (fair solution cost) / (color-blind solution cost). We also measure the maximum additive violation γ, as it appears in inequalities 4 and 6.
5.2 Two Color Case
Here we test our algorithm for the case of two colors with probabilistic assignment. We use the Bank dataset [40], which has 4,521 data points. We choose marital status, a categorical variable, as our fairness (color) attribute. To fit the binary color case, we merge single and divorced into one category. Similar to the supervised learning work of [9], we make Bank's deterministic color assignments probabilistic by independently perturbing them for each point with probability p_noise. Specifically, if v originally had color c_v, it now retains color c_v with probability 1 − p_noise and receives the other color with probability p_noise. To make the results more interpretable, we define p_acc = 1 − p_noise. Clearly, p_acc = 1 corresponds to the deterministic case, and p_acc = 1/2 corresponds to completely random color assignments.
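The perturbation procedure is simple to reproduce; below is a small Python sketch of how we model it (our own illustration, with hypothetical variable names):

```python
import numpy as np

rng = np.random.default_rng(7)

def perturb_labels(colors, p_noise):
    """Flip each binary color independently with probability p_noise and
    return the noisy colors together with the implied marginal
    probability that each point's true color is 1."""
    flips = rng.random(len(colors)) < p_noise
    noisy = np.where(flips, 1 - colors, colors)
    # a point observed with color c has true color c w.p. 1 - p_noise
    p_true_one = np.where(noisy == 1, 1.0 - p_noise, p_noise)
    return noisy, p_true_one
```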
First, in Fig. 3(a), we see that the violations of the color-blind solution can be as large as 25, whereas our algorithm stays within its theoretical guarantee of less than 1. In Fig. 3(b), we see that in spite of the large violation, fairness can be achieved at a low relative efficiency loss, not exceeding 2% (POF ≤ 1.02). How does the labeling accuracy level p_acc impact this problem? Fig. 4 shows p_acc vs. POF for δ = 0.2 and δ = 0.1. At p_acc = 1/2, color assignments are completely random and the cost is, as expected, identical to the color-blind cost. As p_acc increases, the colors of the vertices become more differentiated, causing POF to increase, eventually reaching its maximum at p_acc = 1, which is the deterministic case. Next, we test against an "obvious" strategy when faced with probabilistic color labels: simply threshold the probability values, and then run a deterministic fair clustering algorithm. Fig. 5(a) shows that this may indeed work for guaranteeing fairness, as the proportions may be satisfied with small violations; however, it comes at the expense of a much higher POF. Fig. 5(b) supports this latter statement: our algorithm can achieve the same violations with smaller POF. Further, running a deterministic algorithm over the thresholded instance may result in an infeasible problem.¹

Figure 5: Comparing our algorithm to thresholding followed by deterministic fair clustering: (a) maximum violation, (b) POF.

¹An intuitive example of infeasibility: consider the two color case where p_v = 1/2 + ε for all v ∈ C, for some small positive ε. Thresholding drastically changes the overall proportion to 1; therefore no subset of points would have proportion around 1/2 + ε.
5.3 Metric Membership
Here we test our algorithm for the metric membership problem. We use two additional well-known datasets: Adult [33], with age being the fairness attribute, and CreditCard [45], with credit being the fairness attribute. We apply a pre-processing step where, for each point, we subtract the minimum value of the fairness attribute over the entire set. This has the effect of reducing the maximum fairness attribute value, and therefore the maximum possible violation of (1/2)R, while keeping the values non-negative. Fig. 6 shows POF with respect to the number of clusters. For the Adult dataset, POF is less than 5%, whereas for the CreditCard dataset it is as high as 25%. While POF, intuitively, rises with the number of clusters allowed, it is substantially higher for the CreditCard dataset. This may be explained by the correlation between credit and other features represented in the metric space. In Fig. 7, we compare the number of clusters against the normalized maximum additive violation, which is the same maximum additive violation γ from inequality 6, but normalized by R. We see that the normalized maximum additive violation is indeed less than 2, as theoretically guaranteed by our algorithm, whereas for the color-blind solution it is as high as 250.
5.4 The Large Cluster Assumption
Here we test our algorithm for the case of probabilistically assigned multiple colors under Assumption 4.1, which addresses cases where the optimal clustering does not include pathologically small clusters. We use the Census1990 dataset [38]. We note that Census1990 is large, with over 2.4 million points. We use age groups (attribute dAge in the dataset) as our fairness attribute, which yields 7 age groups (colors).² We then sample 100,000 data points and use them to train an SVM classifier³ to predict the age group memberships. The classifier achieves an accuracy of around 68%. We use the classifier to predict the memberships of another 100,000 points not included in the training set, and sample from that set to form the probabilistic assignment of colors. Although, as stated earlier, we should try all possible combinations of closing and opening the color-blind centers, we keep all centers open. It is expected that this heuristic would not lead to a much higher cost if the dataset and the choice of the color-blind centers are sufficiently well behaved. Fig. 8 shows the output of our large cluster algorithm over 100,000 points and k = 5 clusters with varying lower bound assumptions. Since the clusters here are large, we normalize the additive violations by the cluster size. We see that our algorithm results in normalized violations that decrease as the lower bound on the cluster size increases. The POF is high relative to our previous experiments, but still less than 50%.
6 Conclusions & Future Research
Prior research in fair clustering assumes deterministic knowledge of group membership. We generalized prior work by assuming only probabilistic knowledge of group membership. In this new model, we presented novel clustering algorithms with approximation ratio guarantees. We also addressed the problem of "metric membership," where different groups have a notion of order and distance; this addresses real-world use cases where parity must be ensured over, e.g., age or income. We conducted experiments on a slate of datasets: the algorithms we propose come with strong theoretical guarantees, and on real-world data we showed that those guarantees are easily met. Future research directions involve the assignment of multiple colors (e.g., race as well as self-reported gender) to vertices, in addition to the removal of assumptions such as the large cluster assumption.
Broader Impact
Guaranteeing that the color proportions are maintained in each cluster satisfies group (demographic) fairness in clustering. In real-world scenarios, however, group membership may not be known with certainty but rather probabilistically (e.g., learned by way of a machine learning model). Our paper addresses fair clustering in such a scenario and therefore both generalizes that particular (and well-known) problem statement and widens the scope of the application. In settings where a group-fairness-aware clustering algorithm is appropriate to deploy, we believe our work could increase the robustness of those systems. That said, we note (at least) two broader points of discussion that arise when placing potential applications of our work in the greater context of society:
• We address a specific definition of fairness. While the formalization we address is a common one that draws directly on legal doctrine, such as the notion of disparate impact as expressed by [19] and others, we note that the Fairness, Accountability, Transparency, and Ethics (FATE) in machine learning community has identified many such definitions [44]. Yet, there is a growing body of work exploring the gaps between FATE-style definitions of fairness and those desired in industry (see, e.g., recent work due to Holstein et al. [27] that interviews developers about their wants and needs in this space), and there is growing evidence that stakeholders may not even comprehend those definitions in the first place (Saha et al. [42]). Indeed, deciding on a definition of fairness is an inherently morally-laden, application-specific decision, and we acknowledge that making a prescriptive statement about whether or not our model is appropriate for a particular use case is the purview of both technicians, such as ourselves, and policymakers and/or other stakeholders.
• Our work is motivated by the assumption that, in many real-world settings, group membership may not be known deterministically. If group membership is being estimated by a machine-learning-based model, then it is likely that this estimator itself could incorporate bias into the membership estimate; thus, our final clustering could also reflect that bias. As an example, take a bank in the United States; here, it may not be legal for a bank to store information on sensitive attributes-a fact made known recently by the "Apple Card" fiasco of late 2019 [32]. Thus, to audit algorithms for bias, it may be the case that either the bank or a third-party service infers sensitive attributes from past data, which likely introduces bias into the group membership estimate itself. (See recent work due to Chen et al. [16] for an in-depth discussion from the point of view of an industry-academic team.)
We have tried to present this work without making normative statements about, e.g., the definition of fairness used; still, we emphasize the importance of open dialog with stakeholders in any deployed system, and acknowledge that our work addresses only part of these broader concerns.

A Omitted Proofs

A.1 Proof of Theorem 4.1

Proof. Let I_PFC be a given instance of PFC(k, p), let SOL_PFC = (S*_PFC, φ*_PFC) be the optimal solution of I_PFC, and let OPT_PFC be its corresponding optimal value. Also, for Cluster(k, p) and for any instance of it, the optimal value is denoted by OPT_Cluster and the corresponding solution by SOL_Cluster = (S*_Cluster, φ*_Cluster). The proof closely follows that of [11]. First, running the color-blind α-approximation algorithm results in a set of centers S, an assignment φ, and a solution value that is at most α OPT_Cluster ≤ α OPT_PFC. Note that OPT_Cluster ≤ OPT_PFC, since PFC(k, p) is a more constrained problem than Cluster(k, p). Now we establish the following lemma:
Lemma A.1. OPT_{FA-PFC} ≤ (α + 2) OPT_PFC.
Proof. The lemma is established by exhibiting an assignment satisfying the inequality. Let φ′(v) = arg min_{i∈S} d(i, φ*_PFC(v)), i.e., an assignment that routes each vertex from its optimal center to the nearest center in the color-blind solution S. For any point v the following holds:
$$d(v,\phi'(v)) \le d(v,\phi^*_{PFC}(v)) + d(\phi^*_{PFC}(v),\phi'(v)) \le d(v,\phi^*_{PFC}(v)) + d(\phi^*_{PFC}(v),\phi(v)) \le 2\,d(v,\phi^*_{PFC}(v)) + d(v,\phi(v))$$
Stacking the distance values in the vectors d⃗(v, φ′(v)), d⃗(v, φ*_PFC(v)), and d⃗(v, φ(v)), and noting that (Σ_{v∈C} x^p(v))^{1/p} is the ℓ_p-norm of the associated vector x⃗, the triangle inequality for norms (all entries of d⃗(v, φ′(v)) being non-negative) implies:

$$\Big(\sum_{v\in C} d^p(v,\phi'(v))\Big)^{1/p} \le 2\Big(\sum_{v\in C} d^p(v,\phi^*_{PFC}(v))\Big)^{1/p} + \Big(\sum_{v\in C} d^p(v,\phi(v))\Big)^{1/p}$$
It remains to show that φ′ satisfies the fairness constraints 3b. For any color h_ℓ and any center i ∈ S, denote N(i) = {j ∈ S*_PFC | arg min_{i′∈S} d(i′, j) = i}; then we have:

$$\frac{\sum_{v\in\phi'^{-1}(i)} p_v^{h_\ell}}{|\phi'^{-1}(i)|} = \frac{\sum_{j\in N(i)}\sum_{v\in\phi^{*-1}_{PFC}(j)} p_v^{h_\ell}}{\sum_{j\in N(i)} |\phi^{*-1}_{PFC}(j)|}$$
By algebra and the lower and upper fairness constraint bounds satisfied by φ*_PFC, it follows that:

$$l_{h_\ell} \le \min_{j\in N(i)} \frac{\sum_{v\in\phi^{*-1}_{PFC}(j)} p_v^{h_\ell}}{|\phi^{*-1}_{PFC}(j)|} \le \frac{\sum_{j\in N(i)}\sum_{v\in\phi^{*-1}_{PFC}(j)} p_v^{h_\ell}}{\sum_{j\in N(i)} |\phi^{*-1}_{PFC}(j)|} \le \max_{j\in N(i)} \frac{\sum_{v\in\phi^{*-1}_{PFC}(j)} p_v^{h_\ell}}{|\phi^{*-1}_{PFC}(j)|} \le u_{h_\ell}$$
This shows that there exists an assignment for FA-PFC that both satisfies the fairness constraints and has cost at most 2 OPT_PFC + α OPT_Cluster ≤ (α + 2) OPT_PFC. Combining the α-approximation for the color-blind problem with an algorithm that achieves a γ violation for FA-PFC at a value equal to the optimum of FA-PFC completes the proof of Theorem 4.1.
A.2 General Theorem for Lower Bounded Deterministic Fair Clustering
Before stating the theorem, we introduce some definitions. Let FA-PFC-LB denote the fair assignment problem with lower-bounded cluster sizes. Specifically, in FA-PFC-LB(S, p, L) we are given a set of centers S and seek an assignment φ : C → S such that the fairness constraints 8b are satisfied, in addition to constraint 8c lower-bounding each cluster size by L. Although we care about the deterministic case, the statement and proof hold for the probabilistic case; since the deterministic case is a special case of the probabilistic one, the result follows for it as well.
Theorem A.1. Given an α-approximation algorithm for the color-blind clustering problem Cluster(k, p) and a γ-violating algorithm for the fair assignment problem with lower-bounded cluster sizes FA-PFC-LB(S, p, L), a solution with approximation ratio α + 2 and violation at most γ can be achieved for the deterministic fair clustering problem with lower-bounded cluster sizes DFC_LB(k, p, L) in time that is fixed-parameter tractable, O(2^k poly(n)).
Proof. First, running the color-blind α-approximation algorithm results in a set of centers S, an assignment φ, and a solution value that is at most α OPT_Cluster ≤ α OPT_{DFC_LB}. Now we establish the equivalent of Lemma A.1 for this problem:
Lemma A.2. For the fair assignment problem with lower-bounded cluster sizes FA-PFC-LB, we can obtain a solution of cost at most (α + 2) OPT_{DFC_LB} in fixed-parameter tractable time O(2^k poly(n)).
Proof. The proof is very similar to that of Lemma A.1. Let SOL*_{DFC_LB} = (S*_{DFC_LB}, φ*_{DFC_LB}) denote the optimal solution to DFC_LB with optimal value OPT_{DFC_LB}. Similarly, define the assignment φ′(v) = arg min_{i∈S} d(i, φ*_{DFC_LB}(v)), i.e., an assignment which routes vertices from their optimal center to the closest center in the color-blind solution. By arguments identical to those in the proof of Lemma A.1, it follows that:
$$\Big(\sum_{v\in C} d^{\,p}(v,\phi'(v))\Big)^{1/p} \le 2\Big(\sum_{v\in C} d^{\,p}(v,\phi^{*}_{DFC_{LB}}(v))\Big)^{1/p} + \Big(\sum_{v\in C} d^{\,p}(v,\phi(v))\Big)^{1/p}$$

and that:
$$l_{h_\ell} \le \frac{\sum_{v\in \phi'^{-1}(i)} p^{h_\ell}_v}{|\phi'^{-1}(i)|} \le u_{h_\ell}$$
What remains is to show that each cluster is lower bounded by L. Here we note that a center in S will either be allocated the vertices of one or more centers in S*_{DFC_LB}, or it will not be allocated any vertices at all. If it is not allocated any vertices, then it is omitted as a center (since no vertices are assigned to it). If the vertices of one or more centers are routed to it, then it will have a cluster of size Σ_{j∈N(i)} |ϕ*^{-1}_{DFC_LB}(j)| ≥ L. This follows since any center in the optimal solution to DFC_LB must satisfy the lower bound L. The issue is that we do not know whether a color-blind center is not allocated any vertices and should be omitted. However, we can try all possible close and open combinations for the color-blind centers and solve FA-PFC-LB for each combination. This can be done in O(2^k poly(n)) time (fixed-parameter tractable). Now combining the fact that we have an α approximation ratio for the color-blind problem, along with an algorithm that achieves a γ violation to FA-PFC-LB at a value equal to the optimal value for FA-PFC-LB, the proof for theorem A.2 is complete.
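The open/close enumeration can be sketched as follows; solve_fa_pfc_lb is a hypothetical solver for FA-PFC-LB on a fixed set of open centers (returning a (cost, assignment) pair, or None when infeasible), and all other names are assumptions for illustration:

```python
from itertools import combinations

def best_over_open_centers(S, L, solve_fa_pfc_lb):
    """Try every nonempty subset of the k color-blind centers as 'open'
    and keep the cheapest feasible FA-PFC-LB solution: O(2^k) solver
    calls, hence fixed-parameter tractable in k."""
    best = None
    for r in range(1, len(S) + 1):
        for open_centers in combinations(S, r):
            sol = solve_fa_pfc_lb(open_centers, L)
            if sol is not None and (best is None or sol[0] < best[0]):
                best = sol
    return best
```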
B Further details on Independent Sampling and Large Cluster Solution
Here we introduce more details about independent sampling. In section B.1 we discuss the concentration bounds associated with the algorithm. In section B.2 we show that relaxing the upper and lower bounds might be necessary for the algorithm to have a high probability of success. Finally, in section B.3 we show that not enforcing a lower bound when solving the deterministic fair instance may lead to invalid solutions.
B.1 Independent Sampling and the Resulting Concentration Bounds
We recall the Chernoff bound theorem for the sum of a collection of independent random variables.
Theorem B.1. Given a collection of n independent binary random variables where Pr[X_j = 1] = p_j, let S = Σ_{j=1}^n X_j. Then µ = E[S] = Σ_{j=1}^n p_j and the following concentration bound holds for δ ∈ (0, 1):

$$\Pr(|S - \mu| > \delta\mu) \le 2e^{-\mu\delta^2/3} \qquad (9)$$
In the following theorem we show that although we do not know the true joint probability distribution D_True, sampling according to the marginal probability p^{h_ℓ}_v for each point v results in the amount of color having the same expectation for any collection of points. Moreover, the amount of color obeys a Chernoff bound in the independently sampled case.
Theorem B.2. Let Pr_{D_True}[X_1 = x_1, . . . , X_n = x_n] denote the probability that (X_1 = x_1, . . . , X_n = x_n), where X_i is the random variable for the color of vertex i, x_i ∈ H (H being the set of colors) is a specific value for the realization, and the probability is taken according to the true unknown joint probability distribution D_True. Writing X^{h_ℓ}_i for the indicator random variable of color h_ℓ for vertex i, for any collection of points C the amount of color h_ℓ in the collection is S^{h_ℓ}_{D_True} = Σ_{i∈C} X^{h_ℓ}_{i,D_True} when sampling according to D_True, and it is S^{h_ℓ}_{D_Indep} = Σ_{i∈C} X^{h_ℓ}_{i,D_Indep} when independent sampling is done. We have that:
• In general: Pr_{D_True}[X_1 = x_1, . . . , X_n = x_n] ≠ Pr_{D_Indep}[X_1 = x_1, . . . , X_n = x_n].

• Expectations agree on the amount of color: E[S^{h_ℓ}_{D_True}] = E[S^{h_ℓ}_{D_Indep}].

• The amount of color has a Chernoff bound for the independently sampled case S^{h_ℓ}_{D_Indep}.
Proof. The first point follows since we simply don't have the same probability distribution. The second is immediate from the linearity of expectations and the fact that both distributions agree on the marginal probabilities (Pr_{D_True}[X_i = h_ℓ] = Pr_{D_Indep}[X_i = h_ℓ] = p^{h_ℓ}_i):

$$E[S^{h_\ell}_{D_{Indep}}] = E\Big[\sum_{i\in C} X^{h_\ell}_{i,D_{Indep}}\Big] = \sum_{i\in C} E\big[X^{h_\ell}_{i,D_{Indep}}\big] = \sum_{i\in C} p^{h_\ell}_i = \sum_{i\in C} E\big[X^{h_\ell}_{i,D_{True}}\big] = E[S^{h_\ell}_{D_{True}}]$$
The last point follows from the fact that S^{h_ℓ}_{D_Indep} is a sum of independent random variables, and therefore the Chernoff bound of Theorem B.1 applies.
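A quick Monte Carlo check (with made-up marginals of our choosing) illustrates the theorem: the color count under independent sampling concentrates around its expectation, and its tail mass stays below the Chernoff bound:

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.uniform(0.2, 0.8, size=500)      # assumed marginals p_v for one color
mu = p.sum()                              # expected amount of the color
delta = 0.1

counts = np.array([(rng.random(p.size) < p).sum() for _ in range(10_000)])
empirical = np.mean(np.abs(counts - mu) > delta * mu)
chernoff = 2 * np.exp(-mu * delta ** 2 / 3)
print(f"empirical tail {empirical:.4f} vs. Chernoff bound {chernoff:.4f}")
```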
B.2 Relaxing the Upper and Lower Bounds
Suppose for an instance I_PFC of probabilistic fair clustering that there exists a color h_ℓ for which the upper and lower proportion bounds are equal, i.e. l_{h_ℓ} = u_{h_ℓ}. Suppose the optimal solution SOL_PFC = (S*_PFC, ϕ*_PFC) has a cluster C_i which we assume can be placed arbitrarily far away from the other points. The Chernoff bound guaranteed by independent sampling is not useful here, since the realization has to precisely equal the expectation, not merely be within a δ of it. In this case sampling will not result in cluster C_i having the balanced color, and therefore the points in C_i would have to be merged with other points (if possible, since the entire instance may be infeasible) to obtain a cluster whose balance for color h_ℓ equals l_{h_ℓ} and u_{h_ℓ}. Since we assumed cluster C_i can be made arbitrarily far away, the cost of the deterministic instance generated can be arbitrarily worse.
Note that we do not really need l_{h_ℓ} = u_{h_ℓ}. Similar arguments apply if l_{h_ℓ} ≠ u_{h_ℓ}, by assuming that the optimal solution has a cluster C_i (again arbitrarily far away) whose balance precisely equals l_{h_ℓ} or u_{h_ℓ}; independent sampling would then result in violations of the bounds for cluster C_i. Therefore, in the worst case, relaxing the bounds is necessary to make sure that a valid solution remains valid w.h.p. in the deterministic instance generated by independent sampling.
B.3 Independent Sampling without Lower Bounded Cluster Sizes Could Generate Invalid Solutions

To show that enforcing a lower bound on the cluster size is necessary, consider the case shown in figure 9:(a), where the outlier points in the top-right have probability 0.45 of being white, whereas the other points have probability 1 of being white. Let the lower and upper bounds for the white color be l_white = 0.6 and u_white = 1, respectively. Since the outlier points don't have the right color balance, they are merged with the other points, although that leads to a higher cost.
However, independent sampling would result in the outlier points being white with probability (0.45)(0.45) ≃ 0.2. This makes the points have the right color balance, and therefore the optimal solution for deterministic fair clustering would have these points merged as shown in figure 9:(b). However, the cluster of the two outlier points is not a valid cluster for the probabilistic fair clustering instance. Therefore, forcing a lower bound is necessary to make sure that a solution found in the deterministic fair clustering instance generated by independent sampling is w.h.p. valid for the probabilistic fair clustering instance.

C Example on Forming the Network Flow Graph for the Two-Color (Metric Membership) Case

Suppose we have two centers and 5 vertices, and that the LP solution yields the following assignments for center 1: x_11 = 0.3, x_12 = 0.6, x_13 = 0.7, x_14 = 0, x_15 = 1.0, and the following assignments for center 2: x_21 = 0.7, x_22 = 0.4, x_23 = 0.3, x_24 = 1.0, x_25 = 0. Further, let the probability values be: p_1 = 0.7, p_2 = 0.8, p_3 = 0.4, p_4 = 0.9, p_5 = 0.1. The following explains how the network flow graph is constructed.

Cluster 1: First we calculate |C_1| = ⌈Σ_{j∈C} x_1j⌉ = ⌈2.6⌉ = 3, which means we will have 3 vertices in C_1. The collection of vertices having non-zero assignments to center 1 is {1, 2, 3, 5}; sorting the vertices in non-increasing order of probability gives A⃗_1 = [2, 1, 3, 5]. Following algorithm 1 leads to the graph shown in figure 10.
Cluster 2: We follow the same procedure for cluster 2. First we calculate |C_2| = ⌈Σ_{j∈C} x_2j⌉ = ⌈2.4⌉ = 3, which means we will have 3 vertices in C_2. The collection of vertices having non-zero assignments to center 2 is {1, 2, 3, 4}; sorting the vertices in non-increasing order of probability gives A⃗_2 = [4, 2, 1, 3]. Following algorithm 1 leads to the graph shown in figure 11.
Now we construct the entire graph by connecting the edges from each vertex in C_1 to the vertex for center 1 and each vertex in C_2 to the vertex for center 2. Finally, we connect the vertices for centers 1 and 2 to the vertex t. This leads to the graph in figure 12. Note that the edge weights showing the sent assignment are not drawn, as they have no significance once the graph is constructed.
For the case of metric membership the procedure is unaltered, but instead of sorting according to the probability value p v for a vertex, we sort according to the value r v .
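To make the construction concrete, here is a small sketch reproducing cluster 1 of the example; since algorithm 1 itself is not reproduced in this appendix, the pouring of assignments into unit-capacity bucket vertices below is our reconstruction, and the names are illustrative:

```python
import math

def build_cluster_buckets(i, x, p):
    """x: dict (center, vertex) -> LP assignment; p: dict vertex -> probability.
    Returns edges (vertex, (center, bucket), sent_assignment)."""
    assigned = {j: x[(i, j)] for j in p if x.get((i, j), 0) > 0}
    n_buckets = math.ceil(sum(assigned.values()))    # |C_i|
    order = sorted(assigned, key=lambda j: -p[j])    # the vector A_i
    edges, bucket, room = [], 0, 1.0
    for j in order:
        amount = assigned[j]
        while amount > 1e-9:                # total <= n_buckets, so this ends
            sent = min(amount, room)
            edges.append((j, (i, bucket), sent))
            amount -= sent
            room -= sent
            if room <= 1e-9 and bucket + 1 < n_buckets:
                bucket, room = bucket + 1, 1.0
    return edges

x = {(1, 1): 0.3, (1, 2): 0.6, (1, 3): 0.7, (1, 5): 1.0}
p = {1: 0.7, 2: 0.8, 3: 0.4, 5: 0.1}
for edge in build_cluster_buckets(1, x, p):
    print(edge)
```

Running this reproduces the splits in figure 10: the first two buckets each receive a total assignment of 1, vertex 3 is split across the first two buckets, vertex 5 across the last two, and the last bucket receives only 0.6.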
D Further details on solving the lower bounded fair clustering problem
The solution for the lower bounded deterministic fair clustering problem, follows a similar two step solution framework.
Step (1) is unchanged and simply amounts to running a color-blind approximation algorithm with ratio α.
Step (2) sets up an LP similar to that in section 4.1.2. The constraints in 7c still remain, but with deterministic (not probabilistic) color assignments; further, a new constraint lower bounding the cluster size is added. Specifically, we have the following LP:

$$\begin{aligned}
\min \ & \sum_{j\in C,\, i\in S} d^{\,p}(i,j)\, x_{ij} \\
\text{s.t.}\ & l_{h_\ell} \sum_{j\in C} x_{ij} \le \sum_{j\in C:\chi(j)=h_\ell} x_{ij}, \quad \forall i\in S,\ \forall h_\ell\in H \qquad (10) \\
& \sum_{j\in C:\chi(j)=h_\ell} x_{ij} \le u_{h_\ell} \sum_{j\in C} x_{ij}, \quad \forall i\in S,\ \forall h_\ell\in H \qquad (11) \\
& \sum_{j\in C} x_{ij} \ge L, \quad \forall i\in S \qquad (12) \\
& \sum_{i\in S} x_{ij} = 1, \quad \forall j\in C \\
& 0 \le x_{ij} \le 1, \quad \forall i\in S,\ \forall j\in C
\end{aligned}$$

Constraints 10 and 11 are the deterministic counterparts to constraints 7c. Constraint 12 is introduced to lower bound the cluster size. The issue is that a color-blind center may be closed (assigned no vertices) in the optimal solution, yet constraint 12 forces it to have at least L many points. The way to fix this is to try all possible combinations of closing and opening the centers, a total of 2^k possibilities. This makes the running time fixed-parameter tractable, i.e. O(2^k poly(n)). The resulting solution has an approximation ratio of α + 2 (see A.2). What remains is to round the solution. We apply the network flow rounding from [12] (specifically section 2.2 in [12]). This results in a violation of at most 1 in the cluster size and a violation of at most 1 per color in any given cluster (lemma 8 in [12]).
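For concreteness, the LP above can be written down with an off-the-shelf modeling layer. The sketch below uses PuLP and is our formulation of the displayed program, not the authors' code (the paper's reference list includes CPLEX; PuLP here is just an illustrative choice), so all names are assumptions:

```python
import pulp

def fa_pfc_lb_lp(S, C, d, p_norm, chi, colors, l, u, L):
    prob = pulp.LpProblem("FA_PFC_LB", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", (S, C), lowBound=0, upBound=1)
    prob += pulp.lpSum(d(i, j) ** p_norm * x[i][j] for i in S for j in C)
    for i in S:
        size = pulp.lpSum(x[i][j] for j in C)
        prob += size >= L                                    # constraint (12)
        for h in colors:
            colored = pulp.lpSum(x[i][j] for j in C if chi[j] == h)
            prob += colored >= l[h] * size                   # constraint (10)
            prob += colored <= u[h] * size                   # constraint (11)
    for j in C:
        prob += pulp.lpSum(x[i][j] for i in S) == 1          # assign each point
    prob.solve()
    return {(i, j): x[i][j].value() for i in S for j in C}
```

As in the text, this LP would be re-solved for each of the 2^k open/close combinations of the centers, with closed centers simply dropped from S.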
E Dependent Rounding for Multiple Colors under a Large Cluster Assumption
Here we discuss a dependent rounding based solution for the k-center problem under the large cluster assumption 4.1. First we start with a brief review/introduction of dependent rounding.
E.1 Brief Summary of Dependent Rounding
Here we summarize the properties of dependent rounding; see [20] for full details. Given a bipartite graph (G = (A, B), E), each edge (i, j) ∈ E has a value 0 ≤ x_ij ≤ 1 which will be rounded to X_ij ∈ {0, 1}. Further, for every vertex v ∈ A ∪ B define the fractional degree as d_v = Σ_{u:(v,u)∈E} x_vu and the integral degree as D_v = Σ_{u:(v,u)∈E} X_vu. Dependent rounding satisfies the following properties:

1. Marginal distribution: for every edge (i, j) ∈ E, Pr[X_ij = 1] = x_ij.

2. Degree preservation: ∀v ∈ A ∪ B: D_v ∈ {⌊d_v⌋, ⌈d_v⌉}.

3. Negative correlation: ∀v ∈ A ∪ B, let E_v denote any subset of edges incident on v; then Pr[⋀_{e_v∈E_v}(X_{e_v} = b)] ≤ Π_{e_v∈E_v} Pr[X_{e_v} = b], where b ∈ {0, 1}.

Theorem E.1. Let a_1, . . . , a_t be reals in [0, 1], let X_1, . . . , X_t be random variables taking values in {0, 1}, and let E[Σ_i a_i X_i] = µ. If Pr[⋀_{i∈S}(X_i = b)] ≤ Π_{i∈S} Pr[X_i = b], where S is any subset of indices from {1, . . . , t} and b ∈ {0, 1}, then we have for δ ∈ (0, 1):

$$\Pr\Big[\Big|\sum_i a_i X_i - \mu\Big| \ge \delta\mu\Big] \le 2e^{-\mu\delta^2/3}$$
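For intuition, the cancellation scheme behind these properties can be written compactly. The following is our illustrative path/cycle-cancellation sketch in the spirit of [20], not the authors' code, and it favors clarity over efficiency:

```python
import random
from collections import defaultdict

def _structure(frac_edges):
    """Return a cycle, or a maximal path, among the fractional edges."""
    adj = defaultdict(list)
    for e in frac_edges:
        adj[e[0]].append(e)
        adj[e[1]].append(e)
    start = next((v for v in adj if len(adj[v]) == 1), next(iter(adj)))
    cur, walk, used, pos = start, [], set(), {start: 0}
    while True:
        nxt = [e for e in adj[cur] if e not in used]
        if not nxt:
            return walk                        # maximal path
        e = nxt[0]
        used.add(e)
        walk.append(e)
        cur = e[1] if e[0] == cur else e[0]
        if cur in pos:
            return walk[pos[cur]:]             # closed a cycle
        pos[cur] = len(walk)

def dependent_rounding(x, eps=1e-9):
    """x: dict mapping bipartite edges (u, v) to fractional values in [0, 1]."""
    x = dict(x)
    while True:
        frac = [e for e, v in x.items() if eps < v < 1 - eps]
        if not frac:
            return {e: int(round(v)) for e, v in x.items()}
        edges = _structure(frac)
        M1, M2 = edges[0::2], edges[1::2]      # alternating matchings
        if not M2:                             # a lone fractional edge
            e = M1[0]
            x[e] = 1.0 if random.random() < x[e] else 0.0
            continue
        a = min(min(1 - x[e] for e in M1), min(x[e] for e in M2))
        b = min(min(x[e] for e in M1), min(1 - x[e] for e in M2))
        if random.random() < b / (a + b):      # expected change is 0 per edge
            for e in M1: x[e] += a
            for e in M2: x[e] -= a
        else:
            for e in M1: x[e] -= b
            for e in M2: x[e] += b
```

Each round fixes at least one edge at 0 or 1 while keeping every E[X_e] equal to x_e, which is exactly the marginal property used in the proof of Theorem E.2 below.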
E.2 Multiple Color Large Cluster solution using Dependent Rounding
For the multi-color k-center problem satisfying assumption 4.1, form the following bipartite graph (G = (A, B), E): A has all vertices of C, and B has all of the vertices of S (the cluster centers). The fractional assignments x_ij represent the weights of the edges, ∀(i, j) ∈ E. If x_ij is the optimal solution to the lower bounded probabilistic fair assignment problem (theorem A.2), then applying dependent rounding leads to the following theorem:
Theorem E.2. Under assumption 4.1, the integer solution resulting from dependent rounding for the multi-color probabilistic k-center problem has:
(1) An approximation ratio of α + 2.
(2) For any color h_ℓ and any cluster i ∈ S, the amount of color S^{h_ℓ}_{C_i} = Σ_{j∈C} p^{h_ℓ}_j X_ij is concentrated around the LP-assigned amount of color Σ_{j∈C} p^{h_ℓ}_j x_ij.
Proof. For (1): Note that the approximation ratio before applying dependent rounding is α + 2. By property 1 of dependent rounding, if x_ij = 0 then Pr[X_ij = 1] = 0, and therefore a point will not be assigned to a center it was not already assigned to by the LP. For (2): Again by property 1 of dependent rounding, E_DR[X_ij] = (1)·x_ij + 0 = x_ij, where E_DR refers to the expectation with respect to the randomness of dependent rounding. Therefore, for any cluster i the expected amount of color equals the amount of color assigned by the LP, i.e.

$$E_{DR}[S^{h_\ell}_{C_i}] = E_{DR}\Big[\sum_{j\in C} p^{h_\ell}_j X_{ij}\Big] = \sum_{j\in C} p^{h_\ell}_j E_{DR}[X_{ij}] = \sum_{j\in C} p^{h_\ell}_j x_{ij}.$$
It follows by property 3 of dependent rounding and theorem E.1 that S^{h_ℓ}_{C_i} is highly concentrated around E_DR[S^{h_ℓ}_{C_i}]. Specifically:

$$\Pr\Big[\big|S^{h_\ell}_{C_i} - E_{DR}[S^{h_\ell}_{C_i}]\big| \ge \delta\, E_{DR}[S^{h_\ell}_{C_i}]\Big] \le 2e^{-E_{DR}[S^{h_\ell}_{C_i}]\,\delta^2/3}$$
Similar to the proof of 4.4, the probability of failure can be upper bounded by:
$$\begin{aligned}
\Pr\Big[\exists i \in \{1,\dots,k\},\ h_\ell\in H : \big|S^{h_\ell}_{C_i} - E[S^{h_\ell}_{C_i}]\big| > \delta\, E[S^{h_\ell}_{C_i}]\Big] &\le 2k|H|\exp\Big(-\frac{\delta^2}{3} L\, l_{\min}\Big) \\
&\le 2\frac{n}{L}|H|\exp\Big(-\frac{\delta^2}{3} L\, l_{\min}\Big) \\
&\le 2|H|\, n^{1-r}\exp\Big(-\frac{\delta^2}{3} l_{\min}\, n^{r}\Big)
\end{aligned}$$

Therefore, w.h.p. the returned integral solution will be concentrated around the LP color assignments, which are fair.
Note, however, that obtaining the optimal fractional solution x_ij takes O(2^k poly(n)) time.
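Plugging illustrative numbers (our choices) into the final bound makes the "w.h.p." concrete:

```python
import math

def failure_bound(n, r, l_min, n_colors, delta):
    """2 |H| n^(1-r) exp(-(delta^2 / 3) l_min n^r), as derived above."""
    return 2 * n_colors * n ** (1 - r) * math.exp(-(delta ** 2) / 3 * l_min * n ** r)

# e.g. n = 100,000 points, L = n^0.7, l_min = 0.2, 3 colors, delta = 0.2:
print(failure_bound(100_000, 0.7, 0.2, 3, 0.2))   # ~0.04, i.e. success w.h.p.
```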
F Further Experimental Details and Results
F.1 Further Details about the Datasets and the Experimental Setup
For each dataset, the numeric features are used as coordinates and the distance between points is the Euclidean distance. The numeric features are normalized prior to clustering. For metric membership in the Adult dataset, age is not used as a coordinate, despite being numeric, since it is the fairness attribute. Similarly, for the CreditCard dataset, credit is not used as a coordinate.
When solving the min-cost flow problem, distances are first multiplied by a large number (1000) and then rounded to integer values. After obtaining the solution for the flow problem, the cost is calculated with the original distance values (which have not been rounded) to verify that the cost is not worse.
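The scaling step can be seen in a toy min-cost-flow instance with networkx (the graph and distances here are our own toy values, not the paper's construction):

```python
import networkx as nx

SCALE = 1000
d = {("s", "a"): 0.3337, ("a", "t"): 0.1254}      # toy fractional distances

G = nx.DiGraph()
G.add_node("s", demand=-1)
G.add_node("t", demand=1)
for (u, v), dist in d.items():
    G.add_edge(u, v, capacity=1, weight=int(round(dist * SCALE)))

flow = nx.min_cost_flow(G)                         # integer-weight solve
true_cost = sum(d[(u, v)] * f for u, nbrs in flow.items()
                for v, f in nbrs.items() if (u, v) in d)
print(true_cost)   # cost re-checked with the original, unrounded distances
```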
Although run-time is not a main concern in this paper, we find that we can solve large instances containing 100,000 points for k-means with 5 clusters in less than 4 minutes using commodity hardware.
F.2 Further Experiments
Here we verify the performance of our algorithm on the k-center and the k-median objectives. All datasets have been sub-sampled to 1,000 data points. For the two color probabilistic case, throughout we set p acc = 0.9 (see section 5.2 for the definition of p acc ).
F.2.1 k-center
As can be seen from figure 13, our violations are indeed less than 1, matching the theoretical guarantee. Similarly, for metric membership the normalized violation is less than 1 as well; see figure 14.
F.2.2 k-median
Similar observations apply to the k-median problem: our algorithm indeed leads to small violations not exceeding 1, in keeping with the theory. See figure 15 for the two color probabilistic case and figure 16 for the metric membership case.
Theorem 4.3. Any rounding scheme applied to the resulting solution has a fairness constraint violation of at least R/2 in the worst case.

Figure 2: Points 2 and 4 have been selected as centers by the integer solution.

Theorem 4.4. If Assumption 4.1 holds, then independent sampling results in the amount of color for each cluster being concentrated around its expected value with high probability.

Figure 3: For p_acc = 0.7 and p_acc = 0.8, showing (a): #clusters vs. maximum additive violation; (b): #clusters vs. POF.

Figure 4: Plot showing p_acc vs. POF, (a): δ = 0.2 and (b): δ = 0.1.

Figure 6: Plot showing the number of clusters vs. POF.

Figure 7: Plot showing the number of clusters vs. the normalized maximum additive violation.

Figure 8: Plot showing the performance of our independent sampling algorithm over the Census1990 dataset for k = 5 clusters with varying values of the cluster size lower bound: (a) maximum violation normalized by the cluster size, (b) the price of fairness.

Figure 9: (a): The two outlier points at the top-right have probabilities 0.45 of being white, whereas the rest have probabilities 1. All points are merged together to form a balanced cluster. (b): An instance of the same points with the colors resulting from independent sampling. The two outlier points have been merged to form their own cluster.

Figure 10: Graph constructed in cluster 1. For clarity, we write above each edge the assignment it "sends" to the vertex in C_1. Notice how each vertex in C_1 receives a total assignment of 1, except for the last vertex c_1^3.

Figure 11: Graph constructed in cluster 2. For clarity, we write above each edge the assignment it "sends" to the vertex in C_2. Notice how each vertex in C_2 receives a total assignment of 1, except for the last vertex c_2^3.

Figure 12: Diagram for the final network flow graph. The entire graph is the union of the subgraphs for clusters 1 and 2, without repeating the vertices of C. The edge weights designating the amount of LP assignment sent are dropped, as they have no effect on the following steps. The vertices of C_1 and C_2 are connected to their centers 1 and 2 in S, respectively, and the centers themselves are connected to the vertex t.

Figure 13: k-center for the two color probabilistic case using the Bank dataset. (a): number of clusters vs. maximum violation, (b): number of clusters vs. POF.

Figure 14: k-center for the metric membership problem using the Adult dataset (metric membership over age). (a): number of clusters vs. normalized maximum violation, (b): number of clusters vs. POF.

Figure 15: k-median for the two color probabilistic case using the Bank dataset. (a): number of clusters vs. maximum violation, (b): number of clusters vs. POF.

Figure 16: k-median for the metric membership problem using the CreditCard dataset (metric membership over credit). (a): number of clusters vs. normalized maximum violation, (b): number of clusters vs. POF.
Footnotes: Group 0 is extremely rare, to the point that it violates the "large cluster" assumption for most experiments; therefore, we merged it with Group 1, its nearest age group. We followed standard procedures and ended up with a standard RBF-based SVM; the accuracy of this SVM is somewhat orthogonal to the message of this paper, and rather serves to illustrate a real-world, noisy labeler.
Our proposed approach serves as one part of a larger application ecosystem.
AcknowledgmentsDickerson and Esmaeili were supported in part by NSF CAREER Award IIS-1846237, DARPA GARD Award #HR112020007, DARPA SI3-CMD Award #S4761, DoD WHS Award #HQ003420F0035, NIH R01 Award NLM-013039-01, and a Google Faculty Research Award. We thank Keegan Hines for discussion about "fairness under unawareness" in practical settings, and for pointers to related literature.
References

[1] Charu C. Aggarwal and Chandan K. Reddy. Data clustering: Algorithms and applications. 2013.
[2] Gagan Aggarwal, Rina Panigrahy, Tomás Feder, Dilys Thomas, Krishnaram Kenthapadi, Samir Khuller, and An Zhu. Achieving anonymity via clustering. ACM Transactions on Algorithms (TALG), 6(3):49, 2010.
[3] Sara Ahmadian, Alessandro Epasto, Ravi Kumar, and Mohammad Mahdian. Clustering without over-representation. arXiv preprint arXiv:1905.12753, 2019.
[4] Sara Ahmadian, Ashkan Norouzi-Fard, Ola Svensson, and Justin Ward. Better guarantees for k-means and euclidean k-median by primal-dual algorithms. SIAM Journal on Computing, (0):FOCS17-97, 2019.
[5] Hyung-Chan An, Aditya Bhaskara, Chandra Chekuri, Shalmoli Gupta, Vivek Madan, and Ola Svensson. Centrality of trees for capacitated k-center. Mathematical Programming, 154(1-2):29-53, 2015.
[6] Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner. Machine bias. ProPublica, May 23, 2016.
[7] David Arthur and Sergei Vassilvitskii. k-means++: The advantages of careful seeding. Technical report, Stanford, 2006.
[8] Vijay Arya, Naveen Garg, Rohit Khandekar, Adam Meyerson, Kamesh Munagala, and Vinayaka Pandit. Local search heuristics for k-median and facility location problems. SIAM Journal on Computing, 33(3):544-562, 2004.
[9] Pranjal Awasthi, Matthäus Kleindessner, and Jamie Morgenstern. Effectiveness of equalized odds for fair classification under imperfect group information. arXiv preprint arXiv:1906.03284, 2019.
[10] Arturs Backurs, Piotr Indyk, Krzysztof Onak, Baruch Schieber, Ali Vakilian, and Tal Wagner. Scalable fair clustering. In International Conference on Machine Learning, pages 405-413, 2019.
[11] Suman K. Bera, Deeparnab Chakrabarty, and Maryam Negahbani. Fair algorithms for clustering. arXiv preprint arXiv:1901.02393, 2019.
[12] Ioana O. Bercea, Martin Groß, Samir Khuller, Aounon Kumar, Clemens Rösner, Daniel R. Schmidt, and Melanie Schmidt. On the cost of essentially fair clusterings. arXiv preprint arXiv:1811.10319, 2018.
[13] Dan Biddle. Adverse impact and test validation: A practitioner's guide to valid and defensible employment testing. Gower Publishing, Ltd., 2006.
[14] M. Bogen and A. Rieke. Help wanted: An examination of hiring algorithms, equity, and bias. Technical report, Upturn, 2018.
[15] Jarosław Byrka, Thomas Pensyl, Bartosz Rybicki, Aravind Srinivasan, and Khoa Trinh. An improved approximation for k-median, and positive correlation in budgeted optimization. In Proceedings of the twenty-sixth annual ACM-SIAM Symposium on Discrete Algorithms, pages 737-756. SIAM, 2014.
[16] Jiahao Chen, Nathan Kallus, Xiaojie Mao, Geoffry Svacha, and Madeleine Udell. Fairness under unawareness: Assessing disparity when protected class is unobserved. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT*), pages 339-348, 2019.
[17] Flavio Chierichetti, Ravi Kumar, Silvio Lattanzi, and Sergei Vassilvitskii. Fair clustering through fairlets. In Advances in Neural Information Processing Systems, pages 5029-5037, 2017.
[18] Marek Cygan, MohammadTaghi Hajiaghayi, and Samir Khuller. LP rounding for k-centers with non-uniform hard capacities. In 2012 IEEE 53rd Annual Symposium on Foundations of Computer Science, pages 273-282. IEEE, 2012.
[19] Michael Feldman, Sorelle A. Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkatasubramanian. Certifying and removing disparate impact. In International Conference on Knowledge Discovery and Data Mining (KDD), pages 259-268, 2015.
[20] Rajiv Gandhi, Samir Khuller, Srinivasan Parthasarathy, and Aravind Srinivasan. Dependent rounding and its applications to approximation algorithms. Journal of the ACM (JACM), 53(3):324-360, 2006.
[21] Teofilo F. Gonzalez. Clustering to minimize the maximum intercluster distance. Theoretical Computer Science, 1985. ISSN 0304-3975.
[22] Teofilo F. Gonzalez. Clustering to minimize the maximum intercluster distance. Theoretical Computer Science, 38:293-306, 1985.
[23] Aric Hagberg, Dan Schult, Pieter Swart, D. Conway, L. Séguin-Charbonneau, C. Ellison, B. Edwards, and J. Torrents. NetworkX: high productivity software for complex networks. Web page, https://networkx.lanl.gov/wiki, 2013.
[24] Moritz Hardt, Eric Price, and Nathan Srebro. Equality of opportunity in supervised learning. In Proceedings of the 30th International Conference on Neural Information Processing Systems (NIPS'16), pages 3323-3331, Red Hook, NY, USA, 2016. Curran Associates Inc. ISBN 9781510838819.
[25] Dorit S. Hochbaum and David B. Shmoys. A best possible heuristic for the k-center problem. Mathematics of Operations Research, May 1985. ISSN 0364-765X.
[26] Dorit S. Hochbaum and David B. Shmoys. A unified approach to approximation algorithms for bottleneck problems. Journal of the ACM, May 1986. ISSN 0004-5411.
[27] Kenneth Holstein, Jennifer Wortman Vaughan, Hal Daumé III, Miro Dudik, and Hanna Wallach. Improving fairness in machine learning systems: What do industry practitioners need? In Proceedings of the Conference on Human Factors in Computing Systems (CHI), 2019.
[28] Lingxiao Huang, Shaofeng Jiang, and Nisheeth Vishnoi. Coresets for clustering with fairness constraints. In Advances in Neural Information Processing Systems 32, pages 7587-7598. Curran Associates, Inc., 2019.
[29] Matthew Joseph, Michael Kearns, Jamie Morgenstern, and Aaron Roth. Fairness in learning: Classic and contextual bandits. In Proceedings of the 30th International Conference on Neural Information Processing Systems (NIPS'16), pages 325-333, Red Hook, NY, USA, 2016. Curran Associates Inc. ISBN 9781510838819.
[30] Nathan Kallus, Xiaojie Mao, and Angela Zhou. Assessing algorithmic fairness with unobserved protected class using data combination. arXiv preprint arXiv:1906.00285, 2019.
[31] Samir Khuller and Yoram J. Sussmann. The capacitated k-center problem. SIAM Journal on Discrete Mathematics, 13(3):403-418, 2000.
[32] Will Knight. The Apple Card didn't 'see' gender-and that's the problem. Wired, 2019.
[33] Ron Kohavi. Scaling up the accuracy of naive-Bayes classifiers: A decision-tree hybrid. In KDD, volume 96, pages 202-207, 1996.
[34] P. Langley. Crafting papers on machine learning. In Pat Langley, editor, Proceedings of the 17th International Conference on Machine Learning (ICML 2000), pages 1207-1216, Stanford, CA, 2000. Morgan Kaufmann.
[35] Heidi Ledford. Millions of black people affected by racial bias in healthcare algorithms. Nature, 574(7780):608, 2019.
[36] Binh Thanh Luong, Salvatore Ruggieri, and Franco Turini. k-NN as an implementation of situation testing for discrimination discovery and prevention. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '11), pages 502-510, New York, NY, USA, 2011. Association for Computing Machinery. ISBN 9781450308137. doi: 10.1145/2020408.2020488.
[37] IBM CPLEX User's Manual, Version 12 Release 7. IBM ILOG CPLEX Optimization, 2016.
[38] Christopher Meek, Bo Thiesson, and David Heckerman. The learning-curve sampling method applied to model-based clustering. Journal of Machine Learning Research, 2(Feb):397-418, 2002.
[39] Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. A survey on bias and fairness in machine learning, 2019.
[40] Sérgio Moro, Paulo Cortez, and Paulo Rita. A data-driven approach to predict the success of bank telemarketing. Decision Support Systems, 62:22-31, 2014.
[41] Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12(Oct):2825-2830, 2011.
[42] Debjani Saha, Candice Schumann, Duncan C. McElfresh, John P. Dickerson, Michelle L. Mazurek, and Michael Carl Tschantz. Measuring non-expert comprehension of machine learning fairness metrics. In International Conference on Machine Learning (ICML), 2020.
[43] Latanya Sweeney. Discrimination in online ad delivery. Queue, 11(3):10-29, 2013.
[44] Sahil Verma and Julia Rubin. Fairness definitions explained. In 2018 IEEE/ACM International Workshop on Software Fairness (FairWare), pages 1-7. IEEE, 2018.
[45] I-Cheng Yeh and Che-hui Lien. The comparisons of data mining techniques for the predictive accuracy of probability of default of credit card clients. Expert Systems with Applications, 36(2):2473-2480, 2009.
| [] |
[
"A MULTI-WAVELENGTH VIEW OF GALAXY EVOLUTION WITH AKARI",
"A MULTI-WAVELENGTH VIEW OF GALAXY EVOLUTION WITH AKARI"
] | [
"S Serjeant \nDepartment of Physical Sciences\nThe Open University\nMilton KeynesMK7 6AAUK\n",
"C Pearson \nDepartment of Physical Sciences\nThe Open University\nMilton KeynesMK7 6AAUK\n\nRAL Space\nSTFC Rutherford Appleton Laboratory\nOX11 0QXChilton, DidcotOxfordshireUK\n",
"G J White \nDepartment of Physical Sciences\nThe Open University\nMilton KeynesMK7 6AAUK\n\nRAL Space\nSTFC Rutherford Appleton Laboratory\nOX11 0QXChilton, DidcotOxfordshireUK\n",
"M W L Smith \nSchool of Physics and Astronomy\nCardiff University\nQueens Buildings, The ParadeCF24 3AACardiffUK\n",
"Y Doi \nDepartment of General System Studies\nGraduate School of Arts and Sciences\nThe University of Tokyo\n3-8-1 Komaba, Meguro-ku153-8902TokyoJapan\n"
] | [
"Department of Physical Sciences\nThe Open University\nMilton KeynesMK7 6AAUK",
"Department of Physical Sciences\nThe Open University\nMilton KeynesMK7 6AAUK",
"RAL Space\nSTFC Rutherford Appleton Laboratory\nOX11 0QXChilton, DidcotOxfordshireUK",
"Department of Physical Sciences\nThe Open University\nMilton KeynesMK7 6AAUK",
"RAL Space\nSTFC Rutherford Appleton Laboratory\nOX11 0QXChilton, DidcotOxfordshireUK",
"School of Physics and Astronomy\nCardiff University\nQueens Buildings, The ParadeCF24 3AACardiffUK",
"Department of General System Studies\nGraduate School of Arts and Sciences\nThe University of Tokyo\n3-8-1 Komaba, Meguro-ku153-8902TokyoJapan"
] | [] | AKARI's all-sky survey resolves the far-infrared emission in many thousands of nearby galaxies, providing essential local benchmarks against which the evolution of high-redshift populations can be measured. This review presents some recent results in the resolved galaxy populations, covering some well-known nearby targets, as well as samples from major legacy surveys such as the Herschel Reference Survey and the JCMT Nearby Galaxies Survey. This review also discusses the prospects for higher redshifts surveys, including strong gravitational lens clusters and the AKARI NEP field. | 10.5303/pkas.2012.27.4.305 | [
"https://arxiv.org/pdf/1208.3631v1.pdf"
] | 119,309,831 | 1208.3631 | 321ea3a92fb3fc8fb63a13e4fc9d96f28519b940 |
A MULTI-WAVELENGTH VIEW OF GALAXY EVOLUTION WITH AKARI
17 Aug 2012 27: 1 ∼ 2, 2012
S Serjeant
Department of Physical Sciences
The Open University
Milton KeynesMK7 6AAUK
C Pearson
Department of Physical Sciences
The Open University
Milton KeynesMK7 6AAUK
RAL Space
STFC Rutherford Appleton Laboratory
OX11 0QXChilton, DidcotOxfordshireUK
G J White
Department of Physical Sciences
The Open University
Milton KeynesMK7 6AAUK
RAL Space
STFC Rutherford Appleton Laboratory
OX11 0QXChilton, DidcotOxfordshireUK
M W L Smith
School of Physics and Astronomy
Cardiff University
Queens Buildings, The ParadeCF24 3AACardiffUK
Y Doi
Department of General System Studies
Graduate School of Arts and Sciences
The University of Tokyo
3-8-1 Komaba, Meguro-ku153-8902TokyoJapan
A MULTI-WAVELENGTH VIEW OF GALAXY EVOLUTION WITH AKARI
17 Aug 2012; PKAS 27: 1-2, 2012; doi:10.5303/PKAS.2012.27.3.01 (Received July 17, 2012; Accepted ????) (Print) Publications of the Korean Astronomical Society
AKARI's all-sky survey resolves the far-infrared emission in many thousands of nearby galaxies, providing essential local benchmarks against which the evolution of high-redshift populations can be measured. This review presents some recent results in the resolved galaxy populations, covering some well-known nearby targets, as well as samples from major legacy surveys such as the Herschel Reference Survey and the JCMT Nearby Galaxies Survey. This review also discusses the prospects for higher redshifts surveys, including strong gravitational lens clusters and the AKARI NEP field.
INTRODUCTION
This paper will review results on bright galaxies from AKARI and Herschel, and will cover from local galaxies to high-z galaxies, via gravitational lensing.
The first part of the review will be about the AKARI XTENDED Prioritised Study (PS) program. We have heard a great deal about the AKARI diffuse maps so far in this conference, and this paper will focus on some early results on resolved galaxies. We will cover a few obvious old friends, such as Andromeda, and then progress from these anecdotal examples to discussing results from two major legacy surveys with robust selection criteria. Note that no claim will be made that any of these surveys are unbiased in any sense. There is no such thing as an unbiased survey in astronomy. A 'bias' is just a pejorative way of referring to the selection effects, and any astronomical catalogue of any nature has selection criteria of some sort. (Even "all luminous objects within this volume" would neglect neutral hydrogen systems, and would in practice have a luminosity limit anyway; astronomical surveys are defined by what they exclude, not what they include.) Instead of trying to perform the impossible feat of avoiding 'biases', one must understand and quantify one's selection effects, which is the key advantage of these surveys over heterogeneous compilations.
Having described some of the early results from the XTENDED PS program, the focus will move onto the Herschel ATLAS survey. This is in some ways a complementary project to the AKARI all-sky survey, and has mapped 1% of the sky to almost as shallow a depth as Herschel can achieve. Herschel ATLAS covers local as well as high redshift galaxies, and we'll go from low redshift to high redshift via gravitational lensing. Staying on the lensing theme, this review will discuss recent evidence for the links between the Herschel and AKARI populations that have been found using the cluster lens Abell 2218. Finally, this paper reviews the prospects for Herschel data in the AKARI NEP-Deep field, in the light of the current results from Herschel ATLAS and other surveys.
Figure 2: M31 in the optical (left; NASA APOD, Jason Ware) and in the AKARI all-sky survey (right, with 160, 140, 90 µm as RGB respectively). The diameter of M31 is approximately 30 kpc, and 1 • ≡ 13.6 kpc. Note the star-forming ring ∼ 10 kpc from the centre, and the warm central dust (also associated with a CO deficit).
THE AKARI XTENDED PRIORITISED STUDY

The goal of the XTENDED program is the scientific exploitation of an all-sky far-infrared atlas of resolved galaxies from AKARI. Unlike e.g. the heterogeneous SINGS survey in which the selection criteria are difficult to quantify (Kennicutt et al., 2003), the sample is limited purely by far-infrared surface brightness. The project will test the frequency of occurrence of cool extended dust components undetectable by IRAS, construct spatially-resolved radio/far-infrared correlations, map every known blue compact dwarf galaxy, and help determine the extent to which integrated properties of galaxies are reliable measures of the mean physical conditions within them. Work on the whole sample is still ongoing, so this paper will discuss only a few key targets. There are rich seams to be mined. Figure 1 compares AKARI diffuse maps and IRAS images of the prototypical starburst M82, plus its companion M81. What is immediately clear is the improved angular resolution of AKARI, with the arms of M81 clearly discernible. AKARI succeeds in resolving the dust emission in many thousands of local galaxies. Figure 2 shows another old friend, M31, at optical and far-infrared wavelengths. The morphology agrees with ISO and Spitzer far-infrared observations (Haas et al., 1998; Gordon et al., 2006). M31 has the interesting property that 90% of the far-infrared emission is not associated with star formation. There is a higher temperature knot in the cold dust distribution at the centre of M31, associated with a deficit in CO, and there is also a star-forming ring about 10 kpc from the centre. The diameter of M31 is about 30 kpc in this image.
Figure 3 presents optical and far-infrared data on NGC 253, which has far-infrared emission from a superwind driven by its starburst. However note the yellow pixel in the centre, caused by saturation; clearly, care needs to be taken with far-infrared photometry of bright galaxies in the AKARI all-sky survey.
As a sanity check of the diffuse map fluxes, large aperture photometry measurements (4.75 ′ radius) were taken of Arp 220 in the all-sky diffuse maps. The flux measurements were 72, 74, 51, 68 Jy (all ±1 Jy) in the filters N60, WIDE-S, WIDE-L and N160 respectively, i.e. at 65 µm, 90 µm, 140 µm and 160 µm respectively. These are up to a factor of two fainter than catalogued fluxes from ISO and IRAS, which we believe is due to detector saturation in AKARI and/or incorrect flagging of peak fluxes as glitches (see e.g. Yamamura et al., 2010), although no saturation features such as that in Figure 3 were obviously visible in the images in any band. As a result we will restrict our discussion to targets with predicted fluxes < 50 Jy per beam.
AKARI OBSERVATIONS OF THE HERSCHEL REFERENCE SURVEY

So far we have covered only a few anecdotal examples of well-known galaxies. There is only so far one can go with anecdotal evidence, so the next stage is to move to large samples with well-understood selection criteria. The first sample we will discuss is the Herschel Reference Survey (Boselli et al., 2010). This is a guaranteed time programme on Herschel to map about 300 local galaxies. It is volume limited and K-band limited, implying an effective minimum stellar mass selection depending on distance. It has many science goals, including a census of dust along the Hubble sequence, the connections between star formation and dust emission, the global extinction in galaxies as a function of type, the presence of dust in ellipticals, and the cycle of dust destruction and creation. Ciesla et al., (2012) present submm data for galaxies in this sample. The survey only obtained submm Herschel data with the SPIRE instrument, and as a result over a third of their targets lack far-infrared data entirely. The AKARI all-sky survey is ideal to address this need.
As part of the XTENDED PS program, fluxes for the Herschel Reference Survey were extracted in a 300′′ × 300′′ box around each galaxy. Sky subtraction was achieved by iteratively estimating the mode of the pixel flux histogram in the (1.5°)² postage stamps around each target, rejecting > 3σ or < −3σ outliers. Flux errors were estimated by convolving the postage stamp with a kernel equivalent to the photometric aperture, then iteratively measuring the variance of the pixel count histogram of the smoothed image.
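In outline, that photometry procedure looks like the following sketch (our reconstruction with numpy/scipy: the sigma-clipped median standing in for the histogram mode, the array shapes, and the demo source are all assumptions):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def aperture_flux(stamp, box_pix):
    """stamp: 2-D postage stamp (Jy/pixel); box_pix: aperture side in pixels."""
    sky = stamp.ravel()
    for _ in range(5):                       # iterative +/-3 sigma clipping
        m, s = np.median(sky), sky.std()
        sky = sky[np.abs(sky - m) < 3 * s]
    sky_level = np.median(sky)               # proxy for the histogram mode
    cy, cx = stamp.shape[0] // 2, stamp.shape[1] // 2
    h = box_pix // 2
    flux = (stamp[cy - h:cy + h, cx - h:cx + h] - sky_level).sum()
    # flux error: variance of the stamp convolved with an aperture-sized kernel
    summed = uniform_filter(stamp - sky_level, size=box_pix) * box_pix ** 2
    vals = summed.ravel()
    for _ in range(5):
        m, s = np.median(vals), vals.std()
        vals = vals[np.abs(vals - m) < 3 * s]
    return flux, vals.std()

rng = np.random.default_rng(0)
stamp = rng.normal(0.0, 1e-3, (360, 360))
stamp[170:190, 170:190] += 0.05              # fake compact source
print(aperture_flux(stamp, box_pix=20))
```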
Careful sky subtraction was found to be important for the photometry, but there are still unresolved problems. Figure 4 shows two example SEDs from the combined Herschel and AKARI data from the Herschel Reference Survey. Clearly at least in the case of NGC 4100 (and in other targets not shown), there are still unresolved systematics in the fluxes; the discrepancies with the models suggest the systematics are no more than ∼ 30%. With this caveat in mind, we estimated dust masses assuming single temperature greybody fits (see e.g. Dunne et al., 2011). Clearly, galaxies do not have single temperatures, or even a few discrete temperatures; this is very much a "spherical cow" approximation. In some cases, an excess over the single-temperature fits at shorter AKARI wavelengths requires the existence of warmer dust phases. Future work will make use of radiative transfer modelling. For the present purposes, the fits are used only to provide order of magnitude estimates for dust masses, which will in any case be dominated by the cooler components. The grey body fitting assumed an emissivity of β = 1.5. Dust mass estimates are typically a few ×10 7 M ⊙ in this sample. Work is ongoing to improve the photometry in this sample.
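A minimal sketch of such a single-temperature grey-body fit with β = 1.5 is shown below (the synthetic fluxes, amplitude parametrisation, and band list are our illustrative assumptions; converting the fitted amplitude to a dust mass additionally needs a dust opacity κ and a distance):

```python
import numpy as np
from scipy.optimize import curve_fit

h, k, c = 6.626e-34, 1.381e-23, 2.998e8      # SI constants
beta = 1.5                                    # assumed emissivity index

def greybody(wavelength_m, amplitude, T):
    """S_nu ~ nu^beta * B_nu(T); the amplitude absorbs units, solid angle
    and the kappa * M_dust / d^2 factors."""
    nu = c / wavelength_m
    b_nu = 2 * h * nu ** 3 / c ** 2 / (np.exp(h * nu / (k * T)) - 1)
    return amplitude * nu ** beta * b_nu

wl = np.array([65, 90, 140, 160, 250, 350, 500]) * 1e-6   # AKARI + SPIRE bands
rng = np.random.default_rng(1)
flux = greybody(wl, 1e-2, 22.0) * (1 + 0.05 * rng.standard_normal(wl.size))
popt, _ = curve_fit(greybody, wl, flux, p0=[1e-2, 20.0],
                    bounds=([0, 5.0], [1, 100.0]))
print(f"fitted dust temperature: {popt[1]:.1f} K")
```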
AKARI OBSERVATIONS OF THE JCMT NEARBY GALAXIES SURVEY

The JCMT Nearby Galaxies Survey (Wilson et al., 2012) is another major legacy survey, with the goal of resolving sub-kpc structure in over 150 nearby galaxies. The goals, amongst others, are to see how morphology and environment affect star formation in nearby galaxies. The survey is covering the Spitzer SINGS sample (Kennicutt et al., 2003), in order to benefit from the multi-wavelength legacy data in SINGS. However, SINGS has a somewhat heterogeneous selection, so the JCMT survey is also observing neutral-hydrogen-selected samples in the field and in Virgo. Again, the trouble with robust statistical selection is that one is often left with not much supplementary data. In particular, the field sample lacks far-infrared data. As with the Herschel Reference Survey, the AKARI diffuse maps can provide this far-infrared data, sampling the peaks of the bolometric outputs of the galaxies. Figure 5 shows an example target from this survey, seen in the AKARI all-sky survey. Note that AKARI's angular resolution has the important advantage over IRAS that it can decouple flux from the galaxy from foreground cirrus structure.

Figure 4: SEDs of two galaxies in the Herschel Reference Survey. The > 200 µm data is from Herschel SPIRE, while < 200 µm is from AKARI. The blue line is a grey body fit assuming an emissivity of β = 1.5. Note that the AKARI data clearly still has sources of systematic error of the order of ∼ 30% of unknown origin, but possibly related to sky subtraction.
Figure 5: AKARI all-sky 90 µm image of NGC 1156, from the JCMT Nearby Galaxies Survey. Note the surrounding cirrus structure. Lower angular resolution data would have difficulty decoupling the foreground cirrus and the background galaxy. The image is 1.5° on a side.

Fluxes at 90 µm (WIDE-S filter) were extracted for the whole JCMT nearby galaxies field sample following the same procedure as for the Herschel Reference Survey above, safely encompassing the optical diameters of the galaxies. Figure 6 shows the CO(3-2) luminosities compared to the far-infrared luminosities of homogeneously-selected high-redshift galaxies from Iono et al., (2009), plus the AKARI measurements of the JCMT sample. The line is the extrapolation from the high-redshift populations, and is not a fit to the JCMT data. This extends the Iono et al., relation over five orders of magnitude in homogeneously-selected samples. The overall relation is not quite linear, implying that the JCMT sample has far-IR to CO ratios lower than the ULIRGs in general. One can convert the far-infrared to CO luminosity ratio into a star formation rate to molecular gas mass ratio, resulting in a timescale. From this, one can derive a molecular gas depletion timescale of about 3 Gyr in the JCMT sample. The gas depletion timescale in submm galaxies is more like 0.5 Gyr, once one has remembered that different CO to H2 conversions are appropriate for ultraluminous infrared galaxies and submm galaxies.
LOCAL GALAXIES AND LENSES IN HERSCHEL ATLAS

The Herschel Astrophysical Terahertz Large Area Survey (H-ATLAS, Eales et al., 2010) is perhaps Herschel's answer to the AKARI all-sky survey. It has mapped about 1% of the sky to almost the submm confusion limits, discovering NGC4725 to have an "Andromeda analogue" dust ring. Many hundreds of thousands of local galaxies are expected in the survey; see e.g. Baes et al., 2010 for an early example of a nearby galaxy with components of highly obscured star formation which nonetheless contribute little to the global extinction in the galaxy. AKARI detections of Herschel ATLAS local galaxies are discussed at this conference in Pearson et al., (this volume). In roughly equal numbers at 500 µm to local galaxies, H-ATLAS also made the landmark discovery of a large population of strong gravitational lenses (Negrello et al., 2010), ultimately caused by the steep intrinsic 500 µm bright number counts of high-redshift galaxies. There are many examples of submm galaxies with z > 2 far-infrared photometric redshifts but identifications with foreground spirals or ellipticals; not every obvious local galaxy optical ID is the site of the observed far-infrared emission. HST, IRAM, Herschel, SMA and other follow-ups are all ongoing.
AKARI DEEP FIELDS: LINKING THE LOCAL AND HIGH-REDSHIFT UNIVERSE

A further strong gravitational lensing system, studied by both AKARI and Herschel, is the galaxy cluster Abell 2218. At the 2009 AKARI conference we had just extended the 15 µm galaxy source counts an order of magnitude fainter than any other surveys, exploiting the lensing magnifications (Hopwood et al., 2010). Since then, Hopwood et al., (in prep.) have performed bespoke analysis of the SPIRE submm data in the field, stacked the submm fluxes of the AKARI 15 µm-selected population, and found the 15 µm population is responsible for ∼ 40 − 30% of the FIRAS backgrounds at 250 − 500 µm, with the uncertainties dominated by the 5σ infrared background measurements.
It should not be altogether surprising that there are such strong links between the mid-infrared and submm populations. To demonstrate this, we return finally to the first local galaxy discussed in this paper, M82. Figure 7 shows redshifted M82 template SEDs normalised to the 500 µm confusion limit compared to the AKARI NEP-Deep mid-infrared depths. The optical stellar populations will obviously not be representative of the high-redshift population, but the mid-infrared fluxes should be realistic. The AKARI depths are clearly easily deep enough to detect the 500 µm population with this SED.
The NEP field has already been observed with Herschel in a 9 deg² map, with more data still scheduled in priority 2 time. SCUBA-2 and LOFAR data are also imminent, and the field has been the target of many other multi-wavelength campaigns. In the longer term, the Euclid mission (launch 2019) will, in addition to its 20,000 deg² wide survey, devote 40 deg² to deep cosmological fields. The location or locations of these 40 deg² are determined partly by the need to cover famous fields with lots of multi-wavelength legacy data, and partly by the constraints of the scanning strategy of the mission. The latter in particular very strongly favours the ecliptic poles, which are the natural deep field locations for many space observatory and survey missions.
CONCLUSIONS
Preliminary photometry for galaxies in the Herschel Reference Survey demonstrates dust masses typically a few ×10⁷ M⊙. These would be unmeasurable without the AKARI all-sky survey. However, systematic errors of unknown origin of the order ∼ 30% are still present in at least some photometric measurements; work is ongoing to improve this photometry. Preliminary photometry for galaxies in the JCMT Nearby Galaxies Survey extends the far-infrared:CO(3-2) luminosity-luminosity correlation down five orders of magnitude in homogeneously-selected samples. The molecular gas depletion timescale for JCMT Nearby Galaxies Survey targets is typically ∼ 3 Gyr, about an order of magnitude longer than in high-redshift submm-selected galaxies. The AKARI ultra-deep 15 µm population contributes about a third or more of the extragalactic background light at 250 − 500 µm.

Figure 7: M82 local template, normalised to the 500 µm confusion limit, from z = 0 increasing in steps of δz = 0.5 (blue to red). Also shown are the SPIRE confusion limits, the NEP-Deep AKARI depths, and the PACS depths assuming all priority 2 scheduled data is obtained. Note that the AKARI depths are more than sufficient to detect the submm-selected population.
Figure 1: M82 (top) and M81 (bottom) in DSS B band (left), IRAS 60 µm (centre) and in the AKARI all-sky survey (right, with 160, 140, 90 µm as RGB respectively). Note the arms in M81 resolved by AKARI. The two galaxies are separated by about 37′, or (38 ± 5) kpc in projection.

Figure 3: Optical image of NGC 253 (left; NASA APOD, CFHT) compared to the AKARI all-sky survey image (right) with 160, 140, 90 µm rendered as RGB respectively. The optical diameter is 27.5′ ≡ (28 ± 2) kpc. Note the superwind and the central saturated pixel.

Figure 6: CO(3-2) vs. far-infrared luminosity-luminosity correlation for the homogeneously-selected Iono et al., (2009) data (luminous infrared galaxies: yellow, submm-selected galaxies: cyan, quasars: magenta, Lyman-break galaxies: blue), plus the AKARI 90 µm data on the JCMT nearby galaxies (spirals: green filled circles, irregulars: green open circles, others: brown open circles). The red power-law line is not quite a linear relation, and is an extrapolation from the Iono sample, not a fit to the JCMT data. Photometric errors in the JCMT sample are not shown for clarity but are consistent with the scatter.
ACKNOWLEDGEMENTS

We thank the anonymous referee for helpful comments. SS thanks STFC for financial support under grants ST/G002533/1 and ST/J001597/1. The Second AKARI Conference was supported by the BK21 program to Seoul National University by the Ministry of Education, Science and Technology, the Center for Evolution and Origin of Universe (CEOU) at Seoul National University, National Research Foundation Grant No. 2006-341-C00018 to HMLee, Astronomy Program, Seoul National University, the Nagoya University Global COE Program: Quest for Fundamental Principles in the Universe, Division of Particle and Astrophysical Science, Nagoya University, and the Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
Baes, M., et al., 2010, Herschel-ATLAS: The dust energy balance in the edge-on spiral galaxy UGC 4754, A&A, 518, L39
Boselli, A., et al., 2010, The Herschel Reference Survey, PASP, 122, 261
Ciesla, L., et al., 2012, Submillimetre Photometry of 323 Nearby Galaxies from the Herschel Reference Survey, A&A in press (arXiv:1204.4726)
Dunne, L., et al., 2011, Herschel-ATLAS: rapid evolution of dust in galaxies over the last 5 billion years, MNRAS, 417, 510
Eales, S. A., et al., 2010, The Herschel ATLAS, PASP, 122, 499
Gordon, K. D., et al., 2006, Spitzer MIPS Infrared Imaging of M31: Further Evidence for a Spiral-Ring Composite Structure, ApJL, 638, 87
Haas, M., et al., 1998, Cold dust in the Andromeda Galaxy mapped by ISO, A&A, 338, L33
Hopwood, R., et al., 2010, Ultra Deep Akari Observations of Abell 2218: Resolving the 15 µm Extragalactic Background Light, ApJL, 716, 45
Iono, D., et al., 2009, Luminous Infrared Galaxies with the Submillimeter Array. II. Comparing the CO (3-2) Sizes and Luminosities of Local and High-Redshift Luminous Infrared Galaxies, ApJ, 695, 1537
Kennicutt, R. C., et al., 2003, SINGS: The SIRTF Nearby Galaxies Survey, PASP, 115, 928
Negrello, M., et al., 2010, The Detection of a Population of Submillimeter-Bright, Strongly Lensed Galaxies, Science, 330, 800
Wilson, C. D., et al., 2012, The JCMT Nearby Galaxies Legacy Survey - VIII. CO data and the L_CO(3−2) − L_FIR correlation in the SINGS sample, MNRAS in press (arXiv:1206.1629)
Yamamura, I., et al., 2010, AKARI/FIS All-Sky Survey Bright Source Catalogue Version 1.0 Release Note
| [] |
[
"On the Strategyproofness of the Geometric Median",
"On the Strategyproofness of the Geometric Median"
] | [
"El-Mahdi El-Mhamdi \nÉcole Polytechnique EPFL\nEPFL Calicarpa\nCalicarpa, Tournesol\n",
"Sadegh Farhadkhani \nÉcole Polytechnique EPFL\nEPFL Calicarpa\nCalicarpa, Tournesol\n",
"Rachid Guerraoui \nÉcole Polytechnique EPFL\nEPFL Calicarpa\nCalicarpa, Tournesol\n",
"Lê-Nguyên Hoang \nÉcole Polytechnique EPFL\nEPFL Calicarpa\nCalicarpa, Tournesol\n"
] | [
"École Polytechnique EPFL\nEPFL Calicarpa\nCalicarpa, Tournesol",
"École Polytechnique EPFL\nEPFL Calicarpa\nCalicarpa, Tournesol",
"École Polytechnique EPFL\nEPFL Calicarpa\nCalicarpa, Tournesol",
"École Polytechnique EPFL\nEPFL Calicarpa\nCalicarpa, Tournesol"
] | [] | The geometric median, an instrumental component of the secure machine learning toolbox, is known to be effective when robustly aggregating models (or gradients), gathered from potentially malicious (or strategic) users. What is less known is the extent to which the geometric median incentivizes dishonest behaviors. This paper addresses this fundamental question by quantifying its strategyproofness. While we observe that the geometric median is not even approximately strategyproof, we prove that it is asymptotically α-strategyproof : when the number of users is large enough, a user that misbehaves can gain at most a multiplicative factor α, which we compute as a function of the distribution followed by the users. We then generalize our results to the case where users actually care more about specific dimensions, determining how this impacts α. We also show how the skewed geometric medians can be used to improve strategyproofness. | null | [
"https://export.arxiv.org/pdf/2106.02394v4.pdf"
] | 235,352,591 | 2106.02394 | 1421c6ee0fe4a6b98e19a293d2ac3df86e95a653 |
On the Strategyproofness of the Geometric Median
El-Mahdi El-Mhamdi
École Polytechnique EPFL
EPFL Calicarpa
Calicarpa, Tournesol
Sadegh Farhadkhani
École Polytechnique EPFL
EPFL Calicarpa
Calicarpa, Tournesol
Rachid Guerraoui
École Polytechnique EPFL
EPFL Calicarpa
Calicarpa, Tournesol
Lê-Nguyên Hoang
École Polytechnique EPFL
EPFL Calicarpa
Calicarpa, Tournesol
On the Strategyproofness of the Geometric Median
The geometric median, an instrumental component of the secure machine learning toolbox, is known to be effective when robustly aggregating models (or gradients), gathered from potentially malicious (or strategic) users. What is less known is the extent to which the geometric median incentivizes dishonest behaviors. This paper addresses this fundamental question by quantifying its strategyproofness. While we observe that the geometric median is not even approximately strategyproof, we prove that it is asymptotically α-strategyproof : when the number of users is large enough, a user that misbehaves can gain at most a multiplicative factor α, which we compute as a function of the distribution followed by the users. We then generalize our results to the case where users actually care more about specific dimensions, determining how this impacts α. We also show how the skewed geometric medians can be used to improve strategyproofness.
INTRODUCTION
There has recently been a growing interest in collaborative machine learning to efficiently utilize the ever-increasing amount of data and computational resources (McMahan et al., 2017; Kairouz et al., 2021; Abadi et al., 2015). Collaborative learning gathers information from multiple users (e.g., gradient vectors (Zinkevich et al., 2010), local model parameters (Dinh et al., 2020; Farhadkhani et al., 2022b) or users' preferences (Noothigattu et al., 2018; Allouah et al., 2022)) and typically summarizes it in a single vector. While averaging is the most widely used method for aggregating multiple vectors into a single vector (Polyak and Juditsky, 1992), it suffers from severe security flaws: averaging can be arbitrarily manipulated by a single strategic user (Blanchard et al., 2017).
The geometric median is a promising "robust" alternative to averaging. It has been widely used in collaborative learning as it is a provably good approximation of the average (Minsker, 2015) and it is robust to a minority of malicious users (Lopuhaa and Rousseeuw, 1989). A large body of research known as "Byzantine learning" (Blanchard et al., 2017;Chen et al., 2017;El-Mhamdi et al., 2018;Rajput et al., 2019;Alistarh et al., 2018) uses the geometric median to ensure safe learning despite the presence of participants with arbitrarily malicious behavior (Farhadkhani et al., 2022a;Karimireddy et al., 2022;Acharya et al., 2022;Wu et al., 2020;So et al., 2021;Gu and Yang, 2021;Pillutla et al., 2022;Farhadkhani et al., 2022b). Interestingly, the geometric median also satisfies the fairness principle "one voter, one vote with a unit force" (see Section 2.2), making it ethically appealing.
In this paper, we study the extent to which the geometric median incentivizes strategic manipulations. Ideally, we would like the geometric median to be strategyproof (Gibbard, 1973; Satterthwaite, 1975; Brandt et al., 2016), i.e., we want it to be in each voter's best interest to report their true preferred vector. Put differently, honesty would ideally be a dominant strategy (Chung and Ely, 2007). This is very different from Byzantine learning, which only focuses on the resilience of the training, usually assuming a majority of honest users. Conversely, we consider the more realistic case where every user wants to bias the algorithm towards their specific target state. Such considerations are critical for high-stake, life-endangering applications such as content moderation and recommendation (Yue, 2019; Whitten-Woodring et al., 2020), in which different people have diverging preferences over what should be removed (Ribeiro et al., 2020; Bhat and Klein, 2020), accompanied with a warning message (Mena, 2020), and be promoted at scale (Michelman, 2020). Clearly, activists, companies and politicians all want to bias algorithms to promote certain views, products or ideologies (Hoang, 2020). These entities should thus be expected to behave untruthfully, if they can easily game the algorithms with fabricated behaviors. Now, assuming that each user wants to minimize the distance between the computed geometric median and their target vector, it is actually known that the geometric median fails to be strategyproof (Kim and Roush, 1984) (see Figure 1). However, raw strategyproofness is a binary worst-case analysis. In practice, optimizing strategic reporting may be costly (e.g., information gathering and computational costs, and the risk of being exposed), and hence may not be profitable if the potential gain is small. This prompts us to quantify the strategyproofness of the geometric median: how much can a strategic voter gain by misreporting their preferred vector (Lubin and Parkes, 2012; Wang et al., 2015; Han et al., 2015)?
Contributions. Our first contribution is to show that the geometric median fails to guarantee approximate strategyproofness. More precisely, for any α, we show that there exists a configuration where a strategic voter can gain a factor α by behaving strategically rather than truthfully.
Our main contribution is then to study the more specific case where voters' reported vectors come independently from an underlying distribution. We prove that, in the limit where the number of voters is large enough, and with high probability, the geometric median is indeed α-strategyproof. To do so, we introduce and formalize the notion of asymptotic strategyproofness with respect to the distribution of reported vectors. We show how to compute the bound α as a function of this distribution.
Our first two contributions apply to the case where a voter wants to minimize the Euclidean distance between the geometric median and their target vector. Essentially, this amounts to saying that the voters' preferences are isotropic, i.e., all dimensions have the same importance for the voters. However, in practical applications, a voter may care a lot more about certain dimensions than others. Our third contribution is a generalization to this setting, proving that, in a rigorous sense, the geometric median becomes less strategyproof if some dimensions are both more polarized and more important than others.
As a fourth important contribution, we show how strategyproofness can be improved by introducing and analyzing the skewed geometric median. Intuitively, this corresponds to skewing the feature space using a linear transformation Σ, computing the geometric median in the skewed space, and de-skewing the computed geometric median by applying Σ −1 . In essence, the skewed geometric median can be used to weaken pulls along polarized dimensions, and strengthen pulls along others. This helps limit the incentives to exaggerate preferences along more polarized dimensions, by intuitively giving voters more voting power along orthogonal dimensions "at the same cost".
Background. Classically called the Fermat-Weber solution (Brimberg, 2017), the geometric median solves a version of the widely studied (optimal) facility location problem (Hansen et al., 1985;Walsh, 2020;Lu et al., 2009;Feigenbaum and Sethuraman, 2015;Tang et al., 2020;Escoffier et al., 2011;Sui and Boutilier, 2015;Kyropoulou et al., 2019;Fotakis and Tzamos, 2013), as it minimizes the sum of distances of the agents to the chosen location. In one dimension, the geometric median coincides with the median, which was shown (Moulin, 1980) to be (group) strategyproof. But in higher dimensions, the geometric median is known to be not strategyproof (Kim and Roush, 1984). To the best of our knowledge, however, our paper is the first to analyze the geometric median in high dimension, with weakened forms of strategyproofness like (asymptotic) α-strategyproofness. As far as we know, we are also the first to investigate skewed geometric medians and skewed preferences.
Roadmap. The rest of the paper is organized as follows. Section 2 formally defines different notions of strategyproofness and the geometric median aggregation rule. Section 3 proves that this rule is not α-strategyproof, whilst Section 4 proves that it is asymptotically α-strategyproof. In Section 5, we generalize our result to non-isotropic voters' preferences and to the skewed geometric median. We provide a simple experiment in Section 6. Section 7 discusses related work, and Section 8 concludes. Due to space limitations, most of the proofs and some auxiliary results are provided in the appendices.
MODEL
We consider 1 + V voters. Each voter v ∈ [V] ≜ {1, . . . , V} reports a (potentially fabricated) vector θ_v ∈ R^d. We denote by ⃗θ ≜ (θ₁, . . . , θ_V) the family of other voters' reported vectors. We then, without loss of generality, analyze the incentives of voter 0. We assume that voter 0 has a preferred target vector t ∈ R^d, but they report a potentially different, strategically crafted, vector s ∈ R^d.
The Many Faces of Strategyproofness
We define the strategic gain as the best multiplicative gain that voter 0 can obtain by misreporting their preference, i.e. by reporting s instead of t. Strategyproofness bounds the maximal strategic gain.
Definition 1 (α-strategyproofness). VOTE is α-strategyproof if, for any others' vectors ⃗θ ∈ R^{d×V}, any target vector t ∈ R^d and any strategic vote s ∈ R^d, the strategic gain is at most 1 + α, i.e.,

∀ ⃗θ, t, s:  ∥VOTE(t, ⃗θ) − t∥₂ ≤ (1 + α) ∥VOTE(s, ⃗θ) − t∥₂.
Smaller values of α yield stronger guarantees. If α = 0, then we simply say that VOTE is strategyproof.
The opposite of strategyproofness is an arbitrarily manipulable vote, which we define as follows.
Definition 2 (Arbitrarily manipulable). VOTE is arbitrarily manipulable by a single voter if, for any others' vectors ⃗ θ ∈ R d×V and any target vector t ∈ R d , there exists s ∈ R d such that VOTE(s, ⃗ θ) = t.
It is possible for a vector aggregation rule to be neither α-strategyproof nor arbitrarily manipulable. In fact, we show that this is the case for the geometric median. This remark calls for more subtle definitions of strategyproofness. In particular, it may be unreasonable to demand α-strategyproofness for all other voters' inputs ⃗θ ∈ R^{d×V} (this is known as dominant strategy incentive compatibility). In practice, other voters are usually expected to report some vectors more often than others. This motivates us to consider an alternative high-probability definition of α-strategyproofness, taking into account the distribution of vectors. We thus introduce and study asymptotic α-strategyproofness. To define this notion, we first assume that other voters' vectors are drawn independently from some distribution θ̃ over R^d. Asymptotic strategyproofness then corresponds to strategyproofness in the limit where V is large enough.
Definition 3 (Asymptotic α-strategyproofness). VOTE is asymptotically α-strategyproof if, for any ε, δ > 0, there exists V₀ ≥ 1 such that, as long as there are V ≥ V₀ other voters whose reported vectors are drawn independently from distribution θ̃, then with probability at least 1 − δ, for any target vector t ∈ R^d and any strategic vote s ∈ R^d, the strategic gain is bounded by 1 + α + ε, i.e.,

P_{⃗θ ∼ θ̃^V} [∀t, s : E(α + ε, t, s)] ≥ 1 − δ,

where E(α + ε, t, s) is the event

∥VOTE(t, ⃗θ) − t∥₂ ≤ (1 + α + ε) ∥VOTE(s, ⃗θ) − t∥₂.
If α = 0, we say that VOTE is asymptotically strategyproof.
Note that this definition implicitly depends on the distribution θ̃ of voters' inputs. In fact, we prove that the geometric median is asymptotically α-strategyproof, for a value of α that we derive from the distribution θ̃.
Finally, we also study the more general case of non-isotropic preferences. To model this, we replace the Euclidean norm by the S-Mahalanobis norm, for some positive definite matrix S ≻ 0, which is given by ∥x∥_S ≜ ∥Sx∥₂. Intuitively, the eigenvectors with larger eigenvalues of S represent the directions that matter more to the voter. Now, if voter 0 has an S-skewed preference, then they aim to minimize the S-Mahalanobis norm between the result of VOTE(s, ⃗θ) and the target vector t. This leads us to define strategyproofness for skewed preferences as follows.

Definition 4. VOTE is α-strategyproof for an S-skewed preference if, for any others' vectors ⃗θ ∈ R^{d×V}, any target vector t ∈ R^d and any strategic vote s ∈ R^d, the maximal strategic S-skewed gain is at most 1 + α, i.e.,

∀t, s:  ∥VOTE(t, ⃗θ) − t∥_S ≤ (1 + α) ∥VOTE(s, ⃗θ) − t∥_S.
This notion can then be straightforwardly adapted to define asymptotic α-strategyproofness.
The Geometric Median
In this paper, we study the strategyproofness property of a particular VOTE, i.e., the geometric median. It can be defined for 1 + V voters using the average of distances between a vector z and the reported vectors:
L(s, ⃗θ, z) ≜ (1/(1 + V)) ( ∥z − s∥₂ + Σ_{v∈[V]} ∥z − θ_v∥₂ ).
We can now precisely define the geometric median. Definition 5. A geometric median GM operator is a function R d×(1+V ) → R d that outputs a minimizer of this average of distances, i.e., for any inputs s ∈ R d and ⃗ θ ∈ R d×V , we must have GM(s, ⃗ θ) ∈ arg min z∈R d L(s, ⃗ θ, z).
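Definition 5 does not come with a closed form; in practice, the geometric median is typically computed iteratively. Below is a minimal sketch of Weiszfeld's classical fixed-point iteration (our own illustration, not code from the paper; the function name and tolerances are our choices). The later snippets reuse it.

```python
import numpy as np

def geometric_median(points, iters=2000, eps=1e-12):
    """Weiszfeld's fixed-point iteration for arg min_z sum_v ||z - theta_v||_2.

    `points` is an (n, d) array of reported vectors. Illustrative sketch only.
    """
    z = points.mean(axis=0)                  # initialize at the average
    for _ in range(iters):
        dist = np.linalg.norm(points - z, axis=1)
        dist = np.maximum(dist, eps)         # guard against landing on a vote
        w = 1.0 / dist                       # each voter pulls with unit force
        z_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(z_new - z) < eps:
            break
        z = z_new
    return z
```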
In dimension d ≥ 2, the uniqueness of GM(s, ⃗θ) can be guaranteed when all vectors do not lie on a 1-dimensional line (Proposition 4 in Appendix A.2). Interestingly, the geometric median can be regarded as the result of a dynamic process where each voter pulls a point z towards their preferred vector with a unitary force. The geometric median is the equilibrium point, where all forces acting on z cancel out. It thus verifies the fairness principle "one voter, one vote with a unit force". A formal discussion is provided in Appendix A.1.

Figure 1: A simple example where the geometric median fails to be strategyproof. This example is easy to analyze in the limit where θ₂ is infinitely far on the right, in which case its pull is always towards the right. Since the unit pulls of all voters must cancel out, there must be a third of a turn between any two unit pulls. This shows why, as the strategic voter reports s rather than their target vector t, the geometric median moves up the dotted line, closer to t.
MANIPULABILITY AND STRATEGYPROOFNESS
While the average is arbitrarily manipulable by a single voter (Blanchard et al., 2017), the geometric median is robust even to a collusion of a strict minority of voters. However, we prove that the geometric median is not (even approximately) strategyproof in the general case.
The Geometric Median is Not Arbitrarily Manipulable
As opposed to the average, a strategic voter cannot arbitrarily manipulate the geometric median. This property is sometimes known as Byzantine resilience in distributed computing, or as statistical breakdown in robust statistics.
Here, we state it in the terminology of computational social choice, and we consider a slightly more general setting than individual manipulation. Namely, we consider group manipulation, by allowing a set of voters to collude. Even then, strategic voters can at most have a bounded impact. The proof of this result, which is adapted from Lopuhaa and Rousseeuw (1989), is given in Appendix B.1.

Proposition 1 (Lopuhaa and Rousseeuw (1989)). The geometric median is not arbitrarily manipulable by any minority of colluding voters.
This result shows that a minority of strategic voters whose target vectors differ a lot from a large majority of other voters' reported vectors do not have full control over the output of the geometric median.
The Geometric Median is Not α-Strategyproof
The (geometric) median is slightly ill-behaved in dimension 1, when 1 + V is even. Typically, if V = 1, s = t = 0 and θ₁ = 1, then any point between 0 and 1 is a geometric median (according to our definition). A common solution for this case is to take the middle point of the interval of the middle vectors. However, this solution now fails to be strategyproof. Indeed, voter 0 could now obtain GM(s, θ₁) = t by reporting s = −1. To retrieve strategyproofness in this setting, Moulin (1980) essentially proposed to add one (or any odd number of) fictitious voters. But, in higher dimensions, even when it is perfectly well-defined, the geometric median fails to guarantee strategyproofness. Figure 1 provides a simple proof of this, where voter 0 can gain by a factor of nearly 2√3/3 ≈ 1.15. Below, we prove a stronger result.
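First, though, the ≈ 2√3/3 gain of Figure 1 can be checked numerically with the Weiszfeld sketch from Section 2.2. The coordinates below are our own reconstruction of the figure's configuration (θ₂ is pushed far to the right to emulate the limit); they are assumptions, not values from the paper.

```python
import numpy as np

theta1 = np.array([0.0, 0.0])
theta2 = np.array([1e7, 0.0])                # emulates theta_2 infinitely far right
t = np.array([0.5, 1.5 * np.sqrt(3)])        # strategic voter's target

def gm_with(s):
    return geometric_median(np.stack([s, theta1, theta2]), iters=5000)

truthful = np.linalg.norm(gm_with(t) - t)

best = truthful                              # grid search for the best report s
for dx in np.linspace(-1.5, 1.5, 25):
    for dy in np.linspace(-1.5, 1.5, 25):
        best = min(best, np.linalg.norm(gm_with(t + np.array([dx, dy])) - t))

print(truthful / best)                       # approaches 2*sqrt(3)/3 ≈ 1.1547
```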
Theorem 1. Even under dim ⃗ θ ≥ 2, there is no value of α for which the geometric median is α-strategyproof.
This more precise result has important implications: if a voter knows they gain a lot by strategic misreporting, then they will more likely invest in, e.g., business intelligence, to optimize their (mis)reporting. Their reported preferences will then more likely diverge from their honest preferences. We sketch the proof of Theorem 1 below. The full proof is highly non-trivial and is given in Appendix B.2.
Sketch of proof. We study the achievable set A V , gathering all the possible values of the geometric median that a strategic voter can achieve by strategically choosing their reported vector. First we show that this set is the set
A_V ≜ { z ∈ R^d : ∃h ∈ ∇_z L(⃗θ, z), ∥h∥₂ ≤ 1/V },   (1)
of points z where the loss restricted to other voters v ∈ [V ] has a subgradient of norm at most 1/V (Lemma 8).
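Away from the votes θ_v, the subgradient in Equation (1) is single-valued and equals the average of the unit pulls u_{z−θ_v} (Lemma 1 in Appendix A), so membership in A_V can be tested directly. A minimal sketch (ours):

```python
import numpy as np

def in_achievable_set(z, thetas):
    """Test z in A_V: the mean of unit pulls at z has norm at most 1/V.

    Assumes z does not coincide with any theta_v, so the gradient of
    the V-voter loss is single-valued.
    """
    diffs = z - thetas                       # shape (V, d)
    units = diffs / np.linalg.norm(diffs, axis=1, keepdims=True)
    grad = units.mean(axis=0)                # gradient of L restricted to [V]
    return np.linalg.norm(grad) <= 1.0 / len(thetas)
```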
The proof of the theorem then corresponds to the example of Figure 2, where other voters' vectors are nearly one-dimensional. For a large number of voters, we prove that the achievable set is approximately a very flat ellipsoid, defined by a matrix H that has very different eigenvalues. We then show that the target vector t's pull is heavily skewed compared to the normal to the ellipsoid. This implies that voter 0 can obtain a significantly better geometric median by misreporting their target vector.
Figure 2: Illustration of the example where the geometric median fails to be α-strategyproof, for any value of α. Voters v ∈ [4] report vectors that are nearly one-dimensional. In the limit of a large number of voters, the achievable set for voter 0 is an ellipsoid. But the pull of voter 0's preferred vector turns out to be skewed compared to the normal to the ellipsoid. This means that voter 0 can obtain a significantly better geometric median by misreporting their preference.

Interestingly, on the positive side, the proof of Theorem 1 requires the strategic voter's target vector to take very precise locations to gain a lot by lying. Thus, while the geometric median has failure modes where some voters have strong incentives to misreport their preferences, in practice, such incentives are unlikely to be strong. On the negative side, our proof suggests the possibility of a vicious cycle. Namely, it underlines the fact that a strategic voter's optimal strategy is to report a vector that is closer to the subspaces where other voters' vectors mostly lie. These subspaces may be interpreted as the more polarized dimensions. As a result, if all voters behave strategically, we should expect the reported vectors to be even more flattened on these subspaces than voters' true target vectors. But then, if voters react strategically to the other voters' strategic votes, there are even more incentives to vote according to the one-dimensional line. In other words, the geometric median seems to initiate a vicious cycle where strategic voters are incentivized to escalate strategic behaviours, and this would lead them to essentially ignore all the dimensions of the vote except the most polarized one.
ASYMPTOTIC STRATEGYPROOFNESS
Our negative result of the previous section encourages us to weaken the notion of strategyproofness. We do so by replacing the bound on voters' strategic gains for all other voters' inputs with a bound for most other inputs. We assume that each voter v reports a vector θ_v drawn independently from a probability distribution θ̃. We then study the maximal strategic gain of voter 0, when there are many other voters whose reported vectors are obtained this way. Any bound α that holds with high probability when the number V of voters is sufficiently large guarantees what we call asymptotic α-strategyproofness (see Definition 3).

Throughout this section, we consider a given fixed distribution θ̃ of voters' reported vectors. Our main result relies on the following mild smoothness assumption about the distribution θ̃ of other voters' vectors, which is clearly satisfied by numerous classical probability distributions over R^d, like the normal distribution (with Θ ≜ R^d).

Assumption 1. There is a convex open set Θ ⊆ R^d, with d ≥ 5, such that the distribution θ̃ yields a probability density function p continuously differentiable on Θ, and such that P[θ ∈ Θ] = 1 and E∥θ∥₂ = ∫_{R^d} ∥θ∥₂ p(θ) dθ < ∞.
To simplify notations, we leave the dependence on the distribution implicit. For any number V ∈ N of other voters, we denote by ⃗θ_V ∈ R^{d×V} the random tuple of the V voters' reported vectors, and we define

L_{1:V}(z) ≜ (1/V) Σ_{v∈[V]} ∥z − θ_v∥₂,   (2)

and g_{1:V} ≜ GM(⃗θ_V), the random average of distances and the geometric median for the voters v ∈ [V]. We denote by L_{0:V}(s, z) and g_{0:V} ≜ GM(s, ⃗θ_V) the similar quantities that also include voter 0's strategic vote s, and by g†_{0:V} ≜ GM(t, ⃗θ_V) the truthful geometric median, which results from voter 0's truthful reporting of t.
Infinite Limit
Consider the limit where V → ∞. The distribution θ̃ defines its own average-of-distance function:

L_∞(z) ≜ E_{θ∼θ̃} [∥z − θ∥₂].   (3)
We say that g∞ is a geometric median of the distribution θ̃ if it minimizes the loss L∞. Under Assumption 1, the support of θ̃ is of full dimension d, which guarantees the uniqueness of the geometric median (Proposition 12 in Appendix C). We denote by H∞ ≜ ∇²L∞(g∞) the Hessian at the geometric median. The properties of this matrix will be central to the strategyproofness of the geometric median.

Remark 1 (on the smoothness assumption). Note that Assumption 1 is a mild technical assumption, which intuitively guarantees that, for a sufficiently large number of voters, the infinite-limit case will be approximately recovered. This will allow us to invoke some statistics of θ̃ to derive our strategyproofness bounds. In practice, assuming there are sufficiently many voters, θ̃ may be estimated by the empirical distribution of the reported vectors.
The Geometric Median is Asymptotically α-Strategyproof
One of our main results is that the geometric median is asymptotically α-strategyproof, for some appropriate value of α that depends on the skewness of the Hessian matrix H ∞ . We define the skewness of a positive definite matrix S by
SKEW(S) ≜ sup x̸ =0 ∥x∥ 2 ∥Sx∥ 2 x T Sx − 1 (4) = sup ∥u∥ 2 =1 ∥Su∥ 2 u T Su − 1 .
This quantity bounds the angle between a vector x and its linear transformation Sx. It is straightforward that SKEW(βS) = SKEW(S) for all β > 0. Also the identity matrix has no skewness (SKEW(I) = 0). Intuitively, the more S distorts the space, typically by having very different eigenvalues, the more skewed it is. In Section 4.4, we derive upper and lower bounds on SKEW. We can now present our main theorem.
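SKEW(S) can be estimated numerically by maximizing over random unit directions; sampling only yields a lower estimate of the supremum. The function below is our own illustrative sketch, reused in later snippets.

```python
import numpy as np

def skew_numeric(S, n_samples=200000, rng=None):
    """Monte Carlo lower estimate of SKEW(S) = sup_{||u||=1} ||Su||/(u^T S u) - 1."""
    rng = np.random.default_rng(rng)
    d = S.shape[0]
    u = rng.standard_normal((n_samples, d))
    u /= np.linalg.norm(u, axis=1, keepdims=True)   # random unit directions
    num = np.linalg.norm(u @ S.T, axis=1)           # ||S u||_2 for each row u
    den = np.einsum('ij,jk,ik->i', u, S, u)         # u^T S u for each row u
    return (num / den).max() - 1.0
```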
Figure 3: Illustration of our proof strategy. For a large number of voters, the achievable set A_V is approximately an ellipsoid. To derive the strategyproofness bounds, we study the orthogonal projection π₀ of the target vector t. Strategyproofness then depends on the angle between t − g† and t − π₀, which we derive from the skewness of the positive definite matrix that approximately defines the ellipsoid.
Theorem 2. Under Assumption 1, the geometric median is asymptotically SKEW(H ∞ )-strategyproof.
Intuitively, the more the distribution of the reported vectors is flattened along some dimensions, which can be interpreted as more polarized dimensions, the worse the strategyproofness bound is. The proof of this theorem is given in Appendix C.2. In the next section, we provide a brief proof sketch to help the readers follow our reasoning.
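Before the proof sketch, we note that Theorem 2's bound can be estimated from samples of θ̃ (cf. Remark 1): compute the geometric median of a large sample, form the empirical Hessian using the identity ∇²∥z∥₂ = (I − u_z u_zᵀ)/∥z∥₂ from Lemma 3 in Appendix A, and take its skewness. A sketch reusing the helpers above (ours; the sample sizes and the example distribution are arbitrary choices):

```python
import numpy as np

def strategyproofness_bound(samples):
    """Estimate alpha = SKEW(H_inf) from samples of the vote distribution."""
    g = geometric_median(samples)
    diffs = samples - g
    dists = np.linalg.norm(diffs, axis=1)
    units = diffs / dists[:, None]
    d = samples.shape[1]
    # H_inf = E[(I - u u^T) / ||theta - g||]; this follows from Lemma 3.
    H = np.mean([(np.eye(d) - np.outer(u, u)) / r
                 for u, r in zip(units, dists)], axis=0)
    return skew_numeric(H)

rng = np.random.default_rng(0)
# A Gaussian stretched along one axis: that axis is more "polarized",
# so the bound alpha should be larger than for an isotropic Gaussian.
stretched = rng.standard_normal((20000, 5)) * np.array([3.0, 1, 1, 1, 1])
print(strategyproofness_bound(stretched))
```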
Proof Techniques and Technical Challenges
The proof of Theorem 2 relies on the following steps:
1. Approximating the Achievable Set with an Ellipsoid. We first consider the infinite-voter case, assuming a strategic voter with a very small voting power ε, where the voting power of each voter is the magnitude of their pull compared to the sum of all pulls (see Section 2.2). By analyzing a Taylor approximation of the gradient of the other voters' loss function, we show that the achievable set for the strategic voter (defined in (1)) becomes approximately an ellipsoid as ε → 0. Now, as shown in Figure 3, since the ellipsoid is convex, the best-possible achievable point for the strategic voter is the orthogonal projection π₀ of the target vector t on the ellipsoid. By comparing the distance between t and π₀ to the distance between t and the geometric median g† obtained by a truthful reporting of t, we then obtain what the strategic voter can gain by behaving strategically in the infinite-voter case, where they have a very small voting power ε. Intuitively, the more flattened the ellipsoid, the more the strategic voter can gain; conversely, for a quasi-hyperspherical ellipsoid, the strategic voter cannot gain by misreporting.
2. Deriving a Finite-voter Case from the Infinite One.
To obtain meaningful strategyproofness guarantees, we consider the finite-voter case with a large (but not infinite) number of voters. Unfortunately, the finite-voter case is trickier than the infinite-voter case. To retrieve the strategyproofness bound, we need in addition to bound the divergence between the finite-voter case and the infinite-voter case. Fortunately, for V large enough, the voting power of a single strategic voter is small, which allows us to quasi-reduce the finite-voter case to the infinite-voter case. In fact, one important challenge of the proof is to leverage the well-behaved smoothness of the infinite-voter case to derive bounds for the finite-voter case, where singularities and approximation bounds make the analysis trickier. Indeed, while the infinite-voter loss function is smooth enough everywhere (under Assumption 1), the finite-voter loss function is not differentiable everywhere. At any point θ_v, it yields a nontrivial set of subgradients. This complicates the analysis, as we exploit higher-order derivatives.

To address this difficulty, we identify different regions around the infinite-voter geometric median where the finite-voter loss function is well-behaved enough, as shown in Figure 4. Namely, in high dimensions, assuming a smooth distribution θ̃, the distances between any two randomly drawn vectors are large. Concentration bounds allow us to guarantee that, with high probability, other voters' vectors θ_v are all far away from the infinite geometric median g∞ (Lemma 14 in the Appendix). This has two important advantages. First, it guarantees the absence of singularities in a region around g∞. Second, and more importantly, it allows us to control the variations of higher-order derivatives in this region (Lemma 16). This turns out to be sufficient to guarantee that the finite-voter geometric median is necessarily within this region.
3. Controlling the Largeness of the Third Derivative Tensor. Another challenge that we encountered was to guarantee that the achievable set in the finite-voter setting is convex. This condition is indeed critical to provide an upper bound on α, since it enables us to determine the strategic voter's optimal strategy by studying the orthogonal projection of the target vector onto the achievable set.
To prove this condition, we identify a sufficient condition, which involves the third derivative tensor of the finite-voter loss function (Lemmas 11 and 13). Fortunately, just as we manage to guarantee that the finite-voter geometric median is necessarily close enough to the infinite-voter geometric median (Lemma 15), using similar arguments based on concentration bounds, we successfully controlled the largeness of the third derivative tensor (Lemma 18). Therefore, for a large number of voters and with high probability, the achievable set is convex. Additionally, it is approximately an ellipsoid, which is characterized by the infinite-voter Hessian matrix H∞. As a result, and since "rounder" ellipsoids yield better strategyproofness guarantees, when the number of voters is sufficiently large, the strategic gain of a strategic voter is upper-bounded by how skewed the infinite-voter Hessian matrix H∞ is.

Figure 4: Illustration of the proof strategy for Theorem 2, which is based on the following claims that hold with high probability, for 2/d < 2r₁ < r₃ < r₂ < 1/2, and for V large enough. First, there is no vote in B(g∞, V^{−r₁}) (a ball centered on g∞, and of radius V^{−r₁}). Thus, L_{1:V} is infinitely differentiable there. Moreover, the second and third derivatives of L_{1:V} cannot be too different from the second and third derivatives of L∞ in B(g∞, V^{−r₃}). Plus, g_{1:V} lies in B(g∞, V^{−r₂}), and the set of geometric medians that voter 0 can obtain by misreporting their preferences is approximately an ellipsoid centered on g_{1:V}. This ellipsoid lies completely inside B(g∞, V^{−r₃}).
Bounds on SKEW
As we saw, the asymptotic strategyproofness of the geometric median depends on the skewness of the Hessian matrix H ∞ , defined in Equation (4). In this section, we derive upper and lower bounds on the skewness function based on the ratio of the extreme eigenvalues of the matrix. Intuitively, the more different the eigenvalues of S are, the more skewed S is. We formalize this intuition with upper and lower bounds, whose proofs are given in Appendix C.3.
Proposition 2. Denote Λ ≜ max SP(S) / min SP(S) the ratio of extreme eigenvalues of S. Then

(1 + Λ)/(2√Λ) − 1 ≤ SKEW(S) ≤ Λ − 1.
In dimension 2, the lower-bound inequality is an equality.
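A quick numerical sanity check of the two-dimensional equality, reusing the skew_numeric sketch above (ours, for illustration):

```python
import numpy as np

for lam in [2.0, 10.0, 100.0]:
    S = np.diag([1.0, lam])
    closed_form = (1 + lam) / (2 * np.sqrt(lam)) - 1
    print(lam, skew_numeric(S), closed_form)   # the two values agree in dimension 2
```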
SKEWNESS GENERALIZATIONS
We generalize our main result in two aspects. First, we consider skewed preferences where users give different weights to different dimensions. Second, we study the skewed geometric median which can be derived by rescaling the space before computing the geometric median.
Skewed Preferences
Our analysis so far rested on the assumption that voters have single-peaked preferences, which depend on the Euclidean distance between the geometric median and their preferred vectors. While this makes our analysis simpler, in practice, this assumption is not easy to justify. In fact, it seems reasonable to assume that some dimensions have greater importance for voters than others.
This motivates us to introduce S-skewed preferences, for a positive definite matrix S. More precisely, we say that a voter v has an S-skewed preference if they aim to minimize ∥g − θ_v∥_S, where g is the result of the vector vote and ∥z∥_S ≜ ∥Sz∥₂ is the S-Mahalanobis norm. Intuitively, the matrix S allows us to highlight which directions of space matter more to voter v. For instance, if S = diag(Y, 1) with Y ≫ 1, it means that the voter gives a lot more importance to the first dimension than to the second dimension.
The Skewed Geometric Median
Intuitively, to counteract voters' strategic exaggeration incentives, we could make it more costly to express strong preferences along the more polarized and more important dimensions. In other words, voters would have a unit force along less polarized dimensions, and a less-than-unit force along more polarized dimensions. We capture this intuition by introducing "Σ-skewed geometric median" for a positive definite matrix Σ ≻ 0.
Skewed Loss. We define the Σ-skewed infinite loss as
L^Σ_∞(z, θ̃) ≜ E_{θ∼θ̃} ∥z − θ∥_Σ,

using the Σ-Mahalanobis norm (∥z∥_Σ ≜ ∥Σz∥₂), and we call its minimizer g^Σ_∞ the Σ-skewed geometric median. We also introduce their finite-voter equivalents, for 1 + V voters, by

L^Σ_{0:V}(s, z) ≜ (1/(1 + V)) ∥s − z∥_Σ + (1/(1 + V)) Σ_{v∈[V]} ∥θ_v − z∥_Σ,

and g^Σ_{0:V} = arg min_z L^Σ_{0:V}(s, z). Intuitively, this is equivalent to mapping the original space to a new space using the linear transformation Σ, and computing the geometric median in this new space (Lemma 23 in Appendix D).
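Following this equivalence (Lemma 23, and the skew/de-skew description of the introduction), the Σ-skewed geometric median can be computed by mapping the votes through Σ, taking the ordinary geometric median there, and mapping back through Σ⁻¹. A sketch (ours), reusing geometric_median from Section 2.2:

```python
import numpy as np

def skewed_geometric_median(points, Sigma):
    """Sigma-skewed geometric median: GM in the Sigma-mapped space,
    pulled back through Sigma^{-1} (cf. Lemma 23 in Appendix D)."""
    mapped = points @ Sigma.T                 # apply Sigma to every vote
    return np.linalg.solve(Sigma, geometric_median(mapped))
```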
Remark 2. Interestingly, we also show that this skewed geometric median can be interpreted as modifying the way we measure the norm of voters' forces in the original space, thereby guaranteeing its consistency with the fairness principle "one voter, one vote with a unit force". The formal discussion is given in Appendix E.
Strategyproofness of the Skewed Geometric Median for Skewed Preferences
For any skewing positive definite matrix Σ, we define H^Σ_∞ ≜ ∇²L^Σ_∞(g^Σ_∞), the Hessian matrix of the skewed loss at the skewed geometric median. We then have the following asymptotic strategyproofness guarantee for an appropriately skewing matrix. The sketch of the proof is provided in Appendix D.

Theorem 3. Under Assumption 1, the Σ-skewed geometric median is asymptotically SKEW(S⁻¹H^Σ_∞S⁻¹)-strategyproof for a voter with S-skewed preferences. In particular, if H^Σ_∞ = S^{1/2}, then the Σ-skewed geometric median is asymptotically strategyproof for this voter.
Interpretation. Let us provide additional insights into what the theorem says. Intuitively, the theorem asserts that the strategyproofness of the normal geometric median (Σ = I) depends on how much an individual cares about polarized dimensions. More precisely, the more the voter cares about polarized dimensions, the less strategyproof the geometric median is.
Indeed, suppose that the first dimension is both highly polarized and very important to voter 0. The fact that it is polarized would typically correspond to a Hessian matrix of the form H∞ = diag(1, X²), with X ≫ 1 (see the proof of Theorem 1). The fact that voter 0 cares a lot about the first dimension would typically correspond to a skewed preference matrix S = diag(Y, 1), with Y ≫ 1. We then have S⁻¹H∞S⁻¹ = diag(Y⁻², X²). By Proposition 2, we then have

SKEW(S⁻¹H∞S⁻¹) = (X² + Y⁻²)/(2√(X²Y⁻²)) − 1 = Θ(XY),

which is very large for X, Y ≫ 1. In particular, this makes the normal geometric median unsuitable for voting problems where some dimensions are much more polarized and regarded as important by most voters. Now, interestingly, if we find a skewing matrix Σ that weakens the voters' pulls in the first dimension, making the Hessian matrix approximately H^Σ_∞ ≈ diag(1, 1/Y²), then the resulting geometric median becomes asymptotically strategyproof.
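This Θ(XY) growth is straightforward to confirm with the skew_numeric sketch from Section 4.2 (ours, for illustration):

```python
import numpy as np

for X, Y in [(3.0, 3.0), (10.0, 10.0)]:
    M = np.diag([Y ** -2, X ** 2])            # S^{-1} H_inf S^{-1} from the example
    print(X * Y, skew_numeric(M))             # grows on the order of X * Y
```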
Remarks on the Skewed Hessian Matrix. In general, g^Σ_∞ ≠ g∞ (Proposition 8 in Appendix A). This makes identifying a skewing matrix Σ such that H^Σ_∞ = S^{1/2} challenging. In particular, it is hard to determine how such a matrix relates to the statistics of θ̃. We note however the following connection between the Hessian matrix ∇²L^Σ_∞(z) of the Σ-skewed loss and the Hessian matrix ∇²L∞(z) of the Euclidean loss. The proof is given in Appendix D.2.

Proposition 3. For any z ∈ R^d, we have ∇²L^Σ_∞(z) = Σ (∇²L∞)(Σz, Σθ̃) Σ.

Note that, in particular, if g^{(H∞^{−1/2})}_∞ = g∞ and if ∇²L∞(H∞^{−1/2}z, H∞^{−1/2}θ̃) = H∞, then the H∞^{−1/2}-skewed geometric median is asymptotically strategyproof. This will be the case if the support of θ̃ − g∞ lies in the union of the eigenspaces of Σ, as this implies that, when θ is drawn from θ̃, the vectors Σθ − Σg∞ and θ − g∞ are colinear and point in the same direction with probability 1. But, in general, these assumptions do not hold. This makes the computation of the appropriate skewing challenging.
We thus leave open the problem of proving the existence and uniqueness (up to overall homothety) of such a matrix, as well as the design of algorithms to compute it.
NUMERICAL EXPERIMENT
Strategyproofness is commonly studied purely theoretically, as empirical strategyproofness evaluation is hard to perform in a meaningful and fair way. Indeed, it requires identifying optimal attacks against a system, which often amounts to solving an intractable optimization problem. In particular, if such an empirical evaluation fails to find an effective attack, it is unclear whether this is because no such attack exists, or because no such attack has been found. Nevertheless, here we provide a simple experiment to evaluate the effect of the (skewness of the) underlying distribution on the strategic gain α when using the geometric median to aggregate voters' vectors. First, we sample 500,000 vectors from a 2-dimensional Gaussian distribution θ̃ with mean 0 and covariance matrix diag(c, 1/c) for a parameter c. Note that, as shown in Proposition 2, c is closely related to the skewness of distribution θ̃. We assume the strategic voters have a 1% voting power, i.e., we simulate 5,000 strategic voters, all with the same target vector t. Then, to find a vulnerable target vector, we use a heuristic idea similar to that of Figure 2. Essentially, in each dimension, we find the extreme achievable geometric median for the strategic voters. The target vector t is then the combination of these extreme values in both dimensions. Finally, we approximately find the maximum strategic gain by performing a grid search over the reported vector s in a neighborhood of t. Figure 5 shows the dependence of the strategic gain α on the parameter c, and validates the intuition that the more skewed the distribution, the less strategyproof the geometric median is. This experiment demonstrates that the skewness of the underlying distribution is a crucial factor to consider when assessing the strategyproofness of the geometric median. The code is available at [this link].
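For concreteness, here is a condensed sketch of this experiment (ours, not the paper's code; sample sizes are reduced, the grid is coarse, and the vulnerable target is placed via the heuristic described above):

```python
import numpy as np

rng = np.random.default_rng(0)
c = 4.0                                       # skewness parameter of the Gaussian
honest = rng.multivariate_normal([0.0, 0.0], np.diag([c, 1.0 / c]), size=20000)
n_strat = 200                                 # 1% strategic voting power

def median_with(s):
    return geometric_median(np.vstack([honest, np.tile(s, (n_strat, 1))]),
                            iters=200)

# Heuristic vulnerable target: combine the extreme achievable medians per axis.
t = np.array([median_with(np.array([1e3, 0.0]))[0],
              median_with(np.array([0.0, 1e3]))[1]])

truthful = np.linalg.norm(median_with(t) - t)
best = truthful
for dx in np.linspace(-0.5, 0.5, 11):         # coarse grid search around t
    for dy in np.linspace(-0.5, 0.5, 11):
        best = min(best, np.linalg.norm(median_with(t + np.array([dx, dy])) - t))
print(truthful / best - 1.0)                  # empirical strategic gain alpha
```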
RELATED WORK
Strategyproofness in one dimension has been extensively studied (Moulin, 1980; Procaccia and Tennenholtz, 2013; Feigenbaum and Sethuraman, 2015). It was shown (Moulin, 1980) that a generalized form of the median is group strategyproof, and that the randomized Condorcet voting system is also group strategyproof for single-peaked preferences (Hoang, 2017). The one-dimension median was also leveraged for mechanism design without payment (Procaccia and Tennenholtz, 2013).
However, generalizing the median to higher dimensions is not straightforward (Lopuhaa and Rousseeuw, 1989). A common generalization, known as the coordinate-wise median, was shown to be strategyproof, but not group strategyproof (Sui and Boutilier, 2015). The extent to which the generalized coordinate-wise median and the quantile mechanism are α-(group)-strategyproof has been studied by Sui and Boutilier (2015), though their definition slightly diverges from ours (their error is additive, not multiplicative). Remarkably, it was shown by Kim and Roush (1984) that, in dimension 2, the only strategyproof, anonymous and continuous voting system is the (generalized) coordinate-wise median.
Without restricting the dimension, but assuming the vectors to be taken from compact subsets of Euclidean spaces, strategyproof voting systems were characterized assuming all voters have generalized single-peaked preferences (Barberà et al., 1998). This approach built upon Border and Jordan (1983), which characterized strategyproof voting systems for Cartesian product ranges. In both cases, the set of strategyproof voting systems was defined as the class of generalized (coordinate-wise) median voter schemes, which were shown in the case of Barberà et al. (1998) to also satisfy the intersection property introduced by Barberà et al. (1997).
Overall, the coordinate-wise median has more desirable strategyproofness than the geometric median (Farhadkhani et al., 2021). It is also important to notice that, as opposed to the coordinate-wise median, the geometric median guarantees that the output vector belongs to the convex hull of voters' vectors (Proposition 9). This makes the coordinate-wise median unsuitable for problems where the space of relevant vectors is the convex hull of the input vectors. This holds, for instance, for the budget allocation problem, whose decision vector z must typically satisfy z ≥ 0 and Σ_i z[i] = 1. In dimension 3, if three voters have preferences (1, 0, 0), (0, 1, 0) and (0, 0, 1), then the coordinate-wise median would output (0, 0, 0), which may be undesirable. On the other hand, the geometric median would output (1/3, 1/3, 1/3), which seems more desirable. Similarly, the coordinate-wise median is unfit to aggregate covariance matrices, which must be symmetric and semidefinite positive.
Another line of work focused on bounding the approximation ratio, which is the extent to which social cost is lost by using alternative aggregation rules like coordinate-wise median (Goel and Hann-Caruthers, 2020;Lu et al., 2009;Walsh, 2020). Several papers also consider other variations of this problem, e.g., choosing k facility locations instead of one (Escoffier et al., 2011), assigning different weights to different nodes (Zhang and Li, 2014), and assuming that the nodes lie on a network represented by a graph (Alon et al., 2010). Others have addressed the computational complexity of the geometric median (Cohen et al., 2016). Another work (Brady and Chambers, 1995) shows that for three agents the geometric median is the only rule that satisfies anonymity, neutrality, and Maskin-Monotonicity.
CONCLUSION
We analyzed different flavors of strategyproofness for the geometric median, an instrumental component of the secure machine learning toolbox. First, we showed that, in general, there can be no guarantee of approximate strategyproofness, by exhibiting worst-case situations. However, we proved that, assuming that voters' vectors follow some distribution θ̃, asymptotic α-strategyproofness can be ensured. We then generalized our results to the case where some dimensions may matter more to the voters than other dimensions. In this setting, we proved that the geometric median becomes less strategyproof when some dimensions are more polarized and more important than others. Finally, we showed how the skewed geometric median can improve asymptotic strategyproofness, by providing more voting rights along more consensual dimensions. Overall, our analysis helps better identify the settings where the geometric median can indeed be a suitable solution to high dimensional voting.
References
Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G., Davis, A., Dean, J., Devin, M., Ghemawat, S., Goodfellow, I., Harp, A., Irving, G., Isard, M., Jia, Y., Jozefowicz, R., Kaiser, L., Kudlur, M., Levenberg, J., Mané, D., Monga, R., Moore, S., Murray, D., Olah, C., Schuster, M., Shlens, J., Steiner, B., Sutskever, I., Talwar, K., et al. (2015). TensorFlow: Large-scale machine learning on heterogeneous systems.

Farhadkhani, S., Guerraoui, R., Gupta, N., Pinot, R., and Stephan, J. (2022a). Byzantine machine learning made easy by resilient averaging of momentums. In Chaudhuri, K., Jegelka, S., Song, L., Szepesvari, C., Niu, G., and Sabato, S., editors, Proceedings of the 39th International Conference on Machine Learning.

Han, S., Topcu, U., and Pappas, G. J. (2015). An approximately truthful mechanism for electric vehicle charging via joint differential privacy. In Proceedings of the 2015 American Control Conference (ACC).
Appendix Organization
The appendices are organized as follows:
• Appendix A proves some useful preliminary results about the geometric median that are needed in this paper.
• Appendix B includes the proofs of the results presented in Section 3 (in particular, the proof of Theorem 1).
• Appendix C includes some proofs and deferred results from Section 4 (in particular, the proof of Theorem 2).
• Appendix D includes some proofs and deferred results from Section 5 (in particular, the proof of Theorem 3).
• Appendix E discusses the notion of alternative unit forces and proves auxiliary results on the equivalence between ℓ_p penalty and ℓ_q-unit-force vote for 1/p + 1/q = 1, and on the equivalence between the Σ-skewed geometric median and Σ⁻¹-unit forces.
A GEOMETRIC MEDIAN: PRELIMINARIES
In this section, we characterize a few properties of the geometric median, many of which are useful for our subsequent proofs. For the sake of exposition, we consider in this section a geometric median restricted to the voters v ∈ [V ], in which case, the loss function would be
L(⃗θ, z) ≜ (1/V) Σ_{v∈[V]} ∥z − θ_v∥₂.   (5)
The generalization to 1 + V voters is straightforward.
A.1 Unit Forces
We first show that the geometric median verifies the fairness principle "one voter, one vote with a unit force". Consider a system in which each voter v pulls the output of voting z towards their location θ_v with a unit force. Voter v's force is then given by the unit vector u_{z−θ_v} in the direction of z − θ_v. Any equilibrium of this process must then be a point z where all the forces cancel out, i.e., we must essentially have Σ_{v∈[V]} u_{z−θ_v} = 0. Lemma 2 shows that this condition is equivalent to the computation of a geometric median. But first, let us characterize the gradient of the ℓ₂-norm.
Lemma 1. The gradient of the Euclidean norm is a unit vector. More precisely, for all z ∈ R d , we have ∇ ∥z∥ 2 = u z , where u z ≜ z/ ∥z∥ 2 if z ̸ = 0, and otherwise u 0 ≜ B(0, 1) is the unit ball centered at the origin.
In the latter case, the ℓ 2 norm thus actually has a large set of subgradients.
Proof. Assume z ≠ 0. We have ∇∥z∥₂² = 2z. As a result, ∇∥z∥₂ = ∇√(∥z∥₂²) = ∇∥z∥₂² / (2√(∥z∥₂²)) = z/∥z∥₂ = u_z. Now consider the case z = 0. Then note that, for all x ∈ R^d, we have ∥x∥₂ − ∥z∥₂ = xᵀu_x ≥ xᵀh for any vector h of Euclidean norm at most 1. This proves that ∇_{z=0}∥z∥₂ ⊃ B(0, 1). On the other hand, if ∥h∥₂ > 1, then we have ∥εu_h∥₂ = ε < ε∥h∥₂ = (εu_h)ᵀh. Thus h cannot be a subgradient, and thus ∇_{z=0}∥z∥₂ = B(0, 1) = u₀.
As an immediate corollary, the following condition characterizes the geometric medians.
Lemma 2. The sum of voters' unit pulls cancels out at g ≜ GM(⃗θ), i.e., 0 ∈ Σ_{v∈[V]} u_{g−θ_v}.

Proof. By Lemma 1, ∇_z∥z − θ_v∥₂ = u_{z−θ_v}. Therefore, V∇_z L(⃗θ, z) = Σ_{v∈[V]} u_{z−θ_v}. The optimality condition of g then implies 0 ∈ ∇_z L(⃗θ, g), and hence 0 ∈ Σ_{v∈[V]} u_{g−θ_v}.
Before moving on, we make one last observation about the second derivative of the Euclidean norm, which is very useful for the rest of the paper.
Lemma 3. Suppose z ≠ 0. Then ∇²∥z∥₂ = (1/∥z∥₂)(I − u_z u_zᵀ) is a positive semi-definite matrix. The vector z is an eigenvector of the matrix associated with eigenvalue 0, while the hyperplane orthogonal to z is the (d−1)-dimensional eigenspace of ∇²∥z∥₂ associated with eigenvalue 1/∥z∥₂.

Notation: We denote by z[i] the i-th coordinate of vector z.
Proof. For clarity, let us denote ℓ 2 (z) ≜ ∥z∥ 2 . By Lemma 1, we know that ∇ℓ 2 (z) = u z = z/ℓ 2 (z). We then have
∂²_{ij} ℓ₂(z) = (1/ℓ₂(z)²) (ℓ₂(z) ∂_j z[i] − z[i] ∂_j ℓ₂(z)) = (1/ℓ₂(z)) (δ_i^j − (z[i]/ℓ₂(z)) (z[j]/ℓ₂(z))),   (6)

where δ_i^j = 1 if i = j, and 0 if i ≠ j. Combining all coordinates then yields

∇²ℓ₂(z) = (1/∥z∥₂) (I − (z/∥z∥₂)(zᵀ/∥z∥₂)) = (1/∥z∥₂) (I − u_z u_zᵀ).   (7)
It is then clear that ∇²ℓ₂(z) z = (1/∥z∥₂)(z − u_z u_zᵀ z) = 0. Meanwhile, if x ⊥ z, then u_zᵀx = 0, which then results in ∇²ℓ₂(z) x = x/∥z∥₂.
This proves the lemma.
Intuitively, the lemma says that the pull of z on 0 does not change if we slightly move z along the direction z. However, this pull is indeed changed if we move z in a direction orthogonal to z. Moreover, the further away z is from 0, the weaker is this change in direction.
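Lemma 3 is easy to check numerically by finite-differencing the gradient u_z from Lemma 1 (a small sanity check, ours):

```python
import numpy as np

def hessian_norm_fd(z, h=1e-6):
    """Finite-difference Hessian of z -> ||z||_2 at z != 0."""
    d = len(z)
    u = lambda x: x / np.linalg.norm(x)      # gradient of the norm (Lemma 1)
    H = np.empty((d, d))
    for j in range(d):
        e = np.zeros(d); e[j] = h
        H[:, j] = (u(z + e) - u(z - e)) / (2 * h)   # central difference
    return H

z = np.array([3.0, 4.0])
uz = z / np.linalg.norm(z)
print(np.allclose(hessian_norm_fd(z),
                  (np.eye(2) - np.outer(uz, uz)) / np.linalg.norm(z),
                  atol=1e-5))               # True, matching Lemma 3
```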
A.2 Existence and Uniqueness
In dimension one, the definition of the geometric median coincides with the definition of the median. As a result, the geometric median may not be uniquely defined. Fortunately, in higher dimensions, uniqueness can be guaranteed under reasonable assumptions. We first prove a few useful lemmas about the strict convexity of convex and piecewise strictly convex functions.

Lemma 4. If f : [0, 1] → R is convex on [0, 1] and strictly convex on (0, 1), then f is strictly convex on [0, 1].

Proof. Consider x, y ∈ [0, 1], with x < y, λ ∈ (0, 1) and µ ≜ 1 − λ. Denote z = λx + µy. It is straightforward to verify that z ∈ (0, 1). Define x′ ≜ (x+z)/2 and y′ ≜ (z+y)/2. Clearly, we have x′, y′ ∈ (0, 1). Moreover, λx′ + µy′ = ½(λx + µy) + ½(λ + µ)z = z. By strict convexity of f on (0, 1), we then have f(z) < λf(x′) + µf(y′). Moreover, by convexity of f on [0, 1], we also have f(x′) ≤ ½f(x) + ½f(z) and f(y′) ≤ ½f(y) + ½f(z). Combining the three inequalities yields f(z) < ½(λf(x) + µf(y)) + ½f(z), from which we derive f(z) < λf(x) + µf(y). This allows us to conclude.
Lemma 5. If f : [0, 1] → R is convex, and if there is w ∈ (0, 1) such that f is strictly convex on (0, w) and strictly convex on (w, 1), then, for any x < w < y, we have f(w) < ((y−w)/(y−x)) f(x) + ((w−x)/(y−x)) f(y).
Proof. Define x′ ≜ (x+w)/2 and y′ ≜ (w+y)/2. Since f is strictly convex on (0, w), by Lemma 4, we know that it is strictly convex on [0, w]. As a result, we have f(x′) < ½f(x) + ½f(w). Similarly, we show that f(y′) < ½f(y) + ½f(w). Note now that ((y−w)/(y−x)) x′ + ((w−x)/(y−x)) y′ = ½ ((y−w)x + (w−x)y)/(y−x) + w/2 = w. Using the convexity of f over [0, 1], we then have f(w) ≤ ((y−w)/(y−x)) f(x′) + ((w−x)/(y−x)) f(y′) < ((y−w)/(y−x)) (½f(x) + ½f(w)) + ((w−x)/(y−x)) (½f(y) + ½f(w)) = ½ (((y−w)/(y−x)) f(x) + ((w−x)/(y−x)) f(y)) + ½ f(w). Rearranging the terms yields the lemma.
Lemma 6. If f : [0, 1] → R is convex, and if there is w ∈ [0, 1] such that f is strictly convex on (0, w) and strictly convex on (w, 1), then f is strictly convex on [0, 1].
Proof. Consider x, z, y ∈ [0, 1], with x < z < y. We denote λ ≜ (y−z)/(y−x) ∈ (0, 1) and µ ≜ 1 − λ. We then have z = λx + µy. If x ≥ w or y ≤ w, then by Lemma 4, we know that f(z) < λf(x) + µf(y). Moreover, Lemma 5 yields the same equation for the case x < z = w < y.

Now assume x < z < w < y. By Lemma 5, we have f(w) < ((y−w)/(y−x)) f(x) + ((w−x)/(y−x)) f(y). We also know that z = ((w−z)/(w−x)) x + ((z−x)/(w−x)) w. By strict convexity, we thus have f(z) < ((w−z)/(w−x)) f(x) + ((z−x)/(w−x)) f(w) < ((w−z)/(w−x)) f(x) + ((z−x)/(w−x)) (((y−w)/(y−x)) f(x) + ((w−x)/(y−x)) f(y)) = λf(x) + µf(y).
The last case x < w < z < y is dealt with similarly.
Lemma 7. Assume that f : [0, 1] → R is convex, and that there is a finite number of points w 0 ≜ 0 < w 1 < . . . < w K−1 < w K ≜ 1 such that f is strictly convex on (w k−1 , w k ) for k ∈ [K]. Then f is strictly convex on [0, 1].
Proof. We prove this result by induction on K. For K = 1, we simply invoke Lemma 4. Now assume that it holds for K − 1, and let us use this to derive it for K. By induction, we know that f is strictly convex on (0, w_{K−1}) (we can apply the induction hypothesis rigorously by defining g(x) ≜ f(x w_{K−1})). Yet, by assumption, f is also strictly convex on (w_{K−1}, 1) = (w_{K−1}, w_K). Lemma 6 thus applies, and implies the strict convexity of f on [0, 1].
In what follows, we define the dimension of the tuple ⃗θ of preferred vectors as the dimension of the affine space spanned by these vectors, i.e., dim ⃗θ ≜ dim Span{θ_v − θ_w | v, w ∈ [V]}. We then have the following result.

Proposition 4. z → L(⃗θ, z) is infinitely differentiable for all z ∉ {θ_v | v ∈ [V]}. Moreover, if dim ⃗θ ≥ 2, then for all such z, the Hessian matrix of the sum of distances is positive definite, i.e., ∇²_z L(⃗θ, z) ≻ 0. In particular, L is then strictly convex on R^d, and has a unique minimum.
Proof. Define ℓ 2 (z) ≜ ∥z∥ 2 = √ z T z = i∈[d] z 2 i . This function is clearly infinitely differentiable for all points z ̸ = 0. Since L( ⃗ θ, z) = 1 V ℓ 2 (z − θ v ), it is also infinitely differentiable for z / ∈ {θ v | v ∈ [V ]}.
Moreover, by using triangle inequality and absolute homogeneity, we know that, for any λ ∈ [0, 1] and any θ v ∈ R d , we have
ℓ 2 ((λz + (1 − λ)z ′ ) − θ v ) = ℓ 2 (λ(z − θ v ) + (1 − λ)(z ′ − θ v )) (8) ≤ ℓ 2 (λ(z − θ v )) + ℓ 2 ((1 − λ)(z ′ − θ v )) = λℓ 2 (z − θ v ) + (1 − λ)ℓ 2 (z ′ − θ v ),(9)
which proves the convexity of z → ℓ 2 (z −θ v ). Since the sum of convex functions is convex, we also know that z → L( ⃗ θ, z) is convex too. Now, we know that dim(z, ⃗ θ) ≥ dim( ⃗ θ) ≥ 2. Therefore, there exists v, w ∈ [V ] such that a ≜ z − θ v and b ≜ z − θ w are not colinear. This implies that −1 < u T a u b < 1. By Lemma 3, we then have
∇²_z L(⃗θ, z) ⪰ (1/(V ∥a∥₂)) (I − u_a u_aᵀ) + (1/(V ∥b∥₂)) (I − u_b u_bᵀ)    (10)
⪰ (1/(V max{∥a∥₂, ∥b∥₂})) (2I − u_a u_aᵀ − u_b u_bᵀ)    (11)
= (1/(V max{∥a∥₂, ∥b∥₂})) (2I − (1/2)(u_a + u_b)(u_a + u_b)ᵀ − (1/2)(u_a − u_b)(u_a − u_b)ᵀ)    (12)
= (2/(V max{∥a∥₂, ∥b∥₂})) (I − ((1 + u_aᵀu_b)/2) (u_a + u_b)(u_a + u_b)ᵀ/∥u_a + u_b∥₂² − ((1 − u_aᵀu_b)/2) (u_a − u_b)(u_a − u_b)ᵀ/∥u_a − u_b∥₂²),    (13)

where we used ∥u_a + u_b∥₂² = 2 + 2u_aᵀu_b and ∥u_a − u_b∥₂² = 2 − 2u_aᵀu_b. This last matrix turns out to have eigenvalues equal to (1 − u_aᵀu_b)/(V max{∥a∥₂, ∥b∥₂}) in the direction u_a + u_b, (1 + u_aᵀu_b)/(V max{∥a∥₂, ∥b∥₂}) in the direction u_a − u_b, and 2/(V max{∥a∥₂, ∥b∥₂}) in directions orthogonal to a and b. Since −1 < u_aᵀu_b < 1, all these quantities are strictly positive. Thus all eigenvalues of ∇²_z L(⃗θ, z) are strictly positive. This implies that, along any segment (x, y) that contains no θ_v, z ↦ L(⃗θ, z) is strictly convex. Given that z ↦ L(⃗θ, z) is convex everywhere, and that there is only a finite number of points θ_v, Lemma 7 applies, and proves the strict convexity of z ↦ L(⃗θ, z) everywhere and along all directions. Uniqueness of the minimum follows immediately from this.
To prove the existence of the geometric median, we observe that L(z) ≥ L(0) for ∥z∥₂ large enough. More precisely, denote ∆ ≜ max {∥θ_v∥₂ | v ∈ [V]}. Then L(0) ≤ ∆. Yet if ∥z∥₂ ≥ 3∆, then ∥z − θ_v∥₂ ≥ ∥z∥₂ − ∥θ_v∥₂ ≥ 3∆ − ∆ = 2∆ for every v, which implies L(z) ≥ 2∆ ≥ L(0). Thus inf_{z∈R^d} L(z) = inf_{z∈B(0,3∆)} L(z), where B(0, 3∆) is the closed ball centered on 0 and of radius 3∆. By continuity of L and compactness of B(0, 3∆), we know that this infimum is reached by some point of B(0, 3∆).
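As an aside, the minimization underlying Proposition 4 is straightforward to carry out numerically. The following sketch (ours, not part of the original analysis) implements Weiszfeld's classical fixed-point iteration for the geometric median; the function name, tolerance, and the small-distance guard are our own choices.

```python
import numpy as np

def geometric_median(thetas, tol=1e-9, max_iter=10000):
    """Weiszfeld iteration: minimizes z -> (1/V) sum_v ||z - theta_v||_2."""
    z = thetas.mean(axis=0)                    # start from the average
    for _ in range(max_iter):
        dist = np.linalg.norm(thetas - z, axis=1)
        dist = np.maximum(dist, 1e-12)         # guard against landing on a theta_v
        w = 1.0 / dist
        z_new = (w[:, None] * thetas).sum(axis=0) / w.sum()
        if np.linalg.norm(z_new - z) < tol:
            break
        z = z_new
    return z

rng = np.random.default_rng(0)
thetas = rng.normal(size=(50, 3))              # V = 50 voters in dimension d = 3
g = geometric_median(thetas)
# At the minimum, the mean of the unit pulls (minus the gradient) nearly vanishes.
pulls = (thetas - g) / np.linalg.norm(thetas - g, axis=1)[:, None]
print(np.linalg.norm(pulls.mean(axis=0)))      # ~0, as Proposition 4 predicts
```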
A.3 Symmetries
We contrast here the symmetry properties of the average, the geometric median and the coordinate-wise median.
Proposition 5. Assuming uniqueness, the ordering of voters does not impact the average, the geometric median and the coordinate-wise median of their votes. This is known as the anonymity property.
Proof. All three operators can be regarded as minimizing an anonymous function, namely, the sum of square distances, the sum of distances and the sum of ℓ 1 distances (see Section E). All such functions are clearly invariant under re-ordering of voters' labels.
Proposition 6. Assuming uniqueness, the average, the geometric median and the coordinate-wise median are invariant under translation and homothety. The average and the geometric median are also invariant under any orthogonal transformation, but, in general, the coordinate-wise median is not.
Proof. The average and the geometric median can both be regarded as minimizing a function that only depends on Euclidean distances. Since any Euclidean isometry M is distance-preserving, if AVG and GM are the average and the geometric median of ⃗ θ, and if τ ∈ R d and λ > 0, it is clear that λM AVG( ⃗ θ) + τ and λM GM( ⃗ θ) + τ is the average and the geometric median of the family λM ⃗ θ + τ .
In Section E, we show that the coordinate-wise median CW(.) minimizes a function that depends on ℓ 1 distances. By the same argument as above, this guarantees that the coordinate-wise median of λ ⃗ θ + τ is λCW( ⃗ θ) + τ . Now consider the vectors θ 1 ≜ (0, 0), θ 2 ≜ (1, 2) and θ 3 ≜ (2, 1). The coordinate-wise median of these vectors is CW( ⃗ θ) = (1, 1). Now consider the rotation R = Proposition 7. Assuming uniqueness, if z is a center of symmetry of ⃗ θ, then it is the average, the geometric median and the coordinate-wise median.
In Section E, we show that the coordinate-wise median CW(·) minimizes a function that depends on ℓ₁ distances. By the same argument as above, this guarantees that the coordinate-wise median of λ⃗θ + τ is λ CW(⃗θ) + τ. Now consider the vectors θ_1 ≜ (0, 0), θ_2 ≜ (1, 2) and θ_3 ≜ (2, 1). The coordinate-wise median of these vectors is CW(⃗θ) = (1, 1). Now consider a rotation R by 45 degrees, i.e., R = (1/√2) [[1, −1], [1, 1]] (the precise choice of rotation is immaterial). The rotated votes are Rθ_1 = (0, 0), Rθ_2 = (1/√2)(−1, 3) and Rθ_3 = (1/√2)(1, 3), whose coordinate-wise median is (0, 3/√2). This differs from R CW(⃗θ) = (0, √2), so the coordinate-wise median is, in general, not invariant under orthogonal transformations.

Proposition 7. Assuming uniqueness, if z is a center of symmetry of ⃗θ, then it is the average, the geometric median and the coordinate-wise median.

Proof. Pair each vector of ⃗θ different from z with its mirror image with respect to z. For each such pair, the pulls exerted on z cancel out, so the sum of pulls at z vanishes; the same pairing argument applies coordinate-wise for the coordinate-wise median and by linearity for the average.
Proposition 8. The average is invariant under any invertible linear transformation, but, even assuming uniqueness, in general, the geometric median and the coordinate-wise median are not.
This proposition might appear to be a weakness of the geometric median. Note that Section 5.2 actually leverages this to define the skewed geometric median and improve strategyproofness.
Proof. The average is linear. Thus, for any matrix M ∈ R d×d , we have AVG(M ⃗ θ) = M AVG( ⃗ θ). Moreover, the case of the coordinate-wise median follows from Proposition 6.
To see that the geometric median is not invariant under invertible linear transformations, consider θ_1 ≜ (1, 0), θ_2 ≜ (cos(τ/3), sin(τ/3)) = (−1/2, √3/2) and θ_3 ≜ (cos(2τ/3), sin(2τ/3)) = (−1/2, −√3/2), where τ ≈ 6.28 corresponds to a full turn angle. Then GM(⃗θ) = 0, since the sum of pulls at 0 cancels out. Now let us stretch space along the y-axis, using the matrix M = [[1, 0], [0, 2/√3]]. Clearly 0 is invariant under this stretch, as M0 = 0. Moreover, we have Mθ_1 = (1, 0), Mθ_2 = (−1/2, 1) and Mθ_3 = (−1/2, −1). The unit-force pull on 0 by voter 2 is then Mθ_2/∥Mθ_2∥₂ = (2/√5)(−1/2, 1), while that of voter 3 is Mθ_3/∥Mθ_3∥₂ = (2/√5)(−1/2, −1). Finally, voter 1 still pulls with a unit force towards the right. The sum of forces along the horizontal axis is then equal to 1 − 2/√5 > 0. Thus, despite being invariant under M, 0 is no longer the geometric median.
The case of the coordinate-wise median follows from Proposition 6.
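The arithmetic in this counterexample is easy to double-check numerically. A minimal sketch (ours, not part of the original analysis), evaluating the sum of unit pulls at 0 before and after the stretch M:

```python
import numpy as np

tau = 2 * np.pi                                  # full turn angle
thetas = np.array([[np.cos(k * tau / 3), np.sin(k * tau / 3)] for k in range(3)])
M = np.array([[1.0, 0.0], [0.0, 2.0 / np.sqrt(3)]])

def sum_of_pulls_at_zero(points):
    units = points / np.linalg.norm(points, axis=1)[:, None]
    return units.sum(axis=0)

print(sum_of_pulls_at_zero(thetas))              # ~[0, 0]: 0 is the geometric median
print(sum_of_pulls_at_zero(thetas @ M.T))        # x-component 1 - 2/sqrt(5) > 0
```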
Proposition 9. The average and the geometric median of a tuple of vectors belong to the convex hull of the vectors. In general, the coordinate-wise median does not.
Proof. Consider z not in the convex hull. Then there must exist a separating hyperplane, with a normal vector h pointing from the convex hull towards z. But then all vectors pull z in the direction of −h, so the projection of the sum of forces on h cannot be nil, which shows that z cannot be an equilibrium. Now, to show that the coordinate-wise median may not lie within the convex hull of the voters' votes, consider θ_1 = (1, 0, 0), θ_2 = (0, 1, 0) and θ_3 = (0, 0, 1). Then the coordinate-wise median is (0, 0, 0), which clearly does not belong to the convex hull of ⃗θ.
Proposition 10. The geometric median is continuous at every ⃗θ such that dim ⃗θ ≥ 2.
Proof. Consider ⃗ θ ∈ R d×V with dim ⃗ θ ≥ 2. By Proposition 4, there is a unique geometric median g ≜ GM( ⃗ θ).
To prove the continuity of GM, let us consider a sequence of families ⃗ θ (n) such that ⃗ θ (n) → ⃗ θ, and let us prove that this family eventually has a unique geometric median g (n) , which converges to g as n → ∞.
First note that the set of families ⃗x ∈ R^{d×V} for which dim ⃗x ≤ 1 is isomorphic to the set of matrices of R^{d×V} of rank at most 1. It is well known that this set is closed for all norms on R^{d×V} (this can be verified by considering the determinants of all 2 × 2 submatrices, which are all continuous functions). Thus the set of families ⃗x such that dim ⃗x ≥ 2 is open. In particular, there is a ball centered on ⃗θ whose points ⃗x all satisfy dim ⃗x ≥ 2. Since, for n ≥ N_0 large enough, ⃗θ^{(n)} must belong to this ball, it must eventually satisfy dim ⃗θ^{(n)} ≥ 2. This guarantees the uniqueness of g^{(n)} ≜ GM(⃗θ^{(n)}) for n ≥ N_0.

Now consider any convergent subsequence g^{(n_k)} → g*. Since the geometric median minimizes the loss L, for any n_k ∈ N, we know that L(g^{(n_k)}, ⃗θ^{(n_k)}) ≤ L(g, ⃗θ^{(n_k)}). Taking the limit then yields L(g*, ⃗θ) ≤ L(g, ⃗θ). Since g is the geometric median of ⃗θ, we thus actually have L(g*, ⃗θ) = L(g, ⃗θ). But Proposition 4 guarantees the uniqueness of the geometric median. Therefore, we actually have g* = g. Put differently, any convergent subsequence of g^{(n)} converges to g.

Now, by contradiction, assume that g^{(n)} does not converge to g. Then there exist ε > 0 and an infinite subsequence g^{(n_i)} of g^{(n)} that lies outside the open ball B(g, ε). But since the geometric median belongs to the convex hull of the vectors (Proposition 9), for n ≥ N_0, the subsequence g^{(n_i)} is also bounded. Thus, by the Bolzano-Weierstrass theorem, g^{(n_i)} must have at least one convergent subsequence, whose limit g† lies outside the open ball B(g, ε). But this contradicts the fact that every convergent subsequence of g^{(n)} converges to g. Therefore, g^{(n)} must converge to g. This proves that the geometric median is continuous with respect to ⃗θ.
A.4 Approximation of the Average
One interesting feature of the geometric median and of the coordinate-wise median is that they are provably a good approximation of the average. Note that the uniqueness of the geometric median or of the coordinate-wise median is not needed for the following well-known proposition.
Proposition 11 (Minsker (2015)). Denote by Σ(⃗θ) the covariance matrix of ⃗θ, defined by

Σ_ij(⃗θ) ≜ (1/V) Σ_{v∈[V]} (θ_v[i] − AVG(⃗θ)[i]) (θ_v[j] − AVG(⃗θ)[j]).    (14)

Then

∥AVG(⃗θ) − GM(⃗θ)∥₂ ≤ √(TR Σ(⃗θ))  and  ∥AVG(⃗θ) − CW(⃗θ)∥₂ ≤ √(TR Σ(⃗θ)).
Proof. We start with the geometric median. Recall that GM(⃗θ) minimizes z ↦ E_v[∥θ_v − z∥₂], where v is drawn uniformly at random from [V]. It thus does better at minimizing this term than AVG(⃗θ). We then have

∥AVG(⃗θ) − GM(⃗θ)∥₂ = ∥E_v[θ_v] − GM(⃗θ)∥₂ ≤ E_v ∥θ_v − GM(⃗θ)∥₂    (15)
≤ E_v ∥θ_v − AVG(⃗θ)∥₂ ≤ √(E_v ∥θ_v − AVG(⃗θ)∥₂²) = √(TR Σ(⃗θ)),    (16)
Figure 6: Resilience of the geometric median against coordinated attacks by a minority of strategic voters S, who pull on z in the opposite direction from a strict majority of truthful voters T, whose vectors are all in the ball centered on g_T and of radius ∆.
where we also used Jensen's inequality twice, for the functions x ↦ ∥x∥₂ and t ↦ t².
We now address the case of the coordinate-wise median. On dimension i, using similar arguments as in the proof above, the square of the discrepancy can be upper-bounded by the variance of θ along dimension i. In other words, we have |AVG(⃗θ)[i] − CW(⃗θ)[i]| ≤ √(Σ_ii(⃗θ)). Squaring this inequality and summing over all coordinates then yields

∥AVG(⃗θ) − CW(⃗θ)∥₂² ≤ Σ_{i∈[d]} Σ_ii(⃗θ) = TR Σ(⃗θ).

Taking the square root yields the second inequality of the proposition.
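Proposition 11 is easy to probe empirically. The sketch below (ours, not part of the original analysis) reuses the geometric_median helper from the earlier Weiszfeld sketch and checks both discrepancies against √(TR Σ(⃗θ)) on heavy-tailed random votes.

```python
import numpy as np

rng = np.random.default_rng(1)
thetas = rng.standard_t(df=3, size=(200, 4))     # V = 200 heavy-tailed votes, d = 4

avg = thetas.mean(axis=0)
cw = np.median(thetas, axis=0)                   # coordinate-wise median
gm = geometric_median(thetas)                    # Weiszfeld sketch defined above

cov = np.cov(thetas, rowvar=False, bias=True)    # Sigma(theta), 1/V normalization as in (14)
bound = np.sqrt(np.trace(cov))

print(np.linalg.norm(avg - gm), "<=", bound)     # first inequality of Proposition 11
print(np.linalg.norm(avg - cw), "<=", bound)     # second inequality of Proposition 11
```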
B PROOFS OF SECTION 3

B.1 Proof of Proposition 1
Proof. Let us denote [V] = T ∪ S a decomposition of the voters into two disjoint subsets of truthful and strategic voters. We assume a strict majority of truthful voters, i.e., |T| > |S|. Denote g_T ≜ GM(⃗θ_T) the geometric median of the truthful voters' preferred vectors, and ∆ ≜ max {∥θ_t − g_T∥₂ | t ∈ T} the maximum distance between a truthful voter's preferred vector and the geometric median g_T. Now consider any point z ∉ B(g_T, ∆). The norm of the sum of forces exerted on z by the truthful voters satisfies
∥Σ_{t∈T} u_{θ_t−z}∥₂ ≥ (Σ_{t∈T} u_{θ_t−z})ᵀ u_{g_T−z} = Σ_{t∈T} u_{θ_t−z}ᵀ u_{g_T−z}    (17)
≥ |T| cos α = |T| √(1 − sin²α) = |T| √(1 − ∆²/∥z − g_T∥₂²),    (18)
where α is defined in Figure 6 as the angle between g_T − z and a tangent to B(g_T, ∆) that goes through z. But then the norm of the sum of all forces at z must be at least
∥Σ_{t∈T} u_{θ_t−z} + Σ_{s∈S} u_{s−z}∥₂ ≥ ∥Σ_{t∈T} u_{θ_t−z}∥₂ − ∥Σ_{s∈S} u_{s−z}∥₂    (19)
≥ |T| √(1 − ∆²/∥z − g_T∥₂²) − Σ_{s∈S} ∥u_{s−z}∥₂ = |T| √(1 − ∆²/∥z − g_T∥₂²) − |S| > 0,    (20)
as long as we have ∥z − g_T∥₂ > |T|∆/√(|T|² − |S|²). A value of z that satisfies this strict inequality can thus not be a geometric median. Put differently, no matter what the strategic voters do, we have

GM(⃗θ_T, s) ∈ B(GM(⃗θ_T), (1 − |S|²/|T|²)^{−1/2} max_{t∈T} ∥θ_t − GM(⃗θ_T)∥₂).    (21)
This concludes the proof.
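Equation (21) can likewise be tested numerically. In the sketch below (ours, reusing the geometric_median helper from the earlier sketch), a strict majority of truthful votes is fixed, strategic votes are placed far away in random directions, and the manipulated geometric median is checked to remain in the ball of Equation (21).

```python
import numpy as np

rng = np.random.default_rng(2)
T, S = 30, 20                                        # |T| > |S|
truthful = rng.normal(size=(T, 2))
g_T = geometric_median(truthful)
Delta = np.linalg.norm(truthful - g_T, axis=1).max()
radius = (1 - S**2 / T**2) ** (-0.5) * Delta         # right-hand side of Equation (21)

worst = 0.0
for _ in range(100):
    direction = rng.normal(size=2)
    strategic = np.tile(1e3 * direction, (S, 1))     # coordinated far-away attack
    g = geometric_median(np.vstack([truthful, strategic]))
    worst = max(worst, np.linalg.norm(g - g_T))
print(worst, "<=", radius)
```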
B.2 Proof of Theorem 1
To obtain Theorem 1, we make use of a technical lemma that characterizes the achievable set for the strategic voter. Consider the set A_V ≜ {z ∈ R^d | ∃h ∈ ∇_z L(⃗θ, z), ∥h∥₂ ≤ 1/V} of points z where the loss restricted to the other voters v ∈ [V] has a subgradient of norm at most 1/V. We now observe that, by behaving strategically, voter 0 can choose any value for the geometric median within A_V.
Lemma 8. For any s ∈ R^d, GM(s, ⃗θ) ∈ A_V. Moreover, for dim ⃗θ ≥ 2 and s ∈ A_V, we have GM(s, ⃗θ) = s.

Proof. Define ℓ₂(z) ≜ ∥z∥₂. Now note that (1 + V) ∇_z L(s, ⃗θ, z) = ∇_z ℓ₂(z − s) + V ∇_z L(⃗θ, z). In other words, for any subgradient h_{0:V} ∈ ∇L(s, ⃗θ, z), there exist h_0 ∈ ∇ℓ₂(z − s) and h_{1:V} ∈ ∇L(⃗θ, z) such that (1 + V) h_{0:V} = h_0 + V h_{1:V}.
Note that any subgradient of ℓ 2 has at most a unit ℓ 2 -norm (Lemma 1). Thus, ∥h 0 ∥ 2 ≤ 1.
Now, assume z ∉ A_V. Then for any h_{1:V} ∈ ∇L(⃗θ, z), we must have ∥h_{1:V}∥₂ > 1/V. As a result,

(1 + V) ∥h_{0:V}∥₂ = ∥V h_{1:V} + h_0∥₂ ≥ V ∥h_{1:V}∥₂ − 1 > 0.    (22)

Thus, 0 ∉ ∇L(s, ⃗θ, z), which means that z cannot be a geometric median. For any s ∈ R^d, we thus necessarily have GM(s, ⃗θ) ∈ A_V.
Now assume that s ∈ A_V. Then there must exist h_{1:V} ∈ ∇L(⃗θ, s) such that V ∥h_{1:V}∥₂ ≤ 1. Thus h_0 ≜ −V h_{1:V} ∈ ∇ℓ₂(z − s) for z = s, since the set of subgradients of ℓ₂ at 0 is the closed unit ball. We then have h_0 + V h_{1:V} = 0 ∈ ∇L(s, ⃗θ, s). Thus s minimizes L(s, ⃗θ, ·). The uniqueness of the geometric median for dim(s, ⃗θ) ≥ dim ⃗θ ≥ 2 (Proposition 4) then implies that GM(s, ⃗θ) = s.
We now provide the detailed proof of Theorem 1 by formalizing the example of Figure 2.
Proof of Theorem 1. Define θ_1 = (−X, −1), θ_2 = (−X, 1), θ_3 = (X, −1) and θ_4 = (X, 1), with X ≥ 8. We define the sum of distances restricted to these four inputs as

L_0(z) ≜ (1/4) Σ_{v=1}^{4} ∥θ_v − z∥₂.    (23)
Since 0 is a center of symmetry of the four inputs, it is the geometric median. Moreover, it can then be shown that the Hessian matrix at this optimum is

H ≜ ∇²_z L_0(0) = (1/4) (1 + X²)^{−3/2} [[1, 0], [0, X²]].    (24)
Note that the ratio between the largest and smallest eigenvalues of this Hessian matrix H is X², which can take arbitrarily large values. This observation turns out to be at the core of our proof. The eigenvalues also yield bounds on the norm of a vector to which H is applied. Using the inequality X ≥ 1,

(1/(32X³)) ∥z∥₂ ≤ ∥Hz∥₂ ≤ (1/(4X)) ∥z∥₂ ≤ ∥z∥₂.    (25)
In the vicinity of 0, since ∇_z L_0(0) = 0 and since L_0 is infinitely differentiable at 0, we then have

∇_z L_0(z) = Hz + ε(z),    (26)
where ∥ε(z)∥₂ = O(∥z∥₂²) when z → 0. In fact, for X ≥ 1, we know that there exists A such that, for all z ∈ B(0, 1), where B(0, 1) is the unit Euclidean ball centered on 0, we have ∥ε(z)∥₂ ≤ A ∥z∥₂². We also define

λ ≜ inf_{z∈B(0,1)} min SP(∇²_z L_0(z))  and  µ ≜ sup_{z∈B(0,1)} max SP(∇²_z L_0(z)),    (27)

the minimal and maximal eigenvalues of the Hessian matrix of L_0 over the ball B(0, 1). By continuity (Lemma 20) and strong convexity, we know that µ ≥ λ > 0. We then have λI ⪯ ∇²_z L_0(z) ⪯ µI over B(0, 1). From this, it follows that

λ ∥z∥₂ ≤ ∥∇_z L_0(z)∥₂ ≤ µ ∥z∥₂,    (28)
for all z ∈ B(0, 1). Now, since ∇ 2 z L 0 (z) ⪰ 0 for all z ∈ R d , from this we also deduce that ∥∇ z L 0 (z)∥ 2 ≥ λ if z / ∈ B(0, 1).
Now consider V honest voters such that V/4 ∈ N and θ_{4k+j} = θ_j, for j ∈ [4] and k ∈ {0, 1, . . . , V/4 − 1}. We denote by ⃗θ_V this vector family. For any strategic vote s of voter 0, we then have
(1 + V) L(s, ⃗θ_V, z) = ∥s − z∥₂ + V L_0(z).    (29)
Note that we then have
(1 + V) ∇_z L(s, ⃗θ_V, z) = u_{z−s} + V ∇L_0(z),    (30)
where u_x ≜ x/∥x∥₂ is the unit vector in the same direction as x. For all z ∉ B(0, 1), we then have

∥∇_z L(s, ⃗θ_V, z)∥₂ ≥ (V ∥∇L_0(z)∥₂ − ∥u_{z−s}∥₂)/(1 + V) ≥ (Vλ − 1)/(1 + V) > 0,    (31)
for V > 1/λ. Thus, for V > 1/λ, we know that, for any s, we have GM(s, ⃗θ_V) ∈ B(0, 1), where the inequality ∥ε(z)∥₂ ≤ A ∥z∥₂² holds. Since L_0 is strictly convex, there exists a unique α_V > 0 such that ∥∇_z L_0(α_V (X³, 1))∥₂ = 1/V. Denote g_V ≜ α_V (X³, 1). Now define

t = t(V) ≜ g_V + (1/√V) ∇_z L_0(g_V).    (32)
The pull of t on g_V is then the unit force with direction t − g_V = (1/√V) ∇_z L_0(g_V). Since ∥∇_z L_0(g_V)∥₂ = 1/V, this unit vector must be V ∇_z L_0(g_V). Plugging this into the gradient of L (Equation (30)) shows that ∇_z L(t, ⃗θ_V, g_V) = 0. Therefore, Lemma 2 and the uniqueness of the geometric median (Proposition 4) allow us to conclude that g_V is the geometric median when voter 0 truthfully reports t, i.e., g_V = GM(t, ⃗θ_V). Also, we have
∥t − GM(t, ⃗θ_V)∥₂ = (1/√V) ∥∇_z L_0(g_V)∥₂ = V^{−3/2}.    (33)
Since g_V is the geometric median of (t, ⃗θ_V), we know that, for V > 1/λ, we have g_V ∈ B(0, 1). As a result, we have λ ∥g_V∥₂ ≤ 1/V = ∥∇_z L_0(g_V)∥₂ ≤ µ ∥g_V∥₂, and thus

1/(µV) ≤ ∥g_V∥₂ ≤ 1/(λV).    (34)
Now, suppose that, instead of reporting t, voter 0 reports s, which is approximately the orthogonal projection of t onto the ellipsoid {z | ∥Hz∥₂ ≤ 1/V}. More precisely, voter 0's strategic vote is defined as

s = s(V) ≜ t − (2/√V) (g_Vᵀ HHH g_V/∥HH g_V∥₂²) HH g_V    (35)
= g_V + (1/√V) ∇_z L_0(g_V) − (2/√V) (g_Vᵀ HHH g_V/∥HH g_V∥₂²) HH g_V.    (36)
Given the inequalities ∥Hz∥₂ ≤ ∥z∥₂ (Equation (25)) and ∥g_V∥₂ ≤ 1/(λV), the norm of s can be upper-bounded by

∥s∥₂ ≤ ∥g_V∥₂ + (1/√V) ∥∇_z L_0(g_V)∥₂ + (2/√V) (∥Hg_V∥₂ ∥HHg_V∥₂/∥HHg_V∥₂²) ∥HHg_V∥₂ ≤ (1 + (2 + λ) V^{−1/2})/(λV).    (37)
Assuming V ≥ 1 + 3/λ then implies ∥s∥₂ ≤ (3 + λ)/(λV) ≤ 1 and ∥s∥₂ = O(1/V). As a result, ∥ε(s)∥₂ ≤ A ∥s∥₂² = O(1/V²), so that

∥∇_z L_0(s)∥₂² = ∥Hs + ε(s)∥₂² = ∥Hs∥₂² + 2 ε(s)ᵀ Hs + ∥ε(s)∥₂² = ∥Hs∥₂² + O(1/V³),    (40)
where the hidden constant in O(1/V³) depends on λ and A. Moreover, given that ∥g_V∥₂ = O(1/V) (Equation (34)) and ∇_z L_0(g_V) = Hg_V + ε(g_V), by Equation (36), we have

∥Hs∥₂² = ∥Hg_V∥₂² + (2/√V) (Hg_V)ᵀ H (Hg_V + ε(g_V) − 2 (g_Vᵀ HHH g_V/∥HHg_V∥₂²) HHg_V) + O(1/V³)    (41)
= ∥∇_z L_0(g_V) − ε(g_V)∥₂² + (2/√V) g_Vᵀ HHH g_V − (4/√V) g_Vᵀ HHH g_V + O(1/V³)    (42)
≤ ∥∇_z L_0(g_V)∥₂² − (2/√V) g_Vᵀ HHH g_V + O(1/V³)    (43)
≤ 1/V² − (2/√V) g_Vᵀ HHH g_V + O(1/V³).    (44)
The hidden constants in O(1/V³) depend on λ, A, H and X. Since H has strictly positive eigenvalues and does not depend on V, we know that g_Vᵀ HHH g_V = Θ(∥g_V∥₂²) = Θ(1/V²). In particular, for V large enough, (2/√V) g_Vᵀ HHH g_V = Θ(1/V^{2.5}) takes larger values than O(1/V³). We then have ∥∇_z L_0(s)∥₂ < 1/V, which means that s lies inside the achievable set A_V. Therefore, Lemma 8 implies that, for V large enough, by reporting s instead of t, voter 0 can move the geometric median from g_V to s, i.e., we have GM(s, ⃗θ_V) = s. But then, the distance between voter 0's preferred vector t and the manipulated geometric median is given by

∥GM(s, ⃗θ_V) − t∥₂ = ∥(2/√V) (g_Vᵀ HHH g_V/∥HHg_V∥₂²) HHg_V∥₂ = (2/√V) (Hg_V)ᵀ(HHg_V)/∥HHg_V∥₂.    (45)

Now recall that g_V = α_V (X³, 1). Moreover, α_V ∥H(X³, 1)∥₂ = ∥Hg_V∥₂ = ∥∇_z L_0(g_V) − ε(g_V)∥₂ = 1/V + O(1/V²). Since H(X³, 1) = (1/4)(1 + X²)^{−3/2} (X³, X²) = (1/4) X² (1 + X²)^{−3/2} (X, 1), we have ∥H(X³, 1)∥₂ = (1/4) X² (1 + X²)^{−1}. Thus, α_V = 4X^{−2}(1 + X²)/V + O(1/V²). As a result, we have HHg_V = (1/16) α_V X³ (1 + X²)^{−3} (1, X). The norm of this vector is then ∥HHg_V∥₂ = (1/16) α_V X³ (1 + X²)^{−5/2}. Moreover, its scalar product with Hg_V yields (Hg_V)ᵀ(HHg_V) = (1/32) α_V² X⁶ (1 + X²)^{−9/2}. We thus have

∥GM(s, ⃗θ_V) − t∥₂ = (α_V/√V) X⁶ (1 + X²)^{−9/2}/(X³ (1 + X²)^{−5/2})    (46)
= 4X/((1 + X²) V^{3/2}) + O(V^{−5/2})    (47)
= (4X/(1 + X²)) ∥t − GM(t, ⃗θ_V)∥₂ + O(V^{−5/2}).    (48)
In particular, for V large enough, we can then guarantee that

∥t − GM(t, ⃗θ_V)∥₂ > ((1 + X²)/(8X)) ∥GM(s, ⃗θ_V) − t∥₂    (49)
= (1 + (X² − 8X + 1)/(8X)) ∥GM(s, ⃗θ_V) − t∥₂.    (50)
This proves that the geometric median fails to be ((X² − 8X + 1)/(8X))-strategyproof. But our proof holds for any value of X, and (X² − 8X + 1)/(8X) → ∞ as X → ∞. Thus, there is no value of α such that the geometric median is α-strategyproof.
C PROOFS AND DIFFERENT RESULTS FROM SECTION 4
In this section, we provide a formal proof of the main result of our paper, Theorem 2. We start by proving a few useful facts about the infinite geometric median g_∞ defined on the distribution of the reported vectors.
C.1 Preliminary Results for the Infinite Limit Case
Lemma 9. Under Assumption 1, with probability 1, we have dim ⃗ θ V = min {V − 1, d}.
Proof. We prove this by induction over V . For V = 1, the lemma is obvious.
Assume now that the lemma holds for V ≤ d. Then dim ⃗ θ V = V − 1 with probability 1. The affine space generated by ⃗ θ V is thus a hyperplane, whose Lebesgue measure is zero. Assumption 1 then implies that the probability of drawing a point on this hyperplane is zero. In other words, with probability 1, θ V +1 does not belong to the hyperplane, which implies that dim ⃗ θ V +1 = dim ⃗ θ V + 1 = (V + 1) − 1 ≤ d, which proves the induction.
Now assume that the lemma holds for
V ≥ d + 1. Then dim ⃗ θ V = d with probability 1. We then have dim ⃗ θ V +1 ≥ dim ⃗ θ V = d.
Since this dimension cannot be strictly larger than d, we must then have dim ⃗ θ V +1 = d. This concludes the proof.
Combining Lemma 9 with Proposition 4 guarantees the uniqueness of the geometric median for V ≥ 3 under Assumption 1.
Lemma 10. If d ≥ k + 1, then x ↦ ∥x∥₂^{−k} is integrable on B(0, 1), and ∫_{B(0,ε)} ∥x∥₂^{−k} dx = O(ε) as ε → 0.

Proof. Consider the hyperspherical coordinates (r, φ_1, . . . , φ_{d−1}), where x_j = r (∏_{i=1}^{j−1} cos φ_i) sin φ_j. We then have dx = r^{d−1} dr ∏_{i=1}^{d−1} cos^{d−i−1} φ_i dφ_i. The integral becomes

∫_{B(0,1)} ∥x∥₂^{−k} dx = C(d) ∫_0^1 r^{−k} r^{d−1} dr = C(d) ∫_0^1 r^{d−1−k} dr,    (51)

where C(d) is obtained by integrating appropriately all the angles of the hyperspherical coordinates, which are clearly integrable. But ∫_0^1 r^{d−1−k} dr is also finite when d − 1 − k ≥ 0. We conclude by noting that we then have ∫_0^ε r^{d−1−k} dr ∝ ε^{d−k} = O(ε) for d − k ≥ 1.
Proposition 12. Under Assumption 1, L ∞ is five-times continuously differentiable with a strictly positive definite Hessian matrix on Θ. As a corollary, the geometric median g ∞ is unique and lies in Θ.
Proof. Let z ∈ Θ and δ > 0 such that B(z, δ) ⊂ Θ. By Leibniz's integral rule, we obtain

∇L_∞(z) = ∫_Θ ∇_z ∥z − θ∥₂ p(θ) dθ = ∫_Θ u_{z−θ} p(θ) dθ.    (52)
To deal with the singularity at θ = z, we first isolate the integral in the ball B(z, ε), for some 0 < ε ≤ δ. On this compact set, p is continuous and thus upper-bounded. We can then apply the previous lemma for k = 0 to show that this singularity is negligible as ε → 0. Moreover, Leibniz's integral rule does apply, since u z−θ p(θ) can be upper-bounded by p(θ) outside of B(z, δ), which is integrable by Assumption 1. This shows that L ∞ is continuously differentiable. To prove that it is twice-differentiable, we note that Leibniz's integral rule applies again. Indeed, we have
∇²L_∞(z) = ∫_Θ ∇²_z ∥z − θ∥₂ p(θ) dθ = ∫_Θ ((I − u_{z−θ} u_{z−θ}ᵀ)/∥z − θ∥₂) p(θ) dθ.    (53)

But note that each coordinate of the matrix (I − u_{z−θ} u_{z−θ}ᵀ)/∥z − θ∥₂ is at most 1/∥z − θ∥₂. By virtue of the previous lemma, for d ≥ 2, this is integrable around θ = z. Moreover, by isolating the integration in the ball B(z, ε), we show that the impact of the integration in this ball is negligible as ε → 0. Finally, the rest of the integration is finite, as (1/∥z − θ∥₂) p(θ) can be upper-bounded by (1/δ) p(θ) outside of B(z, δ), which is integrable by Assumption 1.
The cases of the third, fourth, and fifth derivatives are handled similarly, now with the bounds |∂³_{ijk} ∥z − θ∥₂| ≤ 6/∥z − θ∥₂², |∂⁴_{ijkl} ∥z − θ∥₂| ≤ 36/∥z − θ∥₂³ and |∂⁵_{ijklm} ∥z − θ∥₂| ≤ 300/∥z − θ∥₂⁴, and using d ≥ 5.
To prove the strict convexity, consider a point z ∈ Θ such that p(z) > 0. By continuity of p, for any two orthogonal unit vectors u 1 and u d and η > 0 small enough, we must have p(z + ηu 1 ) > 0 and p(z + ηu d ) > 0. For any ε > 0, there must then be a strictly positive probability to draw a point in B(z, ε), a point in B(z + ηu 1 , ε), and a point in B(z + ηu d , ε). Moreover, for ε much smaller than η, then the three points thereby drawn cannot be colinear. We then obtain a situation akin to the proof of Proposition 4. By the same argument, this suffices to prove that the Hessian matrix must be positive definite. Therefore, L ∞ is strictly convex.
It follows straightforwardly from this that the geometric median is unique. Its existence can be derived by considering a ball B(0, A) of probability at least 1/2 according to θ̃. If ∥z∥₂ ≥ A + 2E∥θ∥₂, then

L_∞(z) ≥ (1/2) (A + 2E∥θ∥₂ − A) ≥ E∥θ∥₂ = L_∞(0).    (54)
Thus L ∞ must reach a minimum in B(0, A + 2E ∥θ∥ 2 ). Finally, we conclude that the geometric median must belong to Θ, by re-using the argument of Proposition 9.
C.2 Proof Steps for Theorem 2
In this section, we provide the full proof of Theorem 2, which consists of the following steps. First, in Section C.2.1, we identify sufficient conditions under which, for a given function F, the set {z : ∥∇F(z)∥₂ ≤ 1} is convex. We then use this result to find sufficient conditions for the geometric median to be α-strategyproof in Section C.2.2. Then, in Section C.2.3, we show that these conditions are satisfied with high probability when the number of voters is large enough. Next, Section C.2.4 proves that the SKEW function is continuous, which is necessary for the proof of our theorem. Finally, Section C.2.5 combines these steps (Lemmas 13, 14, 15, 18, and 16) to prove Theorem 2.
C.2.1 Higher Derivatives and Unit-norm Gradients
Note that our analysis involves the third derivative tensor to guarantee the convexity of the achievable set (defined in (1)). Therefore, here we provide a discussion about higher-order derivatives. We consider here a three-times continuously differentiable convex function F , and we study the set of points z such that ∥∇F (z)∥ 2 ≤ 1. In particular, we provide a sufficient condition for the convexity of this set, based on the study of the first three derivatives of F . This convexity guarantee then allows us to derive a sufficient condition on L 1:V to guarantee α-strategyproofness.
To obtain such a guarantee, let us recall a few facts about higher derivatives. In general, the n-th derivative of a function F : R^d → R at a point z is a (symmetric) tensor ∇ⁿF(z) : R^d ⊗ . . . ⊗ R^d → R (with n tensor factors), which inputs n vectors and outputs a scalar. This tensor ∇ⁿF(z) is linear in each of its n input vectors. More precisely, its value for input [x_1 ⊗ . . . ⊗ x_n] is

∇ⁿF(z)[x_1 ⊗ . . . ⊗ x_n] = Σ_{i_1∈[d]} . . . Σ_{i_n∈[d]} (x_1[i_1] x_2[i_2] . . . x_{n−1}[i_{n−1}] x_n[i_n]) ∂ⁿ_{i_1...i_n} F(z),    (55)
where ∂ n i1...in F (z) is the n-th partial derivative of F with respect to the coordinates i 1 , i 2 , . . . , i n (by the symmetry of derivation, the order in which F is derived along the different coordinates does not matter).
For n = 1, we see that ∇F (z) is simply a linear form R d → R. By Euclidean duality, ∇F (z) can thus be regarded as a vector, called the gradient, such that ∇F (z)[x] = x T ∇F (z). Note that if F is assumed to be convex, but not differentiable, ∇F (z) represents its set of subgradients at point z, i.e., h ∈ ∇F (z) if and only if F (z + δ) ≥ F (z) + h T δ for all δ ∈ R d . From this definition, it follows straightforwardly that z minimizes F if and only if 0 ∈ ∇F (z).
For n = 2, ∇²F(z) is now a bilinear form R^d ⊗ R^d → R. By the isomorphism between (symmetric) bilinear forms and (symmetric) matrices, ∇²F(z) can equivalently be regarded as a (symmetric) matrix, called the Hessian matrix, such that ∇²F(z)[x ⊗ y] = xᵀ ∇²F(z) y. A bilinear form B : R^d ⊗ R^d → R is said to be positive semi-definite (respectively, positive definite) if B[x ⊗ x] ≥ 0 for all x ∈ R^d (respectively, B[x ⊗ x] > 0 for all x ≠ 0). If so, we write B ⪰ 0 (respectively, B ≻ 0). Moreover, given any x ∈ R^d, the function y ↦ B[x ⊗ y] becomes a linear form, which we denote B[x]. When the context is clear, B[x] can equivalently be regarded as a vector. Finally, given two bilinear forms A, B : R^d ⊗ R^d → R, we can define their composition A · B : R^d ⊗ R^d → R by A · B[x ⊗ y] ≜ A[x ⊗ B[y]] = xᵀABy, where, in the last equation, A and B are regarded as matrices.
We also need to analyze the third derivative of F , which can thus be regarded as a 3-linear form ∇ 3 F (z) : R d ⊗R d ⊗R d → R. Note as well that, for any 3-linear form W and any fixed first input w ∈ R d , the function (x ⊗ y) → W [w ⊗ x ⊗ y] is now a bilinear (symmetric) form R d ⊗ R d → R. This (symmetric) bilinear form will be written W [w] or W · w, which can thus equivalently be regarded as a (symmetric) matrix. Similarly, W [x ⊗ y] can be regarded as a linear form R d → R, or, by Euclidean duality, as a vector in R d .
Finally, we can state the following lemma, which provides a sufficient condition for the convexity of the sets of z ∈ R d with a unit-norm F -gradient. Figure 7: Illustration of what can be gained for target vector t ≜ g † 0:V + γ∇L 1:V (g † 0:V ). The orthogonal projection π 0 of t on the tangent hyperplane of the achievable set going through g † 0:V yields a lower bound on what can be achieved by voter 0 through their strategic vote s. This lower bound depends on the angle between ∇L 1:V (g † 0:V ) and the normal to the hyperplane ∇ 2 L 1:V (g † 0:V ) · ∇L 1:V (g † 0:V ).
Lemma 11. Assume that C ⊂ R d is convex and that
∇ 2 F (z) · ∇ 2 F (z) + ∇ 3 F (z) · ∇F (z) ⪰ 0 for all z ∈ C. Then z → ∥∇F (z)∥ 2 2 is convex on C. Proof. Fix i ∈ [d]
By Taylor approximation of ∂_i F around z, for δ → 0, we have

∂_i F(z + δ) = ∂_i F(z) + Σ_{j∈[d]} δ_j ∂²_{ij}F(z) + (1/2) Σ_{j,k∈[d]} δ_j δ_k ∂³_{ijk}F(z) + o(∥δ∥₂²).    (56)
This equation can equivalently be written

∇F(z + δ) = ∇F(z) + ∇²F(z)[δ] + (1/2) ∇³F(z)[δ ⊗ δ] + o(∥δ∥₂²).    (57)
Plugging this into the computation of the square norm of the gradient yields

∥∇F(z + δ)∥₂² = ∥∇F(z) + ∇²F(z)[δ] + (1/2) ∇³F(z)[δ ⊗ δ] + o(∥δ∥₂²)∥₂²    (58)
= ∥∇F(z)∥₂² + 2 ∇²F(z)[∇F(z) ⊗ δ] + ∥∇²F(z)[δ]∥₂² + ∇³F(z)[∇F(z) ⊗ δ ⊗ δ] + o(∥δ∥₂²)    (59)
= ∥∇F(z)∥₂² + 2 ∇²F(z)[∇F(z) ⊗ δ] + (∇²F(z) · ∇²F(z) + ∇³F(z) · ∇F(z))[δ ⊗ δ] + o(∥δ∥₂²).    (60)
Therefore, the matrix 2(∇²F(z) · ∇²F(z) + ∇³F(z) · ∇F(z)) is the Hessian matrix of z ↦ ∥∇F(z)∥₂² = ∇F(z)ᵀ∇F(z). Yet a twice-differentiable function with a positive semi-definite Hessian matrix is convex.
Lemma 12. Assume that F is convex, and that there exists z * ∈ R d and β > 0 such that, for any unit vector u, there exists a subgradient h ∈ ∇F (z * + βu) such that u T h > 1. Then the set A ≜ z ∈ R d ∃h ∈ ∇F (z), ∥h∥ 2 ≤ 1 of points where ∇F has a subgradient of at most a unit norm is included in the ball B(z * , β).
Proof. Let z ∉ B(z*, β). Then there must exist γ > β and a unit vector u such that z − z* = γu. Denote z_u ≜ z* + βu. We then have z − z_u = (γ − β)u. Moreover, we know that there exists h_{z_u} ∈ ∇F(z_u) such that uᵀh_{z_u} > 1. By convexity of F, for any h_z ∈ ∇F(z), we then have

(z − z_u)ᵀ(h_z − h_{z_u}) = (γ − β) uᵀ(h_z − h_{z_u}) ≥ 0.    (61)
From this, it follows that ∥h z ∥ 2 ≥ u T h z ≥ u T h zu > 1. Thus z / ∈ A.
C.2.2 Sufficient Conditions for α-Strategyproofness
Recall from (1) that the achievable set A V consists of the points z such that there exists a subgradient h ∈ ∇ z L 1:V (z) such that ∥h∥ 2 ≤ 1/V . Below, we identify a sufficient condition on A V to guarantee α-strategyproofness. Note that as explained in the previous section, this analysis involves the third derivative tensor to guarantee the convexity of the achievable set, so that the proof ideas illustrated in Figure 7 are applicable.
Lemma 13. Assume that dim ⃗ θ V ≥ 2 and that the following conditions hold for some β > 0:
• Smoothness: L 1:V is three-times continuously differentiable on B(g 1:V , 2β).
• Contains A V : For all unit vectors u, u T ∇L 1:V (g 1:V + βu) > 1/V .
• Convex A V : ∀z ∈ B(g 1:V , β), ∇ 2 L 1:V (z) · ∇ 2 L 1:V (z) + ∇ 3 L 1:V (z) · ∇L 1:V (z) ⪰ 0.
• Bounded skewness: ∀z ∈ B(g 1:V , β), SKEW(∇ 2 L 1:V (z)) ≤ α.
Then the geometric median is α-strategyproof for voter 0.
Proof. Given Lemma 8, we know that, for t ∈ A V , we have GM(t, ⃗ θ V ) − t 2 = 0, which guarantees α-strategyproofness for such voters.
Now assume t / ∈ A V , and recall that we defined g † 0:V ≜ GM(t, ⃗ θ V ) as the truthful geometric median. By Lemma 8, we know that g † 0:V ∈ A V . Thus t ̸ = g † 0:V . Moreover, applying Lemma 12 to F ≜ V L 1:V guarantees that A V ⊂ B(g 1:V , β). The first condition shows that L 1:V is 3-times differentiable in a neighborhood of g † 0:V . Plus, given the third condition, by Lemma 11, we know that A V is a convex set. Now, by definition, g † 0:V must minimize the loss L 0:V (t, ·), i.e., we must have
0 = (1 + V) ∇L_{0:V}(t, g†_{0:V}) = u_{g†_{0:V}−t} + V ∇L_{1:V}(g†_{0:V}).    (62)
Equivalently, we have u_{t−g†_{0:V}} = V ∇L_{1:V}(g†_{0:V}). This means that ∥∇L_{1:V}(g†_{0:V})∥₂ = 1/V, and that there must exist γ > 0 such that t = g†_{0:V} + γ ∇L_{1:V}(g†_{0:V}). For δ ∈ R^d small enough, Taylor approximation then yields

∥∇F(g†_{0:V} + δ)∥₂² = ∥∇F(g†_{0:V}) + ∇²F(g†_{0:V})[δ] + o(∥δ∥₂)∥₂²    (63)
= ∥∇F(g†_{0:V})∥₂² + 2 ∇²F(g†_{0:V})[∇F(g†_{0:V}) ⊗ δ] + o(∥δ∥₂)    (64)
= 1 + 2hᵀδ + o(∥δ∥₂),    (65)
where h ≜ ∇²F(g†_{0:V}) · ∇F(g†_{0:V}). Since z ↦ ∥∇F(z)∥₂² is convex on B(g_{1:V}, β), we know that, in this ball, 2h is a subgradient of z ↦ ∥∇F(z)∥₂² at g†_{0:V}. Thus, in fact, for all δ ∈ B(g_{1:V} − g†_{0:V}, β), we have ∥∇F(g†_{0:V} + δ)∥₂² ≥ 1 + 2hᵀδ. Now assume that g†_{0:V} + δ ∈ A_V. Then we must have 2hᵀδ ≤ ∥∇F(g†_{0:V} + δ)∥₂² − 1 ≤ 0. In other words, we must have A_V ⊂ H, where H ≜ {z ∈ R^d | hᵀz ≤ hᵀg†_{0:V}} is the half-space of the hyperplane that goes through the truthful geometric median g†_{0:V}, and whose normal direction is h.
Using Lemma 8 and the inclusion A_V ⊂ H then yields

inf_{s∈R^d} ∥GM(s, ⃗θ_V) − t∥₂ = inf_{z∈A_V} ∥z − t∥₂ ≥ inf_{z∈H} ∥z − t∥₂.    (66)
Yet the minimal distance between a point t and a half space H is reached by the orthogonal projection π 0 of t onto H, as depicted in Figure 7. We then have
∥t − π_0∥₂ = γ ∇L_{1:V}(g†_{0:V})ᵀ h/∥h∥₂ = γ ∇²L_{1:V}(g†_{0:V})[∇L_{1:V}(g†_{0:V}) ⊗ ∇L_{1:V}(g†_{0:V})]/∥∇²L_{1:V}(g†_{0:V}) · ∇L_{1:V}(g†_{0:V})∥₂    (67)
≥ γ ∥∇L_{1:V}(g†_{0:V})∥₂/(1 + SKEW(∇²L_{1:V}(g†_{0:V}))) ≥ γ ∥∇L_{1:V}(g†_{0:V})∥₂/(1 + α),    (68)
using our fourth assumption. Yet note that ∥g†_{0:V} − t∥₂ = γ ∥∇L_{1:V}(g†_{0:V})∥₂. We thus obtain ∥g†_{0:V} − t∥₂ ≤ (1 + α) ∥t − π_0∥₂ ≤ (1 + α) inf_{s∈R^d} ∥GM(s, ⃗θ_V) − t∥₂, which is the lemma.
C.2.3 Finite-voter Guarantees
We show here that, for a large enough number of voters and with high probability, finite-voter approximations are well-behaved and, in some critical regards, approximate the infinite case correctly. The global idea of the proof is illustrated in Figure 4. In particular, we aim to show that, when V is large, the achievable set A_V is approximately an ellipsoid within a region where L_{1:V} is infinitely differentiable. More precisely, we show that, with arbitrarily high probability under the drawing of the other voters' vectors, for V large enough, the conditions of Lemma 13 are satisfied for β ≜ Θ(V^{−1}) and α ≜ SKEW(H_∞) + ε.
An Infinitely-differentiable Region. Now, in order to apply Lemma 13, we need to identify a region near g ∞ where, with high probability, the loss function L 1:V is infinitely differentiable. To do this, we rely on the observation that, in high dimensions, random points are very distant from one another. More precisely, the probability of randomly drawing a point ε-close to the geometric median g ∞ is approximately proportional to ε d , which is exponentially small in d. This allows us to prove that, with high probability, none of the first V voters will be V −r1 -close to the geometric median, where r 1 > 1/d is a positive constant.
Lemma 14. Under Assumption 1, for any δ_1 > 0 and r_1 > 1/d, there exists V_1(δ_1) ∈ N such that, for V ≥ V_1(δ_1), with probability at least 1 − δ_1, we have ∥θ_v − g_∞∥₂ ≥ V^{−r_1} for all voters v ∈ [V]. In particular, in such a case, L_{1:V} is then infinitely differentiable on B(g_∞, V^{−r_1}).
Proof. Denote p_∞ ≜ 1 + p(g_∞), which strictly exceeds the probability density at g_∞. Since p is continuous, we know that there exists ε_0 > 0 such that p(z) ≤ p_∞ for all z ∈ B(g_∞, ε_0). Thus, for any 0 < ε ≤ ε_0, we know that P[θ ∈ B(g_∞, ε)] ≤ volume_d(ε) p_∞, where volume_d(ε) is the volume of the Euclidean d-dimensional ball with radius ε. Yet this volume is known to be upper-bounded by 8π²ε^d/15 (Smith and Vamanamurthy, 1989). Thus, for V ≥ ε_0^{−1/r_1} (and thus V^{−r_1} ≤ ε_0), we have P[θ ∈ B(g_∞, V^{−r_1})] ≤ (8π²/15) p_∞ V^{−r_1 d}. Now note that

P[∀v ∈ [V], θ_v ∉ B(g_∞, V^{−r_1})] = 1 − P[∃v ∈ [V], θ_v ∈ B(g_∞, V^{−r_1})]    (69)
≥ 1 − Σ_{v∈[V]} P[θ_v ∈ B(g_∞, V^{−r_1})] ≥ 1 − (8π²/15) p_∞ V^{1−r_1 d}.    (70)

Now recall that r_1 > 1/d. We thus have (8π²/15) p_∞ V^{1−r_1 d} → 0 as V → ∞. But now, taking V ≥ V_1(δ_1) ≜ max{ε_0^{−1/r_1}, (8π²p_∞/15δ_1)^{1/(r_1 d−1)}}, we see that, with probability at least 1 − δ_1, no voter v ∈ [V] is V^{−r_1}-close to g_∞. Given the absence of singularity in B(g_∞, V^{−r_1}) in such a case, L_{1:V} is infinitely differentiable in this region.
Approximation of the Infinite Geometric Median. The following lemma shows that as V grows, g 1:V gets closer to g ∞ with high probability.
Lemma 15. Under Assumption 1, for any δ_2 > 0 and 0 < r_2 < 1/2, there exists V_2(δ_2) ∈ N such that, for all V ≥ V_2(δ_2), with probability at least 1 − δ_2, we have ∥g_{1:V} − g_∞∥₂ ≤ V^{−r_2}.
Proof. Since ∇L_∞(g_∞) = 0 and L_∞ is three times differentiable, using Taylor's theorem around g_∞, for any z ∈ B(0, 1), we have ∇L_∞(g_∞ + z) = H_∞ z + O(∥z∥₂²). In particular, there exists a constant A such that, for any z ∈ B(0, 1), we have ∥∇L_∞(g_∞ + z) − H_∞ z∥₂ ≤ A ∥z∥₂². Now consider an orthonormal eigenvector basis u_1, . . . , u_d of H_∞, with respective eigenvalues λ_1, . . . , λ_d. Note that since H_∞ is symmetric, such a basis exists. We then define
λ_min ≜ inf_{z∈B(g_∞,1)} min SP(∇²L_∞(z)),    (71)

the minimum eigenvalue of the Hessian matrix ∇²L_∞(z) over the closed ball B(g_∞, 1). Note that, using the same argument as in Proposition 12, ∇²L_∞(z) is continuous and positive definite for all z ∈ B(g_∞, 1); therefore, λ_min is strictly positive. Now, for any i ∈ [d], j ∈ {−1, 1} and 0 < ε < 1, we know that
∥∇L_∞(g_∞ + jεu_i) − λ_i jεu_i∥₂ = ∥∇L_∞(g_∞ + jεu_i) − H_∞ jεu_i∥₂ ≤ A ∥jεu_i∥₂² = Aε².    (72)
Now define η ≜ min{(1 − 2r_2)/(4r_2), 1}. Since 0 < r_2 < 1/2, we clearly have η > 0. Moreover, for ε < 1, since 2 ≥ 1 + η, we have ε² ≤ ε^{1+η}. Therefore, ∥∇L_∞(g_∞ + jεu_i) − H_∞ jεu_i∥₂ ≤ Aε^{1+η}. For any voter v ∈ [V], we then define the random unit vector

X_{ijv} ≜ (θ_v − g_∞ − jεu_i)/∥θ_v − g_∞ − jεu_i∥₂.    (73)
Note that, since θ̃ is absolutely continuous with respect to the Lebesgue measure (Assumption 1), all vectors X_{ijv} are well-defined with probability 1. By the definition of L_{1:V} and L_∞, we then have

∇L_{1:V}(g_∞ + jεu_i) = (1/V) Σ_{v=1}^{V} X_{ijv}  and  ∇L_∞(g_∞ + jεu_i) = E_{θ_v}[X_{ijv}].    (74)
Thus, for all k ∈ [d], ∇L_{1:V}(g_∞ + jεu_i)[k] is just the average of V i.i.d. random variables within the range [−1, 1], whose expectation is equal to ∇L_∞(g_∞ + jεu_i)[k]. Therefore, by the Chernoff bound, defining the event E_{ijk}(t) ≜ {|∇L_{1:V}(g_∞ + jεu_i)[k] − ∇L_∞(g_∞ + jεu_i)[k]| ≤ t} for every t > 0, we obtain P[E_{ijk}(t)] ≥ 1 − 2 exp(−t²V/2). Defining e_{ij} ≜ ∇L_{1:V}(g_∞ + jεu_i) − λ_i jεu_i, under the event E_{ijk}(Aε^{1+η}), by the triangle inequality, we obtain

|e_{ij}[k]| ≤ |∇L_{1:V}(g_∞ + jεu_i)[k] − ∇L_∞(g_∞ + jεu_i)[k]| + |∇L_∞(g_∞ + jεu_i)[k] − λ_i jεu_i[k]|    (75)
≤ Aε^{1+η} + ∥∇L_∞(g_∞ + jεu_i) − λ_i jεu_i∥₂ ≤ 2Aε^{1+η}.    (76)

Denoting E* the event where such inequalities hold for all i, k ∈ [d] and j ∈ {−1, 1}, and using the union bound, we have

P[E*] ≥ P[∩_{i∈[d]} ∩_{j∈{−1,1}} ∩_{k∈[d]} E_{ijk}(Aε^{1+η})] = P[¬ ∪_{i∈[d]} ∪_{j∈{−1,1}} ∪_{k∈[d]} ¬E_{ijk}(Aε^{1+η})]    (77)
≥ 1 − Σ_{i∈[d]} Σ_{j∈{−1,1}} Σ_{k∈[d]} P[¬E_{ijk}(Aε^{1+η})] ≥ 1 − 4d² exp(−A²ε^{2+2η}V/2).    (78)
Now note that, by Proposition 4, we know that L_{1:V} is convex. Therefore, for any i ∈ [d] and j ∈ {−1, 1}, using the fact that g_{1:V} minimizes L_{1:V}, we have

(g_{1:V} − g_∞ − jεu_i)ᵀ ∇L_{1:V}(g_∞ + jεu_i) = (g_{1:V} − g_∞ − jεu_i)ᵀ (λ_i jεu_i + e_{ij}) ≤ 0.    (79)
Rearranging the terms and noting that λ_i > 0 then yields

(g_{1:V} − g_∞)ᵀ (ju_i + e_{ij}/(ελ_i)) ≤ (jεu_i)ᵀ (ju_i + e_{ij}/(ελ_i)) = ε + (j/λ_i) u_iᵀ e_{ij} = ε + (j/λ_i) e_{ij}[i].    (80)

Now define ε_0 ≜ (λ_min/4dA)^{1/η}. Under E*, for ε ≤ ε_0, this then implies ∥e_{ij}∥_∞ ≤ 2Aε^{1+η} ≤ 2Aε ε_0^η = ελ_min/(2d) ≤ ελ_i/(2d). For every i ∈ [d] and j ∈ {−1, 1}, we then have

(g_{1:V} − g_∞)ᵀ (ju_i + e_{ij}/(ελ_i)) ≤ ε + (1/λ_i) ∥e_{ij}∥_∞ ≤ ε (1 + 1/(2d)) ≤ 3ε/2.    (81)

Now denote C ≜ ∥g_{1:V} − g_∞∥_∞. Thus, there exist i ∈ [d] and j ∈ {−1, 1} such that (g_{1:V} − g_∞)[i] = jC. We then obtain the lower bound

(g_{1:V} − g_∞)ᵀ (ju_i + e_{ij}/(ελ_i)) = C + (g_{1:V} − g_∞)ᵀ e_{ij}/(ελ_i) ≥ C − ∥g_{1:V} − g_∞∥₂ ∥e_{ij}∥₂/(ελ_i)    (82)
≥ C − d ∥g_{1:V} − g_∞∥_∞ ∥e_{ij}∥_∞/(ελ_i) ≥ C − C/2 = ∥g_{1:V} − g_∞∥_∞/2.    (83)

Combining this with (81) yields ∥g_{1:V} − g_∞∥_∞ ≤ 3ε, and thus ∥g_{1:V} − g_∞∥₂ ≤ √d ∥g_{1:V} − g_∞∥_∞ ≤ 3ε√d. Now note that, if V ≥ (3√d ε_0)^{−1/r_2}, then we have ε_V ≜ V^{−r_2}/(3√d) ≤ ε_0. Thus, under E* defined with ε_V, the previous argument applies, which implies ∥g_{1:V} − g_∞∥₂ ≤ V^{−r_2}, as required by the lemma.
Now take V ≥ V_2(δ_2) ≜ max{(2(9d)^{1+η} A^{−2} ln(4d²/δ_2))^{1/(1−2r_2−2ηr_2)}, (3√d ε_0)^{−1/r_2}}. By definition of η, we have η ≤ (1 − 2r_2)/(4r_2). As a result, using also the assumption r_2 < 1/2, we then have 1 − 2r_2 − 2ηr_2 ≥ (1 − 2r_2)/2 > 0. It then follows that V^{1−2r_2−2ηr_2} ≥ 2(9d)^{1+η} A^{−2} ln(4d²/δ_2). We then have

P[E*] ≥ 1 − 4d² exp(−A²ε_V^{2+2η}V/2) = 1 − 4d² exp(−A²V^{1−2r_2−2ηr_2}/(2(9d)^{1+η})) ≥ 1 − δ_2,    (84)
which is what was needed for the lemma.
Approximation of the Infinite Hessian Matrix. To apply Lemma 13, we need to control the values of the Hessian matrix of L_{1:V}. In this section, we show that, similarly to the finite-voter geometric median g_{1:V}, which is now known to be close to the infinite geometric median g_∞, the finite-voter Hessian matrix is close to the infinite Hessian matrix H_∞ at the infinite geometric median g_∞.

Lemma 16. Under Assumption 1, for 0 < 2r_1 < r_3 and for any ε_3, δ_3 > 0, there exists V_3(ε_3, δ_3) ∈ N such that, for all V ≥ V_3(ε_3, δ_3), with probability at least 1 − δ_3, there is no vote in the ball B(g_∞, V^{−r_1}) and, for all z ∈ B(g_∞, V^{−r_3}), we have ∥∇²L_{1:V}(z) − H_∞∥_∞ ≤ ε_3.
Before proving Lemma 16, we first start with an observation about unit vectors.

Lemma 17. For any 0 < r_1 < r_3, if ∥z∥₂ ≥ V^{−r_1} and ∥ρ∥₂ ≤ V^{−r_3}, then for any i ∈ [d], we have

|u_z[i] − u_{z+ρ}[i]| = O(V^{r_1−r_3}).    (85)
Proof. We have the inequalities

|u_z[i] − u_{z+ρ}[i]| = |z[i]/∥z∥₂ − (z+ρ)[i]/∥z+ρ∥₂| = |∥z+ρ∥₂ z[i] − ∥z∥₂ (z+ρ)[i]|/(∥z∥₂ ∥z+ρ∥₂)    (86)
≤ |(∥z+ρ∥₂ − ∥z∥₂) z[i] − ∥z∥₂ ρ[i]|/(∥z∥₂ (∥z∥₂ − ∥ρ∥₂))    (87)
≤ |∥z+ρ∥₂ − ∥z∥₂|/(∥z∥₂ − ∥ρ∥₂) + |ρ[i]|/(∥z∥₂ − ∥ρ∥₂)    (88)
≤ 2∥ρ∥₂/(∥z∥₂ − ∥ρ∥₂),    (89)
where we used the fact that |∥z+ρ∥₂ − ∥z∥₂| ≤ ∥ρ∥₂. We then have

|u_z[i] − u_{z+ρ}[i]| ≤ 2V^{−r_3}/(V^{−r_1} − V^{−r_3}) = 2V^{r_1−r_3}/(1 − V^{r_1−r_3}) = O(V^{r_1−r_3}),    (90)
which is the lemma.
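The coordinate-wise bound (89) behind Lemma 17 can be stress-tested numerically. A minimal sketch (ours, not part of the original analysis):

```python
import numpy as np

rng = np.random.default_rng(4)
for _ in range(1000):
    z = rng.normal(size=5)
    rho = rng.normal(size=5)
    rho *= rng.uniform(0.0, 0.9) * np.linalg.norm(z) / np.linalg.norm(rho)
    u_z = z / np.linalg.norm(z)
    u_zr = (z + rho) / np.linalg.norm(z + rho)
    bound = 2 * np.linalg.norm(rho) / (np.linalg.norm(z) - np.linalg.norm(rho))
    assert np.abs(u_z - u_zr).max() <= bound + 1e-12   # bound (89), coordinate-wise
```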
We now move on to the proof of Lemma 16.
Proof of Lemma 16. Applying Lemma 14 shows that, for V ≥ V_1(δ_3/2), under an event E_no-vote that holds with probability at least 1 − δ_3/2, the ball B(g_∞, V^{−r_1}) contains no voter's preferred vector.
For any voter v ∈ [V] and any i, j ∈ [d], we define

a_{ijv} ≜ ∇²ℓ₂(g_∞ − θ_v)[i, j] = (I − u_{g_∞−θ_v} u_{g_∞−θ_v}ᵀ)[i, j]/∥g_∞ − θ_v∥₂.    (91)
Since θ̃ is absolutely continuous with respect to the Lebesgue measure, we know that a_{ijv} is well-defined with probability 1. We then have

∇²L_{1:V}(g_∞)[i, j] = (1/V) Σ_{v=1}^{V} a_{ijv}  and  H_∞[i, j] = ∇²L_∞(g_∞)[i, j] = E_{θ_v}[a_{ijv}].    (92)
Moreover, we can upper-bound the variance of a_{ijv} by

Var[a_{ijv}] ≤ E_{θ_v}[a_{ijv}²] = ∫_Θ ((I − u_{g_∞−θ} u_{g_∞−θ}ᵀ)[i, j]/∥g_∞ − θ∥₂)² p(θ) dθ ≤ ∫_Θ (1/∥g_∞ − θ∥₂²) p(θ) dθ.    (93)
By Lemma 10, we know that this integral is bounded; thus, we have Var[a_{ijv}] < ∞. We then define the maximal variance σ² ≜ max_{i,j} Var[a_{ijv}] of the elements of the Hessian matrix. Since the voters' preferred vectors are assumed to be i.i.d., we then obtain

Var[∇²L_{1:V}(g_∞)[i, j]] = (1/V²) Σ_{v=1}^{V} Var[a_{ijv}] ≤ σ²/V.    (94)
Now, applying Chebyshev's inequality to ∇²L_{1:V}(g_∞)[i, j] yields

P[|∇²L_{1:V}(g_∞)[i, j] − H_∞[i, j]| ≥ ε_3/2] ≤ 4 Var[∇²L_{1:V}(g_∞)[i, j]]/ε_3² ≤ 4σ²/(Vε_3²).    (95)

Using a union bound, we then obtain

P[∃i, j ∈ [d], |∇²L_{1:V}(g_∞)[i, j] − H_∞[i, j]| ≥ ε_3/2] ≤ 4d²σ²/(Vε_3²).    (96)

Therefore, taking V ≥ 8d²σ²/(δ_3ε_3²), the event E_Hessian ≜ {∀i, j ∈ [d], |∇²L_{1:V}(g_∞)[i, j] − H_∞[i, j]| ≤ ε_3/2} occurs with probability at least 1 − δ_3/2. Taking a union bound shows that, for V ≥ max{V_1(δ_3/2), 8d²σ²/(δ_3ε_3²)}, the event E ≜ E_no-vote ∩ E_Hessian occurs with probability at least 1 − δ_3.
We now bound the difference between the finite-voter Hessian matrices at g_∞ and at a close point z:

V |∇²L_{1:V}(z)[i, j] − ∇²L_{1:V}(g_∞)[i, j]| = |Σ_{v∈[V]} ((I − u_{z−θ_v} u_{z−θ_v}ᵀ)[i, j]/∥z − θ_v∥₂ − (I − u_{g_∞−θ_v} u_{g_∞−θ_v}ᵀ)[i, j]/∥g_∞ − θ_v∥₂)|    (97)
≤ Σ_{v∈[V]} |(I − u_{z−θ_v} u_{z−θ_v}ᵀ)[i, j]/∥z − θ_v∥₂ − (I − u_{g_∞−θ_v} u_{g_∞−θ_v}ᵀ)[i, j]/∥g_∞ − θ_v∥₂|    (98)
≤ Σ_{v∈[V]} (|I[i, j]| |∥g_∞ − θ_v∥₂ − ∥z − θ_v∥₂|/(∥z − θ_v∥₂ ∥g_∞ − θ_v∥₂) + |u_{z−θ_v}[i] u_{z−θ_v}[j]/∥z − θ_v∥₂ − u_{g_∞−θ_v}[i] u_{g_∞−θ_v}[j]/∥g_∞ − θ_v∥₂|).    (99)

Note that, under E, for all voters v ∈ [V], we have ∥g_∞ − θ_v∥₂ ≥ V^{−r_1}. Now assume z ∈ B(g_∞, V^{−r_3}); Lemma 17 then applies with ρ ≜ z − g_∞, ∥ρ∥₂ ≤ V^{−r_3}, yielding |u_{g_∞−θ_v}[i] − u_{z−θ_v}[i]| = O(V^{r_1−r_3}) ≤ 1 for all i ∈ [d]. Also, we have |u[i]| ≤ ∥u∥₂ = 1 for all unit vectors. Under E, we then have

|u_{z−θ_v}[i] u_{z−θ_v}[j]/∥z − θ_v∥₂ − u_{g_∞−θ_v}[i] u_{g_∞−θ_v}[j]/∥g_∞ − θ_v∥₂|
≤ |u_{z−θ_v}[i]| |u_{z−θ_v}[j] − u_{g_∞−θ_v}[j]|/∥z − θ_v∥₂ + |u_{g_∞−θ_v}[j]| |u_{z−θ_v}[i] − u_{g_∞−θ_v}[i]|/∥z − θ_v∥₂ + |u_{g_∞−θ_v}[i] u_{g_∞−θ_v}[j]| |∥g_∞ − θ_v∥₂ − ∥z − θ_v∥₂|/(∥z − θ_v∥₂ ∥g_∞ − θ_v∥₂)    (100)
≤ O(V^{2r_1−r_3}) + 2V^{−r_3}/(V^{−r_1}(V^{−r_1} − V^{−r_3})) = O(V^{2r_1−r_3}),    (101)

where, in the last line, we used the triangle inequality, which implies |∥g_∞ − θ_v∥₂ − ∥z − θ_v∥₂| ≤ ∥g_∞ − z∥₂ ≤ V^{−r_3} and ∥z − θ_v∥₂ ≥ ∥g_∞ − θ_v∥₂ − ∥g_∞ − z∥₂ ≥ V^{−r_1} − V^{−r_3}. Therefore, under E, we have

|∇²L_{1:V}(z)[i, j] − ∇²L_{1:V}(g_∞)[i, j]| ≤ (1/V) Σ_{v∈[V]} O(V^{2r_1−r_3}) = O(V^{2r_1−r_3}).    (102)
We now use the fact that 2r_1 < r_3, which implies V^{2r_1−r_3} → 0. Thus, for any ε_3 > 0, there exists V′_3(ε_3) such that, for V ≥ V′_3(ε_3), we have

|∇²L_{1:V}(z)[i, j] − ∇²L_{1:V}(g_∞)[i, j]| ≤ ε_3/2.    (103)

Choosing V ≥ V_3(ε_3, δ_3) ≜ max{V_1(δ_3/2), 8d²σ²/(δ_3ε_3²), V′_3(ε_3)}, and combining the above guarantee with the guarantee about the event E proved earlier, yields the result.
Third-derivative Approximation. Finally, to apply Lemma 13, we also need to control the third-derivative of L 1:V near the geometric median g 1:V . In fact, for our purposes, it will be sufficient to bound its norm by a possibly increasing function in V , as long as this function grows slower than V .
Definition 6. We denote by ∇³L(z)[i, j, k] the third derivative of L(z) with respect to z[i], z[j] and z[k], and ∥∇³L(z)∥_∞ ≜ max_{i,j,k} |∇³L(z)[i, j, k]|.
Lemma 18. Under Assumption 1, for r_1, r_3 > 0, there exists K ∈ R such that, for any δ_4 > 0, there exists V_4(δ_4) ∈ N such that, for all V ≥ V_4(δ_4), with probability at least 1 − δ_4, no other voter's vector lies in the ball B(g_∞, V^{−r_1}) and, for all z ∈ B(g_∞, V^{−r_3}), we have ∥∇³L_{1:V}(z)∥_∞ ≤ K(1 + V^{3r_1−r_3}).
Proof. We use the same proof strategy as in the previous lemma. First note that, using Lemma 10 for d ≥ 5, the variance of each element of the third derivative of L_{1:V} is bounded, i.e.,

∀(i, j, k) ∈ [d]³, Var[∇³L_{1:V}(g_∞)[i, j, k]] = O(∫_Θ (1/∥g_∞ − θ∥₂⁴) p(θ) dθ) ≜ σ² < ∞.    (104)
Similarly to the previous proof, we define the events

E_∇³(t) ≜ {∀i, j, k ∈ [d], |∇³L_{1:V}(g_∞)[i, j, k] − ∇³L_∞(g_∞)[i, j, k]| ≤ t},    (105)
E_no-vote ≜ {∀v ∈ [V], θ_v ∉ B(g_∞, V^{−r_1})}  and  E ≜ E_∇³(V^r) ∩ E_no-vote,    (106)

where r ≜ max{0, 3r_1 − r_3} ≥ 0. Using Chebyshev's bound and a union bound, we know that P[E_∇³(t)] ≥ 1 − d³σ²/(Vt²), where σ² hides a constant derived from the upper bound (104) on the variance of ∇³L_{1:V}(g_∞)[i, j, k], which depends only on θ̃. Therefore, we have P[E_∇³(V^r)] ≥ 1 − d³σ²V^{−(1+2r)}. Now, assuming V ≥ V_4(δ_4) ≜ max{(2d³σ²/δ_4)^{1/(1+2r)}, V_1(δ_4/2)}, we know that the event E occurs with probability at least 1 − δ_4. Now, we bound the deviation of ∇³L_{1:V}(z) from ∇³L_{1:V}(g_∞) for any z ∈ B(g_∞, V^{−r_3}). It can be shown that ∇³ℓ₂(z)[i, j, k] = f(z)[i, j, k]/∥z∥₂², where

f(z)[i, j, k] ≜ 3u_z[i]³ − 3u_z[i] if i = j = k;  3u_z[j]² u_z[i] − u_z[i] if i ≠ j = k;  3u_z[i] u_z[j] u_z[k] if i ≠ j ≠ k.    (107)

Since f(z) is a polynomial in the coordinates of u_z, Lemma 17 implies that, for ∥g_∞ − θ_v∥₂ ≥ V^{−r_1} and ∥z − g_∞∥₂ ≤ V^{−r_3},

|f(z − θ_v)[i, j, k] − f(g_∞ − θ_v)[i, j, k]| = O(V^{r_1−r_3}).    (108)
Recall also that, as in the previous proof, under the event E, for all voters v ∈ [V], we have ∥g_∞ − θ_v∥₂ ≥ V^{−r_1}, ∥z − θ_v∥₂ ≥ V^{−r_1} − V^{−r_3} = Ω(V^{−r_1}) and |∥g_∞ − θ_v∥₂ − ∥z − θ_v∥₂| ≤ ∥g_∞ − z∥₂ ≤ V^{−r_3} (using the triangle inequality). We then have

|∇³L_{1:V}(z)[i, j, k] − ∇³L_{1:V}(g_∞)[i, j, k]| = |(1/V) Σ_{v∈[V]} f(z − θ_v)[i, j, k]/∥z − θ_v∥₂² − (1/V) Σ_{v∈[V]} f(g_∞ − θ_v)[i, j, k]/∥g_∞ − θ_v∥₂²|    (109)
≤ (1/V) Σ_{v∈[V]} (|f(z − θ_v)[i, j, k]/∥z − θ_v∥₂² − f(g_∞ − θ_v)[i, j, k]/∥z − θ_v∥₂²| + |f(g_∞ − θ_v)[i, j, k]/∥z − θ_v∥₂² − f(g_∞ − θ_v)[i, j, k]/∥g_∞ − θ_v∥₂²|)    (110)
≤ (1/V) Σ_{v∈[V]} (O(V^{r_1−r_3})/∥z − θ_v∥₂² + 6 |∥z − θ_v∥₂² − ∥g_∞ − θ_v∥₂²|/(∥z − θ_v∥₂² ∥g_∞ − θ_v∥₂²))    (111)
≤ O(V^{r_1−r_3})/Ω(V^{−2r_1}) + (6/V) Σ_{v∈[V]} |∥z − θ_v∥₂ − ∥g_∞ − θ_v∥₂| (∥z − θ_v∥₂ + ∥g_∞ − θ_v∥₂)/(∥z − θ_v∥₂² ∥g_∞ − θ_v∥₂²)    (112)
≤ O(V^{3r_1−r_3}) + (6/V) Σ_{v∈[V]} V^{−r_3} (1/(∥z − θ_v∥₂ ∥g_∞ − θ_v∥₂²) + 1/(∥z − θ_v∥₂² ∥g_∞ − θ_v∥₂))    (113)
≤ O(V^{3r_1−r_3}) + 12V^{−r_3}/Ω(V^{−3r_1}) = O(V^{3r_1−r_3}).    (114)
Combining this with the guarantee of the event E then yields

|∇³L_{1:V}(z)[i, j, k]| ≤ |∇³L_∞(g_∞)[i, j, k]| + |∇³L_∞(g_∞)[i, j, k] − ∇³L_{1:V}(g_∞)[i, j, k]| + |∇³L_{1:V}(g_∞)[i, j, k] − ∇³L_{1:V}(z)[i, j, k]|    (115)
≤ O(1) + O(V^r) + O(V^{3r_1−r_3}) = O(1) + O(V^{3r_1−r_3}),    (116)

using the definition of r. Given that P[E] ≥ 1 − δ_4, taking a constant K large enough to absorb the O(·) terms yields the lemma.
C.2.4 Skewness is Continuous
The last piece that is required for the proof of Theorem 2 is the fact that the function SKEW is continuous. To get there, we first prove a couple of lemmas about symmetric matrices.
Definition 7. We denote SYM d the set of symmetric d × d real matrices. We denote X[i, j] the element of the i-th row and j-th column of the matrix X, and ∥X∥ ∞ ≜ max i,j |X[i, j]|.
Lemma 19. For any symmetric matrices H, S ∈ SYM d , |min SP(H) − min SP(S)| ≤ d ∥H − S∥ ∞ .
Proof. Consider a unit vector u. We have

|uᵀHu − uᵀSu| = |uᵀ(H − S)u| = |Σ_{i,j∈[d]} (H[i, j] − S[i, j]) u[i] u[j]|    (117)
≤ Σ_{i,j∈[d]} |H[i, j] − S[i, j]| |u[i]| |u[j]| ≤ ∥H − S∥_∞ (Σ_{i∈[d]} |u[i]|)(Σ_{j∈[d]} |u[j]|)    (118)
= ∥H − S∥_∞ ∥u∥₁² ≤ d ∥H − S∥_∞ ∥u∥₂² = d ∥H − S∥_∞,    (119)

where we used the well-known inequality ∥x∥₁² ≤ d ∥x∥₂². Since min SP(H) = min_{∥u∥₂=1} uᵀHu and min SP(S) = min_{∥u∥₂=1} uᵀSu, the two minima can thus differ by at most d ∥H − S∥_∞, which proves the lemma.

Lemma 20. The minimal eigenvalue is a continuous function of a symmetric matrix.
Proof. This is an immediate corollary of the previous lemma. As S → H, we clearly have min SP(S) → min SP(H).
Lemma 21. SKEW is continuous.
Proof. Consider two positive definite symmetric matrices H, S ≻ 0. We have

|SKEW(H) − SKEW(S)| = |sup_{∥u∥₂=1} (∥Hu∥₂/(uᵀHu) − 1) − sup_{∥u∥₂=1} (∥Su∥₂/(uᵀSu) − 1)|    (120)
≤ sup_{∥u∥₂=1} |∥Hu∥₂/(uᵀHu) − ∥Su∥₂/(uᵀSu)|    (121)
= sup_{∥u∥₂=1} |(∥Hu∥₂ (uᵀSu) − ∥Su∥₂ (uᵀHu))/((uᵀHu)(uᵀSu))|    (122)
≤ sup_{∥u∥₂=1} (∥Hu∥₂ |uᵀSu − uᵀHu|/((uᵀHu)(uᵀSu)) + |∥Hu∥₂ − ∥Su∥₂|/(uᵀSu))    (123)
≤ sup_{∥u∥₂=1} ((SKEW(H) + 1) |uᵀSu − uᵀHu|/(uᵀSu) + |∥Hu∥₂ − ∥Su∥₂|/(uᵀSu)).    (124)
Now, for any unit vector u, we have

|uᵀSu − uᵀHu| ≤ Σ_{i,j∈[d]} |u[i]| |u[j]| |S[i, j] − H[i, j]| ≤ ∥S − H∥_∞ Σ_{i,j∈[d]} |u[i]| |u[j]|    (125)
= ∥S − H∥_∞ ∥u∥₁² ≤ d ∥S − H∥_∞ ∥u∥₂² = d ∥S − H∥_∞,    (126)
using the inequality ∥x∥₁² ≤ d ∥x∥₂². Moreover, by the triangle inequality, we also have

|∥Hu∥₂ − ∥Su∥₂|² ≤ ∥Hu − Su∥₂² ≤ Σ_{i∈[d]} (Σ_{j∈[d]} |H[i, j] − S[i, j]| |u[j]|)²    (127)
≤ Σ_{i∈[d]} (Σ_{j∈[d]} ∥H − S∥_∞ |u[j]|)² = ∥H − S∥_∞² d ∥u∥₁²    (128)
≤ ∥H − S∥_∞² d² ∥u∥₂², and thus |∥Hu∥₂ − ∥Su∥₂| ≤ d ∥H − S∥_∞ ∥u∥₂.    (129)
Finally, note that uᵀSu ≥ min SP(S). Combining it all then yields

|SKEW(H) − SKEW(S)| ≤ ((2 + SKEW(H))/min SP(S)) d ∥H − S∥_∞.    (130)
By continuity of the minimal eigenvalue (Lemma 20), we know that min SP(S) → min SP(H) as S → H. This allows us to conclude that |SKEW(H) − SKEW(S)| → 0 as S → H, which proves the continuity of the SKEW function.
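The modulus of continuity (130) can also be probed numerically. Since the pointwise bounds (120)-(124) hold for every direction u, the inequality survives when the suprema are restricted to a common finite direction set, which the sketch below (ours, not part of the original analysis) exploits to estimate SKEW by sampling.

```python
import numpy as np

rng = np.random.default_rng(5)
d = 4
dirs = rng.normal(size=(50000, d))
dirs /= np.linalg.norm(dirs, axis=1)[:, None]

def skew_on(M):
    """SKEW restricted to the fixed direction set `dirs`."""
    ratios = np.linalg.norm(dirs @ M, axis=1) / np.einsum('ni,ij,nj->n', dirs, M, dirs)
    return ratios.max() - 1.0

A = rng.normal(size=(d, d))
H = A @ A.T + np.eye(d)                          # positive definite
sk_H = skew_on(H)
for eps in [1e-1, 1e-2, 1e-3]:
    P = rng.normal(size=(d, d)); P = (P + P.T) / 2
    S = H + eps * P / np.abs(P).max()            # symmetric, ||H - S||_inf = eps
    lip = (2 + sk_H) * d * np.abs(H - S).max() / np.linalg.eigvalsh(S).min()
    assert abs(sk_H - skew_on(S)) <= lip + 1e-10  # Equation (130)
    print(eps, abs(sk_H - skew_on(S)), "<=", lip)
```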
C.2.5 Proof of Theorem 2
Finally, we can prove Theorem 2.
Proof of Theorem 2. Let ε, δ > 0. Choose r_1, r_2, r_3 such that 2/d < 2r_1 < r_3 < r_2 < 1/2, and set δ_1 ≜ δ_2 ≜ δ_3 ≜ δ_4 ≜ δ/4. Define also λ_min ≜ min SP(∇²L_∞(g_∞)) and ε_3 ≜ min{λ_min/2d, ε_5}, where ε_5 will be defined later on, based on the continuity of SKEW at H_∞.
Now consider V sufficiently large to satisfy the requirements of Lemmas 14, 15, 16 and 18. Denoting E_V the event that contains the intersection of the guarantees of these lemmas, by a union bound, we then know that P[E_V] ≥ 1 − δ. We will now show that, for V large enough, under E_V, the geometric median restricted to the first 1 + V voters is (SKEW(H_∞) + ε)-strategyproof for voter 0. To do so, it suffices to prove that, under E_V, the assumptions of Lemma 13 are satisfied for β ≜ 4/(λ_min V).
First, let us show that, for V large enough, under E_V, the ball B(g_{1:V}, 2β) contains no preferred vector from the first V voters. To prove this, let z ∈ B(g_{1:V}, 2β). By the triangle inequality, we have ∥z − g_∞∥₂ ≤ ∥z − g_{1:V}∥₂ + ∥g_{1:V} − g_∞∥₂ ≤ 2β + V^{−r_2} = O(V^{−1} + V^{−r_2}) = o(V^{−r_1}), since r_1 < r_2 < 1. Thus, for V large enough, we have z ∈ B(g_∞, V^{−r_1}). But by Lemma 14, under E_V, this ball contains none of the preferred vectors of the first V voters. As a corollary, L_{1:V} is then infinitely differentiable on B(g_{1:V}, 2β). The first condition of Lemma 13 thus holds.
We now move on to the second condition. Note that, under the event E_V, by virtue of Lemma 16, for all z ∈ B(g_{1:V}, β) ⊂ B(g_{1:V}, 2β) ⊂ B(g_∞, V^{−r_1}), we have ∥∇²L_{1:V}(z) − H_∞∥_∞ ≤ ε_3 ≤ λ_min/2d. Lemma 19 then yields min SP(∇²L_{1:V}(z)) ≥ min SP(H_∞) − d ∥∇²L_{1:V}(z) − H_∞∥_∞ ≥ λ_min − λ_min/2 = λ_min/2. By Taylor's theorem, we then know that, for any unit vector u, there exists z ∈ [g_{1:V}, g_{1:V} + βu] such that

uᵀ∇L_{1:V}(g_{1:V} + βu) = uᵀ∇L_{1:V}(g_{1:V}) + ∇²L_{1:V}(z)[βu ⊗ u] = β ∇²L_{1:V}(z)[u ⊗ u]    (131)
≥ β min SP(∇²L_{1:V}(z)) ≥ (4/(λ_min V)) (λ_min/2) = 2/V > 1/V,    (132)

where we used the fact that ∇L_{1:V}(g_{1:V}) = 0. Thus the second condition of Lemma 13 holds too.
We now move on to the third condition. We have already shown that, under E_V and for all z ∈ B(g_{1:V}, β), we have min SP(∇²L_{1:V}(z)) ≥ λ_min/2. From this, it follows that, for all z ∈ B(g_{1:V}, β), we have min SP(∇²L_{1:V}(z) · ∇²L_{1:V}(z)) ≥ λ_min²/4. But now note that, for any coordinates i, j ∈ [d], since ∥∇L_{1:V}(z)∥_∞ = ∥∇L_{1:V}(z) − ∇L_{1:V}(g_{1:V})∥_∞ = O(β) on this ball, we have

|(∇³L_{1:V}(z) · ∇L_{1:V}(z))[i, j]| ≤ Σ_{k∈[d]} |∇³L_{1:V}(z)[i, j, k]| |∇L_{1:V}(z)[k]| ≤ d ∥∇³L_{1:V}(z)∥_∞ ∥∇L_{1:V}(z)∥_∞    (134)
≤ Kd(1 + V^{3r_1−r_3}) O(β) = O(V^{−1} + V^{3r_1−r_3−1}).    (135)
But since 2r_1 < r_3 < 1/2, we have 3r_1 − r_3 − 1 < r_1 − 1 < −1/2 < 0. Thus the bound above actually goes to zero as V → ∞. In particular, for V large enough, we must have d ∥∇³L_{1:V}(z) · ∇L_{1:V}(z)∥_∞ ≤ λ_min²/8. As a result, by Lemma 19, for all z ∈ B(g_{1:V}, β) and under E_V, we then have

min SP(∇²L_{1:V}(z) · ∇²L_{1:V}(z) + ∇³L_{1:V}(z) · ∇L_{1:V}(z))
≥ min SP(∇²L_{1:V}(z) · ∇²L_{1:V}(z)) − d ∥∇³L_{1:V}(z) · ∇L_{1:V}(z)∥_∞    (137)
≥ λ_min²/4 − λ_min²/8 ≥ λ_min²/8.    (138)
Therefore ∇ 2 L 1:V (z) · ∇ 2 L 1:V (z) + ∇ 3 L 1:V (z) · ∇L 1:V (z) ≻ 0, which is the third condition of Lemma 13.
Finally, the fourth and final condition of Lemma 13 holds by continuity of the function SKEW (Lemma 21). More precisely, since H_∞ ≻ 0, we know that SKEW is continuous at H_∞. Thus, there exists ε_5 > 0 such that, if A is a symmetric matrix with ∥H_∞ − A∥_∞ ≤ ε_5, then A ≻ 0 and SKEW(A) ≤ SKEW(H_∞) + ε. Yet, by definition of ε_3 and Lemma 16, we know that all Hessian matrices ∇²L_{1:V}(z) for z ∈ B(g_{1:V}, β) satisfy the above property. Therefore, we know that, for all such z, we have SKEW(∇²L_{1:V}(z)) ≤ SKEW(H_∞) + ε, which is the fourth condition of Lemma 13 with α ≜ SKEW(H_∞) + ε.
Lemma 13 thus applies. It guarantees that, for V large enough, under the event E V which occurs with probability at least 1 − δ, the geometric median is (SKEW(H ∞ ) + ε)-strategyproof for voter 0. This corresponds to saying that the geometric median is asymptotically SKEW(H ∞ )-strategyproof.
C.3 Upper and Lower Bounds for Skewness (Proof of Proposition 2)
Proof. We first prove the upper bound. Consider an orthonormal eigenvector basis u_1, . . . , u_d of S, with respective eigenvalues λ_1, . . . , λ_d. We now focus on a unit vector x of the form x = Σ_i β_i u_i with Σ_i β_i² = 1. Note that Σ_i β_i²λ_i and Σ_i β_i²λ_i² can then be viewed as weighted averages of the λ_i's and of their squares. As a result, we have Σ_i β_i²λ_i ≥ λ_min and Σ_i β_i²λ_i² ≤ λ_max², and thus

∥Sx∥₂²/(xᵀSx)² = (Σ_i β_i²λ_i²)/(Σ_i β_i²λ_i)² ≤ λ_max²/λ_min².    (139)
Taking the square root and subtracting one proves the upper bound. We now move on to proving the lower bound. Denote λ₁ and λ_d the two extreme eigenvalues of S, and u₁ and u_d their orthogonal unit eigenvectors. Define

x = u₁/√λ₁ + √β u_d/√λ_d.

We then have Sx = √λ₁ u₁ + √(βλ_d) u_d, ∥x∥₂² = λ₁⁻¹ + βλ_d⁻¹, x^T Sx = 1 + β, and ∥Sx∥₂² = λ₁ + βλ_d. Combining this yields the ratio

R(β) ≜ ∥x∥₂² ∥Sx∥₂² / (x^T Sx)² = (λ₁⁻¹ + βλ_d⁻¹)(λ₁ + βλ_d) / (1 + β)² = (β² + Lβ + 1)/(β² + 2β + 1) = 1 + (L − 2)/(2 + β + β⁻¹),

where L ≜ λ₁⁻¹λ_d + λ₁λ_d⁻¹ ≥ 2. Note that, similarly, we have β + β⁻¹ ≥ 2. This implies R(β) ≤ 1 + (L − 2)/4, with equality for β = 1. Since R(β) is the square of the quantity whose supremum defines the skewness, the skewness is then at least √R(1) − 1. In two dimensions, the skewness is, in fact, equal to √R(1) − 1, since x then essentially represents all possible directions. Multiplying the numerator and denominator inside R(1) by λ₁λ_d then yields the proposition.
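As a quick numerical illustration of these bounds (our addition, not part of the original proof), the sketch below evaluates the ratio ∥x∥₂∥Sx∥₂/(x^T Sx) at the lower-bound direction x = u₁/√λ₁ + u_d/√λ_d for a random positive definite matrix; the helper names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
A = rng.normal(size=(d, d))
S = A @ A.T + np.eye(d)            # random symmetric positive definite matrix

lam, U = np.linalg.eigh(S)         # eigenvalues in ascending order
lmin, lmax = lam[0], lam[-1]

def ratio(x):
    """||x||_2 ||Sx||_2 / (x^T S x); the skewness of S is its supremum minus 1."""
    return np.linalg.norm(x) * np.linalg.norm(S @ x) / (x @ S @ x)

# Lower-bound direction with beta = 1: x = u_1/sqrt(l_1) + u_d/sqrt(l_d)
x_star = U[:, 0] / np.sqrt(lmin) + U[:, -1] / np.sqrt(lmax)

lower = (np.sqrt(lmax) - np.sqrt(lmin)) ** 2 / (2.0 * np.sqrt(lmax * lmin))
upper = lmax / lmin - 1.0
print(ratio(x_star) - 1.0, lower, upper)   # the first two coincide; both <= upper
```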
Proof. We provide a sketch of the proof, based on Figure 8. By Taylor series and given concentration bounds, for V large enough and z → g^Σ_∞, the gradient of the skewed loss for 1 + V voters is approximately given by

(1 + V) ∇L^Σ_{0:V}(s, θ⃗, z) ≈ ΣΣ(z − s)/∥z − s∥_Σ + V H^Σ_∞(z − g^Σ_∞) + o(∥z − g^Σ_∞∥₂)   (141)
= Σ u_{Σz−Σs} + V H^Σ_∞(z − g^Σ_∞) + o(∥z − g^Σ_∞∥₂).   (142)

This quantity must cancel out for z = g^Σ_{0:V}. Thus we must have Σ⁻¹H^Σ_∞(g^Σ_{0:V} − g^Σ_∞) ≈ (1/V) u_{Σs−Σg^Σ_{0:V}}, which implies ∥Σ⁻¹H^Σ_∞(g^Σ_{0:V} − g^Σ_∞)∥₂² ≈ 1/V². The achievable set A^Σ_V is thus approximately the ellipsoid g^Σ_∞ + {z : z^T H^Σ_∞ Σ⁻² H^Σ_∞ z ≤ 1/V²}. In particular, for V large enough, A^Σ_V is convex.
Meanwhile, denote g^{Σ,†}_{0:V} the skewed geometric median when the strategic voter truthfully reports their preferred vector t. By Equation (142), we must have ΣΣ(t − g^{Σ,†}_{0:V}) ∝ H^Σ_∞(g^{Σ,†}_{0:V} − g^Σ_∞), which implies t − g^{Σ,†}_{0:V} ∝ Σ⁻²H^Σ_∞(g^{Σ,†}_{0:V} − g^Σ_∞). Now let us skew the space by the matrix S, i.e., we map each point z in the original space to a point x ≜ Sz in the S-skewed space. Interestingly, since ∥z − θ_v∥_S = ∥x − Sθ_v∥₂, a voter with S-skewed preferences in the original space now simply wants to minimize the Euclidean distance in the S-skewed space. Now, note that since S is a linear transformation and since A^Σ_V is convex, so is SA^Σ_V. This allows us to re-use the orthogonal projection argument. Namely, denoting π₀ the orthogonal projection of St onto the tangent hyperplane to SA^Σ_V, we have
inf_{s∈R^d} ∥t − GM^Σ(s, θ⃗_{1:V})∥_S = inf_{s∈R^d} ∥St − S · GM^Σ(s, θ⃗_{1:V})∥₂   (143)
= inf_{x∈SA^Σ_V} ∥St − x∥₂ ≥ ∥St − π₀∥₂.   (144)
To compute π₀, note that, for a large number of voters and with high probability, the achievable set SA^Σ_V in the S-skewed space is approximately the set of points Sg^Σ_∞ + Sz such that z^T H^Σ_∞ Σ⁻² H^Σ_∞ z ≤ 1/V². Equivalently, this corresponds to the set of points Sg^Σ_∞ + x with x ≜ Sz (and thus z = S⁻¹x) such that (S⁻¹x)^T H^Σ_∞ Σ⁻² H^Σ_∞ (S⁻¹x) = x^T (S⁻¹ H^Σ_∞ Σ⁻² H^Σ_∞ S⁻¹) x ≤ 1/V². This is still an ellipsoid. The normal to the surface of SA^Σ_V at x₀ = S(g^{Σ,†}_{0:V} − g^Σ_∞) is then given by

S⁻¹ H^Σ_∞ Σ⁻² H^Σ_∞ S⁻¹ x₀ = S⁻¹ H^Σ_∞ Σ⁻² H^Σ_∞ (g^{Σ,†}_{0:V} − g^Σ_∞).
Figure 8: Proof techniques to determine the asymptotic strategyproofness of the Σ-skewed geometric median for S-skewed preferences. We skew space using S, so that in the skewed space, voter 0 wants to minimize the Euclidean distance between their preferred vector and the skewed geometric median. Strategyproofness then depends on the angle between the blue and orange vectors in the skewed space, as depicted in the figure.

Meanwhile, since t − g^{Σ,†}_{0:V} ∝ Σ⁻²H^Σ_∞(g^{Σ,†}_{0:V} − g^Σ_∞), we know that there exists γ > 0 such that St − Sg^{Σ,†}_{0:V} = γ S Σ⁻² H^Σ_∞ (g^{Σ,†}_{0:V} − g^Σ_∞). Then
∥St − Sg^{Σ,†}_{0:V}∥₂ / ∥St − π₀∥₂ ≤ γ ∥SΣ⁻²H^Σ_∞(g^{Σ,†}_{0:V} − g^Σ_∞)∥₂ / [ (γSΣ⁻²H^Σ_∞(g^{Σ,†}_{0:V} − g^Σ_∞))^T S⁻¹H^Σ_∞Σ⁻²H^Σ_∞(g^{Σ,†}_{0:V} − g^Σ_∞) / ∥S⁻¹H^Σ_∞Σ⁻²H^Σ_∞(g^{Σ,†}_{0:V} − g^Σ_∞)∥₂ ]   (145)
= ∥SΣ⁻²H^Σ_∞(g^{Σ,†}_{0:V} − g^Σ_∞)∥₂ ∥S⁻¹H^Σ_∞Σ⁻²H^Σ_∞(g^{Σ,†}_{0:V} − g^Σ_∞)∥₂ / [ (SΣ⁻²H^Σ_∞(g^{Σ,†}_{0:V} − g^Σ_∞))^T (S⁻¹H^Σ_∞Σ⁻²H^Σ_∞(g^{Σ,†}_{0:V} − g^Σ_∞)) ]   (146)
= ∥y₀∥₂ ∥S⁻¹H^Σ_∞S⁻¹y₀∥₂ / (y₀^T S⁻¹H^Σ_∞S⁻¹y₀) ≤ 1 + SKEW(S⁻¹H^Σ_∞S⁻¹),   (147)

by defining y₀ ≜ SΣ⁻²H^Σ_∞(g^{Σ,†}_{0:V} − g^Σ_∞). This concludes the sketch of the proof. A more rigorous proof would need to follow the footsteps of our main proof (Theorem 2).
E ALTERNATIVE UNIT FORCES
In this section we show that the fairness principle "one voter, one vote with a unit force" can be generalized to other vector votes when we use the right norm to measure the norm of voters' forces. First we consider the skewed geometric median, and then we analyze the minimizer of ℓ p distances.
E.1 Skewed Geometric Median
Interestingly, we can also interpret the skewed geometric median as an operator that yields unit forces to the different voters, albeit the norm of the forces is not measured by the Euclidean norm. To understand how forces are measured, let us better characterize the derivative of the skewed norm.

Lemma 24. For all z ∈ R^d − {0}, we have ∇_z∥z∥_Σ = ΣΣz/∥z∥_Σ.

Proof. Note that ∥z∥_Σ² = ∥Σz∥₂² = Σᵢ (Σz)[i]² = Σᵢ (Σⱼ Σ[i, j]z[j])². We then have

∂ᵢ∥z∥_Σ² = Σⱼ 2Σ[j, i](Σz)[j] = 2 Σⱼ Σₖ Σ[j, i]Σ[j, k]z[k]   (153)
= 2 Σₖ (Σⱼ Σ[i, j]Σ[j, k]) z[k] = 2 Σₖ (ΣΣ)[i, k] z[k] = 2(ΣΣz)[i].

From this, it follows that

∂ᵢ∥z∥_Σ = ∂ᵢ√(∥z∥_Σ²) = ∂ᵢ∥z∥_Σ² / (2∥z∥_Σ) = (ΣΣz)[i]/∥z∥_Σ.   (155)

Combining all coordinates yields the lemma.
It is noteworthy that, using a Σ-skewed loss, the gradient ∇_z∥z∥_Σ is no longer collinear with z. In fact, it is not even collinear with Σz, which is the image of z as we apply the linear transformation Σ to the entire space. Similarly, this pull is no longer of Euclidean unit force. Nevertheless, it remains a unit force, as long as we measure its force with the appropriate norm.

Lemma 25. For all z, θ_v ∈ R^d, we have ∥∇_z∥z − θ_v∥_Σ∥_{Σ⁻¹} = 1. Put differently, using the Σ-skewed loss, voters have Σ⁻¹-unit forces.
Proof. Applying Lemma 24 yields

∥∇_z∥z − θ_v∥_Σ∥_{Σ⁻¹} = ∥ΣΣz/∥z∥_Σ∥_{Σ⁻¹} = ∥Σ⁻¹ΣΣz∥₂ / ∥z∥_Σ = ∥Σz∥₂ / ∥z∥_Σ = 1,   (156)

which is the lemma.
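The two lemmas are easy to check numerically. The following sketch (ours; the helper names and finite-difference tolerance are arbitrary) verifies the gradient formula of Lemma 24 and the unit Σ⁻¹-norm of Lemma 25 for a random symmetric positive definite Σ.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3
M = rng.normal(size=(d, d))
Sigma = M @ M.T + np.eye(d)            # symmetric positive definite skewing matrix

def sk_norm(z, S):                     # ||z||_S = ||S z||_2
    return np.linalg.norm(S @ z)

z = rng.normal(size=d)

# Lemma 24: gradient of the Sigma-skewed norm
grad = Sigma @ Sigma @ z / sk_norm(z, Sigma)

# Central finite-difference check of the gradient formula
eps = 1e-6
fd = np.array([(sk_norm(z + eps * e, Sigma) - sk_norm(z - eps * e, Sigma)) / (2 * eps)
               for e in np.eye(d)])
assert np.allclose(grad, fd, atol=1e-5)

# Lemma 25: the gradient has unit Sigma^{-1}-norm
Sigma_inv = np.linalg.inv(Sigma)
print(np.linalg.norm(Sigma_inv @ grad))   # -> 1.0
```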
E.2 ℓ p Norm
Interestingly, we prove below that considering other penalties measured by ℓ_p distances is equivalent to assigning ℓ_q-unit forces to the voters. In particular, the coordinate-wise median can be interpreted as minimizing the ℓ₁ distances or, equivalently, assigning votes of ℓ_∞ unit force. In other words, the coordinate-wise median, which is known to be strategyproof, indeed implements the principle "one voter, one unit-force vote". This principle can thus guarantee strategyproofness; it requires a mere change of norm.
Proposition 13. Assume 1/p + 1/q = 1, with p, q ∈ [1, ∞]. Then considering an ℓ_p penalty is equivalent to considering that each voter has an ℓ_q-unit force vote. More precisely, any subgradient of the ℓ_p penalty has at most a unit norm in ℓ_q.

Proof. Assume x ≠ 0 and 1 < p, q < ∞. Then we have

∂_j∥x∥_p = sign(x[j]) |x[j]|^{p−1} ∥x∥_p^{1−p},

so that

|∂_j∥x∥_p|^q = |x[j]|^{q(p−1)} ∥x∥_p^{q(1−p)} = |x[j]|^p ∥x∥_p^{−p},

using the equality q(p − 1) = p derived from 1/p + 1/q = 1. Adding up all such quantities for j ∈ [d] yields

Σ_{j∈[d]} |∂_j∥x∥_p|^q = ∥x∥_p^p / ∥x∥_p^p = 1.

Thus the gradient of the ℓ_p norm is unitary in ℓ_q norm, when x ≠ 0. Note then that a subgradient g at 0 must satisfy g^T x ≤ ∥x∥_p for all x ∈ R^d. This corresponds to saying that the operator norm of x → g^T x must be at most one with respect to the norm ℓ_p. Yet it is well known that this operator norm is the ℓ_q norm of g.

In the case p = 1, each coordinate is treated independently. On each coordinate, the derivative is then between −1 and 1 (and can equal [−1, 1] if x[j] = 0). This means that the (sub)gradients are of ℓ_∞ norm at most 1.

The last case left to analyze is p = ∞ (with q = 1). Denote J_max(x) the set of coordinates j ∈ [d] such that |x[j]| = ∥x∥_∞. When |J_max(x)| = 1, denoting j the only element of J_max(x) and u_j the j-th vector of the canonical basis, the gradient of the ℓ_∞ norm is clearly u_j, which is unitary in ℓ₁ norm. Moreover, note that if k ∉ J_max(x), then we clearly have ∂_k∥x∥_∞ = 0. Now, denote g a subgradient of ∥·∥_∞ at x, let y ∈ R^d, and assume for simplicity that x ≥ 0. We know that ∥x + εy∥_∞ ≥ ∥x∥_∞ + εg^T y. For ε > 0 small enough, considering y[j] = −1 for some j ∈ J_max(x) and y[k] = 0 for all k ≠ j then implies −g[j] ≤ 0, which yields g[j] ≥ 0. Generalizing this to all j's implies that g ≥ 0. Now, considering y[j] = 1 for all j ∈ J_max(x) (and y[k] = 0 elsewhere) then yields Σ_{j∈J_max(x)} g[j] = ∥g∥₁ ≤ 1, which concludes the proof for x ≥ 0. The general case can be derived by considering axial symmetries.
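A small numerical check of the proposition for 1 < p < ∞ (our illustration): the gradient of the ℓ_p norm at a nonzero point always has unit ℓ_q norm.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=5)

for p in (1.5, 2.0, 3.0):
    q = p / (p - 1.0)                            # Hoelder conjugate: 1/p + 1/q = 1
    # Gradient of x -> ||x||_p at x != 0
    grad = np.sign(x) * np.abs(x) ** (p - 1) / np.linalg.norm(x, ord=p) ** (p - 1)
    print(p, np.linalg.norm(grad, ord=q))        # -> 1.0 for every p
```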
Figure 5: Dependence of the maximum strategic gain α on parameter c, where c is the square root of the condition number of the underlying distribution's covariance matrix.
Lemma 4. If f is convex on [0, 1], and strictly convex on (0, 1), then it is strictly convex on [0, 1].

… these vectors around (0, 0) by an anti-clockwise eighth of a turn. We then obtain the vectors Rθ₁ ≜ (0, 0), …

… 1/V²). Thus …

… 2∥s∥₂∥ε(s)∥₂ + ∥ε(s)∥₂² … (which follows from the convexity of t → t²). Now consider u_min a unit eigenvector of the eigenvalue min SP(S) of the symmetric matrix S. Then min SP(H) ≤ u_min^T H u_min ≤ u_min^T S u_min + d∥H − S∥_∞ = min SP(S) + d∥H − S∥_∞. Inverting the roles of H and S then yields the lemma.

References

Abadi, M., et al. (2015). TensorFlow: Large-scale machine learning on heterogeneous distributed systems.
Acharya, A., Hashemi, A., Jain, P., Sanghavi, S., Dhillon, I. S., and Topcu, U. (2022). Robust training in high dimensions via block coordinate geometric median descent. In Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, volume 151 of Proceedings of Machine Learning Research, pages 11145-11168. PMLR.
Alistarh, D., Allen-Zhu, Z., and Li, J. (2018). Byzantine stochastic gradient descent. In Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc.
Allouah, Y., Guerraoui, R., Hoang, L., and Villemaud, O. (2022). Robust sparse voting. CoRR, abs/2202.08656.
Alon, N., Feldman, M., Procaccia, A., and Tennenholtz, M. (2010). Strategyproof approximation mechanisms for location on networks. Center for Rationality and Interactive Decision Theory, Hebrew University, Jerusalem, Discussion Paper Series.
Barberà, S., Massó, J., and Neme, A. (1997). Voting under constraints. Journal of Economic Theory, 76(2):298-321.
Barberà, S., Massó, J., and Serizawa, S. (1998). Strategy-proof voting on compact ranges. Games and Economic Behavior, 25(2):272-291.
Bhat, P. and Klein, O. (2020). Covert hate speech: white nationalists and dog whistle communication on Twitter. In Twitter, the Public Sphere, and the Chaos of Online Deliberation, pages 151-172. Springer.
Blanchard, P., Mhamdi, E. M. E., Guerraoui, R., and Stainer, J. (2017). Machine learning with adversaries: Byzantine tolerant gradient descent. In Advances in Neural Information Processing Systems 30 (NeurIPS 2017), pages 119-129.
Border, K. C. and Jordan, J. S. (1983). Straightforward elections, unanimity and phantom voters. The Review of Economic Studies, 50(1):153-170.
Brady, R. L. and Chambers, C. P. (1995). A spatial analogue of May's theorem. Social Choice and Welfare, 71.
Brandt, F., Conitzer, V., Endriss, U., Lang, J., and Procaccia, A. D. (2016). Handbook of Computational Social Choice. Cambridge University Press.
Brimberg, J. (2017). The Fermat-Weber location problem revisited. Mathematical Programming, 49.
Chen, Y., Su, L., and Xu, J. (2017). Distributed statistical machine learning in adversarial settings: Byzantine gradient descent. Proc. ACM Meas. Anal. Comput. Syst., 1(2).
Chung, K.-S. and Ely, J. C. (2007). Foundations of dominant-strategy mechanisms. The Review of Economic Studies, 74(2):447-476.
Cohen, M. B., Lee, Y. T., Miller, G. L., Pachocki, J., and Sidford, A. (2016). Geometric median in nearly linear time. In Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing (STOC 2016), pages 9-21. ACM.
Dinh, C. T., Tran, N. H., and Nguyen, T. D. (2020). Personalized federated learning with Moreau envelopes. In Advances in Neural Information Processing Systems 33 (NeurIPS 2020).
El-Mhamdi, E.-M., Guerraoui, R., and Rouault, S. (2018). The hidden vulnerability of distributed learning in Byzantium. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 3521-3530. PMLR.
Escoffier, B., Gourvès, L., Kim Thang, N., Pascual, F., and Spanjaard, O. (2011). Strategy-proof mechanisms for facility location games with many facilities. In Algorithmic Decision Theory, pages 67-81. Springer.
… Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 6246-6283. PMLR.
Farhadkhani, S., Guerraoui, R., and Hoang, L.-N. (2021). Strategyproof learning: Building trustworthy user-generated datasets. arXiv.
Farhadkhani, S., Guerraoui, R., Hoang, L.-N., and Villemaud, O. (2022b). An equivalence between data poisoning and Byzantine gradient attacks. In Proceedings of the 39th International Conference on Machine Learning, Proceedings of Machine Learning Research.
Feigenbaum, I. and Sethuraman, J. (2015). Strategyproof mechanisms for one-dimensional hybrid and obnoxious facility location models. AAAI Workshops.
Fotakis, D. and Tzamos, C. (2013). On the power of deterministic mechanisms for facility location games. In Automata, Languages, and Programming, pages 449-460. Springer.
Gibbard, A. (1973). Manipulation of voting schemes: a general result. Econometrica: Journal of the Econometric Society, pages 587-601.
Goel, S. and Hann-Caruthers, W. (2020). Coordinate-wise median: Not bad, not bad, pretty good. CoRR, abs/2007.00903.
Gu, Z. and Yang, Y. (2021). Detecting malicious model updates from federated learning on conditional variational autoencoder. In 2021 IEEE International Parallel and Distributed Processing Symposium (IPDPS), pages 671-680.
… 2015 American Control Conference (ACC), pages 2469-2475.
Hansen, P., Peeters, D., Richard, D., and Thisse, J.-F. (1985). The minisum and minimax location problems revisited. Operations Research, 33(6):1251-1265.
Hoang, L. N. (2017). Strategy-proofness of the randomized Condorcet voting system. Social Choice and Welfare, 48(3):679-701.
Hoang, L. N. (2020). Science communication desperately needs more aligned recommendation algorithms. Frontiers in Communication, 5:115.
Kairouz, P., McMahan, H. B., et al. (2021). Advances and open problems in federated learning. Foundations and Trends in Machine Learning, 14(1-2):1-210.
Karimireddy, S. P., He, L., and Jaggi, M. (2022). Byzantine-robust learning on heterogeneous datasets via bucketing. In International Conference on Learning Representations.
Kim, K. and Roush, F. (1984). Nonmanipulability in two dimensions. Mathematical Social Sciences, 8(1):29-43.
Kyropoulou, M., Ventre, C., and Zhang, X. (2019). Mechanism design for constrained heterogeneous facility location. In Algorithmic Game Theory: 12th International Symposium, SAGT 2019, pages 63-76. Springer-Verlag.
Lopuhaa, H. P. and Rousseeuw, P. J. (1989). On the relation between S-estimators and M-estimators of multivariate location and covariance. The Annals of Statistics, pages 1662-1683.
Lu, P., Wang, Y., and Zhou, Y. (2009). Tighter bounds for facility games. In Internet and Network Economics, pages 137-148. Springer.
Lubin, B. and Parkes, D. C. (2012). Approximate strategyproofness. Current Science, 103(9):1021-1032.
McMahan, B., Moore, E., Ramage, D., Hampson, S., and Arcas, B. A. y. (2017). Communication-efficient learning of deep networks from decentralized data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, volume 54 of Proceedings of Machine Learning Research, pages 1273-1282. PMLR.
Mena, P. (2020). Cleaning up social media: The effect of warning labels on likelihood of sharing false news on Facebook. Policy & Internet, 12(2):165-183.
Michelman, P. (2020). Can we amplify the good and contain the bad of social media? MIT Sloan Management Review, 62(1):1-5.
Minsker, S. (2015). Geometric median and robust estimation in Banach spaces. Bernoulli, 21(4):2308-2335.
Moulin, H. (1980). On strategy-proofness and single peakedness. Public Choice, 35(4):437-455.
Noothigattu, R., Gaikwad, S. N. S., Awad, E., Dsouza, S., Rahwan, I., Ravikumar, P., and Procaccia, A. D. (2018). A voting-based system for ethical decision making. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18), pages 1587-1594. AAAI Press.
Pillutla, K., Kakade, S. M., and Harchaoui, Z. (2022). Robust aggregation for federated learning. IEEE Transactions on Signal Processing, 70:1142-1154.
Polyak, B. T. and Juditsky, A. B. (1992). Acceleration of stochastic approximation by averaging. SIAM Journal on Control and Optimization, 30(4):838-855.
Procaccia, A. D. and Tennenholtz, M. (2013). Approximate mechanism design without money. ACM Transactions on Economics and Computation.
Rajput, S., Wang, H., Charles, Z., and Papailiopoulos, D. (2019). DETOX: A redundancy-based framework for faster and more robust gradient aggregation. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.
Ribeiro, M. H., Jhaver, S., Zannettou, S., Blackburn, J., Cristofaro, E. D., Stringhini, G., and West, R. (2020). Does platform migration compromise content moderation? Evidence from r/the_donald and r/incels. CoRR, abs/2010.10397.
Satterthwaite, M. A. (1975). Strategy-proofness and Arrow's conditions: Existence and correspondence theorems for voting procedures and social welfare functions. Journal of Economic Theory, 10(2):187-217.
Smith, D. J. and Vamanamurthy, M. K. (1989). How small is a unit ball? Mathematics Magazine, 62(2):101-107.
So, J., Güler, B., and Avestimehr, A. S. (2021). Byzantine-resilient secure federated learning. IEEE Journal on Selected Areas in Communications, 39(7):2168-2181.
Sui, X. and Boutilier, C. (2015). Approximately strategyproof mechanisms for (constrained) facility location. In Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems (AAMAS '15), pages 605-613.
Tang, P., Yu, D., and Zhao, S. (2020). Characterization of group-strategyproof mechanisms for facility location in strictly convex space. In Proceedings of the 21st ACM Conference on Economics and Computation (EC '20), pages 133-157. ACM.
Walsh, T. (2020). Strategy proof mechanisms for facility location in Euclidean and Manhattan space. CoRR, abs/2009.07983.
Wang, Q., Ye, B., Xu, T., Lu, S., and Guo, S. (2015). Approximately truthful mechanisms for radio spectrum allocation. IEEE Transactions on Vehicular Technology, 64(6):2615-2626.
Whitten-Woodring, J., Kleinberg, M. S., Thawnghmung, A., and Thitsar, M. T. (2020). Poison if you don't know how to use it: Facebook, democracy, and human rights in Myanmar. The International Journal of Press/Politics, 25(3):407-425.
Wu, Z., Ling, Q., Chen, T., and Giannakis, G. B. (2020). Federated variance-reduced stochastic gradient descent with robustness to Byzantine attacks. IEEE Transactions on Signal Processing, 68:4583-4596.
Yue, N. (2019). The "weaponization" of Facebook in Myanmar: A case for corporate criminal liability. Hastings Law Journal, 71:813.
Zhang, Q. and Li, M. (2014). Strategyproof mechanism design for facility location games with weighted agents on a line. Journal of Combinatorial Optimization, 28(4):756-773.
Zinkevich, M., Weimer, M., Li, L., and Smola, A. (2010). Parallelized stochastic gradient descent. In Advances in Neural Information Processing Systems, volume 23. Curran Associates, Inc.
… then yields, under E*, the bound ∥g_{1:V} − … ∥ …, where we used ∥x∥₂ = √(Σₖ x[k]²) ≤ √(d ∥x∥_∞²) = √d ∥x∥_∞ and the fact that ∥e_ij∥_∞ ≤ ελᵢ/(2d). Combining Equations (81) and (83) …
Since |u[i]| ≤ 1 for all unit vectors u and all coordinates i ∈ [d], we see that |f(g_∞ − θ_v)[i, j, k]| ≤ 6.
Footnotes

- Hence, we often use the term "voter" instead of "user".
- Because all the votes that we consider are permutation invariant (Proposition 5 in Appendix A).
- Our definition does not coincide with Bayesian incentive compatibility, which aims to bound one's expected strategic gain.
- This setting is similar to the "worst-case IID susceptibility" proposed by Lubin and Parkes (2012), but we consider high-probability bounds on the gain, which is different from the expected regret defined by Lubin and Parkes (2012).
- This property roughly guarantees a certain level of coordination between the decisions taken on each coordinate.
- Clearly, for d ≥ 5, such r₁, r₂, r₃ exist.
Acknowledgements

We thank Rafael Pinot for the very helpful comments on the introduction of the paper. We thank the anonymous reviewers for their constructive comments. This work has been supported in part by the Swiss National Science Foundation (SNSF) project 200021_200477.

D.2 Proof of Proposition 3

Proof. Let i, j ∈ [d]. Note that ∂_j((ΣΣz)[i]) = ∂_j(Σₖ (ΣΣ)[i, k] z[k]) = (ΣΣ)[i, j]. As a result, using Lemma 24, we have

∂_j ∂_i ∥z∥_Σ = ∂_j ((ΣΣz)[i] / ∥z∥_Σ) = (ΣΣ)[i, j] / ∥z∥_Σ − (ΣΣz)[i] (ΣΣz)[j] / ∥z∥_Σ³.

Combining all coordinates, replacing z by z − θ_v, and averaging over all voters then yields the lemma.

D.3 The computation of Σ-skewed Geometric Median

Intuitively, the computation of the Σ-skewed geometric median corresponds to skewing the space using the linear transformation Σ, computing the geometric median in the skewed space, and de-skewing the computed geometric median by applying Σ⁻¹. The following two lemmas formalize this intuition.

Lemma 22. L^Σ_∞(z, θ) = L_∞(Σz, Σθ) and L^Σ_{0:V}(s, θ⃗, z) = L_{0:V}(Σs, Σθ⃗, Σz).

Proof. This is straightforward, by expanding the definition of the terms.

Lemma 23. g^Σ_∞(θ) = Σ⁻¹ g_∞(Σθ) and g^Σ_{0:V}(s, θ⃗) = Σ⁻¹ g_{0:V}(Σs, Σθ⃗).

Proof. By definition of g_∞(Σθ), we know that it minimizes y → L_∞(y, Σθ). It is then clear that Σ⁻¹g_∞(Σθ) minimizes z → L_∞(Σz, Σθ). The case of g^Σ_{0:V} is similar.

D.4 No Shoe Fits Them All

In practice, we may expect different voters to assign a different importance to different dimensions. Unfortunately, this leads to the following impossibility theorem for the asymptotic strategyproofness of any skewed geometric median.

Corollary 1. Suppose voters v, w have S_v and S_w-skewed preferences, where the matrices S_v and S_w are not proportional. Then no skewed geometric median is asymptotically strategyproof for both.

Proof. Asymptotic strategyproofness for S_v requires using a Σ-skewed geometric median such that S_v⁻¹ H^Σ_∞ S_v⁻¹ is proportional to the identity, i.e., such that H^Σ_∞ ∝ S_v S_v. But then S_w⁻¹ H^Σ_∞ S_w⁻¹ ∝ S_w⁻¹ S_v S_v S_w⁻¹, which, given our assumption about these matrices, cannot be proportional to the identity. Proposition 2 then implies that SKEW(S_w⁻¹ H^Σ_∞ S_w⁻¹) > 0, which means that the Σ-skewed geometric median is not asymptotically strategyproof for voter w.

We leave however open the problem of determining what shoe "most fits them all". In other words, assuming a set S of skewing matrices, each of which may represent how different voters' preferences may be skewed, which Σ(S)-skewed geometric median guarantees asymptotic α-strategyproofness for all voters, with the smallest possible value of α? And what is this optimal uniform asymptotic strategyproofness guarantee α(S) that can be obtained?
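Lemma 23 also gives a simple recipe for computing the skewed geometric median: skew the inputs by Σ, compute an ordinary geometric median, and de-skew the result. The sketch below (ours) uses plain Weiszfeld iterations for the inner step; faster solvers exist (see, e.g., Cohen et al., 2016), and the function names and iteration counts are illustrative assumptions.

```python
import numpy as np

def geometric_median(points, iters=200, eps=1e-9):
    """Plain Weiszfeld iterations for the (unskewed) geometric median."""
    z = points.mean(axis=0)
    for _ in range(iters):
        d = np.linalg.norm(points - z, axis=1)
        w = 1.0 / np.maximum(d, eps)            # guard against division by zero
        z = (w[:, None] * points).sum(axis=0) / w.sum()
    return z

def skewed_geometric_median(points, Sigma):
    """Lemma 23: g^Sigma(theta) = Sigma^{-1} g(Sigma theta)."""
    g = geometric_median(points @ Sigma.T)      # median in the Sigma-skewed space
    return np.linalg.solve(Sigma, g)            # de-skew the result

rng = np.random.default_rng(3)
theta = rng.normal(size=(500, 2))               # toy voter-preference vectors
Sigma = np.array([[2.0, 0.3], [0.3, 1.0]])      # symmetric positive definite
print(skewed_geometric_median(theta, Sigma))
```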
Hyperspectral Image Segmentation: A Preliminary Study on the Oral and Dental Spectral Image Database (ODSI-DB)
Luis C Garcia
Peraza Herrera
King's College London
UK
Conor Horgan
King's College London
UK
Hypervision Surgical Ltd
UK
Sebastien Ourselin
King's College London
UK
Hypervision Surgical Ltd
UK
Michael Ebner
King's College London
UK
Hypervision Surgical Ltd
UK
Tom Vercauteren
King's College London
UK
Hypervision Surgical Ltd
UK
Hyperspectral Image Segmentation: A Preliminary Study on the Oral and Dental Spectral Image Database (ODSI-DB)
orcid l Hyperspectral image segmentation; dental reflectance
Visual discrimination of clinical tissue types remains challenging, with traditional RGB imaging providing limited contrast for such tasks. Hyperspectral imaging (HSI) is a promising technology providing rich spectral information that can extend far beyond three-channel RGB imaging. Moreover, recently developed snapshot HSI cameras enable real-time imaging with significant potential for clinical applications. Despite this, the investigation into the relative performance of HSI over RGB imaging for semantic segmentation purposes has been limited, particularly in the context of medical imaging. Here we compare the performance of state-of-the-art deep learning image segmentation methods when trained on hyperspectral images, RGB images, hyperspectral pixels (minus spatial context), and RGB pixels (disregarding spatial context). To achieve this, we employ the recently released Oral and Dental Spectral Image Database (ODSI-DB), which consists of 215 manually segmented dental reflectance spectral images with 35 different classes across 30 human subjects. The recent development of snapshot HSI cameras has made real-time clinical HSI a distinct possibility, though successful application requires a comprehensive understanding of the additional information HSI offers. Our work highlights the relative importance of spectral resolution, spectral range, and spatial information to both guide the development of HSI cameras and inform future clinical HSI applications.
Introduction
During an intervention, the physician or surgeon has to continuously decode the visual information into tissue types and pathological conditions. As a result of this process, decisions on how to continue with the intervention are taken. Hyperspectral cameras can capture visual information far beyond the three red, green, and blue (RGB) wavelengths that the naked human eye (and common endoscopes) can perceive. This additional data provides extra cues that facilitate the identification and characterization of relevant tissue structures that are otherwise imperceptible (Shapey et al. 2019; Ebner et al. 2021). To name a few examples, hyperspectral information has been used as an input to visualize tissue structures occluded by blood (Monteiro et al. 2004) or ligament tissue (Zuzak et al. 2008), display tissue oxygenation and perfusion (Best et al. 2011; Chin et al. 2017), classify images as cancerous/normal (Fei et al. 2017; Bravo et al. 2017; Beaulieu et al. 2018), and improve the contrast between different anatomical structures such as liver/gallbladder (Zuzak et al. 2007), ureter (Nouri et al. 2016), and facial nerve (Nouri et al. 2016). The output of these systems is typically a classification score, an overlay with a segmentation, or a contrast-enhanced image. In this work, we focus on segmentation.
Image segmentation is a building block of many computer-assisted medical applications. In dentistry, the two most common diagnostic visualization strategies are visual inspection (RGB imaging) and X-ray imaging (Hyttinen et al. 2020). RGB imaging serves a multitude of purposes: to name a few, patient instruction and motivation, medico-legal reasons, treatment planning, liaison with the dental laboratory, assessment of the baseline situation (when seeing a new patient), and progress monitoring (Ahmad 2009a,b). RGB imaging also provides valuable information for soft-tissue diagnostics and some surface features of hard tissue. However, the information it captures is restricted to the capabilities of the human eye, with spectral characteristics dictated by the central wavelengths of the short, middle, and long wavelength-detecting cones in the retina (450 nm, 520 nm, 660 nm). Unlike RGB imaging, X-ray provides anatomical and pathological information on hard-tissue structures such as the teeth and the alveolar bone. This additional information comes at the expense of exposing the patient to ionizing radiation and potential risks derived from the use of contrast agents.
In contrast to RGB imaging, hyperspectral cameras allow us to capture additional information beyond the usual three RGB bands. This new set of images corresponding to narrow and contiguous wavelength bands forms the reflection spectrum of the sample (in our case, the sample is the inside of the mouth). Although the applications and possible benefits of hyperspectral imaging (HSI) are an active field of research (Shapey et al. 2019; Manifold et al. 2021; Ebner et al. 2021; Seidlitz et al. 2022), we foresee that patients could potentially benefit from this technology in two different ways. First, the reflectance spectrum could be used to extract tissue properties and produce a range of pseudo-color images that enhance the visualization capabilities of clinicians. For example, displaying or highlighting imaging biomarkers or clinical conditions that are barely visible or not perceivable in RGB (Best et al. 2013; Zherebtsov et al. 2019; Hyttinen et al. 2018, 2019). Second, hyperspectral images could provide computer-assisted diagnosis methods with additional information that can help improve the accuracy of detecting and diagnosing lesions (Boiko et al. 2019). The work presented in this manuscript is aimed at assessing the latter possible benefit.
The research hypothesis we are working with is that there may be perceivable differences in the reflectance spectrum of diseased tissue compared to that of healthy anatomy. For example, the average reflectance spectrum of all the pixels of a healthy tooth might be different to that of one affected by a certain condition (e.g. early-stage cavities). A preliminary step to the development of such quantitative dental and oral biomarkers is to segment the different anatomical structures accurately. In this scenario, a question that quickly arises is whether we can obtain an improved segmentation accuracy with state-of-the-art deep neural network architectures designed for 2D RGB image analysis by simply replacing the RGB input with a hyperspectral cube. Similarly, from a deep neural network design perspective, it is interesting to see how accuracy changes when we just use spectral information (i.e. when we classify pixels individually) or when we also use spatial information (i.e. when we segment images). That is, we aim to discover how the segmentation accuracy changes when reducing the information available from N hyperspectral bands to the usual three RGB bands. This is indeed one way to assess a lower bound¹ of the added value of hyperspectral information above and beyond RGB.
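As an illustration of this hypothesis (our sketch, not code from the paper), the average reflectance spectrum of a structure can be computed directly from an annotated hypercube; the boolean mask interface below is a hypothetical simplification of the ODSI-DB annotation format.

```python
import numpy as np

def mean_class_spectrum(cube, mask):
    """Average reflectance spectrum over the annotated pixels of one structure.

    cube: (H, W, C) reflectance hypercube with values in [0, 1]
    mask: (H, W) boolean annotation mask for the structure of interest
          (ODSI-DB ships per-class annotations; adapt this to your loader)
    """
    return cube[mask].mean(axis=0)      # -> (C,) mean spectrum

# e.g. compare the mean enamel spectrum of a healthy tooth against that of a
# tooth with a suspected early-stage lesion:
# delta = mean_class_spectrum(cube, healthy_mask) - mean_class_spectrum(cube, lesion_mask)
```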
Contributions
In this work, we provide baseline results for the segmentation of the Oral and Dental Spectral Image Database (ODSI-DB) (Hyttinen et al. 2020). We evaluate how the segmentation performance changes when using different spatial and spectral resolutions as data inputs, guiding future developments in the field of dental reflectance and hyperspectral image segmentation. We provide the training code along with the models validated in this work². Additionally, we propose an improved approach to reconstructing RGB images from hyperspectral raw data compared to that employed in ODSI-DB. This is useful to compensate for missing spectra, as occurs in the images captured with the Nuance EX camera used in ODSI-DB. Our code to perform such conversion is also made available².
Related work
Recently, a literature review on deep learning techniques applied to hyperspectral medical images has been published (Khan et al. 2021). In the following paragraph we summarise some of the most recent work involving hyperspectral images in the context of surgery, and how the performance varies across different computer-assisted applications when using hyperspectral bands as opposed to traditional RGB imaging.
In Garifullin et al. (2018), the authors segmented the retinal vasculature, optic disc, and macula using 30 spectral bands (380-780 nm). Their results showed an improvement of 2 percentage points (pp) for vessels and optic disc and 6 pp for the macula when comparing deep learning (DL) models trained on hyperspectral versus RGB images. In Ma et al. (2019, 2021), the authors employ hyperspectral images for tumor classification and margin assessment. In this work, the model proposed by the authors for tissue classification on surgical specimens achieved a pixel-wise average AUC of 0.88 and 0.84 for hyperspectral and RGB, respectively. In Wang et al. (2021), the authors reported a difference of 2 pp in Dice coefficient for the segmentation of melanoma in histopathology samples of the skin when comparing the performance of a 2D U-Net on RGB and hyperspectral images. Similarly, Trajanovski et al. (2021) showed that the Dice coefficient for the segmentation of tongue tumors increases from 0.80 to 0.89 when using hyperspectral information. Despite this recent body of work, to the best of our knowledge, there is no current benchmark on ODSI-DB, which, as opposed to the previous literature, mostly targeting binary classification, has a considerably higher number of classes (35) and a substantial number of patient samples (> 200).
Materials and methods
Dataset details
The ODSI-DB dataset (Hyttinen et al. 2020) contains 316 images (215 are annotated, 101 are not) of 30 human subjects. The 215 annotated images are partially labelled, and the number of annotated pixels per image varies from image to image. The annotated pixels can belong to 35 possible classes. The number of annotated pixels per class is shown in Tab. A1 in the appendix. ODSI-DB contains images captured with two different cameras: 59 annotated images were taken with a Nuance EX (CRI, PerkinElmer, Inc., Waltham, MA, USA), and 156 were obtained with a Specim IQ (Specim, Spectral Imaging Ltd., Oulu, Finland). The pictures taken by the Nuance EX contain 51 spectral bands (450-950 nm with 10 nm steps), and those captured by the Specim IQ have 204 bands (400-1000 nm with approximately 3 nm steps). The reflectance values for the images are in the normalized range [0, 1].
Reconstruction of RGB images from hyperspectral data
Although ODSI-DB contains RGB reconstructions of the hyperspectral images, we have observed that the RGB reconstruction method used to generate the RGB images provided does not compensate for the lack of hyperspectral information in the 400-450 nm range for the Nuance EX images. The lack of this information results in a yellow artifact in the reconstructed RGB images from the Nuance EX (see Fig. 1). We thus provide an alternative RGB reconstruction.
Figure 1. Exemplary reconstructed RGB images provided in the ODSI-DB dataset compared to our RGB reconstructions, for both the Nuance EX and the Specim IQ cameras. As can be observed, the RGB images reconstructed from the hyperspectral images captured with the Nuance EX are affected by a yellow artifact. This artifact is not present in those reconstructed from the Specim IQ. This occurs because the Nuance EX does not capture the 400-450 nm range, which carries information relevant to reconstruct the blue channel (and, to a lesser degree, the red channel).
Our RGB reconstruction follows the method proposed by Magnusson et al. (2020), where the hyperspectral images are first converted to CIE XYZ and then to sRGB.
The conversion from CIE XYZ to linear sRGB is a linear transformation where the X, Y, and Z channels contribute largely to the red, green, and blue channels of the linear sRGB image, respectively. When converting hyperspectral images to CIE XYZ, the contribution (i.e. the weight) of each hyperspectral band to the CIE XYZ image is defined by a color matching function (CMF). We used the standard CIE 1931, shown in Fig. 2. However, as shown in Fig. 2, the hyperspectral bands in the range 400 − 450 nm have a considerable weight to reconstruct the Z channel (blue), and a minor weight to reconstruct the X (red) channel.
As the Nuance EX hyperspectral images do not have any information in this range, we miss a substantial amount of the information needed to reconstruct the Z (blue) channel correctly, which is why the images look yellow (see the blue medical-grade glove in Fig. 1). On the other hand, the images captured with the Specim IQ camera have information in the 400-450 nm range (the camera range is 400-1000 nm). Therefore, the RGB reconstructions look realistic and do not display the yellow tint seen in those from the Nuance EX.
The purpose of the modified RGB reconstruction proposed in this section is to compensate for the missing information (380-450 nm for the Nuance EX and 380-400 nm for the Specim IQ). With this modification, we aim to make the RGB images produced from both cameras look alike prior to them being processed by the convolutional network. To do so and account for the missing wavelengths, we modify the original CIE CMF shown in Fig. 2. The modification consists of taking the CMF in the missing range (e.g. 380-450 nm for the Nuance EX), flipping it over the vertical axis at the start of the captured wavelengths (450 nm for the Nuance EX), and summing it with the original CMF. More formally, the modified color matching functions are defined as

x̄_n(λ) = x̄(λ) + x̄_c(λ),
ȳ_n(λ) = ȳ(λ) + ȳ_c(λ),
z̄_n(λ) = z̄(λ) + z̄_c(λ),   (1)

where x̄, ȳ, z̄ are the original CIE 1931 2-deg color matching functions (CMFs) (Smith and Guild 1931) shown in Fig. 2 (left); x̄_c, ȳ_c, and z̄_c are the additive corrections that compensate for the missing information; and x̄_n, ȳ_n, and z̄_n are the corrected CMFs (shown in Fig. 2, center and right). As different cameras are missing different wavelength ranges, the additive corrections must differ. We define the CMF correction for the Nuance EX as
x̄_c(λ) = x̄(2 × 450 − λ) if 450 ≤ λ ≤ 450 + (450 − 380), and 0 otherwise,
ȳ_c(λ) = ȳ(2 × 450 − λ) if 450 ≤ λ ≤ 450 + (450 − 380), and 0 otherwise,
z̄_c(λ) = z̄(2 × 450 − λ) if 450 ≤ λ ≤ 450 + (450 − 380), and 0 otherwise,   (2)

and the CMF correction for the Specim IQ as

x̄_c(λ) = x̄(2 × 400 − λ) if 400 ≤ λ ≤ 400 + (400 − 380), and 0 otherwise,
ȳ_c(λ) = ȳ(2 × 400 − λ) if 400 ≤ λ ≤ 400 + (400 − 380), and 0 otherwise,
z̄_c(λ) = z̄(2 × 400 − λ) if 400 ≤ λ ≤ 400 + (400 − 380), and 0 otherwise.   (3)
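The following sketch (ours; the tabulated CMF array and its wavelength grid are assumed inputs) implements the flip-and-add correction of Eqs. (1)-(3) for an arbitrary missing range.

```python
import numpy as np

def corrected_cmf(wavelengths, cmf, lo_missing=380.0, lo_captured=450.0):
    """Fold the CMF weight of the missing [lo_missing, lo_captured) range back
    onto the captured range by mirroring it about lo_captured (Eqs. (1)-(3)).

    wavelengths: ascending sample points of the tabulated CMF (nm)
    cmf:         (len(wavelengths), 3) array with columns xbar, ybar, zbar
    """
    corrected = cmf.copy()
    hi = lo_captured + (lo_captured - lo_missing)     # end of the mirrored window
    in_window = (wavelengths >= lo_captured) & (wavelengths <= hi)
    for i, lam in zip(np.where(in_window)[0], wavelengths[in_window]):
        mirrored = 2.0 * lo_captured - lam            # reflect about lo_captured
        # add the CMF value at the mirrored (missing) wavelength, interpolated
        corrected[i] += np.array([np.interp(mirrored, wavelengths, cmf[:, c])
                                  for c in range(3)])
    return corrected

# Nuance EX: corrected_cmf(w, cmf, 380, 450); Specim IQ: corrected_cmf(w, cmf, 380, 400)
```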
The original CMF, along with those modified for the Specim IQ and Nuance EX images, are shown in Fig. 2. Due to the nature of the proposed CMF modifications, the modified RGB reconstruction can be seen as a color normalization that transforms the input data into a common RGB space, easing the learning of segmentation from a dataset containing a mix of Nuance EX and Specim IQ images. Once we have the modified CMFs, the conversion from hyperspectral to RGB is as follows:
X = (1/N) ∫_λ x̄_n(λ) f(λ) g(λ) dλ,
Y = (1/N) ∫_λ ȳ_n(λ) f(λ) g(λ) dλ,
Z = (1/N) ∫_λ z̄_n(λ) f(λ) g(λ) dλ,
N = ∫_λ ȳ(λ) g(λ) dλ,

[R]   [ +3.2406255  −1.5372080  −0.4986286 ] [X]
[G] = [ −0.9689307  +1.8757561  +0.0415175 ] [Y]   (4)
[B]   [ +0.0557101  −0.2040211  +1.0569959 ] [Z]

where f is the spectral density of the sample (i.e. the continuous version of the hyperspectral image), and g is the spectral density of the illuminant, D65 in our case. As these functions (x̄_n, ȳ_n, z̄_n, f, g) are typically sampled at different wavelengths, we interpolate all of them with a PCHIP 1-D monotonic cubic interpolator. We use the composite trapezoidal rule to evaluate the integrals (with the image wavelengths as sample points).
After the RGB conversion, following the proposal by Magnusson et al. (2020) to avoid images looking overly dark, we apply the following gamma correction to all the RGB pixels:

γ(x) = 12.92x if x ≤ 0.0031308, and 1.055x^0.416 − 0.055 otherwise,   (5)

where x is the red, green, or blue intensity of each pixel (the correction is applied to all three RGB channels).
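Putting Eqs. (4) and (5) together, a single pixel can be converted as in the sketch below (ours; the parameter names are illustrative assumptions). It resamples the modified CMFs and the D65 illuminant onto the image wavelengths with a PCHIP interpolator and integrates with the composite trapezoidal rule, as described above.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# XYZ -> linear sRGB matrix of Eq. (4)
M_XYZ2RGB = np.array([[ 3.2406255, -1.5372080, -0.4986286],
                      [-0.9689307,  1.8757561,  0.0415175],
                      [ 0.0557101, -0.2040211,  1.0569959]])

def pixel_to_srgb(band_wl, reflectance, cmf_wl, cmf_n, ybar, illum_wl, illum):
    """Eqs. (4)-(5) for one pixel; `reflectance` is sampled on `band_wl`."""
    g = PchipInterpolator(illum_wl, illum)(band_wl)          # D65 on the bands
    cmf = PchipInterpolator(cmf_wl, cmf_n, axis=0)(band_wl)  # modified CMFs, (B, 3)
    y0 = PchipInterpolator(cmf_wl, ybar)(band_wl)            # original ybar
    N = np.trapz(y0 * g, band_wl)                            # normalisation constant
    XYZ = np.trapz(cmf * (reflectance * g)[:, None], band_wl, axis=0) / N
    rgb = M_XYZ2RGB @ XYZ
    # Eq. (5) gamma correction, applied channel-wise
    return np.where(rgb <= 0.0031308, 12.92 * rgb,
                    1.055 * np.clip(rgb, 0.0, None) ** 0.416 - 0.055)
```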
Types of input and segmentation model
In this study, we evaluate model performance for pixel classification on the ODSI-DB dataset when different forms of data input are employed (see Fig. 3). We refer to rgbimage when we reconstruct the RGB image from the whole spectral range using a colour matching function as explained in Sec. 3.2. As the hyperspectral images in ODSI-DB have a different number of bands (450-950 nm with 10 nm steps for the Nuance EX, and 400-1000 nm with 3 nm steps for the Specim IQ), we linearly interpolate the images from both cameras to a fixed set of 170 evenly-spaced bands in the 450-950 nm range. As in most of the recent body of work in biomedical segmentation, backed up by the state-of-the-art results in most biomedical segmentation challenges, we chose the encoder-decoder 2D U-Net (Ronneberger et al. 2015) as our go-to model to build the segmentation baseline. For the pixel-wise experiments (rgbpixel and spixel), a network with an equivalent number of 1 × 1 filters and skip layers was used. The network hyperparameters for each input type were tuned on a random 10% of the images contained in the training set.
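A minimal sketch of the band harmonisation step (ours, shown only to illustrate the resampling described above; the released repository contains the code actually used):

```python
import numpy as np
from scipy.interpolate import interp1d

TARGET_WL = np.linspace(450.0, 950.0, 170)   # 170 evenly-spaced shared bands

def resample_bands(cube, band_wl):
    """Linearly resample an (H, W, C) reflectance cube along the spectral axis
    so that Nuance EX (51-band) and Specim IQ (204-band) images share the
    same 170 bands in the 450-950 nm range."""
    return interp1d(band_wl, cube, axis=2)(TARGET_WL)
```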
Results and discussion
Hyperspectral vs RGB as feature vectors
Hyperspectral images have a higher spectral resolution than RGB images. As we are interested in seeing whether this increased resolution translates into a higher degree of discrimination among feature vectors, we run t-SNE (van der Maaten and Hinton 2008) on 10 million randomly selected pixels, with an equal number of pixels picked from each image. For visualisation purposes, we reduce the dimensionality from hyperspectral and RGB to 2D. The hyperparameters used for the experiment were 10, 200, and 1000 for perplexity, learning rate, and number of iterations, respectively. The initialization was performed with PCA. The visualization of the 2D features is shown in Fig. 4. As can be observed in this figure, in the Nuance EX visualization, the hyperspectral plot shows a boundary between oral mucosa and gingiva (attached and marginal) which is blurred in RGB, and also a clearer separation between attached and marginal gingiva themselves. In the Specim IQ visualization, the hyperspectral plot displays a sharper edge between skin and gingiva (attached and marginal), and also between hair and specular reflections.
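For reference, the embedding step can be reproduced along these lines (a sketch with scikit-learn, which we assume here; the paper states the hyperparameters but not the implementation):

```python
from sklearn.manifold import TSNE

def embed_pixels_2d(pixels):
    """pixels: (N, C) array of reflectance spectra (or (N, 3) RGB values).

    Hyperparameters follow the ones stated above; note that recent
    scikit-learn releases rename `n_iter` to `max_iter`.
    """
    tsne = TSNE(n_components=2, perplexity=10, learning_rate=200,
                n_iter=1000, init='pca')
    return tsne.fit_transform(pixels)
```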
Evaluation protocol
For performance testing purposes, the 215 annotated images provided in ODSI-DB are partitioned into training (90%) and testing (10%)³. The training/testing split was generated randomly. We consider a training/testing split as valid when all the classes are represented in the training set (so we can perform inference on images containing any of the classes). In order to generate a valid training/testing split, we follow the next steps. For each class, we make a list of all the images that contain pixels of such class. We randomly pick one of those images and put it in the training set. After looping over all the classes, the training set contains at least one image with pixels of each class. We split the remaining images into training and testing with a probability p = 0.5. As there are classes whose pixels can be found only in one or two images of the dataset, not all the classes are present in the testing split. This is, for example, the case of the classes fibroma, makeup, malignant lesion, fluorosis, and pigmentation. Therefore, for reporting purposes, we ignore heavily underrepresented classes and concentrate on those tissue classes with at least 1 million pixel samples: skin, oral mucosa, enamel, tongue, lip, hard palate, attached gingiva, soft palate, and hair. In addition, we report class-based results and image-based results. The class-based results are computed by taking all the annotated pixels contained in the testing images as a single set. A confusion matrix is then built for each class, where the positives are the pixels of such class, and the negatives are the pixels of all the other classes. Sensitivity, specificity, accuracy, and balanced accuracy (arithmetic mean of sensitivity and specificity) are reported for each class. An average across classes is also reported. To obtain image-based results, we compute the average accuracy across all the images of the testing set, where the accuracy of any given image is calculated as the coefficient between the number of pixels accurately classified (regardless of the class) divided by the number of pixels annotated in the image.
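The two evaluation modes can be summarised by the following sketch (ours; array layouts are assumptions):

```python
import numpy as np

def class_based_metrics(y_true, y_pred, class_id):
    """One-vs-rest metrics over all annotated test pixels pooled together."""
    pos, hit = y_true == class_id, y_pred == class_id
    tp = np.sum(pos & hit)
    fn = np.sum(pos & ~hit)
    fp = np.sum(~pos & hit)
    tn = np.sum(~pos & ~hit)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    balanced_accuracy = 0.5 * (sensitivity + specificity)
    return sensitivity, specificity, accuracy, balanced_accuracy

def image_based_accuracy(truths, preds):
    """Mean over test images of (correctly classified / annotated) pixels;
    `truths` and `preds` are lists of label arrays of annotated pixels only."""
    return float(np.mean([np.mean(t == p) for t, p in zip(truths, preds)]))
```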
Ablation study: spatial and spectral information
The class-based results are shown in Tables 1, 2, 3, and 4 for the input modes rgbpixel, spixel, rgbimage, and simage, respectively. The class-based comparison between RGB and hyperspectral pixel inputs led to close results, except for the enamel and lip classes, where hyperspectral pixels helped improve the performance by 4 pp and 6 pp, respectively. The balanced accuracy over classes showed a slight improvement of 1.1 pp when using multiple bands. However, when comparing the accuracy averaged over images, where better-represented classes (i.e. skin, oral mucosa, enamel, tongue, lip) have a higher weight, the hyperspectral accuracy showed an improvement of 10 pp over RGB pixel inputs.

When comparing the class-based rgbimage and simage results, a mild improvement is observed when using the extended spectral range. The average balanced accuracy achieved was 74.39% for RGB reconstructions and 76.18% for hyperspectral images. These results, along with the 10 pp gap when moving from single-pixel inputs to images, suggest that, without considering DL architecture changes, spatial information is the main driver of segmentation performance in dental imaging. Nonetheless, common dental conditions such as calculus, gingiva erosion, and caries are related to two classes in particular, attached gingiva and enamel. While the enamel is distinct from the rest of the tissue, and relatively trivial to spot in an RGB image, the attached gingiva is better segmented when hyperspectral information is available, as shown by its improved sensitivity and balanced accuracy (Tables 3 and 4).

Table 5. Image-based accuracy for the different input types. The presented accuracy is the average of the images in the testing set. The accuracy for a single image is computed as the coefficient of the pixels correctly predicted divided by the total number of annotated pixels in the image. As when presenting class-based results, only the tissue classes with more than 1M pixels in the dataset have been considered. Accuracy results are provided in percentage.

The image-based results (Table 5) display a clear performance gap (> 10 pp) between RGB and hyperspectral pixel inputs. This gap comes down to 2 pp when comparing RGB to hyperspectral images.
Conclusions
In this work we performed an ablation study to discern how the availability of spatial and spectral information impacts the segmentation performance. We reported baseline results for four types of input data, single RGB pixels, hyperspectral pixels, RGB images and hyperspectral images. In addition, we provided an improved method to reconstruct the RGB images from the hyperspectral data provided in ODSI-DB.
We reported a mild improvement in the segmentation results on ODSI-DB when using hyperspectral information. However, the main driver of segmentation performance for the dental anatomy present in the dataset seems to be the availability of spatial information. It is when moving from pixel classification to full image segmentation that we reported the largest rise in segmentation performance.
Future work extends in several directions. An interesting research question is whether, by means of hyperspectral imaging, we can mitigate the annotation effort, which is one of the current issues in the CAI field. That is, the mild improvement in segmentation performance achieved with hyperspectral inputs could potentially be exploited to annotate fewer images without sacrificing segmentation performance. This would be particularly interesting for the field of hyperspectral endoscopy, as it represents an additional benefit in favour of the use of hyperspectral endoscopes. Another future direction is the exploration of convolutional architectures that take advantage of the hyperspectral nature of the data. Current state-of-the-art models such as U-Net have been optimised for RGB images; hence, by simply replacing the input we run the risk of not taking full advantage of the hyperspectral information available.
TV is supported by a Medtronic / RAEng Research Chair [RCSRF1819\7\34]. CH is supported by an InnovateUK Secondment Scholars Grant (Project Number 75124). For the purpose of open access, the authors have applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission.
Disclosure statement
TV, ME, and SO are co-founders and shareholders of Hypervision Surgical. TV also holds shares from Mauna Kea Technologies.
Figure 2. Original and modified color matching functions.

Figure 3. The four types of input compared are single RGB pixels (rgbpixel), single hyperspectral pixels (spixel), RGB images (rgbimage), and hyperspectral images (simage). The format used to define the dimensions of the input is H, W, C, where H, W, and C represent height, width, and channels, respectively.

Figure 4. t-SNE of 10 million randomly selected pixels (evenly distributed across images) from the ODSI-DB dataset. The dataset contains images captured with two different cameras, Nuance EX (51 bands) and Specim IQ (204 bands), hence the separated plots.
Table 1. Class-based pixel classification results for the rgbpixel mode. To generate this table all the pixels contained in the images of the testing set are considered as a single set. All the results are provided in percentage.

Class | Sensitivity | Specificity | Accuracy | Balanced accuracy
Attached gingiva | 54.89 | 68.08 | 67.74 | 61.49
Enamel | 56.52 | 69.98 | 68.64 | 63.25
Hair | 100.00 | 68.48 | 68.94 | 84.24
Hard palate | 44.01 | 68.63 | 66.40 | 56.32
Lip | 47.80 | 68.15 | 66.72 | 57.98
Oral mucosa | 42.95 | 68.37 | 66.16 | 55.66
Skin | 38.86 | 69.33 | 62.48 | 54.10
Soft palate | 0.00 | 67.37 | 67.12 | 33.68
Tongue | 2.14 | 62.77 | 54.61 | 32.46
Average | 43.02 | 67.91 | 65.42 | 55.46
Table 2. Class-based pixel classification results for the spixel mode. To generate this table all the pixels contained in the images of the testing set are considered as a single set. All the results are provided in percentage.

Class | Sensitivity | Specificity | Accuracy | Balanced accuracy
Attached gingiva | 54.73 | 68.09 | 67.74 | 61.41
Enamel | 67.49 | 68.00 | 67.95 | 67.74
Hair | 100.00 | 68.48 | 68.94 | 84.24
Hard palate | 44.01 | 68.64 | 66.40 | 56.32
Lip | 60.00 | 66.77 | 66.30 | 63.39
Oral mucosa | 42.94 | 68.48 | 66.26 | 55.71
Skin | 39.33 | 69.07 | 62.38 | 54.20
Soft palate | 0.00 | 67.37 | 67.12 | 33.68
Tongue | 2.14 | 62.77 | 54.61 | 32.46
Average | 45.63 | 67.52 | 65.30 | 56.57
Table 3. Class-based pixel classification results for the rgbimage mode. To generate this table all the pixels contained in the images of the testing set are considered as a single set. All the results are provided in percentage.

Class | Sensitivity | Specificity | Accuracy | Balanced accuracy
Attached gingiva | 26.23 | 99.94 | 98.00 | 63.09
Enamel | 49.40 | 99.04 | 94.11 | 74.22
Hair | 84.77 | 98.94 | 98.73 | 91.85
Hard palate | 0.53 | 99.64 | 90.64 | 50.08
Lip | 33.94 | 98.91 | 94.34 | 66.43
Oral mucosa | 78.03 | 92.09 | 90.87 | 85.06
Skin | 78.19 | 97.22 | 92.94 | 87.71
Soft palate | 49.13 | 99.42 | 99.23 | 74.27
Tongue | 56.48 | 97.20 | 91.73 | 76.84
Average | 50.74 | 98.04 | 94.51 | 74.39
Table 4. Class-based pixel classification results for the simage mode. To generate this table all the pixels contained in the images of the testing set are considered as a single set. All the results are provided in percentage.

Class | Sensitivity | Specificity | Accuracy | Balanced accuracy
Attached gingiva | 40.64 | 99.48 | 97.93 | 70.06
Enamel | 52.61 | 98.67 | 94.09 | 75.64
Hair | 69.61 | 99.87 | 99.43 | 84.74
Hard palate | 2.91 | 99.68 | 90.89 | 51.29
Lip | 60.49 | 99.70 | 96.94 | 80.09
Oral mucosa | 64.09 | 84.95 | 83.14 | 74.52
Skin | 86.05 | 94.94 | 92.94 | 90.49
Soft palate | 55.18 | 98.82 | 98.66 | 77.00
Tongue | 64.58 | 98.99 | 94.36 | 81.79
Average | 55.13 | 97.23 | 94.26 | 76.18
1. We refer to a lower bound because it is possible that by tailoring the network model (instead of using a common 2D U-Net) to the nature of hyperspectral data we would render further accuracy improvements.
2. https://github.com/luiscarlosgph/segodsidb
3. The training/testing split is available for download at https://synapse.org/segodsidb
Acknowledgement(s)

This work was supported by core and project funding from the Wellcome/EPSRC [WT203148/Z/16/Z; NS/A000049/1; WT101957; NS/A000027/1]. This study/project is funded by the NIHR [NIHR202114]. The views expressed are those of the author(s) and not necessarily those of the NIHR or the Department of Health and Social Care. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 101016985 (FAROS project).

Appendix A. ODSI-DB dataset details

Table A1. ODSI-DB statistics. Number of pixels per class and number of images in the dataset containing pixels of each class.
| [
"https://github.com/luiscarlosgph/segodsidb"
] |
[
"LipFormer: Learning to Lipread Unseen Speakers based on Visual-Landmark Transformers",
"LipFormer: Learning to Lipread Unseen Speakers based on Visual-Landmark Transformers"
] | [
"Feng Xue ",
"Yu Li ",
"Deyin Liu ",
"Yincen Xie ",
"Senior Member, IEEE, RichangLin Wu ",
"Senior Member, IEEEHong * "
] | [] | [] | Lipreading refers to understanding and further translating the speech of a speaker in the video into natural language. State-of-the-art lipreading methods excel in interpreting overlap speakers, i.e., speakers appear in both training and inference sets. However, generalizing these methods to unseen speakers incurs catastrophic performance degradation due to the limited number of speakers in training bank and the evident visual variations caused by the shape/color of lips for different speakers. Therefore, merely depending on the visible changes of lips tends to cause model overfitting. To address this problem, we propose to use multi-modal features across visual and landmarks, which can describe the lip motion irrespective to the speaker identities. Then, we develop a sentence-level lipreading framework based on visual-landmark transformers, namely LipFormer. Specifically, LipFormer consists of a lip motion stream, a facial landmark stream, and a cross-modal fusion. The embeddings from the two streams are produced by self-attention, which are fed to the cross-attention module to achieve the alignment between visuals and landmarks. Finally, the resulting fused features can be decoded to output texts by a cascade seq2seq model. Experiments demonstrate that our method can effectively enhance the model generalization to unseen speakers. | 10.1109/tcsvt.2023.3282224 | [
"https://export.arxiv.org/pdf/2302.02141v1.pdf"
] | 256,615,987 | 2302.02141 | 675811d13c027b9585a8d5a7004b70e7390777c9 |
LipFormer: Learning to Lipread Unseen Speakers based on Visual-Landmark Transformers
Feng Xue
Yu Li
Deyin Liu
Yincen Xie
Lin Wu, Senior Member, IEEE
Richang Hong, Senior Member, IEEE *
LipFormer: Learning to Lipread Unseen Speakers based on Visual-Landmark Transformers
Index Terms: Lipreading, Landmarks, Transformer, Lip motion
Lipreading refers to understanding and further translating the speech of a speaker in a video into natural language. State-of-the-art lipreading methods excel at interpreting overlap speakers, i.e., speakers who appear in both the training and inference sets. However, generalizing these methods to unseen speakers incurs catastrophic performance degradation due to the limited number of speakers in the training bank and the evident visual variations caused by the shape/color of lips across speakers. Therefore, merely depending on the visible changes of lips tends to cause model overfitting. To address this problem, we propose to use multi-modal features across visuals and landmarks, which can describe lip motion irrespective of speaker identities. We then develop a sentence-level lipreading framework based on visual-landmark transformers, namely LipFormer. Specifically, LipFormer consists of a lip motion stream, a facial landmark stream, and a cross-modal fusion. The embeddings from the two streams are produced by self-attention and fed to the cross-attention module to achieve the alignment between visuals and landmarks. Finally, the resulting fused features can be decoded into output texts by a cascaded seq2seq model. Experiments demonstrate that our method can effectively enhance the model's generalization to unseen speakers.
I. INTRODUCTION
Lipreading is the inference of a speaker's speech from a video clip, which may be presented with or without audio signals [1]-[3]. Lipreading offers an effective way to infer text as an alternative to speech recognition, which becomes implausible in disturbing circumstances, e.g., unknown speakers in the wild. Besides, lipreading shows enormous value for real-world applications, such as silent-movie processing and silent conversations [4]-[6].
Benefiting from deep learning, lipreading has witnessed remarkable progress and has shown a trend of even surpassing experienced human experts. Early efforts performed lipreading only at word level [7], [8]. However, such lipreading methods translate only one word at a time. Compared to word-level lipreading, sentence-level lipreading [9]-[13] is more accurate in sentence prediction, as it predicts the texts depending on contextual priors. For example, Assael et al. [14] proposed LipNet, which combines VGG [15], LSTM [16], and CTC [17], and achieved an accuracy of 95.2% on the GRID dataset [18]. In [19], the authors developed an approach based on attribute learning and contrast learning, which greatly improved the performance of lipreading. However, the majority of current lipreading models are only trained and tested on publicly available datasets, which are limited in their training sample size and number of speakers. Moreover, the performance improvements of these methods are incremental for unseen speakers, as they are mainly developed for overlap speakers (i.e., the test speaker has appeared in the training set). We hypothesize that these methods describe the lip motion using visual clues; however, if the model is trained with visual lip motion only, it will overfit to the visual variations caused by the shape/color of lips and the pronunciation habits of particular speakers. As a consequence, this hampers the generalization ability of the model. For example, as shown in Figure 1, being overfitted to visual variations, e.g., the lip shape, the model translates into different texts even when two speakers say the same word. Therefore, a lipreading method that simply uses lip motion may suffer a slump in translation accuracy, especially for unseen speakers. In real-life applications, a lipreading system is often required to make lip-to-text predictions for new faces that were not observed in the training bank. Also, learning a model with good generalization ability to unseen speakers is paramount for downstream applications.
Fig. 2. The proposed LipFormer is an end-to-end two-stream architecture built upon a visual branch and a landmark branch. The input to the visual branch is a sequence of lip images. The input to the landmark branch is a 340-dimensional vector extracted from each speaker frame. The embeddings from the two streams are forwarded to a cross-attention module to achieve the alignment, which allows for the cross-modal fusion. The resulting fused features can be decoded to output texts by a cascaded seq2seq model. Definitions of notations can be seen in Table I.
Given limited access to the number of samples and speaker identities, we aim to enhance the model in lipreading by calibrating the association between motion and texts via corrective alignment, such that the model can generalize to unseen speakers. In this paper, we explore the use of landmarks as a corrective offset to achieve the true underlying association between lip motion and sentences for lipreading. In [20], the authors proposed to learn object structural representations by using landmarks as a complementary feature to the pretrained deep representation in recognizing visual attributes. In [21], the authors proposed to extract facial geometric features by using landmarks; both geometric and texture-based features can be used to improve the accuracy of facial expression recognition. Such a method can also help the model generalize to new faces outside the training set.
Inspired by these approaches, in this paper we propose to introduce landmark features that are independent of the visual appearance of lips and can thus eliminate the visual variations between speakers. In fact, landmarks encode positional priors of the speaker's face and lips (i.e., lip and facial landmarks). The motion trajectory encoded by those landmarks effectively describes lip motion irrespective of the speaker, and the learned landmark embedding is less influenced by visual variations. It therefore has the potential to improve the generalization of a lipreading model to speakers unseen at inference time.
In this paper, we aim to improve the generalization of a lipreading model in recognizing unseen speakers. To achieve this goal, we propose a sentence-level lipreading framework based on visual-landmark transformers, dubbed LipFormer. Specifically, we describe the lip motion with features from two modalities: visual features and landmarks. The visual features are extracted from lip regions for each speaker at frame level and then self-attended to become discriminative. However, encoding the lip motion using visual information alone can easily lead to overfitting, owing to the strong bias towards visual variations caused by the shape/color of a speaker's lips. To this end, we propose to use landmarks extracted from the face and lips as a motion trajectory to eliminate such variations. Then, we employ cross-attention to align and fuse these cross-modal features. The cross-attention of the transformer can effectively learn the correspondence between the visual and landmark embeddings, so as to improve the joint representations for cross-modal fusion. Finally, we employ a cascaded sequence-to-sequence (seq2seq) model to decode the fused features and generate the texts.
The main contributions of this paper are summarised below: 1) We propose a sentence-level lipreading framework based on visual-landmark transformers, which introduces corrective landmarks to minimize the biased visual variations, making the model generalize to unseen speakers.
2) The proposed model uses cross-modal features to describe lip motion, and a cross-attention is adopted to achieve the alignment between visuals and landmarks, which promotes the fusion of heterogeneous features and further improves the generalization of the model.
3) Extensive experiments are conducted on benchmark datasets to demonstrate the effectiveness of the proposed method in interpreting unseen speakers, and state-of-the-art performance is achieved.
II. RELATED WORK
In this section, we briefly review lipreading methods, which can be categorized into two lines: conventional methods and deep learning-based methods.
A. Traditional lipreading methods
1) Pixel-based methods. These methods assume that all pixels in the lip region contain vision-related information and use the pixel values of the lip region as the raw features, which are then reduced in different ways to obtain expressive features. For example, Potamianos et al. [22] proposed HiLDA, which is widely used as a visual feature extractor in speech recognition tasks. Lucey et al. [23] further considered local features on this basis, extracting local features of image patches and fusing them with global features to further improve recognition accuracy. Tim et al. [24] normalize and concatenate the AAM features of consecutive frames to extract spatio-temporal features by a linear transformation.
2) Shape-based methods. These methods extract features based on the shape of the lip region (lips, chin, etc.). For example, Papcun et al. [25] used articulatory features (AFs) for lipreading, but since such features are too simple to distinguish similar words, they are generally applied to small-scale recognition tasks. Chan [26] combined geometric features with PCA features of the lip as visual features. Luettin et al. [27] applied the ASM model to lipreading, generating features from the coordinates of several key points. However, shape-based models assume that most vision-related information lies on the contour represented by the feature points, which inevitably leads to information loss.
Benefiting from deep learning, lipreading has also witnessed remarkable progress. Compared with traditional methods, deep learning-based methods have powerful feature learning capability: they avoid the complex handcrafted feature extraction process, and their performance can be further enhanced with large-scale data.
B. Deep learning-based lipreading methods
Lipreading can be carried out at word level and sentence level. Early efforts performed lipreading only at word level; such lipreading videos correspond to only one word, which limits the range of applications. For example, Chung [28] designed two CNN structures, i.e., early fusion and multiple towers, which can be combined for all-at-once word-level translation from a sequence of lip motion. Petridis et al. [7] proposed an end-to-end audio-visual model based on residual networks and BiGRU, which simultaneously learns to extract features directly from images and audio. Stafylakis et al. [8] combined 3D-CNN and 2D-CNN to extract visual features and obtained higher accuracy on the LRW dataset.
In this paper, we focus on sentence-level lipreading. Compared to word-level lipreading, sentence-level lipreading is more accurate, since it predicts the texts depending on contextual priors. LipNet [14] is the first end-to-end sentence-level lipreading model; it consists of 3DCNN, BiGRU, and CTC, and achieves 95.2% accuracy on the GRID dataset. In [29]-[31], the model structures are similar to LipNet. However, the CTC loss assumes conditional independence, i.e., each output unit individually predicts the probability of a label. Therefore, the CTC loss focuses on local information of adjacent frames, which is unsuitable for predicting labels that require contextual information to discriminate. Considering the problems incurred by the CTC loss, Xu et al. [12] proposed LCANet, which stacks two layers of highway networks on the 3DCNN; this greatly improves the quality of the extracted features. They essentially used an attention mechanism to overcome the shortcomings of the conditional-independence assumption in CTC. The follow-up work [19] improves the performance of the lipreading model by introducing attribute learning and contrast learning into the sentence-level lipreading pipeline.
Other lipreading methods are based on the seq2seq model. The most representative is WAS [32], which uses a 5-layer 2DCNN and an LSTM to extract visual and auditory features. These features are fed to the seq2seq module to generate texts. Zhao [11] proposed CSSMCM, which combines factors such as pinyin and tones to help predict Chinese characters based on visual information. In [1], the authors proposed a knowledge distillation method that uses a pretrained speech recognition model as a teacher to optimize a lipreading student model, improving the accuracy of lipreading.
Due to the excellent performance of the transformer, Zhou et al. [33] applied it to speech recognition. Ma et al. [34] applied the transformer to lipreading and proposed CTCH-LipNet, which first uses a 3DCNN to extract visual features and then a cascaded architecture of two transformers to predict pinyin and Chinese characters. Ma et al. [35] use both video and audio as input, and a transformer decodes the features into texts.
However, most existing lipreading methods evaluate on overlap speakers by default. Such models are prone to overfitting to the speakers in the training set, while performing poorly on unseen speakers. In this paper, we propose a sentence-level lipreading framework based on visual-landmark transformers that generalizes to unseen speakers, thus addressing the lipreading problem for unseen speakers.
III. THE METHOD
The architecture of the proposed LipFormer is shown in Fig. 2. It consists of four modules: 1) a visual stream that extracts visual features of lip regions; 2) a landmark stream that encodes the trajectory of lip/facial movement of a speaker across the sequence; 3) a cross-modal fusion module that learns the alignment between visuals and landmarks; and 4) a cascaded seq2seq model for mapping fused features to texts. A minimal skeleton of this pipeline is sketched below.
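The skeleton below is our illustration of the four-stage data flow, not the authors' released code; the module names are hypothetical stand-ins.

    import torch.nn as nn

    class LipFormer(nn.Module):
        def __init__(self, visual_frontend, landmark_frontend, fusion, seq2seq):
            super().__init__()
            self.visual_frontend = visual_frontend        # 3DCNN + channel attention + Bi-GRU
            self.landmark_frontend = landmark_frontend    # angle features + Bi-GRU
            self.fusion = fusion                          # visual-landmark transformer encoder
            self.seq2seq = seq2seq                        # cascaded pinyin/character decoder

        def forward(self, lip_frames, landmark_angles, targets=None):
            f_v = self.visual_frontend(lip_frames)            # (B, T, D) visual stream
            f_m = self.landmark_frontend(landmark_angles)     # (B, T, D) landmark stream
            f_vm = self.fusion(f_v, f_m)                      # aligned and fused features
            return self.seq2seq(f_vm, targets)                # pinyin and character logits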
A. Visual Embedding
For each video clip, we first extract the face image in each frame using the DLib face detector, and then apply an affine transformation to each face image to obtain a mouth-centered crop of 160 × 80 pixels as the lip region. For a video clip with T frames, we thus have a lip region sequence {I_i}, where I_i is the lip region of the i-th frame (i = 1, ..., T). We first learn the per-frame feature embedding by applying 3D convolution [36], followed by a ReLU layer and a max-pooling layer. During training, dropout regularization is used along with the 3DCNN to alleviate the saturation problem.
The visual features extracted by the 3DCNN contain much irrelevant information, such as the lip shape and pronunciation habits of different speakers, which largely affects the accuracy of the text generated by the decoder. We therefore combine the 3DCNN with channel attention. The channel attention mechanism learns a weight for each channel, boosting useful visual features while suppressing irrelevant ones. The obtained feature vector for each frame is denoted by f^v_i. To aggregate the spatial grids, two different spatial context descriptors are generated from the input feature map by average pooling and max pooling; these two outputs are fed into a shared MLP to generate the channel attention map. The attention map is multiplied with the input features to focus on the important ones. The structure of the visual branch is shown in Fig. 3(a).
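A minimal CBAM-style sketch of the channel attention just described (shared MLP over average- and max-pooled descriptors); the layer sizes and reduction ratio are illustrative assumptions:

    import torch
    import torch.nn as nn

    class ChannelAttention(nn.Module):
        def __init__(self, channels, reduction=16):
            super().__init__()
            self.mlp = nn.Sequential(                     # shared MLP for both descriptors
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
            )

        def forward(self, x):                             # x: (B, C, T, H, W) 3DCNN features
            b, c = x.shape[:2]
            avg = self.mlp(x.mean(dim=(2, 3, 4)))         # average-pooled context descriptor
            mx = self.mlp(x.amax(dim=(2, 3, 4)))          # max-pooled context descriptor
            w = torch.sigmoid(avg + mx).view(b, c, 1, 1, 1)
            return x * w                                  # reweight (suppress/boost) channels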
Limited by the size of the convolution kernel, a CNN can only extract short-term spatio-temporal visual features. A bidirectional GRU [37], [38] is therefore applied to extract long-term features:
$[s^v_i, o^v_i] = \mathrm{GRU}(f^v_i, s^v_{i-1}),$  (1)
where $s^v_i$ and $o^v_i$ are the hidden vector and output vector of the GRU, respectively.
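In PyTorch terms, Eq. (1) corresponds to a standard bidirectional GRU over the per-frame features; a brief sketch (input and hidden sizes are assumed):

    import torch.nn as nn

    # features: (B, T, 512) per-frame embeddings -> outputs: (B, T, 2 * 256)
    bigru = nn.GRU(input_size=512, hidden_size=256, bidirectional=True, batch_first=True)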
B. Landmark Embeddings
Merely depending on the visual appearance leads to inferior lipreading performance. A primary reason is that lip motion exhibits significant visual variation caused by the very different shapes and colors of lips and the particular pronunciation habits of a speaker. To eliminate these visual variations, we propose taking the landmarks as another feature embedding. The landmark embedding is less influenced by the lip appearance and generalizes better to speakers unseen during training. The distribution of the 68 facial landmarks is shown in Fig. 3(b): 20 landmarks for the lip, 17 for the facial contour, and 31 for the eyes, eyebrows, and nose. The structure of the landmark branch is shown in Fig. 3(b).
More specifically, we utilize both lip landmarks and facial contour landmarks to construct the landmark embedding for lipreading. The change of the facial contour position is the most pronounced with lip motion, and the motion trajectory between facial landmarks effectively describes lip motion. The features are constructed as follows (see the sketch after this list): 1) We first calculate the angle between the 20 lip landmarks and the 17 face contour landmarks for each frame, i.e., the cosine, obtaining an angle matrix of shape (1, 340). 2) We then compute the difference of the angle matrices of two adjacent frames to represent the motion change of the landmarks. 3) For T frames, a (B, T, 340) angle matrix is obtained as the input to the landmark branch. The matrix difference is the per-frame landmark embedding, denoted by $f^m_i$ for the i-th frame.
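A sketch of this construction under one plausible reading (the cosine of each lip-to-contour vector with respect to the horizontal axis; DLib's 68-point indexing is assumed); it is our illustration, not released code:

    import numpy as np

    def angle_features(landmarks):                 # landmarks: (T, 68, 2) per-frame points
        lip = landmarks[:, 48:68]                  # 20 lip points (DLib indexing)
        contour = landmarks[:, 0:17]               # 17 facial-contour points
        vec = lip[:, :, None, :] - contour[:, None, :, :]            # (T, 20, 17, 2)
        cos = vec[..., 0] / (np.linalg.norm(vec, axis=-1) + 1e-8)    # cosine w.r.t. x-axis
        angles = cos.reshape(landmarks.shape[0], -1)                 # (T, 20 * 17) = (T, 340)
        return np.diff(angles, axis=0, prepend=angles[:1])           # adjacent-frame difference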
C. Cross-Modal Fusion via Transformer
The embeddings from the two streams are fed to the transformer, and we use the transformer encoder to achieve the cross-modal fusion. The encoder consists of stacked encoder layers, each composed of three parts: a self-attention module, a cross-attention module, and a feed-forward network. The embeddings from the two streams are first processed by self-attention and then fed to the cross-attention module to achieve the alignment between visuals and landmarks. The three matrices of query Q, key K, and value V, generated from the input sequence z, serve as input to self-attention; the self-attention module extracts global information to establish long-term dependencies of lip motion. Cross-attention takes the Q of the current modality (e.g., video) and the K/V obtained from the opposite modality (e.g., landmark) as input: for each visual feature embedding, a different weight is assigned to each landmark feature embedding by cross-attention, and this weighting achieves visual-landmark alignment for cross-modal feature fusion. The output of the attention module is:
$\mathrm{Attention} = \mathrm{softmax}\left(\frac{Q_i (K_i)^T}{\sqrt{d}}\right) \cdot V_i,$  (2)
where d is the length of the embedding vector.
The last part of each encoder layer is a feed-forward network, which contains two fully-connected layers with a ReLU non-linearity:
$\mathrm{FFN}(x) = \mathrm{FC}(\mathrm{ReLU}(\mathrm{FC}(x))).$  (3)
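A minimal sketch of one such fusion-encoder layer for the visual stream follows (a symmetric layer would process the landmark stream); the dimensions and the use of nn.MultiheadAttention are our assumptions:

    import torch.nn as nn

    class CrossModalLayer(nn.Module):
        def __init__(self, d_model=512, n_heads=8):
            super().__init__()
            self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.ffn = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                                     nn.Linear(4 * d_model, d_model))   # Eq. (3)
            self.norm1, self.norm2, self.norm3 = (nn.LayerNorm(d_model) for _ in range(3))

        def forward(self, x, other):               # x: visual sequence, other: landmark sequence
            x = self.norm1(x + self.self_attn(x, x, x)[0])
            x = self.norm2(x + self.cross_attn(x, other, other)[0])  # Q from x, K/V from other
            return self.norm3(x + self.ffn(x))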
D. Text Generation
With the fused features as input, we feed them to a cascaded sequence-to-sequence model. The seq2seq model has an encoder-decoder structure, where both the encoder and decoder are LSTM (sometimes GRU) models; an encoder-decoder model can map between sequences of arbitrary lengths. In the Chinese dataset there are far fewer pinyin categories than characters, which makes pinyin easier to predict. For this reason, we choose pinyin as an intermediate representation when predicting Chinese characters and use a cascaded seq2seq model to decode the text.
E. Pinyin Prediction
With the fused features $f^{vm}$ as input, we feed them to a pinyin seq2seq model that decodes $f^{vm}$ into pinyin. We refer to this encoder and decoder as the visual-landmark encoder and pinyin decoder: the encoder processes the visual-landmark feature sequence, and the decoder predicts the pinyin sequence. The GRU of the encoder computes the hidden vector from the input $f^{vm}$ as:
$(h^{vm}_e)_i = \mathrm{GRU}^{vm}_e((h^{vm}_e)_{i-1}, e^{vm}_i).$  (4)
The decoder progressively computes the hidden vector $(h^p_d)_i$ based on the predicted result $p_i$ of the previous time step and the hidden state $(h^p_d)_{i-1}$ of the previous time step:
$(h^p_d)_i = \mathrm{GRU}^p_d((h^p_d)_{i-1}, E(p_i)),$  (5)
where the embedding function E(·) maps $p_i$ to its vector space. To further utilize the information contained in the input, we apply an attention module that computes weights over all hidden vectors of the encoder to generate a context vector, which assists the decoder in predicting the pinyin. The weights are calculated as:
$att = \mathrm{softmax}(\mathrm{FC}(\tanh(\mathrm{FC}(\mathrm{Concat}((h^p_d)_i, h^{vm}_e))))).$  (6)
Finally, the probability distribution of the current pinyin is calculated by splicing the GRU output and the context vector:
$P(p_i) = \mathrm{softmax}(\mathrm{FC}((h^p_d)_i, h^{vm}_e \cdot att)).$  (7)
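A sketch of one pinyin-decoder step implementing Eqs. (5)-(7), with the FC-tanh-FC scoring of Eq. (6); dimensions and the vocabulary size are illustrative assumptions:

    import torch
    import torch.nn as nn

    class AttnDecoderStep(nn.Module):
        def __init__(self, d=512, n_pinyin=400):
            super().__init__()
            self.embed = nn.Embedding(n_pinyin, d)
            self.gru = nn.GRUCell(d, d)
            self.score = nn.Sequential(nn.Linear(2 * d, d), nn.Tanh(), nn.Linear(d, 1))
            self.out = nn.Linear(2 * d, n_pinyin)

        def forward(self, p_prev, h_prev, enc):                      # enc: (B, T, d)
            h = self.gru(self.embed(p_prev), h_prev)                 # Eq. (5)
            e = self.score(torch.cat([h[:, None].expand_as(enc), enc], -1)).squeeze(-1)
            att = torch.softmax(e, dim=1)                            # Eq. (6)
            ctx = (att[:, :, None] * enc).sum(dim=1)                 # context vector
            logits = self.out(torch.cat([h, ctx], dim=-1))           # Eq. (7)
            return logits, h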
F. Character Prediction
Similarly, with the pinyin sequence as input, we feed it to a character seq2seq model that translates pinyin into characters; the computation is similar to that for predicting the pinyin sequence. First, the pinyin sequence is input to the pinyin encoder, whose GRU computes the hidden vector as:
$(h^p_e)_i = \mathrm{GRU}^p_e((h^p_e)_{i-1}, e^p_i).$  (8)
In addition, a dual attention mechanism is employed when predicting characters, in order to use both the visual-landmark and pinyin information. The character decoder computes context vectors from the visual-landmark encoder output and the pinyin encoder output, and then predicts the character:
$(h^c_d)_i = \mathrm{GRU}^c_d((h^c_d)_{i-1}, E(c_i)).$  (9)
$c^{vm}_i = h^{vm}_e \cdot att((h^c_d)_i, h^{vm}_e).$  (10)
$c^p_i = h^p_e \cdot att((h^c_d)_i, h^p_e).$  (11)
$P(c_i) = \mathrm{softmax}(\mathrm{FC}((h^c_d)_i, c^{vm}_i, c^p_i)).$  (12)
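The dual-attention step of Eqs. (9)-(12) can be sketched as follows; all arguments are hypothetical module handles (e.g., attend could be the scoring/context computation from the previous sketch):

    import torch

    def dual_attention_step(gru_cell, embed, out_fc, attend, c_prev, h_prev, enc_vm, enc_p):
        """One character-decoder step; attend(h, enc) returns a context vector."""
        h = gru_cell(embed(c_prev), h_prev)                        # Eq. (9)
        ctx_vm = attend(h, enc_vm)                                 # Eq. (10): visual-landmark context
        ctx_p = attend(h, enc_p)                                   # Eq. (11): pinyin context
        logits = out_fc(torch.cat([h, ctx_vm, ctx_p], dim=-1))     # Eq. (12)
        return logits, h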
G. Loss Function
To improve prediction accuracy, the model first predicts pinyin and then translates pinyin into Chinese characters. We jointly optimize the loss functions of these two processes. The loss function is defined as:
$L = L_p + L_c,$  (13)
where $L_p = -\sum_{n=1}^{N} \log P(p_n \mid x, p_1, p_2, \ldots, p_{n-1})$ and $L_c = -\sum_{n=1}^{N} \log P(c_n \mid x, c_1, c_2, \ldots, c_{n-1})$.
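A compact sketch of this joint loss as two teacher-forced sequence cross-entropies (the padding index is an assumption):

    import torch.nn.functional as F

    def joint_loss(pinyin_logits, pinyin_tgt, char_logits, char_tgt, pad_id=0):
        # logits: (B, N, V); targets: (B, N); cross_entropy expects the class dim second
        l_p = F.cross_entropy(pinyin_logits.transpose(1, 2), pinyin_tgt, ignore_index=pad_id)
        l_c = F.cross_entropy(char_logits.transpose(1, 2), char_tgt, ignore_index=pad_id)
        return l_p + l_c                       # Eq. (13)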
IV. EXPERIMENTS
In this section, we validate the proposed method by conducting extensive experiments on two benchmark datasets: CMLR [11] and GRID [18].
A. Dataset
The Chinese Mandarin Lip Reading (CMLR) dataset [11] is the largest Chinese Mandarin sentence-level lipreading dataset (example images are shown in Fig. 4). The whole dataset contains sequences recorded by 11 speakers. We split the dataset into a training set with 9 speakers and a test set with 2 speakers; the speakers do not overlap between training and test. For experiments on overlap speakers, we divide the training, validation, and test sets following [11]. The CMLR division protocol is shown in Table II. The GRID dataset [18] records 33 speakers and is widely used in lipreading (example images are shown in Fig. 5). Each sentence consists of a sequence of verb + color + preposition + letter + number + adverb, e.g., "bin blue at f five again". To split the dataset into unseen and overlap speakers, we follow the setting suggested in [14]. The division protocol is provided in Table III.
B. Evaluation Metrics
To measure the performance of the proposed method and the baselines, we adopt the evaluation metrics widely used in automatic speech recognition: Word Error Rate (WER) and Character Error Rate (CER). WER/CER is defined as the minimum number of word/character operations (substitutions, deletions, and insertions) required to convert the prediction into the ground truth, divided by the number of words/characters in the ground truth:
$\mathrm{WER/CER} = 100 \cdot \frac{S + D + I}{N},$  (14)
where S denotes the number of substitutions, D the deletions, I the insertions, and N the number of words in the ground truth. Note that a smaller WER/CER indicates higher prediction accuracy. On the Chinese dataset CMLR we report CER only; it is calculated in the same way as WER, with each Chinese character treated as a word.
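A minimal sketch implementing Eq. (14) via Levenshtein distance over tokens:

    def wer(ref, hyp):
        """ref, hyp: lists of words (or characters for CER); returns WER/CER in percent."""
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i                         # deletions only
        for j in range(len(hyp) + 1):
            d[0][j] = j                         # insertions only
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])   # substitution cost
                d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
        return 100.0 * d[-1][-1] / max(len(ref), 1)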
C. Implementation Details
For each video clip, we first extract the face image in each frame using the DLib face detector [39]. The landmark coordinates produced by the face detector are used as the input to the landmark branch. An affine transformation is applied to each face image to obtain a 160 × 80-pixel mouth-centered crop as the input to the visual branch (see Fig. 6). We use the Adam optimizer with an initial learning rate of 0.0003; whenever the training error does not improve within 4 epochs, the learning rate is reduced by 50%.
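A sketch of this training setup in PyTorch (model, num_epochs, and train_one_epoch are assumed placeholders):

    import torch

    optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)       # model: assumed nn.Module
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimizer, mode="min", factor=0.5, patience=4)              # halve LR after 4 stale epochs

    for epoch in range(num_epochs):                                 # num_epochs: assumed
        train_error = train_one_epoch(model, optimizer)             # assumed helper
        scheduler.step(train_error)                                 # decay LR on plateau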
D. Competitors
We compare our method with several state-of-the-art methods: LipNet [14], CSSMCM [11], CALLip [19], LCSNet [40], and WAS [32]. LipNet: the first end-to-end sentence-level lipreading model, achieving 95.2% accuracy on the GRID dataset. CSSMCM: a model specifically designed for Chinese lipreading, which combines factors such as pinyin and tones to help predict Chinese characters. CALLip: improves performance by introducing attribute learning and contrast learning into the sentence-level lipreading pipeline, using an attribute learning module to extract speaker identity features and eliminate cross-speaker variations. LCSNet: extracts features that are more relevant to lip motion via a channel attention module and a selective feature fusion module to improve recognition accuracy. WAS: uses video information to predict sentences with a seq2seq model and achieves strong performance on the LRS dataset.
E. Comparison with Competitors
To prove the effectiveness of the proposed method, we first compare it to SOTA competitors on the CMLR and GRID datasets. We empirically observed that the CTC loss could result in non-convergence during training on the CMLR dataset; hence, we replaced the CTC with the cascaded seq2seq module. We report results for both the unseen and overlap speakers of the two datasets. Experimental results are shown in Tables IV and V, respectively. Table IV shows that LipFormer yields a notable improvement over the state-of-the-art methods for unseen speakers. Compared to LipNet, LipFormer uses landmarks to jointly describe the lip motion, and the comparison shows that adopting multi-modal features is beneficial to performance. Compared with CSSMCM, the error rate of LipFormer is further reduced by 6.9%. One reason is that LipFormer learns more consistent features of lip motion from multi-modal features, so that the model generalizes well to unseen speakers. For overlap speakers, LipFormer also outperforms the SOTA methods on the CMLR dataset, achieving a character error rate of 27.79%. This shows that LipFormer is well-suited to both unseen and overlap speakers. Table V shows the comparison results of the different methods on the GRID dataset. Compared with LipNet, LipFormer reduces the WER by 7.86% for unseen speakers and by 3.35% for overlap speakers. This affirms that landmarks help solve the generalization problem. It can also be observed that LipFormer outperforms the other methods for both unseen and overlap speakers, even though these speakers are from different ethnicities.
F. Ablation study
To verify that the landmark branch and the transformer module effectively improve lipreading accuracy for unseen speakers, we decouple the LipFormer framework and test the resulting variants on the CMLR and GRID datasets. The experimental results are shown in Tables VI and VII, respectively. The variant that fuses the visual and landmark branches (#2) outperforms the visual-only model (#1). Specifically, compared with the visual-only model, method #2 reduces the error rate for unseen and overlap speakers by 4.66% and 1.15% on the CMLR dataset, and by 2.11% and 0.62% on the GRID dataset, respectively. This demonstrates that landmarks normalize the lip shapes of different speakers, eliminate irrelevant visual variations, and enhance the generalization ability of the model to unseen speakers. Comparing LipFormer with method #2, the results show that the transformer module learns the correspondence between the visual and landmark embeddings to fuse the cross-modal heterogeneous features, further improving the performance of the model.
G. Sensitivity to Hyper-parameter
In this section, we conduct experiments on CMLR to investigate how a key hyper-parameter of LipFormer affects model performance. The feature extractor encodes each time step as a feature embedding. To investigate how the size of the feature embedding in the landmark branch affects learning, we control it via the number of output channels of the GRU in the landmark branch and test three sizes: 256, 512, and 1024. Table VIII summarizes the performance for each embedding size. In general, a larger embedding improves performance; however, a too-large embedding may cause overfitting, so performance can decrease instead of increase. The optimal number of output channels for the first GRU layer is 512 on the CMLR dataset, for both unseen and overlap speakers.
H. Case Study
To qualitatively analyze the performance of the proposed model, this section examines a sample of the predicted results. Table IX shows sentences generated by each variant model on the CMLR and GRID datasets, where incorrect characters are highlighted in red.
It can be seen from Table IX that there are some differences between the sentences generated by the visual-only model and the ground truth on the CMLR dataset, such as predicting "人口" (population) as "中外" (Chinese-foreign) and "一些" (some) as "议程" (agenda). This shows that, owing to visual variations such as lip shape, the model translates different texts when different people say the same word. The visual-only+landmark model generates sentences that are closer to the ground truth, which indicates that landmarks increase the accuracy across different speakers pronouncing the same sentence. The sentences generated by LipFormer are correct, showing that the transformer promotes the fusion of cross-modal information. On the GRID dataset, some letters are easily mispredicted; for example, "f" is predicted as "s". These errors are mainly caused by letters with similar pronunciations.
Fig. 9. Illustration of the alignment of the visual-landmark embedding on the CMLR dataset. The vertical axis represents the visual embedding and the horizontal axis represents the landmark embedding.
I. Alignment Visualization
Figures 9a and 9b visualize the alignment between visuals and landmarks at 75 and 100 frames on the CMLR dataset, as produced by the cross-attention module. Each row represents the visual modality and each column the landmark modality. The highlighted areas indicate the degree of alignment between the visual and landmark feature embeddings during fusion. Figure 9 shows that, in the process of feature fusion, cross-modal attention learns the correspondence between the feature embeddings of the two modalities and achieves the alignment between visual and landmark embeddings; the diagonal trend is obvious. When pronouncing the same word, attention is concentrated on the corresponding frames of the two modalities.
V. CONCLUSION
In this paper, we propose a cross-modal transformer framework for sentence-level lipreading, which can generalize to unseen speakers by using landmarks as motion trajectories to calibrate the visual variations. The model improves the alignment of heterogeneous features through cross-modal fusion guided by cross-attention. Extensive experimental results show that our framework effectively generalizes to unseen speakers. In future work, we will extend this research to more challenging settings, e.g., side-view speakers.
Fig. 1. The visual variations of lip motion can easily cause the overfitting of a lipreading model by associating spurious correlations between motion and texts. In this paper, we propose to use landmarks to calibrate the cross-modal association.
Fig. 3. The two-stream architecture with a visual branch (a) and a landmark branch (b). Given the input as a sequence of lip regions, we use a 3DCNN combined with channel attention to extract the features, followed by a Bi-GRU to encode temporal order. The input to the landmark branch is the landmark embedding: the difference of the angle matrix between adjacent frames. The angle matrix is calculated between 20 landmarks of the lip and 17 landmarks of the face contour for each frame and encoded as a 340-dimensional vector.
Given $\{f^m_1, \ldots, f^m_T\}$ for all T frames, we apply a bi-directional GRU to extract the long-term features, obtaining the output feature sequence $\{s^m_1, \ldots, s^m_T\}$.
Fig. 4. Static images from the CMLR dataset.
Fig. 5. Static images from the GRID dataset.
Fig. 6. Examples of the mouth-centered crop.
Figs. 7 and 8 show the WER curves of the model variants on the two datasets.
Fig. 7. Performance of model variants on the CMLR dataset. (a) Unseen. (b) Overlap.
Fig. 8. Performance of model variants on the GRID dataset. (a) Unseen. (b) Overlap.
F. Xue, Y. Li, Y. Xie, L. Wu, and R. Hong are with Hefei University of Technology, China. D. Liu is with Anhui University, China. Corresponding author: Richang Hong.
are different
Speaker-1
Speaker-2
The shape/color of lips
are different
Visual clues
Two speakers say the same word
The shape of lip landmark
is similar
Landmark clues
Lip movement is similar
when pronouncing
Given limited access to the number of samples and speakerarXiv:2302.02141v1 [cs.CV] 4 Feb 2023
TABLE I
NOTATIONS AND DEFINITIONS.

Symbol                        Definition
f^v_T, f^m_T                  visual feature, landmark feature
f^{vm}                        visual-landmark embedding sequence
f^p                           pinyin embedding sequence
GRU_e                         the subscript e indicates the encoder
GRU_d                         the subscript d indicates the decoder
GRU^{vm}_e                    GRU unit in the visual-landmark encoder
GRU^p_e, GRU^p_d              GRU units in the pinyin encoder and pinyin decoder
GRU^c_d                       GRU unit in the character decoder
h^{vm}_e, h^p_e               visual-landmark encoder output, pinyin encoder output
c^{vm}_p, c^p_c, c^{vm}_c     context vectors calculated by the attention
TABLE II
STATISTICS FOR THE CMLR DATASET.

Set                 Speakers   Sentences
Unseen     Train    9          81094
           Test     2          20978
Overlap    Train    11         71452
           Test     11         20418
TABLE III
STATISTICS FOR THE GRID DATASET.

Set                 Speakers   Sentences
Unseen     Train    29         28837
           Test     4          3986
Overlap    Train    33         24408
           Test     33         8415
TABLE IV
PERFORMANCE COMPARISON WITH STATE-OF-THE-ARTS ON CMLR. -: RESULTS NOT AVAILABLE. BEST RESULTS ARE IN BOLDFACE.

Methods      Unseen   Overlap
LipNet       52.18    33.41
WAS          -        38.93
CSSMCM       50.08    32.48
CALLip       -        31.18
LCSNet       46.98    30.03
LipFormer    43.18    27.79

TABLE V
PERFORMANCE COMPARISON WITH STATE-OF-THE-ARTS ON GRID. BEST RESULTS ARE IN BOLDFACE.

Methods      Unseen   Overlap
LipNet       17.5     4.8
WAS          14.6     3.0
CALLip       -        2.48
LCSNet       11.6     2.3
LipFormer    9.64     1.45
TABLE VI
PERFORMANCE OF LIPFORMER AND ITS VARIANTS ON CMLR. THE FRONT-END STRUCTURE OF LIPFORMER IS VISUAL-ONLY+LANDMARK+TRANSFORMER. BEST RESULTS ARE IN BOLDFACE.

#   Methods                 Unseen   Overlap
1   Visual-only             48.14    29.15
2   Visual-only+Landmark    43.48    28.0
3   LipFormer               43.18    27.79

TABLE VII
PERFORMANCE OF LIPFORMER AND ITS VARIANTS ON GRID. BEST RESULTS ARE IN BOLDFACE.

#   Methods                 Unseen   Overlap
1   Visual-only             12.35    2.82
2   Visual-only+Landmark    10.24    2.2
3   LipFormer               9.64     1.45
VIIIOF MODELS WITH DIFFERENT EMBEDDING SIZES ON CMLR DATASET.# Methods Unseen Overlap
1 256
43.61
28.34
2 512
43.18
27.79
3 1024
43.47
28.12
TABLE IX
PREDICTIONS OF DIFFERENT MODELS ON CMLR AND GRID.

Methods                 CMLR Unseen              CMLR Overlap               GRID Unseen                  GRID Overlap
Ground truth            人口较少名族发展规划       一些新做法取代了老传统       bin blue at f three please   bin blue at d two soon
Visual-only             中外较少名族发展模范       议程改进取代了措施           bin blue at s three please   bin blue by d five soon
Visual-only+Landmark    人口较少名族检查规划       一些新做法取代了老烧         bin blue at l three please   bin blue at d two again
LipFormer               人口较少名族发展规划       一些新做法取代了老传统       bin blue at f three please   bin blue at d two soon
Hearing lips: Improving lip reading by distilling speech recognizers. Y Zhao, R Xu, X Wang, P Hou, H Tang, M Song, Y. Zhao, R. Xu, X. Wang, P. Hou, H. Tang, and M. Song, "Hearing lips: Improving lip reading by distilling speech recognizers," 2019.
Understanding pictograph with facial features: end-to-end sentence-level lip reading of chinese. X Zhang, H Gong, X Dai, F Yang, N Liu, M Liu, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence33X. Zhang, H. Gong, X. Dai, F. Yang, N. Liu, and M. Liu, "Understanding pictograph with facial features: end-to-end sentence-level lip reading of chinese," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, no. 01, 2019, pp. 9211-9218.
Comparison of human and machine-based lip-reading. S Hilder, R W Harvey, B.-J Theobald, AVSP. S. Hilder, R. W. Harvey, and B.-J. Theobald, "Comparison of human and machine-based lip-reading." in AVSP, 2009, pp. 86-89.
Lip print recognition for security systems by multi-resolution architecture. J O Kim, W Lee, J Hwang, K S Baik, C H Chung, Future Generation Computer Systems. 202J. O. Kim, W. Lee, J. Hwang, K. S. Baik, and C. H. Chung, "Lip print recognition for security systems by multi-resolution architecture," Future Generation Computer Systems, vol. 20, no. 2, pp. 295-301, 2004.
B. Xu, C. Lu, Y. Guo, and J. Wang, "Discriminative multi-modality speech recognition," 2020.
S. Petridis, T. Stafylakis, P. Ma, F. Cai, G. Tzimiropoulos, and M. Pantic, "End-to-end audiovisual speech recognition," in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018, pp. 6548-6552.
T. Stafylakis and G. Tzimiropoulos, "Combining residual networks with LSTMs for lipreading," arXiv preprint arXiv:1703.04105, 2017.
T. Zhang, L. He, X. Li, and G. Feng, "Efficient end-to-end sentence-level lipreading with temporal convolutional networks," Applied Sciences, vol. 11, no. 15, p. 6975, 2021.
X. Zhao, S. Yang, S. Shan, and X. Chen, "Mutual information maximization for effective lip reading," in 2020 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020). IEEE, 2020, pp. 420-427.
Y. Zhao, R. Xu, and M. Song, "A cascade sequence-to-sequence model for Chinese Mandarin lip reading," in Proceedings of the ACM Multimedia Asia, 2019, pp. 1-6.
K. Xu, D. Li, N. Cassimatis, and X. Wang, "LCANet: End-to-end lipreading with cascaded attention-CTC," in 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018). IEEE, 2018, pp. 548-555.
J. Liu, Y. Ren, Z. Zhao, C. Zhang, B. Huai, and J. Yuan, "FastLR: Non-autoregressive lipreading model with integrate-and-fire," in Proceedings of the 28th ACM International Conference on Multimedia, 2020, pp. 4328-4336.
Y. M. Assael, B. Shillingford, S. Whiteson, and N. de Freitas, "LipNet: End-to-end sentence-level lipreading," arXiv preprint arXiv:1611.01599, 2016.
K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman, "Return of the devil in the details: Delving deep into convolutional nets," arXiv e-prints, 2014.
J. Chung, C. Gulcehre, K. H. Cho, and Y. Bengio, "Empirical evaluation of gated recurrent neural networks on sequence modeling," arXiv e-prints, 2014.
A. Auvolat and T. Mesnard, "Connectionist temporal classification: Labelling unsegmented sequences with recurrent neural networks," 2006.
M. Cooke, J. Barker, S. Cunningham, and X. Shao, "An audio-visual corpus for speech perception and automatic speech recognition," The Journal of the Acoustical Society of America, vol. 120, no. 5, pp. 2421-2424, 2006.
Y. Huang, X. Liang, and C. Fang, "CALLip: Lipreading using contrastive and attribute learning," in Proceedings of the 29th ACM International Conference on Multimedia, 2021, pp. 2492-2500.
Y. Zhang, Y. Guo, Y. Jin, Y. Luo, Z. He, and H. Lee, "Unsupervised discovery of object landmarks as structural representations," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 2694-2703.
M. A. Haghpanah, E. Saeedizade, M. T. Masouleh, and A. Kalhor, "Real-time facial expression recognition using facial landmarks and neural networks," in 2022 International Conference on Machine Vision and Image Processing (MVIP). IEEE, 2022, pp. 1-7.
V. Estellers, M. Gurban, and J. P. Thiran, "On dynamic stream weighting for audio-visual speech recognition," IEEE Transactions on Audio, Speech and Language Processing, vol. 20, no. 4, pp. 1145-1157, 2012.
P. J. Lucey, G. Potamianos, and S. Sridharan, "Patch-based analysis of visual speech from multiple views," in Proceedings of the International Conference on Auditory-Visual Speech Processing, 2008.
T. Sheerman-Chase, E.-J. Ong, and R. Bowden, "Cultural factors in the regression of non-verbal communication perception," in 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops). IEEE, 2011, pp. 1242-1249.
G. Papcun, J. Hochberg, T. R. Thomas, F. Laroche, and S. Levy, "Inferring articulation and recognizing gestures from acoustics with a neural network trained on x-ray microbeam data," Journal of the Acoustical Society of America, vol. 92, no. 2, pt. 1, pp. 688-700, 1992.
M. T. Chan, "HMM-based audio-visual speech recognition integrating geometric- and appearance-based visual features," in IEEE Fourth Workshop on Multimedia Signal Processing, 2001.
J. Luettin and N. A. Thacker, "Speechreading using probabilistic models," Computer Vision and Image Understanding, vol. 65, no. 2, pp. 163-178, 1997.
J. S. Chung and A. Zisserman, "Lip reading in the wild," in Computer Vision - ACCV 2016, 2017, pp. 87-103.
I. Fung and B. Mak, "End-to-end low-resource lip-reading with maxout CNN and LSTM," 2018, pp. 2511-2515.
M. Wand and J. Schmidhuber, "Improving speaker-independent lipreading with domain-adversarial training," 2017.
M. Wand, J. Schmidhuber, and N. T. Vu, "Investigations on end-to-end audiovisual fusion," 2018.
J. Son Chung, A. Senior, O. Vinyals, and A. Zisserman, "Lip reading sentences in the wild," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 6447-6456.
S. Zhou, L. Dong, S. Xu, and B. Xu, "Syllable-based sequence-to-sequence speech recognition with the transformer in Mandarin Chinese," arXiv preprint arXiv:1804.10752, 2018.
S. Ma, S. Wang, and X. Lin, "A transformer-based model for sentence-level Chinese Mandarin lipreading," in 2020 IEEE Fifth International Conference on Data Science in Cyberspace (DSC), 2020.
P. Ma, S. Petridis, and M. Pantic, "Visual speech recognition for multiple languages in the wild," 2022.
L. Wu, Y. Wang, L. Shao, and M. Wang, "3D PersonVLAD: Learning deep global representations for video-based person re-identification," IEEE Transactions on Neural Networks and Learning Systems, vol. 30, no. 11, pp. 3347-3359, 2019.
L. Wu, Y. Wang, J. Gao, M. Wang, Z.-J. Zha, and D. Tao, "Deep co-attention based comparators for relative representation learning in person re-identification," IEEE Transactions on Neural Networks and Learning Systems, vol. 32, no. 2, pp. 722-735, 2021.
D. Chen, M. Wang, H. Chen, L. Wu, J. Qin, and W. Peng, "Cross-modal retrieval with heterogeneous graph embedding," in Proceedings of the 30th ACM International Conference on Multimedia, 2022, pp. 3291-3300.
A. Amodio, M. Ermidoro, D. Maggi, S. Formentin, and S. M. Savaresi, "Automatic detection of driver impairment based on pupillary light reflex," IEEE Transactions on Intelligent Transportation Systems, vol. 20, no. 8, pp. 3038-3048, 2018.
F. Xue, T. Yang, K. Liu, Z. Hong, M. Cao, D. Guo, and R. Hong, "LCSNet: End-to-end lipreading with channel-aware feature selection," ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), 2022.